Q: What makes this image display in my WPF usercontrol? I've been trying to make a simple usercontrol to display an image, and eventually make it act as a button in my WPF application.
I had one of these usercontrols written into the form's XAML, and the image always displayed. However, when adding them programmatically to a stackpanel, the control was there but the image never displayed.
Here's the UserControl: (simple, but works for this example)
<UserControl x:Class="ImgButton"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
MinHeight="32" MinWidth="32"
x:Name="uc_ImgButton">
<Border BorderBrush="Gray" BorderThickness="2">
<Image Source="{Binding ElementName=uc_ImgButton, Path=Source}" x:Name="img"/>
</Border>
</UserControl>
I set up the Source property as a dependency property of the ImageSource type:
Partial Public Class ImgButton
Public Property Source() As ImageSource
Get
Return CType(GetValue(SourceProperty), ImageSource)
End Get
Set(ByVal value As ImageSource)
SetValue(SourceProperty, value)
End Set
End Property
Public Shared ReadOnly SourceProperty As DependencyProperty = _
DependencyProperty.Register("Source", _
GetType(ImageSource), GetType(ImgButton))
End Class
Here's an example of how I add them programmatically in VB:
Dim newBtn As New myApp.ImgButton
newBtn.Width = 100
newBtn.Height = 100
Dim bi As New BitmapImage
bi.BeginInit()
bi.UriSource = New Uri("C:\test.png", UriKind.RelativeOrAbsolute)
bi.EndInit()
'MsgBox(bi.Width) '(a simple debug test I added)
newBtn.Source = bi
Me.StackPanelMain.Children.Add(newBtn)
Here's the strange part... the code as shown above runs without error, but the result on my form is an empty border with no image displayed inside.
However, if you un-comment the MsgBox line, the image displays. It's as if forcing it to read some value from the BitmapImage makes it work. I also tried replacing the MsgBox line with something innocuous like "Dim x As Integer = bi.PixelWidth" and that ALSO made the image display. If I take it away, no image on my form.
I suspect I'm missing something or just don't understand something. I'd like to learn what's going on, rather than leave a seemingly pointless line of code in my app.
A: I'm not sure if I understand the binding that you're trying to set up. Typically when you use ElementName that means that you are binding to another control in the same visual tree. However, based on the code sample for your ImgButton class it looks like that is just a normal class. Are you sure that you want to use ElementName?
Also, I believe that in order for a class to have a DependencyProperty it must derive from DependencyObject. If you don't want to make that change, you can always implement INotifyPropertyChanged and fire the PropertyChanged event in the setter of the Source property.
Update: I think part of the problem was that I misread your question (I didn't realize that ImgButton was your UserControl class).
Anyway, I'm not really sure what's going on here. However, what if your property was for the path to the image file, rather than to the image itself? Maybe something like this:
XAML:
<UserControl x:Class="ImgButton"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
MinHeight="32" MinWidth="32"
x:Name="uc_ImgButton">
<Border BorderBrush="Gray" BorderThickness="2">
<Image>
<Image.Source>
<BitmapImage UriSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType=UserControl}, Path=Source}" />
</Image.Source>
</Image>
</Border>
</UserControl>
VB:
Partial Public Class ImgButton
Public Property Source() As String
Get
Return CType(GetValue(SourceProperty), String)
End Get
Set(ByVal value As String)
SetValue(SourceProperty, value)
End Set
End Property
Public Shared ReadOnly SourceProperty As DependencyProperty = _
DependencyProperty.Register("Source", _
GetType(String), GetType(ImgButton))
End Class
Then just set the Source property to the path to your image:
Dim newBtn As New myApp.ImgButton
newBtn.Width = 100
newBtn.Height = 100
newBtn.Source = "C:\test.png"
Me.StackPanelMain.Children.Add(newBtn)
I haven't used the BitmapImage class before, but it's possible that it needs something extra in order to render the image. I'm not really sure why your code doesn't work, though, but maybe if you set the source via bindings in XAML it will be more like your original case, and it will work.
A: As far as I know, you DO NOT need a UserControl for this. Not even a styled Button. Just set the button's fill property to an ImageBrush and that's it. An image button :)
who gonna get it?
editor February 25, 2020
North has to be the heavy favorite, right? Who's gonna lower their price so much that they end up making no money off the deal, but want the marketing and PR value? Thoughts?
The 49er class invites expressions of interest to design and build 49er and/or 49erFX sails for the 2021-2024 quadrennial.
The 49er class welcomes world leading sail designers and manufacturers to bid on becoming our class sailmaker(s). The selected sailmaker(s) will have the opportunity to build sails for at least the four year period leading to the Paris Olympics, with the possibility to extend for further quadrennials.
The current 49er sails have been used since 2009 while the 49erFX sails have been used since 2012. The 49er class has had 3 different designs of sails over the years, while the 49erFX is coming off the original set of sails.
The class is seeking to improve the consistency and longevity of the sails, to keep the costs of campaigning as manageable as possible.
Each of the two rigs, 49er and 49erFX, has been updated for 2021 already, with CST being the new mast maker for the classes. The new masts are of the same geometry and bend characteristics as the previous generation of masts. The existing class masts are expected to remain class legal for the foreseeable future.
We invite all interested parties to get the full technical requirements via email. Expressions of interest are due by March 28th, 2020.
\section{Introduction}
\label{sec:introduction}
\subsection{Motivation}
\label{sec:motivation}
Game learning and game playing is an interesting test bed for strategic decision making. Games usually have large state spaces, and they often require complex pattern recognition and strategic planning capabilities to decide which move is the best in a certain situation. If algorithms learn a game (or, even better, a variety of different games) just by self-play, given no other knowledge than the game rules, it is likely that they will also perform well on other problems of strategic decision making.
In recent years, reinforcement learning (RL) and deep neural networks (DNN) achieved superhuman capabilities in a number of competitive games \citep{mnih2015human,silver2016AlphaGo}. This success has been a product of the combination of reinforcement learning, deep learning and Monte Carlo Tree Search (MCTS). However, current deep reinforcement learning (DRL) methods struggle in environments with a high number of states and a small number of reward states.
\begin{figure}[h]
\centerline{
\begin{tabular}{cc}
\includegraphics[width=0.28\columnwidth]{figures/rubiks-scrambled-512.jpg}
& \hspace{0.05\columnwidth}
\includegraphics[width=0.19\columnwidth,height=0.19\columnwidth]{figures/pocket-512.jpg}
\\
(a) & \hspace{0.05\columnwidth} (b)
\end{tabular}
}
\caption{(a) Scrambled 3x3x3 Rubik's Cube. (b) 2x2x2 cube in the middle of a twist.
}
\label{fig:gamesRubik}
\end{figure}
The Rubik's cube puzzle is an example of such an environment since the classical 3x3x3 cube has $4.3\cdot 10^{19}$ states and only \textit{one} state (the solved cube) has a reward. A somewhat simpler puzzle is the 2x2x2 cube with $3.6\cdot 10^6$ states and again only one reward state. Both cubes are shown in Fig.~\ref{fig:gamesRubik}.
The difficult task of \textit{learning from scratch} how to solve arbitrary scrambled cubes (i.e. without being taught by expert knowledge, whether from humans or from computerized solvers) was not achievable with DRL methods for a long time. Recently, the works of \cite{mcaleer2018solving,mcaleer2019solving} and \cite{agostinelli2019solving} provided a breakthrough in that direction (see Sec.~\ref{sec:McAleer} and \ref{sec:rel-work} for details): Their approach \href{\#hrefDAVI}{DAVI}\xspace (Deep Approximate Value Iteration) learned from scratch to solve arbitrary scrambled 3x3x3 cubes.
This work investigates whether TD-n-tuple learning with much lower computational demands can solve (or partially solve) Rubik's cube as well.
\subsection{Overview}
\label{sec:overview}
The General Board Game (GBG) learning and playing framework \citep{Konen2019b,Konen20b,Konen22a} was developed for education and research in AI. GBG allows applying the new algorithm easily to a variety of games. GBG is open source and available on GitHub\footnote{\url{https://github.com/WolfgangKonen/GBG}}.
The main contribution of this paper is to take the TD-n-tuple approach from GBG \citep{Scheier2022} that was also successful on other games (Othello, ConnectFour) and to investigate this algorithm on various cube puzzles. We will show that it can solve the 2x2x2 cube perfectly and the 3x3x3 cube partly. At the same time it has drastically reduced computational requirements compared to \cite{mcaleer2019solving}. We will show that wrapping the base agent with an \textbf{MCTS wrapper}, as it was done by \cite{mcaleer2019solving} and \cite{Scheier2022}, is essential to reach this success.
This work is at the same time an in-depth tutorial on how to represent a cube and its transformations within a computer program such that all types of cube operations can be computed efficiently. As another important contribution we will show how \textbf{symmetries} (Sec.~\ref{sec:symmetr}, \ref{sec:numSymmetry} and \ref{sec:resSymmetry}) applied to cube puzzles can greatly increase sample efficiency and performance.\\[0.2cm]
The rest of this paper is organized as follows: Sec.~\ref{sec:foundation} lays the foundation for Rubik's cube, its state representation, its transformations and its symmetries. In Sec.~\ref{sec:ntuples} we introduce n-tuple systems and how they can be used to derive policies for game-playing agents. Sec.~\ref{sec:represent-ntuple} defines and discusses several n-tuple representations for the cube. Sec.~\ref{sec:learning} presents algorithms for learning the cube: first the \href{\#hrefDAVI}{DAVI}\xspace algorithm of \cite{mcaleer2019solving,agostinelli2019solving} and then our n-tuple-based TD learning (with extensions TCL and MCTS).
In Sec.~\ref{sec:results} we present the results when applying our n-tuple-based TD learning method to the 2x2x2 and the 3x3x3 cube. Sec.~\ref{sec:rel-work} discusses related work and Sec.~\ref{sec:summary} concludes.
\newpage
\section{Foundations}
\label{sec:foundation}
\subsection{Conventions and Symbols}
We consider in this paper two well-known cube types, namely the 2x2x2 cube (pocket cube) and the 3x3x3 cube (Rubik's cube).
\label{sec:convent}
\subsubsection{Color arrangement}
Each cube consists of smaller \hypertarget{hrefCubie}{\textbf{cubies}}: 8 corner cubies for the 2x2x2 cube and 8 corner, 12 edge and 6 center cubies for the 3x3x3 cube. A corner cubie has 3 \hypertarget{hrefSticker}{\textbf{stickers}} of different color on its 3 faces. An edge cubie has two, a center cubie has one sticker.
We enumerate the 6 cube faces with \\
\hspace*{1.8cm} (ULF) = (\textbf{U}p, \textbf{L}eft, \textbf{F}ront) and \\
\hspace*{1.8cm} \hypertarget{hrefDRBcubie}{(DRB)} = (\textbf{D}own, \textbf{R}ight, \textbf{B}ack).
We number the 6 colors with 0,1,2,3,4,5. My cube has these six colors \\
\hspace*{1.8cm} 012 = wbo = (white,blue,orange) in the (ULF)-cubie\footnote{We run through the faces of a cubie in counter-clockwise orientation.} and \\
\hspace*{1.8cm} 345 = ygr = (yellow,green,red) in the opposing \href{\#hrefDRBcubie}{(DRB)}\xspace-cubie.
The solved cube in default position has colors (012345) for the faces (ULFDRB), i.e. the white color is at the \textbf{U}p face, blue at \textbf{L}eft, orange as \textbf{F}ront and so on. We can cut the cube such that up- and bottom-face can be folded away and have a flattened representation as shown in Figure~\ref{fig:col-flattened}.
\begin{figure}[h]
\renewcommand{\arraystretch}{1.35}
\centerline{
\begin{tabular}{|x|c|y|z|} \cline{2-2}
\multicolumn{1}{c|}{ } & w &\multicolumn{2}{c}{ } \\ \hline
b & o\co & g & r \\ \hline
\multicolumn{1}{c|}{ } & y\ye &\multicolumn{2}{c}{ } \\ \cline{2-2}
\end{tabular}
}
\renewcommand{\arraystretch}{1.0}
\caption{The face colors of the default cube in flattened representation}
\label{fig:col-flattened}
\end{figure}
\subsubsection{Twist and Rotation Symbols}
Twists of cube faces are denoted by uppercase letters U, L, F, D, R, B. Each of these twists means a $90^\circ$ counterclockwise rotation.\footnote{The rotation is counterclockwise when looking at the respective face} If U = U${}^{1}$ is a $90^\circ$ rotation, then U${}^{2}$ is a $180^\circ$ rotation and U${}^{3}$=U${}^{-1}$ is a $270^\circ$ rotation.
Whole-cube rotations are denoted by lowercase letters $u, \ell, f$. (We do not need $d, r, b$ here, because $d = u{}^{-1}, r = \ell^{-1}$ and so on.)
Further symbols like $f_c[i], \ensuremath{s_{\ell}}\xspace[i]$ that characterize a cube state
will be explained in Sec.~\ref{sec:class_cs}.
\subsubsection{Twist Types}
Cube puzzles can have different twist types or twist metrics:
\begin{itemize}
\item \textbf{\hypertarget{hrefQTM}{QTM} (quarter turn metric)}: only quarter twists are allowed: e.g. U${}^{1}$ and U${}^{-1}$.
\item \textbf{\hypertarget{hrefHTM}{HTM} (half turn metric)}: quarter and half turns (twists) are allowed: e.g. U${}^{1}$, U${}^{2}$, U${}^{3}$.
\end{itemize}
By \textit{allowed} we mean what counts as one move. In QTM we can realize U${}^{2}$ via U\,U as well, but it costs us 2 moves. In HTM, U${}^{2}$ counts as one move.
The twist type influences \href{\#hrefGodsNum}{God's number}\xspace and the branching factor of the game, see Sec.~\ref{sec:facts}.
\subsection{Facts about Cubes}
\label{sec:facts}
\subsubsection{2x2x2 Cube}
\label{sec:facts2x2}
The \textbf{number of distinct states} for the 2x2x2 pocket cube is \citep{wikiPocketCube}
\begin{equation}
\frac{8! \cdot 3^7}{24} = 7!\cdot 3^6 = 3,674,160 \approx 3.6\cdot 10^6
\label{eq:numStates2x2}
\end{equation}
Why this formula? -- We have 8 cubies which we can place in 8! ways on the 8 cube positions. Each but the last cubie has the freedom to appear in 3 orientations, which gives the factor $3^7$ (the last cubie is then in a fixed orientation; the other two orientations would yield illegal cube states). -- Each of these raw states has the (ygr)-cubie in any of the 24 possible positions. Put differently, each truly different state appears in 24 whole-cube rotations. To factor out the whole-cube rotations, we count only the states with the (ygr)-cubie in its default position \href{\#hrefDRBcubie}{(DRB)}\xspace and divide the number of raw states by 24, q.e.d.
\textbf{\hypertarget{hrefGodsNum}{God's number}}: What is the minimal number of moves needed to solve any cube position? -- For the 2x2x2 pocket cube, it is 11 in \href{\#hrefHTM}{HTM}\xspace (half-turn metric) and 14 in \href{\#hrefQTM}{QTM}\xspace.
\textbf{Branching factor}: $3\cdot 3 = 9$ in \href{\#hrefHTM}{HTM}\xspace and $3\cdot 2 = 6$ in \href{\#hrefQTM}{QTM}\xspace.
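As a quick cross-check, Eq.~\eqref{eq:numStates2x2} can be recomputed in a few lines of Python (an illustrative sketch, not part of GBG):

```python
from math import factorial

# State count of the 2x2x2 cube: 8 corner cubies in 8! arrangements,
# each but the last in 3 orientations (factor 3^7); divide by the
# 24 whole-cube rotations.
raw_states = factorial(8) * 3**7
num_states = raw_states // 24

assert num_states == factorial(7) * 3**6   # the equivalent form 7! * 3^6
print(num_states)   # 3674160
```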
\subsubsection{3x3x3 Cube}
\label{sec:facts3x3}
The \textbf{number of distinct states} for the 3x3x3 Cube is \citep{wikiRubiksCube}
\begin{equation}
\frac{8!\cdot 3^7\cdot 12!\cdot 2^{11}}{2} = 43,252,003,274,489,856,000 \approx 4.3\cdot 10^{19}
\label{eq:numStates3x3}
\end{equation}
Why this formula? -- We have 8 corner cubies which we can place in 8! ways on the 8 cube positions. Each but the last cubie has the freedom to appear in 3 orientations, which gives the factor $3^7$. We have 12 edge cubies which we can place in 12! ways on the edge positions. Each but the last cubie has the freedom to appear in 2 orientations, which gives the factor $2^{11}$.
The division by 2 stems from the fact that neither two corner cubies alone nor two edge cubies alone may be swapped; the total number of such swaps must be even (hence the factor 2).
\textbf{God's Number}: What is the minimal number of moves needed to solve any cube position? -- For the 3x3x3 Rubik's Cube, it is 20 in \href{\#hrefHTM}{HTM}\xspace (half-turn metric) and 26 in \href{\#hrefQTM}{QTM}\xspace. This is a result from \cite{rokicki2014diameter}, see also \href{http://www.cube20.org/qtm/}{\url{http://www.cube20.org/qtm/}}.
\textbf{Branching factor}: $6\cdot 3 = 18$ in \href{\#hrefHTM}{HTM}\xspace and $6\cdot 2 = 12$ in \href{\#hrefQTM}{QTM}\xspace.
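Again, the count in Eq.~\eqref{eq:numStates3x3} is easily verified numerically (illustrative Python sketch):

```python
from math import factorial

# State count of the 3x3x3 cube: corner permutations and orientations
# times edge permutations and orientations, halved because only even
# swap parities are reachable.
num_states = (factorial(8) * 3**7 * factorial(12) * 2**11) // 2

print(num_states)   # 43252003274489856000
```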
\begin{figure}%
\centerline{
\renewcommand{\arraystretch}{1.75}
\begin{scriptsize}
\begin{tabular}{|x|x|c|c|y|y|z|z|} \cline{3-4}
\multicolumn{2}{c|}{ } & 3 & 2 &\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } & 0 & 1 &\multicolumn{4}{c}{ } \\ \hline
5 & 4 & 8\co&11\co& 18& 17& 23& 22\\ \hline
6 & 7 & 9\co&10\co& 19& 16& 20& 21\\ \hline
\multicolumn{2}{c|}{ } &14\ye&13\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } &15\ye&12\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\end{tabular}
\end{scriptsize}
\renewcommand{\arraystretch}{1.0}
}
\caption{Sticker numbering for the 2x2x2 cube}
\label{fig:2x2x2stickers}
\end{figure}
\subsection{The Cube State}
\label{sec:class_cs}
A cube should be represented by objects in GBG in such a way that
\begin{enumerate}[(a)]
\item cube states that are equivalent are represented by identical objects
\item if two cube states are equivalent, it should be easy to check this by comparing their objects
\item cube transformations are easy to carry out on these objects.
\end{enumerate}
Condition (a) means that if two twist sequences lead to the same cube state (e.g. U$^{-1}$ and UUU), they should also result in identical objects. Condition (b) means that the equality should be easy to check, given the objects. That is, a cube should \textit{not} be represented by its twist sequence.
A cube state is represented in GBG by the abstract class \texttt{CubeState} and has two describing members
\begin{eqnarray}
f_c[i] &=& \mbox{\tt fcol}\xspace[i] \label{eq:fcol} \\
\ensuremath{s_{\ell}}\xspace[i] &=& \mbox{\tt sloc}\xspace[i] \label{eq:sloc}
\end{eqnarray}
$f_c[i] = \mbox{\tt fcol}\xspace[i]$ denotes the \textbf{f}ace \textbf{col}or at sticker location $i$. The color is one out of {0,1,2,3,4,5} for the colors {w,b,o,y,g,r}.
$\ensuremath{s_{\ell}}\xspace[i] = \mbox{\tt sloc}\xspace[i]$ contains the \textbf{s}ticker \textbf{loc}ation of the \textit{\href{\#hrefSticker}{sticker}\xspace which is in position $i$ for the solved cube $d$}.
Members $f_c$ and $\ensuremath{s_{\ell}}\xspace$ are vectors with 24 (2x2x2 cube) or 48 (3x3x3 cube) elements where $i$ denotes the $i$th \href{\#hrefSticker}{sticker}\xspace location.
The stickers are numbered in a certain way which is detailed in Figures~\ref{fig:2x2x2stickers} and
\ref{fig:3x3x3stickers} for the flattened representations of the 2x2x2 and 3x3x3 cube, resp.
\begin{figure}%
\centerline{
\renewcommand{\arraystretch}{1.75}
\begin{scriptsize}
\begin{tabular}{|x|x|x|c|c|c|y|y|y|z|z|z|} \cline{4-6}
\multicolumn{3}{c|}{ } & 6 & 5 & 4 & \multicolumn{4}{c}{ } \\ \cline{4-6}
\multicolumn{3}{c|}{ } & 7 & & 3 & \multicolumn{4}{c}{ } \\ \cline{4-6}
\multicolumn{3}{c|}{ } & 0 & 1 & 2 & \multicolumn{4}{c}{ } \\ \hline
10& 9 & 8 & 16\co& 23\co& 22\co& 36& 35& 34& 46& 45& 44\\ \hline
11& & 15& 17\co& \co& 21\co& 37& & 33& 47& & 43\\ \hline
12& 13& 14& 18\co& 19\co& 20\co& 38& 39& 32& 40& 41& 42\\ \hline
\multicolumn{3}{c|}{ } & 28\ye& 27\ye& 26\ye& \multicolumn{4}{c}{ } \\ \cline{4-6}
\multicolumn{3}{c|}{ } & 29\ye& \ye& 25\ye& \multicolumn{4}{c}{ } \\ \cline{4-6}
\multicolumn{3}{c|}{ } & 30\ye& 31\ye& 24\ye& \multicolumn{4}{c}{ } \\ \cline{4-6}
\end{tabular}
\end{scriptsize}
\renewcommand{\arraystretch}{1.0}
}
\caption{Sticker numbering for the 3x3x3 cube. We do not number the center \href{\#hrefCubie}{cubies}\xspace, they stay invariant under twists.}
\label{fig:3x3x3stickers}
\end{figure}
In principle, one of the two members $f_c$ and $\ensuremath{s_{\ell}}\xspace$ would be sufficient to characterize a state, since the \textbf{fcol-sloc-relation}
\begin{equation}
f_c[\ensuremath{s_{\ell}}\xspace[i]] = d.f_c[i]
\label{eq:fcolsloc}
\end{equation}
holds, where $d$ denotes the default cube.
This is because $\ensuremath{s_{\ell}}\xspace[i]$ transports the sticker $i$ of the default cube $d$ to location $\ensuremath{s_{\ell}}\xspace[i]$, i.e. it has the color $d.f_c[i]$.
That is, we can easily calculate $f_c$ given $\ensuremath{s_{\ell}}\xspace$. With some more effort, it is also possible to calculate $\ensuremath{s_{\ell}}\xspace$ given $f_c$ (see Appendix~\ref{app:calc_s_from_f}). Although one of these members $f_c$ and $\ensuremath{s_{\ell}}\xspace$ would be sufficient, we keep both because this allows us to better perform assertions and cross-checks during transformations.
Sometimes we need the inverse function $\ensuremath{s_{\ell}}\xspace^{-1}[i]$: \textit{Which sticker is at location $i$?} It is easy to calculate $\ensuremath{s_{\ell}}\xspace^{-1}$ given $\ensuremath{s_{\ell}}\xspace$ with the help of the relation:
\begin{equation}
\ensuremath{s_{\ell}}\xspace^{-1}[\ensuremath{s_{\ell}}\xspace[i]] = i
\label{eq:sloc-inv}
\end{equation}
(Note that it is \textit{not} possible to invert $f_c$, because the face coloring function is not bijective.)
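The relations \eqref{eq:fcolsloc} and \eqref{eq:sloc-inv} translate directly into code. The following Python sketch (a hypothetical illustration of this bookkeeping, not the actual GBG Java implementation) uses the 2x2x2 sticker numbering of Figure~\ref{fig:2x2x2stickers}:

```python
# Default 2x2x2 cube d: stickers 0-3 are white (0), 4-7 blue (1),
# 8-11 orange (2), 12-15 yellow (3), 16-19 green (4), 20-23 red (5).
DEFAULT_FCOL = [i // 4 for i in range(24)]

def fcol_from_sloc(sloc):
    """Recover fcol from sloc via the fcol-sloc-relation fcol[sloc[i]] = d.fcol[i]."""
    fcol = [None] * 24
    for i in range(24):
        fcol[sloc[i]] = DEFAULT_FCOL[i]
    return fcol

def sloc_inverse(sloc):
    """Which sticker is at location i?  Defined by sloc_inv[sloc[i]] = i."""
    inv = [None] * 24
    for i in range(24):
        inv[sloc[i]] = i
    return inv

# For the solved cube, sloc is the identity and fcol is the default coloring:
identity = list(range(24))
assert fcol_from_sloc(identity) == DEFAULT_FCOL
assert sloc_inverse(identity) == identity
```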
\subsection{Transformations}
\label{sec:transform}
\begin{table}%
\caption{The three relevant twists for the 2x2x2 cube}
\label{tab:twist}
\centerline{
\begin{scriptsize}
\tabcolsep=0.08cm
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|l|c||a|a|a|a||c|c|c|c||a|a|a|a||c|c|c|c||a|a|a|a||c|c|c|c||} \hline
& &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &14 &15 &16 &17 &18 &19 &20 &21 &22 &23 \\ \hline\hline
U twist & $T$ &1 &2 &3 &0 &11&8 &6 &7 &18&9 &10 &17 &12 &13 &14 &15 &16 &22 &23 &19 &20 &21 &4 &5 \\ \hline
L twist & $T$ &22&1 &2 &21 &5 &6 &7 &4 &3 &0 &10 &11 &12 &13 &8 &9 &16 &17 &18 &19 &20 &14 &15 &23 \\ \hline
F twist & $T$ &7 &4 &2 &3 &14&5 &6 &13 &9 &10 &11 &8 &12 &18 &19 &15 &16 &17 &0 &1 &20 &21 &22 &23 \\ \hline\hline
U$^{-1}$& $T^{-1}$&3 &0 &1 &2 &22&23 &6 &7 &5 &9 &10 &4 &12 &13 &14 &15 &16 &11 &8 &19 &20 &21 &17 &18 \\ \hline
L$^{-1}$& $T^{-1}$& 9&1 &2 &8 &7 &4 &5 &6 &14&15 &10 &11 &12 &13 &21 &22 &16 &17 &18 &19 &20 &3 &0 &23 \\ \hline
F$^{-1}$& $T^{-1}$&18&19&2 &3 &1 &5 &6 &0 &11&8 &9 &10 &12 &7 &4 &15 &16 &17 &13 &14 &20 &21 &22 &23 \\ \hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{scriptsize}
}
\end{table}
\subsubsection{Twist Transformations}
\label{sec:twist}
Each basic twist is a counterclockwise\footnote{The rotation is counterclockwise when looking at this face.} rotation of a face by $90^\circ$.
Table~\ref{tab:twist} shows the 2x2x2 transformation functions for three basic twists. Each twist transformation can be coded in two forms:
\begin{enumerate}
\item $T[i]$ (forward transformation): Which is the new location for the \href{\#hrefSticker}{sticker}\xspace being at $i$ before the twist?
\item $T^{-1}[i]$ (inverse transformation): Which is the (parent) location of the \href{\#hrefSticker}{sticker}\xspace that lands in $i$ after the twist?
\end{enumerate}
Example (read off from column $0$ of Table~\ref{tab:twist}): The L-twist transports sticker at $0$ to $22$: $T[0]=22$. The (parent) sticker being at location $9$ before the L-twist comes to location $0$ after the twist: $T^{-1}[0]=9$. Likewise, for the U-twist we have $T[0]=1$ and $T^{-1}[0]=3$. We show in Fig.~\ref{fig:utwist} the default cube after twist U${}^{1}$.
How can we apply a twist transformation to a cube state programmatically? -- We denote with $f_c'$ and $\ensuremath{s_{\ell}}\xspace'$ the new states for $f_c$ and $\ensuremath{s_{\ell}}\xspace$ after transformation. The following relations allow to calculate the transformed cube state:
\begin{eqnarray}
f_c'[i] &=& f_c[T^{-1}[i]] \label{eq:utwist_f}\\
\ensuremath{s_{\ell}}\xspace'[\ensuremath{s_{\ell}}\xspace^{-1}[i]] &=& T[i] \label{eq:utwist_s}
\end{eqnarray}
Eq.~\eqref{eq:utwist_f} says: The new color for sticker $0$ is the color of the sticker which moves into location $0$ ($f_c[9]$ in the case of an L-twist). To explain Eq.~\eqref{eq:utwist_s}, we first note that $\ensuremath{s_{\ell}}\xspace^{-1}[i]$ is the sticker being at $i$ before the transformation. Then, Eq.~\eqref{eq:utwist_s} says: \glqq The new location for the sticker being at $i$ before the transformation is $T[i]$.\grqq\ For example, the L-twist transports the current sticker at location $0$ to the new location $T[0]=22$, i.\,e. $\ensuremath{s_{\ell}}\xspace'[0]=22$.
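Putting Eqs.~\eqref{eq:utwist_f} and \eqref{eq:utwist_s} together, a twist is applied with two array look-ups per sticker. The following Python sketch (hypothetical code, not the GBG implementation) uses the forward U-twist row of Table~\ref{tab:twist} and checks that four quarter twists restore the solved cube:

```python
# Forward U-twist permutation T for the 2x2x2 cube (U row of the twist table):
T_U = [1, 2, 3, 0, 11, 8, 6, 7, 18, 9, 10, 17,
       12, 13, 14, 15, 16, 22, 23, 19, 20, 21, 4, 5]

def invert(T):
    """Inverse permutation: T_inv[T[i]] = i."""
    T_inv = [0] * len(T)
    for i in range(len(T)):
        T_inv[T[i]] = i
    return T_inv

def apply_trafo(fcol, sloc, T):
    """fcol'[i] = fcol[T_inv[i]]; the sticker at location j moves to T[j],
    hence sloc'[s] = T[sloc[s]] for every sticker s."""
    T_inv = invert(T)
    new_fcol = [fcol[T_inv[i]] for i in range(len(T))]
    new_sloc = [T[sloc[s]] for s in range(len(T))]
    return new_fcol, new_sloc

fcol = [i // 4 for i in range(24)]   # default coloring of the solved cube
sloc = list(range(24))               # solved cube: every sticker at home
state = (fcol, sloc)
for _ in range(4):                   # U applied four times is the identity
    state = apply_trafo(*state, T_U)
assert state == (fcol, sloc)
```

Note that \texttt{invert(T\_U)} reproduces the U$^{-1}$ row of Table~\ref{tab:twist}.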
\begin{figure}%
\centerline{
\renewcommand{\arraystretch}{1.75}
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \cline{3-4}
\multicolumn{2}{c|}{ } & 2 & 1 &\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } & 3 & 0 &\multicolumn{4}{c}{ } \\ \hline
23\re&22\re& 5\bl& 4\bl& 8\co& 11\co& 18\gr& 17\gr\\ \hline
6\bl& 7\bl& 9\co&10\co& 19\gr& 16\gr& 20\re& 21\re\\ \hline
\multicolumn{2}{c|}{ } &14\ye&13\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } &15\ye&12\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\end{tabular}
\end{scriptsize}
\renewcommand{\arraystretch}{1.0}
}
\caption{The default 2x2x2 cube after twist U$^{1}$}
\label{fig:utwist}
\end{figure}
For the 2x2x2 cube, these 3 twists U, L, F are sufficient, because D=U$^{-1}$, R=L$^{-1}$, B=F$^{-1}$. This is because the 2x2x2 cube has no center \href{\#hrefCubie}{cubies}\xspace. For the 3x3x3 cube, we need all 6 twists U, L, F, D, R, B because this cube has center cubies.
\begin{table}[bp]%
\caption{The U twist for the 3x3x3 cube}
\label{tab:Utwist3x3}
\centerline{
\begin{scriptsize}
\tabcolsep=0.08cm
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|l|c||a|a|a|a||c|c|c|c||a|a|a|a||c|c|c|c||a|a|a|a||c|c|c|c||} \hline
& &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &14 &15 &16 &17 &18 &19 &20 &21 &22 &23 \\ \hline\hline
U twist & $T$ &2 &3 &4 &5 &6 &7 &0 &1 &22&23 &16 &11 &12 &13 &14 &15 &36 &17 &18 &19 &20 &21 &34 &35 \\ \hline
& &24&25&26 &27 &28&29 &30&31 &32&33 &34 &35 &36 &37 &38 &39 &40 &41 &42 &43 &44 &45 &46 &47 \\ \hline\hline
U twist & $T$ &24&25&26 &27 &28&29 &30&31 &32&33 &44 &45 &46 &37 &38 &39 &40 &41 &42 &43 &8 &9 &10 &47 \\ \hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{scriptsize}
}
\end{table}
In any case, we will show in Sec.~\ref{sec:wcr} that only one row in Table~\ref{tab:twist} or Table~\ref{tab:Utwist3x3}, say $T$ for the U-twist, has to be known or established 'by hand'. All other twists and their inverses can be calculated programmatically with the help of Eqs.~\eqref{eq:T-inv}-\eqref{eq:twistsFromWCR_B} that will be derived in Sec.~\ref{sec:wcr}.
\hypertarget{hrefNormalize2x2}{\paragraph{\hspace*{1cm}\textit{Normalizing the 2x2x2 Cube}}}
As stated above, the 3 twists U, L, F are sufficient for the 2x2x2 cube. Therefore, the \href{\#hrefDRBcubie}{(DRB)}\xspace-cubie will never leave its place, whatever the twist sequence formed by U, L, F is. The (DRB)-cubie has the stickers (12, 16, 20), and we can check in Table~\ref{tab:twist} that columns (12, 16, 20) are always invariant. If we have an arbitrary initial 2x2x2 cube state, we can normalize it by applying a \href{\#hrefWCR}{whole-cube rotation}\xspace such that the (ygr)-cubie moves to the (DRB)-location.
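This invariance can be verified mechanically from the three forward rows of Table~\ref{tab:twist} (a small illustrative Python check):

```python
# Forward twist permutations U, L, F for the 2x2x2 cube (twist table rows):
TWISTS = {
    "U": [1, 2, 3, 0, 11, 8, 6, 7, 18, 9, 10, 17,
          12, 13, 14, 15, 16, 22, 23, 19, 20, 21, 4, 5],
    "L": [22, 1, 2, 21, 5, 6, 7, 4, 3, 0, 10, 11,
          12, 13, 8, 9, 16, 17, 18, 19, 20, 14, 15, 23],
    "F": [7, 4, 2, 3, 14, 5, 6, 13, 9, 10, 11, 8,
          12, 18, 19, 15, 16, 17, 0, 1, 20, 21, 22, 23],
}

# The (DRB)-cubie carries the stickers 12, 16, 20; none of U, L, F moves them:
for name, T in TWISTS.items():
    for sticker in (12, 16, 20):
        assert T[sticker] == sticker, (name, sticker)
```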
\hypertarget{hrefNormalize3x3}{\paragraph{\hspace*{1cm}\textit{Normalizing the 3x3x3 Cube}}}
In the case of the 3x3x3 cube, the center cubies are not affected by any twist sequence. Therefore, we normalize a 3x3x3 cube state by initially applying a \href{\#hrefWCR}{whole-cube rotation}\xspace such that the center cubies are in their normal position (i.e. white up, blue left and so on).
\hypertarget{hrefWCR}{\subsubsection{Whole-Cube Rotations (WCR)}}
\label{sec:wcr}
Each basic \textbf{whole-cube rotation} (WCR) is a counterclockwise rotation of the whole cube around the $u,l,f$-axis by $90^\circ$.
Table~\ref{tab:basic-wcr} shows two of the 2x2x2 transformation functions for basic whole-cube rotations. Each rotation can be coded in two forms:
\begin{enumerate}
\item $T[i]$ (forward transformation): Which is the new location for the \href{\#hrefSticker}{sticker}\xspace being at $i$ before the rotation?
\item $T^{-1}[i]$ (inverse transformation): Which is the (parent) location of the \href{\#hrefSticker}{sticker}\xspace that lands in $i$ after the rotation?
\end{enumerate}
\begin{table}%
\caption{Two basic whole-cube rotations for the 2x2x2 cube}
\label{tab:basic-wcr}
\centerline{
\begin{scriptsize}
\tabcolsep=0.08cm
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|c|c||j|j|j|j||c|c|c|c||j|j|j|j||c|c|c|c||j|j|j|j||c|c|c|c||} \hline
& &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &14 &15 &16 &17 &18 &19 &20 &21 &22 &23 \\ \hline\hline
$u$ rotation & $T$ &1 &2 &3 &0 &11&8 &9 &10 &18&19&16 &17 &15 &12 &13 &14 &21 &22 &23 &20 & 6 & 7 &4 &5 \\ \hline
$f$ rotation & $T$ &7 &4 &5 &6 &14&15&12&13 &9 &10 &11 &8 &17 &18 &19 &16 & 2 & 3 &0 &1 &23 &20 &21 &22 \\ \hline\hline
$u^{-1}$& $T^{-1}$ &3 &0 &1 &2 &22&23 &20&21 &5 &6 & 7 &4 &13 &14 &15 &12 &10 &11 &8 &9 &19 &16 &17 &18 \\ \hline
$f^{-1}$& $T^{-1}$ &18&19&16&17 &1 &2 &3 &0 &11&8 &9 &10 & 6 &7 &4 & 5 &15 &12 &13 &14 &21 &22 &23 &20 \\ \hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{scriptsize}
}
\end{table}
Besides the basic rotation $u$ there is also $u^2$ ($180^\circ$) and $u^3=u^{-1}$ ($270^\circ = - 90^\circ$).
All whole-cube rotations can be generated from these two forward rotations $u$ and $f$: First, we calculate the inverse transformations via
\begin{equation}
T^{-1}[T[i]] = i
\label{eq:T-inv}
\end{equation}
where $T$ is a placeholder for $u$ or $f$. Next, we calculate the missing base rotation $\ell$ (counter-clockwise around the left face) as
\begin{equation}
\ell = f^{-1}uf
\label{eq:wcr-left}
\end{equation}
We use here the program-code-oriented notation \hypertarget{hrefFTF}{\textbf{\glqq first trafo first\grqq}}: Eq.~\eqref{eq:wcr-left} reads as \glqq first $f^{-1}$, then $u$, then $f$\grqq.\footnote{In program code the relation would read \texttt{cs.fTr(3).uTr().fTr(1)}. This is \textbf{\glqq first trafo first\grqq}, because each transformation is applied to the cube state object to the left and returns the transformed cube state object.}
The other basic whole-cube rotations $d,r,b$ are not needed, because $d=u^{-1}, r=\ell^{-1}$ and $b=f^{-1}$.
The basic whole-cube rotations are rotations of the whole cube around just one axis. But there are also composite whole-cube rotations which consist of a sequence of basic rotations.
How many different (composite) rotations are there for the cube? -- A little thought reveals that there are 24 of them: To be specific, we consider the default cube where we have 4 rotations with the white face up, 4 with the blue face up, and so on. In total we have $6\cdot 4=24$ rotations since there are 6 faces.
Table~\ref{tab:wcr} lists all of them, together with the \href{\#hrefWCR}{WCR}\xspace numbering convention used in GBG.
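The count of 24 can also be confirmed computationally: generating the closure of the two basic rotations $u$ and $f$ of Table~\ref{tab:basic-wcr} under first-trafo-first composition yields exactly 24 distinct permutations (illustrative Python sketch, not part of GBG):

```python
# Forward WCR permutations u and f for the 2x2x2 cube (basic WCR table):
u = (1, 2, 3, 0, 11, 8, 9, 10, 18, 19, 16, 17,
     15, 12, 13, 14, 21, 22, 23, 20, 6, 7, 4, 5)
f = (7, 4, 5, 6, 14, 15, 12, 13, 9, 10, 11, 8,
     17, 18, 19, 16, 2, 3, 0, 1, 23, 20, 21, 22)

def compose(a, b):
    """First trafo a, then trafo b: the sticker at i ends up at b[a[i]]."""
    return tuple(b[a[i]] for i in range(len(a)))

def invert(t):
    inv = [0] * len(t)
    for i, j in enumerate(t):
        inv[j] = i
    return tuple(inv)

# Breadth-first closure of {u, f} under composition:
group = {tuple(range(24))}
frontier = list(group)
while frontier:
    g = frontier.pop()
    for h in (u, f):
        c = compose(g, h)
        if c not in group:
            group.add(c)
            frontier.append(c)

assert len(group) == 24                 # exactly 24 whole-cube rotations
self_inverse = sum(1 for g in group if invert(g) == g)
assert self_inverse == 10               # 10 rotations are their own inverse
```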
\begin{table}%
\caption{All 24 whole-cube rotations (in \href{\#hrefFTF}{first-trafo-first}\xspace notation)}
\label{tab:wcr}
\centerline{
\begin{tabular}{lccccc} \hline
number & first rotation & $\ast\, u^{0}$& $\ast\, u^{1}$& $\ast\, u^{2}$& $\ast\, u^{3}$ \\ \hline\hline
00-03 & \textit{id} (white up) &\textit{id} &$u$ &$u^2$ &$u^3$ \\
04-07 & $f$ (green up) &$f$ &$fu$ &$fu^2$ &$fu^3$ \\
08-11 & $f^2$ (yellow up) &$f^2$ &$f^2u$ &$f^2u^2$ &$f^2u^3$ \\
12-15 & $f^{-1}$ (blue up) &$f^{-1}$ &$f^{-1}u$ &$f^{-1}u^2$ &$f^{-1}u^3$ \\
16-19 & $\ell$ (orange up) &$\ell$ &$\ell u$ &$\ell u^2$ &$\ell u^3$ \\
20-23 & $\ell^{-1}$ (red up) &$\ell^{-1}$ &$\ell^{-1}u$ &$\ell^{-1}u^2$ &$\ell^{-1}u^3$ \\ \hline\hline
\end{tabular}
}
\end{table}
Sometimes we need the inverse whole-cube rotations, which are given in Table~\ref{tab:wcr-inverse}. In this table, we read for example from the element with number 5 that the \href{\#hrefWCR}{WCR}\xspace with key 5 (which is $fu$ according to Table~\ref{tab:wcr}) has the inverse WCR $\ell u^3$ such that
$$
fu \, \ell u^3 = \textit{id}
$$
holds.
For convenience, we list in Table~\ref{tab:invkey} the <Key, InverseKey> relation. For example, the trafo with Key=5 ($fu$) has the inverse trafo with InverseKey=19 ($\ell u^3$). Note that there are 10 whole-cube rotations which are their own inverse.
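The claims that there are 24 WCRs and that 10 of them are self-inverse can be verified independently by modeling WCRs as $3\times 3$ integer rotation matrices and generating the closure of the two generators. This Python sketch uses illustrative axis conventions (assumptions for the example, not necessarily GBG's):

```python
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
U = ((0, -1, 0), (1, 0, 0), (0, 0, 1))   # 90 deg about the up axis (assumed z)
F = ((1, 0, 0), (0, 0, -1), (0, 1, 0))   # 90 deg about the front axis (assumed x)

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def closure(gens):
    # all products of the generators: the whole-cube rotation group
    elems, frontier = {I3}, [I3]
    while frontier:
        m = frontier.pop()
        for g in gens:
            p = matmul(g, m)
            if p not in elems:
                elems.add(p)
                frontier.append(p)
    return elems

group = closure([U, F])
F_inv = tuple(zip(*F))                 # inverse of a rotation matrix = transpose
ell = matmul(F_inv, matmul(U, F))      # l = f u f^{-1} acting on column vectors
print(len(group))                                # 24
print(sum(matmul(m, m) == I3 for m in group))    # 10 self-inverse WCRs
print(ell in group)                              # True
```

The inverse-key relation of Table~\ref{tab:invkey} could be computed the same way, by matching each element's transpose against the list of all 24 matrices.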
\begin{table}%
\caption{The 24 \textit{inverse} whole-cube rotations (in \href{\#hrefFTF}{first-trafo-first}\xspace notation)}
\label{tab:wcr-inverse}
\centerline{
\begin{tabular}{lccccc} \hline
number & first rotation & $\ast\, u^{0}$& $\ast\, u^{1}$& $\ast\, u^{2}$& $\ast\, u^{3}$ \\ \hline\hline
00-03 & \textit{id} (white up) &\textit{id} &$u^3$ &$u^2$ &$u$ \\
04-07 & $f$ (green up) &$f^{-1}$ &$\ell u^3$ &$fu^2$ &$\ell^{-1}u$ \\
08-11 & $f^2$ (yellow up) &$f^2$ &$f^2u$ &$f^2u^2$ &$f^2u^3$ \\
12-15 & $f^{-1}$ (blue up) &$f$ &$\ell^{-1}u^3$ &$f^{-1}u^2$ &$\ell u$ \\
16-19 & $\ell$ (orange up) &$\ell^{-1}$ &$f^{-1}u^3$ &$\ell u^2$ &$fu$ \\
20-23 & $\ell^{-1}$ (red up) &$\ell$ &$fu^3$ &$\ell^{-1}u^2$ &$f^{-1}u$ \\ \hline\hline
\end{tabular}
}
\end{table}
\begin{table}%
\caption{Whole-cube rotations: <Key, InverseKey> relation}
\label{tab:invkey}
\centerline{
\begin{scriptsize}
\tabcolsep=0.08cm
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|c||j|j|j|j||c|c|c|c||j|j|j|j||c|c|c|c||j|j|j|j||c|c|c|c||} \hline
key & 0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &14 &15 &16 &17 &18 &19 &20 &21 &22 &23 \\ \hline\hline
inv key & 0 &3 &2 &1 &12&19&6 &21 &8 &9 &10 &11 & 4 &23 &14 &17 &20 &15 &18 &5 &16 & 7 &22 &13 \\ \hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{scriptsize}
}
\end{table}
\paragraph{Generating all twists from U twist}
With the help of \href{\#hrefWCR}{WCRs}\xspace we can generate the other twists from the U twist only: We simply rotate the face that we want to twist to the up-face, apply the U twist and rotate back. This reads in \href{\#hrefFTF}{first-trafo-first}\xspace notation:
\begin{eqnarray}
L &=& f^{-1} U f \label{eq:twistsFromWCR_L}\\
F &=& \ell U \ell^{-1} \\
D &=& f^2 U f^2\\
R &=& f U f^{-1} \\
B &=& \ell^{-1} U \ell \label{eq:twistsFromWCR_B}
\end{eqnarray}
Thus, given the U twist from Table~\ref{tab:twist} or Table~\ref{tab:Utwist3x3} and the basic \href{\#hrefWCR}{WCRs}\xspace given in Table~\ref{tab:basic-wcr} and Eq.~\eqref{eq:wcr-left}, we can calculate all other forward transformations with the help of Eqs.~\eqref{eq:twistsFromWCR_L}--\eqref{eq:twistsFromWCR_B}. Then, all inverse transformations are calculable with the help of Eq.~\eqref{eq:T-inv}.
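The effect of this conjugation trick can be illustrated with abstract permutations: conjugating a 4-cycle on one block of positions by a relabeling that exchanges two blocks moves the 4-cycle to the other block. The following Python sketch uses a hypothetical 8-position toy cube (not the real 24- or 48-element sticker vectors):

```python
def compose(first, then):
    # first-trafo-first composition: item at i goes to first[i], then to then[first[i]]
    return [then[first[i]] for i in range(len(first))]

def inverse(t):
    t_inv = [0] * len(t)
    for i, ti in enumerate(t):
        t_inv[ti] = i
    return t_inv

n = 8
U = [1, 2, 3, 0] + list(range(4, n))    # toy "U twist": 4-cycle on the top block 0-3
f = list(range(4, n)) + list(range(4))  # toy "rotation": swaps top block and front block

L = compose(compose(inverse(f), U), f)  # first f^{-1}, then U, then f
print(L)   # [0, 1, 2, 3, 5, 6, 7, 4] -- a 4-cycle on the front block, top untouched
```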
\subsubsection{Color Transformations}
\label{sec:colortrans}
Color transformations are special transformations that allow us to discover non-trivial symmetric (equivalent) states.
One way to describe a color transformation is to select a valid color permutation and to paint each sticker with the new color according to this color permutation. This is of course nothing one can do with a real cube without destroying or altering it, but it is a theoretical concept leading to an equivalent state.
Another way of looking at it is to record the twist sequence that leads from the default cube to a certain scrambled cube state. Then we go back to the default cube, make at first a whole-cube rotation (leading to a color-transformed default cube) and then apply the recorded twist sequence to the color-transformed default cube.
In any case, the transformed cube will usually not be in its normal position, so we finally apply a \href{\#hrefNormalize2x2}{normalizing operation} to it.
What are valid color permutations? -- These are permutations of the cube colors reachable when applying one of the available 24 \href{\#hrefWCR}{WCRs}\xspace (Table~\ref{tab:wcr}) to the default cube.
For example, if we apply WCR $f$ (number 04) to the default cube, we get
\vspace{0.1cm}
\begin{figure}[h]%
\renewcommand{\arraystretch}{1.30}
\centerline{
\begin{tabular}{|c|c|v|z|} \cline{2-2}
\multicolumn{1}{c|}{ } & g\gr &\multicolumn{2}{c}{ } \\ \hline
w & o\co & y & r \\ \hline
\multicolumn{1}{c|}{ } & b\bl &\multicolumn{2}{c}{ } \\ \cline{2-2}
\end{tabular}
}
\renewcommand{\arraystretch}{1.0}
\vspace{0.1cm}
\caption{The color transformation according to \href{\#hrefWCR}{WCR}\xspace $f$ (number 04)}
\label{fig:colorTrafo}
\end{figure}
\noindent
that is, g (green) is the new color for each up-sticker that was w (white) before and so on. The colors o and r remain untouched under this color permutation.
However, other transformations like $fu$, $fu^2$ and $fu^3$ will change every color.
How can we apply a color transformation to a cube state programmatically? -- We denote with $f'$ and $\ensuremath{s_{\ell}}\xspace'$ the new states for $f$ and $\ensuremath{s_{\ell}}\xspace$ after transformation. The following relations allow us to calculate the transformed cube state:
\begin{eqnarray}
f_c'[i] &=& c[f_c[i]] \label{eq:coltraf_f}\\
\ensuremath{s_{\ell}}\xspace'[\ensuremath{s_{\ell}}\xspace^{-1}[i]] &=& T[i] \label{eq:coltraf_s}
\end{eqnarray}
where $c[]$ is the 6-element color trafo vector (holding the new colors for current colors 0:w, 1:b, ..., 5:r) and $T$ is the 24- or 48-element vector of the \href{\#hrefWCR}{WCR}\xspace that produces this color transformation. Eq.~\eqref{eq:coltraf_f} is simple: If a certain sticker has color 0 (w, white) before the color transformation, then it will get the new color $c[0]$, e.g. 4 (g, green), after the transformation. Eq.~\eqref{eq:coltraf_s} looks complicated, but it has a similar meaning to that in the twist trafo: Take $i=0$ as an example: The new place for the sticker at 0 before the trafo (which came from $\ensuremath{s_{\ell}}\xspace^{-1}[0]$) is $T[0]$. Therefore, we write the number $T[0]$ into $\ensuremath{s_{\ell}}\xspace'[\ensuremath{s_{\ell}}\xspace^{-1}[0]]$.
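Eqs.~\eqref{eq:coltraf_f} and \eqref{eq:coltraf_s} can be sketched in a few lines of Python. The vectors below are toy data for illustration, not the real GBG data structures:

```python
def recolor(fcol, c):
    # Eq. (coltraf_f): each sticker keeps its location, color col becomes c[col]
    return [c[col] for col in fcol]

def transform_sloc(s, T):
    # Eq. (coltraf_s): s'[s^{-1}[i]] = T[i]
    n = len(s)
    s_inv = [0] * n
    for i, si in enumerate(s):
        s_inv[si] = i
    s_new = [0] * n
    for i in range(n):
        s_new[s_inv[i]] = T[i]
    return s_new

c = [4, 0, 3, 5, 2, 1]          # hypothetical 6-element color trafo vector
fcol = [0, 0, 1, 1, 2, 2]       # toy face-color vector (6 stickers only)
print(recolor(fcol, c))         # [4, 4, 0, 0, 3, 3]

# for s = identity, Eq. (coltraf_s) reduces to s' = T:
print(transform_sloc(list(range(6)), [1, 2, 3, 4, 5, 0]))   # [1, 2, 3, 4, 5, 0]
```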
\begin{figure}%
\centerline{
\renewcommand{\arraystretch}{1.75}
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \cline{3-4}
\multicolumn{2}{c|}{ } & 2 & 1 &\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } & 3 & 0 &\multicolumn{4}{c}{ } \\ \hline
23\re&22\re& 5\bl& 4\bl& 8\co& 11\co& 18\gr& 17\gr\\ \hline
6\bl& 7\bl& 9\co&10\co& 19\gr& 16\gr& 20\re& 21\re\\ \hline
\multicolumn{2}{c|}{ } &14\ye&13\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } &15\ye&12\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\end{tabular}
\end{scriptsize}
\renewcommand{\arraystretch}{1.0}
}
\caption{The cube of Fig.~\ref{fig:utwist} before color transformation.}
\label{fig:state-beforeCT}
\end{figure}
\begin{figure}%
\centerline{
\renewcommand{\arraystretch}{1.75}
\begin{tabular}{cc}
\begin{minipage}[t]{8cm}
\hspace*{1.2cm}
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \cline{3-4}
\multicolumn{2}{c|}{ } &16\gr&19\gr&\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } &17\gr&18\gr&\multicolumn{4}{c}{ } \\ \hline
20\re&23\re& 2 & 1 & 11\co& 10\co& 13\ye& 12\ye\\ \hline
3 & 0 & 8\co& 9\co& 14\ye& 15\ye& 21\re& 22\re\\ \hline
\multicolumn{2}{c|}{ } & 4\bl& 7\bl&\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } & 5\bl& 6\bl&\multicolumn{4}{c}{ } \\ \cline{3-4}
\end{tabular}
\end{scriptsize}
\end{minipage}
&
\begin{minipage}[t]{8cm}
\hspace*{1.2cm}
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \cline{3-4}
\multicolumn{2}{c|}{ } & 8\co& 2 &\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } & 9\co& 1 &\multicolumn{4}{c}{ } \\ \hline
4\bl& 7\bl&14\ye&11\co& 18\gr& 17\gr& 23\re& 0 \\ \hline
5\bl& 6\bl&15\ye&10\co& 19\gr& 16\gr& 20\re& 3 \\ \hline
\multicolumn{2}{c|}{ } &21\re&13\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } &22\re&12\ye&\multicolumn{4}{c}{ } \\ \cline{3-4}
\end{tabular}
\end{scriptsize}
\end{minipage}
\\
(a) & (b) \\
\end{tabular}
\renewcommand{\arraystretch}{1.0}
}
\caption{The cube of Fig.~\ref{fig:state-beforeCT} with color transformation from Fig~\ref{fig:colorTrafo}: (a) before normalization, (b) after normalization.}
\label{fig:state-afterCT}
\end{figure}
A \textbf{color transformation example} is shown in Figs.~\ref{fig:state-beforeCT} and \ref{fig:state-afterCT}. Fig.~\ref{fig:state-beforeCT} is just a replication of Fig.~\ref{fig:utwist} showing a default cube after U${}^1$ twist. The color transformation number 04 applied to the cube of Fig.~\ref{fig:state-beforeCT} is shown in Fig.~\ref{fig:state-afterCT} (a)-(b) in two steps:
\begin{enumerate}[(a)]
\item The stickers are re-painted and re-numbered (white becomes green, blue becomes white and so on). The structure of coloring is the same as in Fig.~\ref{fig:state-beforeCT}. Now the \href{\#hrefDRBcubie}{(DRB)}\xspace-cubie is no longer the (ygr)-cubie; it no longer carries the numbers (12,16,20).
\item We apply the proper \href{\#hrefWCR}{WCR}\xspace that brings the (ygr)-cubie back to the \href{\#hrefDRBcubie}{(DRB)}\xspace-location. Compared to (a), each 4-sticker cube face is just rotated to another face, but not changed internally. We can check that the (DRB)-location now carries again the numbers (12,16,20), as in Fig.~\ref{fig:state-beforeCT} and as it should for a normalized cube.
\end{enumerate}
\subsection{Symmetries}
\label{sec:symmetr}
Symmetries are transformations of the game state (and the attached action, if applicable) that lead to equivalent states. That is, if $s$ is a certain state with value $V(s)$, then all states $s_{sym}$ being symmetric to $s$ have the same value $V(s_{sym})=V(s)$ because they are equivalent. \textit{Equivalent} means: If $s$ can be solved by a twist sequence of length $n$, then $s_{sym}$ can be solved by an equivalent twist sequence of the same length $n$.
In the case of Rubik's cube, all whole-cube rotations (\href{\#hrefWCR}{WCRs}\xspace) are symmetries because they do not change the value of a state. But whole-cube rotations are 'trivial' symmetries because they are usually factored out by the normalization of the cube: After \href{\#hrefNormalize2x2}{2x2x2 cube normalization}, which brings the (ygr)-cubie in a certain position, or after \href{\#hrefNormalize3x3}{3x3x3 cube normalization}, which brings the center cubies in certain faces, all \href{\#hrefWCR}{WCR}\xspace-symmetric states are transformed to the same state.
Non-trivial symmetries are all color transformations (Sec.~\ref{sec:colortrans}): In general, color transformations transform a state $s$ to a truly different state $s_{sym}$, even after \href{\#hrefNormalize2x2}{cube normalization}.\footnote{In rare cases -- e.g. for the solved cube -- the transformed state may be identical to $s$ or to another symmetry state, but this seldom happens for sufficiently scrambled cubes, see Sec.~\ref{sec:numSymmetry}.}
Since there are 24 color transformations in Rubik's cube, there are also 24 non-trivial symmetries (including self).
Symmetries are useful to learn to solve Rubik's cube for two reasons: (a) to accelerate learning and (b) to smooth an otherwise noisy value function.
\begin{enumerate}[(a)]
\item \textbf{Accelerated learning}: If a state $s$ (or state-action pair) is observed, not only the weights activated by that state are updated, but also the weights of all symmetric states $s_{sym}$, because they have the same $V(s_{sym})=V(s)$ and thus the same reward. In this way, a single observed sample is connected with more weight updates (better sample efficiency).
\item \textbf{Smoothed value function}:
By this we mean that the value function $V(s)$ is replaced by
\begin{equation}
V^{(sym)}(s) = \frac{1}{|\mathfrak{F}_s|} \sum_{s' \in \mathfrak{F}_s} V(s')
\label{eq:Vsym}
\end{equation}
where $\mathfrak{F}_s$ is the set of states being symmetric to $s$. If $V(s)$ were the ideal value function, both terms $V(s)$ and $V^{(sym)}(s)$ would be the same.\footnote{because all $V(s')$ in Eq.~\eqref{eq:Vsym} are the same for an ideal $V$} But in a real n-tuple network, $V(s)$ is non-ideal due to n-tuple-noise (cross-talk from other states that activate the same n-tuple LUT entries). If we average over the symmetric states $s' \in \mathfrak{F}_s$, the noise will be dampened.
\end{enumerate}
The downside of symmetries is their computational cost: In the case of Rubik's cube, the calculation of color transformations is a costly operation. On the other hand, the number of necessary training episodes to reach a certain performance may be reduced.
In the end, the use of symmetries may pay off, because the total training time may be reduced as well. In any case, we will have a better sample efficiency, since we learn more from each observed state or state-action pair. Secondly, the smoothing effect introduced with Eq.~\eqref{eq:Vsym} can lead to better overall performance, because the smoothed value function provides a better guidance on the path towards the solved cube.
In order to balance computation time, GBG offers the option to select with \texttt{nSym} the number of symmetries actually used.
If we specify for example \texttt{nSym}\,=\,8 in GBG's Rubik's cube implementation, then the state itself and 8\,--\,1\,=\,7 random other (non-id) color transformations will be selected. The resulting set $\mathfrak{F}_s$ of 8 states is then used for weight update and value function computation.
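In code, this selection and the averaging of Eq.~\eqref{eq:Vsym} could look as follows (a minimal sketch; \texttt{value\_fn} and \texttt{color\_trafo} are hypothetical placeholders for the agent's value function and the color-transformation operator, not GBG's API):

```python
import random

def symmetric_value(s, value_fn, color_trafo, n_sym=8, rng=random):
    # Eq. (Vsym) restricted to n_sym sampled symmetries: the state itself
    # (trafo key 0 = identity) plus n_sym - 1 random non-id color trafos
    keys = [0] + rng.sample(range(1, 24), n_sym - 1)
    return sum(value_fn(color_trafo(s, k)) for k in keys) / n_sym

# trivial placeholders just to exercise the function:
print(symmetric_value("state", lambda s: 1.0, lambda s, k: s))   # 1.0
```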
\section{N-Tuple Systems}
\label{sec:ntuples}
N-tuple systems coupled with TD were first applied to game learning by \cite{Lucas08}, although n-tuples were already introduced by~\cite{bledsoe1959pattern} for character recognition purposes. The remarkable success of n-tuples in learning to play Othello~\citep{Lucas08} motivated other authors to benefit from this approach for a number of other games.
The main goal of n-tuple systems is to map a highly non-linear function in a low dimensional space to a high dimensional space where it is easier to separate `good' and `bad' regions. This can be compared to the kernel trick of support-vector machines. An n-tuple is defined as a sequence of $n$ cells of the board. Each cell can have $m$ positional values representing the possible states of that cell.\footnote{A typical example is a 2-player board game, where we usually have 3 positional values \{0: empty, 1: player1, 2: player2 \}. But other, user-defined values are possible as well.} Therefore, every n-tuple will have a (possibly large) look-up table indexed in form of an $n$-digit number in base $m$. Each entry corresponds to a feature and carries a trainable weight. An n-tuple system is a system consisting of $k$ n-tuples.
As an example we show in Fig.~\ref{fig:ntuple01} an n-tuple system consisting of four 8-tuples.
\begin{figure}%
\centerline{
\includegraphics[width=0.5\columnwidth]{figures/ntupleExamples-03.png}
}
\caption{Example n-tuples: We show 4 random-walk 8-tuples on a 6x7 board. The tuples are selected manually to show that not only snake-like shapes are possible, but also bifurcations or cross shapes. Tuples may or may not be symmetric.}
\label{fig:ntuple01}%
\end{figure}
Let $\mathbf{\Theta}$ be the vector of all weights $\theta_i$ of the n-tuple system.\footnote{The index $i$ indexes three qualities: an n-tuple, a cell in this n-tuple and a positional value for this cell.} The length of this vector may be a large number, e.g. $m^n k$, if all $k$ n-tuples have the same length $n$ and each cell has $m$ positional values.
Let $\mathbf{\Phi}(s)$ be a binary vector of the same length representing the feature occurrences in state $s$ (that is, $\mathbf{\Phi}_i(s)=1$ if in state $s$ the cell of a specific n-tuple as indexed by $i$ has the positional value as indexed by $i$, $\mathbf{\Phi}_i(s)=0$ else). The value function of the n-tuple network given state $s$ is
\begin{equation}
V(s) = \sigma \left( \mathbf{\Phi}(s)\cdot \mathbf{\Theta}\right)
\label{eq:valueNtuple}
\end{equation}
with transfer function $\sigma$ which may be a sigmoidal function or simply the identity function.
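For the identity transfer function, Eq.~\eqref{eq:valueNtuple} amounts to summing $k$ LUT weights, one per n-tuple, each addressed by the base-$m$ index formed from the tuple's cells. A minimal sketch with toy data, assuming all cells share the same number $m$ of positional values:

```python
def lut_index(board, cells, m):
    # read the cells' position values as an n-digit number in base m
    idx = 0
    for cell in cells:
        idx = idx * m + board[cell]
    return idx

def value(board, ntuples, luts, m, sigma=lambda x: x):
    # Eq. (valueNtuple): Phi(s).Theta is the sum of the k active LUT weights
    return sigma(sum(lut[lut_index(board, cells, m)]
                     for cells, lut in zip(ntuples, luts)))

board = [1, 0, 1]                        # toy board, m = 2 position values per cell
ntuples = [[0, 2], [1, 2]]               # two 2-tuples
luts = [[0.0, 0.0, 0.0, 0.5],            # weight 0.5 at index 3 = binary 11
        [0.0, 0.25, 0.0, 0.0]]           # weight 0.25 at index 1 = binary 01
print(value(board, ntuples, luts, m=2))  # 0.75
```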
An agent using this n-tuple system derives a policy from the value function in Eq.~\eqref{eq:valueNtuple} as follows: Given state $s$ and the set $A(s)$ of available actions in state $s$, it applies with a forward model $f$
every action $a \in A(s)$ to state $s$, yielding the next state $s' = f(s,a)$. Then it selects the action that maximizes $V(s')$.
Each time a new agent is constructed, all n-tuples are either created in fixed, user-defined positions and shapes, or they are formed by \textit{random walk}. In a \textit{random walk}, all cells are placed randomly with the constraint that each cell must be adjacent\footnote{The form of adjacency, e.~g. 4- or 8-point neighborhood or any other (might be cell-dependent) form of adjacency, is user-defined.} to at least one other cell in the n-tuple.
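Such a random walk can be sketched as follows (a hypothetical helper, not GBG's implementation; a toy 3x2 board with 4-point neighborhood serves as the example adjacency):

```python
import random

def random_walk_tuple(n, adjacency, rng=random):
    # grow one n-tuple: every new cell must be adjacent to a cell already chosen
    cells = [rng.choice(list(adjacency))]
    while len(cells) < n:
        anchor = rng.choice(cells)
        candidates = [c for c in adjacency[anchor] if c not in cells]
        if candidates:
            cells.append(rng.choice(candidates))
    return cells

# toy 3x2 board with 4-point neighborhood (a stand-in for the cube boards)
W, H = 3, 2
adj = {(x, y): {(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < W and 0 <= y + dy < H}
       for x in range(W) for y in range(H)}
t = random_walk_tuple(4, adj)
print(len(t), len(set(t)))   # 4 4
```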
Agent training proceeds in the TD-n-tuple algorithm as follows:
Let $s'$ be the actual state generated by the agent and let $s$ be the previous state generated by this agent. TD(0) learning
adapts the value function with model parameters $\mathbf{\Theta}$ through \citep{SuttBart98}
\begin{equation}
\mathbf{\Theta} \leftarrow \mathbf{\Theta} + \alpha\delta\mathbf{\nabla_{\mathbf{\Theta}}} V(s)
\label{eq:theta}
\end{equation}
Here, $\alpha$ is the learning rate and $V$ is in our case the n-tuple value function of Eq.~\eqref{eq:valueNtuple}. $\delta$ is the usual TD error \citep{SuttBart98} after the agent has acted and generated $s'$:
\begin{equation}
\delta = r+\gamma V(s') - V(s)
\label{eq:TDdelta}
\end{equation}
where the sum of the first two terms, reward $r$ plus the discounted value $\gamma V(s')$, is the desirable target for $V(s)$.
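For the linear case $\sigma=\mathrm{id}$, the gradient $\mathbf{\nabla_{\mathbf{\Theta}}} V(s)$ in Eq.~\eqref{eq:theta} is just the binary feature vector $\mathbf{\Phi}(s)$, so the update touches only the weights active in $s$. A minimal sketch of Eqs.~\eqref{eq:theta} and \eqref{eq:TDdelta}:

```python
def td_update(theta, active, alpha, r, gamma, v_s, v_s_next):
    # TD(0) for a linear n-tuple network: only weights with Phi_i(s) = 1 change
    delta = r + gamma * v_s_next - v_s      # Eq. (TDdelta)
    for i in active:                        # indices with Phi_i(s) = 1
        theta[i] += alpha * delta           # Eq. (theta)
    return delta

theta = [0.0] * 4
td_update(theta, active=[1, 3], alpha=0.5, r=1.0, gamma=0.9, v_s=0.0, v_s_next=0.0)
print(theta)   # [0.0, 0.5, 0.0, 0.5]
```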
\section{N-Tuple Representations for the Cube}
\label{sec:represent-ntuple}
In order to apply n-tuples to cubes, we have to define a board in one way or the other on which we can place the n-tuples. This is not as straightforward as in other board games, but we are free to invent abstract boards. Once we have defined a board, we can number the board cells $k=0,\ldots,K-1$ and translate a cube state into a BoardVector: A \hypertarget{hrefBV}{\textbf{BoardVector}} $\mathbf{b}$ is a vector of $K$ non-negative integer numbers $b_k \in \{0,\ldots,N_k-1\}$. Each $k$ represents a board cell and every board cell $k$ has a predefined number $N_k$ of position values.\footnote{In GBG package \texttt{ntuple2} (base for agent TDNTuple3Agt), all $N_k$ have to be the same. In package \texttt{ntuple4} (base for agent TDNTuple4Agt), the numbers $N_k$ may be different for different $k$.}
A \href{\#hrefBV}{BoardVector}\xspace is useful to calculate the feature occurrence vector $\mathbf{\Phi}(s)$ in Eq.~\eqref{eq:valueNtuple} for a given n-tuple set: If an n-tuple contains board cell $k$, then look into $b_k$ to get the position value for this cell $k$. Set $\mathbf{\Phi}_i(s)=1$ for that index $i$ that indexes this n-tuple cell and this position value.
In the following we present different options for boards and \href{\#hrefBV}{BoardVectors}\xspace. We do this mainly for the 2x2x2 cube, because it is somewhat simpler to explain. But the same ideas apply to the 3x3x3 cube as well; they are just a little longer. Therefore, we defer the lengthy details of the 3x3x3 cube to Appendix~\ref{app:represent-ntuple3x3}.
\subsection{CUBESTATE}
\label{sec:cubestate}
A natural way to translate the cube state into a board is to use the flattened representation of Fig.~\ref{fig:2x2STICKER} as the board and extract from it the 24-element vector $\mathbf{b}$, according to the given numbering. The $k$th element $b_k$ represents a certain cubie face location and gets a number from $\{0,\ldots,5\}$ according to its current face color $f_c$.
The solved cube is for example represented by $\mathbf{b} = [0000\ 1111\ 2222\ \ldots\ 5555]$.
This is what the BoardVecType CUBESTATE in our GBG implementation means: Each board vector is a copy of \texttt{fcol}, the face colors of all cubie faces. \texttt{fcol} is also the vector that uniquely defines each cube state.
An upper bound on the number of possible combinations for $\mathbf{b}$ is $6^{24} = 4.7\cdot 10^{18}$. If we factor out the \href{\#hrefDRBcubie}{(DRB)}\xspace-cubie, which always stays at its home position, we can reduce this to 21 board cells with 6 positional values, leading to $6^{21} = \mathbf{2.1\cdot 10^{16}}$ weights. Both numbers are of course much larger than the true number of distinct states (Sec.~\ref{sec:facts2x2}), which is $3.6\cdot 10^{6}$. This is because most of the combinations are dead weights in the n-tuple LUTs: they will never be activated during game play.
The dead weights occur because many combinations are not realizable, e.g. three white faces in one cubie or any of the $6^3 - 8\cdot 3 = 192$ cubie-face-color combinations that are not present in the real cube. The problem is that the dead weights are scattered in a complicated way among the active weights and it is thus not easy to factor them out.
\begin{figure}%
\centerline{
\begin{tabular}{cc}
\includegraphics[width=0.25\columnwidth]{figures/sticker_3x3x3_McAleer_topView.png} &
\includegraphics[width=0.25\columnwidth]{figures/sticker_3x3x3_McAleer_bottomView.png} \\
(a) Top view & (b) Bottom view \\
\end{tabular}
}
\caption{The sticker representation used to reduce dimensionality: Stickers that are used are shown in white, whereas ignored stickers are dark blue (from~\cite{mcaleer2019solving}).}%
\label{fig:stickerCube}%
\end{figure}
\begin{figure}%
\centerline{
\renewcommand{\arraystretch}{1.75}
\begin{scriptsize}
\begin{tabular}{|x|x|c|c|x|x|x|x|} \cline{3-4}
\multicolumn{2}{c|}{ } & 3 & 2 &\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } & 0 & 1 &\multicolumn{4}{c}{ } \\ \hline
5 & 4 & 8\bl&11\bl& 18& 17& 23& 22\\ \hline
6 & 7 & 9\bl&10\bl& 19& 16& 20& 21\\ \hline
\multicolumn{2}{c|}{ } &14 &13 &\multicolumn{4}{c}{ } \\ \cline{3-4}
\multicolumn{2}{c|}{ } &15 &12\bl&\multicolumn{4}{c}{ } \\ \cline{3-4}
\end{tabular}
\end{scriptsize}
\renewcommand{\arraystretch}{1.0}
}
\caption{Tracked stickers for the 2x2x2 cube (white), while ignored stickers are blue.}
\label{fig:2x2STICKER}
\end{figure}
\subsection{STICKER}
\label{sec:sticker}
\cite{mcaleer2019solving} had the interesting idea for the 3x3x3 cube that 20 \href{\#hrefSticker}{stickers}\xspace (cubie faces) are enough. To characterize the full 3x3x3 cube, we need only one (not 2 or 3) sticker for each of the 20 cubies, as shown in Fig.~\ref{fig:stickerCube}. This is because the location of one sticker uniquely defines the location and orientation of that cubie. We name this representation STICKER in GBG.
Translated to the 2x2x2 cube, this means that 8 stickers are enough because we have only 8 cubies. We may for example track the 4 top stickers 0,1,2,3 plus the 4 bottom stickers 12,13,14,15 as shown in Fig.~\ref{fig:2x2STICKER} and ignore the 16 other stickers.
Since we always \href{\#hrefNormalize2x2}{normalize} the cube such that the \href{\#hrefDRBcubie}{(DRB)}\xspace-cubie with sticker 12 stays in place, we can reduce this even more to \textbf{7 stickers} (all but sticker 12).
How to lay out this representation as a board? – \cite{mcaleer2019solving} create a rectangular one-hot-encoding board with $7 \times 21 = 147$ cells (7 rows for the stickers and 21 columns for the locations) carrying only 0's and 1's. This is fine for the approach of \cite{mcaleer2019solving}, where they use this board as input for a DNN, but not so nice for n-tuples. Without constraints, such a board amounts to $2^{147} = \mathbf{1.7\cdot 10^{44}}$ combinations, which is unpleasantly large (much larger than in CUBESTATE).\footnote{A possible STICKER \href{\#hrefBV}{BoardVector}\xspace for the default cube would read $\mathbf{b} = [1000000\ 0100000\ 0010000\ \ldots\ ]$, meaning that location 0 has the first sticker, location 1 has the second sticker, and so on. In any STICKER \href{\#hrefBV}{BoardVector}\xspace there are only 7 columns carrying exactly one 1; the others carry only 0's. Every row carries exactly one 1.}
STICKER has more dead weights than CUBESTATE, so it seems like a step back. But the point is that the dead weights are better structured: If for example sticker 0 appears at column 1, then this column and the two other columns for the same cubie are automatically forbidden for all other stickers. Likewise, if sticker 1 is placed in another column, another set of 3 columns is forbidden, and so on. We can use this fact to form a much more compact representation, STICKER2.
\subsection{STICKER2}
\label{sec:sticker2}
As the analysis in the preceding section has shown, the 21 location columns of STICKER cannot carry the tracked stickers in arbitrary combinations. Each cubie (represented by 3 columns in STICKER) carries only exactly \textit{one} sticker. We can make this fact explicit by choosing another representation for the 21 locations:
$$ \mbox{corner location} = (\mbox{corner cubie}, \mbox{\href{\#hrefFaceID}{face ID}\xspace} ).$$
That is, each location is represented by a pair: corner cubie a,b,c,d,f,g,h (we number the top cubies with letters a,b,c,d and the bottom cubies with letters e,f,g,h and omit e because it corresponds to the \href{\#hrefDRBcubie}{(DRB)}\xspace-cubie) and a face ID. To number the faces with a \hypertarget{hrefFaceID}{\textbf{face ID}}, we follow the convention that we start at the top (bottom) face with face ID 1 and then move counter-clockwise around the corner cubie to visit the other faces (2,3). Table~\ref{tab:STICKER2-corner} shows the explicit numbering in this new representation.
\begin{table}[tbp]%
\caption{The correspondence \textit{corner location $\leftrightarrow$ STICKER2} for the solved cube. The yellow colored cells show the location of the 7 (2x2x2) and 8 (3x3x3) corner stickers that we track.}
\label{tab:STICKER2-corner}
\centerline{
\begin{scriptsize}
\tabcolsep=0.08cm
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|l|c||a|a|a|a||c|c|c|c||a|a|a|a||c|c|c|c||a|a|a|a||c|c|c|c||} \hline
2x2x2 & location&0\ye&1\ye&2\ye&3\ye &4 &5 &6 &7 &8 &9 &10 &11 &12 &13\ye&14\ye&15\ye &16 &17 &18 &19 &20 &21 &22 &23 \\ \hline
3x3x3 & location&0\ye&2\ye&4\ye&6\ye &8 &10&12&14 &16&18 &20 &22 &24\ye&26\ye&28\ye&30\ye &32 &34 &36 &38 &40 &42 &44 &46 \\ \hline\hline
\multirow{2}{*}{STICKER2 }
& corner &a &b &c &d &a &d &h &g &a &g &f &b &e &f &g &h &e &c &b &f &e &h &d &c \\
& \href{\#hrefFaceID}{face ID}\xspace &1 &1 &1 &1 &2 &3 &2 &3 &3 &2 &3 &2 &1 &1 &1 &1 &2 &2 &3 &2 &3 &3 &2 &3 \\ \hline\hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{scriptsize}
}
\end{table}
To represent a state as a board vector, we now use the much smaller board shown in Table~\ref{tab:board-STICKER2}: Each cell in the first row has 7 position values (the letters) and each cell in the second row has 3 position values (the \href{\#hrefFaceID}{face IDs}\xspace). We show in Table~\ref{tab:board-STICKER2} the board vector for the default cube, $\mathbf{b} = [\mbox{abcdfgh 1111111}]$. Representation STICKER2 allows for $7^7\cdot 3^7 = \mathbf{1.8\cdot 10^9}$ combinations in total, which is much smaller than STICKER and CUBESTATE.
\begin{table}[ht]
\caption{STICKER2 board representation for the default 2x2x2 cube. For the \href{\#hrefBV}{BoardVector}\xspace, cells are numbered row-by-row from 0 to 16.}
\label{tab:board-STICKER2}
\renewcommand{\arraystretch}{1.35}
\centerline{
\begin{tabular}{c|a|a|a|a|a|a|a|c} \cline{2-8}
corner & a\re & b\re & c\re & d\re & f\re & g\re & h\re & {\scriptsize 7 positions}\\ \cline{2-8}
\href{\#hrefFaceID}{face ID}\xspace& 1 & 1 & 1 & 1 & 1 & 1 & 1 & {\scriptsize 3 positions}\\ \cline{2-8}
\end{tabular}
}
\renewcommand{\arraystretch}{1.0}
\end{table}
STICKER2 has some dead weights remaining, because the combinations can carry the same letter multiple times, which is not allowed for a real cube state. But this rate of dead weights is tolerable.
It turns out that STICKER2 is in all aspects better than CUBESTATE or STICKER. Therefore, we will only report the results for STICKER2 in the following.
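As a quick sanity check, the weight counts quoted in Secs.~\ref{sec:cubestate}--\ref{sec:sticker2} can be recomputed directly (a small Python sketch; the numbers are those derived in the text):

```python
# weight counts for the three 2x2x2 board representations
cubestate = 6 ** 21            # 21 cells, 6 colors each          (~2.1e16)
sticker   = 2 ** 147           # 7 x 21 one-hot board             (~1.7e44)
sticker2  = 7 ** 7 * 3 ** 7    # 7 corner letters, 7 face IDs
print(sticker2)                # 1801088541, i.e. about 1.8e9
print(sticker2 < cubestate < sticker)   # True

# STICKER2 board vector of the default cube (Table board-STICKER2):
# the 7 tracked corners and their face IDs
default_b = list("abcdfgh") + [1] * 7
```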
\subsection{Adjacency Sets}
\label{sec:adjacency}
To create n-tuples by random walk, we need adjacency sets (sets of neighbors) to be defined for every board cell $k$.
For CUBESTATE, the board is the flattened representation of the 2x2x2 cube (Fig.~\ref{fig:2x2x2stickers}). The adjacency set is defined as the 4-point neighborhood, where two stickers are neighbors if they share a common edge on the cube, i.e. are neighbors on the cube.
For STICKER2, the board consists of 16 cells shown in Table~\ref{tab:board-STICKER2}. Here, the adjacency set for cell $k$ contains all other cells different from $k$.
\vspace{0.3cm}
Again, the details of ideas similar to Sec.~\ref{sec:cubestate}--\ref{sec:adjacency}, but now for the 3x3x3 cube, are shown in Appendix \ref{sec:cubestate-3x3}--\ref{sec:adjacency3x3}.
\section{Learning the Cube}
\label{sec:learning}
\subsection{McAleer and Agostinelli}
\label{sec:McAleer}
The works of \cite{mcaleer2018solving,mcaleer2019solving} and \cite{agostinelli2019solving} contain the most advanced methods to date for learning to solve the cube from scratch.
\cite{agostinelli2019solving} introduce the cost-to-go function for a general Markov decision process
\begin{equation}
J(s) = \min_{a\in A(s)} \sum_{s'}{P^a(s,s')\left( g^a(s,s')+\gamma J(s') \right)}
\label{eq:Jcost-general}
\end{equation}
where $P^a(s,s')$ is the probability of transitioning from state $s$ to $s'$ by taking action $a$ and $g^a(s,s')$ is the cost for this transition. In the Rubik's cube case, we have deterministic transitions, that is $s'=f(s,a)$ is deterministically prescribed by a forward model $f$. Therefore, the sum reduces to one term and we specialize to $\gamma=1$. Furthermore, we set $g^a(s,s')=1$, because only the length of the solution path counts, so that we get the simpler equation
\begin{equation}
J(s) = \min_{a\in A(s)} \left( 1+ J(s') \right) \quad\mbox{with}\quad s'=f(s,a).
\label{eq:Jcost-rubiks}
\end{equation}
Here, $A(s)$ is the set of available actions in state $s$. We additionally set $J(s^*)=0$ if $s^*$ is the solved cube.
To better understand Eq.~\eqref{eq:Jcost-rubiks} we look at a few examples: If $s_1$ is a state one twist away from $s^*$, Eq.~\eqref{eq:Jcost-rubiks} will find this twist and set $J(s_1)=1$. If $s_2$ is a state two twists away from $s^*$ and all one-twist states already have their correct labels $J(s_1)=1$, then Eq.~\eqref{eq:Jcost-rubiks} will find the twist leading to an $s_1$ state and set $J(s_2)=1+1=2$. As iterations proceed, more and more states (further away from $s^*$) will be correctly labeled, once their preceding states are correctly labeled. In the end we should ideally have
$$ J(s_n)=n. $$
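This labeling behavior can be reproduced with tabular value iteration of Eq.~\eqref{eq:Jcost-rubiks} on a toy deterministic puzzle (a hypothetical ring puzzle with 8 positions, not the cube):

```python
def value_iteration(neighbors, goal, sweeps=20):
    # tabular Eq. (Jcost-rubiks): J(goal) = 0, J(s) = 1 + min over successors
    J = {s: (0 if s == goal else float("inf")) for s in neighbors}
    for _ in range(sweeps):
        for s in neighbors:
            if s != goal:
                J[s] = 1 + min(J[sp] for sp in neighbors[s])
    return J

n = 8
ring = {s: [(s - 1) % n, (s + 1) % n] for s in range(n)}   # actions: step left/right
J = value_iteration(ring, goal=0)
print([J[s] for s in range(n)])   # [0, 1, 2, 3, 4, 3, 2, 1] -- J(s) = move distance
```

States close to the goal get their correct label first; farther states follow once their predecessors are labeled, exactly as described above.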
However, the number of states for Rubik's cube is too large to store them all in tabular form. Therefore, \cite{mcaleer2019solving} and \cite{agostinelli2019solving} approximate $J(s)$ with a deep neural network (DNN). To train such a network in the Rubik's cube case, they introduce \hypertarget{hrefDAVI}{\textbf{Deep Approximate Value Iteration (DAVI)}}\footnote{More precisely, \cite{mcaleer2019solving} use Autodidactic Iteration (ADI), a precursor to DAVI, very similar to DAVI, just a bit more complicated to explain. Therefore, we describe here only DAVI.} shown in Algorithm~\ref{algo:DAVI}. The network output $j_{\mathbf{\Theta}}(s)$ is trained in line 8 to approximate the (unknown) cost-to-go $J(s)$ for every state $s=x_i$. The main trick of DAVI is, as \cite{agostinelli2019solving} write: \glqq For learning to occur, we must train on a state distribution that allows information to propagate from the goal state to all the other states seen during training. Our approach for achieving this is simple: each training state $x_i$ is obtained by randomly scrambling the goal state $k_i$ times, where $k_i$ is uniformly distributed between $1$ and $K$. During training, the cost-to-go function first improves for states that are only one move away from the goal state. The cost-to-go function then improves for states further away as the reward signal is propagated from the goal state to other states through the cost-to-go function.\grqq
\algnewcommand\And{\textbf{and}}
\begin{algorithm}[tbp]
\caption{DAVI algorithm (from \cite{agostinelli2019solving}). Input: $B$: batch size, $K$: maximum number of twists, $M$: training iterations, $C$: how often to check for convergence, $\epsilon$: error threshold. Output: $\mathbf{\Theta}$, the trained neural network parameters.
}
\label{algo:DAVI}
\begin{algorithmic}[1]
\Function{DAVI}{$B,K,M,C,\epsilon$}
\State $\mathbf{\Theta} \leftarrow$ \Call{initializeNetworkParameters}{}
\State $\mathbf{\Theta}_C \leftarrow \mathbf{\Theta}$
\For{$m = 1, \ldots, M$}
\State $X \leftarrow $\Call{generateScrambledStates}{$B,K$} \Comment{$B$ scrambled cubes}
\For{$x_i \in X$}
\State $y_i \leftarrow \min_{a \in A(s)} \left[ 1+ j_{\mathbf{\Theta}_C}(f(x_i,a)) \right]$ \Comment{cost-to-go function, Eq.~\eqref{eq:Jcost-rubiks}}
\EndFor
\State $(\mathbf{\Theta}, \mbox{loss}) \leftarrow$ \Call{train}{$j_{\mathbf{\Theta}},X,\mathbf{y}$} \Comment{loss = MSE$(j_{\mathbf{\Theta}}(x_i),y_i)$}
\If{($m \mod C=0 \And \mbox{loss}<\epsilon$)}
\State $\mathbf{\Theta}_C \leftarrow \mathbf{\Theta}$
\EndIf
\EndFor
\State \Return $\mathbf{\Theta}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\cite{agostinelli2019solving} use in Algorithm~\ref{algo:DAVI} two sets of parameters to train the DNN: the parameters $\mathbf{\Theta}$ being trained and the parameters $\mathbf{\Theta}_C$ used to obtain improved estimates of the cost-to-go function. If they did not use these two separate sets, performance often \glqq saturated after a certain point and sometimes became unstable. Updating $\mathbf{\Theta}_C$ only after the error falls below a threshold $\epsilon$ yields
better, more stable, performance.\grqq\ \citep{agostinelli2019solving} To train the DNN, they used $M=1\,000\,000$ iterations, each with batch size $B=10\,000$. Thus, the trained DNN has seen ten billion cubes ($10^{10}$) during training, which is still only a small subset of the $4.3\cdot 10^{19}$ possible cube states.
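As a minimal illustration of the DAVI loop, the sketch below runs Algorithm~\ref{algo:DAVI} on the same toy 8-state ring puzzle used above; a lookup table `theta` stands in for the DNN $j_{\mathbf{\Theta}}$ and `theta_C` for $\mathbf{\Theta}_C$. The toy environment, the hyperparameter values and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal DAVI sketch on a toy 8-state ring; a lookup table replaces the DNN.
import random

N_STATES, GOAL = 8, 0
ACTIONS = (+1, -1)                        # the toy puzzle's "twists"

def f(s, a):                              # deterministic transition model
    return (s + a) % N_STATES

def scramble(k):                          # k random twists away from the goal
    s = GOAL
    for _ in range(k):
        s = f(s, random.choice(ACTIONS))
    return s

def davi(B=32, K=4, M=200, C=10, eps=0.05, lr=0.5):
    theta = [0.0] * N_STATES              # parameters being trained
    theta_C = list(theta)                 # frozen copy used for the targets
    for m in range(1, M + 1):
        X = [scramble(random.randint(1, K)) for _ in range(B)]
        loss = 0.0
        for x in X:                       # lines 6-8: cost-to-go targets
            y = 0.0 if x == GOAL else min(1.0 + theta_C[f(x, a)] for a in ACTIONS)
            loss += (theta[x] - y) ** 2
            theta[x] += lr * (y - theta[x])   # one SGD step on the MSE
        if m % C == 0 and loss / B < eps:
            theta_C = list(theta)         # lines 10-11: update target params
    return theta

random.seed(0)
theta = davi()                            # theta[s] approaches the move distance
```

As described in the quoted passage, the estimates first become correct for states one move from the goal and then propagate outward each time $\mathbf{\Theta}_C$ is refreshed.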
The heuristic function of the trained DNN alone cannot solve 100\% of the cube states. Especially for higher twist numbers $k_i$, an additional solver or search algorithm is needed. This is in the case of \cite{mcaleer2019solving} a Monte Carlo Tree Search (MCTS), similar to AlphaZero~\citep{silver2017AlphaGoZero}, which uses the DNN as the source for prior probabilities. \cite{agostinelli2019solving} use instead a variant of A$^*$-search, which is found to produce solutions with a shorter path in a shorter runtime than MCTS.
\begin{algorithm}[tbp]
\caption{TD-n-tuple algorithm for Rubik's cube. Input: $p_{max}$: maximum number of twists, $M$: training iterations, $E_{train}$: maximum episode length during training, $c$: negative cost-to-go, $R_{pos}$: positive reward for reaching the solved cube $s^*$, $\alpha$: learning rate. $j_{\mathbf{\Theta}}(s)$: n-tuple network value prediction for state $s$. Output: $\mathbf{\Theta}$, the trained n-tuple network parameters.
}
\label{algo:TDNTuple4-rubiks}
\begin{algorithmic}[1]
\Function{TDNTuple}{$p_{max},M,E_{train},c,R_{pos}$}
\State $\mathbf{\Theta} \leftarrow$ \Call{initializeNetworkParameters}{}
\For{$m = 1, \ldots, M$}
\State $p \sim U(1,\ldots,p_{max})$ \Comment Draw $p$ uniformly random from $\{1,2,\ldots,p_{max}\}$
\State $s \leftarrow$ \Call{scrambleSolvedCube}{$p$} \Comment start state
\For{$k = 1, \ldots, E_{train}$}
\State $s_{new} \leftarrow \underset{a\in A(s)}{\arg\max}\, V(s') \quad\mbox{with} \quad s'=f(s,a) \quad\mbox{and} $
\State $V(s') = c + \left\{
\begin{array}{l}
R_{pos} \mbox{\ \ \qquad if \quad} s'=s^* \\
\,j_{\mathbf{\Theta}}(s') \mbox{\qquad if \quad} s'\neq s^*
\end{array} \right.$
\State Train network $j_{\mathbf{\Theta}}$ with Eq.~\eqref{eq:theta} to bring $V(s)$ closer to target $T = V(s_{new})$:
$$ V(s) \leftarrow V(s)+\alpha (T-V(s)) $$
\State $s \leftarrow s_{new}$
\If{($s=s^*$)}
\State break \Comment break out of $k$-loop
\EndIf
\EndFor
\EndFor
\State \Return $\mathbf{\Theta}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{N-Tuple-based TD Learning}
\label{sec:tdntuple4}
To solve the Rubik's cube in GBG we use an algorithm that is on the one hand inspired by \href{\#hrefDAVI}{DAVI}\xspace, but on the other hand more similar to traditional reinforcement learning schemes like temporal difference (TD) learning. In fact, we want to use in the end the same TD-FARL algorithm~\citep{Konen2021_FARL_arXiv} that we use for all other GBG games.
Algorithm~\ref{algo:TDNTuple4-rubiks} shows our method, which we explain in the following, highlighting similarities and dissimilarities to \href{\#hrefDAVI}{DAVI}\xspace.
First of all, instead of minimizing the positive cost-to-go as in \href{\#hrefDAVI}{DAVI}\xspace, we maximize in lines 7-8 a value function $V(s')$ with a negative cost-to-go. This maximization is functionally equivalent, but more similar to the usual TD-learning scheme. The negative cost-to-go, e.g. $c=-0.1$, plays the role of the positive $1$ in Eq.~\eqref{eq:Jcost-rubiks}.
Secondly, we replace the DNN of \href{\#hrefDAVI}{DAVI}\xspace by the simpler-to-train n-tuple network $j_{\mathbf{\Theta}}$ with STICKER2 representation as described in Sec.~\ref{sec:ntuples} and \ref{sec:represent-ntuple}. That is, each time $j_{\mathbf{\Theta}}(s')$ is requested, we first calculate for state $s'$ the \href{\#hrefBV}{BoardVector}\xspace in STICKER2 representation, then the occurrence vector $\mathbf{\Phi}(s')$ and the value function $V(s')$ according to Eq.~\eqref{eq:valueNtuple}.
The central equations for $V(s')$ in Algorithm~\ref{algo:TDNTuple4-rubiks}, lines 7-8, work similar to Eq.~\eqref{eq:Jcost-rubiks} in \href{\#hrefDAVI}{DAVI}\xspace: If $s=s_1$ is a state one twist away from $s^*$, the local search in $\arg\max V(s')$ will find this twist and the training step in line 9 moves $V(s)$ closer to $c+R_{pos}$.\footnote{
It is important that $R_{pos}$ is a \textbf{positive} number, e.g. 1.0 (and not 0, as it was for \href{\#hrefDAVI}{DAVI}\xspace). This is because we start with an initial n-tuple network with all weights set to 0, so the initial response of the network to any state is 0.0. Thus, if $R_{pos}$ were 0, a one-twist state would see all its neighbors (including $s^*$) initially as responding 0.0 and would not learn the right transition to $s^*$. With $R_{pos}=1.0$ it will quickly find $s^*$.}
Likewise, neighbors $s_2$ of $s_1$ will find $s_1$ and thus move $V(s_2)$ closer to $2c+R_{pos}$.
The same holds for $s_3,s_4,\ldots$, provided that a 'known' state is in the neighborhood. We have a clear gradient on the path towards the solved cube $s^*$. If there are no 'known' states in the neighborhood of $s_n$, we get for $V(s_n)$ whatever the net maximally estimates for those neighbors. We pick the neighbor with the highest estimate and wander around until we hit a state with a 'known' neighbor or until we reach the step limit $E_{train}$.
Note that Algorithm~\ref{algo:TDNTuple4-rubiks} differs from \href{\#hrefDAVI}{DAVI}\xspace insofar as it follows the path $s \rightarrow s'\rightarrow \ldots$ prescribed by the current $V$, which may lead to a state sequence 'wandering in the unknown' until $E_{train}$ is reached. In contrast, \href{\#hrefDAVI}{DAVI}\xspace generates many start states $s_0$ drawn from the distribution of training set states and trains the network just on pairs $(s_0,T)$, i.e. it performs just \textbf{one step} on the path. We instead follow the full path, because we want the training method for Rubik's cube to be as similar as possible to the training method for other GBG games.\footnote{We note in passing that we tested the DAVI variant with $E_{train}=1$ for our TD-n-tuple method as well.
However, we found that this method gave much worse results, so we stick with our GBG method here.
}
Algorithm~\ref{algo:TDNTuple4-rubiks} is basically the same algorithm as GBG uses for other games. The only differences are (i) the cube-specific start state selection borrowed from \href{\#hrefDAVI}{DAVI}\xspace (a 1-twist start state has the same probability as a 10-twist start state) and (ii) the cube-specific reward in line 8 of Algorithm~\ref{algo:TDNTuple4-rubiks} with its negative cost-to-go $c$, which is however a common element of many RL rewards.
Algorithm~\ref{algo:TDNTuple4-rubiks} currently learns with only one parameter vector $\mathbf{\Theta}$. However, it could be extended as in \href{\#hrefDAVI}{DAVI}\xspace to two parameter vectors $\mathbf{\Theta}$ and $\mathbf{\Theta}_C$. The weight training step in line 9 is done with the help of Eq.~\eqref{eq:theta} for $\mathbf{\Theta}$ using the error signal $\delta$ of Eq.~\eqref{eq:TDdelta}.\\[0.1cm]
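The greedy successor selection (lines 7-8) and the TD update (line 9) can be sketched as follows, again on a toy 8-state ring with a lookup table in place of the n-tuple network $j_{\mathbf{\Theta}}$. The toy environment and the parameter values are illustrative assumptions only.

```python
# Sketch of one training episode of the TD-n-tuple scheme on a toy ring puzzle.
import random

N_STATES, GOAL = 8, 0
ACTIONS = (+1, -1)

def f(s, a):
    return (s + a) % N_STATES

def episode(j, s, E_train=10, c=-0.1, R_pos=1.0, alpha=0.25):
    """Follow the greedy path from s, TD-updating the table j along the way."""
    for _ in range(E_train):
        if s == GOAL:
            break
        # lines 7-8: value of a successor s' is step cost c plus R_pos (solved)
        # or the current network estimate j(s') (not solved)
        def V(sp):
            return c + (R_pos if sp == GOAL else j[sp])
        s_new = max((f(s, a) for a in ACTIONS), key=V)
        # line 9: move the estimate for s toward the target T = V(s_new)
        j[s] += alpha * (V(s_new) - j[s])
        s = s_new

random.seed(1)
j = [0.0] * N_STATES
for _ in range(500):                      # scrambled start states (lines 4-5)
    s = GOAL
    for _ in range(random.randint(1, 4)):
        s = f(s, random.choice(ACTIONS))
    episode(j, s)
```

After training, the table carries the gradient toward the goal described above: one-twist states approach $c+R_{pos}=0.9$, two-twist states approach $2c+R_{pos}=0.8$, and so on.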
There are two extra elements, TCL and MCTS, that complete our n-tuple-based TD learning. They are described in the next two subsections.
\subsubsection{Temporal Coherence Learning (TCL)}
\label{sec:tcl}
The TCL algorithm developed by Beal and Smith~\cite{Beal99} is an extension of TD learning. It replaces the global learning rate $\alpha$ with the weight-individual product $\alpha\alpha_i$ for every weight $\theta_i$. Here, the adjustable learning rate $\alpha_i$ is a free parameter set by a simple procedure: For each weight $\theta_i$, two counters $N_i$ and $A_i$ accumulate the sum of weight changes and the sum of absolute weight changes. If all weight changes have the same sign, then $\alpha_i=|N_i|/A_i=1$, and the learning rate stays at its upper bound. If weight changes have alternating signs, then the global learning rate is probably too large. In this case, $\alpha_i=|N_i|/A_i \rightarrow 0$ for $t \rightarrow \infty$, and the effective learning rate will be strongly reduced for this weight.
In our previous work~\citep{Bagh15} we extended TCL to $\alpha_i=g(|N_i|/A_i)$ where $g$ is a transfer function being either the identity function (standard TCL) or an exponential function $g(x)=e^{\beta(x-1)}$.
It was shown in~\cite{Bagh15} that TCL with this exponential transfer function leads to faster learning and higher win rates for the game ConnectFour.
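The counter-based rate adaptation described above can be sketched compactly; for one weight, given its sequence of changes, $\alpha_i = g(|N_i|/A_i)$ with $g$ the identity (standard TCL) or $g(x)=e^{\beta(x-1)}$. The function and parameter names are illustrative, not GBG's actual code.

```python
# TCL learning-rate adaptation: per-weight accumulators N_i (signed sum of
# changes) and A_i (sum of absolute changes) give alpha_i = g(|N_i| / A_i).
import math

def tcl_alpha(deltas, beta=2.7, exponential=False):
    """Return alpha_i after a sequence of weight changes for one weight."""
    N = sum(deltas)                       # signed accumulator N_i
    A = sum(abs(d) for d in deltas)       # absolute accumulator A_i
    if A == 0:
        return 1.0                        # untouched weight: full learning rate
    x = abs(N) / A
    return math.exp(beta * (x - 1.0)) if exponential else x

# Same-signed changes keep the rate at its upper bound 1 ...
assert tcl_alpha([0.1, 0.2, 0.1]) == 1.0
# ... while alternating signs shrink it: |N|/A = 0.1/0.5, i.e. about 0.2
print(tcl_alpha([0.1, -0.1, 0.1, -0.1, 0.1]))
```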
\subsubsection{MCTS}
\label{sec:mcts}
We use Monte Carlo Tree Search (MCTS) \citep{browne2012MCTS} to augment our trained network during testing and evaluation. This is the method also used by \cite{mcaleer2019solving} and by AlphaGo Zero \citep{silver2017AlphaGoZero}, but they use it also during training.
MCTS iteratively builds a search tree, starting with a tree that contains only the start state $s_0$ as the root node.
Until the iteration budget is exhausted, MCTS repeats the following: starting from the root node, we select actions following the tree policy until we reach a yet unexpanded leaf node $s_{\ell}$.
The tree policy is implemented in our MCTS wrapper according to the UCB formula~\citep{silver2017AlphaGoZero}:
\begin{eqnarray}
a_{new} &=& \arg\max_{a \in A(s)}\left(\frac{W(s,a)}{N(s,a)}+U(s,a)\right)
\label{eq:UCB} \\
U(s,a) &=& c_{puct}P(s,a)\frac{\sqrt{\varepsilon+\sum_{b \in A(s)}{N(s,b)}}}{1+N(s,a)}
\label{eq:UCB2}
\end{eqnarray}
Here, $W(s,a)$ is the accumulator for all backpropagated values that arrive along branch $a$ of the node that carries state $s$. Likewise, $N(s,a)$ is the visit counter and $P(s,a)$ the prior probability. $A(s)$ is the set of actions available in state $s$.
$\varepsilon$ is a small positive constant for the special case $\sum_b{N(s,b)}=0$: It guarantees that in this special case the maximum of $U(s,a)$ is given by the maximum of $P(s,a)$. The prior probabilities $P(s,a)$ are obtained by sending the trained network's values of all follow-up states $s'=f(s,a)$ with $a \in A(s)$ through a softmax function (see Sec.~\ref{sec:ntuples}).\footnote{Note that the prior probabilities and the MCTS iteration are only needed at test time, so that we -- different to AlphaZero -- do not need MCTS during self-play training.}
Once an unexpanded leaf node $s_{\ell}$ is reached, the node is expanded by initializing its accumulators: $W(s,a) = N(s,a)=0$ and $P(s,a)=p_{s'}$ where $p_{s'}$ is the softmax-squashed output $j_{\mathbf{\Theta}}(s')$ of our n-tuple network for each state $s'=f(s,a)$. The value of the node is the network output of the best state $j_{\mathbf{\Theta}}(s_{best})= \max_{s'} j_{\mathbf{\Theta}}(s')$ and this value is backpropagated up the tree.
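The tree-policy selection of Eqs.~\eqref{eq:UCB}-\eqref{eq:UCB2} can be sketched as below. The node representation (three dicts keyed by action) and all names are illustrative assumptions; see the cited reference for the actual wrapper implementation.

```python
# UCB action selection: pick argmax_a [ W/N + c_puct * P * sqrt(eps + sum_b N_b) / (1 + N) ].
import math

def select_action(W, N, P, c_puct=1.0, eps=1e-8):
    """W, N, P: dicts mapping action -> value accumulator, visit count, prior."""
    total = sum(N.values())
    def ucb(a):
        q = W[a] / N[a] if N[a] > 0 else 0.0   # mean backpropagated value
        u = c_puct * P[a] * math.sqrt(eps + total) / (1 + N[a])
        return q + u
    return max(W, key=ucb)

# Freshly expanded node (all N = 0): the action with the largest prior wins,
# which is exactly the special case that the small eps guarantees.
W = {'U': 0.0, 'R': 0.0, 'F': 0.0}
N = {'U': 0, 'R': 0, 'F': 0}
P = {'U': 0.2, 'R': 0.5, 'F': 0.3}
assert select_action(W, N, P) == 'R'
```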
More details on our MCTS wrapper can be found in \cite{Scheier2022}.
\begin{algorithm}[tbp]
\caption{TD-n-tuple training algorithm. Input: see Algorithm~\ref{algo:TDNTuple4-rubiks}. Output: $\mathbf{\Theta}$: trained n-tuple network parameters.
}
\label{algo:TDNTuple4-training}
\begin{algorithmic}[1]
\Function{TDNTupleTrain}{$p_{max},M,E_{train},c,R_{pos}$}
\State $\mathbf{\Theta} \leftarrow$ \Call{initializeNetworkParameters}{}
\State \Call{initializeTCLParameters}{} \Comment Set TCL-accumulators $N_i=A_i=0, \alpha_i=1 \,\,\forall i$
\For{$m = 1, \ldots, M$}
\State Perform one $m$-iteration of Algorithm~\ref{algo:TDNTuple4-rubiks} with learning rates $\alpha\alpha_i$ instead of $\alpha$
\State $N_i \leftarrow N_i + \Delta \theta_i$ and $A_i \leftarrow A_i + |\Delta \theta_i|$ \Comment Update TCL-accumulators
\State \Comment where $\Delta \theta_i$ is the last term in Eq.~\eqref{eq:theta}
\State $\alpha_i \leftarrow |N_i|/A_i \quad\forall i \mbox{ with } A_i \neq 0$
\EndFor
\State \Return $\mathbf{\Theta}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[tbp]
\caption{Evaluation algorithm with MCTS solver. Input: trained n-tuple network $j_{\mathbf{\Theta}}$, $p$: number of scrambling twists, $B$: batch size, $E_{eval}$: maximum episode length during evaluation, $I$: number of MCTS-iterations, $c_{PUCT}$: relative weight for $U(s,a)$ in Eq.~\eqref{eq:UCB}, $d_{max}$: maximum MCTS tree depth. Output: solved rate.
}
\label{algo:TDNTuple4-evaluation}
\begin{algorithmic}[1]
\Function{TDNTupleEval}{$j_{\mathbf{\Theta}},p,B,E_{eval},I,c_{PUCT},d_{max}$}
\State $X \leftarrow $\Call{generateScrambledCubes}{$B,p$} \Comment{$B$ scrambled cubes}
\State $C_{solved} \leftarrow$ 0
\For{$x_i \in X$}
\State $s \leftarrow x_i$
\For{$k = 1, \ldots, E_{eval}$}
\State $T \leftarrow$ \Call{performMctsSearch}{$s,I,c_{PUCT},d_{max},j_{\mathbf{\Theta}}$}
\State $a \leftarrow$ \Call{selectMostVisitedAction}{}
\State $s \leftarrow f(s,a) $
\If{($s=s^*$)}
\State $C_{solved} \leftarrow C_{solved}+1$
\State break \Comment break out of $k$-loop
\EndIf
\EndFor
\EndFor
\State \Return $C_{solved}/B$ \Comment percentage solved
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsubsection{Method Summary}
\label{sec:algo-summary}
We summarize the different ingredients of our n-tuple-based TD learning method in Algorithm~\ref{algo:TDNTuple4-training} (training) and Algorithm~\ref{algo:TDNTuple4-evaluation} (evaluation).
In line 5 of Algorithm~\ref{algo:TDNTuple4-training} we perform one $m$-iteration of Algorithm~\ref{algo:TDNTuple4-rubiks} which does an update step for weight vector $\mathbf{\Theta}$, see Eq.~\eqref{eq:theta}. All weights of activated n-tuple entries get a weight change $\Delta \theta_i$ equal to the last term in Eq.~\eqref{eq:theta} where the global $\alpha$ is replaced by $\alpha\alpha_i$.
Line 2 in Algorithm~\ref{algo:TDNTuple4-evaluation} generates a set $X$ of $B$ scrambled cube states. Line 7 builds for each $x_i \in X$ an MCTS tree (see Sec.~\ref{sec:mcts}) starting from root node $x_i$ and line 8 selects the most visited action of the root node. If the goal state $s^*$ is not found during $E_{eval}$ $k$-loop trials, this $x_i$ is considered as not being solved.
\section{Results}
\label{sec:results}
\subsection{Experimental setup}
We use for all our GBG experiments the same RL method based on n-tuple systems and TCL. Only its hyperparameters are tuned to the specific game, as shown below. We refer to this method/agent as \textbf{\textit{TCL-base}\xspace} whenever it alone is used for game playing. If we wrap such an agent by an MCTS wrapper with a given number of iterations, then we refer to this as \textbf{\textit{TCL-wrap}\xspace}.
We investigate two variants of Rubik's Cube: 2x2x2 and 3x3x3. We trained all TCL agents by presenting them $M=3\,000\,000$ cubes scrambled with $p$ random twists, where $p$ is chosen uniformly at random from $\{1,\ldots,p_{max}\}$. Here, $p_{max}=13\,[16]$ for 2x2x2 and $p_{max}=9\,[13]$ for 3x3x3, where the first number is for \href{\#hrefHTM}{HTM}\xspace, while the second number in square brackets is for \href{\#hrefQTM}{QTM}\xspace. With these $p_{max}$ cube twists we cover the complete cube space for 2x2x2, where God's number (Sec.~\ref{sec:facts}) is known to be $11\,[14]$. But we cover only a small subset in the 3x3x3 case, where God's number is known to be $20\,[26]$~\citep{rokicki2014diameter}.\footnote{We limit ourselves to $p_{max}=9\,[13]$ in the 3x3x3 \href{\#hrefHTM}{HTM}\xspace [\href{\#hrefQTM}{QTM}\xspace] case, because our network has not enough capacity to learn all states of the 3x3x3 Rubik's cube. Experiments with higher twist numbers during training did not improve the solved-rates.} We train 3 agents for each cube variant \{ 2x2x2, 3x3x3 \} $\times$ \{ HTM, QTM \} to assess the variability of training.
The hyperparameters of the agent for each cube variant were found by manual fine-tuning.
For brevity, we defer the exact explanation and setting of all parameters to Appendix~\ref{app:hyperparams}.
We evaluate the trained agents for each $p$ on 200 scrambled cubes that are created by applying the given number $p$ of random scrambling twists to a solved cube. The agent now tries to solve each scrambled cube. A cube is said to be \textit{unsolved} during evaluation if the agent cannot reach the solved cube in $E_{eval}=50$ steps.\footnote{During training, we use lower maximum episode lengths $E_{train}$ (see Appendix~\ref{app:hyperparams}) than $E_{eval}=50$ in order to reduce computation time (in the beginning, many episodes cannot be solved, and $50$ would waste a lot of computation time). But $E_{train}$ is always at least $p_{max}+3$ in order to ensure that the agent has a fair chance to solve the cube and collect the reward.}
\begin{figure}[tbp]%
\centerline{
\includegraphics[width=0.5\columnwidth]{figures/Rubiks-both-cubes-ptwist-HTM-Mix.pdf}
\includegraphics[width=0.5\columnwidth]{figures/Rubiks-both-cubes-ptwist-QTM-Mix.pdf}
}
\caption{Percentage of solved cubes as a function of scrambling twists $p$ for the trained TD-N-tuple agent wrapped by MCTS wrapper with different numbers of iterations. The red curves are \textit{TCL-base}\xspace without wrapper, the other colors show different forms of \textit{TCL-wrap}\xspace. Twist type is \href{\#hrefHTM}{HTM}\xspace (left) and \href{\#hrefQTM}{QTM}\xspace (right). Each point is the average of 3 independently trained agents.}%
\label{fig:solvedRate-ptwist}%
\end{figure}
\subsection{Cube Solving with MCTS Wrapper, without Symmetries}
\label{sec:resWrapper}
The trained TD-N-tuple agents learn to solve the cubes to some extent, as the red curves \textit{TCL-base}\xspace in Fig.~\ref{fig:solvedRate-ptwist} show, but they are in many cases (i.e. $p>p_{max}/2$) far from being perfect. These are the results from training each agent for 3 million episodes, but the results would not change considerably if 10 million training episodes were used.
\cite{Scheier2022} have shown that the performance of TD-N-tuple agents is largely improved if the trained agents are wrapped during testing, play and evaluation by an MCTS wrapper. This holds for Rubik's cube as well, as Fig.~\ref{fig:solvedRate-ptwist} shows: For the 2x2x2 cube, the non-wrapped agent \textit{TCL-base}\xspace (red curve) is already quite good, but with wrapping it becomes almost perfect. For the 3x3x3 cube, the red curves are not satisfactory: the solved rates are below 20\% for $p=9\, [13]$ in the \href{\#hrefHTM}{HTM}\xspace [\href{\#hrefQTM}{QTM}\xspace] case. But at least MCTS wrapping boosts the solved rates by a factor of 3 [QTM: from 16\% to 48\%] or 4.5 [HTM: from 10\% to 45\%].
All these results are without incorporating symmetries. How symmetries affect the solved-rates will be investigated in Sec.~\ref{sec:resSymmetry}. But before this, we look in the next section at the number of symmetries that effectively exist in a cube state.
\begin{figure}%
\centerline{
\includegraphics[width=\columnwidth]{figures/Rubiks-nsym-states.pdf}
}
\caption{Count of truly different symmetric states for cube states generated by $p$ random scrambling twists. Each point is an average over 500 such states.}%
\label{fig:numSymStates}%
\end{figure}
\subsection{Number of Symmetric States}
\label{sec:numSymmetry}
Not every cube state has 24 truly different symmetric states (24 = number of color symmetries). For example, in the solved cube all color-symmetric states are the same (after normalization), so there is only one truly different symmetric state.
However, we show in this section that for the majority of cube states the number of truly different symmetric states is close to 24. Two states are truly different if they are not the same after the \href{\#hrefNormalize2x2}{normalizing operation}. We generate a cube state by applying $p$ random scrambling twists to the default cube.
Now we apply all 24 color transformations (Sec.~\ref{sec:colortrans}) to it and count the truly different states. The results are shown in Fig.~\ref{fig:numSymStates} for both cube sizes and both twist types. For the 3x3x3 cube, the number of states quickly (for $p>5$) approaches the maximum $N=24$, while for the 2x2x2 cube it is a bit slower: $p>4$ or $p>8$ is needed to surpass $N=20$.
As a consequence, it makes sense to use 16 or even 24 symmetries when training and evaluating cube agents. Especially for scrambled states with higher $p$, the 24 color transformations used to construct symmetric states will usually lead to 24 different states.
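The counting experiment has a simple structure: apply every group element to the state and count the distinct images. The toy analogue below uses a colored ring and its 8 rotations standing in for the 24 color symmetries of the cube; the whole setup is illustrative, not the actual cube code.

```python
# Count how many images of a state under a symmetry group are truly different.
# Toy analogue: the "cube" is a colored ring, the group is its n rotations.

def num_distinct_symmetric(state):
    n = len(state)
    rotations = {tuple(state[i:] + state[:i]) for i in range(n)}
    return len(rotations)

assert num_distinct_symmetric(list("AAAAAAAA")) == 1   # "solved": all images equal
assert num_distinct_symmetric(list("ABABABAB")) == 2   # partially symmetric
assert num_distinct_symmetric(list("AABBBABA")) == 8   # generic: all distinct
```

As in Fig.~\ref{fig:numSymStates}, highly regular ("solved-like") states have few distinct images, while generic scrambled states realize almost the full group size.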
\begin{figure}[tbp]%
\centerline{
\includegraphics[width=0.8\columnwidth]{figures/Rubiks-nsym-learncurves.pdf}
}
\caption{Learning curves for different numbers \texttt{nSym} $=0,8,16,24$ of symmetries. Shown is the solved rate of (3x3x3, QTM) cubes. The solved rate is the average over all twist numbers $p=1,\ldots,13$ with 200 testing cubes for each $p$ and over 3 agents with different random-walk n-tuple sets.
}%
\label{fig:rubiks-learncurves}%
\end{figure}
\subsection{The Benefit of Symmetries}
\label{sec:resSymmetry}
\begin{figure}[tbh]%
\centerline{
\includegraphics[width=0.8\columnwidth]{figures/Rubiks-nsym-ptwist-QTM-ET16-beyond.pdf}
}
\caption{With symmetries: Percentage of solved cubes (3x3x3, QTM) as a function of scrambling twists $p$ for TD-N-tuple agents trained and evaluated with different numbers of symmetries \texttt{nSym} and wrapped by MCTS wrappers with different iterations. The red curves are \textit{TCL-base}\xspace (without wrapper), the other colors show different forms of \textit{TCL-wrap}\xspace. The solved rates are the average over 200 testing cubes for each $p$ and over 3 agents with different random-walk n-tuple sets.
}%
\label{fig:solvedRate-nSym}%
\end{figure}
In order to investigate the benefits of symmetries, we first train a TCL agent with different numbers of symmetries. As described in Sec.~\ref{sec:symmetr}, we select in each step \texttt{nSym} $=0,8,16,24$ symmetric states. Which symmetric states are chosen is selected randomly. Symmetries are used (a) to update the weights for each symmetric state and (b) to build with Eq.~\eqref{eq:Vsym} a smoothed value function which is used to decide about the next action during training. For $0, 8, 16, 24$ symmetries, we train 3 agents each (3x3x3 cube, STICKER2, QTM). The 3 agents differ due to their differently created random-walk n-tuple sets.
Fig.~\ref{fig:rubiks-learncurves} shows the learning curves for different \texttt{nSym} $=0,8,16,24$. It is found that agents with \texttt{nSym} $>0$ learn faster and achieve a higher asymptotic solved rate.
Next, we evaluate each of the trained agents by trying to solve for each $p \in \{1,\ldots,15\}$ (scrambling twists) 200 different scrambled cubes. During evaluation, we use again the same \texttt{nSym} as in training to form a smoothed value function.
We compare in Fig.~\ref{fig:solvedRate-nSym} different symmetry results, both without wrapping (\textit{TCL-base}\xspace, red curves) and with MCTS-wrapped agents using 100 (green) or 800 (blue) iterations. It is clearly visible that MCTS wrapping has a large effect, as was also the case in Fig.~\ref{fig:solvedRate-ptwist}. But in addition to that, the use of symmetries leads for each agent, wrapped or not, to a substantial increase in solved rates (a surplus of 10-20\%). It is remarkable that even for $p$=14 or 15 a solved rate above or near 50\% can be reached\footnote{$p$ is above $p_{max}$=13, the maximum twist number used during training.} by the combination (\texttt{nSym}=16, 800 MCTS iterations).
Surprisingly, it seems that \textit{with wrapping} it is only important whether we use symmetries, not how many, since the difference between \texttt{nSym} $=8,16,24$ is only marginal. For 800 MCTS iterations, the solved rate for \texttt{nSym} $=24$ is in most cases even smaller than that for \texttt{nSym} $=8,16$. This is surprising because one would expect that, also with wrapping, a larger \texttt{nSym} leads to a smoother value function and thus, in theory, to larger solved rates.
Note that this is not a contradiction to Fig.~\ref{fig:rubiks-learncurves}, because the learning curves were obtained \textit{without wrapping} and the red \textit{TCL-base}\xspace curves in Fig.~\ref{fig:solvedRate-nSym} (again without wrapping) show the same positive trend with increasing \texttt{nSym}\footnote{i.e. \texttt{nSym}$=24$ is for every $p$ clearly better than \texttt{nSym}$=16$}. The red curves in Fig.~\ref{fig:solvedRate-nSym} show approximately the same average solved rates as the asymptotic values in Fig.~\ref{fig:rubiks-learncurves}.
\begin{table}[tbp]
\caption{Computation times with symmetries. All numbers are for 3x3x3 cube, STICKER2 and QTM. Training: 3 million self-play episodes, w/o MCTS in the training loop. Testing: 200 scrambled cubes with $p=13$, agents wrapped by MCTS wrapper with \texttt{iter} iterations. }
\label{tab:compTimes}
\centerline{
\begin{tabular}{|c||c||r|r|r|r|r|} \cline{1-7}
\multirow{3}{*}
{\texttt{nSym}} & training & \multicolumn{5}{c|}{ testing } \\
& [hours] & \multicolumn{5}{c|}{ [seconds] } \\ \cline{3-7}
& & \texttt{iter} & 0 & 100 & 400 & 800 \\ \hline\hline
0 & 0.5 & & 0.5 & 48 & 196 & 390 \\ \hline
8 & 5.4 & & 4.0 & 241 & 877 & 1400 \\ \hline
16 & 9.5 & & 7.3 & 464 & 1380 & 2330 \\ \hline
24 & 13.0 & & 8.0 & 550 & 1760 & 3130 \\ \hline
\end{tabular}
}
\end{table}
\subsection{Computational Costs}
\label{sec:compTimes}
Table~\ref{tab:compTimes} shows the computational costs when training and testing with symmetries.
All computations were done on a single CPU Intel i7-9850H @ 2.60GHz.
If we subtract the computational costs for \texttt{nSym}$=0$, computation time increases more or less linearly with \texttt{iter} and roughly linearly with \texttt{nSym}. Computation times for \texttt{nSym}$=24$ are approximately 10x larger than those for \texttt{nSym}$=0$.
Computation times are dependent on the solved rate: If a cube with $p=13$ is solved, the episode takes normally 12-15 steps. If the cube is not solved, the episode needs 50 steps, i.e. a factor of 3-4 more. Thus, the numbers in Table~\ref{tab:compTimes} should be taken only as rough indication of the trend.
Bottom line: Training time through symmetries increases by a factor of $13/0.5 = 26$ (\texttt{nSym}$=24$) and testing time increases through 800 MCTS iterations by a factor of about $3130/8 \approx 400$.
Training with symmetries takes between 5.4h and 13h on a normal CPU, depending on the number of symmetries. This is much less than the 44h on a 32-core server with 3 GPUs that were used by \cite{mcaleer2019solving}. But it also does not reach the same quality as \cite{mcaleer2019solving}.
\section{Related Work}
\label{sec:rel-work}
Ernö Rubik invented Rubik's cube in 1974. Rubik's cube has gained worldwide popularity, with many human-oriented algorithms being developed to solve the cube from arbitrary scrambled start states. By 'human-oriented' we mean algorithms that are simple to memorize for humans. They usually find long, suboptimal solutions. For a long time it was an open question what the minimal number of moves (\href{\#hrefGodsNum}{God's Number}) needed to solve any given cube state is. The early work of \cite{thistle1981} put an upper bound on this number with his 52-move algorithm. This was one of the first works to systematically use group theory as an aid to solve Rubik's cube. Later, several authors gradually reduced the upper bound of 52~\citep{joyner2014man}, until \cite{rokicki2014diameter} could prove in 2014 for the 3x3x3 cube that \href{\#hrefGodsNum}{God's Number} is 20 in \href{\#hrefHTM}{HTM}\xspace and 26 in \href{\#hrefQTM}{QTM}\xspace.
Computer algorithms to solve Rubik's cube rely often on hand-engineered features and group theory. One popular solver for Rubik's cube is the two-phase algorithm of \cite{kociemba2015two}. A variant of A$^*$ heuristic search was used by \cite{korf1991maxN}, along with a pattern database heuristic, to find the shortest possible solutions.
The problem of letting a computer \textit{learn} to solve Rubik's cube turned out to be much harder: \cite{irpan2016exploring} experimented with different neural net baseline architectures (LSTM reportedly gave the best results) and tried to boost them with AdaBoost. However, he achieved solved rates better than 50\% only for scrambling twists $\leq 7$, and the baseline turned out to be better than the boosted variants. \cite{brunetto2017deep} found somewhat better results with a DNN: they could solve cube states with 18 twists at a rate above 50\%. But they did not learn from scratch, because they used an optimal solver based on \cite{kociemba2015two} to generate training examples for the DNN. \cite{smith2016discovering} tried to learn Rubik's cube by genetic programming. However, their learned solver could only reliably solve cubes with up to 5 scrambling twists.
A breakthrough in learning to solve Rubik's cube came with the works of \cite{mcaleer2018solving,mcaleer2019solving} and \cite{agostinelli2019solving}: With Autodidactic Iteration (ADI) and Deep Approximate Value Iteration (\href{\#hrefDAVI}{DAVI}\xspace) they were able \textit{to learn from scratch} to solve Rubik's cube in \href{\#hrefQTM}{QTM}\xspace for arbitrary scrambling twists. Their method has been explained in detail already in Sec.~\ref{sec:McAleer}, so we highlight here only their important findings: \cite{mcaleer2019solving} need to inspect fewer than 4000 cubes with their trained network DeepCube when solving a particular cube, while the optimal solver of \cite{korf1991maxN} inspects 122 billion different nodes, so Korf's method is much slower.
\cite{agostinelli2019solving} extended the work of \cite{mcaleer2019solving} by replacing the MCTS solver with a batch-weighted A$^*$ solver which is found to produce shorter solution paths and have shorter run times. At the same time, \cite{agostinelli2019solving} applied their agent DeepCubeA successfully to other puzzles like LightsOut, Sokoban, and the 15-, 24-, 35- and 48-puzzle\footnote{a set of 15, 24, ... numbers has to be ordered on a $4\times 4$, $5\times 5$, ... square with one empty field}. DeepCubeA could solve all of them.
The deep networks used by \cite{mcaleer2019solving} and \cite{agostinelli2019solving} were trained without human knowledge or supervised input from computerized solvers. The network of \cite{mcaleer2019solving} had over 12 million weights and was trained for 44 hours on a 32-core server with 3 GPUs. The network of \cite{mcaleer2019solving} has seen 8 billion cubes during training. -- Our approach started from scratch as well. It required much less computational effort (e.g. 5.4h training time on a single standard CPU for nSym=8, see Table~\ref{tab:compTimes}).
It can solve the 2x2x2 cube completely, but the 3x3x3 cube only partly (up to 15 scrambling twists). Each trained agent for the 3x3x3 cube has seen 48 million scrambled cubes\footnote{$3\cdot10^6\,\times\,16 =$ training episodes $\times$ episode length $E_{train}$. This is an upper bound: some episodes may have shorter length, but each unsolved episode has length $E_{train}$.} during training.
\section{Summary and Outlook}
\label{sec:summary}
We have presented new work on how to solve Rubik's cube with n-tuple systems, reinforcement learning and an MCTS solver. The main ideas were already presented in \cite{Scheier2022} but only for \href{\#hrefHTM}{HTM}\xspace and up to $p=9$ twists. Here we extended this work to \href{\#hrefQTM}{QTM}\xspace as well and presented all the details
of cube representation and n-tuple learning algorithms necessary to reproduce our Rubik's cube results.
As a new aspect, we added cube symmetries and studied their effect on solution quality. We found that the use of symmetries boosts the solved rates by 10-20\%. Based on this, we could raise the number of scrambling twists for which at least 45\% of the cubes are solved in \href{\#hrefQTM}{QTM}\xspace from $p=13$ without symmetries to $p=15$ with symmetries.
We cannot solve the 3x3x3 cube completely, as \cite{mcaleer2019solving} and \cite{agostinelli2019solving} do, but our solution is much less computationally demanding than their approach.
Further work might be to look into larger or differently structured n-tuple systems, perhaps utilizing the staging principle that \cite{jaskowski2018mastering} used to produce world-record results in the game 2048.
\newpage
\input{bibitems.tex}
\newpage
\section*{Abstract}
Online communities are becoming increasingly important as platforms for large-scale human cooperation. These communities allow users seeking and sharing professional skills to solve problems collaboratively. To investigate how users cooperate to complete a large number of knowledge-producing tasks, we analyze StackExchange, one of the largest question and answer systems in the world. We construct attention networks to model the growth of 110 communities in the StackExchange system and quantify individual answering strategies using the linking dynamics of attention networks. We identify two types of users taking different strategies. One strategy (type A) aims at performing maintenance by doing simple tasks, while the other strategy (type B) aims at investing time in doing challenging tasks. We find that the number of type A users needs to be twice as big as the number of type B users for a sustainable growth of communities.
\section*{Introduction}
Humans are unique in their ability to create public goods in non-repeated situations with non-kin. In larger groups cooperation is more difficult due to the higher temptation to free ride on the voluntary contributions of others \cite{olson2009logic}. Nevertheless humans are able to create public goods with thousands and even millions of unrelated individuals. For example, there are an increasing number of online communities where participants put in time and effort to make voluntary contributions such as street maps \cite{zook2010volunteered}, software \cite{schweik2012internet}, encyclopedic information \cite{jemielniak2014common}, protein folding \cite{khatib2011algorithm}, and language translation \cite{von2013duolingo}.
Online communities are natural experiments that give us an opportunity to test possible mechanisms that explain cooperation in large groups. Controlled online experiments show that if participants can choose group members higher levels of cooperation can be derived \cite{rand2011dynamic}. This suggests that assortment is a sufficient condition to derive cooperation in large groups. However, such experiments have a duration of about an hour in which participants are all simultaneously online and are recruited with the promise of monetary payments. Whether this scales up to large groups over longer periods of time is an open question.
We will demonstrate in this paper that assortment is not sufficient to derive high levels of contributions in online collaboration \cite{henrich2004foundations}. Our analysis shows that at least two different types of strategies of making voluntary contributions are needed to sustain an online community over a longer period of time. One strategy (type A) aims at performing maintenance by doing simple tasks, while the other strategy (type B) aims at investing time in doing challenging tasks. We cannot measure the motivations behind those two strategies, but we hypothesize that the first may be related to reputation in the broader community, and the second to intrinsic motivations and reputation among peers.
For our empirical analysis we investigate the answering records of nearly three million users over a period of six years from 110 online communities. We find that type A users are important in cutting down the median waiting time for answers, while type B users help increase the acceptance rate of answers. The comparison of overall size across the studied communities suggests that a ratio of 3:2 between type A and B users is preferred for bigger communities. We use these empirical findings to build an ``attention'' network model in which we formalize the strategies depicted above as linking dynamics. In an attention network the nodes are questions and the edges are successive answering activities connecting two questions. This model allows us to analyze the effect of individual answering behavior on the growth of communities. Our analysis not only supports the existence of a trade-off between the two types of users, but also predicts what will happen when the ratio in a community deviates from the optimal ratio detected. We predict that a community containing too many type A users lacks high quality answers, making it difficult to attract new questions continuously. On the contrary, a community composed of too many type B users has many high quality questions, but it will attract more new questions than it can handle. In sum, a balance between the two types of users is necessary for the sustainable growth of communities. At the end of the paper, we select three communities to illustrate the predictions of our attention network model, including ``math.stackexchange.com'' (which has an optimal ratio), ``astronomy.stackexchange.com'' (which contains too many type A users), and ``gamedev.stackexchange.com'' (which contains too many type B users).
\section*{Mixing Strategies and the Sustained Growth of Communities}
Figure \ref{strategies} depicts the profiles of the two types of users. Type A users prefer the easier, newer questions and have a higher answer acceptance rate than type B users, who tend to answer the more difficult, older questions, across all expertise levels. The ANOVA tests on all four variables between the two groups are significant (Figure \ref{strategies} B $\sim$ C).
It is natural to ask whether the ratio between type A and B users has an effect on the overall performance of communities. Two important indicators concerning the performance of Q\&A communities are the median waiting time for answers and the overall acceptance rate of answers \cite{mamykina2011design,bosu2013building}.
The waiting time is defined as the time interval between a question being posted and a satisfying answer being accepted. The median is used instead of the mean because the distribution of waiting times contains a few extremely large values that would bias the mean \cite{mamykina2011design}. The accepted answer rate is the fraction of questions with an accepted answer among the total population of questions. A good community is expected to have a high acceptance rate and a short median waiting time \cite{mamykina2011design,bosu2013building}.
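As a minimal sketch, these two indicators can be computed from per-question records as follows (the field names `asked_at` and `accepted_at` are illustrative, not the actual Stack Exchange schema):

```python
from statistics import median

def community_indicators(questions):
    """Median waiting time and accepted answer rate of a community.

    `questions` is a list of dicts with hypothetical fields:
      'asked_at'    -- time the question was posted (hours)
      'accepted_at' -- time an answer was accepted, or None
    """
    waits = [q['accepted_at'] - q['asked_at']
             for q in questions if q['accepted_at'] is not None]
    # Median, not mean: a few extreme waits would dominate the mean.
    median_wait = median(waits) if waits else float('inf')
    acceptance_rate = len(waits) / len(questions)
    return median_wait, acceptance_rate
```

A good community then corresponds to a small `median_wait` together with a large `acceptance_rate`.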
In Figure \ref{sus} we calculate the fraction of type A users $a$ ($a\in [0,1]$) in each of the 110 communities and plot the two discussed indicators against $a$. It turns out that type A users help shorten the median waiting time, while type B users raise the accepted answer rate. Therefore, a balance between these two types of users should be carefully chosen in order to optimize community performance. In Figure \ref{sus}(C) we find that the maximum community size (``stackoverflow.com'') is achieved when $a$ is approximately $0.63$, i.e., the ratio of type A users to type B users is 3:2. In Figure \ref{sus}(A) we plot the growth curves of all communities and color them based on their deviation from the optimal $a$. This figure shows clearly how either too many type A users or too many type B users leads to an unsustainable growth rate.
\section*{From Answering Strategies to Linking Dynamics}
We have shown a co-occurrence between an optimal ratio and the maximum community size. However, why a deviation from the optimal ratio is related to small communities is still unclear. Therefore, we use a network model to formalize our assumptions of individual answering strategies and to analyze the consequences of different ratios.
We define attention networks to represent Q\&A communities, in which nodes are questions and edges are the sequential answering activities of users. In attention networks, an answering strategy based on the number of existing answers to questions can be interpreted as a degree-based rule of link increases. As type A users prefer easy questions (low-degree nodes) and type B users favor difficult questions (high-degree nodes), when these two types of users respond to a new question, they carry links from very different nodes. Therefore, we can simplify the model by assuming that type A strategy corresponds to the reversed process of ``preferential attachment'' \cite{sevim2006effects}, in which the attractiveness of a node decreases with its degree, and type B strategy corresponds to ``preferential attachment'' \cite{barabasi1999emergence}, in which the rich get richer. The reversed ``preferential attachment'' process is usually used to describe resource-based competition between nodes in flow networks such as food webs \cite{dunne2002food}, power grids \cite{amaral2000classes}, and the airport network \cite{guimera2004modeling}. For example, in food webs an outbound edge transports resources from a ``prey'' node to a ``predator'' node. If several predators feed on the same prey, the supplied resources have to be split and shared, thus decreasing the attractiveness of the prey node. We argue that this effect also exists in attention networks, in which questions compete for the limited attention of users.
We use $f$ and $1-f$ ($f\in$[0,1]) to represent the probability of observing type A and B strategies, respectively (note that $f$ is different from the empirical value of $a$ mentioned in the last section, as $a$ is not the fraction of activities but the fraction of users), and quantify the probability $p(k)$ of a new question being connected to a pre-existing similar question of degree $k$ as
\begin{equation}
\label{eq.linkpro}
p(k) = f\frac{\frac{1}{k}}{\sum \frac{1}{k}} + (1-f) \frac{k}{\sum k}.
\end{equation}
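Eq. \ref{eq.linkpro} translates directly into code. The following sketch computes the attachment probability for every existing question from the current degree sequence (the list `degrees` is an assumed input holding the number of answers of each question):

```python
def link_probabilities(degrees, f):
    """Attachment probabilities p(k) of Eq. (1) for each existing node.

    The term weighted by f is the reversed rule (low-degree nodes,
    i.e. easy questions, are favored -- type A behavior); the term
    weighted by 1-f is preferential attachment (high-degree nodes,
    i.e. difficult questions, are favored -- type B behavior).
    """
    inv_total = sum(1.0 / k for k in degrees)
    deg_total = sum(degrees)
    return [f * (1.0 / k) / inv_total + (1 - f) * k / deg_total
            for k in degrees]
```

For $f=0$ the probabilities are proportional to $k$ (pure preferential attachment); for $f=1$ they are proportional to $1/k$ (the reversed process).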
As introduced, in two extreme cases this model degenerates to the ``preferential attachment'' model ($f=0$) and the ``reversed preferential attachment'' model ($f=1$), respectively. Using the master equation technique \cite{dorogovtsev2000structure} we derive that the tail of the degree distribution will converge to
\begin{equation}
\label{eq.degreedist}
p_k \sim k^{-\alpha} = k ^{-\frac{3-f}{1-f}},
\end{equation}
in which the power exponent $\alpha$ has a minimum value of $3$ and always increases with $f$ (see SI for the details). We find that for a majority of communities the empirical value of $\alpha$ lies in the range $[3,5]$ (Figure \ref{scalingsB}M), supporting our derivation. As a larger power-law exponent implies higher equality in resource distribution \cite{newman2005power}, our model suggests that the type A strategy equalizes the allocation of attention (edges) among nodes and increases the chance of a new question being answered.
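The effect of $f$ can be illustrated with a toy growth simulation under the linking rule of Eq. \ref{eq.linkpro} (a sketch, not the analysis pipeline used in this paper): with $f=0$ the rich-get-richer rule grows hubs, while $f=1$ spreads attention equally and suppresses them, which corresponds to the steeper power-law tail.

```python
import random

def grow_network(n_nodes, f, seed=0):
    """Grow a toy attention network node by node.

    Each new node (question) attaches a single edge to an existing
    node drawn with the mixed probability p(k) of Eq. (1).
    Returns the final degree sequence.
    """
    rng = random.Random(seed)
    degrees = [1, 1]  # seed network: two nodes joined by one edge
    for _ in range(n_nodes - 2):
        inv_total = sum(1.0 / k for k in degrees)
        deg_total = sum(degrees)
        weights = [f * (1.0 / k) / inv_total + (1 - f) * k / deg_total
                   for k in degrees]
        target = rng.choices(range(len(degrees)), weights=weights)[0]
        degrees[target] += 1
        degrees.append(1)  # the new node arrives with degree 1
    return degrees
```

Comparing `max(grow_network(n, 0.0))` with `max(grow_network(n, 1.0))` shows the hub suppression directly: the largest degree under the reversed rule stays close to the average.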
Besides the degree distribution, the discussed linking rules also explain several scaling relationships observed in the growth dynamics of attention networks, as presented in Figure \ref{scalingsB} and mentioned in \cite{leskovec2005graphs,wu2011acceleratingb}. Users are more likely to post questions when they search the Web and find that a similar question has obtained many answers but their concerns have not been fully addressed. As a consequence, a new question is more likely to be added to the network if the existing similar questions have more answers. To include this process in our model, we consider node replication as the main driving force underlying network growth and allow high-degree nodes to generate more new nodes. Considering the node-matching probability $p(k)$ given by Eq. \ref{eq.linkpro}, we can calculate that the expected attractiveness of a single node is
\begin{equation}
\label{eq.expectdegree}
E(k) = \sum_{k=1}^{k_{max}}kp(k)\sim N^{\frac{1-f}{2}}.
\end{equation}
Therefore, the expected number of new nodes generated by an existing node is $E(k)N^g$, in which we use $N^g$ to model the effect of network size. By summing $E(k)N^g$ over all nodes in the network we obtain the total number of new nodes as $\Delta N = E(k)N^{g+1}$.
Substituting this condition into Eq. \ref{eq.expectdegree} we derive the scaling relationship between the number of new and old nodes
\begin{equation}
\label{eq.newnodes}
\Delta N \sim N^\eta=N^{\frac{3-f}{2}+g}.
\end{equation}
Note that if an old node generates many new nodes, then there will be a stronger competition between these new nodes for edges. As a result, the cost of linking to an existing node is proportional to its degree \cite{sevim2006effects}. Meanwhile, it is reasonable to assume that new questions cannot obtain an infinite number of answers but have a limited ``quota'' that approximates a constant $C$. Putting these two conditions together, we derive the expected number of links obtained by a new node as $\Delta m = CN^h/E(k)$, in which $N^h$ is the effect of network size.
Using the conclusion of Eq. \ref{eq.expectdegree} we have
\begin{equation}
\label{eq.avelinks}
\Delta m \sim N^\delta = N^{\frac{f-1}{2}+h}.
\end{equation}
From Eq. \ref{eq.newnodes} and Eq. \ref{eq.avelinks} we can derive the scaling relationship between the number of new edges and new nodes:
\begin{equation}
\label{eq.newlinks}
\Delta M = \Delta m \Delta N \sim \Delta N^\gamma = \Delta N^{\frac{\delta}{\eta}+1} = \Delta N^{\frac{2(h+g+1)}{3-f+2g}}
\end{equation}
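The bookkeeping of exponents in Eqs. \ref{eq.newnodes}--\ref{eq.newlinks} can be verified numerically; the following sketch checks that $\gamma = \delta/\eta + 1$ reproduces the closed form $2(h+g+1)/(3-f+2g)$ for arbitrary parameter choices:

```python
def scaling_exponents(f, g, h):
    """Exponents of the growth relations, following Eqs. (4)-(6)."""
    eta = (3 - f) / 2 + g      # new nodes:      Delta N ~ N^eta
    delta = (f - 1) / 2 + h    # links per node: Delta m ~ N^delta
    gamma = delta / eta + 1    # new edges:      Delta M ~ Delta N^gamma
    return eta, delta, gamma
```

Note that `eta` decreases with $f$ (more type B activity attracts more new questions) while `delta` increases with $f$ (more type A activity maintains the answers per question), which is exactly the trade-off discussed below.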
To summarize, the above analysis explains why a mixture of different strategies is crucial to the sustainable growth of communities. On one hand, Eq. \ref{eq.avelinks} suggests that a community should have more type A users to maintain the number of answers per question (larger $\delta$); on the other hand, Eq. \ref{eq.newnodes} suggests that a community should have more type B users to attract new questions (larger $\eta$). As a balance, Eq. \ref{eq.newlinks} predicts that an optimal fraction of type A users, $f = 3+2g$, is preferred in order to maximize the value of $\gamma$, i.e., to match the growth of answers with the growth of questions. We argue that $f$, as a fraction of behaviors, can be viewed as the product of two variables: the fraction of users $a$ and the answering frequency distribution $w$. As $w$ is always positive, $f$ changes in the same direction as $a$. Therefore the derived optimal value of $f$ implies that there is also an optimal value of $a$, which is consistent with our empirical observation.
\section*{Examples of Successful and Less Successful Communities}
Three communities are selected to compare the consequences of different ratios between type A and B users (Figure \ref{scalingsB}), including a community for math questions (math.stackexchange.com, or MATH in short), a community for questions about astronomy (astronomy.stackexchange.com, or ASTR), and a community for questions about game development (gamedev.stackexchange.com, or GAME). As shown in the first column of Figure \ref{scalingsB}, ASTR and GAME are not as successful as MATH in maintaining a sustained growth curve, and this can be explained by our model.
The fraction of type A users in MATH is approximately $0.63$, which is equal to the optimal value. In contrast, ASTR has more type A users ($a = 0.67$). According to our model, questions in ASTR will be responded to efficiently, but there will be a slow growth of new questions. This prediction is confirmed by Figure \ref{scalingsB}K, which shows that the average number of answers to new questions is increasing, and Figure \ref{scalingsB}J, which shows that the increase of new questions slows down as time goes on. Meanwhile, GAME has a few more type B users ($a = 0.62$) than the optimum. According to our model, in this community new questions will increase so fast that they cannot be answered quickly. The fast increase of new questions is evidenced in Figure \ref{scalingsB}F, and the shrinking budget of answers per question is observed in Figure \ref{scalingsB}G. It is interesting to note that ASTR is one of the most traditional scientific areas, while GAME is a new, fast-developing area due to the widespread use of smartphones. We argue that neither simply being classic nor being hot naturally leads to sustained growth; instead, the sustained growth of a community comes from a careful balance between contributors of diverse preferences.
\section*{Conclusions and Discussions}
We look at online communities as natural experiments for collective action problems. Our results imply that assortment is not sufficient to derive high levels of contributions in massive collaboration. Instead, strategic diversity seems to be the key for sustainable online communities. In the Stack Exchange datasets, a mixing ratio of $3:2$ between two types of users is found to maximize the size of communities. Type A users have a tendency to answer easier, newer questions and type B users prefer to answer more difficult, older questions. We propose an attention network model to formalize the two answering strategies of users and to explain the existence of an optimal ratio. Our conclusion is that type A users contribute to the number of answers and type B users contribute to the quality of answers; thus both are crucial to the development of communities.
Our work generalizes the models of Barabasi et al. \cite{barabasi1999emergence} and Sevim et al. \cite{sevim2006effects} to study large-scale cooperation in online communities. The present analysis of attention networks can also be applied to model a variety of other online collective behaviors such as thread browsing \cite{wu2013decentralized}, photo tagging \cite{wu2011acceleratinga, cattuto2007semiotic, wu2013metabolism}, and news sharing \cite{wu2007novelty}. The current study also has limitations, which point to future directions of research. For example, to obtain a simple mathematical model we simplify the rich behaviors of users and only consider two extreme strategies. Meanwhile, in attention networks we naively assume that the ratio between the two types of users is constant throughout the evolution of a community. In future studies we may consider a ratio that changes over time.
\section*{Materials and Methods}
\subsection*{Data sources}
Stack Exchange is a network of question and answer communities covering diverse topics
in many different fields. We downloaded its database dump on January, 2014 from
\url{https://archive.org/details/stackexchange},
which contains the log files of 110 communities.
The smallest community \url{italian.stackexchange.com} was created in November, 2013 and
has 374 users, 194 questions, and 387 answers in our data set. The largest site \url{stackoverflow.com} (SO) was
created in July, 2008 and has 2,728,224 users, 6,474,687 questions and 11,540,788 answers in our data set.
\subsection*{Measuring Question Difficulty and User Expertise}
We use the number of answers as a proxy for the ``perceived difficulty'' of questions \cite{hanrahan2012modeling} in order to divide users according to their difficulty preferences. We first count the number of existing answers $q_{ij}$ to a question $j$ when a user $i$ responds to it. Then we average this number over all $m$ questions answered by a user to derive his/her average level of difficulty preference $\frac{1}{m}\sum_{j=1}^{m} q_{ij}$. After that, we use the grand mean of difficulty preferences in a community containing $n$ users, which is $\frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m} q_{ij}$, as the threshold to separate type A users (whose preference is smaller than the threshold) from type B users (whose preference is greater than the threshold) (Figure \ref{strategies} A). To validate the difference between the two types of users, we compare two further variables, the average age of answered questions and the acceptance rate of submitted answers (Figure \ref{strategies} C $\sim$ D).
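The two-step classification described above can be sketched as follows (the mapping from user ids to the observed $q_{ij}$ values is an assumed input; here the grand mean is taken over the per-user preferences):

```python
def classify_users(answer_log):
    """Split users into type A and type B by difficulty preference.

    `answer_log` maps each user id to the list of q_ij values, i.e.
    the number of answers a question already had when that user
    answered it.  Users whose mean preference falls below the grand
    mean are type A (easier questions); the rest are type B.
    """
    prefs = {u: sum(qs) / len(qs) for u, qs in answer_log.items()}
    threshold = sum(prefs.values()) / len(prefs)
    type_a = {u for u, p in prefs.items() if p < threshold}
    type_b = set(prefs) - type_a
    return type_a, type_b, threshold
```

The fraction $a$ used throughout the paper is then simply `len(type_a) / (len(type_a) + len(type_b))`.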
The TrueSkill algorithm \cite{herbrich2006trueskill,liu2013question} was applied to validate the estimation on users' difficulty preferences. In the SO community we obtain the TrueSkill scores of 912,082 users and 3,771,021 questions (please see SI for the details), which represent the expertise levels of users and the ``real difficulty" of questions, respectively \cite{liu2013question}. We find that type B users are always answering more difficult questions than type A users at all expertise levels (Figure \ref{strategies} B). In our opinion this result validates our division of the two groups. Furthermore, we find that the
TrueSkill scores of users are positively correlated with their reputation points in the log files (Pearson coefficient $\rho$ = 0.29, p-value $<$ 0.001), justifying the credibility of TrueSkill scores.
\subsection*{Constructing Attention Networks}
The answering strategies of individual users can be understood from a network perspective, in which two questions are connected if they are answered sequentially by the same users (see SI for the details). From this perspective, Q\&A communities are growing networks with increasing nodes (questions) and links (answers). We call them ``attention networks" because they show the transportation of collective attention in solving problems. Attention networks translate answering strategies into linking dynamics; hence provide a quantitative, predictive model for us to explore the collective answering behavior of users.
From empirical data we construct a growing attention network for each of the 110 communities. The network properties we are interested in include the cumulative number of nodes ($N$) and edges ($M$), the daily increments of nodes ($\Delta N$) and edges ($\Delta M$), and the number of links per node ($m = M/N$) and its daily increments ($\Delta m = \Delta M/ \Delta N$).
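A minimal sketch of this construction (the record layout `(user, question, day)` is illustrative, and the log is assumed to be sorted by answering time):

```python
from collections import defaultdict

def attention_network(answers):
    """Build attention-network edges and daily increments.

    `answers` is a time-ordered list of (user, question, day) tuples;
    two questions are linked when the same user answers them in
    sequence.  Returns the edge list plus the daily increments of
    nodes (dN) and edges (dM).
    """
    last_question = {}      # user -> question answered previously
    seen = set()            # questions already in the network
    dN, dM = defaultdict(int), defaultdict(int)
    edges = []
    for user, question, day in answers:
        if question not in seen:
            seen.add(question)
            dN[day] += 1
        prev = last_question.get(user)
        if prev is not None and prev != question:
            edges.append((prev, question))
            dM[day] += 1
        last_question[user] = question
    return edges, dict(dN), dict(dM)
```

The cumulative quantities $N$ and $M$ follow by summing the increments, and $\Delta m = \Delta M / \Delta N$ per day.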
\section*{Author contributions}
L. W. and M.A.J. designed research; J.B. contributed new analysis ideas; L.W. analyzed data; L.W., J.B., and M.A.J. wrote the paper.
\section*{Acknowledgments}
We acknowledge financial support for this work from the National Science Foundation, grant number 1210856.
The History of Ancient Greece
Ryan Stitt
The History of Ancient Greece Podcast is a deep-dive into one of the most influential and fundamental civilizations in world history. Hosted by philhellene Ryan Stitt, THOAG spans over two millennia. From the Bronze Age to the Archaic Period, from Classical Greece to the Hellenistic kingdoms, and finally to the Roman conquest, this podcast will tell the history of a fundamental civilization by bringing to life the fascinating stories of all the ancient sources and scholarly interpretations of the archaeological evidence. And we won't just detail their military and political history, but their society, how the Greeks lived day-to-day, as well as their culture—their art, architecture, philosophy, literature, religion, science, and all the other incredible aspects of the Greek achievement, while situating the Greeks within a multicultural Mediterranean whose peoples influenced and were influenced by one another.
108 The Thirty Tyrants
In this episode, we discuss the aftermath of the Peloponnesian War at Athens, including the reign of the Thirty Tyrants, the Athenian civil war, and the restoration of the democracy Show Notes: http://www.thehistoryofancientgreece.com/2021/10/108-thirty-tyrants.html
107 Sparta Triumphant
In this episode, we discuss the final two years of the Peloponnesian War (405-404 BC), including the comedic play "The Frogs" by Aristophanes; Lysander's elevation to Persian satrap, his rebuilding of the Peloponnesian fleet, his tactical moves in the Hellespont, and his crushing victory over the Athenians at Aegospotami; the besiegement and blockade of Athens; and the Athenians' surrender and the terms of the peace treaty Show Notes: http://www.thehistoryofancientgreece.com/2021/04/107-sparta-triumphant.html
106 Frustrations and Poor Decisions (Part II)
In this episode, we discuss the years 409-406 BC of the Peloponnesian War, including the Athenians' achieving control in the Hellespont and Bosporus, Alcibiades' triumphant return to Athens, the ascension of Lysander and his bromance with Cyrus, the Athenian defeat at Notium and the disgrace of Alcibiades, Kallikratidas' victory over Konon at Mytilene, and the subsequent Battle of Arginusae with its disastrous consequences for the Athenians. Show Notes: http://www.thehistoryofancientgreece.com/2020/10/106-frustrations-and-poor-decisions.html
***Special Guest Episode on Classics and White Supremacy w/Curtis Dozier***
In today's special guest episode, I am joined by Dr Curtis Dozier, Assistant Professor of Greek and Roman Studies at Vassar College. He is the producer and host of The Mirror of Antiquity, a podcast featuring classical scholars discussing the intersections of their research, the contemporary world, and their own lives. More importantly to our discussion, He is also the director of Pharos: Doing Justice to the Classics, a website devoted to documenting and responding to appropriations of ancient Greece and Rome by hate groups online. We discuss some of the reasons how, as well as why, White Supremacists have taken to coopting Classical imagery to support their twisted world views. Show Notes: http://www.thehistoryofancientgreece.com/2020/10/special-guest-episode-on-classics-and.html
***Special Guest Episode on Race, Antiquity, and Its Legacy w/Denise McCoskey***
In today's special guest episode, I am joined by Dr Denise Eileen McCoskey, Professor of Classics and affiliate of Black World Studies at Miami (OH) University. She has written extensively on the politics of race and gender in antiquity and is currently at work on a project examining the role of eugenics in early twentieth-century classical scholarship. In 2012, she published her book Race: Antiquity & Its Legacy, which will be the topic of today's conversation. It accounts for the various ways in which ancient cultures thought about race (including race as social practice and racial representations). We also dig into the "Black Athena" controversy a bit and why the field of Classics handled it so poorly. Show Notes: http://www.thehistoryofancientgreece.com/2020/09/special-guest-episode-on-race-antiquity.html
105 Carthage Enters the War
In this episode, we discuss the Second Greco-Punic War (410-406 BC), as hostilities in Sicily draw in Carthage and the Syracusan fleet away from the eastern Aegean and the Hellespont, including Hannibal Mago's first invasion of Sicily and the destruction of Selinus and Himera, the rebellion of Hermocrates, the rise of Dionysius as tyrant of Syracuse, Hannibal Mago's second invasion of Sicily and his destruction of Akragas, and the ceasefire which would see Carthage and Syracuse as the two strongest powers on Sicily Show Notes: http://www.thehistoryofancientgreece.com/2020/08/105-carthage-enters-war.html Introduction by Alex Goodman of Antiquity in Question Website: https://anchor.fm/alexandergoodman Facebook: https://www.facebook.com/AIQpodcast/ Twitter: https://twitter.com/AIQpodcast
104 The Democratic Empire Strikes Back
In this episode, we discuss the years 411-410 BC of the Peloponnesian War, including the shifting of the naval war to the Hellespont, the vigor that the Athenian democracy showed in carrying on the war effort against Sparta and Pharnabazos with victories at Cynossema and Cyzicus, the re-establishment of the radical democracy at Athens, and the transition from the historical account of Thucydides into that of Xenophon's Hellenica. Show Notes: http://www.thehistoryofancientgreece.com/2020/08/104-athenian-empire-strikes-back.html Intro by Megan Lewis of Digital Hammurabi Website: https://www.digitalhammurabi.com YouTube: https://www.youtube.com/channel/UCBQo27DbqeB-xG17-kekrdQ Facebook: https://www.facebook.com/digitalhammurabi/ Twitter: https://twitter.com/digi_hammurabi
***Special Guest Episode on Greek Naval Warfare w/Marc DeSantis***
In this special guest episode, Marc DeSantis and I discuss his most recent book, "A Naval History of the Peloponnesian War: Ships, Men and Money in the War at Sea, 431-404 BC". In particular, we talk about the ship designs, naval combat, the financial burden of navies, and the overall war strategies of both sides. Show Notes: http://www.thehistoryofancientgreece.com/2020/07/special-guest-episode-on-greek-naval.html
103 An Oligarchic Coup
In this episode, we discuss the years 411-410 BC of the Peloponnesian War, including the third and final treaty between the Spartans and Tissaphernes; the comedic plays "Lysistrata" and "Thesmophoriazusai" by Aristophanes; how the Athenians succumbed to civil war for the first time in nearly a century and saw an overthrow of their democracy by what is known as the 400; the vicissitudes of this new oligarchic government; and how factionalism between extremists and moderates led to its downfall Show Notes: http://www.thehistoryofancientgreece.com/2020/06/103-oligarchic-coup.html Intro by Anya Leonard of Classical Wisdom Speaks Website: https://classicalwisdom.com/podcast-classical-wisdom-speaks/ Twitter: https://twitter.com/ClassicalWisdom Facebook: https://www.facebook.com/ClassicalWisdomWeekly/
102 Livin' on a (Persian) Prayer
In this episode, we discuss the years 413-412 BC of the Peloponnesian War, including the Athenian response at home to the Sicilian Disaster, the Spartan and Theban devastation of Attic agriculture and commerce from Decelea, the dissolution of the "friendship" between Athens and Persia, the Spartans' building up of a navy and encouraging of revolts of Athenian subject-allies, the shifting of the war to the eastern Aegean, and a series of treaties are made between Sparta and the Persian satrap Tissaphernes Show Notes: http://www.thehistoryofancientgreece.com/2020/05/102-livin-on-persian-prayer.html Intro by Katie Nelson and Olivia Meikle of What's Her Name Podcast Website: https://www.whatshernamepodcast.com Facebook: https://www.facebook.com/whatshernamepodcast/ Twitter: https://twitter.com/WhatsHerNamePC
***Special Guest Episode on 'Ovid and the Art of Love' w/Esme von Hoffman***
In today's special guest episode, I am joined by director and screenwriter Esme von Hoffman (Festival of Cinema NYC 2019 Winner for Best Director) for her film, Ovid and the Art of Love. Esme and I discuss her background with Classics and Roman history, what drew her to make a film about the life of Ovid, her artistic vision in adapting the film to a modern audience, and some of the decisions that she made in writing its script. Show Notes: http://www.thehistoryofancientgreece.com/2020/05/special-guest-episode-on-ovid-and-art.html ***The film is available to stream on all major platforms on May 19th 2020*** Website: https://www.ovidandtheartoflove.com/ Facebook: https://www.facebook.com/ovidandtheartoflove Twitter: https://twitter.com/OvidLove
***Special Guest Episode on Greek Land Warfare w/Owen Rees***
In this special guest episode, Dr. Owen Rees and I discuss Ancient Greek land warfare in general with lengthy discussions on the definition of a hoplite, its socio-political importance, and the problems surrounding its chronology and historiographic tradition; the problems with the traditional reconstructive models of ancient Greek battles; the important role of cavalry and light infantry, particularly in the Peloponnesian War onwards; and why the concept of an "honorable western way of war" which seeks its origins in ancient Greek warfare is bogus and hyped up in modern ideology. There are also lots of digressions on logistics, slaves, baggage trains, training, the Spartan mirage, the brutal experience of war, the fear that it instilled, the war dead, and the transition of soldiers from civilian life to the battlefield and back again, including all the psychological and sociological problems that arise from this. Show Notes: http://www.thehistoryofancientgreece.com/2020/04/special-guest-episode-on-greek-land.html Dr Owen Rees Website: http://owenrees.co.uk Twitter: https://twitter.com/reeshistory
101 Disaster in Sicily
In this episode, we discuss the year 413 BC of the Peloponnesian War, including the rise of Archelaus to the Macedonian throne, the Spartan establishment of Decelea, the defeats by the Athenian army and navy at Syracuse, and the retreat and ultimate surrender of the Athenians, which brought the Sicilian Expedition to an end Show Notes: http://www.thehistoryofancientgreece.com/2020/03/101-disaster-in-sicily.html Intro by Seth Michels of the History Uncensored Podcast Website: http://historyuncensoredpod.com Facebook: https://www.facebook.com/historyuncensoredpod/ Twitter: https://twitter.com/Seth4Nerds
100 A Sicilian Stalemate
In this episode, we discuss the years 415-414 BC of the Peloponnesian War, including the Athenian attempt at blockading Syracuse, the death of Lamachos, the tactical blunders of Nikias, the arrival of Gylippus, and the "Birds" of Aristophanes Show Notes: http://www.thehistoryofancientgreece.com/2020/02/100-sicilian-stalemate.html Intro by Neil Eckart of the War and Conquest Podcast Website: https://www.warandconquest.com Facebook: https://www.facebook.com/warandconquestpcast/ Twitter: https://twitter.com/warandconquest1
099 Frustrations and Poor Decisions
In this episode, we discuss the years 417-415 BC of the Peloponnesian War, including the ostracism of Hyperbolus, the rivalry of Nikias and Alcibiades, the siege of Melos, the lead up and first year of the Sicilian Expedition, and the prosecutions for the Hermai and Eleusinian Mysteries scandals Show Notes: http://www.thehistoryofancientgreece.com/2020/01/099-frustrations-and-poor-decisions.html Intro by Kate Armstrong of The Exploress Podcast Website: https://www.theexploresspodcast.com Facebook: https://www.facebook.com/theexploresspodcast/ Twitter: https://twitter.com/theexploresspod
098 The Peace Unravels
In this episode, we discuss the years 421-418 BC of the Peloponnesian War, including the breakdowns of the Peace of Nikias; the rise of Alcibiades to prominence at Athens; the differences that arose between Sparta and some of their dissident allies; the diplomatic maneuverings that resulted in the quadruple alliance between Athens, Argos, Mantinea, and Elis; and the decisive Spartan victory at the Battle of Mantinea Show Notes: http://www.thehistoryofancientgreece.com/2019/12/098-peace-unravels.html Intro by Jacob Collier of The Podcast on Germany Website: https://www.podcastongermany.com Facebook: https://www.facebook.com/PodcastonGermany/ Twitter: https://twitter.com/on_germany
***Special Guest Episode on Mesopotamian Medicine w/Moudhy Al-Rashid***
In this special guest episode, Dr. Moudhy Al-Rashid and I discuss ancient Mesopotamian medicine, in general, and her current research on the use of metaphor in descriptions of mental distress in cuneiform medical texts Show Notes: http://www.thehistoryofancientgreece.com/2019/11/special-guest-episode-on-mesopotamian.html Dr Moudhy Al-Rashid Post-Doc at Wolfson College, University of Oxford Twitter: https://twitter.com/Moudhy
***Special Guest Episode on Classical Monsters and Popular Culture w/Liz Gloyn***
In this special guest episode, Dr. Liz Gloyn and I discuss her forthcoming book, Tracking Classical Monsters in Popular Culture (Bloomsbury Publishing, 2019). This work is the first in-depth study on classical reception and monsters in Anglo-American popular culture from the 1950s to the present day. Throughout the book, Dr. Gloyn reveals the trends behind how we have used the monsters, and develops a broad theory of the ancient monster and its life after antiquity, investigating its relation to gender, genre and space to explore what it is that keeps drawing us back to these mythical beasts and why they have remained such a powerful presence in our shared cultural imagination. Specifically, her book takes us through a comprehensive tour of monsters on film and television, from the much-loved creations of Ray Harryhausen in Clash of the Titans to the monster of the week in Hercules: The Legendary Journeys, before examining in detail the post-classical afterlives of the two most popular monsters, the Medusa and the Minotaur. Show Notes: http://www.thehistoryofancientgreece.com/2019/10/special-guest-episode-on-classical.html Dr Liz Gloyn Senior Lecturer at Royal Holloway, University of London Website: https://lizgloyn.wordpress.com/ Twitter: https://twitter.com/lizgloyn
097 The Road to Peace
In this episode, we discuss the years 423-421 BC of the Peloponnesian War, including the death of Artaxerxes and the succession struggle that ends with Darius II on the Persian throne; the continuation of Brasidas' Thracian and Macedonian campaign; the 'Wasps' and 'Peace' by Aristophanes; and the deaths of Brasidas and Kleon during the second battle of Amphipolis, culminating in the "Peace of Nikias" and the end of the Archidamian War Show Notes: http://www.thehistoryofancientgreece.com/2019/09/097-road-to-peace.html Intro by Samuel Hume of Pax Brittanica Website: https://paxbritannica.info Facebook: https://www.facebook.com/PodBritannica/ Twitter: https://twitter.com/samuelhume10 and https://twitter.com/BritannicaPax
096 Athens on the Offensive
In this episode, we discuss the years 425 and 424 BC of the Peloponnesian War, including the conclusion of the First Sicilian Expedition and the Congress of Gela, the Athenian seizure of Kythera, the Battles of Megara and Delium, and the beginning of Brasidas' Thracian campaign Show Notes: http://www.thehistoryofancientgreece.com/2019/09/096-athens-on-offensive.html Intro by SandRhoman YouTube: https://www.youtube.com/channel/UC7pr_dQxm2Ns2KlzRSx5FZA Facebook: https://www.facebook.com/SandRhoman/ Twitter: https://twitter.com/Sandrhoman
A new guidebook is the first mainstream publication dedicated to the attractions of South Sudan, the world's newest country.
The authors of the 'Bradt guide to South Sudan', Sophie and Max Lovell-Hoare, said the book was "tough" to write.
However, they also said it was "an incredible, beautiful and beguiling place".
Having covered Sudan previously for Bradt, they offered to research the guide after the country was formed in 2011.
In a referendum, almost 99 per cent of voters approved South Sudan's independence, and it officially became a separate country on July 9, 2011 after several years of autonomy.
The Sudanese civil war still casts a long shadow over the country, however; it has one of the world's lowest life expectancies, and the political outlook remains uncertain.
The Foreign Office advises against all travel to within 40km of the country's northern border with Sudan, and advises against all but essential travel to the Jonglei State, which is where the Boma National Park – cited as one of the country's highlights (see below) – is located.
Adrian Phillips, Bradt's publishing director, admits in the foreword to the guidebook that South Sudan "is unlikely to become a tourist hotspot in the near future," but said the authors had "made the path just that little bit smoother" for visitors.
Mount Kinyeti, the country's highest peak along the southern border with Uganda.
Wau Cathedral, one of the largest churches in Sudan with some attractive stone carving as well as a stained-glass window.
Bradt is offering Telegraph readers 40 per cent discount on the South Sudan guide (including free postage). Order through the website (www.bradtguides.com) and quote the discount code TELEGRAPH40.
This is my resume. It may serve to others as inspiration or just to know a little bit more about me.
See it online [here](https://anderrv.github.io/CV/ "CV").
\section{Introduction and statement of main results}
A famous result of C. Fefferman states that $BMO(\bR^n)$ is the dual space of $H^1(\bR^n)$. Although the pointwise product $fg$ of $f\in BMO(\bR^n)$ and $g\in H^1(\bR^n)$ may fail to be integrable, one can (see \cite{BIJZ}) view the product of $f$ and $g$ as a distribution, denoted by $f\times g$. Such a distribution can be written as the sum of an integrable function and a distribution in a new Hardy space, the so-called Hardy space of Musielak-Orlicz type (see \cite{BGK, Ky1}). A complete study of the product of functions in $BMO$ and $H^1$ was first carried out by Bonami, Iwaniec, Jones and Zinsmeister \cite{BIJZ}. Recently, Li and Peng \cite{LP} generalized this study to the setting of Hardy and $BMO$ spaces associated with Schr\"odinger operators. In particular, Li and Peng showed that if $L= -\Delta + V$ is a Schr\"odinger operator whose potential $V$ belongs to the reverse H\"older class $RH_q$ for some $q\geq n/2$, then one can view the product of $b \in BMO_L(\bR^n)$ and $f\in H^1_L(\bR^n)$ as a distribution $b\times f$ which can be written as the sum of an integrable function and a distribution in $H^{\wp}_L(\bR^n,d\nu)$. Here $H^{\wp}_L(\bR^n,d\nu)$ is the weighted Hardy-Orlicz space associated with $L$, related to the Orlicz function $\wp(t) =t/\log(e+t)$ and the weight $d\nu(x)= dx/\log(e+|x|)$. More precisely, they proved the following.
\begin{theorema}
For each $f\in H^1_L(\bR^n)$, there exist two bounded linear operators ${\mathscr L}_f: BMO_L(\bR^n) \to L^1(\bR^n)$ and ${\mathscr H}_f: BMO_L(\bR^n) \to H^{\wp}_L(\bR^n,d\nu)$ such that for every $b\in BMO_L(\bR^n)$,
$$b\times f = {\mathscr L}_f(b) + {\mathscr H}_f(b).$$
\end{theorema}
Let $(\X,d,\mu)$ be a space of homogeneous type in the sense of Coifman-Weiss. Following Han, M\"uller and Yang \cite{HMY}, we say that $(\X,d,\mu)$ is an {\sl RD-space} if $\mu$ satisfies {\sl reverse doubling property}, i.e., there exists a constant $C>1$ such that for all $x\in \mathcal X$ and $r>0$,
$$\mu(B(x,2r))\geq C \mu(B(x,r)).$$
A typical example for such RD-spaces is the Carnot-Carath\'eodory space with doubling measure. We refer to the seminal paper of Han, M\"uller and Yang \cite{HMY} (see also \cite{GLY, GLY2, YYZ, YZ}) for a systematic study of the theory of function spaces in harmonic analysis on RD-spaces.
Let $(\X,d,\mu)$ be an RD-space. Recently, in analogy with the classical result of Bonami-Iwaniec-Jones-Zinsmeister, Feuto proved in \cite{Feu} that:
\begin{theoremb}
For each $f\in H^1(\X)$, there exist two bounded linear operators ${\mathscr L}_f: BMO(\X) \to L^1(\X)$ and ${\mathscr H}_f: BMO(\X) \to H^{\wp}(\X,d\nu)$ such that for every $b\in BMO(\X)$,
$$b\times f = {\mathscr L}_f(b) + {\mathscr H}_f(b).$$
\end{theoremb}
Here the weight is $d\nu(x)=d\mu(x)/\log(e+ d(x_0,x))$ with $x_0\in \X$, and the Orlicz function $\wp$ is as in Theorem A. It should be pointed out that in \cite{Feu}, for $f=\sum_{j=1}^{\infty} \lambda_j a_j$, the author defined the distribution $b\times f$ as
\begin{equation}\label{Feuto by Bonami}
b \times f:= \sum_{j=1}^\infty \lambda_j (b-b_{B_j})a_j + \sum_{j=1}^\infty \lambda_j b_{B_j} a_j
\end{equation}
by proving that the second series is convergent in $H^{\wp}(\X,d\nu)$. This is made possible by the fact that $H^{\wp}(\X,d\nu)$ is complete and is continuously embedded into the space of distributions $(\G^{\epsilon}_0(\beta,\gamma))'$ (see Section 2), a fact which is not established in \cite{Feu}. Moreover, one has to prove that Definition (\ref{Feuto by Bonami}) does not depend on the atomic decomposition of $f$. In this paper, we give a definition of the distribution $b\times f$ (see Section 3) which is similar to that of Bonami-Iwaniec-Jones-Zinsmeister.
Our first main result can be read as follows.
\begin{theorem}\label{the first main theorem}
For each $f\in H^1(\X)$, there exist two bounded linear operators ${\mathscr L}_f: BMO(\X) \to L^1(\X)$ and ${\mathscr H}_f: BMO(\X) \to H^{\log}(\X)$ such that for every $b\in BMO(\X)$,
$$b\times f = {\mathscr L}_f(b) + {\mathscr H}_f(b).$$
\end{theorem}
Here $H^{\log}(\X)$ is the {\sl Musielak-Orlicz Hardy space} related to the Musielak-Orlicz function $\varphi(x,t)= \frac{t}{\log(e+ d(x_0,x)) + \log(e+t)}$ (see Section 2). Theorem \ref{the first main theorem} is an improvement of Theorem B since $H^{\log}(\X)$ is a proper subspace of $H^{\wp}(\X,d\nu)$.
Let $\rho$ be an {\sl admissible function} (see Section 2). Recently, Yang and Zhou \cite{YYZ, YZ} introduced and studied Hardy spaces and Morrey-Campanato spaces related to the function $\rho$. There, they established that $BMO_\rho(\X)$ is the dual space of $H^1_\rho(\X)$. Similar to the classical case, we can define the product of functions $b\in BMO_\rho(\X)$ and $f\in H^1_\rho(\X)$ as a distribution $b\times f \in (\G^{\epsilon}_0(\beta,\gamma))'$.
Our next main result is as follows.
\begin{theorem}\label{the second main theorem}
For each $f\in H^1_\rho(\X)$, there exist two bounded linear operators ${\mathscr L}_{\rho, f}: BMO_\rho(\X) \to L^1(\X)$ and ${\mathscr H}_{\rho, f}: BMO_\rho(\X) \to H^{\log}(\X)$ such that for every $b\in BMO_\rho(\X)$,
$$b\times f = {\mathscr L}_{\rho, f}(b) + {\mathscr H}_{\rho, f}(b).$$
\end{theorem}
When $\X\equiv\bR^n, n\geq 3,$ and $\rho(x)\equiv \sup\{r>0: \frac{1}{r^{n-2}}\int_{B(x,r)}V(y)dy\leq 1\}$, where $L= -\Delta +V$ is as in Theorem A, one has $BMO_\rho(\X)\equiv BMO_L(\bR^n)$ and $H^1_\rho(\X)\equiv H^1_L(\bR^n)$.
So, Theorem \ref{the second main theorem} is an improvement of Theorem A since $H^{\log}(\bR^n)$ is a proper subspace of $H^{\wp}_L(\bR^n, d\nu)$ (see \cite{Ky2}).
The following conjecture is suggested by A. Bonami and F. Bernicot.
\begin{conjecture}
There exist two bounded bilinear operators $\mathscr L: BMO(\X)\times H^1(\X) \to L^1(\X)$ and $\mathscr H: BMO(\X)\times H^1(\X)\to H^{\log}(\X)$ such that
$$b\times f = \mathscr L(b,f) + \mathscr H(b,f).$$
\end{conjecture}
It should be pointed out that when $\X=\bR^n$ and $H^{\log}(\X)$ is replaced by $H^{\wp}(\bR^n, d\nu)$, the above conjecture is just Conjecture 1.7 of \cite{BIJZ}, which was answered recently by Bonami, Grellier and Ky \cite{BGK} (see also \cite{Ky2}).
Throughout the whole paper, $C$ denotes a positive geometric constant which is independent of the main parameters, but may change from line to line. We write $f\sim g$ if there exists a constant $C>1$ such that $C^{-1}f\leq g\leq C f$.
The paper is organized as follows. In Section 2, we present some notations and preliminaries about $BMO$ type spaces and Hardy type spaces on RD-spaces. Section 3 is devoted to prove Theorem \ref{the first main theorem}. Finally, we give the proof for Theorem \ref{the second main theorem} in Section 4.
{\bf Acknowledgements.} The author would like to thank Aline Bonami, Sandrine Grellier, Dachun Yang and Fr\'ed\'eric Bernicot for very useful suggestions.
\section{Some preliminaries and notations}
Let $d$ be a quasi-metric on a set $\X$, that is, $d$ is a nonnegative function on $\mathcal X\times \mathcal X$ satisfying
\begin{enumerate}[(a)]
\item $d(x,y)=d(y,x)$,
\item $d(x,y)>0$ if and only if $x\ne y$,
\item there exists a constant $\kappa\geq 1$ such that for all $x,y,z\in \mathcal X$,
\begin{equation}
d(x,z)\leq \kappa(d(x,y)+ d(y,z)).
\end{equation}
\end{enumerate}
A triple $(\mathcal X, d,\mu)$ is called a {\sl space of homogeneous type} in the sense of Coifman-Weiss \cite{CW} if $\mu$ is a regular Borel measure satisfying the {\sl doubling property}, i.e., there exists a constant $C>1$ such that for all $x\in \mathcal X$ and $r>0$,
$$\mu(B(x,2r))\leq C \mu(B(x,r)).$$
Following Han, M\"uller and Yang \cite{HMY}, $(\mathcal X, d,\mu)$ is called an {\sl RD-space} if $(\mathcal X, d,\mu)$ is a space of homogeneous type and $\mu$ also satisfies the {\sl reverse doubling property}, i.e., there exists a constant $C>1$ such that for all $x\in \mathcal X$ and $r>0$,
$$\mu(B(x,2r))\geq C \mu(B(x,r)).$$
Set $\mbox{diam}(\mathcal X) := \sup_{x,y\in\mathcal X} d(x,y)$. It should be pointed out that $(\mathcal X, d,\mu)$ is an RD-space if and only if there exist constants $0<\mathfrak d \leq \mathfrak n$ and $C> 1$ such that for all $x\in\mathcal X$, $0<r<\mbox{diam}(\mathcal X)/2$, and $1\leq \lambda < \mbox{diam}(\mathcal X)/(2r)$,
\begin{equation}\label{RD-spaces}
C^{-1} \lambda^{\mathfrak d} \mu(B(x,r)) \leq \mu(B(x,\lambda r))\leq C \lambda^{\mathfrak n} \mu(B(x,r)).
\end{equation}
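For instance, the Euclidean space $(\bR^n,|\cdot|,dx)$ satisfies (\ref{RD-spaces}) with $\mathfrak d=\mathfrak n=n$, since the Lebesgue measure scales exactly under dilations:
$$|B(x,\lambda r)| = \lambda^n |B(x,r)| \quad \mbox{for all}\; x\in\bR^n,\; r>0\;\mbox{and}\; \lambda\geq 1.$$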
Here and in what follows, for $x, y\in\X$ and $r>0$, we denote $V_r(x):= \mu(B(x,r))$ and $V(x,y):= \mu(B(x,d(x,y)))$.
\begin{definition}\label{definition for test functions}
Let $x_0\in\mathcal X$, $r>0$, $0<\beta\leq 1$ and $\gamma >0$. A function $f$ is said to belong to the space of test functions, $\mathcal G(x_0,r,\beta,\gamma)$, if there exists a positive constant $C_f$ such that
\begin{enumerate}[(i)]
\item $|f(x)| \leq C_f \frac{1}{V_r(x_0) + V(x_0,x)}\Big(\frac{r}{r+ d(x_0,x)}\Big)^\gamma$ for all $x\in\mathcal X$;
\item $|f(x) - f(y)|\leq C_f \Big(\frac{d(x,y)}{r+ d(x_0,x)}\Big)^\beta \frac{1}{V_r(x_0) + V(x_0,x)}\Big(\frac{r}{r+ d(x_0,x)}\Big)^\gamma$ for all $x,y\in \mathcal X$ satisfying that $d(x,y)\leq \frac{r + d(x_0,x)}{2\kappa}$.
\end{enumerate}
For any $f\in \mathcal G(x_0,r,\beta,\gamma)$, we define
$$\|f\|_{\mathcal G(x_0,r,\beta,\gamma)}:= \inf \{C_f: (i) \; \mbox{and} \;(ii) \;\mbox{hold}\}.$$
\end{definition}
Let $\rho$ be a positive function on $\X$. Following Yang and Zhou \cite{YZ}, the function $\rho$ is said to {\sl be admissible} if there exist positive constants $C_0$ and $k_0$ such that for all $x,y\in \X$,
$$\rho(y)\leq C_0 [\rho(x)]^{1/(1+k_0)} [\rho(x)+d(x,y)]^{k_0/(1+k_0)}.$$
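For example, any constant function $\rho\equiv c>0$ is admissible with $C_0=1$ and any $k_0>0$, since
$$c = c^{1/(1+k_0)}\, c^{k_0/(1+k_0)} \leq c^{1/(1+k_0)}\, [c+d(x,y)]^{k_0/(1+k_0)}$$
for all $x,y\in\X$.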
{\sl Throughout the whole paper}, we always assume that $\mathcal X$ is an RD-space with $\mu(\mathcal X)=\infty$, and $\rho$ is an admissible function on $\X$. Also we fix $x_0\in \X$.
In Definition \ref{definition for test functions}, it is easy to see that $\mathcal G(x_0,1,\beta,\gamma)$ is a Banach space. For simplicity, we write $\mathcal G(\beta,\gamma)$ instead of $\mathcal G(x_0,1,\beta,\gamma)$. For $\epsilon\in (0,1]$ and $\beta,\gamma\in (0,\epsilon]$, we define the space $\mathcal G^\epsilon_0(\beta,\gamma)$ to be the completion of $\mathcal G(\epsilon,\epsilon)$ in $\mathcal G(\beta,\gamma)$, and denote by $(\mathcal G^\epsilon_0(\beta,\gamma))'$ the space of all continuous linear functionals on $\mathcal G^\epsilon_0(\beta,\gamma)$. We say that $f$ is a {\sl distribution} if $f$ belongs to $(\mathcal G^\epsilon_0(\beta,\gamma))'$.
Remark that, for any $x\in \mathcal X$ and $r>0$, one has $\mathcal G(x,r,\beta,\gamma)= \mathcal G(x_0,1,\beta,\gamma)$ with equivalent norms, but of course the constants are depending on $x$ and $r$.
Let $f$ be a distribution in $(\mathcal G^\epsilon_0(\beta,\gamma))'$. We define {\sl the grand maximal functions} $\M(f)$ and $\M_\rho(f)$ as follows:
$$\M(f)(x) := \sup\{|\langle f,\varphi \rangle|: \varphi\in \mathcal G^\epsilon_0(\beta,\gamma), \|\varphi\|_{\mathcal G(x,r,\beta,\gamma)}\leq 1\; \mbox{for some}\; r>0\},$$
$$\M_\rho(f)(x) := \sup\{|\langle f,\varphi \rangle|: \varphi\in \mathcal G^\epsilon_0(\beta,\gamma), \|\varphi\|_{\mathcal G(x,r,\beta,\gamma)}\leq 1\; \mbox{for some}\; r\in (0,\rho(x))\}.$$
Let $L^{\log}(\X)$ (see \cite{BGK, Ky1} for details) be the Musielak-Orlicz type space of $\mu$-measurable functions $f$ such that
$$\int_{\X} \frac{|f(x)|}{\log(e+|f(x)|) +\log(e + d(x_0,x))} d\mu(x)<\infty.$$
For $f\in L^{\log}(\X)$, we define the "norm" of $f$ as
$$\|f\|_{L^{\log}}=\inf\left\{ \lambda>0: \int_{\X} \frac{\frac{|f(x)|}{\lambda}}{\log(e+\frac{|f(x)|}{\lambda}) +\log(e + d(x_0,x))} d\mu(x)\leq 1\right\}.$$
\begin{definition}
Let $\epsilon\in (0,1)$ and $\beta,\gamma\in (0,\epsilon)$.
\begin{enumerate}[(i)]
\item The Hardy space $H^1(\mathcal X)$ is defined by
$$H^1(\mathcal X) = \{f\in (\mathcal G^\epsilon_0(\beta,\gamma))': \|f\|_{H^1}:= \|\M( f)\|_{L^1}<\infty \}.$$
\item The Hardy space $H^1_\rho(\mathcal X)$ is defined by
$$H^1_\rho(\mathcal X) = \{f\in (\mathcal G^\epsilon_0(\beta,\gamma))': \|f\|_{H^1_\rho}:= \|\M_\rho( f)\|_{L^1}<\infty \}.$$
\item The Hardy space $H^{\log}(\mathcal X)$ is defined by
$$H^{\log}(\mathcal X) = \{f\in (\mathcal G^\epsilon_0(\beta,\gamma))': \|f\|_{H^{\log}}:= \|\M( f)\|_{L^{\log}}<\infty \}.$$
\end{enumerate}
\end{definition}
It is clear that $H^1(\X) \subset H^1_{\rho}(\X)$ and $H^1(\X)\subset H^{\log}(\X)$, and both inclusions are continuous. It should be pointed out that the Musielak-Orlicz Hardy space $H^{\log}(\X)$ is a proper subspace of the weighted Hardy-Orlicz space $\H^{\wp}(\X, \nu)$ studied in \cite{Feu}. We refer to \cite{Ky1} for an introduction to Musielak-Orlicz Hardy spaces on the Euclidean space $\bR^n$.
\begin{definition}
Let $q\in (1,\infty]$.
\begin{enumerate}[(i)]
\item A measurable function $ \mathfrak a$ is called an $(H^1,q)$-atom related to the ball $B(x,r)$ if
\begin{enumerate}[(a)]
\item supp $\a\subset B(x,r)$,
\item $\|\a\|_{L^q}\leq (V_r(x))^{1/q-1}$,
\item $\int_{\mathcal X} \a(y) d\mu(y)=0$.
\end{enumerate}
\item A measurable function $\mathfrak a$ is called an $(H^1_\rho,q)$-atom related to the ball $B(x,r)$ if $r \leq 2\rho(x)$ and $\mathfrak a$ satisfies (a) and (b), and when $r < \rho(x)$, $\mathfrak a$ also satisfies (c).
\end{enumerate}
\end{definition}
The following results were established in \cite{GLY, YZ}.
\begin{Theorem}\label{atomic decomposition}
Let $\epsilon\in (0,1)$, $\beta,\gamma\in (0,\epsilon)$ and $q\in(1,\infty]$. Then, we have:
\begin{enumerate}[(i)]
\item The space $H^1(\mathcal X)$ coincides with the Hardy space $H^{1,q}_{\rm at}(\mathcal X)$ of Coifman-Weiss. More precisely, $f\in H^1(\X)$ if and only if $f$ can be written as $f= \sum_{j=1}^\infty \lambda_j a_j$ where the $a_j$'s are $(H^1,q)$-atoms and $\{\lambda_j\}_{j=1}^\infty\in\ell^1$. Moreover,
$$\|f\|_{H^1}\sim \inf \left\{\sum_{j=1}^\infty |\lambda_j| : f= \sum_{j=1}^\infty \lambda_j a_j\right\}.$$
\item $f\in H^1_\rho(\X)$ if and only if $f$ can be written as $f= \sum_{j=1}^\infty \lambda_j a_j$ where the $a_j$'s are $(H^1_\rho,q)$-atoms and $\{\lambda_j\}_{j=1}^\infty\in\ell^1$. Moreover,
$$\|f\|_{H^1_\rho}\sim \inf \left\{\sum_{j=1}^\infty |\lambda_j| : f= \sum_{j=1}^\infty \lambda_j a_j\right\}.$$
\end{enumerate}
\end{Theorem}
Here and in what follows, for any ball $B\subset \X$ and $g\in L^1_{\rm loc}(\X)$, we denote by $g_B$ the average value of $g$ over the ball $B$ and denote
$$MO(g,B):= \frac{1}{\mu(B)}\int_{B}|g(x) - g_B| d\mu(x).$$
Recall (see \cite{CW}) that a function $f\in L^1_{\rm loc}(\X)$ is said to be in $BMO(\X)$ if
$$\|f\|_{BMO}=\sup_{B} MO(f,B)<\infty,$$
where the supremum is taken over all balls $B\subset\X$.
\begin{definition}
Let $\rho$ be an admissible function and $\D:= \{B(x,r)\subset \X: r\geq \rho(x)\}$. A function $f\in L^1_{\rm loc}(\X)$ is said to be in $BMO_\rho(\X)$ if
$$\|f\|_{BMO_\rho}= \|f\|_{BMO} + \sup_{B\in \D}\frac{1}{\mu(B)}\int_B |f(x)| d\mu(x) <\infty.$$
\end{definition}
The following results are well-known, see \cite{CW, GLY, YYZ}.
\begin{theorem}
\begin{enumerate}[(i)]
\item The space $BMO(\X)$ is the dual space of $H^1(\X)$.
\item The space $BMO_\rho(\X)$ is the dual space of $H^1_\rho(\X)$.
\end{enumerate}
\end{theorem}
\section{The product of functions in $BMO(\mathcal X)$ and $H^1(\mathcal X)$}
Remark that if $g\in \G(\beta,\gamma)$, then
\begin{equation}\label{bounded property of test functions}
\|g\|_{L^\infty}\leq C \frac{1}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}
\end{equation}
and
\begin{equation}\label{integrable property of test functions}
\|g\|_{L^1}\leq (C +\sum_{j=0}^\infty 2^{-j\gamma}) \|g\|_{\G(\beta,\gamma)}\leq C \|g\|_{\G(\beta,\gamma)}.
\end{equation}
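To see where the series $\sum_{j=0}^\infty 2^{-j\gamma}$ in (\ref{integrable property of test functions}) comes from, split $\X$ into the ball $B(x_0,1)$ and the annuli $A_j:=\{x\in\X: 2^j\leq d(x_0,x)<2^{j+1}\}$, $j\geq 0$. On $A_j$, the size condition (i) of Definition \ref{definition for test functions} gives $|g(x)|\leq \|g\|_{\G(\beta,\gamma)} \frac{1}{V_{2^j}(x_0)} 2^{-j\gamma}$, so that
$$\int_{A_j} |g(x)| d\mu(x) \leq \|g\|_{\G(\beta,\gamma)} \frac{V_{2^{j+1}}(x_0)}{V_{2^j}(x_0)}\, 2^{-j\gamma} \leq C\, 2^{-j\gamma} \|g\|_{\G(\beta,\gamma)},$$
where the last inequality follows from the doubling property.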
\begin{proposition}\label{multipliers for bmo}
Let $\beta\in (0,1]$ and $\gamma\in (0,\infty)$. Then every $g\in \G(\beta,\gamma)$ is a pointwise multiplier of $BMO(\X)$. More precisely,
$$\|gf\|_{BMO}\leq C \frac{1}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}\|f\|_{BMO^+}$$
for all $f\in BMO(\X)$. Here and in what follows,
$$\|f\|_{BMO^+}:= \|f\|_{BMO} + \frac{1}{V_1(x_0)}\int_{B(x_0,1)}|f(x)|d\mu(x).$$
\end{proposition}
Using Proposition \ref{multipliers for bmo}, for $b\in BMO(\X)$ and $f\in H^1(\X)$, one can define the distribution $b\times f\in (\G^{\epsilon}_0(\beta,\gamma))'$ by the rule
\begin{equation}\label{distribution definition for products}
\langle b\times f, \phi\rangle := \langle \phi b, f\rangle
\end{equation}
for all $\phi\in \G^{\epsilon}_0(\beta,\gamma)$, where the second bracket stands for the duality bracket between $H^1(\X)$ and its dual $BMO(\X)$.
\begin{proof}[Proof of Proposition \ref{multipliers for bmo}]
By (\ref{bounded property of test functions}) and the pointwise multipliers characterization of $BMO(\X)$ (see \cite[Theorem 1.1]{Na}), it is sufficient to show that
\begin{equation}\label{multipliers for bmo 1}
\log(e +1/r)MO(g, B(a,r))\leq C \frac{1}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}
\end{equation}
and
\begin{equation}\label{multipliers for bmo 2}
\log(e+ d(x_0,a) + r) MO(g, B(a,r))\leq C \frac{1}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}
\end{equation}
hold for all balls $B(a,r)\subset \X$. It is easy to see that (\ref{multipliers for bmo 1}) follows from (\ref{bounded property of test functions}) and the Lipschitz property of $g$ (see (ii) of Definition \ref{definition for test functions}). Let us now establish (\ref{multipliers for bmo 2}). If $r<1$, then (\ref{multipliers for bmo 2}) follows from the Lipschitz property of $g$ and the fact that $\lim_{\lambda\to\infty}\frac{\log(\lambda)}{\lambda^\beta}=0$. Otherwise, we consider the following two cases:
\begin{enumerate}[(a)]
\item The case: $1\leq r\leq \frac{1}{4\kappa^3} d(x_0,a)$. Then, for every $x,y\in B(a,r)$, one has $d(x_0,a)\leq \frac{4\kappa^3}{4\kappa^2-1} d(x_0,x)$ and $d(x,y)\leq \frac{r+ d(x_0,x)}{2\kappa}$. Hence, the Lipschitz property of $g$ yields
$$|g(x)-g(y)|\leq C \|g\|_{\G(\beta,\gamma)}\frac{1}{V_1(x_0)}\Big(\frac{1}{d(x_0,a)}\Big)^\gamma.$$
This implies that (\ref{multipliers for bmo 2}) holds since $\lim_{\lambda\to\infty}\frac{\log(\lambda)}{\lambda^\gamma}=0$.
\item The case: $r> \frac{1}{4\kappa^3} d(x_0,a)$. Then, one has $B(x_0,r)\subset B(a, \kappa(4\kappa^3 +1)r)$. Hence, by (\ref{RD-spaces}), we get
\begin{eqnarray*}
\log(e+ d(x_0,a) + r) MO(g, B(a,r)) &\leq& C \frac{\log(2r)}{V_r(x_0)} \|g\|_{L^1}\\
&\leq& C\frac{\log(2r)}{r^{\mathfrak d}}\frac{1}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}\\
&\leq& C \frac{1}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}.
\end{eqnarray*}
This proves (\ref{multipliers for bmo 2}) and thus the proof of Proposition \ref{multipliers for bmo} is finished.
\end{enumerate}
\end{proof}
Next we define $L^{\Xi}(\X)$ as the space of $\mu$-measurable functions $f$ such that
$$\int_{\X} \frac{e^{|f(x)|}-1}{(1+ d(x_0,x))^{2\n}}d\mu(x)<\infty.$$
Then, the norm on the space $L^{\Xi}(\X)$ is defined by
$$\|f\|_{L^{\Xi}}=\inf\left\{ \lambda>0: \int_{\X} \frac{e^{|f(x)|/\lambda}-1}{(1+ d(x_0,x))^{2\n}}d\mu(x)\leq 1\right\}.$$
Recall the following two lemmas due to Feuto \cite{Feu}.
\begin{lemma}\label{Feuto, Lemma 3.2}
For every $f\in BMO(\X)$,
$$\|f - f_{B(x_0,1)}\|_{L^{\Xi}}\leq C \|f\|_{BMO}.$$
\end{lemma}
\begin{lemma}\label{Feuto, Lemma 3.1}
Let $q\in (1,\infty]$. Then,
$$\|(\b - \b_B)\M(\a)\|_{L^1}\leq C \|\b\|_{BMO}$$
for all $\b\in BMO(\X)$ and for all $(H^1,q)$-atom $\a$ related to the ball $B$.
\end{lemma}
The main point in the proof of Theorem \ref{the first main theorem} is the following.
\begin{proposition}\label{key lemma for Orlicz functions}
\begin{enumerate}[(i)]
\item For any $f\in L^1(\X)$ and $g\in L^{\Xi}(\X)$, we have
$$\|fg\|_{L^{\log}}\leq 64{\n}^2 \|f\|_{L^1}\|g\|_{L^{\Xi}}.$$
\item For any $f\in L^1(\X)$ and $g\in BMO(\X)$, we have
$$\|fg\|_{L^{\log}}\leq C \|f\|_{L^1}\|g\|_{BMO^+}.$$
\end{enumerate}
\end{proposition}
\begin{proof}
(i) If $\|f\|_{L^1}=0$ or $\|g\|_{L^{\Xi}}=0$, then there is nothing to prove. Otherwise, by the homogeneity of the norms, we may assume that $\|f\|_{L^1}=\|g\|_{L^{\Xi}}=\frac{1}{8\n}$. Then, we need to prove that
$$\int_{\X} \frac{|f(x)g(x)|}{\log(e + |f(x)g(x)|)+\log(e+ d(x_0,x))}d\mu(x)\leq 1.$$
Indeed, by using the following two inequalities
$$\log(e+ ab)\leq 2 (\log(e+a) + \log(e+b)),\; a,b\geq 0,$$
and
$$\frac{ab}{\log(e+ab)}\leq a + (e^b -1),\; a,b\geq 0,$$
we obtain that, for every $x\in \X$,
\begin{eqnarray*}
&&\frac{(1+ d(x_0,x))^{2\n}|f(x)g(x)|}{4\n(\log(e+|f(x)g(x)|) + \log(e+d(x_0,x)))}\\
&\leq& \frac{(1+ d(x_0,x))^{2\n}|f(x)g(x)|}{2(\log(e+|f(x)g(x)|) + \log(e+(1+d(x_0,x))^{2\n}))}\\
&\leq& \frac{(1+ d(x_0,x))^{2\n}|f(x)||g(x)|}{\log(e+ (1+ d(x_0,x))^{2\n}|f(x)||g(x)|)}\\
&\leq& (1+ d(x_0,x))^{2\n}|f(x)| + (e^{|g(x)|} -1).
\end{eqnarray*}
This, together with the fact that $8\n (e^{|g(x)|}-1)\leq e^{8\n|g(x)|}-1$, gives
\begin{eqnarray*}
&&\int_{\X} \frac{|f(x)g(x)|}{\log(e + |f(x)g(x)|)+\log(e+ d(x_0,x))}d\mu(x) \\
&\leq& 4\n \|f\|_{L^1} +\frac{1}{2}\int_{\X} \frac{e^{8\n |g(x)|}-1}{(1+ d(x_0,x))^{2\n}} d\mu(x)\\
&\leq& \frac{1}{2} + \frac{1}{2} =1,
\end{eqnarray*}
which completes the proof of (i).
(ii) It follows directly from (i) and Lemma \ref{Feuto, Lemma 3.2}.
\end{proof}
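For the reader's convenience, we sketch proofs of the two elementary inequalities used above. The first one follows (even with the constant $1$ in place of $2$) by taking logarithms in
$$e+ab\leq (e+a)(e+b),\quad a,b\geq 0.$$
For the second one, we consider two cases. If $b\leq \log(e+ab)$, then
$$\frac{ab}{\log(e+ab)}\leq a.$$
Otherwise $e+ab< e^b$, and since $\log(e+ab)\geq 1$, we get
$$\frac{ab}{\log(e+ab)}\leq ab\leq e^b-1.$$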
We are now ready to give the proof of Theorem \ref{the first main theorem}.
\begin{proof}[\bf Proof of Theorem \ref{the first main theorem}]
By (i) of Theorem \ref{atomic decomposition}, $f$ can be written as
$$f=\sum_{j=1}^\infty \lambda_j a_j$$
where the $a_j$'s are $(H^1,\infty)$-atoms related to the balls $B_j$'s and $\sum_{j=1}^\infty |\lambda_j|\leq C \|f\|_{H^1}$. Therefore, for all $b\in BMO(\X)$, we have
\begin{equation}\label{first theorem, integrable function}
\left\|\sum_{j=1}^\infty \lambda_j (b-b_{B_j})a_j\right\|_{L^1} \leq \sum_{j=1}^\infty |\lambda_j| \|(b-b_{B_j})a_j\|_{L^1}\leq C \|b\|_{BMO}\|f\|_{H^1}.
\end{equation}
By this and Definition (\ref{distribution definition for products}), we see that the series $\sum_{j=1}^\infty \lambda_j b_{B_j} a_j$ converges to $b\times f - \sum_{j=1}^\infty \lambda_j (b-b_{B_j})a_j$ in $(\G^{\epsilon}_0(\beta,\gamma))'$. Consequently, if we define the decomposition operators as
$${\mathscr L}_f (b)= \sum_{j=1}^\infty \lambda_j (b-b_{B_j})a_j$$
and
$${\mathscr H}_f (b)= \sum_{j=1}^\infty \lambda_j b_{B_j} a_j,$$
where the sums are taken in $(\G^{\epsilon}_0(\beta,\gamma))'$, then it follows from (\ref{first theorem, integrable function}) that ${\mathscr L}_f: BMO(\X)\to L^1(\X)$ is a bounded linear operator, and for every $b\in BMO(\X)$,
$$b\times f = {\mathscr L}_f(b) + {\mathscr H}_f(b).$$
Now we only need to prove that the distribution ${\mathscr H}_f(b)$ is in $H^{\log}(\X)$. Indeed, by Lemma \ref{Feuto, Lemma 3.1} and (ii) of Proposition \ref{key lemma for Orlicz functions}, we get
\begin{eqnarray*}
\|\M({\mathscr H}_f(b))\|_{L^{\log}} &\leq& \left\| \sum_{j=1}^\infty |\lambda_j| |b_{B_j}| \M(a_j)\right\|_{L^{\log}}\\
&\leq& \left\| \sum_{j=1}^\infty |\lambda_j| |b- b_{B_j}| \M(a_j)\right\|_{L^1} + \left\| b \sum_{j=1}^\infty |\lambda_j| \M(a_j)\right\|_{L^{\log}}\\
&\leq& C \|f\|_{H^1}\|b\|_{BMO^+}.
\end{eqnarray*}
This proves that ${\mathscr H}_f$ is bounded from $BMO(\X)$ into $H^{\log}(\X)$, and thus ends the proof of Theorem \ref{the first main theorem}.
\end{proof}
\section{The product of functions in $BMO_\rho(\X)$ and $H^1_\rho(\X)$}
For $f\in BMO_\rho(\X)$, a standard argument gives
\begin{equation}\label{relation between BMO spaces}
\|f\|_{BMO^+} \leq C \log(\rho(x_0) +1/\rho(x_0))\|f\|_{BMO_\rho}.
\end{equation}
\begin{proposition}\label{multipliers for generalized bmo associated with the admissible functions}
Let $\beta\in (0,1]$ and $\gamma\in (0,\infty)$. Then every $g\in \G(\beta,\gamma)$ is a pointwise multiplier of $BMO_\rho(\X)$. More precisely, for every $f\in BMO_\rho(\X)$,
$$\|gf\|_{BMO_\rho}\leq C \frac{\log(\rho(x_0)+1/\rho(x_0))}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}\|f\|_{BMO_\rho}.$$
\end{proposition}
\begin{proof}
By Proposition \ref{multipliers for bmo}, (\ref{relation between BMO spaces}) and (\ref{bounded property of test functions}), we get
\begin{eqnarray*}
\|gf\|_{BMO_\rho} &\leq& \|gf\|_{BMO} + \|g\|_{L^\infty} \sup_{B\in\D}\frac{1}{\mu(B)}\int_B |f(x)|d\mu(x)\\
&\leq& C \frac{\log(\rho(x_0)+1/\rho(x_0))}{V_1(x_0)}\|g\|_{\G(\beta,\gamma)}\|f\|_{BMO_\rho}.
\end{eqnarray*}
\end{proof}
Using Proposition \ref{multipliers for generalized bmo associated with the admissible functions}, for $b\in BMO_\rho(\X)$ and $f\in H^1_\rho(\X)$, one can define the distribution $b\times f\in (\G^{\epsilon}_0(\beta,\gamma))'$ by the rule
\begin{equation}\label{distribution definition for products associated with the admissible functions}
\langle b\times f, \phi\rangle := \langle \phi b, f\rangle
\end{equation}
for all $\phi\in \G^{\epsilon}_0(\beta,\gamma)$, where the second bracket stands for the duality bracket between $H^1_\rho(\X)$ and its dual $BMO_\rho(\X)$.
\begin{proof}[\bf Proof of Theorem \ref{the second main theorem}]
By (ii) of Theorem \ref{atomic decomposition}, there exist a sequence of $(H^1_\rho,\infty)$-atoms $\{a_j\}_{j=1}^\infty$ associated with the balls $\{B(x_j, r_j)\}_{j=1}^\infty$ and a sequence of numbers $\{\lambda_j\}_{j=1}^\infty$ with $\sum_{j=1}^\infty |\lambda_j|\leq C \|f\|_{H^1_\rho}$ such that
$$f= \sum_{j=1}^\infty \lambda_j a_j= f_1 + f_2,$$
where $f_1= \sum_{r_j<\rho(x_j)} \lambda_j a_j \in H^1(\X)$ and $f_2= \sum_{r_j \geq \rho(x_j)} \lambda_j a_j$.
We define the decomposition operators as follows:
$${\mathscr L}_{\rho,f}(b)= {\mathscr L}_{f_1}(b) + b f_2$$
and
$${\mathscr H}_{\rho,f}(b)= {\mathscr H}_{f_1}(b),$$
where the operators ${\mathscr L}_{f_1}$ and ${\mathscr H}_{f_1}$ are as in Theorem \ref{the first main theorem}. Then, Theorem \ref{the first main theorem} together with (\ref{relation between BMO spaces}) give
\begin{eqnarray*}
\|{\mathscr L}_{\rho,f}(b)\|_{L^1} &\leq& \|{\mathscr L}_{f_1}(b)\|_{L^1} + \sum_{r_j\geq \rho(x_j)} |\lambda_j|\|b a_j\|_{L^1}\\
&\leq& C \|f_1\|_{H^1} \|b\|_{BMO} + C \|b\|_{BMO_\rho} \sum_{r_j\geq \rho(x_j)} |\lambda_j|\\
&\leq& C \|f\|_{H^1_\rho}\|b\|_{BMO_\rho}
\end{eqnarray*}
and
$$\|{\mathscr H}_{\rho,f}(b)\|_{H^{\log}}\leq C \|f_1\|_{H^1}\|b\|_{BMO^+}\leq C \|f\|_{H^1_\rho}\|b\|_{BMO_\rho}.$$
This proves that the linear operator ${\mathscr L}_{\rho,f}: BMO_\rho(\X) \to L^1(\X)$ is bounded and the linear operator ${\mathscr H}_{\rho,f}: BMO_\rho(\X) \to H^{\log}(\X)$ is bounded. Moreover,
\begin{eqnarray*}
b\times f &=& b\times f_1 + b\times f_2\\
&=& ({\mathscr L}_{f_1}(b) + {\mathscr H}_{f_1}(b)) + b f_2\\
&=& {\mathscr L}_{\rho,f}(b) + {\mathscr H}_{\rho,f}(b),
\end{eqnarray*}
which ends the proof of Theorem \ref{the second main theorem}.
\end{proof}
\section{Introduction}
For stars with masses $M$ above approximately $8 M_{\odot}$ the end of their fuel-burning phase results in a phenomenon known as core-collapse~\cite{Janka:2006fh}. At this point pressure from nuclear fusion can no longer counter the force of gravity and so the star's core contracts. The end of this process is a violent supernova (SN), leading to a vast amount of energy released in the form of photons and neutrinos~\cite{Lang:2016zhv}. It is the neutrinos which carry away the majority of the released energy,
with over $10^{53}$~ergs emitted in neutrinos from a core-collapse supernova~\cite{Lunardini:2009ya,2010PhRvL.104y1101H}.
The flux and energy of this emission should depend on the initial mass of the star. For stars with mass $8 M_{\odot} \lesssim M \lesssim 25 M_{\odot}$ the collapse will result in a neutron star (NS)
and a large flux of neutrino and photonic emission~\cite{Lunardini:2009ya}. However for more massive stars with $M \gtrsim 25 M_{\odot} \mbox{-} 40 M_{\odot}$ the core will collapse to a black hole (BH), with potentially different physics
as a result, leading to what is sometimes called an unnova~\cite{Yuksel:2012zy}.
Simulations have shown that during such a collapse the flux of neutrinos is greater than for the NS-forming supernova events, and their average energy is larger~\cite{Lunardini:2009ya,PhysRevD.78.083014,Fischer:2008rh,Sumiyoshi:2008zw}.
There may also be considerably less photonic emission making such events difficult to observe with conventional telescopes, and indeed the only optical method of looking for unnovae may be to search for stars disappearing from the sky~\cite{Adams:2016ffj}.
Hence observations of the neutrino flux from one or more unnova events are likely to be the only way to obtain valuable information on the formation of black holes from core-collapse of stars, especially at larger redshifts up to $z \sim 1$ and
above, where optical disappearance searches are more difficult~\cite{Adams:2016hit}.
Neutrinos from supernovae are detectable in two broad categories: through the direct observation of a neutrino burst, potentially correlated with an optical event, and through the (as yet undetected but expected) diffuse supernova neutrino background (DSNB)~\cite{Beacom:2010kk,Lunardini:2010ab}.
The detection of a neutrino burst within our own galaxy will lead to potentially $10^4$ neutrino detections within a short space of time in e.g. Super Kamiokande~\cite{Fukuda:2002uc,Ando:2005ka}, however we expect at most a few such galactic supernovae per
century~\cite{Ando:2005ka}. Indeed a burst of neutrinos from a supernova has been observed only once so far, from SN1987A~\cite{Hirata:1987hu}. The next generation of more massive detectors e.g. Hyper-Kamiokande~\cite{Abe:2011ts} will potentially be sensitive to neutrino bursts from supernovae up to a few Mpc away, which will occur more frequently, however the flux from such events on Earth
will be small and so the statistics will be limited~\cite{Ando:2005ka}.
By contrast the DSNB represents a continuous source of neutrinos from all of the supernovae which have occurred in the Universe, and does not require us to wait for a nearby supernova event to occur. If the simulations of BH-forming
collapses are correct, then this DSNB should be comprised broadly of two components: a larger flux from NS-forming core-collapse supernovae and a smaller component at higher-energies from BH-forming unnovae~\cite{Lunardini:2009ya,Yuksel:2012zy,Keehn:2010pn,Lien:2010yb,Nakazato:2015rya,Lunardini:2010ab}.
As pointed out in refs.~\cite{Lunardini:2009ya,Yuksel:2012zy,Keehn:2010pn,Lien:2010yb,Nakazato:2015rya,Lunardini:2010ab}, by measuring the spectra and fluxes of both components we can obtain useful information on the birth rate of black holes as a function of redshift,
and so the DSNB is potentially a unique window into the physics of black hole production from stellar collapse. However this is made significantly more complicated by the large number of parameters which enter the calculation of the
DSNB, such as the redshift-dependent star formation rate, the flux and spectra of both NS-forming supernovae and BH-forming unnovae and the redshift dependence of the fraction of stars which collapse to NS or BH.
No comprehensive study looking at the degeneracies between all these parameters has yet been done, something which we address in this work.
Of all the black holes born long enough ago to have had time to merge by the present day, only a fraction $\epsilon$ will exist in binary systems with the right properties to produce observable gravitational waves, such as the event observed by the LIGO experiment towards the end of 2015~\cite{Abbott:2016blz,TheLIGOScientific:2016htt}.
Indeed the LIGO experiment currently sets the merger rate of black hole binaries $\mathcal{R}_{\mathrm{BH-BH}}$ within the range $9 - 240 \, \mathrm{Gpc}^{-3} \mathrm{yr}^{-1}$ at $90\%$ confidence~\cite{TheLIGOScientific:2016pea}, and this should improve in precision significantly over the next decade~\cite{Kovetz:2016kpi}.
This quantity $\epsilon$ is not well-known, though efforts have been made to infer its value using theoretical predictions for the black hole birth rate combined with LIGO data on the merger rate of black hole binaries~\cite{Elbert:2017sbr}. It should however be possible instead to use neutrinos from unnovae to constrain the black hole birth rate, and so place bounds on $\epsilon$.
In this work we discuss the potential of future observations using the Hyper-Kamiokande experiment to place constraints on the birth rate of black holes using the DSNB, and then show that by combining this with the merger rate of black holes from LIGO it is possible to place limits on the fraction of black holes which end up in binary mergers $\epsilon$.
In section~\ref{sec:dsnb_mcmc} we obtain robust projected constraints on the black hole birth rate from the DSNB for the upcoming Hyper-Kamiokande experiment by performing a Markov Chain Monte Carlo (MCMC) analysis, taking into
account all of the relevant nuisance parameters.
Crucially we consider the fact that simulations of neutrino production during BH-forming collapse events may not accurately reflect the true physics, by using different prior distributions for the neutrino spectra from unnovae.
We then combine this birth rate with the expectations and measurements of the BH-BH merger rate from LIGO in section~\ref{sec:ligo} to infer the fraction of black holes which lead to merger events $\epsilon$. We conclude in section~\ref{sec:conc}.
\section{MCMC study of the DSNB\label{sec:dsnb_mcmc}}
In this section, we seek to constrain the black hole birth rate $R_{\mathrm{BH}}(z)$ as a function of redshift $z$ by using projected measurements of the DSNB.
Previous studies have considered the effect of $R_{\mathrm{BH}}(z)$ on the DSNB~\cite{Lunardini:2009ya,Yuksel:2012zy,Keehn:2010pn,Lien:2010yb,Nakazato:2015rya,Lunardini:2010ab}, however in this work we seek instead to infer
$R_{\mathrm{BH}}(z)$ from the DSNB, given the full set of potentially degenerate parameters e.g. the supernova and unnova spectrum. Once we have projected bounds on $R_{\mathrm{BH}}(z)$, we will be equipped to combine this information
with LIGO data on black hole mergers to infer the merger fraction.
Hence we perform a Markov Chain Monte Carlo analysis (MCMC) over all parameters on which the DSNB flux depends (see appendix~\ref{app:MCMC} for more information),
to understand to what extent $R_{\mathrm{BH}}(z)$ can be inferred from the DSNB given our potentially uncertain knowledge of the physics of supernovae and unnovae.
Importantly, since the only information we have for neutrino emission from unnovae is from simulations~\cite{PhysRevD.78.083014,Fischer:2008rh,Sumiyoshi:2008zw,Lunardini:2009ya}, we need to incorporate the possibility that unnovae do not exist,
or are identical to supernovae in their neutrino emission.
\subsection{Detecting the DSNB in a water Cherenkov experiment}
The DSNB is a flux of neutrinos from all of the supernovae (and potentially unnovae) which have occurred throughout the Universe. We focus on anti-neutrinos here as they are easier to detect.
The DSNB flux $\Phi(E)$ as a function of neutrino energy $E$ is calculated using the following integral over redshift $z$ with $15$
parameters which we list in table~\ref{table_params}. In each case a subscript ``NS'' refers to neutron star forming core-collapse events while subscript ``BH'' refers to black hole forming unnovae.
\begin{equation}
\begin{split}
\Phi(E) = \frac{c}{H_0} \int_0^{z_{\mathrm{max}}} {\frac{\mathrm{d} z}{\sqrt{\Omega_m (1+z)^3 + \Omega_{\Lambda}}}} \big[ R_{\mathrm{NS}}(z) F_{\mathrm{NS}}(E(1+z);\bar{E}_{e \mathrm{NS}},\bar{E}_{x \mathrm{NS}},L_{e \mathrm{NS}},L_{x \mathrm{NS}}) \\
+ R_{\mathrm{BH}}(z) F_{\mathrm{BH}}(E(1+z);\bar{E}_{e \mathrm{BH}},\bar{E}_{x \mathrm{BH}},L_{e \mathrm{BH}},L_{x \mathrm{BH}}) \big]
\label{eqn:dsnb_all}
\end{split}
\end{equation}
where $R_{\mathrm{NS}}(z)$ and $R_{\mathrm{BH}}(z)$ are respectively the redshift-dependent rates of NS and BH forming core-collapse events, $F_{\mathrm{NS}}$ and $F_{\mathrm{BH}}$ are the spectra of either supernovae or unnovae, $H_0$ is the Hubble constant, $c$ is the speed of light,
$\Omega_m \approx 0.3$ is the fractional matter density of the Universe and $\Omega_{\Lambda} \approx 0.7$ is the same for dark energy~\cite{Lunardini:2009ya,Yuksel:2012zy}. Here we take the maximum redshift to be $z_{\mathrm{max}} = 4$.
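As a quick numerical cross-check, the redshift integral in equation~\ref{eqn:dsnb_all} can be evaluated directly. The sketch below is illustrative only: the prefactor $c/H_0$ and all unit conversions are absorbed into an assumed `prefactor` argument, and the rate and spectrum functions ($R_{\mathrm{NS}}$, $R_{\mathrm{BH}}$, $F_{\mathrm{NS}}$, $F_{\mathrm{BH}}$) are supplied by the caller; the function name is our own.

```python
import math

def dsnb_flux(E, rate_ns, rate_bh, spec_ns, spec_bh,
              omega_m=0.3, omega_l=0.7, z_max=4.0,
              prefactor=1.0, n_steps=2000):
    """Evaluate the DSNB redshift integral at one observed energy E (MeV).

    rate_ns, rate_bh : callables z -> comoving core-collapse rates
    spec_ns, spec_bh : callables E -> source spectra, evaluated at the
                       emitted (blueshifted) energy E*(1+z)
    prefactor        : stands in for c/H_0 and any unit conversions
    """
    def integrand(z):
        hubble = math.sqrt(omega_m * (1.0 + z) ** 3 + omega_l)
        e_emit = E * (1.0 + z)   # energy at the source, before redshifting
        return (rate_ns(z) * spec_ns(e_emit)
                + rate_bh(z) * spec_bh(e_emit)) / hubble

    # trapezoidal rule over 0 <= z <= z_max
    dz = z_max / n_steps
    total = 0.5 * (integrand(0.0) + integrand(z_max))
    total += sum(integrand(i * dz) for i in range(1, n_steps))
    return prefactor * total * dz
```

For thermal-like source spectra the observed flux falls with energy, since redshifting only pushes the integrand to higher source energies.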
\begin{table}[t]
\begin{center}
\begin{tabular}{ c || c }
\textbf{Parameter} & \textbf{Description} \\ \hline
$\bar{E}_{e \mathrm{NS}}$ and $\bar{E}_{e \mathrm{BH}}$& Average energy of $\bar{\nu}_e$\\
\hline
$\bar{E}_{x \mathrm{NS}}$ and $\bar{E}_{x \mathrm{BH}}$& Average energy of $\bar{\nu}_{\mu}$ and $\bar{\nu}_{\tau}$\\
\hline
${L}_{e \mathrm{NS}}$ and ${L}_{e \mathrm{BH}}$& Luminosity of $\bar{\nu}_e$\\
\hline
${L}_{x \mathrm{NS}}$ and ${L}_{x \mathrm{BH}}$& Luminosity of $\bar{\nu}_{\mu}$ and $\bar{\nu}_{\tau}$\\
\hline
$R_0$& Total rate of core-collapse events \\
& at the present-day\\ \hline
$\gamma$ and $\beta$& Powers of redshift-dependent \\
& total core-collapse rate \\ \hline
$f_0$, $f_1$ and $f_4$& Fraction of core-collapses which lead to BH \\
&production at $z = 0$, $z = 1$ and $z = 4$ respectively \\ \hline
$\bar{p}$&Flavour oscillation parameter
\end{tabular}
\end{center}
\caption{List of parameters used in our analysis of the DSNB.}
\label{table_params}
\end{table}
Anti-neutrinos and neutrinos decouple from matter at a radial surface known as the neutrino-sphere~\cite{Sigl:1994da}, and their spectrum becomes an approximately thermal one with a fixed average energy $\bar{E}$. The value of $\bar{E}$ is different for $\nu_e$, $\bar{\nu}_e$
and the remaining heavy $\mu$ and $\tau$ flavours, which we denote as $x$, since each of these three types of neutrino interacts with matter with different strengths. Hence the $\bar{\nu}_e$ have a different spectrum from $\bar{\nu}_x$.
Since neutrinos oscillate between flavours after they are produced, a neutrino produced as a $\bar{\nu}_x$ may oscillate into a $\bar{\nu}_e$ or vice versa.
Due to the extremely small size of the neutrino wave-packet, most of this conversion will occur due to matter effects, and not as the neutrinos travel to Earth~\cite{Kersten:2015kio,PhysRevD.37.552}.
\footnote{This is a subtle point and follows from the fact that the neutrinos are produced from charged particles such as nuclei or electrons/positrons, which have an extremely small mean free path for scattering in the supernova.
Hence the average time over which neutrinos are emitted coherently is tiny, and so the wave-packet size can be as small as $10^{-11}$~cm~\cite{Kersten:2015kio,PhysRevD.37.552}. It follows from the Heisenberg uncertainty principle that the uncertainty on the neutrino momentum and energy is large, and so we cannot describe the different mass eigenstates of the neutrinos as propagating with different velocities. Hence there is practically no observable separation of mass eigenstates as the neutrinos travel to Earth (see also ref.~\cite{Wright:2017jwl}).}
Indeed, the densities of matter in the remnant star during neutrino
production are so large that matter effects can dominate the flavour oscillations. In this work we follow ref.~\cite{Lunardini:2009ya} and parametrise the effect of flavour oscillations with the variable $\bar{p}$,
which can be anywhere between $0$ and $\cos^2 \theta_{12} = 0.68$. We use the same variable for both supernovae and unnovae, however in principle both such scenarios could have different prior distributions of $\bar{p}$, which may or
may not be correlated. Our knowledge of $\bar{p}$ for supernovae and unnovae should improve greatly in the near future due to various upcoming measurements, such as a determination of the neutrino mass hierarchy, a better theoretical understanding of neutrino oscillations near the neutrinosphere and better measurements of the supernova neutrino spectrum~\cite{Mirizzi:2015eza}.
Hence, the spectrum of electron anti-neutrinos $\bar{\nu}_e$ which will be detected on Earth takes the form,
\begin{eqnarray}
F_{\mathrm{NS}} &=& \bar{p} \cdot J_{e \mathrm{NS}}(E(1+z);\bar{E}_{e \mathrm{NS}},L_{e \mathrm{NS}}) + (1 - \bar{p}) \cdot J_{x \mathrm{NS}}(E(1+z);\bar{E}_{x \mathrm{NS}},L_{x \mathrm{NS}}) \\
F_{\mathrm{BH}}&=& \bar{p} \cdot J_{e \mathrm{BH}}(E(1+z);\bar{E}_{e \mathrm{BH}},L_{e \mathrm{BH}}) + (1 - \bar{p}) \cdot J_{x \mathrm{BH}}(E(1+z);\bar{E}_{x \mathrm{BH}},L_{x \mathrm{BH}})
\end{eqnarray}
\begin{equation}
J(E,\bar{E},L) = \frac{L \cdot (1+\alpha)^{1+\alpha}}{\Gamma(1+\alpha) \bar{E}^2} \left( \frac{E}{\bar{E}} \right)^{\alpha} \exp \left[ - (1+\alpha) \frac{E}{\bar{E}} \right]
\end{equation}
i.e. a superposition of two approximately thermal spectra, with $\bar{p}$ controlling which one dominates. We set $\alpha = 3.5$ for $\bar{\nu}_e$ and $\alpha = 2.5$ for $\bar{\nu}_x$, and $\Gamma$ denotes the gamma function.
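These spectra are straightforward to code up. The sketch below (function names our own, MeV-based units assumed) also makes the normalisation explicit: integrating $E\,J(E,\bar{E},L)$ over all energies recovers the total luminosity $L$.

```python
import math

def pinched_spectrum(E, E_avg, L, alpha):
    """Quasi-thermal ('pinched') source spectrum J(E, E_avg, L).

    alpha = 3.5 for anti-nu_e and 2.5 for anti-nu_x in the text's
    convention; Gamma(1 + alpha) is the usual gamma function.
    """
    norm = (L * (1.0 + alpha) ** (1.0 + alpha)
            / (math.gamma(1.0 + alpha) * E_avg ** 2))
    return norm * (E / E_avg) ** alpha * math.exp(-(1.0 + alpha) * E / E_avg)

def oscillated_spectrum(E, p_bar, j_e, j_x):
    """F = p_bar * J_e + (1 - p_bar) * J_x, the detected anti-nu_e mix."""
    return p_bar * j_e(E) + (1.0 - p_bar) * j_x(E)
```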
The rates of NS and BH forming collapse events $R_{\mathrm{NS}}(z)$ and $R_{\mathrm{BH}}(z)$ take the form,
\begin{eqnarray}
R_{\mathrm{NS}}(z) &=& [1-f_{\mathrm{BH}}(z)] R(z) \\
R_{\mathrm{BH}}(z) &=& f_{\mathrm{BH}}(z) R(z)
\end{eqnarray}
\begin{equation}
R(z) =
\begin{cases}
R_0 (1+z)^{\beta} & \text{for } z \leq z_{\mathrm{th}} \\
R_0 (1+z_{\mathrm{th}})^{\beta} (1+z)^{\gamma} (1+z_{\mathrm{th}})^{-\gamma} & \text{for } z_{\mathrm{th}} < z \leq 4 \\
0 & \text{for } z > 4
\end{cases}
\end{equation}
\begin{equation}
f_{\mathrm{BH}}(z) =
\begin{cases}
f_0 (1+z)^{\kappa} & \text{for } z \leq z_{\mathrm{th}} \\
f_0 (1+z_{\mathrm{th}})^{\kappa} (1+z)^{\epsilon} (1+z_{\mathrm{th}})^{-\epsilon} & \text{for } z_{\mathrm{th}} < z \leq 4
\end{cases}
\end{equation}
where $z_{\mathrm{th}} = 1$ and $\kappa$ and $\epsilon$ are fixed such that $f_{\mathrm{BH}}(z=0) = f_0$, $f_{\mathrm{BH}}(z=1) = f_1$ and $f_{\mathrm{BH}}(z=4) = f_4$ for the parameters $f_0$, $f_1$ and $f_4$. In this case $f_{\mathrm{BH}}(z)$ is the redshift-dependent
fraction of total core-collapse events which result in black holes instead of neutron stars. The form of $R(z)$ is based on an empirical fit to star formation data~\cite{Hopkins:2006bw,Lunardini:2009ya,Yuksel:2012zy}.
We have made the simplifying assumption that the rate of supernovae and unnovae vanishes for $z > 4$, which is around the redshift where models predict the star formation rate to fall sharply.
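The broken power laws above share one pattern: a low-$z$ slope, a high-$z$ slope, and continuity enforced at $z_{\mathrm{th}} = 1$. A minimal sketch (function names our own) which also fixes $\kappa$ and $\epsilon$ from the anchor values $f_0$, $f_1$ and $f_4$:

```python
import math

Z_TH, Z_MAX = 1.0, 4.0

def broken_power_law(z, value0, slope_low, slope_high):
    """Continuous broken power law in (1 + z), breaking at Z_TH and
    set to zero above Z_MAX, as for R(z) in the text."""
    if z > Z_MAX:
        return 0.0
    if z <= Z_TH:
        return value0 * (1.0 + z) ** slope_low
    at_break = value0 * (1.0 + Z_TH) ** slope_low   # continuity at z = Z_TH
    return at_break * ((1.0 + z) / (1.0 + Z_TH)) ** slope_high

def f_bh(z, f0, f1, f4):
    """Black-hole fraction with f_bh(0) = f0, f_bh(1) = f1, f_bh(4) = f4."""
    kappa = math.log(f1 / f0) / math.log(2.0)       # (1+z) goes 1 -> 2
    eps = math.log(f4 / f1) / math.log(5.0 / 2.0)   # (1+z) goes 2 -> 5
    return broken_power_law(z, f0, kappa, eps)

def rate_bh(z, R0, beta, gamma, f0, f1, f4):
    """R_BH(z) = f_bh(z) * R(z)."""
    return f_bh(z, f0, f1, f4) * broken_power_law(z, R0, beta, gamma)
```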
Our final step is to calculate the actual measured event rate in a neutrino detector. We focus on electron anti-neutrinos $\bar{\nu}_e$ detected in water Cherenkov detectors such as Super Kamiokande or Hyper-Kamiokande, which are detected via the observation of positrons from inverse beta decay capture reactions $\bar{\nu}_e + p \rightarrow n + e^+$~\cite{Beacom:2010kk}. The cross section of this interaction $\sigma_{\mathrm{IB}}(E)$ is larger than that for elastic scattering
of the remaining neutrino flavours with electrons, leading $\bar{\nu}_e$ to dominate the event rate from the DSNB. The detected DSNB spectrum is then,
\begin{equation}
\frac{\mathrm{d}N}{\mathrm{d}E_p} = N_t \Phi(E) \cdot \sigma_{\mathrm{IB}}(E) ,
\end{equation}
where the positron energy $E_p = E - 1.3 \, \mathrm{MeV}$ and $N_t$ is the number of target protons in the detector (with only those in the hydrogen of H$_2$O contributing). Since the Hyper-Kamiokande experiment has a finite energy
resolution $\sigma_E$, we need to account for this when generating our expected spectra. The Hyper-Kamiokande experiment is expected to have the same level of energy resolution as Super Kamiokande~\cite{Abe:2011ts}, which takes the form of a Gaussian standard deviation~\cite{Abe:2016nxk},
\begin{equation}
\sigma_E(E_p) = \left[ -0.0839 + 0.349 \sqrt{\frac{E_p}{\mathrm{MeV}}} + 0.0397 \left( \frac{E_p}{\mathrm{MeV}} \right) \right] \, \, \mathrm{MeV},
\end{equation}
which is therefore the resolution we adopt for our analysis. We incorporate this to calculate $\frac{\mathrm{d}N}{\mathrm{d}E_{\mathrm{ex}}}$, the measured spectrum expected in Hyper-Kamiokande, using the expression,
\begin{equation}
\frac{\mathrm{d}N}{\mathrm{d}E_{\mathrm{ex}}} = \int \mathrm{d} E_p \frac{\mathrm{d}N}{\mathrm{d}E_p} \frac{1}{\sqrt{2 \pi \sigma_E^2}} \exp \left[ \frac{- (E_{\mathrm{ex}} - E_p)^2}{2 \sigma_E^2} \right] ,
\end{equation}
where $E_{\mathrm{ex}}$ is then the energy measured in Hyper-Kamiokande.
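A minimal sketch of this smearing step (function names our own, energies in MeV): the resolution function is the one quoted above, and the convolution is done by direct quadrature over the true positron energy.

```python
import math

def sigma_e(e_p):
    """Gaussian energy resolution (MeV) quoted in the text."""
    return -0.0839 + 0.349 * math.sqrt(e_p) + 0.0397 * e_p

def smeared_spectrum(spectrum, e_ex, e_lo=1.0, e_hi=100.0, n_steps=2000):
    """Convolve a true spectrum dN/dE_p with the Gaussian resolution
    to obtain dN/dE_ex at one measured energy e_ex."""
    de = (e_hi - e_lo) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        e_p = e_lo + i * de
        s = sigma_e(e_p)
        gauss = (math.exp(-(e_ex - e_p) ** 2 / (2.0 * s * s))
                 / math.sqrt(2.0 * math.pi * s * s))
        weight = 0.5 if i in (0, n_steps) else 1.0   # trapezoid endpoints
        total += weight * spectrum(e_p) * gauss
    return total * de
```

Smearing a flat spectrum should return roughly the same flat value away from the integration edges, a quick consistency check on the Gaussian normalisation.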
\subsection{Spectra of the DSNB and background events in Hyper-Kamiokande}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{fiducial_DSNB.pdf}
\caption{Expected spectra of the diffuse neutrino background produced by either NS-forming supernovae (dashed blue) or BH-forming unnovae (solid orange), compared with the background from atmospheric neutrinos and invisible muons
(dotted green). The total spectrum is shown as the solid black line.}
\label{fig:rates_in_detector}
\end{figure}
The dominant background to searches for the DSNB in water Cherenkov detectors above detected energies of 20~MeV comes from atmospheric neutrinos and invisible muons~\cite{Abe:2011ts},
and has a spectrum which rises with energy~\cite{Abe:2011ts,Beacom:2010kk}, which we incorporate into our MCMC study. It has been suggested~\cite{Beacom:2003nk} that doping the water target of Hyper-Kamiokande with gadolinium
could be used to reduce this background, by allowing electron anti-neutrinos to be identified by tagging the neutron produced in inverse beta decay. However, in this work we do not consider such a possibility,
since it is not clear whether such gadolinium doping will be implemented for the Hyper-Kamiokande experiment.
Below energies of 20~MeV there are expected to be huge backgrounds from reactor neutrinos and the products of spallation reactions~\cite{Abe:2011ts}, and so we set the low-energy threshold of our analysis at 20~MeV i.e. we consider
only $E_{\mathrm{ex}} \geq 20$~MeV.
In figure~\ref{fig:rates_in_detector} we show the expected spectra of the diffuse neutrino background produced by either BH-forming unnovae or NS-forming supernovae, compared with the size of the expected background in Hyper-Kamiokande
above energies of 20~MeV. The number of events expected from the DSNB depends strongly on the many parameters of equation~\ref{eqn:dsnb_all}, and here we use the fiducial values of these parameters outlined in section~\ref{sec:priorvals}.
Fortunately the spectrum from diffuse unnova neutrinos is expected to be larger than that from supernova neutrinos, due to the larger average energy predicted by unnova simulations~\cite{PhysRevD.78.083014,Fischer:2008rh,Sumiyoshi:2008zw,Lunardini:2009ya}. Even so, there is only a small window, between around 20~MeV and 30~MeV, where this flux is practically measurable before it becomes impossible to distinguish from the background.
Within this window there are expected to be approximately $200$ DSNB events after running Hyper-Kamiokande for 10~years, resulting from a neutrino flux of approximately $\Phi_{\mathrm{BH}} \sim 8 \cdot 10^{-2}$~cm$^{-2}$~MeV$^{-1}$~s$^{-1}$ at $20$~MeV energies from BH-forming unnovae.
\subsection{Choosing the prior distributions for the parameters\label{sec:priorvals}}
\begin{table}[t]
\centering
\begin{tabular}{ c || c | c }
\textbf{Parameter} & \textbf{Optimistic priors} & \textbf{Pessimistic priors} \\ \hline
$\bar{E}_{e \mathrm{NS}}$& $P \in [14,16]$ MeV & $P \in [14,16]$ MeV\\
\hline
$\bar{E}_{x \mathrm{NS}}$& $P \in [17,19]$ MeV & $P \in [17,19]$ MeV \\
\hline
$\bar{E}_{e \mathrm{BH}}$& $P \in [23,25]$ MeV & $P \in [15,25]$ MeV\\
\hline
$\bar{E}_{x \mathrm{BH}}$& $P \in [23,28]$ MeV & $P \in [16,33]$ MeV \\
\hline
${L}_{e \mathrm{NS}}$& $P \in [4.5,5.5] \cdot 10^{52} \, \mathrm{ergs}$ &$P \in [4.5,5.5] \cdot 10^{52} \, \mathrm{ergs}$\\
\hline
${L}_{x \mathrm{NS}}$& $P \in [4.5,5.5] \cdot 10^{52} \, \mathrm{ergs}$& $P \in [4.5,5.5] \cdot 10^{52} \, \mathrm{ergs}$ \\
\hline
${L}_{e \mathrm{BH}}$& $P \in [12,14] \cdot 10^{52} \, \mathrm{ergs}$ &$P \in [0,20] \cdot 10^{52} \, \mathrm{ergs}$\\
\hline
${L}_{x \mathrm{BH}}$& $P \in [0.35,0.45] {L}_{e \mathrm{BH}}$ &$P \in [0.3,1] {L}_{e \mathrm{BH}}$ \\
\hline
$R_0$& $P \in [0.8,1.2] \cdot 10^{-4} \mathrm{Mpc}^{-3} \mathrm{s}^{-1}$ & $P \in [0.8,1.2] \cdot 10^{-4} \mathrm{Mpc}^{-3} \mathrm{s}^{-1}$ \\ \hline
$\beta$& $P \propto N(\beta,\mu=3.28,\sigma=0.05)$ & $P \propto N(\beta,\mu=3.28,\sigma=0.05)$ \\ \hline
$\gamma$& $P \propto N(\gamma,\mu=0,\sigma=0.1)$ & $P \propto N(\gamma,\mu=0,\sigma=0.1)$ \\ \hline
$\bar{p}$& $P \in [0.5,0.68]$ & $P \in [0,0.68]$ \\ \hline
$f_0$& $P \in [0,1]$ & $P \in [0,1]$ \\ \hline
$f_1$& $P \in [0,1]$ & $P \in [0,1]$ \\ \hline
$f_4$& $P \in [0,1]$ & $P \in [0,1]$
\end{tabular}
\caption{Priors for each of our parameters in either the optimistic or pessimistic case. Priors are flat within the range and zero outside unless otherwise stated, and $N(x,\mu,\sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left[ \frac{- (x - \mu)^2}{2 \sigma^2} \right]$ represents a normal distribution with mean $\mu$ and standard deviation $\sigma$.}
\label{priors}
\end{table}
We are interested primarily in $R_{\mathrm{BH}}(z)$, while the remaining parameters are essentially nuisance parameters. Due to the large number of variables we perform an MCMC over our 15
parameters listed in table~\ref{table_params} by comparing theoretical predictions from equation~\ref{eqn:dsnb_all} to simulated data. For the simulated data we take fiducial values of
$\bar{E}_{e \mathrm{NS}} = 15$~MeV, $\bar{E}_{x \mathrm{NS}} = 18$~MeV, $\bar{E}_{e \mathrm{BH}} = 23.6$~MeV, $\bar{E}_{x \mathrm{BH}} = 24.1$~MeV, ${L}_{e \mathrm{NS}} = 5 \cdot 10^{52} \, \mathrm{ergs}$,
${L}_{x \mathrm{NS}} = 5 \cdot 10^{52} \, \mathrm{ergs}$, ${L}_{e \mathrm{BH}} = 12.8 \cdot 10^{52} \, \mathrm{ergs}$, ${L}_{x \mathrm{BH}} = 4.9 \cdot 10^{52} \, \mathrm{ergs}$,
$R_0 = 10^{-4} \mathrm{Mpc}^{-3} \mathrm{s}^{-1}$, $\beta = 3.28$, $\gamma = 0$, $\bar{p} = 0.68$ and $f_0 = f_1 = f_4 = 0.2$. We assume a 500 kilo-tonne water Cherenkov experiment similar to Hyper-Kamiokande with 10 years worth of data.
The simulated data is generated by sampling a discrete set of events randomly from the total theoretical spectrum (including the background) according to Poisson statistics, then binning them into a histogram. This means that the data-set will include fluctuations from the ``true'' spectrum which one would expect from real experimental data.
The simulated data-set is then essentially one example of what Hyper Kamiokande might see, given the fiducial parameter values chosen here.
Since these fluctuations are by their nature random, our results could in principle depend on the particular simulated data-set. To test this we have cross-checked our analysis with ten different simulated data-sets, and find very similar posterior
contours in each case, so it is reasonable to assume that the effect of the simulated data-set itself on our results is negligible.
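The sampling-and-binning procedure described above can be sketched as follows. The falling exponential spectrum, energy range and normalisation below are placeholders rather than the paper's actual DSNB prediction:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy "true" spectrum (events per MeV) standing in for the full theoretical
# DSNB-plus-background prediction; the falling exponential is illustrative only.
def true_spectrum(E):
    return 40.0 * np.exp(-E / 8.0)

edges = np.linspace(10.0, 30.0, 21)            # 1 MeV bins in detected energy
centres = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)

# Expected counts per bin; one Poisson realisation = one simulated data-set.
expected = true_spectrum(centres) * widths
observed = rng.poisson(expected)

# Re-running with different seeds gives the independent mock data-sets used to
# check that statistical fluctuations do not drive the posterior contours.
```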
Our choice of priors on our parameters is important, especially for the case of the BH-forming unnovae where we only have information from simulations~\cite{PhysRevD.78.083014,Fischer:2008rh,Sumiyoshi:2008zw,Lunardini:2009ya}.
In order to fully understand the effect of prior choice we consider two different scenarios. In the first scenario, labelled as ``optimistic'' we assume we have good
knowledge of the luminosity and spectra of both NS-forming supernovae and BH-forming unnovae, while for the second case labelled ``pessimistic'' we assume poor knowledge of such spectra. In both cases we assume no prior knowledge of
$R_{\mathrm{BH}}(z)$. The full prior list is given in table~\ref{priors}.
In the optimistic case we assume good knowledge of the average energies and luminosities of all neutrino flavours associated with unnovae, based on the results of simulations~\cite{PhysRevD.78.083014,Lunardini:2009ya}. However the pessimistic
case differs in that we take broad priors on these quantities associated with unnovae, which include parameter values for which the spectra of neutrinos from BH-forming unnovae are identical to those for NS-forming supernovae
or where ${L}_{e \mathrm{BH}}=0$, so that unnovae do not produce neutrinos in the vast amounts that supernovae do. In
this case the assumption is that the simulations are wrong and there is no difference between the neutrino emission in the two cases, so we can learn little to nothing about black holes from the DSNB.
The pessimistic case also differs in our choice for the prior on $\bar{p}$, where we assume that oscillation effects within the supernovae or unnovae are not well understood, while for the optimistic case we assume that
we will have a high-statistics measurement of $\bar{p}$ by the time precision measurements of the DSNB are made.
For the remaining parameters we assume that by the time Hyper-Kamiokande has enough data to make a precision study of the DSNB, we will have accurate knowledge of the parameters associated with neutrino emission from NS-forming
supernovae and of $R(z)$~\cite{Lien:2010yb}. In the former case this may be because a galactic supernova has occurred by this time, and its neutrino emission has been observed to high accuracy.
We have assumed flat priors on $f_0$, $f_1$ and $f_4$; however, in principle it should be possible to constrain the fractional function $f_{\mathrm{BH}}(z)$ using either direct measurements of the supernova rate as a function of redshift, or observations of the rate of stars which disappear, which may be related
to the rate of unnovae~\cite{Yuksel:2012zy,Kochanek:2008mp,Lien:2010yb,Adams:2016ffj,Adams:2016hit}. By the time the DSNB has been measured to high precision, searches for disappearing stars close to our own galaxy such as in refs.~\cite{Adams:2016ffj,Adams:2016hit} may be
advanced enough to give us prior information on $f_0$. Hence although
we assume flat priors on $f_0$, $f_1$ and $f_4$ this may not be appropriate for future studies when more data may be available. By combining the DSNB with information from other surveys our projections can only improve, and so
our study can be considered as a worst-case-scenario where only information from the DSNB is available for the black hole birth rate.
The prior range for $R_0$ is chosen based on the quoted uncertainty at $z = 0$ on the cosmic star formation rate from ref.~\cite{Lien:2010yb}. By the time the DSNB is measured to high-precision, it is likely that more advanced synoptic
surveys will have been performed, reducing the size of the uncertainty on $R_0$~\cite{Lien:2010yb}.
\subsection{Results of the MCMC projection for the DSNB \label{sec:results_mcmc}}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{rate_vals_werr_tyr=10_prior=optimistic_v4.pdf} \hspace{10pt}
\includegraphics[width=0.47\textwidth]{rate_vals_werr_tyr=10_prior=pessimistic_v4.pdf}
\caption{\textbf{Left:} One sigma (green) and three sigma (blue) regions for the birth rate of black holes $R_{\mathrm{BH}}(z)$ assuming optimistic priors, inferred from the DSNB with 10 years' worth of data in an experiment similar to Hyper-Kamiokande. \textbf{Right:} Same but for the pessimistic case.}
\label{fig:rates_from_dsnb}
\end{figure}
The results of our scan over the parameters described in the previous section are shown in figure~\ref{fig:rates_from_dsnb} and in the corner plots of figures~\ref{fig:corner_op} and \ref{fig:corner_pess}, which are described in more detail in appendix~\ref{app:MCMC}. As is clear from figure~\ref{fig:rates_from_dsnb}, in the optimistic case
we know the unnovae neutrino spectrum well enough to infer strong bounds on the black hole birth rate $R_{\mathrm{BH}}(z)$ at least up to redshift $z = 1$. In contrast for the pessimistic case the constraints are much weaker at $95\%$ confidence,
since we no longer claim to know the unnovae neutrino spectrum.
The difference between the priors can be understood from the corner plots of figures~\ref{fig:corner_op} and \ref{fig:corner_pess}.
As can be seen in figure~\ref{fig:corner_pess},
there is a degeneracy between $f_0$ or $f_1$ and $\bar{E}_{e \mathrm{BH}}$ and between $f_1$ and ${L}_{e \mathrm{BH}}$, which weakens the bounds on $f_0$ and $f_1$ in the pessimistic case. For example if for the pessimistic case ${L}_{e \mathrm{BH}}$
can be close to zero, then $f_1$ needs to be much larger as a result. For the optimistic case, such values of ${L}_{e \mathrm{BH}}$ and $\bar{E}_{e \mathrm{BH}}$ are not allowed, as they contradict results from simulations of
BH-forming collapse events, and so the bounds on $f_0$ and $f_1$ are much tighter.
This degeneracy is also the reason why the posterior for $\bar{E}_{e \mathrm{BH}}$ is not centred on its fiducial value: it is affected by the potential for $f_0$ or $f_1$ to take values above $0.2$. Specifically, as shown
in the two-dimensional plots of $f_0$ or $f_1$ vs. $\bar{E}_{e \mathrm{BH}}$,
the larger $f_0$ or $f_1$ gets the smaller $\bar{E}_{e \mathrm{BH}}$ needs to be such that the unnova spectrum still provides a good fit to the simulated data, since both parameters change the total expected flux of unnovae neutrinos.
Hence since the one-dimensional plot of $\bar{E}_{e \mathrm{BH}}$ is an integral of any of the two-dimensional plots over the other parameter (e.g. $f_1$ or $f_0$) the peak of the distribution is shifted to lower values.
If $f_0$ and $f_1$ were fixed at their fiducial values, then the posterior distribution of $\bar{E}_{e \mathrm{BH}}$ would
be centred on its fiducial value.
Indeed $f_0$ and $f_1$ have some degeneracy with each other; however, this is partly broken because the energy in equation~(\ref{eqn:dsnb_all}) is redshifted, meaning that the spectrum of neutrinos from $z=1$ differs from
the spectrum of those produced near redshift $z = 0$.
There is also a degeneracy between $\bar{p}$ and $f_0$ which weakens the bounds on the BH birth rate at low redshift values, particularly for the pessimistic prior set. This arises from the fact that a value of $\bar{p}$ close to zero means that the neutrino spectra from both NS
and BH forming collapse events are harder, which mimics the effect on the tail of the DSNB caused by having a different value of $f_0$.
These are the strongest degeneracies between our parameters and $f(z)$, which is why we do not show all of the parameters in figures~\ref{fig:corner_op} and \ref{fig:corner_pess}.
For both sets of priors $f_4$ is poorly constrained, and therefore so is $R_{\mathrm{BH}}(z)$ approaching $z = 4$. This can be seen in figures~\ref{fig:corner_op} and \ref{fig:corner_pess}, for example in the two-dimensional
posterior plots where the contours vary only a small amount with a changing $f_4$, and in the one-dimensional plot where the posterior value changes little between $f_4=0$ and $f_4=1$, as compared with $f_0$ or $f_1$.
This poor constraint is due to several factors: it partly results from the suppression of the rate at larger redshifts by the cosmological factor
multiplying $\Omega_m$, and partly also because the energies of the neutrinos from $z = 4$ have been redshifted to much smaller values where the cross section for detection $\sigma_{\mathrm{IB}}(E)$ is smaller, and where the supernovae and unnovae components
are more difficult to separate.
Our MCMC shows that, as expected, the precision to which we can infer $R_{\mathrm{BH}}(z)$ from the DSNB depends crucially on how well we understand the spectrum of neutrinos from BH-forming unnovae. If we trust simulations of unnovae~\cite{PhysRevD.78.083014,Fischer:2008rh,Sumiyoshi:2008zw,Lunardini:2009ya} then ${L}_{e \mathrm{BH}}$ and $\bar{E}_{e \mathrm{BH}}$ are known well enough to fix $R_{\mathrm{BH}}(z)$. However in the pessimistic case, $R_{\mathrm{BH}}(z)$ is only weakly constrained, since we are unable to exploit even a high-precision measurement of the DSNB in an experiment like Hyper-Kamiokande.
\section{Combining posteriors from the MCMC with data from LIGO \label{sec:ligo}}
After the detection of gravitational waves from mergers of BH-BH binaries by LIGO~\cite{TheLIGOScientific:2016htt,Abbott:2016blz}, we are entering a period where the merger rate of black holes can be measured with potentially unprecedented precision~\cite{Elbert:2017sbr,Kovetz:2016kpi}.
The current value of the BH-BH merger rate from LIGO, $\mathcal{R}_{\mathrm{BH-BH}}$, lies within the range $9 - 240 \, \mathrm{Gpc}^{-3} \mathrm{yr}^{-1}$ at $90\%$ confidence~\cite{TheLIGOScientific:2016pea}.
In the previous section we saw that the birth rate of black holes $R_{\mathrm{BH}}(z)$ can be inferred from the DSNB with future neutrino experiments such as Hyper-Kamiokande. This function is related to the merger rate of black holes at the present time $t_0$ by the following equation,
\begin{equation}
\mathcal{R}_{\mathrm{BH-BH}} = \frac{\epsilon}{2} \int_0^{t_0} \mathrm{d} t \, R_{\mathrm{BH}}(t_0 - t) P(t),
\label{eqn:R_eq_1}
\end{equation}
where we have written $R_{\mathrm{BH}}(z)$ as a function of time $t$ and $P(t)$ is the distribution of expected merger times for BH-BH binary pairs,
which can be obtained from simulations of binary mergers~\cite{Eldridge:2016ymr,Mandel:2015qlu,deMink:2016vkw,Elbert:2017sbr,Dominik:2012kk}. The latter usually takes a form close to a Normal distribution centred on an average merger
time around several Gyr.
The factor $\epsilon$ is the merger fraction: the fraction of black holes which lead to merger events at the present day, i.e.\ the ratio of the merger rate density today to the density of black holes available to merge.
This will be several orders
of magnitude smaller than unity due to e.g. the small fraction of black holes expected to be in binary systems.
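A minimal numerical sketch of this convolution is given below, using a toy constant birth-rate history and a merger-time distribution following the near-normal form quoted above, centred on a few Gyr. The centre, width, integration range and birth-rate history are all illustrative assumptions:

```python
import numpy as np

# Toy birth-rate history R_BH(t) in arbitrary units, with t the cosmic time in
# Gyr: zero at very early times, constant afterwards. A stand-in for the
# posterior curves inferred from the DSNB, not a physical model.
def R_birth(t):
    return np.where(t > 4.0, 1.0, 0.0)

def merger_rate_today(eps, t0=13.8, mu=6.0, sigma=1.0, n=4000):
    """Evaluate (eps / 2) * integral_0^t0 R_birth(t0 - t) P(t) dt on a grid.

    P(t) is a normal distribution in merger time, mimicking the near-normal
    simulation-based distributions mentioned in the text; mu, sigma and t0
    are illustrative assumptions. Simple Riemann-sum integration.
    """
    t = np.linspace(0.0, t0, n)
    dt = t[1] - t[0]
    P = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    P /= P.sum() * dt                      # renormalise P on [0, t0]
    return 0.5 * eps * float(np.sum(R_birth(t0 - t) * P) * dt)
```

With these toy inputs essentially all of the merger-time distribution overlaps the epoch where the birth rate is non-zero, so the result is close to $\epsilon/2$ times the (unit) birth rate.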
In ref.~\cite{Elbert:2017sbr} it was shown that with a theoretical prediction for $R_{\mathrm{BH}}(t)$ one can use $\mathcal{R}_{\mathrm{BH-BH}}$ to place constraints on the unknown quantity $\epsilon$.
Although this is perfectly reasonable, there is no reason \emph{a priori} to assume any particular redshift dependence for the black hole birth rate.
Hence we instead take a data-driven approach and use the $R_{\mathrm{BH}}(z)$ inferred from our MCMC study of the DSNB.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{R_vs_ep_tyr=10_prior=optimistic_projection_v5.pdf} \hspace{10pt}
\includegraphics[width=0.47\textwidth]{R_vs_ep_tyr=10_prior=pessimistic_projection_v5.pdf}
\caption{The merger rate of black hole binaries at the present time calculated using equation~(\ref{eqn:R_eq_1}) versus the merger fraction $\epsilon$, using the merger time distribution from simulations~\cite{Mandel:2015qlu,deMink:2016vkw}. The horizontal lines show
upper and lower limits on the merger rate density from either LIGO measurements (green)~\cite{TheLIGOScientific:2016pea,Abbott:2016blz,TheLIGOScientific:2016htt,Elbert:2017sbr} or projections for LIGO running after six years (red)~\cite{Kovetz:2016kpi}.
The filled blue region shows all merger rates consistent with the three sigma interval of $R_{\mathrm{BH}}(z)$, and the filled light green region is the one-sigma interval.}
\label{fig:merger_frac_with_sim_func}
\end{figure}
In order to infer the black hole merger rate from the birth rate $R_{\mathrm{BH}}(z)$ we need to know the distribution of merger time-scales $P(t)$. Here we use results from simulations~\cite{Eldridge:2016ymr,Mandel:2015qlu,deMink:2016vkw,Elbert:2017sbr,Dominik:2012kk}
to estimate this function, which has the merger time distributed between $4$~Gyr and $10$~Gyr with a maximum of the probability distribution at $6$~Gyr (where $z \approx 0.6$). This leads to our results shown in figure~\ref{fig:merger_frac_with_sim_func}. The filled blue/green region shows all present-day BH-BH merger rates consistent with the three/one sigma region of $R_{\mathrm{BH}}(z)$ from figure~\ref{fig:rates_from_dsnb}.
The intersection of this region with the upper and lower bounds from LIGO gives the approximate bounds on the merger fraction $\epsilon$.
Since the LIGO collaboration provide the full posterior for their inferred BH-BH merger rate in ref.~\cite{TheLIGOScientific:2016pea} we combine this with our posterior from the MCMC on the BH birth rate as a function of redshift
to obtain statistically robust predictions for $\epsilon$. We do this by scanning over $\mathcal{R}_{\mathrm{BH-BH}}$ and $R_{\mathrm{BH}}(z)$, each time calculating the value of $\epsilon$ according
to equation~(\ref{eqn:R_eq_1}), and calculating the combined posterior by multiplying the posterior for $\mathcal{R}_{\mathrm{BH-BH}}$ from the LIGO collaboration with our own from the MCMC. For each value of $\epsilon$ the maximum
value of this combined posterior gives the resulting posterior distribution for $\epsilon$ which we then use to calculate confidence intervals.
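The posterior-combination step can be sketched numerically. The two Gaussians below are toy stand-ins for the real LIGO and MCMC posteriors (all centres and widths are invented), and for simplicity the merger-time distribution is taken to be sharply peaked so that each scan point maps to a single $\epsilon = 2 \mathcal{R}_{\mathrm{BH-BH}} / R_{\mathrm{BH}}$:

```python
import numpy as np

# Toy stand-ins for the two posteriors: the real ones come from the LIGO
# collaboration and from the DSNB MCMC. Centres and widths are invented.
def post_ligo(R_merge):            # posterior on the BH-BH merger rate
    return np.exp(-0.5 * ((R_merge - 100.0) / 30.0) ** 2)

def post_dsnb(R_birth):            # posterior on the birth rate entering eq. (R_eq_1)
    return np.exp(-0.5 * ((R_birth - 1e4) / 2e3) ** 2)

# Scan over both quantities; with a sharply peaked merger-time distribution
# each pair maps to a single merger fraction eps = 2 * R_merge / R_birth.
R_merge = np.linspace(10.0, 250.0, 400)
R_birth = np.linspace(2e3, 2e4, 400)
Rm, Rb = np.meshgrid(R_merge, R_birth)
eps = 2.0 * Rm / Rb
weight = post_ligo(Rm) * post_dsnb(Rb)     # combined (unnormalised) posterior

# Profile over the scan: for each eps bin keep the maximum combined weight,
# giving a one-dimensional (unnormalised) posterior in eps.
bins = np.linspace(eps.min(), eps.max(), 100)
idx = np.digitize(eps.ravel(), bins)
profile = np.zeros(bins.size + 1)
np.maximum.at(profile, idx, weight.ravel())
```

Confidence intervals on $\epsilon$ then follow from the one-dimensional `profile` in the usual way.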
With the current upper and lower bounds on the merger rate from LIGO we are able to set a limit on the merger fraction at the level of $3 \cdot 10^{-5} \leq \epsilon \leq 5 \cdot 10^{-2}$ for the optimistic prior set and
$1.5 \cdot 10^{-5} \leq \epsilon \leq 5 \cdot 10^{-1}$ for the pessimistic prior set, both at the $3 \sigma$ level.
Given that our DSNB study assumes an experiment similar to Hyper-Kamiokande running for 10 years, a scenario which is not currently practical, we consider the possibility that by the time neutrino experiments have collected
enough data, gravitational wave experiments should have a significantly better measurement of $\mathcal{R}_{\mathrm{BH-BH}}$. For example, in ref.~\cite{Kovetz:2016kpi} it was shown that after six years of running LIGO should be able to measure
$\mathcal{R}_{\mathrm{BH-BH}}$ to a precision of $11.85$~Gpc$^{-3}$yr$^{-1}$ at one sigma. Assuming that this results in a Gaussian posterior on $\mathcal{R}_{\mathrm{BH-BH}}$ with a standard deviation of $11.85$~Gpc$^{-3}$yr$^{-1}$, we derive
confidence intervals for $\epsilon$ in the same way as for the current LIGO result.
As can be seen from figure~\ref{fig:merger_frac_with_sim_func} this allows the constraints on $\epsilon$ to be tightened significantly, to the range $2 \cdot 10^{-4} \leq \epsilon \leq 3 \cdot 10^{-2}$ at $3 \sigma$
confidence for the optimistic prior set and $1 \cdot 10^{-4} \leq \epsilon \leq 2 \cdot 10^{-1}$ at $3 \sigma$ confidence for the pessimistic prior set.
Alternatively if we have no prior knowledge of the merger time-scale we can follow ref.~\cite{Elbert:2017sbr} in making the simplifying assumption that $P(t)= \delta(t - \tau)$ in equation~(\ref{eqn:R_eq_1}) leading to the simplified expression,
\begin{equation}
\mathcal{R}_{\mathrm{BH-BH}} = \frac{\epsilon}{2} R_{\mathrm{BH}}(t_0 -\tau),
\label{eqn:R_eq_2}
\end{equation}
where now $\tau$ is the (unknown) merger time of the binary system.
In this case we use equation~(\ref{eqn:R_eq_2}) to combine the bounds on the merger rate $ \mathcal{R}_{\mathrm{BH-BH}}$ from LIGO with our constraint on the black hole birth rate from our MCMC study in the previous section. Shown in figure~\ref{fig:rate_vs_timescale}
are the allowed regions for the black hole merger rate as a function of merger time-scale $\tau$ for two different values of $\epsilon$, from our MCMC, compared with the LIGO measurement of the BH-BH merger rate.
If the merger timescale is unknown, then any value of $\epsilon$ which yields a merger rate density today consistent with the LIGO measurement, for some value of the merger timescale, is allowed.
For the optimistic prior set our projection is that $\epsilon$ can be constrained within the range $2 \cdot 10^{-5} \leq \epsilon \leq 5 \cdot 10^{-1}$ using the current LIGO merger rate and $2 \cdot 10^{-5} \leq \epsilon \leq 2 \cdot 10^{-1}$ for the 6-year projection,
while for the pessimistic prior the bounds are projected to be $3 \cdot 10^{-5} \leq \epsilon \leq 6 \cdot 10^{-1}$ using the LIGO measured rate and $3 \cdot 10^{-5} \leq \epsilon \leq 3 \cdot 10^{-1}$ for the 6-year projection, all at the $1 \sigma$ level. Without knowing the
merger time-scale distribution these constraints are poor, and at any higher level of confidence $\epsilon$ is essentially unconstrained.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{ligo_merger_rate_tyr=10_prior=optimistic_v3.pdf} \hspace{10pt}
\includegraphics[width=0.47\textwidth]{ligo_merger_rate_tyr=10_prior=pessimistic_v3.pdf}
\caption{Similar to figure~\ref{fig:merger_frac_with_sim_func} but for the case where the black hole binary merger timescale is unknown. The merger rate of black hole binaries calculated from equation~(\ref{eqn:R_eq_2}) for two different values of the merger fraction $\epsilon$, assuming that all black hole binaries have the same unknown merger time-scale. Horizontal lines show
upper and lower limits on the merger rate density from either LIGO measurements (dashed)~\cite{TheLIGOScientific:2016pea,Abbott:2016blz,TheLIGOScientific:2016htt,Elbert:2017sbr} or projections for LIGO running after six years (solid)~\cite{Kovetz:2016kpi}.}
\label{fig:rate_vs_timescale}
\end{figure}
Comparing figures~\ref{fig:merger_frac_with_sim_func} and \ref{fig:rate_vs_timescale} it is clear that knowledge of the merger timescale distribution of black hole binaries $P(t)$ vastly reduces the uncertainty on $\epsilon$ for both sets of priors.
Hence in order to place strong constraints on $\epsilon$ the merger time-scale for black hole binaries needs to be known.
Our constraints on the merger fraction $\epsilon$ could be improved upon in several ways:
the first is if the LIGO collaboration were able to constrain the merger time-scale of each event, or if we otherwise knew this time-scale to better precision. The second is through additional information on the black hole birth
rate e.g. through precision measurements of the supernova or unnova rate~\cite{Yuksel:2012zy,Kochanek:2008mp,Lien:2010yb}.
A third method would be through theoretical predictions or models of the black hole birth rate, which could complement data from the DSNB.
Indeed a disadvantage of our approach is that we are not able to make predictions for the black hole birth rate in individual galaxies, and so we cannot exploit the directional sensitivity of LIGO to BH-BH mergers, but theoretical
models could help with this~\cite{Elbert:2017sbr}.
In addition we have made the assumption that the neutrino spectrum and flux vary only between NS- and BH-forming collapse events, not within these categories. This seems to be a plausible assumption based on results
from simulations~\cite{PhysRevD.78.083014}, but may not hold in practice; relaxing it could be incorporated into a more advanced MCMC analysis to improve precision.
\section{Conclusion \label{sec:conc}}
In this work we have shown that neutrinos and gravitational waves present complementary probes of black hole physics~\cite{Lunardini:2009ya,Yuksel:2012zy,Kovetz:2016kpi,Elbert:2017sbr}, since both can travel cosmological distances without being significantly
perturbed~\cite{Baker:2016reh}. We found that neutrino experiments such as Hyper-Kamiokande will become effective probes of black hole physics through precision measurements of the DSNB,
allowing the black hole birth rate to be determined to a precision which is not possible otherwise.
When combined with data from experiments looking for gravitational waves, such as LIGO~\cite{Abbott:2016blz,TheLIGOScientific:2016htt}, the black hole merger fraction can also be measured.
We have performed an MCMC projection for measuring the black hole birth rate $R_{\mathrm{BH}}(z)$ from the DSNB, detected in an experiment like Hyper-Kamiokande running for 10 years.
The high-energy tail of the DSNB should contain neutrinos from BH-forming core collapse events, known as unnovae, and so the size and spectral shape of this tail is an effective probe of the BH birth rate. This is shown in figure~\ref{fig:rates_in_detector}.
However there are as many as $15$ parameters involved in this calculation (see table~\ref{table_params}), and this work is the first to take all of these fully into account.
Since our only knowledge of the BH-forming unnova neutrino spectrum comes from simulations~\cite{PhysRevD.78.083014,Fischer:2008rh,Sumiyoshi:2008zw,Lunardini:2009ya}, we have performed two MCMC analyses with different sets of priors, shown in table~\ref{priors}.
Our optimistic prior set worked on the assumption that the unnova spectrum is well-understood, leading to strong constraints on $R_{\mathrm{BH}}(z)$, as shown in figure~\ref{fig:rates_from_dsnb}.
However our pessimistic set led to much weaker constraints on the black hole birth rate, since it had wide priors on the unnova neutrino spectrum and flux.
By combining our posterior distributions for $R_{\mathrm{BH}}(z)$ with data from the LIGO experiment~\cite{Abbott:2016blz,TheLIGOScientific:2016htt} on the BH-BH merger rate, and with projections for its future precision,
we calculated projected constraints on $\epsilon$, the (unknown) ratio of the observed merger rate density to the density of black holes which were born long enough ago to be available to result in merger events today, as can be seen in figures~\ref{fig:merger_frac_with_sim_func} and~\ref{fig:rate_vs_timescale}.
For the optimistic prior set our best projected constraint after ten years of LIGO and Hyper-Kamiokande data is at the level of $2 \cdot 10^{-4} \leq \epsilon \leq 3 \cdot 10^{-2}$ at $3 \sigma$ confidence and for the
pessimistic priors it is $1 \cdot 10^{-4} \leq \epsilon \leq 2 \cdot 10^{-1}$ at $3 \sigma$ confidence, for the case of figure~\ref{fig:merger_frac_with_sim_func} where the BH binary merger time-scale is known.
We note that a measurement of the black hole birth rate from the DSNB on its own may also provide information on
the formation of black holes, e.g. on the progenitor masses of the stars which collapse into black holes, though this requires further study.
It also provides complementary information on $f(z)$ to searches in the optical spectrum for disappearing stars~\cite{Adams:2016ffj,Adams:2016hit}.
\acknowledgments
We thank Thomas Dent and Christopher Kochanek for comments on the manuscript and Ilya Mandel for useful comments on the merger rate of black holes.
The research leading to these results has received funding from the European Research Council through the project DARKHORIZONS under the European Union's Horizon 2020 program (ERC Grant Agreement no.648680). The work of MF was supported partly by the STFC Grant ST/L000326/1.
UK-based filmmaker/photographer David Newton regularly hits the road to work single-handedly on productions and he explains how he relies on G-Technology storage solutions to give him the secure and speedy backup on the go that he needs.
One of the key things that evolving imaging technology has enabled in recent years is the growing relationship between stills and motion production, along with the liberation that has arrived on the back of gear becoming ever more compact and portable. It has allowed content creators such as David Newton to become hybrid media professionals with the flexibility to travel wherever a job might take them. But while the actual production of stills and footage has been made easier, it has also raised the issue of backup and storage: a crucial consideration, and something that can be tricky if you're working out in the wilds.
This has become even more of an issue in recent times with the proliferation of cameras that output 4K footage and the introduction of Apple's ProRes Raw. While the latter hands creatives more post production control than ever before, which of course is something that's been universally welcomed, it's still a benefit that comes at a cost, in the form of increased file sizes.
"In the stills world the move across to Raw happened several years ago," David explains, "and although many professional photographers were initially against it because of the extra storage requirements it brought it rapidly became obvious that the advantage of having what was essentially a digital negative to work with made this way of operating virtually standard.
"The introduction of ProRes Raw promises to do the same thing for filmmakers and I've now invested in an Atomos Shogun Inferno to record to and am enjoying the extra flexibility this format offers at the post production stage. For me, what is particularly impressive is how Apple has made ProRes Raw integrate perfectly with Final Cut, the editing package I use, and it just helps to make my whole workflow so much easier."
To cope with the ever-rising amount of data that he's producing, David has had to take positive steps to upload and securely store everything as soon as possible after it has been created. This has meant fitting out a van to serve as a mobile office, kitted out with G-Technology's state-of-the-art storage solutions. "I like to be light, fast and mobile and G-Technology products allow me to be exactly that. They're light, they're rugged and they allow me to be entirely location independent."
This setup was an invaluable resource when the latest assignment called for a trip into the heart of the Peak District in central England to film professional rock climber and competition route setter Yann Genoux in action. The remote and desolate setting perfectly complemented the subject but added to the challenge, and it required a huge amount of ingenuity and improvisation for a single operator to achieve the variety of shots required. At one point David was sitting in a cradle attached to steel pegs hammered into a sheer rock face several meters in the air to achieve the angle he wanted looking directly down on Yann as he made his climb, a precarious shooting position that called for a strong nerve and a steady pair of hands.
At regular intervals throughout the day David headed back to his van to back up his SanDisk Extreme Pro CFast cards to his G-Technology G-Speed Shuttle SSD, using the device's Thunderbolt 3 connection to ensure the speed of transfer he required. This compact and highly portable piece of kit is equipped with up to 16TB of solid state storage and can run at a scorchingly fast 2800MB/s, allowing multi-camera footage to be edited in real time and to be quickly exported with incredible efficiency. This in turn enables 4K, 8K, VR, high dynamic range (HDR) and high frame rate (HFR) footage to be worked on in a single location.
Due to its SSD technology the device is also rugged and built for safe and easy travel between an on-site production location and the studio for post production, giving David powerful, transportable storage that maximises workflow efficiency.
What's in the van?
The G-SPEED Shuttle SSD comes in RAID 5 out of the box and supports RAID 0, 1, 5, 10 and 50 to provide a versatile and flexible storage solution, while dual Thunderbolt 3 ports allow up to five additional devices to be daisy-chained so that it's possible to stay connected to multiple drives, 4K displays and more through a single laptop connection, an awesome capability.
Alongside the G-SPEED Shuttle SSD in the van was the G-Technology G-Drive Mobile Pro, which also offers transfer rates up to 2800MB/s, the maximum speed Thunderbolt 3 has to offer. This allows footage to be natively edited in real-time, or for as much as a terabyte of content to be transferred in seven minutes or less. The device is also built for location work, offering shock-resistant storage in a durable case with three-meter drop protection, a 1000lb crush-proof rating, and designed with handpicked components to sustain the rigors of travel.
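As a quick back-of-the-envelope check of the quoted transfer time (assuming decimal units, i.e. 1TB = 1,000,000MB):

```python
# Back-of-the-envelope check of the quoted transfer time, assuming decimal
# units (1 TB = 1,000,000 MB) and the drives' peak rate of 2800 MB/s.
terabyte_mb = 1_000_000
rate_mb_s = 2800
seconds = terabyte_mb / rate_mb_s
print(round(seconds / 60, 1))   # prints 6.0 -- consistent with "seven minutes or less"
```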
The fact that David was working with the fastest drives on the market, both capable of up to 2800MB/s transfer speeds, was exactly what was required to make a 4K ProRes Raw workflow viable. It also took up remarkably little space, ensuring that the restricted room available inside the van was perfectly adequate for everything to be comfortably accommodated.
"The set up allowed me to ingest my footage exceptionally quickly," says David. "I also didn't have to worry about transcoding; even though I was working in a Raw format I could drop it straight into my editing system and then move backwards and forwards and play in real time, even add adjustments also in real time, and there was absolutely no lag.
"There's no denying that moving to an SSD-based workflow does call for an investment, but it's an investment in saving time. By saving the lag time in editing I get that investment back and it allows me to get on with my project, to turn it around very quickly and to move on to the next one. Working with a G-Technology SSD-based workflow has absolutely revolutionized the way that I work."
Camera: Canon EOS C300 Mark II with Canon lenses
Memory Cards: SanDisk Extreme Pro CFast cards
Recorder: Atomos Shogun Inferno with G-Technology Atomos Master Caddy 4K
Software: Apple ProRes Raw and Final Cut Pro X
Tripod: Vinten Flowtech 75
Workstation: Apple MacBook Pro 15
Storage: G-Technology G-Speed Shuttle SSD and G-Technology G-Drive Mobile Pro
Televisual - ProRes RAW Workflow - Accelerated by SSD
package net.aggregat4.javatags.content;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
/**
 * A tag with a name, attributes and child content. The type parameter C is
 * the common supertype of the nodes (attributes and content) passed in.
 */
public class Tag<C extends Node> implements Content {

    private final String name;
    // false for void elements (e.g. <br>), which render no children and no closing tag
    private final boolean closingTag;
    private final List<Attribute> attributes;
    private final List<Content> content;

    public Tag(String name, boolean closingTag, C[] nodes) {
        this.name = Objects.requireNonNull(name);
        this.closingTag = closingTag;
        this.attributes = getAttributes(nodes);
        this.content = getChildren(nodes);
    }

    private List<Content> getChildren(C[] contents) {
        // Text nodes are also content
        return Arrays.stream(contents)
                .filter(c -> c instanceof Content)
                .map(Content.class::cast)
                .collect(Collectors.toList());
    }

    private List<Attribute> getAttributes(C[] contents) {
        return Arrays.stream(contents)
                .filter(c -> c instanceof Attribute)
                .map(Attribute.class::cast)
                .collect(Collectors.toList());
    }

    protected void renderOpeningTag(Appendable appendable) throws IOException {
        appendable.append("<");
        appendable.append(name);
        for (Attribute attr : attributes) {
            attr.render(appendable);
        }
        appendable.append(">");
    }

    protected void renderClosingTag(Appendable appendable) throws IOException {
        appendable.append("</");
        appendable.append(name);
        appendable.append(">");
    }

    @Override
    public void render(Appendable appendable) throws IOException {
        renderOpeningTag(appendable);
        if (closingTag) {
            for (Content c : content) {
                c.render(appendable);
            }
            renderClosingTag(appendable);
        }
    }
}
\section{Introduction}
Topological quantum computation (TQC) is currently the most promising approach to scalable, fault-tolerant quantum computation. In recent years, the focus has been on TQC with Kitaev's toric code~\cite{kitaev2003fault}, due to its high threshold to noise~\cite{dennis2002topological,wang2003confinement} and its amenability to planar architectures with nearest-neighbour interactions.
To encode and manipulate quantum information in the toric code, a variety of techniques drawn from condensed matter contexts have been utilised. In particular, some of the efficient approaches for TQC with the toric code rely on creating and manipulating gapped-boundaries, symmetry defects and anyons of the underlying topological phase of matter~\cite{Rau06,TopoClusterComp,Raussendorf07,bombin2009quantum,bombin2010topologicaltwist,yoder2017surface,brown2017poking,koenig2010quantum,webster2020fault,barkeshli2019symmetry,barkeshli2013twist}. Despite great advances, the overheads for universal fault-tolerant quantum computation remain a formidable challenge.
It is therefore important to analyse the potential of TQC in a broad range of topological phases of matter, and attempt to find new computational substrates that require fewer quantum resources to execute fault-tolerant quantum computation.
In this work we present an approach to TQC for more general anyon theories based on the Walker--Wang models~\cite{walker20123+}. This provides a rich class of spin-lattice models in three-dimensions whose boundaries can naturally be used to topologically encode quantum information.
The two-dimensional boundary phases of Walker--Wang models
accommodate a richer set of possibilities than stand-alone two-dimensional topological phases realized by commuting projector codes~\cite{von2013three,burnell2013exactly}.
The Walker--Wang construction prescribes a Hamiltonian for a given input (degenerate) anyon theory, whose ground-states can be interpreted as a superposition over all valid worldlines of the underlying anyons. Focusing on a particular instance of the Walker--Wang model based on the 3-Fermion anyon theory (\textbf{3F} theory)~\cite{rowell2009classification,bombin2009interacting,bombin2012universal}, we show that the associated ground states can be utilised for fault-tolerant measurement-based quantum computation (MBQC)~\cite{raussendorf2001one,raussendorf2003measurement,van2006universal} via a scheme based on the braiding and fusion of lattice defects constructed from the symmetries of the underlying anyon theory. The Walker--Wang MBQC paradigm that we introduce provides a general framework for finding resource states for computation. For example, we show that the well-known topological cluster state scheme for MBQC of Ref.~\cite{Rau06} is produced when the toric code anyon theory is used as input to the Walker--Wang construction.
Owing to the rich set of symmetries of the \textbf{3F} theory, we find a universal scheme for TQC where all Clifford gates can be fault-tolerantly implemented and magic states can be noisily prepared and distilled~\cite{bravyi2005universal}.
In contrast to the 2D toric code, the full Clifford group is obtained by braiding and fusing symmetry twist defects, owing to the richer symmetry group of \textbf{3F}.
The \textbf{3F} Walker--Wang model --~and consequently the TQC scheme that is based on it~-- is intrinsically three-dimensional, as there is no commuting projector (e.g. stabilizer) code in two dimensions that realises the \textbf{3F} anyon theory~\cite{haah2018nontrivial,burnell2013exactly}.
As such, this TQC scheme is outside the paradigm of operations on a 2D stabilizer code,
and provides an important stepping stone towards understanding what is possible in general, higher-dimensional, topological phases.
We remark, however, that it remains possible to embed our scheme into an extended nonchiral anyon theory that can be implemented in a 2D stabilizer model (such as the color code).
We ground our computational framework in the context of symmetry-protected topological (SPT) phases of matter. In particular, we explore the relationship between the fault-tolerance properties of our MBQC scheme and the underlying 1-form symmetry-protected topological order of the Walker--Wang resource state. While the 3D topological cluster state (of Ref.~\cite{Rau06}) has the same $\mathbb Z_2^2$ 1-form symmetries as the \textbf{3F} Walker--Wang ground state, they belong to distinct SPT phases.
These examples provide steps toward a more general understanding of fault-tolerant, computationally universal phases of matter~\cite{DBcompPhases,Miy10, else2012symmetry,else2012symmetryPRL,NWMBQC,miller2016hierarchy,roberts2017symmetry,bartlett2017robust,wei2017universal,raussendorf2019computationally,roberts2019symmetry,devakul2018universal,Stephen2018computationally,Daniel2019,daniel2020quantum}.
Finally, we find another setting for the implementation of our computation scheme by demonstrating how symmetry defects can be introduced into the 2D subsystem color code of Bomb\'{i}n \cite{bombin2010topologicalsubsystem,bombin2009interacting}, which supports a \textbf{3F} 1-form symmetry and a \textbf{3F} anyon phase.
By demonstrating how the symmetries of the emergent anyons are represented by lattice symmetries, we open up the possibility of an alternative formulation of the \textbf{3F} TQC scheme based on deformation of a subsystem code in (2{+}1)D -- this may be of practical advantage for 2D architectures where 2-body measurements are preferred. Our construction of symmetry defects in this subsystem code may be of independent interest.
A certain limit of this model embeds our scheme into a subtheory of the anyons and defects supported by Bomb\'{i}n's color code~\cite{bombin2006topological,bombin2009interacting,yoshida2015topological,kesselring2018boundaries}.
\textit{Organisation.} In Sec.~\ref{sec3FPreliminaries} we review the \textbf{3F} anyon theory and its symmetries. In Sec.~\ref{sec3FTQCScheme} we present an abstract TQC scheme based on the symmetries of the \textbf{3F} theory. We show how to encode in symmetry defects, and how to perform a full set of Clifford gates along with state preparation by braiding and fusing them. In Sec.~\ref{sec3FWW} we show how the symmetry defects and TQC scheme can be realized in the \textbf{3F} Walker--Wang Hamiltonian. In Sec.~\ref{sec3FMBQC} we show that the \textbf{3F} Walker--Wang model and associated symmetry defects can be used as a resource for fault-tolerant measurement-based quantum computation. We begin by recasting MBQC based on the 3D topological cluster state~\cite{Rau06} in the Walker--Wang MBQC paradigm.
We also discuss the two models in the context of 1-form SPT phases. In Sec.~\ref{sec3FSubsystemCode} we show how the defects can be implemented in a 2D subsystem code, offering an alternative computation scheme based on code deformation. We conclude with a discussion and outlook in Sec.~\ref{sec3FDiscussion}.
\section{3-Fermion anyon theory preliminaries}\label{sec3FPreliminaries}
In this section we outline the \textbf{3F} anyon theory, its symmetries and the associated symmetry domain wall and twist defects.
We describe the fusion rules of the twists, including which anyons can condense on the twist defects.
\subsection{Anyon theory}
The \textbf{3F} anyon theory describes superselection sectors $\{ 1, \textcolor{red!80!black}{\psi_\text{r}},\textcolor{green!80!black}{\psi_\text{g}},\textcolor{blue!80!black}{\psi_\text{b}}\}$ with $\mathbb{Z}_2\times\mathbb{Z}_2$ fusion rules
\begin{align}
\psi_\alpha \times \psi_\alpha = 1 \, ,
&&
\textcolor{red!80!black}{\psi_\text{r}} \times \textcolor{green!80!black}{\psi_\text{g}} = \textcolor{blue!80!black}{\psi_\text{b}} \, ,
\end{align}
where $\alpha \in \{\text{r},\text{g},\text{b}\}$,
and modular matrices
\begin{align}\label{eq3FModularMatrices}
S = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}
\, ,
&&
T= \begin{pmatrix}
1 & & & \\
& -1 & & \\
& & -1 & \\
& & & -1
\end{pmatrix}
\, .
\end{align}
The above $S$ matrix matches the one for the anyonic excitations in the toric code, but the topological spins in the $T$ matrix differ as $\textcolor{red!80!black}{\psi_\text{r}},\textcolor{green!80!black}{\psi_\text{g}},\textcolor{blue!80!black}{\psi_\text{b}}$ are all fermions.
These modular matrices suffice to specify the gauge invariant braiding data of the theory~\cite{WALL1963}, while the $F$ symbols are trivial.
This anyon theory is \textit{chiral} in the sense that it is not consistent with a gapped boundary to vacuum~\cite{Moore1989,kitaev2006anyons}.
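As a quick numerical sanity check -- not part of the original derivation -- one can verify that, with the normalization $S \to S/\mathcal{D}$, this data satisfies the modular relations $S^2 = \openone$ (charge conjugation is trivial, as every anyon is its own antiparticle) and $(ST)^3 = e^{2\pi i c_-/8}\, S^2$ with $c_-=4$. A minimal NumPy sketch:

```python
import numpy as np

# Modular data of the 3F theory; S is normalized by 1/D = 1/2.
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=complex)
T = np.diag([1, -1, -1, -1]).astype(complex)

# Charge conjugation is trivial: S^2 = identity.
assert np.allclose(S @ S, np.eye(4))

# (S T)^3 = exp(2*pi*i*c_-/8) * S^2 with c_- = 4, i.e. (S T)^3 = -identity.
assert np.allclose(np.linalg.matrix_power(S @ T, 3), -np.eye(4))
```

Both assertions pass, consistent with the value of the chiral central charge derived below.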
The chiral central charge can be determined from the well-known relation
\begin{align}
\frac{1}{\mathcal{D}} \sum_{a\in \mathcal{C}} d_a^2 \theta_a = e^{2\pi i c_- / 8} \, ,
\end{align}
where $d_a$ is the quantum dimension and $\theta_a=T_{aa}$ is the topological spin of anyon $a$, ${\mathcal{D}^2=\sum_a d_a^2}$ defines the total quantum dimension, and $c_-$ is the chiral central charge.
In the \textbf{3F} theory $d_a=1$ for all anyons, as they are abelian, hence $\mathcal{D}=2$; moreover, $\theta_a=-1$ for every anyon besides the vacuum. Hence we find that the chiral central charge must take the value {$c_-=4$ mod 8}.
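Explicitly, inserting these values into the relation gives
\begin{align}
\frac{1}{\mathcal{D}} \sum_{a\in \mathcal{C}} d_a^2 \theta_a = \frac{1}{2}\left(1 - 1 - 1 - 1\right) = -1 = e^{2\pi i \cdot 4 / 8} \, .
\end{align}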
\subsection{Symmetry-enrichment}
\begin{figure}[t]%
\centering
\includegraphics[width=0.7\linewidth]{figSymmetryTwistBraid.pdf}
\caption{A fermion $\alpha \in \mathcal{C}$ is transformed by the symmetry group to $g\cdot \alpha \in \mathcal{C}$ under counter-clockwise braid with a twist $\mathcal{T}_g$. }
\label{figSymmetryAction}
\end{figure}
The \textbf{3F} theory has an $S_3$ group of global symmetries corresponding to arbitrary permutations of the three fermion species, all of which leave the gauge invariant data of the theory invariant.
We denote the group action on the three fermion types r,g,b, using cycle notation
\begin{align}
S_3 \cong \{ (),(\text{rg}),(\text{gb}),(\text{rb}),(\text{rgb}),(\text{rbg}) \} \, ,
\end{align}
with the usual composition, e.g. ${(\text{rg})\cdot(\text{gb})=(\text{rgb})}$.
The action on the anyons is then given by ${g\cdot 1=1}$ and ${g\cdot \psi_c=\psi_{g\cdot c}}$.
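The composition convention and the induced action on the anyons can be made concrete with a short sketch (illustrative Python, not part of the original text; the dictionary-based representation of permutations is purely for exposition):

```python
# S_3 elements as dictionaries mapping each color label to its image.
def cycle(*labels):
    """Build a permutation of {'r','g','b'} from a single cycle."""
    perm = {c: c for c in 'rgb'}
    for i, c in enumerate(labels):
        perm[c] = labels[(i + 1) % len(labels)]
    return perm

def compose(g, h):
    """(g . h)(x) = g(h(x)): apply h first, then g."""
    return {c: g[h[c]] for c in 'rgb'}

rg, gb, rgb = cycle('r', 'g'), cycle('g', 'b'), cycle('r', 'g', 'b')

# Composition convention used in the text: (rg) . (gb) = (rgb).
assert compose(rg, gb) == rgb

# Action on anyons: g . 1 = 1, g . psi_c = psi_{g(c)}.
def act(g, anyon):
    return '1' if anyon == '1' else 'psi_' + g[anyon[-1]]

assert act(rgb, 'psi_r') == 'psi_g'
```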
Restricting the action of a global symmetry to a subregion creates codimension-1 invertible domain walls~\cite{barkeshli2019symmetry}.
These codimension-1 invertible domain walls are labelled by the nontrivial group elements.
The codimension-2 topological symmetry twist defects that can appear at the open end of a terminated domain wall are labelled by their eigenvalues under the string operators for any fermions that are fixed by the action of the corresponding group element.
Hence there are two distinct symmetry defects of quantum dimension $\sqrt{2}$ for each 2-cycle permutation which we label
$\mathcal{T}_{(\text{rg})}^\pm,\mathcal{T}_{(\text{gb})}^\pm,\mathcal{T}_{(\text{rb})}^\pm,$ and there is only a single symmetry defect of quantum dimension $2$ for each of the 3-cycles which we label
$\mathcal{T}_{(\text{rgb})}$ and $\mathcal{T}_{(\text{rbg})}$. Here we have utilized the fact that the total quantum dimension of each symmetry defect sector matches that of the trivial sector, consisting of only the anyons, and that the $\mathcal{T}_{(cc')}^\pm$ defects are related by fusing in either of the fermions $\psi_c,\psi_{c'}$ that are permuted by the action of the domain wall.
The twist defect sectors of the full symmetry-enriched theory are then given by
\begin{align}
&\mathcal{C}_{S_3} = \{1,\psi_r,\psi_g,\psi_b\}
\oplus \{\mathcal{T}_{(\text{rg})}^+,\mathcal{T}_{(\text{rg})}^-\} \oplus~ \nonumber \\
&~ \{\mathcal{T}_{(\text{gb})}^+,\mathcal{T}_{(\text{gb})}^-\} \oplus \{\mathcal{T}_{(\text{rb})}^+,\mathcal{T}_{(\text{rb})}^-\} \oplus \{\mathcal{T}_{(\text{rgb})} \} \oplus \{ \mathcal{T}_{(\text{rbg})} \} \, ,
\end{align}
and the additional fusion rules for the defects are
\begin{align}
\mathcal{T}_{(cc')}^\pm \times \mathcal{T}_{(cc')}^\pm &= 1 + \psi_{c''}
\\
\mathcal{T}_{(cc')}^\pm \times \mathcal{T}_{(cc')}^\mp &= \psi_{c'}+\psi_{c''}
\\
\mathcal{T}_{(\text{rgb})} \times \mathcal{T}_{(\text{rbg})} &= 1+\psi_r+\psi_g+\psi_b
\, ,
\end{align}
for $c\neq c' \neq c'' \neq c$, and
\begin{align}
\mathcal{T}_{(cc')}^\pm \times \mathcal{T}_{(\text{rgb})^i} &= \mathcal{T}_{(cc')\cdot(\text{rgb})^i}^+ + \mathcal{T}_{(cc')\cdot(\text{rgb})^i}^-
\\
\mathcal{T}_{(\text{rgb})^{i}} \times \mathcal{T}_{(\text{rgb})^{i}} &= 2 \mathcal{T}_{(\text{rgb})^{-i}}
\, ,
\end{align}
for $i=\pm 1,$ and the related rules given by cycling the legs around a fusion vertex.
These fusion rules determine which anyon types $\mathcal{C}_{g}$ can condense on the $\mathcal{T}_g$ defects (the $\pm$ superscript makes no difference), as follows:
\begin{align}
\mathcal{C}_{(c c')} &= \{ 1,\psi_{c''}\} \, ,
\\
\mathcal{C}_{(c c' c'')} &= \{ 1,\textcolor{red!80!black}{\psi_\text{r}},\textcolor{green!80!black}{\psi_\text{g}},\textcolor{blue!80!black}{\psi_\text{b}}\} \, ,
\end{align}
where $c\neq c'\neq c''\neq c$.
We remark that the fusion algebra of each non-abelian $\mathcal{T}_{(cc')}^\pm$ twist defect with itself is equivalent to that of an Ising anyon or Majorana zero mode, reminiscent of the electromagnetic duality twist defect in the toric code~\cite{bombin2010topologicaltwist}.
A full description of the $G$-crossed braided fusion category~\cite{barkeshli2019symmetry} describing this symmetry-enriched defect theory is not needed for the purposes of this paper, as all relevant processes can be calculated using techniques from the stabilizer formalism. This theory has been studied previously; it is known to be anomaly free, and in particular the theory that results from gauging the full symmetry group has been calculated~\cite{barkeshli2019symmetry,Cui2016}.
We remark that in the following sections, for any 2-cycle $g\in S_3$ we define $\mathcal{T}_g$ (i.e. without a superscript) to be equal to $\mathcal{T}_g^+$.
In particular, we do not make explicit use of $\mathcal{T}_g^{-}$ to encode logical information (although they may arise due to physical errors).
\section{\textbf{3F} defect computation scheme}\label{sec3FTQCScheme}
\begin{figure}[t]%
\centering
\includegraphics[width=0.43\linewidth]{fig3F2DEncodings.pdf}
\caption{The elementary twist-defect configuration for encoding quantum information. One or two logical qubits are encoded if $g\in S_3$ is a 2-cycle (e.g. $(\text{rg})$) or 3-cycle (e.g. $(\text{rgb})$), respectively.}
\label{figBaseEncodings}
\end{figure}
In this section we demonstrate how to encode and process logical information using symmetry defects of the \textbf{3F} theory. Our scheme is applicable to any spin lattice model that supports \textbf{3F} topological order (possibly as a subtheory). Here we describe the scheme at the abstract level of an anyon theory with symmetry defects, with the microscopic details abstracted away.
In the following sections we demonstrate how to realise our scheme via MBQC using a Walker--Wang model and in the 2D subsystem color code of Bomb\'{i}n~\cite{bombin2009interacting}.
Our computational scheme is based on implementing a complete set of fault-tolerant Clifford operations using topologically protected processes -- which are naturally fault-tolerant to local noise, provided the twists remain well separated -- along with the preparation of noisy magic states. By Clifford operations we mean the full set of Clifford gates (the unitaries that normalise the Pauli group), along with single qubit Pauli preparations and measurements. The noisy magic states can be distilled to arbitrary accuracy using a post-selected Clifford circuit (provided the error rates are sufficiently small)~\cite{bravyi2005universal}.
We remark that the schemes we present are by no means optimal, and given a compilation scheme and architecture, the overheads are ripe for improvement.
The goal of this section is to prove Prop.~\ref{prop3FCliffordUniversality} -- the Clifford universality of \textbf{3F} defect theory -- which along with noisy magic state preparations offers a universal scheme for fault-tolerant quantum computation. We prove this proposition by breaking an arbitrary space-time configuration of domain walls and twists into smaller components that directly implement individual Clifford operations that generate the Clifford group and allow for Pauli preparations and measurements.
We begin by introducing defect encodings.
\subsection{Encoding in symmetry defects}\label{sec3FEncodings}
By nature of their ability to condense anyonic excitations, symmetry defects are topological objects and information can be encoded in them. To understand such encodings, we consider a two-dimensional plane upon which anyonic charges -- in our case, fermions in $\mathcal{C}$ -- and symmetry defects may reside. This setting is representative of the behaviour of anyons that arise as excitations on two-dimensional topologically ordered phases -- in our case the fermions appear as excitations on the boundary of the three-dimensional Walker--Wang model as well as in the low energy theory of a 2D subsystem code Hamiltonian. Processes that involve moving, braiding and fusing of anyons can be realized on the lattice by certain string operators. Such string operators can also transfer anyonic charge to (and between) twist defects, thereby changing their topological charge.
For a given configuration of twist defects $\{ \mathcal{T}_{g_i}^{(i)} ~|~ i \in \{1,\ldots,N\}, g_i \in S_3\}$, we can encode a quantum state in the joint fermionic charge of $g$-neutral subsets $\mathcal{I}\subseteq\{1,\ldots,N\}$ of them. By $g$-neutral, we mean that the subset of twist defects $\{\mathcal{T}_{g_i}^{(i)}~|~i \in \mathcal{I}\}$ must satisfy $\prod_{i\in \mathcal{I}}g_i = 1$. As the subsets are $g$-neutral, upon their fusion we are left with a fermionic charge $c\in\mathcal{C}$. These possible post-fusion charge states give us a basis for our encoded state space, and the dimension of the logical state-space depends on the quantum dimension of the defects.
\textbf{$g$-encodings:} To be more concrete we fix a twist configuration that acts as the fundamental encoding unit, known as the $g$-encoding $\mathcal{E}_g$, where ${1 \neq g\in S_3}$. In the following, all twists are of the $+$-type, where relevant. The encoding is defined by two twist pairs ${\mathcal{E}_g = \{\mathcal{T}_{g}^{(1)},\mathcal{T}_{g^{-1}}^{(2)},\mathcal{T}_{g^{-1}}^{(3)},\mathcal{T}_{g}^{(4)}\}}$ for $g\in S_3$ with vacuum total charge, as depicted in Fig.~\ref{figBaseEncodings}. The computational basis is defined by the fusion space of $\mathcal{T}_{g}^{(1)}$ and $\mathcal{T}_{g^{-1}}^{(2)}$: when $g$ is a 2-cycle, the two pairs encode a single qubit, and when $g$ is a 3-cycle the two pairs encode two qubits. This degeneracy follows from the fusion space of the twists
\begin{align}
\mathcal{T}_{(\text{rg})} \times \mathcal{T}_{(\text{rg})} &= 1 + \textcolor{blue!80!black}{\psi_\text{b}}, \\
\mathcal{T}_{(\text{rgb})} \times \mathcal{T}_{(\text{rbg})} &= 1 + \textcolor{red!80!black}{\psi_\text{r}} + \textcolor{green!80!black}{\psi_\text{g}} + \textcolor{blue!80!black}{\psi_\text{b}},
\end{align}
along with the constraint that all four twists must fuse to the vacuum containing no charge.
For instance, when $g=(\text{rg})$ the $\ket{\overline{0}}$ state corresponds to the fusion outcome $1\in \mathcal{C}$, and the $\ket{\overline{1}}$ state corresponds to outcome $\textcolor{blue!80!black}{\psi_\text{b}} \in \mathcal{C}$.
\begin{figure}[t]%
\centering
\includegraphics[width=0.95\linewidth]{fig3F2DEncodingsBoth.pdf}
\caption{Representative fermionic string operators for logical Pauli operators for a $g$-encoding. (left) A single qubit is encoded in four twists defined by $(\text{rg})\in S_3$. Note also in this case that the orientation has been removed from the domain wall as $(\text{rg})^{-1} = (\text{rg})$. Similar representative logical operators for twist defects based on the other 2-cycles $g\in S_3$ can be obtained by suitably permuting the fermionic string operator types. (right) Two qubits are encoded in four twists defined by $(\text{rgb}),(\text{rbg}) \in S_3$. }
\label{figQubitBasis}
\end{figure}
We remark that the exact location of the domain wall is not important in the encoding of Fig.~\ref{figBaseEncodings}; only their end points matter, as the action in Fig.~\ref{figSymmetryAction} is invariant under deformations of the domain wall. To encode qubits, one can choose any domain wall configuration with the same end points as the twist defects in Fig.~\ref{figBaseEncodings}.
The total fermionic charge of a subset of $g$-neutral defects can be detected without fusing the twists together, by instead braiding various fermionic charges around them. Such a process can be represented by a string operator (also known as a Wilson loop), and this loop can be used to measure the charge within the defects. Similarly, one can change the charge on each twist by condensing fermions into them, which is also represented by a string operator running between pairs of twists.
The string operators that represent Pauli logical operators $\overline{X}$, $\overline{Z}$ acting on the encoded qubits are represented in Fig.~\ref{figQubitBasis} -- they can be understood as transferring and measuring fermionic charge between different defect pairs. Such operators must anticommute based on the mutual semionic braiding statistics of the fermions they transport (i.e. braiding one fermion around another introduces a $-1$ phase). It is often convenient to utilise other representative logical operators. For instance, when $g$ is a 2-cycle (e.g. $g=(\text{rg})$), one can use either the $\textcolor{red!80!black}{\psi_\text{r}}$ or $\textcolor{green!80!black}{\psi_\text{g}}$ loops to measure the charge and hence define the logical $\overline{Z}$ operator. This follows from the fact that a $\textcolor{blue!80!black}{\psi_\text{b}}$ Wilson loop enclosing $\mathcal{T}_{(\text{rg})}^{(1)}$ and $\mathcal{T}_{(\text{rg})}^{(2)}$ acts as the logical identity, and swaps $\textcolor{red!80!black}{\psi_\text{r}}$ and $\textcolor{green!80!black}{\psi_\text{g}}$ loops upon fusion. In addition, logical $\overline{X}$ can be represented as a loop operator as per Fig.~\ref{fig3F2DEncodingsIsotope}.
\begin{figure}[t]%
\centering
\includegraphics[width=0.98\linewidth]{fig3F2DEncodingsIsotope.pdf}
\caption{Equivalence between different representative fermionic string operators for logical Pauli $\overline{X}$ operators for the two-twist-pair encoding for $g$-encodings -- in this case $g=(\text{rg})$. They can be verified by the fusion rule for $\textcolor{red!80!black}{\psi_\text{r}} \times \textcolor{green!80!black}{\psi_\text{g}} = \textcolor{blue!80!black}{\psi_\text{b}}$ along with the fact that $\mathcal{T}_{(\text{rg})}$ can condense $\textcolor{blue!80!black}{\psi_\text{b}}$ fermions. }
\label{fig3F2DEncodingsIsotope}
\end{figure}
More efficient encodings are possible. For instance, one can encode $N$ ($2N$) logical qubits into $(2N+2)$ $\text{2-cycle}$ (3-cycle) twists on the sphere, following for example Ref.~\cite{barkeshli2013twist}. Additionally, due to the rich symmetry defect theory of \textbf{3F} other encodings are possible, including a trijunction encoding which is outlined in App.~\ref{appOtherEncodings}.
\subsection{Gates by braiding defects}\label{sec3FGates}
We now show how to achieve encoded operations (gates, preparations and measurements) on our defect-qubits. In order to implement these operations, we braid twists to achieve gates, and fuse them to perform measurements. To understand such processes, we describe the locations of twists in (2{+}1)-dimensions. In (2{+}1)-dimensions, the twists -- which are codimension-2 objects -- can be thought of as worldlines. The domain walls -- which are codimension-1 objects -- can be thought of as world-sheets.
\begin{figure}[t]%
\centering
\includegraphics[width=0.28\linewidth]{fig3FDefectHadamard3.pdf} \qquad
\includegraphics[width=0.28\linewidth]{fig3FDefectSGate2.pdf}
\caption{One qubit gates for the defect encoding $\mathcal{E}_{(\text{rg})}$. Time moves upwards. (a) The Hadamard gate cyclically permutes the four twist defects. A domain wall plane is inserted to return the encoding to its standard form. (b) The $S$ gate consists of the exchange of $\mathcal{T}_{(\text{rg})}^{(3)}$ and $\mathcal{T}_{(\text{rg})}^{(4)}$. The same gates work for a $g$-encoding with $g$ a 2-cycle -- in this case the orientation of the surface does not matter and is not depicted. }
\label{fig3FHandP}
\end{figure}
\begin{lemma}\label{lemSingleQubit}
Braiding the twists of a $g$-encoding $\mathcal{E}_g$ with $g$ a 2-cycle generates the single qubit Clifford group $\langle H, S \rangle$ where $H$ is the Hadamard and $S$ is the phase gate.
\end{lemma}
\begin{proof}
The proof is presented in App.~\ref{appProofOfGates}.
\end{proof}
We remark that in the case that $g$ is a 3-cycle, each $\mathcal{E}_g$ encodes 2 logical qubits. Braiding in this case generates a subgroup of the Clifford group given by $\langle H_{(1)}H_{(2)}, S_{(1)}S_{(2)} \rangle$, where the subscript indexes the two logical qubits.
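The single-qubit statement of Lemma~\ref{lemSingleQubit} is also easy to verify numerically: modulo global phases, $H$ and $S$ generate a group of order 24, which is exactly the single-qubit Clifford group. The following sketch (illustrative Python, not part of the original text) enumerates the group by closure under the two generators:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def key(U):
    """Canonical form of U modulo global phase, as a hashable key."""
    flat = U.flatten()
    pivot = flat[np.argmax(np.abs(flat) > 1e-9)]  # first nonzero entry
    return tuple(np.round(flat * (abs(pivot) / pivot), 6))

# Breadth-first closure of <H, S> under left multiplication.
group = {key(np.eye(2)): np.eye(2, dtype=complex)}
frontier = list(group.values())
while frontier:
    new = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            k = key(V)
            if k not in group:
                group[k] = V
                new.append(V)
    frontier = new

assert len(group) == 24  # order of the single-qubit Clifford group mod phase
```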
The previous Lemma defines a generating set of single qubit braids. We present the space-time diagram for the Hadamard and $S$ gate braids in Fig.~\ref{fig3FHandP}. Such diagrams can be interpreted in terms of code deformations or in terms of measurement-based quantum computation. In the former, we depict the space-time location of twists and domain walls during a code deformation, wherein twists trace out (0{+}1)-dimensional worldlines, and domain walls trace out (1{+}1)-dimensional worldsheets. In the MBQC picture, we similarly depict the location of twists and domain walls, which correspond to lattice defects within the resource state, as we show explicitly in the next section. As in the previous section, the exact location of domain wall worldsheets is not important; only the locations of the twist worldlines matter, and they must remain well separated in order for logical errors arising from local noise processes to be suppressed.
For entangling gates we require encodings $\mathcal{E}_g$ and $\mathcal{E}_h$ with either $g\neq h$, or at least one of $g$, $h$ being a 3-cycle.
\begin{lemma}\label{lemEntangling}
Braiding of twists from two encodings $\mathcal{E}_g$ and $\mathcal{E}_h$ generates entangling gates if and only if either $g \neq h$ or at least one of $g$, $h$ is a 3-cycle.
\end{lemma}
\begin{proof}
See App.~\ref{appProofOfGates}.
\end{proof}
Similarly, we present the space-time diagram of the Controlled-$Z$ ($CZ$) gate between two qubits encoded within 2-cycle encodings $\mathcal{E}_{(\text{rg})}$ and $\mathcal{E}_{(\text{rb})}$ in Fig.~\ref{fig3FDefectCZ}. If one wishes to implement an entangling gate between two $(\text{rg})$-encoded qubits -- such as a $CZ$ gate -- one can utilize an $(\text{rb})$-encoded ancilla to achieve this, as shown by the circuit in Fig.~\ref{fig3FEntanglingCircuit} in App.~\ref{SecEntanglingCircuitViaAncilla}.
\begin{figure}[t]%
\centering
\includegraphics[width=0.45\linewidth]{fig3FDefectCZ.pdf}
\caption{Two qubit $CZ$ gate between pairs of qubits with $g$- and $h$- encodings with 2-cycles $g\neq h \in S_3$, e.g. $g=(\text{rg})$ on the left and $h=(\text{rb})$ on the right. Time moves upwards. Domain walls are colored according to the fermion that they leave invariant. In App.~\ref{SecEntanglingCircuitViaAncilla} we show how to generate entangling gates between two $(\text{rg})$-encoded qubits.}
\label{fig3FDefectCZ}
\end{figure}
We remark that if one implements the same operation as in Fig.~\ref{fig3FDefectCZ} using two $(\text{rgb})$-encodings (which encode 4-qubits) we obtain the operation $\overline{CZ}_{1,4}\overline{CZ}_{2,3}$, where qubits 1,2 belong to the left $(\text{rgb})$-encoding, and qubits 3,4 belong to the right $(\text{rgb})$-encoding.
One can understand the action of these gates by tracking representative logical operators through space-time. If the braid is implemented by code deformation, the logical mapping can be understood by tracking representative logical operators at each time slice through the space-time braid. In the context of MBQC, the observables that propagate logical operators through space-time are known as correlation surfaces -- they reveal correlations in the resource state that determine the logical action on the post-measured state (see for example \cite{Raussendorf07}). Correlation surfaces for each operation are determined in App.~\ref{appProofOfGates}.
\subsubsection{Braid representation and topological compilation}
These operations can be described purely in terms of the braid group acting on twists. This can be useful for \textit{topological compilation}, where one can find more efficient representations of general Clifford operations. We define the braids that give rise to the Hadamard, $S$ gate and $CZ$ gate in App.~\ref{SecEntanglingCircuitViaAncilla}.
\subsection{Completing a universal gate set}
To complete the set of Clifford operations, we require Pauli basis measurements, which are obtained by fusing twists together. To obtain a universal set of operations we show how to prepare noisy magic states, which can then be distilled using Clifford operations -- allowing for fault-tolerant universality~\cite{bravyi2005universal}.
While the 3-cycle encodings can be used, we focus on a universal scheme for quantum computation using $(\text{rg})$-encoded qubits for logical qubits, and $(\text{rb})$-encoded qubits as ancillas to mediate entangling gates.
\subsubsection{State preparation}\label{secMagicStatePreparation}
To complete the set of Clifford operations, we show how to perform topologically protected measurements in the $\overline{X}$ and $\overline{Z}$ bases. Measurements and preparations are simple time-reverses of each other. To prepare an eigenstate of $\overline{X}$ or $\overline{Z}$ we must nucleate out the twists of $\mathcal{E}_g$ such that we know the definite (fermionic) charge of $\mathcal{T}_g^{(1)} \times \mathcal{T}_{g^{-1}}^{(3)}$ and $\mathcal{T}_g^{(1)} \times \mathcal{T}_{g^{-1}}^{(2)}$, respectively. These basis preparations are depicted in Fig.~\ref{preparations}. In the case that $g$ is a 3-cycle, both qubits are prepared in the same basis. This completes the set of Clifford operations.
\begin{figure}[t]%
\centering
\includegraphics[width=0.75\linewidth]{figPreparationXZ.pdf}
\caption{(left) Preparing $\overline{X}$ eigenstates. (right) Preparing $\overline{Z}$ eigenstates. We depict the operation for a $(\text{rg})$-encoding. Time moves upwards. To prepare either $\overline{X}$ or $\overline{Z}$ eigenstates we need to prepare pairs of twists in definite charge states. This can be done by nucleating them out of vacuum so we know they fuse to the identity anyon (i.e. no charge). To obtain the respective measurements, we take the time-reverse diagram (i.e. $t\rightarrow -t$). This works identically for any $g\in S_3$, and we note that when $g$ is a 3-cycle, both encoded qubits are prepared (or measured) in the same basis.}
\label{preparations}
\end{figure}
\begin{proposition}\label{prop3FCliffordUniversality}
(Clifford universality of \textbf{3F} defect theory). For any 2-cycles $g\neq h\in S_3$ any Clifford operation can be implemented on $g,h$-encoded qubits by braiding and fusion of twists.
\end{proposition}
\begin{proof}
An arbitrary Clifford operation is given by either a Clifford unitary -- which can be generated by Hadamard, phase and $CZ$ -- or by a single qubit Pauli preparation or measurement. All Clifford unitaries can be implemented by Lemmas \ref{lemSingleQubit}, \ref{lemEntangling}, and the circuit identity of Fig.~\ref{fig3FEntanglingCircuit} in App.~\ref{SecEntanglingCircuitViaAncilla}. This, along with the Pauli $X$ and $Z$ basis preparations and measurements, as demonstrated in App.~\ref{appProofOfGates} completes the proof.
\end{proof}
To complete a universal set of gates we consider preparation of noisy $T$-states $\ket{T} = \frac{1}{\sqrt{2}}(\ket{\overline{0}}+e^{\frac{i\pi}{4}}\ket{\overline{1}})$. Such states can be distilled using post-selected Clifford circuits, and are sufficient to promote the Clifford gateset to universality~\cite{bravyi2005universal}. To prepare noisy $T$ states, we utilise a non-topological projection on the four twists of a $g$-encoding that are brought within a constant-width neighbourhood. Here we consider $g$ a 2-cycle for simplicity. The $\ket{T}$ state is the $+1$ eigenstate of $(\overline{X} +\overline{Y})/\sqrt{2}$, and thus its preparation can be achieved by measuring the observable $(\overline{X} +\overline{Y})/\sqrt{2}$ and post-selecting on the $+1$ outcome. To ensure such operations can be achieved in a local way, the four twists of the $g$-encoding must be brought within a small neighbourhood, after which they can be separated. In the Walker--Wang resource states introduced in the following section, one may separate the twists by a distance of one lattice spacing, such that the required logical action can be achieved by modifying only a single qubit measurement to $(X+Y)/\sqrt{2}$. Topologically, the magic state preparation is depicted in Fig.~\ref{figMagicState}.
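The single-qubit content of this projection can be checked directly: projecting any input state onto the $+1$ eigenspace of $(X+Y)/\sqrt{2}$ and renormalising yields $\ket{T}$ up to global phase. A minimal numpy sketch (our own illustration, not tied to any lattice implementation):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

# Observable (X + Y)/sqrt(2); its eigenvalues are +1 and -1.
M = (X + Y) / np.sqrt(2)

# Projector onto the +1 eigenspace: P = (I + M)/2.
P = (I + M) / 2

# Project an arbitrary input state (here |0>) and post-select on +1.
out = P @ np.array([1, 0], dtype=complex)
out = out / np.linalg.norm(out)

# Target magic state |T> = (|0> + e^{i pi/4}|1>)/sqrt(2)
T = np.array([1, np.exp(1j * np.pi / 4)], dtype=complex) / np.sqrt(2)

# Equality up to global phase is |<T|out>| = 1
overlap = abs(np.vdot(T, out))
print(round(overlap, 10))  # 1.0
```

Here $\ket{0}$ happens to project exactly onto $\ket{T}$; a generic input state would do so with some success probability, which is the post-selection cost mentioned above.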
\begin{figure}[t]%
\centering
\includegraphics[width=0.99\linewidth]{fig3FMagicStatePreparation2.pdf} \\
\caption{Non-topologically protected magic state preparation. Time moves upwards. Four twists are brought to close proximity such that a non-topological operation can be implemented (depicted by shaded neighbourhood around all four twists on the left-most figure) -- in this case to prepare $\ket{T} = \frac{1}{\sqrt{2}}(\ket{\overline{0}}+e^{\frac{i\pi}{4}}\ket{\overline{1}})$. The precise nature of the non-topological projection depends on the lattice implementation. Topologically, the projection can be understood as giving rise to a superposition of $\overline{X}$ and $\overline{Z}$ eigenstate preparations.}
\label{figMagicState}
\end{figure}
\section{Walker--Wang realisation of \textbf{3F}
computational resource states}\label{sec3FWW}
In order to implement the computational schemes of the previous section, we develop a framework for MBQC based on Walker--Wang resource states. In this section we introduce the \textbf{3F} Walker--Wang model of Ref.~\cite{burnell2013exactly}, which provides the resource state for our computation scheme. We describe how the symmetries of the \textbf{3F} anyon theory can be lifted to a lattice representation, as symmetries of the \textbf{3F} Walker--Wang model, along with how to implement symmetry domain walls and twists based on them. While we focus on the \textbf{3F} anyon theory, the Walker--Wang construction, along with our computation scheme, can be applied to general anyon theories. Indeed, the best-known example of fault-tolerant MBQC -- the topological cluster state model of Ref.~\cite{Rau06} -- is a special case of our construction, which arises when the toric code anyon theory is used as input, as described in Sec.~\ref{subsecComparisonToRauss}. We expect more exotic MBQC schemes can be found using this paradigm.
However, for general non-abelian anyon theories efficiently accounting for the randomness of measurement outcomes is an open problem.
\subsection{Hilbert space and Hamiltonian}
We utilise the simplified \textbf{3F} Hamiltonian defined in Ref.~\cite{burnell2013exactly}. We begin by considering a cubic lattice $\mathcal{L}$ with periodic boundary conditions. (For the \textbf{3F} theory, we do not need to trivalently resolve the cubic lattice as is done for general Walker--Wang models). The Hilbert space is given by placing a pair of qubits on each 1-cell of the cubic lattice $\mathcal{L}$. We refer to each 1-cell as a \textit{site}. We label a basis for each site $i$ as $\ket{x_1 x_2}_i, x_1,x_2 \in \mathbb{Z}_2$. Pauli operators acting on the first (second) qubit of site $i$ are labelled by $\sigma_i^{\alpha}$ ($\tau_i^{\alpha}$), where $\alpha \in \{ X, Y, Z\}$.
Following Ref.~\cite{burnell2013exactly}, the \textbf{3F} Hamiltonian is defined in terms of vertex and plaquette operators
\begin{equation}\label{eq3FWWHamiltonian}
H_{\textbf{3F}} = -\sum_{v\in V}( A_v^{(\textcolor{red!80!black}{\psi_\text{r}})} + A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}) -\sum_{f \in F} (B_f^{(\textcolor{red!80!black}{\psi_\text{r}})} + B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}),
\end{equation}
where the sum is over all vertices $V$ and plaquettes $F$ of the lattice, and
\begin{align}
A_v^{(\textcolor{red!80!black}{\psi_\text{r}})} &= \prod_{i \in \delta v} \sigma_i^{X}, &\quad B_f^{(\textcolor{red!80!black}{\psi_\text{r}})} &= \sigma_{O_f}^X \sigma_{U_f}^X \tau_{U_f}^X \prod_{i \in \partial f}\sigma_i^Z, \label{eqHamTermse}\\
A_v^{(\textcolor{green!80!black}{\psi_\text{g}})} &= \prod_{i \in \delta v} \tau_i^{X}, &\quad B_f^{(\textcolor{green!80!black}{\psi_\text{g}})} &= \sigma_{O_f}^X \tau_{O_f}^X \tau_{U_f}^X\prod_{i \in \partial f}\tau_i^Z.\label{eqHamTermsm}
\end{align}
Therein $\delta v$ consists of all edges that contain $v$ as a vertex, $\partial f$ consists of all edges belonging to the face $f$, and $O_f$ and $U_f$ are the unique edges determined by the plaquette $f$ as per Fig.~\ref{fig3FLatticeExample}. We also define the terms $A_v^{(\textcolor{blue!80!black}{\psi_\text{b}})} = A_v^{(\textcolor{red!80!black}{\psi_\text{r}})} A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}$, $B_f^{(\textcolor{blue!80!black}{\psi_\text{b}})} = B_f^{(\textcolor{red!80!black}{\psi_\text{r}})} B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}$, and one may add them to the Hamiltonian (with a negative sign) if desired. We remark that not all terms are independent, for example, taking products of plaquettes around a cube gives the product of a pair of vertex terms.
\begin{figure*}[t]%
\centering
\includegraphics[width=0.4\linewidth]{fig3FWWHamTerm} \hspace{1cm}
\includegraphics[width=0.35\linewidth,trim= 0 -1.75cm 0 0]{fig3FWWHamTerm2}
\caption{(top) Special edges $O_f$ and $U_f$ for each plaquette orientation. The coordinate system is shown, with each edge of the lattice being length 1. (bottom) Example of Hamiltonian terms $B_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}$.}
\label{fig3FLatticeExample}
\end{figure*}
On any closed manifold (i.e. without boundary), the ground state of $H_{\textbf{3F}}$ is unique~\cite{haah2018nontrivial}. In the Walker--Wang description, the ground state of $H_{\textbf{3F}}$ can be viewed as a weighted superposition over all valid anyonic worldlines, i.e. braided anyon diagrams that can be created from the vacuum via a sequence of local moves.
In particular, for each link, the basis of $\sigma_i^X$ and $\tau_i^X$ can be viewed as defining the presence or absence of fermionic $\textcolor{red!80!black}{\psi_\text{r}}$ and $\textcolor{green!80!black}{\psi_\text{g}}$ strings: $\ket{++}$ denotes the vacuum (identity anyon), $\ket{-+}$ denotes the presence of $\textcolor{red!80!black}{\psi_\text{r}}$, $\ket{+-}$ denotes the presence of $\textcolor{green!80!black}{\psi_\text{g}}$, and $\ket{--}$ denotes the presence of $\textcolor{blue!80!black}{\psi_\text{b}}$. The $A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}$, $A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}$ terms generate a $\mathbb Z_2\times \mathbb Z_2$ 1-form symmetry, ensuring valid fusion rules at each vertex (i.e. $\mathbb Z_2\times \mathbb Z_2$ fermion conservation), while the $B_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$, $B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}$ ``fluctuation'' terms ensure the ground-space is a superposition over all valid fermionic worldline configurations, with sign determined by the fermion braiding rules. Namely, the unnormalized ground state is
\begin{equation}
\ket{\psi_{\textbf{3F}}} = \sum_{c \in \mathcal{D}} \phi(c) \ket{c}, \quad \phi(c) = (-1)^{\text{linking}(c) + \text{writhe}(c)}
\end{equation}
where $\mathcal{D}$ is the set of all basis states corresponding to closed anyon diagrams with valid fusion rules that can be created from the vacuum, and $\text{linking}(c)$ ($\text{writhe}(c)$) is the linking number (writhe number) of the $\textcolor{red!80!black}{\psi_\text{r}}$ and $\textcolor{green!80!black}{\psi_\text{g}}$ fermion worldlines~\cite{walker20123+}.
\subsection{Symmetry of the \textbf{3F} Hamiltonian}
Recall that the \textbf{3F} theory has a symmetry $S_3 = \text{Aut}(\mathcal{C})$ with action on anyons given by $g \cdot 1 = 1$, $ g \cdot \psi_i = \psi_{g(i)}$, where $g(i)$ denotes the usual $S_3$ permutation action on $i \in \{\textcolor{red!80!black}{\text{r}}, \textcolor{green!80!black}{\text{g}}, \textcolor{blue!80!black}{\text{b}}\}.$
We now show that this symmetry can be lifted to a symmetry of the \textbf{3F} Walker--Wang model defined above.
The symmetry contains an onsite and non-onsite part. Namely, write the symmetry $S(g)$ of the \textbf{3F} Hamiltonian as
\begin{equation}
S(g) = V(g) U(g) \quad g \in S_3,
\end{equation}
where $U(g)$ is the onsite representation of $S_3$ and $V(g)$ is a locality preserving unitary (the deviation from onsiteness), which takes the form of a partial translation of qubits. If we write the basis for the 2 qubit space on each 1-cell as $\ket{\textbf{1}} := \ket{++}$, $\ket{\textcolor{red!80!black}{\psi_\text{r}}} := \ket{-+}$, $\ket{\textcolor{green!80!black}{\psi_\text{g}}} := \ket{+-}$, $\ket{\textcolor{blue!80!black}{\psi_\text{b}}} := \ket{--}$, then the onsite part of the symmetry acts as a permutation of the three fermionic basis states on each site,
\begin{align}\label{eqFermionPermuation}
g\cdot\ket{\psi_k}_i &= \ket{\psi_{g \cdot k}}_i, \quad g \in S_3, ~k \in \{\textcolor{red!80!black}{\text{r}},\textcolor{green!80!black}{\text{g}},\textcolor{blue!80!black}{\text{b}}\},
\end{align}
while preserving the vacuum, i.e., $g\cdot\ket{\textbf{1}}_i = \ket{\textbf{1}}_i$. This action can be represented by a Clifford unitary on each site.
The unitary, onsite representation $U$ of $S_3$ is defined by $U(g) = \otimes_i u_i(g)$, where we have generators
\begin{align}
u_i(\text{rg}) &= \text{SWAP}_{i_1, i_2} \label{eqOnsiteSym1} \\
u_i(\text{rgb}) &= \text{SWAP}_{i_1, i_2} \cdot \text{CNOT}_{i_1, i_2} . \label{eqOnsiteSym2}
\end{align}
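One can verify directly that these generators permute the fermionic $X$-basis labels as in Eq.~(\ref{eqFermionPermuation}) and satisfy the $S_3$ relations. The following is a minimal numpy sketch on a single site (the labelling and helper function are our own illustration):

```python
import numpy as np
from functools import reduce

kron = lambda *ops: reduce(np.kron, ops)
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# X-basis site states: |1> = |++>, |psi_r> = |-+>, |psi_g> = |+->, |psi_b> = |-->
states = {'1': kron(plus, plus),  'r': kron(minus, plus),
          'g': kron(plus, minus), 'b': kron(minus, minus)}

SWAP = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], float)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], float)  # control = first qubit

u_rg  = SWAP          # onsite generator for (rg)
u_rgb = SWAP @ CNOT   # onsite generator for (rgb)

def image(u, k):
    """Return the label of the state that u maps state k to (exact here, no phases)."""
    v = u @ states[k]
    return next(l for l, s in states.items() if np.allclose(v, s))

# (rg) swaps psi_r <-> psi_g, fixes the vacuum and psi_b
assert [image(u_rg, k) for k in '1rgb'] == ['1', 'g', 'r', 'b']
# (rgb) cycles r -> g -> b -> r, fixes the vacuum
assert [image(u_rgb, k) for k in '1rgb'] == ['1', 'g', 'b', 'r']
# S3 relations: (rg)^2 = e, (rgb)^3 = e, (rg)(rgb)(rg) = (rgb)^2
assert np.allclose(u_rg @ u_rg, np.eye(4))
assert np.allclose(u_rgb @ u_rgb @ u_rgb, np.eye(4))
assert np.allclose(u_rg @ u_rgb @ u_rg, u_rgb @ u_rgb)
print("onsite S3 representation verified")
```

Both generators are permutation matrices on the $X$-basis labels, so the representation is phase-free on the fermionic states.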
The non-onsite part $V$ of the representation is generated by
\begin{align}
V(\text{rg}) &= T_{\tau}(v) \quad \text{with} \quad v = (1,1,1), \label{eqGaugeTrans1} \\
V(\text{rgb}) &= I \label{eqGaugeTrans2},
\end{align}
where $T_{\tau}(v)$ is a partial translation operator acting on all $\tau$ qubits, shifting them in the $v = (x,y,z)$ direction, with coordinate basis defined in Fig.~\ref{fig3FLatticeExample}.
Notationally, we use only single parentheses when explicit group elements appear as representation arguments, e.g. ${V((\text{rg}))\equiv V(\text{rg})}$.
The partial translation operator has a well defined action on operators as a translation of their support. Namely, it can be defined factor-wise (with respect to the tensor product): for Pauli operators $\tau_u^{\alpha}$, $\sigma_u^{\alpha}$ at coordinate $u$ we have $T_{\tau}(v): \tau_u^{\alpha} \mapsto \tau_{u+v}^{\alpha}$, $\sigma_u^{\alpha} \mapsto \sigma_u^{\alpha}$, and this is extended by linearity.
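The factor-wise action on Pauli supports can be made concrete with a small bookkeeping sketch (the dict representation of a Pauli operator is our own hypothetical illustration, not the paper's notation):

```python
# A Pauli operator (up to phase) is represented as a dict mapping
# (species, coordinate) -> Pauli label, with species 'sigma' or 'tau'.
def T_tau(v, op):
    """Partial translation: tau factors shift by v, sigma factors stay put."""
    return {(sp, tuple(c + d for c, d in zip(coord, v)) if sp == 'tau' else coord): p
            for (sp, coord), p in op.items()}

# Example: a term with sigma^X at (0,0,0) and tau^Z at (0,0,0)
op = {('sigma', (0, 0, 0)): 'X', ('tau', (0, 0, 0)): 'Z'}
print(T_tau((1, 1, 1), op))
# {('sigma', (0, 0, 0)): 'X', ('tau', (1, 1, 1)): 'Z'}
```

This captures exactly the statement above: only the $\tau$ part of an operator's support is translated.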
\begin{proposition}\label{prop3FHamSymmetry}
The unitary representation $S$ of $S_3$ defined by Eqs.~(\ref{eqOnsiteSym1}) -- (\ref{eqGaugeTrans2}) is a symmetry of the Hamiltonian $H_{\textbf{3F}}$.
\end{proposition}
That $u_i$ is indeed a representation of $S_3$ is verified in App.~\ref{AppProofOfSymmetryRep}. We note that commutation of the symmetry with each Hamiltonian term is not strictly necessary: since $H_{\textbf{3F}}$ is a stabilizer model, it is sufficient to prove that the stabilizer group is preserved under the action of $S(g)$ for all $g\in S_3$. This can be verified by direct computation and we provide the proof in App.~\ref{AppProofOfProp3FHamSymmetry}. We remark that $S(g)$ induces a permutation on the terms $B_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$, $B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}$, $B_f^{(\textcolor{blue!80!black}{\psi_\text{b}})}$, given by $S(g) B_f^{(f_i)} S^{-1}(g) = B_f^{(f_{g\cdot i})}$.
Thus only the 3-cycles have an onsite representation while the 2-cycles require a non-onsite partial translation. One can trace the non-onsiteness to the particular choice of gauge for the input data to the Walker--Wang construction -- namely the $R$ symbols -- used to obtain the Hamiltonian $H_{\textbf{3F}}$. One can equally construct a Hamiltonian using the transformed data corresponding to the action of each symmetry element $g\in S_3$, all of which belong to the same phase and can be related by a locality preserving unitary. This additional locality preserving unitary is the origin of the non-onsite part of the symmetry.
In general, applying a global symmetry to an anyon theory results in a transformation of the gauge-variant data~\cite{barkeshli2019symmetry}. The Walker--Wang model based on this transformed data is in the same topological phase as the original model, implying the existence of a locality-preserving unitary to bring the symmetry transformed Hamiltonian back to the original Hamiltonian.
Combining the global symmetry transformation with this locality-preserving unitary promotes the global symmetry of the input anyon theory to a locality-preserving symmetry of the Walker--Wang Hamiltonian.
We remark that for the \textbf{3F} theory, and more general anyon theories, one can construct a (non-stabilizer) Hamiltonian representative (using the symmetry-enriched anyon theory data), where the symmetry is onsite~\cite{shawnthesis,williamson2016hamiltonian} -- even for anomalous symmetries~\cite{bulmash2020absolute}.
\subsubsection{Transforming the lattice}
For the \textbf{3F} Hamiltonian presented in Eq.~(\ref{eq3FWWHamiltonian}), the 3-cycles $(\text{rgb}), (\text{rbg}) \in S_3$ admit an onsite unitary representation, while the 2-cycles require a non-onsite (but nonetheless locality preserving) unitary. By transforming the lattice, we can express the symmetry action of the 2-cycles entirely as a translation. This simplifies the implementation of symmetry defects on the lattice. Namely, consider the translation operator
\begin{equation}\label{eqLatticeTransformation}
T(t), \quad t= \frac{1}{2}(1,1,1),
\end{equation}
that acts to translate all qubits in the $t$ direction (where again the $(x,y,z)$ coordinates are defined in Fig.~\ref{fig3FLatticeExample}). Consider the partial translation of all of the $\tau$ qubits such that they are shifted to faces of the cubic lattice. On this new lattice, qubits live on every face ($\tau$ qubits) and every edge ($\sigma$ qubits), and the \textbf{3F} Hamiltonian consists of a term for each vertex $v$, edge $e$, face $f$ and volume $q$:
\begin{align}
\tilde{A}_v^{(\textcolor{red!80!black}{\psi_\text{r}})} &= \prod_{e \in \delta v} \sigma_e^{X}, &\quad \tilde{B}_f^{(\textcolor{red!80!black}{\psi_\text{r}})} &= \sigma_{O_f}^X \sigma_{U_f}^X \tau_{f}^X \prod_{e \in \partial f}\sigma_e^Z, \label{eqModifiedHamTermse}\\
\tilde{A}_q^{(\textcolor{green!80!black}{\psi_\text{g}})} &= \prod_{f \in \partial q} \tau_f^{X}, &\quad \tilde{B}_e^{(\textcolor{green!80!black}{\psi_\text{g}})} &= \tau_{O_e}^X \tau_{U_e}^X \sigma_{e}^X \prod_{f \in \delta e}\tau_f^Z,\label{eqModifiedHamTermsm}
\end{align}
where $\delta e$ ($\delta v$) consists of all faces (edges) incident to the edge $e$ (vertex $v$), $\partial f$ ($\partial q$) consists of all edges (faces) in the boundary of the face $f$ (volume $q$), and $U_f, O_f$, and $U_e, O_e$ are edges and faces, respectively, depicted in Fig.~\ref{fig3FModifiedWWModel}.
\begin{figure}[t]%
\centering
\includegraphics[width=0.95\linewidth]{fig3FModifiedWWModel.pdf}
\caption{The \textbf{3F} Walker--Wang plaquette terms after translation of each of the $\tau$ qubits in the original lattice by $\frac{1}{2}(1,1,1)$. $\sigma$ qubits live on edges, while $\tau$ qubits live on faces. The supports of the terms $\tilde{B}_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $\tilde{B}_e^{(\textcolor{green!80!black}{\psi_\text{g}})}$ are shown on top and bottom, respectively. For a given face $f$, the edges $U_f, O_f$ are precisely those depicted that are not in the boundary of the face. Similarly, for a given edge $e$, the faces $U_e, O_e$ are those depicted that are not in the coboundary of the edge. The 1-form constraint terms $\tilde{A}_v^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $\tilde{A}_q^{(\textcolor{green!80!black}{\psi_\text{g}})}$ are given by a product of Pauli-$X$ operators on the star of a vertex and the boundary of a cube, respectively.}
\label{fig3FModifiedWWModel}
\end{figure}
On this lattice, the symmetry $S(\text{rg})$ can be entirely implemented by a lattice transformation:
\begin{equation}\label{eq3FTranslationSymmetry}
S(\text{rg}) = T(w), \quad w = \frac{1}{2}(\pm1, \pm1, \pm1),
\end{equation}
where it is understood that the $\pm$ sign for each direction can be chosen independently. The symmetry induces the correct permutation action on Hamiltonian terms: namely, $\tilde{B}_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $\tilde{B}_e^{(\textcolor{green!80!black}{\psi_\text{g}})}$ plaquettes are permuted, as are the 1-form generators $\tilde{A}_v^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $\tilde{A}_q^{(\textcolor{green!80!black}{\psi_\text{g}})}$.
We remark that there are other choices of translation vector that realise the symmetry. One can directly generate the lattice representations for the other 2-cycle symmetries by composing $S(\text{rg})$ and $S(\text{rgb})$.
\subsection{Construction of symmetry defects in stabilizer models for locality-preserving symmetries}\label{subsecWWSymmetryDefects}
Here we present a general construction for implementing symmetry defects in 3D stabilizer models, whenever the symmetry is given by a constant depth circuit with a potential (partial) translation.
The prescription leverages similar constructions of symmetry defects in 2D systems \cite{bombin2010topologicaltwist, barkeshli2019symmetry, cheng2016translational}.
The construction admits a direct generalisation to a wider class of locality preserving symmetries,
such as those realized by quantum cellular automata~\cite{haah2018nontrivial}, and we expect that it extends to more general topological commuting projector models.
In particular, we give a prescription for implementing symmetry defects for the $S_3$ symmetries of the \textbf{3F} Walker--Wang model.
\subsubsection{Codimension-1 domain walls}
Let us begin by implementing $g$-domain walls in a stabilizer Hamiltonian $H$ with a symmetry $S(g)$ represented by a locality preserving unitary. Consider for simplicity an infinite lattice, that is partitioned by a two-dimensional surface $D$ into two connected halves $L\cup R$ (for example, $D$ may be a lattice plane).
Our goal is to create a codimension-1 domain wall supported near $D$.
We decompose the Hilbert space as $\mathcal{H} = \mathcal{H}_{L} \otimes \mathcal{H}_{R}$, where $\mathcal{H}_{L}$ and $\mathcal{H}_{R}$ are the Hilbert spaces for the two halves, which we refer to as the left and right spaces. We require that the partition is such that there is a natural restriction to one of the half spaces, which without loss of generality we assume to be $R$. In particular, we require the restriction $S_{R}(g) = S(g)|_{R}$ of $S(g)$ to $\mathcal{H}_{R}$ to be a well defined map
\begin{equation}
S_{R}(g): \mathcal{H}_R \rightarrow \mathcal{H}_{R}.
\end{equation}
For any constant depth unitary circuit, there exists a well defined restriction that is unique up to a local unitary that acts within a small neighbourhood of $D$.
In the presence of translation symmetries we additionally require that the translation is injective on $\mathcal{H}_R$ -- that is, we require that the translation maps one half of the partition to itself.
Such transformations can be achieved for example if $D$ is a plane or multiple half-planes that meet.
For $D$ a lattice plane this accommodates half space translations orthogonal to $D$ that are injective but not surjective on $\mathcal{H}_R$.
With this restriction, the Hamiltonian with a $g$-domain wall is given by conjugating the Hamiltonian $H$ by the restriction of the symmetry $S_{R}(g)$.
We remark that the defect Hamiltonian differs from $H$ only near the plane $D$. Namely, the restriction $S_{R}(g)$ preserves all Hamiltonian terms that are supported entirely within $\mathcal{H}_{R}$ (as it is a symmetry of $H$), has no effect on the terms that are supported entirely within $\mathcal{H}_{L}$, but may have some nontrivial action on terms supported on both $\mathcal{H}_L$ and $\mathcal{H}_R$. The modified terms supported in the neighbourhood of $D$ realise the $g$-domain wall. Such modified terms commute with each other and the remainder of the Hamiltonian, since their (anti)commutation relations upon restriction to either side of $D$ are preserved by $S_R(g)$.
We remark that when $S(g)$ is a locality preserving, but not onsite, unitary the Hilbert space near the domain wall may be modified. In particular, for a symmetry involving translation, the new Hilbert space may be a strict subset of the old Hilbert space. That is, a subset of qubits in $\mathcal{H}_R$ near the domain wall $D$ may be ``deleted'' (for example if the translational symmetry is not parallel to the $D$~plane).
\subsubsection{Codimension-2 twist defects}\label{subsecTwistPrescription}
We now consider domain walls that terminate in codimension-2 twists. Consider a domain wall $D$ that has been terminated to create a boundary $\partial D$ which we assume is a straight line (in this way, $D$ no longer partitions the lattice into two halves). Let the Hamiltonian be written $H=\sum_{x\in I}h_x$ for some index set $I$, and $d=\max_{x\in I}\{\text{diam}(h_x)\}$ be the maximum diameter of any term, where $\text{diam}(h_x)$ is the diameter of the smallest ball containing the support of $h_x$ in the natural lattice metric.
Along the domain wall $D$ we can modify the Hilbert space and Hamiltonian terms following the previous prescription, provided they commute with the bulk Hamiltonian.
This works away from the boundary of the domain wall $\partial D$.
Specifically, one can replace all terms $h_x$ with support intersecting $D$ by $S_R(g) h_x S^{-1}_R(g)$, where again $S_R(g)$ is the restriction to one side of the domain wall, which is locally well defined away from $\partial D$.
In general, this procedure will break down for terms supported within a distance $d$ of $\partial D$, as the modified terms may no longer commute with the neighbouring bulk Hamiltonian terms and so are not added to the Hamiltonian. In order to ensure that all local degeneracy has been lifted in the neighbourhood of $\partial D$, we must find a maximal set of local Pauli terms that commute with the bulk Hamiltonian and domain wall terms. By stabilizer cleaning~\cite{bravyi2009no} there exists a generating set supported on the qubits within a neighbourhood of radius $d$ of $\partial D$, which we label by $N_d(\partial D)$. By Theorem IV.11 of Ref.~\cite{haah2018nontrivial}, this maximal set of local terms admits a translationally invariant generating set (we have assumed that $\partial D$, and thus the surrounding Hamiltonian terms, are translationally invariant along one dimension). We use such a generating set to define our terms along the twist.
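The linear-algebraic core of this step, finding all Paulis (up to phase) that commute with a given set of stabilizers, reduces to a nullspace computation over GF(2) in the binary symplectic representation: a Pauli $(x|z)$ commutes with $(x'|z')$ iff $x\cdot z' + z\cdot x' = 0 \bmod 2$. The following generic sketch (a toy 2-qubit example, not the \textbf{3F} lattice) illustrates the computation:

```python
import numpy as np

def gf2_nullspace(A):
    """Basis for the nullspace of A over GF(2), via Gaussian elimination."""
    A = A.copy() % 2
    m, n = A.shape
    pivots, r = [], 0
    for c in range(n):
        rows = [i for i in range(r, m) if A[i, c]]
        if not rows:
            continue
        A[[r, rows[0]]] = A[[rows[0], r]]   # move a pivot row into place
        for i in range(m):
            if i != r and A[i, c]:
                A[i] ^= A[r]                # eliminate column c elsewhere
        pivots.append(c)
        r += 1
        if r == m:
            break
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:                          # one basis vector per free column
        v = np.zeros(n, dtype=np.uint8)
        v[f] = 1
        for row, p in zip(A, pivots):
            if row[f]:
                v[p] = 1
        basis.append(v)
    return basis

def commutant(stabs, n):
    """All Paulis, as (x|z) vectors, commuting with each given stabilizer."""
    Lam = np.block([[np.zeros((n, n)), np.eye(n)],
                    [np.eye(n), np.zeros((n, n))]]).astype(np.uint8)
    return gf2_nullspace((np.array(stabs, dtype=np.uint8) @ Lam) % 2)

# Toy example on 2 qubits: single stabilizer XX = (x|z) = (1,1|0,0).
# Its commutant is 3-dimensional (spanned e.g. by XI, IX, ZZ up to products).
basis = commutant([[1, 1, 0, 0]], n=2)
print(len(basis))  # 3
```

For the twist terms one would restrict the candidate Paulis to $N_d(\partial D)$ and additionally select a mutually commuting subset; the sketch above shows only the commutant computation.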
\begin{figure}[t]%
\centering
\includegraphics[width=0.92\linewidth]{figDefectPlane.pdf}
\caption{Example of a domain wall plane $D$ for a symmetry $S(\text{rg})$ ending in a twist (depicted in solid blue) travelling in the $\hat{y}$ direction. The new Hilbert space contains no qubits on any of the shaded edges or faces, leaving a lattice dislocation. }
\label{fig3FDomainWall}
\end{figure}
\subsubsection{Planes meeting at seams and corners}
For the purposes of discretising domain walls to implement gates from Sec.~\ref{sec3FTQCScheme} on the lattice we are required to consider configurations of two or three domain wall planes that meet at 1D seams and 0D corners, along with twists defect lines that change directions at 0D corners.
If the planes are constructed using different symmetries $S(g)$ or different translations, then Hamiltonian terms in the neighbourhood of seams can be constructed in the same way as the twists (utilising Theorem IV.11 of Ref.~\cite{haah2018nontrivial}).
Hamiltonian terms in the neighbourhood of a corner where a twist changes direction, or where distinct domain wall planes meet, can again be computed by finding a maximal set of mutually commuting terms that commute with the surrounding Hamiltonian. Since such features are contained within a ball of finite radius, this is a problem of finite, constant size, and thus can be solved by exhaustive search.
\subsection{Symmetry defects in $H_{\textbf{3F}}$}\label{secWWSymmetryDefects}
\begin{figure}[t]%
\centering
\includegraphics[width=0.98\linewidth]{fig3FWWDefectTerms.pdf}
\caption{Example \textbf{3F} Walker--Wang terms along the ${(\text{rg}) \in S_3}$ domain wall depicted in Fig.~\ref{fig3FDomainWall}. The terms are color coded according to their support: blue shaded faces denote the domain wall $D$ -- upon which no qubits are supported; magenta shaded faces and edges denote the presence of $\tau^X$ and $\sigma^X$ respectively; yellow shaded faces and edges denote the presence of $\tau^Z$ and $\sigma^Z$ respectively. The top row of terms may be regarded as transformed versions of the right-most terms of Fig.~\ref{fig3FModifiedWWModel} that intersect the domain wall plane, while the bottom row are the transformed 1-form terms $\tilde{A}_v^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $\tilde{A}_q^{(\textcolor{green!80!black}{\psi_\text{g}})}$.}
\label{fig3FDomainWallTerms}
\end{figure}
Here we compute \textbf{3F} Hamiltonian terms along domain walls and twists for the 2-cycle symmetry $(\text{rg}) \in S_3$. The domain walls and twists corresponding to 3-cycles $(\text{rgb}),(\text{rbg}) \in S_3$ are simple to construct, involving no change to the Hilbert space or lattice due to the onsite nature of their representation. The remaining symmetry defects and twists can be constructed by direct analogy, combining those of $(\text{rg})$ with those of $(\text{rgb})$.
\subsubsection{\textbf{3F} domain walls and twists for $(\text{rg}) \in S_3$}
\begin{figure}[t]%
\centering
\includegraphics[width=0.98\linewidth]{fig3FWWTwistTerms}
\caption{Example \textbf{3F} Walker--Wang terms along the ${(\text{rg}) \in S_3}$ twist depicted in Fig.~\ref{fig3FDomainWall}. The twist travels along the central vertical edge, depicted by the dotted blue line. This term can be regarded as the transformed version of the right-most term of Fig.~\ref{fig3FDomainWallTerms} along the twist (obtained by multiplying a plaquette by its image under translation, each restricted to the qubits on the complement of the defect). Similarly, the modified 1-form operators contain $\tau^Y$ to ensure correct commutation. Other terms can be obtained by translating in the $\hat{y}$ direction, but we remark that these terms alone do not form a complete set. The color coding is identical to that of Fig.~\ref{fig3FDomainWallTerms} with the addition of $\tau^Y$ being denoted by chequered teal faces.}
\label{fig3FTwists}
\end{figure}
For clarity, we consider the modified lattice with qubits on faces and edges, whose Hamiltonian terms are given by Eqs.~(\ref{eqModifiedHamTermse}),~(\ref{eqModifiedHamTermsm}). Consider a domain wall $D$ given by a plane normal to the $n = (1,0,-1)$ direction constructed using the translation symmetry ${S(\text{rg}) = T(w)}$, ${w=\frac{1}{2}(1,1,-1)}$ (both vectors were chosen for visualisation purposes). The discretised version of the domain wall on the lattice is visualised in Fig.~\ref{fig3FDomainWall}.
The modified Hilbert space and Hamiltonian terms along the domain wall are depicted in Fig.~\ref{fig3FDomainWallTerms}.
We remark that there is a layer of qubits missing on the domain wall itself, arising from the restricted translation action away from (rather than parallel to) the domain wall.
Now consider a domain wall $D$ with boundary $\partial D$, for example along the $\hat{y} = (0,1,0)$ direction.
To find a set of local terms to gap out the twist, we consider modifying the plaquette terms whose supports intersect the twist line to make them commute with the domain wall and bulk terms.
The modified 1-form terms are then determined by constructing operators that commute with these modified plaquette terms and all other terms in the Hamiltonian (such that the product of modified and unmodified plaquette terms around a 3- or 0-cell still matches the product of a pair of modified and unmodified 1-form terms).
Such modifications can be done locally, following the discussion in Sec.~\ref{subsecTwistPrescription}.
In Fig.~\ref{fig3FTwists}, we depict an example of a modified version of the right-most term from Fig.~\ref{fig3FDomainWallTerms} along the twist.
We remark that the terms depicted in Fig.~\ref{fig3FTwists} alone do not form a complete set to gap out the twist.
One can also define these defects on the original Walker--Wang lattice, with two qubits per edge, by applying the inverse transformation of Eq.~(\ref{eqLatticeTransformation}).
\begin{figure*}[t]
\center
\includegraphics[width=0.48\linewidth]{fig3FSmoothBoundary2}
\includegraphics[width=0.48\linewidth]{fig3FSmoothBoundary}
\caption{Boundary of $H_{\textbf{3F}}$. The blue shaded region depicts vacuum (i.e. region with no qubits), and the bulk of $H_{\textbf{3F}}$ lies above the plane. On the left we depict the stabilizers on the boundary: truncated versions of $A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $B_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$ shaded in red, and $A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}$ and $B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}$ shaded in green. On the right we depict the support of logical operators of Eqs.~(\ref{eq3FLogicalOperators1}), (\ref{eq3FLogicalOperators2}) -- two cycles $c$, $c'$ are depicted by solid red and green lines, while the links belonging to $c_O$, $c'_O$ are depicted by dashed lines. For example, if we take periodic boundary conditions (such that the boundary is a torus), then the two operators $l^{\textcolor{red!80!black}{\psi_\text{r}}}_c$ and $l^{\textcolor{green!80!black}{\psi_\text{g}}}_{c'}$ form an anti-commuting pair of logical operators.}
\label{fig3FBoundary}
\end{figure*}
\subsection{Boundaries of $H_{\textbf{3F}}$}\label{secWWTimeLikeBoundaries}
Finally, we review boundaries of the \textbf{3F} Walker--Wang model.
On a manifold with boundary, the Walker--Wang model admits a canonical smooth boundary condition~\cite{walker20123+} that supports a topological phase described by the input anyon theory -- in this case the \textbf{3F} anyon theory, as described in Ref.~\cite{burnell2013exactly}.
To be more precise, one may terminate the lattice with smooth boundary conditions as depicted in Fig.~\ref{fig3FBoundary}. The Hamiltonian terms for the boundary can be obtained by truncating the usual bulk terms, see Fig.~\ref{fig3FBoundary}. The boundary supports a topology-dependent ground-space degeneracy of $2^{2g}$ for an orientable, connected boundary with genus $g$. We can view the ground-space of the boundary as a code with certain logical operators that form anti-commuting pairs. The logical operators come in two types. Let $c$ be a closed cycle on the boundary, then let
\begin{align}
l^{\textcolor{red!80!black}{\psi_\text{r}}}_c &= \prod_{i \in c} \sigma^Z_i \prod_{j \in c_O} \sigma_j^X \label{eq3FLogicalOperators1}\\
l^{\textcolor{green!80!black}{\psi_\text{g}}}_c &= \prod_{i \in c} \tau^Z_i \prod_{j \in c_O} \sigma_j^X \tau_j^X \label{eq3FLogicalOperators2}
\end{align}
where $c_O$ is a set of links ``over'' the cycle $c$, depicted by dashed lines in Fig.~\ref{fig3FBoundary}. Two operators $l^{\textcolor{red!80!black}{\psi_\text{r}}}_c$ and $l^{\textcolor{green!80!black}{\psi_\text{g}}}_{c'}$ anticommute if and only if $c$ and $c'$ intersect an odd number of times, while two operators of the same type always commute. Representative logical operators can be found by choosing nontrivial cycles $c$ of the boundary.
As described in Ref.~\cite{burnell2013exactly}, the \textbf{3F} anyons can be found as excitations on the boundary. Such excitations correspond to flipped boundary plaquettes $B_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}$, and can be created at the end of string operators obtained as truncated versions of the loop operators of Eqs.~(\ref{eq3FLogicalOperators1}),~(\ref{eq3FLogicalOperators2}). Further, symmetry defects from the bulk that intersect the boundary give rise to the natural 2D defects that behave as described in Sec.~\ref{sec3FTQCScheme}. Thus the boundary of the \textbf{3F} Walker--Wang model faithfully realises the \textbf{3F} anyon theory and its symmetry defects. In the following section we show how to perform fault-tolerant MBQC with these states.
\section{Fault-tolerant measurement-based quantum computation with Walker--Wang resource states}\label{sec3FMBQC}
Measurement-based quantum computation provides an attractive route to implement the topological computation scheme introduced in Sec.~\ref{sec3FTQCScheme}.
The computation proceeds by implementing single spin measurements on a suitably prepared resource state -- in this case the ground state(s) of the Walker--Wang model introduced in the previous section.
In this section we introduce the general concepts required to implement fault-tolerant MBQC with Walker--Wang resource states, focusing on the \textbf{3F} anyon theory example.
\subsection{Warm-up: topological cluster state MBQC in the Walker--Wang framework}\label{subsecComparisonToRauss}
The simplest and most well known example of fault-tolerant MBQC is the topological cluster state model from Ref.~\cite{Rau06}.
As a warm-up for what's to come, we explain how this model can be understood as a Walker--Wang model based on the toric code anyon theory.
Up to some lattice simplifications (which we show below), the topological cluster state model~\cite{Rau06} is prescribed by the Walker--Wang construction using the toric code anyon theory $\mathcal{C}_{\textbf{TC}} = \{1,e, m, \epsilon\}$ as the input. The toric code anyons emerge as the fundamental excitations of the toric code~\cite{kitaev2003fault}; they obey the following $\mathbb{Z}_2\times\mathbb{Z}_2$ fusion rules:
\begin{equation}
e \times m = \epsilon, \qquad e\times e = m \times m = 1,
\end{equation}
with modular $S$ matrix the same as that of \textbf{3F} in Eq.~(\ref{eq3FModularMatrices}), and $T$ matrix given by $T= \text{diag}(1,1,1,-1)$. The Walker--Wang construction can be used with this input to give a Hamiltonian with plaquette terms as per Fig.~\ref{figRaussendorfWWModel}~(top) along with the same vertex terms as Eqs.~(\ref{eqHamTermse}),~(\ref{eqHamTermsm}). To obtain the more familiar stabilizers of the three-dimensional topological cluster state of Ref.~\cite{Rau06} -- depicted in Fig.~\ref{figRaussendorfWWModel}~(bottom) -- we simply translate all $\tau$ qubits by $\frac{1}{2}(1,1,1)$, as in Eq.~(\ref{eqLatticeTransformation}).
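The distinction between the two theories can be checked numerically from their modular data. The sketch below assumes the standard $S$ matrix for these $\mathbb{Z}_2\times\mathbb{Z}_2$ anyon theories, with anyon ordering $(1,e,m,\epsilon)$ and $(1,\psi_\text{r},\psi_\text{g},\psi_\text{b})$, and verifies the relation $(ST)^3 = e^{2\pi i c_-/8}\, S^2$, which gives chiral central charge $c_- = 0$ for \textbf{TC} and $c_- = 4$ for \textbf{3F}:

```python
import numpy as np

# S matrix shared by the toric code (TC) and 3F theories; the T matrices
# differ because the topological spins of the nontrivial anyons differ.
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)
T_tc = np.diag([1.0, 1.0, 1.0, -1.0])   # spins of (1, e, m, eps)
T_3f = np.diag([1.0, -1.0, -1.0, -1.0])  # spins of (1, psi_r, psi_g, psi_b)

I4 = np.eye(4)
assert np.allclose(S @ S, I4)  # S^2 = charge conjugation = identity here
# (ST)^3 = exp(2*pi*i*c/8) * S^2 distinguishes the chiral central charges:
assert np.allclose(np.linalg.matrix_power(S @ T_tc, 3), I4)   # c = 0 mod 8
assert np.allclose(np.linalg.matrix_power(S @ T_3f, 3), -I4)  # c = 4 mod 8
```

The $-1$ phase for \textbf{3F} reflects its nonzero chiral central charge, consistent with the fact that it admits no gapped boundary in a strictly 2D commuting-projector model.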
The Walker--Wang construction provides a useful insight into topological quantum computation with the 3D cluster state. In particular, the ground state of the toric code Walker--Wang model (unique on any closed manifold) consists of a superposition over closed anyon diagrams. We interpret the basis states $\ket{++}, \ket{-+}, \ket{+-}, \ket{--}$ on each link as hosting $1,e,m, \epsilon$ anyons, respectively. The ground state is then
\begin{equation}
\ket{\psi_{\textbf{TC}}} = \sum_{c \in \mathcal{D}} \phi(c) \ket{c}, \quad \phi(c) = (-1)^{\text{linking}(c)}
\end{equation}
where $\mathcal{D}$ is the set of all basis states corresponding to closed anyon diagrams with valid fusion rules that can be created via local moves, and $\text{linking}(c)$ is the linking number of the $e$ and $m$ anyon worldlines.
The computation on this state proceeds by measuring all qubits in the local anyon basis (i.e. $\ket{++}, \ket{-+}, \ket{+-}, \ket{--}$), projecting it into a definite anyon diagram that we call a history state. As each measurement outcome is in general random, the history state produced is also random. This leads to an outcome-dependent Pauli operator that needs to be applied (or kept track of) to ensure deterministic computation. This Pauli operator is inferred from measurement outcomes of operators known as \textit{correlation surfaces} for each gate \cite{raussendorf2003measurement, Rau06}, which measure the anyon flux between different regions. The computation is fault-tolerant because of the presence of the $\mathbb Z_2\times \mathbb Z_2$ 1-form symmetry: errors manifest as violations of anyon conservation in the history state and can be accounted for and corrected.
To implement logical gates, one can use a combination of boundaries and symmetry defects to encode and drive computation. The anyon theory enjoys a $\mathbb Z_2$ symmetry: $e \leftrightarrow m$ (which on the usual cluster-state lattice with qubits on faces and edges can be realized by the same translation operator as Eq.~(\ref{eq3FTranslationSymmetry})). Twist defects corresponding to this $\mathbb Z_2$ symmetry can be implemented in this lattice using the prescription of Sec.~\ref{subsecWWSymmetryDefects} and can be braided and fused to implement logical gates. (Another method for constructing defects is given by Ref.~\cite{brown2020universal}, although it is distinct from the method proposed in Sec.~\ref{subsecWWSymmetryDefects}.) We remark that braiding these defects is not Clifford-complete. To make the scheme Clifford complete, one can introduce boundaries, of which there are two types (each boundary can condense either $e$ or $m$ anyons)~\cite{Rau06}.
In what follows, we describe the topological MBQC scheme based on the \textbf{3F} theory.
\begin{figure}[t]%
\centering
\includegraphics[width=0.95\linewidth]{figRaussendorfWWTerms}
\includegraphics[width=0.95\linewidth]{figRaussendorfWWTermsModified.pdf}
\caption{(top) The Walker--Wang construction applied to the toric code anyon theory $\mathcal{C}_{\textbf{TC}}$ gives the plaquette terms depicted above. Terms on different plaquettes can be obtained by translating and rotating according to the correct orientation, as depicted by the blue and red legs. (bottom) The 3D cluster state terms obtained after all $\tau$ qubits have been translated by $\frac{1}{2}(1,1,1)$. All terms are rotationally symmetric on this lattice. }
\label{figRaussendorfWWModel}
\end{figure}
\subsection{3-Fermion topological MBQC}
We now describe how to implement our \textbf{3F} topological quantum computation scheme using an MBQC approach based on Walker--Wang resource states.
A high level description of the computation scheme is depicted in Fig.~\ref{figWWMBQC}.
\subsubsection{The \textbf{3F} resource state}
The resource state upon which measurements are performed is given by the ground state of the Walker--Wang Hamiltonian $H_{\textbf{3F}}$ with defects as defined in Sec.~\ref{sec3FWW}, which is a stabilizer model.
This resource state can be understood as a blueprint for the computation and we denote the stabilizer group that defines it by $\mathcal{R} \leq \mathbb{P}_n$ (where $\mathbb{P}_n$ is the Pauli group on $n$ qubits). In particular, $\mathcal{R}$ is generated by all the local terms of $H_{\textbf{3F}}$, and the resource state is a $+1$-eigenstate of all elements of $\mathcal{R}$.
It is instructive to think of one direction of the lattice, say the $\hat{y}$ direction, as being simulated time. For simplicity, we choose the global topology of the lattice to be that of the 3-torus, such that the Hamiltonian has a unique ground state. Of course, one may consider boundaries that support a degenerate ground space (with \textbf{3F} topological order and possible symmetry defects) as described in Sec.~\ref{secWWTimeLikeBoundaries}, which can be used as the input and output encoded states for quantum computations. However, we remark here that all computations may be performed in the bulk (i.e. with periodic boundary conditions) with all boundaries of interest being introduced by measurement.
In order to perform computations consisting of a set of preparations, gates and measurements, one prepares the ground state of the \textbf{3F} Walker--Wang Hamiltonian with symmetry defects laid out according to a discretised (on the cubic lattice) version of the topological specification of each gate in Sec.~\ref{sec3FTQCScheme}, following the microscopic prescription of Sec.~\ref{sec3FWW}, with gates concatenated in the natural way.
Fault-tolerant preparation of the ground state of this Hamiltonian can be achieved using standard techniques as it is the ground-state of a stabilizer Hamiltonian (for example, one may measure each Hamiltonian term using Clifford operations~\cite{steane1997active}).
Importantly, the entire resource state need not be prepared at once (following for example, Ref.~\cite{Raussendorf07}) -- it can be dynamically produced and this allows for adaptive choices of gates which is required for many algorithmic primitives as well as magic state distillation.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{fig3FFTMBQCProcess.png}
\caption{Fault-tolerant MBQC using the \textbf{3F} Walker--Wang model. (left) Defects and twists can be discretised to live on 2-chains of the lattice and their boundary. (middle) Measurements in the fermion basis in the blue region drive the computation. (right) The post-measured state is given by a fixed fermion-worldline string-net. Any violation of the $\mathbb Z_2\times\mathbb Z_2$ conservation at a vertex results from an error, and is detected by the vertex operators, whose outcomes are inferred from the local measurements.}
\label{figWWMBQC}
\end{figure}
\subsubsection{Topological boundary states through measurement}\label{secPreparing3Fstates}
Much like the traditional approaches to fault-tolerant MBQC~\cite{RBH,Rau06,brown2020universal}, we can understand the measurements as propagating and deforming topologically-encoded states (or in another sense as encoded teleportation).
To understand this more precisely, and develop intuition about how the topological computation proceeds, we begin with an example.
Consider the ground-state of the $\textbf{3F}$ Walker--Wang model on the lattice $\mathcal{L}$. We partition the lattice into three segments $\mathcal{L} = A \sqcup C \sqcup B$ as depicted in Fig.~\ref{fig3FMeasuringBoundary}. To begin with, we consider the case where all the sites in $C$ are measured in the fermion basis -- i.e. in $\sigma_i^X$ and $\tau_i^X$ -- and where $A$ and $B$ are unmeasured.
Firstly, we observe that the post-measured state supports two bulk \textbf{3F} Walker--Wang ground states in $A$ and $B$, with \textbf{3F} boundary states on the interface surfaces $\partial A$ and $\partial B$. The boundary states are precisely those described in Sec.~\ref{secWWTimeLikeBoundaries}, as one can verify that the post-measured state is stabilized by the same truncated stabilizers of Fig.~\ref{fig3FBoundary} up to a sign. Even in the absence of errors, these boundary states will in general host \textbf{3F} anyons as excitations, which live at the end of strings of $-1$ measurement outcomes of $\sigma_i^X$ and $\tau_i^X$.
These boundary states are maximally entangled. To show this, we introduce the concept of a \textit{correlation surface}: a stabilizer of the resource state that agrees with the measurements in $C$ and restricts to logical operators on the boundaries $\partial A$ and $\partial B$. Namely, we define two planes in the $xy$ and $zy$ directions, $c^{(xy)}$ and $c^{(zy)}$, as per Fig.~\ref{fig3FMeasuringBoundary}, and define the operators
\begin{align}
S_{r}(c^{(xy)}) = \prod_{i \in \partial c^{(xy)}} \sigma_i^Z \prod_{j \in c_O^{(xy)}}\sigma_j^X \prod_{k \in c_U^{(xy)}}\sigma_k^X \tau_k^X \\
S_{g}(c^{(zy)}) = \prod_{i \in \partial c^{(zy)}} \tau_i^Z \prod_{j \in c_O^{(zy)}}\sigma_j^X \tau_j^X \prod_{k \in c_U^{(zy)}} \tau_k^X
\end{align}
where $c_O^{(xy)}$ ($c_O^{(zy)}$) and $c_U^{(xy)}$ ($c_U^{(zy)}$) denote the sets of edges perpendicular to and on each side of $c^{(xy)}$ ($c^{(zy)}$). Namely, $c_O^{(xy)}$ ($c_O^{(zy)}$) is the set of edges over the surface $c^{(xy)}$ ($c^{(zy)}$), i.e., on the same side as the dashed edges in Fig.~\ref{fig3FMeasuringBoundary}, while $c_U^{(xy)}$ ($c_U^{(zy)}$) is the set of edges under the surface $c^{(xy)}$ ($c^{(zy)}$), i.e., on the opposite side of the dashed edges in Fig.~\ref{fig3FMeasuringBoundary}.
The operators $S_{r}(c^{(xy)})$ and $S_{g}(c^{(zy)})$ are stabilizers for the Walker--Wang ground state and we refer to them as \textit{correlation surfaces}: they are products of plaquette terms $B_f^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $B_f^{(\textcolor{green!80!black}{\psi_\text{g}})}$ in the $c^{(xy)}$ and $c^{(zy)}$ planes, respectively. They can be viewed as world-sheets of the \textbf{3F} boundary state logical operators (they are the analogues of the correlation surfaces in topological cluster state computation of Ref.~\cite{RBH}). In particular, they restrict to logical operators of the \textbf{3F} boundary states on $\partial A$ and $\partial B$ and can be used to infer the correlations between the post-measured boundaries. Namely, we have that the post-measured state is a +1-eigenstate of
\begin{align}
\pm l^{\textcolor{red!80!black}{\psi_\text{r}}}_{\partial c^{(xy)}\cap A} \otimes l^{\textcolor{red!80!black}{\psi_\text{r}}}_{\partial c^{(xy)}\cap B},\\
\pm l^{\textcolor{green!80!black}{\psi_\text{g}}}_{\partial c^{(zy)}\cap A} \otimes l^{\textcolor{green!80!black}{\psi_\text{g}}}_{\partial c^{(zy)}\cap B},
\end{align}
where each factor is a logical operator for the boundary code, as defined in Eqs.~(\ref{eq3FLogicalOperators1}),~(\ref{eq3FLogicalOperators2}), and where the $\pm$ signs are determined by the outcome of the measurements along the correlation surface in $C$. These are the correlations of a maximally entangled pair. Depending on the topology of $\partial A$ and $\partial B$, the boundary state may involve multiple maximally entangled pairs (e.g. 2 pairs if the boundary states are supported on tori).
We remark that one can construct equivalent, but more natural, correlation surfaces by multiplying with vertex stabilizers $A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}$ to obtain the bulk of the correlation surfaces for $S_{r}(c^{(xy)})$ and $S_{g}(c^{(zy)})$ in terms of a product of $\tau_i^X$ and $\sigma_i^X$ on one side of the surface, respectively.
Importantly, if the region $B$ were prepared in some definite state and measured, the logical state would be teleported to the qubits encoded on the surface at $\partial A$. Conceptually, at any intermediate time during the computation, we may regard the state as being encoded in topological degrees of freedom on a boundary normal to the direction of information flow. This picture holds more generally, when the information may be encoded in twists, and where the propagation of information is again tracked through correlation surfaces that can be regarded as world-sheets of the logical operators.
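As a toy illustration of the classical bookkeeping involved, the sign of the teleported logical correlation is simply the parity of $-1$ outcomes collected along the correlation surface in $C$. The sketch below uses a hypothetical list of fermion-basis outcomes, not tied to a particular lattice:

```python
def correlation_sign(outcomes):
    """Parity of -1 results along a correlation surface in C.

    A -1 sign means the corresponding boundary logical operator must be
    applied (or tracked in the Pauli frame) to restore the +1 correlation."""
    sign = 1
    for o in outcomes:
        sign *= o
    return sign

# Hypothetical single-site outcomes on the sites of one correlation surface:
surface_outcomes = [+1, -1, -1, +1, -1, +1]
s = correlation_sign(surface_outcomes)
assert s == -1  # odd number of -1 outcomes: track a logical correction
```

In an implementation this parity is accumulated as the measurements stream in, so the Pauli frame is always up to date without ever acting on the quantum state.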
\begin{figure}[t]%
\centering
\includegraphics[width=0.85\linewidth]{fig3FSmoothBoundaries3}
\caption{Preparing \textbf{3F} surface states on the boundaries of $A$ and $B$ by measuring all sites in $C$. The planes $c^{(xy)}$ and $c^{(zy)}$ are depicted in red and green, respectively, with the cycles $\partial c^{(xy)}$ and $\partial c^{(zy)}$ on their boundary. The sets of links $c_O^{(xy)}$ and $c_O^{(zy)}$ consist of the links perpendicular to the surfaces, on the same side as the dashed lines on $\partial A$. The sets $c_U^{(xy)}$ and $c_U^{(zy)}$ lie on the opposite side.}
\label{fig3FMeasuringBoundary}
\end{figure}
\subsubsection{Measurement patterns, 1-form symmetries and correlation surfaces}\label{secCorrelationSUrfaces}
We now consider the general setting for fault-tolerant MBQC with the Walker--Wang resource state.
The computation is then driven in time by applying single qubit measurements to a resource state describing the Walker--Wang ground state with defects. Such measurements are sequentially applied and the outcomes are processed to determine Pauli corrections, logical measurement outcomes, as well as any errors that may have occurred. We label by $\mathcal{M}\subseteq \mathbb{P}_n$ the group generated by the single qubit measurements. For the \textbf{3F} Walker--Wang resource state, we measure in the local fermion basis to project onto a definite fermion worldline occupation state, giving
\begin{equation}
\mathcal{M} = \langle \sigma_i^X, \tau_i^X ~|~ i \in \mathcal{L} \rangle.
\end{equation}
We remark that for magic state preparation, as per Sec.~\ref{secMagicStatePreparation}, the measurement pattern must in general be modified in a manner that depends on the implementation. Additionally, the measurement pattern may be locally modified in the vicinity of a twist defect, again depending on the implementation. For the twist identified in the previous section, we require a chain of Pauli-$Y$ measurements on the qubits uniquely determined by the 1-form operators of Fig.~\ref{fig3FTwists}. The post-measured state can be regarded as a classical \textit{history state} with definite fermion worldlines.
Individual measurement outcomes are random and in general measurements result in a random fermion worldline occupation on each link of the lattice. However, there are constraints in the absence of errors. Namely, at each bulk vertex the $\mathbb Z_2\times \mathbb Z_2$ fermion charge must be conserved. This bulk conservation is measured by the operators $A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}, A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}$, which belong to both the resource state and measurement group, $A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}, A_v^{(\textcolor{green!80!black}{\psi_\text{g}})} \in \mathcal{R}\cap \mathcal{M}$.
The conservation law is modified near defects and domain walls, and so too are the corresponding operators from $\mathcal{R}\cap \mathcal{M}$. Therefore, in the absence of errors, due to membership in $\mathcal{R}$, measurement of any operator from $\mathcal{R}\cap \mathcal{M}$ deterministically returns $+1$, signifying the appropriate fermion conservation. Due to membership in $\mathcal{M}$, the outcomes of these operators can be inferred as the measurements proceed.
The vertex operators generate a symmetry group
\begin{equation}
\mathcal{S} = \langle A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}, A_v^{(\textcolor{green!80!black}{\psi_\text{g}})} ~|~ v \in \mathcal{L} \rangle,
\end{equation}
where we assume that each vertex operator is suitably modified near symmetry defects and domain walls.
This is known as a $\mathbb Z_2 \times \mathbb Z_2$ 1-form symmetry group because it consists of operators supported on closed codimension-1 submanifolds of the lattice~\cite{Gaiotto2015}. In terms of the Walker--Wang model for the \textbf{3F} theory, operators in $\mathcal{S}$ measure the fermionic flux through each contractible region of the lattice -- which must be net neutral in the ground state.
Even in the absence of errors, the randomness of measurement outcomes can result in fermionic worldlines (in the post-measured state) that nontrivially connect distinct twists. In particular, at each point in the computation, this randomness results in a change in the charge on a twist line and can be mapped to an outcome-dependent logical Pauli operator that has been applied to the logical state. This outcome-dependent Pauli operator is called the logical Pauli frame, and can be deduced by the outcomes of the correlation surfaces (as we have seen in the example of Sec.~\ref{secPreparing3Fstates}).
The correlation surfaces are obtained for each preparation, gate, and measurement. They are stabilizers of the resource state that can be viewed as topologically nontrivial 1-form operators that enclose (and measure) the flux through a region and thus the charge on relevant sets of defects in the history state. We define correlation surfaces for each operation in App.~\ref{appProofOfGates}. Correlation surfaces are not uniquely defined: multiplication by any 1-form operators $s\in \mathcal{S}$ produces another valid correlation surface that is logically equivalent (i.e. will determine the same logical Pauli frame).
For a given operation, we label the set of all correlation surfaces up to equivalence under $\mathcal{S}$ by $\overline{\mathcal{S}}$. This equivalence allows us to map between different representative logical operators as explained in Sec.~\ref{sec3FEncodings}.
\begin{figure}[t]%
\centering
\includegraphics[width=0.6\linewidth]{fig3FMBQCNoise2.pdf}
\caption{Syndromes observed in Walker--Wang MBQC. Lines are color-coded according to the observed measurement outcomes corresponding to the basis $\ket{\textbf{1}} := \ket{++}$, $\ket{\textcolor{red!80!black}{\psi_\text{r}}} := \ket{-+}$, $\ket{\textcolor{green!80!black}{\psi_\text{g}}} := \ket{+-}$, $\ket{\textcolor{blue!80!black}{\psi_\text{b}}} := \ket{--}$. Possible errors producing the observed syndrome are displayed by dashed lines. Nontrivial syndromes $s_v = (a,b) \in \mathbb Z_2^2$ on each vertex are observed due to violations of the $\mathbb Z_2^2$ charge flux on each vertex and can be inferred from the measurement outcomes of $(A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}, A_v^{(\textcolor{green!80!black}{\psi_\text{g}})})$. For example, $s^1 = (1,0)$ and $s^3 = (1,1)$ arise from $\textcolor{red!80!black}{\psi_\text{r}}$ and $\textcolor{blue!80!black}{\psi_\text{b}}$ string errors, as depicted.}
\label{figLogicalError2}
\end{figure}
\subsubsection{Errors, fermion parity, and decoding}
Errors may occur during resource state preparation, computation, and measurement. For simplicity, let us focus on Pauli errors and measurement errors in the bulk.
Firstly, $\sigma_i^Z$, $\tau_i^Z$ and $\sigma_i^Z \tau_i^Z$ errors acting on the resource state result in flipped measurement outcomes. They flip the local $\sigma_i^X$, $\tau_i^X$ measurement outcomes. In the resource state wavefunction, they can be thought of as creating $\textcolor{red!80!black}{\psi_\text{r}}$, $\textcolor{green!80!black}{\psi_\text{g}}$, and $\textcolor{blue!80!black}{\psi_\text{b}}$ fermion string segments, respectively. On the other hand, $\sigma_i^X$, $\tau_i^X$ and $\sigma_i^X \tau_i^X$ errors are benign (as is familiar from topological cluster state computation~\cite{Rau06, RBH}). They commute with the measurement pattern and thus do not affect the measurement outcome. In the Walker--Wang resource state wavefunction they can be thought of as creating a small contractible loop of $\textcolor{red!80!black}{\psi_\text{r}}$, $\textcolor{green!80!black}{\psi_\text{g}}$, or $\textcolor{blue!80!black}{\psi_\text{b}}$ fermion worldline, respectively, linking edge $i$~\cite{walker20123+,burnell2013exactly}. Finally, measurement errors (i.e. measurements that report the incorrect outcome) are equivalent to $Z$-type physical errors that occurred on the state before measurement.
In the post-measured state, these errors manifest as modifications to the classical history state. Detectable errors are those that give rise to violations of the $\mathbb Z_2\times \mathbb Z_2$ fermion conservation rule (that exists away from the twists) and are thus revealed by $-1$ outcomes of the 1-form symmetry operators $s\in \mathcal{S}$. We consider example configurations in Fig.~\ref{figLogicalError2}. Nontrivial errors are those that connect distinct twist worldlines. Such errors result in the incorrect inferred outcome of the correlation surfaces in $\overline{\mathcal{S}}$, and therefore an incorrect inference of the logical Pauli frame -- in other words: a logical Pauli error. Such a process is depicted in Fig.~\ref{figLogicalError}. If errors arise by local processes then they can be reliably identified and accounted for if twist worldlines remain well separated.
It is possible to correct violations of the $A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}$ and $A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}$ sectors independently (although, depending on the noise model, it may be advantageous to correct them jointly). In particular, we represent the outcomes of all vertex operators $A_v^{(\textcolor{red!80!black}{\psi_\text{r}})}, A_v^{(\textcolor{green!80!black}{\psi_\text{g}})}$ by two binary vectors $v_{\mathcal{S}}^{(\textcolor{red!80!black}{\psi_\text{r}})} \in \mathbb Z_2^{|V|}$, $v_{\mathcal{S}}^{(\textcolor{green!80!black}{\psi_\text{g}})} \in \mathbb Z_2^{|V|}$, where $|V|$ is the number of vertices in the lattice. One can then apply the standard minimum-weight perfect matching algorithm that is commonly used for topological error correction~\cite{dennis2002topological,Rau06}.
The algorithm returns a matching of vertices for each sector, $\textcolor{red!80!black}{\psi_\text{r}}$ and $\textcolor{green!80!black}{\psi_\text{g}}$, which can be used to deduce a path of measurement outcomes that need to be flipped to restore local fermion parity (i.e. ensure $s\in \mathcal{S}$ has a $+1$ outcome).
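A minimal decoding sketch of this step: given the flagged vertices of one sector, pair them up with minimum total weight. The brute-force matcher below uses Manhattan distance as a stand-in for the true lattice metric, hypothetical syndrome coordinates, and assumes an even number of flagged vertices (as on a torus); practical decoders use the polynomial-time blossom algorithm instead.

```python
def manhattan(p, q):
    """Distance proxy between flagged vertices (lattice-metric stand-in)."""
    return sum(abs(a - b) for a, b in zip(p, q))

def min_weight_matching(points):
    """Brute-force minimum-weight perfect matching of flagged vertices.

    Exponential in len(points) -- for illustration only."""
    if not points:
        return [], 0
    best_pairs, best_cost = None, float("inf")
    first, rest = points[0], points[1:]
    for i, partner in enumerate(rest):
        others = rest[:i] + rest[i + 1:]
        pairs, cost = min_weight_matching(others)
        cost += manhattan(first, partner)
        if cost < best_cost:
            best_pairs, best_cost = [(first, partner)] + pairs, cost
    return best_pairs, best_cost

# Hypothetical violated vertices of one sector, from two short string errors:
syndromes = [(0, 0, 0), (2, 0, 0), (5, 5, 5), (5, 5, 7)]
pairs, cost = min_weight_matching(syndromes)
assert cost == 4  # each error is matched across its own endpoints
```

Each returned pair then determines a path of measurement outcomes to flip, restoring local fermion parity in that sector.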
\begin{figure}[t]%
\centering
\includegraphics[width=0.85\linewidth]{fig3FMBQCNoise1.pdf}
\caption{Undetectable errors in Walker--Wang MBQC depicted by dashed lines. The homologically trivial loops do not result in a logical error. The central error depicted in blue that extends between different twists results in a logical error.}
\label{figLogicalError}
\end{figure}
\subsubsection{Threshold performance}
Assuming a phenomenological error model of perfect state preparation, memory and only noisy measurements, the bulk \textbf{3F} Walker--Wang MBQC scheme has a high threshold identical to that of the topological cluster state formulation~\cite{dennis2002topological,Rau06} (assuming the same decoder). This follows from the fact that the error model and bulk decoding problem is identical to that of topological cluster state computation~\cite{Rau06}. In practice the \textbf{3F} Walker--Wang resource state is likely more complicated to prepare and thus is expected to have a lower threshold under a more realistic noise model.
\subsection{1-form symmetry-protected topological order and Walker--Wang resource states}\label{secSPTorder1form}
We remark that while both the \textbf{3F} and \textbf{TC} Walker--Wang model ground states can be prepared by a quantum cellular automaton, only the \textbf{TC} Walker--Wang model ground state can be prepared by a constant-depth circuit~\cite{haah2018nontrivial}. Indeed, the two models belong to distinct nontrivial SPT phases under $\mathbb Z_2^2$ 1-form symmetries. The topological cluster state model has been demonstrated to maintain its nontrivial SPT order at nonzero temperature~\cite{roberts2017symmetry,roberts2020symmetry}. By the same arguments as in Refs.~\cite{roberts2017symmetry,roberts2020symmetry}, the \textbf{3F} Walker--Wang model belongs to a nontrivial SPT phase under 1-form symmetries, distinct from that of the topological cluster state model.
More generally, the bulk of any Walker--Wang state arising from a modular anyon theory should be SPT ordered under a 1-form symmetry (or appropriate generalisation thereof). One can diagnose the nontrivial SPT order under 1-form symmetries by looking at the anomalous action of the symmetry on the boundary. This anomalous 1-form symmetry boundary action corresponds to the string operators of a modular anyon theory. A gapped phase supporting that anyon theory can be used to realize a gapped boundary condition that fulfils the required anomaly matching condition. This boundary theory can form a thermally stable, self-correcting quantum memory when protected by the 1-form symmetries~\cite{roberts2020symmetry}.
Thus the Walker--Wang paradigm provides a useful lens to search for (thermally stable) SPT ordered resource states for MBQC.
However, determining whether these computational schemes are stable to perturbations of the Walker--Wang parent Hamiltonian for the resource state remains an interesting open problem.
For 1-form symmetry respecting perturbations, at least, we expect the usefulness of the resource state to persist, as the key relation between the 1-form symmetry and (possibly fattened) boundary string operators remains.
This potentially has important implications for the existence of fault-tolerant, computationally universal phases of matter~\cite{DBcompPhases,Miy10, else2012symmetry,else2012symmetryPRL,NWMBQC,miller2016hierarchy,roberts2017symmetry,bartlett2017robust,wei2017universal,raussendorf2019computationally,roberts2019symmetry,devakul2018universal,Stephen2018computationally,Daniel2019,daniel2020quantum}.
\section{ Lattice defects in a \textbf{3F} topological subsystem code}\label{sec3FSubsystemCode}
\begin{figure}[t]
\center
\includegraphics[scale=0.575]{Lattice}
\caption{The tricoloring of hexagonal plaquettes used to define the generators of the anomalous $\mathbb{Z}_2 \times \mathbb{Z}_2$ 1-form symmetry. }
\label{fig:tricolored}
\end{figure}
In Refs.~\cite{bombin2009interacting,bombin2010topologicalsubsystem} a 2D topological subsystem code~\cite{Poulin2005,Bacon2005a} was introduced that supports a stabilizer group corresponding to a lattice realization of the string operators for the \textbf{3F} anyon theory.
As the gauge generators do not commute, they can be used to define a translation invariant Hamiltonian with tunable parameters that supports distinct phases, and phase transitions between them.
The model is defined on an inflated honeycomb lattice, where every vertex is blown up into a triangle, with links labelled by $x$, $y$, $z$ in a translation-invariant fashion according to Figs.~\ref{fig:tricolored} \&~\ref{fig:InflatedHexagon}.
This is reminiscent of Kitaev's honeycomb model~\cite{kitaev2006anyons}, which can also be thought of as a 2D topological subsystem code (that encodes no qubits) with a stabilizer group corresponding to the string operators of an emergent $\mathbb{Z}_2$ fermion.
\begin{figure}[t]
\center
{\includegraphics[scale=.85]{InflatedHexagon}}
\hspace{.5cm}
{\includegraphics[scale=.85]{Edges}}
\caption{ (Left) An inflated hexagon.
(Right) There are three different types of $x$, $y$, and $z$ links in the lattice, respectively. }
\label{fig:InflatedHexagon}
\label{fig:Edges}
\end{figure}
The 2D topological subsystem code of Refs.~\cite{bombin2009interacting,bombin2010topologicalsubsystem} is defined on the lattice of Fig.~\ref{fig:tricolored}, with one qubit per vertex. There is one gauge generator per edge, given by
\begin{align}
K_{\braket{ij}} = \begin{cases}
X_i X_j & \text{if $\langle i j\rangle$ is an $x$-link,}
\\
Y_i Y_j & \text{if $\langle i j\rangle$ is a $y$-link,}
\\
Z_i Z_j & \text{if $\langle i j\rangle$ is a $z$-link,}
\end{cases}
\end{align}
see Fig.~\ref{fig:InflatedHexagon}.
The Hamiltonian can be written in terms of the gauge generators
\begin{align}
H = - J_x \sum_{x\text{-links}} K_{\braket{ij}}
- J_y \sum_{y\text{-links}}K_{\braket{ij}}
-J_z \sum_{z\text{-links}}K_{\braket{ij}}
\, ,
\end{align}
where $J_x, J_y, J_z$ are tunable coupling strengths.
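The noncommutativity of the gauge generators noted above is conveniently checked in the binary symplectic representation, in which two Pauli strings commute iff their symplectic product vanishes mod 2. The sketch below uses a hypothetical labelling of three qubits on one inflated triangle, not the actual lattice coordinates:

```python
# Single-qubit Paulis in binary symplectic form (x, z).
PAULI = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def commutes(a, b):
    """Two Pauli strings commute iff their symplectic product is 0 mod 2."""
    sp = 0
    for pa, pb in zip(a, b):
        xa, za = PAULI[pa]
        xb, zb = PAULI[pb]
        sp += xa * zb + za * xb
    return sp % 2 == 0

# Hypothetical labelling of three qubits (1, 2, 3) around one triangle:
K_x = "XXI"  # x-link generator on qubits (1, 2)
K_y = "IYY"  # y-link generator on qubits (2, 3)
K_z = "ZIZ"  # z-link generator on qubits (1, 3)
assert not commutes(K_x, K_y)  # overlap on qubit 2 with X vs Y
assert not commutes(K_x, K_z)  # overlap on qubit 1 with X vs Z
assert commutes(K_y, "XII")    # disjoint supports always commute
```

Generators sharing a qubit with different Pauli labels anticommute, which is what prevents the gauge group from being a stabilizer group and gives the Hamiltonian its tunable, frustrated character.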
The group of stabilizer operators that commute with all the gauge generators, and are themselves products of gauge generators, is generated by a $\mathbb{Z}_2 \times \mathbb{Z}_2$ algebra on each inflated plaquette.
The plaquette algebra is generated by $W_p^X$, $W_p^Z$, and $W_p^Y$ on each plaquette, which satisfy
\begin{align}
(W_p^{X})^2=(W_p^{Z})^2=\openone \, ,
&&
W_p^X W_p^Z = W_p^Z W_p^X = W_p^Y \, ,
\end{align}
see Fig.~\ref{fig:Generators}.
\subsection{String operators}
The above plaquette operators are in fact loops of a $\mathbb{Z}_2 \times \mathbb{Z}_2$ algebra of string operators on the boundary of the plaquette.
To define larger loops of the string operators we make use of a tricoloring of the hexagon plaquettes shown in Fig.~\ref{fig:tricolored}.
On the boundary of a region $\mathcal{R}$, given by a union of inflated plaquettes on the inflated honeycomb lattice, we have the following $\mathbb{Z}_2 \times \mathbb{Z}_2$ string operators
\begin{align}
\label{eq:LoopOps}
W_{\partial\mathcal{R}}^r &=
\prod_{p_r \in \mathcal{R}} W_{p_r}^{Z}
\prod_{p_g \in \mathcal{R}} W_{p_g}^{X}
\prod_{p_b \in \mathcal{R}} W_{p_b}^{Y} \, ,
\\
W_{\partial\mathcal{R}}^g &=
\prod_{p_r \in \mathcal{R}} W_{p_r}^{Y}
\prod_{p_g \in \mathcal{R}} W_{p_g}^{Z}
\prod_{p_b \in \mathcal{R}} W_{p_b}^{X} \, ,
\\
W_{\partial\mathcal{R}}^b &=
\prod_{p_r \in \mathcal{R}} W_{p_r}^{X}
\prod_{p_g \in \mathcal{R}} W_{p_g}^{Y}
\prod_{p_b \in \mathcal{R}} W_{p_b}^{Z} \, ,
\end{align}
where $p_r,p_g,$ and $p_b$ stand for red, green, and blue plaquettes, respectively.
The string operators satisfy the same algebra as the plaquette operators
\begin{align}
(W_{\partial\mathcal{R}}^{r})^2=(W_{\partial\mathcal{R}}^{b})^2=\openone \, ,
&&
W_{\partial\mathcal{R}}^r W_{\partial\mathcal{R}}^b = W_{\partial\mathcal{R}}^b W_{\partial\mathcal{R}}^r = W_{\partial\mathcal{R}}^g \, .
\end{align}
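The colour pattern in Eq.~\eqref{eq:LoopOps} can be tabulated directly: a string acts with $W^Z$ on plaquettes of its own colour, $W^X$ on the cyclic successor colour ($r \to g \to b \to r$), and $W^Y$ on the cyclic predecessor. The short Python sketch below (illustrative only; function and variable names are ours) encodes this rule.

```python
# Tabulation of Eq. (LoopOps): which plaquette operator W_p^{X,Y,Z}
# enters a boundary string of a given colour on a plaquette of colour p.
# Rule: own colour -> Z, cyclic successor (r -> g -> b -> r) -> X,
# cyclic predecessor -> Y.

COLOURS = ['r', 'g', 'b']

def segment_label(string_colour, plaquette_colour):
    i = COLOURS.index(string_colour)
    j = COLOURS.index(plaquette_colour)
    return {0: 'Z', 1: 'X', 2: 'Y'}[(j - i) % 3]

for s in COLOURS:
    print(s, {p: segment_label(s, p) for p in COLOURS})
# r {'r': 'Z', 'g': 'X', 'b': 'Y'}
# g {'r': 'Y', 'g': 'Z', 'b': 'X'}
# b {'r': 'X', 'g': 'Y', 'b': 'Z'}
```

Reading off the rows reproduces the three products in Eq.~\eqref{eq:LoopOps} term by term.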
\begin{figure}[t]
\center
{\includegraphics[scale=.85]{XGenerator}}
\hspace{.5cm}
{\includegraphics[scale=.85,trim= 0 -.395cm 0 0]{ZGenerator}}
\caption{ (Left) The $W_p^X$ generator on the inflated hexagon.
(Right) The $W_p^Z$ generator on the inflated hexagon. The $W_p^Y$ generator is given by their product. }
\label{fig:Generators}
\end{figure}
The loop operators on the boundary of a region $\mathcal{R}$ in Eq.~\eqref{eq:LoopOps} suffice to define red string operators $W^r_{\ell_r}$ on arbitrary open paths $\ell_r$ on inflated edges between red plaquettes, and similarly for green and blue string operators and plaquettes.
The string operators are given by a product of the elementary string segment operators shown in Fig.~\ref{fig:StringSegments} along the string.
With the string segment operators shown, the excitations of the $W^r_{\ell_r}$ operator can be thought of as residing on the red plaquettes of the lattice, and similarly for the green and blue plaquettes.
We denote the superselection sector of the excitation created at one end of an open $W^r_{\ell_r}$ operator by $\textcolor{red!80!black}{\psi_\text{r}}$, and similarly $\textcolor{green!80!black}{\psi_\text{g}}$, $\textcolor{blue!80!black}{\psi_\text{b}}$ for green and blue string operators.
The fusion and braiding processes for these sectors, as defined by the string operators, are described by the \textbf{3F} theory introduced in Sec.~\ref{sec3FPreliminaries}.
The string operators $W_{\ell_r}^{r},W_{\ell_g}^{g},W_{\ell_b}^{b},$ commute with the Hamiltonian throughout the whole phase diagram
\begin{align}
[H,W_{\ell_r}^{r}]=[H,W_{\ell_g}^{g}]=[H,W_{\ell_b}^{b}]=0 \, ,
\end{align}
for closed loops $\ell_r,\ell_g,\ell_b$.
This structure is formalized as an anomalous $\mathbb{Z}_2 \times \mathbb{Z}_2$ 1-form symmetry, with the anomaly capturing the nontrivial $S$ and $T$ matrices of the \textbf{3F} theory associated to the string operators.
We remark that the Hamiltonian can support phases with larger anyon theories that include the \textbf{3F} theory as a subtheory (due to the factorization of modular tensor categories~\cite{Mueger2002} the total anyon theory is equivalent to a stack of the \textbf{3F} theory with an additional anyon theory).
In particular, in the $J_z \gg J_x, J_y > 0$ limit the Hamiltonian enters the phase of the color code stabilizer model~\cite{bombin2009interacting}. The anyon theory of this model is equivalent to two copies of the \textbf{3F} theory~\cite{bombin2012universal} (or equivalently two copies of the toric code anyon theory~\cite{bombin2012universal,kubica2015unfolding}).
\begin{figure}[t!]
\center
\includegraphics[scale=.8]{StringSegments}
\caption{Segments of the string operators that form the anomalous $\mathbb{Z}_2 \times \mathbb{Z}_2$ 1-form symmetry. }
\label{fig:StringSegments}
\end{figure}
\subsection{Symmetry defects}
The symmetry group of the Hamiltonian is generated by translations $T(u)$ and $T(v)$ along the lattice vectors $u$ and $v$ shown in Fig.~\ref{fig:tricolored}; plaquette-centered $\frac{\pi}{3}$ rotations combined with the Clifford operator that implements $X_v \leftrightarrow Y_v$ on all vertices $v$, denoted $R_p$; and inflated-vertex-centered $\frac{2 \pi}{3}$ rotations, denoted $R_v$.
\begin{figure}[b]
\center
\includegraphics[scale=.575]{Z2Defect}
\caption{ A $\frac{ \pi}{3}$ lattice disclination on a plaquette that hosts twist defects of a $\mathbb{Z}_2$ symmetry generator. }
\label{fig:Z2Defect}
\end{figure}
The \textbf{3F} superselection sectors in the model exhibit \textit{weak symmetry breaking}~\cite{kitaev2006anyons}, or symmetry-enrichment~\cite{barkeshli2019symmetry} under the lattice symmetries giving rise to an $S_3$ action.
The $\frac{\pi}{3}$ rotation and Clifford operator $R_p$ centered on a red plaquette implements the $(\text{gb})$ symmetry action on the superselection sectors.
A domain wall attached to a disclination defect with a $\frac{\pi}{3}$ angular deficit can be introduced by cutting a wedge out of the lattice and regluing the dangling edges as shown in Fig.~\ref{fig:Z2Defect}.
This leads to mixed edges across the cut, formed by rejoining broken $x$ and $y$ edges; the Hamiltonian terms on these edges are of the form $X_v Y_{v'}$, where $v$ is the vertex adjacent to the $x$ portion of the rejoined edge and $v'$ is the vertex adjacent to the $y$ portion.
Assuming the lattice model lies in a gapped phase described by the \textbf{3F} theory, such a lattice symmetry defect supports a non-abelian twist defect $\mathcal{T}_{(\text{gb})}^\pm$, where the $\pm$ is determined by the eigenvalue of the string operator $W_{\ell_r}$ encircling the defect.
This twist defect is similar to a Majorana zero mode as it has quantum dimension $\sqrt{2}$ and fusion rules given in Sec.~\ref{sec3FPreliminaries}.
A similar result holds with $\frac{\pi}{3}$ disclination defects centered on green and blue plaquettes hosting $\mathcal{T}_{(\text{rb})}^\pm$ and $\mathcal{T}_{(\text{rg})}^\pm$ twist defects, respectively.
The $\frac{2 \pi}{3}$ rotation operator $R_v$ centered on an inflated vertex, and also the translations $T(u)$ and $T(v)$, implement the $(\text{rgb})$ symmetry action on the superselection sectors.
Similar to above a disclination defect with a $\frac{2 \pi}{3}$ angular deficit can be introduced by cutting a wedge out of the lattice and rejoining the dangling edges following Fig.~\ref{fig:Z3Defect1}.
We can also introduce a dislocation defect as shown in Fig.~\ref{fig:Z3Defect2}.
Again assuming the lattice model lies in the \textbf{3F} phase, such lattice symmetry defects support non-abelian topological defects with quantum dimension $2$ introduced as $\mathcal{T}_{(\text{rgb})}$ in Sec.~\ref{sec3FDiscussion}.
\begin{figure}[t]
\center
{\includegraphics[scale=.575]{Z3Defect1}}
%
%
%
%
%
\hspace{.5cm}
%
%
%
%
%
{\includegraphics[scale=.575]{Z3Defect2}}
\caption{ (Left) A $\frac{2 \pi}{3}$ lattice disclination on an inflated vertex that hosts twist defects of the $\mathbb{Z}_3$ symmetry generator.
(Right) A lattice dislocation on a plaquette that can also host twist defects of the $\mathbb{Z}_3$ symmetry generator. }
\label{fig:Z3Defect1}
\label{fig:Z3Defect2}
\end{figure}
These lattice implementations of the twist defects can in principle be used to realize the defect topological quantum computation schemes introduced in Sec.~\ref{sec3FTQCScheme}. To perform error-correction, we must define the order in which gauge generators are measured to extract a stabilizer syndrome. At each time-step a subset of gauge generators are measured where each of the gauge operators must have disjoint support, for example following Ref.~\cite{bombin2010topologicalsubsystem}.
In the presence of twists, one must take extra care in defining a globally consistent schedule. A simple (possibly inefficient) approach can be obtained by partitioning the schedule according to the gauge generators along the defect and the bulk separately.
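The disjoint-support constraint on each measurement round can be sketched with a simple greedy grouping. The Python below is our own toy illustration of the constraint only; the actual schedules of Ref.~\cite{bombin2010topologicalsubsystem} exploit the lattice structure directly (e.g. all $x$-links can be measured simultaneously since they form a matching).

```python
# Greedy grouping of 2-body gauge-generator measurements into rounds in
# which all generators have pairwise disjoint qubit support.

def schedule(generators):
    """generators: iterable of frozensets of qubit indices -> list of rounds."""
    rounds = []
    for g in generators:
        for r in rounds:
            if all(g.isdisjoint(h) for h in r):   # no shared qubits in a round
                r.append(g)
                break
        else:
            rounds.append([g])
    return rounds

# Toy example: the six edges of a 6-cycle; adjacent edges share a vertex,
# so they must land in different rounds.
edges = [frozenset({i, (i + 1) % 6}) for i in range(6)]
rounds = schedule(edges)
print(len(rounds))   # 2
```

For the honeycomb-type link 3-colouring of the model, the same idea yields one round per link type, since same-type links never share a vertex.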
We remark that symmetry defects in the stabilizer color code have been explored previously in Refs.~\cite{yoshida2015topological,kesselring2018boundaries}.
This presents an alternative route to implement the defect computation scheme of Sec.~\ref{sec3FTQCScheme}.
This is particularly relevant as the 2D stabilizer color code~\cite{bombin2006topological} is obtained in the limit $J_z \gg J_x, J_y > 0$.
\section{Discussion}\label{sec3FDiscussion}
We have presented an approach to topological quantum computation based on Walker--Wang resource states and their symmetries. We have explicitly presented a new scheme for universal fault-tolerant quantum computation using symmetry defects of the \textbf{3F} anyon theory, and we have shown how these defects can be implemented in the Walker--Wang models for use in measurement-based quantum computation. Under a phenomenological toy noise model consisting of bit/phase flip errors and measurement errors, the threshold of the \textbf{3F} Walker--Wang computation scheme is equal to that of the well known toric code (or equivalently the topological cluster state scheme) under the same noise model (see e.g. Refs.~\cite{dennis2002topological,TopoClusterComp} for threshold estimates). Further investigation under more realistic noise models remains an open problem.
Our computation scheme based on the defects of the \textbf{3F} anyon theory provides a nontrivial example of the power of the Walker--Wang approach, as the \textbf{3F} anyon theory is chiral and cannot be realized as the emergent anyon theory of a 2D commuting projector model (although it can be embedded into one as a subtheory).
We hope that this example provides an intriguing step into topological quantum computation using more general anyon schemes and a launch point for the study of further non-stabilizer models.
In particular, our framework generalizes directly to any abelian anyon theory with symmetry defects, leading to a wide class of potential resource states for fault-tolerant MBQC.
While we have not tried to optimise the overhead of our gate schemes, the richer defect theory (in comparison with toric code) may lead to more efficient implementations of, for example, magic state distillation.
In addition to this, computing the full $G$-crossed theory of the \textbf{3F} anyon theory could potentially lead to further improvements arising from the possibility of additional fusions and braiding processes that may yield more efficient logic gates.
Determining the set of transversal (or locality preserving) logic gates admitted by the boundary states of the \textbf{3F} Walker--Wang model remains an open problem.
We remark that an extension of the Walker--Wang model has recently been defined which is capable of realizing an arbitrary symmetry-enriched topological order on the boundary under a global on-site symmetry action~\cite{bulmash2020absolute}.
Further interesting open directions include the construction of MBQC schemes using Walker--Wang resource states based on more exotic, non-abelian anyon theories, including those that are braiding universal, i.e. not requiring non-topological magic state preparation and distillation.
Moving away from stabilizer resource states, it may be difficult to keep track of, and control, the randomness induced by the local measurements. One way to address this concern would be to consider adiabatic approaches to MBQC~\cite{bacon2013adiabatic,williamson2015symmetry} to circumvent some of these difficulties.
Another interesting direction is to investigate MBQC schemes based on Walker--Wang resource states that are both perturbatively and thermally stable. The \textbf{3F} Walker--Wang model can be shown to belong to a nontrivial SPT phase under $\mathbb Z_2^2$ 1-form symmetries using the same arguments as Refs.~\cite{roberts2017symmetry,roberts2020symmetry}. More generally, the bulk of any Walker--Wang state corresponding to a modular anyon theory input should be a nontrivial SPT order under some 1-form symmetry (or an appropriate generalization thereof).
The nontrivial nature of these 1-form SPT phases is manifest through their anomalous symmetry action on the boundary. This anomalous boundary action of the 1-form symmetry corresponds to the string operators of a modular anyon theory. A gapped phase supporting that anyon theory can be used to realize a gapped boundary condition that fulfils the required anomaly matching condition.
The topologically ordered boundaries of these states should remain thermally stable under the 1-form symmetries. Demonstrating the stability (or otherwise) of these schemes away from fixed point models is an open problem: the computation scheme is based on symmetry principles alone, and (potentially fattened) string operators and defects that exist throughout the topological phase should suffice to perform topological quantum computation.
Finally, to complement the MBQC scheme, we suggested an alternative approach to TQC using symmetry defects of the \textbf{3F} anyon theory and code deformations of the 2D topological subsystem code due to Bomb\'{i}n. The 2-body nature of the gauge generators for the 2D subsystem code may be attractive for architectures with strong locality constraints or long two qubit gate times. Investigation into the error-correcting performance of the 2D topological subsystem code remains an important open problem in this direction.
\acknowledgements
We acknowledge inspiring discussions with Guanyu Zhu at the early stages of this project.
We also acknowledge useful discussions with Jacob Bridgeman.
DW acknowledges support from the Simons foundation.
<!DOCTYPE html>
<html>
<head>
<title>Active fields</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=edge"/>
<link rel="stylesheet" type="text/css" href="../../../codebase/fonts/font_roboto/roboto.css"/>
<link rel="stylesheet" type="text/css" href="../../../codebase/dhtmlx.css"/>
<script src="../../../codebase/dhtmlx.js"></script>
<script>
var myGrid, myDataProcessor;
function doOnLoad(){
// init grid and set its parameters (this part as always)
myGrid = new dhtmlXGridObject('gridbox');
myGrid.setImagePath("../../../codebase/imgs/");
myGrid.setColumnIds("sales,book,author,price,store,shipping,best,date");
myGrid.setHeader("Sales,Book Title,Author,Price,In Store,Shipping,Bestseller,Date of Publication");
myGrid.setInitWidths("80,150,100,80,80,80,80,100");
myGrid.setColAlign("right,left,left,right,center,left,center,center");
myGrid.setColTypes("dyn,ed,txt,price,ch,coro,ch,ro");
myGrid.setColSorting("int,str,str,int,str,str,str,date");
myGrid.enableAutoWidth(true);
myGrid.init();
myGrid.load("php/get.php"); // used just for demo purposes
// ============================================================================================
myDataProcessor = new dataProcessor("php/update.php"); // lock feed url
myDataProcessor.init(myGrid); // link dataprocessor to the grid
myDataProcessor.setTransactionMode("GET", false);
myDataProcessor.setDataColumns([false,true,true,false]); // only second and third columns will trigger data update
}
</script>
</head>
<body onload="doOnLoad()">
<p>Only the second and third columns will trigger data update</p>
<div id="gridbox" style="width:750px;height:350px;overflow:hidden"></div>
<p><a href="javascript:void(0)" onclick="myGrid.addRow((new Date()).valueOf(),[0,'','','',false,'na',false,''],myGrid.getRowIndex(myGrid.getSelectedId()))">Add row</a></p>
<p><a href="javascript:void(0)" onclick="myGrid.deleteSelectedItem()">Remove Selected Row</a></p>
</body>
</html>
Special Local Regulation; Tennessee River, Knoxville, TN
by the Coast Guard on 09/05/2017.
The Coast Guard is establishing a temporary special local regulation for all navigable waters of the Tennessee River from mile marker (MM) 641 to MM 648.7. This special local regulation is necessary to provide safety for the participants of the Bridges to Bluffs marine event in Knoxville, TN. This regulation prohibits vessels from entering into,...
Safety Zone; Dredging, Shark River, NJ
The Coast Guard is establishing a temporary safety zone on a portion of Shark River, in Neptune City, NJ, from September 5, 2017, through September 23, 2017, while dredging operations are being conducted in the main navigational channel. This safety zone is necessary to provide for the safety of life on navigable waters during dredging...
Safety Zone; Delaware River, Philadelphia, PA
The Coast Guard is establishing a temporary safety zone for multiple fireworks events launched in the vicinity of Penn's Landing in Philadelphia, Pennsylvania for the waters of Delaware River, Philadelphia, PA. Enforcement of this safety zone is necessary and intended to enhance safety of life on navigable waters immediately prior to, during,...
Safety Zone; Pacific Ocean, North Shore, Oahu, HI-Recovery Operations
The Coast Guard is establishing a temporary safety zone for the navigable waters of the North Shore of Oahu, Hawaii, near Ka'Ena Point. The temporary safety zone encompasses all waters extending 3 nautical miles in all directions from position 21[deg]34.88' N.; 158[deg]17.90' W. The safety zone is needed to protect personnel, vessels and the...
Safety Zone; Wando River, Charleston, SC
The Coast Guard is extending the duration of a temporary safety zone for navigable waters of the Wando River within a 500-yard radius of the SC-41 Bridge, vessels and machinery in Charleston, South Carolina. The safety zone is needed to ensure the safety of persons, vessels, and the marine environment from potential hazards created by demolition...
Safety Zone, Delaware River; Dredging
The Coast Guard is establishing temporary safety zones in portions of Bellevue Range, Marcus Hook Range, Anchorage 7 off Marcus Hook Range, Chester Range, and Eddystone Range, on the Delaware River, in Philadelphia, PA. The safety zone will temporarily restrict vessel traffic from transiting or anchoring in a portion of the Delaware River while...
Safety Zone; Atlantic Ocean, Ocean City, NJ
The Coast Guard is establishing a safety zone on the waters of the Atlantic Ocean adjacent to Ocean City, NJ on August 26, 2017. The safety zone will restrict vessel traffic from operating on a portion of Atlantic Ocean during a fireworks display. This safety zone is necessary to protect the public, spectators and vessels from the hazards...
Safety Zone; Village of Sodus Point Fireworks; Lake Ontario, Sodus Point, NY
The Coast Guard is establishing a temporary safety zone on Lake Ontario, Sodus Point, NY. This safety zone is intended to restrict vessels from portions of Lake Ontario during the Village of Sodus Point Fireworks display on September 2, 2017. This temporary safety zone is necessary to protect mariners and vessels from the navigational hazards...
Safety Zone; Marine Event Held in the Captain of the Port Long Island Sound Zone
The Coast Guard is establishing a temporary safety zone for a fireworks display within the Captain of the Port (COTP) Long Island Sound (LIS) Zone. This temporary final rule is necessary to provide for the safety of life on navigable waters during these events. Entry into, transit through, mooring or anchoring within the limited access area is...
Safety Zone; Mississippi River; New Orleans, LA
The Coast Guard proposes to establish a temporary safety zone for certain navigable waters of the Mississippi River. This action is necessary to provide for the safety of life on these navigable waters near New Orleans, LA, during a fireworks display on October 28, 2017. This proposed rulemaking would prohibit persons and vessels from being in...
Safety Zones; Ice Covered Waterways in the Fifth Coast Guard District
The Coast Guard is establishing 11 safety zones on certain navigable waters of the Fifth Coast Guard District. This action is necessary to promote navigational safety, provide for the safety of life and property, and facilitate the reasonable demands of commerce where a threat to navigation exists due to ice covered waterways. This rule is...
Safety Zone: PG&E Evolution, King Salmon, CA
The Coast Guard is establishing a temporary safety zone in the navigable waters of Humboldt Bay in King Salmon, CA in support of the Pacific Gas and Electric Evolution that will be effective on August 2, 2017 and on August 30, 2017. This safety zone is established to ensure the safety of workers, mariners, and other vessels transiting the area...
Special Local Regulation; Choptank River, Cambridge, MD
The Coast Guard is establishing special local regulations for
Safety Zone; Kaskaskia River, Evansville, IL
The Coast Guard is establishing a safety zone for all
Safety Zone; Port Huron Float-Down, St. Clair River, Port Huron, MI
The Coast Guard is establishing a temporary safety zone on the waters of the St. Clair River in the vicinity of Port Huron, MI. This zone is intended to restrict and control movement of vessels in a portion of the St. Clair River. Though this is an unsanctioned, non- permitted marine event, this zone is necessary to provide for the safety of...
Safety Zone; St. Marys River, Sault Ste. Marie, MI
The Coast Guard is establishing a temporary safety zone for navigable waters within a 200-yard radius of the position of the grounded vessel, M/V CALUMET on the north end of Sugar Island. The safety zone is needed to provide for the safety of life and property on the navigable waters during emergency salvage operations onboard a bulk carrier...
Safety Zone; Willamette River, Lake Oswego, OR
The Coast Guard is establishing a temporary safety zone for navigable waters of the Willamette River in the vicinity of George Rogers Park in Lake Oswego, OR. This action is necessary to provide for the safety of life on these navigable waters during a fireworks display on September 9, 2017. This regulation prohibits persons and vessels from...
Special Local Regulation, Islamorada Grand Prix of the Seas, Islamorada, FL
The Coast Guard is establishing a special local regulation on the waters of the Atlantic Ocean in the vicinity of Islamorada, FL during the Islamorada Grand Prix of the Seas high-speed boat race. Approximately 70 high-speed boats and personal watercraft are expected to participate in the race, in addition to spectators. The special local...
Special Local Regulation; Mobile River, Mobile, AL
The Coast Guard is establishing a temporary special local regulation on the Mobile River, Mobile, AL. The special local regulation is needed to protect the persons participating in the Rubber Ducky Regatta marine event. This rule restricts transit into, through, and within the regulated area unless specifically authorized by the Captain of the...
Safety Zone; Demolition of SC-41 Bridge, Wando River, Charleston, SC
The Coast Guard is establishing a temporary safety zone for navigable waters of the Wando River within a 500-yard radius of SC-41 Bridge, vessels and machinery in Charleston, South Carolina. The safety zone is needed to ensure the safety of persons, vessels, and the marine environment from potential hazards created by demolition work on the...
<?php
/**
* ALIPAY API: alipay.ebpp.product.search request
*
* @author auto create
* @since 1.0, 2014-06-12 17:16:51
*/
class AlipayEbppProductSearchRequest
{
/**
 * Business type
 **/
private $orderType;
/**
 * Sub-business type
 **/
private $subOrderType;
private $apiParas = array();
private $terminalType;
private $terminalInfo;
private $prodCode;
private $apiVersion="1.0";
public function setOrderType($orderType)
{
$this->orderType = $orderType;
$this->apiParas["order_type"] = $orderType;
}
public function getOrderType()
{
return $this->orderType;
}
public function setSubOrderType($subOrderType)
{
$this->subOrderType = $subOrderType;
$this->apiParas["sub_order_type"] = $subOrderType;
}
public function getSubOrderType()
{
return $this->subOrderType;
}
public function getApiMethodName()
{
return "alipay.ebpp.product.search";
}
public function getApiParas()
{
return $this->apiParas;
}
public function getTerminalType()
{
return $this->terminalType;
}
public function setTerminalType($terminalType)
{
$this->terminalType = $terminalType;
}
public function getTerminalInfo()
{
return $this->terminalInfo;
}
public function setTerminalInfo($terminalInfo)
{
$this->terminalInfo = $terminalInfo;
}
public function getProdCode()
{
return $this->prodCode;
}
public function setProdCode($prodCode)
{
$this->prodCode = $prodCode;
}
public function setApiVersion($apiVersion)
{
$this->apiVersion=$apiVersion;
}
public function getApiVersion()
{
return $this->apiVersion;
}
}
Polish Polar Research
Science and earth science
The quarterly Polish Polar Research edited by the Committee on Polar Research of the Polish Academy of Sciences is an international journal publishing original research articles presenting the results of studies carried out in polar regions.
All papers are peer-reviewed and published in English.
The Editorial Advisory Board includes renowned scientist from Poland and from abroad.
Polish Polar Research is indexed in Science Citation Index Expanded, Journal Citation Reports/Science Edition, Biological Abstracts, BIOSIS Previews, Cold Regions Bibliography, Antarctic Literature, Geological Abstracts, Polish Scientific Journals Contents - Agricultural and Biological Sciences, Quarterly Review, and Zoological Record.
We have been made aware of certain fraudulent activities that have been claiming to represent Polish Polar Research. These activities include a fake, predatory website and unsolicited emails. The aim of the fraud is to trick suspected authors/researchers into believing they are communicating with a journal editor in order to obtain their personal information, scientific results and/or money. Polish Polar Research's name, logo and other information have been used without permission to try to convey authenticity. If you have any concerns or see suspicious communications that reference Polish Polar Research, please report to Editors-in-Chief. Legitimate information regarding Polish Polar Research and its manuscripts can always be found on our website at http://journals.pan.pl/ppr/. We recommend that authors do not respond to any unsolicited offers of manuscript submissions nor enter any monetary agreement.
Polish Polar Research is an open-access journal in which archive issues are freely accessible and articles are published at no cost to authors.
ISSN 0138-0338, eISSN 2081-8262
Polish Academy of Sciences, Committee on Polar Research
Polish Polar Research | 2016 | vol. 37 | No 4 |
Steven Attenborough
Polish Polar Research | 2016 | vol. 37 | No 4 | 527-528 | DOI: 10.1515/popore-2016-0028
2 Freshwater mineral nitrogen and essential elements in autotrophs in James Ross Island, West Antarctica
Pavel Coufalík, Daniel Nývlt, Petra Prochazková, Ondřej Zvěřina, Kateřina Trnková, Kateřina Skácelová, Josef Komárek
Keywords: algae Antarctica Cyanobacteria nutrients
The lakes and watercourses are habitats for various communities of cyanobacteria and algae, which are among the few primary producers in Antarctica. The amount of nutrients in the mineral-poor Antarctic environment is in most cases a limiting factor for the growth of freshwater autotrophs. In this study, the main aim was to assess the availability of mineral nitrogen for microorganisms in cyanobacterial mats in James Ross Island. The nitrate and ammonium ions in the water environment were determined, as well as the contents of major elements (C, N, P, S, Na, K, Ca, Mg, Al, Fe, Mn) in cyanobacterial mats. The molar ratios of C:N, C:P and N:P in mats were in focus. According to the content of available mineral nitrogen in water and the biogeochemical stoichiometry of C:N:P, the growth of freshwater autotrophs does not seem to be limited by the level of nitrogen. The source of nutrients in the Ulu Peninsula is not obvious. Nitrogen fixation could enhance the nitrogen content in mats, as was observed in some samples containing Nostoc sp.
Pavel Coufalík
Daniel Nývlt
Petra Prochazková
Ondřej Zvěřina
Kateřina Trnková
Kateřina Skácelová
Josef Komárek
3 Micromorphology of modern tills in southwestern Spitsbergen – insights into depositional and post-depositional processes
Katarzyna Skolasińska, Grzegorz Rachlewicz, Witold Szczuciński
Keywords: Arctic microstructures postdepositional changes subglacial till supraglacial till Svalbard
Textural properties and microstructures are commonly used properties in the analysis of Pleistocene and older glacial deposits. However, contemporary glacial deposits are seldom studied, particularly in the context of post-depositional changes. This paper presents the results of a micromorphological study of recently deposited tills in the marginal zones of Hansbreen and Torellbreen, glaciers in southwestern Spitsbergen. The main objectives of this study were to compare modern tills deposited in subglacial and supraglacial conditions, as well as tills that were freshly released from ice with those laid down several decades ago. The investigated tills are primarily composed of large clasts of metamorphic rocks and represent coarse-grained, matrix-supported diamictons. The tills reveal several characteristic features for ductile (e.g. turbate structures) and brittle (e.g. lineations, microshears) deformations, which have been considered to be indicative of subglacial conditions. In supraglacial tills, the same structures are common as in the subglacial deposits, which points to the preservation of the primary features, though the sediment was transferred up to the glacier surface due to basal ice layer deformation and redeposited as slumps, or to formation of similar structures due to short-distance sediment re-deposition by mass flows. This study revealed that it might not be possible to distinguish subglacial and supraglacial tills on the basis of micromorphology if the latter are derived from a subglacial position. The only noted difference was the presence of iron oxide cementation zones and carbonate dissolution features in supraglacial tills. These features were found in tills that were deposited at least a few years ago and are interpreted to be induced by early post-depositional processes involving porewater/sediment interactions.
Katarzyna Skolasińska
Grzegorz Rachlewicz
Witold Szczuciński
4 Recent distribution of Echinodermata species in Spitsbergen coastal waters
Jan Marcin Węsławski, Maria Włodarska-Kowalczuk, Kajetan Deja, Tomasz Borszcz, Piotr Kukliński, Piotr Bałazy, Patrycja Kwiatkowska
Keywords: Arctic climate change Echinodermata fjords megabenthos species distribution
Thirty-two species of echinoderms from epibenthic sledges, dredges, scuba diving, and other samples (in total: 467 samples and c. 20 000 specimens) from fjords and coastal waters off Spitsbergen were analysed between 1996 and 2014. The most numerous group of echinoderms in the coastal waters off Spitsbergen is brittle stars (78% of the total individuals). The echinoderms do not form any clear assemblages with respect to depth, distance from glacial sedimentation, or substrate. Some species prefer hard bottom (Strongylocentrotus droebachiensis) or water free from glacial suspensions (Ophiopholis aculeata). In contrast to the species listed above, we also found opportunistic species such as the starfish Urasterias lincki and the brittle star Ophiocten sericeum. These two species are distributed quite uniformly, regardless of the environmental factors. The majority of the species prefer a soft bottom below 200 m.
Jan Marcin Węsławski
Maria Włodarska-Kowalczuk
Kajetan Deja
Tomasz Borszcz
Piotr Kukliński
Piotr Bałazy
Patrycja Kwiatkowska
5 Two centuries-long dendroclimatic reconstruction based on Low Arctic Betula pubescens from Tromsø Region, Northern Norway
Magdalena Opała, Krzysztof Migała, Piotr Owczarek
Keywords: Arctic birch dendroclimatology temperature reconstruction tree-ring chronology
This study presents the results of dendrochronological and dendroclimatological research on Betula pubescens from four sites in northern Norway (Kvaløya Island, Tromsøya Island and Storelva Valley), which provided a 193-year chronology. Our results highlight the importance of site selection in dendroclimatological studies. We demonstrated that the activity of geomorphic processes connected with local topography could lead to a reduced strength of the climatic signal embedded in tree-ring data. Negative pointer years, triggered mainly by unfavourable climatic conditions and insect outbreaks, were common to all site chronologies in 1945, 1955, 1965, 1975, 1986 and 2004. However, some site-specific differences were also distinguished. Response function analysis confirmed that June, July and August temperatures were positively correlated with tree-ring widths. This climate-growth relationship was stable throughout the years 1925-2000. The summer temperature reconstruction back to AD 1820 identified two colder (c. 1835-1850 and 1890-1920) and two warmer (c. 1825-1835 and 1920-1940) periods. The tree-ring record from the Tromsø Region, well correlated between series, sites and climate variables, is an important element of a large-scale reconstruction of pre-instrumental climate variation in the northeastern part of the Atlantic Ocean. Our dendroclimatic reconstruction corresponds well with other climate proxy data, such as fluctuations of mountain glaciers in Scandinavia and sea ice extent.
Magdalena Opała
Krzysztof Migała
Piotr Owczarek
6 Vegetation diversity and selected abiotic factors influencing the primary succession process on the foreland of Gåsbreen, Svalbard
Michał Węgrzyn, Maja Lisowska, Paulina Wietrzyk
Keywords: Arctic bryophytes colonisation glacier lichens vascular plants
The rapidly changing Arctic provides excellent opportunities for investigating primary succession on freshly deglaciated areas. Research on the Gåsbreen foreland (S Spitsbergen) traced the succession of particular groups of organisms and species, particularly lichens and bryophytes, and determined the effect of selected abiotic factors on this succession. Fieldwork in 2008 employed a continuous linear transect of phytosociological relevés (1 m2) along the foreland. Data analysis allowed us to distinguish five different succession stages and three types of colonisers. Canonical correspondence analysis and a permutation test showed that distance from the front of the glacier and fine-grained material in the substrate mostly influenced the distribution and abundance of vegetation, and that the steepness of the moraine hills affected the colonisation process, mainly in the older part of the marginal zone.
Maja Lisowska
Paulina Wietrzyk
Magdalena BŁAŻEWICZ (Life Sciences), University of Łódź, Poland
e-mail: magdalena.blazewicz@biol.uni.lodz.pl
Wojciech MAJEWSKI (Geosciences), Institute of Paleobiology PAS, Poland
e-mail: wmaj@twarda.pan.pl
Krzysztof HRYNIEWICZ (Warszawa),
e-mail: krzyszth@twarda.pan.pl
Piotr JADWISZCZAK (Białystok),
e-mail: piotrj@uwb.edu.pl
Krzysztof JAŻDŻEWSKI (Łódź),
e-mail: krzysztof.jazdzewski@biol.uni.lodz.pl
Monika KĘDRA (Sopot)
e-mail: kedra@iopan.gda.pl
Ewa ŁUPIKASZA (Sosnowiec)
e-mail: ewa.lupikasza@us.edu.pl
Piotr PABIS (Łódź),
e-mail: cataclysta@wp.pl
Angelika BRANDT (Hamburg),
Claude DE BROYER (Bruxelles),
Peter CONVEY (Cambridge, UK),
J. Alistair CRAME (Cambridge, UK),
Rodney M. FELDMANN (Kent, OH),
Jane E. FRANCIS (Cambridge, UK),
Andrzej GAŹDZICKI (Warszawa)
Aleksander GUTERCH (Warszawa),
Jacek JANIA (Sosnowiec),
Jiří KOMÁREK (Třeboň),
Wiesława KRAWCZYK (Sosnowiec),
German L. LEITCHENKOV (Sankt Petersburg),
Jerónimo LÓPEZ-MARTINEZ (Madrid),
Sergio A. MARENSSI (Buenos Aires),
Jerzy NAWROCKI (Warszawa),
Ryszard OCHYRA (Kraków),
Maria OLECH (Kraków)
Sandra PASSCHIER (Montclair, NJ),
Jan PAWŁOWSKI (Genève),
Gerhard SCHMIEDL (Hamburg),
Jacek SICIŃSKI (Łódź),
Michael STODDART (Hobart),
Witold SZCZUCIŃSKI (Poznań),
Andrzej TATUR (Warszawa),
Wim VADER (Tromsø),
Tony R. WALKER (Halifax, Nova Scotia),
Jan Marcin WĘSŁAWSKI (Sopot) - President.
Wojciech MAJEWSKI
phone: (48 22) 697 88 53
Instytut Paleobiologii
Polska Akademia Nauk
Magdalena BŁAŻEWICZ
Zakład Biologii Polarnej i Oceanobiologii Uniwersytet Łódzki
ul. S. Banacha 12/16
90-237 Łódź, POLAND
The quarterly Polish Polar Research invites original scientific papers, dealing with all aspects of polar research. The journal aims to provide a forum for publication of high quality research papers, which are of international interest.
Articles must be written in English. Authors are requested to have their manuscript read by a person fluent in English before submission. Manuscripts should not be longer than 30 typescript pages, including tables, figures and references. All papers are peer-reviewed. With the submitted manuscript, authors should provide the names, addresses and e-mail addresses of three suggested reviewers.
Submission of an article implies that the work described has not been published previously nor is under consideration by another journal.
No honorarium will be paid. The journal does not have article processing charges (APCs) nor article submission charges.
The contribution should be submitted as a Word file. It should be prepared in single-column, double-spaced format with 25 mm margins. Consult a recent issue of the journal for layout and conventions (journals.pan.pl/ppr). Prepare figures and tables as separate files. For computer-generated graphics, the editor Corel Draw is preferred. Line art images should be scanned and saved as bitmap (black and white) images at a resolution of 600–1200 dpi and tightly cropped. Computer versions of the photographs should be saved in TIFF format at a resolution of at least 400 dpi (non-interpolated). Maximal publication size of illustrations is 126 × 196 mm. A limited number of color reproductions in print is free of charge. Color artwork in PDF is free of charge.
Title should be concise and informative, no longer than 15 words. Abstract should have no more than 250 words. The authors are requested to supply up to 5 keywords. The references should be arranged alphabetically and chronologically. Journal names should not be abbreviated. Please, ensure that every reference cited in the text is also present in the reference list and vice versa. Responsibility for the accuracy of bibliographic citations lies entirely with the authors. References in the text to papers should consist of the surname of the author(s) followed by the year of publication. More than two authors should be cited with the first author's surname, followed by et al. (Dingle et al. 1998) but in full in the References.
ANDERSON J.B. 1999. Antarctic Marine Geology. Cambridge University Press, Cambridge: 289 pp.
BIRKENMAJER K. 1991. Tertiary glaciation in the South Shetland Islands, West Antarctica: evaluation of data. In: M.R.A. Thomson, J.A. Crame and J.W. Thomson (eds) Geological Evolution of Antarctica. Cambridge University Press, Cambridge: 629–632.
DINGLE S.A., MARENSSI S.A. and LAVELLE M. 1998. High latitude Eocene climate deterioration: evidence from the northern Antarctic Peninsula. Journal of South American Earth Sciences 11: 571–579.
SEDOV R.V. 1997. Glaciers of the Chukotka. Materialy Glyatsiologicheskikh Issledovaniy 82: 213–217 (in Russian).
SOBOTA I. and GRZEŚ M. 2006. Characteristic of snow cover on Kaffiøyra's glaciers, NW Spitsbergen in 2005. Problemy Klimatologii Polarnej 16: 147–159 (in Polish).
Twenty-five reprints of each article published are supplied free of charge. Additional charged reprints can be ordered.
Please submit your manuscripts to Polish Polar Research via email to Editors-in-Chief:
Magdalena BŁAŻEWICZ (Life Sciences) magdalena.blazewicz@biol.uni.lodz.pl
Wojciech MAJEWSKI (Geosciences) wmaj@twarda.pan.pl
Polish Polar Research is covered by the following services:
AGRICOLA (National Agricultural Library)
Cabell's Directory
CABI (over 50 subsections)
Celdes
CNPIEC
Cold Regions Bibliography
Current Antarctic Literature
Elsevier - Geobase
Elsevier - Reaxys
Elsevier - SCOPUS
Polish Scientific Journals Contents
SCImago (SJR)
Summon (Serials Solutions/ProQuest)
TDOne (TDNet)
Thomson Reuters - Biological Abstracts
Thomson Reuters - BIOSIS Previews
Thomson Reuters - Journal Citation Reports/Science Edition
Thomson Reuters - Science Citation Index Expanded
Thomson Reuters - Zoological Record
Dom Wydawniczy ELIPSA, ul. Inflancka 15/198, 00-189 Warszawa, tel./fax 22 635 03 01, 22 635 17 85
Polish Polar Research is an open access journal with all content available with no charge in full text version. The journal content is available under the license CC BY-NC-ND 3.0: https://creativecommons.org/licenses/by-nc-nd/3.0/
The International Federation of Women in Legal Careers (in French, Fédération internationale des femmes des carrières juridiques) is an international non-governmental organization of women jurists founded in Paris, France, in 1929 with the aim of defending and promoting the rights of women and girls around the world.
The FIFCJ is inspired by the principles set out in the Charter of the United Nations, enshrined in the Universal Declaration of Human Rights and reaffirmed in the Convention on the Elimination of All Forms of Discrimination against Women.
The FIFCJ has held special consultative status with the United Nations Economic and Social Council since 1961, and is represented before other international organizations, such as the United Nations Educational, Scientific and Cultural Organization, the Food and Agriculture Organization of the United Nations, the International Labour Organization, the United Nations Children's Fund and the European Women's Lobby.
The organization's headquarters are currently located in Maputo, Mozambique, the place of residence of the current president.
History
The International Federation of Women in Legal Careers was born in 1929 from the shared will of five lawyers: two Frenchwomen, Agathe Dyvrande-Thévenin and Marcelle Kraemer-Bach, the Spaniard Clara Campoamor, the Estonian Vera Poska-Grüntal and the German Margarete Berent.
The first meeting took place in Paris, at the Musée Social, on 3 November 1929. From its origins, the Federation's objective was to improve the situation of women, children and the family. Agathe Dyvrande-Thévenin was the organization's first president and remained in office until 1953.
During the IV Congress of the FIFCJ, held in Paris from 17 to 22 July 1961 under the presidency of Giovanna Pratilii, a lawyer of the Venice Bar Association, it was announced that the organization had obtained special consultative status with the United Nations Economic and Social Council.
Present at the World Conference on Women in Beijing, in 1994, and at several historic events, such as the signing of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), the FIFCJ has taken positions and issued advisory opinions before various international organizations on matters related to the human rights of women and girls around the world.
Objectives
According to Article 2 of its Statute, the purpose of the FIFCJ is to:
Establish relations among women from all countries who practise or have practised legal careers, or who hold a law degree or any equivalent in their country.
Join efforts so that all careers are accessible to women without any discrimination.
Gather information on the legal, economic and social condition of women around the world.
Foster ties of friendship and solidarity among its members, as well as with other international associations.
Promote the study of law, specifically as it concerns the status of women.
Promote respect for and defence of the environment.
Work for the promotion and defence of human rights.
In short, to help spread the idea of peace in the world, an indispensable basis for achieving equitable and conscious progress for humanity.
Organization
According to its Statute, the FIFCJ is composed of three types of members: active members, associate members and honorary members. Both national associations and individual members may join as active members. The FIFCJ currently has the following active memberships:
The FIFCJ comprises four bodies: the General Assembly, the Council, the Bureau and the Finance Control Commission, whose powers are defined in the organization's Statute.
The following officers are elected every three years: president, secretary general, deputy secretary general, treasurer, deputy treasurer, members of the Council, members of the Bureau, members of the Finance Control Commission, president and secretaries of the General Assembly, and presidents of the permanent working commissions.
Meetings
The FIFCJ regularly organizes meetings on topics related to the rights of women and girls. The most recent have been:
XXIII Lisbon Congress (2018): The summit of women's human rights, organized by the Associação Portuguesa de Mulheres Juristas at the Faculty of Law of the University of Lisbon in Lisbon, Portugal, from 17 to 23 November 2018.
Extended Council of Maputo (2017): Empowerment of women in the workplace and in rural areas, organized by the Associação das Mulheres Moçambicanas de Carreira Jurídica at the Office of the Attorney General of the Republic in Maputo, Mozambique, from 7 to 9 November 2017.
Extended Council of Buenos Aires (2016): Women's bodies and fundamental rights: new challenges, organized by the Asociación Argentina de Mujeres de Carreras Jurídicas at the Colegio Público de Abogados de Capital Federal in Buenos Aires, Argentina, from 14 to 17 November 2016.
XXII Barcelona Congress (2015): Women's bodies and fundamental rights, organized by Dones Juristes at the Maritime Museum of Barcelona, Spain, from 14 to 16 October 2015.
Extended Council of Paris (2014): Women and citizenship, organized by the Association Française des Femmes des Carrières Juridiques at the Maison du Barreau in Paris, France, from 12 to 14 November 2014.
Extended Council of Rome (2013): Women's empowerment. Decision-making and women's participation in crisis resolution, organized by the Associazione Giuriste Italiane at Link Campus University in Rome, Italy, from 15 to 18 October 2013.
XXI Dakar Congress (2012): Peace: a guarantee of human rights, organized by the Association des Juristes Sénégalaises in Dakar, Senegal, from 12 to 16 November 2012.
Extended Council of Brasília (2011): Women's human rights: hunger for justice, organized by the Associação Brasileira das Mulheres de Carreira Jurídica in Brasília, Brazil, from 19 to 23 September 2011.
Extended Council of Buenos Aires (2010): Migrant women, organized by the Asociación Argentina de Mujeres de Carreras Jurídicas at the Faculty of Law of the University of Buenos Aires in Buenos Aires, Argentina, from 8 to 12 November 2010.
XX Paris Congress (2009): The right to peace, organized by the Association Française des Femmes des Carrières Juridiques in Paris, France, from 23 to 25 September 2009.
Extended Council of Maputo (2008): Women, peace and development, organized by the Associação das Mulheres Moçambicanas de Carreira Jurídica in Maputo, Mozambique, from 3 to 5 September 2008.
Extended Council of Lisbon (2007): War, women and law, organized by the Associação Portuguesa de Mulheres Juristas in Lisbon, Portugal, from 2 to 4 October 2007.
Declarations
Each FIFCJ meeting concludes with the adoption and dissemination of an official declaration in which the organization sets out its position on the topics addressed at that event. These declarations constitute the organization's jurisprudence. The most recent declarations are:
Declaration of Lisbon (2018)
Declaration of Maputo (2017)
Declaration of Buenos Aires (2016)
Declaration of Barcelona (2015)
Declaration of Paris (2014)
Declaration of Rome (2013)
Declaration of Dakar (2012)
Declaration of Brasília (2011)
Declaration of Buenos Aires (2010)
Declaration of Paris (2009)
Declaration of Maputo (2008)
Declaration of Lisbon (2007)
References
Bibliography
External links
Official website.
Grow empowers businesses to become data-driven and accelerate growth by aligning team objectives and inspiring strategic decisions. Grow data dashboards are the simplest way to unite data from hundreds of sources, including spreadsheets, databases and SaaS applications. With Grow's business intelligence software, enterprise-quality data insights are attainable for any business.
Aktumsyk may refer to:
Aktumsyk, a village in Aktogay District, Karaganda Region, Kazakhstan.
Aktumsyk, the former name of the Vasilyevskoye gold ore deposit.
Aktumsyk, a cape of the former Aral Sea in the territory of Karakalpakstan.
0 or higher are eligible for the HOPE scholarship. CSU offers two scholarships which are for ROTC Cadets only. Gene Pritchett is a 1991 graduate of Palmer College of Chiropractic where he was the Valedictorian. He has always had a strong interest in sports medicine and is … HCBS Transition Plan. Georgia begins process to address new regulations issued by CMS for Home and Community Based Services. The Centers for Medicare & Medicaid Services (CMS) have issued regulations that define the settings in which it is permissible for states to pay for Medicaid Home and Community-Based Services (HCBS), … Hastings Racecourse features a gaming floor with over 500 slot machines. For those seeking a unique Vancouver casino, Hastings Racecourse offers a great mix of live horse racing and casino slots. About YakoCasino. YakoCasino was founded by a group of casino enthusiasts with a simple mission: to breathe new life, colour and fun … Slot may refer to: A narrow opening in a machine or container into which something can be inserted, for example in a: Mail slot; Slot machine, a type of casino game; Vending machine slot, a machine that dispenses items such as snacks, beverages, alcohol, cigarettes, lottery tickets to customers automatically, after the customer inserts … Palladio's influence on American colonial architecture can be seen in homes and structures throughout the East Coast of the United States, including Thomas Jefferson's Monticello and the University of Virginia. Welcome to the Georgia Academy of General Dentistry. GAGD is the Georgia district branch of the Academy of General Dentistry. GAGD has a membership of about 1,000 general dentists and students across the state of Georgia.
The English word football may mean any one of several team sports (or the ball used in that respective sport), depending on the national or regional origin and location of the person using the word. So where English is a first language, the unqualified use of the word football is used to refer to the most popular code of football in that region. The … Treasures and Treasures handbag wholesaler offers wholesale handbags, luggage, flat wallets, and accessories at unbeatable prices. We offer a low minimum order and free shipping on orders over 300. Play the Wolf Run free video slot game from IGT without the need to register, download or install anything. Braselton, Georgia ~ It's Better in Braselton. Aerie Lane opens in Braselton (Braselton) The Braselton Buy Local campaign held a ribbon-cutting for Aerie Lane in the new Duncan Crossing shopping center recently. National Mounted Police Services, Inc. has been training officers and civilians alike since 2000. We provide mounted police training exactly the way it should be: through clear, confident direction with proven techniques. Schedule of Events: April 21st--Happy Hour at the Bayou Sports Bar 3pm till 7pm. May 5, 2018: 17th Annual Mud Bug Ball "Cinco Da Mayo in Memphis". This College Football TV Schedule is manually compiled from media sources, college websites, and a satellite program guide. Last updated: 05182018 College Gridiron 365 A blog about college football 365 days a year from every angle imaginable. San Bernardino County, CA mines, mine companies, mine owners and mine information. US-Mining provides information on mines, operators, and minerals mined in San Bernardino County, CA Riverside County, CA mines, mine companies, mine owners and mine information.
US-Mining provides information on mines, operators, and … Play the new Mine Blocks 3 game. Mine Blocks is an adventure game played with the mouse and keyboard. In this lite flash version of Minecraft your goal is to use the arrow keys, mouse, shift and control keys to explore the land and to mine … Official site of Holiday Inn Express & Suites Deadwood-Gold Dust Casino. Stay Smart, rest, and recharge at Holiday Inn Express - Best Price Guarantee. Page 3: A place for the great games that couldn't make it to the big leagues, the frontpage.
Zlatan Ibrahimović (Swedish pronunciation: [ˈslaːtan ɪbraˈhiːmʊvɪtɕ], Bosnian: [zlǎtan ibraxǐːmoʋitɕ]; born 3 October 1981) is a Swedish professional footballer who plays as a forward for LA Galaxy and the Sweden national team. Primarily a striker, he is a prolific goalscorer, who is best known for his technique, creativity, strength, ability in the air, … MTV2's Guy Code is the ultimate guy's guide to the laws of manhood.
Every bro knows the code. Some say guys are born with it, but not everyone follows the same set of guidelines. On Guy Code, we're putting people on … Guy Code has been discussed on screen in "Old School", "Jersey Shore" and inadvertently analyzed on … Looking for a quick comeback or insult?
Here's a few from the Old West sure to get the job done. Find out when and where you can watch Everybody Loves Raymond episodes with TVGuide's full tv listings - you'll never miss another moment from your favorite show. Quotes and Quote Subject Index - - A - B - C - D - E - F-G - H- A - B - C - D - E - F-G - H--I - J - K - L - M-N - O - P - Q--R - S - T - U - V - W - X - Y - Z - So this one time when I was 17 I went to my girlfriend's place to have fun with her and her best friend, who also brought her boyfriend.
We had stolen some beer from the fridge and were kinda buzzed, when my girlfriend suggested we play strip poker. Everyone seemed okay with it; I was still kinda worried, but they kept insisting and I … This generational list of Intel processors attempts to present all of Intel's processors from the pioneering 4-bit 4004 (1971) to the present high-end offerings, which include the 64-bit Itanium 2 (2002), Intel Core i9, and Xeon E3 and E5 series processors (2015).
The HP 17-bs013 laptop's 2TB hard drive gives them loads of room to store all their valuable class documents, while the Intel Core i3 dual-core processor and 8GB RAM team up to zip through papers, presentations, and more. There are two versions of the Core i3 and i5 offerings, one short (BNK) and one tall (BNH).
Seventh-generation Intel Core i3-7100U dual-core processor. For those of you just starting out and interested in learning how to play rugby, we have compiled 10 tips that will get you out on the pitch in no time. IChessU is the best online chess school for beginners to masters which provides easy-to-learn moves.
Part nr. format title imaged. 5" Aldus PageMaker for Macintosh 512k or XL Startup Disk: overwritten: 5.25" Aldus PageMaker Version 1.04 for Windows Build Disk. Location of the Czech Republic (dark green) in Europe (green & dark grey) in the European Union (green). Rogues are a versatile class, only really lacking in PvM skills but excelling in PvP and WoE. With their strip skills, Rogues can render most classes useless in … the department of defense has begun a 6 month study on the use of a digital bugle. the bugle is like any other and can be played by a bugler; however, when a […] ARVN Rangers, Biet Dong Quan VNAF Photographs, VNAF & ARVN untold stories. Generations Deluxe Autobot Drift (2010); An all-new mold of Drift that transforms into a sports car. Weapons consist of one long sword (made of rubbery plastic) and two short swords that store inside the side skirt armor/door panels. Wowhead's Blackrock Foundry Hub has everything from strategies for every boss to loot information to general raiding info. A level 100 contested raid. Wowhead is home to the unique Transmog Set database and we've got a popular set of transmog guides to help you find your perfect set. Sengoku Basara is more or less a copycat of Samurai Warriors, only created by Capcom. However, once you look past their similarities, it becomes clear that … VII Cloud Strife is the main protagonist in Final Fantasy VII and Final Fantasy VII: Advent Children, and also appears in the spin-off games of the Compilation of Final Fantasy VII, including Dirge of Cerberus -Final Fantasy VII- and Crisis Core -Final Fantasy VII- as a supporting character. The Kingsguard, also known poetically as the White Swords or white cloaks, are the royal bodyguards of the Iron Throne. Regarded as the kni.
Toledo Swords Japanese Samurai Swords Toledo Swords: Collectible Swords & Armory. Subscribe to our Knightly News Newsletter today and get a FREE Gift. Chimney Kings Convention Center. We invite you to meet the team. Jay Fisher - World Class Knifemaker, Quality Without Compromise: Maker's Mark: Ambrose Antique Guns, Antique Firearms, Antique Arms and Armour specializes in the sale of high quality original European and American firearms and weapons. These items date from the 16th through the mid-19th century. Swords are melee weapons which consist of a long blade attached to a handle called a hilt. Until the advent of firearms, swords were ubiquitous throughout the world and were among the main weapons employed by humans in warfare. Early life. Holmes was born in Toledo, Ohio. She is the youngest of five children born to Kathleen, a homemaker and philanthropist, and Martin Joseph Holmes, an attorney. Sherlock Holmes (ˈʃɜːrlɒk hoʊmz) is a fictional private detective created by British author Sir Arthur Conan Doyle. Referring to himself as a "consulting detective" in the stories, Holmes is known for his proficiency with observation, forensic science, and logical reasoning that borders on the fantastic, which he employs.
The numbers you give for Visa (13, 16) and Mastercard: none of them work with Netflix anymore. The video game's story takes place in Nippon and begins with a retrospective of events one hundred years before the game's historical setting; there, the narrator describes how Shiranui, a pure white wolf, together with the knight Nagi, fought against an eight-headed demon, Orochi, with the intention of saving the village. Apr 11, 2018: Katie Holmes Gets In On the Action, Plus Justin Bieber, Cardi B and More. Interactive and printable 16743 ZIP code maps, population demographics, Port Allegany PA real estate costs, rental prices, and home values. Gun Dogs Online - Hunting Dogs For Sale in our Classified Area. Dog Supplies, Training Articles, Dog Training Products for hunting dogs. Page 2 | Browse realtor.com® McKean County homes for sale and real estate today. Discover condos, townhomes and other properties in McKean County, PA. Median gross rent in 2016: 716. Recent home sales, real estate maps, and home value estimator for zip code 16915. Coudersport, PA … Apr 26, 2018: State police have made an arrest in the March 21st burglary of the Shongo General store in which thirteen long guns and five handguns were stolen. Potter County, Pennsylvania Land for Sale. Looking for rural homes and land for sale in Potter County, Pennsylvania? LandWatch.com has thousands of rural properties in Potter County, Pennsylvania, including hunting & fishing properties, cabins, land for sale and land auctions. Residential, commercial, camps and recreational properties for sale in Coudersport, Roulette, Port Allegany, Genesee, Ulysses, Austin, … WOW. 26 ACRES in the heart of NYS WINE COUNTRY.
To serve our European customers better we stock and ship out most of our product program from Denmark. You benefit from faster shipping, lower shipping rates and the elimination of the EUR14 import handling fee. I'm thinking of making a milling attachment for my Jet 1024 lathe. The plan is to remove the compound, fabricate an angle plate to attach where the compound was mounted on the cross slide, reinstall the compound on the angle plate, and attach a … M16 - Sako Extractor Milling Jig.
Reference: 2eb7b9df 295966, P5659244 Casino st gilles ile de la reunion New product This product is not stocked and will be manufactured to order. Please allow 6 to 8 weeks for delivery. Victor Machinery's online store for taps and dies, cutting and measuring tools, machine shop supplies. Find great deals on eBay for Milling Vise in Metalworking Vises. Shop with confidence.
MILANOBET ANDROID. Download the Milanobet Android app to your phone and place bets easily with the mobile app. Since betting apps are banned in Türkiye, follow how to install the app on your phone by clicking the install link during setup. Leati Joseph "Joe" Anoaʻi (born 25 May 1985) is an American professional wrestler and former professional Canadian football player.
He is a member of the Anoaʻi family. He wrestles professionally in WWE under the ring name Roman Reigns. Priority Meeting offers an efficient and intuitive video conferencing application at an affordable price. With no participant software to download, our system, accessible worldwide, is just what your business needs. The Russian Air Force (Russian: Военно-воздушные силы России) is responsible for protecting Russia against any kind of attack that may come from the air.
A court-imposed gag order prevented publication of further details of the case. Get The Times of Israel's Daily Edition by email and never miss our top stories. Free Sign Up. Feb 03, 2013 · A young man's suicide highlights issues in the treatment of A.D.H.D. as youths fake symptoms to feed their addictions to potentially dangerous stimulants.
It sounds like a homework problem out of a high school math book: What is the probability of rolling a pair of dice 154 times continuously at a craps table, without throwing a seven? Join Date Sep 2015 Posts 8 Post Thanks Like Thanks (Given) 0 Thanks (Received) 0 Likes (Given) 0 Likes (Received) 0 Mentioned 0 Post(s) Tagged 0 Thread(s) Apr 23, 2007 · How many cups of beer can you hit in a row?
Force the other guy to drink all his cups and you can humiliate him with a … Choosing the correct motherboard to match your processor is a crucial part of building a gaming PC. If you have, or are going to purchase, a Kaby Lake Core i7 7700K, a Skylake Core i7 6700K processor, a Haswell Refresh Core i7 4790K or any other Core i7 CPU, then read on as we provide an overview of the best motherboards for Intel Core i7 … Pre-chipset situation.
Early IBM XT-compatible mainboards did not have a chipset yet, but relied instead on a collection of discrete TTL chips by Intel… LGA 1150, also known as Socket H3, is a microprocessor socket used by Intel's central processing units (CPUs) built on the Haswell microarchitecture. This socket is also used by Haswell's successor, the Broadwell microarchitecture. Buy ASRock H81 PRO BTC R2.0 LGA 1150 Intel H81 HDMI SATA 6Gb/s USB 3.0 ATX Motherboard: Motherboards - Amazon.com FREE DELIVERY possible on … Supports New 4th and 4th Generation Intel Xeon / Core i7 i5 i3 Pentium Celeron Processors (Socket 1150); 100% All Solid Capacitor design; 4 Power Phase Design; Supports Dual Channel DDR3/DDR3L 1600; 1 PCIe 2.0 x16, 1 PCIe 2.0 x1; Graphics Output Options: D-Sub; Realtek Gigabit LAN; 5.1 CH HD Audio (Realtek … E3-1500 v5 C236 based SuperServer Socket FCBGA 1440 [ top] Built to deliver long lifespan performance for the latest in embedded, Virtual Hosted Desktop (VHD), Digital Recording System, and Media/Content Streaming applications, Supermicro uni-processor 1U rack servers and the newest Intel Xeon E3-1500 v5 family processors … Here are the best motherboards for Intel and AMD's top processors.
These boards offer great features, overclocking performance and pricing. Things To Do in Mississauga, ON: Discover the best activities in Mississauga with deals of 50-90% off every day along. Full-Day Niagara Summer Wine Tour for One, Two, or Four from Niagara Fun Tours (Up to 54% Off). My boyfriend, a comedian, took pleasure in telling me about rejection: how it came about, how to cope with dignity, how it had dangerous, possibly cancerous elements.
Guild Wars 2 is a 3D fantasy MMORPG published by NCsoft, set in the world of Tyria 250 years after the first Guild Wars game. Spirits of Mystery: Illusions for iPad, iPhone, Android, Mac & PC. Can you stop darkness from taking over the kingdoms? "Speakers Corner" has been added to the Probus Web Site to enable clubs to contribute names of potential speakers or find names of potential speakers. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,574 |
To reference this package in publications, please cite:
@inproceedings{zimmerman2017monolithic,
title={Monolithic Simulation of Convection-Coupled Phase-Change: Verification and Reproducibility},
author={Zimmerman, Alexander G and Kowalski, Julia},
editor={Schäfer, Michael and Behr, Marek and Mehl, Miriam and Wohlmuth, Barbara},
booktitle={Recent Advances in Computational Engineering},
volume={124},
series={Lecture Notes in Computational Science and Engineering (LNCSE)},
pages={177--197},
year={2017},
publisher={Springer}
}
Furthermore, consider citing [the FEniCS Project](https://fenicsproject.org/citing/).
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,733 |
package com.manolovn.cssdroid.parser.visitor;
import com.manolovn.cssdroid.parser.domain.FunctionNode;
import com.manolovn.cssdroid.parser.domain.PropertyNode;
import com.manolovn.cssdroid.parser.domain.SelectorNode;
import com.manolovn.cssdroid.parser.domain.ValueNode;
import com.manolovn.cssdroid.parser.domain.VariableNode;
import com.manolovn.cssdroid.parser.processor.Processor;
import com.manolovn.cssdroid.parser.processor.ProcessorFactory;
/**
* Visitor that evaluates function nodes; all other node types require no processing.
*/
public class EvalFunctionVisitor implements NodeVisitor {
@Override
public void visit(FunctionNode node) {
// resolve the processor registered under this function's name
Processor processor = ProcessorFactory.getFunctionByName(node.getName());
// evaluate the function and store the result back on the node
node.setValue(processor.eval(node));
}
@Override
public void visit(PropertyNode node) {
// no-op: only function nodes need evaluation
}
@Override
public void visit(SelectorNode node) {
// no-op
}
@Override
public void visit(ValueNode node) {
// no-op
}
@Override
public void visit(VariableNode node) {
// no-op
}
}
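The visitor above acts only on `FunctionNode` and treats every other node type as a no-op. A minimal sketch of the same dispatch idea in plain JavaScript (not the CSSDroid API — the node constructors and the `processors` lookup table below are hypothetical stand-ins for `ProcessorFactory`):

```javascript
// Node types standing in for FunctionNode / ValueNode.
function FunctionNode(name, args) {
  this.name = name;
  this.args = args;
  this.value = null;
}
function ValueNode(value) { this.value = value; }

// Hypothetical processor registry playing the role of ProcessorFactory.
var processors = {
  darken: function (node) { return 'darkened(' + node.args.join(',') + ')'; }
};

// Visitor that evaluates function nodes; other node types are no-ops.
var evalFunctionVisitor = {
  visitFunction: function (node) {
    var processor = processors[node.name];
    if (processor) {               // guard against unknown function names
      node.value = processor(node);
    }
  },
  visitValue: function (node) {
    // no-op: plain values need no evaluation
  }
};

var fn = new FunctionNode('darken', ['#336699', '10%']);
evalFunctionVisitor.visitFunction(fn);
evalFunctionVisitor.visitValue(new ValueNode('10px'));
// fn.value === 'darkened(#336699,10%)'
```

The payoff of the pattern is that a new evaluation pass means adding one visitor object rather than editing every node class.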
| {
"redpajama_set_name": "RedPajamaGithub"
} | 330 |
The Compact Oxford English Dictionary of Current English is a one-volume dictionary published by Oxford University Press. It is intended for family or upper secondary school readerships. The third edition (revised), published in 2008, has 1,264 pages, somewhat smaller than the Concise Oxford English Dictionary, and is distinct from the "Compact" (single- and two-volume photo-reduced) editions of the multi-volume Oxford English Dictionary.
Publications
Compact Oxford English Dictionary of Current English
Third edition revised (): Includes over 150,000 words, phrases, and definitions.
?th impression (2008-06-19)
Compact Oxford Thesaurus
Third edition revised (): Includes over 300,000 synonyms and antonyms.
?th impression
References
External links
Compact Oxford English Dictionary of Current English
Oxford University Press pages: Third edition revised
Compact Oxford English Dictionary of Current English from the OUP catalogue
Compact Oxford Thesaurus
Oxford University Press pages: Third edition revised
Oxford dictionaries
English dictionaries | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,167 |
<BroadcastMonitor>
<updated>2015-08-11T09:03:46</updated>
<stationName>Radio 6</stationName>
<Current>
<startTime>2015-08-11T09:03:46</startTime>
<itemId>1000080624:1245267</itemId>
<titleId>216108185</titleId>
<itemCode></itemCode>
<itemReference></itemReference>
<titleName>*** ZL Classic ACA</titleName>
<artistName></artistName>
<albumName></albumName>
<CategoryCode>756</CategoryCode>
<CategoryName>EERST JAAP</CategoryName>
<itemDuration>2</itemDuration>
<MultiMedia1></MultiMedia1>
<MultiMedia2></MultiMedia2>
<MultiMedia3></MultiMedia3>
<MultiMedia4></MultiMedia4>
</Current>
<Next>
<startTime>2015-08-11T09:03:48</startTime>
<itemId>1000080624:1244135</itemId>
<titleId>216049311</titleId>
<itemCode>MS1263</itemCode>
<itemReference></itemReference>
<titleName>Never Give Up On A Good Thing</titleName>
<artistName>George Benson</artistName>
<albumName></albumName>
<CategoryCode>736</CategoryCode>
<CategoryName>SOUL</CategoryName>
<itemDuration>218</itemDuration>
<MultiMedia1></MultiMedia1>
<MultiMedia2></MultiMedia2>
<MultiMedia3></MultiMedia3>
<MultiMedia4></MultiMedia4>
</Next>
</BroadcastMonitor>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,160 |
/*global define */
define(
[
"./lb.base",
"./lb.base.template",
"./lb.base.object",
"./lb.base.type"
],
function(
lbBase,
lbBaseTemplate,
object,
type
) {
// Declare aliases
var no = lbBase.no,
or = lbBase.or,
has = object.has,
is = type.is,
// Private fields
// PARAM_REGEXP - regular expression, format of parameters to replace:
// - ASCII letters and digits: a-zA-Z0-9
// - special characters intended as separators: \_\-\.
// - surrounded by hash signs: #...#
// - no white-space allowed
PARAM_REGEXP = /#([a-zA-Z0-9\_\-\.]+)#/g;
function withValuesFrom(data){
// Function: withValuesFrom([data]): function
// Get a closure function that gets values of properties in data.
//
// This method is intended for use in combination with replaceParams(),
// to get a filter to replace parameters in a string template with values
// from given data:
// | var filter = replaceParams( withValuesFrom(data) )
//
// Parameter:
// data - object, optional, properties for parameter replacement, which
// may be nested in sections and subsections. Defaults to {}.
// Example:
// | {
// | section: {
// | subsection: {
// | name: 'value'
// | }
// | }
// | }
//
// Returns:
// function, a closure wrapped around the given data, with the following
// signature:
// | Function: getDataValue(key): any
// | Get the value of a property, possibly nested, in wrapped data.
// |
// | Parameter:
// | key - string, the key identifying a property, which may be:
// | * a string referring to the name of a property: 'name'
// | * a dotted string for a nested property: 'section.name'
// |
// | Returns:
// | * any, the value of corresponding property, if found
// | * null otherwise
data = or(data, {});
return function(key){
var properties = data,
path = key.split('.'),
pathElement,
i,
length;
for (i=0,length=path.length; i<length && properties; i++){
pathElement = path[i];
if ( has(properties,pathElement) && i===length-1 ){
return properties[pathElement];
}
properties = properties[pathElement];
}
return null;
};
}
function replaceParams(getValue){
// Function: replaceParams(getValue): function
// Get a filter function to replace parameters in a string template.
//
// The parameters to replace are surrounded by '#' characters, and
// allow the following characters in the name:
// - letters in the ranges a-z and A-Z
// - numbers 0-9
// - symbols '_' and '-', intended as word separators
// - dot character '.' for properties nested in sections and subsections,
// e.g. 'section.subsection.name' which reference the property at the
// following location in the data object:
// | {
// | section: {
// | subsection: {
// | name: 'value'
// | }
// | }
// | }
//
// Parameters for which no value is found are left unreplaced.
//
// Parameter:
// getValue - function, a getter function returning values for the
// replacement of parameters:
// | function(name): any
// The name argument is the name of the parameter to replace.
// The getter value should return string values when a
// matching property is found, and null otherwise.
//
// Returns:
// * function, a closure wrapped around the given getter function, with
// the following signature:
// | Function: filter(string): string
// | Replace parameters in given string with values from wrapped getter.
// |
// | Parameters:
// | string - string, the template string with parameters to replace
// |
// | Returns:
// | string, a string computed from the template string by replacing
// | named parameters with corresponding values returned by getValue()
// * null when the required getter argument is missing or not a function
if ( !is(getValue,'function') ){
return null;
}
return function(string){
return string.replace(PARAM_REGEXP, function(match,param){
var value = getValue(param);
if ( no(value) ){
// no replacement found - return unreplaced param
return match;
} else {
return value;
}
});
};
}
// Assign to lb.base.template.string
// for backward-compatibility in browser environment
lbBaseTemplate.string = { // public API
withValuesFrom: withValuesFrom,
replaceParams: replaceParams
};
return lbBaseTemplate.string;
}
);
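To see the two exported helpers in action without an AMD loader, here is a self-contained sketch that inlines the same logic as `withValuesFrom()` and `replaceParams()` above (simplified: the module's `or`/`has`/`is` aliases are replaced with plain JavaScript equivalents):

```javascript
// Same parameter format as the module: #name#, with dots for nesting.
var PARAM_REGEXP = /#([a-zA-Z0-9\_\-\.]+)#/g;

// Getter closure over (possibly nested) data, as in withValuesFrom().
function withValuesFrom(data) {
  data = data || {};
  return function (key) {
    var properties = data,
        path = key.split('.'),
        i;
    for (i = 0; i < path.length && properties; i++) {
      if (i === path.length - 1 &&
          Object.prototype.hasOwnProperty.call(properties, path[i])) {
        return properties[path[i]];
      }
      properties = properties[path[i]];
    }
    return null;
  };
}

// Filter factory, as in replaceParams(): unknown params stay unreplaced.
function replaceParams(getValue) {
  if (typeof getValue !== 'function') { return null; }
  return function (string) {
    return string.replace(PARAM_REGEXP, function (match, param) {
      var value = getValue(param);
      return (value === null || value === undefined) ? match : value;
    });
  };
}

var filter = replaceParams(withValuesFrom({
  name: 'World',
  section: { subsection: { name: 'value' } }
}));

var greet = filter('Hello #name#, from #section.subsection.name#!');
var untouched = filter('Missing #nope# stays.');
// greet === 'Hello World, from value!'
// untouched === 'Missing #nope# stays.'
```

Unknown parameters such as `#nope#` are deliberately left as-is, matching the documented behavior of the filter.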
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,475 |
{"url":"https:\/\/www.planetminecraft.com\/data-pack\/tables-and-chairs-v1-1-3-1-13-x\/","text":"# Tables and Chairs V4.3 [ 1.16 + ]\n\n\u2022 check_circle Functions\n\u2022 check_circle Loot Tables\n\u2022 check_circle Predicates\n\u2022 check_circle Recipes\n\u2022 157,446 views, 337 today\n\u2022 297\n\u2022 327\n\u2022 294\nRequired Resource Pack\nchuckchuk\nLevel 60 : High Grandmaster Gent\n125\n\nJoin the Datapacks Discord:\n\nUpdate V4.3.4\n\n(Look at Update Notes for More Detailed Info In Versions)\n\nINSTALLATION:\n- To install:\n- Singleplayer:\n- Place it into your resourcepack folder\n- (In game go to [ Options > Resource Packs > Open Pack Folder])\n- For servers:\n\n- (It will always automatically update too)\n\nCrafting:\n\nSawmill:\n\nFurniture Hammer:\n\nPlace all the following materials in a Sawmill to create Tables and Chairs Like So:\n\u2022 Oak Planks\n\u2022 Spruce Planks\n\u2022 Birch Planks\n\u2022 Jungle Planks\n\u2022 Acacia Planks\n\u2022 Dark Oak Planks\n\u2022 Crimson Planks\n\u2022 Warped Planks\n\u2022 Obsidian\n\u2022 Block of Quartz\n\u2022 Polished Blackstone\n- Throne Only:\n\u2022 Iron Block\n\u2022 Gold Block\n\u2022 Diamond Block\n\u2022 Netherite Block\n\nPlacement:\n\nPlace the Tables and Chairs by simply right clicking them onto the floor.\n\nTables:\nYou can place items on the tables by right clicking them with:\n\n- Carpet places a mantel over the table\n- (Shift Right Clicking places the carpet itself now)\n- Light sources such as torches, lanterns, or end_rods to be placed on the table\n- (Can be placed inside Candelabras)\n\n- Ingots\/diamonds to create candelabras of that type\n\n- Item Frames for an Invisible Item Frame\n- (Shift Right Click to create a Visible Item Frame)\n- Hold an item frame near a table to show particles over Invisible Item Frames\n\n- Ink and Quill\n- Place an Ink_Sac on a table for an inkwell\n- Place a feather on a table with an inkwell to create ink and quill\n- Each Rotate depending on the 
direction you're facing\n\nChairs:\nYou can place carpet under chairs and thrones to place a cushion on them\n- (Shift Right Clicking carpet under a chair will now keep the carpet instead of placing a cushion)\n\nMOVE CHAIR:\n\nPlayers with Op privileges can Move chairs in half-block increments by using this command next to a chair:\n\/function tac:chair\/move\/forward\n\/function tac:chair\/move\/backwards\n\/function tac:chair\/move\/left\n\/function tac:chair\/move\/right\nThis will allow you to make chairs that look like they're tucked under the table, or chairs that are centered on even-sized tables\n\nV4.3.0\nPunching a chair while Shifting will move the chair in the direction you're facing, but will stop at walls and will come back out when already pushed into a table.\n\nOP FEATURES:\n\nMOB TROPHY:\nIn V4.2.0 There are now \"Mob Trophies\" that can be placed on tables\n- Only way to currently obtain these trophies is with the commands:\n- \/loot give @s loot mob_trophy:silverfish\n- \/loot give @s loot mob_trophy:creeper\n- \/loot give @s loot mob_trophy:enderman\n- Place them on a table by right clicking them on a table, and then you can put ANY colored glass block on to make a glass case\n\nRemoval:\n\nTo Break Tables:\n\n- Hit the top of the table until it breaks (Break easier with Axes)\n- The items will drop from it, including the table cloth, and the table itself.\nTo Break Chairs:\n- Hit the chair 2 (or more) times to break them.\n- Use a Furniture Hammer to break multiple at once (More on this below)\n- The cushions will drop from the chairs, as well as the chairs themselves.\n\nFurniture Hammer:\nRight click tables with a furniture hammer in hand to cycle through the different shapes that each table has to offer!\n- 4 Leg Tables:\n- Cycle 0: 4 Legs\n- Cycle 1: 2 Legs (closest to you)\n- Cycle 2: 1 Leg (In the Diagonal you are facing)\n- Cycle 3: 0 Legs\n- 1 Leg Tables (Basic):\n- Cycle 0: 1 Middle Leg\n- Cycle 1: Leg moves closer to you\n- Cycle 2: 
Leg moves to the edge and is split in half (For walls or to be used with another of the same type in the opposite direction)\n- Cycle 3: No leg\n- 1 Leg Tables (Carved):\n- DEPENDS! They each have pretty unique cycles, try them out and see what you can get!\n\nHit Chairs with a Furniture Hammer to use a sweep attack that will take out up to 5 chairs at once.\n\nTexture Pack Compatible:\nThis datapack is compatible with any other resourcepack! Just make sure that the \"Chuck's Resource Pack\" is at the top priority on the resourcepack list.\n\nUpdating from V3 to V4:\nTo update the tables and chairs simply break the V3 chairs\/tables and replace them.\nMake sure to disable V3.0 first\n\nIf you want to keep both types, you can simply leave both resourcepacks on at the same time and keep the old tables\/chairs in tact\n- The only exception is the \"jungle tables\/chairs\" from V3 that would break visually\n\nOPTIFINE: \u2003\u2003(GREAT NEW FEATURE) - [\u200bNOT REQUIRED]\nHaving optifine installed will now hide the pig saddles from the chairs, without impacting any other textures whatsoever.\n\nIf you do not have optifine installed you will see pig saddles beneath the chairs (and are also visible in chair variants that have holes on them) But everything else works perfectly well without optifine!\n\nSome Known Problems:\n1. Datapacks Don't seem to work well with the \"Fabric\" mod. Though some people have it working! So I'm not sure\n2. Haven't checked in a while, but on servers there might be some datapack issues with multiverse mods\n3. Some other datapacks are poorly made and give you all advancements\/recipes, those datapacks will make it so you keep on getting furniture hammers and sawmills.\n4. 
Some big texturepacks (like dokucraft) do something weird with their \"Shulkerbox\" texture, so the seat cushions will sometimes still apear to be 16x16 vanilla textures.\n\nExtra:\n- Apparently there's a funny glitch that can have your seats moving, I'm going to keep it in for the time being though.\n\nNotes to the fans:\nI really hope you all enjoy this new version! The old version is available below still if you prefer the older more intricate models.\nThis update took me about three months to create, as it was from the ground up and tried to make SURE it was clean all the way through, plus all the many many models I had to create, and re-create. The new world I created to playtest the stuff alone has 5.12 days of playtime on it, I have really wanted to make an update for everyone who enjoys this pack, and I especially wanted to thank all those loyal to this pack for the last 2 years. To you all, I hope you enjoy this new update to the fullest! I may be adding small little things in the future, and pushing bug fixes for the next few months, but I think this will be the last large update for this pack.\n\nSome Future update thoughts:\n- 1.17 Copper Tables\/Chairs\n- 1.17 Candle Placement on Tables\n- More items for the tables (plates, quills, paper, books, etc)\n+ Added mob trophies in 4.2.0\n- A way to move the chairs without commands\n\nInteractive Tables and Chairs V3.2.0 and Earlier [1.14]\nGET VERSION 3.3.0 HERE: [\u200bFor 1.14]\nV3.3.0 <-----HERE\n\nCreate and Place Tables and Chairs into your Minecraft world with this datapack! Very simple to install, and uses innovative techniques to make it as fluid and natural as currently possible with datapacks! 
You can also Place any type of wool color onto your tables or chairs.\n\nOLDER VERSION\n=================================================\nVersion 2.0.1 (1.13 compatible)\n\nThe Version is compatible with versions 1.13 only\n==================================================\n\nVideos:\nOMGMinecraft's Showcase\nWattles Showcase\n\nJoin the MCDatapacks Discord: Discord Link\n- We Also Take Datapack Comissions\n\nCrafting:\n\n[L] = Slab\n[S] = Stair\n[F] = Fence\n[B] = Block (Gold or Quartz or Obsidian) (Obsidian is V3.0 + only)\n[P] = Pillar\n[e] = empty\/air\n\nCrafting Tables:\n\nTables: (Any Wood Type)\n[e][e][e]\n[L][L][L]\n[F][e][F]\n\nMarble Table: (Quartz)\n[e][e][e]\n[L][\u200bL][\u200bL]\n[e][P][e]\n\n-V3.0 --------------------------------------\nObsidian Table:\n[e][e][e]\n[L][L][L]\n[e][L][e]\n\nCrafting Chairs:\n\nChairs: (Any Wood Type)\n[L][e][e]\n[L][L][e]\n[F][F][e]\n\nFancy Chairs: (Any Wood Type)\n[S][e][e]\n[S][S][e]\n[F][F][e]\n\nThrones: (Quartz and Gold and Obsidian)\n[B][e][e]\n[B][B][B]\n[B][e][B]\n\n-V3.0 --------------------------------------\nChairs: (Marble and Obsidian)\n[B][e][e]\n[B][B][e]\n[e][e][e]\n\nFurniture Hammer:\n\nThe Furniture Hammer allows you to change the shape of tables by right clicking on them when you have the furniture hammer in your hand. 
The table will change shape depending on what type of table it is, and what direction you are facing.\n\n[S] = Stick\n[II] = Iron Ingot\n\nCrafting the Furniture Hammer:\n\n[II][S][II]\n[e][S][e]\n[e][S][e]\n\nPlacement:\n\nTo place your tables and chairs, simply right click them onto any ground or surface.\n\nTables\n:\nPlacing Wool on Tables: \u2003\u2003\u2003\u2003Shift-Right click on a table with a carpet to place a carpet\nPlacing Torch on Tables:\u2003\u2003\u2003\u2003Right click a table with a Torch to create a standing torch.\nPlacing a Candelabra:\u2003\u2003\u2003\u2003\u2003 Right click a table with an iron or gold ingot.\nChange Table Shape:\u2003\u2003\u2003\u2003\u2003 Right click a table with a Furniture Hammer to change shape.\n\nChairs:\nPlacing Wool on Chairs:\u2003\u2003\u2003\u2003 Right click a carpet on the surface underneath the chair.\n\nRemoval:\n\nTo remove Tables or Chairs, just break them with your fist.\n\nTables:\nTo remove a table:\u2003\u2003\u2003\u2003\u2003\u2003 Break the table with your fist or Axe (or Pickaxe if it's a Marble Table)\n\nChairs:\nTo remove a chair:\u2003\u2003\u2003\u2003\u2003\u2003 Punch the chair 2-3 times in a row. 
A Furniture Hammer can be used as well.\n\nConfig\/Sitting:\nThe Configuration allows for two types of sitting:\n\/function tables_chairs:config\/sit\/custom\n- Sit by Shifting on a chair\n- Get off by Shifting\n- [NO SIT ANIMATION]\n\n\/function tables_chairs:config\/sit\/pigseat\n- Sit by right clicking a chair\n- Get out by Shifting\n\nIt's as easy as that!\n\nhttps:\/\/www.dropbox.com\/s\/peyjbs4031293pb\/Chuck%20RP%20V5.4.zip?dl=1\n**Copy that link and put it into your server resourcepack section, in the \"server.properties\" file**\n\nNOTE:\n- The resourcepack also works with my Interactive Bookshelves Pack.\n\n(Just make sure you're using the most recent Resourcepack)\n\n- If you're on a server, you might want to enforce a server resourcepack, or tell everyone to manually install the resourcepack.\n Credit Chuckchuk, TheWii, Domain2Genus, TheSaltyPug, RedstoneGamez Compatibility Minecraft 1.16 to Minecraft 1.17 Snapshot Tags\n\n## 8 Update Logs\n\nUpdate V4.3 : 04\/22\/2021 4:53:01 pmApr 22nd\n\nUpdate V4.3.4 (April 22)\nFeatures:\n\n- Added a Lore Description for Carved Variants\n- Now Carved Variants describe how they're unique per-material\n\nFixes:\n- Tables and Chairs are Placed Quicker and cleaner than before\n- Placed 1 tick quicker\n- No longer display a Oak Chair or Oak Table Top for a split second when placing them\n\nPatch V4.3.3 (April 21)\n\nFixes:\n- Fixed a very simple error in tac:table\/item\/remove_tag_hand where a space was missing\n\nPatch V4.3.2 (April 11)\nFixes:\n- Introduced a fix to a bug that could send you to the void while sitting on a chair. 
That is no longer possible.\n\nUpdate V4.3.1 (April 10)\n\nFeatures:\n- Added Inkwell (Place an Ink Sac on table)\n- Added Ink and Quill (Place a feather on a table with an inkwell)\n\nChanges:\n- Now you can rotate some items on tables depending on the direction you're facing when you place the item.\n\nFixes:\n- Spam clicking a table with an item will no longer place\/drop the item after the first time placed, if it's the same material.\n\nUpdate V4.3.0 (April 6)\n\nFeatures:\n- Push Chairs in Survival!!\n- Now you can push chairs while in survival by shifting and punching a chair.\n- You can push chairs 0.5 blocks\n- This can allow you to center a chair on a 2-block table for example\n- Or tuck a chair into a table.\n- Chairs which have their path blocked will not move\n- Chairs that are tucked into tables that are pushed further into the table will instead come out backwards towards you. (Untuck)\nFixes:\n- Just a few minor fixes here n' there\n\n1\n05\/08\/2021 3:34 pm\nLevel 1 : New Miner\nGrimsonEderson\nthe server didn't worked for some reason\n1\n05\/09\/2021 1:40 pmhistory\nLevel 60 : High Grandmaster Gent\nchuckchuk\nYou have to put the datapack into the datapack folder of your world, and the resourcepack you can copy the link and use that as the server resourcepack link in server.properties\n\n2\n05\/04\/2021 11:02 pm\nLevel 1 : New Miner\nBedrockIsCringe\nDamn i did not know datapacks could get this far\n2\n05\/05\/2021 12:00 am\nLevel 60 : High Grandmaster Gent\nchuckchuk\nWhy thank you! What wonderful praise!\n3\n05\/04\/2021 4:34 pm\nLevel 1 : New Miner\nspiritcraft21\nHey so I would like to talk to the creator of this amazing data pack, there are some things I would like custom added for my server and am willing to pay for it to be done. If you could please dm me on discord we can talk prices and everything. Spiritualpanda7#0463\n1\n05\/04\/2021 8:58 pm\nLevel 60 : High Grandmaster Gent\nchuckchuk\nHey! 
I sent a friend request just now, I won't be able to do things for a little bit, but I can certainly try to do it once my finals end!\n1\n05\/02\/2021 6:40 am\nLevel 4 : Apprentice Miner\nEnderEyeJohanBeloso\nwhy didnt it work for me??\n1\n05\/04\/2021 12:46 am\nLevel 60 : High Grandmaster Gent\nchuckchuk\nI'm not sure, how did you try installing it?\n2\n04\/30\/2021 12:39 am\nLevel 17 : Journeyman Princess\npink_pariah\nAny plans for benches? Or maybe chairs that can have their shapes changed with the hammer like how the tables work? I want to have some benches outside of the library I'm helping make.\n1\n05\/01\/2021 2:59 am\nLevel 60 : High Grandmaster Gent\nchuckchuk\nBenches would be fun, but the problem is I don't really have a good way to do their shape changes, since right clicking a chair just makes you sit. I could do stuff with carrot on a stick, but I'm specifically staying away from that item, plus when you right click you'll sit on the chair anyway. Detecting Shift Right Click could theoretically work, but it'd be a bit of a pain for the user, and would only confuse them since shift right clicking a table doesn't work.\n\nBenches have certainly been a plan though, I just don't know how to implement them.\n1\n04\/28\/2021 8:49 pm\nLevel 1 : New Miner\nlightman11\ni have a request , can you make a TV or else ?\n1\n05\/04\/2021 12:50 am\nLevel 60 : High Grandmaster Gent\nchuckchuk\nhow threatening!\n1\n04\/27\/2021 4:21 pm\nLevel 1 : New Miner\nUser3571434G\nI love this furniture data pack that I can implement in my multiplayer server. I have one issue with this data pack though. The hammer and sawmill keep on dropping repeatedly. How do I fix this problem?\n4\n04\/27\/2021 5:14 pm\nLevel 1 : New Miner\nEnduum\nIt's caused by another datapack you're using, here's one of the author's earlier comments:\n\"Because there's some datapacks that are poorly made that just give you all advancements and\/or recipes. 
\section{Introduction}
\label{sec:1}
Simulation of very complex physical phenomena becomes a realistic endeavour
with the latest advances in hardware technologies, sophisticated (numerical)
algorithms, and efficient parallelisation strategies. It consists of modelling
a domain of a physical problem, applying appropriate boundary conditions, and
doing numerical approximation for the governing equations, often with a linear
or non-linear system as outcome. When the system is solved, the result is
validated and visualised for more intuitive interpretations.
All the aforementioned stages -- pre-processing, computation, and
post-processing -- can be very time consuming, depending, e.\,g., on the
discretisation parameters, and, moreover, are traditionally carried out as a
sequence of steps. The ever-increasing range of specialists in developing
engineering fields has created the need for an interactive approach to the
computational model. This requires real-time feedback from the simulation
during program runtime while experimenting with different simulation setups.
For example, the geometry of the simulated scene can be modified interactively
together with boundary conditions or a distinct feature of the application;
thus, the user can gain ``insight concerning parameters, algorithmic
behaviour, and optimisation potentials''~\cite{Mulder.1999}.
Interactive computing frameworks, libraries, and Problem Solving Environments
(PSEs) are used by specialists to interact with complex models, while not
requiring deep knowledge in algorithms, numerics, or visualisation techniques.
These are user-friendly facilities for guiding the numerically approximated
problem solution. The commonly agreed features are: a sophisticated user
interface for the visualisation of results on demand and a separated steerable,
often time- and memory-consuming simulation running on a high-performance
computer (see Fig.~\ref{fig:1}).
\begin{figure}[h]
\includegraphics[scale=0.5]{fig/pic_1.pdf}
\sidecaption
\caption {A user guides an often time- and memory-consuming simulation via a
graphical user interface in order to build a solution to his/her problem.}
\label{fig:1}
\end{figure}
The concept has been present in the scientific and engineering community
already for more than two decades. Meanwhile, numerous powerful tools serving
this purpose have been developed. A brief overview of some state-of-the-art
tools -- steering environments and systems such as CSE~\cite{vanLiere.1996},
Magellan~\cite{Vetter.1997}, SCIRun~\cite{Parker.1998},
Uintah~\cite{deStGermain.2000}, G-HLAM~\cite{Rycerz.2006} and
EPSN~\cite{Nicolas.2007}, libraries such as CUMULVS~\cite{Geist.1997} or
RealityGrid~\cite{Brooke.2003, Pickles.2004}, and frameworks such as
Steereo~\cite{Jenz.2010} -- is provided in the next section. Those tools differ
in the way they provide interactive access to the underlying simulation codes,
using check- and breakpoints, satellites connected to a data manager, or data
flow concepts, e.\,g., hence they cannot always fully exploit interactive
computing and are usually of limited scope concerning different application
domains.
\section{Computational Steering -- State of the art}
CSE~\cite{vanLiere.1996} is a computational steering environment consisting of
a very simple, flexible, minimalistic kernel and modular components, so-called
satellites, where all the higher level functionality is pushed. It is based on
the idea of a central process, i.\,e.\ a data manager to which all the
satellites can be connected. Satellites can create and read/write variables,
and they can subscribe to events such as notification of mutations of a
particular variable~\cite{Mulder.1995}. The data manager informs all the
satellites of changes made in the data and an interactive graphics editing tool
allows users to bind data variables to user interface elements.
CUMULVS~\cite{Geist.1997} is a library that provides steering functionality so
that a programmer can extract data from a running (possibly parallel)
simulation and send those data to the visualisation package. It encloses the
connection and data protocols needed to attach multiple visualisation and
steering components to a running application during execution. The user has to
declare in the application which parameters are allowed to be modified or
steered, or the rules for the decomposition of the parallel data, etc. Using
check-pointing, the simulation can be restarted according to the new settings.
In the steering system called Magellan~\cite{Vetter.1997}, steering objects are
exported from an application. A collection of instrumentation points, the
so-called actuators, knows how to change an object without disrupting
application execution. Pending update requests are stored in a shared buffer
until an application thread polls for them~\cite{Vetter.1997}.
EPSN~\cite{Nicolas.2007} API is a distributed computational steering
environment, where an XML description of simulation scripts is introduced to
handle data and concurrency at instrumentation points. There is a simple
connection between the steering servers, i.\,e.\ simulation back ends, and
clients, i.\,e.\ user interfaces. Upon receiving a request, the server
determines its date; thus, the request is executed as soon as it fulfills a
condition. Reacting to a request means releasing the defined blocking points.
Steereo~\cite{Jenz.2010} is a light-weight steering framework, where the client
can send requests and the simulation will respond to them. However, the
requests are not processed immediately, but rather stored in a queue and
executed at predefined points in the simulation code. Hence, users have to
define when and how often this queue should be processed.
The RealityGrid~\cite{Brooke.2003, Pickles.2004} project has provided a highly
flexible and robust computing infrastructure for supporting the modelling of
complex systems \cite{RealityGrid.2003}. An application is structured into a
client, a simulation, and a visualisation unit communicating via calls to the
steering library functions. This infrastructure also involves the insertion of
check- and break-points at fixed places in the code, where changed parameters
are obtained and the simulation is to be restarted.
In the SCIRun~\cite{Parker.1998} problem solving environment (PSE) for
modelling, simulation, and visualisation of scientific problems, a user may
smoothly construct a network of required modules via a visual programming
interface. Computer simulations can then be executed, controlled, and tuned
interactively, triggering the re-execution only of the necessary modules, due
to the underlying dataflow model. It allows for extension to provide real-time
feedback even for large scale, long-running, data-intensive problems. This PSE
has typically been adopted to support pure thread-based parallel simulations so
far. Uintah~\cite{deStGermain.2000} is a component-based visual PSE that builds
upon the best features of the SCIRun PSE, specifically addressing the massively
parallel computations on petascale computing platforms.
In the G-HLAM~\cite{Rycerz.2006} PSE, the focus is more on fault tolerance,
i.\,e.\ monitoring and migration of the distributed federates. The group of
main G-HLAM services consists of one which coordinates management of the
simulation, one which decides when the performance of a federate is not
satisfactory and migration is required, and a third which stores information
about the location of local services. It uses distributed federations on the
Grid for the communication among simulation and visualisation components.
All of those powerful tools, however, either have a limited scope of
application or involve major changes to the simulation code in order to be
effective. This was the motivation for us to design a new framework that
incorporates the strong aspects of the aforementioned tools, yet overcomes
their weak aspects, in order to provide a generic concept for a plenitude of
different applications with minimal code changes and a maximum of
interactivity.
Within the Chair for Computation in Engineering at Technische Universit\"{a}t
M\"{u}nchen, a series of successful Computational Steering research projects
took place in the previous decade. It has also involved collaboration with
industry partners. Performance analysis has been done for several interactive
applications, in regard to responsiveness to steering, and the factors limiting
performance have been identified. The focus at that time was on interactive
computational fluid dynamics (CFD), based on the Lattice-Boltzmann method,
including a Heating Ventilation Air-Conditioning (HVAC) system
simulator~\cite{Borrmann.2005}, online-CFD simulation of turbulent indoor flows
in CAD-generated virtual rooms~\cite{Wenisch.2004}, interactive thermal comfort
assessment~\cite{vanTreeck.2007}, and also on structure mechanics --
computational methods in orthopaedics. Over time, valuable observations and
experience have resulted in significant reduction of the work required to
extend an existing application code for steering.
Again, the developed concepts have been primarily adapted to this limited
number of application scenarios; thus, they leave room for further
investigations so as to become more efficient, generic, and easy to implement.
This is where our framework comes into play.
\section{The Idea of the Framework}
For widening the scope of the steerable applications, an immediate response of
any simulation back end to the changes made by the user is required. Hence, the
regular course of the simulation has to be interrupted as soon as a user
interacts. Within our framework, we achieve this using the software equivalent
of hardware interrupts, i.\,e.\ signals. The check for updates is consequently
done in small, user-defined, cyclic intervals, i.\,e., within a function
handling the Unix ALARM signal.
If the check does not indicate any update from the user side, the simulation
gets the control back and continues from the state saved at the previous
interrupt-point. Otherwise, the new data is received, matched to the simulation
data (which is the responsibility of the user himself), and simulation state
variables (for instance loop delimiters) are manipulated in order to make the
computation stop and then automatically start anew according to the user
modifications. Taking the pseudocode of an iteratively executed function
(within several nested loops) as an example, the redundant computation is
skipped as soon as the end of the current, innermost loop iteration is
reached. This is, namely, the earliest opportunity to compare the values of
the simulation state variables and, if the result of the comparison indicates
so, exit all the loops (i.\,e.\ starting with the innermost one and finishing
with the outermost one)~\cite{Knezevic.2011a, Knezevic.2011b, Knezevic.2010}.
This would exactly mean starting the computation over again, as illustrated in
the pseudocode:
\begin{verbatim}
begin function Signal_Handler()
X_end = Y_end = -1
end
Set_Alarm()
begin function Compute()
for t = T_start to T_end do
for i = X_start to X_end do
for j = Y_start to Y_end do
Process(data[i][j])
od
od
od
end
\end{verbatim}
As elaborated in~\cite{Knezevic.2010}, to guarantee the correct execution of a
program, one should use certain type qualifiers (provided by ANSI C, e.\,g.)
for the variables which are subject to sudden change due to interrupts.
One should ensure that objects which are modified both in the signal handler
and in the main computation are updated in an atomic way. Furthermore, if a
value has been changed in the signal handler, an outdated copy of it in a
register must not be used again; instead, the new value has to be loaded from
memory. Such stale copies may arise from common compiler optimisations. In
addition, sufficient steps have to be taken to prevent potentially severe
memory leaks before the new computation is started, since an interrupt may
occur before memory allocated at a certain point has been released.
Finally, once either one or several iterations have finished without an
interrupt, the new results are handed on to the user process for
visualisation. Once more, it is the user's responsibility to prescribe to the
front-end process how to interpret the received data so that it can be coherently
visualised~\cite{Knezevic.2011a, Knezevic.2011b, Knezevic.2010, Knezevic.2011c,
Knezevic.2011d}.
In Fortran, similar to C/C++, support for signal handling can be enabled at
user level with minimal effort. Some vendor-supplied Fortran implementations,
including, for example, Digital, IBM, Sun, and Intel, provide an extension
that allows the user to do signal handling as in C~\cite{Baloai.2008}. Here, a
C wrapper function overriding the default signal behaviour has to be
implemented. However, the behaviour of the Fortran extension of the
aforementioned function is implementation dependent; if the application is
compiled using an Intel Fortran compiler, the program will terminate when
interrupted unless one ``clears'' the previously defined action first.
Due to the accuracy requirements and the increasing amount of data which has to
be handled in numerical simulations of complex physical phenomena nowadays,
there is an urge to fully exploit the general availability and increasing CPU
power of high-performance computers. For this, in addition to efficient
algorithms and data structures, sophisticated parallel programming methods are
a constraint. The design of our framework, therefore, takes into consideration
and supports different parallel paradigms, which results in an extra effort to
ensure correct program execution and avoid synchronisation problems when using
threads, as explained in the following subsection.
\subsection{Multithreading Parallelisation Scenario}
We consider the scenario when pure multithreading (with, e.\,g., OpenMP/POSIX
threads) is employed in the computations on the simulation side. Since a random
thread is interrupted via signal at the expiration of the user-specified
interval, that thread probes, via the functionality of the Message Passing
Interface (MPI), if any information regarding the user activity is available.
If the aforesaid checking for a user's message indicates that an update has
been sent, the receiving thread instantly obtains all the information and
applies necessary manipulations in order to re-start the computation with the
changed setting. Hence, all other threads also become instantly aware that
their computations should be started over again and must now proceed in a way
in which clean termination of the parallel region is guaranteed.
\subsection{``Hybrid'' Parallelisation Scenario}
In a ``hybrid'' parallel scenario (i.\,e.\ MPI and OpenMP -- see
Fig.~\ref{fig:2}), a random thread in each active process is interrupted and
thus gets an opportunity to check for updates. The rest of the procedure is
similar to the one described for pure multithreading, except that now all the
processes have to be explicitly notified about the changes performed by a
user. This may involve additional communication overhead. Moreover, if one
master process, which is the direct interface of the user's process to the
computing-nodes, i.\,e.\ slaves, is supposed to inform all of them about the
user updates, it may become a bottleneck. Therefore, a hierarchical
non-blocking broadcast algorithm for transferring the signal to all computing
nodes has been proposed in~\cite{Knezevic.2011a, Knezevic.2011b}.
\begin{figure}[h]
\includegraphics[scale=0.30]{fig/Picture2_3.pdf}
\sidecaption
\caption {In the case of a hybrid parallel scenario, each process is doing its
own checks for updates; one random thread per process is interrupted in small,
fixed time intervals.}
\label{fig:2}
\end{figure}
\section{Applications}
In the following, a few application scenarios are presented, where the
implemented framework has been successfully integrated. First, a simple 2D
heat conduction simulation, used only for testing purposes, in which heat
sources, the boundaries of the domain, etc.\ can be interactively modified.
Then, a neutron transport simulation developed at the Nuclear Engineering
Program, University of Utah, which has been the first Fortran test case for the
framework. The next one is the sophisticated Problem Solving Environment SCIRun
developed at the Scientific Computing and Imaging (SCI) Institute, University
of Utah. The final one is a tool for pre-operative planning of hip-joint
surgeries, done as a collaborative project of the Chair for Computation in
Engineering and Computer Graphics and Visualization at Technische
Universit\"{a}t M\"{u}nchen. A summary of necessary modifications of the
original codes in order to integrate the framework is discussed in
section~\ref{subsection:effort}.
\subsection{Test Case 1 -- A Simple Heat Conduction Simulation}
{\bf{Simulation:}} For proof of concept, we consider as a first, very simple
example a 2D simulation of heat conduction in a given region over time. It is
described by the Laplace equation, whose solutions are characterised by a
gradual smoothing of the starting temperature distribution by the heat flow
from warmer to colder areas of a domain. Thus, different states and initial
conditions will tend toward a stable equilibrium. After numerical treatment of
the PDE via a Finite Difference scheme, we come up with a five-point stencil.
The Gauss-Seidel iteration method is used to solve the resulting linear system
of equations.
{\bf{GUI/Visualisation:}} For interacting with the running simulation, a
graphical user interface is provided using the wxWidgets
library~\cite{wxWidgets}. Variations of the height along the z-axis, pointing
upward, represent the variations of the temperature in the corresponding 2D
domain. Both the simulation and the visualisation are implemented in C++ and
run as separate MPI processes.
{\bf{User interaction:}} When it comes to the interplay with the program during
the simulation, there are a few possibilities available -- one can
interactively add, delete, or move heat sources, add, delete, or move boundary
points of the domain, or change the termination condition (maximal number of
iterations or error tolerance) of the solver. As soon as a user interacts, the
simulation becomes immediately aware of it and consequently the computation is
restarted. An instant estimation of the equilibrium state for points of the
domain far away from the heat sources is unfortunately not always feasible on
the finest grid used (300$\times$300). This may be the case due to the short
intervals between two restarts in case of too frequent user interaction, as
shown in Fig.~\ref{fig:3}. Here we profit from a hierarchical approach.
\begin{figure}[h]
\includegraphics[scale=0.4]{fig/Picture6.pdf}
\caption {Left: an initial scenario; right: moving heat sources/boundaries
leads to a restart of the computation, but between two closely spaced restarts
a user is unable to estimate the equilibrium temperature in the region farther
away from the heat sources (here, further iterations of the solver would be
necessary).}
\label{fig:3}
\end{figure}
The {\bf{hierarchical approach}} is based on switching between several grids of
different resolutions depending on the frequency of the user interaction. At
the beginning, the finest desired grid is used for the computation. When the
simulation process is interrupted by an update, it restarts the computation
with the new settings, but on a coarser grid for faster feedback, i.\,e.\ to
provide new results as soon as possible. As long as the user is frequently
interacting, all computations are carried out on the coarser grids only. If the
user halts, i.\,e.\ stops interacting, the computation switches back to the
finest grid in order to provide more accurate values. In this particular test
case, three different grids were used: the initial 300$\times$300 grid; a four
times smaller, intermediate one (150$\times$150) in case of a lower pace of
interactions, e.\,g.\ adding/deleting heat sources or boundary points; and,
finally, the coarsest one (75$\times$75) for very high-frequency movement of
boundary points or heat sources over the domain (Fig.~\ref{fig:4}). The
coarser grids are not meant for obtaining quantitative solutions, just for a
fast qualitative idea of how the solution might look. If a user has
interactively found an interesting setup, he just has to stop, and an accurate
solution for this setup will be computed. Nevertheless, measurements
concerning the different grids showed that the variation of the solution on
the finest grid compared to the intermediate one is around 4.5\%, and compared
to the coarsest one around 14.6\%. The described approach, on the other hand,
leads to an improvement in convergence by a factor of 2.
\begin{figure}[h]
\includegraphics[scale=0.3]{fig/Picture7_2.pdf}
\caption {Switching to a coarser grid in case of moving heat sources or
boundaries, switching back to the finer one once the user stops interacting.}
\label{fig:4}
\end{figure}
Additionally, we employ a multi-level algorithm -- the results of the
computation on the coarsest grid are not discarded when switching to the finer
one. Our concept, namely, already involves a hierarchy of discretisations, as
is the case in multigrid algorithms; thus, we can profit from the analogous
idea. Our scheme is somewhat simpler -- it starts with the solution on the
coarsest grid and uses the result gained there as an initial guess for the
solution on a finer one. A set of examples has been tested (with grids
300$\times$300, 150$\times$150, and 75$\times$75) where the number of necessary
operations on the intermediate and fine grids could be halved. What seems to
be a somewhat obvious approach, at least for this simple test scenario, can be
efficiently exploited in our fourth test case, where we use hierarchical
Ansatz functions for an interactive finite-element computation of a biomedical
problem.
Enabling the framework functionality for interrupting the above simulation
takes an experienced user a couple of hours at most. Implementing the
hierarchical approach (which is not part of the framework) is more time
consuming (a few working days), since an optimal automatic detection of when
to switch from one hierarchy to another has to be found---which requires
numerous experiments.
\subsection{Test Case 2 -- A Neutron Transport Simulation}
We present as second test case the integration of the framework into a
computationally efficient, high accuracy, geometry independent neutron
transport simulation. It makes researchers' and educators' interaction with
virtual models of nuclear reactors or their parts possible.
{\bf{Simulation:}} AGENT (Arbitrary GEometry Neutron Transport) solves the
Boltzmann transport equation, both in 2D and 3D, using the Method of
Characteristics (MOC)~\cite{Lee.2004}. The motivation for steering such a
simulation during runtime comes mostly from the geometric limitation of this
method, which requires fine spatial discretisation in order to provide an
accurate solution to the problem. On the other hand, a good initial solution
guess would help tremendously to speed-up the convergence, and this property is
used to profit from our framework. The 3D discretisation basis for the
Boltzmann equation consists of a discrete number of planes, for each of which
both a regular geometry mesh and a number of parallel rays in a discrete number
of directions are generated. The approximation results in a system of equations
to be iteratively solved for discrete fluxes.
{\bf{GUI/Visualisation:}} The result in terms of the scalar fluxes is
simultaneously calculated and periodically visualised. The simulation server
maintains a list of available simulation states, and clients connect using the
ImageVis3D volume rendering tool~\cite{Fogal.2010} to visualise the results in
real time. Users can interfere with the running simulation via a simple console
interface, providing the new values of the desired parameters.
{\bf{User interaction:}} Instant response of the simulation to the changes made
by the user is again achieved via signals. Using the technique described as our
general concept, the outermost iteration instantly starts anew as soon as its
overall state is reset within the main computational steering loop, according
to the updated settings and necessary re-initialisation of the data. By
manipulating only two simulation parameters in the signal handler, the
iteration restarts within about a second in all cases --
e.\,g.\ 20 planes in z-direction, each discretised by a 300$\times$300 grid and
36 azimuthal angles (where one outermost iteration alone lasts
approximately 500 seconds). The effort to integrate our framework into this
application depends on whether the re-allocation of the memory and
re-initialisation of the data is required, and if one wants to re-use the
values from the previous iterations~\cite{Knezevic.2012a, Knezevic.2012b}.
{\bf{Hierarchical and multilevel approach:}} It is likely, similar to the heat
conduction scenario, that the user wants to accelerate the convergence by
starting calculations with lower accuracy (i.\,e.\ a smaller number of
azimuthal angles, see Fig.~\ref{fig:5}) and to preserve and re-use some of the
values from the previous calculation as an initial guess for the
higher-accuracy solution. For a
conceptually similar algorithm, such as the previously described multilevel
approach in the 2D heat conduction simulation, we have seen that our framework
has given promising results. The re-initialisation of the data for this, most
challenging, scenario is a part of imminent research.
\begin{figure}[h]
\includegraphics[scale=0.5]{fig/Picture13.pdf}
\caption {Experimenting with different numbers of azimuthal angles, small
values are given to simplify the picture.}
\label{fig:5}
\end{figure}
To briefly conclude on this application scenario, the integration of the
framework has been straightforward and also not very time consuming. After
examining the initial code, deciding which variables to register within the
framework, and writing reinitialisation routines, it has taken a few hours to
couple the components together and enable visualisation after each iteration.
The major effort actually concerns the re-initialisation of variables at the
beginning of each ``new'' computation, i.\,e.\ after a user interaction, which
is also not a responsibility of the framework itself.
\subsection{Test Case 3 -- Extension of a Problem Solving Environment}
{\bf{Simulation:}} As mentioned before, SCIRun is a PSE intended for
interactive construction, debugging, and steering of large-scale, typically
parallel, scientific computations~\cite{Shepherd.2009}. SCIRun simulations are
designed as networks of computational components, i.\,e.\ modules connected via
input/output ports. This makes it very easy for a programmer to modify a module
without affecting others. Although SCIRun is already a mature, sophisticated
environment for computational steering, our goal is to improve it in a way
that real-time feedback for extensive time- and memory-consuming simulations
becomes possible. Currently, SCIRun needs to finish an update first
before new results are shown, which easily can lead to long latencies between
cause and effect.
{\bf{GUI/Visualisation:}} For the user, it is possible to view intermediate
results after a pre-defined number of iterations, while the calculations
continue to progress. At some point, he may want to influence the current
simulation setup. Different options such as parameter modification for each
module are available via corresponding interfaces. Both the modified module and
modules whose input data is dependent on that module's output are stored in a
queue for execution. Our intention is to interrupt the module currently being
executed and skip the redundant cycles, as well as to remove any module
previously scheduled for execution from the actual queue.
{\bf{User Interaction:}} The concept has been tested on several examples to
evaluate the simulation response to the modifications during runtime. These
scenarios are: a simulation that facilitates early detection of acute heart
ischemia and two defibrillation-like simulations -- one on a homogeneous cube
and the other on a human torso domain. The challenges of getting an immediate
feedback/response of the simulation depend on a few factors -- the size of the
problem, the choice of the modified parameters within the simulation, etc. The
earlier in the execution pipeline the parameter appears, the more modules have
to be re-executed, thus, the more challenging it is to provide the real-time
response to the user changes. A user can define different discretisation
parameters for a FEM computation such as the mesh resolution for all spatial
directions. For solving the resulting linear system of equations, different
iterative solvers as well as pre-conditioners can be used; one may change
tolerances, the maximal number of iterations, levels of accuracy, as well as
other numerical or some more simulation-specific parameters. In the created
network of modules, typically the most laborious step is the SolveLinearSystem
module. Thus, the first challenge is how to interrupt it as soon as any change
is made by the user -- in particular, the changes done via UI to this module.
To achieve this in the algorithm of the linear equation solver, the maximal
number of iterations (a user interface variable) is manipulated in the signal
handler, so as to be set to some value outside of the domain of the iterator
index which interrupts the simulation as described before. The execute function
of this module also has to be re-scheduled afterward with the new user-applied
settings. However, one has to take care that the previous interrupted execution
of the same module is finished in a clean way before the execute function is
called anew (in order to trigger re-computation instantly). If one
chooses to emit the partial solution after each iteration, executions of
several visualisation modules are scheduled after each iteration, which would
take a few additional seconds per iteration. This is because after an
interrupted iteration the preview of old results has to be cancelled. The
execution of all modules, which would happen after SolveLinearSystem, has to be
aborted. The scheduler cancels the execution of all the scheduled modules that
have not yet begun by raising an exception. Changing any input
field of a module via its UI automatically triggers the re-execution of all the
modules following it in the pipeline.
\subsubsection{Tool for early detection of heart ischemia}
Myocardial ischemia is characterised by reduced blood supply of the heart
muscle, usually due to coronary artery disease. It is the most common cause of
death in most Western countries, and a major cause of hospital
admissions~\cite{Podrid.2005}. Early detection might prevent further
complications. The aim of this application is the generation of a quasi-static
volume conductor model of an ischemic heart, based on data from actual
experiments~\cite{Stinstra.2012}. The generation of models of the myocardium
is based on MR images/scans of a dog heart. The known values are extracellular
cardiac potentials as measured by electrodes on an isolated heart or with
inserted needles. The potential difference between the intracellular and
extracellular space which is being calculated is not the same for ischemic and
healthy cells. A network of modules is constructed within SCIRun to simulate
and then render a model of the transmembrane potential of a dog's myocardium in
experiments (Fig.~\ref{fig:6}).
\subsubsection{Defibrillation}
Defibrillation therapy consists of delivering a dose of electrical energy to
the heart with a device that terminates the arrhythmia and allows normal sinus
rhythm to be re-established by the body's natural pacemaker. Implantable
Cardioverter Defibrillators (ICDs) are relatively common, patient specific,
implantable devices that provide an electric shock to treat fatal arrhythmias
in cardiac patients~\cite{Steffen.2012}. By building a computational model of
a patient's body with ICDs and mapping conductivity values over the entire
domain, we can accurately compute how activity generated in one region would be
remotely measured in another region~\cite{Weinstein.2005}, which is exactly
what doctors would be interested in. First, we consider a simulation of the
electrical conduction on a homogeneous cube domain (Fig.~\ref{fig:6}) with two
electrodes placed within. Each of the electrodes is assigned a conductivity
value. The effect of changing those values is explored for both of the
electrodes. The second example helps to determine optimal energy discharge and
placement of the ICD in the human torso (Fig.~\ref{fig:6}). A model of the
torso into which ICD geometry is interactively placed is based on patient MRI
or CT data. Different solver-related parameters for the resulting system of the
linear equations, conductivity values, as well as mesh resolutions for a FEM
computation can be applied during runtime. This allows for previewing the
solution on a coarser grid and switching to finer ones, once the user is
satisfied with the current setting.
\begin{figure}[h]
\includegraphics[scale=0.4]{fig/Picture15.pdf}
\caption {Illustrated user interfaces for the tested simulation scenario.}
\label{fig:6}
\end{figure}
For a user to integrate the framework, the major effort has been related to
re-triggering the execution of all the needed modules when the user makes a
change. This has required a good understanding of the Model-View-Controller
pattern used. On the other hand, registering the variables which need to be
manipulated within the framework to interrupt the execution of the modules of
interest has required a negligible amount of time.
\subsection{Test Case 4 -- A Biomedical Application}
Another test case is an analysis tool which assists an orthopaedic surgeon in
optimal implant selection and positioning, based on the predicted response of
a patient-specific bone (femur) to an applied load. The tool consists of
two coupled components.
{\bf{Simulation:}} The first one is a simulation core, where the generated
models of femur geometry are based on CT/MRI-data and the computation is done
using the Finite Cell Method (FCM). FCM is a variant of high order
\emph{p}-FEM, i.\,e.\ convergence is achieved by increasing the polynomial
degree \emph{p} of the Ansatz functions on a fixed mesh instead of decreasing
the mesh sizes \emph{h} as in case of classical \emph{h}-FEM, with a fictitious
domain approach, as proposed in~\cite{Duester.2009}. With this method, models
with complicated geometries or multiple material interfaces can be easily
handled without an explicit 3D mesh generation. This is especially advantageous
for interactive computational steering, where this typically user-interaction
intensive step would have to be re-executed for each new configuration.
{\bf{GUI/Visualisation:}} The second component is a sophisticated visualisation
and user interface platform that allows the intuitive exploration of the bone
geometry and its mechanical response to applied loads in the physiological and
the post-operative state of an implant-bone in terms of stresses and strains
\cite{Dick.2008, Dick.2009}. Thus, after updating the settings -- either after
insertion/moving an implant, or testing a new position/magnitude of the forces
applied to the bone -- for each unknown a scalar value, i.\,e.\ the so-called
von Mises stress norm, can be calculated and the overall result sent to the
front end to be visualised.
Some of the challenges in developing such an analysis tool are described in
more detail in~\cite{Yang.2010, Dick.2009, Dick.2008}. We conveniently had the
described simulation and a sophisticated user interface with visualisation
module as a starting point. Due to the initial rigid communication pattern
between the two components, however, a new setting could be recognised within
the simulation only after the results for the previous, outdated, one have been
completely calculated and sent to the user. Consequently, the higher the
polynomial degree \emph{p} used, the longer it took until one could finally
perceive the effect of the last change. The integration of
our framework then comes into play not only to make the way the data is
communicated more suitable for this purpose, but also to enable interrupting
the simulation immediately and getting instant feedback following any user
interaction.
For the best performance, on the front end, the main thread (in charge of
fetching user interaction data and continuous rendering), the second thread (in
charge of collecting and sending updates in a timely fashion via non-blocking
MPI routines), and the third thread (dedicated to waiting to receive results
as soon as these are available), are not synchronised with one another. This
way, we tackle the problem of long delays that would occur if one thread is
responsible for everything and communication is blocked as long as the thread
is busy, which would hinder the user in (smoothly) exploring the effects of his
interaction.
On the simulation side, as mentioned before, a variant of FEM is used.
Mainstream approaches are
\begin{itemize}
\item \emph{h}-FEM: convergence due to smaller diameters \emph{h} of elements,
\item \emph{p}-FEM: convergence due to higher polynomial degrees \emph{p},
\item \emph{hp}-FEM: combining the aforementioned ones by alternating \emph{h}
and \emph{p} refinements,
\item \emph{rp}-FEM: a combination of mesh repositioning and \emph{p}
refinements,
\item $\ldots$
\end{itemize}
In our case, for the algebraic equations gained by the \emph{p}-version Finite
Element Method describing the behaviour of the femur, iterative solvers such as
CG or multigrid could not be efficiently deployed due to the poor condition
number of the system. To make the most of the simulation's performance potential,
a hierarchical concept based on an octree-decomposition of the domain in
combination with a nested dissection solver is used~\cite{Mundani.2007}. It
allows for both the design of sophisticated steerable solvers and for
advanced parallelisation strategies, both of which are indispensable within
interactive applications.
{\bf{User interaction:}} By applying a nested dissection solver, the most time
consuming step is the recursive assembly of the stiffness matrices, each
corresponding to one tree node, traversing the octree bottom up. Again,
cyclically-repeating signals are used for frequent checks for updates. If there
is an indicator of an upcoming message from the user side, this is recognised
while processing one of the tree nodes and the simulation variables are set in
a way which ensures skipping the rest of them. All the recursive assembly
function calls return immediately, and the new data is received in the next
step of the interactive computing loop (updating one or more of the leaf
nodes). Here, precious time has been saved by skipping all the redundant
calculations and, thus, calculating results only for an actual setting. As soon
as the whole assembly has been completed without a user interrupt, the result
in terms of stresses is sent back to the front end process for visual display.
However, there is an unavoidable delay of any visual feedback especially for
higher \emph{p} values, i.\,e.\ $p > 4$, in case of the used hardware and the
complexity of the geometric model. Namely, the time needed for a (full) new
computation is dramatically increasing in case of increasing \emph{p}. Thus, we
profit from a hierarchical approach one more time. The hierarchy exploited in
this approach refers to the usage of several different polynomial degrees
chosen by the user (Fig.~\ref{fig:7}). While the user's interplay with the
simulation is very intensive, he retrieves immediate feedback concerning the
effects of his changes for lower \emph{p}, being able to see more accurate
results (for higher \emph{p}) as soon as he stops interacting and lets one
iteration finish. In this case, the computation is gradually switched to higher
levels of hierarchy, i.\,e.\ from $p=1$ to $p=2$ to $p=4$ and so on. The number
of MPI program instances, being executed in parallel for different \emph{p} can
be chosen by the user. Detailed communication schemes can be found in
\cite{Knezevic.2011c, Knezevic.2011d}.
\begin{figure}[h]
\includegraphics[scale=0.4]{fig/bone.pdf}
\sidecaption
\caption {Direct transition from $p = 6$ to $p = 1$ as soon as the user changes
the force's magnitude and direction, inserts an implant or moves it, while
gradually increasing from $p = 1 \rightarrow p = 2 \rightarrow p = 4
\rightarrow \ldots$ as soon as the user diminishes or finally stops his
interaction. Hence, a qualitative feedback about stress distribution for $p =
1$ or $p = 2$ is received instantly, finer result for $p \geq 4$ on demand.}
\label{fig:7}
\end{figure}
To get several updates per second even for higher \emph{p} values, one has to
employ sophisticated parallelisation strategies. Custom decomposition
techniques (i.\,e.\ recursive bisection) in this scenario, as in case of long
structures such as a femur, typically hinder the efficient exploitation of the
underlying computing power as this leads to improper load distributions due to
large separators within the nested dissection approach. Thus, our next goal has
been the development of an efficient load balancing strategy for the existing
structural simulation of the bone stresses.
Task scheduling strategies typically involve a trade-off between a uniform
work load distribution among all processors and keeping both the
communication and optimisation costs minimal. For hierarchically organised
tasks with bottom-up dependencies, such as in our generated octree structure,
the number of processors participating in the computation decreases by a factor
of eight in each level, similar to the problem posed by Minsky for the parallel
summation of $2N$ numbers with $N$ processors in a binary tree
\cite{Knezevic.2012c}.
In interactive applications which assume the aforementioned frequent updates
from the user's side, the rapid changes within the simulation and the tasks'
states favour static over dynamic load balancing strategies. It would also
have to be taken into consideration that certain modifications performed by a
user may involve major changes of the computational model. In this case, for
repeatedly achieving the optimal amount of work being assigned to each process
for each new user update, the overhead-prone scheduling step has to be executed
each time. Therefore, an efficient, nevertheless simple to compute scheduling
optimisation approach is needed.
Since the scheduling problem, although solvable by polynomial-depth backtrack
search, is \emph{NP}-complete for most of its variants, efficient
heuristics have to be devised. In our case, the sizes of the tasks, as well as
the dependencies among them (given by the octree structure responsible for the
order of the nested dissection advance) have to be considered. When making
decisions, we consider (1) the level of the task dependency in the tree
hierarchy where children nodes have to be processed before their parent nodes;
(2) among equal tasks (i.\,e.\ of the same dependency level) we distinguish
between different levels in the tree hierarchy, calling this property the
processing order. If the depth of the tree is \emph{H}, tasks from level
\emph{M} in the tree hierarchy have the processing order of $H - M - 1$. Then
we form lists of priorities, based on these two criteria, since tasks inside
very long branches of the tree with an estimated bigger load should be given a
higher priority. Additionally, we resort to a so-called max-min order, making
sure that big tasks, in terms of their estimated number of floating-point
operations, are the first ones assigned to the processors. We also split a
single task among several processors when mapping tasks to processors, based on
the comparison of a task's estimated work with a pre-defined `unit' task. This
way, arrays of tasks, so-called `phases', are formed, each phase consisting of
as many generated tasks as there are computing resources. Namely, taken from
the priority lists, tasks are assigned to phases in round-robin manner. The
results are illustrated in Fig.~\ref{fig:8}.
\begin{figure}[h]
\includegraphics[scale=0.5]{fig/rucksacks.pdf}
\caption {Vertical axes describe the so-called ``phases'' and the horizontal
axes the number of processors involved in the particular phase. One ``phase''
actually comprises the processors to which a task is assigned at that point.
Each phase is kept as \emph{full} as possible,
i.\,e.\ all processors are busy with approximately equal amounts of work
throughout the solver execution.}
\label{fig:8}
\end{figure}
Those phases refer to the mapping which will be done during runtime of the
simulation. When the tasks are statically assigned to the processors, all of
them execute the required computations, communicating the data when needed and
also taking care that the communication delays due to the MPI internal
decisions are avoided, as elaborated more in \cite{Knezevic.2012c}.
Satisfactory speedup is achieved for different polynomial degrees \emph{p}
within the FCM computation, where higher polynomial degrees correspond to more
unknowns. Tests are currently being done for larger numbers of
distributed-memory computational resources. According to the tendency observed for up to 7
processors so far, engagement of larger numbers of processes would result in
the desired rate of at least several updates per second (i.\,e.\ 1--10\,Hz) for
the calculated bone stresses even for $p = 4$ or $p = 6$.
Referring back to the existing environment, without the integration of the
developed distributed parallel solver, the major effort invested in creating
the new communication pattern to support the described hierarchical approach
was in the order of several working days. In contrast, adding the
functionality for interrupting the computation to check for updates, and thus
to start a computation anew if needed, has been quick and straightforward.
\section{Results and Conclusions}
Finally, after discussing the achievements concerning interaction for each
application scenario in the previous section, we now present results in terms
of the execution-time overhead of integrating the framework in the different
scenarios, as well as the coding effort to be invested when
integrating the framework into an existing application code. Furthermore,
conclusions concerning the proposed hierarchical approaches are made and
possible ideas for further extension of the framework are discussed.
\subsection{Overhead of the Framework}
For the heat conduction application scenario, the integration of our framework
resulted in not more than 5--10\,\% overhead in the execution time. Tests have
been done also for the same problem with a message-passing-based parallel
Jacobi solver. Even when user interaction was invoked at 5-millisecond
intervals (which is far more frequent than typically occurs in practice), no
significant effect of the interrupts on the overall execution time (less than
10\,\%) was observed.
Performance evaluation of the biomedical test scenario, where the simulation is
executed on a multi-core architecture and connected to a visualisation front
end via a network, still proved that the overhead caused by the framework
itself is not significant (up to 11.7\,\%).
We have also tested the different simulation scenarios from SCIRun. The
measurements have been made for different update intervals, namely, 5, 2, or 1
millisecond for different solvers of linear systems of equations. In one of the
test case scenarios, for the shortest interval (i.\,e.\ 1 millisecond), the
overhead caused by the framework was up to 15\,\%. However, by making the
intervals longer (e.\,g.\ 2 or 5 milliseconds), the overhead was reduced to
5\,\% and 3\,\%, respectively. When increasing the interval to 5 milliseconds (and
beyond), an end-user does not observe the difference in terms of simulation
response. Hence, it is always advisable to experiment with different
intervals for a specific simulation.
Some of the measurements are illustrated in Fig.~\ref{fig:overhead} for
comparison.
\begin{figure}
\includegraphics[scale=0.75]{fig/overhead.pdf}
\caption{Performance measurements: overhead of the framework (expressed in
terms of additional execution time) for alarm set to 1 millisecond -- heat
conduction simulation ($300\times300$ grid), executed on 1, 2, and 4 cores
(left to right); SCIRun PSE, heart ischemia example using CG, BCG, and MINRES
solver (left to right); biomedical application ($p = 4$), executed on 1, 2, and
4 cores (left to right).}
\label{fig:overhead}
\end{figure}
\subsection{User Effort for Integrating the Framework}
\label{subsection:effort}
A few modifications within any application code have to be made by the user in
order to integrate our framework. These modifications are -- as intended --
only minor, hence, we list all of them. All variables which will be affected by
the interrupt handler in order to force the restart of the computation have to
be declared global (to become visible in a signal handler). It is typically
enough to have only a few of them, such as loop delimiters, in order to skip all
the redundant computations. If these variables shall be used also in the rest
of the code, a user can rename those he wants to manipulate within the signal
handler and declare only those as global. Atomicity of data updates and
prevention of compiler optimisations -- which would lead to incorrect value
references -- have to be ensured. The integrity of each user-defined `atomic'
sequence of instructions in the simulation code has to be provided. The calls
to the appropriate send and receive functions, which form the interface to our
framework have to be included in the appropriate places in the code. The user
himself should provide the correct interpretation of the data (in the receive
buffers of both simulation and visualisation components). Finally, he has to
enable the regular checks for updates by including appropriate functions which
will examine and change the default signal (interrupt) action, specifying the
time interval in which the checks of the simulation process(es) are made, as
shown in the following pseudo code example.
\begin{verbatim}
begin func My_sig_action ()
if update_available then
receive update
manipulate simulation specific variables
fi
end
begin func main ()
Set_sig_action (My_sig_action)
Set_interrupt_interval (time_slot)
end
\end{verbatim}
\subsection{Hierarchical Approaches}
As one may also conclude, no matter how generic our basic idea is, when
applying it to the wide diversity of applications, the user himself has to be
involved in making certain decisions. For example, in our first test case, he
has to specify the number of grids which he would like to use together with
their resolutions. This information might be based on his previous experience,
i.\,e.\ at which resolution the problem can be solved within less than a second
(for choosing the coarsest grid), etc. The hierarchical approaches used so far
should not limit future test cases. In addition to recursively
coarsening the grid, or increasing the resolution of other simulation-specific
discretisations such as the number of azimuthal angles in AGENT, or increasing
the polynomial degree \emph{p} in the biomedical example, one may analogously
profit from his or her own simulation-specific hierarchical structures. Any
user of the framework can, if needed, easily adapt it to his individual
requirements.
\subsection{Outlook}
In the future, we would like to tackle computationally expensive scenarios
with massively parallel simulations. In efforts to interrupt one thread per
process, a trade-off between ensuring a minimal number of checks per process
and receiving the data promptly must be faced. Thus, an optimal
interval between the interrupts on different levels of the communication
hierarchy is going to be estimated. In addition, a possibility of distributing
the tasks among several user processes, each in charge of a certain group of
simulation processes will be examined to avoid typical master-slave
bottlenecks. Furthermore, we would like to explore techniques for the fast
transfer of (distributed) simulation results between front and back end,
especially in case of huge data sets, needed for an interactive visualisation.
\begin{acknowledgement}
The overall work has been financially supported by the Munich Centre of
Advanced Computing (MAC) and the International Graduate School of Science and
Engineering (IGSSE) at Technische Universit\"{a}t M\"{u}nchen and we would like
to gratefully acknowledge that. The work related to SCIRun PSE was made
possible in part by software from the NIH/NIGMS Center for Integrative
Biomedical Computing, 2P41 RR0112553-12. It was accomplished in winter 2011/12
during a three-month research visit of Jovana Kne\v{z}evi\'{c} to the
Scientific Computing and Imaging (SCI) Institute, University of Utah. She would
like to express her appreciation and gratitude to Prof.~Chris Johnson for
inviting her and all the researchers for fruitful discussions. Furthermore, she
would like to thank Hermilo Hern\'{a}ndez and Tatjana Jevremovi\'{c} at Nuclear
Engineering Program, University of Utah, and Thomas Fogal from SCI Institute,
in collaboration with whom the work on the AGENT project was done.
\end{acknowledgement}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:introduction}
The increasing popularity of natural language human-computer interaction urges the development of robust and scalable task-oriented dialog systems. In order to fulfill a user goal, a dialog system must be capable of extracting meaning and intent from the user input, and be able to keep and update this information over the continuation of the dialog~\cite{young2010hidden}. This task is called dialog state tracking (DST). Because the next dialog system action depends on the current state of the conversation, accurate DST is vital.
DST is tasked to extract from the user input information on different concepts that are necessary to complete the task at hand. For example, in order to recommend a restaurant to a user, the system needs to know their preferences in terms of price, location, etc. These concepts are encapsulated in an ontology, where dialogue domain (e.g., restaurant or hotel), slot (e.g., price range or location), and value (e.g. cheap or expensive) are defined. Solving this information extraction task is prerequisite for forming a belief over the dialog state.
\begin{figure}[t]
\centering
\includegraphics[page=1, trim=2.7cm 0.5cm 2.5cm 1.3cm, clip=true, width=1.00\linewidth,]{Diag}
\caption{Example dialog in MultiWOZ.} \label{fig:diag}
\end{figure}
Traditional approaches to DST operate on a fixed ontology and perform prediction over a pre-defined set of slot-value pairs~\cite{mrkvsic2016neural,liu2017end,zhong2018global}. Such approaches perform very well on datasets that are defined over fairly small ontologies. Applying these methods to more complex datasets, however, reveals various limitations~\cite{ren2018towards,nouri2018toward}. First, it is often difficult to obtain a complete ontology for a task. Second, slot-value pairs outside the ontology or the training data are impossible to capture at test time. Third, such methods at best scale linearly with the size of the ontology. Most importantly, the idea of fixed ontologies is not sustainable, as in real-world applications they are subject to constant change.
Human-computer interactions often need to be defined over multiple domains at the same time, ideally with unrestricted vocabulary. Recent approaches to multi-domain and open-vocabulary DST extract values from the dialog context directly by predicting value spans in the input~\cite{gao2019dialog,chao2019bert,kim2019efficient,zhang2019find}. Span prediction is a demonstrably potent method to detect relevant information in utterances, but its major drawback is that it only suits \emph{extractive} values that are explicitly expressed as a sequence of tokens. This is the reason why span-based methods benefit from the support of a picklist, i.e., a list of value candidates from which a system can choose. Still, these methods fall short when handling the nuanced and subtle phenomena that often occur in natural conversations, such as coreference and value sharing (``I'd like a hotel in the same area as the restaurant.''), and implicit choice (``Any of those is ok.'').
In this work, we propose a new approach to value independent multi-domain DST:
\begin{enumerate}
\item In addition to extracting values directly from the user utterance via span prediction and copy, our model creates and maintains two memories on-the-fly, one for system inform slots, and one for the previously seen slots.
\item The \emph{system inform memory} solves the implicit choice issue by allowing to copy from concepts mentioned by the system, e.g., values that are offered and recommended.
\item The \emph{DS memory} allows the use of values already existing in the dialogue state to infer new values, which solves the coreference and value sharing problems.
\end{enumerate}
We call this approach \textbf{TripPy}, \textbf{Trip}le co\textbf{py} strategy DST.\footnote{Our code is available at \url{https://gitlab.cs.uni-duesseldorf.de/general/dsml/trippy-public}.} Our experimental results show that our model handles out-of-vocabulary and rare values very well at test time, demonstrating good generalization. In a detailed analysis we take a closer look at each of the model's components to study their particular roles.
\section{Related Work}
Dialog state tracking has been of broad interest to the dialog research community, as reflected by a series of DST challenges~\cite{henderson2014second,rastogi2019towards}. These challenges have consistently pushed the boundaries of DST performance. Current state-of-the-art methods must prove themselves on long, diverse conversations in multiple domains with a high slot count and principally unrestricted vocabulary~\cite{eric2019multiwoz}. Dialogs of such complex nature are tough for traditional approaches that rely on the availability of a candidate list, due to scalability and generalization issues~\cite{mrkvsic2016neural,liu2017end,ramadan2018large,rastogi2017scalable}.
Span-based approaches recently alleviated both problems to some extent. Here, slot values are extracted from the input directly by predicting start and end positions in the course of the dialog. For instance,~\citet{xu2018end} utilizes an attention-based recurrent network with a pointer mechanism to extract values from the context. This extractive approach has its limitations, since many expressible values are not found verbatim in the input, but rather mentioned implicitly, or expressed by a variety of rephrasings.
With the assistance of contextual models such as BERT~\cite{devlin2018bert}, issues arising from expressional variations can be mitigated. Recent work has demonstrated that encoding the dialog context with contextual representations supports span prediction to generalize over rephrasings. SUMBT~\cite{lee2019sumbt} utilizes BERT to encode slot IDs and candidate values and learns slot-value relationships appearing in dialogs via an attention mechanism. Dialog context is encoded with recurrence. BERT-DST~\cite{chao2019bert} employs contextual representations to encode each dialog turn and feeds them into classification heads for value prediction. The dialog history, however, is not considered for slot filling. In~\citet{gao2019dialog}, DST is rendered as a reading comprehension task that is approached with a BERT-based dialog context encoder. A slot carryover prediction model determines whether previously detected values should be kept in the DS for the current turn.
An alternative to span prediction is value generation. TRADE~\cite{wu2019transferable} and MA-DST~\cite{kumar2020ma} generate a DS from the input using a copy mechanism to combine the distributions over a pre-defined vocabulary and the vocabulary of current context. SOM-DST~\cite{kim2019efficient} applies a similar mechanism for value generation, but takes the previous dialog turn as well as the previous DS as input to BERT to predict the current DS. A state operation predictor determines whether a slot actually needs to be updated or not. The downside of generative models is that they tend to produce invalid values, for instance by word repetitions or omissions.
Recently, a hybrid approach called DS-DST has been proposed that makes use of both span-based and picklist-based prediction for slot-filling~\cite{zhang2019find}. In contrast to generative approaches, picklist-based and span-based methods use existing word sequences to fill slots. DS-DST somewhat alleviates the limitations of span prediction by filling a subset of slots with a picklist method instead.
Recent works seem to reveal a trade-off between the level of value independence in a model and the DST performance. \citet{chao2019bert} and~\citet{gao2019dialog} solely rely on span prediction, but their performance lags behind methods that at least partially rely on a pre-defined list of candidate values. This has impressively been demonstrated by~\citet{zhang2019find}: their model could not compete when relying on span prediction entirely. In contrast, when relying solely on their picklist slot-filling method, they achieved the to-date best performance on MultiWOZ 2.1. The proposed dual-strategy approach lies favorably between these two extremes.
To the best of our knowledge, none of the recent approaches to complex DST tasks such as MultiWOZ~\cite{budzianowski2018multiwoz,eric2019multiwoz} are value independent in the strict sense. What's more, they tremendously benefit from the use of a value candidate list. Our work tackles this limitation by introducing a triple copy strategy that relies on span-prediction as well as memory mechanisms. In contrast to other hybrid approaches such as~\citet{zhang2019find}, our memory mechanisms create candidate lists of values on-the-fly with the dialog context as only source of information, thus avoiding the use of pre-defined picklists.
We let the model decide which strategy to choose for each slot at each turn. Our approach differs from~\citet{chao2019bert} and~\citet{kim2019efficient} in that we consider the dialog history as context in addition to the current turn. We also differ from approaches like~\citet{lee2019sumbt} since we do not employ recurrence. Like~\citet{kim2019efficient}, we use auxiliary inputs at each turn, but we do so as a late feature fusion strategy. With our slot-value copy mechanism to resolve coreferring value phrases, we employ a method which is reminiscent of~\citet{gao2019dialog}'s slot carryover, but with the sharp distinction that we copy values between different slots, facilitating value sharing within and across domains.
\begin{figure*}[t]
\centering
\includegraphics[page=1, trim=0.0cm 1.5cm 0.5cm 2cm, clip=true, width=1.00\linewidth,]{Model}
\caption{Architecture of our proposed model. TripPy takes the turn and dialog history as input and outputs a DS.}
\label{fig:model}
\end{figure*}
\section{TripPy: Triple Copy Strategy for DST}
Our model expects the following input format to perform dialog state tracking. Let $X = \{(U_1, M_1), \dots, (U_T, M_T)\}$ be the sequence of turns that comprise a dialog of length $T$. $U_t$ is the user utterance at turn $t$, and $M_t$ is the system utterance that precedes the user utterance. The task of the model is (1) to determine for every turn whether any of the $N$ domain-slot pairs in $S = \{S_1, \dots, S_N\}$ is present, (2) to predict the values for each $S_n$, and (3) to track the dialog state $DS_t$ over the course of the dialog, i.e., for $t \in [1, T]$.
We employ a triple copy strategy to fill the slots. The intuition is that values are either explicitly expressed by the user, or expressed by the system and referred to by the user via confirmation or rejection, or that they have been expressed earlier in the dialog as the assignment to another domain-slot pair (coreference). Each of these cases is handled by one of three copy mechanisms. It becomes apparent that slots cannot be filled by exclusively resorting to one particular copy method. Therefore, we employ slot gates that determine at each turn which method to use to fill the respective slot.
Figure~\ref{fig:model} depicts our model. We encode the dialog context with a BERT front-end and feed-forward the resulting contextual representations to various classification heads to solve the sub-tasks for DST. The aggregate sequence representation is the input to the slot gates. The sequence of token representations is the input to the span predictors.
\subsection{Context Encoder}
We use BERT~\cite{devlin2018bert} as front-end to encode at each turn $t$ the dialog context as
\begin{equation}
\begin{split}
R_t = \mathrm{BERT}(&\mathrm{[CLS]} \oplus U_t \oplus \mathrm{[SEP]} \oplus M_t \oplus \\
&\mathrm{[SEP]} \oplus H_{t} \oplus \mathrm{[SEP]}),
\end{split}
\end{equation}
where $H_t = \{(U_{t-1}, M_{t-1}), \dots, (U_1, M_1)\}$ is the history of the dialog up to and excluding turn $t$. The special token [CLS] precedes every input sequence to BERT, and [SEP] separates portions of the input sequence. The encoder output is
$R_t = [r_t^{\mathrm{CLS}}, r_t^1, \dots, r_t^{\mathrm{seq_{max}}}],$
where $r_t^{\mathrm{CLS}}$ is a representation of the entire turn including the dialog context $H_t$. The vectors $r_t^1$ to $r_t^{\mathrm{seq_{max}}}$ are contextual representations for the sequence of input tokens (including special tokens). Both types of representations are used for the following classification tasks.
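The assembly of this input sequence can be sketched in plain Python. This is a string-level illustration only; the actual model operates on WordPiece token IDs, and the helper name is hypothetical:

```python
def build_context_input(user_utt, sys_utt, history):
    """Assemble the BERT input sequence of the turn-encoding equation.

    `history` is a list of (user, system) utterance pairs for turns
    t-1, ..., 1. Illustrative sketch; not the authors' preprocessing.
    """
    # Flatten H_t = (U_{t-1}, M_{t-1}), ..., (U_1, M_1)
    flat_history = " ".join(u + " " + m for u, m in history)
    return ("[CLS] " + user_utt + " [SEP] " + sys_utt
            + " [SEP] " + flat_history + " [SEP]")
```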
\subsection{Slot Gates}
Our model is equipped with a slot gate for each domain-slot pair. This ensures greatest flexibility for multi-domain DST, as there is no restriction as to how many domains might be present in a single turn.
At each turn $t$, slot gates assign each slot $S_n$ to one of the classes in $C = \{\mathit{none}, \mathit{dontcare}, \mathit{span}, \mathit{inform}, \mathit{refer}\}$. The first two labels express special cases: $\mathit{none}$ denotes that the slot does not take a value in this turn, and $\mathit{dontcare}$ states that any value is acceptable for this slot. The remaining three labels each denote one of the model's copy mechanisms. $\mathit{span}$ indicates that a value is present in $U_t$ that can be extracted via span prediction. $\mathit{inform}$ indicates that the user refers to a value that has been uttered by the system in $M_t$. Lastly, $\mathit{refer}$ indicates that the user refers to a value that is already present in $DS_t$.
The input to the slot gates is $r_t^{\mathrm{CLS}}$, and the probability distribution over classes $C$ for domain-slot pair $S_n$ at turn $t$ is $p^\mathrm{gate}_{t,s}(r_t^{\mathrm{CLS}}) =$
\begin{multline}
\mathrm{softmax}(W_s^\mathrm{gate} \cdot r_t^{\mathrm{CLS}} + b_s^\mathrm{gate}) \in \mathbb{R}^5,
\label{eq:gate}
\end{multline}
i.e., each slot gate is realized by a trainable linear layer classification head for BERT.
Boolean slots, i.e., slots that only take binary values, are treated separately. Here, the list of possible classes is $C_{\mathrm{bool}} = \{\mathit{none}, \mathit{dontcare}, \mathit{true}, \mathit{false}\}$ and the slot gate probability is $p^\mathrm{bgate}_{t,s}(r_t^{\mathrm{CLS}}) =$
\begin{multline}
\mathrm{softmax}(W_s^\mathrm{bgate} \cdot r_t^{\mathrm{CLS}} + b_s^\mathrm{bgate}) \in \mathbb{R}^4.
\label{eq:bgate}
\end{multline}
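A slot gate is thus nothing more than a linear projection of the [CLS] representation followed by a softmax. The following pure-Python sketch makes this concrete with toy stand-in parameters (no deep-learning framework; the function name is illustrative):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def slot_gate(r_cls, W, b):
    """One slot gate: a linear head over the [CLS] vector, followed by
    softmax over C = {none, dontcare, span, inform, refer}. W (5 x d)
    and b (length 5) stand in for the trained parameters."""
    logits = [sum(w * r for w, r in zip(row, r_cls)) + bias
              for row, bias in zip(W, b)]
    return softmax(logits)
```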
\subsection{Span-based Value Prediction}
For each slot $s$ that is to be filled via span prediction, a domain-slot specific span prediction layer takes the token representations $[r_t^1, \dots, r_t^{\mathrm{seq_{max}}}]$ of the entire dialog context for turn $t$ as input and projects them as follows:
\begin{subequations}
\begin{align}
[\alpha^s_{t,i}, \beta^s_{t,i}] &= W^\mathrm{span}_s \cdot r_t^i + b^\mathrm{span}_s \in \mathbb{R}^2 \\
p^{\mathrm{start}}_{t,s} &= \mathrm{softmax}(\alpha^s_t) \\
p^{\mathrm{end}}_{t,s} &= \mathrm{softmax}(\beta^s_t) \\
\mathrm{start}^s_t &= \mathrm{argmax}(p^{\mathrm{start}}_{t,s}) \\
\mathrm{end}^s_t &= \mathrm{argmax}(p^{\mathrm{end}}_{t,s}).
\end{align}
\end{subequations}
Each span predictor is realized by a trainable linear layer classification head for BERT, followed by two parallel softmax layers to predict start and end position. Note that there is no special handling for erroneously predicting $\mathrm{end}^s_t < \mathrm{start}^s_t$. In practice, the resulting span will simply be empty.
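The argmax decoding and the empty-span behavior for $\mathrm{end}^s_t < \mathrm{start}^s_t$ can be sketched as follows (a simplification that operates on plain token lists rather than WordPiece IDs):

```python
def predict_span(start_probs, end_probs, tokens):
    """Take the argmax over the start and end distributions of a span
    head. If end < start is predicted, Python slicing naturally yields
    an empty span, so no special handling is needed."""
    start = max(range(len(start_probs)), key=start_probs.__getitem__)
    end = max(range(len(end_probs)), key=end_probs.__getitem__)
    return tokens[start:end + 1]
```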
\subsection{System Inform Memory for Value Prediction}
The system inform memory $I_t = \{I_t^1, \dots, I_t^N\}$ keeps track of all slot values that were informed by the system in dialog turn $t$. A slot in $DS_t$ needs to be filled by an informed value, if the user positively refers to it, but does not express the value such that span prediction can be used. E.g., in Figure~\ref{fig:diag}
the slot gate for domain-slot \texttt{<restaurant,name>} should predict $\mathit{inform}$. The slot is filled by copying the informed value into the dialog state, i.e., $DS_t^s = I_t^s$, where $s$ is the index of the respective domain-slot.
\subsection{DS Memory for Coreference Resolution}
The more complex a dialog can be, the more likely it is that coreferences need to be resolved. For instance, the name of a restaurant might very well be the destination of a taxi ride, but the restaurant might not be referred to explicitly upon ordering a taxi within the same conversation. Coreference resolution is challenging due to the rich variety of how to form referrals, as well as due to the fact that coreferences often span multiple turns. An example of a coreference that can be handled by our model is found in the example in Figure~\ref{fig:diag}.
The third copy mechanism utilizes the DS as a memory to resolve coreferences. If a slot gate predicts that the user refers to a value that has already been assigned to a different slot during the conversation, then the probability distribution over all possible slots that can be referenced is
\begin{multline}
p^\mathrm{refer}_{t,s}(r_t^{\mathrm{CLS}}) =\\
\mathrm{softmax}(W^s_\mathrm{refer} \cdot r_t^{\mathrm{CLS}} + b^s_\mathrm{refer}) \in \mathbb{R}^{N+1},
\label{eq:refer}
\end{multline}
i.e., for each slot, a linear layer classification head either predicts the slot which contains the referenced value, or \emph{none} for no reference.
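Taken together, the three copy mechanisms amount to a per-slot dispatch on the gate's prediction. The following sketch condenses this logic; function and argument names are illustrative, not the authors' code:

```python
def fill_slot(slot, gate_class, span_value, inform_memory, dialog_state,
              referred_slot=None):
    """Dispatch a slot to the copy mechanism chosen by its gate:
    span copy from the user utterance, copy from the system inform
    memory I_t, or copy from the DS memory via a referred slot."""
    if gate_class == "span":
        return span_value                       # extracted from the user utterance
    if gate_class == "inform":
        return inform_memory.get(slot)          # system inform memory I_t
    if gate_class == "refer":
        return dialog_state.get(referred_slot)  # DS memory (coreference)
    if gate_class == "dontcare":
        return "dontcare"
    return None                                 # gate predicted 'none'
```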
\subsection{Auxiliary Features}
Some recent approaches to neural DST utilize auxiliary input to preserve contextual information. For instance, SOM-DST adds the dialog state to its single-turn input as a means to preserve context across turns.
We already include contextual information in the input to BERT by appending the dialog history $H_t$. In addition to that, we also create auxiliary features based on the system inform memory and the DS memory. We generate two binary vectors $a_t^{\mathrm{inform}} \in \{0,1\}^N$ and $a_t^{\mathrm{ds}} \in \{0,1\}^N$ that indicate whether (1) a slot has recently been informed (based on the system inform memory), or (2) a slot has already been filled during the course of the dialog (based on the DS memory). These vectors are added to the output of BERT in a late fusion approach, and the slot gate probabilities in Equations~\ref{eq:gate},~\ref{eq:bgate} and~\ref{eq:refer} become $p^\mathrm{gate}_{t,s}(\hat{r}_t^{\mathrm{CLS}}),
p^\mathrm{bgate}_{t,s}(\hat{r}_t^{\mathrm{CLS}})$ and
$p^\mathrm{refer}_{t,s}(\hat{r}_t^{\mathrm{CLS}}),$
with $\hat{r}_t^{\mathrm{CLS}} = r_t^{\mathrm{CLS}} \oplus a_t^{\mathrm{inform}} \oplus a_t^{\mathrm{ds}}$.
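The late fusion step reduces to building the two binary indicator vectors and concatenating them with the [CLS] representation. A minimal sketch, with memories represented as plain sets of slot names:

```python
def fuse_aux_features(r_cls, inform_memory, dialog_state, slot_list):
    """Build the binary auxiliary vectors (one entry per domain-slot
    pair) and concatenate them with the [CLS] representation, giving
    the augmented vector fed to the gates. Illustrative sketch."""
    a_inform = [1 if s in inform_memory else 0 for s in slot_list]
    a_ds = [1 if s in dialog_state else 0 for s in slot_list]
    return r_cls + a_inform + a_ds
```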
\subsection{Partial Masking}
We partially mask the dialog history $H_t$ by replacing values with BERT's generic [UNK] token. The masking is partial in the sense that it is applied only to the past system utterances. For the system utterances, the contained values are known and their masking is straightforward. The idea behind partially masking the history is that the model is compelled to focus on the historical context information rather than the sighting of specific values. This should result in more robust representations $r_t^{\mathrm{CLS}}$ and therefore better overall slot gate performance.
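The masking itself is simple, since the informed values are known for system turns. A sketch using plain substring replacement (a simplification of the real token-level preprocessing):

```python
def mask_system_utterance(utterance, informed_values):
    """Partial masking: replace the known informed values in a past
    system utterance with BERT's generic [UNK] token. User turns are
    left untouched."""
    for value in informed_values:
        utterance = utterance.replace(value, "[UNK]")
    return utterance
```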
\subsection{Dialog State Update}
We employ the same rule-based update mechanism as~\citet{chao2019bert} to track the dialog state across turns. At every turn, we update a slot if a value other than \emph{none} has been detected for it; a slot whose predicted value is \emph{none} is left unchanged.
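This update rule can be stated in a few lines (with the special value \emph{none} rendered as Python `None`):

```python
def update_dialog_state(ds, turn_predictions):
    """Rule-based update: a slot is overwritten only when the current
    turn predicts a value other than none; otherwise the previously
    tracked value is kept."""
    for slot, value in turn_predictions.items():
        if value is not None:
            ds[slot] = value
    return ds
```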
\section{Experimental Setup}
\subsection{Datasets}
We train and test our model on four datasets, MultiWOZ 2.1~\cite{eric2019multiwoz}, WOZ 2.0~\cite{wen2016network}, sim-M and sim-R~\cite{shah2018building}.
Among these, MultiWOZ 2.1 is by far the most challenging dataset. It is comprised of over 10000 multi-domain dialogs defined over a fairly large ontology. There are 5 domains (train, restaurant, hotel, taxi, attraction) with 30 domain-slot pairs that appear in all portions of the data.
The other datasets are single-domain and significantly smaller. Evaluations on these mainly serve as a sanity check to show that we do not overfit to a particular problem. Some slots in sim-M and sim-R show a high out-of-vocabulary rate, making them particularly interesting for evaluating value independent DST.
The single domain datasets come with span labels. However, MultiWOZ 2.1 does not. We therefore generate our own span labels by matching the ground truth value labels to their respective utterances.
\subsection{Evaluation}
We compute the joint goal accuracy (JGA) on all test sets for straightforward comparison with other approaches. The joint goal accuracy defined over a dataset is the ratio of dialog turns in that dataset for which all slots have been filled with the correct value according to the ground truth. Note that \emph{none} needs to be predicted if a slot value is not present in a turn.
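Concretely, JGA can be computed by comparing full per-turn state dictionaries (slots with gold value \emph{none} are simply absent from both dictionaries in this sketch):

```python
def joint_goal_accuracy(predicted_states, gold_states):
    """Fraction of turns for which the entire predicted dialog state
    matches the gold state; a single wrong or missing slot makes the
    whole turn count as incorrect."""
    correct = sum(1 for pred, gold in zip(predicted_states, gold_states)
                  if pred == gold)
    return correct / len(gold_states)
```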
In addition to JGA, we compute the accuracy of the slot gates (joint and per-class) and various other metrics
for a more detailed analysis of model design decisions.
We run each test three times with different seeds and report the average numbers for more reliable results.
MultiWOZ 2.1 is in parts labeled inconsistently. For a fair evaluation, we consider a value prediction correct if it matches any of its valid labels (for instance ``centre'' and ``center'' for the slot-value \emph{hotel-area=centre}). We semi-automatically analyzed value label inconsistencies in the training portion of the dataset in order to identify all label variants for any given value. During testing, these mappings are applied as is.
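The variant-aware matching amounts to a set membership test. A minimal sketch, assuming the variants mapping has been mined from the training data as described:

```python
def is_correct(prediction, gold_value, label_variants):
    """Accept a prediction if it matches the gold label or any of its
    recorded spelling variants (e.g. 'centre'/'center')."""
    valid = {gold_value} | set(label_variants.get(gold_value, []))
    return prediction in valid
```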
\begin{table}
\centering
\begin{tabular}{lr}
\hline
\textbf{Models} & \textbf{MultiWOZ 2.1} \\
\hline
DST-reader~\shortcite{gao2019dialog} & 36.40\% \\
DST-span~\shortcite{zhang2019find} & 40.39\% \\
SUMBT~\shortcite{lee2019sumbt} & 42.40\%$^{**}$ \\
TRADE~\shortcite{wu2019transferable} & 45.60\% \\
MA-DST~\shortcite{kumar2020ma} & 51.04\% \\
DS-DST~\shortcite{zhang2019find} & 51.21\% \\
SOM-DST~\shortcite{kim2019efficient} & 52.57\% \\
DST-picklist~\shortcite{zhang2019find} & 53.30\% \\
\hline
TripPy & \textbf{55.29$\pm$0.28\%} \\
\hline
\end{tabular}
\caption{\label{tab:baselines_multiwoz}
DST Results on MultiWOZ 2.1 in JGA ($\pm$ denotes the standard deviation; $^{**}$ MultiWOZ 2.0 result).}
\end{table}
\subsection{Training}
We use the pre-trained \emph{BERT-base-uncased} transformer~\cite{vaswani2017attention} as context encoder front-end. This model has 12 hidden layers with 768 units and 12 self-attention heads each. The maximum input sequence length is set to 180 tokens after WordPiece tokenization~\cite{wu2016google}, except for MultiWOZ 2.1, where we set this parameter to 512. We compute the joint loss as
\begin{equation}
\mathcal{L} = 0.8 \cdot \mathcal{L}_{\mathrm{gate}} + 0.1 \cdot \mathcal{L}_{\mathrm{span}} + 0.1 \cdot \mathcal{L}_{\mathrm{refer}}.
\end{equation}
The function for all losses is joint cross entropy. As there is no coreferencing in the evaluated single-domain datasets, the refer loss is not computed in those cases and the loss function is
\begin{equation}
\mathcal{L} = 0.8 \cdot \mathcal{L}_{\mathrm{gate}} + 0.2 \cdot \mathcal{L}_{\mathrm{span}}
\end{equation}
instead.
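The two weighted combinations above can be written as one small helper (a direct restatement of the loss equations, with the refer term dropped for single-domain data):

```python
def joint_loss(gate_loss, span_loss, refer_loss=None):
    """Weighted sum of the per-task cross-entropy losses. Without a
    refer term (single-domain datasets without coreferences), the
    remaining weight is shifted to the span loss."""
    if refer_loss is None:
        return 0.8 * gate_loss + 0.2 * span_loss
    return 0.8 * gate_loss + 0.1 * span_loss + 0.1 * refer_loss
```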
Span predictors are only presented with spans from the user utterances $U_t$ to learn from (including the user utterances in the history portion $H_t$ of the input). During training we set the span prediction loss to zero for all slots that are not labeled as \emph{span}. Likewise, the coreference prediction losses are set to zero if slots are not labeled as \emph{refer}. For optimization we use the Adam optimizer~\cite{kingma2014adam} and backpropagate through the entire network including BERT, which constitutes a fine-tuning of the latter. The initial learning rate is set to $2e^{-5}$. We conduct training with a warmup proportion of 10\% and let the learning rate decay linearly after the warmup phase. Early stopping is employed based on the JGA of the development set.
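A schedule consistent with this description (linear warmup over the first 10\% of steps, then linear decay) can be sketched as follows; the exact decay endpoint is an assumption on our part:

```python
def learning_rate(step, total_steps, base_lr=2e-5, warmup=0.1):
    """Linear warmup over the first `warmup` fraction of steps, then
    linear decay to zero. A common fine-tuning schedule; the precise
    implementation is assumed, not taken from the authors' code."""
    warmup_steps = int(total_steps * warmup)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)
```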
During training we use dropout~\cite{srivastava2014dropout} on the BERT output with a rate of 30\%. We do not use slot value dropout~\cite{xu2014targeted} except for one dataset (sim-M), where performance was greatly affected by this measure (see Section~\ref{sec:results:ssec:analysis}).
\begin{table}
\centering
\begin{tabular}{lr}
\hline
\textbf{Models} & \textbf{WOZ 2.0} \\
\hline
NBT~\shortcite{mrkvsic2016neural} & 84.2\% \\
BERT-DST~\shortcite{chao2019bert} & 87.7\% \\
GLAD~\shortcite{zhong2018global} & 88.1\% \\
GCE~\shortcite{nouri2018toward} & 88.5\% \\
StateNet~\shortcite{ren2018towards} & 88.9\% \\
SUMBT~\shortcite{lee2019sumbt} & 91.0\% \\
\hline
TripPy & \textbf{92.7$\pm$0.2\%} \\
\hline
\end{tabular}
\caption{\label{tab:baselines_dstc}
DST Results on WOZ 2.0.}
\end{table}
\begin{table}
\centering
\begin{tabular}{lrr}
\hline
\textbf{Models} & \textbf{sim-M} & \textbf{sim-R}\\
\hline
SMD-DST~\shortcite{rastogi2017scalable} & 96.8\%$^*$ & 94.4\%$^*$ \\
\hline
LU-DST~\shortcite{rastogi2018multi} & 50.4\% & 87.1\% \\
BERT-DST~\shortcite{chao2019bert} & 80.1\% & 89.6\% \\
\hline
TripPy & \textbf{83.5$\pm$1.2\%} & \textbf{90.0$\pm$0.2\%} \\
\hline
\end{tabular}
\caption{\label{tab:baselines_sim}
DST Results on sim-M and sim-R. $^*$ should be considered an oracle result because the value candidates are ground truth labels.}
\end{table}
\section{Experimental Results}
Tables~\ref{tab:baselines_multiwoz},~\ref{tab:baselines_dstc} and~\ref{tab:baselines_sim} show the performance of our model in comparison to various baselines. TripPy achieves state-of-the-art performance on all four evaluated datasets, with varying distance to the runner-up. Most notably, we were able to push the performance on MultiWOZ 2.1, the most complex task, by another 2.0\% absolute compared to the previous top scoring method, achieving 55.3\% JGA. The improvements on the much smaller datasets WOZ 2.0, sim-M and sim-R demonstrate that the model benefits from its design on single-domain tasks as well. The following analysis provides a better understanding of our model's strengths.
\subsection{Analysis}
\label{sec:results:ssec:analysis}
We analyze the performance of TripPy in ablation experiments on MultiWOZ 2.1 (see Table~\ref{tab:ablation}).
Our baseline model is best compared to BERT-DST~\cite{chao2019bert}: we only take single turns as input, and only use span prediction to extract values from the turn. The resulting performance is comparable to other span-based methods such as DST-reader and DST-span, and confirms that the dialogs in MultiWOZ are too complex to be handled by this information extraction mechanism alone.
\begin{table}
\centering
\begin{tabular}{lr}
\hline
\textbf{Model} & \textbf{JGA} \\
\hline
Span prediction only (entire turn) & 42.63\% \\
\hline
+ triple copy mechanism & 49.23\% \\
\quad + dialog history & 52.58\% \\
\quad\quad + auxiliary features & 54.08\% \\
\quad\quad\quad + masking & 54.29\% \\
\hline
TripPy (full sequence width) & 55.29\% \\
\hline
\end{tabular}
\caption{\label{tab:ablation}
Ablation experiments for our model.}
\end{table}
\begin{figure}
\centering
\includegraphics[page=1, trim=9cm 6.5cm 9cm 7.3cm, clip=true, width=1.00\linewidth,]{Figures}
\caption{Per class performance of the slot gates for different versions of our model (ablation study).}
\label{fig:slotgates}
\end{figure}
\paragraph{Impact of the triple copy mechanism}
Using our proposed triple copy mechanism pushes the performance close to 50\%, surpassing TRADE and closing in on the leading hybrid approaches. Especially the performance of the slot gates benefits from this change (see Figure~\ref{fig:slotgates}).
When looking at the F1 score for the individual classes, one can see that the \emph{span} class benefits from the distinction.
It is important to point out that none of the coreferences that our model handles can be resolved by span-prediction alone. This means that otherwise guaranteed misses can now be avoided and coreferences can be resolved by copying values between slots. What's more, using the dialog state memory to resolve coreferences helps value detection across multiple turns, as a value that has been referred to in the current turn might have been assigned to another slot multiple turns before.
\paragraph{Impact of the dialog history}
We found that using the dialog history as additional context information is critical to a good performance, as it reduces contextual ambiguity. This is clearly reflected in the improved performance of the slot gates (see Figure~\ref{fig:slotgates}), which has two positive effects. First, the presence and type of values is recognized correctly more often. Especially the special value \emph{dontcare}, and boolean slots (taking values \emph{true} and \emph{false}) benefit from the additional context. This is only logical, since they are predicted by the slot gate using the representation vector of the [CLS] token. Second, values are assigned to the correct slot more often than without the additional contextual information. With the additional dialog history, we outperform DS-DST and match SOM-DST, which set the previous state-of-the-art.
\begin{figure}[t]
\centering
\includegraphics[page=4, trim=9cm 3cm 9cm 4.1cm, clip=true, width=1.00\linewidth,]{Figures}
\caption{Performance of TripPy on slots with high OOV rate. \emph{ALL} denotes the average of all slots of the respective dataset.}
\label{fig:oov_slots}
\end{figure}
\paragraph{Impact of the auxiliary features}
SOM-DST uses single turns as input, but preserves additional contextual information throughout the dialog by using the dialog state as auxiliary input. By adding our memory based auxiliary features in a late fusion approach, we surpass SOM-DST, and ultimately DST-picklist, which performs slot-filling with the knowledge of the full ontology.
Even though our features carry less information, that is, only the identities of the informed slots -- tracked by the system inform memory -- and the identities of the previously seen slots -- tracked by the DS memory --, we see substantial improvement using them. Obviously, more information about the progress of the dialog helps the slot gates and the referral gates in their classification tasks.
\paragraph{Impact of partial masking}
We found that masking the informed values in past system utterances does not give a clear benefit, but it also does not harm the performance of the slot gates. While the \emph{inform} cases are detected more accurately, some other cases suffer from the loss of information in the input. Overall, a minor improvement is observable. We report the numbers for MultiWOZ in Table~\ref{tab:ablation} and Figure~\ref{fig:slotgates}, but would like to note that we have seen the same trend on all other datasets as well.
\paragraph{Impact of the context width}
Our best model utilizes the full width of BERT (512 tokens). This is a clear advantage for longer dialogs. Maximal context width is not a decisive factor for the single-domain datasets, since their dialogs tend to be shorter. As expected, we have not seen any change in performance on these. For MultiWOZ, we gain 1\% absolute by maximizing the history length to preserve as much of the dialog history as possible, achieving 55.3\% JGA.
\begin{figure}[t]
\centering
\includegraphics[page=2, trim=11.8cm 6.5cm 11.7cm 7.2cm, clip=true, width=1.00\linewidth,]{Figures}
\caption{Recall of values depending on the amount of samples seen during training. 0 seen samples means the value is OOV during test time.}
\label{fig:oovs}
\end{figure}
\subsection{Generalization Study}
\label{sec:results:ssec:generalization}
It is important that a DST model generalizes well to previously unseen values. We looked at the performance of our model on slots with exceptionally high out-of-vocabulary rates, of which we identified 8 across the evaluated datasets. Figure~\ref{fig:oov_slots} plots performance measures for these slots and compares them to the average performance for all slots in the respective datasets. Generally, the slots that expect named entities as values show the lowest accuracy. However, the below-average performance of these slots does not seem to be caused by a particularly high OOV rate. Even at 100\%, the \emph{movie} slot of sim-M still performs comparably well. Other slots with relatively high OOV rate still perform close to or better than the average.
\begin{figure}[t]
\centering
\includegraphics[page=3, trim=9.8cm 4cm 10cm 4.5cm, clip=true, width=1.00\linewidth,]{Figures}
\caption{Per-slot accuracy of TripPy on the original test set and the OOV test set. Underlined slot names indicate slots with at least one OOV value.}
\label{fig:oov_circle}
\end{figure}
Figure~\ref{fig:oovs} plots the recall of values depending on the number of samples seen during training. To our surprise, whether a particular value has been seen during training does not seem to matter for correct detection. OOV values are detected just as well as generally less common values. Our observations however indicate that the model benefits tremendously from seeing a certain minimal amount of training samples for each value, which is somewhere around 50. In other words, if such amounts of data are available, then the model is able to utilize them effectively. In the same figure we compare TripPy to the span prediction baseline. The latter clearly struggles with OOVs and rare values and generally seems to require more training samples to achieve a good recall. The higher recall on OOV values is likely caused by the fact that many unseen values are of the category time of day, which mostly follows a strict format and is therefore easier to spot. Overall, TripPy clearly generalizes better over sample counts.
To test the limits of our model's generalization capacities, we manually replaced most of the values in the MultiWOZ test set with (fictional but still meaningful) OOV values. Of the over 1000 unique slot-value pairs appearing in the modified test set, about 84\% are OOV after the replacement. Figure~\ref{fig:oov_circle} compares the per-slot accuracy of our model on the original test set and the OOV test set. Underlined slot names indicate slots with at least one OOV value. Their average OOV rate is 90\%. Surprisingly, most of these slots maintain their high accuracy and only a few suffer from the high OOV count. It is mainly one particular domain, \emph{train}, that suffers above-average performance drops; the remaining slots maintain their performance. This demonstrates that our model is well equipped to handle OOV values, regardless of their type (e.g., named entity, time of day).
\section{Conclusion}
We have demonstrated that our approach can handle challenging DST scenarios. Having to detect unseen values does not considerably impair our model's general performance. The information extraction capabilities of our proposed model are rooted in the memory-based copy mechanisms and perform well even in extreme cases as discussed in Section~\ref{sec:results:ssec:generalization}. The copy mechanisms are not limited by a predefined vocabulary, since the memories themselves are value agnostic.
To further improve the DST capabilities of TripPy, we hope to introduce slot independence, as at present its tracking abilities are limited to slots that are predefined in the ontology. For that, we would like to expand our approach towards the schema-guided paradigm for dialog modeling. We would also like to employ a more sophisticated update strategy, for example by adding the option to partially forget. There already exists an intriguing set of works focusing on these issues, and we hope to incorporate and expand upon them in the future.
\section*{Acknowledgments}
M. Heck, C. van Niekerk and N. Lubis are supported by funding provided by the Alexander von Humboldt Foundation in the framework of the Sofja Kovalevskaja Award endowed by the Federal Ministry of Education and Research, while C. Geishauser, H-C. Lin and M. Moresi are supported by funds from the European Research Council (ERC) provided under the Horizon 2020 research and innovation programme (Grant agreement No. STG2018\_804636).
\section{Characterisations of Ordinal Invariants}
\label{sec-characterizations}
We recall in this section the known characterisations of ordinal
invariants. With the method of residuals we can follow \citet{kriz90b} and
show that the height and maximal order types of WPOs also correspond
to their maximal chain heights (Sec.~\ref{ssec:chain}) and maximal
linearisation heights (Sec.~\ref{ssec:lin}), relying on results of
\citet{Wolk} and \citet{deJonghParikh} to show that these maxima are
indeed attained. In a similar spirit, the width of a FAC poset is
equal to its antichain rank (Sec.~\ref{AB}), an invariant studied by
\citet{AbBo}---but this time it is not necessarily attained. Finally,
in Sec.~\ref{sec-links} we recall an inequality relating all three
invariants and shown by \citet{kriz90b}.
\subsection{Height and Maximal Chains}\label{ssec:chain}
Given a WF poset $P$, let $\?C(P)$ denote its set of non-empty chains.
Each chain $C$ from $\?C(P)$ is well-founded and has a rank
$\h(C)$; we denote the supremum of these ranks by
$\mathrm{rk}_\?C P\eqdef\sup_{C\in\?C(P)}\h(C)$. As explained for
example by
\citet[Thm.~4.9]{kriz90b}, we have
\begin{equation}\label{eq-sup-chain}
\rk_\?C P \le \h(P)
\end{equation}
and this can be shown, for instance, by induction on the height using
the method of residuals. Indeed, \eqref{eq-sup-chain} holds when
$P=\emptyset$, and for the induction step
\begin{align*}
\sup_{C\in\?C(P)}\h(C)
&\eqby{\eqref{eq-w-decomp}}\sup_{C\in\?C(P)}(\sup_{x\in
C}\{\h(C_{<x})+1\})
\leq\sup_{x\in P}\{ (\!\sup_{C'\in\?C(P_{<x})}\!\h(C')) + 1\}
\intertext{because $C_{<x}$ is a chain in
$\?C(P_{<x})$, and then by induction hypothesis~\eqref{eq-sup-chain}}
\sup_{C\in\?C(P)}\h(C)&\leq\sup_{x\in P}\{\h(P_{<x})+1\}
\eqby{\eqref{eq-w-decomp}}\h(P)\;.
\end{align*}
\begin{remark}\label{rk-wolk}
The inequality in~\eqref{eq-sup-chain} can be strict. For instance,
consider the forest $F$ defined by the disjoint union $\{C_n:
n\in\+N\}$ along $(\+N,{=})$, where each $C_n$ is a chain of height
$n$, and add a new top element $t$ yielding $P\eqdef
t^\frown F$. Then $P$ is
WF (but not FAC and is thus not a WPO).
Note that $\h(P)=\h(F)+1=\omega+1$. However, every
chain $C$ in $\?C(P)$ is included in
$t^\frown C_n$ for some $n$ and has height bounded by $n+1$,
so that $\rk_\?C (P)=\omega<\h(P)$.\hfill$\eop_{\ref{rk-wolk}}$
\end{remark}
\Citet[Thm.~9]{Wolk} further shows that, when $P$ is a WPO, the
supremum is attained, i.e.\ there is a chain $C$ with rank
$\h(C)=\mathrm{rk}_\?C P$. In such a case, \eqref{eq-sup-chain} can
be strengthened to
\begin{equation}\label{eq-max-chain}
\max_{C\in\?C(P)}\h(C) = \rk_\?C P = \h(P)
\end{equation}
as can be checked by well-founded induction with
\begin{align*}
\h(P)&\eqby{\eqref{eq-w-decomp}}\sup_{x\in P}\{\h(P_{<x})+1\}
\leq \sup_{x\in P}\{\h(C_x)+1\}\leq \sup_{x\in P}\h(C_x\cup\{x\})\leq \sup_{C\in\?C(P)}\h(C)
\end{align*}
where $C_x$ is a chain of $P_{<x}$ witnessing~\eqref{eq-max-chain} by
induction hypothesis, and $C_x\cup\{x\}$ is therefore a chain in
$\?C(P)$ of height $\h(C_x)+1$.
\begin{theorem}[%
\citeauthor{Wolk}; \citeauthor{kriz90b}]\label{thm-equivalences-2} Let
$P$ be a WPO.
Then $\h(P)=\mathrm{rk}_\?C P=\max_{C\in\?C(P)}\h(C)$ is the maximal
height of the non-empty chains of~$P$.
\end{theorem}
More generally, the WPO condition in Thm.~\ref{thm-equivalences-2} can
be relaxed using the following result proven in
\citep{pouzet79,schmidt81,milner81}.
\begin{theorem}[\citeauthor{pouzet79}; \citeauthor{schmidt81};
\citeauthor{milner81}]\label{thm-max-chain}
Let $P$ be a WF poset. Then
\begin{itemize}
\item \emph{either} $\rk_\?C P=\max_{C\in\?C(P)}\h(C)$,
i.e.\ there exist chains of maximal height,
\item \emph{or} there exists an antichain $A$ of $P$ such that the
set of heights $\{\h(P_{{<}x}):x\in A\}$ is infinite.
\end{itemize}
\end{theorem}
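As an illustrative aside, not part of the formal development, Thm.~\ref{thm-equivalences-2} can be checked mechanically in the finite case, where the height of a WPO equals the cardinality of its longest chain. The following Python sketch uses the divisibility order on $\{1,\dots,12\}$ (a small example of our own choosing) and computes $\h(P)$ by the method of residuals alongside a brute-force search over chains.

```python
from itertools import combinations

# Hypothetical running example: {1,...,12} ordered by divisibility (a WPO).
ELEMS = tuple(range(1, 13))

def lt(x, y):
    """Strict order: x < y iff x properly divides y."""
    return x != y and y % x == 0

def height(elems):
    """h(P) = sup_{x in P} { h(P_{<x}) + 1 }, with h(empty) = 0."""
    return max((height(tuple(y for y in elems if lt(y, x))) + 1
                for x in elems), default=0)

def is_chain(s):
    return all(lt(a, b) or lt(b, a) for a, b in combinations(s, 2))

# The height of a finite chain equals its cardinality, so the maximal
# chain height is the size of the largest chain.
max_chain = max(len(s)
                for n in range(len(ELEMS) + 1)
                for s in combinations(ELEMS, n)
                if is_chain(s))

print(height(ELEMS), max_chain)  # both equal 4, witnessed e.g. by 1 | 2 | 4 | 8
```

Both computations return 4, as Thm.~\ref{thm-equivalences-2} predicts.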
\subsection{Maximal Order Types and Linearisations}\label{ssec:lin}
A \emph{linearisation} of a poset $(P,{\leq})$ is an augmentation
$L=(P,{\preceq})$ which is a total order: $x\leq y$ implies
$x\preceq y$. We let $\?L(P)$ denote the set of linearisations of
$P$. As stated by \cite{deJonghParikh}, a poset is a WPO if and only
if all its linearisations are well-founded.
\Citeauthor{deJonghParikh} furthermore considered the supremum
$\sup_{L\in\?L(P)}\h(L)$ of the order types of the linearisations of
$P$, and showed that this supremum was attained
\citep[Thm.~2.13]{deJonghParikh}; this is also the subject of
\cite[Thm.~10]{BlGu}.
\begin{theorem}[%
\citeauthor{deJonghParikh}; \citeauthor{kriz90b}]\label{thm-equivalences-1} Let $Q$ be a WQO.
Then $\o(Q)=\max_{L\in\?L(Q)}\h(L)$ is the maximal
height of the linearisations of~$Q$.
\end{theorem}
\subsection{Maximal Order Types and Height of Downwards-Closed Sets}
A subset $D$ of a WQO $(Q,{\leq})$ is \emph{downwards-closed} if, for
all $y$ in $D$ and $x\leq y$, $x$ also belongs to~$D$. We let
$\?D(Q)$ denote the set of downwards-closed subsets of~$Q$. For
instance, when $Q=\omega$, $\?D(\omega)$ is isomorphic to
$\omega+1$.
It is well-known that a quasi-order $Q$ is WQO if and only if it
satisfies the descending chain condition, meaning that
$(\?D(Q),{\subseteq})$ is well-founded. Therefore $\?D(Q)$ has a rank
$\h(\?D(Q))$ when $Q$ is WQO. As shown by \citet[Prop.~31]{BlGu},
this can be compared to the maximal order type of~$Q$.
\begin{theorem}[\citeauthor{BlGu}]
Let $Q$ be a WQO. Then $\o(Q)+1=\h(\?D(Q))$.
\end{theorem}
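In the finite case this theorem is easy to check by hand: $\o(Q)$ is then the cardinality of $Q$, while $\?D(Q)$ is a finite poset under inclusion. The following Python sketch, an illustrative aside on a finite example of our own choosing (the divisors of 12 under divisibility), computes $\o(Q)$ by the method of residuals and the height of $(\?D(Q),{\subseteq})$ by brute force.

```python
from itertools import chain, combinations

# Hypothetical finite WQO: the divisors of 12 under divisibility.
ELEMS = (1, 2, 3, 4, 6, 12)

def le(x, y):
    return y % x == 0          # x divides y

def subsets(s):
    return chain.from_iterable(combinations(s, n) for n in range(len(s) + 1))

def omax(elems):
    """o(P) = sup_x { o(P_{ not >= x}) + 1 }  (method of residuals)."""
    return max((omax(tuple(y for y in elems if not le(x, y))) + 1
                for x in elems), default=0)

# All downwards-closed subsets of Q (including the empty set).
downsets = [frozenset(s) for s in subsets(ELEMS)
            if all(x in s for y in s for x in ELEMS if le(x, y))]

def height(points, below):
    """Rank of a finite well-founded poset via residuals."""
    return max((height([q for q in points if below(q, p)], below) + 1
                for p in points), default=0)

h_D = height(downsets, lambda a, b: a < b)   # a strictly included in b

print(omax(ELEMS) + 1, h_D)  # both equal 7 = |Q| + 1
```

Here $\o(Q)=6$ and $\h(\?D(Q))=7$, matching the theorem.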
\subsection{Width and Antichain Rank}
\label{AB}
\Citet{AbBo} consider a structure similar to the tree $\Inc(P)$ for
FAC posets $P$, namely the poset $\?A(P)$ of all non-empty
antichains of $P$. In the case of a FAC poset, the poset $(\?A(P),
{\supseteq})$ is well-founded. Let us call its height the
\emph{antichain rank} of $P$ and denote it by $\rk_\?A
P\eqdef\h(\?A(P))$; this is the smallest ordinal $\gamma$ such that
there is a strict order-preserving function from $\?A(P)$
to~$\gamma$.
In fact the antichain rank and the width function we study have the
same values, as we now show. Thus one can reason about the width
$\w(P)$ by looking at the tree $\Inc(P)$ or at
$(\?A(P),{\supseteq})$, a different structure.
\begin{theorem}\label{equal}
Let $P$ be a FAC poset. Then $\w(P)= \rk_\?A P$.
\end{theorem}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{equal}}$}
Let $\gamma=\rk_\?A P$ and let $r{:}\, \?A(P)\to \gamma$ be such
that $S\supsetneq T\implies r(S) < r(T)$ for all non-empty antichains
$S,T$. Define $f{:}\,\Inc(P) \to \gamma$ by letting for $s$ non-empty
$f(s)\eqdef r(S)$, where $S$ is the set of elements of $s$. This
function satisfies $s\initial t \implies f(s)> f(t)$ and hence
$\w(P)\le\rk_\?A P$.
Conversely, let $\gamma=\w(P)$ and $f{:}\,\Inc(P)\to\gamma$ be such
that $s\initial t \implies f(s)>f(t)$. For a non-empty antichain
$S\in \?A(P)$, observe that there exist finitely many---precisely
$|S|!$---sequences $s$ in $\Inc(P)$ with support set $S$. Call this
set $\Lin(S)$ and define $r{:}\,\?A(P)\to\gamma$ by $r(S)\eqdef
\min_{s\in\Lin(S)}f(s)$. Consider now an antichain $S$ with $r(S) =
f(s)$ for some $s\in\Lin(S)$, and an antichain $T$ with $T\supsetneq
S$: then there exists an extension $t$ of $s$ in $\Lin(T)$, which is
therefore such that $f(s)>f(t)$, and hence $r(S)=f(s)>f(t)\geq r(T)$.
Thus $\w(P)\geq\rk_\?A P$.
\ifams\relax\else\hfill\qedsymbol\fi
\end{proof}
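For finite posets, Thm.~\ref{equal} reduces to the statement that both quantities equal the size of a maximum antichain, which can be verified directly. The following Python sketch is an illustrative aside on an example of our own choosing, the grid $\{0,1,2\}^2$ with the componentwise order: it computes $\w(P)$ as the rank of $\Inc(P)$ via residuals, and the antichain rank as the height of $(\?A(P),{\supseteq})$.

```python
from itertools import chain, combinations

# Hypothetical finite WPO: the grid {0,1,2}^2 ordered componentwise.
ELEMS = tuple((i, j) for i in range(3) for j in range(3))

def le(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

def incomp(p, q):
    return not le(p, q) and not le(q, p)

def width(elems):
    """w(P) = sup_x { w(P_{incomparable to x}) + 1 }  (rank of Inc(P))."""
    return max((width(tuple(y for y in elems if incomp(x, y))) + 1
                for x in elems), default=0)

def nonempty_subsets(s):
    return chain.from_iterable(combinations(s, n) for n in range(1, len(s) + 1))

antichains = [frozenset(s) for s in nonempty_subsets(ELEMS)
              if all(incomp(p, q) for p, q in combinations(s, 2))]

def ac_rank(acs):
    """Height of (A(P), reverse inclusion): B is strictly below A iff B
    is a proper superset of A."""
    return max((ac_rank([b for b in acs if b > a]) + 1 for a in acs), default=0)

print(width(ELEMS), ac_rank(antichains))  # both equal 3
```

Both computations return 3, the size of the maximum antichain $\{(0,2),(1,1),(2,0)\}$.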
\begin{remark}\label{rk-max-width}
The width $\w(P)$ is in general not attained, i.e., there might not
exist any chain of antichains of height $\w(P)$. First note that
even when $P$ is a WPO, $(\?A(P),{\supseteq})$ is in general not a
WPO, hence Thm.~\ref{thm-equivalences-2} does not apply. In fact,
examples of FAC posets where the width is not attained abound.
Consider indeed any FAC poset $P$ with $\w(P)\ge\omega$, and any
non-empty chain $C$ in $\?C(\?A(P))$. As $C$ is well-founded for
$\supseteq$, it has a minimal element, which is an antichain
$A\in\?A(P)$ such that, for all $A'\neq A$ in $C$,
$A'\subsetneq A$. Since $P$ is FAC, $A$ is finite, and $C$ is
therefore finite as well: $\h(C)<\omega$.~\hfill$\eop_{\ref{rk-max-width}}$
\end{remark}
\section{Computing the Invariants of Common WQOs}
\label{sec-computing-w}
We now consider WQOs obtained in various well-known ways and address
the question of computing their width, and recall along the way what
is known about their height and maximal order type.
In the ideal case, there would be a means of defining
well-quasi-orders as the closure of some simple orders, in the
`Hausdorff-like' spirit of Thm.~\ref{allFAC}. Unfortunately, no such
result is known and indeed it is unclear which class of orders one
could use as a base---for example how would one obtain Rado's example
(see Sec.~\ref{sec-Rado}) from a base of any `reasonable orders.'
Therefore, our study of the width of WQOs will have to be
somewhat pedestrian, concentrating on concrete situations.
\ifams\relax\else\vspace{-.5em}\fi
\subsection{Lexicographic Sums}
In the case of lexicographic sums along an ordinal (defined in
Sec.~\ref{wqoversusall}), we have the following result.
\begin{lemma}\label{theorem-lexsum}
Suppose that for an ordinal $\alpha$ we have a family of WQOs
$\{P_i:\, i<\alpha\}$. Then $\Sigma_{i<\alpha}P_i$ is a WQO, and:
\begin{enumerate}
\item
$\o(\Sigma_{i<\alpha}P_i)=\Sigma_{i<\alpha} \o(P_i)$,
\item
$\h(\Sigma_{i<\alpha}P_i)=\Sigma_{i<\alpha} \h(P_i)$,
\item
$\w(\Sigma_{i<\alpha}P_i)=\sup_{i<\alpha} \w(P_i)$.
\end{enumerate}
\end{lemma}
\begin{proof}
First note that any infinite bad sequence in $\Sigma_{i<\alpha}P_i$
would either have an infinite projection to $\alpha$ or an infinite
projection to some $P_i$, which is impossible. Hence
$\Sigma_{i<\alpha}P_i$ is a WQO. Therefore the values
$\w(\Sigma_{i<\alpha}P_i)$, $\o(\Sigma_{i<\alpha}P_i)$ and
$\h(\Sigma_{i<\alpha}P_i)$ are well defined.
\begin{enumerate}
\item
We use Thm.~\ref{thm-equivalences-1}. Let
$\alpha_i\eqdef\o(P_i)$, then $\Sigma_{i<\alpha}\alpha_i$ is isomorphic to
a linearisation of $\Sigma_{i<\alpha}P_i$. Hence
$\o(\Sigma_{i<\alpha}P_i) \ge \Sigma_{i<\alpha} \o(P_i)$. Suppose that
$L$ is a linearisation of $\Sigma_{i<\alpha}P_i$ (necessarily a well
order), then the projection of $L$ to each $P_i$ is a linearisation of
$P_i$ and hence it has type $\le \alpha_i$. This gives that the type
of $L$ is $\le \Sigma_{i<\alpha} \alpha_i$, proving the other side of
the desired inequality.
\item
We use Thm.~\ref{thm-equivalences-2}. Any chain
$C$ in $\Sigma_{i<\alpha}P_i$ can be obtained as
$C=\Sigma_{i<\alpha}C_i$, where $C_i$ is the projection of $C$ on the
coordinate $i$. The conclusion follows as in the case of $\o$.
\item
Every non-empty sequence of incomparable elements in
$P\eqdef\Sigma_{i<\alpha}P_i$ must come from one and only one $P_i$,
hence $\Inc(P)=\bigsqcup_{i<\alpha} \Inc(P_i)$, and therefore
$\w(P)=\sup_{i<\alpha}\w(P_i)$ by Lem.~\ref{bunching}.
$\eop_{\ref{theorem-lexsum}}$
\end{enumerate}
\end{proof}
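In the finite case the lemma can be checked mechanically (the maximal order type of a finite poset is just its cardinality, so only $\h$ and $\w$ are interesting here). The following Python sketch is an illustrative aside with three small summands of our own choosing, summed lexicographically along the ordinal $3$.

```python
# Three hypothetical finite WQOs, summed lexicographically along the ordinal 3:
#   P0: a 3-element antichain   (h = 1, w = 3)
#   P1: a 2-element chain       (h = 2, w = 1)
#   P2: the grid {0,1}^2        (h = 3, w = 2)
LOCAL_LE = {
    0: lambda x, y: x == y,                         # antichain
    1: lambda x, y: x <= y,                         # chain 0 < 1
    2: lambda x, y: x[0] <= y[0] and x[1] <= y[1],  # 2x2 grid
}
ELEMS = ([(0, c) for c in "abc"]
         + [(1, n) for n in (0, 1)]
         + [(2, p) for p in ((0, 0), (0, 1), (1, 0), (1, 1))])

def le(a, b):
    i, x = a
    j, y = b
    return i < j or (i == j and LOCAL_LE[i](x, y))

def lt(a, b):
    return a != b and le(a, b)

def incomp(a, b):
    return not le(a, b) and not le(b, a)

def height(elems):
    return max((height([b for b in elems if lt(b, a)]) + 1 for a in elems),
               default=0)

def width(elems):
    return max((width([b for b in elems if incomp(a, b)]) + 1 for a in elems),
               default=0)

print(height(ELEMS))  # 6 = h(P0) + h(P1) + h(P2) = 1 + 2 + 3
print(width(ELEMS))   # 3 = max(w(P0), w(P1), w(P2)) = max(3, 1, 2)
```

Heights add up along the sum, while only the widest summand contributes to the width, since an antichain of the sum is confined to a single summand.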
\subsection{Disjoint Sums}
We also defined disjoint sums in Sec.~\ref{wqoversusall} as sums along
an antichain.
\begin{lemma}\label{ABresults-disjsum} Suppose that $P_1,P_2,\ldots$
is a family of WQOs.
\begin{enumerate}
\item $\o(P_1\sqcup P_2) = \o(P_1)\oplus\o(P_2)$,
\item $\h(\bigsqcup_i P_i) = \sup \{\h(P_i)\}_i$,
\item $\w(P_1\sqcup P_2)=\w(P_1)\oplus \w(P_2)$.
\end{enumerate}
\end{lemma}\ifams\relax\else\clearpage\fi
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{ABresults-disjsum}}$}
(1) is Thm.~3.4 from~\cite{deJonghParikh}.
\noindent
(2) is clear since, for an arbitrary family $P_i$ of WQOs,
$\Dec(\bigsqcup_i P_i)$ is isomorphic to $\bigsqcup_i \Dec(P_i)$. We
observe that, for infinite families, $\bigsqcup_i P_i$ is not WQO, but
it is still well-founded hence has a well-defined height.
\noindent
(3) is Lem.~1.10 from~\cite{AbBo} about antichain rank, which
translates to widths thanks to Thm.~\ref{equal}.%
\ifams\relax\else\qedsymbol\fi
\end{proof}
We can apply lexicographic sums to obtain the existence of WQO posets
of every width.
\begin{corollary}\label{theorem-obtained} For every ordinal $\alpha$,
there is a WQO poset $P_\alpha$ such that $\w(P_\alpha)=\alpha$.
\end{corollary}
\begin{proof}
The proof is by induction on $\alpha$. For $\alpha$ finite, the
conclusion is exemplified by an antichain with $\alpha$ elements. For
$\alpha$ a limit ordinal let us fix for each $\beta<\alpha$ a WPO
$P_\beta$ satisfying $\w(P_\beta)=\beta$. Then
$\w(\Sigma_{\beta<\alpha} P_\beta)=\sup_{\beta<\alpha} \beta=\alpha$,
as follows by Lem.~\ref{theorem-lexsum}. For $\alpha=\beta+1$, we take
$P_\alpha=P_\beta\sqcup 1$, i.e., $P_\beta$ with an extra
(incomparable) element added, and rely on $\w(Q\sqcup 1)=\w(Q)\oplus
1=\w(Q)+1$ shown in Lem.~\ref{ABresults-disjsum}.
$\eop_{\ref{theorem-obtained}}$
\end{proof}
\subsection{Direct Products}
Direct products are again a particular case of lexicographic sums
along a poset~$Q$, this time of the same poset $P$. While the cases
of $\o$ and $\h$ are mostly folklore, the width of $P\cdot Q$ is not
so easily understood, and its computation in Lem.~1.11 from
\cite{AbBo} uses the notion of \emph{Heisenberg products} $\alpha\odot
\beta$, defined for any ordinal $\alpha$ by induction on the ordinal
$\beta$:
\begin{align*}
\alpha\odot 0&\eqdef 0\:,&
\alpha\odot (\beta+1)&\eqdef (\alpha\odot \beta)\oplus \alpha\:,&
\alpha \odot \lambda&\eqdef\sup\{(\alpha\odot
\gamma)+1:\,\gamma<\lambda\}
\end{align*}
where $\lambda$ is a limit ordinal. Note that this differs from the
natural product, and is not commutative: $2\odot \omega=\omega$ but
$\omega\odot 2=\omega\cdot 2$.
\begin{lemma}[\citeauthor{AbBo}]\label{ABresults-directprod} Suppose
that $P$ and $Q$ are two WPOs.
\begin{enumerate}
\item $\o(P\cdot Q)=\o(P)\cdot \o(Q)$,
\item $\h(P\cdot Q)=\h(P)\cdot \h(Q)$,
\item $\w(P\cdot Q)=\w(P)\odot \w(Q)$.
\end{enumerate}
\end{lemma}
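For finite ordinals the natural sum $\oplus$ is ordinary addition, so the Heisenberg product $\alpha\odot\beta$ collapses to the ordinary product $\alpha\cdot\beta$. This makes Lem.~\ref{ABresults-directprod} easy to check on small finite posets; the following Python sketch is an illustrative aside with two examples of our own choosing.

```python
# Hypothetical finite WPOs:
#   P: a 2-element antichain {a, b}            (h = 1, w = 2, |P| = 2)
#   Q: the grid {0,1}^2, componentwise order   (h = 3, w = 2, |Q| = 4)
P = ("a", "b")
Q = ((0, 0), (0, 1), (1, 0), (1, 1))

def le_P(x, y): return x == y
def le_Q(q, r): return q[0] <= r[0] and q[1] <= r[1]

# P.Q: the lexicographic sum of copies of P along Q.
ELEMS = tuple((x, q) for q in Q for x in P)

def le(a, b):
    (x, q), (y, r) = a, b
    return (q != r and le_Q(q, r)) or (q == r and le_P(x, y))

def lt(a, b): return a != b and le(a, b)
def incomp(a, b): return not le(a, b) and not le(b, a)

def height(e):
    return max((height([b for b in e if lt(b, a)]) + 1 for a in e), default=0)

def width(e):
    return max((width([b for b in e if incomp(a, b)]) + 1 for a in e), default=0)

# For finite ordinals the Heisenberg product is the ordinary product:
print(height(ELEMS))  # 3 = h(P) * h(Q) = 1 * 3
print(width(ELEMS))   # 4 = w(P) (.) w(Q) = 2 * 2
print(len(ELEMS))     # 8 = o(P) * o(Q) = 2 * 4
```

The maximum antichain of $P\cdot Q$ here places both elements of $P$ above each element of a maximum antichain of $Q$, which is exactly how the product $2\cdot 2=4$ arises.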
\input{ssec-cartesian-prod}%
\section{Concluding Remarks}\label{sec-concl}
We provide in Table~\ref{table-summary} a summary of our findings
regarding ordinal invariants of WQOs. Mostly, the new results concern
the width $\w(P)$ of WQOs. We note that the width $\w(P\times Q)$ of
Cartesian products is far from elucidated, the first difficulty being
that---unlike other constructs---it cannot be expressed as a function
of the widths $\w(P)$ and $\w(Q)$. For Cartesian products,
Sec.~\ref{finiteproducts} only provides definite values for a few
special cases: for the rest, one can only provide upper and lower
bounds for the moment.
\input{short-table}
\section{Introduction}
\label{intro}
In the finite case, a partial order---also called a
\emph{poset}---$(P,{\leq})$ has natural cardinal invariants: a
\emph{width}, which is the cardinal of its maximal antichains, and a
\emph{height}, which is the cardinal of its maximal chains. The width
and height are notably the subject of the theorems of \citet{Dilworth}
and \citet{Mirsky} respectively; see \citet{west82} for a survey of
these \emph{extremal} problems. In the infinite case, cardinal
invariants are however less informative---especially for countable
posets---while the theorems of \citeauthor{Dilworth} and
\citeauthor{Mirsky} are well-known to fail%
~\citep{Peles,Schmerl}.
When the poset at hand enjoys additional conditions, the corresponding
\emph{ordinal invariants} offer a richer theory, as studied for instance by
\citet{kriz90b}. Namely, if $(P,{\leq})$ has the \emph{finite
antichain condition} (FAC), meaning that its antichains are finite,
then the tree
\begin{align*}
\Inc(P)&\eqdef\bigl\{\langle x_0,x_1,\dots,x_n\rangle \in P^{<\omega}
~:~
0\leq n<\omega \land \forall 0\leq i<j\leq n,\,x_i\mathbin\bot
x_j\bigr\}
\intertext{%
of all non-empty (finite) sequences of pairwise
\underline{inc}omparable elements of $P$ ordered by initial
segments has no infinite branches. Note that the tree
$(\Inc(P),{\initial})$ does not necessarily have a single root and
that the empty sequence is excluded (the latter is a matter of
aesthetics, but it does make various arguments run more smoothly
by not having to consider the case of the empty sequence
separately). Therefore, $\Inc(P)$ has a rank, which is the
smallest ordinal $\gamma$ such that there is a function
$f:\,\Inc(P)\to\gamma$ with $s\mathrel\initial t\implies f(s)> f(t)$ for
all $s,t\in \Inc(P)$. This ordinal is called the \emph{width} of
$P$ and in this paper we denote it by $\w(P)$---it was
denoted by $\mathrm{wd}(P)$ by \citet{kriz90b}.
\newline\hspace*{\parindent}
Similarly, if $(P,{\leq})$ is \emph{well-founded} (WF), also
called {Artinian},
meaning that its descending sequences are finite, then the tree}
\Dec(P)&\eqdef\bigl\{\langle x_0,x_1,\dots,x_n\rangle \in P^{<\omega}
~:~
0\leq n<\omega \land \forall 0\leq i<j\leq n,\,x_i> x_j\bigr\}
\intertext{%
of non-empty strictly descending sequences has an ordinal rank, which we
denote by $\h(P)$ (\citeauthor{kriz90b} denote it by
$\mathrm{ht}(P)$) and call the \emph{height} of $P$.
\newline\hspace*{\parindent}
Finally, if $(P,{\leq})$ is both well-founded and FAC, i.e., is a
\emph{well partial order} (WPO), then the tree}
\Bad(P)&\eqdef\bigl\{\langle x_0,x_1,\dots,x_n\rangle \in P^{<\omega}
~:~
0\leq n<\omega \land \forall 0\leq i<j\leq n,\,x_i\not\leq
x_j\bigr\}
\end{align*}
of non-empty \emph{bad sequences} of $P$ has an ordinal rank, which we
denote by $\o(P)$ and call the \emph{maximal order type} of $P$ after
\citet{deJonghParikh} and \citet{schmidt79} (\citeauthor{kriz90b}
denote it by $c(P)$, \citeauthor{BlGu} call it the \emph{stature} of
$P$). In the finite case, this invariant is simply the cardinal of
the poset.
Quite some work has already been devoted to heights and maximal order
types, and to their computation. Widths are however not that
well-understood: as \citet[Rem.~4.14]{kriz90b} point out, they do not
enjoy nice characterisations like heights and maximal order types do,
and the range of available results and techniques on width
computations is currently very limited.
\medskip
Our purpose in this paper is to explore to what extent we can find
such a characterisation, and provide formul\ae\ for the behaviour of
the width function under various classically defined operations with
partial orders. Regarding the first point, we first show in
Sec.~\ref{sec-characterizations} that the width coincides with the
\emph{antichain rank} defined by \citet{AbBo}, which is the height of
the chains of antichains; however, unlike the height and maximal order
type of WPOs, the width might not be attained
(Rem.~\ref{rk-max-width}). Regarding the second point, we first show
in Sec.~\ref{wqoversusall} that computing widths in the class of FAC
orders reduces to computing widths in the class of WPOs. We recall
several techniques for computing ordinal invariants, and apply them in
Sec.~\ref{sec-computing-w} to obtain closed formul\ae\ for the width
of sums of posets, and for the finite multisets, finite sequences, and
tree extensions of WPOs. One of the main questions is to give a
complete formula for the width of the Cartesian products of WPOs.
Even the width of the product of two ordinals is only known through a
complex recursive formula (due to Abraham, see Sec.~\ref{finiteproducts}) and we only
have partial answers to the general question.
The three ordinal invariants appear in different streams of the
literature, often unaware of the results appearing in one another, and
using different definitions and notations. Another motivation of this
paper is then to provide a unified presentation of the state of the
knowledge on the subject, and we also recall the corresponding results
for heights and maximal order types as we progress through the paper.
\section{Background and Basic Results}
\subsection{Posets and Quasi-Orders}
We consider posets and, more generally, quasi-orders (QO). When
$(Q,{\leq_Q})$ is a QO, we write $x<_Qy$ when $x\leq_Q y$ and
$y\not\leq_Q x$. We write $x\perp_Q y$ when $x\not\leq_Q y$ and
$y\not\leq_Q x$, and say that $a$ and $b$ are \emph{incomparable}. We
write $x\equiv_Q y$ when $x\leq_Q y\land y\leq_Q x$: this is an
equivalence and the quotient $(Q,{\leq_Q})/\equiv_Q$ is a poset that,
as far as ordinal invariants are concerned, is indistinguishable
from~$Q$. Therefore we restrict our attention to posets for technical
reasons but without any loss of generality. Note that some
constructions on posets (e.g., taking powersets) yield quasi-orders
that are not posets. A QO $Q$ is \emph{total} if for all $x,y$ in
$Q$, $x\le_Q y$ or $x\ge_Q y$; a total poset is also called a
\emph{chain}.
When a QO does not have infinite antichains, we say that it satisfies
the \emph{Finite Antichain Condition}, or simply that it is FAC. A QO
that does not have any infinite (strictly) decreasing sequence is said
to be well-founded (or WF). A \emph{well-quasi order} (or WQO) is a
QO that is both WF and FAC: it is well-known that a QO is WQO if and
only if it does not have any infinite bad
sequence~\cite{Kruskal,Milner}, where a sequence $\langle
x_0,x_1,x_2,\ldots\rangle$ is \emph{good} if $x_i\leq x_j$ for some
positions $i<j$, and is \emph{bad} otherwise.
For a QO $(Q, {\le})$ we define the \emph{reverse} QO $Q^\ast$ as $(Q,
{\ge})$, that is to say, $x\le_{Q^\ast} y$ if and only if $x\ge_Q y$.
An \emph{augmentation} of $(Q, {\le})$ is a QO $(Q, {\le'})$ such that
$x\le y\implies x\le' y$, i.e., $\le$ is a subset of $\le'$. A
\emph{substructure} of a QO $(Q, {\le})$ is a QO $(Q', {\le'})$ such
that $Q'\subseteq Q$ and ${\le'}\:\subseteq\: {\le}$. In this case, we
write $Q'\le Q$.
\subsection{Rankings and Well-Founded Trees}\label{ssec-rank}
Recall that for every WF poset $P$ there exist ordinals $\gamma$ and
order preserving functions $f{:}\,P\to \gamma$, that is, such that
$x<_P y\implies f(x)< f(y)$ for all $x,y\in P$. The smallest such
ordinal $\gamma$ is called the \emph{rank} of $P$; one can obtain the
associated \emph{ranking function} $r{:}\,P\to \gamma$ by defining
inductively $r(x)=\sup\{r(y)+1:\,y<_P x\}$, and the rank turns out to
be equal to its height $\h(P)$ (see Sec.~\ref{ssec:residuals}). When $P$
is total, i.e., is a chain, then its rank is also called its
\emph{order type}.
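The inductive definition $r(x)=\sup\{r(y)+1:\,y<_P x\}$ is directly computable on finite posets. As an illustrative aside, the following Python sketch computes the canonical ranking of a small well-founded poset of our own choosing, the divisors of 12 under divisibility.

```python
from functools import lru_cache

# Hypothetical finite well-founded poset: divisors of 12 under divisibility.
ELEMS = (1, 2, 3, 4, 6, 12)

def lt(x, y):
    return x != y and y % x == 0

@lru_cache(maxsize=None)
def r(x):
    """Canonical ranking: r(x) = sup { r(y) + 1 : y < x }."""
    return max((r(y) + 1 for y in ELEMS if lt(y, x)), default=0)

ranks = {x: r(x) for x in ELEMS}
print(ranks)  # {1: 0, 2: 1, 3: 1, 4: 2, 6: 2, 12: 3}

# r is strictly order-preserving, and the rank of P is sup { r(x) + 1 } = 4.
assert all(r(x) < r(y) for x in ELEMS for y in ELEMS if lt(x, y))
print(max(r(x) + 1 for x in ELEMS))  # 4
```

The rank 4 coincides with the height of this poset, as Sec.~\ref{ssec:residuals} explains in general.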
Traditionally, for a tree $(T,\le_T )$, one says that it is
well-founded if it \emph{does not have an infinite branch}, which with
the notation above amounts to saying that the reverse partial order
$(T,\ge_T)$ is well-founded. This somewhat confusing notation,
implies that for rooted well-founded trees, the root(s) have the
largest rank, and the leaves have rank~$0$. In our definitions of
ordinal invariants given in the introduction, we considered trees of
non-empty finite sequences, ordered by initial segments: if $s=\langle
x_0,x_1,\ldots,x_n\rangle$ and $t=\langle y_0,y_1,\ldots,y_m\rangle$,
we write $s\initialeq t$ and say that $s$ is an \emph{initial segment}
of $t$, when $n\leq m$ and $s=\langle y_0,\ldots,y_n\rangle$.
Equivalently, the associated strict ordering $s\initial t$ means that
$t$ can be obtained by appending some sequence $t'$ after $s$, denoted
$t=s\frown t'$.
We also make an easy but important observation regarding
substructures: when $P$ is embedded in $Q$ as an induced substructure,
we have $\w(P)\le \w(Q)$, and similarly for $\o$ and $\h$. Indeed, every
antichain (bad sequence, decreasing sequence, resp.) of $P$ is an
antichain (bad sequence, decreasing sequence, resp.) of $Q$, so the
ranks of the corresponding trees can only increase when going from $P$
to~$Q$.
\subsection{Residual Characterisation}
\label{ssec:residuals}
For a poset $(P,{\le})$, $x\in P$, and
$\ast\in\{{\bot},{<},{\not\ge}\}$, we define the
\emph{$\ast$-residual} of $P$ at $x$ as the induced poset defined by
\begin{equation}
P_{\ast x}\eqdef \{y\in P~:~ y\mathrel\ast x\}\;.
\end{equation}
Since this is an induced substructure of $P$, $P_{\ast x}$ is FAC
(resp.\ WF, WPO) whenever $P$ is FAC (resp.\ WF, WPO).
The interest of $\bot$-residuals (resp.\ $<$-residuals,
$\not\ge$-residuals) is that they provide the range of choices for
continuing incomparable (resp.\ descending, bad) sequences once
element $x$ has been chosen as first element: the suffix of the
sequence should belong to $P_{\ast x}$, and we have recursively
reduced the problem to measuring the rank of the tree $\Inc(P_{\bot
x})$ (resp.\ $\Dec(P_{<x})$, $\Bad(P_{\not\ge x})$).
The following lemma shows precisely how we can extract the rank from
such a recursive decomposition of the tree.
\begin{lemma}\label{bunching}
\hfill\begin{enumerate}
\item Suppose that $\{T_i:\,i\in I\}$ is a family of
well-founded trees and let $T$ be their disjoint union. Then
$T$ is a well-founded tree and it has rank
$\rho(T)=\sup_{i\in I} \rho(T_i)$.
\item Let $T=t^\frown F$ denote a tree rooted at $t$ with $F =
T\setminus \{t\}$ and suppose that $F$ is well-founded of rank
$\rho(F)$. Then so is $T$, and $\rho(T) =\rho(F) +1$.
\end{enumerate}
\end{lemma}
\begin{proof}[\ifams Proof \fi of~1] It is clear that $T$ is well-founded.
For each $i\in I$, let $f_i{:}\,T_i\to
\rho(T_i)$ be a function witnessing the rank of $T_i$. Then
$f\eqdef\bigcup_{i\in I}f_i$ is an order reversing
function from $T$ to $\gamma\eqdef\sup_{i\in I}
\rho(T_i)$, showing $\rho(T)\le \gamma$.
Conversely, if $f{:}\,T\to\rho(T)$ is a witness function for
the rank of $T$, its restriction to any $T_i$ is
order reversing, showing that $\rho(T_i)\leq\rho(T)$.
\end{proof}
\begin{proof}[\ifams Proof \fi of~2]\renewcommand{\qedsymbol}{$\eop_{\ref{bunching}}$}
Clearly $T$ is
well-founded. Let $\rho^\ast\eqdef\rho(F)+1=\bigl(\sup_{\alpha<\rho(F)}(\alpha+1)\bigr)
+1$. Consider the ranking function $r{:}F\to \rho(F)$, and let
$f{:}T\to\rho^\ast$ be given by
\[
f(s) \eqdef \begin{cases}
r(s)& \textrm{if }s\in F\:, \\
\sup_{\alpha< \rho(F)}(\alpha+1)& \textrm{if }s=t.
\end{cases}
\]
It is clear that $f$ is an order reversing function, witnessing
$\rho(T)\leq\rho^\ast$.
Suppose that $\beta<\rho^\ast$ and that $h{:}\,T\to\beta$
is an order reversing function. In particular,
$h(t)<\beta\le\rho(F)=f(t)$, so let $\alpha< \rho(F)$ be such that
$h(t)\le\alpha$. Let $s\in F$ be such that $f(s)=\alpha$; since any
order reversing function dominates the canonical ranking,
$h(s)\ge f(s)=\alpha\ge h(t)$, yet $t<_T s$, a contradiction.
\ifams\relax\else\hfill$\eop_{\ref{bunching}}$\fi
\end{proof}
Lemma~\ref{bunching} yields the equations:
\ifams\begin{equation}\begin{aligned}\label{eq-w-decomp}
\w(P)&=\sup_{x\in P}\{\w(P_{\bot x}) + 1\}\:,\\
\h(P)&=\sup_{x\in P}\{\h(P_{{<}x}) + 1\}\:,\\
\o(P)&=\sup_{x\in P}\{\o(P_{{\not\ge}x}) + 1\}\:,
\end{aligned}\end{equation}\else
\begin{align}\label{eq-w-decomp}
\w(P)&=\sup_{x\in P}\{\w(P_{\bot x}) + 1\}\:,&
\h(P)&=\sup_{x\in P}\{\h(P_{{<}x}) + 1\}\:, &
\o(P)&=\sup_{x\in P}\{\o(P_{{\not\ge}x}) + 1\}\:,
\end{align}\fi
that hold for any FAC, WF, or WPO poset $P$, respectively.
Note that it yields $\w(\emptyset)=\h(\emptyset)=\o(\emptyset)=0$.
Equation~\eqref{eq-w-decomp} is used very frequently
in the literature and provides a method for computing ordinal
invariants recursively, which we call the \emph{method of residuals}.
Equation~\eqref{eq-w-decomp} further shows that the function
$r(x)\eqdef\h(P_{<x})$ is the optimal ranking function of~$P$. Thus
$\h(P)$ is the rank of~$P$, i.e.\ the minimal $\gamma$ such that there
exists a strict order-preserving $f{:}\,P\to\gamma$ (recall
Sec.~\ref{ssec-rank}).
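For finite posets, the three residual equations of~\eqref{eq-w-decomp} translate almost verbatim into code. The following Python sketch is an illustrative aside that computes all three invariants of a small WPO of our own choosing (the divisors of 12 under divisibility) with a single generic recursion, parameterised by the residual relation.

```python
# Hypothetical finite WPO: divisors of 12 under divisibility.
ELEMS = frozenset((1, 2, 3, 4, 6, 12))

def le(x, y): return y % x == 0
def lt(x, y): return x != y and le(x, y)
def incomp(x, y): return not le(x, y) and not le(y, x)

def by_residuals(elems, residual):
    """sup_{x in P} { invariant(P_{* x}) + 1 }, the method of residuals.

    residual(y, x) decides whether y belongs to the residual of P at x.
    """
    return max((by_residuals(frozenset(y for y in elems if residual(y, x)),
                             residual) + 1
                for x in elems), default=0)

w = by_residuals(ELEMS, incomp)                     # P_{incomparable to x}
h = by_residuals(ELEMS, lt)                         # P_{< x}
o = by_residuals(ELEMS, lambda y, x: not le(x, y))  # P_{not >= x}
print(w, h, o)  # 2 4 6
```

The outputs match the expected values for this poset: a maximum antichain of size 2 (e.g.\ $\{4,6\}$), a longest chain of length 4 (e.g.\ $1\,|\,2\,|\,4\,|\,12$), and maximal order type 6, the cardinality.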
\subsection{Games for WQO Invariants}\label{ssec:games}
One limitation of the method of residuals is that it tends to produce recursive rather than
closed formul\ae, see, e.g., \citet{SS-icalp11}.
Another proof technique adopts a game-theoretical point of view. This
is based on \cite[\S3]{BlGu}, which in turn can be seen as an
application of a classical game for the rank of trees to the specific
trees used for the ordinal invariants. We shall use this technique to
obtain results about special products of more than two orders, see for
example Thm.~\ref{cor-productofsquares}.
The general setting is as follows. For a WQO $P$ and an ordinal
$\alpha$, the game $G_{P,\alpha}^*$ ---where $*$ is one of
$\h,\o,\w$--- is a two-player game where positions are pairs
$(\beta,S)$ of an ordinal and a sequence over $P$. We start in the
initial position $(\alpha,\langle\rangle)$. At each turn, and in
position $(\beta,S)$, Player 1 picks an ordinal $\beta'<\beta$ and
Player 2 answers by extending $S$ with an element $x$ from $P$. Player
2 is only allowed to pick $x$ so that the extended $S'=S\frown x$ is a
decreasing sequence (or a bad sequence, or an antichain) when $*=\h$
(resp.\ $*=\o$, or $*=\w$) and he loses the game if he cannot answer
Player 1's move. After Player 2's move, the new position is
$(\beta',S')$ and the game continues. Player 2 wins when the position
has $\beta=0$ and hence Player 1 has no possible move. The game
cannot run forever so one player has a winning strategy. Applying
\cite[Prop.~23]{BlGu} we deduce that Player 2 wins in $G_{P,\alpha}^*$
iff $*(P)\geq\alpha$. As we are mostly interested in the invariant
$\w$, we shall adopt the notation $G_{P,\alpha}$ for~$G_{P,\alpha}^{\w}$.
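For a finite WPO $P$ and a finite ordinal $\alpha$, the game $G_{P,\alpha}$ can be solved exhaustively by minimax, which makes the characterisation of \cite[Prop.~23]{BlGu} easy to test in small cases. The following Python sketch is an illustrative aside on an example of our own choosing, the grid $\{0,1,2\}^2$, whose width is~3.

```python
from functools import lru_cache

# Hypothetical finite WPO: the grid {0,1,2}^2, componentwise order, w(P) = 3.
ELEMS = frozenset((i, j) for i in range(3) for j in range(3))

def le(p, q): return p[0] <= q[0] and p[1] <= q[1]
def incomp(p, q): return not le(p, q) and not le(q, p)

@lru_cache(maxsize=None)
def p2_wins(beta, antichain):
    """Player 1 to move in position (beta, antichain) of the width game.

    Player 1 picks some beta' < beta; Player 2 must answer with an element
    of P extending the antichain.  Player 2 wins once beta = 0 is reached,
    which is the 'all' over an empty range below.
    """
    return all(any(p2_wins(b2, antichain | frozenset([x]))
                   for x in ELEMS - antichain
                   if all(incomp(x, y) for y in antichain))
               for b2 in range(beta))

for alpha in range(6):
    print(alpha, p2_wins(alpha, frozenset()))
# Player 2 wins exactly for alpha <= 3 = w(P).
```

Since the elements of an antichain are unordered, positions are represented as sets rather than sequences; this does not change who wins.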
\subsection{Cardinal Invariants}
We can connect the ordinal invariants with cardinal measures but this
does not lead to very fine bounds. Here are two examples of what can
be said.
\begin{lemma}\label{upperbound}
Suppose that $Q$ is a FAC quasi-order of cardinal
$\kappa\geq\aleph_0$. Then $\w(Q)<\kappa^+$, the cardinal successor of
$\kappa$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{upperbound}}$}
The tree $\Inc(Q)$ has size equal to $\kappa$ and therefore its rank is
an ordinal $\gamma<\kappa^+$.
\ifams\relax\else\hfill\qedsymbol\fi
\end{proof}
\begin{theorem}[Dushnik-Miller]\label{the-partitions}
Suppose that $P$ is a WPO of cardinal $\kappa\ge\aleph_0$. Then
$\h(P)\ge \kappa$.
\end{theorem}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{the-partitions}}$}
This is an easy consequence of Thm. 5.25 in \cite{DushnikMiller}. By
the definition of $\h$, it suffices to show that $P$ has a chain of
size $\kappa$. Define a colouring $c$ on the set $[P]^2$ of pairs of
$P$ by saying $c(\{x,y\})\eqdef 0$ if $x$ is comparable to $y$ and
$c(\{x,y\})\eqdef 1$ otherwise. Then use the relation $\kappa\arrows
(\kappa, \aleph_0)^2$, meaning that $P$ has a chain of cardinal
$\kappa$ or an antichain of cardinal $\aleph_0$, which for
$\kappa=\aleph_0$ is the Ramsey Theorem, and for $\kappa>\aleph_0$ is
the Dushnik-Miller Theorem. Since $P$ is FAC, we must have a chain of
order type at least $\kappa$.
\ifams\relax\else\hfill\qedsymbol\fi
\end{proof}
Such results are however of little help when the poset at hand is
countable, because they only tell us that the invariants are countably
infinite, as expected. This justifies the use of ordinal invariants
rather than cardinal ones.
\subsection{WPOs as a Basis for FAC Posets}\label{wqoversusall}
A \emph{lexicographic sum} of posets in some family $\{ P_i:\, i \in
Q\}$ of disjoint orders \emph{along} a poset $(Q,\leq_Q)$, denoted by
$\sum_{i\in Q}P_i$, is defined
as the order $\le$ on the disjoint union $P$ of $\{ P_i:\, i \in Q\}$
such that for all $x,y \in P$ we have $x\le y$ iff $x,y\in P_i$ for
some $i\in Q$ and $x\le_{P_i} y$, or $x\in P_i$ and $y\in P_j$ for some
$i,j \in Q$ satisfying $i <_Q j$.
The lexicographic sum of copies of $P$ along $Q$ is denoted by $P\cdot
Q$ and called the \emph{direct product} of $P$ and $Q$. The
\emph{disjoint sum} of posets in $\{ P_i:\,i \in Q\}$ is defined as
the union of the orders $\le_{P_i}$: this is just a special case of a
lexicographic sum, where the sum is taken over an antichain~$Q$. In
the case of two orders $P_1,P_2$, the disjoint sum is denoted by
$P_1\sqcup P_2$.
As a consequence of Thm.~7.3 of \citet{abcdzt} (by taking the union
over all infinite cardinals~$\kappa$), one obtains the following
classification theorem.
\begin{theorem}[\citeauthor{abcdzt}]
\label{allFAC}
Let $\BB\!\PP$ be the class of posets which are either a WPO, the
reverse of a WPO, or a linear order. Let $\PP$ be the closure of
$\BB\!\PP$ under lexicographic sums with index set in $\BB\!\PP$ and
augmentation. Then $\PP$ is exactly the class of all FAC posets.
\end{theorem}
We will use the classification in Thm.~\ref{allFAC} to see that if we
know how to calculate $\w(P)$ for $P$ an arbitrary WPO, then we can
bound $\w(P)$ for any FAC poset $P$. This in fact follows from some
simple observations concerning the orders in the class $\BB\!\PP$.
\begin{lemma}\label{basic}
(1) If $P$ is total, then $\w(P)=1$. In general, if all the
antichains in a poset $P$ are of length $\le n$ for some $n<\omega$,
then $\w(P)\le n$, and $\w(P) = n$ in the case that there are
antichains of length $n$.
\noindent
(2) For any poset $P$, $\Inc(P)=\Inc(P^\ast)$ and hence in the case of
FAC posets we have $\w(P^\ast)=\w(P)$.
\noindent
(3) If $P'$ is an augmentation of a FAC poset $P$, then $\Inc(P')$ is
a subtree of $\Inc(P)$ and therefore $\w(P')\le \w(P)$.
\noindent
(4) Let $P$ be the lexicographic sum of posets $\{ P_i:\,i \in L\}$
along some linear order $L$. Then $\Inc(P)=\bigcup_{i \in L}
\Inc(P_i)$ and in the case of FAC posets we have $\w(P)=\sup_{i \in L}
\w(P_i)$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{basic}}$}
(1) The only non-empty sequences of antichains in a linear order $P$
are the singleton sequences. It is clear that the resulting tree
$\Inc(P)$ has rank~$1$, by assigning the value~$0$ to any singleton
sequence. The more general statement is proved in the same way,
namely if all the antichains in a poset $P$ are of length $\le n$ for
some $n<\omega$ then it suffices to define $f{:}\, \Inc(P) \to n$ by
letting $f(s)\eqdef n - |s|$.
\noindent
(2), (3) Obvious.
\noindent
(4) This is the same argument as in Thm.~\ref{theorem-lexsum}.(3).
\ifams\relax\else\hfill\qedsymbol\fi
\end{proof}
In conjunction with Thm.~\ref{allFAC}, we conclude that the problem of
bounding the width of any given FAC poset is reduced to knowing how to
calculate the width of WQO posets. This is the focus of the
second part of this article, starting with Sec.~\ref{sec-computing-w}.
\subsection{Relationship Between Width, Height and Maximal Order Type}
\label{sec-links}
As we have seen in the previous discussion, $\w(P)=\h(\?A(P))$, the
antichain rank of $P$ (where antichains are ordered by reverse inclusion).
\Citet[Thm.~4.13]{kriz90b} proved that there is another connection
between the ordinal functions discussed here and the width function.
The statement uses natural products of ordinals. Recall for this that
the Cantor normal form (CNF) of an ordinal $\alpha$
\[
\alpha=\omega^{\alpha_0}\cdot m_0 + \cdots + \omega^{\alpha_\ell}\cdot
m_\ell
\]
is determined by a non-empty decreasing sequence $\alpha_0>\alpha_1>
\cdots >\alpha_\ell\ge 0$ of ordinals and a sequence of natural numbers
$m_i> 0$. Cantor proved that every ordinal has a unique
representation in this form. Two well-known operations can be defined
based on this representation: the \emph{natural or Hessenberg sum}
$\alpha\oplus\beta$ is defined by adding the coefficients of the
normal forms of $\alpha$ and $\beta$ as though these were polynomials
in $\omega$. The \emph{natural or Hessenberg product}
$\alpha\otimes\beta$ is obtained when the normal forms of $\alpha$ and
$\beta$ are viewed as polynomials in $\omega$ and multiplied accordingly.
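To illustrate these operations, take $\alpha=\omega^2+\omega$ and
$\beta=\omega+1$. Then
\[
\alpha\oplus\beta=\omega^2+\omega\cdot 2+1,
\qquad
\alpha\otimes\beta=\omega^3+\omega^2\cdot 2+\omega\:,
\]
while the ordinary operations give
$\alpha\cdot\beta=\omega^3+\omega^2+\omega$ and
$\beta+\alpha=\omega^2+\omega$: unlike $+$ and $\cdot$, the natural
operations are commutative and strictly monotone in both arguments.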
\begin{theorem}[K\v r\'i\v z and Thomas]\label{thm-oandh}
For any WQO $(Q,\leq)$ the following holds:
\begin{gather}
\label{eq-KT-ineq}
\w(Q)\leq \o(Q) \leq \h(Q)\otimes \w(Q)\:.
\end{gather}
\end{theorem}
For completeness, we give a detailed proof.
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{thm-oandh}}$}
For the first inequality, clearly any antichain in $Q$ can be
linearised in an arbitrary way in a linearisation of $Q$. So $\w(Q)$
is certainly bounded above by the length of the maximal such
linearisation, which by Thm.~\ref{thm-equivalences-1} is exactly the
value of~$\o(Q)$.
For the second inequality, let $\alpha=\w(Q)$ and let $g:\Inc(Q)\into
\alpha$ be a function witnessing that. Also, let $\beta=\h(Q)$ and let
$\rho:\,Q\into \beta$ be the rank function.
For any bad sequence $\langle q_0, q_1, \ldots, q_n\rangle$ in $Q$ we
know that $i<j\le n$ implies that either $q_i$ is incomparable with
$q_j$ or $q_i > q_j$ and hence, in the latter case
$\rho(q_i)>\rho(q_j)$. Fixing a bad sequence $s=\langle q_0, q_1,
\ldots, q_n\rangle$, consider the set
\begin{equation*}
S_{s}\eqdef\{ \langle q_{i_0}, q_{i_1}, \ldots, q_{i_m}\rangle :\,
i_0<i_1<\cdots<i_m=n
\land
\rho(q_{i_0})\le \rho(q_{i_1})\le\cdots\le
\rho(q_{i_m})
\}.
\end{equation*}
In other words,
$S_s$ consists of subsequences of $s$ that end with $q_n$ and whose
elements are pairwise incomparable. In particular, for each
$t\in S_{s}$ the value $g(t)$ is defined. We define $\varphi(s)$ as
the minimum of $g(t)$ over all $t\in S_{s}$. The intuition here is
that $\varphi$ is an ordinal measure for the longest incomparable
sequence within a bad sequence. Now we are going to combine $\rho$
and $\varphi$ into a function $f$ defined on bad sequences. Given such
a sequence $s=\langle q_0, q_1, \ldots, q_n\rangle$, we let
\begin{equation*}
f(s)\eqdef \bigl\langle
\bigl(\rho(q_0), \varphi(\langle q_0\rangle)\bigr),
\bigl(\rho(q_1), \varphi(\langle q_0, q_1\rangle)\bigr),
\ldots,
\bigl(\rho(q_n), \varphi(\langle q_0, q_1,\ldots, q_n\rangle)\bigr)
\bigr\rangle.
\end{equation*}
Noticing that every non-empty subsequence of a bad sequence is bad, we
see that $f$ is a well-defined function which maps $\Bad(Q)$ into the
set of finite sequences from $\alpha\times\beta$. Moreover, let us
notice that every sequence in the image of $f$ is a bad sequence in
$\alpha\times\beta$: if $i<j$
and
$\rho(q_i)\le \rho(q_j)$, let $t$
be a sequence from $S_{\langle q_0, q_1, q_2, \ldots,
q_i\rangle}$ such that $g(t)=\varphi(\langle q_0, q_1, q_2, \ldots,
q_i\rangle)$. Hence $t$ includes $q_i$ and for every $q_k\in t$ we
have $\rho(q_k)\le \rho(q_i)\le \rho(q_j)$. Therefore $t\frown q_j$
was taken into account when calculating $\varphi(\langle q_0, q_1,
q_2, \ldots, q_j\rangle)$. In particular,
\begin{equation}
\varphi(\langle q_0, q_1, q_2, \ldots, q_j\rangle)\le g(t\frown q_j)< g(t)=\varphi(\langle q_0, q_1, q_2, \ldots, q_i\rangle)\:.
\end{equation}
Then
$(\rho(q_i), \varphi(\langle q_0, q_1, q_2, \ldots,
q_i\rangle))\not\le (\rho(q_j), \varphi(\langle q_0, q_1, q_2, \ldots,
q_j\rangle))$.
Another possibility when $i<j$ is that $\rho(q_i)>\rho(q_j)$ and it
yields the same conclusion.
We have therefore shown that $f:\Bad(Q)\into
\Bad(\alpha\times\beta)$. Let us also convince ourselves that $f$ is a
tree homomorphism, meaning a function that preserves the strict tree
order. The tree $\Bad(Q)$ is ordered by initial segments, the
order which we have denoted by $\initial$. If $s\initial t$, then obviously $f(s)\initial f(t)$. Given that it is well
known and easy to see that tree homomorphisms can only increase the
rank of a tree, we have that $\o(Q)\le \o(\alpha\times\beta)$. The
latter, as shown by \citet{deJonghParikh}, is equal to
$\alpha\otimes\beta=\w(Q)\otimes \h(Q)$ (note that $\otimes$ is
commutative).
\ifams\relax\else\hfill\qedsymbol\fi
\end{proof}
From Thm.~\ref{thm-oandh} we derive a useful consequence. Recall that
$\alpha$ is \emph{additive} (or \emph{multiplicative})
\emph{principal} if $\beta,\gamma<\alpha$ implies
$\beta+\gamma<\alpha$ (respectively implies $\beta \cdot
\gamma<\alpha$). These implications also hold for natural sums and
products.\todo[size=\tiny]{I've put a proof in the Appendix for our
peace of mind}
\begin{corollary}
\label{thm-w=o-4multprinc}
Assume that $\o(Q)$ is a principal multiplicative ordinal and that
$\h(Q) < \o(Q)$. Then $\w(Q) = \o(Q)$.
\end{corollary}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{thm-w=o-4multprinc}}$}
Assume, by way of contradiction, that $\w(Q)<\o(Q)$. From
$\h(Q)<\o(Q)$ we deduce $\h(Q)\otimes\w(Q)<\o(Q)$ (since $\o(Q)$ is
multiplicative principal), contradicting the inequality \eqref{eq-KT-ineq} in
Thm.~\ref{thm-oandh}. Hence $\w(Q) \geq \o(Q)$, and necessarily $\w(Q) =
\o(Q)$, again by~\eqref{eq-KT-ineq}.~%
\ifams\relax\else\hfill\qedsymbol\fi
\end{proof}
\subsection{Infinite Products and Rado's Structure}\label{sec-Rado}
One may wonder what happens in the case of infinite products. We
remind the reader that the property of being WQO is in general not
preserved by infinite products. The classical example for this was
provided by Rado in \cite{rado54}, who defined what we call the
\emph{Rado structure}, denoted $(R,\leq)$:\footnote{We adopt the
definition from~\cite{laver-wqos}.} Rado's order is given
as a structure on $\omega\times\omega$ where we define
\[
(a,b) \leq (a',b')\mbox{ if }[a=a'\mbox{ and }b\leq b']\mbox{ or }b<a'.
\]
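Observe that two pairs $(a,b)$ and $(a',b')$ with $a\neq a'$ are
incomparable in $R$ exactly when $b\geq a'$ and $b'\geq a$. In
particular, for every $k<\omega$ the set $A_k\eqdef\{(i,k):\,i<k\}$ is
an antichain of size~$k$, so $R$ has antichains of arbitrarily large
finite size.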
The definition of BQOs was motivated by trying to find a property
stronger than WQO which is preserved by infinite products, so in
particular Rado's example is not a BQO \citep[see][Thm.~1.11 and
2.22]{Milner}.
We can use the method of residuals and other tools
described in previous sections to compute
\begin{xalignat}{3}
\o(R) &= \omega^2, & \h(R)&=\omega, & \w(R)&=\omega,
\end{xalignat}
which gives the same ordinal invariants as those of the product
$\omega\times\omega$, even though the two orders are not isomorphic.
Moreover, $\omega\times\omega$ is a BQO (since being BQO is preserved
under products) while Rado's order is not, so one cannot characterise
BQOs by the ordinal invariants considered here. In fact, the two
orders do not even embed into each other. To see this,
assume by way of contradiction that $f$ injects $\omega\times\omega$
into $R$. Write $(a_i,b_i)$ and $(c_i,d_i)$ for $f(0,i)$ and, resp.,
$f(i,0)$ when $i\in\omega$. Necessarily the $b_i$'s and the $d_i$'s
are unbounded. If the $a_i$'s are unbounded, one has the contradictory
$f(1,0)<_Rf(0,i)=(a_i,b_i)$ for some $i$, and there is a similar
contradiction if the $c_i$'s are unbounded, so assume the $a_i$'s and
the $c_i$'s are bounded by some $k$. By the pigeonhole principle, we
can find a pair $0<i,j$ with $a_i=c_j$ so that $f(0,i)\mathbin{\not\!\!\bot_R}
f(j,0)$, another contradiction. Hence $(\omega\times\omega)\not\leq R$.
In the other direction, $R\not\leq(\omega\times\omega)$
is obvious since $\omega\times\omega$ is BQO while $R$ is not.
\subsection{Finite Multisets, Sequences, and Trees}\label{sec-sequences}
Well-quasi-orders are also preserved by building multisets, sequences,
and trees with WQO labels, together with suitable embedding relations.
\emph{Finite sequences} in $Q^{<\omega}$ are compared by the
\emph{subsequence embedding} ordering defined by
$s=\tup{x_0,\dots,x_{n-1}}\leq_* s'=\tup{x'_0,\dots,x'_{p-1}}$ if
there exists $f{:}\,n\to p$ strictly monotone such that $x_i \leq x'_{f(i)}$ in
$Q$ for all $i\in n$. The fact that $(Q^{<\omega},{\leq_*})$ is WQO
when $Q$ is WQO was first shown by \citet{Higman}.
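For instance, over $Q=(\+N,{\leq})$ one has
$\tup{2,5}\leq_*\tup{1,3,4,6}$, witnessed by $f(0)=1$ and $f(1)=3$
(indeed $2\leq 3$ and $5\leq 6$), while
$\tup{5,2}\not\leq_*\tup{1,3,4,6}$ since the only entry dominating $5$
is the last one, leaving no later entry to dominate~$2$.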
Given a WQO $(Q,{\leq})$, a \emph{finite multiset} over~$Q$ is a
function $m{:}\,Q\to\+N$ with finite support, i.e.\ such that $m(x)>0$
for only finitely many $x\in Q$. Equivalently, a finite multiset is a
finite sequence $m$ in $Q^{<\omega}$ where the order is irrelevant,
and can be written as a `set with repetitions' $m=\{x_1,\dots,x_n\}$;
we denote by $M(Q)$ the set of finite multisets over~$Q$. The \emph{multiset
embedding} ordering is then defined by
$m=\{x_0,\dots,x_{n-1}\}\leqH m'=\{x'_0,\dots,x'_{p-1}\}$ if there
exists an injective function $f{:}\,n\to p$ with $x_i \leq x'_{f(i)}$
in $Q$ for all $i\in n$. As a consequence of $(Q^{<\omega},{\leq_*})$
being WQO, $(M(Q),{\leqH})$ is also WQO when $Q$~is.
Finally, a (rooted, ordered) \emph{finite tree} $t$ over $Q$ is either
a leaf $x()$ for some $x\in Q$, or a term $x(t_1,\dots, t_n)$ for some
$n>0$, $x\in Q$, and $t_1,\dots,t_n$ trees over~$Q$.
A tree has arity~$b$ if we bound $n$ by~$b$ in this definition.
We let $T(Q)$ denote the set of finite trees over~$Q$. The
\emph{homeomorphic tree embedding} ordering is defined by
$t=x(t_1,\dots,t_n)\leqT t'=x'(t'_1,\dots,t'_p)$ (where $n,p\geq 0$)
if at least one of the following cases occurs:
\begin{itemize}
\item $t\leqT t'_j$ for some $1\leq j\leq p$, or
\item $x\leq x'$ in $Q$ and $t_1\cdots t_n\leq_* t'_1\cdots t'_p$ for
the subsequence embedding relation on $T(Q)$.
\end{itemize}
The fact that $(T(Q),{\leqT})$ is WQO when $Q$ is WQO was first shown
by \citet{Higman} for trees of bounded arity, before \citet{kruskal60}
proved it in the general case. Note that the special case of trees of
arity~$1$ already yields that $(Q^{<\omega},{\leq_*})$ is WQO.
\subsubsection{Maximal Order Types}
The maximal order types of $M(Q)$, $Q^{<\omega}$, and $T(Q)$ have been
studied by \citet{weiermann2009} and \citet{schmidt79}; see also
\citet[Sec.~1.2]{vandermeeren15} for a nice exposition of these results.
For finite multisets with embedding, we need some additional
notations. For an ordinal $\alpha$ with Cantor normal form
$\omega^{\alpha_1}+\cdots+\omega^{\alpha_n}$ where
$\alpha_1\geq \cdots\geq \alpha_n$, we let
\begin{equation}
\widehat\alpha\eqdef\omega^{{\alpha_1}'}+\cdots+\omega^{{\alpha_n}'}
\end{equation}
where $\alpha'$ is $\alpha+1$ when $\alpha$ is an epsilon number,
i.e.\ when $\omega^\alpha=\alpha$, and is just $\alpha$ otherwise.
The following is \cite[Thm.~2]{weiermann2009}, with a corrected proof
due to \citet[Thm.~5]{VdMRaWe}.
\begin{theorem}[\citeauthor{weiermann2009}]\label{th-oM}
Let $Q$ be a WQO. Then $\o(M(Q))=\omega^{\widehat{\o(Q)}}$.
\end{theorem}
Thus, for $\o(Q)<\varepsilon_0$, one has simply
$\o(M(Q))=\omega^{\o(Q)}$.
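On the other hand, at epsilon numbers the correction
$\alpha\mapsto\widehat\alpha$ matters: for $\o(Q)=\varepsilon_0$ one
has $\widehat{\varepsilon_0}=\omega^{\varepsilon_0+1}$, so that
Thm.~\ref{th-oM} gives $\o(M(Q))=\omega^{\omega^{\varepsilon_0+1}}$
rather than $\omega^{\omega^{\varepsilon_0}}=\varepsilon_0$.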
\medskip
For finite sequences with subsequence embedding, we recall the following
result by \citet{schmidt79}.
\begin{theorem}[\citeauthor{schmidt79}]\label{th-oS}
Let $Q$ be a WQO. Then
\begin{equation*}
\label{eq-o-of-seq}
\o(Q^{<\omega})
=
\begin{cases}
\omega^{\omega^{\o(Q)-1}} &\text{if $\o(Q)$ is finite},
\\
\omega^{\omega^{\o(Q)+1}} &\text{if $\o(Q)=\varepsilon+n$ for
$\varepsilon$ an epsilon number and $n$ finite},
\\
\omega^{\omega^{\o(Q)}} &\text{otherwise}.
\end{cases}
\end{equation*}
\end{theorem}
The case of finite trees is actually a particular case of the results
of \citet{schmidt79} on embeddings in structured trees. Her results
were originally stated using Sch\"utte's Klammer symbols, but can be
translated in terms of the $\vartheta$ functions of \citet{RaWe}.
Defining such ordinal notation systems is beyond the scope of this
chapter; it suffices to say for our results that the ordinals at hand
are going to be principal multiplicative.
\begin{theorem}[\citeauthor{schmidt79}]\label{th-oT}
Let $Q$ be a WQO. Then
$\o(T(Q))=\vartheta(\Omega^\omega\cdot\o(Q))$.
\end{theorem}
\subsubsection{Heights}\label{ssec-hast}
For a WQO $Q$ we define $\h^*(Q)$ as
\begin{equation}
\h^*(Q)\eqdef\begin{cases}
\h(Q) & \text{if $\h(Q)$ is additive principal $\geq \omega$,}\\
\h(Q)\cdot \omega & \text{otherwise.}
\end{cases}
\end{equation}
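For instance, $\h^*(\omega^2)=\omega^2$ since $\omega^2$ is additive
principal, whereas $\h^*(\omega+1)=(\omega+1)\cdot\omega=\omega^2$ and
$\h^*(n)=n\cdot\omega=\omega$ for every finite $n\geq 1$.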
We are going to show that the heights of finite multisets, finite
sequences, and finite trees over $Q$ are all the same, namely $\h^\ast(Q)$.
\begin{theorem}\label{eq-hT=hM=h*}
Let $Q$ be a WF poset. Then
$\h(M(Q))=\h(Q^{<\omega})=\h(T(Q))=\h^*(Q)$.
\end{theorem}
Since obviously $\h(M(Q))\leq \h(Q^{<\omega})\leq\h(T(Q))$, the claim
is a consequence of lemmata~\ref{lem-bound-hTQ}
and~\ref{lem-bound-hMQ} below.
\begin{lemma}\label{lem-bound-hTQ}
$\h(T(Q)) \leq \h^*(Q)$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem-bound-hTQ}}$}
Consider a strictly decreasing sequence $x_0 >_T x_1 >_T \ldots$ in
$T(Q)$, where each $x_i$ is a finite tree over $Q$. Necessarily these
finite trees have a nonincreasing number of nodes: $|x_0|\geq
|x_1|\geq\ldots$. If we add a new minimal element $\bot$ below $Q$,
we can transform any $x_i$ by padding it with some $\bot$'s so that
now the resulting $x'_i$ has the same shape and size as $x_0$. Let us
use $1+Q$ instead of $\{\bot\}+Q$ so that the new trees belong to
$T(1+Q)$, have all the same shape, and form a strictly decreasing
sequence. This construction is in fact an order-reflection from
$\Dec(T(Q))$ to $\Dec\bigl(\bigsqcup_{n<\omega}(1+Q)^n\bigr)$, from
which we get
\begin{equation}
\label{eq-bound-TQ1}
\h(T(Q))\leq
\h(\bigsqcup_{n<\omega}(1+Q)^n)=\sup_{n<\omega} \h([1+Q]^n)
\:,
\end{equation}
using Lem.~\ref{ABresults-disjsum}.(2) for the last equality. For
$n<\omega$, one has
\begin{equation}
\label{eq-bound-TQ2}
\h([1+Q]^n)
=\sup \{ (\alpha\otimes n)+1 ~:~ \alpha < 1+\h(Q) \}\:,
\end{equation}
using lemmata~\ref{theorem-lexsum}.(2)
and~\ref{the-heightproducts}.
If $\h(Q)\leq 1$, $\h(T(Q))=\h(Q)\cdot\omega=\h^*(Q)$ obviously.
For $\h(Q)> 1$, and thanks to \eqref{eq-bound-TQ1} and
\eqref{eq-bound-TQ2}, it is sufficient to show that $\alpha\otimes
n+1\leq \h^*(Q)$ for all $n<\omega$ and all $\alpha<1+\h(Q)$. We
consider two cases:
\begin{enumerate}
\item
If $\h(Q)\geq\omega$ is additive principal, $\alpha<1+\h(Q)=\h(Q)$
entails $\alpha\otimes n<\h(Q)$ thus $\alpha\otimes
n+1<\h(Q)=\h^*(Q)$.
\item
Otherwise the CNF for $\h(Q)$ is $\sum_{i=1}^m\omega^{\alpha_i}$ with
$m>1$. Then $\alpha<1+\h(Q)$ implies $\alpha\leq
\omega^{\alpha_1}\cdot m$, thus $\alpha\otimes n+1\leq
\omega^{\alpha_1}\cdot m\cdot n +1\leq \omega^{\alpha_1+1}=\h(Q)\cdot
\omega=\h^*(Q)$.
\ifams\qedhere\else\qedsymbol\fi
\end{enumerate}
\end{proof}
\newcommand{\vx}{{\bm{x}}} \newcommand{\vy}{{\bm{y}}} Let us write
$M_n(Q)$ for the restriction of $M(Q)$ to multisets of size $n$.
\begin{lemma}\label{lem-MnQ-vs-Q^n}
$\h(M_n(Q))\geq \h(Q^n)$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem-MnQ-vs-Q^n}}$}
With $\vx=\tup{x_1,\ldots,x_n}\in Q^n$ we associate the multiset
$M_\vx = \{x_1,\ldots,x_n\}$. Obviously $\vx <_\times\vy$ implies
$M_\vx\leqH M_\vy$. We further claim that $M_\vy\not\leqH M_\vx$.
Indeed, assume by way of contradiction that $M_\vy\leqH M_\vx$. Then
there is a permutation $f$ of $\{1,\ldots,n\}$ such that $y_i\leq_Q
x_{f(i)}$ for all $i=1,\ldots,n$. From $\vx\leq_\times\vy$, we get
\[
x_i\leq_Q y_i\leq_Q x_{f(i)}\leq_Q y_{f(i)}\leq_Q x_{f(f(i))}
\leq_Q y_{f(f(i))} \leq_Q \cdots \leq_Q x_{f^k(i)}\leq_Q y_{f^k(i)} \leq_Q\cdots
\]
It follows that for all $j$ in the $f$-orbit of $i$, $x_j\equiv_Q x_i\equiv_Q
y_j$, entailing $\vy\equiv_\times \vx$, which contradicts the
assumption $\vx<_\times \vy$.
We have thus exhibited a mapping from $Q^n$ to
$M_n(Q)$ that maps chains to chains. Hence $\h(Q^n)\leq
\h(M_n(Q))$.
\ifams\relax\else\qedsymbol\fi
\end{proof}
\begin{lemma}\label{lem-bound-hMQ}
$\h(M(Q)) \geq \h^*(Q)$.
\end{lemma}
\begin{proof}
The result is clear when $\h^*(Q)=\h(Q)$, and also when $\h(Q)=1$
since then $\h(M(Q))=\omega=\h^*(Q)$. So let us assume that $\h(Q)$ is
not additive principal and has a CNF $\sum_{i=1}^m\omega^{\alpha_i}$
with $m>1$. Thus $\h^*(Q) = \h(Q)\cdot\omega =
\omega^{\alpha_1+1}$. Since by Lem.~\ref{the-heightproducts}, for
$0<n<\omega$, $\h(Q^n)=\sup \{\alpha\otimes n+1~:~\alpha<\h(Q)\}$, we
deduce $\h(Q^n)\geq \omega^{\alpha_1}\cdot n+1$%
. Since $M_n(Q)$ is a
substructure of $M(Q)$, and using Lem.~\ref{lem-MnQ-vs-Q^n}, we
deduce
\begin{align*}
\hspace{1.9cm}\h(M(Q))&\geq \h(M_n(Q))\geq \h(Q^n)\geq \omega^{\alpha_1}\cdot n+1
\\
\shortintertext{for all $0<n<\omega$, hence}
\hspace{1.9cm}\h(M(Q))&\geq \sup_{n<\omega} \omega^{\alpha_1}\cdot n+1 =
\omega^{\alpha_1}\cdot \omega = \h^*(Q)\:.&\hspace{1.9cm}\eop_{\ref{lem-bound-hMQ}}
\end{align*}
\end{proof}
\subsubsection{Widths}
The previous analyses of the maximal order types and heights of
$M(Q)$, $Q^{<\omega}$, and $T(Q)$ allow us to apply the correspondence
between $\o$, $\h$, and $\w$ shown by \citet[Thm.~4.13]{kriz90b}, in
particular its consequence spelled out in Cor.~\ref{thm-w=o-4multprinc}.
\begin{theorem}
\label{prop-w-M-seq-T}
Let $Q$ be a WQO. Then $\w(Q^\dagger)=\o(Q^\dagger)$ where
$Q^\dagger$ can be $T(Q)$, or $Q^{<\omega}$ when $\o(Q)>1$, or
$M(Q)$ when $\o(Q)>1$ is a principal additive ordinal.
\end{theorem}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{prop-w-M-seq-T}}$}
First observe that $\h^\ast(Q)\leq
\h(Q)\cdot\omega\leq\o(Q)\cdot\omega < \o(Q^\dagger)$ when
$Q^\dagger$ is $T(Q)$ (by Thm.~\ref{th-oT}), $Q^{<\omega}$ with
$\o(Q)>1$ (by Thm.~\ref{th-oS}), or $M(Q)$ with $\o(Q)>1$ (by
Thm.~\ref{th-oM}). Furthermore, when $Q^\dagger$ is $T(Q)$ or
$Q^{<\omega}$, and when it is $M(Q)$ with $\o(Q)$ a principal
additive ordinal, $\o(Q^\dagger)$ is a principal multiplicative
ordinal. Thus Cor.~\ref{thm-w=o-4multprinc} shows that
$\w(Q^\dagger)=\o(Q^\dagger)$.
\ifams\relax\else\qedsymbol\fi
\end{proof}
The assumptions in Thm.~\ref{prop-w-M-seq-T} seem necessary. For
instance, if $Q=1$, then $M(1)$ is isomorphic to $1^{<\omega}$ and
$\omega$, with height~$\omega$ and width~$1$. If $A_3=1\sqcup 1\sqcup
1$ is an antichain with three elements, then $M(A_3)$ is isomorphic
to $\omega\times\omega\times\omega$, $\h(M(A_3))=\omega$ by
Lem.~\ref{the-heightproducts} or Thm.~\ref{eq-hT=hM=h*},
$\o(M(A_3))=\omega^3$ by Lem.~\ref{th-oprod}, and
$\w(M(A_3))=\omega^2$ by Thm.~\ref{cor-productofsquares}.(3).
\subsection{Cartesian Products}\label{finiteproducts}
The next simplest operation on WQOs is their Cartesian product. It
turns out that the simplicity of the operation is deceptive and that
the height and, especially, the width of a product $P\times Q$ are not
as simple as we would like. As a consequence, this section only
provides partial results and is unexpectedly long.
To
recall, the product order $P\times Q$ of two partial orders is defined
on the pairs $(p,q)$ with $p\in P$ and $q\in Q$ so that $(p,q)\le
(p',q')$ iff $p\le _P p'$ and $q\le_Q q'$. It is easy to check and
well known that the product of two WQOs is a WQO, and similarly for
FAC and WF orders.
The formula for calculating $\o(P\times Q)$ is still simple. It was first established by
\citet[Thm.~3.5]{deJonghParikh}; see also \citep[Thm.~6]{BlGu}.
\begin{lemma}[\citeauthor{deJonghParikh}]\label{th-oprod}
Suppose that $P$ and $Q$ are two WQOs. Then $\o(P\times
Q)=\o(P)\otimes\o(Q)$.
\end{lemma}
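For instance,
$\o\bigl((\omega+1)\times\omega\bigr)=(\omega+1)\otimes\omega
=\omega^2+\omega$, which differs from the ordinary product
$(\omega+1)\cdot\omega=\omega^2$.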
The question of the height of products is also well studied and a
complete answer appears in \cite{Abraham-Dilworth}, where it is
stated that the theorem is well known. The following statement is a
reformulation of Lem.~1.8 of \cite{Abraham-Dilworth}.
\begin{lemma}[Abraham; folklore]\label{the-heightproducts}
If $\rho_P:\, P\to \h(P)$ and $\rho_Q:\, Q\to \h(Q)$ are the rank
functions of the well-founded posets $P$ and $Q$, then the rank
function $\rho$ on $P \times Q$ is given by $\rho (x, y) = \rho_P (x)
\oplus \rho_Q (y)$. In particular,
\[
\h(P \times Q)=\sup \{\alpha\oplus\beta+1 :\, \alpha<\h(P)\land
\beta<\h(Q)\}
\:.
\]
\end{lemma}
We recall that for any two ordinals $\alpha$ and $\beta$ we have
$\sup_{\alpha'<\alpha, \beta'<\beta} \alpha'\oplus \beta'+1 <
$\alpha\oplus \beta$ \citep[see e.g.][p.~55]{AbBo}, thus the statement
in Lem.~\ref{the-heightproducts} cannot be easily simplified.
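For instance,
$\h\bigl((\omega+1)\times(\omega+1)\bigr)=\sup\,\{\alpha\oplus\beta+1
:\, \alpha,\beta\le\omega\}=\omega\cdot 2+1$, which is strictly below
$\h(\omega+1)\oplus\h(\omega+1)=\omega\cdot 2+2$.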
\begin{remark}[Height of products of finite ordinals]
\label{lem-h-nxm}
The very nice general proof of \citet[Lem.~1.8]{Abraham-Dilworth} can
be made even more visual in the case of finite ordinals. Let
$P=n_1\times \cdots \times n_k$ for some finite
$n_1,\ldots,n_k\in\omega$; then $\h(P)=n_1+\cdots+n_k+1-k$.
Indeed, we observe that any chain $\mathbf{a}_1 <_P\cdots
<_P\mathbf{a}_\ell$ in $P$ leads to a strictly increasing
$|\mathbf{a}_1|<\cdots<|\mathbf{a}_\ell|$, where by $|{\mathbf a}|$ we
denote the sum of the numbers in ${\mathbf a}$. Since
$|\mathbf{a}_\ell|$ is at most $\sum_i (n_i-1)=(\sum_i n_i)-k$ and
since $|\mathbf{a}_1|$ is at least $0$, any chain has length at most
$1+\sum_i n_i-k$. Furthermore it is easy to build a chain witnessing this length. We conclude by
invoking Thm.~\ref{thm-equivalences-1} which states that for any WPO
$P$, $\h(P)$ is the length of the longest chain in $P$.
$\eop_{\ref{lem-h-nxm}}$
\end{remark}
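For instance, $\h(2\times 3)=2+3+1-2=4$, as witnessed by the chain
$(0,0)<(0,1)<(0,2)<(1,2)$.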
Having dealt with $\h$ and $\o$, we are left with $\w$. Here we cannot
hope to have a uniform formula expressing $\w(P\times Q)$ as a
function of $\w(P)$ and $\w(Q)$. Indeed, already in the case of
ordinals one always has $\w(\alpha)=\w(\beta)=1$, while
$\w(\alpha\times\beta)$ has quite a complex form, as we are going to
see next.
\subsubsection{Products of Ordinals}\label{prod-ordinals}
Probably the simplest example of a WQO which is not actually an
ordinal is provided by the product of two ordinals. Thanks to
Thm.~\ref{equal}, we can translate the results of
\cite[Section~3]{Abraham-Dilworth} to give a recursive formula which
completely characterises
$\w(\alpha\times \beta)$ for $\alpha,\beta$ ordinals. We shall sketch
how this is done.
First note that if one of $\alpha,\beta$ is a finite ordinal $n$, say $\alpha=n$, then we have $\w(n \times \beta)=\min\{n,\beta\}$. The next
case to consider is that of successor ordinals, which is taken care of by the following Thm.~\ref{successorcase}. Abraham proved this theorem using
the method of residuals and induction; we offer an alternative proof using the rank of the tree $\Inc$.
\begin{theorem}[Abraham]\label{successorcase} For any ordinals
$\alpha,\beta$ with $\alpha$ infinite, we have $\w(\alpha\times (\beta+1))=\w(\alpha\times \beta)+1$.
\end{theorem}
The proof is provided by the next two lemmas.
\begin{lemma}
\label{lem2-axb+1}
$\w(\alpha\times (\beta+1))\leq \w(\alpha\times\beta)+1$ for any
ordinals $\alpha,\beta$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem2-axb+1}}$}
Write $I$ for $\Inc(\alpha\times (\beta+1))$ and $I'$ for
$\Inc(\alpha\times \beta)$. Any sequence $s = \langle p_1,\ldots,
p_\ell\rangle$ which is in $I$, is either in $I'$ or contains a single pair of the
form $p_i = (a,\beta)$, with $a<\alpha$. In the latter case we write $s'$
for $s$ with $p_i$ removed. Note that $s'$ is in $I'$ (except when $s$
has length 1). Let $\rho':I'\to \rank(I')=\w(\alpha\times\beta)$ be a
ranking function for $I'$ and define $\rho:I\to \ON$ via
\[
\rho(s) \eqdef\begin{cases}
\rho'(s)+1 & \text{if $s\in I'$,}\\
\rho'(s') & \text{if $s\not\in I'$ and $|s|>1$,}\\
\rank(I') & \text{otherwise.}
\end{cases}
\]
One easily checks that $\rho$ is anti-monotone. For this assume $s
\initial t$: (1) if both $s$ and $t$ are in $I'$, monotonicity is
inherited from $\rho'$; (2) if neither is in $I'$ then
$s'\initial t'$ (or $s'$ is empty) and again monotonicity is
inherited (or $\rho(s) = \rank(I') > \rho'(t') =
\rho(t)$); (3) if $s$ is in $I'$ and $t$ is not then $s\initialeq
t'$, entailing $\rho'(s) \geq \rho'(t')$ so that $\rho(s) =
\rho'(s)+1 > \rho'(t') = \rho(t)$.
In conclusion $\rho$, having values in $\w(\alpha\times\beta)+1$,
witnesses the assertion of the lemma.
\ifams\relax\else\qedsymbol\fi
\end{proof}
\begin{lemma}
\label{lem1-axb+1}
If $\alpha$ is infinite then $\w(\alpha\times (\beta+1))\geq
\w(\alpha\times\beta)+1$ for any $\beta$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem1-axb+1}}$}
Write $I$ for $\Inc(\alpha\times\beta)$. Any $s\in I$
has the form $s=\langle(a_1,b_1),\ldots,(a_\ell,b_\ell)\rangle$. We
write $s_+$ for the sequence $\langle(a_1+1,b_1), \ldots,
(a_\ell+1,b_\ell)\rangle$ and observe that it is still a sequence over
$\alpha\times\beta$ since $\alpha$ is infinite, and that its elements
form an antichain (since the elements of $s$ did). Let now $s'_+$ be
$r\frown s_+$ where $r=\langle(0,\beta)\rangle$: the prepended
element is not comparable with any element of $s_+$ so that $s'_+$ is
an antichain and $s'_+\initialeq t'_+$ iff $s_+\initialeq t_+$ iff
$s\initialeq t$. Write $I'_+$ for $\{s'_+~|~s\in I\}\cup \{r\}$. This
is a tree made of a root glued below a tree isomorphic to $I$. Hence
$\rank(I'_+)=\rank(I)+1$. On the other hand, $I'_+$ is a substructure
of $\Inc(\alpha\times(\beta+1))$ hence
$\w(\alpha\times(\beta+1))\geq\rank(I'_+)$.
\ifams\relax\else\qedsymbol\fi
\end{proof}
With Thm.~\ref{successorcase} in hand, the remaining case is to compute $\w(\alpha\times\beta)$ when $\alpha, \beta$ are limit ordinals.
This translates into saying that $\alpha=\omega\alpha'$ and $\beta=\omega\beta'$ for some $\alpha', \beta'>0$. A recursive formula describing the
width of this product is the main theorem
of Section 3 of \cite{Abraham-Dilworth}, which we now quote. It is proved using a complex application of the method of residuals and induction.
\begin{theorem}[Abraham]\label{Abrahamlimits} Suppose that $\alpha$ and $\beta$ are given in their Cantor normal forms
$\alpha=\omega^{\alpha_0}\cdot m_0+\rho$, $\beta=\omega^{\beta_0}\cdot
n_0+\sigma$, where $\omega^{\alpha_0}\cdot m_0$ and
$\omega^{\beta_0}\cdot n_0$ are the leading terms and
$\rho$ and $\sigma$ are the remaining terms of the Cantor normal forms of $\alpha$
and $\beta$ respectively.
Then if $\alpha=1$, we have $\w(\omega\times\omega\beta)=\omega\beta$, and in general
\[
\w(\omega\alpha\times \omega\beta)=
\omega\omega^{\alpha_0\oplus\beta_0}\cdot(m_0+n_0-1) \oplus
\w(\omega\omega^{\alpha_0}\times \omega\sigma)\oplus \w(\omega\omega^{\beta_0}\times \omega\rho).
\]
\end{theorem}
It would be interesting to have a closed rather than a recursive formula for the width of the product of two ordinals. However, the formula does give us a closed form for the width
of the product of two ordinals whose Cantor normal forms have a single term, as we now remark. Here $m,n$ are finite ordinals $\ge 1$.
\begin{enumerate}
\item If $k,\ell<\omega$ then we have
\[\w(\omega^{1+k}\cdot m \times
\omega^{1+\ell}\cdot n)=
\w\bigl(\omega(\omega^k\cdot m)\times\omega(\omega^\ell\cdot
n)\bigr)=\omega^{k+\ell+1}\cdot(m+n-1)\:.
\]
\item (Example~3.4.(3) of \cite{Abraham-Dilworth}) If $\alpha,\beta\ge\omega$ then $1+\alpha=\alpha$ and $1+\beta=\beta$, so
\[
\w(\omega^\alpha\cdot m\times \omega^\beta\cdot n)=
\w\bigl(\omega(\omega^\alpha\cdot m)\times \omega(\omega^\beta\cdot n)\bigr)=
\omega^{\alpha\oplus\beta}\cdot(m+n-1)\:.
\]
\item If $\alpha\ge\omega$ and $k<\omega$ then $\w(\omega^\alpha\cdot m\times \omega^{1+k}\cdot n)=\omega^{\alpha +k}\cdot(m+n-1)$.
\end{enumerate}
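For illustration, here is how the recursion of Thm.~\ref{Abrahamlimits} produces such closed forms in a concrete case (our own worked example). To compute $\w(\omega^2\times\omega^3)$, write $\omega^2=\omega\cdot\omega$ and $\omega^3=\omega\cdot\omega^2$, so that $\alpha=\omega^{1}\cdot 1$ (hence $\alpha_0=1$, $m_0=1$, $\rho=0$) and $\beta=\omega^{2}\cdot 1$ (hence $\beta_0=2$, $n_0=1$, $\sigma=0$). Since $\rho=\sigma=0$, the two residual products in the recursion are empty and contribute width $0$, so
\[
\w(\omega^2\times\omega^3)=\omega\omega^{1\oplus 2}\cdot(1+1-1)=\omega\cdot\omega^{3}=\omega^{4}\:.
\]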
Let us mention one more result derivable from Thm.~\ref{Abrahamlimits}.
\begin{lemma}[Abraham]
\label{lem-w-omega-x-alpha}
$\w(\omega\times \alpha)=\alpha$ for any ordinal $\alpha$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem-w-omega-x-alpha}}$}
By induction on $\alpha$. If $\alpha$ is a limit, we write it
$\alpha=\omega\alpha' = \omega(\omega^{\alpha_0}\cdot m_0+\cdots+
\omega^{\alpha_\ell}\cdot m_\ell)$. Now Thm.~\ref{Abrahamlimits}
yields $\w(\omega\times\omega\alpha')= \omega\omega^{\alpha_0}\cdot
m_0 \oplus \cdots\oplus \omega\omega^{\alpha_\ell}\cdot
m_\ell=\alpha$. If $\alpha$ is a successor, we use
Lem.~\ref{lem2-axb+1} and~\ref{lem1-axb+1}.
\ifams\relax\else\qedsymbol\fi
\end{proof}
\subsubsection{Finite Products and Transferable Orders}\label{gamesonproduct}
Since the width of the product of two ordinals is understood, we can
approach the general question of the width of products of finitely
many WQO posets $P_i$ by reducing it to the width of some
product of ordinals. Using that strategy, we give a lower bound for
$\w(\prod_{i\le n}P_i)$.
\begin{theorem}\label{thm-lboundPxQ}
For any WQO posets $P_0, P_1,\ldots, P_n$, $\w(\prod_{i\le n}P_i )\ge
\w(\prod_{i\le n}\h(P_i))$.
\end{theorem}
The proof follows directly from a simple lemma, which is of
independent interest:
\begin{lemma}\label{lem-htintoproduct}
Suppose that $P_0, P_1,\ldots, P_n$ are WQO posets. Then $\prod_{i\le
n}\h(P_i)$ embeds into $\prod_{i\le n}P_i $ as a substructure.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem-htintoproduct}}$}
We use Thm.~\ref{thm-equivalences-2} and pick in each
$P_i$ a chain $C_i$ that has
order type $\h(C_i)=\h(P_i)$. Then $\prod_{i\le n}C_i$
is an induced suborder of $\prod_{i\le n}P_i$ which is isomorphic
to $\prod_{i\le n}\h(P_i)$.
\ifams\relax\else\qedsymbol\fi
\end{proof}
Now we shall isolate a special class of orders for which it will be
possible to calculate certain widths of products. Let us write $\down
x$ for the downwards-closure of an element $x$, i.e., for $\{y: x\leq
y\}$.
\begin{definition}\label{def-everywheredense} A FAC partial order $P$ belongs to the class $\TT$ of \emph{transferable orders} if
$\w(P\setminus (\down x_1 \cup\cdots\cup\down x_n))=\w(P)$ for any
(finitely many) elements $x_1,\ldots,x_n\in P$.
\end{definition}
\begin{theorem}\label{thm-foisk} Suppose that $P$ is a WQO
transferable poset and $\delta$ is an ordinal. Then $\w(P\times
\delta)\ge \w(P)\cdot \delta$.
\end{theorem}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{thm-foisk}}$}
Write $\gamma$ for $\w(P)$: we prove that Player 2 has a winning
strategy, denoted $\sigma_{P'\times\delta,\alpha}$, for each game
$G_{P'\times\delta,\alpha}$ where $P'$ is some $P\setminus(\down
y_1\cup\cdots\cup\down y_n)$ and $\alpha\leq \gamma\cdot\delta$.
The proof is by induction on $\delta$.
If $\delta=0$ then $\alpha=0$ and Player 1 loses immediately.
If $\delta=\lambda$ is a limit, the strategy for Player 2 depends on
Player 1's first move. Say it is
$\alpha'<\alpha\leq\gamma\cdot\delta$. Then
$\alpha'<\gamma\cdot\delta$ means that $\alpha'<\gamma\cdot\delta'$
for some $\delta'<\delta$. Player 2 chooses one such $\delta'$ and now
applies $\sigma_{P'\times\delta',\alpha'+1}$ (which exists and is
winning by the induction hypothesis) for the whole game. Note that a strategy for a
substructure $P'\times\delta'$ of the original $P'\times\delta$ will
lead to moves that are legal in the original game. Also note that
$\alpha'+1$ is $\leq\gamma\cdot\delta'$.
If $\delta=\epsilon+1$ is a successor then Player 2 answers each move
$\alpha_1,\ldots,\alpha_m$ played by Player 1 by writing it in the
form $\alpha_i=\gamma\cdot\delta_i+\beta_i$ with $\beta_i<\gamma$.
Note that $\delta_i<\delta$. If $\delta_1=\cdots=\delta_m=\epsilon$, note that $\beta_1>\beta_2>\cdots>\beta_m$.
Let Player 2 play $(x_m,\epsilon)$ where $x_m$ is $\sigma_{P',\gamma}$
applied to $\beta_1,\ldots,\beta_m$ (that strategy exists and is
winning since $P$ is transferable and has width $\gamma$). If
$\delta_m<\epsilon$ then Player 2 switches strategy and now uses
$\sigma_{P''\times\epsilon,\gamma\cdot\epsilon}$ as if a new game was
starting with $\alpha_m$ as Player 1's first move, and for
$P''=P'\setminus(\down x_1\cup\cdots\cup\down x_{m-1})$. By
the induction hypothesis, Player 2 will win by producing a sequence $S''$ in
$P''\times\epsilon$. These moves are legal since
$(x_1,\epsilon)\cdots(x_{m-1},\epsilon)\frown S''$ is an antichain in
$P'\times(\epsilon+1)$.
\ifams\relax\else\qedsymbol\fi
\end{proof}
In order to use Thm.~\ref{thm-foisk}, we need actual instances of
transferable orders.
\begin{lemma}\label{lem-omegatransferable} For any $1\leq\alpha_1,\ldots,\alpha_n$, the order $P=\omega^{\alpha_1}\times\cdots\times \omega^{\alpha_n}$ is transferable.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem-omegatransferable}}$}
Since each $\omega^{\alpha_i}$ is additive principal, $P\setminus (\down
x_1\cup\cdots\cup\down x_m)$ contains an isomorphic copy of $P$ for
any finite sequence $x_1,\ldots,x_m$ of elements of $P$.
\ifams\relax\else\qedsymbol\fi
\end{proof}
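As a concrete illustration of this argument (our own example): in $P=\omega\times\omega$ the set $\down(3,5)$ is just the finite rectangle $\{(a,b): a\le 3,\ b\le 5\}$, and $\{(a,b): a\ge 4,\ b\ge 6\}\subseteq P\setminus\down(3,5)$ is an isomorphic copy of $P$ via $(a,b)\mapsto(a+4,b+6)$. Hence $\w(P\setminus\down(3,5))\ge\w(P)$, and the reverse inequality holds since $P\setminus\down(3,5)$ is a substructure of $P$.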
\begin{theorem}\label{cor-productofsquares} Let $P$ be a transferable
WQO poset.
\begin{enumerate}
\item Suppose that $1\le m<\omega$. Then $\w(P) \cdot m\le \w(P\times m)\le \w(P) \otimes m$.
\item If $\w(P)=\omega^\gamma$ for some $\gamma$, then $\w(P\times m)=\w(P) \cdot m$. (Note that this applies to any $P$ which is a product of the
form $\omega^\alpha\times \omega^\beta$; see the examples after Thm.~\ref{Abrahamlimits}.)
\item $\w(\omega\times\omega\times\omega)=\omega^2$.
\end{enumerate}
\end{theorem}
An easy way to provide an upper bound needed in the proof of Thm.~\ref{cor-productofsquares} is given by the following observation:
\begin{lemma}\label{lem-uboundPxQ}
For any FAC poset $P$ and $1\le m<\omega$, $\w(P\times m)\le
\w(P)\otimes m$.
\end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{$\eop_{\ref{lem-uboundPxQ}}$}
We just need to remark that $P\times m$ is an augmentation of the
perpendicular sum $\sqcup_{i < m}P$ and then apply Lem.~\ref{ABresults-disjsum}.
\ifams\relax\else\qedsymbol\fi
\end{proof}
\begin{proof}[\ifams Proof \fi of Thm.~\ref{cor-productofsquares}]
\renewcommand{\qedsymbol}{$\eop_{\ref{cor-productofsquares}}$}
(1) We get $\w(P\times m)\ge \w(P) \cdot m$ from Thm.~\ref{thm-foisk}.
We get $\w(P\times m)\le \w(P) \otimes m$ from Lem.~\ref{lem-uboundPxQ}.
\noindent
(2) This follows because $\omega^\gamma\otimes m = \omega^\gamma \cdot
m$.
\noindent
(3) Let $P=\omega\times\omega$; by Lem.~\ref{lem-w-omega-x-alpha} we know that $\w(P)=\omega$.
Since any $P\times m$ is a substructure of $P\times \omega$, we
clearly have that $\w(P\times \omega)\ge \sup_{m<\omega} \w(P\times
m)= \sup_{m<\omega} \omega \cdot m=\omega^2$. Let us now give a proof
using games that $\w(P\times \omega)\le \omega^2$. It suffices to give
a winning strategy to Player 1 in the game $G_{P\times \omega,\gamma}$
for any ordinal $\gamma > \omega^2$.
So, given such a $\gamma$, Player 1 starts the game by choosing as his
first move the ordinal $\omega^2$. Player 2 has to answer by choosing
an element $x$ in $P\times \omega$, say an element $(p,m)$ with $p=(k,
\ell)$. Now notice that any element of $P\times \omega$ that is
incomparable with $(p,m)$ is either an element of $P\times m$ or of
the form $(q,n)$ for some $q\le p$ in $\omega\times \omega$, or is of
the form $(r,i)$ for some $r$ which is incomparable with $p$ in
$\omega\times \omega$. Therefore, any next step of Player 2 has to be
in an order $P'$ which is isomorphic to an augmentation of a
substructure of the disjoint union of the form
\begin{equation}\label{sqcups}
P\times m\sqcup [(k+1)\times (\ell+1)] \times \omega \sqcup [(k+1)\times \omega]\times \omega \sqcup [(\ell +1)\times \omega]\times \omega.
\end{equation}
It now suffices for Player 1 to find an ordinal $o < \omega^2$
satisfying $o > \w(P')$ as the game will then be transferred to
$G_{P',o}$, where Player 1 has a winning strategy. As $\omega^2$ is
closed under $\oplus$, it suffices to show that each of the orders
appearing in equation (\ref{sqcups}) has width $<\omega^2$. This is
the case for $P\times m$ by (2). We have that $\w\bigl( [(k+1)\times (\ell+1)]
\times \omega\bigr)= \w\bigl( (k+1)\times [(\ell+1) \times \omega]\bigr)$, which by
applying Lem.~\ref{lem-uboundPxQ} is $\le (\ell+1)\cdot (k+1)$. For
$[(k+1)\times \omega]\times \omega$, we apply
Lem.~\ref{lem-uboundPxQ} to $\omega\times\omega$, to obtain
$\w\bigl([(k+1)\times \omega]\times \omega\bigr)\le \omega \cdot (k+1)$ and
similarly $\w\bigl([(\ell+1)\times \omega]\times \omega\bigr)\le \omega \cdot
(\ell+1)$.
\ifams\relax\else\qedsymbol\fi
\end{proof}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 713 |
17+ years of experience in programming, networking, and system administration. Key projects include the design, architecture, integration, implementation, monitoring, and automation of enterprise hybrid networks. Core strengths: project focus and complex problem solving. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,382 |
Henrique da Silva da Fonseca de Cerveira Leite (Porto, 30 July 1784 – Coimbra, 16 January 1852) was a general officer of the Portuguese Army who distinguished himself in the Portuguese Civil War. He was the first holder of the title of Viscount of Alcobaça, granted by decree of 22 December 1841 by Queen Maria II.
Biography
Henrique da Silva da Fonseca de Cerveira Leite was one of the most important military figures of the Liberal Wars. He was persecuted by the absolutists and forced into exile, first in Galicia and then in England. He reached the rank of Colonel, commanding Infantry Regiment No. 18, and later, in 1832, commanded one of the divisions of the army that landed at Mindelo.
He received the title of Baron of Alcobaça, for life, on 1 December 1834 and, in 1841, that of Viscount of Alcobaça "de juro e herdade" (hereditary). He was a Commander of the Order of Aviz and an Officer of the Military Order of the Tower and Sword.
References
Senior officers of Portugal | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,261 |
Kuteynikovo is a rural locality (a selo) in Chertkovsky District, Rostov Oblast.
It is the administrative center of the Kuteynikovskoye rural settlement.
The head of the settlement is Alla Ivanovna Tretyakova.
Geography
The village lies 25 km east of the district center, the settlement of Chertkovo. The railway bypass of Ukraine, built in 2017, runs 4.5 km west of Kuteynikovo, with a station of the same name 8 km northwest of the village. The federal highway "Don" passes 10 km east of the village. The Kamyshnaya River flows through the village, which lies 124 m above sea level.
Streets
Population
Social sphere
Kuteynikovo has approximately 1,500 residents; there is a Sberbank branch and a post office.
The village has a house of culture, a secondary school, a kindergarten, and the agricultural enterprises OOO "Kuteynikovo" and TOO "Nadezhda".
Notes
External links
Administration of the rural settlements.
Populated localities in Chertkovsky District | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,330 |
New tactile sensor chip with silicone rubber cover
Sensors and Actuators 84 (2000) 236–245, www.elsevier.nl/locate/sna
Michael Leineweber (a,*), Georg Pelz (a), Michael Schmidt (b), Holger Kappert (c), Günter Zimmer (a)

(a) Department of Electrical Engineering, University of Duisburg, FB9, FG EBS, Finkenstraße 61, D-47057 Duisburg, Germany
(b) EPOS GmbH and Co. KG, Bismarckstraße 120, D-47057 Duisburg, Germany
(c) Fraunhofer-Institute FhG-IMS, Finkenstraße 61, D-47057 Duisburg, Germany

Received 16 September 1999; received in revised form 8 December 1999; accepted 21 December 1999
Abstract

We report on a new tactile sensor chip developed for measuring the distribution of forces on its surface. The chip has eight force-sensitive areas, called ''taxels'', with a pitch of 240 μm. Surface micromachining techniques are used to produce small cavities that work as pressure-sensitive capacitors. The process is CMOS compatible; therefore, on-chip switched-capacitor circuits can be used for signal amplification. To enable transduction of normal forces to the sensitive areas, we cover the sensor chip surface with silicone rubber. First measurements show that the sensor's output can be explained by results from contact mechanics. We demonstrate this by the simple case of a hard sphere pressed into the silicone rubber cover. The center of contact can be measured within 2 μm precision. The radius of the sphere and the load working on it can be estimated with high precision from the tactile sensor output data. © 2000 Elsevier Science S.A. All rights reserved.

Keywords: Tactile sensor; Silicon micromachining; Silicone rubber layer; Half-space model
1. Introduction

We developed a new tactile sensor chip in order to measure the distribution of forces that act on its surface. The sensor is suitable for delicate manipulation tasks, especially in the field of micromanipulation. It provides information not only about the amount of force exerted during manipulation, but also about the position and orientation of the manipulated object. High-resolution tactile sensors based on silicon micromachining have been realised by previous authors. Investigations started maybe in 1985 with an 8 × 8 tactile array that was produced in a dissolved wafer process at the University of Michigan. The taxels consisted of force-sensitive silicon membranes, which were anodically bonded on a glass substrate. The whole array was 16 × 16 mm² and an external switched-capacitor (SC) circuit was used to read out the capacitors between the membranes and metal electrodes located on the glass substrate [1]. Similar techniques have been used by Suzuki et al. [2]. Sugiyama et al. [3] realised a 32 × 32 taxel array on a single chip, where each taxel was 250 × 250 μm². The pressure-sensitive membranes consisted of silicon nitride and their elongation was measured by the piezoresistive effect. A readout circuit was also integrated on the tactile sensor chip. Some researchers investigated bulk micromachined sensor structures, which enable the measurement of both normal and shear stresses, which might be useful in future robotic manipulation [4,5]. Others report on surface micromachined capacitive cells for medical [6] and fingerprint applications [7]. However, these designs lack on-chip signal amplification circuitry, which leads to noisy signals and extensive wiring. No attempt has been made to analyse the sensors' output in terms of contact mechanics. Tactile sensors have been proposed for micromanipulation and medical purposes [8], but none of the designs presented so far seems to be well suited for such applications.
2. System design and process

2.1. Pressure sensor process
The tactile sensor chip presented here is based on the FhG-IMS pressure sensor process, which is compatible
with the standard 1.2 μm CMOS process. Therefore, integration of the sensing elements and electronics for signal conditioning and data transfer on the sensor chip is possible. The pressure-sensing elements consist of a capacitance where the top plate is represented by a polysilicon membrane. The bottom electrode is implanted in the silicon substrate. The cavity, which is 80 μm in diameter, is realized by employing a sacrificial layer (silicon dioxide), which is removed later by lateral etching with hydrofluoric acid. The height of the cavity is approximately 800 nm. The etching channels are subsequently sealed by a CVD process by deposition of silicon dioxide. The force-sensitive areas (which are sometimes called ''taxels'') each consist of two sensor capacitors and two reference capacitors. The latter include a much thicker membrane and, therefore, their pressure sensitivity is much lower, but parasitic capacitances and temperature effects are much the same. Fig. 1 shows a photo of the realised taxel with 240 × 240 μm². The process is described in detail in Refs. [9,10].
2.2. Signal amplification circuit

We use SC technique to create an analog voltage signal from the pressure-sensitive capacitors. Fig. 2 shows the schematic of the two-stage SC circuit. At a time, only one of the eight parallel switches is on. Therefore, the taxels are connected sequentially with the SC circuit. The output of the first stage is given by:

V_out1(φ1) = V_min − (C_R/C_S)·(V_max − V_min),   V_out1(φ2) = V_min   (1)

where φ1 and φ2 are two non-overlapping clock signals (10–50 kHz) and V_out1(φ1) (V_out1(φ2)) is the output voltage of the first stage if φ1 (φ2) is high. C_R and C_S are the taxel reference and sensor capacitances, respectively. The output of the second stage
Fig. 1. Pressure-sensitive area (''taxel'').
Fig. 2. Schematic of SC circuit.
and, hence, the output signal of the chip can be calculated as:

V_out2(φ1) = V_out2(φ2),   V_out2(φ2) = V_min + (C_1/C_2)·(V_out1(φ1) − V_cal − V_off)   (2)
V_max and V_min are voltage references of 4.5 and 0.5 V, respectively. The ratio of the capacitors C_1 and C_2 can be used to change the signal amplification. A sample-and-hold capacitor C_H in the second stage makes the output signal of the chip time-continuous. V_cal is an extra analog input voltage used to cancel taxel offset voltages. The realized chip layout is approximately 6000 × 400 μm² (Fig. 3).

2.3. Sensor packaging

After processing, the silicon wafer is carefully thinned to 160 μm, followed by sawing, die-bonding and bonding in a standard DIL package or on a printed circuit board. A silicone rubber sheet is glued to the chip surface. We examined several types of commercially available materials. Silicone rubber has many advantages in the field of microelectronic packaging. It exhibits good stability with regard to most chemicals, including weak acids and polar solvents. The elastic properties are stable between −50°C and 180°C. Electrical resistivity and breakdown field strength are high. Last, but not least, since the material is self-healing, we can also use the liquid material to glue the sheets onto the chip. We decided to use ELASTOSIL® RT601, a two-component silicone elastomer produced by Wacker Chemical, because of its durability and high resistance to tearing. The maximum tensile stress is 7.0 N/mm² and the maximum tensile strain reaches 100%; therefore, the modulus of elasticity E is approximately 7 × 10⁶ N/m². The hardness of the material is 45 Shore A [11]. We use Wacker's primer G795 to enhance the sticking of the rubber on the silicon surface.
3. Calibration procedure

In order to calibrate the sensor chip and to check whether the sensor elements work correctly, we implement
Fig. 3. Chip photo of the tactile sensor chip.
our first test run within an air pressure chamber. The results are plotted in Fig. 4 for air pressures between normal atmosphere (~1 bar) and 3 bar (1 bar = 10⁵ Pa, 1 Pa = 1 N/m²). Note that the offset values of the sensor chip were set to 0.5 V by the external voltage V_cal. There are some differences in the slope for each taxel. The sensor's sensitivity between 1 and 2.4 bar is almost linear (within 2%) and its value for different taxels is between 1.2 and 1.5 V/bar. The change in slope for pressures > 2.4 bar is due to the non-linear behaviour of the plate capacitance when the upper electrode approaches the lower electrode. Software calibration was carried out with the following fit function:

S_i(P) = f_{0,i} + 1/(e_{0,i} + e_{1,i}·P)   (3)

where S_i(P) is the sensor's output for the i-th taxel and f_{0,i}, e_{0,i} and e_{1,i} are fit parameters. This function is often used to describe the behaviour of a plate capacitor. If P approaches −e_{0,i}/e_{1,i} (which is equivalent to the pressure where the membrane touches the substrate), S_i(P) approaches infinity. We tried some more complex fit functions with up to five parameters, but the results did not justify the effort. The calibration was checked again after fixing the silicone rubber sheet. No change of the output signal was observed.
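Since the taxel offsets are pinned to 0.5 V via V_cal, the fit model above linearizes, and the calibration step can be sketched in a few lines. All numbers below are illustrative stand-ins, not the paper's measured data:

```python
# Calibration sketch for the model S(P) = f0 + 1/(e0 + e1*P).  With the
# offset f0 fixed at 0.5 V by V_cal, 1/(S - f0) = e0 + e1*P is linear in P,
# so a plain least-squares line fit recovers e0 and e1.
f0 = 0.5                                   # offset voltage [V], set externally
e0_true, e1_true = 1.2, -0.3               # hypothetical fit parameters

P = [1.0 + 0.1 * k for k in range(15)]     # calibration pressures, 1.0-2.4 bar
S = [f0 + 1.0 / (e0_true + e1_true * p) for p in P]   # synthetic outputs [V]

y = [1.0 / (s - f0) for s in S]            # linearized data
n = len(P)
mp, my = sum(P) / n, sum(y) / n
e1_fit = (sum((p - mp) * (v - my) for p, v in zip(P, y))
          / sum((p - mp) ** 2 for p in P))  # slope
e0_fit = my - e1_fit * mp                   # intercept
```

On noiseless synthetic data the line fit recovers the chosen e0 and e1 exactly; with real taxel data one would fit each taxel separately, as the paper does.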
4. Contact mechanics

We demonstrate the sensor's abilities with the example of a small sphere pressed against the sensor's elastomer cover. It is possible to calculate the stresses produced by a sphere pressed into an elastic half-space [12]. Although our elastomer is not infinite in depth, the model can be used for data analysis of the tactile sensor chip, as we will demonstrate. We take the elastic half-space to be bounded by the plane z = 0 of a rectangular coordinate system, with positive values of z located inside the half-space. The stress produced in the elastic half-space by a point load P is given by [13]:

σ_z(r, z) = −(3P/2π) · z³/(r² + z²)^{5/2},   r = √(x² + y²)   (4)

The normal stresses under an elastic strip with finite thickness z₀ produced by a point load have been calculated with the finite element software ANSYS. Fig. 5 shows that for z₀ between 100 and 1000 μm, one can reach almost perfect agreement between the stress beneath the elastic strip and the model of the half-space by choosing an appropriate value for z, the depth inside the half-space. For this case, the parameter z is chosen to be 0.8 times the
Fig. 4. Output signal vs. pressure of the tactile sensor chip.
Fig. 5. Comparison between FEM data for the elastic strip and the half-space model, Eq. (4).
thickness of the strip z₀. If we would take z₀ = z (as one might expect), the stress beneath the strip is higher than in the elastic half-space because of the reaction forces at the bottom of the strip. Therefore, it is reasonable to choose a smaller value for z. These results lead to the following tactile sensor model equation:

S = (1/ab) ∫_{−a/2}^{a/2} ∫_{−b/2}^{b/2} σ_z(x, y, z) dx dy   (5)

where S is the sensor's output signal and a and b are the dimensions of the taxel in the x and y directions, respectively. σ_z represents the stress, calculated within the half-space model. In our model, z is not a free parameter, but fixed to 0.8 z₀ throughout the paper, which leads to good correspondence with experimental results, as will be demonstrated. We can always omit the integration if the variation of σ_z is small over the taxel area ab. In the theory of elastic contact developed by H. Hertz, the relationship between the depth of penetration d and the radius A of the contact between the sphere and the elastic half-space is given by

d = A²/R   (6)

where A might be expressed in terms of P, the load that acts on the sphere:

A³ = 3PR(1 − ν²)/(4E)   (7)

where R is the radius of the sphere, E the modulus of elasticity and ν Poisson's ratio, which is nearly 0.5 for an almost incompressible material such as silicone rubber [13]. From the equations above, we are able to calculate the load P:

P = (4/3) · E/(1 − ν²) · d^{3/2} · R^{1/2}   (8)

The first point of contact between the sphere and the half-space occurs at the origin. From Ref. [12], we get σ_z, the normal stress in the elastic half-space acting in the vertical direction to the surface:

σ_z(r, z) = −(3P/2πA²) · (z/√u)³ · A²u/(u² + A²z²),
u = ½ · [ (r² + z² − A²) + √((r² + z² − A²)² + 4A²z²) ]   (9)

For a given value of z, the stress σ_z has its maximum value at r = 0:

σ_z^max(z) = (3P/2π) · 1/(A² + z²)   (10)

We can use this equation to calculate the load P from σ_z^max:

A² = ηP − z²,  with η = 3/(2π σ_z^max)
⇒ A⁶ = η³P³ − 3η²P²z² + 3ηPz⁴ − z⁶   (11)
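As a quick numerical sanity check of the Hertz relations and the on-axis stress, the following sketch uses illustrative values only (E taken from the rubber data in Section 2.3; ν and the sphere size chosen to mimic the experiment, not the paper's exact parameters):

```python
import math

# Sanity check of Hertz relations (6)-(8) and the on-axis stress (10).
E = 7.0e6          # modulus of elasticity [N/m^2]
nu = 0.48          # Poisson's ratio, near 0.5 for silicone rubber
R = 1.6e-3         # sphere radius [m]
d = 80e-6          # penetration depth [m]

P = (4.0 / 3.0) * E / (1.0 - nu ** 2) * d ** 1.5 * math.sqrt(R)   # Eq. (8)
A = math.sqrt(R * d)                                              # Eq. (6)
A_check = (3.0 * P * R * (1.0 - nu ** 2) / (4.0 * E)) ** (1.0 / 3.0)  # Eq. (7)

z = 0.8e-3         # evaluation depth, z = 0.8 * z0 for a ~1 mm cover
sigma_max = (3.0 * P / (2.0 * math.pi)) / (A ** 2 + z ** 2)       # Eq. (10)

# A comes out near 358 um and P near 0.35 N: the same order as the
# paper's reported 360 um contact radius and ~0.3 N loads.
```

With these inputs, Eqs. (6) and (7) give the same contact radius, which is the consistency the derivation of Eq. (8) relies on.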
From Eq. (7):

A⁶ = [3R(1 − ν²)/(4E)]² · P² = κP²,  with κ = 9(1 − ν²)²R²/(16E²)   (12)

Equating this with Eq. (11) yields a cubic equation in P:

η³P³ − (3η²z² + κ)P² + 3ηz⁴P − z⁶ = 0   (13)

Eq. (13) can be solved at once by use of the Cardano formulas. In other words, P can be calculated out of σ_z^max if the radius R of the indenter is known. On the other hand, R can be calculated from σ_z^max as well if P is given. From Eqs. (7) and (10):

R = 4E/(3(1 − ν²)P) · (3P/(2π σ_z^max) − z²)^{3/2}   (14)

If the penetration depth d is given instead of P, then from Eqs. (6) and (8):

d·R − η·(4E/(3(1 − ν²)))·d^{3/2}·√R + z² = 0   (15)

which is a quadratic equation in √R.

Fig. 6. Experimental set-up.

5. Experimental set-up and results
The set-up is depicted in Fig. 6. The sensor chip is fixed on an x–y table, which allows positioning with 1 μm precision. The sphere can be moved along the z-axis by means of a positioning device with equally 1 μm resolution. The chip surface is identified with the X–Y plane and the direction of the taxels with the X-axis. We analyze the sensor's output data for a fixed sphere position but increasing penetration. Fig. 7 shows the measured stresses for a sphere with 1.6 mm radius. Stresses are presented for a penetration depth ranging from 10 to 90 μm, i.e., 1% to 9% of the rubber cover. Results from the half-space model as explained in Section 4 are also included. The parameter z is chosen to be 0.8 mm for all cases. For penetration depths smaller than 80 μm, this leads to good correspondence between measured data and model. Penetration larger than 80 μm results in stresses higher than the ones predicted by the model at X = 0 and lower for X > 0.4 mm.

Fig. 7. Comparison of the tactile sensor data (symbols) and normal stresses σ_z (lines) calculated with the elastic half-space model [8].

Fig. 8. Output signal for seven taxels when changing the sphere's position along the taxel line.

Fig. 8 shows the output signal for seven taxels when changing the sphere's position along the X-axis. The signal of each taxel has the shape of a Gaussian curve, and the maximum signal is reached when the taxel is located under the sphere. The signals of the different taxels overlap each other. This is partly because of the contact radius A (360 μm for 80-μm indentation), which is bigger than the

Fig. 9. Gaussian curve through three taxel data points and least-squares fit through all data points.
Table 1
Results of the data fit to the Gaussian curve with Eqs. (17)–(19)
(the position-of-sphere, height-error, width and width-error columns were not recoverable)

Center of Gaussian curve [mm]   Center error [%]   Height of Gaussian curve [bar]
0.1212                          –                  1.895
0.2198                          1.4                1.934
0.3184                          1.4                1.906
0.4201                          0.4                1.924
0.5185                          0.7                1.925
0.6248                          0.7                1.920
0.7263                          0.85               1.928
0.8305                          1.3                1.936
0.9285                          0.9                1.914
1.0292                          0.9                1.891
1.1313                          1.0                1.922
width of a taxel (240 μm). Second, the stress field is broadened because of the influence of the elastic layer. From Eq. (4), we calculate the distance r′ where the normal stress is reduced to half of its maximum value:

σ_z(r′, z)/σ_z^max(z) = 0.5  ⇒  r′ = z·√(0.5^{−2/5} − 1) ≈ 0.565·z

One can see that the extension of the stress field increases proportionally to the depth z inside the elastic half-space, and the same holds for the thickness z₀ of the elastic layer if we take the result of our FEM investigation. For applications that require high resolution (for example, object identification by tactile imaging), this so-called crosstalk between taxels has to be reduced by choosing a very thin elastic layer [14]. For other applications, an overlap of the taxel signals is needed to avoid ''dead zones'' between the taxels. One example is precise position measurement. This is carried out for the tactile data shown in Fig. 8. In this case, position measurement does not even require a time-consuming least-squares fit. Instead, we make use of the data of only three taxels for an analytical calculation of a Gaussian curve that describes the slope of our measured data well. We choose the data point with the highest signal and its two neighbours to determine the central position x₀, width w and height A of the Gaussian curve

G(x) = A·e^{−(x−x₀)²/w²}   (16)
Fig. 10. Dependence of the position error on the signal error.
Table 2
Calculated data for the sphere's radius and load
(the position-of-sphere, radius-error and load-error columns were not recoverable)

Radius R [mm]   Load P [N]
1.59241         0.302
1.69392         0.308
1.62035         0.303
1.66723         0.306
1.66988         0.307
1.65669         0.305
1.67785         0.307
1.69932         0.308
1.64101         0.304
1.58239         0.3
1.66195         0.307
by simple algebraic calculation:

x₀ = [ (x₁² − x₂²)/ln(Sig₂/Sig₁) − (x₂² − x₃²)/ln(Sig₃/Sig₂) ] / [ 2(x₁ − x₂)/ln(Sig₂/Sig₁) − 2(x₂ − x₃)/ln(Sig₃/Sig₂) ]   (17)

w² = [ x₂² − x₃² − 2x₀(x₂ − x₃) ] / ln(Sig₃/Sig₂)   (18)

A = Sig₁ · e^{(x₁−x₀)²/w²}   (19)

where x₁, …, x₃ are taxel positions along the x-axis and Sig₁, …, Sig₃ the taxel output data. Fig. 9 compares this analytical procedure with the result of the least-squares fit. Concerning the center position x₀ and the height A, there is no difference between the two Gaussian curves. Table 1 shows the result of the analysis for the central part of the tactile sensor with the data of Fig. 8. Errors are given in percent deviation of the mean value, except for the center of the Gaussian curve, where the error is taken as the deviation from the x-step value, which was 0.1 mm. The error in the sphere's position is smaller than 2 μm. This result can be supported theoretically by the calculation of the maximum error in the measurement of the center x₀ resulting from the noise of the sensor chip output signal. The derivative of the Gaussian curve (Eq. (16)) is:

g′(x) = −2A · (x − x₀)/w² · e^{−(x−x₀)²/w²}

Fig. 10 illustrates that the maximum position error occurs if the first and last of the three taxels exhibit maximum signal error with different signs. The maximum signal error observed was 10 mV, which is equivalent to 750 N/m², derived from the mean value of the sensor's sensitivity, which is 1.35 V/bar (from Fig. 4). Therefore, the maximum position error is:

ΔX = ΔY / g′(240 μm) = (750 N/m²) / (3.125 × 10⁸ N/m³) = 2.4 μm

which agrees well with our experimental results. To calculate either the radius R of the sphere or the load P from the height A in Table 1, we make use of the formulas (13) and (15), where we take z′ as 0.8 mm again. Results are listed in Table 2. The radius R can be measured with ±6% accuracy. We could not determine the exact value of the load P to compare it with our results, but we notice that with the data of Table 1, the result for P is stable within 2% for the central part of the tactile sensor chip. It is possible to calculate the radius R and the load P together from the height A and width w of the Gaussian curve by using numerical calculation methods.
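Eqs. (17)–(19) are easy to transcribe and to check against a synthetic Gaussian; the amplitude, center, width and taxel positions below are illustrative choices, not measured data:

```python
import math

# Three-point analytical Gaussian fit, a direct transcription of
# Eqs. (17)-(19); x1..x3 are taxel positions, s1..s3 their output signals.
def gauss3(x1, x2, x3, s1, s2, s3):
    l12 = math.log(s2 / s1)     # ln(Sig2/Sig1)
    l23 = math.log(s3 / s2)     # ln(Sig3/Sig2)
    x0 = (((x1**2 - x2**2) / l12 - (x2**2 - x3**2) / l23)
          / (2 * (x1 - x2) / l12 - 2 * (x2 - x3) / l23))      # Eq. (17)
    w2 = (x2**2 - x3**2 - 2 * x0 * (x2 - x3)) / l23           # Eq. (18)
    a = s1 * math.exp((x1 - x0)**2 / w2)                      # Eq. (19)
    return x0, math.sqrt(w2), a

# Synthetic taxel signals from a known Gaussian (A = 1.9 bar, x0 = 0.52 mm,
# w = 0.3 mm) sampled at a 0.24 mm taxel pitch.
A, x0, w = 1.9, 0.52, 0.3
xs = (0.24, 0.48, 0.72)
sig = tuple(A * math.exp(-((x - x0) / w) ** 2) for x in xs)
print(gauss3(*xs, *sig))   # recovers (0.52, 0.3, 1.9) up to floating point
```

The closed-form recovery is exact for noiseless Gaussian data, which is why the paper can dispense with an iterative least-squares fit for position control.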
6. Conclusion

A new tactile sensor chip with micromachined force-sensitive areas is presented. These taxels are covered with elastic rubber. The stresses under the cover can be well understood by means of the elastic half-space model, which has been demonstrated by the example of a sphere pressed into the rubber. The center of the sphere's contact can be determined with 2 μm precision. The algorithm used for this task consists of simple algebraic calculation; no time-consuming least-squares fit is needed, which is important for control purposes. The sphere's radius and the load acting on the sphere can be calculated from the sensor's output data as well, within 6% and 2% accuracy, respectively. Other contact geometries might result in more difficult expressions for the load within the elastic half-space model. If exact solutions are not available, one has to calibrate the sensor for each contact geometry again, which seems to be no major problem. Therefore, the sensor is well suited for force as well as position control in micromanipulation applications.
References
[1] K.J. Chun, K.D. Wise, A capacitive silicon tactile imaging array, 3rd Int. Conf. on Solid-State Sensors and Actuators, 1985, pp. S22–S25.
[2] K. Suzuki, K. Najafi, K.D. Wise, Process alternatives and scaling limits for high-density silicon tactile imagers, Sensors and Actuators A 21–23 (1990) S915–S918.
[3] S. Sugiyama, K. Kawatha, M. Yoneda, I. Igarashi, Tactile image detection using a 1k-element silicon pressure sensor array, Sensors and Actuators A 21 (1990) S397–S400.
[4] Z. Chu, P.M. Sarro, S.M. Middelhoek, Silicon three-axial tactile sensor, Sensors and Actuators A 54 (1996) S505–S510.
[5] B. Kane, R. Cutkosky, CMOS-compatible traction stress sensor for use in high-resolution tactile imaging, Sensors and Actuators A 54 (1996) 511–516.
[6] B.L. Gray, R.S. Fearing, A surface micromachined microtactile sensor array, Proceedings of the IEEE International Conference on Robotics and Automation (1996) 1–6.
[7] P. Rey, A high density capacitive pressure sensor array for fingerprint sensor application, Transducers 97 (1997).
[8] M. Schuenemann, Anthropomorphic tactile sensors for tactile feedback systems, Proceedings of the SPIE 3206 (1997) 82–97.
[9] M. Kandler, CMOS-kompatibler Siliziumdrucksensor in Oberflächenmikromechanik, Doctoral Thesis, University of Duisburg, 1992.
[10] H. Dudaicevs, M. Kandler, Y. Manoli, W. Mokwa, Surface micromachined pressure sensor with integrated CMOS read-out electronics, Sensors and Actuators A 43 (1994) 157–163.
[11] Wacker Chemie, Munich, Product information silicone rubber "Elastosil".
[12] M.T. Huber, Comments on the theory of the contact between solid elastic bodies [Zur Theorie der Berührung fester elastischer Körper], Annalen der Physik 14 (1904) 153.
[13] K.L. Johnson, Contact Mechanics, Cambridge Univ. Press, 1992.
[14] M. Shimojo, Mechanical filtering effect of elastic cover for tactile sensor, IEEE Transactions on Robotics and Automation 13 (1) (1997) 128–132.
Biographies
Michael Leineweber was born in 1969. He studied solid-state physics at the University of Duisburg, receiving his diploma in 1995. He is now a PhD student at the Department of Electrical Engineering at the University of Duisburg. His research interests include fabrication and simulation of micromechanical systems, especially tactile sensors.
Georg Pelz received his diploma degree in computer science from the University of Dortmund, Germany, in 1988 and his doctor's degree in 1993 from the University-GH Duisburg, Germany. From 1989 to 1993, he was with the Fraunhofer Institute of Microelectronic Circuits and Systems, Duisburg. Presently, he is with the Department of Electronic Devices and Circuits, Gerhard Mercator University-GH Duisburg, Germany. His research interests include modeling and simulation of electromechanical systems, computational geometry, and VLSI circuit design and verification. Dr. Pelz is a member of the German Society of Computer Science (Gesellschaft für Informatik). He received a distinguished paper citation at the International Conference on Computer-Aided Design (ICCAD) in 1991 and a nomination for the best paper award at the Design Automation Conference (DAC) in 1992.
Michael Schmidt was born in Bottrop, Germany, in 1966. He received his Dipl.-Ing. degree in electrical engineering and PhD from the University of Duisburg, Germany, in 1993 and 1997, respectively. He joined the Fraunhofer Institute of Microelectronic Circuits and Systems (IMS) in 1993, where he worked on analog circuits for sensor applications in CMOS and SOI techniques. Since 1998, he has been with EPOS (embedded core and power systems) in Duisburg, working on automotive integrated circuit design.
Holger Kappert was born in 1968. He received his Dipl.-Ing. degree in electrical engineering from the University of Bochum in 1993. Since then, he has been with the Fraunhofer Institute of Microelectronic Circuits and Systems, where he is working on embedded systems.
His special research interest is the design for testability of those systems.
Günter Zimmer received his diploma in physics from the Technische Hochschule Darmstadt in 1966 and his PhD from the Technische Hochschule München in 1968. Until 1970, he was an assistant in the Physics Department of the Technische Hochschule München. In 1970, he joined the Semiconductor Division of Siemens, where he worked on bipolar and MOS technology and on the application of ion implantation in various device technologies. From 1973 to 1984, he was a chief engineer and lecturer at the University of Dortmund with research activities in integrated circuit technology, particularly using MOS devices. In 1984, he was appointed professor at the University of Duisburg and director of the Fraunhofer Institute of Microelectronic Circuits and Systems, Duisburg. Since 1991, he has also been director of the Fraunhofer Institute of Microelectronic Circuits and Systems in Dresden.
Report "New tactile sensor chip with silicone rubber cover" | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,371 |
According to a well-known adage: 'Success doesn't come to you; you go to it.' This very much applies to Vital Engineering, which has gone more than the extra mile to successfully complete the two largest grating, handrail and stair tread supply contracts so far in Africa. The contracts in question were to supply 100,000 m² of grating panels per power station, as well as some 70 km of hand railing and over 8,000 stair treads, to Eskom's Medupi and Kusile power station projects. To date, feedback from the client has been uniformly positive.
"I would ascribe the success of these seamless supply contracts to Vital Engineering's long history and experience of over 75 years of supplying to power generation projects both locally and internationally," explains Dodds Pringle, Managing Director of Vital Engineering. "Success breeds success and, on the back of these two massive contracts, we soon won various large mining, materials handling, food and beverage and infrastructural water treatment, bridge and shopping centre contracts," Pringle explains.
Combined with Vital Engineering's experience and track record, the company's value-added services, quality management, capacities, open and transparent communication channels, reporting functions – as well as attention to detail in ensuring first-time fitment during the installation phases – have all been pivotal deciding factors in winning these contracts. He adds that many of Vital Engineering's products were used in these projects: from GRP or FRP fibreglass gratings and accessories, to mild or stainless steel and aluminium gratings, stair treads and expanded metals. Adding value to these products were Vital's unique, copyrighted sealed unit tubular stanchions. The stanchion product range comprises ball-type angle stanchions and solid forged stanchions, which were supplied with solid hand rails and bends to construct complete walkways. With any construction or engineering project, the parameters of 'on time, in budget' are not negotiable, and definitely not during any of Vital Engineering's projects. "Fourteen years ago, we invested in highly sophisticated IT-controlled production tracking systems, which we keep updated constantly," says Pringle.
This gives the company the ability to accurately review and track the progress of clients' orders, ensuring that both product delivery commitments and budgetary targets are achieved without fail. Safety was a key aspect in both the Eskom power station projects. Vital Engineering strove to work proactively with the principal contractor, MHPSA (Mitsubishi Hitachi Power Systems Africa), to meet and surpass specified quality and safety goals. "Having said that, all the stringent safety procedures, systems and quality controls applied to the power station projects are also applied without fail to every other project which we undertake for our loyal and valued local and international client base," Pringle explains. "This ensures that our service levels and deliverables remain consistent. Furthermore, we have always adopted pre-emptive safety and quality practices, instead of leaving this responsibility solely to the main contractor," he continues. Regarding product quality, Vital Engineering's high manufacturing standards, total quality management (TQM) philosophy and strict adherence to systems management have become an industry benchmark for client satisfaction. "To the best of our knowledge, as at the time of going to press, we are the only ISO 9001 design-related grating, handrail and expanded metal manufacturer in Southern Africa; and our checks and balances regarding our product performance and safety ensure that our customers have peace of mind at all times," he adds.
"Extensive time and study have been undertaken to develop the very highest manufacturing standards, specification performance and criteria. More importantly, the regular testing and certification of our product performance gives clients the reassurance that they are receiving a product that performs in accordance with international standards," he says. An example of the attention to detail which Vital Engineering implements is the practice of always carrying out pre-trial layouts. "Our pre-trial layout policies have resulted in more satisfied clients than we can name – both on the local and international fronts. This is one of the factors that allows us to retain long-standing clients and win new ones," he elaborates. However, recently, Pringle has had to caution buyers, specifiers and clients to look closely at the products they specify. "Recent instances of sub-standard performance parameters, lower steel and manufacturing quality, and failure to carry out test procedures have led to catastrophic failures. When these occur, specifiers suffer professionally, as they bear the full brunt of the law for choosing sub-standard products," he cautions. In terms of performance, grid flooring and hand railings are reliant on the supplier's integrity to ensure high standards of safety are maintained. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,112 |
{"url":"https:\/\/www.cp3-origins.dk\/people\/della-morte-michele\/","text":"Della Morte, Michele\n\nEmail:\ndellamor @ cp3.sdu.dk\n\nPhone:\n6550 2308\n\nShort CV\n\nResearch Interests\n\nMy main research\u00a0 interests are in Lattice gauge theories, Flavor Physics, Strongly interacting extensions of the Standard Model, Monte Carlo Simulations and numerical techniques.\n\ninSPIRE","date":"2019-10-23 05:52:42","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.816581666469574, \"perplexity\": 12226.6184905008}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570987829458.93\/warc\/CC-MAIN-20191023043257-20191023070757-00130.warc.gz\"}"} | null | null |
static int currentContext(lua_State *L) {
CGContextRef context = UIGraphicsGetCurrentContext();
wax_fromObjc(L, @encode(CGContextRef), &context);
if (lua_gettop(L) > 1 && lua_isfunction(L, 1)) { // Function!
CGContextSaveGState(context);
lua_call(L, 1, 1);
CGContextRestoreGState(context);
}
return 1;
}
static int imageContext(lua_State *L) {
int startTop = lua_gettop(L);
int width = luaL_checknumber(L, 1);
int height = luaL_checknumber(L, 2);
UIGraphicsBeginImageContext(CGSizeMake(width, height));
lua_call(L, 0, LUA_MULTRET);
UIGraphicsEndImageContext();
int nresults = lua_gettop(L) - startTop + 1; // Add one because the function is popped
return nresults;
}
static int imageFromContext(lua_State *L) {
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
wax_instance_create(L, image, NO);
return 1;
}
static int translate(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
CGFloat x = lua_tonumber(L, 2);
CGFloat y = lua_tonumber(L, 3);
CGContextTranslateCTM(c, x, y);
return 0;
}
static int setAlpha(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
double alpha = lua_tonumber(L, 2);
CGContextSetAlpha(c, alpha);
return 0;
}
// Can take percent values (0 - 1) or a UIColor
// setFillColor(context, r, g, b [, a])
// setFillColor(context, color)
static int setFillColor(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
if (lua_gettop(L) >= 4) {
double r = luaL_checknumber(L, 2);
double g = luaL_checknumber(L, 3);
double b = luaL_checknumber(L, 4);
double a = lua_isnoneornil(L, 5) ? 1 : luaL_checknumber(L, 5);
CGContextSetRGBFillColor(c, r, g, b, a);
}
else {
UIColor **color = wax_copyToObjc(L, @encode(UIColor *), 2, nil);
CGContextSetFillColorWithColor(c, [*color CGColor]);
free(color);
}
return 0;
}
// Can take percent values (0 - 1) or a UIColor
// setStrokeColor(context, r, g, b [, a])
// setStrokeColor(context, color)
static int setStrokeColor(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
if (lua_gettop(L) >= 4) {
double r = luaL_checknumber(L, 2);
double g = luaL_checknumber(L, 3);
double b = luaL_checknumber(L, 4);
double a = lua_isnoneornil(L, 5) ? 1 : luaL_checknumber(L, 5);
CGContextSetRGBStrokeColor(c, r, g, b, a);
}
else {
UIColor **color = wax_copyToObjc(L, @encode(id), 2, nil);
CGContextSetStrokeColorWithColor(c, [*color CGColor]);
free(color);
}
return 0;
}
// fillRect(context, CGRect(0,0,10,10))
static int fillRect(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
CGRect *rect = wax_copyToObjc(L, @encode(CGRect), 2, nil);
CGContextFillRect(c, *rect);
free(rect);
return 0;
}
// fillRoundedRect(context, CGRect(0,0,10,10), 10)
static int fillRoundedRect(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
CGRect *rect = wax_copyToObjc(L, @encode(CGRect), 2, nil);
CGFloat radius = luaL_checknumber(L, 3);
CGFloat width = CGRectGetWidth(*rect);
CGFloat height = CGRectGetHeight(*rect);
// Make sure corner radius isn't larger than half the shorter side
if (radius > width/2.0) {
radius = width/2.0;
}
if (radius > height/2.0) {
radius = height/2.0;
}
CGFloat minx = CGRectGetMinX(*rect);
CGFloat midx = CGRectGetMidX(*rect);
CGFloat maxx = CGRectGetMaxX(*rect);
CGFloat miny = CGRectGetMinY(*rect);
CGFloat midy = CGRectGetMidY(*rect);
CGFloat maxy = CGRectGetMaxY(*rect);
CGContextMoveToPoint(c, minx, midy);
CGContextAddArcToPoint(c, minx, miny, midx, miny, radius);
CGContextAddArcToPoint(c, maxx, miny, maxx, midy, radius);
CGContextAddArcToPoint(c, maxx, maxy, midx, maxy, radius);
CGContextAddArcToPoint(c, minx, maxy, minx, midy, radius);
CGContextClosePath(c);
CGContextDrawPath(c, kCGPathFill);
free(rect);
return 0;
}
// fillPath(context, pointArray)
static int fillPath(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
CGContextBeginPath(c);
int indexCount = lua_objlen(L, 2);
if (indexCount % 2 != 0) luaL_error(L, "Requires an even number of indexes for points.");
int pointCount = indexCount / 2;
CGPoint *points = calloc(pointCount, sizeof(CGPoint));
for (int i = 0; i < pointCount; i++) {
int arrayIndex = (i * 2) + 1;
lua_rawgeti(L, 2, arrayIndex);
lua_rawgeti(L, 2, arrayIndex + 1);
points[i].x = luaL_checknumber(L, -2);
points[i].y = luaL_checknumber(L, -1);
lua_pop(L, 2);
}
CGContextAddLines(c, points, pointCount);
CGContextDrawPath(c, kCGPathFill);
free(points);
return 0;
}
// drawLinearGradient(context, startPoint, endPoint, colors, locations)
// FOR SOME REASON THIS DOESN'T WORK WITH GRAY/BLACK COLORS... WTF!
// Likely cause (unverified): with a NULL color space, CGGradientCreateWithColors
// expects every CGColor in the array to share one color space, but UIColor's
// gray/black colors use the 2-component grayscale space rather than RGB.
static int drawLinearGradient(lua_State *L) {
CGContextRef c = (CGContextRef)lua_topointer(L, 1);
CGPoint *start = wax_copyToObjc(L, @encode(CGPoint), 2, nil);
CGPoint *end = wax_copyToObjc(L, @encode(CGPoint), 3, nil);
NSArray *colors = wax_copyToObjc(L, @encode(NSArray *), 4, nil);
CGFloat *locations = malloc(lua_objlen(L, 5) * sizeof(CGFloat));
for (int i = 0; i < lua_objlen(L, 5); i++) {
lua_rawgeti(L, 5, i + 1);
locations[i] = luaL_checknumber(L, -1);
lua_pop(L, 1);
}
CGGradientRef gradient = CGGradientCreateWithColors(nil, *(CFArrayRef *)colors, nil);
CGContextDrawLinearGradient(c, gradient, *start, *end, 0);
free(start);
free(end);
free(colors);
free(locations);
CGGradientRelease(gradient);
return 0;
}
static const struct luaL_Reg metaFunctions[] = {
{NULL, NULL}
};
static const struct luaL_Reg functions[] = {
{"currentContext", currentContext},
{"imageContext", imageContext},
{"imageFromContext", imageFromContext},
{"translate", translate},
{"setFillColor", setFillColor},
{"setStrokeColor", setStrokeColor},
{"setAlpha", setAlpha},
{"fillRect", fillRect},
{"fillRoundedRect", fillRoundedRect},
{"fillPath", fillPath},
{"drawLinearGradient", drawLinearGradient},
{NULL, NULL}
};
int luaopen_wax_CGContext(lua_State *L) {
BEGIN_STACK_MODIFY(L);
luaL_newmetatable(L, METATABLE_NAME);
luaL_register(L, NULL, metaFunctions);
luaL_register(L, METATABLE_NAME, functions);
END_STACK_MODIFY(L, 0)
return 1;
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,419 |
Q: Build all solutions within a tree using Cake (C# make)? I have multiple VS solutions within the same directory tree and would like to build all of them using Cake. Is there a way to build all of them without putting them one by one into the build script?
Thanks for any ideas
A: Yes, that's certainly possible using the built-in globber features, for example:
var solutions = GetFiles("./**/*.sln");
Task("Build")
.IsDependentOn("Clean")
.IsDependentOn("Restore")
.Does(() =>
{
// Build all solutions.
foreach(var solution in solutions)
{
Information("Building {0}", solution);
MSBuild(solution, settings =>
settings.SetPlatformTarget(PlatformTarget.MSIL)
.WithProperty("TreatWarningsAsErrors","true")
.WithTarget("Build")
.SetConfiguration(configuration));
}
});
Similarly you can do the same before build with nuget restore, example
Task("Restore")
.Does(() =>
{
// Restore all NuGet packages.
foreach(var solution in solutions)
{
Information("Restoring {0}...", solution);
NuGetRestore(solution);
}
});
And a clean task could be adapted like this
var solutionPaths = solutions.Select(solution => solution.GetDirectory());
Task("Clean")
.Does(() =>
{
// Clean solution directories.
foreach(var path in solutionPaths)
{
Information("Cleaning {0}", path);
CleanDirectories(path + "/**/bin/" + configuration);
CleanDirectories(path + "/**/obj/" + configuration);
}
});
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,598 |
The Fall of the American Fraudster?
tags: fraud, con men, finance, financial history, Cryptocurrency, Sam Bankman-Fried
Without much notice, the American scene now boasts an array of professional scam artists meeting a richly deserved comeuppance. The tangle-haired crypto titan Sam Bankman-Fried has presided over an epic meltdown at his FTX empire; the trading platform had yielded him an estimated net worth of $15.6 billion, and that number now stands at a nice round zero, with Bankman-Fried facing a host of legal smackdowns in the offing. Elon Musk, lauded far and wide as the genius tech disrupter of the age, has run his latest acquisition, Twitter, straight into the ground, displaying rank ignorance, insatiable bro-hubris, and rudderless right-wing conspiracy-mongering in equal parts. Elizabeth Holmes, the founder of the phony blood-testing app, Theranos, is bound for prison for more than 11 years on fraud charges.
Meanwhile, a host of political hucksters, from snake oil doctor Mehmet Oz to election denier on autopilot Kari Lake to Peter Thiel action figure Blake Masters, were kicked to the curb in the 2022 midterms. (Though it must regretfully be noted that one of their rank, Ohio Senator-elect J.D. Vance, managed to scurry safely out of range of his own rendezvous with karma, thanks in no small part to the $30 million pumped into his flailing campaign by Mitch McConnell's Senate super PAC.) And as a perfect grace note, nearly all of the most high-profile failed political frauds of the 2022 cycle were handpicked protégés of the greatest scammer in American public life, former president Donald Trump, who tried to head off a series of legal and political reckonings with his low-energy announcement of his 2024 run for the presidency.
It's wise not to read too much into the present moment's Fraudsterdämmerung; this is America, after all, the premier breeding ground of boodlers and mountebanks of all description, from John Jacob Astor, Henry Frick, and Bernie Madoff on one hand, to Richard Nixon, Warren Harding, and James Buchanan on the other. Still, students of the close alignment of rampant scamming and financial entrepreneurship note that we could be on the verge of a new culture-wide reassessment of late-capitalist fraud.
"The instinct to geographic expansion and asset invention in capitalism tends to, historically, be accompanied by fraud," says Ian Klaus, a senior fellow with the Chicago Council of Global Affairs and the author of Forging Capitalism: Rogues, Swindlers, Frauds, and the Rise of Modern Finance. "In this sense, fraud is an inseparable part of capitalism, especially at its frontiers, such as crypto now." In some ways, the rise of mogul-driven digital capitalism has produced a convergence between financial fraud and the political variety, rendering the two increasingly difficult to distinguish. "What's so strange right now is the way in which fraud has seen a parallel breakdown in the realm of business, where we normally see it, and the realm of politics," says University of Georgia historian Stephen Mihm, author of A Nation of Counterfeiters: Capitalists, Conmen, and the Making of the United States. "I do think that much of the proliferation of scamming and hucksterism has flourished in an environment of extreme political polarization. The two things are linked insofar as people become wedded to a political wing over reality and facts, and will not consider an alternative view."
This calls to mind other grifters in the right-wing house of election denialism, such as MyPillow CEO Mike Lindell and all-purpose agitprop hack Dinesh D'Souza. But Mihm also notes that an important augur of today's great fraud shakeout was the pair of civil lawsuits targeting the deranged hard-right conspiracy merchant Alex Jones, who's now facing more than $1 billion in penalties for promulgating the hateful lie that the massacre of schoolchildren and teachers at Sandy Hook elementary school was a false-flag operation perpetrated by "crisis actors" in thrall to the liberal agenda of the deep state. "He's also representative of this extreme polarization, selling vitamins and survivalist gear, while he's got people in his audience migrating to these really dark places," Mihm says. "You know, it's one thing to sell vitamin supplements, and it's another thing to tell lies about dead children. That's a line that hucksters historically haven't crossed."
Read entire article at The Nation | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,554 |
All of our seed is open pollinated, non-hybrid, non-GMO, untreated, natural seed.
Specializing in rare and endangered heirloom vegetable, flower and herb seed.
LIKE US ON FACEBOOK! Click the image!
In Celebration of Our 16th Year in Business, orders over $160.00 (before taxes) ship free!
At Heritage Harvest Seed, all of our seed is open pollinated, non-hybrid, non-GMO, untreated, natural seed. Click the previous keywords to see a definition.
At Heritage Harvest Seed, we specialize in rare & endangered heirloom vegetable, flower & herb seed. Our main goal is preserving these time-honoured, cherished heirlooms for all to enjoy. All of our seed is open pollinated, non-hybrid, non-GMO, untreated, natural seed. Heirloom Seeds, also called Heritage Seeds, are open pollinated varieties that are usually at least 50 years old. Heirloom seeds have much more genetic diversity than modern varieties and are well known for exceptional taste, aroma and higher nutrient content.
Please sign in towards the bottom of the page. This allows you to save items to your cart. | {
"redpajama_set_name": "RedPajamaC4"
} | 6,100 |
\section{Introduction}
Among generative models, GANs have achieved great success in image, audio, and text generation tasks \cite{karras2021alias} \cite{mao2019mode} \cite{dong2018musegan}. One of the significant reasons is that the goodness of the generated samples can be judged by human intuition, such as visual, auditory, and reading skills. Obstacles, however, arise under the task of time series generation, especially that of Multivariate Time Series (MTS), because it is impracticable for human beings to judge the goodness of generated MTS. In other words, our intuition fails to help us directly evaluate the quality of generated samples under the MTS generation task.
Driven by this obstacle, a question arises: is there an indirect visualization way to make it easier for people to perceive the goodness of generated MTS? An obstacle to answering this question is the notorious problem of how to effectively reduce the dimension of MTS \cite{park2010dimension} \cite{pena2006dimension}. Principal Component Analysis (PCA) is, in most papers, used to reduce the dimension of MTS into two dimensions before plotting the 2D visualization \cite{yoon2019time} \cite{ttsgan}. Otherwise, downstream tasks, such as MTS classification tasks, are used to verify whether the generated data is good or not \cite{yoon2019time} \cite{ttsgan}.
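As a concrete reading of this common PCA-based check, the following sketch flattens two MTS sets and projects them onto the first two principal components; the dataset shapes, the random data, and the pure-NumPy PCA are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 24, 3))   # 200 series, 24 time steps, 3 channels
fake = rng.normal(size=(200, 24, 3))   # stand-in for generator output

# Flatten each series into one feature vector and pool the two sets.
pooled = np.concatenate([real, fake]).reshape(400, -1)
centered = pooled - pooled.mean(axis=0)

# Top-2 principal directions from the SVD of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T
real_2d, fake_2d = coords[:200], coords[200:]   # ready for a 2D scatter plot
print(real_2d.shape, fake_2d.shape)
```

Overlap of the two point clouds in such a scatter plot is the usual (coarse) visual evidence that the two sets share a distribution.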
\begin{figure}[hptb]
\centering
\includegraphics[width = 0.6 \textwidth]{figure/gaugan-simple.png}
\caption{The general framework of Gaussian GANs.}
\label{fig:my_label}
\end{figure}
In this paper, we address the visualization problem by introducing a transformation function using GANs instead of following the dimension reduction or downstream-task verification. Inspired by the uniform transformation theory \cite{rosenblatt1952remarks} and the Kolmogorov–Smirnov test \cite{justel1997multivariate} under the multivariate case, we construct a transformation function that is able to transform the target distribution into the standard multivariate Gaussian distribution using GANs (we refer to this GANs as Gaussian GANs). With the transformation function, various normality tests can be used to evaluate the goodness of generated samples. Formally, denoting $G_n()$ as the generator and $D_n()$ as the discriminator of the Gaussian GANs, we let the MTS, denoted as $\mathbf{X}$, be the input to the generator and Gaussian noise, denoted as $\mathbf{Z}$, be its output, that is, $\mathbf{Z} = G_n(\mathbf{X})$. Then, we use $D_n()$ to distinguish whether $\mathbf{Z}$ is standard multivariate Gaussian noise or not.
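A minimal numeric sketch of this evaluation idea, where an analytically known linear map stands in for the trained generator $G_n()$ (an illustrative assumption) and a one-sample Kolmogorov–Smirnov statistic serves as the normality check:

```python
import numpy as np
from math import erf, sqrt

def ks_to_std_normal(z):
    """One-sample KS distance between a sample and the N(0, 1) CDF."""
    z = np.sort(np.asarray(z, dtype=float))
    n = len(z)
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return float(max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo)))

rng = np.random.default_rng(1)
x = 3.0 * rng.normal(size=5000) + 2.0      # "real" data: N(2, 9)
g_n = lambda x: (x - 2.0) / 3.0            # ideal transformation for this toy case
print(round(ks_to_std_normal(g_n(x)), 3))  # small: transformed data looks N(0, 1)
print(round(ks_to_std_normal(x), 3))       # large: raw data is far from N(0, 1)
```

In the paper's setting the trained $G_n()$ plays the role of the linear map, and any standard normality test can replace the KS statistic.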
We present two experiments on a real dataset, UniMiB \cite{dataset}, to show the effectiveness of our visualization method. Our results show that the generated MTS are visually acceptable under both the Gaussian normality test and the PCA visualization if the two datasets come from the same distribution. If the two datasets, however, come from different distributions, the generated MTS are visually terrible under the Gaussian normality test but still acceptable under the PCA visualization. The Gaussian GANs can be constructed based on most GANs architectures, and we expect that it is applicable to other MTS evaluation problems.
\section{Related Work}
\textbf{Similarity in MTS}
In time-series data mining, similarity measures can be classified into four categories, shape-based, edit-based, feature-based, and model-based distances \cite{esling2012time}.
The shape-based distance compares the whole shape of time series sequences and aims to measure the similarity point to point, such as the Euclidean distance, Dynamic Time Warping (DTW) \cite{berndt1994using} and the Spatial Assembling Distance \cite{chen2007spade}.
The edit-based distances compare the minimum number of operations needed to transform one series into another. A proper algorithm is the core of the edit-based distances, such as the Longest Common SubSequence algorithm \cite{das1997finding} and the Constraint Continuous Editing Distance \cite{chhieng2007adaptive}.
If used in measuring similarity between groups, the shape-based distance and the edit-based distance would be computationally expensive, even more than $O(n^3)$.
The feature-based similarity uses constructed statistical features to measure the similarity, such as a likelihood ratio for DFT coefficients \cite{janacek2005likelihood}, a combination of periodogram and
autocorrelation functions \cite{vlachos2005periodicity}, and copula-based similarity measures \cite{safaai2018information}. There is more mathematics behind the feature-based similarity than behind the first two methods, but it is a more efficient way to measure similarity.
Model-based distance methods usually assume that the time series can be well captured by the proposed model; they then estimate the parameters of the model and finally calculate the distance, such as with the ARMA model \cite{xiong2004time}. The disadvantage is the difficulty of verifying the assumptions. Note that the similarity measures can be extended to measure groups, although they are formulated for two single time series sequences.
\textbf{Uniform Transformation} The uniform transformation theory states that there exists a transformation function $F()$ which can transform any distribution into the uniform distribution on the interval $[0, 1]$ \cite{rosenblatt1952remarks}. With the transformation function, the Kolmogorov–Smirnov test was used to test the divergence of two distributions \cite{justel1997multivariate}. The $F()$ is the Cumulative Distribution Function (CDF) of the random variables to be transformed. In the univariate case, any random variable with continuous CDF $F()$ can be transformed into a uniform random variate on the interval $[0,1]$. In higher dimensions,
however, it would be computationally expensive or even impossible to estimate the continuous CDF because the probability integral transformation is a much richer tool that is also far less understood \cite{genest2001multivariate}.
In our work, we model a transformation function $f()$ with regard to the Probability Density Function (PDF) instead of the CDF $F()$, and thus skip the high-dimensional integration. Meanwhile, we replace the targeted uniform distribution on the interval $[0,1]$ with the standard multivariate Gaussian distribution to make the output of $f()$ more statistically reasonable.
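The univariate probability integral transform discussed above can be checked directly; the exponential distribution below is our own illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.5
x = rng.exponential(scale=1.0 / lam, size=5000)
u = 1.0 - np.exp(-lam * x)        # U = F(X) for the Exp(lam) CDF: should be Uniform(0, 1)

# One-sample KS distance between u and Uniform(0, 1): sup_t |ECDF(t) - t|.
u_sorted = np.sort(u)
n = len(u_sorted)
ks = float(max(np.max(np.arange(1, n + 1) / n - u_sorted),
               np.max(u_sorted - np.arange(0, n) / n)))
print(round(ks, 3))
```

The small KS distance reflects exactly the univariate guarantee; the difficulty the text points to is that no such closed-form $F()$ is available in high dimensions, which is what the learned $f()$ replaces.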
\section{Background}
\subsection{Generative Adversarial Networks}
Two parts contained in the GANs: Generator (G) and Discriminator (D), and they play the following two-player minimax game with value function V (G, D) \cite{gans}:
\begin{equation}
\label{equa.1}
\min_{G} \max_D V(D, G) = \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})} [\log D(\mathbf{x})] +
\mathbb{E}_{\mathbf{z} \sim p_N(\mathbf{z})} [\log(1 - D(G(\mathbf{z})))]
\end{equation}
, where $\mathbf{z} \sim p_N(\mathbf{z})$ means that $\mathbf{z}$ is sampled from the Normal distribution, $\mathbf{x} \sim p_{data}(\mathbf{x})$ means that $\mathbf{x}$ is sampled from the real dataset, $G(\mathbf{z})$ is the mapping from Gaussian noise to generated data, and $D(\mathbf{x})$ or $D(G(\mathbf{z}))$ tells whether the input data is fake (generated) or real. Equation~\ref{equa.1} can be reformulated into two circular steps:
\begin{equation}
\begin{aligned}
\arg \max_{\theta_D} & \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})} [\log D_{\theta_D}(\mathbf{x})] +
\mathbb{E}_{\mathbf{z} \sim p_{N}(\mathbf{z})} [\log(1 - D_{\theta_D}(G(\mathbf{z})))] \\
\arg \min_{\theta_G} & \mathbb{E}_{\mathbf{z} \sim p_{N}(\mathbf{z})} [\log(1 - D(G_{\theta_G}(\mathbf{z})))]
\end{aligned}
\end{equation}
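As a concrete illustration (not from the paper; a minimal NumPy sketch with hypothetical helper names), the two alternating objectives above can be evaluated from the discriminator's outputs on real and generated batches:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # the discriminator ascends E[log D(x)] + E[log(1 - D(G(z)))],
    # i.e. it minimizes the negative of that sum
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    # the generator descends E[log(1 - D(G(z)))]
    return np.mean(np.log(1.0 - d_fake))

# a confident discriminator (D near 1 on real data, near 0 on fake data)
# has a lower loss than an uncertain one
d_conf = discriminator_loss(np.array([0.9, 0.95]), np.array([0.05, 0.1]))
d_unsure = discriminator_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

In a full training loop these two losses would be minimized alternately over $\theta_D$ and $\theta_G$.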
\subsection{Problem Description}
Given a dataset $\mathbf{X}_{real} = \{\mathbf{x}_i, i = 1,2, \dots, n\}$, a generative model attempts to find a fake distribution $p_{fake}(\mathbf{x})$ that approximates the real distribution $p_{real}(\mathbf{x})$. Specifically, a GAN trains a generator $G$, with $G(\mathbf{z}) \sim p_{fake}(\mathbf{x})$, to reduce the divergence between the distributions $p_{real}(\mathbf{x})$ and $p_{fake}(\mathbf{x})$ through a minimax game. The set of generated samples $\mathbf{X}_{fake}$ is obtained from $G(\mathbf{z})$. This paper's aim is to find a metric $M(\mathbf{X}_{fake}, \mathbf{X}_{real})$ to evaluate the quality of the generated samples both visually and numerically.
\section{GaussianGANs}
\subsection{Gaussian Transformation using GANs}
Let us consider a GAN task that maps real data into Gaussian noise. The input to both the generator and the discriminator is an MTS; the output of the generator is Gaussian noise and the output of the discriminator is true/false. That is, we train a GAN which transforms the distribution of the MTS into the standard multivariate Gaussian distribution. The task can be represented by the following assumption,
\begin{assumption}
\label{assumption1}
There exists a Gaussian GANs, $G_n()$, which is able to map the high-dimensional real data to low-dimensional Gaussian noise,
\begin{equation}
\mathbf{z} = G_n(\mathbf{x}), \; \; \mathbf{z} \sim N(\boldsymbol{0}, \mathbf{I}), \; \mathbf{x} \sim f_{real}(\mathbf{x})
\end{equation}
where $\mathbf{x}$ is data sampled from the real dataset $\mathbf{X}_{real}$, and $\mathbf{z}$ is sampled from the standard multivariate normal distribution. Moreover, the dimension of $\mathbf{x}$ is greater than that of $\mathbf{z}$.
\end{assumption}
With the Gaussian generator, the high-dimensional real data can be transformed into low-dimensional Gaussian noise. In other words, the transformed real data can be regarded as samples from the standard normal distribution, $G_n(\mathbf{x}) \sim N(\boldsymbol{0}, \mathbf{I})$. If the distribution of the fake (generated) data is sufficiently similar to that of the real data, the transformed fake data should also follow the normal distribution, that is, $G_n(G(\mathbf{z})) \sim N(\boldsymbol{0}, \mathbf{I})$. With a convincing Gaussian generator, $G_n()$, we can construct the following dataset to evaluate the quality of the generated data,
\begin{theorem}
\label{theorem}
Given a well-trained generator $G_n$, that is, $G_n(\mathbf{X}_{real}) \sim N(\boldsymbol{0}, \mathbf{I})$, the dataset $\mathbf{U} = \{u_k \mid k = 1,2,...,n \}$ is sampled from the $\chi^2(c\times s)$ distribution,
\begin{equation}
u_k = \sum_{i=1}^{c} \sum_{j=1}^{s} y_{fake,ijk}^2, \; k = 1,2,...,n,
\end{equation}
where $n$ is the number of fake samples, $c$ is the number of dimensions of the time series, $s$ is the length of the time series, $\mathbf{y}_{fake, k} = G_n(\mathbf{x}_{fake, k})$, and $y_{fake,ijk}$ is the value of $\mathbf{y}_{fake, k}$ at location $(i, j)$.
\end{theorem}
Note: Theorem \ref{theorem} provides a sufficient condition, but not a necessary one. Moreover, Theorem \ref{theorem} also applies to the $\mathbf{U}$ computed from $\mathbf{X}_{real}$, under the hypothesis that both $\mathbf{X}_{real}$ and $\mathbf{X}_{fake}$ come from the same distribution.
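The theorem can be checked numerically in the ideal case where the transformed samples are exactly standard normal. This is an illustrative NumPy/SciPy sketch (not the authors' code); the shapes are chosen to match the 3-channel, 150-timestamp data described later:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
c, s, n = 3, 150, 1000        # channels, series length, number of samples

# ideal case: y_fake = G_n(x_fake) is i.i.d. standard normal of shape (n, c, s)
y_fake = rng.standard_normal((n, c, s))

# u_k = sum of squared entries of the k-th transformed sample -> chi^2(c*s)
u = np.sum(y_fake ** 2, axis=(1, 2))

# Kolmogorov-Smirnov test of u against the chi^2(c*s) reference distribution
stat, p = stats.kstest(u, stats.chi2(c * s).cdf)
```

The empirical mean and standard deviation of $u$ should be close to $c\times s$ and $\sqrt{2\,c\,s}$, the moments of the $\chi^2(c\times s)$ distribution.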
The method of evaluating the quality of generated time series is summarized as follows:
\begin{itemize}
\item[(1)] Train a well-performed Gaussian GANs;
\item[(2)] Evaluate the quality of generated MTS with our proposed metrics and visualization.
\end{itemize}
Section \ref{sec:42} describes how to train the Gaussian GANs; Sections \ref{sec:43} and \ref{sec:44} present the proposed metric and the visualization method.
\subsection{General Architecture}
\label{sec:42}
\begin{figure}[hptb]
\centering
\includegraphics[width=0.99\textwidth]{figure/architecture.png}
\caption{GaussianGANs architecture. (a) A standard GAN architecture: the input to the generator is Gaussian noise and the output is a generated MTS; the input to the discriminator is the real or fake MTS and the output is a scalar (real or fake). (b) The general framework for Gaussian GANs, with only 2 modifications compared to the standard architecture.}
\label{fig:gaussiangans}
\end{figure}
There is no single fixed architecture for Gaussian GANs; rather, there is a general framework within which different models keep their own architectures. This is because Gaussian GANs can be used in most scenarios of MTS generation tasks when the following two points are satisfied: (1) the generative model is a GAN and (2) the generator and discriminator use the same blocks in their own architectures. In Figure \ref{fig:gaussiangans}, (a) shows a standard GAN architecture: the input to the generator is Gaussian noise and the output is a generated MTS; the input to the discriminator is the real or fake MTS and the output is a scalar (real or fake). (b) shows the general framework for Gaussian GANs, where there are only 2 modifications compared to the standard architecture,
\begin{itemize}
\item The generator of the Gaussian GANs uses the discriminator of the standard GANs, but changes the output from a scalar to the dimension of Gaussian noise at the last layer;
\item The discriminator of the Gaussian GANs uses the generator of the standard GANs, but changes the output from the dimension of the MTS to a scalar (real or fake) at the last layer;
\end{itemize}
\subsection{Normality Metrics - Sufficient and Necessary}
\label{sec:43}
As Assumption \ref{assumption1} indicates, the fake data (or real data) transformed by the Gaussian GANs, $G_n(\mathbf{X}_{fake})$, follows the standard multivariate normal distribution if $G_n()$ has been well trained. Considering the properties of the standard multivariate normal distribution, it is simple and efficient to use a heatmap of the correlation matrix together with univariate normality tests to examine the quality of the generated data. This is because, for a multivariate normal distribution, independence among features (random variables) is equivalent to the correlation between any two features being equal to 0.
Moreover, given independence among the features, the multivariate normality test (difficult and computationally expensive) can be replaced by univariate normality tests (simple and computationally efficient).
\textbf{ Correlation heatmap}. Let $\mathbf{Y} = G_n(\mathbf{X}_{fake})$ (or $\mathbf{Y} = G_n(\mathbf{X}_{real})$). The correlation matrix, $\mathbf{R} = \{r_{ij}\}_{p\times p}$, of the transformed data can be estimated as
\begin{equation}
r_{ij} = \frac{\mathbf{y}_i^T \mathbf{y}_{j}}{||\mathbf{y}_i||\,||\mathbf{y}_j||}, \quad i,j = 1,2,...,p,
\end{equation}
where $\mathbf{y}_i^T \mathbf{y}_{j}$ is the dot product of the two vectors, and $||\mathbf{y}_i||$ is the $L_2$ norm of the vector $\mathbf{y}_i$. If a well-trained GAN satisfies Assumption \ref{assumption1}, the values off the diagonal should be close to 0 and the values along the diagonal should be close to 1.
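A minimal NumPy sketch of this estimator (hypothetical, not the authors' code): in the ideal case where the transformed columns are independent standard normal vectors, the estimate is close to the identity matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 8                     # samples and dimensions of the transformed data

Y = rng.standard_normal((n, p))    # ideal output of G_n: i.i.d. standard normal columns

# r_ij = y_i . y_j / (||y_i|| ||y_j||), with the L2 norm of each column
norms = np.linalg.norm(Y, axis=0)
R = (Y.T @ Y) / np.outer(norms, norms)
```

The diagonal of `R` equals 1 exactly; the off-diagonal entries shrink toward 0 as $n$ grows.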
\textbf{ Normality test}.
The Shapiro-Wilk test \cite{shapiro1965analysis} and D'Agostino's K-squared test \cite{normaltest} are used to identify whether the transformed data follows the normal distribution. The Shapiro-Wilk test statistic is
\begin{align}
W = \frac{(\sum_{i=1}^{n} a_i x_{(i)})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \\
(a_1, ..., a_n) = \frac{\mathbf{m}^T V^{-1}}{C} \\
C = ||V^{-1} \mathbf{m}|| = (\mathbf{m}^T V^{-1} V^{-1} \mathbf{m})^{\frac{1}{2}},
\end{align}
where $W$ is the Shapiro-Wilk test statistic and $x_{(i)}$ is the $i$-th order statistic, which differs from $x_{i}$. Here $\bar{x} = \frac{\sum_{i=1}^{n}x_i}{n}$, and $\mathbf{a}$ is a vector of weight coefficients determined by both the standard normal distribution and the dataset: $\mathbf{m} = (m_1, m_2, ..., m_n)^T = (\mathbb{E}x'_{(1)}, \mathbb{E}x'_{(2)}, ..., \mathbb{E}x'_{(n)}) = \mathbb{E}(\mathbf{x'})$, where the $x'_{i}$ are independent and identically distributed random variables following the standard normal distribution, and $V$ is the covariance matrix of those normal order statistics \cite{shapiro1965analysis}. D'Agostino's K-squared statistic uses revised skewness and kurtosis so that the original skewness and kurtosis converge as fast as possible. The details of the statistic are given in \cite{normaltest} \cite{normaltest2}.
Note: an ideally generated dataset should have small p-values and a correlation matrix as close as possible to the identity matrix.
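Both tests are available in SciPy. A small sketch (with made-up data standing in for the transformed samples) applies them dimension by dimension, following the replacement of the multivariate test by univariate ones described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
Y = rng.standard_normal((500, 4))   # stand-in for transformed data: 500 samples, 4 dims

# univariate tests per dimension instead of an expensive multivariate normality test
results = []
for j in range(Y.shape[1]):
    w_stat, p_sw = stats.shapiro(Y[:, j])      # Shapiro-Wilk
    k2_stat, p_k2 = stats.normaltest(Y[:, j])  # D'Agostino's K-squared
    results.append((p_sw, p_k2))
```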
\subsection{Visualization - Sufficient}
\label{sec:44}
Let us consider the hypothesis in Assumption \ref{assumption1} that both the Gaussian GANs and the generative model are well trained; then the transformed dataset (generated or real) consists of samples from the $\chi^2(c\times s)$ distribution according to Theorem \ref{theorem}. Thus, both $\mathbf{u}_{real} = (u_{real, 1}, ..., u_{real, n})$ and $\mathbf{u}_{fake}= (u_{fake, 1}, ..., u_{fake, n})$ can be visualized through a QQ plot, which is used to identify whether the hypothesis holds. Unlike the normality metrics in Section \ref{sec:43}, however, the $\chi^2$ visualization is not a necessary condition for the quality of the generated data; its advantage is simplicity and efficiency.
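A QQ comparison against the $\chi^2(c\times s)$ reference can be sketched with SciPy's `probplot` (illustrative only; for simplicity the ideal case below draws the $u$-values directly from the reference distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
c, s, n = 3, 150, 400
df = c * s

# ideal case: u-values from perfectly transformed data follow chi^2(c*s)
u_real = stats.chi2(df).rvs(size=n, random_state=rng)

# ordered sample quantiles vs. theoretical quantiles, plus a straight-line fit
(osm, osr), (slope, intercept, r) = stats.probplot(u_real, dist=stats.chi2(df))
```

When the hypothesis holds, the fitted line has slope close to 1 and the correlation coefficient `r` is close to 1, i.e. the QQ plot is nearly a straight line.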
\section{Experiments}
\subsection{Transformer GaussianGANs}
To illustrate how our method works, the transformer-based time series GANs (ttsGANs) \cite{ttsgan} is used because of its good results under PCA visualization. The model is trained on a single 32 GB GV100 GPU for 4 hours. There are merely 2 modifications needed to obtain the corresponding Gaussian GANs from the ttsGANs. In Figure \ref{fig:ttsgan}, the ttsGANs is shadowed in light blue, and its corresponding Gaussian GANs is shadowed in light green. The 2 modifications, shadowed in dark blue and dark green respectively, are the architecture switch between the generator and discriminator and the change of their output layers. Specifically,
\begin{itemize}
\item The generator in Gaussian GANs uses the discriminator architecture in the ttsGANs; and the discriminator in Gaussian GANs uses the generator architecture in the ttsGANs;
\item The output layers of the generator in the Gaussian GANs and the ttsGANs are both Conv2D channel reductions; and the output layers of the discriminator in the Gaussian GANs and the ttsGANs are both classification heads.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width = 0.99 \textwidth]{figure/ttsgan.png}
\caption{Gaussian GAN based on transformer-based time series GANs. (a) the ttsGANs is light blue shadowed, and (b) its corresponding Gaussian GANs is light green shadowed. The 2 modifications are shadowed with dark blue and dark green separately, that is, the architecture switch between generator and discriminator and their output layer. }
\label{fig:ttsgan}
\end{figure}
\subsection{Datasets}
We adopt the same dataset used in the ttsGANs paper \cite{ttsgan}, the UniMiB dataset \cite{dataset}, from which two categories, Jumping and Running, are selected because the performance of the ttsGANs differs between the two categories. Thus, it can be observed how the normality metrics and the $\chi^2$ visualization change as the results go from good to bad. The sample sizes for the two classes are 600 and 1572 in the training dataset, and 167 and 416 in the testing dataset, respectively. Both classes have 150 timestamps and 3 channels. Additionally, all recordings are channel-wise normalized to a mean of 0 and a variance of 1.
\subsection{Results}
\textbf{Training Gaussian GANs} In order to select the best Gaussian GANs during the training epochs, a comprehensive metric is designed:
\begin{align}
S &= S_{1} + S_{2} + S_{3}, \;\;\; \\
S_{1} & = \frac{1}{n\times c} \sum_{j=1}^{c} |\sum_{i=1}^{n} x_{ij}| + \frac{1}{c} \sum_{j=1}^{c} ( |\frac{1}{n}\sum_{i=1}^{n}(x_{ij} - \bar{x}_{\cdot j})^2 -1|)^{\frac{1}{2}}\\
S_{2} & = \frac{1}{c \times c}\,\mathrm{sum}(|\mathbf{R} - \mathbf{I}|)\\
S_{3} & = \frac{1}{n}\sum_{i=1}^{n} S_{3i}\\
S_{3i} &= \begin{cases}
1, & p_{i1} > 0.9 \;\text{and}\; p_{i2} > 0.9\\
0, & \text{otherwise}\\
\end{cases},
\end{align}
where $c$ is the number of dimensions of the multivariate normal distribution and $n$ is the number of samples. $S$ is built from three features of the dataset: the moment distance $S_1$, the correlation matrix distance $S_2$, and the normality distance $S_3$. In detail, $S_1$ is composed of the mean and variance of the dataset; in $S_2$, $\mathrm{sum}$ denotes the sum over all entries of the matrix, which measures the distance between the two matrices; $S_3$ is determined by the p-values of the two normality tests and represents the fraction of dimensions that do not follow a normal distribution.
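The metric $S$ can be sketched as follows. This is a hypothetical NumPy/SciPy implementation, not the authors' code; in particular, the per-dimension reading of $S_3$ and the use of `np.corrcoef` for $\mathbf{R}$ are our assumptions:

```python
import numpy as np
from scipy import stats

def comprehensive_metric(Y, p_threshold=0.9):
    """Sketch of S = S1 + S2 + S3 for transformed data Y of shape (n, c)."""
    n, c = Y.shape
    # S1: deviation of per-dimension means and variances from N(0, 1)
    s1 = np.abs(Y.sum(axis=0)).sum() / (n * c) \
        + np.sqrt(np.abs(Y.var(axis=0) - 1.0)).mean()
    # S2: elementwise distance of the correlation matrix from the identity
    R = np.corrcoef(Y, rowvar=False)
    s2 = np.abs(R - np.eye(c)).sum() / (c * c)
    # S3: fraction of dimensions flagged by both normality tests
    flags = [1.0 if stats.shapiro(Y[:, j])[1] > p_threshold
             and stats.normaltest(Y[:, j])[1] > p_threshold else 0.0
             for j in range(c)]
    s3 = float(np.mean(flags))
    return s1 + s2 + s3

rng = np.random.default_rng(4)
s_normal = comprehensive_metric(rng.standard_normal((2000, 4)))
s_skewed = comprehensive_metric(rng.exponential(size=(2000, 4)))
```

Data that is already standard normal scores lower (better) than data with the wrong first moment, such as the exponential sample above.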
For the Running dataset, the training process of the Gaussian GANs at epochs 0, 400 and 630 is shown in Figure \ref{fig:resultsgaussiangans-run}. At epoch 0, the generated MTS cannot cover the region of the real dataset under PCA visualization, although the correlation matrix is close to the identity matrix with a small value of $S_3$ and the QQ plot is almost a straight line. This is because the output of an untrained neural network is close to white noise, under which circumstance any type of dataset can be transformed into a normal distribution. That is not what we expect, because the Gaussian GANs is meant to transform the dataset from one particular distribution into samples from the standard multivariate normal distribution (this is discussed in Section \ref{sec:limitation}).
\begin{figure}[hptb]
\centering
\subfloat[Heatmap: epoch 0 ($S_3 = 0.25$)]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/heat0.png}
}%
\subfloat[QQ plot: epoch 0]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/qq0.png}
}%
\subfloat[PCA visualization: epoch 0]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/pca0.png}
}%
\quad
\subfloat[Heatmap: epoch 400 ($S_3 = 0.125$)]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/heat400.png}
}%
\subfloat[QQ plot: epoch 400]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/qq400.png}
}%
\subfloat[PCA visualization: epoch 400]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/pca400.png}
}%
\quad
\subfloat[\textbf{Heatmap: epoch 630 ($\mathbf{S_3 = 0.1875}$)}]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/heat630.png}
}%
\subfloat[\textbf{QQ plot: epoch 630}]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/qq630.png}
}%
\subfloat[\textbf{PCA visualization: epoch 630}]{
\centering
\includegraphics[width=0.33\textwidth]{figure/GauGANs-Running/pca630.png}
}%
\centering
\caption{Running dataset for Gaussian GANs. At epoch 0, the generated MTS cannot cover the region of the real dataset under PCA visualization, in spite of a good normality test and $\chi^2$ visualization, because the output of an untrained neural network is close to white noise. In the following training epochs, the correlation matrix transitions to the identity matrix with small $S_3$ values; moreover, the QQ plot is almost a straight line, and the generated and real MTS overlap under PCA visualization. In addition, the best Gaussian GANs is shown in bold, with $S = 4.034$.}
\label{fig:resultsgaussiangans-run}
\end{figure}
In the following training epochs, we can see at epoch 400 that the correlation matrix is not close to the identity matrix, although $S_3 = 0.125$ indicates that 87.5\% of the dimensions follow a normal distribution, which suggests that the generated data are not sampled from the real distribution. PCA visualization and the QQ plot, however, point to a completely different conclusion, since the generated MTS and the real MTS overlap over most of the area and the QQ plot is transitioning to a straight line. This is because, in theory, the normality test using Gaussian GANs is able to retain more information than PCA and the QQ plot. According to the comprehensive metric $S$, the best Gaussian GANs, shown in bold in Figure \ref{fig:resultsgaussiangans-run} (g,h,i), attains $S = 4.034$. Moreover, it clearly passes the normality test, the QQ plot and the PCA visualization. Therefore, we argue that the Gaussian GANs successfully transforms the Running dataset from its own distribution into white noise from the standard multivariate normal distribution.
\textbf{Evaluating quality of generated MTS}
With the best Gaussian GANs, the training process can be monitored; that is, the quality of the generated MTS can be visualized at every training epoch.
First, it is necessary to verify that the Gaussian GANs really works by feeding the test dataset into it. The results, shown in Figure \ref{fig:verifygaussiangans}, indicate that the testing and training datasets come from the same distribution. From the perspective of the normality test, the heatmap is close to the identity matrix and $S_3 = 0.06$ is close to 0, so Assumption \ref{assumption1} is satisfied. The QQ-plot line is nearly straight, which satisfies Theorem \ref{theorem}. Moreover, the testing and training datasets almost overlap in their regions.
\begin{figure}[hptb]
\centering
\subfloat[Heatmap ($S_3 = 0.06$)]{
\centering
\includegraphics[width = 0.33 \textwidth]{figure/ground-run/heat.png}
}
\subfloat[QQ plot]{
\centering
\includegraphics[width = 0.33 \textwidth]{figure/ground-run/qq.png}
}
\subfloat[PCA visualization]{
\centering
\includegraphics[width = 0.33 \textwidth]{figure/ground-run/pca.png}
}
\caption{Verification for Gaussian GANs by feeding the test dataset. }
\label{fig:verifygaussiangans}
\end{figure}
Second, the ttsGANs is trained to generate MTS, and the Gaussian GANs is then used to evaluate the quality of the generated MTS. The results on the Running dataset are shown in Figure \ref{fig:resultsrun}, where PCA visualization and the QQ plot point to a good quality of the generated dataset. The normality test using Gaussian GANs, however, leads to the opposite conclusion (the correlation matrix is not close to the identity matrix), because the normality test can capture more information about the difference between the two groups; this experimentally indicates that the normality test using Gaussian GANs is more robust than the $\chi^2$ and PCA visualizations.
\begin{figure}[hptb]
\centering
\subfloat[Heatmap: epoch 0 ($S_3 = 0.0625$)]{
\centering
\includegraphics[width=0.33\textwidth]{figure/run/heat0.png}
}%
\subfloat[QQ plot: epoch 0]{
\centering
\includegraphics[width=0.33\textwidth]{figure/run/qq0.png}
}%
\subfloat[PCA visualization: epoch 0]{
\centering
\includegraphics[width=0.33\textwidth]{figure/run/pca0.png}
}%
\quad
\subfloat[Heatmap: epoch 1000 ($S_3 = 0.125$)]{
\centering
\includegraphics[width=0.33\textwidth]{figure/run/heat1000.png}
}%
\subfloat[QQ plot: epoch 1000]{
\centering
\includegraphics[width=0.33\textwidth]{figure/run/qq1000.png}
}%
\subfloat[PCA visualization: epoch 1000]{
\centering
\includegraphics[width=0.33\textwidth]{figure/run/pca1000.png}
}%
\centering
\caption{Running dataset training process. At epoch 1000, the good performance under PCA visualization is reflected in the QQ plot, but not in the heatmap, because the heatmap retains more information than PCA does. That is, the generated MTS is not as good as the PCA visualization suggests.}
\label{fig:resultsrun}
\end{figure}
\section{Limitations}
\label{sec:limitation}
The normality test using Gaussian GANs is an effective tool to verify whether the generated data and the real data come from the same distribution. However, there are some points that need to be addressed in future work:
\begin{itemize}
\item Why does the normality test using Gaussian GANs fail at the initial training stage, and how can this be fixed? From the experiments, the output of the generator in the Gaussian GANs is nearly white noise at the initial training epochs, which makes the normality test for the multivariate normal distribution fail.
\item Is there an effective and simple method that is both a necessary and a sufficient condition for Assumption \ref{assumption1} to test normality? The normality test using Gaussian GANs in Assumption \ref{assumption1} is compound: both the correlation matrix and $S_3$ must be measured simultaneously. The QQ plot, on the other hand, is not a necessary condition for Assumption \ref{assumption1}.
\item If there is no such GAN architecture available when generating new data, for example with VAEs, flow-based generative models, or statistical models, is it possible to establish an architecture to evaluate the quality of the generated data?
\end{itemize}
\section{Conclusion}
We theoretically and experimentally discussed the effectiveness of our proposed evaluation method, the normality test using Gaussian GANs, which captures more information than PCA does. Importantly, the Gaussian GANs is a general framework that can easily be constructed from most types of GANs for MTS tasks. Moreover, in order to simplify the normality test using Gaussian GANs, the QQ plot based on Theorem \ref{theorem} is explored as a tentative alternative.
\bibliographystyle{plain}
David M. Robinson (born May 27, 1965) is an American historian. He earned a bachelor's degree from Hobart College and completed graduate studies at Princeton University. He teaches at Colgate University as the Robert H.N. Ho Professor in Asian Studies.
References
1965 births
Living people
American historians
American sinologists
Mongolists
Hobart and William Smith Colleges alumni
Princeton University alumni
Colgate University faculty
Our Mining Centres of Excellence in Africa are equipped to assist with the various mineral processing, energy, water treatment and infrastructure services from concept to . Founded in 1912, Fraser Alexander provides customized, innovative, . domain expertise to address our customers' business and asset challenges.
The Mining Engineering Department offers scholarships to undergraduate and graduate students currently Preference will be given to students who are pursuing a degree in mining engineering. Address Line 2 . AliceMae Goodwin; William Fraser Alexander Peretiatko; Varun Maruvanchery; Hui Lu; Zhao Hoachen.
May 24, 2018 Enter your email address for LEARNERSHIPS, INTERNSHIPS, BURSARIES and more Fraser Alexander is a leading supplier of Mining services which include Day to day running activities in the Mineral Processing sites.
alex jpg. Prof. Dr. Alexander M. Fraser LudwigMaximiliansUniversität München und Sprachverarbeitung (Center for Information and Language Processing) for Unsupervised, Semisupervised and Supervised Transliteration Mining.
May 1, 2014 http// Fraser Alexander is a proudly currently include mining, mineral processing, waste deposition management, into equally innovative solutions to address these challenges.
Special thanks to Arjun. Bhalla, Enkhbileg Enkhjargal and Alex Burger Fraser (University of British Columbia), Patty. Smith (AREVA) Multifaceted solution to address a complex web of problems. 18 create process flowcharts and water.
Jobs 1 10 of 32 People also searched machine operator transnet mining driver general worker plant operator sasol process controller eskom.
Private treaty of 2 x parnaby modular coal dense medium cyclone washing plants immediately available for negotiation.
View current job vacancies with the Alex Fraser Group and find out about more about a career with the Alex Fraser Group.
Telenav's navigation, maps, and connected car solutions help millions of people onthego around the world.
Fraser Alexander is your trusted mining and industrial services partner. We're for you and we're still there once the last ounce of material has been processed.
Civil engineering and public works consultants. Description, Key figures, Executives, Activities. Company informations Fraser Alexander (Pty) Ltd. Presentation.
Physical Address · 2224 Lincoln 20 years at a large mining house and also project houses, Paul Conveyor & Engineering Equipment . Fraser Alexander.
Q: The constant distribution. If $u$ is a distribution on an open set $\Omega\subset \mathbb R^n$ such that $\partial^i u = 0$ for all $i=1,2,\ldots,n$, is $u$ necessarily a constant function?
A: It's true if we assume that $\Omega$ is connected. We will show that $u$ is locally constant, and the connectedness will allow us to conclude that $u$ is indeed constant. Let $a\in\Omega$ and $\delta>0$ be such that $\overline{B(a,2\delta)}\subset \Omega$. Consider a test function $\varphi\in\mathcal D(\Omega)$ such that $\varphi=1$ on $B(a,2\delta)$, and put $S=\varphi u$, which is a distribution with compact support. Let $\{\rho_k\}$ be a mollifier, i.e. non-negative functions of integral $1$ with support contained in the ball $\overline{B\left(0,\frac 1k\right)}$, and consider $S_k=\rho_k* S$, a distribution which can be associated with a test function. We have $\partial^i S=\partial^i(\varphi u)$, hence $\partial^i S=0$ on $B(a,2\delta)$. Since $\partial^i S_k=\rho_k*(\partial^i S)$ vanishes on $B(a,\delta)$ if $\frac 1k<\delta$, $S_k$ is constant there. Since $S_k$ converges to $S$ in $\mathcal D'$, $S$ is constant on $B(a,\delta)$ and so is $u$. This also answers the case $\Omega=\mathbb R^n$.
Q: Upload file error message not appearing. CodeIgniter I hope someone can help me with this. Whenever I upload a file with the wrong format and submit the form, an error is supposed to appear. However after passing multiple variables back to the view from my controller, none of the errors seem to be appearing. Thank you.
Controller Code :
function upload()
{
$this->load->library('pagination');
//the name on the view must be userfile
$username = $this->session->userdata('username');
$company = $this->session->userdata('company');
$title = $this->input->post('title');
$description =$this->input->post('description');
$path = './assets/files/'.$company.'/announcements';
$config['upload_path'] = $path;
$config['allowed_types'] = 'pdf';
$config['max_size'] = '10000';
$this->load->library('upload',$config);
$this->load->model('announcement');
$this->load->library('pagination');
$config['per_page']=5;
if(!$this->upload->do_upload())
{
//$error = array('error'=>$this->upload->display_errors());
$data = array(
'announcement' => $this->announcement->fetch_announcement($username,$company,$config['per_page'],$this->uri->segment(3)),
'links' => $this->pagination->create_links(),
'error' => $this->upload->display_errors()
);
print_r($this->upload->display_errors());
$this->load->view('includes/admin_header');
$this->load->view('announcements',$data);
$this->load->view('includes/admin_footer');
//redirect('announcements/index');
}
else
{
$data = array(
'announcement' => $this->announcement->fetch_announcement($username,$company,$config['per_page'],$this->uri->segment(3)),
'links' => $this->pagination->create_links(),
'error' => $this->upload->display_errors()
);
$file_data = array('upload_data' => $this->upload->data());
$result = $this->announcement->create($insert);
//$this->load->view('includes/admin_header');
//$this->load->view('announcements',$data);
//$this->load->view('includes/admin_footer');
redirect('announcements/index');
}
}
View Code :
<div class="col-sm-9 col-sm-offset-3 col-md-10 col-md-offset-2 main">
<table class="table">
<thead>
<tr>
<th>No.</th>
<th>Title</th>
<th>Description</th>
<th>Added By</th>
<th>Company</th>
<th>Date Added</th>
<th>Publish</th>
</tr>
</thead>
<tbody>
<?php $offset = $this->uri->segment(3,0)+1; ?>
<?php foreach($announcement as $row): ?>
<tr>
<td><?php echo $offset++; ?></td>
<td><?php echo $row->title; ?></td>
<td><?php echo $row->description; ?></td>
<td><?php echo $row->addedby; ?></td>
<td><?php echo $row->company; ?></td>
<td><?php echo $row->dateadded; ?></td>
<td><?php echo $row->published; ?></td>
</tr>
<?php endforeach; ?>
</tbody>
</table>
<?php echo $links; ?>
</div>
<div class="col-sm-9 col-sm-offset-3 col-md-10 col-md-offset-2 main">
<?php $error = ''; ?>
<h2>Add an announcement</h2>
<?php echo form_open_multipart('announcements/upload'); ?>
<p>Title : <input type="text" name="title"/></p>
<p>Description :
<textarea style="resize:none;" maxlength="200" row="20" cols="20" name="description"></textarea>
</p>
<p>Upload file :<input type="file" name="userfile"> </p>
<p><input type="submit" name="submit" value="Upload"></p>
</form>
<?php echo $error; ?>
</div>
A: Kindly refer below code :-
if ( ! $this->upload->do_upload())
{
$error = array('error' => $this->upload->display_errors());
$this->load->view('upload_form', $error);
}
else
{
$data = array('upload_data' => $this->upload->data());
$this->load->view('upload_success', $data);
}
Ref From : https://ellislab.com/codeigniter/user-guide/libraries/file_uploading.html
Let me know, if this does not work.
A: Remove the line below from the start of the announcements view file; it overwrites the $error variable that the controller passes to the view:
<?php $error = ''; ?>
\section{Introduction}
The D0 and CDF collaborations have recently measured $B_s -\bar
B_s$ mixing with the result \cite{Abazov:2006dm,cdf}
\begin{eqnarray}
&&D0:\;\;\;\;17 ~~ps^{-1} < \Delta M^{exp}_{B_s} < 21 ~~ps^{-1},\nonumber\\
&&CDF:\;\; \Delta M^{exp}_{B_s} = (17.33^{+0.42}_{-0.21}\pm 0.07)
~~ps^{-1}.
\end{eqnarray}
This last measurement is sufficiently precise to place new
constraints on tree-level flavor changing neutral currents. In
this paper we explore the consequences of these constraints in
models with an additional gauge boson, a $Z^\prime$. We first
discuss general constraints and then specialize to the case of
non-universal $Z^\prime$ models. For our numbers we will use the
CKMfitter \cite{Charles:2004jd} average including these
measurements,
\begin{equation}
\Delta M^{exp}_{B_s} = (17.34^{+0.49}_{-0.20}) ~~ps^{-1}.
\end{equation}
This mass difference is related to the $B_s - \bar B_s$ mixing
parameter $M^{B_s}_{12}$ by $\Delta M_{B_s} = 2 |M^{B_s}_{12}|$,
if the lifetime difference is neglected. In the Standard Model,
$M^{B_s}_{12}$ arises from the so called ``box'' diagram and is
given by
\begin{eqnarray}
&&M^{B_s,SM}_{12} = {G_F^2\over 12 \pi^2} \eta_B m_{B_s}
\xi^2 f_{B_d}^2 B_{B_d} m^2_W S(x_t) (V_{ts}V^*_{tb})^2,\nonumber\\
&&S(x) = {4x-11x^2+x^3\over 4(1-x)^2} -{3 x^3\ln x\over 2
(1-x)^3},\label{SM}
\end{eqnarray}
where $x_t = m^2_t/m_W^2$, $\eta_B = 0.551\pm 0.007$ is a QCD
correction. The hadronic parameters $f_{B_d} = (0.191\pm 0.027)$
GeV, $B_{B_d} =1.37\pm 0.14$ and $\xi =
f_{B_s}\sqrt{B_{B_s}}/f_{B_d}\sqrt{B_{B_d}} =1.24\pm 0.04 \pm 0.06$ are
obtained from lattice calculations \cite{Charles:2004jd}. This
value of $f_{B_d}$ is in reasonable agreement with the recently
observed branching ratio $B(B_d \to \tau \nu_\tau) =
(1.06^{+0.3}_{-0.28}(stat)^{+0.18}_{-0.16}(syst))\times 10^{-4}$
\cite{Ikado:2006un}.
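As a quick numerical cross-check of Eq.~(\ref{SM}), the following sketch evaluates $S(x_t)$ and the resulting $\Delta M_{B_s}$. The inputs $m_t \approx 163$ GeV ($\overline{\rm MS}$), $|V_{ts}V^*_{tb}| \approx 0.041$ and $m_{B_s} \approx 5.37$ GeV are our assumptions, not quoted in the text; the lattice inputs are those given above.

```python
import math

def S(x):
    # Inami-Lim box function S(x_t) as given in Eq. (SM)
    return (4*x - 11*x**2 + x**3) / (4*(1 - x)**2) \
           - 3*x**3*math.log(x) / (2*(1 - x)**3)

# Assumed inputs (not all quoted in the text)
G_F   = 1.16637e-5        # Fermi constant, GeV^-2
m_W   = 80.4              # W mass, GeV
m_t   = 163.0             # MS-bar top mass, GeV (assumption)
m_Bs  = 5.3667            # B_s meson mass, GeV (assumption)
eta_B = 0.551             # QCD correction (from the text)
VtsVtb = 0.041            # |V_ts V_tb*| (assumption)
# xi^2 f_Bd^2 B_Bd = (f_Bs sqrt(B_Bs))^2, using the quoted lattice inputs
f2B = (1.24 * 0.191 * math.sqrt(1.37))**2   # GeV^2

x_t = (m_t / m_W)**2
M12 = (G_F**2 / (12 * math.pi**2)) * eta_B * m_Bs * f2B \
      * m_W**2 * S(x_t) * VtsVtb**2          # GeV
dM_ps = 2 * M12 * 1.51927e12                 # GeV -> ps^-1 via 1/hbar

print(S(x_t))   # ~2.3
print(dM_ps)    # ~20 ps^-1, consistent with the 21.7 ps^-1 central value
```

The result lands within the quoted $3\sigma$ theory range, as expected given the sizeable hadronic uncertainties.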
To quantify the uncertainty in the input parameters to
Eq.~(\ref{SM}), we use the latest result from the CKMfitter overall
fit (excluding the $\Delta M_{B_s}$ measurement) \cite{Charles:2004jd},
\begin{equation}
\left(\Delta M_{B_s}\right)_{SM} = 21.7^{+13.1}_{-9.1} ~~ps^{-1}
\end{equation}
where the errors indicate the ${\mathbf 3\sigma}$ range. Notice
that the central value of this prediction is slightly higher than
the measured mass difference, although the predicted and measured
ranges are in good agreement.
This agreement between the SM prediction and the data places
stringent constraints on new physics that will become more severe
as the theoretical uncertainty is reduced. There are many models
beyond the SM containing additional flavor changing sources which
can be constrained by recent data on $\Delta
M_{B_s}$~\cite{Group1}. We concentrate here on the impact
of the measured $\Delta M_{B_s}$ on non-universal $Z'$ models that are
motivated by the apparent anomaly in the measurement of $A^b_{FB}$
at LEP~\cite{He:2002ha,He:2003qv,He:2004it}. These models are
variations of left-right models in which the right-handed
interactions single out the third generation with enhanced
couplings to the $Z^\prime$.
\section{Generic bounds on new physics}
We begin by considering a generic new physics contribution to
$M^{B_s,N}_{12}$ that is real, and add it to the standard model
3-$\sigma$ range. By requiring this to overlap with the 3-$\sigma$
range for the measured $\Delta M_{B_s}$ we extract the allowed
range for new physics. Normalized to the central value of the
measured $\Delta M_{B_s}$ we find
\begin{eqnarray}
\delta_{B_s} \equiv {2 M^{B_s,N}_{12}\over \Delta M^{exp}_{B_s}}
\sim (-3 {\rm ~to~} -1.7)~{\rm ~or~}(-1 {\rm ~to~} 0.27),
\end{eqnarray}
as seen in Figure~(\ref{range}).
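The two branches of this allowed range can be reproduced (up to rounding and the exact treatment of the asymmetric errors, which we take at face value as $1\sigma$) by intersecting the two $3\sigma$ bands:

```python
# Allowed range of delta_Bs = 2 M12^N / dM_exp for a real new-physics
# contribution: require |dM_SM + delta * dM_exp_central| to fall
# inside the 3-sigma experimental band.
sm_lo, sm_hi = 12.6, 34.8      # 3-sigma SM range, ps^-1 (from the text)
exp_c = 17.34                  # experimental central value, ps^-1
exp_lo = exp_c - 3 * 0.20      # assumed 1-sigma errors scaled to 3 sigma
exp_hi = exp_c + 3 * 0.49

# Branch where the total mass difference stays positive
pos = ((exp_lo - sm_hi) / exp_c, (exp_hi - sm_lo) / exp_c)
# Branch where new physics flips the sign of the total
neg = ((-exp_hi - sm_hi) / exp_c, (-exp_lo - sm_lo) / exp_c)

print(neg)   # roughly (-3.1, -1.7)
print(pos)   # roughly (-1.0, 0.36)
```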
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{range.eps}
\end{center}
\caption{$\Delta M_{B_s}$ as a function of $\delta_{B_s}$ for a
new physics contribution assumed to be real. We show a shaded band
obtained by adding the new physics to the 3-$\sigma$ standard
model range. The horizontal band corresponds to the 3-$\sigma$
experimental range. }\label{range}
\end{figure}
Because the central value of the SM prediction is already larger
than the measured mass difference, there is little room for a new
contribution in phase with the standard model. A larger range is
allowed for a new physics contribution opposite in sign to the SM.
More generally, $\delta_{B_s}$ is complex and we refer to cases
in which the real part of $\delta_{B_s}$ has the same sign as
(opposite sign to) the SM as having constructive (destructive)
interference with the SM.
For a new physics contribution that is complex, we require that
$2|M^{B_s,N}_{12} + M^{B_s,SM}_{12}|$ reproduce the measured mass
difference. Once again we allow a 3-$\sigma$ range in both the SM
prediction and the measurement. In Figure~(\ref{bound}) we show
the allowed ranges for $Re(\delta_{B_s})$ and $Im(\delta_{B_s})$
for two cases. In the first case we use the central value of the
SM prediction, $\Delta M_{B_s} = 21.7 ~~ps^{-1}$ whereas in the
second case we use the full 3-$\sigma$ range $\Delta M_{B_s} =
12.6 ~~ps^{-1}{\rm ~~to~~} 34.8~~ps^{-1}$. We see again that there
is very little room for a constructive new physics contribution.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=5cm]{range-c.eps}
\includegraphics[width=8cm]{range-full.eps}
\end{center}
\caption{Constraints on $Re(\delta_{B_s})$ and $Im(\delta_{B_s})$
for a complex new physics contribution to $M_{12}^{B_s}$. The
shaded regions indicate where the prediction falls within the
3-$\sigma$ experimental range on $\Delta M_{B_s}$ when we use (a)
the central value of the SM prediction; and (b) the 3-$\sigma$
range for the SM prediction. }\label{bound}
\end{figure}
\section{Constraints on generic $Z^\prime$ models}
We now restrict ourselves to the case where the new physics
contributions originate in the exchange of $Z^\prime$ bosons with
flavor changing couplings. The flavor changing $Z^\prime$
couplings can be written in general as
\begin{eqnarray}
{\cal L} = {g\over 2 c_W} \bar q_i \gamma^\mu (a_{ij}P_L +
b_{ij}P_R )q_j Z^\prime_\mu\;.
\end{eqnarray}
A tree-level exchange of the $Z^\prime$ generates the effective
Lagrangian responsible for neutral meson mixing (and in particular
$B_s -\bar B_s$ mixing) at the $M_{Z^\prime}$ scale,
\begin{eqnarray}
{\cal L}_{Z^\prime} = - {G_F\over \sqrt{2}} {M^2_Z\over
M^2_{Z^\prime}} [a^2_{ij} O_{LL} + b^2_{ij}O_{RR} +
2a_{ij}b_{ij}O_{LR}],
\end{eqnarray}
where the operators are given by
\begin{eqnarray}
&&O_{LL} =\bar q_i \gamma^\mu P_L q_j \bar q_i\gamma_\mu P_L q_j,
\;\;O_{RR} =\bar q_i \gamma^\mu P_R q_j \bar q_i\gamma_\mu P_R
q_j,\nonumber\\
&&O_{LR} =\bar q_i \gamma^\mu P_L q_j \bar q_i\gamma_\mu P_R q_j,
\;\;\tilde O_{LR} =\bar q_i P_L q_j \bar q_iP_R q_j.
\end{eqnarray}
The operator $\tilde O_{LR}$ does not appear directly in
$Z^\prime$ exchange, but is induced by renormalization through
mixing with $O_{LR}$. Starting from an effective Lagrangian at
the high energy scale $m$ given by
\begin{eqnarray}
{\cal L} = a_{LL}(m)O_{LL}+a_{RR}(m)O_{RR} + a_{LR}(m)O_{LR} +
\tilde a_{LR}(m) \tilde O_{LR},
\end{eqnarray}
we find at a low energy scale $\mu = m_b$ relevant to $B_s$
mixing,
\begin{eqnarray}
{\cal L} = a_{LL}(\mu)O_{LL}+a_{RR}(\mu)O_{RR} + a_{LR}(\mu)O_{LR}
+ \tilde a_{LR}(\mu) \tilde O_{LR}.
\end{eqnarray}
At leading order in QCD RG running, the coefficients are
\cite{Ecker:1985ei,Barger:2003hg}
\begin{eqnarray}
&&a_{LL}(\mu) = a_{LL}(m)\eta_{LL}(\mu),\;\;a_{RR}(\mu) =
a_{RR}(m)\eta_{RR}(\mu),\nonumber\\
&&a_{LR}(\mu) = a_{LR}(m)\eta_{LR}(\mu),\;\;\tilde a_{LR}(\mu) =
\tilde
a_{LR}(m)\tilde \eta_{LR}(\mu) + {2\over 3} a_{LR}(m)(\eta_{LR}-\tilde \eta_{LR}),\nonumber\\
&&\eta_{LL}(\mu) =
\left (\eta_m\right )^{6/23},\;\;\eta_{RR}(\mu) = \left (\eta_m
\right )^{6/23},\nonumber\\
&&\eta_{LR}(\mu) = \left ( \eta_m \right )^{3/23},\;\;\tilde
\eta_{LR}(\mu) = \left (\eta_m \right )^{-24/23}.
\end{eqnarray}
where $\eta_m \equiv \alpha_s(m)/\alpha_s(\mu)$.
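The size of these QCD factors can be estimated with one-loop running of $\alpha_s$; the sketch below assumes $\alpha_s(m_Z)=0.118$, five active flavors throughout, $m = 1$ TeV and $\mu = m_b = 4.8$ GeV (all our assumptions, chosen only for illustration).

```python
import math

def alpha_s(Q, alpha_mz=0.118, m_Z=91.19, nf=5):
    # One-loop running of the strong coupling, nf active flavors
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + b0 * alpha_mz * math.log(Q**2 / m_Z**2))

m, mu = 1000.0, 4.8                 # high scale ~ M_Z' and mu = m_b, GeV
eta_m = alpha_s(m) / alpha_s(mu)

eta_LL = eta_m**(6/23)              # = eta_RR
eta_LR = eta_m**(3/23)
eta_LR_tilde = eta_m**(-24/23)

print(eta_m, eta_LL, eta_LR, eta_LR_tilde)
# roughly 0.43, 0.80, 0.90, 2.4
```

The $O_{LL}$ and $O_{LR}$ coefficients are thus only mildly suppressed between the TeV scale and $m_b$, while the induced $\tilde O_{LR}$ coefficient is enhanced.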
From the low energy effective Lagrangian one obtains the mass
difference in terms of the ``bag factors'',
\begin{eqnarray}
&&M^{P,Z'}_{12} = -{1\over 3} f_P^2 m_P B_p\left [ a_{LL}(\mu) +
a_{RR}(\mu)+ a_{LR}(\mu) (-{3\over 4} + {\epsilon\over 2}) +
\tilde a_{LR}(\mu) ({1\over 8} - {3\epsilon\over 4} )\right ],
\end{eqnarray}
where $B_P=B_{LL}=B_{RR}=B_{LR}$ is the ratio between the matrix
element $<P|\bar q\gamma^\mu \gamma_5 b \bar q \gamma_\mu
\gamma_5 b|P>$ and its value in factorization. Similarly,
$\epsilon$ is defined as $\epsilon = (\tilde
B_{LR}/B_{LL})(m^2_P/(m_i+m_j)^2)$ where $\tilde B_{LR}$ is the
ratio between the matrix element $<P|\bar q \gamma_5 b \bar q
\gamma_5 b|P>$ and its value in factorization. We will use
$\epsilon =1$ for our numerical results.
With all this we finally obtain the new physics contribution to
$M_{12}$ from $Z^\prime$ exchange,
\begin{eqnarray}
M^{P,Z^\prime}_{12} &=& {G_F\over \sqrt{2}}{m^2_Z\over
m^2_{Z^\prime}}\eta_{Z^\prime}^{6/23}{1\over 3} f^2_P M_P B_P
\left ( a^2_{ij}
+ b^2_{ij}\right. \nonumber\\
& +&\left . \eta_{Z^\prime}^{-3/23} {1\over 2}
a_{ij}b_{ij}(2\epsilon -3) + {2\over
3}(\eta_{Z^\prime}^{-3/23}-\eta_{Z^\prime}^{-30/23}){1\over
4}a_{ij}b_{ij} (1-6\epsilon)\right ).
\end{eqnarray}
The mass difference $\Delta M^P$ is then obtained by adding the SM
and new physics contributions,
\begin{eqnarray}
\Delta M^P = 2 \left|M^{P,SM}_{12}+M^{P,Z^\prime}_{12}\right|.
\end{eqnarray}
As mentioned before, the new physics contribution can be
constructive or destructive with respect to the SM. In the case of
$B_s$ mixing, if $a_{sb}$ and $b_{sb}$ are real relative to the
CKM matrix element $V_{tb}^*V_{ts}$, the new contributions are
constructive. We show in Figure~(\ref{constraint}) the allowed
region for the parameters $a_{sb}$, and $b_{sb}$ (assumed to be
real). The shaded region is obtained
for a SM contribution allowed to vary in its 3-$\sigma$ range. For
emphasis, the darker shaded region corresponds to $(\Delta
M^{B_s})_{SM}= 12.6 ~~ps^{-1}$ indicating the lower end of the
theory range. Notice that for the central value of the SM
prediction, $(\Delta M^{B_s})_{SM}= 21.7 ~~ps^{-1}$, there is no
room for constructive new physics.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm]{zpri.eps}
\end{center}
\caption{Constraints on the flavor changing parameters
$\hat{a}_{sb}\equiv (m_Z/m_{Z^\prime})a_{sb}\times 10^3$ and
$\hat{b}_{sb}\equiv (m_Z/m_{Z^\prime})b_{sb}\times 10^3$. The
shaded region is allowed when the SM contribution varies in its
3-$\sigma$ range. The darker shaded region corresponds
to $(\Delta M_{B_s})_{SM}=12.6~~ps^{-1}$. }\label{constraint}
\end{figure}
A similar exercise can be done for $B_d$ mixing with the
3-$\sigma$ SM range from Ref.~\cite{Charles:2004jd} as well as the
HFAG experimental average \cite{hfag}
\begin{eqnarray}
\left(\Delta M_{B_d}\right)_{SM} &=& 0.394^{+0.361}_{-0.162} ~~ps^{-1}\nonumber \\
\left(\Delta M_{B_d}\right)_{exp}&=& (0.507 \pm 0.004) ~~ps^{-1}.
\end{eqnarray}
Assuming that the new physics is real it will also interfere
constructively with the SM. Taking the central value of the SM
prediction and requiring the total $\Delta M^{B_d}$ to fall within
the 3-$\sigma$ experimental range leads to the allowed region
shown in Figure~(\ref{rangebd}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm]{zprid.eps}
\end{center}
\caption{Constraints on the flavor changing parameters
$\hat{a}_{db}\equiv (m_Z/m_{Z^\prime})a_{db}\times 10^3$ and
$\hat{b}_{db}\equiv (m_Z/m_{Z^\prime})b_{db}\times 10^3$. The
shaded region indicates the 3-$\sigma$ experimental range on
$\Delta M_{B_d}$ corresponding to the central value of the SM
prediction. }\label{rangebd}
\end{figure}
\section{A non-universal
$Z^\prime$ model and flavor changing parameters}
We now apply the above constraints to a $Z'$ model with
non-universal couplings. The non-universal $Z^\prime$ models we
consider here have been discussed in
Ref.~\cite{He:2002ha,He:2003qv,He:2004it} motivated by the
apparent anomaly in the measurement of $A^b_{FB}$ at LEP
\cite{Chanowitz:2001bv,Abbaneo:2001ix}. The models are variations
of left-right models in which the right-handed interactions single
out the third generation giving enhanced $Z^\prime$ couplings to
the $b$ and $t$ quarks and to the $\tau$ and $\nu_\tau$ leptons.
In general the models contain tree-level flavor changing neutral
currents as well. We find that the new measurement of $\Delta
M_{B_s}$ can place stringent constraints on some of the parameters
of the model, but that there is still room for a substantial
enhancement in the modes $B\to X_s \tau^+ \tau^- (\nu_\tau \bar
\nu_\tau )$, $B_s\to \tau^+ \tau^-$, and also $K\to \pi \nu_\tau
\bar \nu_\tau$. We briefly review the relevant aspects of the
models and refer the reader to
Ref.~\cite{He:2002ha,He:2003qv,He:2004it} for details.
In these models the first two generations are chosen to have the
same transformation properties as in the standard model with
$U(1)_Y$ replaced by $U(1)_{B-L}$,
\begin{eqnarray}
&&Q_L = (3,2,1)(1/3),\;\;\;\;U_R = (3,1,1)(4/3),\;\;\;\;D_R =
(3,1,1)(-2/3),
\nonumber\\
&&L_L = (1,2,1)(-1),\;\;\;\;E_R = (1,1,1)(-2). \label{gens12}
\end{eqnarray}
The numbers in the first parenthesis are the $SU(3)$, $SU(2)_L$
and $SU(2)_R$ group representations respectively, and the number
in the second parenthesis is the $U(1)$ charge. For the first two
generations the $U(1)$ charge is the same as the $U(1)_Y$ charge
in the SM and for the third generation it is the usual
$U(1)_{B-L}$ charge of LR models. The third generation is chosen
to transform differently,
\begin{eqnarray}
&&Q_L(3) = (3,2,1)(1/3),\;\;\;\;Q_R(3) = (3,1,2)(1/3),\nonumber\\
&&L_L(3) = (1,2,1)(-1),\;\;\;\;L_R = (1,1,2)(-1). \label{gen3}
\end{eqnarray}
The correct symmetry breaking and mass generation of particles can
be induced by the vacuum expectation values of three Higgs
representations: $H_R = (1,1,2)(-1)$, whose non-zero vacuum
expectation value (vev) $v_R$ breaks the group down to
$SU(3)\times SU(2)\times U(1)$; and the two Higgs multiplets, $H_L
= (1,2,1)(-1)$ and $\phi = (1,2,2)(0)$, which break the symmetry
to $SU(3)\times U(1)_{em}$.
The models contain flavor changing neutral currents at tree level
that contribute to $B_s$ mixing and other related flavor changing
decays; the relevant interactions are \cite{He:2003qv}
\begin{eqnarray}
{\cal L}_Z &=& {g\over 2}\tan\theta_W (\tan\theta_R+\cot\theta_R)
(\sin\xi_Z Z_\mu + \cos\xi_Z Z^\prime_\mu) \nonumber \\
&\times &\left( \bar d_{Ri} \gamma^\mu V^{d*}_{Rbi} V^{d}_{Rbj}
d_{Rj} -\bar u_{Ri} \gamma^\mu V^{u*}_{Rti} V^{u}_{Rtj} u_{Rj}
+\bar \tau_R \gamma^\mu \tau_R -\bar \nu_{R \tau} \gamma^\mu
\nu_{R \tau} \right) \label{nmcoups}
\end{eqnarray}
In this expression $g$ is the usual $SU_L(2)$ gauge coupling,
$\theta_W$ the usual electroweak angle, $\theta_R$ parametrizes
the relative strength of the right-handed interactions, $\xi_Z$ is
the $Z$-$Z^\prime$ mixing angle and $V^{u,d}_{Rij}$ are the
unitary matrices that rotate the right-handed up-(down)-type
quarks from the weak eigenstate basis to the mass eigenstate basis
\cite{He:2003qv}.
The relative strength of left- and right-handed interactions is
determined by the parameter $\cot\theta_R$. In the limit in which
this parameter is large, the new right-handed interactions affect
predominantly the third generation. It was found in
Ref.~\cite{He:2003qv} that the measurement of $g_{R\tau}$ at
LEP\cite{Abbaneo:2001ix} implies a small $\cot\theta_R \xi_Z \leq
10^{-3}$ if the new interaction affects the third generation
leptons as well as the quarks. It is possible to construct models
in which the third generation lepton couplings are not enhanced.
Here we consider models in which they are enhanced but in which
the $Z-Z^\prime$ mixing is negligible.
In Ref.~\cite{He:2003qv}, the process $e^+e^- \rightarrow b
\bar{b}$ at LEP-II was used to obtain a lower bound for the mass
of the new $Z^\prime$ gauge boson for a given $\cot\theta_R$. For
our present purpose that bound can be approximated by the relation
\begin{equation}
\cot\theta_R \tan\theta_W \left({M_W \over M_{Z^\prime}}\right)
\sim 1. \label{appbound}
\end{equation}
Within this framework there are two potentially large sources of
FCNC. The first arises through the coupling $\bar{d}_i\gamma_\mu P_R
d_j Z^{\prime \mu}$, which occurs at tree level and also
receives large one-loop corrections (enhanced by $\cot\theta_R$).
There
is a second operator responsible for FCNC which has the form
$\bar{d}_i\gamma_\mu P_L d_j Z^{\prime \mu}$. This operator first
occurs at one-loop with a finite coefficient that is enhanced by
$\cot\theta_R$, and is present even when there are no
FCNC at tree-level. Because it is enhanced by $\cot\theta_R$, it
can contribute to a low energy FCNC process at the same level as
the ordinary electroweak penguins mediated by the $Z$ boson even
though $M_{Z^\prime} >> M_Z$. It can be written as
\cite{He:2004it}
\begin{equation}
{\cal L}_{eff} = {g^3 \over 16 \pi^2} \tan\theta_W\cot\theta_R
V^\star_{ti} V_{tj} I(\lambda_t,\lambda_H) \bar{d}_i \gamma_\mu
P_L d_j\ Z^{\prime \mu} \label{effl},
\end{equation}
where $I(\lambda_t,\lambda_H)$ ($\lambda_i = m^2_i/m^2_W$) is the
corresponding Inami-Lim type function. With a Higgs mass in the
range of a few hundred GeV, this function varies between a few and
about 20 \cite{He:2004it}. When a third generation lepton pair is
attached to the $Z^\prime$ in Eq.~(\ref{effl}), a second factor
$\cot\theta_R$ is introduced which compensates for the small
$M_Z/M_{Z^\prime}$ ratio and makes this mechanism comparable to
the standard $Z$ penguin as follows from Eq.~(\ref{appbound}).
Collecting the above FCNC interactions, we find that for large
$\cot\theta_R$, $Z^\prime$ exchange will produce the following
effective flavor changing parameters,
\begin{eqnarray}
&&a_{ij} = {\alpha\over 2 \pi \sin^2\theta_W}I(\lambda_t,
\lambda_H) \cos\theta_W \tan\theta_W \cot\theta_R
V^*_{ti}V_{tj},\nonumber\\ &&b_{ij} = \cos\theta_W \tan\theta_W
\cot\theta_R \cos\xi_Z V^{d*}_{bi}V^d_{bj}.
\end{eqnarray}
Here $\cos\xi_Z = 1$ since we are working in the limit of no
$Z-Z^\prime$ mixing.
It is interesting to find that the one loop generated $a_{ij}$
can have a significant contribution to $B_s$ mixing. For example,
with $\cot\theta_R \tan\theta_W (m_W/m_{Z^\prime} )= 1$ and
$b_{bs} = 0$, Figure~(\ref{constraint}) shows that the range
$0.0012 \lsim (m_Z/ m_{Z^\prime})|a_{bs}| \lsim 0.0015$ added to
the lowest bound of the SM reproduces the measured $B_s$ mass
difference. This range implies
\begin{eqnarray}
5.5 \lsim I(\lambda_t, \lambda_H)\left|
\frac{V_{tb}^*V_{ts}}{0.04}\right| \lsim 6.5 \label{intran}
\end{eqnarray}
This range is accessible in our models with reasonable parameters
for the Higgs mass and $\cot\theta_R$ (See Figure 3 in
Ref.~\cite{He:2004it}). Interestingly, this is the same range in
which $K^+ \to \pi^+ \nu \nu$ reproduces the branching ratio
measured by E787 and E949 (which requires $I(\lambda_t,\lambda_H)
= 5.54$ \cite{He:2004it}).
If we take the value, $I(\lambda_t,\lambda_H) = 5.54$, then $(m_Z/
m_{Z^\prime})|a_{bs}| \sim 0.0012$ and from
Figure~(\ref{constraint}) we see that the following range of
values is allowed for real $b_{bs}$
\begin{eqnarray}
0.0005\lsim \left|V^{d*}_{Rbb}V^d_{Rbs}\right| \lsim 0.0009.
\end{eqnarray}
Allowing $V^{d*}_{bb}V^d_{bs}$ to be complex, we find an allowed
range shown in Figures~(\ref{parameter}) and ~(\ref{parameterp}).
In Figure~(\ref{parameter}) we use $\Delta
M^{SM}_{B_s}=12.6ps^{-1}$, at the low end of the theoretical
range. The region shaded in light gray corresponds to $a_{bs}=0$,
whereas the region in dark gray corresponds to $a_{bs}=-0.0012$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{fig5a.eps}{\hspace{0.5in}}
\includegraphics[width=6cm]{fig5b.eps}
\end{center}
\caption{ Constraints on the flavor changing parameter
$\zeta\equiv V^{d*}_{bb}V^d_{bs}$ with $\Delta
M^{SM}_{B_s}=12.6ps^{-1}$. The shaded regions correspond to the
new physics contribution necessary to match the 3-$\sigma$ $\Delta
M_{B_s}$ range for $a_{bs}=0$ (light gray) and $a_{bs}=-0.0012$
(dark gray).}\label{parameter}
\end{figure}
If we take $a_{bs}=-0.0012$ but use the central value of the SM
range then we obtain Figure~(\ref{parameterp}). Notice that in
this case there are no solutions for real values of $b_{bs}$ (or
$\zeta$). New physics in this case is only allowed with a large CP
violating phase.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{fig4acv.eps}{\hspace{0.5in}}
\includegraphics[width=6cm]{fig4bcv.eps}
\end{center}
\caption{ Constraints on the flavor changing parameter
$\zeta\equiv V^{d*}_{bb}V^d_{bs}$ with $\Delta M^{SM}_{B_s} =
21.7ps^{-1}$. The shaded regions correspond to the new physics
contribution necessary to match the 3-$\sigma$ $\Delta M_{B_s}$
range when $a_{bs}=-0.0012$.}\label{parameterp}
\end{figure}
\section{$B\to X_s \tau^+ \tau^- (\nu \bar \nu )$,
$B_s \to \tau^+ \tau^-$ and $K\to \pi\nu\bar \nu$ }
We now show that the constraints on the flavor changing parameters
from $\Delta M_{B_s}$ still allow for a substantial enhancement in
$b \to s \tau^+ \tau^- (\nu_\tau \bar \nu_\tau)$ transitions. In
the large $\cot \theta_R$ limit, a $Z^\prime$ exchange at tree
level leads to an effective interaction
\begin{equation}
{\cal L} = {g^2\tan^2\theta_W\cot^2\theta_R\over 4 M^2_{Z^\prime}}
V^{d\star}_{Rbs}V^{d}_{Rbb}\bar s \gamma_\mu P_R b \
(\bar{\nu}_\tau\gamma^\mu P_R \nu_\tau - \bar \tau \gamma^\mu P_R
\tau) + h.c. \label{tleffl},
\end{equation}
and at one loop level to
\begin{eqnarray}
{\cal L} = {g^2\tan^2\theta_W\cot^2\theta_R\over 4 M^2_{Z^\prime}}
{g^2 \over 8 \pi^2} V^*_{ts}V_{tb}I(\lambda_t,\lambda_H) \bar s
\gamma_\mu P_L b \ (\bar{\nu}_\tau\gamma^\mu P_R \nu_\tau - \bar
\tau \gamma^\mu P_R \tau) + h.c.
\end{eqnarray}
The corresponding transitions in the SM are mediated by the
effective Hamiltonian
\begin{equation}
\tilde{H}_{eff} = {G_F \over \sqrt{2}}{2\alpha\over \pi
\sin^2\theta_W} V^\star_{ts}V_{tb} \bar{s}\gamma_\mu P_L b \
\left[X(x_t) \sum_\ell \bar{\nu_\ell}\gamma^\mu P_L \nu_\ell -
Y(x_t) \bar \tau \gamma^\mu P_L \tau\right]+ h.c., \label{comp}
\end{equation}
where $x_t =m_t^2/M_W^2$ and the Inami-Lim functions $X(x_t)$ and
$Y(x_t)$ are approximately equal to 1.6 and 1.06 respectively
\cite{Buchalla:1995vs}.
Comparing the tree-level $Z^\prime$ exchange and SM contributions,
we have
\begin{eqnarray}
&&{\Gamma_{new}(B\to X_s \nu\bar \nu)\over \Gamma_{SM}(B\to X_s
\nu\bar \nu)} \approx 1130 \cot^4\theta_R \tan^4\theta_W
\left({M_W \over M_{Z^\prime}}\right)^4
\left|{V^{d\star}_{Rbs}V^{d}_{Rbb}\over
V^\star_{ts}V_{tb}}\right|^2,\nonumber\\
&&{\Gamma_{new}(B_s\to \tau \bar \tau)\over \Gamma_{SM}(B_s\to
\tau \bar \tau)} \approx 7730 \cot^4\theta_R \tan^4\theta_W
\left({M_W \over M_{Z^\prime}}\right)^4
\left|{V^{d\star}_{Rbs}V^{d}_{Rbb}\over
V^\star_{ts}V_{tb}}\right|^2. \label{kpnnrat}
\end{eqnarray}
In the SM, with $|V_{ts}^*V_{tb}|\approx 0.04$ these branching
ratios are predicted to be $B(B\to X_s \nu \bar \nu) = 4\times
10^{-5}$ and $B(B_s \to \tau \bar \tau) = 1.1\times 10^{-6}$.
For $B\to X_s \tau^+ \tau^-$ it is easier to compare the new
contribution to the semileptonic decay,
\begin{equation}
\frac{\Gamma_{new}(B\to X_s \tau \bar \tau)}{\Gamma_{SM}(B \to X_c
e^-\bar{\nu})} \approx 0.06 \cot^4\theta_R \tan^4\theta_W
\left({M_W \over
M_{Z^\prime}}\right)^4\left|{V^{d\star}_{Rbs}V^{d}_{Rbb}\over
V_{cb}}\right|^2
\end{equation}
The short distance contributions to $B\to X_s \tau^+ \tau^-$
within the SM have been estimated to be $B(B\to X_s \tau^+
\tau^-) = 3.2 \times 10^{-7}$ \cite{Hewett:1995dk}.
Using the constraint $|V^{d*}_{Rbb}V^d_{Rbs}|\lsim
3.5\times10^{-3}$ from $\Delta M_{B_s}$ with $\cot\theta_R
\tan\theta_W (m_W/m_{Z^\prime}) \approx 1$ (see
Figure~(\ref{parameter})), we find the following upper bounds for
these decays
\begin{eqnarray}
&&B(B\to X_s \tau^+ \tau^-)\leq 4.4\times 10^{-5},\nonumber\\
&&B(B\to X_s \nu \bar \nu)\leq 3.7\times 10^{-4},\nonumber\\
&&B(B_s \to \tau \bar
\tau)\leq 6.3\times 10^{-5}. \label{modes}
\end{eqnarray}
These upper bounds are larger than the respective SM predictions
by factors of about 100, 8 and 55, representing one and two orders
of magnitude enhancements. They occur when the imaginary part of
$V^{d*}_{Rbb}V^d_{Rbs}$ is large (see Figure~(\ref{parameter})).
If we restrict $V^{d*}_{Rbb}V^d_{Rbs}$ to be real, then the
constraint reads $|V^{d*}_{Rbb}V^d_{Rbs}|\lsim 9 \times10^{-4}$
and the largest enhancements possible for the modes in
Eq.~(\ref{modes}) become 10, 1.5 and 5 respectively. The one-loop
new physics contributions are still smaller than this.
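The upper bounds in Eq.~(\ref{modes}) follow directly from the ratios in Eq.~(\ref{kpnnrat}) and the constraint on $|V^{d*}_{Rbb}V^d_{Rbs}|$; the short check below reproduces them up to rounding. The values $|V_{cb}| \approx 0.0415$ and $B(B\to X_c e\bar\nu) \approx 0.105$ are our assumptions, not quoted in the text.

```python
# Reproduce the upper bounds of Eq. (modes), taking
# cot(theta_R) tan(theta_W) (M_W/M_Z') = 1 and |V*_Rbb V_Rbs| <= 3.5e-3.
VV     = 3.5e-3      # upper bound on |V^d*_Rbb V^d_Rbs| from Delta M_Bs
VtsVtb = 0.04        # |V_ts V_tb*| (from the text)
V_cb   = 0.0415      # |V_cb| (assumption)
B_sl   = 0.105       # B(B -> X_c e nu) (assumption)

B_Xs_nunu   = 1130 * (VV / VtsVtb)**2 * 4e-5     # SM: 4e-5
B_Bs_tautau = 7730 * (VV / VtsVtb)**2 * 1.1e-6   # SM: 1.1e-6
B_Xs_tautau = 0.06 * (VV / V_cb)**2 * B_sl       # SM: 3.2e-7

print(B_Xs_tautau, B_Xs_nunu, B_Bs_tautau)
# roughly 4.5e-5, 3.5e-4, 6.5e-5, matching Eq. (modes) up to rounding
```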
Let us now comment on the effect of $\Delta M_{B_s}$ on $K\to \pi
\nu \bar \nu$. Here we would like to see whether the new
constraint on the flavor changing parameters allows for the
enhancement of about 2 over the SM that is required to reproduce
the measured central value $B(K^+ \to \pi^+ \nu \bar \nu) =
(1.47^{+1.30}_{-0.89})\times 10^{-10}$ by E787 and E949
\cite{Adler:2001xv,Artamonov:2004hr}. The tree-level $Z^\prime$
contribution compared with the SM is given by
\begin{eqnarray}
&&{\Gamma_{tree}(K^+\to \pi^+ \nu\bar \nu)\over \Gamma_{SM}(K^+\to
\pi^+ \nu\bar \nu)} \approx 1130 \cot^4\theta_R \tan^4\theta_W
\left({M_W \over M_{Z^\prime}}\right)^4
\left|{V^{d\star}_{Rbs}V^{d}_{Rbd}\over
V^\star_{ts}V_{td}}\right|^2.
\end{eqnarray}
The parameters involved are different from those in $B_s$ mixing.
They can be related when the matrix $V^d_{Rij}$ is almost
diagonal. In that case $V^d_{Rbb} \approx 1$, and
$|V^{d*}_{Rbb}V^d_{Rbs} |\approx |V^d_{Rbs}|\lsim 3.5 \times
10^{-3}$. A similar analysis for $\Delta M_{B_d}$ using
Figure~(\ref{rangebd}) leads to $|V^d_{Rbd}| \lsim 2.5\times
10^{-4}$. Combining these two results one obtains
\begin{eqnarray}
\left|{V^{d\star}_{Rbs}V^{d}_{Rbd}\over
V^\star_{ts}V_{td}}\right|^2 \lsim 7 \times 10^{-6}
\end{eqnarray}
With these numbers, the tree-level $Z^\prime$ exchange
contributions to $B(K^+ \to \pi^+ \nu \bar \nu)$, $B(B_d \to X_d
\tau^+ \tau^- (\nu\bar \nu))$ and to $B(B_d \to \tau^+ \tau^-
(\nu \bar \nu))$ are much smaller than their SM counterparts.
The situation is different for the one loop level $Z^\prime$
flavor changing interaction Eq.~(\ref{effl}). Here we have
\begin{eqnarray}
{\Gamma_{loop}(K^+\to \pi^+ \nu \bar \nu)\over \Gamma_{SM}(K^+\to
\pi^+\nu\bar \nu)} \approx {1\over 12} \cot^4\theta_R
\tan^4\theta_W \left ({m_W\over m_{Z^\prime}}\right )^4 \left
\vert {I(\lambda_t, \lambda_H)\over X(x_t)}\right \vert^2.
\end{eqnarray}
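With the LEP-II relation Eq.~(\ref{appbound}) saturated, this ratio is numerically close to one, so the one-loop $Z^\prime$ exchange roughly doubles the SM rate. A quick check, using $X(x_t) = 1.6$ as quoted above:

```python
# Ratio of the one-loop Z' contribution to the SM for K+ -> pi+ nu nubar,
# with cot(theta_R) tan(theta_W) (m_W/m_Z') = 1 (Eq. (appbound) saturated).
I_val = 5.54      # Inami-Lim-type function value quoted in the text
X_val = 1.6       # SM Inami-Lim function X(x_t) (from the text)

ratio = (1.0 / 12.0) * (I_val / X_val)**2
print(ratio)   # ~1.0: the Z' loop doubles the SM rate, as E787/E949 prefer
```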
The total contribution to the rate is simply the sum of the SM and
the one loop $Z^\prime$ exchange since they have the same CKM
factor and the same sign. With $I(\lambda_t, \lambda_H) = 5.54$,
we obtain the central value of $1.47\times 10^{-10}$ measured by E787 and
E949, and as we saw in Eq.~(\ref{intran}), this value is allowed
by $\Delta M_{B_s}$. A similar situation occurs for the CP
violating decay $K_L \to \pi^0 \nu\bar \nu$ where
\begin{eqnarray}
\Gamma(K_L \to \pi^0 \nu\bar \nu) = \Gamma(K_L \to \pi^0 \nu\bar
\nu)_{SM} {\Gamma(K^+\to \pi^+\nu\bar \nu)\over \Gamma(K^+\to
\pi^+ \nu\bar \nu)_{SM}}.
\end{eqnarray}
\section{CP Violation}
A complex new physics contribution to $M^{B_s,N}_{12}$ can have
significant effects on CP violation in $B_s$ decays \cite{Group1}.
We briefly comment on the effects on two experimental observables,
the dilepton and the time dependent CP asymmetries $a$ and
$A_{TCP}$ defined as
\begin{eqnarray}
&&a = {N^{++} - N^{--}\over N^{++}+N^{--}} =
\frac{\left|\frac{p_{B_s}}{q_{B_s}}\right|^2|A|^4 -
\left|\frac{q_{B_s}}{p_{B_s}}\right|^2 |\bar A|^4 }{
\left|\frac{p_{B_s}}{q_{B_s}}\right|^2|A|^4
+\left|\frac{q_{B_s}}{p_{B_s}}\right|^2 |\bar
A|^4},\nonumber\\
&&A_{TCP} = 2 e^{\frac{\Delta \Gamma^{B_s}}{2} t} {A_f \cos(\Delta
M^{B_s} t) + S_f \sin(\Delta M^{B_s} t)\over 1+ e^{\Delta
\Gamma^{B_s} t} - A^{\Delta \Gamma}_f (1- e^{\Delta \Gamma^{B_s} t})},
\end{eqnarray}
where $N^{ii}$ is proportional to $\Gamma(b \bar b \to l^i l^i X)$
and $A$ and $\bar A$ are the decay amplitudes for $B \to l^+ \nu
X$ and $\bar B \to l^- \bar \nu \bar X$. $\Delta \Gamma$ is the
lifetime difference between the heavy and light states $B_s^L$ and
$B_s^H$. The other quantities are defined as
\begin{eqnarray}
&&A_f = {|A(f)|^2-|\bar A(\bar f)|^2\over |A(f)|^2+|\bar A(\bar
f)|^2},\;\;S_f = - 2 {Im((q_{B_s}/p_{B_s})\bar A(f)A^*(f))\over
|A(f)|^2+|\bar A(\bar f)|^2},\nonumber\\
&&A^{\Delta \Gamma}_f = 2 {Re((q_{B_s}/p_{B_s})\bar
A(f)A^*(f))\over |A(f)|^2+|\bar A(\bar f)|^2},
\;\;|A_f|^2+|S_f|^2+|A^{\Delta \Gamma}_f|^2 = 1. \label{qm}
\end{eqnarray}
Here $A(f)$ and $\bar A(\bar f)$ are decay amplitudes for $B_s$
and $\bar B_s$ decay into CP eigenstates $f$. In terms of the
$B_s$ mixing parameters,
\begin{eqnarray}
{q_{B_s}\over p_{B_s}} = \sqrt{{M^{B_s*}_{12} -
i\Gamma^{B_s*}_{12}/2\over M^{B_s}_{12} - i \Gamma^{B_s}_{12}/2}}.
\end{eqnarray}
Assuming that CP violation in $A$ and $\bar A$ is small, $|A| =
|\bar A|$, and
\begin{eqnarray}
a = {Im(\Gamma^{B_s}_{12}/M^{B_s}_{12}) \over 1 +
|\Gamma^{B_s}_{12}/M^{B_s}_{12}|^2/4}.
\end{eqnarray}
In the SM one has \cite{Beneke:1998sy}
\begin{eqnarray}
{2\Gamma^{B_s,SM}_{12}\over \Gamma^{B_s}} = -{f^2_{B_s}\over
(230\mbox{MeV})^2}(0.007 B_{B_s} + 0.132 \epsilon - 0.078)\approx
-0.11.
\end{eqnarray}
This number is consistent with the 95\% CL HFAG experimental upper
bound of $\Delta \Gamma(B_s)/\Gamma(B_s) < 0.54$ \cite{hfag}.
Since $\Gamma^{B_s}_{12}= |\Gamma_{12}^{B_s}|e^{i\alpha_s}$ arises from
loop contributions involving light quarks, we do not expect a
significant new physics contribution. In the following we will use
the SM value above to estimate this quantity. In terms of the
phase of $M^{B_s}_{12} = |M_{12}^{B_s}|e^{i\alpha_s + i\theta_s}$,
we obtain
\begin{eqnarray}
a \approx 0.004\sin\theta_s.
\end{eqnarray}
The phase $\theta_s$ is not constrained by the measurement of
$\Delta M_{B_s}$; as we saw, it can take any value from $0$ to
$2\pi$. Therefore the asymmetry $a$ can vary in the range from
-0.004 to 0.004.\footnote{It can also reach 2\% if
$\Delta\Gamma(B_s)/\Gamma(B_s)$ is close to its experimental upper
bound.}
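The prefactor $0.004$ follows from the numbers above; a small sketch, assuming $\tau_{B_s} \approx 1.47$ ps (an input not quoted in the text):

```python
# Check of a ~ 0.004 sin(theta_s): with 2 Gamma12/Gamma ~ -0.11 and
# Delta M_Bs = 2 |M12| = 17.34 ps^-1, the prefactor is |Gamma12/M12|
# (the denominator correction |Gamma12/M12|^2/4 is negligible).
tau_Bs  = 1.47                # B_s lifetime in ps (assumption)
Gamma   = 1.0 / tau_Bs        # total width, ps^-1
Gamma12 = 0.11 / 2 * Gamma    # |Gamma12| from 2 Gamma12/Gamma = -0.11
M12     = 17.34 / 2           # |M12| from Delta M = 2 |M12|, ps^-1

prefactor = Gamma12 / M12
print(prefactor)   # ~0.004, reproducing a ~ 0.004 sin(theta_s)
```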
There may also be large effects in the time dependent CP
asymmetry. The time dependent CP asymmetries in B decays have been
shown to provide crucial information about CP violation in $B_d$
decays. For $B_s$ decays, the ``Gold Plated" mode to study CP
violation is $B_s \to \psi \phi$. In the SM, $S_f$ is about 0.038
and $A_f$ is also very small. But with new physics, $S_f$ can be
much larger (since $S_f = \sin(2\theta_s)$) even if there is no CP
violating phase in $A(f)$ and $\bar A(\bar f)$. Future experiments
should test CP violation in the $B_s$ sector \cite{Ball:2000ba}.
There is another special aspect of time dependent CP violation in
$B_s$ decays due to the fact that $\Delta \Gamma$ is not equal to
zero \cite{Ball:2000ba}. If $\Delta \Gamma=0$, which is a very good
approximation for $B_d$ decays, it is not possible to measure
$A^{\Delta \Gamma}_f$, and one cannot check the last equation in
Eq.~(\ref{qm}). Assuming again that CP violation in the decay
amplitudes is small, we have
\begin{eqnarray}
A_{TCP} = 2 e^{\Delta \Gamma^{B_s} t/2} {\sin\theta_s \sin(\Delta
M^{B_s} t)\over 1+ e^{\Delta \Gamma^{B_s} t} - \cos\theta_s (1-
e^{\Delta \Gamma^{B_s} t})}.
\end{eqnarray}
In Figure~(\ref{quantf}), we show $ a_{TCP}=A_{TCP}(\Delta
\Gamma^{B_s}) - A_{TCP}(0)$ as a function of t. We have chosen two
values $2\pi/3$ and $\pi/5$ for $\theta_s$ for illustration. We
can see that there are differences at the few percent level
compared with the $\Delta \Gamma = 0$ case; such differences may be
tested at LHCb.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{quantf.eps}
\end{center}
\caption{$a_{TCP}$ as a function of $t$ (ps) for two values of
$\theta_s$: $\theta_s = 2\pi/3$ shown as a solid line, and $\theta_s
= \pi/5$ shown as a dashed line.}\label{quantf}
\end{figure}
\noindent {\bf Acknowledgments}$\,$ The work of X.G.H. was
supported in part by the National Science Council under NSC
grants. The work of G.V. was supported in part by DOE under
contract number DE-FG02-01ER41155. We thank Soeren Prell for
useful conversations.
# The Bhagavad-Gita for the Modern Reader
What is the _Bhagavad-Gita_? Is it just a religious text? When was it composed? How relevant is it to the modern world?
This book answers these foundational questions and more. It critically examines the _Bhagavad-Gita_ in terms of its liberal, humanist and inclusive appeal, bringing out its significance for both present times and novel applications. The author elaborates the philosophy underlying the text as well as its ethical and spiritual implications. He also responds to criticisms that have been levelled against the text by Ambedkar, D. D. Kosambi and, more recently, Amartya Sen. The volume proposes new ways of utilising the text in diverse fields, such as business and management and scientific research.
Eclectic and accessible, this work will be of interest to scholars of philosophy, religion, history, business and management studies, as well as the general reader.
**M. V. Nadkarni** is presently Honorary Visiting Professor at the Institute for Social and Economic Change (ISEC), Bengaluru, and a Member of the Governing Body at the Centre for Multi-disciplinary Development Research (CMDR), Dharwad, Karnataka, India. An economist by professional training, with specialisation in agricultural and ecological/environmental economics, he is actively interested in development economics, political economy, history, sociology, philosophy, ethics, religion and Gandhian Studies. He was the Indian Council of Social Science Research (ICSSR) National Fellow for two years (2002–04) and Vice Chancellor of Gulbarga University, Karnataka, India from 1999 to 2002.
# The Bhagavad-Gita for the Modern Reader
# History, interpretations and philosophy
_M. V. Nadkarni_
First published 2017
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017
_Routledge is an imprint of the Taylor & Francis Group, an informa business_
© 2017 M. V. Nadkarni
The right of M. V. Nadkarni to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
_Trademark notice_ : Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
_British Library Cataloguing in Publication Data_
A catalogue record for this book is available from the British Library
_Library of Congress Cataloging in Publication Data_
A catalog record has been requested for this book.
ISBN: 978-1-138-20231-3 (hbk)
ISBN: 978-1-315-43900-6 (ebk)
Typeset in Sabon
by Apex CoVantage, LLC
### Transliteration
Yasya mulam tvaya proktam mameti hyarpaye katham /
Koaham arpana-karta syam yasyastitvam tvaya kritam //1//
Tvam sweekurushwakhilakaryakartah kritam tu kimchit tava pritijatam /
Gitarthasarastu janeshu gamyah tvadiya karyesmi nimittamatram //2//
Bhanita Bhagavadgita bahvibhih bhashya-bhangibhih /
Tasam talina-tatparyam granthesmin viniveditam //3//
Vibhrame bhartsita Gita vitarkena vimarshakaih /
Satarkam khandanam tesham vinayenaiva bhashitam //4//
Kalakramena Gitaya prayoga agata navah /
Tesham kathanam chaapi savistarena samarpitam //5//
Vachakeshu vyavasthayin Vishwesha-a Vishwa-vatsala /
Vanditvaanena granthena vinamram tvam yajamyaham //6//
### Translation
How can I dedicate this as mine whose source is (as) told by You?
Who am I to dedicate whose existence is created by You? //1//
Even then, Oh Doer of Everything, accept
what little is done born from your love!
The essence of the Gita needs to be disseminated among people;
I am just an instrument (nimitta) in Your own work. //2//
The Bhagavad-Gita is spoken of by numerous waves of (bhangibhih) commentaries;
their underlying (talina) purport (tatparyam) is narrated in this book. //3//
The Gita has been criticised in confusion by critics with false arguments;
their refutation is presented (here) logically and politely only. //4//
In the course of time, new applications (prayogah) of the Gita have come into vogue;
their narration is also offered in good detail. //5//
Through this book, I salute (and) propitiate (yajami) You in all modesty, the Lord of the universe,
(who is) affectionate to all, and nicely abiding among the readers. //6//
[The original Sanskrit verses and their translation are both by the author]
# Contents
1. Cover
2. Title
3. Copyright
4. CONTENTS
5. _Preface_
6. _Key to transliteration_
7. 1 Significance of the Gita, its date and authorship
1. As a sacred text
2. As a part of the Mahabharata
3. Date and authorship
8. 2 Classical commentators of the Gita
1. The Gita in the rest of the Mahabharata, and the _Puranas_
2. Shankara
3. Bhaskara
4. Ramanuja
5. Madhva
6. Others in the Sanskritic tradition
7. _Jnaneshwari_ – The Gita goes to people at large
8. Appendix to Chapter 2 on the three Acharyas and their philosophies
9. 3 The Gita goes global
1. Wilkins's English translation, impact and reactions
2. Reception in Germany
3. Further spread
10. 4 Makers of modern India and their interpretations of the Gita
1. Raja Rammohan Roy
2. Bankimchandra
3. Theosophical Society and Annie Besant
4. Bal Gangadhar Tilak
5. Swami Vivekananda
6. Lala Lajpat Rai
7. Mahatma Gandhi
8. Aurobindo Ghose
9. Swami Sahajananda Saraswati
10. Jawaharlal Nehru
11. 5 Contemporary interpretations
1. Swami Ramdas
2. D. V. Gundappa (DVG)
3. Swami Sivananda
4. K. M. Munshi
5. S. Radhakrishnan
6. Sri Sri Paramahansa Yogananda
7. Vinoba Bhave
8. A. C. Bhaktivedanta Swami Prabhupada
9. Swami Ranganathananda
10. Eknath Easwaran
11. Swami Chinmayananda
12. Maharishi Mahesh Yogi
13. Swami Dayananda Saraswati
12. 6 Philosophy of the Gita
1. The Gita and the pursuit of happiness
2. Ethics in the Gita
3. God and His world
4. _Sadhana_: spiritual striving
13. 7 Criticisms of the Gita and responses
1. Contradictions
2. Historicity
3. Is the Gita other-worldly? amoral? deterministic?
4. Is the Gita reactionary?
5. The Gita and its deontology
6. Miscellaneous criticisms
14. 8 Novel applications
1. The Gita as a guide to leadership, enterprise and management
2. Pursuit of truth in scientific research
3. Success in career and life
15. _Glossary_
16. _Bibliography_
17. _Name index_
18. _Subject index_
# Preface
I have been exposed to the Shrimad-Bhagavad-Gita (the Gita in short) since childhood. However, it was only after my retirement from salaried service in 2002 that I could give serious attention to it, and I started a systematic study of its many translations along with commentaries and interpretations. The sheer simplicity and at the same time complexity and profundity of the Gita fascinated me. The deep desire, dormant during all these decades, to know what all the tremendous literature on the Gita added up to, and how the Gita helps humankind, came to the surface as a compelling force. This book is as much a product of my struggle to understand the Gita in all its aspects as it is an outcome of an equally strong urge to share this knowledge with all those who may not have the time or patience to read the vast literature on the Gita on their own. The adage that one gains by sharing applies conspicuously to knowledge.
There are, however, thousands of books already on the Gita, including some 2,000 translations in English, not counting many in Indian languages. Why this book again? Is there anything special about it? There is, I plead. First of all, it has a historical perspective, exploring not only its origin, but also its long career in terms of translations, global spread, and influence particularly in India both before and after independence. The book starts with an introduction which explains what made the Gita a sacred book over two-and-a-half millennia, its place in the Mahabharata and the question of the date of its composition and also of authorship (see Chapter 1). The next four chapters (2–5) trace four main phases in the subsequent career of the Gita. The classical interpreters in the first phase, like Shankara, focused on the metaphysical and sadhana aspects of the Gita, mainly in the Vedic tradition. During the next phase, paradoxically under the colonial rule, the Gita was launched on a global journey and accepted as a text of universal interest to all mankind, not just to the Vedic tradition within India. I have explained why. In the later phase of the colonial rule, the Gita assumed a special relevance for the nationalist struggle and social reform in India and was recognised as a progressive and activist text. In the contemporary phase, the ethical content of the Gita and its relevance for guiding even day-to-day mundane life were recognised without detriment to its being a source of spiritual inspiration and teaching. Once again with redoubled strength, its universal relevance was emphasised in the contemporary phase. The story of how the Gita began to reach a wider audience through translations in Indian regional languages has also formed a part of this narrative. Interestingly, the Gita also has had its own career!
It is marked by two features: first, it has been continuously applied to address the issues of each phase; second, its audience is deepening in the sense of reaching down to even common people beyond scholars and also widening across countries and religions. It was exciting to narrate the story of this career.
The second distinctive feature of the book is that while most of the books on the Gita go verse by verse and chapter by chapter, I thought it useful to view the text as a whole. It is by so studying it that one can comprehend its philosophy in all its aspects – moral, metaphysical and the paths to spiritual striving or sadhana that it has shown. Before arriving at this understanding, I had to take due note of most of the main interpretations of the Gita. There is a long history of these interpretations, from Shankara in the eighth century to the present, which is reviewed in this book (see Chapters 2–5). The approach to presenting these interpretations here is not one of mechanically summarising them, but of bringing out their contributions to the understanding of the Gita. The reader can know about the main interpreters of the Gita, over thirty of them, along with their background and thoughts, within a single book. In the process, there is also an attempt to see how far these interpretations are helpful in solving the problems of the present.
Third, I have my own standpoint through which I have viewed the Gita in writing this book. It is not in terms of the traditional classification of Vedanta philosophy into Advaita, Vishishta-advaita, and Dvaita, though certainly I have taken due note of them and commented on them by arguing that they need not be taken as mutually irreconcilable, but as different aspects of a complex but comprehensive Ultimate Truth. All the three philosophies have significantly enriched Indian philosophy. My standpoint is in terms of the modern values we cherish – equality, human dignity, justice, respect for differences in faiths and the need to build a peaceful and prosperous world in which there is fair play, without poverty and deprivation. Is the Gita relevant here? This is the main standpoint from which I have tried to understand the Gita. The greater relevance of the Gita consists not so much in showing us a path to renounce this world, as in making our travel in this world itself from birth to death, sharira-yatra, as the Gita calls it, meaningful, fruitful and enjoyable. The Mahabharata calls it loka-yatra, or people's travel in the world (Amur 2013). Many of the ancient as well as modern interpreters valued the Gita for the guidance it provides in sailing through the conflict-ridden turbulent ocean of samsara (day-to-day life in this practical world), not by escaping from it but by facing it. Ethical teachings of the Gita, along with its guidance to shape our mind and attitudes helpfully in this task, are therefore particularly emphasised in this book. What is great about the Gita is that its God is not jealous of other gods; it does not preach intolerance to differences in faiths. On the other hand, it preaches that God is the same whatever be the name and form through which you worship Him, and the worship reaches Him. Its approach to differences in faiths is to reconcile them. It is, therefore, a universal and inclusive sacred text.
Gandhi was particularly fascinated by this feature of the Gita, as he was also by its emphasis on truth and non-violence, and ethics in general, which he treated as the main message of the Gita. That is what makes our travel in this world worthwhile and happy. While the Gita was a source of inspiration for freedom fighters as well as social reformers during nationalist struggle before independence in India (see Chapter 4), it also has astoundingly novel applications in the present times. Three such areas are identified in the book, where the teachings of the Gita can be very helpful: (a) business enterprise, leadership, and management; (b) theory and methodology of seeking truth in scientific research; and (c) making our personal lives and relationships happy and fruitful (Chapter 8).
The Gita, however, does not expect us to get bogged down into the mundane matters, but to have a higher purpose. Whether or not one believes in gaining moksha in the sense of liberation from the cycle of births and deaths, the Gita shows the path to freedom from bondage of the confines of narrowness and weaknesses like hatred and jealousy, and to realise the divinity within us all. The path shown by it for success in sailing through samsara serves also in sadhana or spiritual striving. The Gita has no doubt rich ethical content, but is not confined to it. The Gita has a philosophy of happiness, which all humankind is entitled to, and also guidance to give about attaining this happiness. This philosophy has been explained both in general and then specifically in its three major aspects: (a) ethics, (b) theology and metaphysics and (c) proposed paths of sadhana (in Chapter 6). To understand the Gita, we need to comprehend all these aspects. It is stressed here that selfless service for the good of all (sarva-bhuta-hita) or promoting the welfare of people (loka-sangraha) is an intrinsic part of sadhana preached by the Gita. Its karma-yoga has hardly any significance if it does not include such service.
Fourth, the book devotes an entire chapter (the seventh) to responding to the criticisms made against the Gita, including those by Dr B. R. Ambedkar, Kosambi, and Amartya Sen. Since most of the criticisms are due not just to differences in perspectives but mainly to improper understanding of the Gita, the chapter clarifies the issues with cool logic, trying to remove the misunderstandings. I do not mind rationalism and agnosticism, nor do I subscribe to any fanatical or intolerant interpretation of Hinduism; but I also think that to attack the Gita is hardly a brilliant and effective strategy to defeat fanaticism. Having been trained as a professional social scientist (economist) has helped me to be analytical, rational and objective, shunning fanaticism and blind belief. My admiration for the Gita is based mainly on its humanist and ethical appeal.
As Swami Bhoomananda Tirtha says, it would be doing injustice to the Gita if it is seen only as a sacred text meant for the recluse and the retired, instead of grasping its larger significance (Bhoomananda 2014, Vol. I: 15). My book presents the Gita in its wider perspective. The Gita should not be seen as a sacred book of Hindus only. I do not, however, subscribe to demands like declaring it as a 'national book' or prescribing it as a compulsory teaching in schools to the exclusion of sacred texts of other religions. Besides, the Gita is not the only sacred text even within Hinduism.
My book is addressed to a general contemporary reader who has interest in religion and Indian philosophy, and not only to scholars in this field, though they too would find it interesting. Hence my style of writing is simple even when dealing with complex and profound issues. Wherever technical terms are used, they are adequately explained in the text itself. A Glossary is also added to the book.
I have generally followed the Key to Transliteration given in the following text, but I have also used the simple well-known forms after using the Key once or twice. For example, after using the correct word, Shrimad-Bhagavad-Gita once or twice, the simple term 'Gita' is used without transliteration marks; similarly, after using 'Krishna', the word has been used subsequently without transliteration marks.
I had the benefit of consulting several scholar friends, whose suggestions have been very helpful. Among them are Professors Mallepuram G. Venkatesh (former vice chancellor, Karnatak Sanskrit University, who assured me that my response to the criticisms of the Gita is cautious and polite enough), Shrinivasa Varakhedi, P. R. Panchamukhi and K. S. Kannan. Since I am not a pundit in Sanskrit, I got my dedication verses checked by Professors Varakhedi, Panchamukhi and Mallepuram, who kindly helped me in correcting my mistakes. If mistakes remain, they are mine. Professor G. S. Amur took the trouble to go through an initial draft of the book and gave valuable advice. My brother, Dr Kishore Nadkarni, served as a sounding board, whose comments on initial chapters put me on guard. I am also grateful to Professor R. S. Deshpande, my daughter Saraswati, sons Anirudh and Makarand for encouragement and support. My daughter-in-law Amita helped by neatly arranging all the chapters in different files as required by the publisher. Subhashree Banerjee taught me how to type in Sanskrit on the computer with Google typing aid. From Vishnu Kedar, I learnt linking up footnotes with the main text. I cannot fail to thank Mr B. B. Chand and Dr Pradeep Hegde of ISEC Library for their kind help by getting books and other references on time. My being an honorary visiting professor at ISEC has been a source of strength and stimulus in all my works after retirement.
I am grateful to the three anonymous reviewers of the publisher for their valuable suggestions and appreciation of the book. I may not have followed all their suggestions, though I have a great respect for them. The responsibility for views expressed and for any errors in the book is, therefore, mine. I am greatly indebted to Dr Shashank Sinha, the publishing director of Routledge India, for all the encouragement and support, and the personal interest he took in getting this published soon. I am also grateful to his colleagues for processing the book in good time.
Though my wife, Ganga, is physically not with me since 2003, she has continued to be a constant source of strength. She would have loved to read this book. I miss her deeply. I also miss my sister-in-law, Girija (Ammu) Pandit, who was a keen student of the Gita and would have received this book with her usual enthusiasm and spread a word about it among her numerous friends and relatives. I greatly miss my senior friend and colleague, Professor V. M. Rao. There was hardly a book or an article with which I did not bother him for critical comments, and he always obliged. He wrote the Foreword for my previous book on ethics. This book also would have benefited from his sage advice.
# Key to Transliteration
(In the alphabetical order of Sanskrit)
### Vowels
a – o as in son | a – a as in master
---|---
i – i as in if | i – ee as in feel
u – u as in full | u – oo as in boot
ri – ri as in Krishna | au – ow as in now
### Consonants
kh – ckh as in blockhead | gh – gh as in log-hut
---|---
ch – ch as in chain | chh – chh as in catch-him
jh – dge as in hedgehog |
t – t as in ten | th – th as in anthill
d – d as in den | dh – dh as in godhood
n – n as in under |
t – t as in Gita | th – th as in thin
d – th as in then | dh – th as in this
n – as in not, singer, bench |
ph – ph as in loophole, or as f in fit | bh – bh as in abhor
y – y as in yard | v, w – as in avert, awake
sh – sh as in cherish, shankara | sh – sh as in show, shashtha (sixth)
s – s as in Sun |
h – h as in hot | l – second l as in Malayalam
Note: Illustrations of pronunciation are mostly from Harshananda (2008, Vol. I: x), but the key followed here is different, consisting simply of underlining, not using symbols which need special software. This key was successfully used in Nadkarni (2013).
# 1 Significance of the Gita, Its Date and Authorship
## As a sacred text
The Shrimad-Bhagavad-Gita, or the Gita in its well-known short form, is the most popular sacred text of the Hindus. The prefix, Shrimat(d), is normally used for some of the later sacred texts in Hinduism, signifying the utmost respect given to them, Shrimad-Ramayanam and Shrimad-Bhagavatam being a few other examples. What constitutes a sacred book? First and foremost, a sacred book is an authoritative source of guidance and inspiration both in leading day-to-day life consistent with ethical values and in pursuing peace of mind and spiritual realisation. In describing itself both as brahma-vidya and yoga-shastra (in the colophons), the Gita tells us what constitutes a sacred book. Brahma-vidya means the knowledge of the Ultimate Truth or reality; in imparting such knowledge, it presents a discussion of metaphysics and theology. But this knowledge is not a question of mere book learning. One's life should reflect the basic urge for spiritual realisation. That is, to acquire brahma-vidya, or spiritual knowledge, one should also behave in a certain way consistent with or aimed at this goal. That is where striving for it becomes relevant. Yoga-shastra is the science of spiritual striving (yoga) or sadhana. Yoga here does not just mean adopting certain postures of the body, but has the much wider connotation of leading a life of spiritual orientation and rigorous discipline – morally and mentally. A sacred book thus serves as a guide to both brahma-vidya and yoga-shastra.
Second, a sacred book resonates across space and time, not being specific or limited to a particular given context alone, but having a universal significance or relevance. Though the immediate context of the Gita is to remove Arjuna's despondency, the purpose of Krishna or the Gita is not just to give a pep talk to him. When Arjuna asks Krishna what is best (shreya) for him, Krishna's reply is about not only what is best for Arjuna particularly but also what is best generally or universally for all (Dayananda 1989: 20). Arjuna is a symbol of a human being in struggle. Krishna's Gita is meant for all. This is very clear from verses 68 to 70 in the last chapter of the Gita, viz. Chapter 18, where Krishna makes his intention for its wider dissemination quite evident, though of course among those who are faithful and devoted. Krishna says that one who teaches or expounds on the Gita with devotion among his devotees is dearest to him and will ultimately become one with him. Verse 71 of the same chapter promises that even if one merely listens to this teaching, he or she will have auspicious destiny. The text is clearly not limited to Arjuna, who is only a pretext for the profound teaching.
Third, a sacred book is also one which is faithfully accepted by a large number of people at least of a group or creed. Though many leaders of Hinduism including Gandhi and Sri Aurobindo emphasise that sacred books are not substitutes for reasoning, reasoning alone may not suffice when humans try to probe what is beyond merely mundane and go in the pursuit of spiritual solace and contentment. Even in helping to lead a life of virtue, a sacred book should harmonise reason and faith; reasoning in it should be convincing while faith inspires and prods. Both Gandhi and Aurobindo emphasise further that a scripture should enable one to elevate himself or herself above ordinary existence and experience the truth that the scripture expounds. Recitation and verbal understanding of the text, though necessary, are not enough.
Hinduism does not have one single exclusive sacred text, and the greatest, most ancient and original sacred texts of Hinduism are the Vedas. They are followed by the Upanishads, most of which are appended at the end of the Vedas themselves. The scriptures of Hinduism are more like a common pool multilingual library, where you pick and choose a certain book which you find particularly useful and inspiring, without insisting that all other books be destroyed or condemned, nor requiring that all the books must be read equally thoroughly to qualify for access to the library. What is more, it is not a sacrilege to add to this library. It does not diminish the significance of the books accessed earlier in any way. First came the Vedas and Upanishads, and then the classic epics – the Ramayana and the Mahabharata, the Vedantasutras (also known as the Brahmasutras), the Yogasutras, Narada-Bhaktisutras, the Shastras and Smritis, the Puranas including the eminent Bhagavata Purana, and the Kural in Tamil by Thiruvalluvar. All these emerged in the ancient, or the classical, period well before the end of the first millennium CE. This was not the end. Except for the Kural, all the literature mentioned earlier is in Sanskrit. There was no bar in having more scriptures, now in the vernacular. In the medieval era were added the Jnaneshwari (an explanation of the Gita in Marathi in verses by Jnaneshwar), the Vachanas (sayings) of Shivasharanas in Kannada, Ramacharitamanasa of Goswami Tulasidas in Hindi, and the lyrical poetry of the numerous bhakti sants in regional languages. One's lifetime is not enough to go through and understand all this literature. I may say in a lighter vein that Hindus invented rebirth so that they can take up reading in the next life what they craved to finish but could not in the present life!
Not many Hindus, however, opt for a rebirth, if they can help it. Devout Hindus prefer to have self-realisation or enlightenment in this very birth! Spiritual enlightenment is open to everyone in Hinduism, irrespective of gender, class and caste. Hindus needed a scripture which would guide them to engage with the life they face in an ethically correct way, taking the pleasures and frustrations in their stride with equanimity and, at the same time, realise the Divine. Of course, they had a wide choice of the texts. But the four Vedas and over a dozen Upanishads constituted a very vast literature, offering a formidable challenge for ordinary humans to learn them in the first instance, let alone following them after attaining some understanding of them. The Bhagavad-Gita provided a solution. As Yogananda and other celebrities have declared, the Gita provided the essence of all the 'ponderous' four Vedas, the 108 Upanishads, and the six systems of Hindu philosophy, constituting – 'a universal message for the solace and emancipation of all mankind' (Yogananda 2002, Vol. 1: 169), in lucid and relatively easy-to-understand Sanskrit, and in a compact form of only 700 verses in eighteen chapters. One could easily read a single chapter a day or even a few verses a day and feel blessed. Quite a few Hindus have learnt the whole of the Gita by heart. The Gita-Dhyanam (Meditating on the Gita), which is ritually recited before the Gita, picturesquely states that all the Upanishads are like cows, and Shri Krishna – the son of a cowherd – milked them for the benefit of people having a pure mind, with Arjuna being the calf, and the nectar of the Gita is the milk (verse 4). This verse explains why the Gita came to be accepted as the most popular sacred book of Hindus. It acknowledges that the Gita is intended to be the essence of the Upanishads conveyed for the benefit of people, Arjuna providing an excuse, and the Mahabharata war a dramatic context to take off.
What is more, the milkman is no less than God Himself, though in human incarnation as Krishna! The God of the Gita is not just the abstract Brahman, but more a personal God who loves and likes to be loved. This is in great contrast with the abstract and impersonal God of the Upanishads. To make it endearing, the Gita is in the form of a dialogue between Krishna and Arjuna, with the latter asking searching questions again and again, and the former responding patiently in detail with deep affection. Krishna tells Arjuna emphatically: 'Definitely dear you are to me' (Ishtoasi mey dridham, XVIII.64, i.e. Chapter 18 and verse 64); and again, 'I promise, beloved you are to me!' (Pratijaney priyoasi mey, XVIII.65). It is as if Lord Krishna invites the devout listeners and readers of the Gita to place themselves in the position of Arjuna and enjoy the Lord's unbounded love and protection. Arjuna is only an exemplar of a devotee, who is also a friend treated with love. The Gita does not see a devotee as a servant or slave, nor does it need a devotee to consider himself or herself that way. A devotee can question God and engage in a dialogue with him as an intimate friend. Krishna assures that a devotee or a seeker can obtain peace with the knowledge that He, the Divine, is a friend (suhrida) of all beings (V.29). He affirms in verse after verse (XII.14–20) that he loves his devotees. The Gita emphasises not only a devotee's love of God, but also God's abundant love for devotees. When a devotee loves God heartily with abandon, that is enough to earn Divine love. Krishna also makes it clear that whatever little a devotee offers to him as a token of genuine love, he accepts it without reservation – be it a leaf, a flower, a fruit or even a little water! (IX.23–24). It made religion easy and accessible to all. No other sacred text before it had achieved this revolution in religion.
Bal Gangadhar Tilak points out, in his preface to his famous work, Gita-Rahasya, a traditional compliment to the Gita, which says: 'It is quite enough if one thoroughly studies the Gita. What is the use of dabbling in other Shastras?' (1936: li). It is no surprise, therefore, that the popularity of the Gita as a sacred text grew spontaneously without anyone imposing it. For quite a few people, the Gita as a book became an icon by itself, to be revered and worshipped, without having to understand its contents. However, many of these people end up trying to study and absorb the contents as well and benefit by its guidance.
The Gita's popularity also owes to its dramatic context of conflict. The dialogue took place right on the battlefield, with Lord Krishna urging a confused Arjuna to fight. The Gita is certainly not a text on the ethics of war. It does not discuss whether wars are justified or when they are. But if one is confronted with a war which is inevitable, it teaches how to face it with equipoise. Even this is not its only purpose. As Gandhi explained, the war which Arjuna faces is only a metaphor for situations of conflict between forces of good and evil inherent in the human condition. The Gita teaches that we have to face the conflicts and cannot run away from them in a cowardly manner (Gandhi 1980: 12–14). Not only is the war in the Gita an allegory, but even the fact of Lord Krishna being the charioteer (sarathi) of Arjuna is also of great allegorical significance. Before the war, Duryodhana and Arjuna were given a choice by Krishna between the whole of Krishna's army on the one side and the unarmed non-fighting Krishna on the other. Arjuna chose Krishna, while Duryodhana chose his army. What is more, Arjuna wanted Krishna as his charioteer. In the struggle for life, one need not be alone. One has to invoke Krishna or the Divine Spirit as sarathi to be with us to guide, inspire and empower. The significance of the Gita's inspiring and empowering teaching becomes apparent when we note that many people commit suicide when they give up facing their problems. The Gita can give courage and cure demoralisation. One needs to face the struggle of life with wisdom and equanimity. The Gita teaches that it is also the duty of each individual to help all persons and the society in general to cope with this struggle. This is conveyed in the Gita's teaching on loka-hita and loka-sangraha, which will be discussed subsequently in more detail.
When a flood of religious literature followed the Vedas and the Upanishads, the Shastras which appeared subsequently made a distinction between two types of literature – the Shruti and Smriti. The former included the foundational texts like the Vedas and Upanishads, consisting of prayers, ethical axioms and principles, and philosophical probes. The latter are secondary texts, covering the great epics, Shastras including Smritis, and the Puranas. They derive their authority from the former by explaining and illustrating what is contained in the former and indicating guidelines for conducting day-to-day lives. Sometimes there was a risk that the Smriti texts could deviate from the former, developing their own theories and rules which are not in conformity with the Shruti texts. In such cases, the Shastras themselves ruled that what is stated in the Shruti prevails over whatever is said in the Smritis. It is clear that only the Shruti texts are regarded as authoritative and sacred, and the Smriti texts as secondary or supplementary. Where would the Gita fit in?
The Gita is a part of the Mahabharata, appearing in the Bhishma Parvan (as Chapters 23–40 according to the 'Critical Edition' of the Bhandarkar Oriental Research Institute (BORI), Pune, and Chapters 25–42 according to the more popular Gita Press Gorakhpur edition). The Bhishma Parvan is the sixth of the eighteen Parvans of the great epic and itself has 117 chapters (122 chapters according to the Gorakhpur edition), which include the Gita's eighteen chapters. But the Mahabharata is regarded as a Smriti text, secondary and not foundational. Therefore, the tradition does not accord the Gita the status of Shruti on par with the Vedas and Upanishads (Radhakrishnan 1923, Vol. 1: 519). How then could the Gita be considered a sacred text, if being regarded as a foundational authority is a hallmark of one? First of all, the Mahabharata is not just another Smriti text, but is even regarded by some as the fifth Veda (Panchama Veda). Moreover, the importance of the Gita transcends its being a part of the Mahabharata, and it can be regarded as a stand-alone text too. The Gita is taken as not only conveying the essence of the Vedas and Upanishads, but also giving its followers much more even within its small size. It is both a brahma-vidya (the science of Brahman, the ultimate reality) and a yoga-shastra (the science and practical art of realising that reality), as stated in the colophons. For this reason, the Gita itself is regarded as an Upanishad in the colophons, which appear at the end of every chapter of the Gita. The colophons mention the name of each chapter, and the title of each chapter ends with the suffix yoga, indicating that each is a discipline in itself. Incidentally, the colophons are not given in some modern printed texts of the Gita, but they were invariably given in popular texts meant for daily recitation. The colophon at the end of the first chapter of the Gita, for instance, reads as follows:
Om Tad-sat iti Shrimad-Bhagavad-Gitasu Upanishadsu Brahmavidyayam Yogashastre Shri Krishna-Arjuna-samvade Arjunavishada-yogo nama Prathamodhyayah.
It literally means: Om tad-sat! (Invocation of the Brahman, the Truth). Here ends the first chapter named 'the Yoga of Arjuna's Agony', in the dialogue between Shri Krishna and Arjuna in the Divine Songs (which are) Upanishads, brahma-vidya as well as yoga-shastra.
It may look puzzling that the colophon uses the word samvade (dialogue) in the singular, but the words Gitasu (songs, referring to the Gita) and Upanishadsu (Upanishads) in the plural. Apparently, this may be because each chapter could be considered a song, a discipline and an Upanishad in its own right, with the whole constituting a unity in terms of the dialogue referred to. To me, however, the use of the plural has tremendous additional significance, as borne out by the history of the Gita given later in this and succeeding chapters. The Gita is amenable to a variety of interpretations, and in this sense, we have Shankaracharya's Gita, Ramanujacharya's Gita, Madhvacharya's Gita, Jnaneshwari, Edwin Arnold's Gita, Gandhi's Gita, Annie Besant's Gita, B. G. Tilak's Gita, Aurobindo's Gita and so on. And yet, we have one dialogue, one integrated Gita. It is one as well as many!
There is no doubt that the Gita has borrowed from a few Upanishads, sometimes whole verses or parts of them. Working with detachment is emphasised both in the Gita and in the Upanishads (see, e.g. the Isha Up.). Yet, the Gita is neither a mere compilation of quotations from the Upanishads, nor a mere summary of them, but has a contribution of its own to make. For instance, there is hardly any discussion of bhakti in the Upanishads, but it finds a significant place in the Gita. The concept of the descent (avatar) of God to solve the problems of the world appears first in the Gita; it was not there in the Upanishads. The concept of God in the Gita is personal, unlike in the Upanishads, where the Brahman is the impersonal force permeating the whole universe. Nevertheless, the Gita takes a holistic and fresh look at the Vedas and Upanishads and presents a synthesis, along with its own contribution, in a succinct yet readable style. The Brahmasutras also contain the philosophical essence of the Upanishads, but their format of dense axioms was beyond the reach of even commonly learned persons. The Gita, on the contrary, was formatted to be a popular sacred text.
The very first verse of Gita-Dhyanam (Meditation on the Gita) makes it clear that the Gita was taught by God Himself (Narayana), addressed to Arjuna, and composed as a book by the ancient sage Vyasa as a part of the Mahabharata (Parthaya pratibodhitam Bhagavata Narayanena swayam/Vyasena grathhitam purana-munina madhye Mahabharatam//). The divine origin of the Gita, in which devout Hindus believe, is thus considered to be a major reason, though not the only one, for regarding it as a sacred scripture. But the Mahabharata has many philosophical dialogues attributed to various persons, including dialogues between Lord Krishna and his devotees (such as the Anu-Gita, the Uttara-Gita and the Uddhava-Gita, the first two being dialogues between Krishna and Arjuna like the Bhagavad-Gita). However, none has attained the popularity and status of the Bhagavad-Gita. So it is not just the divine origin attributed to the Gita which made it a popular sacred text, but also the nature of its format and the outstanding significance of its contents.
Gandhi played down the criterion of divine origin for a text to be regarded as sacred. Even if texts regarded as sacred had divine origin, they have come down to us through human intervention, and therefore no sacred text of any religion is infallible, he argued. Instead of divine origin, he would apply three tests: first, the text should stand up to reasoning; second, its values should be ethical, consistent with the basic principles of truth and ahimsa; and third, its teachings should be practical to follow. He observed, 'Every formula of every religion has in this age of reason, to submit to the acid test of reason and universal justice if it is to ask for universal assent. Error can claim no exemption even if it can be supported by the scriptures of the world' (Young India 26 February 1929: 74). He did not exaggerate the importance of reason and recognised the due role of faith in religion, but any belief, practice or religious text which failed the three tests, particularly the ethical one, would not be acceptable to him. He declared, 'Nothing that is inconsistent with the universally accepted principles of morality has for me the authority of the Shastras' (CWMG Vol. 52: 9; Jordens 1991: 92). For the same reason, he would not accept as Shastra or sacred book any text which endorsed untouchability and oppression. Gandhi was equally emphatic on practicality. He said: 'It is a misuse of our intellectual energy and a waste of time to go on reading what we cannot put into practice' (CWMG Vol. 32: 228; Jordens 1991: 91–92). It is significant that Gandhi not only accepted the Gita but regarded it as a basic spiritual and moral guide in his day-to-day life. Obviously, the Gita passed all his stringent tests.
An important requirement for a religious text to be regarded as sacred is, more than its simplicity, its being profound. As Dalal remarks, 'a sacred text has a receding horizon' (2009: 26). The more one learns, the more one feels like knowing and goes deeper. A sacred text requires both contemplation and practice. It needs to be absorbed in a way that our life, even day-to-day living, becomes more meaningful and fulfilling under its guidance. It is not enough to merely recite a text. A sacred text helps not only in choosing life's goals, but also in selecting the means of achieving them. A text cannot inspire contemplation and practice without being profound. The Gita is profound in this sense and came to be regarded as a foundational text.
Even more than the Vedas, the foundation of philosophical Hinduism is traditionally regarded as consisting of the Upanishads, the Brahmasutras and the Bhagavad-Gita. All these are profound. Together, they are known as the Prasthanatrayi – the three foundational works. Different schools of philosophical thought in Hinduism or Vedanta sprang from these works, and no offshoot of Hinduism could ignore them even if it did not accept the Vedas. But in sheer popularity, the Gita prevailed over the other two. The Upanishads are too many for a common Hindu to study, and their philosophical and moral thought is spread over all of them. The Brahmasutras are at the other extreme, in compact prose as Sutras, but too terse and dense to be popular. The Gita provided a golden mean. It is in the form of a long poem, lucid and lyrical, amenable to chanting. The whole Gita could be learnt by heart if one wished, and the meaning of its verses could be easily understood, in translation and with commentaries if necessary. It could be recited daily, in parts or as a whole depending on one's time and aptitude, which is believed to confer great spiritual merit and solace. Regular recitation of a text is an important sign of its acceptance as sacred. The benefits obtained by recitation of the Gita are eulogised in Shri Gita-Mahatmyam ('Greatness of the Gita'), which normally appears at the end of printed texts of the Gita and is said to be from the Varaha-Purana. It appears that recitation of the Gita as a sacred text had already come into vogue by the time this Purana was composed. The Varaha-Purana only formalised what was in practice and made people's faith in the Gita firmer. The Gita has been studied and recited regularly by millions for at least two millennia. And that is how the three great acharyas (Shankara, Ramanuja and Madhva) and others took it up for editing and commentary.
There were also more profound reasons for accepting the Gita as a sacred text. The Gita from the beginning was never exclusive. It opened the door to the masses of people for spiritual solace and God realisation. It offered a simple religion which anyone could follow, rich or poor, man or woman, young or old, irrespective of caste. It transcended the Vedas and even the Upanishads without undermining them. While the Vedas offered a religion of rituals and sacrifices, suitable more to the rich, the Gita offered convenient choices, as per the aptitude of the person concerned, between contemplation on the Divine, selfless work or service of others, and devotion to the Divine – or, if you like, a suitable combination of all these tailored to the spiritual seeker's personal inclination. What is more, the Gita was practical enough to realise that not all can be full-time spiritual seekers, and most people would like to combine worldly pursuits with spiritual ambition. So the Gita offered a religion for people who are basically engaged in the world; it was not addressed to ascetics who have renounced the world. The Gita gave a personal God to people at large, to whom they could pray and offer their love and devotion, and by whom they could also be loved and protected. The religion it offered was meant for worldly people. It taught them how to face the tensions of ordinary life with equanimity and efficiency in work. It offered guidance in leading one's worldly activities cheerfully, but with a sense of purpose and moral backing. While the religion offered by the Upanishads also transcended the Vedas and did not require rituals, it tended to emphasise contemplation on the nature of the self and the Divine most of the time. Their contemplative methods did not much appeal to the mass of people who were engaged in worldly affairs but nevertheless had some spiritual aspirations.
The Gita, on the other hand, avoided both the material elitism of the Vedas and the spiritual elitism of the Upanishads, and offered a more realistic and relatively easy-to-follow religion within the reach of the common people. Its religion is simple, so simple in fact that there is no idol worship or temple worship in the Gita. Yet its religion is sophisticated enough to insist on honesty, truthfulness and selflessness and shows the ways of spiritual evolution to the level of perfection.
Moreover, the Gita is non-sectarian in approach and preaches a universal religion. In the Gita-Dhyanam, Krishna of the Gita is saluted as Jagadguru – world teacher – and not as a teacher of Vaishnavites, or for that matter, Hindus alone. And that is how it came to be accepted by the Vaishnavas, Smartas, Shaivites and Shaktas equally, and it would be no surprise if it appeals also to non-Hindus who are open minded. The universality and tolerant understanding of different religious ways of people are reflected in what Krishna assures: 'In whatever way people try to reach me, I accept and reward them; O Partha (Arjuna), people can follow the path to me from all sides' (IV.11). This assurance is repeated again: 'Whatever form devotees choose to worship with dedication and faith (shraddha), I make that shraddha steady' (VII.21). The greatness of these statements consists in accepting different conceptions of the Divine and paths to it. The Gita says nowhere that Krishna alone is the true God and other gods are false. It simply says that whichever god or whatever form of God you worship, it goes to the One and same God (IX.23). Though the roots of this belief lie in the Vedas and Upanishads themselves, the credit for popularising it and making it a basic tenet of Hinduism goes to the Gita. The bhakti saints of the medieval era only further emphasised and popularised it.
Consistent with this tolerance and respect for differences, the Gita does not intend to impose its views as God-given. Though considered a sacred text, it is aware that conflicting views in such texts can be bewildering (shruti-vipratipanna, II.53). There could be flaws and inconsistencies even in texts claiming divine origin, since, as Gandhi said, these texts have been handed down to us through human media. We cannot give up human reasoning, which after all is itself God-given. After teaching his message, Krishna tells Arjuna at the end to reflect critically on all that was told to him and then do only what he is willing to accept (Vimarshyetad asheshena yathhetchhasi tatha kuru, XVIII.63). The Gita also provides guidance in thinking correctly and seeking true knowledge in the same chapter. As Gandhi clarified, human reasoning can be depended upon only if it is unselfish and unprejudiced. The Gita's God in any case is not an imposing tyrant, but an understanding, friendly, compassionate and liberal teacher, who gives enough freedom to human beings in choosing their path correctly and wisely in the light of His guidance. Ultimately, a spiritual seeker transcends the injunctions of the texts and attains realisation through his or her own efforts (jijnasurapi yogasya shabda-brahmaativartate, VI.44). The Gita says that for a realised person who knows Brahman, the Ultimate, sacred texts are of the same use as a small water-body when there is flooding everywhere (II.46). The Gita thus teaches a healthy, undogmatic attitude to sacred texts, though it is itself considered a sacred text.
Thus, the Gita, though initially regarded as a part of the Smriti literature along with the Mahabharata, soon transcended this status and became a sacred book in its own right, surpassing the Vedas. This became particularly conspicuous after Shankaracharya took it up for commentary as one of the Prasthanatrayi, signalling that what mattered more for religion was its philosophy and ethics rather than its rituals. The astounding rise in its appeal during the modern period, since the second half of the eighteenth century, owes to this fact more than to anything else. The Gita still has the potential to finish its unfinished task of releasing Hinduism from the compulsive clutches of ritualism and blind belief, and of emphasising its ethical and philosophical basis. It can be useful in reforming popular Hinduism and taking it to a higher level.
## As a part of the Mahabharata
The Gita, as a dialogue between Krishna and Arjuna, has been a part of the great epic, the Mahabharata, almost ever since the Mahabharata was created. The occasion for the Gita is the great battle between the Kauravas and the Pandavas, both contenders to the kingdom which was earlier ruled by Pandu, the father of the Pandavas, and later, after the demise of Pandu, by Dhritarashtra, the blind brother of Pandu and the father of the Kauravas. The Pandavas did not, in any case, claim the whole of the kingdom, but only a reasonable part of it which they had ruled before, but of which they were cheated and then exiled after a game of dice with high stakes. The Pandavas returned after thirteen years of exile and laid their claim. When their claim was repeatedly rejected by Duryodhana, the eldest of the Kaurava brothers, Krishna – keen on avoiding a war between the cousins – finally went as an emissary of the Pandavas to plead with the Kauravas for at least five villages for the five brothers, as they did not want to live in the kingdom whose effective ruler was Duryodhana. Krishna was a close friend of the Pandavas, particularly of Arjuna, and had chosen to be on their side. Krishna was humiliated in the Kaurava court, and an attempt was even made to arrest him, though he had come only as an emissary. He could not, however, be arrested, but the war between the cousins became inevitable, as it was now a matter of honour. There was a long history of humiliations, cheating and sufferings imposed on the Pandavas earlier, including an attempt to disrobe Draupadi, the common wife of the five Pandavas, in the open court of the Kauravas.
The war, though basically between the cousins, involved most of the kings and armies of northern India joining one side or the other. The Kaurava army was much bigger, eleven akshohinis in size, as against the Pandava army of seven akshohinis. The armies came to the plains of Kurukshetra, in present-day Haryana, to fight it out. On the first day of the war, when the armies on both sides stood face to face ready for battle on the battlefield of Kurukshetra, Arjuna was suddenly in no mood for the war. He was seized by agony, becoming acutely conscious of a moral dilemma and feeling compassion for his cousins, and particularly for his teacher, elders and other relatives whom he had venerated all his life. His grandfather Bhishma, teacher Drona and numerous other relatives were now on the side of the Kauravas. Though sympathetic to the cause of the Pandavas, they chose to be on the side of the Kauravas because they were employed by them. The first chapter of the Gita and a small initial part of the second are devoted to describing the moral dilemma and agony of Arjuna, mostly in his own words. For instance, Arjuna tells Krishna, his charioteer and mentor: 'I do not see any good in killing my own people; I want neither victory nor the pleasures of a kingdom' (I.31). 'The Kauravas may be overpowered by greed, but I am not, and I can see the evil consequences of a destructive war' (I.38–45). And he lays down his bow and sits down in the chariot, full of tears.
Arjuna was no common warrior; he was well known for his prowess and had earlier defeated the Kaurava army on behalf of king Virata while in exile. His agony was not born of cowardice; he was confident of victory. After several attempts at reconciliation and peace, the Pandavas, including Arjuna, had finally decided on war, which the Kauravas were itching for. Krishna did not lose his cool at this unexpected development of Arjuna's loss of will. He smiled at him and explained that his behaviour would be treated as cowardice by the enemies, and that he could not just run away at this juncture. As a soldier (Kshatriya) on a battlefield, it was his duty to fight; not only his own honour but also that of all those fighting on the Pandavas' side was involved. Dishonour to a Kshatriya is worse than death. Gandhi adds as a comment here that Krishna was right, because if Arjuna had given up the fight, the whole Pandava army would have been demoralised, and the Kaurava army would have pursued and slaughtered them all. The violence which Arjuna wanted to avoid could not have been avoided (Gandhi 1980: 20). Moreover, Arjuna was merely sentimental, and his agony arose out of his attachment to his teacher and relatives whom he did not want to kill, as Gandhi pointed out. Arjuna's compassion and nonviolence were not general or universal, covering all, but focused on what he called his own people (swajana) (Gandhi 1980: 13). Even Gandhi, a votary of nonviolence, did not see any merit in Arjuna's opting for nonviolence at that juncture.
Arjuna was not easily convinced by Krishna's admonition and call to abandon what could apparently be treated as unmanliness. Krishna, therefore, launches a long philosophical and moral discourse to ultimately convince him that the sin of killing would not attach to him by following his teaching. And that is how the Bhagavad-Gita originated. We will look into the ethics and philosophy of the Gita in subsequent chapters, but presently, we face a question often raised by a few scholars on this episode.
The question raised is whether the Gita is an original part of the Mahabharata or an interpolation. For, how could Krishna launch a long discourse, which would have taken several hours if not days, right in the middle of the battlefield when the fighting was about to begin? Moreover, not all the philosophical issues raised in the discourse were directly necessary for a morale-boosting pep talk to Arjuna. For instance, how could the teaching on meditation and bhakti as ways of God realisation have been relevant in that urgent context of war? The inference is that the Gita was simply added to the Mahabharata by its talented poet-author, who was eager to immortalise his analysis and interpretation of the Upanishads and to give a philosophical basis for the sanatana dharma (Hinduism) which common people could understand more easily than the Vedas and Upanishads. It is argued that it was not very unusual for poets in ancient India to suppress their own identity and attribute authorship to Vyasa, and the Gita may have been an instance of this. The poet concerned may have imaginatively used the dramatic context of the Mahabharata war to launch his own poem. Supporting this hypothesis of the Gita as an interpolation, Meghnad Desai says that even if the Gita were taken out of the Mahabharata, and the preceding and succeeding chapters were joined, the narrative would still flow continuously and coherently, without a break (Desai 2014: 53).
Kashi Nath Upadhyaya takes up this hypothesis for a detailed critical examination and concludes that the Gita is indeed 'a genuine constituent part of the Mahabharata', and it had been so from the very beginning (Upadhyaya 1971: 4–9). He argues that the Mahabharata is not a historical treatise presenting only an accurate and exact account of events that took place. The author was a talented poet, versed in philosophy, and made use of all occasions to expound his views and analysis of philosophical and ethical issues. There are several other instances of such long discourses in the epic poem, though the Gita is among the more detailed. There are several references to the Gita in other parts of the epic. Upadhyaya refers to the views of eminent scholars like K. T. Telang and B. G. Tilak, who compared the words, combinations of words (sandhis) and grammar used in the Gita with those in the Mahabharata and concluded that they were similar. The type of grammar used in both is of pre-Panini period, involving usages which could be considered as mistakes by Panini (Upadhyaya 1971: 7). The revelation of the cosmic form of Krishna in the eleventh chapter of the Gita is not unique to it and occurs elsewhere too at least on four more occasions in the epic (Upadhyaya 1971: 9).
Swami Vivekananda also took up this issue and was convinced that the Gita was an integral and inseparable part of the Mahabharata. He observed at the Paris Congress of the History of Religions in 1900: 'The style of language of the Gita is the same as that of the Mahabharata. Most of the adjectives used in the Gita to explain matters spiritual are used in the Vana and other Parvans of the Mahabharata.... Such coincidence is impossible without the most general and free use of those words at one and the same time. Again, the line of thought in the Gita is the same as in the Mahabharata' (CWSV 1998, Vol. IV: 428). Sri Aurobindo, who had made a thorough study of the Mahabharata and even tried to separate Vyasa's original from the later accretions, treated the Gita as a central part of the epic, 'the epitome of Vyasa's main emphasis upon the active religious life', 'the undoubted work of the poet' (Minor 1991-a: 63).
There is great consistency between the philosophical and ethical stands taken in the Gita and in the Mahabharata. The Mahabharata is of great interest to us even today because of the ethical dilemmas raised through various characters and events, and the Gita begins with one such ethical dilemma. Thus, it is entirely coherent with the overall nature of the larger epic and fits very well within it as an integral part. If the Kurukshetra War constitutes the core of the Mahabharata in terms of events, with the Gita dialogue placed at its very beginning, the Gita can be regarded as the soul of the epic in terms of dharma, or ethics and philosophy. Desai's remark that even if the Gita were taken out, the flow of the epic would not be affected therefore appears irrelevant and trivial, when we take into account the significance of the Gita in the Mahabharata, in terms of both events and philosophy.
Moreover, there have been at least two main recensions of the Mahabharata, the Southern and the Northern, several editions and versions in each recension, and of course hundreds of palm-leaf manuscripts in different parts of the country. The Bhagavad-Gita is included in the bulk of them, except perhaps where the manuscripts themselves were not found in their entirety. This indicates that the Gita has long been accepted as a part of the Mahabharata in popular tradition. Four considerations are relevant in finally settling the question of whether the Gita in its present version had been a part of the Mahabharata since the beginning. First, the Gita has been composed in such a way that it appears as an integral part of the epic. If you take away the dramatic context of the Mahabharata war, what is left as the motivation for the Gita? It would rob the Gita of its main teaching to humanity: to face conflicts and tensions in life with equanimity and courage, and not run away from them. Second, the Gita has appeared as a part of the epic in almost all its versions for nearly two millennia. Third, its status as a part of the epic has also been embedded firmly in the minds of people since time immemorial, for good reasons, whatever a few scholars may say to the contrary. Finally, irrespective of whether it is a part of the Mahabharata or not, the Gita admittedly has an authority and importance of its own, and has been recited, studied and venerated as an independent text, though without erasing the context of the Mahabharata war.
## Date and authorship
As is well known, the Gita is in the form of a dialogue believed to have taken place between Lord Krishna and Arjuna on the battlefield of Kurukshetra when the battle was about to start. In fact, the whole Gita, including the dialogue between Krishna and Arjuna, is presented as a narration by Sanjaya to king Dhritarashtra, the Kauravas' father. Blessed by Sage Vyasa, Sanjaya is enabled to observe everything going on in the battle even while sitting in the palace with the king and to narrate it to him. Based on astrological data from the Mahabharata, traditional sources have estimated that the great battle took place in the year 3139 BCE (Before Common/Christian Era), and Gita Jayanti is observed on Shukla Ekadashi (the eleventh day of the bright half) of the month of Margashirsha (according to the Hindu lunar calendar, usually falling in December) as the day when the dialogue took place. Most historians and modern scholars of the Gita, Hindus included, would not, however, accept this year as historically correct, at least not as the date of the composition of the Gita. We will see later what is a more acceptable date according to modern scholars, though a consensus on this is not possible. Moreover, there could well be a gap between the date when the dialogue took place and the date when the present version of the Gita was finally settled and established. Even as per tradition, Lord Krishna did not write it down himself.
Though Lord Krishna is acknowledged as the original source and author of the Gita, the credit for compiling it in the form of a text (grantha) and including it in the Mahabharata is given by tradition to sage Vyasa. He was the biological grandfather of both the Pandavas and the Kauravas, and is believed to have lived through and observed all the events of the Mahabharata. The authorship of the Mahabharata as a whole has been attributed by tradition to Vyasa, whose full name was Krishna Dvaipayana – Krishna indicating that he was dark in complexion, and Dvaipayana because he was born on an island (dvipa) in the river Yamuna (Jamuna). He is also known as Veda-Vyasa. His mother was a fisherwoman named Matsyagandha (the woman with a fishy smell) and his father a Brahmin sage, Parashara. Vyasa could have been a title given to a learned man expert in putting texts together and editing them, as Karve has observed (1991: 7). Vyasa has been credited with having put together in an orderly form all the Vedas, composed nearly a millennium earlier, and also with having authored – apart from the Mahabharata – many Puranas composed centuries later. Apparently, it could not have been the same Vyasa who did it all. According to the tradition, however, Vyasa is a chiranjeevi, an immortal.
Karve observes further that there were two types of literature in ancient India. The first and more ancient type consists of the Vedas, which were in the form of mantras; they were handed down from generation to generation by well-trained priests, with scrupulous attention given to safeguarding the accuracy of every syllable and word and its pronunciation in every generation. Similar care was taken with the Upanishads too. The Vedas and the Upanishads were considered sacred, and no liberty could be taken with their texts. The second type consisted of versified narratives sung and passed on from generation to generation by talented bards, who were often themselves composers too (1991: 2–3). The Mahabharata could have been put together by a learned person, titled Vyasa, and handed down together with additions made by the bards. The bards were known as Sutas and were much sought after by kings and common people alike, eager to listen to their stories in the form of songs. The so-called interpolations thus became natural in such Suta literature, to which the two major epics – the Ramayana and the Mahabharata – and the Puranas belonged.
According to many scholars, the first version of the Mahabharata was much smaller, consisting of only 8,400 verses, and was called Jaya (victory). It was confined mainly to narrating the victory of the Pandavas over the Kauravas. In the course of time, it grew to a larger version consisting of some 24,000 verses, the Bharata, so called as it narrated the history of events in the dynasty of the ancient emperor Bharata, who is said to have ruled over all of India (and after whom India is named Bharat) and whose descendants were the Pandavas and the Kauravas. In the third stage of its expansion, the epic grew to its present size of 100,000 verses and became the Mahabharata. It is difficult to tell with certainty when the Jaya became the Bharata, and when the Bharata became the Mahabharata. The Mahabharata war, as a historical event, is believed by historians to have taken place around 1000 BCE (Karve 1991: 1), and the narrative in its first stage may have begun its course soon after. R. S. Sharma, an authority on ancient India, supports this hypothesis when he says that the composition of the epic (in its first stage) started roughly from the tenth century BCE (2011: 2–3). Based on a review of different Sanskritists' views by Pusalker, one may say that the second stage of the epic was reached around 450–400 BCE (1955: xxvi–xxxi). Pusalker observes that the Mahabharata was almost finalised in its present form by the year 200 CE. He adds that no single definite date can be assigned to this epic and that it existed largely in its present form by the second century BCE (1955: xxvi–xxxii). It was in the third stage that most of the side stories and the long discourses intended to explore the intricacies of dharma and adharma and ethical dilemmas were included. The Bhagavad-Gita in its present form was probably ready before the end of the second stage of the Mahabharata, and it inspired the inclusion of other side stories and long discourses in the great epic in its third stage.
The evolution of the Mahabharata took place in the spirit of an evolving encyclopaedia, which undergoes periodical revisions and additions, often with new authors entering the picture. In fact, the epic is of the nature of an encyclopaedia. In the first volume itself, it claims: Yadihasti tadanyatra yannehasti na tat kvachit ('What is here is found elsewhere. What is not here is nowhere.') (Mahabharata Critical Edition 1.56.33; Tr. by Gurcharan Das 2009: xli). Such being the case, treating the later insertions as 'interpolations' is improper and unfair, particularly if the term 'interpolation' is used pejoratively. Those who made the insertions were talented and learned poets who acted in good faith. The so-called interpolations took place not recently, but more than two millennia back. There was a stage when they stopped. They were not whimsical and arbitrary, but were often creative and imaginative (as the following anecdote would show) and played a positive role in the evolution of the Mahabharata, enriching it with insights, stories, and instructive discourses. Badrinath has subtitled his book on the Mahabharata as 'An Inquiry in the Human Condition' (2007). The epic would not have attained this status of a perceptive inquiry had it not been for the many 'interpolations' which reflected the ancient wisdom of centuries.
There is an anecdote in the Mahabharata itself about how the epic was first put to writing, though it is considered an insertion into a later version of the epic. C. Rajagopalachari, in his summary of the Mahabharata in English, which has proved to be the most popular book on the epic, has of course taken care to include this anecdote at the very beginning, besides the 107 stories through which he covered the entire epic (Rajagopalachari 2006: 1). Having conceived the epic, sage Vyasa was in search of a stenographer who could take down the whole epic in written form. Brahma, the first of the Trinity of Hindu gods – the creator – advised Vyasa to meditate upon the god Ganapati and pray to him to take up the challenging role. Vyasa did so. Ganapati appeared before him and agreed to his request, but smiled mischievously and laid down a condition: his pen should not stop, and Vyasa should dictate without pause! Vyasa agreed and, guarding himself, put a counter-stipulation: Ganapati should first understand the meaning of what was dictated before writing it down! Ganapati agreed. Vyasa began to sing the story of the Mahabharata, and Ganapati smoothly put it all down in writing. But when the shrewd poet needed a pause occasionally, he dictated a puzzling verse (kuta-shloka) to keep Ganapati pondering over its meaning, until Vyasa would start dictating again. Apart from its entertainment value, the anecdote perhaps was inserted to explain the presence of several puzzling verses occurring in the course of the epic.
Since the epic was handed down to succeeding generations orally by bards in different parts of the country, there were differences from region to region in the manuscripts which were based on oral accounts. There is even a Javanese adaptation ascribed to 1000 CE, but all the Parvans (volumes) of this adaptation have not yet been traced (Lal 2013: 6). Though the majority of manuscripts found were in the Devanagari script, in which Sanskrit literature is normally written, there have also been manuscripts in other regional scripts like Bengali, Kannada, Maithili, Malayalam, Nepali, Sharada, and Telugu. During the reign of the Mughal emperor Akbar, a free rendering of the epic was done into Persian (Lal 2013: 6). Since there were quite a few differences from manuscript to manuscript across regions, some containing extra chapters and verses, a need was felt for taking a critical look at all the available manuscripts and arriving at what was termed a 'Critical Edition'. This task was undertaken by the Bhandarkar Oriental Research Institute (BORI), Pune, in 1919, and completed around the middle of the twentieth century. V. S. Sukthankar edited the first volume, Adiparvan, examined some 235 manuscripts and set out the principles and criteria of preparing the Critical Edition of the epic in his voluminous 'Prolegomena'. The edition did not ignore what the editors considered interpolations or insertions; important differences from the selected version were recorded in the footnotes for readers' own judgement. Bringing out the Critical Edition has been a major milestone in the history of the Mahabharata.
Interestingly, divergences in the text of the Gita are far less than in the case of the Mahabharata as a whole. Based on modern research into the Gita, S. C. Roy concluded: 'The text of the Gita has remained substantially unaltered in spite of numberless interpolations that have taken place in other portions of the Great Epic' (quoted by Pusalker 1955: 163). Pusalker (1955: 144) remarks: 'It is indeed curious how the Bhagavadgita presents such a relatively fixed consistent text for the last 1200 years.' But the estimate of 1,200 years could well be a gross underestimate if we go by other scholars' views, according to which the Gita may have attained its present final shape before 400 BCE. On the basis of astronomical analysis, M. R. Yardi concludes that the (present) Gita was composed in 450 BCE (1991: 12). Probably, Pusalker took the eighth century CE as a benchmark, because Shankaracharya wrote his commentary on the Gita in its present form then. But there were commentaries on the Gita even before, which Shankara and Ramanuja have referred to, such as the one by Boudhayana. Telang observes that though Boudhayana's date is not settled, he is known to have lived prior to the Apasthambha Smriti, whose date was settled by Buhler as prior to the third century BCE (Telang 1970: 21, 32), which concurs with the date given by Upadhyaya (1971: 17). Boudhayana's commentary could have come only after the Gita in its present form was established. Shankara does not refer to changes in the text of the Gita before him. The present popular version has been stable since then, except for minor variations, with 700+1 verses. The +1 verse appears in some editions at the beginning of Chapter 13 as a question put by Arjuna, omitted in other editions. Thus, the present version of the Gita, commented upon by Shankara, has been with us for no less than 2,500 years.
A Kashmiri recension emerged in around 800 CE with 745 verses, but the additional verses do not make any material difference to the content of the Gita (Pusalker 1955: 144). It is the traditional version used by Shankara (with 700 verses) which is extant in the mainstream.
Closely linked with the issue of the date or year as per the calendar, there is also the issue of where the Gita fits in the chronological sequencing of major texts, which indirectly helps in settling the date. This question has been discussed at some length by Telang (1970: 7–24), whose treatment may be briefly paraphrased here. Telang suggests that the Gita clearly came before the six orthodox schools of Hindu philosophy were systematised into texts (such as Patanjali's Yogasutras), and in this respect, the Gita can be grouped with the Upanishads. There is no attempt at systematisation either in the Upanishads or in the Gita, though they reflect deep philosophical insights. They provided the ingredients for the subsequent systematisation. Telang even observes that there has been no attempt at standardising the meaning of some words in the Gita, like 'Yoga', 'Brahman' or 'Buddhi', which acquired scientific precision only subsequently. He thinks that the Gita precedes even the Brahmasutras, earlier known more popularly as the Vedantasutras. A mention of 'Brahmasutra' in the Gita (XIII.4) does not really refer to the text as such, according to Telang. As the verse itself says in the preceding line, it may refer simply to statements pointing to Brahman found in the Vedas and Upanishads. On the contrary, Telang contends, the Brahmasutras themselves refer to the Gita as a Smriti (in Sutras I.2.6 and I.3.23). Telang also observes that the Gita mentions (in IX.17) only the three Vedas, Rig, Yajur, and Sama, there being no reference to the Atharva-veda. This may suggest that the Gita came even prior to the Atharva-veda. According to Telang, the Gita comes prior also to Panini, the ancient grammarian, whose date is fixed as not later than the fourth century BCE (Telang 1970: 33). The grammar of the Gita and of the Mahabharata as a whole is considered pre-Paninian. The Apasthambha and Manu Smritis came long after the Gita.
This is because, as Telang argues, the Gita came at a time when the birth-based caste system was only in a formative stage and the varnas were based on work and aptitude (guna-karma) rather than birth. Both the Apasthambha and Manu Smritis, on the other hand, came at a subsequent stage when the birth-based caste system had consolidated itself. Between the two, the former came earlier. On the authority of Buhler, Telang points out that while the Manu Smriti can be assigned to the second or third century BCE, the Apasthambha Smriti can be assigned to a period prior to the third century BCE. If the Gita was composed earlier than the Brahmasutras and the Atharva-veda, which came much before the two Smritis, its place in all this chronological sequencing points to its date as being not later than the fifth century BCE.
The chronological position of the Gita vis-à-vis early Buddhism also corroborates this conclusion. Telang takes a clear stand on this and argues that the Gita was composed before the rise of early Buddhism (Telang 1970: 24–27). His contention is not based on the negative point of the absence of any reference to the Buddha or Buddhism in the Gita, but on more positive grounds. Telang points out two major differences between the Gita and the Buddha. The Buddha totally rejects the authority of the Vedas and the varna order. The Gita, on the other hand, does not absolutely reject either the Vedas or the varna system. It only protests against ritualism in the Vedas and deplores the emphasis on merely reciting them and on mere book learning. It points to more profound ways of making our lives meaningful and fulfilled. As regards the varna system, the Gita places it on the 'less untenable basis' (Telang 1970: 25) of work and aptitude, rather than birth. Telang does not interpret this as a defensive stand of the Gita trying to save Hinduism by abandoning its weak points in the light of the Buddhist attack, which would have meant that the Gita is subsequent to early Buddhism. On the contrary, he interprets the rise of Buddhism as a result of dissatisfaction with the limited success of the reform movement within the sanatana dharma initiated by the Upanishads clearly much earlier than Buddhism – a reform movement which the Gita also joined subsequently. Telang groups both the Upanishads and the Gita together as fellow participants in reversing the decline brought about by Vedic ritualism. He compares the Upanishad–Gita reform movement with the modern reform movement started by Raja Rammohan Roy, and the rise of Buddhism with Keshub Chandra Sen's more radical movement subsequently. Since Telang wrote in the nineteenth century, he could not have foreseen the even more radical Ambedkar movement and the re-emergence of Buddhism after a long gap, again as a protest against the evils in Hinduism.
Had he written now, Telang would have considered it a repetition of history. The main point of Telang, however, is that it is more probable that the Gita came prior to early Buddhism or even the Buddha. There is some controversy about the exact date of the Buddha. But the chief landmark in Buddhist chronology is his Parinibbana or death, and there is a fair degree of agreement that it took place between 487 and 477 BCE (Upadhyaya 1971: 32). It is well recorded that the Buddha lived up to eighty years. If the Gita's period is taken to be about the same time as the Buddha's lifetime, it will mean the sixth century BCE, which would make it close to what was concluded earlier as not later than the fifth century BCE. It is difficult to be more accurate than this about the date of the Gita.
Dr Babasaheb Ambedkar, however, considers the Gita to be post-Buddhist, on the basis of certain points which he thinks the Gita has borrowed from Buddhism. There is a conspicuous reference to Brahmanirvana in the Gita (II.72). He observes that no Upanishad used the word, and that the Gita must have borrowed the word 'Nirvana' from the Mahaparinibbana Sutta, prefixing 'Brahma' to make it appear original. He also points out some concepts or ideas common to the Gita and the Buddhist texts, including virtues described as desirable, like detachment, compassion, purity and desirelessness (Ambedkar 2004: 203–4). There is certainly a significant common ground between the Gita and Buddhism. But both were influenced by the Upanishads and the vibrant intellectual environment of the time, which explains the common ground. It is possible, however, that the Gita – at least its final version – may have been composed during the Buddha's time or early Buddhism, when words like Brahma-nirvana may have been included. In any case, the Gita has its own distinctive features which are not in Buddhism, particularly its theism and the different paths of sadhana.
There is a further complication if we agree about the Gita having undergone a few stages of evolution before it attained its present form, indicating multiple authorship. The thesis of multiple authorship has not gone uncontested, but a few scholars agree about it. Yardi mentions those who subscribe to the thesis of multiple authorship of the Gita, both Indian and Western (Yardi 1991: 4). He refers to the Chhandogya Upanishad mentioning Devaki-putra (son of Devaki) Krishna, a pupil of Ghora Angirasa; Krishna is said to have studied the Upanishads in depth under the latter. Yardi believes that it is the same Krishna who expounded the Gita to Arjuna, as he was also believed to be Devakiputra. This teaching as compiled by Vyasa was preserved and handed down to succeeding generations. Shaunaka came into possession of it and incorporated it into the Mahabharata. According to Yardi, Sauti added six chapters to the Gita subsequently, which dealt mainly with the Sankhya philosophy (Yardi 1991: 13, 18). Whether the Gita had one or more authors, it is agreed by many that they were all before Panini, who systematised Sanskrit grammar, because the grammar of the Gita (and the Mahabharata) is pre-Paninian. As noted earlier, Panini came before the fourth century BCE, which means that even the last addition to the Gita must have come before that.
The hypothesis of multiple authorship of the Gita was first raised by European scholars, particularly by Rudolf Otto, whose work on it was first published in German in 1933; an English translation by J. E. Turner was published in 1939 under the title The Original Gita: The Song of the Supreme Exalted One. Interestingly, Gajanan S. Khair also supported the hypothesis much later in India through a similarly titled book, Quest for the Original Gita (1969, 1997). Khair thinks there were three authors of the Gita, the first well before the Buddha, the second a little before the Buddha or during his lifetime, and the third in the early phase of Buddhism partly as a response to it, around the third century BCE. While all the three authors made significant and welcome contributions according to Khair, for Otto it is only the original Gita, identified as such, which is authentic and important. According to Khair, the first and the second authors addressed themselves mainly to the upper classes and the religious and spiritual elite. The third author, however, kept the common people in mind (Khair 1997: 50–51). Khair even attempts an identification of the exact contribution made by each author and thinks that, in terms of size, the contribution of the third author was the largest of the three. He makes no attempt, however, to name the three authors, except to say that they were very learned sages who added their contributions in good faith and with good intention. The first author distilled the philosophy of the Upanishads, taking their idea of Brahman, and broadened the concept of yajna, making elaborate rituals and animal sacrifices unnecessary for spiritual progress. He also advanced the idea of karma-yoga, the path of selfless service without attachment (Khair 1997: 68, 71–72).
The second author was influenced by the Sankhya philosophy, and not only added to the Gita a discussion of the trigunas (satva, rajas, and tamas) but also went beyond the Sankhya philosophy by proposing the concept of Purushottama or Paramapurusha, the Highest Soul or the Supreme Soul above all and everything. The original Sankhya system did not have this concept and was confined to the twin concepts of prakriti, the active principle behind the material universe or nature, and numerous Purushas, the individual, conscious, and watching souls. While the original Sankhya is agnostic, the Gita is very much theistic (Khair 1997: 68, 72–73). Khair's third author took the idea of Purushottama further, making Him a personal God, amenable to bhakti or devotion and love. The third author felt that the common people needed a personal God to whom they could offer their prayers and devotion, and felt it could work to the advantage of sanatana dharma, as against agnostic faiths like Buddhism, Jainism and Lokayata. He also made the concept of karma-yoga easier to follow by declaring that one has to just surrender the fruits of action to God. He made God accessible to everyone – men, women, high castes and low castes – for everyone is equal in the eyes of God, and caste or gender did not matter (Khair 1997: 75, 128, 142). Khair thus explains how the Gita evolved over a period, responding to the needs of the time. However, the evolution of the Gita came to a stop after the third author in around the third century BCE, and the interpreters took over thereafter, bringing in the needed flexibility to respond to circumstances prevailing at different times.
The basis for the hypothesis of multiple authorship rests on a perception that there is no unity of a single theme in the Gita, that there is no logical and smooth flow of argument and that there are some inconsistencies. The hypothesis is sought to be proved only through textual analysis or internal evidence, and speculatively so; there is not even a shred of external evidence cited anywhere in support of multiple authorship. On the contrary, all available external evidence confirms the traditional view that the Gita first came out of the lips of Lord Krishna and was compiled later in verse form by Vyasa as a part of the Mahabharata. As to the absence of a single theme, the lack of a smooth logical flow and the presence of inconsistencies, which scripture in the world is free from these problems – including the scriptures of the Semitic religions? In the case of the Gita, it is known and explicitly acknowledged that it was a free and informal dialogue between two friends, and neither a lecture in a formal academic seminar nor an article in an academic journal! Moreover, it is the basic essence and purpose of the Gita to offer a creative synthesis of different schools of thought and different paths of spiritual pursuit. The apparent inconsistencies have to be interpreted in the light of their respective contexts, as in any scripture after all! No scripture should be interpreted literally with each sentence taken as applicable to all times, contexts and circumstances, as that would amount to fundamentalism or fanaticism.
Whether we accept the hypothesis of multiple authorship or not, it is not necessary to regard the Gita as a patchwork of interpolations. The additions through which the Gita is presumed by some to have evolved are not whimsical, but were thoughtfully made by knowledgeable and perceptive sages, responding to the needs of the time. The Gita does have enough continuity and consistency to be treated as a single integrated work, in spite of apparent inconsistencies, which can arise even in works known to be by one author. The Gita anyway has been a delightful challenge to a variety of interpreters, each of whom could find a central message, even if they differed from each other in identifying what that message was. The mere fact that their perceptions and interpretations differed does not mean a lack of unity in the original text, just as the different perceptions of the nine blind men in the famous parable could not have denied the oneness of the elephant they perceived. It is these differences which make the Gita so very interesting and even challenging. Besides this, we cannot forget that in its present form, the Gita has been with us for no less than 2,500 years and has acquired enough heritage value and authority. Even if it evolved over time before that and multiple authorship is granted, Lord Krishna remains its basic source. We can still regard the Gita as one integrated work, and the different paths of spiritual seeking represented in it only reflect the diversity and richness of choices open to its followers who like to benefit from it. If I go to a restaurant for lunch, I would expect some choices open to me according to my taste, and not one dish for all and for all time. If the restaurant offers a rich variety of fare, it is its strength, not weakness. The Gita reflects not inconsistency, but a variety to choose from, according to one's aptitude.
One may feel attracted by the notion of the Supreme as nirguna (without attributes), while another may prefer an intervening personal saguna God (with attributes) who loves and looks after his devotees. The former may prefer to follow jnana-marga (the path of knowledge or contemplative inquiry), while the latter may like to be a bhakta, a devotee. Others may not bother much about the nature of the Supreme, but would concentrate simply on their duty of serving others selflessly as a path of liberation. They would choose karma-marga. Yet others may like to follow a combination of all the three paths. The Gita is an inspiring guide for all of them.
It is because of its proven ability to provide answers to questions raised by a variety of people, in a variety of contexts, for well over two millennia by now, and in the future too, that the Gita can be considered timeless like the Vedas. It is an eternal intellectual and spiritual resource of humanity to draw upon, one which is not exhausted but further enriched by use.
## Notes
1 The original Sanskrit saying quoted in Tilak's Preface is: Gita sugita kartavya kim anyaiah Shastra-vistarashah? The translation is as given by Sukthankar (Tilak 1936: li).
2 Radhakrishnan (1996: 525, n3) gives some examples of common verses: BG II.29 and Katha Up. 2.7; BG II.20, VIII.11 and Katha Up. 2.19, 2.15; BG III.42 and Katha Up. 3.10; BG VI.11 and Shvet. Up. 2.10; BG VI.13 and Shvet. Up. 2.8.
3 An Akshohini consists of 21,870 chariots (rathas), 21,870 elephants, 65,610 cavalry and 109,350 infantry (Adiparva 2.15–23). See en.wikipedia.org/wiki/Akshauhini; and Pai (2015: 18). There were eleven of them on the Kaurava side and seven on the Pandava side, eighteen in all; at 218,700 units per Akshohini, this makes 3,936,600 units, or more than four million fighting, both sides together, depending on how many manned each elephant and chariot. Pai notes incidentally that the digits of each of these divisions add up to eighteen, coinciding with the eighteen chapters of the Gita and the eighteen Parvans of the Mahabharata. It is also mentioned on page 18 of her book!
4 This is as per www.thevedicfoundation.org/bharatiya-history/mahabharat.htm, downloaded on 25 January 2015.
5 Based on the account in the Mahabharata itself, Richard Davis narrates how: 'When the ruling king of the Bharata dynasty, Vichitravirya, dies without fathering a male heir, Vyasa (who is half-brother to the deceased) is called to court in order to impregnate his two widows. Their sons are Pandu and Dhritarashtra, fathers of the Pandavas and Kauravas respectively' (Davis 2015: 36).
6 According to Otto, the original Gita comprised: I; II.1–13, 20, 22, 29–37; X.1–8; XI.1–6, 8–12, 14, 17, 19–36, 41–51; and XVIII.58–61, 66, 72–73 (Otto 1939: 15; as quoted in Robinson 2013: 46).
# 2 Classical Commentators of the Gita
## The Gita in the rest of the Mahabharata, and the Puranas
The significance and greatness of the Gita were recognised without much lapse of time. The Mahabharata itself says later, in the same Bhishma Parvan in which the Gita occurs – Gita sugita kartavya kim anyah Shastra-vistarashah – that is, 'whoever studies the Gita thoroughly need not bother himself about the prolixity of other Shastric writings' (Yardi 1991: v). The Mahabharata makes use of another occasion, in the Ashwamedhika-parvan, to paraphrase the Gita as narrated to Arjuna by Krishna, in what came to be known as the Anu-Gita (Sharma 1985: 1–11). The war had ended by then and the Pandavas had won, and the context of removing Arjuna's grief and goading him to fight was no longer there. It is in such a changed context that Arjuna requests Krishna to recapitulate what was taught in the Bhagavad-Gita. It is as if Arjuna wanted to know the more enduring teaching of the Gita, with the earlier context removed. Sharma thinks that the Anu-Gita is 'the first comment, if not commentary, on the Bhagavad-Gita within the Hindu tradition' (Sharma 1985: 2). The Anu-Gita emphasises that the knowledge of Brahman is possible only through sense-control, and that the yoga of action has to be based on and guided by jnana or knowledge. Sharma observes that the Anu-Gita is free from any glorification of Krishna and ends on a jnana-oriented note, in contrast to the bhakti-oriented concluding note of the Bhagavad-Gita (Sharma 1985: 6). However, the Anu-Gita has been considered somewhat less comprehensive and inspiring as compared with the original Gita and did not become popular. Perhaps this was due to the less dramatic context of the Anu-Gita. There are several further references to the Gita in the Mahabharata, which have been cited by Tilak in the Appendix to his Gita Rahasya.
Around the time the composition of the Mahabharata, including the final form of the Gita, was completed, a Vasudeva cult had started becoming popular. Its emphasis was on bhakti as the easiest and the most dependable path of God-realisation. As a part of following this path, a practice of daily recitation of the Gita was adopted. It was this practice which must have prevented any further interpolation in the text and stabilised it. A few Puranas which emerged then refer to the Gita with reverence, including the Padma-Purana and the Skanda Purana. The Varaha-Purana prescribes recitation of its Gita-Mahatmyam along with that of the Gita. Tilak mentions two other Gita-Mahatmyams, in the Padma-Purana and the Vayu-Purana respectively. The former is more detailed, with eighteen chapters, each corresponding to one of the eighteen chapters of the Gita. Tilak also points to several Gitas in various Puranas inspired by the original Gita, which, however, do not have the same freshness as the original according to him. There is even a Ganesha-Gita in the Ganesha-Purana, which he says is a faithful copy of the Bhagavad-Gita except for slight verbal differences (Tilak 1936: 4–8). The Yoga-vasishta gives a summary of the Bhagavad-Gita in its last chapter. There is the Uddhava-Gita in the Bhagavata Purana, the Shiva-Gita in the Padma Purana, and the Devi-Gita in the Devi Bhagavata Purana. These subsequent Gitas, however, hardly reduced the significance attached to the Bhagavad-Gita. Endorsement of the Bhagavad-Gita by the Puranas spread the belief that daily recitation of it, or of parts of it, or even of a few of its verses, conferred great merit (punya). Soon the influence of the Gita spread beyond the followers of the Vasudeva cult and captured the minds of others too.
S. Radhakrishnan observes that the Gita influenced Mahayana Buddhism, and at least two chief works of it, Mahayana-shraddhotpatti (The Awakening of Faith in the Mahayana), and Saddharma-pundarika (The Lotus of the True Law), are deeply indebted to the teaching of the Gita. He adds that through Buddhism, the Gita's influence extended in early times to China and Japan also (Radhakrishnan 1993: 11, n1).
Apart from finding an endorsement in a few Puranas, the Gita attracted a few commentators even before Adi Shankaracharya. Baudhayana was a major one among them, who has been referred to in the last chapter of Shankara's commentary. Unfortunately, Baudhayana's commentary (bhashya) is not available now. An important feature of the bhashyakaras (commentators) is that they take the text handed down to them as given, treat it with reverence and only try to explain its intricacies and elaborate its meaning, though in the light of their own perspective. They avoid being critical of the text, though not of other commentators. If any vagueness or inconsistencies are found in the text, they try to understand and reconcile them, instead of launching an attack on the text on that pretext. In this respect, they are very different from some of the modern scholars of the Gita who treated texts like lab specimens, meant for dissection and critical analysis only. Nevertheless, the earlier commentators cannot be regarded as uncritical, and they played an important role by interpreting a fixed text in the light of changing times, discovering new meanings and new messages. This has been very much the case with the Gita throughout its history.
Though the Gita had slowly started making an impact, in those initial days, the Gita's followers and admirers were not yet in the mainstream of India's religious milieu. In spite of the Upanishads and their emphasis on jnana or knowledge and ethics, a tradition of complex rituals originating from the Karma-kanda portion of the Vedas had staged a comeback. The Gita had given a new meaning to the concept of yajna as well as karma, but with most people, karma meant rituals. On the other hand, the heterodox faiths like Buddhism and Jainism caught the imagination of those people who found the path of rituals much less appealing.
## Shankara
A need arose thus for an eminent and eloquent person who could retrieve the sanatana dharma from the morass of ritualism on the one hand and the challenge of the heterodox faiths on the other. This need was addressed by Adi Shankaracharya (Shankara for brevity). He did not have to bother much about opposing Buddhism and Jainism, but focused more on defeating the ritualists through debates, travelling all over the country on foot from Kerala to Kashmir and the Himalayas in the bargain. Once the ritualists were defeated, it could take the wind out of the sails of Buddhism and Jainism.
There is a controversy about the exact years when Shankara lived, but much less dispute about his having had a short life of only thirty-two years. Perhaps there is no one else in history who accomplished so much within such a short span of life. Govind Chandra Pande, after examining the views and evidence of most scholars and making his own analysis in an exclusive chapter devoted to arriving at the date of Shankara, concludes that the date of Shankara ranges between 650 and 775 CE (Pande 1994: 41–54, esp. 52). Several scholars believe his dates to be 788 to 820 CE (Harshananda 2008, Vol. 3: 192). It is commonly accepted that he lived in the eighth century CE. His parents' home was in Kaladi (also called Kalati), near modern Alwaye, and he was born in Veliyanad (near Ernakulam, at his mother's home), in a pious family. He died at Kedarnath in the Himalayas (Pande 1994: 77–78). Not only in birth and death, but also in work, he spanned the whole of India.
Shankara was a precocious child who could learn quickly and retained in memory what he heard once. By his third year itself, he is said to have memorised a good deal of poetry. His father, Shivaguru, initiated him into Vedic studies at the age of five, but died soon after. Sanskritic education was widespread in Kerala then, available mostly in the premises of major temples. The child Shankara had no difficulty in mastering the Vedas, Upanishads, Shastras and Puranas early in life. At the age of eight, he had an urge to become a sannyasi (monk, renunciate), but could not do so without permission from his mother, Aryamba, whom he loved much, and she was reluctant to accede to his wish. An incident helped him in realising his wish. Once, when he was back home on a visit from his place of study, he was bathing in the river nearby when a crocodile caught him. Shankara called out to his mother, who was nearby, and said he might be saved if she released him for renunciation. His mother agreed, and the crocodile also released him miraculously. After leaving his mother to the care of her relations, Shankara left as a peripatetic mendicant in search of a Guru. But before leaving, he promised his mother that he would be with her at the hour of her death and perform her last rites in spite of being a sannyasi (Pande 1994: 78–81). He kept his promise.
The eight-year-old Shankara travelled north and in his ninth year met his guru, Govindapada, on the banks of the river Narmada. Shankara remained with his guru until he was twelve and learnt the intricacies of Vedanta. The guru then advised him to go to Kashi (Varanasi) and write a commentary on the Brahmasutras, which he did. Biographers of Shankara agree that he finished writing his famous commentaries on the Brahmasutras, the ten Upanishads and the Gita, as well as most of his numerous literary works, by the time he completed his sixteenth year (Pande 1994: 83, 87, 88).
Shankara spent the next sixteen years of his life travelling almost all over the country, defeating opponents in debates, winning disciples, establishing mathas (monasteries) in all four directions of India and propagating his doctrine of non-dualism or Advaita. To Shankara goes the credit of organising sanatana dharma on the sound and sustained basis of mathas, probably for the first time in the history of Hinduism. The acharyas and spiritual leaders who came after him simply followed his example. The mathas contributed a good deal to consolidating Hinduism and toughening it against the onslaughts of other religions, by continuing the tradition of Vedic studies, providing religious instruction to people, training priests and also promoting social service, particularly in times of emergency.
We cannot afford to miss an incident in his life, said to have occurred at Kashi, which became famous. When Shankara was proceeding to the river along with his disciples, he saw a Chandala (outcaste) on the way and asked him to move away. To Shankara's surprise, the Chandala pointed out the inconsistency between his doctrine of spiritual unity and the caste notion of untouchability. Shankara realised his mistake and prostrated at the feet of the so-called outcaste. Shankara immortalised this incident in five verses, together titled Manisha-panchakam, recording his response to the man whom he now accepted as a guru. Pande observes that this incident could hardly have been a fabrication of later times and believes it to be genuine (Pande 1994: 87). The incident is significant as it demonstrated the social significance of Shankara's philosophy in a very practical way, though unfortunately it hardly succeeded in ending untouchability in India.
With Shankara, Pande observes, the age of Smritis and Puranas ended, and the age of commentaries or bhashyas commenced (Pande 1994: 57). There is a common notion that it was Shankara who almost discovered the Gita hidden in a corner of the Mahabharata and made it famous through his commentary. Such a notion is a somewhat exaggerated view of his role. Shankara himself, in the introduction to his Gita Bhashya, says that there had been commentaries before him, but they could not achieve much because they did not take a total view of the Gita and resolve seemingly contradictory verses and doubts. He explains that this is why he wrote his commentary (cf. para 3 of the Upodghata or introduction in Warrier's translation with the Sanskrit original, 1983: 3). There is no doubt, however, that though Shankara may not have discovered a lost Gita as such, his bhashya gave it unprecedented prominence. In any case, the commentaries made before Shankara were lost long ago and are not available today.
Shankara interpreted the Gita in terms of his own perspective, as did the acharyas after him from their own respective viewpoints. These perspectives were not preconceived notions, nor were they derived from the study of the Gita alone; they came also from the study of other texts, particularly the Brahmasutras and the Upanishads. What these acharyas did was to take a total view in developing their doctrines, derived from comprehensive study of and contemplation on these texts, and then reapply those doctrines to individual scriptures in interpretation. As Arvind Sharma shows almost throughout his book (1985), this attempt landed them in difficulties with particular verses which did not quite fit their doctrines, resulting in over-interpretation or under-representation at times. Sharma observes: 'Any attempt to straightjacket the Gita leads to odd excesses' (1985: xxv). Whenever someone tries to extract a single meaning from a sacred text, that meaning can become just one more among many. This is especially so in discussions on the nature of the Supreme, the relation of the Supreme with individuals and the path to the realisation of the Supreme or Liberation. There just cannot be one exclusive view about these. This does not mean that the interpretations of the eminent acharyas are flawed; after all, this is a problem for any interpreter with one particular perspective. But is it possible to analyse or interpret anything without a perspective?
As is well known, Shankara's overall perspective was Advaita (non-dualism or monism), which he summarised in half a verse: Brahma satyam jagan mithya jivo Brahmeti na parah (Brahman is the truth, the world unreal; the soul is Brahman, not anything else). The Gita's explicit support for Shankara's main contention about the soul or the real self (jiva or atma) being Brahman is to be found in quite a few of its verses (see the first half of the following: II.17, VI.29, VI.31, VIII.3, X.20, XIII.27, XV.15, XVIII.61, and the whole of IV.24 and XIII.32). Can this belief lead to arrogance in an individual ('I am God!')? No, because the soul of everyone is divine, not that of one person alone. It can lead, however, to enormous self-confidence, motivating each person to realise his or her immense potentialities. A basic teaching of Advaita is that the ego, identified with the body–mind–senses complex, is not the real self or the soul. Arrogance is associated with the ego, which actually comes in the way of spiritual pursuit. The ego perishes with the body, while the Atman is eternal. The Atman is the Divine presence in all beings, not only human beings. In this sense, all life is sacred, as it is the explicit expression of the Divine.
The other part of Shankara's summary statement, about the world being unreal (jaganmithya), has often been wrongly interpreted to mean that Shankara denied the reality of the world. If the world is unreal or false, then what is the meaning and purpose of any work in the world? Would there be any basis for ethics? Shankara had to address these implications; so he explained that what he really meant was that the world is not real in the same sense in which Brahman is real, and that the world is only relatively real. While the reality of Brahman is absolute, autonomous, eternal and primary, the reality of the world is dependent, derived, transient and secondary. Based on the Upanishads, he explained that there are two levels of truth – paramarthika satya (spiritual or transcendental truth) and vyavaharika satya (truth at all practical or mundane levels). That Shankara did not regard the latter as illusory is clear from the separate word he reserved for the occurrence of an illusion, pratibhasika, which is not a reality in any sense (Radhakrishnan 1999, Vol. 2: 520). The immense significance of the vyavaharika in mundane matters is not denied. The paramarthika satya may not be realised through sense perception, but the vyavaharika can be. Both are real at their own levels. According to quantum physics, the basic reality consists of particles. The table which I use for writing is very real and has immense meaning and significance for me in practical matters. For a scientist, however, its basic reality is in terms of quanta of particles. If I am a spiritual seeker eager to realise the basic reality behind all that exists, it is the paramarthika satya which I have to know. But at the same time, I have to live and work in this world, where I have to operate according to the rules of vyavaharika satya. Even if the world is a drama, we have to play our roles as actors, that is, as morally responsible human beings.
There is an anecdote in the life of Shankara which brings out this teaching. During one of his travels through the country with his disciples, they were passing through a forest and saw an elephant coming. They immediately took to their heels. Once they reached a safe place, a disciple asked him in a lighter vein why he ran; wasn't the elephant mithya (unreal)? Shankara promptly replied, 'mama palayanam api mithya!' (My running away also was mithya).
Right in the introduction to his Gita Bhashya (Upodghata, para 4), Shankara observes that 'the purpose of the science of the Gita is to set forth the summum bonum [nihshreyas], which consists in the total cessation of the transmigratory life and its causes' (Warrier 1983: 4). Transcending samsara's cycle of births and deaths has been accepted as the highest goal by almost all the Hindu philosophers and saints, including the three acharyas. But there may be a little difference of opinion about the means of achieving it. According to Shankara, it is achieved only by atma-jnana-nishtha (staying with self-knowledge) preceded by sarva-karma-sannyasa (renunciation of all works). Atma-jnana means becoming aware of the unity or identity of the self with Brahman, and realising that the self is different from the ego. Shankara believed that such a state cannot be reached through engagement in action or works. But the Gita itself tells clearly in Chapter 3 (verse 5) how none can rest even for a moment without some action, how work is absolutely necessary for the functioning of the world itself (III.14) and how God Himself always continues in action though He has nothing to gain from it (III.22). Shankara, however, rules out combining knowledge with works as a solution, whatever the reasoning (Tasmat kayapi yuktya na samucchayah jnanakarmanoh – para III.2 in his Gita Bhashya, Warrier 1983: 102). He even declares that karma-yoga is meant only for the ignorant (Ajnanam eva hi karma-yogah, na jnaninam – para 5.1 in his Gita Bhashya, Warrier 1983: 106). In the context of the Gita, one can make sense of Shankara in three ways. First, we have to take him to mean only rituals when he refers to karma, while the Gita uses the term in a wider sense. Second, even if Shankara takes karma in the wider sense of the Gita, karma or works are relevant only as far as the phenomenal world or samsara is concerned.
If our goal is transcending it and realising the real self or Brahman, then, as per his philosophy, karmas are not relevant; it is only jnana which is. Third, as the Gita itself explains, renunciation consists not in ceasing to work, but only in giving up the personal desire for the fruits of karma and surrendering the outcome of works into the hands of God.
According to Advaita philosophy, Brahman is masked or veiled by the world. The latter is a projection on Brahman, like a scene or picture projected on a cinema screen. We see only the picture and enjoy it, and not the screen. The process by which Brahman is veiled or projected upon is called maya by Shankara. Though the term is usually translated as illusion, that is not its precise connotation. Maya is the act of veiling or projection. Maya is taken as a power of the Divine by almost all the interpreters of the Gita, since the Gita itself says so. The traditional explanation for the purpose of maya is the Divine's proneness to sport (lila). But this goes against the notion of the nirguna (without attributes) Brahman in Advaita, and also against the notion that Brahman is purna (complete, perfect, lacking nothing) and needs no sport for entertainment. Brahman is therefore said to be inscrutable. Mere human beings are unable to probe the purpose of the Divine in wielding the power of maya, as they are themselves a part of maya. But when Brahman is worshipped as the saguna Ishwara, the Lord with attributes of compassion for all beings, creation is an expression of this attribute.
The world being mithya is only one of the ingredients of Advaita philosophy. Its most important ingredient, which gave the philosophy its name, is that the jiva or Atman or soul is the same as Brahman, but distinct from the ego identified with the body–mind complex; that is, Atman and Brahman are not two. The Divine is present in every being as its soul. One way in which Shankara arrived at this conclusion is by reflecting on the fact that the basic nature of both Brahman and Atman is the same, and that basic nature is one of sat (pure existence, truth), chit (consciousness) and ananda (blissful joy). Both are eternal, though the ego is not; the ego perishes with the body. The identity of the jiva-Atman and Brahman is realised through deep contemplation, by focusing on pure consciousness or awareness. Another way in which one can arrive at the identity of the jiva with Brahman is by recalling that Brahman is omnipresent, and therefore It/He is in you and me, and in every being. Omnipresence can also be taken as implying that nothing else is present, nothing else exists, except Brahman. Thus, the jiva cannot be different from Brahman. Any perception of other things being present is due to maya.
Mani Bhaumik, a reputed physicist and co-inventor of laser technology, believes that modern science has brought us closer to appreciating the ancient teaching of Advaita Vedanta – Aham Brahmasmi (I am Brahman). He says that each of us completes creation and that 'God is incomplete without us' (emphasis in original) (Bhaumik 2005: 30). It means that God lives in us, and his spirit pervades everywhere (Bhaumik 2005: 31). Bringing out the remarkable correlation between modern science and ancient Vedanta, he observes that 'everything is at the deepest level, everywhere. That all is one' (Bhaumik 2005: 32). Bhaumik points out that, according to Walter Moore, biographer of the quantum visionary Erwin Schrödinger, Schrödinger was intuitively influenced by Vedanta in formulating quantum mechanics, and perhaps the Vedantic concept of oneness led him to a decade-long search for the unified field theory. 'From the Upanishads, Schrödinger finds it to be so simple and clear: Tat twam asi, This is you' (Bhaumik 2005: 177). Bhaumik does not refer to Shankara as such, but it is clear that his reference is to the Advaita Vedanta of the Upanishads. He says: 'Why do the Vedic rishis quietly assert, from the deep well of contemplation, "I am Brahman"? Could it be that the timeless mystical experience of oneness with the source, an experience that transcends all faiths and cultures, is actually the closest we humans can ever come to perceiving the universe as it truly is?' (Bhaumik 2005: 216).
Once the identity of the jiva and Brahman is realised, that is liberation, according to Shankara. It can happen even while living; one does not have to wait for death. This realisation is not possible unless one has a pure mind and moral integrity. It can be aided by prayers to the Divine through stotras (praise of God), even in saguna form. In the vyavaharika world, we can have a personal God in any form, including the female, whom we can call Ishwara, or Narayana or Devi or by any name. Shankara himself composed numerous stotras in mellifluous verses addressed to various deities with forms, including the female, though he believed the Ultimate to be nirguna Brahman. In this type of sadhana, complex Vedic rituals hardly had any role. And that is the reason he emphasised jnana or spiritual knowledge, and not karma, as the ultimate means of liberation. When he denied a role for karma in liberation, he must have meant only the complex rituals, not the day-to-day activities of living or activities oriented to the welfare of the world (loka-hita or loka-sangraha) emphasised in the Gita. He accepted both karma-yoga and bhakti-yoga as recommended by the Gita, but held that both facilitate the achievement of jnana, through which alone ultimate liberation is possible (Rangaswami 2012: 317, 331, 334, 352). Swami Anandashram (1902–66), known for the depth of his scholarship in Advaita Vedanta and his spiritual accomplishment in the same tradition, clearly stated: 'There is no conflict between bhakti and Advaita; not merely that, some sadhakas can even experience Advaita through bhakti' (2014: 7) (Tr. by the author from the Konkani original).
It did not take long for Advaita to attract sharp criticism. Its maya-vada (theory of maya) was too complicated for the popular mind. Even for intellectuals, any suggestion that the world is unreal was difficult to accept, as it negated the need for any responsibility and work in the world. The distinction made between paramarthika satya and vyavaharika satya on the one hand, and between the impersonal Brahman and the personal Ishwara on the other, could be mistaken as denying the unity of Truth and of God respectively. It was feared that such a theory could distort the essence of the Gita. Moreover, though Shankara appeared to be a reformist, his followers did not quite follow the implication of Advaita for caste discrimination, and the significance of the incident between the untouchable and Shankara, which resulted in the composition of the Manisha-panchakam, was lost on them. The incident came to be explained away by saying that it was not really an untouchable but Ishwara himself in disguise, and discrimination against lower castes by upper castes continued in practice.
## Bhaskara
The first important commentator of the Gita to oppose Shankara was Bhaskara, who lived around 900 CE, a century after Shankara. This Bhaskara, who wrote bhashyas on both the Gita and the Brahmasutras, is different from the astronomer Bhaskara who lived in the sixth/seventh century CE and also from the mathematician Bhaskara who lived in the twelfth century CE (Harshananda 2008, Vol. 1: 287). Bhaskara is credited with the contribution known as the Bhedabheda (difference cum non-difference) or Dvaitadvaita (dualism cum non-dualism) doctrine. This was taken up for further elaboration later by Nimbarka in the twelfth century CE, but Bhaskara originated it. In a way, the Bhedabheda theory explains rather than opposes Advaita. According to Bhedabheda, the 'difference (bheda) has in it the characteristic of identity (abheda) – the waves are different from the sea, but are also identical with it.... So all that is one is also many, and the one is neither absolute identity nor absolute difference' (Dasgupta 1975, Vol. 3: 6). Where Bhaskara sharply differs from Shankara is in treating the waves as much a reality as the sea, and not as mithya, as Shankara does. 'Bhaskara maintained that there is no maya, and that it was Brahman which by its own powers, underwent a real modification' (Dasgupta 1975: 2). Brahman is both the efficient and the material cause of the world: both the creator and the material of creation. Everything is Brahman, but at the same time it is diverse, and the diversity is not an illusion but a reality. Another important difference between Bhaskara and Shankara is in the former's emphasis on karma (Sharma 1985: 26). We fulfil the purpose of our Creator through our work, and not by spiritual knowledge or jnana alone. Bhaskara was not averse to Vedic rituals, but by karma he mainly meant doing our duties to the society of which we are members.
His preference was for a combination of both jnana (through meditation) and karma (jnana-karma-samuchchaya) (Sharma 1985: 28). There is no personal God in Bhaskara's theory and thus no place for bhakti or divine grace (Harshananda 2008, Vol. 3: 522). Probably that is why he could not be popular, since common people want a personal God whom they can worship and pray to. Even Shankara's philosophy had a personal God, at least as a stepping stone to full realisation of the Supreme. The acharyas who came later also met this need in their systems of philosophy. Though Bhaskara is not as well known today as the other commentators, perhaps for this reason among others, he could not be ignored by them. While the other three commentators established their own mathas and ensured a following for their doctrines, Bhaskara does not seem to have bothered about this. Not much is known about his life, and no biographies of him are available.
## Ramanuja
While Bhaskara tried to correct the balance in the interpretation of the Gita in favour of karma and of acknowledging the reality of the world, the task of restoring it in favour of bhakti was performed by Ramanujacharya (Ramanuja in brief). Swami Harshananda observes in this context: 'India seems to have a special knack of producing great saints almost on a "made to order" basis, as per the needs of the time' (2008, Vol. 3: 56). Though according to tradition Ramanuja lived for 120 years, from 1017 to 1137 CE, according to another account he lived for 80 years, from 1077 to 1157 CE (Tapasyananda – year not available: 1). There was already a tradition of Vaishnava Alvars in Tamil Nadu, known for their passionate advocacy of bhakti and their composition of mellifluous songs in Tamil, from which Ramanuja drew inspiration (Ramakrishnananda 1959: 11–39). The Shrivaishnava sect, to which Ramanuja belonged, was already established, with its main centre at Srirangam in Tamil Nadu.
Ramanuja was born into a Brahmin family at Sriperumbudur, about 48 km southwest of present-day Chennai. Endowed with a sharp intellect and prodigious memory, he had been well educated by the age of sixteen, thanks to his learned father Keshava Dikshitar and particularly to Kanchipurna, a great devotee of the deity Varadaraja of the Kanchi temple, who used to visit Ramanuja's family and teach him. Kanchipurna came from what was considered a low Shudra caste, but Ramanuja had no hesitation in venerating him and even touching his feet as a teacher. The family, particularly Ramanuja, was liberal and catholic in spirit. Ramanuja was married at sixteen, and shortly thereafter his father died. Ramanuja moved to Kanchipuram, a reputed centre of Vedic learning, and joined Yadava-prakasha, a reputed scholar in the Advaita tradition, as a student. The young student, however, was too sharp and open-minded for the orthodox teacher, and after a few unsavoury incidents, Ramanuja left him for good. By this time, the fame of the young student had spread far, and the then chief pontiff of the Shrivaishnavas, Yamunacharya of Srirangam, decided to name Ramanuja as his successor after coming to know of the latter's leaving the tutelage of Yadava-prakasha. He sent word to Ramanuja to meet him, but by the time Ramanuja reached Srirangam, Yamunacharya had just died. After the funeral of Yamunacharya, Ramanuja returned to Kanchipuram with his wife. His wife came from an orthodox family believing in rigid rules of caste and of purity and pollution. Ramanuja used to meet learned men and devotees of Kanchi Varadaraja from all castes, who would visit his house as well. This did not go down well with his caste-minded wife, and once resulted in the humiliation of the wife of a person whom Ramanuja greatly respected as a teacher. This clinched the issue: Ramanuja sent his wife to her parents and became a Vaishnava sannyasi or renunciate.
This news reached Srirangam, and a respected representative came to press Ramanuja to come and settle there. Ramanuja was reluctant but finally accepted the invitation.
An incident reported to have happened at this juncture reveals the greatness of Ramanuja. After reaching Srirangam, he was advised to take a holy mantra from a great Vaishnava saint, Goshtipurna. But this saint wanted to test the earnestness of Ramanuja and asked him to come again and again, eighteen times in all. Ramanuja was persistent, and finally the saint obliged and gave him the mantra, on condition that he should not impart it to others. He also said that it had tremendous potency, that whoever heard it even once was sure to go to heaven, and hence that it was meant only for the deserving few. Delighted at this, Ramanuja on his way back saw a temple; climbing its gopura (tower), he shouted loudly, inviting the people around to gather, and imparted the mantra to them. When Goshtipurna heard of this, he sent for Ramanuja and severely scolded him. He said that Ramanuja would go to hell for disobeying his instruction. Ramanuja calmly replied that he did not mind it at all if in the bargain heaven was guaranteed to so many. By this act, he conveyed a simple but noble principle: that spiritual seeking need not be confined to a chosen few, but has to reach all.
After some further training under the senior disciples of Yamunacharya, Ramanuja took over as the head of the matha and of the Sriranganatha temple at Srirangam. His discourses in Tamil on the life and teachings of the Alvars, especially Nammalvar, and on Vaishnava philosophy made him very popular. His discourses on Nammalvar were compiled into a Tamil book by one of his disciples, Kurukesha or Pillan. Ramanuja was recognised in his own lifetime as a very learned and saintly teacher and became known as Yatiraj (king of monks). He began to attract many disciples, took into his fold disciples even from the so-called low castes, including untouchables, and deplored caste distinctions in the realm of God (Seshadri 1996). He did not want the word 'pariah' applied to untouchables and called them 'Tirukulattar' (people of noble descent), intending to impart to them greater dignity in society (as Gandhi did much later by calling them Harijans). It is said that he used to spend hours in the hut of an untouchable devotee, discoursing on philosophy. The community of devotees of God which he built, known as the Shrivaishnavas, had people from all castes, and women, all of whom he treated with equal love and regard (Yamunacharya 1988: 36–38; Dasgupta 1975, Vol. 3: 104). Ramanuja wrote commentaries in Sanskrit on the Brahmasutras, the Upanishads and the Gita. He also travelled widely in the country, spreading his philosophy and winning debates.
However, as Swami Tapasyananda observes, the tenor of Ramanuja's life at Srirangam was disturbed by a policy of persecution of Vaishnavas under the Chola king Kulottunga, a fanatical Shaivite, and Ramanuja had to flee to Melkote in Karnataka to save his very life. Melkote was then under the Hoysala king Bittideva, a Jain known for his generosity and open mind. Ramanuja visited him and defeated several Jain scholars in debate. Bittideva was so impressed that he converted to Vaishnavism, taking the new name Vishnuvardhana, by which he became more famous. Ramanuja founded a matha at Melkote on a picturesque hilltop, which still attracts thousands of pilgrims. Ramanuja went back to Srirangam after the demise of Kulottunga, who was succeeded by a more tolerant and liberal-minded ruler. Ramanuja spent most of the rest of his life there (Tapasyananda: 19–20). Though he lived mostly in the south, he left an indelible impact on north India too. Ramananda, who had the distinction of having eminent disciples like Kabir and Ravidas, was a close follower of Ramanuja, not only in his theology and metaphysics but also in his social philosophy of opposing caste discrimination and treating all as equals (Dasgupta 1975: 27–28).
Though the philosophy of Ramanuja is known as Vishishta-advaita (qualified monism), it is much closer to the Dvaita (dualism) philosophy of Madhvacharya (Madhva in brief) than to Shankara's Advaita. This is because the Supreme for both Ramanuja and Madhva is a personal God with attributes (saguna), though they do not mind calling Him Brahman as well. He is different from the insentient world and the sentient jivas (souls), though both the world and the jivas are dependent upon God and supported by Him, and both are real for both philosophers, though different from each other. In an individual living being, the insentient body becomes sentient because of the jiva. Jivas have consciousness (chit) and are different from the world because the latter is achit (without consciousness). Jivas are also different from God in both these philosophies because, though jivas have consciousness, which is also an important characteristic of God, they do not share many other characteristics of God and are dependent on Him. In both philosophies, the relationship between God and the jivas is one of master and servant (sheshi and shesha, respectively, in Ramanuja's words). God is accessible through bhakti; though both jnana and karma are also necessary, they are so only in so far as they lead to bhakti. For both these philosophies, bhakti is the most important and indispensable means of God realisation. Jnana and karma, even when combined, are not enough without bhakti.
According to Ramanuja, though God, nature and the jivas are different, with different characteristics, they form a unity in the 'body' of God in a 'Pan-organistic system', as Tapasyananda terms it in his introduction to Svami Adidevananda's translation of Ramanuja's Gita Bhashya (Adidevananda 2014: 12). Nature and jivas cannot exist without God and His support, and it is in this sense that they are a part of the body of God. Ramanuja's school of philosophy is called 'qualified monism' (Vishishta-advaita) precisely because, as per this school, 'the unity of Brahman is qualified by the sentient and insentient things' (Yamunacharya 1988: 40). But these two are different from God, because while nature is insentient or jada, the jivas are subject to imperfections and suffering. By contrast, God is free from any such deficiencies (Radhakrishnan 1996, Vol. 2: 660). God is the creator of both these things, which emanate from Him, and He is the efficient as well as the material cause of both. The school is known as qualified monism also because its God is not a nirguna Brahman who is abstract, without attributes and indifferent to suffering humanity, but a God with attributes who responds to the love and prayers of His devotees. Ramanuja is eloquent in describing the numerous attributes of God. He calls Him the storehouse of all beneficent qualities (samasta-kalyana-gunakara). He is one, and absolutely so, and yet allows within Himself all the diversity comprising nature and the jivas. His essential and eternal characteristics are that He is sat, chit and ananda (absolute existence/truth, consciousness and bliss). He is omniscient, omnipotent, omnipresent and immanent in the universe, which is His creation and His portion or part, and also transcendental. What is most important for devotees is that He is also accessible to them (sulabha).
'It is the manifestation as the Incarnate and as images that stand for the extreme accessibility of Narayana', according to Ramanuja (Tapasyananda: 49). Narayana as personal God is full of compassion and mercy for all, and loves His devotees and looks after them. He is just and fair to all equally. He is also indescribably beautiful, graceful and sweet (Yamunacharya 1988: 41). Ramanuja's emphasis on the personal God and His superlative attractiveness and goodness is consistent with the immense importance he gives to bhakti in God realisation. This personal and accessible God is not Ramanuja's invention, but is sourced from his reading of the Gita itself, apart from the various Puranas that preceded him and the rich devotional tradition of the Alvars before him.
The culmination of bhakti, according to Ramanuja, is prapatti – total surrender to God and His will. It is a state wherein the devotee does everything as service to God (kainkarya) and offers the fruit of all his acts or works at His feet. Self-interest is totally erased. The devotee is always conscious that everything is the abode and property of God, and he or she has no claims on Him, nor even prayers to Him for wish fulfilment. The entire burden of the devotee's salvation is placed on God. This puts the devotee completely at ease, free from any stress (Yamunacharya 1988: 52–53). Doesn't the devotee then need to put in some effort to win God's favour? Two schools of thought emerged among the followers of Ramanuja in this respect. One, known as markata-kishora-nyaya (the logic of the monkey's young), argues that just as the monkey's young holds on to its mother, a devotee has to take some responsibility to win God's favour. The second, known as marjala-kishora-nyaya (the logic of the kitten), argues on the other hand that just as the mother cat lifts its kitten by the neck and carries it to safety, God will take care of the devotee who surrenders completely to Him. To which of these two schools of thought did Ramanuja subscribe? It is said that when he wrote the commentary on the Brahmasutras he subscribed to the former; but when he later wrote the commentary on the Gita, he changed his stance and subscribed to the latter view (Yamunacharya 1988: 105–6). Two verses in the final (eighteenth) chapter of the Gita lend support to the doctrine of prapatti and particularly to 'the logic of the kitten'. Translated, they read:
Occupy thy mind with Me, be devoted to Me, sacrifice to Me, bow down to me. Thou shalt reach Myself; truly do I promise unto thee, (for) thou art dear to Me.
(XVIII.65)
Relinquishing all Dharmas take refuge in Me alone; I will liberate thee from all sins, grieve not.
(XVIII.66) (Tr. Swarupananda 1982: 397–98)
Release or liberation or salvation (moksha) is, however, left entirely to the grace of God. It is for the devotee, though, to be worthy of this grace. How can this worthiness be attained? When can jivas expect to be granted liberation? The intrinsic nature of jivas is that they are blemish-free (amala) and also eternal. It is when they develop body-conscious ego, mistaking themselves for the bodies, that they attract blemish and become baddha or bound by their karma. God grants free will to the jivas, who become morally responsible for their acts. The first step in liberation consists in developing the knowledge of their true nature as parts of the Divine or Its sparks, and awareness that the body is only an instrument of the self in pure form and that the sense organs of the body should be under the control of the self. The second step in liberation is being engaged in works, but only as a kinkara or servant of God, with the fruits of works offered to Him. The third step is bhakti and then prapatti, as explained earlier. The final liberation comes when the jiva joins God, and that is after death. There is no rebirth for a liberated jiva. Ramanuja rules out jivan-mukti, that is, liberation even while living. The process of karma continues until death, but comes to an end with death in the case of a liberated jiva and continues even after death in the case of other jivas. Even in the liberated state, Ramanuja observes, there is no full absorption of the jiva into God; the jiva enjoys only fellow-equality status and nearness to God but not complete absorption into Him. 'The released soul is conscious of itself as separate but yet united with the highest Brahman' (Yamunacharya 1988: 124).
It amounts to saying that all jivas are in a process of moral or spiritual evolution: from a dormant state in the distant past, in the early phase of the evolution of life in the world, through a struggle for survival and progress in their embodied state, and ultimately to a state of liberated perfection in fellowship with God Himself. Not all jivas attain the heaven of perfection, obviously, and the bulk of them take birth again and again.
Kumarappa observes that the main contribution of Ramanuja, based on his interpretation of the Gita, lies in reconciling two seemingly opposite views of the Supreme – the impersonal nirguna Brahman and the personal saguna God, who loves and likes to be loved by the devotees. While the Upanishads and Shankara focused on the chit (consciousness) aspect of the Supreme, Ramanuja drew attention to the sat (being; benign or good) and ananda (blissful) aspects too, based again on the Gita. This reconciliation between the impersonal and the personal, and, nirguna and saguna, is not possible on the basis of dry reasoning, but only on the basis of mystic experience of an ardent devotee. Kumarappa feels that it was nothing short of a revolution on the part of the Gita, which was rightly highlighted by Ramanuja (Kumarappa 1979: 58, and Part II of the book exclusively on Ramanuja – esp. 164–93).
Though Ramanuja's Gita Bhashya is hardly polemical on the whole, his attacks on Advaita, where they occur, are sharp. For example, he almost ridicules Advaita's rejection of the difference between God and jiva, and between jivas. He asks: if it were so, what was the point of the Lord's teaching to Arjuna? 'For no one who is not out of his senses would undertake to give any instruction to his own reflections in mediums such as a... mirror, knowing, as he does, that they are non-different from himself' (Adidevananda 2014: 65). Shankara's idea of two satyas or realities – ultimate (paramarthika) and practical (vyavaharika) – is accepted neither by Ramanuja nor by Madhva. A social significance of Ramanuja's philosophy is that it implies the equality of all human beings, all having the dignity of being parts of the body of the same Divine. He understood this implication clearly and applied it in his life. He gave equal respect and love to all, irrespective of whether they were so-called untouchables or high-caste Brahmins. In fact, Shankara's Advaita philosophy also has the same implication of the equal dignity of all, but his distinction between the two truths – ultimate and practical – could permit (mistakenly at least) some dichotomy between philosophy and practice. In Ramanuja, there is no such dichotomy, since he makes no distinction between the two truths.
## Madhva
The next great commentator is Madhvacharya, or Madhva in short. In his case also, there is no agreement among scholars about the dates of birth and death. While S. Dasgupta thinks that he was born in 1197 CE and died in 1276 CE (Dasgupta 1975, Vol. IV: 52, 54), B.N.K. Sharma takes these dates as 1238 and 1317 CE respectively (Sharma 1986: xv). There is agreement, however, on the fact that he lived for seventy-nine years and was born in a Tulu-speaking Brahmin family in a village called Pajaka near Udupi in Karnataka. He was named Anandatirtha by his Guru on taking sannyas, and later also Purna-prajna on taking the leadership of his matha or monastery. But he is more popularly known as Madhvacharya or Madhva. He was tall and muscular, and had no compunction about combining a little indulgence in sports such as running races and wrestling with his scholastic and spiritual pursuits, at least in his early life. Like his main intellectual adversary Shankara, Madhva also toured widely all over India, both in the north and in the south, debating with scholars and defeating them. He too was a prolific writer, producing some thirty-seven works in Sanskrit, including two on the Gita – Gita Bhashya and Gita Tatparya – besides commentaries on the Brahmasutras, the ten principal Upanishads and the Bhagavata Purana. In his Gita Bhashya, Madhva does not comment on all the verses of the Gita, but only on those which in his view required comment or explanation. Thus, only 385 out of the 700 verses of the Gita are commented upon (Sharma 1989: 2). Madhva's writings, known for precision and brevity, often required further exegesis. This was provided by his disciples, an important one being Jayatirtha, who wrote Nyayadipika on Madhva's Gita Tatparya and Prameyadipika on his Gita Bhashya, besides other works (Harshananda 2008, Vol. I: 562).
In introducing Madhva's philosophy, B.N.K. Sharma observes that at the time of Madhva, India was in turmoil, facing critical times due to Muslim invasions, and a philosophy such as Shankara's, which held the world to be unreal or maya, needed to be countered first as a precondition to resisting the invaders. This need was fulfilled by Madhva: there could be no indifference to the problems of the world which all were facing, whether in the name of maya or of seeking moksha. Interestingly, Madhva recommended the unselfish continuation of one's duties in the world even after spiritual realisation or enlightenment, which he aptly called jnanottara-karma (post-realisation karma) (Sharma 1989: 20). Madhva, according to Sharma, strongly felt that mayavada was a source of inner weakness, as it undermined the role of one's duties to the world or society (Sharma 1997: 3). The notion of mayavada had been attacked by Bhaskara and Ramanuja too, as observed earlier, but Madhva gave a further impetus to this attack.
Madhva, however, distinguished between Independent (swatantra) Reality, which is the Supreme, and dependent (a-swatantra) reality, which is the world. Interestingly, this corresponds to a similar distinction made by Shankara between paramarthika satya and vyavaharika satya respectively. But Madhva would not accept any hint of the latter reality being illusory or unreal in any way. The world is absolutely – not just relatively – real, according to him, though it is entirely dependent on the Divine for creation, maintenance and dissolution. This dependence does not reduce its reality.
While the Supreme is One, the world is diverse and so are the souls or jivas in it. The jivas are different both from the Supreme and from each other. Jiva is also different from the body–mind–ego complex. The Gita mentions (XV.16, 17) two Purushas, one kshara (perishable) and the other akshara (imperishable), both being different from each other and also from Purushottama (the Supreme Person), who pervades all worlds and sustains them. This mention makes sense only in the context of Dvaita philosophy. Clearly, the perishable kshara purusha refers to the body–mind–ego complex, while the imperishable purusha means the jivas, which are many and different from the kshara purushas as well as from the Supreme Master, Ishwara. The Gita's support to Dvaita (and also Vishishta-advaita) philosophy is evident from several other verses also, as, for example, IV.11, VII.21–22, IX.22, XVIII.55, 61–62, 65–66.
Madhva speaks about five types of differences or pancha-bhedas. They are as between God and the insentient world (jada), God and souls (jivas), the world and souls, the souls themselves, and insentient entities within the world. While the basic or ultimate reality for Shankara is unity, it is unity in diversity for Ramanuja, and diversity for Madhva. Nevertheless, there is in Madhva's thought a unifying or integrating principle even behind the diversity in the world, which is provided by the dependence of all non-divine realities on the Divine. Despite the emphasis on the bhedas or differences, even in Dvaita the reach of the Divine is everywhere, and in that sense, the Divine is immanent too. God is omnipresent in all the three philosophies. Both in Dvaita and Vishishta-advaita, the Divine is an intensely personal God, having all benign attributes, particularly compassion for the devotees. In Advaita, on the other hand, the emphasis is on the impersonal nirguna God, but in the practical (vyavaharika) world, God can be conceived as personal and responsive to the prayers and the love of devotees as a preparatory step to seeking nirguna Brahman. This has implications for bhakti as sadhana (means of spiritual seeking and liberation). In Vishishta-advaita and Dvaita, bhakti is the direct means of liberation. By contrast, in Advaita, though bhakti to a personal God is important and useful, it is a stepping stone to jnana, and it is jnana alone which leads to final liberation. So it is with karma. Accepting one's moral responsibilities and doing duties unselfishly is certainly important and necessary, but final liberation comes only when it leads to jnana. In Vishishta-advaita and Dvaita, the spiritual seeker simply surrenders all fruits of works or karma at the feet of the Supreme and is thus freed from any attachment to fruits.
The social significance of Dvaita philosophy is the importance given to diversity or pluralism. In this beautiful and enormous garden of God, there is a great variety of plants, flowers, fruits and beings, each with its own beauty and role, and all coexisting harmoniously under the same Master who nourishes all without discrimination. To deny diversity and homogenise everything goes against the very spirit of God's creation. Despite diversity, there is also a stress on equality of treatment, since compassion to and equal concern for all, especially the weak, is the innate quality of God (saguna Brahman) in all the three philosophies. Human behaviour has to be harmonious with God's will and cannot afford to go against it, whatever be the philosophy.
Though there are differences (see table in the Appendix to the chapter) between the three schools of Vedanta, they are not so overwhelming as to be disharmonious. They are only different ways of conceptualising the relationship between the selves or souls and God, depending on one's inclination to the Divine. These inclinations are personal or individual, and cannot genuinely be community or caste based. They can not only vary between individuals but also change over time. There can be no doctrinaire rigidity about it. For example, attempts have been made to encapsulate the sum and substance of the three schools of Vedanta in three short Sutras in Sanskrit: Dvaita by Tasmaivaham or Tasyaivaham, Vishishta-advaita by Mamaivasau, and Advaita by Sa evaham. There is a verse in Sanskrit which says that all the three can be practised by a sadhaka:
Tasyaivaham mamaivasau sa evaham iti tridha /
Bhagavat-sharanatvam syat sadhanabhyasapakatah //
(Original source unknown; quoted in Anandashram 2014: 7)
The verse means: '"I am His only", "He is mine only" and "He is me only," thus in three ways, one may surrender himself to God in the course of spiritual practice' (translation by the author).
D. V. Gundappa (popularly known as DVG), an eminent poet-philosopher from Karnataka, has quoted and explained these Sutras in his treatise on the Gita in Kannada in order to bring out the harmony (samarasa, as he calls it) between the three schools (Gundappa 2001: 565). Tasmaivaham means that 'I belong to Him' or 'I am meant for Him', indicating complete surrender to God. It represents the height of devotion or bhakti. Though there is Dvaita or dualism in the sense of the separateness of the self from God, there is also at the same time the awareness that the self is His. There is a feeling of being a servant of God. In Mamaivasau, which means 'He is mine only' and is taken to represent Vishishta-advaita, the feeling is from the other side. It is the beloved's feeling for her lover or a child's possessiveness about its mother. Though there is some awareness of separateness, the dominant feeling is one of the unity of both, a unity in diversity. In Advaita, there is complete identity, Sa evaham, which means 'He is me!'. Any separateness between the self and God is submerged under the consciousness of oneness. Intense bhakti is required in the initial stage; when this state is attained, it transcends any dualism, and the person becomes a 'realised one', a jnani, whom the Gita describes eloquently. Gundappa terms these three states of mind respectively as Svatah-samarpana (self-surrender), Svatah-sahabhaga (the self as associate of God) and Svatah-vilayana (the self as merged into God). The significance of Gundappa's interpretation of the three schools of thought, which have fought bitterly with each other at least on the debating plane and consider themselves rigidly separate from each other, lies in the fact that the same person can have these three states of mind at different times, and one state of mind can easily move into another. He illustrates this with the example of a married couple in love with each other.
There are some duties which they do separately, in the Dvaita state, and some which they do together as part of the family, in the Vishishta-advaita state. But when they are at the height of their love and forget each other's separateness, they are in the Advaita state!
## Others in the Sanskritic tradition
There have been several other commentators as well, apart from the three discussed earlier: Abhinavagupta (in the tenth to eleventh century) in the Kashmiri Shaiva tradition, Nimbarka, Vallabha, Madhusudana Saraswati, Raghavendra Tirtha, Chaitanya and more. Nimbarka (twelfth century) subscribed to the theory of Dvaitadvaita (dualism-cum-non-dualism), holding that the soul (jiva), the world (jagat) and God are different from each other, yet the existence and the activities of the soul and the world depend on God's will, all the three constituting one integrated system. He emphasised bhakti as the most promising path to moksha. Vallabha (1473–1531) was a prominent Vaishnava philosopher who advocated shuddhadvaita (pure non-dualism or monism) as his philosophy and pushti-marga as the path of spiritual seeking. Vallabha does not consider Shankara's Advaita as pure, because it accepts maya, which negates non-dualism. Brahman is the only reality, and there is no secondary reality. According to Vallabha, maya is a part and a real power of Brahman, and not a different entity. Pushti-marga is essentially bhakti, but based on unconditional love of God, which wins His grace. It is different from formal bhakti based on rituals or worship. It is through this unconditional love that God realisation and final union with God take place. There is, however, a logical problem in this philosophy. Bhakti involves a difference between the devotee and God – the object of devotion – though the ultimate goal is attaining unity between the two. Shankara gets over this problem through the concept of maya, under which alone there is this difference. But Vallabha rejects this concept as untenable. If, on the other hand, he accepts that under non-dualism the difference between the devotee and God is illusory, he would be indirectly resorting to mayavada.
Madhusudana Saraswati (1490–1580) was an Advaitin who simultaneously enriched the literature on Advaita philosophy as well as on bhakti. He did not see a conflict between non-dualism and devotion to a personal deity. His work on the Gita, Gudhartha-deepika, which reflects his philosophy, is rated highly. He declares that the core message of the Gita, as well as that of other scriptures, is self-surrender to God, which is the culmination of all spiritual practices. This stand brings him close to other philosophies. According to him, the main theme of the first six chapters of the Gita is karma-yoga (steadfastness in selfless action, in Gambhirananda's translation) and that of the last six chapters is the yoga of knowledge. But these two yogas are seemingly opposite and can be combined only through bhakti, which is the main theme of the middle six chapters of the Gita (Gambhirananda 1998: 21–22). Thus, there is coherence and a logical order in the sadhana (spiritual striving) commended by the Gita, in M. Saraswati's view. Also, all the three yogas are necessary to attain spiritual liberation or moksha. What is recommended is a stepwise approach, starting with the yoga of selfless action and going on to bhakti in God, both of which purify and prepare the mind of the seeker to gain knowledge or jnana. M. Saraswati says that the Gita is an enlightening guide for all those who want to overcome the sorrow, delusion and bondage inherent in mundane existence, and attain the blissfulness of liberation, which is the highest human goal or purushartha (Gambhirananda 1998: 26). Bondage is not a natural condition of the self, M. Saraswati clarifies. It is due to the limiting adjuncts of the mind, and the Gita helps the seeker of liberation in the right spiritual practice, which will gain for him or her the right knowledge which finally liberates (Gambhirananda 1998: 89–91).
An interesting characteristic of Saraswati's work is that he honestly puts himself in the position of an opponent, raises objections and then rigorously replies to them, in the style of several classic works on philosophy in Sanskrit.
All of these great philosopher-sages contributed to disseminating the Gita in different parts of India and helped in its more nuanced interpretation, further enriching Hindu philosophy. All of them wrote in Sanskrit, which not only played the role of a link language for the country until English took its place, but also assured their works a secure place in the long tradition of scholarship and spiritual knowledge. Sanskrit thus became a storehouse of Indian philosophy.
The lives of the three great acharyas and their perspectives have been dealt with at some length in the earlier text because of the significant role that they played in making the Gita much better known among people. Their perspectives left an indelible impact on the development of philosophical thought in India, which is felt even today. Since the Gita itself does not include systematic discussions of philosophical issues, the contributions of the three acharyas, presented in a logically rigorous and coherent manner, helped a better understanding of the Gita itself, though put in alternative thought-provoking perspectives. These perspectives need not be taken as conflicting with each other; the Gita supports all of them in a way that suggests no conflict or mutual contradiction between them. The focus of the three acharyas, as well as of others in the Sanskritic tradition, was, however, on the theological and metaphysical aspects and on spiritual striving (sadhana) to gain liberation from bondage to mundane worldliness (or samsara) and realise the Ultimate. Though they did not altogether ignore ethics, they tended to assign it only the role of being the means to purify the mind and prepare oneself for sadhana. The issue of serving people or society at large and meeting larger social needs, apart from individual spiritual aspirations, was rather sidetracked. It hardly received any noticeable attention from the classical commentators, though the Vedic prayers had the welfare of all people in mind and, what is more, this concern had an honourable place in the Gita itself. It was with Jnaneshwar that the focus started becoming broader and more comprehensive, giving more attention to ethics and ways of making one's life in the world itself more meaningful.
## Jnaneshwari – The Gita goes to people at large
Jnaneshwar, also known as Jnanadev and Jnanoba, is a unique saint. He introduced the Gita, probably for the first time, to common people in an Indian regional language. The earlier commentators wrote only in Sanskrit, though they gave discourses in Indian regional languages. Jnaneshwar wrote in old Marathi (prevalent in the thirteenth century) in the lyrical and easy-to-recite 'Ovi' metre. He called his work Bhavartha-dipika (one which sheds light on the essence of the Gita), but it became known popularly as Jnaneshwari. It is poetic and is not a mere literal translation, but more an explanation embellished with literary flourishes like similes and illustrations. The ease with which he elaborates and interprets even some of the perplexing verses of the Gita is remarkable. A good English translation of Jnaneshwari by M. R. Yardi is available (2011; first edition in 1991). Jnaneshwari was originally a series of discourses given to people by Jnaneshwar at a temple in Alandi, his native place near present-day Pune. His elder brother and guru, Nivrittinath, used to be present during these discourses as an informal moderator, and questions and doubts from the audience were freely raised and discussed, all of which are recorded faithfully in the text. The dialogic form of the Gita was thus extended into a seminar in the Jnaneshwari, with Jnaneshwar being the main speaker, of course.
Another unique aspect of Jnaneshwar, though at a personal level, was that he lived for only twenty-one years. He was born in 1271 CE (about five years before Madhva's death). He had two more brothers and one younger sister. They had a difficult childhood as the family – though Brahmin – was treated as outcaste by society. Their father, who had taken sannyas (a renunciant's vows), had returned to married life on his guru's orders. This amounted to breaking the established convention according to which a person who had once taken sannyas could not marry and take to family life. Jnaneshwar's family was ostracised for this reason, finally leading to the suicide of his parents by drowning in a river. But the siblings, under the eldest brother Nivritti's leadership, managed to survive and even get a good education. Jnaneshwar showed extraordinary brilliance and became accomplished in the Vedas and Upanishads quite early. He was initiated into the Nath tradition of yoga by Nivritti and became an adept in this order of yogis. Besides the Jnaneshwari, he also wrote a highly acclaimed treatise titled Amritanubhava (Experience of Immortality), also in old Marathi. He belonged equally to the order of bhakti sants and composed several devotional and philosophical abhangs, which are still popular and sung. He mixed freely with the saint-poets of the time in Maharashtra, who emerged from among the common people, and was highly respected by them. He decided, however, that by the age of twenty-one his life's mission was accomplished and the time was ripe to merge with the Immortal, leading to his taking sanjivan samadhi (sitting in meditation in his tomb until so-called death). The samadhi in Alandi attracts pilgrims in their thousands even now. He is counted among the first sants who launched the bhakti movement in Maharashtra. His work on the Gita, Jnaneshwari, reflects his dual background clearly, with an emphasis both on meditative yoga and on bhakti.
Jnaneshwar was an Advaitin, being a follower of Shankara. But he did not accept the Jaganmithya implication of Shankara's teaching. The world is real, and not illusory; it is a chidvilasa (a joyous play of the Supreme Consciousness or Brahman). Along with the human beings in it, the world is a natural and joyous expression of the Supreme Reality. We, the humans, are all players in this cosmic play and have to do our duties in the world. Jnaneshwar does not accept Shankara's prescription of renunciation; his is considered a 'partially activist' interpretation of the Gita (Agarwal 1993: 246), because his emphasis is much more on bhakti than on karma. He suggests namajapa, repetition of God's name with devotion, as an easy form of bhakti, adding to its exposition in the Gita. He points to the mutual love of Krishna and Arjuna as an essential part of the Gita while explaining the importance of bhakti. In fact, before ending his poetic presentation of the Gita, Jnaneshwar shows Krishna drawing Arjuna into a close embrace, suggesting the union of the devotee with his God. An ardent devotee's experience of bliss surpasses that of even a jivan-mukta (a seeker who is liberated while living). A devotee (bhakta) can also be actively engaged in the world even while devoted to God. Moreover, as Jnaneshwar emphasises, the path of devotion is easily accessible to all, poor and rich, low caste and high caste, women and men. He also expounds on Patanjali's path of yoga and meditation for the realisation of Brahman, which for him does not conflict with bhakti. Though Jnaneshwar considered himself a follower of Shankara and an Advaitin, he was not just a knowing or intellectual Advaitin, but also a feeling, inclusive and socially aware Advaitin, experiencing the same Divine in himself and all.
In Jnaneshwar's view, the world in its basic essence is no different from Brahman. Jnaneshwar often takes the example of gold and gold ornaments to illustrate his point. The important difference, however, is that we have consciousness and will, while ornaments do not. But this is natural, because the Supreme Brahman is itself the principal centre and origin of consciousness, which we too are bound to have because we are created from it. There is an apparent contradiction between Krishna's statement in verse 4 of Chapter 9 (mat-sthani sarva-bhutani – all beings are within Me), his statement in the very next verse (na cha mat-sthani bhutani – the beings are not in Me), and again in the next verse (tatha sarvani bhutani mat-sthani – all beings are within Me). Jnaneshwar explains it by saying that this is so because everything is basically Brahman. He says in Ovi (verse) 88 of Chapter 9 of Jnaneshwari that according to the Lord, the world is not different from Him nor is He different from the beings in the world (Yardi 2011: 123). Yet, bhakti – though involving dualism – is emphasised as a path to God realisation in several places, as in the original Gita and as with Shankara. This is because, once in this world as His players, we are subject to limiting conditions, which Jnaneshwar calls upadhis, and bhakti is necessary to transcend these limiting conditions and to realise one's own real nature.
Jnaneshwari has an important place in the history of the Gita. It is mainly because of Jnaneshwari that the Gita became popular among common people, particularly in Maharashtra. According to Jnaneshwar, the Gita is meant essentially for the common people, irrespective of caste and gender. In the last chapter (viz. 18) of his work, he observes in Ovis 1,456–1,460 that the Vedas were niggardly in imparting their knowledge, restricting it only to the three upper castes and giving no 'elbow room' to women and Shudras, who equally needed this knowledge. But the Lord gave the Gita to the world so that everyone can have access to it (Yardi 2011: 349). Jnaneshwar could be said to have carried out the unfinished task of the Lord, because he made the Gita, which was in inaccessible Sanskrit, available to common people in lucid and simple Marathi. He thus launched the era of translations of the Gita into the various languages of the people.
No proper records are available to indicate when translations of the Gita first appeared in the various Indian regional languages. Kumaravyasa's poetic translation of the Mahabharata into Kannada, composed in the fifteenth century, contains a summary of the Gita as well. The next to follow Jnaneshwar, writing in the vernacular on the Gita as an independent text, though after a long gap, was Akho (1591–1656), a Gujarati poet, who composed Akhe-gita. However, it was not a translation as such but his own formulation of themes in the Gita (Desai 2014: 9). It was Narahari (1611–63) who first translated it into Gujarati. Even before this, in the sixteenth century itself, the Gita was translated into Persian by Abul Fazl, a respected scholar in the Mughal emperor Akbar's court. The translation seems to have been done at the instance of Akbar himself. There were also translations of the Gita into Hindi and Urdu in the medieval period itself. A palm leaf manuscript of a poetic translation of the Gita into Kannada by Nagarasa, titled Karnataka Bhagavadgite, was discovered towards the end of the nineteenth century and published in the early twentieth century. But details of when it was written are not available from it. It is written in the style of Kumaravyasa's Kannada Mahabharata, in the same Bhamini-shatpadi metre, but the linguist and lexicographer G. Venkatasubbaiah feels that its language is close to the spoken language of eighteenth-century Karnataka (Anon 2011: vi). The style, spirit and approach of the work, however, are closer to medieval works on the Gita than to the modern works.
The foregoing interpreters of the Gita affirmed the authority of the Vedas and the Upanishads, but were more inclined towards the Upanishads. They also transcended both, like the Gita, in their emphasis on bhakti. The consensus appeared to be on jnana–karma–bhakti samuchchaya, that is, combining the paths of knowledge, action and devotion. The modern interpreters too continued this tradition. As observed earlier, what distinguishes the classical and medieval works on the Gita from the modern is that while the former focused on theology, metaphysics and sadhana (the means of seeking God realisation), the modern works give more emphasis to the ethical implications of the Gita and its social relevance, without ignoring theology and sadhana. This difference in approach will come out clearly in the next three chapters.
Surprisingly, iconographic or pictorial representations of the Gita's Krishna and Arjuna engaged in dialogue have been rare before the modern period. Robert Minor mentions two such illustrations, one from a frieze in the Halebidu temple in Karnataka belonging to the late twelfth century and the other belonging to the fourteenth century in a temple at Pushpagiri in Andhra Pradesh (Minor 1991-a: 4). Though there were many commentaries and vernacular translations, the Gita did not appear to have been a noticeable part of public religious consciousness before the modern period starting from the second half of the eighteenth century. Krishna as the object of adoration by Gopikas had been more conspicuous in popular imagination in the medieval period than Krishna the teacher of the Gita. Nevertheless, the importance given to the Gita in the medieval period cannot be belittled. Almost every prominent traditional religious leader thought it to be a minimum duty to write a commentary on the Gita to establish his credentials as a spiritual leader and a scholar.
In any case, while the Gita was highly regarded mainly among the Sanskritist elite before the early thirteenth century, it started being established among the common people only after that, thanks to Jnaneshwar's trendsetting translation, and well before the eighteenth century, it came to be accepted as an important sacred scripture of the Hindus by a significant part of them. Modernity did not dent the fame and prestige of the Gita in any way, but on the contrary, only accelerated its popularity as never before, not only among Hindus but also among others. It was in the modern times that it came to be regarded by a majority of Hindus as the sacred book of Hinduism, like the Bible among Christians and the Quran among Muslims, though – paradoxically – it has hardly displaced other sacred texts of Hinduism.
## Notes
1 In the Padma-Purana, Uttara-khanda, Chapters 171–189, there is a detailed description of benefits accruing from the study and recitation of the Gita. In the Skanda-Purana, there is an appreciative reference to the Gita in three verses (49–51) in the section on Kartika-masa-mahatmyam in its second chapter. According to Hayavadana Puranik, an authority on the Puranas, the source of the Gita-mahatmyam, given at the end in many publications of the Gita, is unknown, since the Varaha-Purana itself does not have it though attributed to it (Personal communication, 2 January 2015).
2 For Shankara's commentary, see Sastri (1977), Warrier (1983) and Gambhirananda (1984); for references to Bhaskara's, see Dasgupta (1975, Vol. 3: 6), Arvind Sharma (1985: 16–41, esp. footnotes 1 and 3 on p. 16) and Harshananda (2008, Vol. I: 287); for Ramanuja's, see Sampatkumaran (1985) and Adidevananda (1992, 2014); and for Madhva's, see B.N.K. Sharma (1989).
3 While explaining the Advaita philosophy, it has been pointed out earlier that the interpretation of maya as the world being unreal and denying our moral responsibility in the world is misleading, and Shankara never meant it in this negative sense.
4 This is done in Chapter 2, verses 54–59, while describing Sthitaprajna; in Chapter 14, verses 22–26, while describing a person who has transcended the three gunas; and again in Chapter 18, verses 49, 51–56.
5 Svatah samarpana or self-surrender (in the sense of prapatti) is emphasised in Vishishta-advaita also.
6 The account about Nimbarka, Vallabha and Madhusudana Saraswati is based on the respective entries in the three volumes of Harshananda (2008), and also on Gambhirananda (1998) for Madhusudana Saraswati. Gudhartha-deepika by Madhusudana Saraswati has been translated by Swami Gambhirananda, with an Introduction by Swami Atmaramananda (Gambhirananda 1998). An interesting thing about Madhusudana Saraswati narrated in the Introduction is his founding of a militant order of sannyasis, called the Naga sect, to counter frequent attacks on Varanasi by militant Muslim clergy. Thus he was not just a philosopher, but quite a worldly man too who rose to the occasion as required. This narration is based on research by Prof. J. N. Farquhar (1925).
7 Cf. 'Introduction' by Swami Atmaramananda in Gambhirananda (1998: 18).
8 An Ovi has four padas (legs or steps), the first three with six letters each and the last having only four, together making twenty-two letters. This is shorter than the verses in the Gita, most of which have thirty-two letters in four padas of eight letters each, and some verses are larger still with each pada having eleven letters. While the Gita has 700 verses, the Jnaneshwari has some 9,000 Ovis spread over eighteen chapters corresponding to the respective chapters of the Gita.
9 It was mandatory to have the approval of one's mother (if unmarried) or wife (if married) for taking sannyas. Jnaneshwar's father had not informed his guru that he had not obtained his wife's approval for it. Once his guru came to know about it, he ordered him to return to his wife and family life.
## Appendix
The three Acharyas and their philosophies
| | Shankara (eighth century) | Ramanuja (eleventh to twelfth centuries) | Madhva (twelfth to thirteenth centuries) |
|---|---|---|---|
| Name of the philosophy | Advaita (monism) | Vishishta-advaita (qualified monism) | Dvaita (dualism) |
| Nature of the Supreme | Impersonal (Nirguna) | Personal (Saguna) | Personal (Saguna) |
| Nature of the system | Unity* | Unity in diversity | Diversity |
| Nature of the world | Relatively real, ultimately unreal | Absolutely real (leela or play of God) | Absolutely real (leela or play of God) |
| Relation of the jivas to the Supreme | Basically oneness, separateness illusory | Servant-master; part of one pan-organistic system | Basically separate but dependent |
| Concept of maya | Veil or projection on the basic oneness | Magical/creative/mysterious power of the Supreme; conditioning in samsara due to which God is concealed/forgotten | Magical/creative/mysterious power of the Supreme; conditioning in samsara due to which God is concealed/forgotten |
| Means of liberation | Ultimately only jnana, but bhakti and karma can lead to jnana | Only bhakti aided by jnana and karma, or directly through prapatti (total and unconditional self-surrender to God) | Only bhakti aided by jnana and karma, or directly through prapatti (total and unconditional self-surrender to God) |
| Guiding principle | Sa evaham (He is me!) | Mamaivasau (He is mine only!) | Tasmaivaham (I am meant for Him!) |
| State of mind of the seeker (a la D. V. Gundappa) | Svatah-vilayana (self merged into God) | Svatah-sahabhaga (self as associate of God) | Svatah-samarpana (self-surrender to God)** |
| Nature of liberation | Sarupya (realisation of oneness) (possible when living) | Sayujya (joined with God)*** (possible after death) | Samipya (nearness or living with God)*** (possible after death) |
| Supportive verses from the Gita (examples only) | IV.24; VI.31; X.20; XIII.22, 32; XV.12–15 | VII.21–22; IX.22; XV.12–15; XVIII.55, 62, 65, 66 | VII.21–22; IX.14, 15, 22, 34; XV.16–17; XVIII.62, 65, 66 |
* Only unity is real in a fundamental sense in Advaita, diversity being only relatively or apparently real.
** Relevant to Ramanuja also.
*** With separate identity of the jiva/self retained.
Note: The table here presents only an approximate view; see the text of the chapter for a more nuanced presentation of differences as well as harmony between the three.
# 3 The Gita Goes Global
## Wilkins's English translation, impact and reactions
We can discern three stages in the spread and growing acceptance of the Gita. The first was when Shankara wrote his bhashya (commentary) on the Gita. Though there were a few commentaries on the Gita earlier, and its endorsement by a few Puranas had helped its popularity, it was after Shankara's bhashya and his tireless travel all over the country that the Gita came into the mainstream of Hinduism with a bang. Several more commentaries followed it, not only by Ramanuja and Madhva, but also by others mentioned in the last chapter. There may have been differences in interpreting the Gita, but all agreed that it was a very important and authoritative religious text. Its position as a sacred text was consolidated. The second stage came in the thirteenth century, about five centuries after Shankara, when Sant Jnaneshwar liberated the Gita from the confines of Sanskrit and wrote his Bhavartha-dipika on it in the people's own spoken language, Marathi. It launched the era of translations into the vernacular, making the Gita more popular among common people. This trend continued well into the modern period and shows no signs of slowing down.
The third stage began with the direct translation of the Gita from the Sanskrit original into English by Sir Charles Wilkins (1749–1836), made at the instance of Warren Hastings (1732–1818), the then Governor General of India, and published in London in 1785. It was titled Bhagavad Geeta or the Dialogues of Kreeshna and Arjoon, in Eighteen Lectures; with Notes. Hastings, who encouraged and supported it, had to justify its printing and publication before the Court of Directors of the East India Company. Though he may have done so on the grounds of the need to know the culture and religion of the people ruled by the British, Hastings had a genuine interest in India's cultural heritage and was sure that the Gita 'will survive when the British in India shall have long ceased' (quoted by Desai 2014: 10). Hastings argued that 'reading the Gita would help a British public overcome its previous prejudice about Indian savagery, and acquire a more generous and true estimation of native dignity as well as accomplishment' (Davis 2015: 94). He also believed that the British should govern the Indian territories under their control, 'not according to British law but according to the laws and customs of the local residents' (Davis 2015: 76). Knowing their religion was a part of this policy. The Gita was chosen because, as Wilkins wrote in the preface to his translation, 'The Brahmans esteem this work to contain all the grand mysteries of their religion' (Davis 2015: 79). Wilkins had located himself at Benares (Varanasi) to study Sanskrit and Sanskrit texts. The translation of the Gita was a result of collaboration between him and the Sanskrit pundits there, particularly the pundit Kashinatha Bhattacharya. Davis notes that there were no Sanskrit–English dictionaries then, and Kashinatha prepared a 10,000-word vocabulary and a list of Sanskrit verb roots to help Wilkins and William Jones, who was also deeply interested in translating Sanskrit texts (Davis 2015: 79).
A flood of translations of other texts followed Wilkins's Gita (1785) – Hitopadesha (1787), Shakuntala (1789), Gita Govinda (1792) and the Laws of Manu (1794), with many more to follow (Davis 2015: 76). William Jones took a sustained interest in Sanskrit, founded the Asiatic Society in Calcutta in 1784 and, apart from translating several Sanskrit texts, published a critical edition of Amarakosha in 1808. Jones is considered the father of Indology, which emerged as a separate discipline devoted to Indic studies. It interested many in Europe, particularly in Germany, and gave rise to the idea that ancient Sanskrit could well have been the source of the Indo-European languages. Wilkins's Gita was thus a trendsetter.
For Wilkins, the significance of the Gita lay neither in showing the paths to liberation as interpreted by the earlier Indian commentators, nor in being a moral guide for living, but in the religious reforms it proposed within Hinduism. Davis quotes him in this regard:
It seems as if the principal design of these dialogues was to unite all the prevailing modes of worship of those days; and by setting up the doctrine of the unity of Godhead, in opposition to idolatrous sacrifices, and the worship of images, to undermine the tenets inculcated by the Veds;... the design was to bring about the downfall of Polytheism; or, at least, to induce men to believe God present in every image before which they bent, and the object of all their ceremonies and sacrifices.
(as quoted in Davis 2015: 81–82)
Obviously, Wilkins did not know that the religion of the Vedas and Upanishads had no idolatry or image worship, and that they already had the concept of the unity of Godhead. Neither he nor Hastings knew much about the Vedas and the Upanishads, beyond being aware that such texts existed. Wilkins saw the Gita from a Christian perspective and, interestingly, saw coherence and consistency between his own faith and the teaching of the Gita. This notion later helped in perceiving the universality of the Gita.
There were two younger contemporaries of Warren Hastings, the evangelical Christian Charles Grant (1746–1823) and the utilitarian James Mill (1773–1836), both of whom were quite opposed to Hastings's outlook and Orientalist enthusiasm for India and Sanskrit texts, including the Gita. For both of them, India was scarcely above the savage level in the 'evolutionary scale of civilisation'. This was not because of racial differences, but due to the 'political and cultural despotism' from which India suffered. Only a profound transformation of society could save India. While Grant would allow a great role for Christian missionaries in this transformation, Mill favoured the secular process of modernisation (Davis 2015: 94–95). 'In Mill's view', Davis explains, 'religion ought to provide a depiction of the cosmos as a connected, perfect system governed by general laws and directed toward benevolent ends' (Davis 2015: 97). Mill points to the account of Arjuna's awe at Krishna's all-encompassing form and observes that this is a 'monstrous exhibition' of a guilty cosmology. Mill also says that yogis (as per the Gita) are required to renounce all moral duties and affections, a torture which, in his view, the religion of the Hindus requires. Davis rightly observes here that Mill failed to notice that the Gita, on the contrary, requires yogis to work in the world as per dharma, which includes both moral duties and affection. Davis concludes on Mill with the observation that his 'selective decontextualizing method of reading set the horizon of expectations for other colonial period English readers approaching the Gita and other classical works' (Davis 2015: 99). The attack by Grant, Mill and the like, however, hardly checked Orientalist enthusiasm for things Indian and the Gita. Nor did it halt the global spread of the Gita in and beyond Europe. As if to spite such criticism, one more direct translation into English appeared, by J. Cockburn Thomson in 1855 (though after a long gap after Wilkins), with more to follow.
Wilkins's Gita was not the first cross-cultural translation of the Gita; that credit probably belongs to Abul Fazl, who translated it into Persian at the instance of Emperor Akbar in the sixteenth century. There was also a Latin translation of the Gita by an Italian Jesuit missionary, Fransisco Benci, in the sixteenth century, from which a re-translation was done into Polish by Stanislaw Grochowski in 1611 (Brockington 2002: 100). Yet, Wilkins's Gita proved to be an important landmark in the history of the Gita. More than others before, it induced translations into other European languages, and its copies crossed the Atlantic and stimulated the curiosity of many in America and even impressed them. For the East India Company, which got it published, it proved to be a good investment well beyond its expectations.
Wilkins's translation had a greater impact abroad than the earlier Latin and Polish translations. Several poets writing in English were influenced by the Gita as a result, such as Robert Southey, William Blake, Wordsworth and Coleridge in Britain, and Ralph Waldo Emerson and Henry David Thoreau in America (Brockington 2002: 102). Thoreau (1817–62) was a leading American poet, philosopher, transcendentalist and environmentalist, whose work on Civil Disobedience was to influence Mahatma Gandhi later. Thoreau saw in the Gita a powerful advocacy of the discipline of a muni (sage), 'preferring the cultivation of wisdom through contemplation but not excluding action in the concentration on knowledge', and believed that the Gita epitomised the best of Eastern spirituality and that the West could learn much from the text (Robinson 2013: 104). Thoreau observed further that 'the New Testament is remarkable for its pure morality; the best of the Hindoo scripture [the Gita], for its pure intellectuality' (Robinson 2013: 105) and hinted that they were thus complementary to each other. Elsewhere, however, Thoreau does appreciate the 'moral grandeur and sublimity' of the Gita (Robinson 2013: 106). Robinson observes that 'the Bhagavadgita was hailed [by Thoreau] as an important work worthy of the widest possible readership, while its impact on his own ideas was [also] considerable' (Robinson 2013: 107). Both T. S. Eliot and E. M. Forster were particularly fascinated by the Gita's message of disinterested action, which was reflected in some of their works (Robinson 2013: 145).
Sir Monier Monier-Williams (1819–99) was another admirer of the Gita. He did not need Wilkins's translation to understand it. He was a professor of Sanskrit at Oxford University and known for his Sanskrit–English Dictionary and a book on Indian Wisdom (1875), among others. The book shows his familiarity with other Hindu texts also, like the Vedas, Upanishads, Brahmanas, Smritis, the six systems of philosophy and the two major epics – Ramayana and Mahabharata. In Indian Wisdom, he speaks of the Gita as 'one of the most interesting and popular works in the whole range of Sanskrit literature' and as representing the 'Eclectic school of Indian philosophy' (Monier-Williams 2001: 145). The Gita reconciles, in his view, the conflicting views of different systems by attempting to 'engraft the Sankhya and Yoga upon the Vedanta doctrines'. While the order of creation and cosmogony of the Sankhya is retained, 'the paramount sovereignty of the Supreme Soul of the universe as the source and ultimate end of all created things, and yet independent of all such creations', is also asserted (Monier-Williams 2001: 146). The author of the Gita, 'finding no rest for his spirit in any one system of philosophy,... was led to make a selection... so as to construct a composite theory of his own'. Monier-Williams adds that this was done with 'great perspicuity and beauty of language' (Monier-Williams 2001: 148). He considers the Gita as a pearl embedded in the Mahabharata, and yet quite independent of the great epic (Monier-Williams 2001: 147). Although finding several parallels between the Gita and the New Testament, he says that he would hesitate to concur with the view that the author of the Gita had any access to the New Testament, as the probability of contact between India and the Christian religion then could not have been high.
Monier-Williams presumes that the Gita was composed sometime during the first two centuries of the Christian Era, though he does not try to explain the basis of this presumption. He adds, however, that there were similar parallels between Roman philosophers and the Christian scriptures, and yet there is no ground whatever to suppose that the pagan writers of Italy derived their ideas from either Jewish or Christian sources, though the probability of contact between them was greater than in the case of India (Monier-Williams 2001: 166–67).
In spite of his admiration for the Gita, Monier-Williams finds a shortcoming in it as compared with the Bible. On the basis of his other writings, Robinson says that according to Monier-Williams, the devotion to a personal God in Krishna is subordinated to the knowledge of an impersonal absolute – the Brahman. Thus theism in the Gita is undercut by pantheism, and pantheism is a bad word in Christianity. 'Monier-Williams is doubtful of the possibility of any real relationship between human and divine when both merge into the impersonal absolute' (Robinson 2013: 45–46). Monier-Williams is uncomfortable with the idea of a personal God (who in Christianity is the Supreme) being only an emanation of the impersonal Absolute in the Gita, and also in Hinduism in general. This, in his view, makes Hinduism unable to meet the challenge of Christianity. Though from his Christian perspective it constitutes a weakness of Hinduism, a Vedantin would consider it its great strength. The idea of Brahman is intellectually more profound and rational and, when combined with a personal God, is emotionally satisfying and wish-fulfilling. Indian philosophers, saints and mystics found no conflict between a personalised conception of the Supreme and the Absolute – the Brahman. They were sure that both ensure spiritual realisation and liberation.
An important significance of Wilkins's translation is that it launched a trend of 'objective' and 'critical' scholarship on the Gita, with perspectives drawn from history and comparative religion. This new genre of scholarship on Indian sacred books was separate from the veneration of the Gita that had characterised the earlier commentators in the Vedic tradition, though it was often combined with critical appreciation. Not that the earlier classical commentators of the Gita were uncritical, but their critical outlook was directed only against rival interpreters. The new trend of scholarship on Indian classics and sacred books influenced even Indian scholarship, promoting a dispassionate study of the Gita and critical discussion on it. K. T. Telang's translation of the Gita into English with a critical and detailed introduction (published in 1882, discussed further later) is an early example of such Indian scholarship. When 'objective' scholarship tended to be overcritical and prejudiced, or when there was a tendency to view the Gita as no more than an inert piece of historical material, resulting in distorted interpretations, both Indian and Western admirers of the Gita rose to the occasion, coming up with more informed and considered responses than would have been possible otherwise. The literature on the Gita thus became more thought-provoking and balanced, helping a better understanding of it. The role of Western scholarship, on the whole, has thus been positive and constructive.
## Reception in Germany
Wilkins inspired many more translations, in English and other European languages, with some 300 in English alone, bringing the Gita on to the stage of the world at large. Brockington (2002) has given a fairly comprehensive account of the many translations of the Gita, helpful in knowing its global journey, and it has been used in writing this section. A French translation of Wilkins's appeared just two years later, a Russian translation a year after that and a German one in 1802 (2002: 103). The German philosopher and first German Sanskritist, Friedrich von Schlegel, translated extracts of it directly from Sanskrit into German in 1808, while another Schlegel from Germany – Wilhelm von – translated it into Latin in 1823, giving along with it the original in Devanagari script. This Latin translation was considered to be not only accurate but also of high literary quality (2002: 104). This was followed by further translations into German in 1826 and 1834 (the latter being the first direct translation in German), French in 1846, Greek in 1848, Italian in 1859, Dutch in 1861 (not the full Gita but selected parts) and Czech in 1877 (2002: 104–5). What is more, in each language there were several translations, particularly in English and German.
As Davis notes, the most enthusiastic reception of the Gita took place in Germany. He says that even before Sanskrit works appeared in Europe, Johann Gottfried Herder (1744–1803) had been portraying India as the cradle of civilisation. Herder translated portions of Wilkins's Gita into German, along with two other Indic texts, in 1792. He declared the Gita to be a great unitary premise of pantheism: One in all, and all into One. He saw in it a theological principle with compelling ethical ramifications: all humans are energised by the one World Spirit, and human life should therefore be led by rational reflection and conscientious action. He believed that the Gita taught a universal principle as applicable to eighteenth-century Germany as to ancient India. Friedrich Schlegel (1772–1829) was another important German deeply interested in India. He proclaimed that 'everything without exception has its origin in India'. Schlegel was an eminent 'romanticist' poet, philosopher, philologist, Indologist and literary critic. He pioneered Indo-European studies in comparative linguistics and showed the grammatical connection between Sanskrit and the Indo-European languages. Schlegel was particularly interested in the Jnana-yoga preached by the Gita, the intellectual concept of Godhead, and 'the human quest to find union with the divine'. If India was the birthplace of human civilisation, the Gita came to be regarded as the earliest philosophical expression of the original wisdom, with ideas that would remain relevant for centuries to come (Davis 2015: 84–87, 90). Friedrich Schlegel's contemporary, Wilhelm von Schlegel (1767–1845), did not share the former's 'romanticist' enthusiasm for the Gita but preferred its critical study from a Christian perspective, identifying 'good parts' that cohere with Christian doctrines and dismissing the remainder as myth or superstition (Davis 2015: 92).
Wilhelm von Humboldt (1767–1835), the founder of Humboldt University in Berlin, gave two lectures on the Gita in Berlin, in 1825 and 1826 respectively, which were subsequently published. He proclaimed the Gita as 'the most beautiful, presumably the only philosophical poem of all known literatures' (as quoted in Davis 2015: 101). His assessment was challenged by the philosopher Hegel (1770–1831), who argued that yoga required withdrawal from the world, leading to a passive immersion into the Brahman. Brahman is an inert conception, in contrast with the Christian God who engages in the world process. According to him, the introverted and static aspirations of Hinduism articulated in the Gita consigned India to a backward status (Davis 2015: 102–4). Hegel completely ignored the activist and intervening concept of God very much evident in the Gita, particularly in the idea of avatar, and also the Gita's conception of yoga as active but selfless work in the world. He worked with a misunderstood and distorted version of the Advaita philosophy as represented in the Gita. The criticism from James Mill, Hegel and Christian missionaries stopped neither the emergence of more Sanskritists nor the spread of the Gita among poets and other intellectuals.
F. Max Müller (1823–1900) was a leading Indologist from Germany, whose six-volume edition of the Rigveda, along with a translation and Sayana's commentary, was a pioneering and major contribution to the knowledge of Sanskrit literature in the West. It proved to be valuable also to educated Indians who did not know enough Vedic Sanskrit to understand the Rigveda. His main interest, however, was in the Vedas and Upanishads rather than in the Gita, and he considered the Gita less significant than the former, particularly from the point of view of studying the history of religion. His major concern was 'to discover how and what he called Aryan religion began and developed', and in this task the Gita did not interest him much (Robinson 2013: 39). Nevertheless, he commissioned K. T. Telang to edit and translate the Gita as a part of his project of publishing the multi-volume Sacred Books of the East. More about this later.
Richard von Garbe (1857–1927), a noted Indologist, translated the Gita and proposed the theory of its multiple authorship. According to him, the original Gita was written in the second century BCE as a theistic tract, and in the second century CE it was adapted by the upholders of Upanishadic monism. He says: 'These two doctrines – the theistic and the pantheistic – are mixed up with each other, and follow each other, sometimes quite unconnected and sometimes loosely connected.... [T]he two beliefs are treated of almost throughout as though there was no difference between them, either verbal or real' (as quoted in Radhakrishnan 1996, Vol. I: 530). Radhakrishnan meets this criticism by suggesting that a living, warm religion of personal devotion does not eschew the spiritual idealism of the Upanishads (Radhakrishnan 1996: 530). Some of the sants of the bhakti movement were known to have had the mystical experience of the immanent Brahman through intense devotion. The kind of criticism Garbe makes is the result of a purely scholastic or academic approach, divorced from any experiential attempt at gaining knowledge. Paul Deussen and Leopold von Schroeder separately opposed Garbe's views on the multiple authorship of the Gita (Brockington 2002: 105).
Rudolf Otto (1869–1937), Garbe's pupil, supported his teacher on the issue of multiple authorship or interpolation. He was, however, attracted by the charm of the Gita and was not concerned with using it for proselytising Hindus. He took the Gita as an example of the literature of the 'numinous' or the holy, and valued its theistic and mystical content. He also considered it to be an eclectic work, evolved through different stages of composition, and tried to identify the 'original Gita' (see Note 5 to Chapter 1). He said that the original Gita was 'Krishna's own voice and deed, referring directly to the situation in which Arjuna finds himself, intended, however, not to proclaim to him any transcendent dogma of salvation, but to render him willing to undertake the special service of the Almighty Will of the God who decides the fate of battles' (quoted in Robinson 2013: 46). According to Otto, the rest of the Gita, added later and including the Sankhya and yoga philosophies, was meant to accord divine authority to various ideas. Yet, he was impressed by the devotional theism of the Gita, much of which is in those parts of it which he did not regard as 'original'. Robinson says that Otto's refusal to regard the Gita as a unitary work representing a synthesis gained him many detractors (Robinson 2013: 47).
Otto had studied Shankara's commentary on the Gita and observed that 'the impersonal Brahman rests here also on a theistic basis, and this is not unimportant for the perception of the Brahman itself'. This affirmation of the lower level of truth associated with personal deity, an affirmation arising out of a developmental rather than an oppositional model of knowledge of Brahman, meant that he described Shankara's thought as 'super-theism', not 'anti-theism' (Robinson 2013: 47). Shankara, according to him, 'conflated the qualities of the personal deity with the impersonal Brahman' and applauded it because it did not trivialise devotion to a personal deity (Robinson 2013: 48).
An interesting contribution of Otto is his comparison of the Gita's religion with Christianity. He notes striking similarities, particularly in theism, but also 'the most profound difference of all'. He explains: 'The difference consisted in the contrast between grace that releases a soul from sin and guilt in the case of Christianity, and grace that releases a soul from the bonds of samsara (the wheel of existence) in the case of Indian religion' (Robinson 2013: 48). Otto asserted that 'the Indian notion of sin and related concepts such as repentance and confession were not as developed as they were in Christianity and that conscience was not as central' (Robinson 2013: 48). However, Swami Vivekananda turned the tables on such criticism by observing that Hinduism did not consider humans as born in sin, regarding them in their basic essence as the children of the Immortal (Amritasya putrah). Sin arises out of ignorance and sentimental attachment (moha), according to Hinduism; it is not innate, inherent or inevitable. The concepts of repentance (paschattapa) and atonement (prayaschitta) are quite present and popular in Hinduism as well, which would not be possible without a concept of conscience. In any case, the Christian concept of salvation is different from the Hindu concept of liberation or moksha, and this is highlighted by Otto. A Hindu could also confidently claim that the Hindu concept of liberation is more sophisticated than the Christian concept of salvation. However, it is unfair to regard either of them as less developed or inferior to the other. Each has its own distinct background and justification.
## Further spread
Interestingly, it was not the Hindus who first took the trouble to carry the Gita across cross-cultural frontiers, but others. A cross-cultural translation presents special problems, of which Hastings was quite aware. He duly appreciated the difficulty of doing so, which in his view was due to 'the subject itself, which is highly metaphysical, to the extreme difficulty of rendering abstract terms by others exactly corresponding with them in another language, to the arbitrary combination of ideas in words expressing unsubstantial qualities, and more to the errors of interpretation' (quoted by Desai 2014: 10). It is thus to the credit of Western scholars that they crossed the cultural barriers and came out with so many translations, not only of the Gita but of many other Sanskrit texts as well. As a result, the Gita acquired the label of 'Hindu Bible' well before the end of the nineteenth century. Hindus started translating the Gita into English only in the second half of the nineteenth century, and by the twentieth century there were many who did so. By then, English hardly remained a foreign language, as Indians had made it their own. Several eminent Swamis went abroad, particularly to Britain and the United States, beginning with Swami Vivekananda's trip to Chicago in 1893, to give discourses on Hinduism and the Gita in English, which in due course were compiled in book form. Interestingly, many of the prominent translations into English were first published either in Britain or the United States, followed by less expensive Indian editions. Several translations of the Gita along with the commentaries of Shankara, Ramanuja and Madhva also became available in English. Many Indians, paradoxically, started studying the Gita through these translations in English, which thus played an important role in popularising the Gita even in India.
It is not necessary to list all the translations of the Gita even in English here, there being others who have done so fairly comprehensively (see, e.g. Kapoor 1983; Brockington 2002; Davis 2015). There were also many translations in every major Indian language by the first half of the twentieth century. There is no need to list them in this chapter, the purpose of which is only to indicate how the Gita went on to be seen as a sacred book at the global level. What is noteworthy here is that globalisation of the Gita went ahead simultaneously with its accelerating popularity within India.
The first translation into English by an Indian was by K. T. Telang, which was published in 1882 as the eighth volume under the series on Sacred Books of the East, edited by Max Müller and published in England. Max Müller by then had acquired the reputation of being an authority on India and oriental studies and, apart from being respected by scholars both in the West and India, was also popular among the public in both places. The inclusion of the Gita as a part of the Sacred Books of the East gave a stamp of international recognition to its already known status. It also made the Gita better known in the West than before. Telang had a little earlier published a translation into English in verse in Bombay (now, Mumbai). The 1882 edition, however, was in prose and included Anu-Gita and Sanatsugatiya. Telang also wrote a scholarly introduction to it in the true spirit of the new discipline of Indology, which was becoming popular.
Sir Edwin Arnold (1832–1904), already a reputed poet as the author of Light of Asia (1879) and known for his interest in the Orient, particularly India, went through Telang's translation of the Gita. He did not much like it and observed that it lacked 'the dignity and the grace of the original' (quoted by Yardi 1991: 9). Arnold gave his own translation in verse form, calling it The Song Celestial, published first in 1885. It was not a literal translation; instead it tried to bring out the essence, dignity and grace of the original in verse. It is said that Arnold aimed at bringing the original closer to the readers, rather than bringing the readers to the original, as Goethe had suggested as a principle to be followed while translating (Sinha 2010: 307). Great in literary quality, The Song helped the Gita enter public consciousness in the West. Between 1785, when Wilkins's translation came out, and 1885, there was a great change in readers' perception of the Gita. A reviewer of Wilkins's translation had observed that its primary function was to satisfy readers' curiosity about the exotic East; a hundred years later, a reviewer of Arnold's The Song Celestial saw it as 'a book for a spiritual truth-seeker, irrespective of religion and culture', and yet the poem is 'also simultaneously read as a text whose pleasure lies in its value as an aesthetic object' (Sinha 2010: 308). Thus, from its status as a sacred book, the Gita acquired in the public consciousness of the West the eminence of being a literary work, besides being a storehouse of universal values. Mahatma Gandhi was introduced to the Gita by his theosophist friends in England through Arnold's rendering of it. It impressed him deeply and inspired him not only to study the Sanskrit original, but also to make his own translation and commentary (which we will take up later).
By a significant coincidence, 1885, the year of publication of Arnold's poetic rendering of the Gita, was also the year when the Indian National Congress was founded. The year marked the birth of Indian nationalism, and the Gita was to play a crucial role in it, as we shall observe in the next chapter.
More translations followed Arnold's, not only in English but also in other languages. Owing perhaps to his influence, there was a shift from literal to free translations, with greater concern for content than for form (Brockington 2002: 115). There was a Hungarian translation in 1887, and two translations into Spanish – one published in Buenos Aires, Argentina, in 1893, and another in Barcelona, Spain, in 1896–97. Early in the twentieth century, a direct and complete translation into Dutch was made by J. W. Boissevain in 1903 at Amsterdam; and M. L. Kirby and C. Jinarajadasa brought out a translation in Italian together with the commentary by Shankara in 1905. A free verse translation in Russian by A. P. Kasnacheyeva came out in 1909. Two Polish translations were published in 1910, one at Warszawa by Stanislaw Franciszek Michalasky, an able Sanskritist, and the other by Bronislaw Olszewsky at Brody. There were thus so many translations of the Gita by 1920 that Max Weber remarked in his work, The Religion of India, that the Gita had been 'translated into almost all the languages of the earth' (quoted in Desai 2014: 12). The steady trend of translations, along with erudite introductions, continued thereafter too, justifying Max Weber's remark even if he had somewhat exaggerated. Translations came out in Japanese by J. Takakushi in 1921, in Swedish by Nino Runeberg in 1922, in Icelandic by S. K. Petersson in 1924, in Serbian by P. Jevotic (extracts) and in Rumanian by D. Nanu in 1932 (Brockington 2002: 117).
Further translations appeared in English, significant for their insightful introductions, some of which may be noted here. They were respectively by Franklin Edgerton (an eminent American Sanskritist and linguistic scholar), published in 1944 at Cambridge, Mass.; by S. Radhakrishnan, with a commentary based on Shankara's, published first in 1948 at London; and by R. C. Zaehner in 1969 at Oxford. Radhakrishnan's proved the most popular; it has been continuously reprinted ever since its publication, never going out of print. In terms of sheer circulation, however, Radhakrishnan's seems to have been overtaken now by the translation by A. C. Bhaktivedanta Swami Prabhupada, Bhagavad-Gita As It Is, published by the International Society for Krishna Consciousness (ISKCON) first in English in 1968 at Los Angeles, which has subsequently gone into many reprints and further translations into other languages. The contributions of Radhakrishnan and Prabhupada will be discussed in greater detail in Chapter 5.
R. C. Zaehner (1913–74), apart from discussing the Gita in his other books on Hinduism and Hindu scriptures, wrote one specially focused on it (Zaehner 1969). Advocacy of devotion to a personal deity and related mysticism was the most significant contribution of the Gita according to him. He attributed the subsequent rise of devotional Hinduism to the influence of the Gita. He rejected the notion that devotion was just one of the paths, because knowledge and action could hardly be differentiated 'from the life of love and devotion to God' (quoted in Robinson 2013: 50). He felt that according to the Gita, only divine grace could liberate a living being to become one with the Brahman. According to him, Ramanuja was a better interpreter than others because he 'was nearest to the mind and the author of the Gita' (Robinson 2013: 50). He upheld theism over monism in his reading of the text, not only because devotion to a personal deity was the ultimate means and the end, but also because it was easier and open to all irrespective of class, caste and gender. He thought that the Gita changed India's religious history. However, he also pointed out that 'the type of devotion it inculcated was not the impassioned self-abandonment later to characterise the bhakti movement but detached dutiful service' (Robinson 2013: 51). Zaehner accorded a pivotal position to the Gita, as providing a basis for dialogue between religions, as it contained different standpoints of all religions of the East and the West, ranging from 'immanent pantheism' to 'transcendental monotheism' to 'atheistic mysticism' (Robinson 2013: 52). He considered it not only as 'the most influential text within the Hindu tradition', but also as 'the most significant sacred text in the whole history of religion' (Robinson 2013: 52).
Many swamis went abroad, particularly to the United States, the United Kingdom and Europe, to preach Hindu philosophy, and gave series of well-attended lectures on the Gita, which were soon published in book form, including the original text with translations and commentaries. Among the notable ones are Purohit Swami (1935), and more recently, Sri Sri Paramahamsa Yogananda (1995, 2002), Swami Chinmayananda (1996) and Swami Dayananda Saraswati (to be distinguished from the founder of the Arya Samaj of the same name) (2011). Chinmayananda and Dayananda Saraswati, based mainly in India, have done wonders both in dharma-prachar and social service there, though they also went abroad. Eknath Eswaran, not a swami as such, also contributed remarkably to a better understanding of the Gita through a free translation in beautiful verse in four volumes. Their work is taken up in the subsequent chapters.
Translations continued in other languages too. Among them, noted by Brockington, are translations in Russian by B. L. Smirnov in 1956, in Hebrew by I. Olsvanger in 1956, and in Japanese by M. Hattori first in 1959, with another by Naoshiro Tsuji in 1980. There were three in Italian, respectively by Juan Mascaro in 1962, by Giulio Cogni in verse form in 1973 and by Raniero Gnoni with Abhinavagupta's commentary in 1976. There was an Estonian translation by Linnart Mall in 1980 (2011: 117). Meghnad Desai mentions at least four recent translations of the Gita into Chinese: 'first in 1985 by Yang Feihua, then in 1989 by Zang Baosheng and within the last few years by Xu Fanshang in 2009 and Huang Baosheng in 2010' (Desai 2014: 24). Desai further refers to a long tradition of Sanskrit learning in China, implying that there may have been some earlier translations too, though China historically had more links with Buddhism than with Hinduism.
The emergence of Indology as a scientific discipline in its own right has been briefly referred to earlier, while discussing Wilkins's first translation into English. It needs a little more attention because of the important role it played in promoting a critical understanding of India's history, art and literature, not only abroad but even among Indians themselves. It included a critical study of the Gita too. There was indeed the tradition of commentaries (bhashyas) on the Gita, but they ignored historical aspects and approached the Gita from the perspective of a given doctrine, which may not be considered quite objective. Indology fostered a better appreciation of the Gita, of its strengths and of the precautions to be taken in interpreting it. Telang's long introduction to his English translation, referred to earlier, is a testimony to how Indian scholars too were influenced by the analytical tools developed by the new discipline. The birth of Indology owes much to the enthusiastic interest and pioneering efforts of both Wilkins and William Jones (1746–94), who had come to Calcutta as a judge when Hastings was the Governor General. Basham remarks that Jones was a linguistic genius who had not only mastered the important European languages but also learnt Arabic, Persian and Turkish, and even had a 'smattering of Chinese'. After coming to India, he learnt Sanskrit with the help of Wilkins (Basham 1967: 5). Jones pioneered the idea that the ancient classical languages of Sanskrit, Greek, Latin and Persian had a common root. He labelled this family of languages Indo-European (Desai 2014: 11). Translations of several classics from Sanskrit into English and other European languages were made around the same time. Jones's translation of Kalidasa's play, Sacontala or the Fatal Ring (Shakuntalam), published in 1789, became even more popular for a while than Wilkins's Gita and whetted the appetite for knowing more about India and its culture (Sinha 2010: 302).
The publication of the French philologist Abraham-Hyacinthe Anquatil Duperon's translation of the Upanishads into Latin (Oupnek'hat) in 1802–04 further added to this appetite (Sinha 2010: 302). Sanskrit began to be taught in European universities. A new breed of scholars known as Orientalists or Sanskritists emerged. Institutions like the Asiatic Society (started first in Calcutta, and then in London and Bombay) and the Écoles des Langues Vivantes (Paris) were started to promote research in Indian languages, culture and history. Dictionaries from Sanskrit into English, German and French started being published. Basham has described in some detail this exciting period of the emergence of Indology and its rich outcome in terms of a much better understanding of India's past, in a book he fondly titled The Wonder That Was India (1967: 4–8). All these activities boosted the self-confidence of Indians, making them proud of their past, and stimulated in turn social reforms and India's freedom struggle – an impact which was not even dreamt of when Hastings encouraged Wilkins to translate the Gita.
Though curiosity about things Indian was a major factor behind the spread of the Gita abroad initially, there was a qualitative change in the perception of it by the end of the nineteenth century. The publications respectively of Duperon's Oupnek'hat, Max Müller's Sacred Books of the East and Arnold's The Song Celestial gave the Western public access to a new dimension of spirituality which they were seeking. A further landmark was the six lectures by Swami Vivekananda at the Parliament of Religions at Chicago in 1893, followed by many more lectures in America and Europe on Hinduism and its philosophy. In his very first address on 11 September, the Swami referred to the traditional tolerance of Hinduism towards different faiths and its acceptance of multiple paths of spirituality, and quoted the Gita's famous verse (IV.11), translating it as: 'Whosoever comes to Me, through whatever form, I reach him; all men are struggling through paths which in the end lead to Me' (CWSV 2000, Vol. I: 4). The way he began his first lecture, addressing the audience as 'Sisters and Brothers of America', which received a standing ovation lasting a few minutes, signified the basic philosophical spirit of Hinduism. Here was an exponent of Hinduism who knew about all the religions and could impress the representatives of various faiths with what made Hinduism so special and outstanding. In his 'Paper on Hinduism' presented on 19 September at the same Parliament, he told the audience that humans are not sinners, but are the 'Children of the Immortal Bliss' as proclaimed in the Vedas. He told them that they were not matter, not mere bodies, but the masters of matter; matter was their mere servant. Hinduism is not just a set of beliefs or doctrines, but shows the ways of realisation; it consists not in believing, but in being and becoming (CWSV 2000: 11–13).
His lectures strengthened awareness of the potential of Hinduism and the Gita to aid spiritual pursuit and to help overcome the new problems of excessive materialism created by industrialisation. A feeling began to spread that the philosophical knowledge and spiritual wisdom of Hinduism, particularly as found in the Upanishads and the Gita, was not meant for Hindus alone, but was relevant for all humanity. One did not have to convert to Hinduism to study or even follow the Gita and the Upanishads. Whatever one's cultural or geographical background, their relevance seemed universal. This growing belief in the universality of the Gita was a major factor behind its success in its global journey. Interestingly, the particular context of the Gita – that of the war, and its teaching to Arjuna to fight in a declared and just war and not mind killing even his own people in the process – hardly became a hindrance in the global march of the Gita. This was because not only Gandhi but also other interpreters like Aldous Huxley considered the war background of the Gita as only an allegory (Sinha 2010: 304). Far from reading it as teaching violence, both Gandhi and Huxley interpreted the Gita as pacifist, teaching nonviolence. As Sinha points out, this transformation in the perception of the Gita was an outcome of a dialogic or dialectical process between Western and Indian spiritual interpreters (Sinha 2010: 299, 304). It culturally enriched both India and the West.
In any case, the nineteenth and twentieth centuries played an important role in the history of the Gita. It received the widest recognition in the world during these two centuries, as never before. This also deepened its recognition among Hindus, even contributing to their identity. The remarks of Jacqueline Hirst are noteworthy in this regard:
It would be quite wrong to insinuate that the Bhagavadgita was an insignificant text before the interaction of Europe and India in the modern period. Its place in the triple foundation of Vedanta is clear and it acted as a model for other texts, the Ishwaragita being one example. However, its prominence in neo-Hindu thought in the nineteenth and twentieth centuries cannot be abstracted from a context in which western academics stressed the textual basis of religions, Christian missionaries preached on social ethics and Hindu Indian nationalists looked for inspiration to their own heritage as justification for diverse approaches to obtaining Independence.
(Hirst 2000: 49)
## Notes
1 The account about Otto here is based on Robinson (2013: 46–49).
2 Between 1879 and 1910, forty-nine volumes of the Sacred Books of the East were published. Sinha informs that the cost of this 'extraordinary and expensive project' was met by the Oxford University Press and the British Government in India (Sinha 2010: 305).
3 Sir Edwin Arnold was in India before he became eminent in Britain. He was the principal of Government College, Poona (now, Pune). After returning to England, he became a successful journalist and rose to be editor-in-chief of the Daily Telegraph (Sinha 2010: 307).
4 This book was first published in German in 1920 and in English in 1958.
5 The account about Zaehner's contribution is based on Robinson (2013: 49–52), which in turn is based on most of Zaehner's books on Hinduism and Hindu scriptures.
6 Sinha (2010: 299) has referred to Kapoor's Bibliography of the Gita, which has listed almost all the translations of the Gita into various languages of the world between 1785 and 1979 (Kapoor 1983). There have of course been further translations too.
7 Swami Vivekananda's lectures and writings are included in The Collected Works of Swami Vivekananda (CWSV), first published in 1907 under four volumes, with the ninth volume (the last thus far) added in 1997, in continuous editions or reprints ever since (Vivekananda 1997–2001).
8 See his Introduction to Prabhavananda and Isherwood (1944).
# 4 Makers of Modern India and Their Interpretations of the Gita
The exposure to Western education and ideas had a significant multidimensional impact on India. Social and religious leaders emerged one after the other who absorbed the new ideas and values of the modern Renaissance in Europe and, in that light, examined India's social, economic and political conditions. Though rooted in Indian culture and proud of it, these leaders could also see that India's conditions cried out for drastic improvements in almost all spheres of life. They became acutely aware of the mass illiteracy and rampant superstition prevailing in the country, associated with the exclusion of the bulk of the people from education. Colonial exploitation and the destruction of indigenous industries had produced mass poverty. Subjugation under foreign rule had also made people docile and submissive. There was a need to awaken and energise the masses, make them aware of the social evils and yet restore their self-confidence. Most of the modern leaders did not feel that they had to reject the old scriptures wholesale, either as a hindrance or as irrelevant to this task, but found on the contrary that many of them were quite useful if only re-interpreted consistently with the Renaissance ideals of liberty, equality and fraternity. They found that these values were very much there in the Indian scriptures, but had been obscured by some of the later developments like the caste system and the suppression of women. The basic teaching of the sacred books had to be rediscovered, while unnecessary accretions of irrational and false religious beliefs could be rejected. The modern Indian leaders produced their own version of an Indian Renaissance, without which India could not have become a modern nation in the comity of nations or made its presence felt.
The modern leaders were essentially the makers of modern India. They made their thrusts on three fronts simultaneously and in an interrelated way, sometimes with the same leader (especially Gandhi) operating on all of them. The first was social reform, particularly for emancipating Indian women, eradicating social evils like untouchability, breaking down the hierarchy of the caste system and imparting education to the masses. The second was achieving a renaissance within Hinduism, reviving it in a reinvigorated form and ridding it of its major weaknesses – superstition, irrational and far from amusing notions of purity and pollution, and above all the obstinate belief in a hierarchical caste system. And the third was mobilising the masses for a freedom struggle to throw off the yoke of colonialism and make the transition to a democratic, egalitarian, sovereign republic.
Interestingly, most of the leaders made use of the Gita in important ways in operating on these three fronts, investing new meanings into the old text in the process. They found the Gita inspiring in their work, whether in social reform, reforming Hinduism, nation building or the freedom struggle. As Minor observes, 'the exhortation to action from Krishna suited the aspirations of Indian nationalists in their struggle for swaraj, independence from British sovereignty' (1991-a: 6). Further, 'they regarded Krishna as the karmayogin par excellence, working unceasingly and with total selflessness', and as an inspiration for engaging in social action and reform (1991-a: 6). The spirit in which the modern leaders used and interpreted the Gita conveyed at least three important messages: one, that the Gita was relevant not merely in purely spiritual pursuits, but also in mundane and secular matters; two, that it was relevant not merely to the private life of individuals, but also to nation building and reforming society; and three, that it was relevant not merely to Hindus but to all people all over the world – without in any way implying either that it was an exclusive guide in all matters or that it was so important or dominating that it subjugated all other sources of guidance and authority. In other words, even while importance was given to it, there was no fanaticism about it, and no overarching dominance was ever given to it. In his daily public meetings of prayers and bhajans, Gandhi insisted on combining the recitation of selected verses from the Gita with readings from the sacred books of other religions too.
The modern interpreters of the Gita are divided, for the purpose of presentation in this book, into two broad categories: the first consisting of early modern interpreters who were very much engaged in the task of making modern India, particularly in the national freedom struggle and social reforms, and found the Gita quite useful in their task; and the second consisting of more recent interpreters whose main focus was on providing spiritual and moral guidance and on adapting Hinduism to the challenges of modern times. The first group, which played a major role in the renaissance of both India and Hinduism, is presented in this chapter itself, and the second group in the next. These eminent persons are taken up in chronological order of their birth within each of the two chapters. Many of them worked both in India and abroad, though a few focused mainly on India. A few others concentrated on providing spiritual guidance to Hindus abroad, incidentally also acquainting not merely scholars but even common people among non-Hindus there with Hinduism and its scriptures, thus building valuable cross-cultural bridges and taking Hinduism and the Gita even beyond Hindus.
## Raja Rammohan Roy
The pioneer in the work of renaissance in India is undoubtedly Raja Rammohan Roy (1772–1833). He is for this reason known as the father of Indian renaissance, and the Maker of Modern India. He was well versed not only in English and Sanskrit, but also in Persian and Arabic. At Varanasi, he studied the Vedas, the Upanishads, the Gita, and the Shastras. He visited Tibet to study Buddhism first-hand. He acquired a wide scholastic vision and a broad mind spanning different cultures. He helped the Mughal emperor in getting his pension from the British increased, for which Rammohan had to visit London. It was the Mughal emperor at Delhi who gave him the title of Raja. He held a good position in the East India Company Government, but gave up his job in the interest of scholastic pursuits and social work.
Two incidents in his life left a lasting impression on him and firmed up his special soft corner for women and his resolve to improve their condition. He was deeply opposed to idol worship, and he openly expressed his views about it while in Tibet. The enraged lamas there wanted to assault him, but he was saved in time by Tibetan women who hid him. The second incident occurred when his elder brother died: the brother's widow wanted to commit sati, that is, concremation with her husband's body. Rammohan tried his best to dissuade her but failed. However, when she climbed onto the funeral pyre, she could not bear the burning heat and wanted to rush out of it. To the horror of Rammohan, the group gathered there pushed her back into the fire with bamboo poles, and she was burnt to death. Rammohan resolved then and there that he would not rest until he put an end to this horrible practice.
The practice of sati prevailed more in Bengal than in other parts of India, and that too more among the propertied elite than among the working classes. More than the keenness to ensure heaven for the deceased and his wife, or the backing of the Shastras, it was often the husband's relatives' interest in usurping the property, which would otherwise have gone to the wife, that sustained the practice. They used to subtly encourage the wife to commit sati. Rammohan began his work of ending the practice both by requesting the government to ban it and by promoting public opinion against it. Since the practice had social backing, the government could not act instantly in the matter, in spite of being convinced by Rammohan's arguments. There had to be public debates between Rammohan on one side and, on the other, pundits who backed the practice by quoting from the Shastras in their support. Rammohan argued, on the basis of his deep knowledge, that no Shastra had made it mandatory for a woman to commit sati, that it was entirely voluntary, and that it had been rare or occasional even in the past. The pundits agreed but argued that the practice of sati ensured heaven not only for the couple but also for several generations on both sides and was a noble practice. They argued that while a widow did not enjoy a respectable status, the woman who committed sati – who was not regarded as a widow – was highly honoured; that a woman from a noble family desired this honour rather than the disrespect shown to a widow; and asked why she should be prevented from exercising her choice. Rammohan countered that while the arguments of the pundits were based on quotations from some of the later Smritis, his own arguments were based on the philosophy of the Shrutis and the Gita, which undoubtedly had a higher status as authorities in matters of religion and its practice. He said that the Gita despised the desire to attain heaven through rituals (and sati was a horrible and cruel ritual), and instead praised a life dedicated to selfless work and devotion to the Lord.
He further argued that contemporary society attached a false value to sati and despised widows, but that this attitude had no approval in the Hindu sacred books. Above all, the practice imposed horrible pain on the woman for no fault of hers, amounted to extreme cruelty, and needed to be stopped forthwith. He quoted from the Gita profusely in his support. He pleaded that society should instead respect widows and allow them to lead a pious life. In reply to the pundits' argument that women might not be capable of such a life and would fail, he replied that the Gita does not doom those who fail in piety, and instead promises that no sincere effort in spiritual pursuit goes in vain, even if done imperfectly. Ultimately, Rammohan's arguments prevailed, and the Governor General, William Bentinck (who had come to India in 1828), declared the practice of sati (Suttee) illegal in 1829.
Rammohan had a far-sighted vision of imparting education to the people of India at large. He sent a letter to Lord William Amherst, the Governor General of India during 1823–28, requesting the British government to take the initiative in starting Western or English education in India for people at large, through which Indians could learn modern sciences, history and geography in the English medium at higher levels. This general education was to be made available to both boys and girls from all classes of people. He also backed the idea of some financial support from the government to traditional educational institutions, so that the traditional languages and knowledge systems did not die out due to official neglect. In other words, he appears to have envisaged two systems of education operating simultaneously in India. Rammohan's initiative came well before Macaulay's famous Minute on Education in 1835. Rammohan pursued the issue with Lord Bentinck, who was the next Governor General (1828–35), but it was adopted as a policy for implementation only after Macaulay's Minute, that is, unfortunately, after Rammohan's death. Rammohan also wanted to reform Hinduism; he was particularly opposed to idol worship and the ritualism which had dominated Hinduism. In this again, he had the support of the Gita, since there was no idol worship in the Gita and it was also opposed to ritualism. He founded the Brahmo Samaj in 1828 to show a way of practising genuine Hinduism based on the Upanishads, the Brahmasutras and the Gita, ridding it of false beliefs and unnecessary rituals.
## Bankimchandra
The next important leader of the Hindu Renaissance was Bankimchandra Chattopadhyay (Chatterjee, as per British spelling) (1838–94), a pioneering Bengali novelist and poet, noted for his strong nationalist inclinations. He is famous as the author of the national song, Vande Mataram, which perhaps for the first time conceptualised India as the mother, a powerful concept to mobilise people in the national cause. He wrote a commentary on the Gita in Bengali, titled Shrimadbhagabadgita, parts of which were published in his journal Prachar between 1886 and 1888. The rest of the book remained in manuscript until it was posthumously published in 1902. An English translation was, however, published only in 2001, edited with an introduction by Hans Harder. But even this book contains his commentary only on the first three chapters of the Gita and half of the fourth chapter. Harder attributes this to Bankimchandra's heavy preoccupations, including other literary works, but Gowda feels that Bankim was perhaps satisfied that another of his works, Dharmatattva, did justice to the remaining parts of the Gita, and so left the commentary unfinished.
Bankim believed in the historicity of the Mahabharata and tried to demonstrate it scientifically. It implies that he also believed in the historicity of the Gita, though he did not try to prove it separately. He also firmly believed that Krishna was a historical person, an ideal man and an avatar, but he admitted that the depiction of his story suffered at the hands of later romancers. Bankim argued that Krishna not only preached but also acted to actually establish dharma and eliminate adharma, as claimed in the Gita. Bankim gave a fresh meaning to the term dharma by describing it as humanism, involving the culturing of human beings and their faculties – physical, mental, executive and aesthetic – to the fullest possible extent. He believed that both bhakti-yoga and karma-yoga are needed in realising this full potential, but bhakti involves not merely unwavering devotion to God, but also incessant love and benevolence towards the distressed, backed by appropriate action like selfless social service (Gowda 2011: 26–27). This is where Bankim's nationalist cause becomes relevant. In his view, the dismal state of the country then was due to ignoring this basic teaching of Krishna about engaging in karma combined with bhakti, in the cause of the people and their welfare, as one's duty (Gowda 2011: 27). This is where Krishna's concept of swadharma becomes relevant.
Bankim argues that just as karma does not mean sacrifices and mere rituals as per the Gita, its concept of swadharma does not just mean following caste- or varna-assigned duties. The concept is based on equality, not inequality. Following swadharma on the contrary means following the process of realising one's fullest human potential according to one's aptitude, even if it means going outside one's varna. It may have a specific context as when one has to do one's duty as required by circumstances, but selflessly, without prejudice to the overall goal of achieving people's welfare or loka-hita, a term used by the Gita itself. Selfless service, or nishkama karma, is worship of God, and as the Gita says, it has to be done with dedication and also efficiency. But it is not purposeless and has to be directed at achieving certain defined goals. In that sense, it is not desireless. Non-attachment similarly does not mean lack of dedication, and even a certain amount of passion may be needed in dedication. Non-attachment emphasised by the Gita means only avoiding selfishness (Gowda 2011: 28–35).
Regarding nonviolence and violence, Bankim takes the stand that the Gita teaches nonviolence as an ultimate value and condemns violence when it is not according to or required by dharma. But sometimes, adhering to nonviolence in specific or special circumstances may militate against achieving overall nonviolence, or limited acceptance of violence in special cases may prevent overall or larger violence. In the act of self-protection, resort to violence may sometimes be required. Defending oneself or one's property and livelihood against aggression or oppression is neither selfishness nor violence; it is dharma or dharma-yuddha. This complexity is recognised and appreciated by the Gita, according to Bankim.
Regarding idol worship, in Bankim's view, the material objects of worship are irrelevant for the Gita; a true devotee, even if he starts with idol worship, has to transcend it soon and has to conceptualise the Supreme, which is beyond all forms. He thinks that the Gita is basically monotheistic, and that the polytheism found in practice is only secondary, a mere aid for a variety of people to access God. The Gita is not fanatical about it and reconciles different forms of worship with the One Supreme. He did not consider idol worship an essential part of Hinduism. Thus, Bankim takes a sympathetic, even somewhat condescending, view of idol worship, but does not go to the extreme of rejecting it as was done by Rammohan and his Brahmo Samaj.
In his essay on Bankim, Gowda says that Bankim felt troubled by the 'debilitating passivity of India in thought and action, a kind of sentimental paralysis that made Arjuna drop his famed Gandiva [bow] in the battlefield' (Gowda 2011: 45). He took it as a metaphor for the condition that India was in then. But Bankim did not take the metaphor too far and did not suggest taking up arms against the British for a freedom struggle. His call to Indians was only to realise the social and political realities in the country, and to appropriately respond to them, and to 'remember that love for one's country is the highest dharma and ranks above all else' (quoted in Gowda 2011: 45).
## Theosophical Society and Annie Besant
Bankim's contribution was soon complemented and further developed by the Theosophical Society and its movement, which did a great deal by giving a new allegorical interpretation of the Gita through both its Indian and its Western followers. The Society was officially formed in New York City in 1875 by Helena Petrovna Blavatsky, Henry Steel Olcott, William Quan Judge and others. Olcott became the first president and remained so until his demise in 1907. The theosophical movement had started even earlier in the United States and India, with several followers among intellectuals. Olcott and Blavatsky moved to India and established the international headquarters of the Society in Adyar at Madras (now Chennai) in 1882, which became quite active with a significant Indian following. Neufeldt has presented in some detail the contribution of theosophists, both Indian and Western, to the interpretation and modern understanding of the Gita (1991: 11–33). A few highlights of this contribution are briefly given below.
A pioneer among these theosophists was an Indian, Subba Row, who gave four lectures at the annual convention of the society at Adyar in 1886, published in 1888 as a book, The Philosophy of the Bhagavad Gita, republished later in 1921. For Row, the Gita is a practical guide for man on the evolutionary path towards realising his essential immortality. He adopts a fourfold classification of cosmic principles, Parabrahman being the first cause, eternal and omnipresent. From this springs the Logos, the individualised or personalised principle of the first cause, the ego of the cosmos. It is represented by Krishna, an instance of the Logos descending to the human plane for the benefit of humanity. Mulaprakriti is the veil over Parabrahman, its material manifestation in the cosmos. Daiviprakriti is the light of the Logos acting on the Mulaprakriti, and everything that occurs in the cosmos is due to this process. Arjuna represents the human monad in the process of evolution, struggling through conflicts and confusions. There is no loss of individuality at the end of the evolution; there is only eternal bliss and no re-entry into the cycles of rebirth and death.
The next Indian theosophist was Mohini M. Chatterjee, who gave both a translation of and a commentary on the Gita in The Bhagavad Gita or The Lord's Lay. The date of its first publication is not known, but an edition was current by 1888 and may have come out earlier. Arjuna's predicament is described by Chatterjee as follows: 'Whenever a man loses faith, these three evils, grief, fears, and weakness, attach him, and he begins to delude himself into the belief that it is fruitless to persevere on the upward path' (Chatterji 1960: 26; as quoted by Neufeldt 1991: 16). The concern of the Gita is to help man overcome these internal weaknesses. An interesting feature of Chatterjee's work is its frequent references to the Bible, particularly the New Testament, trying to bring out parallels and commonalities in ethical content between the two sacred books.
A further work comes from A Brahmin, F.T.S., based on a series of lectures at the Kumbhakonam branch of the Theosophical Society, published in 1893 under the title, Thoughts on the Bhagavad Gita. According to it, the Mahabharata symbolises the battle between divine and gross elements, or what takes place between higher and lower selves. The exhortation to fight does not mean an injunction to literally kill people, but to fight in the cause of justice in a spirit of renunciation and perform duties consistent with human evolution to a higher plane. The emphasis on sacrificial action means an obligation to work for others to relieve their grief.
A few studies by Rajendra Lal Mukerji came thereafter, published under the pseudonym 'Dreamer' between 1902 and 1904. He considers the Gita a timeless guide on the path of non-attachment, service, love and sacrifice, and says, 'the Ego's progress... lies in recognising itself as merely an aspect of the Divine energy instead of being a separate centre by itself' (quoted by Neufeldt 1991: 22). In the Gita, following swadharma means following the most efficient lines appropriate for individual development. An avatar appears when conflicts grow so much as to threaten evolutionary progress, and then restores the equilibrium.
According to the interpretation by Pandit Bhavani Shankar, as given in his lectures between 1914 and 1925 (published in 1966), Advaitins are wrong because they reject individuality while Krishna teaches the perfection of individuality. He advises activity which leads to progress, not passivity. Individuality, however, is not rejected by Advaita at the vyavaharika (practical) level, and thus there is no question of requiring individuals to be passive as a part of spiritual pursuit, as has already been clarified earlier.
Among the Western theosophists, Neufeldt first takes up William Q. Judge, an American, who interpreted the Gita through a series of essays in the last quarter of the nineteenth century, published together much later in 1969 along with a recension. Judge felt that focusing on the Gita as historical material, as Western scholars do, overlooks the Hindu psychological system underlying the work (Judge 1969: 108; Neufeldt 1991: 23). The Kurus represent the material side of existence, while the Pandus represent the spiritual. The blindness of Dhritarashtra, the father of Kurus, 'represents the fact that the material body has no inherent power of sight or feeling; rather, it is the Self which is the final support of every phase of consciousness and form' (Judge: 112–14; Neufeldt: 23). Arjuna represents the human being standing at the threshold of higher development. The battle is between the material and spiritual forces in every individual, and the battle is necessary for evolutionary progress. According to Judge, 'The Bhagavad-Gita tends to impress upon the individual two things: first selflessness, and second, action' (Judge: ix; Neufeldt: 24).
With Annie Besant (1847–1933), we move into an active and accelerating phase of renaissance in India and Hinduism. She is the best known of the theosophists, at least in India, and an outstanding figure in quite a few ways. Born in England, the country of India's rulers then, she came to India in 1893 as a theosophist in search of truth and soon adopted the country as her own, as if she had been born into it. As Neufeldt observes, 'while she was a Westerner, she identified thoroughly with India and Indian causes' (1991: 25). After her entry into Indian politics in 1913, she started mobilising Indians to fight for what she called home rule or self-rule and freedom from the British. She was the president of the Indian National Congress in 1917 and was interned by the British government for three months in the same year for her political activities. More than her political role, her contribution to Hinduism and its renaissance – which had started much earlier – has been immense and enduring, no less significant than that of others born as Hindus. She wrote more than 200 books and pamphlets, particularly on Hinduism, Buddhism, Jainism, psychology, yoga, and social problems such as women's issues. Through her writings, she tried to make Indians aware of their glorious spiritual heritage and urged them to rise above their present status and conditions. She was an excellent orator and travelled the length and breadth of India for the purpose. She also campaigned for women's rights and for spreading education, including higher education for all. She founded the Central Hindu School and College at Varanasi (Benares, as the British called it) in 1898, which later became the nucleus of Benares Hindu University. She also started two colleges in the south, one at Madanapalle (Andhra Pradesh) and another at Adyar. She became the second president of the Theosophical Society in 1907 after Colonel Olcott and developed the headquarters of the Society at Adyar.
The society played an important role through its many publications and branches spread in various parts of India, in promoting a philosophical and cultural understanding of Hinduism in non-sectarian and non-ritualistic ways, suitable to modern times. Mrs Besant was the main inspiration behind these activities.
Mrs Besant had joined the Theosophical Society in 1889, after her disillusionment with the freethought and secular movement, in which she had played an active role. She was close to Madame Blavatsky, who had pioneered the theosophical movement. Mrs Besant helped the theosophical movement to spread in England. Gandhi, when in England, was introduced to the theosophists by friends who were in this movement. He recounts in his autobiography that it was these friends who first introduced him to the Gita – through Arnold's The Song Celestial. Gandhi also read Madame Blavatsky's Key to Theosophy (1889), which stimulated him to take more interest in Hinduism. Theosophists thus played a crucial role in Gandhi's life and the development of his thoughts.
Annie Besant has the distinction of first interpreting and applying the Gita explicitly as a weapon in India's freedom struggle, before others like Tilak and Aurobindo did it. Her book in which she did it, The Bhagavad Gita or the Lord's Song, was first published in 1907, based on her lectures at Adyar in 1905. It played a pioneering role in the nationalist discourse on the Gita and gave a fillip to India's freedom struggle. Though Bankim was aware of this potential of the Gita and also had hinted at it earlier, he was not so explicit and forceful in this task. Mrs Besant 'used her allegorical reading of the text [of the Gita] in the service of her political commitment constructing an ingenious parallel between the Mahabharata war and the Indian freedom struggle' (Sinha: 312). The war in the Mahabharata became a struggle by Arjuna 'to destroy a usurper who was oppressing the land; it was his duty as prince, as warrior, to fight for the deliverance of his nation and restore order and peace' (Besant 1907: iv; Sinha: 312–13). For Mrs Besant, action is the central teaching of the Gita – action with optimism, skill and sacrifice (Neufeldt: 27). In addition to the special significance of the Gita as a source of inspiration for India's freedom struggle, theosophists, including Mrs Besant, acknowledged its universal spiritual significance as relevant to all humanity. For her, the Gita was 'not a Hindu text, nor even an Indian text, but a universal text' (Neufeldt: 25). Viewed thus, as Sinha observes, 'the Gita became a central text of Theosophy and through its intercession the Gita could reach a transnational, transcultural audience, acquiring new, spectacularly effective, forms and meanings in the West as well as in India' (2010: 313). Interestingly, an allegorical interpretation of the Gita is common to all theosophists, and they deliberately did not look upon it as a mere historical material. 
All human beings are in the place of Arjuna on the evolutionary path to spiritual progress, and as Subba Row put it, 'We are each of us called upon to kill out all our passions and desires... [and] establish ourselves on the higher planes' (Row 1934: 4; Neufeldt: 32). It was the genius of Mrs Besant that she successfully built a bridge between the universal significance of the Gita and its application to the specific situation of India's freedom struggle. Just as individuals, nations too are on the evolutionary path of moral and spiritual development, and she found the teaching of the Gita pertinent to them too.
## Bal Gangadhar Tilak
Bal Gangadhar Tilak (1856–1920) dominated the Indian political and intellectual scene like a titan before the entry of Gandhi. Around his time, two groups were identified among nationalists, the 'moderates' and the 'extremists'. The former, like Gopal Krishna Gokhale, were content with greater autonomy within the British Empire, and focused more on social reforms and educational and economic progress, addressing illiteracy and poverty, than on full political freedom. On the other hand, the first and immediate priority of the 'extremists' was attaining swaraj or full freedom from the British. Tilak was a prominent leader of this latter group. He is famous for his public assertion, made for the first time in 1908: 'Swaraj is my birthright and I shall have it.' Though at times he also used the term 'home rule', he meant by it only swaraj or full independence. Tilak wanted the social reforms to be introduced and implemented by the Indians themselves through their own government rather than by the British. This stand of his was sometimes misunderstood, as when he opposed the Age of Consent Bill (1891) aimed at ending child marriages. He got his own daughters married well above the minimum age envisaged in the Bill, showing that he was not opposed to the reforms as such but only to their being introduced by a government which was not responsible to the people as in a democracy.
Tilak was already established as a popular journalist by the age of 25, when he owned and edited two weeklies, one in Marathi, called Kesari, and the other in English, called Mahratta. Both were openly nationalistic and aimed at spreading awareness about the need to secure independence from British rule. Tilak was imprisoned twice, and both times it had a connection with the Gita (Agarwal 1993: 102–13). The first time was in 1897, when in a public speech, drawing support from the Gita, he had defended the killing of the Bijapur general Afzal Khan by Shivaji. Tilak argued that Afzal Khan had invaded Shivaji's land and people, and the killing was not in Shivaji's personal interest as such but for getting rid of an aggressor in the interest of society. It was, therefore, moral according to the Gita, argued Tilak. A few days after this speech, one Chapekar shot dead a British officer, Rand, who had become notorious because of his insensitive and oppressive handling of anti-plague measures. He was so dreaded that people felt that plague was better than Rand! The British government assumed that Chapekar was incited by Tilak's speech, though they could not establish any link between the two. Chapekar was hanged, and Tilak was sentenced to eighteen months in jail. Thanks to the intervention of Max Müller and others in England who appealed to the Queen, Tilak was released six months before the completion of his jail term.
The second imprisonment of Tilak was ordered in 1908 in the wake of the partition of Bengal by Lord Curzon, announced in 1905 on the grounds of administrative efficiency. It became extremely unpopular with Indians, and strong agitations against it were launched in various parts of the country. Tilak supported this agitation and called for non-cooperation with the government, and for Swadeshi, that is, boycott of British goods and British education, replacing the latter with national institutions of education (Agarwal 1993: 109). He reiterated his demand for full independence. It was in this context that Tilak asserted for the first time at Akola in 1908 that Swaraj was his birthright and he would have it. Interestingly, Tilak's weapons of non-cooperation and Swadeshi were later to be used by Gandhi also. But what actually occasioned Tilak's imprisonment in 1908 was the incident in Muzaffarpur involving Khudiram Bose. Bose threw a bomb at a carriage which he thought was occupied by the chief presidency magistrate, who had earlier got some young men flogged for a minor offence. The bomb killed a British woman and her daughter instead. Bose was caught and hanged. Though no connection of Tilak with this incident could be established, he was charged with sedition, particularly for two of his editorials in his popular Marathi daily, Kesari. Tilak pleaded his own case eloquently and said that as an editor it was his duty to speak his mind on topical issues without fear of reprisal. Wolpert saw in his defence an echo of the swadharma doctrine of the Gita (Agarwal 1993: 111). The jury found Tilak guilty, and he was sentenced to six years in Mandalay jail. But the connection between Tilak's second imprisonment and the Gita was of a different nature. It was in Mandalay jail that he wrote his Gita Rahasya ('The Esoteric Import of the Gita') in Marathi in less than five months, from 2 November 1910 to 30 March 1911.
As Desai observes, 'Tilak wrote what is the first complete modern treatise on the Gita written by a political activist who was also a Sanskrit scholar' (2014: 19).
Tilak had been thinking about the Gita for several years already and had given a few lectures on it earlier, in 1902 at Nagpur, but his active journalistic and political work had not given him the time needed to write down his thoughts on the Gita in detail. The jail sentence now made it possible. He titled his work Shri Bhagavadgita Rahasya athava Karmayoga-shastra. In the first volume of this work, Tilak provides a holistic and general account of the Gita in terms of his perspective in fifteen chapters, followed by scholarly appendices and indexes. One of the indexes is of the words used in the Gita, which can be very useful to scholars working on the Gita. The first volume constitutes the main contribution of Tilak's Gita Rahasya. The second volume contains the original Sanskrit verses of the Gita and their translation. As Stevenson observes, 'Tilak's Gita Rahasya is very much a combination of the "traditional Indian" and "modern scholarship"' (1991: 49). Tilak's book in Marathi was first published in two volumes in 1915 at Poona and was soon translated into other Indian languages. Hindi and Gujarati translations came out within two years, in 1917; Kannada, Tamil and Telugu in 1919; Bengali in 1924 and English in 1935–36. It has gone into many reprints thereafter in all these languages. It turned out to be the most popular work on the Gita before India's independence.
Tilak starts by explaining in his preface why he wrote the book. He was exposed to the Gita since childhood, but was troubled by a doubt as to why the Gita should contain detailed exposition of how to obtain the release (moksha) through the path of knowledge (jnana) and devotion (bhakti), when the main issue was how to induce Arjuna to fight in the war. So he tried to study the Gita independent of all commentaries such as by Shankara and probe what the original Gita taught. He came to the conclusion that the original Gita did not teach the philosophy of renunciation (Nivritti), but taught energism (karma-yoga) instead, and where the term 'yoga' was used, it meant only karma-yoga. Though Tilak admits that the Gita also taught about how to obtain release through jnana and bhakti, he asserts that it is not its principal subject matter. The Gita basically propounds the way of ultimately obtaining release by performing action without incurring sin. The earlier commentators missed this point because they had a preconceived doctrine to propound by using the Gita. Though Tilak follows Shankara in matters like the concept of Brahman, he rejects Shankara's prescription of jnana (knowledge) through renunciation as the only means to liberation. The Gita does not preach renunciation of the world, but only of the desire for fruits of action for oneself. According to Tilak, the Gita is openly emphatic about karma-yoga, which stares in the face of anyone who approaches the Gita with an open mind. He feels that the Gita is essentially about ethics, ethics of action, and the question of ethics arises when one acts or decides to act or not to act. Often we face a deadlock arising from mutually conflicting principles resulting in confusion, but the Gita shows a way out of this deadlock and confusion. We cannot prevent ethical problems from emerging by deciding not to act. 
Avoiding action is not an option according to the Gita, because, willy-nilly, we are ever engaged in some action or the other as long as we are in this world alive. But we have the option of making our action meaningful, moral and liberating, by making it unselfish and aimed at the welfare of the world (loka-sangraha, as said in the Gita), carried out in a spirit of non-attachment and without any desire for personal appropriation of the fruits of action.
Tilak says that according to the Gita, even a realised jnanin (one who has attained knowledge) like Janaka did not give up worldly life. A person who seeks only a personal release or moksha for himself alone is selfish and destructive of society. Therefore, even sannyasis should be engaged in action for the welfare of society at large, and this is the clear message of the Gita. As for bhakti, according to Tilak, it is recommended by the Gita only as an easier option for attaining jnana or knowledge of the Brahman, whatever the form of devotional worship. True bhakti leads invariably to the knowledge of the Brahman. If a jnanin cannot avoid action, a bhakta too cannot, and has to be engaged in action in the world as a form of worship to God. A bhakta offers the fruit of his action at the feet of the Lord as his tribute. Work done as service to God is also a form of devotion. Thus, even jnana and bhakti are consistent with and help karma-yoga, but cannot be substitutes for it. They are actually subservient to karma-yoga as preached in the Gita (Tilak 1936: xxv).
The ideal person according to the Gita, in Tilak's view, is a sthitaprajna ('mentally steady and balanced'), who is not only unattached, even-minded, egoless, calm and cool, but also actively engaged in the welfare of the world without any selfish motive. Such a person is sinless, and karma does not attach to him. In the course of being engaged in action, such a person honestly follows the principles of non-violence, truth, non-stealing and justice. But if, in the course of being so engaged, exceptions are made at times, entirely in the interest of people or society or the welfare of the world, then sin does not attach to such a person. This is a teaching of the Gita, in Tilak's view. This is how Tilak defended Shivaji's killing of Afzal Khan. Stevenson observes in this context that 'It is disturbing for ethical theory for it asserts that some men are above the law' (Stevenson 1991: 58). These exceptions are particularly disturbing if we take note of the fact that Tilak wanted the leaders of a nation to have the qualities of a sthitaprajna. He desired that the leadership of a country should be in the hands of such ideal persons or a moral elite. By providing exceptions to them, Tilak could not have meant that leaders can be corrupt and opportunistic on the excuse of acting in national interests. Even a genuinely honest leader may at times commit a mistake or impropriety as judged by others, but in good faith. However, Tilak's ethics, at least as he meant it, can be dubbed neither ethical relativism nor moral hypocrisy. Exceptions to moral principles are subject to severe controls and cannot be indulged in by all sorts of people for all sorts of reasons in all sorts of circumstances. Even Gandhi, who had absolute faith in truth and non-violence, conceded exceptions to them under severe controls, and even common law recognises them. Otherwise, life would be impossible. Absolute non-violence is an impossibility.
Even a single step walked upon this earth involves killing countless germs or life forms. In the case of a conscious act, law recognises that killing a person in self-defence in the face of a murderous assault cannot be considered murder. Nor can a soldier killing enemies in a legally declared war, according to the norms of just war, be accused of murder. Though in principle Tilak was not against using violence as a last resort against cruelty and oppression, at no time did he advocate it as the preferred path in the freedom struggle. He proposed instead swadeshi and non-cooperation as its weapons. Ruling out violence altogether and absolutely would, in Tilak's view, eliminate in the oppressor's mind any fear of reprisal by the oppressed and could encourage tyranny. Gandhi, on the other hand, was totally opposed to the use of violence in the freedom struggle, but used the legacy of Tilak's peaceful methods of swadeshi and non-cooperation.
Tilak was not the first to declare a socially oriented karma-yoga as the central message of the Gita. Perhaps that credit goes to his younger contemporary, Swami Vivekananda. The young Swami had become quite well known by the end of the nineteenth century, well before Tilak wrote his own treatise on the Gita. But Tilak was probably the first to do so in a rigorous, scholarly and systematic way. In any case, Tilak's activist reading of the Gita contributed to energising the whole country and gave further momentum to the freedom struggle. It helped Gandhi too to take the country further ahead towards India's independence. Tilak can be said to have prepared the ground for Gandhi in more than one way. Tilak's Gita Rahasya also influenced many subsequent commentators on the Gita, including contemporary ones, in giving more importance to karma-yoga in the form of social service. Social service got a boost as a result. In the discussions on the relative merits of alternative paths of God realisation or means of moksha, the balance was now irreversibly tipped in favour of socially engaged karma-yoga as never before. The Gita Rahasya played an important role in this.
## Swami Vivekananda
Swami Vivekananda (1863–1902) was born seven years after Tilak but died eighteen years before him. Yet the Swami accomplished a great deal in less than forty years of his short life and left a long-lasting impact on India and Hinduism, which is cherished even today. Born in a highly cultured Bengali Kshatriya family, Narendranath Dutt (Naren, for short, the Swami's name before sannyas) had a good Western-type education in the Scottish Presbyterian College at Calcutta. Even while in college, he had come under the magnetic influence of Shri Ramakrishna Paramahamsa. He asked Ramakrishna a straight question, 'Sir, have you seen God?' And Ramakrishna gave an equally straight answer, 'Yes, my son, I have, just as I see you before me, only much more intensely.' He went on, 'God can be realised. One can see and talk to Him as I am seeing and talking to you. But who cares?' The sincerity of his reply left Naren tremendously impressed (Disciples 1989, Vol. I: 77). The association between the two became close and profound, and Naren even began to experience spiritual trances. As French says, 'The combination of paths which the Gita speaks seems to have been experienced personally by the young Vivekananda, as his relationship with Ramakrishna unfolded' (1991: 132).
It was not just the spirituality of Ramakrishna which Vivekananda imbibed, but also his deep compassion for humanity. Once, when Vivekananda implored Ramakrishna to show the way to nirvikalpa samadhi, the state of ultimate release and bliss, the latter rebuked him. 'Shame on you!' he said, 'I never thought you to be so mean as to be anxious for your own salvation only, whereas you have the powers to do so much good to mankind' (French 1991: 144). This influenced Vivekananda to make his monastic order responsive to the needs of society, particularly of the poor and downtrodden. After Ramakrishna shed his earthly body in 1886, his disciples got together to establish the Ramakrishna Math and start the Ramakrishna Mission with the aim of carrying forward the ideals and philosophy of Shri Ramakrishna. Both the math and the mission have since grown worldwide with many branches or centres, doing exemplary work in health care, education, famine and disaster relief, and providing homes, besides disseminating the basics of Indian culture and philosophy. In this philosophy, an activist compassion for humanity is an intrinsic part of spiritualism. Vivekananda's interpretation of the Gita was a reflection of this philosophy.
Vivekananda did not write any systematic treatise or a commentary with a translation of the Gita as many others did, for he had no time for it. But many of his lectures and discussions are replete with references to the Gita. He gave a discourse particularly on the Gita in Bengali at Calcutta in 1897 (Thoughts on the Gita, CWSV 1998, Vol. IV: 102–10), and three lectures at San Francisco in 1900 (The Gita I, II, and III, CWSV 2000, Vol. I: 446–80). Further, his lectures on karma-yoga (CWSV 2000: 25–118), bhakti-yoga (CWSV 2001, Vol. III: 31–100; CWSV 1998, Vol. IV: 3–60), jnana-yoga (CWSV 1999, Vol. II: 57–288) and other topics draw significantly from the Gita. He was learned in most of the scriptures, but had a special fascination for the Gita. Referring to it, he said, 'no better commentary on the Vedas has been written or can be written'; 'that wonderful poem, without one note in it of weakness or unmanliness' (Disciples 1989, Vol. II: 203, 361).
For Swami Vivekananda, the historicity of the Gita or of its characters was not a pertinent issue. It was composed at a time when no importance was given in India to assigning dates and authorship. What is pertinent in the Gita for all time is its teaching, not history. The universal value of the Gita hardly depends on its history, he felt. Citing a parable of his guru, he said that if you go to a mango garden full of luscious fruits hanging from the trees, you should savour the mangoes, rather than waste time in counting the leaves and branches or estimating the age of the trees. Nevertheless, the Swami agreed that historical research may be useful in identifying unnecessary accretions to the texts, which distort the basic teaching. But he considered the Gita an intrinsic part of the Mahabharata, taking into account the similarity both in the teachings and in the style, and not as an interpolation (Gowda 2011: 94–98).
Swami Vivekananda thought that the teaching of the Gita was most pertinent in achieving the main task before him. Awakening national consciousness among Indians and making them sensitive to the appalling poverty and backwardness of the masses constituted his main task. This is reflected often in many of his lectures and letters. He asked whether there is any reason why India should lie in the ebb-tide of the nations of the world. 'Is she inferior in intellect? Is she inferior in dexterity? Can you look at her art, at her mathematics, at her philosophy, and answer "yes"? All that is needed is that she should de-hypnotise herself and wake up from her age-long sleep to take her true rank in the hierarchy of nations' (CWSV 1997, Vol. V: 227). In this task, he found two verses of the Gita in which Krishna exhorted Arjuna not only very pertinent, but also constituting its central message to India, placing India in the position of Arjuna. When Arjuna cast away his bow and arrows, and sank down in the seat of his chariot, distressed and full of tears, Krishna addressed him thus in two most powerful verses (The Gita II.2–3):
Kutastva kashmalam idam vishame samupasthitam /
Anaryajushtam asvargyam akirtikaram Arjuna //
Klaibyam maasma gamah Partha naitat tvayyupapadyate /
Kshudram hridaya-daurbalyam tyaktvottishtha Parantapa //
('In such a strait, whence comes upon thee, O Arjuna, this dejection, un-Aryan-like, disgraceful, and contrary to the attainment of heaven? // Yield not to unmanliness, O son of Pritha! Ill doth it become thee. Cast off this mean faint-heartedness and arise, O scorcher of thine enemies!'
– Tr. by Swami Vivekananda, CWSV 1998, Vol. IV: 107–8)
The Swami says, 'If one reads this one Shloka – Klaibyam... Parantapa – one gets all the merits of reading the entire Gita; for in this one Shloka lies imbedded the whole Message of the Gita' (CWSV 1998: 110). What he meant was that India should arise and fight poverty, ignorance and other such weaknesses among its masses. He observed, 'The one thing that is at the root of all evils in India is the condition of the poor.... The only service to be done for our lower classes is to give them education, to develop their lost individuality.... Every nation, every man, and every woman must work out their own salvation' (CWSV 1998: 362). He put great emphasis on educating the poor, and said, 'If the poor boy [or girl] cannot come to education, education must go to him [or her]' (CWSV 1998: 363). He advised volunteers, including sadhus and sannyasis, to go from village to village or even door to door and provide education, where there were no schools, or where poor children were not attending schools. He deplored the practice of child marriage and advocated providing proper education along with character building both to boys and girls. He ridiculed orthodox religious leaders who opposed the Age of Consent Bill raising the age of marriage and asked them whether religion consisted in making a girl become a mother at the age of twelve or thirteen (CWSV 1997, Vol. V: 341–42). But he did not favour preaching religion to the poor before their problems of hunger and constant anxiety about bare existence are solved (CWSV 1997: 380).
Now, such a social service to the poor and needy should be provided without ego, arrogance and any selfish motive. And that is where the Gita's nishkama-karma (desire-free or selfless work) becomes relevant for the Swami. His preference for karma-marga over jnana and bhakti comes out loud and clear. He said, 'If you want any good to come, just throw your ceremonials overboard and worship the Living God, the Man-God – every being that wears a human form' (French 1991: 143). Further, 'You think Jnana is dry knowledge to be attained by a desert path, killing out tenderest faculties of the heart. Your Bhakti is sentimental nonsense which makes one impotent.... Who cares for your Bhakti and Mukti?... I will go to hell cheerfully a thousand times, if I can rouse my countrymen, immersed in Tamas [darkness, lethargy], and make them stand on their feet and be Men, inspired with the spirit of Karma-yoga' (French 1991: 144). The Swami did not, however, altogether cast aside the other paths taught in the Gita, so long as the importance of karma-yoga is not undermined. In a sense, he saw no conflict, since karma-yoga can be made more effective by combining it with knowledge and devotion. In fact, he gave several lectures both in India and abroad on the other paths taught in the Gita also, but he firmly believed that India's destiny lies in following karma-yoga, whatever may be the value attached to other paths by individuals for their personal spiritual progress.
According to the Swami, a karma-yogi should resist any temptation to choose only work of 'higher' status and avoid 'lower'. Each is great in his own place, he declares, and advises that each should respect his or her own work, for no work is inferior. One should never despise or hate oneself (CWSV 2000, Vol. I: 36–51). Swadharma in the Gita does not necessarily mean caste-duty. In the course of our life, we have to perform different roles with corresponding duties, that of a student, teacher, parent, husband, wife, a soldier, a judge, a doctor, a sanitary worker and so on, which fall to our lot sometimes by choice and sometimes without it. But, 'Duty of any kind is not to be slighted. A man who does the lower work is not for that reason only, a lower man than he who does higher work; a man should not be judged by the nature of his duties, but by the manner in which he does them' (CWSV 1997, Vol. V: 241).
An important characteristic of karma-yoga, which the Swami stressed, is that it is not for the elite alone, but meant for everyone irrespective of the level of education and wealth. It is an equaliser. Karma-yoga, in the sense in which the Swami meant it (and not in the sense of rituals), does not need any priest as an intermediary between the individual and God. The Swami rejected caste and other hierarchies, and underlined the egalitarianism of the Gita, by pointing to verses 27 and 28 in its Chapter 13, which said: 'He who sees the Supreme Lord dwelling equally in all beings, the Imperishable in things that perish, he sees verily. For seeing the Lord as the same everywhere present, he does not destroy the Self by the Self, and thus he goes to the highest goal' (Gowda 2011: 93). Realisation of the Supreme is open to all, and one can achieve this through dedicating one's work and fruit thereof to the Supreme with a pure mind, sincerity and skill. And this is the secret of karma-yoga, according to the Gita.
The manner in which we perform our work shapes our character. The Swami says:
Karma in its effect on character is the most tremendous power that man has to deal with. Man is, as it were, a centre, and is attracting all the powers of the universe towards himself, and in this centre is fusing them all and again sending them off in a big current. Such a centre is the real man.... Good and bad, misery and happiness, all are running towards him and clinging around him; and out of them he fashions the mighty stream of tendency called character and throws it outwards. As he has the power of drawing in anything, so has he the power of throwing it out.
(CWSV 2000, Vol. I: 29–30)
A karma-yogi cannot be flippant and should not fritter away his or her energies. The Swami reminds us that, according to the Gita, yoga is doing work with skill and cleverness. He says, 'by knowing how to work, one can obtain the greatest results. You must remember that all work is simply to bring out the power of the mind which is already there, to wake up the soul. The power is inside every man, so is knowing; the different works are like blows to bring them out, to cause these giants to wake up' (CWSV 2000: 31). This is how karma-yoga helps spiritual development. A karma-yogi, explains the Swami, can find serenity and peace of mind amid most intense activity; stress or tension does not affect his soul. The key to success of karma-yoga as a spiritual discipline lies in introspecting over the motive of our work and keeping it pure and noble (CWSV 2000: 31–35).
The secret of success in karma-yoga, according to the Swami, lies in working through love. Referring to a verse in the Gita (III.22), he quotes Krishna: 'Look at Me, Arjuna! If I stop working for one moment, the whole universe will die. I have nothing to gain from work; I am the one Lord, but why do I work? Because I love the world.' His teaching is: work like a master, not like a slave; work with a sense of freedom. Selfish work, work without love, is a slave's work, the Swami explains. Working with love brings its own happiness, which is a reward in itself (CWSV 2000: 57–58). He clarifies further that working merely with a sense of duty is not enough, because even a slave may do that. It brings no joy. The Gita stresses due performance of one's duties without attachment for the sake of society's welfare. But unless one begins to take pride in one's duty skilfully done and with love, it may not be effective and will not bring happiness either to the individual or to the society. That is why good work is possible only in an environment of freedom, not compulsion, nor slavery. Inculcating a sound work ethic is crucial to the character building of an individual and also to what the Gita calls loka-sangraha, or people's welfare or nation building. He wanted a modern India, strong and self-confident, freed from poverty and squalor, built on the foundation of a work ethic provided by the philosophy of karma-yoga. Developing such a work ethic has been the most important contribution of Swami Vivekananda, which energised the whole country as well as Hinduism and continues to do so.
## Lala Lajpat Rai
Lala Lajpat Rai (1865–1928) was one of the three prominent members of the Indian National Congress, before its domination by Gandhi, bracketed as Lal-Bal-Pal. Like the other two, Lajpat Rai also demanded full freedom, and not just autonomy under British supremacy. He studied law at Government College, Lahore. He became a follower of Swami Dayananda Saraswati, the founder of Arya Samaj. He was known as Punjab Kesari, or the Lion of the Punjab, for his bravery in leading struggles in the national cause. He too found Lord Krishna and his Gita a great source of inspiration and motivation in his movement. Like many educated Indians, Rai also found the Puranic Krishna problematic. The Puranas had shown him as a seductive lad, indulging in amorous games and dances even with married milkmaids. Jayadeva's Gitagovindam, though acknowledged as great in literary and lyrical qualities and a favourite work with professional classical singers and dancers, was a particular source of headache for those who wanted to depict Krishna only as a great teacher and a yogi. Rai felt that these devotional poets 'have so pierced him with the arrows of their petty and vulgar imaginations that his personality has totally changed'. Like Bankim, Rai tried to probe into the true historical Krishna, stripped of his romantic image created by Puranas and other later works, and in this attempt published Yogiraj Shri Krishna in 1900 (Davis 2015: 121).
According to Rai, Krishna was a warrior and a ruler during the early Vedic period and took part in the great Mahabharata war as a friend of Arjuna and presented his teachings to the latter, which formed the core message of the Gita. Rai describes Krishna as 'a great teacher, a great warrior, and a man of great learning' (quoted in Davis 2015: 121). In his view, later additions turned Krishna into a divine incarnation, and still later accounts created his romantic image during his childhood and teenage years. However, Rai believes that Krishna was indeed a 'model human being', living a life as taught in the Gita. This meant that the Gita represented a lived experience of its teacher and held valuable lessons relevant to the Indian youth of Rai's time and beyond. 'What Lajpat Rai has in mind is that Indian youths should commit themselves to opposing British colonial rule, even if that involves risking their lives – as he did' (Davis 2015: 124).
Lajpat Rai was leading widespread peasant agitations in the Punjab and was therefore deported without even a trial to Mandalay in Burma in May 1907. But Lord Minto decided that there was not enough evidence against Rai, and he was thus allowed to return in November the same year. During his incarceration, much like Tilak (who wrote his Gita Rahasya later, also from Mandalay), Lajpat Rai wrote a lengthy article on 'Message of the Bhagavad Gita', published later in Modern Review. According to his interpretation, the primary purpose of the Gita was to persuade Arjuna to fight, all else – discussions on jnana, bhakti and so on – being secondary. 'Dharma or duty should be the supreme law of one's life, and once one recognises that duty, no consideration of self-interest, love, or mercy should distract one from it. As such, Krishna above all advocates the path of karma-yoga' (as summarised in Davis 2015: 125). Rai found this message greatly relevant to India of his time. Referring to the concluding verse in the Gita attributed to Sanjaya about the beneficial presence of Krishna and Arjuna, Rai said: 'A nation's prosperity and success depend upon wisdom like that of Krishna and bravery like that of Arjuna' (quoted in Davis 2015: 125).
Lajpat Rai died at the age of sixty-three in 1928 following a police attack on him during a demonstration. He was a martyr to the cause of India's freedom struggle, inspiring many more to join the freedom movement. There was no doubt that he greatly adored Krishna as well as his Gita. But he saw Krishna as a highly evolved human rather than as an avatar, which hardly diminished the significance of the Gita in his view. On the contrary, since, as he argued, Krishna lived the life he taught and bade Arjuna also to follow, his teaching had the backing of experience and was amenable to being put into practice. Yes, Rai put aside aspects of Krishna's teaching other than karma-yoga as secondary and could be considered to have attenuated its scope and significance in the bargain. But he was in the thick of the freedom struggle, and his sharp focus on karma-yoga was a product of the time and circumstances he faced. Karma-yoga, however, is relevant in other circumstances too (like fighting poverty, promoting social welfare) as many interpreters found. While Rai may not have brought out the universality of the Gita, his interpretation of the Gita acted as powerfully as Swami Vivekananda's and Tilak's in awakening India's national spirit, urgently needed then.
## Mahatma Gandhi
Mohandas Karamchand Gandhi (1869–1948) made no secret of his tremendous admiration and love for the Gita. In a letter to Gulzarilal Nanda in 1927, he wrote: 'though I am reading many things, the Bhagavad-Gita is becoming more and more the only infallible guide, the only dictionary of reference in which I find all the sorrows, all the troubles, all the trials arranged in the alphabetical order with exquisite solutions' (Iyer Ed. 172). He considered it as his eternal mother and an inspiring source of his ideas, which he applied not only to his daily life but also in leading the freedom struggle. He declared, 'I somehow or other fancy that "my philosophy" represents the true meaning of the teaching of the Gita' (CWMG Vol. 26: 140). He said that some of his key principles like mental equipoise, swadharma, equality and even the idea of non-cooperation came from the Gita (Gowda 2011: 198n). More than understanding the literal meaning of the Gita, Gandhi strove to get at its inner significance and implications for problems faced in life. He observed:
The Gita is not an aphoristic work; it is a great religious poem. The deeper you dive into it, the richer the meaning you get. It being meant for the people at large, there is pleasing repetition. With every age the important words will carry new and expanding meanings. But its central teaching will never vary. The seeker is at liberty to extract from this treasure any meaning he likes so as to enable him to enforce in his life the central teaching.
(Desai 1946: 130–31)
Gandhi was first introduced to the Gita in 1889 while in England through Sir Edwin Arnold's Song Celestial, which fascinated him instantly. But it was only later while in South Africa that he took it seriously enough to study the original and even learn Sanskrit to understand it properly, which he continued after returning to India in early 1915. By 1919, he felt confident enough to make his own interpretation of the Gita. When he called for a hartal and suggested that people observe a fast and read the Gita, some people questioned him as to how the Gita was relevant since it preached violence. Gandhi responded by explaining that the war was only an allegory, which the Gita seized upon to draw attention to the war going on within ourselves between the forces of good and forces of evil (represented respectively by the Pandavas and Kauravas), and that he was saying this on the basis of experience and not just as an argument (Jordens 1991: 89). Jordens observes that by 1925, Gandhi had a fully worked-out definitive approach to and interpretation of the Gita, which was evident from a six-page article he wrote for Navjivan, on 11 October 1925, on the 'Meaning of the Gita' (Jordens 1991: 89) (CWMG Vol. 28: 315–21). Even before this, he used to refer often to the Gita in his speeches and letters ever since 1890 (Gowda 2011: 169). In 1926, he gave as many as 218 lectures on the Gita at the Satyagraha Ashram at Ahmedabad during morning prayers over nine months. They were mostly simple translations of the Gita (Gowda 2011: 169, 199n). They have been collected and published as Discourses on the Gita (CWMG Vol. 32: 94–376) and also as a separate book (Gandhi 1980). His more systematic translation and commentary articulating his interpretation of the Gita as Anasakti-yoga appeared in Gujarati in 1930. It was translated into English with an introduction by Mahadev Desai (1946) and published with a foreword by Gandhi, titled The Gospel of Selfless Action or The Gita according to Gandhi.
Gandhi took the stand that while determining the meaning of a text like the Gita, 'one should not stick to the letter, but try to understand its spirit, its meaning in the total context' (CWMG Vol. 28: 318; Jordens 1991: 95). The literal text is a product of the times and circumstances in which it was written, but its basic message may well outlast those times, and the real task is to explore and determine that enduring message. That is why there is a need for interpretation in a way that one can transcend the literal translation, whether it pertains to a given word or even the text as a whole. Gandhi pointed out that the author of the Gita himself did so by 'extending the meaning of words' like 'karma', 'sannyasa' and 'yajna'. He therefore felt justified in following the footsteps of Vyasa, by observing that 'We shall do no injustice to Vyasa by expanding the meaning of his words. Sons should enrich the legacy of their fathers' (quoted in Jordens 1991: 95). Gandhi further felt that, in the task of interpreting such texts, more than mere scholarship, 'one must have a well-cultivated moral sensibility and experience in the practice of their truths' (Gandhi 1980: 10). He considered himself well qualified from this point of view even if he might not be a scholar like B. G. Tilak (Jordens 1991: 97).
Tilak's advocacy of karma-yoga and acceptance of violence if needed for a noble and selfless cause was interpreted by revolutionaries as justifying its use in the freedom struggle. Gandhi did not agree with the use of the Gita as an ideological support for an armed struggle against the British. He made it a point to show that the Gita and the Mahabharata stood for non-violence, about which he was firmly convinced. Gandhi's first step in this task was to stress that the background of war in the Gita was only an allegory, indicative of the ceaseless spiritual struggle going on within all of us. Had it been only an advice to Arjuna to fight his adversaries, the Gita would not have attained such widespread popularity and significance. The purpose of the Gita, Gandhi argued, is not to narrate history, but to teach something of a universal and everlasting value. Motivating to kill could hardly have been such a teaching. Gandhi points to the Gita's description of the characteristics of a perfected man, a sthitaprajna, in the last eighteen verses of its second chapter and says: 'I do not see any to correspond to physical warfare. Its whole design is inconsistent with the rules of conduct governing the relations between warring parties' (Desai 1946: 124). It is inconceivable for such a person to choose himsa as his means of solving problems, Gandhi suggests. The Gita does not of course teach non-violence of a coward, and Gandhi made it clear that faced with a choice between cowardice and violence, he would choose the latter (Iyer Ed. 1993: 237). For the same reason, by non-violence, Gandhi did not mean compromising with the evil. He observed in an article in Young India (of 4 August 1920) that 'the Bhagavad Gita is a gospel of non-cooperation between forces of darkness and those of light' (Iyer Ed. 1993: 330).
Taking the Gita as a whole, Gandhi had no doubt that it advocated truth and non-violence, the foundational values for human life and spiritual progress according to him. He thought that this was true not only of the Gita but also for the Mahabharata as a whole. According to Gandhi, 'Vyasa wrote his supremely beautiful epic to depict the futility of war. What did the Kauravas' defeat and the Pandavas' victory avail? How many among the victors survived?' (quoted in Jordens 1991: 99). The epic demonstrated the utter futility of war and its violence. Gandhi took the epic essentially as anti-war, and the Gita stood at its centre.
The object of the Gita, Gandhi said, was to show 'the most excellent way to attain self-realization' (Desai 1946: 125). The key to self-realisation is anasakti, which literally means disinterestedness or detachment, but Gandhi took it as indicative of desireless action, or renunciation of the fruits of action, with as much emphasis on action as on detachment. It cannot be renunciation of action because, according to the Gita, renunciation of action is not possible. 'There must be action where there is a body. (But) every action is tainted, be it ever so trivial.... How can one be free from action, i.e. from the taint of sin? The Gita has answered the question in decisive language: By desireless action; by renouncing fruits of action; by dedicating all activities to God, i.e. by surrendering oneself to Him body and soul' (Desai 1946: 125). Gandhi's prescription thus is selfless work. Renunciation does not mean indifference to the outcome of action. It also does not mean a want of fruit to the renouncer; in fact, 'he who renounces reaps a thousand-fold' (Desai 1946: 128). Gandhi explains that leaving the outcome to God contributes to efficiency in work and improves outcome. 'He who is ever brooding over result often loses nerve in the performance of his duty. He becomes impatient and then gives vent to anger and begins to do unworthy things; he jumps from action to action, never remaining faithful to any' (Desai 1946: 128). Gandhi does not undermine the role of jnana and bhakti; in combination with selfless action, they can make action meaningful and effective. Jnana ensures that the action is meaningful and purposive, and bhakti ensures that action is pursued with the whole heart in it. But jnana and bhakti or either of them devoid of action amount to self-indulgence, according to him. Jnana and bhakti have to stand the test of renunciation of the fruits of action (Desai 1946: 126, 127).
There is thus no conflict between the interpretations of Tilak and Gandhi; the former's karma-yoga and the latter's anasakti-yoga amount to one and the same, and both make jnana and bhakti subordinate to action. The crucial difference between the two was in respect of the attitude to violence and non-violence. While unselfish violence for a noble cause would be permitted by Tilak, Gandhi was quite reluctant about it. While an armed struggle for freedom was consistent with the Gita's teaching in Tilak's view, it was not in Gandhi's. According to Gandhi, the central teaching of the Gita, viz. anasakti, is inconsistent with violence. Though he admitted that 'the Gita was not written to establish ahimsa', he pointed out that while following the central teaching of the Gita, one is bound to follow truth and ahimsa (Desai 1946: 129). He added emphatically that 'after 40 years' unremitting endeavour fully to enforce the teaching of the Gita in my own life, I have, in all humility, felt that perfect renunciation is impossible without perfect observance of ahimsa in every shape and form' (Desai 1946: 130).
In Gandhi's view, non-violent means of freedom struggle were more potent than the violent; what is more, as he said, he derived the former directly from the Gita. He wrote: 'The Bhagavadgita's intention [is] that one should go on working without attachment to the fruits of work (anasakti). I deduce the principle of Satyagraha from this.... As far back as 1889, when I had my first contact with the Gita, it gave me a hint of Satyagraha and as I read it more and more, the hint developed into a full revelation of Satyagraha' (CWMG Vol. 15: 312–13; Gowda 2011: 176). Gowda explains that for Gandhi, satyagraha was the positive form of anasakti, and non-cooperation its negative aspect. Gandhi believed that he derived even his belief in the spinning wheel (charkha) and swadeshi from the Gita. Swadeshi was inspired by the concept of swadharma in the Gita (Gowda 2011: 176, 200n).
Gandhi emphasised the paramount importance of performing one's own duties in life, as advised by the Gita. The question, however, is how duties are determined. He translates the first half of verse 8 in Chapter 3: 'Do thy allotted task (niyata karma); for action is superior to inaction.' Commenting on this, he takes niyata karma as a synonym for swakarma or one's own task, swadharma or one's duty, swabhavaniyata karma or work determined according to one's nature, and also for sahaja karma or 'work to which one is born' – the words used in Chapter 18, verses 45, 47 and 48. Gandhi then adds: 'What falls to one's lot does not therefore mean work imposed upon one, but work which one has found out to be in accordance with one's own nature, one's bent, the law of one's being' (Desai 1946: 172). The added comment clearly implies that according to Gandhi, swadharma does not mean caste duty based on one's birth. A hurdle is his translation of sahaja karma as 'work to which one is born'. But this is too literal a translation, and Gandhi was emphatically critical of any literal translation when applied to interpreting the Gita as supportive of violence even for a selfless or a patriotic cause. Sahajam also means swabhavaniyatam or natural or one's bent, not necessarily determined by birth. Lord Krishna was well aware of non-Kshatriyas fighting in the Mahabharata war like Kshatriyas, and he hardly had an objection to it. Gandhi too was aware of it. And yet, when it comes to commenting on the famous verse Chaturvarnyam maya sristam guna-karma-vibhagashah ('the four Varnas were created by Me according to guna and karma' – Chapter 4, verse 13), Gandhi says, 'The Gita does talk of Varna being according to guna and karma, but guna and karma are inherited by birth. The law of Varna is nothing if not by birth' (Desai 1946: 196). Gandhi was surprisingly conservative and orthodox in taking such a stand.
It is accepted that guna-karma means work as per one's aptitude, which need not necessarily be based on birth. By declaring that varna is according to one's calling and aptitude, the Gita demonstrated only its intention to reform and democratise the society and religion. Gandhi's interpretation of birth or heredity determining one's duty is inconsistent with his own explanation that it is not work imposed, but found according to one's own nature and bent. It is inconsistent also with his emphasis on equal treatment of all work and occupations, and denial of any hierarchy in this, for which also he found support from the Gita (verses 29–32 in Chapter 6). He believed in according the same status to a scavenger as to a lawyer or a doctor. Gandhi was certainly not against elevating the social and economic status of downtrodden castes, particularly of untouchables. But how could he ignore the connection between elevating their status and the freedom necessary to break out of traditional caste-based occupations and move to new jobs? Verse 47 in Chapter 18 may have appeared to Gandhi as being against such freedom. His translation of this verse is: 'Better one's own duty, though uninviting, than another's which may be more easily performed; doing duty which accords with one's nature, one incurs no sin' (Desai 1946: 374). He comments on this further by saying that if one follows the Gita's principle of doing one's work with detachment, there is no room for preferring another's work. But there is no reason why the verse should necessarily refer to hereditary caste duty.
It is important to note, however, that Gandhi changed his stance subsequently and in his later writings was clearly against the caste system based on birth. He said: 'The caste system as it exists today in Hinduism is an anachronism. It is one of those ugly things, which will hinder the growth of true religion. It must go if both Hinduism and India are to live and grow' (CWMG Vol. 79: 384). Gandhi moreover was forthright and consistent throughout in condemning and completely rejecting untouchability. There was no support for untouchability in the Gita directly or indirectly. On the other hand, the Gita called for equal treatment of all, as Gandhi stressed. He warned: 'If untouchability is not removed root and branch, Hinduism is bound to perish, for no religion can nurture itself on the degradation of its votaries' (CWMG Vol. 56: 194). He worked sincerely for eradication of untouchability in India, for which he found justification from the Gita. Gandhi's philosophy of anasakti, selfless and detached work for the good of the country, could hold no room for hierarchical social relations, because cornering any privileged position in society is totally opposed to this philosophy.
## Aurobindo Ghose
Sri Aurobindo Ghose (1872–1950) was a slightly younger contemporary of Gandhi, born three years later, and died two years after him. By a significant coincidence, his birth date, namely, 15 August, was also the date when in 1947 India became independent. On the first day of independence, he had just completed seventy-five years. Sri Aurobindo was a revolutionary politician, teacher, social analyst, philosopher, visionary poet and a mystic all rolled into one. He became a highly respected spiritual leader only after he transcended nationalism and started looking into the spiritual problems of humanity as a whole. Born in Calcutta, he had a highly anglicised upbringing and was sent to England for education when he was only seven. He spoke only in English as a child and started learning Bengali when he was eighteen. He appeared for the Indian Civil Service examination under pressure, passed it, but did not want to go into it. He therefore deliberately neglected horse riding, as a result of which he was not selected. He returned to India in 1893 when he was twenty-one and joined the administrative service of the Maharaja of Baroda. Soon he changed to the educational service of the state by serving in the Baroda College and became its vice principal. It was in Baroda that he acquired a thorough knowledge of Bengali and Sanskrit and studied the Upanishads and other ancient Indian texts. It was also at Baroda that he had started his yogic practices in 1904 and had deep spiritual experiences, initiated into them by a Maharashtrian yogi, Vishnu Bhaskar Lele. But drawn also into the nationalist struggle in the wake of the partition of Bengal, he resigned from his service in Baroda in 1906 and went to Calcutta as the principal of a newly started Bengal National College. There, he was enlisted by the eminent nationalist leader Bipinchandra Pal to write regularly for the English newspaper Bande Mataram, which had become the mouthpiece of the nationalist movement.
At that time at least, Aurobindo did not believe in Gandhi's path of peaceful resistance, but in violent resistance. His association with revolutionaries who plotted a bomb blast led to his imprisonment in Alipore in May 1908, but he was acquitted for lack of evidence and released in May 1909. While in jail, he had a series of deep spiritual experiences. In one of them, he felt that the Divine placed the Gita in his hands and that he became a faithful and selfless instrument of the Divine. Out of jail, he started two journals in 1909 – Karmayogin in English and Dharma in Bengali, 'in which he propounded the philosophical basis of nationalism and important features of Indian culture' (Pandit 1998: 12). In both the journals, he also began expounding the yoga of the Gita. Soon, however, he had to entrust the responsibility of running the journals to others. For, it was at this time that he had a spiritual urging to go to Chandernagore (a French enclave then) in February 1910 and then to Pondicherry in April 1910. Pondicherry being under the French at that time, he could escape regular harassment by the British government and pursue his spiritual path in peace. He spent the remaining forty years of his life there, making Pondicherry internationally famous because of his presence and his Ashram.
Sri Aurobindo saw India's independence as necessary not only for its political and socio-economic emancipation, but also for its spiritual rejuvenation. He believed that the Gita justified even violence if needed for attaining India's freedom, but he viewed it mainly as a text to guide us on the path of spiritual progress. In his later writings on the Gita from Pondicherry, this spiritual aspect became most important, rather than the Gita's being a source of inspiration for the freedom struggle. He did not express the main spiritual goal of man as merely breaking the cycle of births and deaths and attaining liberation from samsara. The main goal in his view was spiritual perfection, and this goal was as relevant to humankind as a whole as to individuals.
Essays on the Gita is one of the well-known works of Sri Aurobindo, a few of the many others being The Life Divine, Savitri, The Secret of the Veda, The Upanishads, The Synthesis of Yoga, The Foundation of Indian Culture, The Human Cycle, and The Ideal of Human Unity. He wrote the Essays on the Gita during 1916–1920, in two series, the first consisting of twenty-four essays on the first six chapters of the Gita and the second of another twenty-four essays on the remaining twelve chapters. They were first published as a series of articles in the monthly Arya, which he started at Pondicherry. The Essays together (with nearly 600 pages) constitute Volume 13 of Sri Aurobindo Birth Centenary Library (brought out by Sri Aurobindo Ashram, Pondicherry, in 1970) and are available as an independent book too (Aurobindo 1996). Another, much larger work of his (with more than 900 pages), The Synthesis of Yoga, is also based on the Gita. It is an elaboration of the yoga which, in his view, was taught by the Gita. It began to appear as a series of articles in the same Arya from 1914, but stopped before completion when the journal ceased publication in January 1921. His last article for the series, 'The Supramental Consciousness', was published as the last chapter of the work, along with the rest of the articles of the series in one book, only after his death. The Synthesis constitutes Volumes 20 and 21 of Sri Aurobindo Birth Centenary Library and is available online (surasa.net/aurobindo/synthesis/). Both these works are useful in understanding Sri Aurobindo's thoughts on the Gita.
Sri Aurobindo had regarded the Gita very highly even before his departure for Pondicherry. In his journal, Karmayogin, he had declared that the Gita is our 'chief national heritage, our hope for the future, our great force for the purification of the moral weaknesses that stain and hamper our people' (quoted in Minor 1991-a: 61). During the time he was busy in the nationalist movement, he drew support from the Gita for active resistance to British rule. The titles of the two journals that he published from Calcutta, Karmayogin and Dharma, were inspired by the Gita. He had derived this activist understanding of the Gita from his Baroda days (Minor 1991-a: 65). He also saw a universal value in the Gita apart from its being a source of inspiration in the nationalist struggle, for he declared in his journal, Dharma, that 'the Gita will become the universally acknowledged Scripture of the future religion' (quoted in Minor 1991-a: 71). However, it was really at Pondicherry that he devoted more of his philosophical reflection and writing to the Gita, from which emerged both The Essays and The Synthesis, as mentioned earlier. For, he had started his practice of the 'yoga of the Gita' (as he called it) soon after reaching Pondicherry.
Sri Aurobindo called the yoga of the Gita by various terms – purnayoga (complete yoga), 'integral yoga' and adhyatma-yoga (Minor 1991-a: 77). In his view, the yoga of the Gita was an integral combination of different paths – jnana, karma, bhakti and self-perfection. They are all harmonised in the Gita without conflict. The yoga of the Gita is meant basically for experiencing, practicing and living it, rather than merely for arriving at an intellectual understanding. Sri Aurobindo was not a karmayogin of the popular Bankim–Tilak brand. His position is much more nuanced and much above the mundane and ordinary. In his essay on 'The Core of the Teaching [of the Gita]', he explains: 'The argument of the Gita resolves itself into three great steps by which action rises out of the human into the divine plane leaving the bondage of the lower for the liberty of a higher law.' Briefly, the first step is the renunciation of desire for the fruit and doing the works only as a sacrifice to the Supreme. The second step is to give up not only the desire for the fruit, but also the claim to be the doer of works. All works in the world are to be understood as simply the operation of the universal force of the nature–soul, prakriti, the unequal, active and mutable power. The third and last step is to see the Supreme Self, the Purusha (Purushottama, the highest or supreme Purusha), as governing this prakriti, or the nature–soul, which is a partial manifestation of the highest Purusha, and by whom all works are directed through nature. It is to this highest Purusha that all love, adoration and sacrifice of works are to be offered (Aurobindo 1996: 37–38).
Sri Aurobindo then explains:
The first step is Karmayoga, the selfless sacrifice of works, and here the Gita's insistence is on action. The second is Jnanayoga, the self-realisation and knowledge of the true nature of the self and the world; and here the insistence is on knowledge; but the sacrifice of works continues and the path of Works becomes one with but does not disappear into the path of Knowledge. The last step is Bhaktiyoga, adoration and seeking of the supreme Self as the Divine Being, and here the insistence is on devotion; but the knowledge is not subordinated, only raised, vitalised and fulfilled, and still the sacrifice of works continues; the double path becomes the triune way of knowledge, works and devotion. And the fruit of the sacrifice, the one fruit still placed before the seeker, is attained, union with the divine Being and oneness with the supreme divine nature.
(Aurobindo 1996: 38)
Thus, Aurobindo does not consider 'disinterested performance of duty as the highest and all-sufficient law'. It is necessary but not sufficient. He admits that 'an inner situation may even arise, as with the Buddha, in which all duties have to be abandoned, trampled on, flung aside in order to follow the call of the Divine within. I cannot think that the Gita would solve such an inner situation by sending Buddha back to his wife and father and the government of the Sakya state.... The Gita does not teach the disinterested performance of duties but the following of the divine life, the abandonment of all dharmas, sarvadharman, to take refuge in the Supreme alone' (Aurobindo 1996: 32–33).
Nevertheless, Aurobindo does not recommend renunciation as the general path for spiritual progress for all and admits that the Gita prefers action to renunciation. He says, 'God in the world and you in the world are realities, the world and you are true and actual powers and manifestations of the Supreme. Therefore, accept life and action and do not reject them. One with God in your impersonal self and essence, an eternal portion of the Godhead turned to him by the love and adoration of your spiritual personality for its own Infinite, make your natural being what it is intended to be, an instrument of works, a channel, a power of the Divine' (Aurobindo 1996: 577).
In The Synthesis of Yoga, Sri Aurobindo (1999) not only further elaborates the three paths of 'divine works' (karma-yoga), 'integral knowledge' (jnana) and 'divine love' (bhakti) in the first three parts of the book respectively, but also discusses 'the yoga of self-perfection' (raja-yoga or dhyana-yoga) in the fourth and final part. Like the other three, the last one too is taken from the Gita. Here again, this path is shown not as a substitute for, but as complementing, the first three, together making the sadhana, or spiritual striving, more perfect. Aurobindo distinguishes two stages of this path. In the first, the mind is disciplined, purified, stilled, concentrated and directed towards the Divine. In doing so, he clarifies, the 'Hatha-yogic methods of disciplining the body can be dispensed with, but there is no objection to their partial use' (Aurobindo 1999: 612). The main effort has to be directed at the mind, rather than the body. An obsession with the body can be an obstacle, but a disciplined and healthy body is certainly a help. Once the mental motives are raised above the ordinary and spiritualised, and the disciplined mind is directed to the Divine on a stable basis through one's own will, the next stage is self-surrender to the will of the Divine. 'A greater perfection can only be arrived at by a higher power entering in and taking up the whole action of the being. The second stage of this Yoga will therefore be a persistent giving up of all the action of the nature into the hands of this greater Power..., until the Divine to whom we aspire becomes the direct Master of the Yogic efforts' (Aurobindo 1999: 619). This does not mean that such a yogi abstains from society and its work, or from service to fellow beings or suffering humanity. The only difference is that such work is done by the yogi as an instrument of the Divine and as willed by the Divine.
It is difficult to label Sri Aurobindo in terms of Advaita, Dvaita and so on. According to him, the ineffable Brahman is the Supreme Soul, and all individual souls are tireless flames of this one Soul (Aurobindo 1996: 578). The Supreme has manifested in the world or nature from out of His own infinite existence and spiritual essence, and this world is not an illusion but a reality. As in Vishishta-advaita and Dvaita, he emphasises love or bhakti towards the Supreme, combined with an attitude that one is only an instrument in the hands of the Divine. The individual souls are different from the Supreme only in a transitional phase when they are in pursuit of perfection, that is, before they realise their true essence. The transitional phase itself has a function or purpose, for it is in this phase that the individual souls can experience the joy of their love and of the striving for oneness with the Supreme. It is the destiny of every soul to attain this oneness sooner or later.
## Swami Sahajananda Saraswati
Swami Sahajananda Saraswati (1889–1950) is a creative interpreter of the Gita. The Swami was no ordinary monk confined to spiritual pursuit. He was a polymath, a highly learned person, having a good knowledge of sociology, Marxism, history, linguistics and grammar, apart of course from Vedantic philosophy. He was an ascetic, a peasant leader, a nationalist and a revolutionary. Many modern Hindu monks have engaged themselves in social work, but mostly of a nonadversarial nature, without unduly disturbing the powerful established vested interests. Though Swami Sahajananda set out as an earnest sadhaka at the age of eighteen, he came under the influence of Gandhi in around 1920 and until 1930 was active in the Congress party and its struggles (Agrawal 2004: 93). He took up the cause of downtrodden peasants and led their movements in Bihar during the 1930s and 1940s against powerful zamindars or landlords who were mercilessly exploiting them. The mobilisation of these peasants had been initiated earlier by another monk, Swami Vidyananda, during 1919 and 1920. After taking over the leadership of the movement, Swami Sahajananda first started the Bihar Province Kisan Sabha in 1929, and then the All India Kisan Sabha (AIKS) in 1936 along with other luminaries like N. G. Ranga, E.M.S. Namboodiripad and Jayaprakash Narayan. The Swami was elected the first president of AIKS at its Lucknow session in 1936. The peasants sought nothing less than a complete abolition of the zamindari system. The movement also demanded minimum wages for agricultural workers through legislation (Das 1982: 48–87). Notably, the Swami was born in the same caste to which the majority of zamindars belonged, but that was no problem for the Swami, whose conscience commanded him to struggle even against his own caste in favour of the oppressed.
Unfortunately, the Swami had to leave the AIKS, which he had founded and nourished, owing to ideological differences with the communists who increasingly came to dominate it. While the Swami was a staunch nationalist as well as a peasant leader and did not want either the nationalist or the peasant struggles to abate, the communists wanted to moderate, perhaps even sedate, them, as the British along with the Soviet Union were fighting fascism in World War II. He had left the Congress earlier and later left the communists as well. He was a more serious revolutionary in supporting the peasant cause than either the Congress or the communists, and both disappointed him. Yet his fascination for Marxism continued.
Both Swami Vidyananda and Swami Sahajananda were steeped in traditional discourse but were also sensitive to social issues (Agrawal 2006). They saw no contradiction between their being monks and peasant leaders at the same time. Once the landlord leaders got together and asked Swami Sahajananda how he, being a sannyasi, could get involved in such temporal issues as peasant problems. He told them, quoting a Sanskrit verse: 'Mendicants are selfish; living away from society, they try for their own salvation without caring for others. I cannot do that. I do not want my own salvation apart from that of many destitutes. I will stay with them, live with them, and die for them' (Saraswati 2000: 171). To the Swami goes the credit of extending the meaning of the word moksha to mean not individual liberation from the cycle of births and deaths as in the traditional religious sense, but the liberation of the oppressed from poverty, exploitation and illiteracy (Agrawal 2004: 94). For this, he depended on his own interpretation of the Gita.
When imprisoned along with numerous others for participating in the Freedom Movement of 1942, the Swami completed his commentary on the Gita in Hindi, under the title Gita Hridaya. It was first published in 1948 and has been republished recently in the collected works of the Swami as Volume 3 (Saraswati 2003). The heart of the Gita, the Swami contends, lies in activist spiritualism, which gets manifested in the love for humanity and active participation in relieving humanity, particularly the oppressed and the poor, of the burden of injustice, exploitation and oppression. Indeed, Raja Rammohan Roy and others also had used the Gita for their social reform projects, but Swami Sahajananda Saraswati used it in his fight against economic and social exploitation. In the preface to Gita Hridaya, he takes the stand that there is no conflict between the dharma enunciated in the Gita and that of Marxism (Saraswati 2003: 4–5). At least two verses in the Gita support equity as strongly as Marxism does, though the Gita is theistic while Marxism is atheistic. The two verses are: 'He who sees Me in all things and sees all things in Me, never becomes separated from Me, nor am I lost to him' (VI.30). Again, 'he who judges pleasure and pain in others by the same standard as he applies to himself, that yogi is the highest' (VI.32). The Gita praises the sarva-bhutatma-bhutatma (one who realises the self in all beings as his own self, V.7) and the sarva-bhuta-hite ratah (one engaged in the welfare of all beings, V.25; XII.4). The Swami asks: who can be a truer Marxist than a sarva-bhutatma-bhutatma and a sarva-bhuta-hite-ratah? (Saraswati 2003: 5).
The Swami explains further that social work needs the Gita's yoga, which insists on doing the work without expecting any personal reward, without attachment and arrogance, but with dedication, skill and enthusiasm, leaving the outcome in the hands of the Divine, facing both success and failure with equipoise, and with the feeling that the doer is only an instrument of the Divine will (Saraswati 2003: 27–37). Lord Krishna may have meant this yoga to be followed in everyday living, but all ordinary economic activities are carried out with some motivation of gain, through which one can make a living. There can be no doubt, however, that the Gita's yoga is particularly relevant and necessary in social and political work. Such work has to be fully directed at helping others, not at making a personal gain. The reward will come on its own, without one having to strive stressfully for it.
Explaining why he considers that there is no conflict between the Gita and Marxism, the Swami observes that there is really no strict opposition to atheism in the Gita (Saraswati 2003: 56). The emphasis instead is on selfless work for the benefit of humanity. There is no problem for an atheist Marxist or a socialist in becoming a karma-yogi. In fact, a genuine socialist has to be a karma-yogi. Just as the Swami does not take a narrowly religious view of the Gita, he does not take a narrow view of Marxism either, as being necessarily opposed to religion, which after all is a personal matter. He quotes Lenin to say that the Marxist opposition to religion is not absolute or valid in all times and circumstances; the opposition arises because leaders of religion have always in the past supported exploiting classes at the expense of exploited classes. Religion need not necessarily act as an opium for the poor; it has helped the poor to remain optimistic. He also quotes Lenin, who advised against splitting the working class into atheists and believers (Saraswati 2003: 71). He quotes Stalin too, who told the dean of Canterbury: 'Religion cannot be stopped. Conscience cannot be stilled. Religion is a matter of conscience and conscience is free. Worship and religion are free' (Saraswati 2003: 67). As for the class struggle, the Swami tried to show that religion is and should be supportive of the downtrodden rather than of the powerful propertied class.
Agrawal points to the liberation theology movement of the 1970s in Latin America. The church there took the stand that it should be on the side of the exploited and oppressed in the class struggle, and Christ was seen as the liberator. Swami Sahajananda launched a similar idea in India, in the context of Hinduism and the Gita, over four decades earlier (Agrawal 2004: 93). If one takes into account the peasant struggle launched by Swami Vidyananda in 1919, the date of religious leaders taking up the cause of the exploited classes against their oppressors goes back even further. However, neither of the two Swamis led their struggles under the flag of Hinduism, unlike the liberation theology movement, which was church-led. Nevertheless, Swami Sahajananda did not fail to point out, both by teaching and by acting, how the Gita is fully supportive of the cause of the exploited and the poor in their struggle against exploitation and oppression.
## Jawaharlal Nehru
This chapter concludes with an account of how Jawaharlal Nehru (1889–1964) viewed the Gita. It has dealt so far with how the makers of modern India viewed and used the Gita, which had become global by their time. Yet they also saw something in the Gita that was vitally relevant to designing their approach to solving the problems of India and modernising it. Nehru was the most conspicuous of the makers of modern India. He certainly did not claim that he used the Gita as a guide in formulating his approach to modernising India, which was the main task he undertook. Neither did he write any verse-by-verse commentary on the Gita. Yet, he had some insightful and interesting observations on it, particularly in his book, The Discovery of India, written over five months (April to September 1944) when he was in Ahmednagar Fort prison, and first published in 1946. His observations are mostly in a special section devoted to it, 'The Bhagavad Gita' (Nehru 1981: 108–10), apart from stray references elsewhere.
Nehru begins by noting that the Gita's 'popularity and influence have not waned ever since it was composed and written in the pre-Buddhist age, and today its appeal is as strong as ever in India.... In times of crisis, when the mind of man is tortured by doubt and is torn by conflict of duties, it has turned all the more to the Gita for light and guidance' (Nehru 1981: 109). He also notes, however, that its interpretations have differed widely, some, like Gandhi, basing their belief in non-violence on it, while others have justified violence and warfare for a righteous cause. This seems to be a problem with all sacred books, and the propriety of the interpretation lies more in the person interpreting than in the book. The Gita itself calls for using one's own critical faculty (XVIII.63) and desisting from taking a one-sided view of the problem at hand while deluding oneself that everything concerned has been taken into account (XVIII.22). Most misinterpretations or wrong decisions arise from a failure to heed this advice of the Gita. That is how Godse, who murdered Gandhi, tragically misread the Gita, though ironically both Gandhi and Godse claimed to follow it. Nehru observes that according to the Gita, its call for action has to keep the spiritual background and the larger purpose of the universe in mind (Nehru 1981: 109). Any action or decision is always subject to the advice of the Gita to use one's critical faculty and to take a holistic view. Nehru is in full agreement with the interpretation of the Gita's karma-yoga as one aimed at social betterment and social service, and based on altruism. One can fight for a righteous cause, but the mental frame should be one of non-violence and spirituality.
Nehru characterises the message of the Gita as 'not sectarian', and as 'universal in its approach for everyone, Brahmin or outcaste', and thus 'found favour with all classes and schools' (Nehru 1981: 110). He adds:
There is something in it which seems to be capable of being constantly renewed, which does not become out of date with the passing of time – an inner quality of earnest inquiry and search, of contemplation and action, of balance and equilibrium, in spite of conflict and contradiction.... Its temper is one of supremacy over the changing environment, not by seeking escape from it but by fitting in with it. During the 2500 years since it was written, Indian humanity has gone repeatedly through the process of change and development and decay;... but it has always found something living in the Gita, something that fitted into the developing thought and had a freshness and applicability.
(Nehru 1981: 110)
We now turn, in the next chapter, to modern interpreters who saw the Gita in universal terms, beyond its immediate relevance to India, and looked more into its spiritual significance, like the ancient and medieval interpreters. This was in keeping with the global appeal the Gita has acquired in modern times. The next chapter can also be viewed as an extension or continuation of the present one. While these interpreters used the earlier commentaries, they also added something of their own, further enriching the expanding literature on the Gita.
## Notes
1 The account here of Rammohan's life and work, and of how he used the Gita, is based mainly on Agarwal (1993: 3–48) and to some extent on the website en.wikipedia.org/wiki/Ram_Mohan_Roy, downloaded on 17 January 2015.
2 The account of Bankim's work here is based mainly on Gowda (2011: 9–49); also see Ajit Ray's chapter on Bankim, in Minor (Ed.) (1991: 34–43).
3 Source: en.wikipedia.org/wiki/Theosophical_Society, downloaded on 26 January 2015.
4 The account of Annie Besant's life and work here is based on Neufeldt (1991: 25–28), Sinha (2010: 311–13) and www.ts-adyar.org/content/annie-besant-1847-1933, downloaded on 20 January 2015. Also see Jinarajadasa (1996).
5 The account of Tilak's life, work and philosophy here is based on Tilak (1936), Stevenson (1991), Agarwal (1993: 89–135) and Gowda (2011: 50–89).
6 Apart from CWSV (see Note 5 in Chapter 3), two volumes of The Life of Swami Vivekananda by 'His Eastern and Western Disciples' (referred to simply as 'Disciples'), sixth and seventh editions (1989 and 2001 respectively), have been used here. French (1991), Agarwal (1993: 49–88) and Gowda (2011: 90–123) were also quite useful.
7 Writings of Lajpat Rai have been published together in The Collected Works of Lala Lajpat Rai (CWLLR), edited by B. R. Nanda, 2003, Delhi: Manohar. Yogiraj Shri Krishna is in Vol. I. The account about Lajpat Rai here is based on Davis (2015: 120–28), and to some extent on www.britannica.com/EBchecked/topic/328063/Lala-Lajpat-Rai.
8 Included in the CWLLR (op. cit.), Vol. 3, 329–53 (as per Davis 2015: 219n-10).
9 For Gandhi, as in the Hindu tradition, Self-realisation is God realisation. He did not believe in a personal God existing beyond the universe. Gandhi's concept of God or cosmic spirit is presented insightfully by Bhikhu Parekh (2001: 35–41).
10 For details of how Gandhi worked for eradication of untouchability, see Nadkarni (2013: 141–42).
11 The brief life-sketch here is based on Minor (1991: 61–67), Agarwal (1993: 137–62), Pandit (1998: 1–27), Diwakar (1999: 1–80), Heehs (Ed.) (1999: xiii–xviii), and Gowda (2011: 124–26). A few of the other books on his life and work are A. B. Purani (1978), The Life of Sri Aurobindo, Sri Aurobindo Ashram, Pondicherry; Peter Heehs (1989), Sri Aurobindo: A Brief Biography, Oxford University Press, Delhi; Nirodbaran (1990), Sri Aurobindo for All Ages, Sri Aurobindo Ashram, Pondicherry.
12 For more details, see Hauser (1995), Saraswati (2000), and Das (2008).
13 Sahajananda Saraswati devotes a lengthy section on Marxavad aur Dharm (Marxism and religion) to prove his point that Marxism is compatible with religion (Saraswati 2003: 67–79).
# 5 Contemporary Interpretations
In this chapter are presented a few selected modern interpreters of the Gita who focused more on its general ethical and spiritual content than on its nationalistic and social implications. Even while dealing with the spiritual content of the Gita, these interpreters also brought out its relevance as a guide for day-to-day living in a morally acceptable and psychologically satisfying way. This helped in re-establishing the Gita as a text of universal significance. These interpretations aimed at bringing out the Gita's teaching in helping man (or woman) find a higher purpose and a deeper, sustainable happiness. There have been several more interpreters than those presented here, but some had to be excluded to minimise repetition of overlapping ideas, and also because enough material on them was not available. Some of the contemporary interpreters are referred to in succeeding chapters, though separate sections are not earmarked for them.

## Swami Ramdas
Swami Ramdas (1884–1963) was a modern mystic who could convey his teaching in simple and lucid English as well as in Hindi, Malayalam, Kannada and Konkani (his mother tongue). Born as Padukone Vitthal Rao in Kanhangad, then a part of the South Kanara district and now in Kerala, he became a sannyasi in 1922, renouncing family life and business, and took a new name – Ramdas, a servant of Ram. He also took three vows: 'I am no more Vitthal Rao', 'this body is servant of Ram and shall always be in His service alone' and 'all women are mothers to me'. His Ram was not the son of Dasharath and king of Ayodhya, but the unborn, formless Immortal Ram who is within us all, the same Ram to whom Kabir and Gandhi were devoted. After taking sannyas, Ramdas met Ramana Maharshi and received his grace. The Maharshi blessed him through his glance – 'Be filled with Joy', and he was. A smiling, benign and compassionate expression never left Ramdas thereafter. With the mantra 'Om Shri Ram Jaya Ram Jaya Jaya Ram' ever on his lips and in his heart, he set out literally 'In Quest of God' and wandered for a few years all over India and the Himalayas in a frenzy of God intoxication. He finally returned to his native place, Kanhangad, set up an ashram there ('Anandashram') and started his spiritual teaching. He went on a world tour in 1954, giving discourses to his admirers in several countries. His books, all in English, are In Quest of God, In the Vision of God, God Experience, Swami Ramdas on Himself and Gita Sandesh (Message of the Gita).
Gita Sandesh, first published in 1966 after his demise, is based on the discourses that Swami Ramdas gave; it has eighteen chapters, one on each of the eighteen chapters of the Gita (Ramdas 1976). Ramdas interprets Arjuna's predicament as one of overcoming moha, or emotional attachment to a narrow circle of relatives, family and possessions. Moha arises from identifying 'I' with the body and 'mine' with only those near. The Gita holds forth the goal of liberation from all limitations and of enjoying the bliss inherent in a state of expansive immortal self. This requires overcoming moha, which is the source of bondage. Like Arjuna, everyone faces the conflict between aspiring for the vaster vision and 'the crystallised selfishness of an individualistic view of life' (Ramdas 1976: 2). Ramdas observes further that moha has its own ways of defence against the development of a higher and expanded vision, which are reflected in Arjuna's arguments for unwillingness to fight (Ramdas 1976: 2–3). Krishna's call to Arjuna to fight is not to be interpreted literally as a call to kill relatives, but as one to overcome his narrow attachments and to realise his larger universal purpose. Altruistic service to humanity taught by Krishna is not possible otherwise. Moha throws a person only into darkness, delusion and selfishness, overcoming which is the basic requirement for spiritual progress, according to Ramdas's interpretation of the Gita.
The way to transcend moha, Krishna teaches, lies in discriminating between what is transient and what is permanent, between what is perceived by the senses and what is beyond them, between what is particular or individual and what is universal. One has to understand one's true self as the eternal spirit present in all, and not as the perishable body, and thus get on to a higher level, fixed in the bliss of the Universal Atman. In this state, one does not have to – in fact, should not – desist from being engaged in action in the world. Being in such a state helps one to do work effectively and enjoy it too and, what is more, to judge with fairness what action is right and what is wrong. What one should desist from is any desire to appropriate the fruit of action and the arrogance of being the actor. It means a complete surrender of all actions and their outcomes to God. This is real karma-yoga. Swami Ramdas clarifies that sannyas is not necessary for this or any type of yoga. He emphasises, however, that the Gita advises strict discipline and self-control in one's life, particularly in eating and sex, and also overcoming anger, jealousy and other such mental weaknesses, as an aid to controlling attachment and desire. Spending some time regularly every day in japa (recitation of a holy mantra or even of the syllable OM or AUM in the proper way) and dhyana (meditation) is quite helpful in this task. A person practicing the Gita's yoga, combining detached work with right knowledge and devotion, would always be in perfect poise and peace. A karma-yogi treats all people with equal respect and love and is compassionate even to animals.
Swami Ramdas points to three kinds of doers of action, as explained in the Gita. A satvik doer performs his work without any selfish desire for personal reward, but effectively, efficiently and calmly, without being ruffled by success or failure. A rajasik doer works for a personal reward and is emotionally attached to the outcome, ruffled by success and failure. A tamasik doer is lazy, slow, irregular, deceitful and quarrelsome, making a mess of his work. Only a satvik doer can be a karma-yogi (Ramdas 1976: 102). A karma-yogi is freed from karma only when he transcends egoism completely, that is, the sense of 'I am the doer', and dedicates all credit for action and its outcome to God. This requires combining work with complete devotion and surrender to God. This is the highest stage of sadhana or spiritual seeking, which the Gita has shown.
The central message of Swami Ramdas's interpretation of the Gita is to get rid of moha, that is emotional attachment to a small circle of oneself, one's family and friends, and possessions only. Moha is the source of selfishness, corruption, violence and all evils. Though he teaches this in the context of spiritual seeking, it seems to be equally – probably, more – relevant in social and even political work. Social workers and political leaders have to be karma-yogis to be credible, effective and efficient. Eradicating moha does not mean destroying the spring of love in oneself, but extending it to cover the larger world, making it more general and universal, instead of confining it to a small circle.
## D. V. Gundappa (DVG)
D. V. Gundappa (1887–1975), popularly known as DVG, is a big name in Kannada literature. An eminent philosopher-poet-journalist, equally adept at prose and poetry, he was simply prolific, with fifteen collections of poems, two book-length poems on moral wisdom (Manku Timmana Kagga and Marula Muniyana Kagga, the latter being an extension of the former), a book on the Gita and several other books on political science and journalism, biographies, memoirs and books for children. He founded the Gokhale Institute of Public Affairs in 1945 in Bengaluru. He gave a series of lectures in Kannada on the Gita at this Institute, which were tape-recorded, transcribed and serially published first in a Kannada weekly, Prajamata, between February 1963 and July 1964. They were then put together, checked and edited by DVG himself and published in 1966 as a book, with a preface and an introduction by him. The Kannada book was titled Shrimad-Bhagavad-Gita-Tatparya Athava Jeevana-Dharma-Yoga (Essence of the Bhagavad-Gita or The Dharma of Living, Gundappa 2001). It won the Sahitya Akademi Award in 1967.
DVG's work on the Gita is one of the most thorough-going, well-reasoned, refreshing and at the same time lucid books on the subject in any language. With its apt examples and witty expressions, it is immensely enjoyable to read, and enlightening at the same time. The beauty of its prose makes it difficult to translate into English, but a modest attempt is made here to bring out the essence of this great book, or at least its salient points. This is done in my own way, not necessarily in the same order as in the original.
In his introduction, DVG explains how his approach to the Gita is different. First, while the great acharyas saw the Gita only as a moksha-shastra (the science of liberation), his work sees it as a guide for living. The key to a happy, honourable and stress-free living lies in following the Gita's dharma. We need a source of confidence and courage in the turbulent journey of life, and the Gita provides it. The rigours of life's journey are thus lightened by the Gita. Second, his book steers clear of theological controversies on Advaita, Dvaita and so on and indicates the possibility of harmonising them. Third, DVG also sees complementarity between different paths of sadhana like jnana, karma-yoga and bhakti. Fourth, while the great acharyas relied more on the authority of the Vedas and the Upanishads to support their arguments, DVG relies more on critical reasoning. Fifth, the book is not addressed to the followers of the Vedic religion alone, but is non-sectarian and universal in approach; nor is it addressed to pundits, but to common men and women who are keen to solve the riddles of God, soul, dharma, faith and destiny; it is written in a conversational style (Gundappa 2001: 22–23, 33).
DVG explains that what may seem a lack of coherence in the Gita is due to the fact that it was an informal dialogue between friends, though on a serious theme (Gundappa 2001: 26–27). As to how a serious and long dialogue could take place on a battlefield, DVG observes that wars at that time had certain rules and disciplines to follow, and though the war had been declared earlier, there was still time before its actual commencement. Around the time that the Gita-dialogue was taking place, Yudhishthira (the eldest of the Pandava brothers) was entering the enemy camp unarmed to pay respects to elders and teachers like Bhishma and Drona (he also came out unharmed), and thus there was time enough for a serious dialogue to take place (Gundappa 2001: 28).
As to the question of whether Arjuna was right in his refusal to fight or whether Krishna was right in insisting that Arjuna should, DVG observes that Arjuna was only sentimental, and his grief was more an impulse of the moment than one based on reasoning. DVG insists that even compassion cannot be irrational and has to be based on dharma. The Pandavas and Krishna himself had earlier tried to avoid the war, and the Kauravas had rebuffed every such move by further humiliating them. It was now a question of honour, and any move on the part of Arjuna to withdraw at that juncture was bound to be judged as cowardice. Compassion is good no doubt, but even compassion has to be justified by dharma or justice. One cannot refrain from punishing a murderer or a rapist out of compassion for him. It does not mean that a criminal does not deserve any compassion. Even a heinous murderer should not be subjected to stoning to death or other such undue torture to match his crime. Dharma is based not on momentary impulses, but on long-term considerations of ethics and moral duty. If there is a conflict between compassion and dharma, dharma should prevail, since otherwise the world cannot function in an orderly way (Gundappa 2001: 63–74).
According to the Gita, there is no conflict between dharma and the pursuit of moksha. There can be no moksha without following dharma, says DVG. A primary requirement for following the Gita's dharma is to keep one's mind pure, cleansing it of arrogance, lust, jealousy, emotional attachment, anger and similar other mental weaknesses. One needs also a sense of discrimination between the everlasting and the momentary, and between right and wrong. One has to cultivate self-control, endurance (titiksha) and equipoise amidst ups and downs, and also shraddha, translated approximately as faith. Shraddha does not mean being credulous and giving up critical reasoning, but faith in oneself, in the basic goodness of others and in God. It also means faith in the goodness and joy of living. Life is a manifestation of the energy of God, and one should respect it both in oneself and in others. Without shraddha, one cannot do anything in life; one cannot even travel from one place to another without some faith that one would reach the destination safely and at the expected time. Precautions may be necessary, but one should also have faith (Gundappa 2001: 33–53).
The Gita says, DVG points out, that no person can avoid action. Action is both nature induced and deliberate. Deliberate action binds all persons to the law of karma and the cycle of births and deaths. The way to circumvent the law of karma is to do work without desiring its fruit and with an honest attitude that 'I am not the doer'. DVG clarifies that one cannot escape from moral responsibility for one's action by merely telling oneself hypocritically that 'I am not the doer'. The action should be genuinely selfless and done with utter humility and as an offering (Gundappa 2001: 168, 181). The action also should not be done thoughtlessly without heed to consequences; it should be based on a discrimination between desirable and undesirable action, the former promoting the welfare of the world (i.e. consistent with dharma) and the latter harmful to the world (i.e. against dharma). A helpful criterion, a golden rule, which the Gita provides here, is contained in verse 32 of Chapter 6: 'Judge the pleasure or pain in others by the same standard as you would apply to yourself.' Having done accordingly, one should not worry about the outcome, but leave it to God. Accept the outcome with equanimity, says the Gita. All these together, with all the given conditions or riders, constitute karma-yoga. According to the Gita, DVG points out, it is easier to follow karma-yoga and be effective if one follows swadharma, doing what comes naturally to oneself based on aptitude (Gundappa 2001: 126–51, 332–35).
Chapter 16 of the Gita lists what it considers divine or godlike and demon-like qualities. Among the former are included fearlessness, purity of heart, generosity, self-control, uprightness, non-violence, truthfulness, kindness to all, modesty, forgiveness, cleanliness and such other qualities (verses 1–3). Among the demonical are included ostentation, arrogance, anger and self-conceit (verse 4). Why, DVG asks, did the Gita not talk about human qualities? That is because both these sets of qualities can be found in human beings themselves. He considers life an adventure sport of climbing a greasy pole, constantly striving to climb up from the base demonical qualities to the godlike qualities (Gundappa 2001: 404). Success in this sport has its own rewards, apart from the thrill of adventure. It gives us great peace of mind, fearlessness, moral confidence, a sense of freedom and of ever being in a state of happiness, in spite of all the vicissitudes of life.
DVG points to another distinction made in the Gita, in the seventeenth and eighteenth chapters, between satvika (good, upright), rajasika (pleasure loving, emotional) and tamasika (dull, lethargic) gunas (qualities), which too has great relevance to practical issues. Applying the distinction to voters' attitudes in elections, DVG says that a voter who does not mind the trouble of going to the booth irrespective of sun or rain, takes into account who is the most deserving and votes accordingly is a satvika voter. A fair-weather voter who avoids going if there is hot sun outside, or goes to vote only if he or she gets a free lift, is rajasika. One who is indifferent, forgetful and too lazy to go to vote is the worst type – a tamasika voter (Gundappa 2001: 452). A given person may not exclusively be of one guna, but can be satvika in one respect, rajasika in another and tamasika in yet another. All are combinations of the three in different degrees.
In the concluding chapter, DVG points to four challenges to the interpreters of the Gita in modern times. They are (i) significant loosening of the caste system; (ii) growing complexity in the nature of livelihoods; (iii) women's status, rights and duties; and (iv) a significant questioning tendency (Gundappa 2001: 524). As to the first, DVG says that the 'mixing of castes' (varna-sankara) is inevitable and will continue, though castes may not altogether disappear. In the olden days, the economic system was simple and the caste system was clear and conspicuous, and swadharma could have been taken simply to mean caste duty. Not anymore. It can now be taken only to mean dharma based on one's aptitude, inclination and training. Regarding women's issues, there is hardly anything in the Gita which throws light on them, but DVG seems to have raised them mainly to express his opinion. He welcomes women stepping out of the kitchen and working alongside men to earn an income of their own, provided the vocation is such that it does not compromise the dignity and status of women. However, as to women's right to choose their own husbands, particularly outside the community or caste, DVG does not give a straight answer. He only insists that a marriage is not to be seen only as a union between two individuals, but as a coming together of two families, and the happiness of the couple and the families depends on mutual regard and affection. He also hints that there is no question of the husband being dominant and the wife being just obedient in married life, because the relationship is one of unity, solidarity, cooperation, mutual trust and equality. As to the fourth point, DVG observes that a questioning tendency is not new in Indian culture and religion; otherwise, there would not have been so many philosophies. The Upanishads are strong evidence of the questioning tradition. In the Gita itself, Arjuna was not convinced at one stroke; he kept asking questions.
The questioning attitude poses no threat to the Gita or its following, believes DVG (Gundappa 2001: 552–53). At the same time, faith has its own role and scope, just as critical reasoning has, particularly in matters like acceptance of the idea of Atman and Brahman. Even for a person who has no faith in the idea of Atman or Brahman or moksha, who regards the Gita teaching on spiritual pursuit as irrelevant, it can still be a guide in leading a stress-free, enjoyable and morally honourable life.
## Swami Sivananda
Swami Sivananda (1887–1963) was one of the leading yoga gurus of the twentieth century. A prolific writer in English, he has to his credit some 340 books and pamphlets on yoga and philosophy, including several on the Gita. Born in Tamil Nadu, with Kuppuswami as his earlier name, he studied medicine in Tanjore and began his medical practice in Malaysia in 1913. He became popular as a kind, hardworking and selfless doctor, known particularly for helping poor patients not only with medicine but even with food and money. A spiritual urge took him back to India, leaving his lucrative medical practice behind. He became a wandering mendicant for a while, until he found his guru at Rishikesh and became a sannyasi in 1924. He started an ashram and a free dispensary for the poor there. He began lecturing and writing on spiritual matters in 1929, and founded the Divine Life Society at Rishikesh in 1936 to publish books, conduct seminars and conferences, and disseminate spiritual education. He was also the main inspiration behind Yoga Vedanta Centres worldwide. He observed no caste distinctions and admitted both men and women from all castes and classes to his order. He summarised his teaching in six words: serve, love, give, purify, meditate and realise.
According to Sivananda, the Gita is the 'cream of the Vedas' and the 'essence of the Upanishads' (Sivananda 2013: xi), and it 'alone is sufficient for the purpose of Svadhyaya (scriptural study)' as 'you will find a solution here for all your doubts' (Sivananda 2013: xiii). Its appeal is to all mankind (Sivananda 2013: xiv). There is a yoga here for every type of spiritual seeker: jnana-yoga for a rational or intellectual temperament, karma-yoga for the active and bhakti-yoga for the emotional, and there is harmony between them (Sivananda 2013: xv). Moksha can be attained in this very life by annihilating the ego, by overcoming raga (likes) and dvesha (dislikes), and through selfless active work (Sivananda 2013: xvii).
Sivananda admits that there are conflicting verses in the Gita. One should not, therefore, go by the literal meaning of one verse alone, but relate it to others and get the overall message. For example, in verse 33 of Chapter 3, Krishna says that everyone behaves as per one's nature and asks 'what can restraint do?', but in the very next verse, as Sivananda points out, Krishna also adds that one must nevertheless try not to yield to the senses and must get them under control (Sivananda 2013: xxv). Taking another example, Sivananda points out (Sivananda 2013: xxvi) that Krishna says he dwells in the hearts of all beings and controls them as if they are mounted on a machine (XVIII.61). Yet he also advises that one must raise the self by the self and not allow it to be destroyed (VI.5). In the very advice that one must do one's own natural duty rather than someone else's (XVIII.47), there is an assumption of freedom of will and action. The overall sense is that human beings do have some freedom, but it is not absolute and unconstrained. However, whatever freedom of discrimination and action one has should be exercised, and exercised wisely, with moral responsibility and humility.
What is heartening about the Gita is the series of assurances, one after another, given by the Lord out of kindness and love to spiritual seekers and devotees. Sivananda puts the concerned verses together – at least eight of them – and calls them the Pratijna Gita (Promising Gita) (Sivananda 2009: 11, 12). In one such verse, the Lord assures the seeker that there is no cause for frustration or fear that effort is wasted, for even a little effort goes a long way (II.40). 'No one who does good, will ever perish' (VI.40). 'I am easily attainable to those who remember me all the while' (VIII.14). 'I will take care of devotees who have set their minds on Me' (IX.22). These verses increase one's self-confidence.
In the Gita, moral and spiritual progress go together; there can be no gap between the two. Sivananda says: 'An ethically disciplined, morally purified and spiritually illumined soul is the goal of Gita-ethics. To attain this, the Lord exhorts the Man to spiritualise his entire personality' (Sivananda 2009: 50). He explains further that to do this, the Gita's ethics requires one to eradicate tamas in oneself, wisely control rajas and add to the satvika in all possible ways. A great thing about the Gita's ethics is that it does not demand extreme austerity; it is practical and teaches moderation. Neither a starvation diet nor overindulgence in food is advised; neither sleeping too little nor too much is recommended. Even in spiritual seeking, there is insistence on self-effort, so as to avoid insincerely or lazily leaving everything to God's will (Sivananda 2009: 28–29). Sivananda insists that merely avoiding evil ways is not enough; one should positively cultivate virtue and the divine qualities insisted upon in the Gita (in Chapter 16) (Sivananda 2009: 41).
To teach the values of self-control, self-reliance, selfless service, virtuousness and mental equipoise, Sivananda says that the Gita should be taught as a part of the curriculum in all schools and colleges in India and even the world at large (Sivananda 2013: xxvii). Since the Gita is not the only religious text which teaches these values, students should also be exposed to the highlights of moral teachings from the sacred texts of all religions which are relevant for solving the problems of our times. Avoiding religious parochialism is itself a moral value to be taught, and the Gita's tolerant outlook to differences in religious paths is particularly pertinent. There is no doubt a great necessity to develop students' moral personality and their self-confidence, and the Gita can play an important role in this task also.
## K. M. Munshi
Kulapati Dr K. M. Munshi (1887–1971) was a prolific writer in Gujarati, English and Hindi, an eminent lawyer, an outstanding political leader and an educationist. He is remembered, among other things, for founding Bharatiya Vidya Bhavan, which has now grown into a worldwide institution for promoting Indian art, culture, literature and values. A major project completed by the Bhavan is a monumental eleven-volume venture, The History and Culture of the Indian People, which was a brainchild of Munshi, nourished by his indomitable energy. He was a member of the committee which selected the National Flag and also of the committee which drafted the Constitution of India. He took the initiative in, and led, the project for reconstructing the historical Somanath temple in Gujarat, inaugurated in 1951 by Dr Rajendra Prasad, the then president of India. As a member of Nehru's Cabinet for a brief period during 1952–53, he is known for launching the Vana-mahotsava movement for increasing the forest cover. Owing to differences with Nehru, he left the Indian National Congress in 1959 and joined C. Rajagopalachari's Swatantra Party.
Munshi wrote some twenty-one novels and three plays in Gujarati and Hindi and nine books in English. Among the latter is Bhagavad Gita and Modern Life, published first in 1947. It was revised and enlarged from time to time until 1969 (two years before his demise) and reprinted several times thereafter (Munshi 1988). It is an inspiring, elegantly written and thoughtful book on the Gita.
In the introduction, Munshi discusses various ways in which the Gita is read or studied and how it is best approached. Some recite it expecting a material gain, some for punya, some view it merely as a textbook on morals and some see it as a piece of literature meant for criticism, dissecting and analysing it with no love for it. Such people miss the mystical charm of the Gita and the pure joy of reading and reciting it. It has to be approached with reverence and due humility. He advises that we must 'read the Gita to see the Light which burns within us' (Munshi 1988: xiv).
When you read the Gita again and again, as all scriptures should be read, the words begin to grow in you; the nervous system is stimulated; the constant repetition of some appealing verse transforms our minds and makes our spirit more articulate. Then it is woven into the texture of our mind and spirit. If we feel down-hearted, one of the oft-repeated verses rise to the surface in the shape of a mandate and we find in ourselves a new hope.
(Munshi 1988: xiii–xiv)
Munshi does not belittle the moral significance of the Gita. He says that as we recite the Gita everyday at least for some time, taking its meaning to heart, we become purer, more patient and forgiving, develop more compassion for the poor and the deprived, and become more active in helping the desolate. In brief, we become more noble and strong at the same time (Munshi 1988: xiv). He adds: 'It [the Gita] is not a scripture of the next world, nor of asceticism, nor of inaction. It is an intensely human document; a guide for every human situation. It urges man in the thick of life's battle to shed his limitations' (Munshi 1988: 19).
Dharma, as taught by the Gita, is neither a religion nor a dogma, according to Munshi (Munshi 1988: 20). It is a universal law of becoming, also a law of moral causation, and is all pervading. It is as ineluctable as the law of gravitation. Munshi spells out this law:
If anyone achieves Truth, his work shall bear fruit.
If anyone achieves Non-violence, men shall come to him shedding their hostility.
If anyone achieves Non-stealing, wealth shall come to him.
If anyone achieves Non-waste, he shall obtain the vigour that does not fade.
If anyone achieves Non-possession, he shall know the end and meaning of life.
When this Law is followed, however little, attachment, fear, and wrath begin to give way to truth, beauty, and love.
The Law of Moral Causation is the Law of Freedom.
(Munshi 1988: 21–22)
Yoga is another concept which occurs in the Gita throughout. Munshi defines yoga as 'the one comprehensive process by which man ascends in the scale of life by performing acts which are the expression of a dynamic personality based on the complete co-ordination of all his powers' (Munshi 1988: 39). Based on verses 17 and 18 in Chapter 3 of the Gita, Munshi says that 'to be a yogi is to be oneself' (Munshi 1988: 43). A personality has to be enriched from within by developing inner power. Therefore, he says, a yogi is greater than what he does. 'To "be" is nobler than to "do".' For example, 'there was something greater and nobler in Gandhiji himself than in anything he said or did' (Munshi 1988: 45).
Karma-, jnana- and bhakti-yogas are not substitutes for, or exclusive of, one another. Munshi says, 'Action brings to knowledge its true fulfilment. Knowledge gives the true direction to Action' (Munshi 1988: 81). Bhakti is a divine emotion. He observes that knowledge and action without emotion would amount to 'cold-blooded activities', untouched by love and inspiration. They would serve no purpose. 'Action in search of the self-realisation which the Gita envisages is illumined by Knowledge and inspired by Devotion. It is an offering to be made at the feet of the Lord with love and humility' (Munshi 1988: 81). Munshi clarifies that the knowledge the Gita speaks of is not just worldly knowledge, but knowledge which burns our fetters, frees us from delusion and makes our mind purer and nobler.
An important question in karma-yoga is to decide what work to pursue. Krishna's answer, stresses Munshi, is to follow one's nature and aptitude. Though repression of one's nature is fraught with danger, it can be transmuted through careful training (Munshi 1988: 102). One can begin with the task which circumstances dictate, then find a task to which one is best suited by trial and error and perform it with detachment without aspiring for its fruits. No task, however, is to be considered as inferior or low.
Munshi explains why we should not worry about results but be detached. This is because otherwise anxiety, impatience and restlessness will disturb our attention; steady flow of concentration necessary for achieving results is not possible. Mental energy gets dissipated (Munshi 1988: 106–7, 109). Work, however, needs to be performed seriously as a yoga in pursuit of truth. That is why performing the task as a worship is stressed. It follows that once a task is chosen, the difficulties and pains involved have to be faced cheerfully and with titiksha (endurance). In fact, this is true in facing life in general. Munshi clarifies that one does not have to practice sleeping on a spiked bed to develop endurance, for the Gita advises against such methods which torture oneself. Endurance is a mental attitude of being tough. It has to be developed not by repressing one's mind, making it only more rebellious, but by practice and exercising imagination (Munshi 1988: 114–20).
Munshi declares that 'if Yoga means anything at all, my ordinary life has to be transformed by conscious effort into a life which can ultimately lead me to discover God in me' (Munshi 1988: 226). However, he also says that all conscious striving ceases when one submits to the will of God. Then the path becomes clearer, anxiety vanishes and frustration ends (Munshi 1988: 238). Munshi thus sees in the Gita more than moral discipline. It shows a way out from the oppression of materialism of modern life and shows a path of peace and progressive pursuit of reaching the Divine, without having to abdicate one's responsibility to the world.
## S. Radhakrishnan
Dr Sarvepalli Radhakrishnan (1888–1975) is a modern acharya who, like the earlier ones, translated and wrote commentary on the Prasthanatriya – the Upanishads, the Brahmasutras and the Bhagavad-Gita. He did more. Through several scholarly books, he interpreted Hinduism for the modern times, defending it against unreasonable criticism. He showed that Hinduism is essentially rational, humanistic and ethical, besides being deeply spiritual. For him, Vedanta was 'not a mystical flight into other-worldly experience, but a rational system of thought deserving the name "philosophy"' (Minor 1991-b: 424). He will be remembered much more for this contribution than for the fact that he was India's second President during 1962–67, an uneasy and critical period when India's first prime minister, Jawaharlal Nehru, passed away. Apart from his books on the Prasthanatriya, among his well-known books are the two-volume Indian Philosophy (1923), The Hindu View of Life (1927), An Idealist View of Life (1932) and Eastern Religions and Western Thought (1939). His birthday, September 5, is celebrated every year in India as the Teachers' Day.
Apart from references to the Gita in other works, Radhakrishnan dealt directly with the Gita in two works. The first is a substantive chapter on 'The Theism of the Bhagavadgita' in Volume 1 of his Indian Philosophy, published first in 1923 (Radhakrishnan 1996, Vol. 1: 519–80). The second is a full book, The Bhagavadgita, first published in 1948 (Radhakrishnan 1993). His accurate translation, along with his commentary, an insightful scholarly introduction and apt references to other commentaries and sacred books in the scholarly notes, has made his book a classic and a standard reading on the Gita. In the beginning of the book, he gives a brief account of different commentaries on the Gita, from Shankara to Vallabha, and concludes it by observing that even if seemingly conflicting, the different views about the ultimate reality are held to be complementary in the Hindu tradition (Radhakrishnan 1993: 16–20).
Radhakrishnan observes that for the Gita, the world is the scene of an active struggle between the good and the evil, and an intervening personal God 'pours out his wealth of love in helping man to resist all that makes for error, ugliness and evil' (Radhakrishnan 1993: 25). The Gita's emphasis is on the Supreme as the personal loving God, who resides in the heart of every being (XVIII.61). He stirs our hearts to devotion and grants our prayers (IX.24) (Radhakrishnan 1993: 25). He also leads us to right conduct if we listen to His voice in us. Humans, however, also have freedom of will or choice, which entangles them in the law of karma. The Gita, therefore, suggests ways of overcoming binding karma, through karma-yoga. This is at the individual level. At the social or aggregate level, God intervenes whenever there is a grave crisis or deadlock between forces of good and evil. When felt necessary, He takes birth again and again to renew the work of creation on a higher plane (Radhakrishnan 1993: 33). Radhakrishnan clarifies: 'The avatara is the demonstration of man's spiritual resources and latent divinity. It is not so much the contraction of Divine majesty into the limits of the human frame as the exaltation of human nature to the level of Godhead by its union with the Divine' (Radhakrishnan 1993: 32).
According to Radhakrishnan, the Gita is specially suited to address the problem of reconciliation of mankind, as 'it attempts to reconcile varied and apparently antithetical forms of the religious consciousness and emphasises the root conceptions of religion which... belong to the very flesh of humanity, past, present and future' (Radhakrishnan 1993: 8). It is addressed to 'pilgrims of all sects who seek to tread the inner way to the city of God.' Radhakrishnan explains further that the contribution of the Gita lies in reconciling different currents of thought, the Upanishadic teaching of the transcendent Brahman, the Sankhya dualism, the yoga of meditation, action and devotion, into an 'organic unity', and shows how these different lines of thought converge towards the same end (Radhakrishnan 1993: 14). The reconciliation is achieved in a way that 'sets forth in precise and penetrating words the essential principles of a spiritual [and universal] religion which are not contingent on ill-founded facts, unscientific dogmas or arbitrary fancies' (Radhakrishnan 1993: 11). Radhakrishnan believes, therefore, that the Gita represents no particular sect of Hinduism but Hinduism as a whole, and 'not merely Hinduism, but religion as such in its universality, without limit of time and space' (Radhakrishnan 1993: 12).
Radhakrishnan explains further how the Gita has been creative in achieving a synthesis between different schools of philosophy. For example, there is a dualism between purusha and prakriti in Sankhya philosophy. Prakriti or nature does not have its own consciousness or awareness, and yet its activities are purposive and serve the purpose of gaining freedom of the soul. On the other hand, the individual purushas or souls are merely passive experiencers; they have consciousness but they do not act. This seems unconvincing and inconsistent. The difficulty is solved by the Gita by bringing in Purushottama, the Supreme Soul, who provides the three-way integration between Purusha, Prakriti and Itself (Radhakrishnan 1996, Vol. 1: 528–29). Another example of a creative synthesis is that the Supreme can be viewed as an impersonal immanent Absolute (Brahman) or as a personal God amenable to devotion, or both. Through the Gita, the reflective or contemplative idealism of the Upanishads can be combined with emotional demands of human nature. 'The Gita attempts a spiritual synthesis which could support life and conduct on the basis of the Upanishadic truth, which it carries into the life-blood of the Indian people' (Radhakrishnan 1996: 531). A spiritual pursuit of realising the Brahman need not be dry; it can provide room for devotional love, faith, prayer and service, in terms of the Gita's teaching. Radhakrishnan admits: 'Of course the Gita does not tell us of the way in which the absolute as impersonal non-active spirit becomes the active personal Lord creating and sustaining the universe. The problem is considered to be intellectually insoluble. The mystery clears up only when we rise to the level of intuition' (Radhakrishnan 1996: 539; emphasis added). All problems are not amenable to logical reasoning. Some can be solved only experientially and through intuition. 
As Radhakrishnan rightly observes, 'When devotion is perfected, then the individual and his God become suffused into one spiritual ecstasy, and reveal themselves as aspects of one life. Absolute monism is therefore the completion of the dualism with which the devotional consciousness starts' (Radhakrishnan 1996: 565). In spite of this emphasis on experiencing and intuition, Radhakrishnan uses reasoning too. Referring to Ramanuja's commentary on the Gita, Radhakrishnan suggests that an impersonal immutable Absolute could not have created the Universe and the beings in it. It is only a personal manifested God who could do so. Maya is the mystic creative power of this God, as told in the Gita (VII.25) (Radhakrishnan 1996: 543).
The central purpose of the Gita, according to Radhakrishnan, is to solve the problem of life and stimulate the right conduct (Radhakrishnan 1996: 532). It is thus an ethical treatise, a yoga-shastra. Yoga is 'the discipline by which we can train ourselves to bear the shocks of the world with the central being of our soul untouched.... We can train our will so as to make our whole life one continuous divine service' (Radhakrishnan 1996: 532). There cannot be a void between righteous living and spiritual pursuit. They nourish each other. The search for truth is both an ethical and spiritual endeavour. The Gita thus provides 'an intellectual search for truth as well as an attempt to make the truth dynamic in the soul of man' (Radhakrishnan 1996: 533).
Ethics are relevant only if the world and changes therein are held to be real. According to Radhakrishnan, the Gita repudiates the view that 'the world is untrue, without any fixed basis, devoid of any ruler, brought about by union caused by lust and nothing else' (XVI.8) (Radhakrishnan 1996: 548). Ishwara is the ruler of the world, about whose welfare He is deeply concerned; He incarnates Himself as an avatar to save the world from crisis whenever the need arises. He combines within Himself the immutability of the Absolute, as well as the mutation of becoming for the sake of the world (Radhakrishnan 1996: 546–47).
According to the Gita, points out Radhakrishnan, Ishwara as personal God can be approached and worshipped through any of His aspects or forms, which makes Hinduism tolerant of differences in faiths. It promoted a spiritual culture of allowing that the one truth can have many sides and can be approached in many ways. Consistent with this stand, there is no question of which of the different paths among jnana, karma, bhakti and meditation is superior. The Gita tends to leave it to the choice of the individual (Radhakrishnan 1996: 575) and the individual's aptitude, nature and circumstances. There is no conflict between them, and they can also be combined. From the point of view of the welfare of the world, however, the Gita tends to emphasise action or work more than the others. Radhakrishnan observes: 'the Gita recognises that it is through work that we are brought in relation with the rest of the world. The problem of morality has significance only in the human world' (Radhakrishnan 1996: 566). The Gita does not support the ascetic ethics of abandoning the moral responsibilities to the world (Radhakrishnan 1996: 567).
Though Radhakrishnan says that the Gita asserts the truth of Advaita or non-dualism (Radhakrishnan 1996: 537), he is not a rigid follower of Shankara's commentary. Here and there, there are appreciative references to the views of other schools. Moksha or release in Advaita is the full realisation of the identification of the individual soul with Brahman, and there remains no separate individuality. But Radhakrishnan says that moksha or mukti is 'not an obliteration of individuality for all eternity, but a state of blissful freedom of the soul with a distinct existence in the presence of God' (Radhakrishnan 1996: 577). In this state, the individual soul is in continuous enjoyment of being in God's presence. Quoting Krishna ('My devotees come to Me', VII.23, IX.25, IV.9), Radhakrishnan says that the 'author of the Gita seems to believe in a continuance of conscious individuality even in freedom [mukti]' (Radhakrishnan 1996: 577). Radhakrishnan also refers to the other view of mukti where the freed soul loses itself in the impersonality of Brahman and attains a peace beyond worldly strife, and says that it depends on how one applies the idea of the Absolute Brahman. If one believes firmly that the Absolute reveals Itself as a personal God who enjoys our love and whose love we can enjoy, the individuality would remain (Radhakrishnan 1996: 578).
Radhakrishnan thus interpreted the Gita not only as the basis of Hinduism, but also as a spiritual religion of universal relevance, stressing inclusiveness and the accommodation of different currents of thought, an activism with a social purpose, and the meeting of the demands of ethics as well as the emotional need of persons to be loved by a caring God, without depriving them of a mystical experience of the immanent Divine. He never attempts to assert the superiority of the Gita over the scriptures of other religions. His scholarly and insightful contribution will be long remembered.
## Sri Sri Paramahansa Yogananda
Sri Sri Paramahansa Yogananda (1893–1952) is one of the most eminent names in carrying spiritual wisdom and yoga from India to the West, and he became a celebrity in bringing together the people of India and the United States. His movement covered mainly Americans, and not just the Indian diaspora in the United States. Born in a Bengali family in Gorakhpur as Mukunda Lal Ghosh, he came under the influence of the eminent yoga guru Sri Yukteshwar, whose disciple he became. At the instance of his guru, he went to the United States in 1920, but kept in touch with India. He founded the Yogoda Satsanga Society in Ranchi in India and the Self-Realization Fellowship in Los Angeles in the United States. He preached a specially developed method called kriya-yoga. He insisted, however, that yoga is not a question of mere technique, but something through which all selfish desires are consumed in the fire of love for the Divine. His Autobiography of a Yogi, first published in 1946, turned out to be one of the most popular autobiographies, next only perhaps to Mahatma Gandhi's.
Most of the commentators on the Gita have focused on karma-yoga, jnana-yoga and bhakti-yoga, to the relative neglect of raja-yoga, which is also an important teaching of the Gita. Yogananda, however, gave priority to the training of one's mind, without which the other paths or yogas cannot be meaningful. His two-volume book (with continuity in pagination), bearing the mouthful of a title God Talks with Arjuna: The Bhagavad Gita – Royal Science of God Realization – The Immortal Dialogue between Soul and Spirit – A New Translation and Commentary, was published first in 1995 in Los Angeles, and then in India too (Yogananda 2002). It has been very well received both in the United States and in India and is considered a significant contribution.
Yogananda says that the real background of the timeless message of the Gita is not one battle a long time ago, but a continuous and universal conflict between various opposing forces – good and evil, life and death, knowledge and ignorance, health and disease, changelessness and transitoriness, self-control and temptations, discrimination and non-discrimination, soul and body, and spirit and matter (Yogananda 2002, Vol. 1: 1). The Gita intends to guide human beings in the onerous task of resolving these conflicts in a way that helps them attain their spiritual goal and real and lasting happiness. The Gita teaches that it does not help one to be emotionally mired in these conflicts; one has to raise the level of consciousness to a higher plane of detachment to resolve them. So long as a human being is bogged down in the daily flux and is tossed about by changes, there will be restlessness. This restlessness has to change into calmness (but not depression). Yogananda classifies states of mind into four types: (i) always restless, never calm; (ii) restless part of the time, calm part of the time; (iii) calm most of the time, restless occasionally; (iv) always calm, never restless. He points out how the Gita explains that restlessness eclipses buddhi (discrimination, wisdom), leads to mistakes and further to more restlessness. One has to gradually try to reach the most ideal state of always being calm in spite of all vicissitudes, through a conscious effort (Yogananda 2002, Vol. 1: 34–36).
Yoga helps in this task by raising one's consciousness, fixed on mundane matters, to a higher cosmic level through various steps. The first is practicing guru-given meditation. The seeker would then be able to expand his narrow attachments, confined to family and a small circle of friends, to a larger level of all-inclusive love. The next step is to overcome constant body-consciousness and the identification of the self with the body, and focus on the Divine. Next is to achieve control over breathing and heartbeat, and direct attention and energy to the spinal centres (chakras) as described in the yoga-shastra. If this is achieved, the seeker can reach a state of super-consciousness and experience the Immanent Brahman (Yogananda 2002, Vol. 1: 37–38). Yogananda insists that yoga has to be practiced under the personal guidance of a guru. However, nothing is lost if one stops after achieving the first step, and the Gita assures that even persons practicing some level of spirituality and dharma are saved from fear and will find the strength to go higher (II.40). Not everyone has to seek the super-conscious state, but everyone – including the worldly – has to ponder moving on to a higher plane of moral existence. That is how one can contribute to loka-hita (welfare of the world) preached by the Gita. Gurus like Yogananda think that if one sets aside even a little time regularly for meditation, it helps in spiritual, moral and material progress; each strengthens the others. Daily meditation helps in gaining 'self-control necessary to overcome the bad habits that constitute one's lower nature' (Yogananda 2002, Vol. 1: 178). That is how the Gita includes a chapter on dhyana-yoga as well, along with emphasising karma-, jnana- and bhakti-yogas.
An important teaching of the Gita which Yogananda highlights is about how to face death – one's own or that of dear ones. It has to be faced with equanimity. 'Grief is born of ignorance, attachment, and selfish love, because the ordinary man sees only the present frame of existence' (Yogananda 2002, Vol. 1: 240). The Gita says that for the immortal soul, death of the body is like giving up old clothes for new ones (II.22). It is a continuation of the same life process under which a child becomes an adult, and an adult becomes old (II.13). Yogananda observes: 'In sleep every night an individual discards the consciousness of the tired body and mind and so finds peace; in the greater sleep of death, a man forsakes the disease-torn body and the attachment-corroded mind for a restful state of joy' (Yogananda 2002, Vol. 1: 219). It does not mean that the Gita approves of suicide or of going about killing others on the plea that the soul cannot be killed. This is because the Gita also holds that, as long as one lives according to God's will, one owes it to the world to perform one's duties and to allow others to do so as well. A man has no right to violate the God-ordained circuit of life. Yogananda also clarifies that the Gita does not teach us to be heartless. A yogi, a person of perfect equilibrium, is 'neither hyper-sensitive nor stoically heartless'. During a bereavement caused by the loss of a loved one, he or she understands, feels the loss and even expresses it in a natural way. But the person will not allow the loss to emotionally devastate his or her life. Death of a loved one should teach us that one's emotional attachment should not be confined to a tiny little circle, but expand to cover all mankind (Yogananda 2002, Vol. 1: 240–44).
Like other interpreters, Yogananda also considers that the central message of the Gita is to get actively and selflessly engaged in the world to help others with a sense of detachment. Nevertheless, he adds that 'the worldly man should seek out a meditative man and create his own inviolate inner environment of God-communion.... Only when he has thus strengthened himself can he be of help in uplifting others' (Yogananda 2002, Vol. 2: 1042). The inner spiritual development of a person is necessary, according to him, to be an effective karma-yogi. That is why the Gita provides a holistic and comprehensive course for such a development. Yogananda's interpretation of the Gita has much to offer a mentally troubled world; that is why he was so popular in the West. However, he did not show the same acute sensitivity that Swami Vivekananda had to the social problems of underdeveloped countries, such as poverty and deprivation, about which the Gita also had something useful and inspiring to say, as Vivekananda brought out clearly even in his brief writings on the Gita.
## Vinoba Bhave
Among the few rare persons who would strictly qualify to be called karma-yogis, dedicating their entire life to the selfless service of humanity, there is undoubtedly Vinoba Bhave (1895–1982). He is considered the spiritual successor to Mahatma Gandhi. He took a vow of brahmacharya (celibacy and a highly disciplined life) early in his life and lived up to its ideals to the end. He did not take sannyas, but took the whole world as his family (Vasudhaiva kutumbakam) literally. He coined a slogan, Jai Jagat (Victory to the World!), to reflect his cosmopolitan and inclusive philosophy. He left his college studies in 1916, burnt his certificates and joined Gandhi in his freedom struggle and national reconstruction work. Gandhi placed him in charge of his Wardha Ashram and also sent him to Vaikom in Kerala to oversee the struggle for the entry of Harijans (untouchables) to a temple there. Vinoba was jailed several times by the British. It was during one of these stints of imprisonment that he gave his famous 'Talks on the Gita' to fellow prisoners at Dhule in 1932, published first in Marathi (under the title Gita-pravachane) and later translated into many other Indian languages. The English translation came out in 1958. Vinoba also started the Sarvodaya (uplift of all) movement in rural areas even before independence. After independence, moved by the plight of landless rural labour, he started the Bhoodan movement in 1951, beginning with Andhra Pradesh. He declared Sab bhoomi Gopalaki (all land belongs to God) and coaxed thousands of large landholders to donate some of their lands to the landless. The movement was hardly a great success, since not much land was donated, and even the donated lands were mostly inferior, needing investment of time and money to make them productive. This was not feasible for the poor landless.
The Bhoodan movement spurred him to launch another rural movement called Gram-dan, wherein an entire village would pool all its productive resources together, manage and work the rural economy collectively, and distribute the output or income equitably to all based on their needs. Remarkably, over a thousand villages came under this movement. As part of these movements, he would actually walk from village to village, covering vast stretches of the country, addressing people and moving on. In spite of his tireless work, he attracted a lot of criticism, including personal attacks, for his idealism and idiosyncrasies. His support for the Emergency declared by Indira Gandhi in 1975 attracted strong criticism; he was nicknamed Sarkaari sant (Government's saint) by the media because of this. He had welcomed the Emergency as Anushasan Parva, that is, a time for law and order and discipline, which he felt India needed very much. However, he wanted such discipline not only among the ruled but also among the rulers. (But who would discipline rulers under a dictatorship?) He was reverentially called acharya (spiritual teacher-scholar) by his followers. He was the first recipient of the Ramon Magsaysay Award for Community Leadership in 1958 and was posthumously honoured with the Bharat Ratna by the president of India in 1983.
His Talks on the Gita (Bhave 1964) could be said to be based broadly on Gandhi's ideas, with a great overlap in views. Like Gandhi, Vinoba also emphasised truth and non-violence and selfless service of humanity by setting a personal example. He tried to live according to the principles preached by the Gita. Both were opposed to cow slaughter and meat-eating, as an expression of following the principle of non-violence. Like Gandhi again, Vinoba considered the Gita as his spiritual mother.
Vinoba explains that it was not the fear of battle which made Arjuna reluctant to fight, nor even an objection to violence in principle; it was only his emotional attachment to his own people which made him despondent. That is why Krishna had to teach Arjuna that he had to place dharma or duty above attachment to some chosen people. One may come across similar situations in life, where moral duty requires a certain action, but there is a temptation to avoid it for fear that it may hurt one's near and dear ones. From the point of view of the Gita, the ethics of duty should prevail over attachment to relatives and friends. Vinoba emphasises that according to the Gita, one does not have to wonder what duty to perform; it is swadharma, a duty which comes naturally to oneself. Vinoba does not go deeper into this question, though he returns to it now and then almost throughout his Talks. He does note, however, that swadharma need not be something which is fixed for all of a person's life, but can change based on thinking and experience (Bhave 1964: 8).
According to Vinoba, the right way to worship God is not to decorate His stone or metal images with diamonds and gold, nor to engage in rituals like abhisheka (pouring water or milk on the idols), but to see Him in humanity at large and serve Him by serving people (Bhave 1964: 121), advice which he himself followed in practice. He suggested that a farmer who toils all day to feed many is a greater spiritual seeker than others who do not work but engage themselves in rituals (Bhave 1964: 127). However, he recommended reciting the name of God (japa) even while working as a way to spiritual advancement. This way, all of life can be filled with God. He had tremendous faith in the power of God's name (naam), just like Gandhi.
Nevertheless, Vinoba did not oppose image worship as idolatry. He considered it an aid to spiritual seeking and respected a devotee who sees in the little image she or he adores the beauty, holiness and power of the Supreme who pervades the whole cosmos (Bhave 1964: 162). The Gita may not have endorsed image worship explicitly, but its endorsement is clear in the assurance of the Lord that he would accept all the different ways of worship done with a pure heart. Vinoba considers the debate on saguna (God with attributes) and nirguna (God without attributes) pointless, because God is both (Bhave 1964: 172–74). It is a matter left to the devotee's own convenience and perception. Nevertheless, Vinoba feels that a devotee who sees God everywhere in all humanity and works for the welfare of all is the ideal (Bhave 1964: 176). He says that without nirguna, saguna worship is defective if God is seen only in the image and nowhere else. Similarly, without the love of a saguna devotee for humanity and without active work for the welfare of all, nirguna worship becomes dry and dreary (Bhave 1964: 177–78). Thus, in Vinoba's view, saguna and nirguna are complementary.
The significance of Vinoba's interpretation of the Gita is that, as in Gandhi's case, he actually lived his life as taught by it. He thought like Gandhi, dressed like Gandhi, lived an austere life like Gandhi and spent all his life in the selfless service of others like Gandhi. He is perhaps the last Gandhian of his stature.
## A. C. Bhaktivedanta Swami Prabhupada
Abhay Charanaravinda Bhaktivedanta Swami Prabhupada (1896–1977) is one of the most successful among the swamis who went to the West to spread Hinduism, though a particular brand of it. He left Calcutta for New York at the age of sixty-nine in 1965, and what he achieved thereafter in the last twelve years of his life was monumental both literally and metaphorically. He started his work alone at a storefront in New York, chanting Krishna's name. His mantra to everyone until the end was simply Hare Krishna Hare Krishna, Krishna Krishna Hare Hare, Hare Rama Hare Rama, Rama Rama Hare Hare. His musical chanting and dancing with complete abandon attracted many, including derelicts. Soon he gathered a large following, the bulk of which consisted of drug addicts who had lost their purpose in life and were desperately seeking an alternative. Krishna chanting worked like magic on them and led to a complete transformation of their lives. Soon he attracted many more, including intellectuals and spiritual seekers. Simultaneously, Prabhupada edited and translated with commentary the Srimad-Bhagavatam (30 volumes) and wrote many other books – about sixty, Bhagavad-Gita as It Is being one of them. His book on the Gita was first published in 1968, followed by an expanded edition in 1972. The second edition came posthumously in 1983. Swami Prabhupada travelled widely, and with the help of his numerous followers, he built 108 temples all over the world. They turned out to be centres not only of spreading Krishna consciousness, but also of rendering social service. Their huge project of providing midday meals to school children has attracted many poor children to schooling in India. He started the International Society for Krishna Consciousness (ISKCON) in 1966 in New York, within a year of arriving in the United States, and it now has many branches all over the world.
According to Swami Prabhupada, though many have interpreted the Gita, most of them projected their own views rather than those of the Gita itself, and his purpose was to present the Gita as it really was. Hence the title of his book, Bhagavad-Gita as It Is (Prabhupada 1983). He asserts: 'We must accept Bhagavad-gita without interpretation, without deletion and without our own whimsical participation in the matter' (Prabhupada 1983: 15). After translating a verse, therefore, he does not give a 'commentary', but gives its 'purport'. Interpretations, by whatever name, are necessary because we cannot literally apply every word or sentence to a different time and context. For example, Prabhupada explains verse 19 in Chapter 2 by saying that merely because the soul cannot be killed and Krishna spurs Arjuna to fight, it does not mean that people can be killed wantonly. Prabhupada cites the Vedic injunction – Ma himsyat sarva-bhutani ('Never commit violence on anyone') – in support, and opposes even animal slaughter on this ground (Prabhupada 1983: 100). Obviously, his advice for caution against irresponsible interpretation is well intentioned. Having read the Gita many times, and also many interpretations by genuine sages and scholars, I cannot, however, really say that these interpretations are not based on the Gita and that only Prabhupada's is genuine and authentic. Nevertheless, the claim about the Gita 'as it is' helped market Prabhupada's Gita worldwide, backed by the unbounded energy which Prabhupada infused into ISKCON. It is considered 'the largest selling, most widely used edition of the Gita in the Western world' (as claimed on the back cover). It has been translated into many languages – Arabic, Bengali, Chinese, Dutch, French, German, Gujarati, Hindi, Italian, Japanese, Portuguese, Russian, Spanish, Swedish and many more (Prabhupada 1983: vi).
If you go by the word of the Gita, Prabhupada says, Krishna is the 'Supreme Personality of Godhead'; He is also the Paramatma, the Supreme Soul, or Ishwara, or Bhagavan, who lives in everybody as the controller. He is pure, free from all blemish, unborn and eternal. He is all grace, full of kindness and responds to the prayers of his devotees. He is just, sees all as equals (though having a soft corner for devotees). He is equally accessible to all irrespective of class, caste or gender. He intends to take his devotees to a higher level of spiritual development bringing them closer to Him. He not only controls all the jivas (living beings), but also helps them all in their spiritual evolution. The Supreme, as per the Gita, in Prabhupada's view, is not the impersonal Brahman. He does not deny the existence of an impersonal aspect of the Supreme, but it is in the form of 'shining rays of the Supreme Personality of Godhead' and is incomplete by itself and subordinate or secondary to the latter. Only the Supreme Personality of Godhead is complete. He elaborates: 'Impersonal Brahman realization is the realization of His sat (eternity) feature. Paramatma realization is the sat-chit (eternal knowledge). But realization of the Personality of Godhead, Krishna, is realization of all the transcendental features: sat, chit, and ananda (eternity, knowledge, and bliss) in complete vigraha (form)' (Prabhupada 1983: 14). Prabhupada thus tries to resolve the debate about how the Gita reconciles the two opposite perspectives of the Supreme – the impersonal and personal – and leaves us in no doubt about which is higher and fuller.
Prakriti, or material nature or universe, is controlled and run by the Supreme. It is of two kinds – superior prakriti comprises all life or living beings and inferior prakriti is purely material with no life as such. The gunas or 'qualities' – satvika, rajasika and tamasika – are attributes of living beings in superior prakriti, especially in human beings (Prabhupada 1983: 9). Manifestation of prakriti may be temporary, but it is not false. It is like a cloud, which appears and disappears, but cloud is not false merely because it is transitory. The cycle of manifestation and dissolution of prakriti takes place through phases of time, but the cycle itself is eternal. The material nature is the manifestation of energy of God, and He is above it. Prakriti is a part of Him, but He is not a part of it (Prabhupada 1983: 10, 865). Prabhupada is totally opposed to mayavada, which regards the world as an illusion and denies individualities. He thinks that individualities of living beings or their souls are retained even after moksha. He appreciatively quotes Chaitanya, who had forbidden reading commentaries based on mayavada, as it could spoil a correct understanding of the Gita (Prabhupada 1983: 89–90).
There is, however, a problem in Prabhupada's philosophy. According to him, all beings, including humans, are under the complete control of the Supreme. 'If a living entity says that he is not controlled but that he is free, then he is insane' (Prabhupada 1983: 8). Krishna is the owner of all senses and also their director (Prabhupada 1983: 46–47). However, if God is in complete control, how can the question of karma, or of papa (sin) and punya (merit), arise? How can moral obligations arise if human beings are considered as having no freedom of will and are merely cogs in a machine run by God? Prabhupada is forced to modify his stand when it comes to explaining verse 63 in Chapter 18, where Krishna asks Arjuna to ponder over all that he had said and then decide what to do. Prabhupada admits here that 'God does not interfere with the little independence of the living entity' (Prabhupada 1983: 848).
The sum and substance of Prabhupada's interpretation of the Gita can be stated in his own words: 'Factually we are related to the Supreme Lord in service. The Supreme Lord is the supreme enjoyer, and we living entities are His servitors. We are created for His enjoyment, and if we participate in that eternal enjoyment with the Supreme Personality of Godhead, we become happy. We cannot be happy otherwise' (Prabhupada 1983: 20).
## Swami Ranganathananda
Swami Ranganathananda (1908–2005) became a monk in the Ramakrishna Math order at the age of twenty-five and rose to become the president of the Ramakrishna Math and Mission in 1998. He was a great orator, and it is said that, having listened to his lecture on Islam, Mohammad Ali Jinnah exclaimed, 'Now I know how a true Muslim should be!' The Swami authored more than fifty books, including the famous Eternal Values for a Changing Society (1971). He had a scientific temper and was open to learning from the natural and social sciences. He had a simple way of assessing spiritual progress. He said: 'Are you growing spiritually? Can you love others? Can you feel oneness with others? Have you peace within yourself? And do you radiate it around you? That is called spiritual growth, which is stimulated by meditation inwardly, and by work done in a spirit of service outwardly.'
Apart from referring to the teachings of the Gita in other works, Swami Ranganathananda wrote a special three-volume book on the Gita itself – Universal Message of the Bhagavad Gita: An Exposition of the Gita in the Light of Modern Thought and Modern Needs (2000). The book contains not only a translation of each verse of the Gita, but also a lucid commentary on each of them. Like other interpreters, he explains the different paths of spiritual growth along with moral lessons of the Gita, but the way he does it appeals to modern minds. For example, while explaining verse 18 of Chapter 4, he says:
Work is no work at all. It is a question of agency and attachment. When these two are not there [with agency attributed to God], work ceases to be work, it becomes play, it becomes spontaneous, it becomes natural.... [W]ork comes when there is effort, struggle, tension. When you become thoroughly detached, then all that tension goes away. You are working, but you don't feel you are working.... [T]oday's industrial civilization is teaching that work is a drudgery. Joy must be found outside work.... As soon as Friday evening comes, millions of people are running out for a holiday. These five days were all drudgery.
(Ranganathananda 2000, Vol. 1: 431)
Swami Ranganathananda is certainly not against taking a few holidays to enjoy a quiet change in routine, but not because work is drudgery. Working with detachment does not mean indifference or non-seriousness. It is not loafing. One has to work sincerely and express one's personality through work. It can then be a spiritual experience (Ranganathananda 2000: 432).
Quoting Shankara's commentary, Swami Ranganathananda says that human society needs to move on an even keel, balancing both pravritti (material advancement) and nivritti (spiritual pursuit). Through pravritti, a welfare society has to be established, where no one goes hungry or without education, home and health care. This can be achieved only alongside a value-based life and the spiritual strengthening of every person, ensuring dignity for all. There has to be peace between individuals and social groups, and also peace and self-confidence within each. A healthy society can be achieved with proper co-operation, co-ordination and understanding, and not by each fighting against the other. Progress has to be achieved together on both fronts, material and moral, with everyone included and covering the villages too (Ranganathananda 2000: 26–35).
Swami Ranganathananda's commentary is unique in emphasising the environmental implications of the Gita, which have been ignored by other interpreters. The relevant verses on which his commentary is based are the tenth to the sixteenth in Chapter 3 of the Gita. The Swami says that according to these verses, the whole world – even the whole cosmos – is based on the concept of yajna, with give and take going on everywhere. Everything is related to everything, and there is great interconnectedness. Unknowingly, birds carry seeds from place to place and propagate plant life; unknowingly again, they control pests affecting plants by eating them and rendering them harmless. Wastes are turned into manure. Earthworms toil ceaselessly, upturn the soil and make it nutrient-rich. Nature uses them all and maintains ecological balance. Unfortunately, man in his ignorance and short-sightedness breaks this interconnectedness, violates the yajna of nature and endangers his own and others' lives. The Gita's advice is 'nourish and support each other and reap the highest' (parasparam bhavayantah shreyah paramavapsyatha, III.11). This is as much true for the natural resource environment as for the social world. If you only take from nature and do not give back to nourish it, it amounts to abusing it and breaking the natural ecological cycle that sustains the world. Verse 13 in the same chapter (the third) (Yajna-shishtasinah... atmakaranat) can be interpreted thus in this context: those who eat or use sustainably only what is left as prasada (grace) from nature's yajna are blameless, but those who exploit God-given natural resources unsustainably and greedily for themselves eat only sin. The Gita condemns the unsustainable exploitation of nature (including its use as a waste-sink) and wants the cycle of life and natural regeneration to continue unbroken.
The Gita supports sustainable use of nature in another way too, by exhorting us to curtail our wants and avoid unnecessary luxury and wasteful consumption (Ranganathananda 2000: 265–78).
According to Swami Ranganathananda, '[e]very word in this Gita is meant to make people better, [and] civilization richer and purer, whether it is East or West' (Ranganathananda 2000: 444). He says that the Gita is of course the essence of the Upanishads, but the further essence of this essence is verse 55 in Chapter 11, which he translates as: 'One who does work for Me alone, and has Me for his or her goal, is devoted to me, is free from sensory attachment, and bears no enmity towards any being – he or she attains to Me, O Pandava!' (2000, Vol. 2: 513). In explaining this verse, the Swami clarifies that detachment does not mean apathy or lack of love. Nirvaira (no enmity) is not just a negative concept; it means positive love for all. It is not enough not to hate anyone; one should have love for all, indicates the Swami. To say or feel 'Children are weeping there. I don't care. I have no attachment' is not detachment. Detachment, he explains, is expansion, not contraction, of mind and love, expressed in helpful action. 'No hatred towards anybody; only love for all; with a detached mind.' That is the Gita, says the Swami (Ranganathananda 2000: 514). His interpretation of the Gita is one of the most inspiring, arousing the noblest human emotions and building a society of mutual concern and harmony – not focused just on individual salvation. He showed that individual salvation lies, in fact, in contributing to the building of such a society.
## Eknath Easwaran
Eknath Easwaran (1910–99) has been respected worldwide as one of the most profound and inspiring writers and orators on religion and spirituality. Born in Kerala, he received his higher education at Nagpur University, where he also served as a professor of English. He went to the University of California in the United States as a Fulbright Scholar in 1959, lectured on meditation in the San Francisco Bay Area in 1960 and first met his wife Christine there. He returned to India in 1961, went back to the Bay Area in 1965 and settled in California thereafter. He developed a way of meditation which he called Passage Meditation. Essentially, it is the silent repetition of a memorised inspirational passage from the books of any great religion of the world. He founded the Blue Mountain Center of Meditation and Nilgiri Press in Northern California, which has published most of his books. He wrote many books on Hindu, Buddhist and Christian sacred texts and also on Gandhi and Badshah Khan – known as the Frontier Gandhi in India – whom he called the 'Nonviolent Soldier of Islam'. Many of his talks are available in audio and video formats.
His two works on the Gita have been critically acclaimed, the first being the three-volume The Bhagavad Gita for Daily Living (Easwaran 1997). Its first volume, The End of Sorrow, came out in 1975; the second, Like a Thousand Suns, in 1979; and the third, To Love Is to Know Me, in 1984. The first volume, based mainly on the first six chapters of the Gita, is focused on the individual; it shows how one can find one's own self and transform one's life through meditation and selfless service. An interesting highlight of this volume is its detailed teaching on how to meditate. The second volume, based mainly on the next six chapters, projects the basic and indivisible Unity governing all creation, which has implications for relationships between individuals that can heal the divisions in society. The third volume, based mainly on the last six chapters, deals chiefly with how each one of us can make a difference in the world today and how to find fulfilment through bhakti. The other book on the Gita by Easwaran, Essence of the Bhagavad Gita – A Contemporary Guide to Yoga, Meditation and Indian Philosophy (Easwaran 2012), was edited by his long-time students and close associates according to instructions he gave less than a year before his death, and was published posthumously. It is based on the transcripts of his talks and informal sessions with his close students and is considered the distillation of his teaching on the Gita in his own words. The following account of Easwaran's main contributions to the understanding of the Gita is based mainly on this second book, as it draws not only from the first book but also from his subsequent insights. However, the first book is no less essential for a full understanding of the Gita.
Easwaran regards the Gita as a text not so much on Hinduism as on sanatana dharma, 'which is the bedrock of reality, the eternal principles or changeless values on which life is based, regardless of creed, country, culture, or epoch,' and 'the whole point of sanatana dharma is that religion must be based on personal experience' (Easwaran 2012: 13). According to Easwaran, 'the central message of the Gita is that life is an indivisible whole – a concept that contemporary civilization flouts at every turn'. It is only when we realise this that we can live in harmony with others and with ourselves. He adds: 'The Gita doesn't ask us to take this on faith. It simply offers a frame of reference through which we can look afresh at what we see around us, scrutinize the plans and promises offered by contemporary politics and economics, and judge for ourselves how useful any approach can be that does not begin with the essential unity of life' (Easwaran 2012: 18).
The Gita has to be seen more as 'a practical manual for daily living' than as a text on philosophy or as mere poetry, emphasises Easwaran (1997, Vol. 1: 11). Hence the title The Bhagavad Gita for Daily Living for his major book on the Gita (1997). To be convinced of the practicality of this manual, we need exemplars who lived their lives by it, and there have been many of them in the Hindu spiritual tradition, in unbroken continuity, says Easwaran (Easwaran 2012, Vol. 1: 13). '[T]o grasp the meaning of the Bhagavad Gita, we need look no further than Mahatma Gandhi, who made it a guide for every aspect of daily living' (2012: 8). Gandhi has certainly not been the last in this line, since eminent exemplars like Vinoba Bhave, Swami Chinmayananda and Swami Dayananda have followed him.
Easwaran says that it is his surmise that 'the Gita was originally an Upanishad which has been inserted into the Mahabharata, its first chapter serving as a bridge' between the two (1997, Vol. 1: 14; 2012: 15–16). The colophons at the end of every chapter, calling the Gita an Upanishad, confirm this, he adds. Even accepting the Gita as an integral part of the epic, Easwaran agrees with Gandhi in treating the battlefield background as an allegory. He says that the names themselves support this. 'Dhritarashtra' means 'one who has usurped the throne'. The names of his sons all begin with 'du', which means 'evil'. 'Duryodhana' means a 'dirty fighter'. Krishna tells Arjuna that emotional weaknesses like lust and anger are enemies which must be fought. Easwaran, therefore, agrees with Gandhi in treating the battle as symbolic of a fight between the 'good' and the 'evil', which takes place right within us (2012: 16–17). He points out that the Gita presents this basically as 'a conflict between a lower self and a higher one' (Easwaran 2012: 27). The lower self is the ego or ahamkara (the 'I-maker'), which is a cage of separateness, and identifying oneself with this narrow self is 'the source of insecurity, friction, disrupted relationships, and mounting dissatisfaction' (Easwaran 2012: 27–28). Easwaran says that according to the Gita, 'in every one of us – by virtue of being human – there is an upward surge to evolve, to grow in humanity day by day, and a downward pull to remain engaged in conflict as separate creatures set against the rest of life' (Easwaran 2012: 29). He sums up the message of the Gita: 'gradually we can choose to throw more and more weight behind the pull towards our higher nature and away from the drag of separateness and conditioned behaviour' (Easwaran 2012: 31).
Meditation, according to Easwaran, is an important means of discovering the basic reality or the common ground of being within us and outside, which the Gita recommends. It cannot be realised through our sense organs, which are tuned to the external world. The Upanishadic sages found that when we withdraw our attention from the senses, we can stand apart from the thought process and observe it objectively. 'Awareness then becomes absorbed in the world within' (Easwaran 2012: 38). He explains that 'as concentration deepens, thought merges in one titanic inquiry beyond words: "Who am I?" Finally, this inquiry itself dissolves and the mind remains completely still – yet awareness remains; we are immeasurably more awake than when the mind and senses are active'. It is then that we become aware that we are not separate creatures (Easwaran 2012: 39). Life finds its fulfilment when we know the common divine ground of existence, which is the Supreme Reality. This can be found within our lifetime; one does not have to wait for death (Easwaran 2012: 43–44). There are also practical benefits of this realisation. We can concentrate better, apply our mind more effectively and become better beings. Lower emotions like anger and hatred get dissipated and give way to compassion and goodwill for all. 'When we meditate every morning we are putting an armour for the day's battle against our own impatience, inadequacy, resentment, and hostility' (2012, Vol. 1: 46).
Easwaran cautions that meditating for some time each day and then letting the mind do as it likes for the rest of the day defeats its purpose (1997: 134). Meditation should have an enduring impact on taming the mind and aid self-realisation. Once we realise that we are not separate, egocentric, petty creatures and feel oneness with the real self, it marks the end of sorrow, according to the Gita, says Easwaran. He points to verses 21 and 22 in Chapter 6 of the Gita, which say: 'Having attained that abiding joy, there is no more to desire. You cannot be shaken even by the heaviest burden of sorrow' (1997: 65). Easwaran says that the resulting profound peace in your heart spreads around you, and the highest happiness comes to you (1997: 144). However, the Gita is careful to point out that this need not mark the end of action. 'Full inside, you don't need anything, but you are restless to give, to serve' (1997: 65). You continue to be sensitive, not to your own sorrow, but to the pain and suffering of others, and therefore work to mitigate it selflessly (1997: 66). Seeing the connection between self-realisation, development of compassion and love for all, and action, Easwaran comes up with an inspiring message based on the Gita's teaching: 'To know is to love, and to love is to act' (1997: 119). He gives his translation of verses 54–56 in Chapter 18 in support:
United with Brahman, ever joyful, beyond the reach of desire and sorrow, they have equal regard for every living creature and attain supreme devotion to me. By loving me, they come to know me truly; then they know my glory and enter into my boundless being. All their acts are performed in my service, and through my grace they win eternal life.
(1997: 119–20)
Like other interpreters of the Gita, Easwaran also explains the different paths of spiritual pursuit in his own insightful way. The advantage of emotional detachment from outcomes or results, even while working hard and selflessly for a great cause, is that we can then work without anxiety and with confidence and peace of mind. It does not mean we should be indifferent to the expected consequences of our action; rather, we do not lose our nerve when things go wrong and can take corrective steps calmly. An important requirement of karma-yoga is to avoid work which may harm others (1997: 122–25). Bhakti, which means devotion or love, is a precious capacity and a part of being human, says Easwaran. It is 'forgetting [oneself] completely in the welfare of all' (1997: 125). This implies that bhakti is combined with work and is not a mere sentiment. He cautions against the 'absurd idea that love has anything to do with the body or senses'; it is 'a state of consciousness'. He adds that love is the 'opposite of self-will', through which one tries to manipulate and subordinate (1997: 126). Easwaran admits that Krishna at times seems to prefer one path over the others, which confuses Arjuna (III.2), but advises that, to avoid confusion, the Gita is to be taken as a whole and one should 'follow a way of life in which the three paths are combined' (1997: 127).
According to Easwaran, Homo sapiens represents a stage in evolution, halfway between our biological nature and what we can become. We have the capacity and choice to take up the evolutionary duty of taking our destiny as individuals into our own hands and guiding life to its fullest potential (1997: 206). As an example, he says that when somebody is angry with us and we remain calm instead of retaliating, we break a link with the animal and rise a bit higher on our personal ladder of evolution (1997: 208). The way we reflect the three gunas – sattva, rajas and tamas – explained in the Gita determines how we evolve. He translates verses 11–13 from the fourteenth chapter of the Gita in support:
When sattva predominates, the light of wisdom shines through every gate of the body. When rajas predominates, a person runs about pursuing selfish and greedy ends, driven by restlessness and desire. When tamas is dominant, a person lives in darkness – slothful, confused, and easily infatuated.
(1997: 210)
Sattva helps the evolutionary process, as when we render selfless service or forgive. At the opposite end is tamas, which pushes us down the ladder of evolution, as when one gets a feeling of 'who cares?' or 'not my problem'. Easwaran says, 'we can draw upon rajas to transform tamas, and then channel and harness rajas to transform it into sattva.... sattva is the platform we must reach in order to move beyond the gunas altogether into unitary consciousness' (1997: 213). In our evolutionary struggle, our real self or the higher self is our friend, and the ego is the enemy. Easwaran quotes two key verses from the Gita in this context (VI.5–6), which say 'Raise yourself by yourself, and don't demean yourself'. He says, 'we don't have enemies outside, but we have the fiercest enemies inside if we undermine our will'. The Gita, according to Easwaran, puts the responsibility for our evolutionary rise and emancipation on our own shoulders, through these two verses (1997: 226). He elaborates: 'As long as we try to prop ourselves up with possessions and people, we have no freedom, and the props are guaranteed to fail. Sooner or later we have to learn to rely on the Self alone' (1997: 242). Life is thus a struggle, and Easwaran says that it is significant that the Gita does not end with victory, but with a resolution (by Arjuna) to fight until the war is won.
Easwaran is one of the most readable authors. Both of his books on the Gita are so full of wisdom and so charmingly written that it is tempting to quote every sentence. I have selected, however, a few passages from just one chapter (1997, Vol. 1, Chapter 1), given here, to bring out the flavour of his writing.
One of the best definitions of confusion is doing what is unnecessary and failing to do what is necessary.
(p. 32)
We have been so conditioned to search for happiness in sense-pleasure that defying these urges appears to be a denial of life itself.... As we progress on the spiritual path,... we discover that we have been pursuing agitation instead of joy and accumulation instead of security.
(p. 33)
[I]t is the non-violent person who cannot be frightened; the violent person can always be threatened with greater violence.
(p. 35)
[W]hat lasting joy there is in trying to complete one another rather than compete against one another.
(p. 38)
When we fight others, we are harming everyone; when we fight all that is base and self-willed in us, we are benefiting everyone.
(p. 43)
Unfortunately in our day anger is considered to be part of expressing oneself.... We have anger groups, called by other names, and we have anger seminars, called by other names, in which people agitate one another and send each other out as harmful influences into their homes and society. We have anger books, anger plays, and even films glorifying the angry man.
(p. 44)
## Swami Chinmayananda
Swami Chinmayananda (1916–93) is well known as one of the most inspiring and knowledgeable exponents of the Gita, who contributed to popularising it both in India and in many countries abroad. He was born in Ernakulam, Kerala, as Balakrishna Menon, in an influential and religious family. He joined Lucknow University for higher education, but left it to join the Freedom Movement and was jailed. When down with typhoid and high fever, he was dumped on a roadside in Delhi to avert his death in prison. Fortunately, an Indian Christian lady took him home and nursed him back to recovery. He later completed his master's in English literature with Honours. A firebrand agnostic and socialist in his views then, he chose to become a journalist. To do a story on Swami Sivananda and to 'expose' him if possible, he visited Rishikesh in 1947. The visit and his experience there transformed his life. He stayed on and chose the spiritual path, resolving to become a monk. Swami Sivananda was not in a hurry, and asked Balan (as he was then called) to wait, make up his mind and consult his parents. Having decided, he took sannyas at the age of thirty-three, initiated by Swami Sivananda. He stayed on to continue the study of Vedanta and the practice of intense meditation, until his guru asked him in 1951 to go into the wider world and teach.
Swami Chinmayananda started conducting Gita Jnana Yajnas, which were attended by all classes of people and became very popular. He soon gathered around him many disciples all over India, who started the Chinmaya Mission in 1953. The Swami took interest in initiating a world organisation to protect the interests of Hindus and Hinduism in all countries, and with the support of like-minded people, the Vishwa Hindu Parishad was started in 1964, with him as its first president. In 1965, he set out on a world lecture tour covering thirty-nine cities in eighteen countries. Chinmaya Mission became an international organisation with more than 300 branches worldwide. The Swami laid emphasis on social service too, and the Chinmaya Mission has established seventy-six schools and seven colleges. He also started Chinmaya Mission Hospital in Bengaluru in 1970, which is now a major landmark of the city, catering especially to the medical needs of the poor.
The Swami was not parochial in his views and supported inter-faith dialogue and harmony. He deplored the caste system and attracted disciples from all castes and classes to his movement. He often used to say that his mission was to convert Hindus to Hinduism! He felt that many Hindus had lost their roots, and religion was falsely identified with superficial rules and practices. He wanted to promote a greater sense of social responsibility and duties particularly to the disadvantaged, as per the teachings of the Gita.
The Swami was prolific in his writings, with more than sixty books to his credit, including several especially for children. His two main works on the Gita are The Art of Man Making (Chinmayananda 1978), a compilation of 114 short talks on the Gita, and The Holy Geeta (Chinmayananda 1996), which includes a brief introduction, translation and detailed commentaries on the verses of the Gita. There are two more books on the Gita by the Swami, one a rendering in prose and the other for children. Though only ten of the eighteen chapters of the Gita are covered in the first book (1978), it contains many of the Swami's general insights on the Gita.
Swami Chinmayananda drew attention to the staggering difference between the respective backgrounds of the Upanishads and the Gita. The former were composed in serene ashrams in forests, while the Gita had the background of a tumultuous, noisy, dusty battlefield. And yet the essence of the two is the same, the Swami says. What the Gita did was to transform a teaching which was supposed to be meant for quiet spiritual pursuit in a secluded corner into one relevant to the day-to-day battle of living. It teaches that what is needed to lead a successful, satisfying life in the world without stress – even in the 'marketplace' – is not very different from what one needs for spiritual pursuit. One can, and has to, reconcile both pursuits (Chinmayananda 1978: 21–22). The Swami says that religion is meant to advance perfect living in the world. It is a process which can bring forth an effective person even from a state shattered by despair, as Krishna did successfully in the case of Arjuna through the Gita (Chinmayananda 1978: 18, 23). This reconstruction of personality has to be through dedicated self-effort, which the Gita prompts. 'The spirit of challenging yourself by yourself is the secret of self-improvement and personality unfoldment' (Chinmayananda 1978: 36).
The Swami says that the Gita is specially addressed to the young, who have a lifetime ahead to act and express themselves in the world. The Gita asks the youth not to run away from problems. Krishna tells Arjuna, 'This war has been thrust upon you. Face it as a welcome challenge as befits a soldier.' The young should face challenges cheerfully, shoulder their responsibilities and solve the problems of the society, country and the world (Chinmayananda 1978: 44–45). This has to be done with a steady mind (sthitadhih), without attachment, fear and anger (veeta-raga-bhaya-krodhah) (II.56) (Chinmayananda 1978: 51). The Swami devotes several talks (18–23, 52–70) to describing the 'Man of Perfection' (sthitaprajna) as in the Gita. He thinks that the description of the 'Man of Perfection' is a unique contribution of the Gita. Karma-yoga, which the Gita strongly recommends, can be followed only by turning oneself into a person of such perfection. The Swami explains: 'When any action is undertaken with ego and ego-centric desires – "I" and "I want" attitude – that action leaves its impression as a Vasana in us, prompting a repetition of the same action' (Chinmayananda 1978: 72), which binds a person more and more tightly in karma. A person becomes a criminal with even one murder, while a soldier who may kill many in a legitimate war is not a criminal (Chinmayananda 1978: 72). The secret of eliminating vasana while doing dedicated and selfless work as a duty for the benefit of the world is karma-yoga (Chinmayananda 1978: 73). Avoiding work is no way to eliminate vasana. To live is to work. 'To escape work is to escape "life" and run into "death"; it is suicidal' (Chinmayananda 1978: 77).
The Swami adds that since total detachment is an impossibility, the spiritual seeker first withdraws narrow attachments from worldly things (even while working) and attaches himself or herself devotedly to the Lord. This attachment to the Lord is inclusive of, and expressed as, love for all and enmity for none (1996: 752–53). The Gita thus focuses on the essentials and does not require rituals for spiritual progress, marking an important break from the Vedas. The path shown by the Gita develops the moral and spiritual content of man, while the rituals may not do so. The Swami says, 'The ritualist gets involved in the means, without aspiring for the Real Goal' (1996: 108).
Self-control is a crucial aspect of man-making both for a successful life and for spiritual progress, which is well emphasised by the Gita. The Swami explains, however, that to control is not to forcefully suppress. It is instead 'an inward blossoming, an inner growth and development by which one's earlier fields of enjoyments through the senses, drop out and make room for the perception of a newer field of ampler joys and more satisfying Bliss' (1996: 142). Happiness does not depend on the amount of consumption, the Swami says. According to him, happiness is a quotient: the desires fulfilled divided by the total number of desires entertained by the same individual (or community). The clue to increasing happiness (without cost) lies in reducing the desires entertained! (1996: 154–55).
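The Swami's happiness quotient can be rendered as a simple ratio (this notation is a paraphrase of his idea, not a formula he himself writes out):

\[
\text{Happiness} \;=\; \frac{\text{number of desires fulfilled}}{\text{number of desires entertained}}
\]

Since the denominator is within one's own control, reducing the desires entertained raises the quotient even when the number fulfilled stays the same – which is precisely the Swami's point about increasing happiness 'without cost'.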
The Swami says that 'sin in Hinduism is a mistake of the mind in which it acts contrary to its essential nature of the Self' (1996: 95). An important contribution of the Gita is its reply to Arjuna's question as to what prompts us to sin, sometimes even against our wishes (III.36–37). According to the Gita, the explanation lies in our excessive desire or lust and anger born of rajas, which is our greatest enemy if not properly controlled. 'Any act of sensuousness which the mind pants for in the world of objects... creates necessarily within itself more and more agitations' resulting in sin (1996: 95). The Swami says that Satan or the Devil is not an external person with a tail and horns, but something in our own animal urges. Untamed desire and anger is the cause of sorrow not only for individuals but also for communities and countries, the Swami adds (1978: 113–15). Desire is so powerful that in unguarded moments, it can cloud our wisdom (III.38–39). We need, therefore, to be alert against temptations and keep our sense of discrimination awake all the while. The Swami says, 'Don't sleep at the steering wheel of life's vehicle' (1978: 120).
There is hope for all in the Gita, points out the Swami. There is no question of eternal damnation for anyone. Even the most sinful can cross over all sin by the raft of jnana (IV.36); all karmas are reduced to ashes by the fire of jnana (IV.37). This is reiterated in Chapter 5, verse 17. What is this jnana which is so redeeming? The Swami explains that jnana consists in shedding one's ego completely, cultivating love for everyone in the world, engaging in selfless activities for the benefit of others, overcoming anger and lust, identifying with the Divine Light within oneself and having complete faith in God. Repentance as a requirement is not explicitly stated but is very much implicit; what is required is much more than repentance (1978: 154–60). One who has attained jnana is not only cleansed of all sins and will not repeat them, but also attains spiritual liberation (Brahmanirvanam) (V.24–25) (1978: 193–98). What is involved is more than getting cleansed of sins; the goal is nothing less than the realisation of the Supreme through jnana, selfless work, devotion and meditation, or a combination of these ways, as suggested in the Gita. The Lord assures, however, that one need not worry about a shortfall in this effort, for even a small but genuine step can free one from fear; and with practice and determination, it is possible for all (1978: 236–47).
Another important contribution of the Gita is its positive, soothing and consoling perspective on death. That is why the Gita has been considered the most effective balm on the minds of the bereaved in facing the loss of their dear ones. One can mourn and give vent to the emotion, but not to the extent of allowing the death of a dear one to devastate the life of the bereaved. The key verse in the Gita in this regard (II.22) says that for the immortal soul, death is like throwing out worn-out clothes for new ones; it is a process of renewal. Without death, there can be no change, no renewal. The Swami says that when there is an increase in population beyond the means to sustain it, people get worried. How unimaginably disastrous it would be if death were not there, and all people born since the beginning of creation were to survive. There could in fact have been no births if there were no deaths. Life is the purpose of death. The Gita does not mean, however, that one can indulge in wanton killing, as that would amount to himsa, which has to be avoided by all means. Life cannot be stubbed out whimsically; it has to be respected by all means (1996: 653–54).
Swami Chinmayananda is emphatic that the Gita is explicitly against the traditional caste system based on birth. When it refers to the four varnas, it refers to classes based on the nature of their work (IV.13). Every advanced society has these classes – those doing white-collar jobs, soldiers and the political class ('rulers'); those engaged in business such as trade, agriculture and industry; and lastly the blue-collar workers. India had a fairly advanced economy even in those days and naturally had these classes, but there was social mobility, and the varnas were not based on birth. There are many examples, like Vyasa himself, of people who came from humble origins and rose to the highest stature. The Lord has an illuminating presence in every bosom without distinction of caste, class and gender, according to the Gita, asserts the Swami. The Gita teaches equality (1978: 136–39).
Swami Chinmayananda has an important place in the history of the popularisation of the Gita. His hundreds of lectures on the Gita all over India and abroad were attended by thousands and were highly admired. It was not just that he held his audiences spellbound by his oratory; he also inspired many to take up the study of the Gita seriously with a modern perspective, leaving an enduring impact. He showed that the Gita is a living text, no less relevant as a guide to life than it was in the past. He attracted all sections of people, not just the leisure class but also active youngsters irrespective of caste, creed or gender. He inspired his listeners to be generous and service oriented, making their lives meaningful by being helpful and affectionate to others. The Chinmaya Mission, which he founded and into which he breathed his very life and philosophy, endowing it with his enormous energy, is a huge international organisation today. It has been doing excellent work not only in propagating the teachings of the Gita but also in carrying out valuable social and educational work. Its work has spread not only to major cities but also into the interior of India, showing its broad-based character.
## Swami Dayananda Saraswati
Swami Dayananda Saraswati (1930–2015) is to be distinguished from the founder of the Arya Samaj of the same name. The Swami being presented now is from south India, born as Natarajan in Tamil Nadu on 15 August 1930, sharing his birth date with Sri Aurobindo and coinciding with India's Independence Day too. He started his career as a journalist, was soon attracted by Swami Chinmayananda's lectures and joined the Chinmaya Mission. He was initiated into sannyas by his icon in 1962. After a stint editing the Mission's magazine, Tapovan Prasad, he left for Rishikesh for deeper study of Vedanta and for further sadhana. After three years, he started working on his own, travelling widely and giving discourses. He is known for profundity and deep insight, and he soon attracted many followers. He established four centres for the teaching and dissemination of Vedanta philosophy under the name Arsha-vidya Gurukulams, at Coimbatore, Nagpur and Rishikesh in India, and at Saylorsburg in Pennsylvania in the United States. He believes that Vedanta is a means of knowledge (pramana) rather than a fixed system of thought. This stand enabled him to creatively apply Vedanta to new situations and problems. The Swami was a gifted orator and established good rapport with both small and large audiences with remarkable ease. He travelled widely over the globe, lecturing on Vedanta philosophy. He was a prolific writer with more than sixty books to his credit, all marked by deep analysis as well as profound holistic insights, besides being quite inspiring.
A distinctive contribution of the Swami was his starting the All India Movement for Seva (AIM for Seva) in 2000. The organisation reaches out to underprivileged sections of people, particularly in far-flung areas such as forests and backward regions, by providing them health care and, what is more, education for the children through a chain of well over a hundred free student homes spread across some fourteen states of India. In these homes, children of the poor are housed, fed, clothed and educated with care and affection. The dropout rate of these students is reported to be below 1 per cent, and the rate of passing in Board examinations is 100 per cent. Swami Dayananda wanted to cover every district in India under this project within a few years. It is financed entirely by donations from his admirers and followers. He also started Veda Pathashalas for training in the study and recitation of Veda and Agama mantras, so that this precious heritage is preserved intact. He was thus an ideal karma-yogi cum jnana-yogi, combining dedication and love for both people and the Supreme, consistent with the Gita's teaching.
Swami Dayananda has written at least four books on the Gita, one of them, Bhagavad Gita – Home Study Course (Dayananda 2011, new edition), with a detailed introduction, explanation and commentaries on the verses. He follows Shankara's bhashya closely in this book, referring to it often and adding his own commentary on it, though he does not hesitate to depart from Shankara where necessary. This book is the longest (with the main texts alone spread over nine volumes containing more than 1.2 million words in 3,974 pages!) and one of the most penetrating commentaries on the Gita so far, yet enjoyable to read, logically leading from one point to another, with stories and anecdotes thrown in between. His other books on the Gita are compact – The Teaching of the Bhagavad Gita (1989, first edition), Value of Values – a treatise on ethics based mainly on the Gita (2007-a, revised edition) and Srimad Bhagavad Gita (transliteration and translation in English along with the original, 2007-b, first edition). Though the Swami is an Advaitin, there is much in his writings which may be acceptable to other schools of philosophy too.
The Swami provides a remarkable summary of the Gita's contents in the blurb for his nine volumes on the Gita (2011). It says: 'The Bhagavadgita gives a view and a way of life. The view is that the self is free from any blemish that inhibits self-approval. In reality, the self is the whole. While unfolding this truth, in seventeen chapters [second to the last], it also helps one with a way of life, making one a mature person in terms of self-management, to receive this vision – I am the whole.' This is a charming way of succinctly explaining that the Gita is both a brahma-vidya ('a view') and a yoga-shastra ('a way of life'). The Swami explains further that brahma-vidya means the knowledge of 'what is'. It answers questions like: 'What is Brahman? What is Ishwara, the Lord? What is the reality of the world, jagat? What is the nature of the individual, jiva? What is the truth of oneself, atma? What is the relationship between the jiva, the jagat and Ishwara?' (Dayananda 2011, Vol. 1: 45). The Swami's interpretation of the Gita's answers to these questions is in terms of Shankara's Advaita philosophy, which is rigorously argued throughout the nine volumes, guessing what objections the opponents would raise and replying to these objections one by one. Even those who may not accept Advaita philosophy in its full measure can enjoy the intellectual treat of the rigour of Indian philosophical debates from these nine volumes, particularly the last.
Contrary to the popular notion that there is only one way in which the Ultimate reality is conceptualised in Advaita, the Swami presents two different standpoints (Dayananda 2011, Vol. 7: 1–4). One is to visualise It as nirguna (attributeless) and chaitanya (pure consciousness). Another is to visualise the same Brahman as wielding the creative power of maya, and as the Creator (srishti-karta, jagatkarana), who can also be a saguna personal God, who can be prayed to and worshipped. The Gita permits both these conceptualisations, and there is no question of one being tenable and the other untenable.
About the question of how to gain Brahma-vidya, the Swami says that karma-yoga provides the answer (Dayananda 2011, Vol. 1: 45). He clarifies, however, that karma-yoga by itself does not directly lead to moksha or the release; karma-yoga qualifies and helps one to acquire the knowledge (jnana) of Brahman through purifying the mind and preparing it, and it is this jnana alone which leads to moksha.
The Swami distinguishes the Vedantic concept of moksha from the Christian concept of salvation. According to the latter, human beings are born in sin, and they need to be saved from sin, and this saving is salvation. In Vedanta, on the other hand, the self, the real one, is Atman, which is chit or pure consciousness (not the 'I-maker' – ahamkara – of the body–mind–senses complex), and is blemish-free and pure. The self is by its essential nature sat (existential, absolute Truth), chit (pure consciousness) and ananda (blissful and compassionate). It is beyond gunas (attributes). Sat, chit and ananda are not attributes but its svabhava, its very nature. Now, this is also the nature of Brahman, and the basic teaching of Advaita is that Atman is the same as Brahman – Tat tvam asi (That thou art). Moksha is just realising that one's real self is this pure self, which/who is also absolutely free and eternal, and which is also Brahman. Moksha literally means release or freedom, and it is freedom from bondage to the mundane world due to avidya. Avidya is ignorance about the true nature of one's own real self. Moksha can come in one's lifetime itself, and not necessarily after death. What is more, a person who has gained it can carry on day-to-day activities of pursuing other purusharthas – dharma, artha and kama (Dayananda 2011, Vol. 1: 17–18). He or she can continue to practise karma-yoga as before. But there would be a difference between a jnani (one who has gained the knowledge of oneness with Brahman) and other ordinary humans. A jnani works not only with detachment and a pure mind, but also with the conviction that he or she is not the doer and with a feeling of complete freedom and fearlessness, knowing that his or her real self, the Atman, is no less than Brahman. Since the fruits of karma do not bind such persons, there is no rebirth for them.
The Swami emphasises that the Upanishadic mahavakya (great saying) does not say that you will become (bhavishyasi) Tat (That) (Brahman), but you are (asi) already That (Dayananda 2011, Vol. 6: 1). One has to just know or realise this. Once this realisation takes place deep within oneself, all sorrows and anxieties are removed, and all sins or karmas are gone. Moksha is freedom from being a wanting person, from fear and insecurity, because Atman is free of these inadequacies. Once gained, moksha is permanent, there being no question of forgetting the self-realisation.
The Swami raises a question here (on behalf of an imagined opponent) whether the presumption of two selves here is tenable – one who seeks and the other who is sought (Dayananda 2011, Vol. 6: 94). The former is the jiva, the confused self, which has not yet realised its oneness with the Brahman, and the latter is Atman, which is the Brahman itself. The Swami answers by saying that as long as there is confusion or ignorance about the identity of the jiva with the Atman/Brahman, the duality persists, and there will be a difference between the seeker and the sought. The seeking self is a part of the creation, the samsara. The creation is a manifestation of the Brahman taking names (nama) and forms (rupa), which are not eternal and independent, but dependent on Ishwara, the Lord who possesses the creative power, who is the Brahman itself. The Swami gives the analogy of clay (analogous to Brahman) and clay pot (analogous to the world of names and forms). He observes:
... what we call an existent thing, like a pot, is neither nonexistent nor existent. It is not independently existent because it has no existence apart from the clay, nor can we dismiss it as totally non-existent because it has a functional reality. It is something in between, which we call mithya. This is the status of the whole creation. What is independently existent is real, satya, and what is dependent on it is called mithya.
(Dayananda 2011, Vol. 9: 224)
He clarifies further that the only thing that does not depend on anything else is Atman or Brahman. The pot cannot be taken as false; it is not non-existent. But it is not a fundamental reality as compared with clay. Maya then is what makes clay appear as pot; maya is nothing but the creative power of the Brahman. However, unlike the clay involved, Brahman does not undergo any transformation or change. According to Advaitins, the creation is an appearance superimposed on the Brahman, as when a rope is mistaken in dim light as a snake. The creation being the result of maya is mithya, which means relatively real/unreal, because it is transitory and dependent, and not fundamentally real or eternal and independent. Only Brahman is fundamentally real and eternal. And so is Atman, which is the same as the Brahman. While Atman is real, the mundane self which identifies itself with the body–mind–senses complex is only relatively real. It depends on the Atman, for the mundane self is nothing if it does not have consciousness.
There is a problem, however! One may reconcile oneself to the idea of the individual self, being in the nature of pure consciousness, being akarta, but not to the idea of the Creator being akarta. The Swami clearly says that Ishwara is both the efficient and the material cause of creation. In other words, He created the universe out of His supreme intelligence and power, which is also His material manifestation. And yet He is said to be akarta (Dayananda 2011, Vol. 9: 178). Moreover, if Ishwara is akarta, why should devotees pray to Him for help? But the Lord in the Gita is also presented as a personal deity to whom devotees offer worship and the fruits of their works, which He lovingly accepts, reaching out to them. The Brahman in Advaita is totally indifferent. The Swami argues, however, following Shankara, that from the point of view of Brahman, the Sat Chit Ananda, there is no creation; it is all pure chaitanya or energy ultimately. But names and forms, which constitute creation, cannot be completely false, which the Swami himself admits.
To understand the viewpoint of Shankara's Advaita, which the Swami upholds, a distinction made by Shankara between two types of truths or satyas – Adhyatmika (paramarthika) and vyavaharika – is relevant. The former is the fundamental or spiritual truth, and the latter is the truth of the mundane world under the veil of maya. The latter cannot be dismissed as pure illusion or ignorance or falsehood; otherwise several problems arise. For example, if Atman is akarta (non-doer), for whom is the injunction of the Vedas to do certain rituals meant? Who has to practise karma-yoga? Who has to follow the ethical values commended by the Gita itself? The Swami is well aware of such questions and in fact raises some of them himself as possible objections by opponents in the debate (Dayananda 2011, Vol. 7: 261–344 and Vol. 9: 399–553). If the world is mithya, why was the Swami himself engaged so much in social work to uplift thousands of poor and deprived children in neglected interiors all over India with total seriousness? Moreover, who is doing all this valuable, well-thought-out and organised work, if the Swami is akarta, his associates are akartas and even the Great Brahman Himself is akarta? Of course, akarta in the Gita is also one who is engaged but has no attitude or feeling of doer-ship. In an abstract, transcendental or paramarthika reality wherein all dualities are resolved and there is One Absolute, there is neither doer nor non-doer. The answer to all the earlier questions lies basically in acknowledging that even vyavaharika satya has its own validity, its own meaning and its own values, and it has also to be taken seriously. Without this, an equally important part of the Gita, yoga-shastra, will have no meaning, as it is meant for the people in the world. If the world and the people in it are all false, all unreal, where is the need to teach them anything?
Usually, it is believed that the type of path chosen for spiritual striving or sadhana depends on one's conception of Ishwara. If Ishwara is taken as a personal deity, the path of bhakti is taken along with karma-yoga, including rituals associated with worship. If, on the other hand, Ishwara is taken as the ultimate reality in terms of an impersonal Brahman, the path of jnana and meditation is taken, with much less emphasis on bhakti and karma. Swami Dayananda Saraswati does not accept such a notion of sadhana. According to him, though moksha is gained only through jnana, it does not come without preparing the mind for it through conquering our passions, attachments, hatred, jealousy and the like. A purification of mind on these lines is possible only through karma-yoga, which means doing all that one ought to do, without aspiring for the fruit of one's work for oneself, remembering that the giver of the fruit of work is only Ishwara and surrendering to His will. Having desires is no problem, the Swami clarifies, but having desires that bind you, with which you are obsessed, is a problem. The attitude to the desired outcome of work should be: 'If you get it, you will enjoy it; but if you do not get it, you are not going to be unhappy' (Dayananda 2011, Vol. 9: 329). Further, the work should be done without any ahamkara, that is, without any notion of 'I am doing this'. Karma-yoga cannot be a yoga without such an attitude, and without absolute faith in and devotion to Ishwara. Choosing to do only what one enjoys doing, or only what is very easy to do, instead of doing what ought to be done, is not the spirit of karma-yoga, though it does not mean that one should not enjoy what one is doing. Compulsions of situations or circumstances may often determine what one ought to do in that context. The Swami explains: 'by doing my own karma, svakarma, what is appropriate at a given time and place, I am worshipping Ishwara' (Dayananda 2011, Vol. 9: 185).
The work has to be done with devotion, thoughtfulness and detachment. Meditation or contemplation on the Divine or on the Atman helps, but without abandoning karma-yoga. The Swami explains the dilemma of choice which Arjuna faced between abandoning work (karma-sannyasa, as he thought) and karma-yoga; the Lord tells him that doing work by surrendering its fruit to the will of God itself amounts to karma-sannyasa as well as karma-yoga, and that one cannot desist from action. Can an atheist be a karma-yogi? The Swami says that 'an atheist can be an ethical person, but not necessarily a yogi, because karma-yoga means recognising Ishwara' (Dayananda 2011, Vol. 9: 187). Thus, what the Swami interprets as the message of the Gita for spiritual striving is a combination of the paths dominated by karma-yoga. He clarifies, however, that it is not tenable to say that karma is the means of moksha, because moksha, being nitya (eternal), is not produced as such. The knowledge of the nitya atma alone is moksha, and karma only facilitates this knowledge (Dayananda 2011, Vol. 9: 398).
There is a verse in the Gita, verse 66 of the last (eighteenth) chapter, in which the Lord tells Arjuna something which, literally translated, means: 'Giving up all dharmas, take refuge in me alone. I will release you from all sins, do not grieve'. This verse is liable to be misinterpreted. Can one give up all moral duties (dharmas), just take refuge in God, and be released from all sins and derelictions of duties? Swami Dayananda Saraswati says that by 'dharmas', the Lord meant karmas, and the karmas are given up – not by abandoning work or duties – but by leaving the outcome to the Supreme, and not entertaining any notion of self-agency or kartritva (Dayananda 2011, Vol. 9: 370–71). The Swami raises another question about this verse. By asking Arjuna to 'take refuge in me alone', is Lord Krishna showing himself as a jealous God? 'No,' replies the Swami. 'When Bhagavan says, "Me", we are to understand that as the one who is of the nature of all' (Dayananda 2011, Vol. 9: 373), the One who is all-pervading and all-inclusive.
It is impossible to do justice to all the writings of the Swami, even on the Gita alone. What is attempted earlier is to convey his main ideas briefly and to give a flavour of his valuable contribution to the understanding of the Gita. I conclude this summary of his contribution by pointing out how, according to the Swami, Advaita can make a difference to the psyche of a person: 'Since the basic problem is one of self non-acceptance [non-acceptance of the Self], acceptance is possible only when a person discovers the self to be free from any lack, in other words, complete. And the self happens to be complete. Discovering this fact releases the individual from his erroneous sense of imperfection' (Dayananda 2011, Vol. 8: 156). He believed in the equal dignity of all, and the right of all to dignified treatment. He put this into practice through his All India Movement for Seva, opening student homes and schools for the children of the weak and the neglected, assuring them of a dignified life.
## Notes
1 For a list of DVG's works, and books on DVG, see Gundappa (2001: 638–39).
2 There is another more recent book, Lokayatre, in Kannada by G. S. Amur (2013), but on the Mahabharata as a whole, which brings together the moral wisdom for day-to-day living as in the numerous dialogues one finds in the epic like the Gita.
3 See 'Why ethics?' in Nadkarni (2014: 10–15).
4 The threefold classification of qualities or gunas can be similarly applied to several other things. For a tabular representation, see Nadkarni (2013: 59–60).
5 For more details of Swami Sivananda's life and work, see Nadkarni (2013: 277–78).
6 The concerned verses are: II.40; IV.36; VI.40; IX.22, 31; XII.7; XVIII.65, 66.
7 For details of his life and work, see en.wikipedia.org/wiki/Kanaiyalal_Maneklal_Munshi.
8 This account about Acharya Vinoba Bhave here is based mainly on en.wikipedia.org/wiki/Vinoba_Bhave. It is given in some detail, not only because he was an important follower of Gandhi and known for his work for the rural poor, but also because his 'Talks on the Gita' is among the most popular commentaries available in most of the Indian languages.
9 This account is based on Prabhupada (1983: 867–68) and Nadkarni (2013: 274–75). Also see prabhupada.krishna.com.
10 See en.wikipedia.org/wiki/Swami_Ranganathananda; downloaded on 22 April 2015.
11 For further details, see the opening page about the author in Easwaran (1997, any of the three volumes), and en.wikipedia.org/wiki/Eknath_Easwaran (downloaded on 24 April 2015).
12 For more details, see Patchen (2003), Nadkarni (2013) (279–81) and en.wikipedia.org/wiki/chinmayananda_saraswati (downloaded on 24 April 2015).
13 This account is based on Nadkarni (2013: 281–82).
14 For example, in the verse Karmanyevadhikaraste (II.47), while Shankara interprets karma narrowly as Vedic rituals, Dayananda takes the word to mean any work (as other modern interpreters have also done) (Dayananda 2011, Vol. 2: 238). Again, in verse II.49, while Shankara interprets kripanah as dinah or 'helpless weaklings', Dayananda takes the word to mean 'misers' who amass money but have no heart to spend it (Dayananda 2011, Vol. 2: 288–89). The latter makes better sense in the context.
# 6 Philosophy of the Gita
## The Gita and the pursuit of happiness
Many interpretations of the Gita have been reviewed above. What do they all add up to? What does the text itself say? What follows is not an attempt to present yet another, different interpretation, but an attempt to arrive at my own understanding of the Gita's philosophy with openness, without sentimental attachment to any particular established school of philosophy (following the Gita's advice to be detached in this respect too!). Such an intellectual detachment can also impart some freshness.
The whole purpose of the Gita can be said to be to show how to obtain lasting happiness, after basic needs are met. If basic needs of food, shelter, clothing, health and some education are not met, there is no question of obtaining higher happiness. Though the main concern of the Gita is with higher happiness beyond the satisfaction of basic needs, it is not indifferent to people's need for welfare or well-being. It explicitly advises that one should have adequate food and sleep, not eat too little (nor too much), and it calls upon all those who have enough to eat and have a surplus, to rise to meet the basic needs of others through its emphasis on loka-sangraha (III.20, 25) or people's welfare. Since it says there is divinity in all beings, it emphasises the value of loving respect for all and selfless service. This is the philosophy of the Gita expressed in the concepts of karma-yoga and loka-sangraha. Gandhi was inspired by this philosophy and developed his concept of trusteeship. To follow this teaching is to help not only others but also one's own self in spiritual advancement. However, the main focus of the Gita is on guiding people to attain higher levels of happiness well beyond animal existence, without detriment to satisfying basic needs. Even after the basic needs are met, relentless pursuit of further happiness through sensual pleasures can be endless and frustrating (apart from being environmentally unsustainable). The Gita points to better and more satisfying alternatives of real and higher happiness.
This section on the Gita's approach to human pursuit of happiness not only introduces but also integrates the rest of the next three sections in this chapter. The themes of the following sections, namely, ethics, the concept of God and the world, and sadhana, are of interest to us only because of our pursuit of happiness. Indian culture and literature recognise that 'all beings act with the motive of attaining happiness and getting rid of unhappiness' (sukham me syat dukkham me ma bhut iti loka-pravrittih, quoted in Banavathy and Choudry 2014: 146, fn. 12). The Gita is read by many people as it is believed to be a guide to happiness both in this world and beyond. According to the second verse in the Gita-Mahatmyam ('Greatness of the Gita'), a person burdened with karmic baggage would feel lightened, liberated and happy through the study and practice of the Gita's teaching. How?
First, in the pursuit of happiness, we need the guidance of ethics, which the Gita provides. Acting in an immoral way merely to get some momentary pleasure cannot really give happiness, because such a person will be burdened with a bad conscience, complications in personal relationships, and possibly even legal consequences, apart from adding to one's karmic junk, which is not easy to dispose of. Suffering takes over the moment goodness departs. On the other hand, goodness imparts happiness (Sattvam sukhe sanjayati, XIV.9). A person burdened with a guilty feeling cannot really be considered happy. Happiness should therefore be consistent with ethics. Second, the Gita recognises several types and levels of happiness, and tries to take us from lower levels of momentary happiness to higher levels of more enduring and lasting happiness. This is what leads to God and sadhana. The Gita is not against happiness, not even against mundane happiness, but it does not want us to stop at or confine ourselves within the narrow sphere of sensual pleasures; it wants us to rise above them. We are certainly welcome to enjoy the beauty of the rising and setting Sun against the backdrop of birdsongs, the comforting company of our loved ones and the soothing and uplifting tunes of music. In addition to having many such small and harmless but invigorating pleasures, which indeed make our life happy, we also have to think how to make our life more worthwhile and meaningful on the whole. The teaching of the Gita can provide valuable guidance in this task.
Unlike sensual pleasures, which may be accessible only to the rich, one does not need to be rich to pursue higher levels of happiness, once the basic needs are met. This is because the higher happiness which the Gita points to is caused not by external stimulus but by internal contentment. This is emphasised in several verses. For example, verse V.21 says that the Yogi realises the joy within his own self, not obsessed with external objects. This requires that passions for external objects are controlled (V.24). Such a person becomes prashantatma (peaceful at heart) and vigatabhih (fearless), finds the inner light easily and realises absolute bliss (VI.14). A real yogi is always in a state of happiness, without any apparent external reason to make him or her happy. This does not mean that such persons are insensitive to suffering, whether their own or that of others. But such suffering or sorrow, instead of depressing or destroying them, makes them rise to the occasion and do their duty to help as karma-yoga. It ends neither their composure nor their selfless activism; the composure is combined with activism. It is obvious from this that, from the point of view of the Gita, happiness is not a state of constant excitement. By its very nature, excitement stimulated by external objects is bound to be momentary, and if one is after constant excitement, one can land in depression. The Gita, on the contrary, points to stable, sustainable or lasting happiness, which comes from controlling desires for external objects and finding the inner light. Like the Buddha, Krishna also teaches in the Gita that the sure way of finding lasting happiness is to control passions or desires. The path of karma-yoga is itself based on this precept, since its activism is without desire for personal gain.
It follows that happiness cannot be captured by one word. The Gita is fully aware of this and uses many words which have a bearing on it, ranging all the way from bhoga up to nirvana or brahmi-sthiti. It is clear that these words are not synonyms; the Gita's approach to happiness is nuanced and sophisticated. It is worth having a look at the variety of these words and noting where they appear in the Gita. They are listed here, arranged approximately – but not strictly – in the ascending order of their spiritual or moral value from the Gita's perspective:
* Bhoga – Sensual enjoyment (I.32; II.5, 43, 44; III.12; and V.22).
* Priti – Pleasure (I.36).
* Harsha – Cheer, rejoice (I.12; XII.15; XVIII.76–77).
* Abhinanda – Rejoice (II.57).
* Tripti/Tripta – Satisfaction/Satisfied (III.17).
* Tushti/Tushta – Cheerful satisfaction/cheerfully satisfied, contentment (II.55; VI.20; X.5).
* Santushti/Santushta – Well satisfied, quite content (III.17; XII.14, 19).
* Shubham – Good (II.57).
* Shreya – Good, beneficial, morally desirable (II.7; III.11, 35).
* Hita – Well-being, welfare (as in Bhuta-hita or loka-hita) – (V.25; X.1; XII.4).
* Utsaha – Enthusiasm (XVIII.26).
* Yogakshema – Meeting the needs, providing security (IX.22).
* Arama – Relaxation (V.24).
* Prasanna-chetas – Tranquil mind (II.65).
* Prashanta – Peaceful at heart, calm and composed (VI.14).
* Sukham – Pleasure/Happiness (I.32; II.15, 38, 56, 66; IV.40; V.3, 13, 21; VI.7, 21, 27, 28, 32; XII.13, 18; XIII.6, 20; XIV.6, 9, 24; XVIII.36–39).
* Vigatabhih – Fearless (VI.14).
* Prasada – Tranquillity, coolness, composure, good temper, well-being (II.64–65).
* Siddhi – Perfection (III.4).
* Shanti – Peace (II.70–71; IV.39; V.12, 29; VI.15).
* Shrirvijaya – A combination of success, prosperity, welfare and moral perfection (XVIII.78).
* Nirvana/Brahmi-sthiti/Brahma-nirvana/Moksha – A state of perfect or highest and eternal happiness; union with the Divine; release from the cycle of births and deaths (II.72; V.25–26; VI.15).
Most of these words appear in the context of individual or personal happiness, but words like hita, yogakshema, shanti, sukham and shrirvijaya are applicable both at the personal and at the community/country level. The Gita is concerned with happiness both at the personal and at the community level, but the former gets greater attention. Sukham is a general term, denoting varieties of happiness, and is used most often. Sometimes the Gita specifies what type of sukha is meant in a particular context. It distinguishes between three kinds of sukhas or happiness in three verses (XVIII.37–39): sattvika, rajasika and tamasika, in diminishing order of moral value. The first, sattvika, is unpleasant in the beginning, involving hardships and sacrifices, but is like nectar at the end. Though the Gita specifically refers to the happiness arising from realisation of the self to illustrate this kind of happiness, we know many examples at a mundane level: a farmer, after back-breaking work and investment on his farm, getting a good harvest and a good return; a scientist, after painstaking and long research, coming out with an important discovery or a research paper; a student putting in long hours of study with concentration, getting a good rank; and so on. The second type of happiness which the Gita mentions here, rajasika, comes mostly from contact with the senses, and seems like nectar in the beginning but results in sorrow at the end. A student who spends all the time in fun and frolic while ignoring studies is an apt example of this type. The third type, tamasika, is illusory happiness, for there is no happiness here either at the beginning or at the end. It is based on delusion and ignorance. The Gita itself mentions examples of this type: laziness, over-sleeping and mis-comprehension or miscalculation. Complacence and non-seriousness in work can also be considered tamasika.
The main point of this classification is the stress on taking a long-term view of things, and not to be tempted only by short-lived momentary pleasures as they cannot provide happiness in the long run. The Gita cautions particularly against reckless indulgence in sensual desires, as they are insatiable and lead to frustration, anger and delusion (XVI.10; II.62).
The important Upanishadic word for highest happiness or bliss, ananda, does not figure in the earlier list, as it is not used in the Gita. The word is used, however, as part of prayer both in the Gita-Dhyanam and in the Gita-Mahatmyam as an epithet of Krishna, the Divine. In the Upanishads, the essential nature of Atman is described as one of ananda or blissfulness, along with two other epithets of Atman – sat (existence, truth) and chit (consciousness). Its essential nature of blissfulness is masked or eclipsed because of maya or involvement in samsara. The aim set before everyone is to realise one's essential blissful nature, for which the Gita shows the way through sadhana. The Gita does use the concept underlying ananda, employing words for it such as sukham atyantikam (highest/infinite happiness, VI.21) or brahmi-sthiti (II.72). The Gita observes that such happiness, which is beyond sense perception, can be gained while living, and after gaining it, no sorrow disturbs the person enjoying this bliss, and no sensual pleasures or material gains hold any attraction (VI.22). In this state, there is no grief and no desire anymore (XVIII.54). Such a person, who is one with Brahman (Brahmabhutah), is also called Prasannatma, since his Atman is in absolute bliss (XVIII.54). This bliss is felt within (Atmani tushyati, V.20) and does not come from outside. In fact, it comes from focusing the mind on the Atman and attaining quietude by withdrawal of the senses from outside through meditation (V.20). The person experiencing it attains Brahma-nirvana (V.24). According to the Gita, when such a spiritual peak and height of happiness is possible, it makes no sense to be bogged down in petty pleasures. However, for success in this, years of mental discipline are necessary, as advised by the Gita. Though meditation helps in this task, it is not enough, and one should consciously strive to control one's mind, keeping it free from immoral temptations and other weaknesses of the mind.
Does it mean that the Gita teaches one to be only an obsessive introvert, cutting oneself off from any interaction with others and spending all the time concentrating on the tip of one's nose? This may not be the concept of happiness for most of us. It is not the concept of happiness in the Gita either. Had it been so, Krishna would not have taken so much time and trouble to persuade Arjuna to fight the war; he would simply have agreed with Arjuna to drop his weapons and go to a forest as a recluse. It stands to reason to deduce that such practices as meditation and withdrawal of the senses from external objects are to be confined to limited periods (though practised regularly for a long time), after fulfilling one's duties in the world. The Gita clearly and explicitly rejects renunciation of the world and of duties to the world as a way of gaining the ultimate or infinite happiness, as is evident from its second chapter. The Gita is not concerned only with personal happiness; it stresses one's duty to contribute to the happiness (hita) of all beings (sarva-bhuta-hita) and people (loka-hita). Its recommendation of loka-sangraha (taking care of people) also suggests this (III.20, 25). But how is this consistent with the goal of moksha or nirvana? The Gita shows the way.
The way shown by the Gita is karma-yoga. The charm of this concept in the Gita is that one can be an active participant in and contributor to the affairs of the world and, at the same time, in the very process, achieve the final goal of moksha. What is more, the advice to do one's work or perform one's duties unselfishly or unattached to the fruit of work does not mean that one cannot enjoy doing so. On the contrary, the Gita recommends doing one's work with enthusiasm (utsaha) and efficiency. A work can be done efficiently only if done with enthusiasm and pleasure. All genuine social workers know this. They derive immense happiness and spiritual satisfaction working for a noble cause unselfishly. This happiness is their reward, for which they do not mind facing risks and undergoing hardships. Karma-yogis may sacrifice many a mundane personal pleasure, but they do not have to sacrifice the happiness of doing work unselfishly. They love doing it so much that even the goal of moksha holds no attraction for them, and they pray that they be born again and again if only to serve the people in dire need. As we saw, many interpreters of the Gita have stressed this teaching of the Gita as its main contribution.
It is not that the Gita is absolutely against other sources of happiness. It allows even vishaya-sukha (sensual pleasure), but within limits, and it urges moderation. Notably, sensual pleasure is considered rajasika, appearing in the middle of the hierarchy of sukhas, and not at the lowest end. It is certainly inferior to the sattvika sukha of spiritual happiness, but is not considered as contemptible as lolling in lethargy, which is tamasika. After all, kama (sensual desire/pleasure) is accepted as one of the four purusharthas (basic human goals) in Indian philosophy including the Mahabharata, and the Gita is not against it. The Gita is also not against pursuing another mundane human goal, artha (wealth and power), and enjoying it. This is clear from Krishna's call to Arjuna to enjoy the wealthy kingdom which he (along with his brothers) would gain after winning the war (bhunkshva rajyam samriddham, XI.33). The Gita only insists that pleasures be enjoyed in a way which is not against dharma or ethics. While describing how the Divine is identified with what is good and glorious in all things, the Gita includes sensual desire unopposed to dharma among them (VII.11).
Indian tradition since ancient times has also commended aesthetic pleasure (rasasvada), and the enjoyment of kavya (poetry), natya (drama), sangita (music) and other arts. There is not a word in the Gita indicative of disapproval of them on the ground of their being inconsistent with religious pursuit. On the contrary, verse X.35 is appreciative of music and poetry. While the Gita has warned against excesses in the enjoyment of sensual pleasures, no such deterrent is expressed against aesthetic pleasures. The ancient Indian tradition has effectively developed these arts both on a religious and on a secular basis. It has been recognised that these arts can be used also for spiritual progress. A question that arises here is where to draw the line between sensual and aesthetic pleasures, since aesthetic pleasures like music can be had only through the senses. Adopting the Gita's approach, one could say that the criterion is whether a pleasure stops at a merely physical level or is, by contrast, morally acceptable, soul-filling and spiritually uplifting. Even sexual pleasure with one's spouse, out of spontaneous love and without force, can be soul-filling and provide spiritual ecstasy. A pleasure should not make one feel guilty, as that is a sign of unhappiness. This may be a subjective criterion, but it is nevertheless useful. After all, happiness is itself subjective. The Gita would add, however, that sexual pleasure, or the enjoyment of power, however ecstatic and intense, and however free from guilt, would not be everlasting; and if one is obsessed with it or makes it the dominant or ultimate goal of one's life, it would come in the way of spiritual liberation and blissful peace. But such a blissful and everlasting peace can come only through sincere sadhana.
A significant teaching of the Gita is that there are both happiness and unhappiness in the world, and that we should not lose our heads on gaining some happiness, nor should we allow ourselves to be devastated or destroyed by sorrow (duhkha). The Gita's wise person rises above this duality of sukha and duhkha, both of which are momentary, treating both with a certain maturity, detachment or equanimity, and aims at making his or her life meaningful and worthwhile on the whole. Some may find this worthwhileness in gaining jivan-mukti (liberation while living); to some it may consist in unselfish service to suffering humanity; and some may be wise enough to combine both effortlessly and effectively. The Gita can be a guide for all three types.
## Ethics in the Gita
A problem in presenting the philosophy of the Gita is to decide the sequencing of its three main aspects – ethics, theology and metaphysics, and sadhana or yoga. The three issues are expressed in terms of three questions respectively: 'What ought I to do?' 'What can I know?' and 'What may I become?' (Srinivasachari 2009: 36). Which of these three questions has the utmost primacy? The problem arises in sadhana too – which comes first, which is basic, among being ethical, understanding the ultimate reality, karma-yoga as selfless action, and bhakti? Can one be ethical without having to know the Ultimate and without sadhana? Is the Gita relevant to such a person too, or only to those who are earnest about all these aspects? Is it possible to arrive at the knowledge of the ultimate reality without striving for it, without sadhana? Or is sadhana possible at all without some notion of the Ultimate? What will inspire sadhana in one who has no notion of It (Tat) – the Ultimate or the Supreme? We begin in any case with ethics as a starting point, but without isolating it from the other two questions, since all three are interrelated. A remarkable thing about the ethics of the Gita is a near unanimity about its content and approach among its many interpreters, in spite of differences on other questions, particularly those related to theology and metaphysics.
The first issue which arises philosophically in ethics is who is the moral agent or moral self. Which self is it that is responsible for moral conduct or behaviour? Interpreters of the Gita, particularly Advaitins, emphasise that Atman, the real Self, is akarta, a non-doer; being Pure Consciousness, it does not act. It is only a witness, sakshi. Gandhi interpreted it as the inner voice. Atman is simply the Divine presence in all beings. This presence is not responsible for our punya (merit) or papa (sins). There is someone else who is responsible for them. The Gita actually talks of three selves – the embodied and empirical self or dehi, the subtle self or jiva, and the real spiritual and eternal Self or the Atman. Dehi is the 'I-maker' or ahamkara, which acts in the mundane world and also feels that it acts. It has the power of free will, though within certain limits. It has the three mental faculties of cognition, conation and feeling. It is responsible for karma and receives its fruit (karma-phala). It is also the one which pursues the purusharthas – human ends or purposes or ideals. It is clear, therefore, that it is the dehi in this sense who is the moral agent. After death, it is not an active doer any more, but its subtle body, the sukshma-sharira or the jiva, carries the fruit of the dehi's karma as a repository. In essence, dehi and jiva are one and the same; the distinction lies in the former being embodied, while the latter, after death, has only a subtle body or sukshma-sharira. To 'enjoy' the fruit of karma after death, it either goes to heaven or hell as the case may be, or takes rebirth. Even if it goes to heaven or hell, it returns to the mundane world through rebirth after the fruit of karma (be it punya or papa) is exhausted. There is no eternal damnation. Freedom from the cycle of death and rebirths is gained only when the dehi realises that its real self is not the embodied mundane self of the body–mind complex but the Atman, the non-doer.
In Gandhi's perspective, there is liberation when there is perfect harmony between the moral agent and the inner voice. Moksha can simply be viewed as freedom of the dehi from the bondage of narrowness of mind, its weaknesses and limitations, such as infatuations, obsessions, hatred, jealousy, arrogance and greed. When the mind is freed from such limitations, it becomes one with the Pure Self, the Atman. Liberation can take place during one's own life. What the Gita teaches is not liberation from the world, but liberation in the world. But after death, there is no rebirth for the jiva identified with the Atman, unless there is a deliberate attachment to the world from the point of view of contributing to its welfare as an instrument of God. Such persons, having no desire for even moksha in its traditional sense, may be reborn for the good of the world. The Gita, however, is not quite consistent in the use of these three terms – dehi, jiva or sukshma-sharira, and Atman – in the strict sense indicated respectively for them, often using the terms dehi and jiva for the Atman. But if we try to arrive at consistency and coherence in their use, they have to be interpreted in the way explained earlier.
One of the most profound ethical teachings of the Gita is contained in verse VI.5, which emphasises the moral responsibility of the self and the freedom of will it enjoys to shape its own destiny. The verse is as follows:
Uddharet atmana atmanam na atmanam avasadayet /
Atma eva hi atmanah bandhuh atma eva ripuh atmanah //
(VI.5)
A literal translation of this verse is that the self should uplift itself by its own self. Never let the self destroy itself. The self is the only friend of oneself; it can also be its only enemy.
Though the word used here for self is Atman, what the Gita means by it is the moral agent having the power of free will. Easwaran, therefore, translates the verse as follows: 'A man should reshape himself through the power of the will. He should never let himself be degraded by self-will. The will is the only friend of the Self, and will is the only enemy of the Self' (Easwaran 1997, Vol. I: 340). This does not mean that we cannot count on God at all, but that even to gain God's help, we have to first exercise our own will and make our own efforts, since we are not inert matter but are sentient spirits endowed with free will, expected to shape our own destiny. A thinking person can certainly have bitter conflicts within oneself, and it is God's presence within as Atman that resolves them. In verses like III.37, the Gita makes it evident when the self can be its own enemy: when the mind is overtaken by lust and wild temper and loses self-control.
The moral responsibility or obligation of individuals, and their freedom to carry it out, is acknowledged by all interpreters of the Gita. Moral obligations are expressed by the term 'dharma', which is basic to the ethics of the Gita right from the beginning to the end. It can be said to be the central principle of the Gita, which itself begins with the word dharma. The world is itself a dharma-kshetra, mentioned in the very first verse of the Gita, which means the ground on which our moral obligations are played out, resolved and reconciled. The word 'dharma' occurs in most of the chapters of the Gita, since stressing its significance can be taken as the main purpose of the text. Dharma is so important that preaching and promoting it is the very rationale of Krishna's avatar, as two popular verses in the Gita itself declare:
Yada yada hi dharmasya glanirbhavati Bharata/
Abhyutthanam adharmasya tadaatmanam srijamyaham //
(IV.7)
('I bring forth Myself, Bharata [Arjuna], [into the world] whenever there is a decline of dharma and an ascent of adharma [immorality].')
Paritranaya sadhunam vinashaya cha dushkritam/
Dharma samsthapanarthaya sambhavami yuge yuge //
(IV.8)
('To protect the good and destroy evil, and [thus] to establish dharma, I take birth in the world from time to time.')
The verses have a double significance. First, they clarify that the essence of dharma, in the sense of ethics or moral order, is to promote the good and eliminate the evil. Second, dharma is so important that the Lord Himself takes responsibility to uphold it in the world. He is the very abode (pratishthaham) of Eternal Dharma (shaswatasya dharmasya), just as He is also the abode of Absolute Bliss (aikantikasya sukhasya cha) (XIV.27). The Gita affirms God's moral responsibility in several places and also clearly suggests that humans follow His way (mama vartmanuvartante manushyah Partha sarvashah, IV.11). In the very act of following the Divine moral order of dharma, human beings fulfil the purpose of the Divine and come close to It or identify with It (madbhavam agatah, IV.10).
Unfortunately, the Gita itself does not explicitly define dharma anywhere, though its content is implied as we see here. But in Karna-parva of the Mahabharata (Chapter 69, verse 58), which comes after the Gita (in Bhishma Parvan), dharma is clearly defined thus:
Dharanat dharma ityahuh dharmo dharayate prajah/
Yat syat dharana-samyuktam sa dharma iti nishchayah//
('Dharma is so called as it upholds. It upholds people [society]. Whatever has this [moral] quality of upholding may be considered as dharma.')
The importance given to dharma in the Gita and in the Mahabharata as a whole stimulated more discussion of it in later writings, particularly in the Dharma-shastras. Dharma is mentioned as the first and foremost of the four purusharthas, human goals, in all the Shastras. The other human goals – artha (acquisition of wealth and power, earning livelihood), kama (satisfying sense objects and one's emotional needs) and moksha (explained earlier) – are all subject to first satisfying dharma, the moral conditions or the moral law. It is the moral responsibility of the dehi to reconcile any conflicts that may arise between dharma on the one hand and the other purusharthas on the other, without sacrificing dharma. This is because the world will not run smoothly otherwise. The dominant need for dharma led the Manu Smriti to famously declare that when dharma is protected, it protects us in turn (dharmo rakshati rakshitah, VIII.15). The Dharma-shastras took up the task of explaining to people that we cannot just live in a society where adharma (immorality) prevails, where anybody can cheat, rob and attack anybody with impunity, and that dharma is the basis of law and order which even kings should follow. A lot of story literature, like the Panchatantra, the Hitopadesha and the Katha-saritsagara, and the Puranas also developed to make the same point, especially to impress young minds and common people. Thus, the Gita and the Mahabharata can be said to have given a big boost to literature on ethics.
However, what upholds the people or the society? The people, and with them the society as a whole, are upheld, and their collective welfare taken care of, when everyone or at least the bulk of them follow their dharma by carrying out their duties or moral obligations scrupulously. Dharma means both common morality and duty. But is dharma universal or relative? Is dharma suited only according to occasions and circumstances? Does it vary according to varna (profession), kala (time), kula (family) and ashrama? Is this what the Gita commends? The stress in the Gita on swadharma (one's own dharma) has led to a lot of controversy. The Dharma-shastras distinguish universal or common dharma (samanya dharma) from the specific or relative dharmas like the varna and ashrama dharmas. The samanya dharma is mandatory for all and at all times, and consists in following a whole lot of values or virtues commended quite explicitly in several chapters of the Gita – particularly the twelfth chapter in verses 13 and 14, the thirteenth in verses 7–11 and 13–14, the sixteenth in verses 1–3 as divine qualities, and again the seventeenth in several verses, particularly 14–17, as characterising the sattvika (the gentle and the good). These are not relative, but absolute values to be respected and followed in common, and are more crucial to upholding the society than the relative dharmas.
These values are well known and also many; they include, in the words of the Gita itself, purity of mind and body (sattva-samshuddhi, shaucham), truthfulness (satyam), non-injury or non-violence (ahimsa), absence of hatred (adveshta), compassion to all beings (daya bhuteshu, mardavam), uprightness (arjavam), forbearance (kshantih), steadiness (sthairyam), fortitude (dhritihi), self-control (atma-vinigrahah), humility or absence of egoism (amanitvam, anahamkarah), unpretentiousness (adambhitvam), charity or generosity (danam), control of senses (damah), fearlessness (abhayam), forgiveness (kshama), studiousness (svadhyaya-abhyasanam), austerity (tapah), tranquillity and absence of anger (shantih, akrodhah), aversion to fault-finding (apaishunam), non-treacherousness (adrohah) and so on. The Gita also advises that one should see the presence of God equally among all (XIII.27) and cause no harm or injustice to any one, including one's own self (XIII.28, 13). The Gita expects all to speak the truth without causing vexation and in a way which is beneficial (XVII.15). It is a long list indeed, but it shows the seriousness of the Gita about the need for virtuous conduct. In listing such virtues, the word dharma may not have been mentioned, but we are left in no doubt that they constitute the common dharma. The significance given to these values also explains why dharma is considered the very first or basic of the four purusharthas – human goals (the others being artha, kama and moksha). Dharma – being virtuous – is meant for everyone to aim at, without exception. This is clear from the fact that everyone wants to be judged as a good person by the society and by God. Dharma is a human goal not only for individuals but also for every society or state (rajyam), because without it no society or state can be happy and liveable.
Gandhi, in his Hind Swaraj, thought that the quality of even a civilisation is to be judged in terms of its moral development, and not in terms of the level of comforts and conveniences. He said, 'Civilisation is that mode of conduct which points out to man the path of duty' (in Parel Ed. 2009: 65).
In immediate response to Arjuna's declared unwillingness to fight the war, Krishna does not, however, appeal to any of these earlier values, but to swadharma – his own duty under the circumstances. In the same breath, he reminds him that he is a Kshatriya – a fighter, a soldier – and as such, he should not desist from fighting a just or righteous war (II.31). When Arjuna talks about renunciation or becoming a monk, thus escaping from all the evils of a war, Krishna says that he cannot do so by avoiding his own duty: 'Better to stick to one's own dharma even with a shortcoming, than to perform someone else's duty well; better to die doing one's dharma, since doing others' duties is (more) risky' (III.35). These two verses have been interpreted by some as casteist – an allegation which is made even against the Gita as a whole. The criticism about the Gita being casteist will be addressed in the next chapter, but we stick here to explaining the real meaning of swadharma which can make good sense even in the present times.
Swadharma does not mean caste duty or jati-dharma, but only the duty of one's profession or varna which is most relevant in the given circumstances. Varna cannot be translated as caste, and the Gita certainly does not talk about caste or jati. Every country in the world has these four professional classes – the intelligentsia; the class of soldiers, security and police; the business class; and the class which depends on manual labour. They are referred to as the varnas in the Gita, the Kshatriyas coming under the second of the four classes. Krishna clarifies that the classes are divided on the basis of aptitude (guna) and work or profession (karma) (IV.13) and svabhava (natural disposition, XVIII.41). They are not necessarily based on birth, though in the course of history they came to be determined by and large by birth and became castes. This was because there were no professional schools except for teaching the Vedas and the art of governance and fighting in wars, and children simply adopted the work of their parents and were taught by them. But the system was hardly rigid, and in the Mahabharata war, there were many in the armies of both sides who were not born Kshatriyas. There was nothing casteist in Krishna's asking Arjuna to fight like a Kshatriya, a warrior. Krishna also does not commend hierarchy in the varnas, but on the contrary preaches equality. The Gita declares that one who sees one's own self among all beings everywhere, treating all as equals (sarvatra samadarshinah), is a yogi (VI.29). Krishna endorses this further by saying that 'one who sees Me [God] in all and all in Me, is never lost to Me, nor am I lost to him' (VI.30). Consistent with this philosophy, no varna or profession or work has superiority over others. The advice is that whatever be one's work or profession, it has to be done as a yoga or sadhana, through which one attains spiritual fulfilment.
The Gita clearly implies that a belief in the presence of God in all the beings means equal dignity of all the beings and their equal right to dignified treatment. It endorses equality, not inequality.
The advice to follow one's own duty (swadharma) rather than someone else's (paradharma) is not meant to bar professional mobility but to insist that once a job is chosen, it must be performed sincerely. The wisdom of the advice to follow swadharma and shun paradharma comes out with hilarious clarity in a cartoon by R. K. Laxman. The cartoon shows a minister telling his peon, who is shown sitting comfortably and reading the files near the door to the minister's chamber: 'You may be a graduate, but bring the papers and files to my desk first!' Nothing can prevent a peon from becoming a minister, but as long as he is a peon, he should do a peon's duty, and not start reading the files and passing orders on them, even if he may be more educated than the minister. Matters can hardly be so hilarious if, for example, an army chief tries to take over the duties of the civilian president or prime minister on the plea that he can perform the task of governance better than the latter. 'Swadharma' just means one's own sphere of duty, and nothing more. There is nothing to prevent a change in one's sphere of duty when circumstances change, subject to universal ethics or samanya dharma.
Why swadharma cannot necessarily mean the dharma inherited from parents is illustrated by A. Parthasarathy with the example of an established physician with a well-equipped clinic and lucrative practice, trying to persuade his son or daughter to follow the same career (Parthasarathy 2011: 252). But the physician may not succeed in this. His children may be interested in some other career. For them, medical practice would be a paradharma, not swadharma.
Swadharma is important as a starting point on one's spiritual path, according to Eknath Easwaran, and relevant even now. He observes:
On the spiritual path, we start from where we stand by fulfilling our present responsibilities, on the campus, at the office, or in the home. This svadharma may change as our spiritual awareness deepens. Later on, as our capacities grow, our responsibilities and opportunities for service will become greater. What is the right occupation now may not be right later on, but as long as it is not at the expense of others, our job or profession can be made a part of our sadhana.... By using the word svadharma Sri Krishna is saying not to try to follow a profession because someone else is following it. It is much better for you to learn to know yourself, to know your assets and liabilities, to remember your training and follow the career which blends with your sadhana, than for you to compare yourself with others and what they are doing.... It is a very enjoyable thing to be oneself, to stop acting.
(Easwaran 1997, Vol. I: 197–98)
The Gita often uses the words swadharma and svakarma as synonyms, since one's own duty (dharma) and one's own work (karma) are expected to be the same. The advice to stick to one's dharma in the third chapter (verse 35, referred to earlier) is given again in Chapter 18 in verse 48, saying that one should not give up one's natural work (karma) even if it has a shortcoming (sahajam karma Kaunteya [Arjuna] sadosham api na tyajet). In both these verses, it is clearly added that dharma has to be followed, and work has to be continued, even if it is found to be defective or imperfect or short of one's ideal (vigunah in III.35; sadosham in XVIII.48). This is very practical advice, not only for the smooth functioning of a society or economy, but also for a peaceful, tension-free, complex-free individual living. A college teacher may feel that he or she is better suited to become the principal of the college, and it may also be true. But as long as she or he is only a teacher, it is better to do that job honestly and then seize the opportunity of becoming the principal when it arises. Following swadharma does not mean going against social and occupational mobility. Two other important verses in the eighteenth chapter, verses 45 and 46, are also clear examples of swadharma and svakarma being used as synonyms. The actual word used is karma, but it means both dharma and karma here.
One should guard against the temptation of interpreting swadharma as purely relative, or as so specially tailored to each individual's nature or svabhava that the common dharma could be ignored. Alf Hiltebeitel quotes Ravana telling Sita in the Ramayana (5.18.5) that 'making love to others' wives and even carrying them off by force is the svadharma of Rakshasas' (Hiltebeitel 2011: 532). Hiltebeitel does not of course approve of such an interpretation of swadharma and shows in a detailed footnote that the epics and the Puranas do not accept it (Hiltebeitel 2011: 533). The Gita itself terms some qualities asuree or devil-like – qualities such as arrogance, ostentation, hot temper, harshness and ignorance – and says that they lead only to bondage and suffering (XVI.4, 5). Even if such qualities are 'natural' to some, it does not mean that they are therefore morally justifiable. On the contrary, the expectation is that all persons try to overcome them, by pondering over what ought to be done (karya) and what ought not to be done (akarya) (XVI.24). The Gita does of course provide for specific dharmas, as, for example, for students, householders and the other ashramas (stages in life), but they too are subject to the common basic moral codes. It is clear, therefore, that universal ethics or samanya dharma prevails over even swadharma.
The opposite of dharma is adharma. The Gita is concerned with it also. While dharma is what promotes moral development and welfare of the world, adharma obstructs them. One could say that deliberately indulging in adharma is sin or papa. The Gita uses several words for it like durachara, dushkrita and mogham. They may not be exact synonyms, the first two conveying more heinous sins. In any case, they represent evil – evil in us as well as in others. The question is how the Gita treats evil. The Gita goes to great lengths in emphasising that we should eliminate or at least control evil within us, by controlling our anger, desires, greed, jealousy and so on. This is an essential part of sadhana or spiritual seeking. However, the question of dealing with evil in others is more problematic and raises the question of whether using violence to curb evil in others is justified. Tilak and even Sri Aurobindo, in his early phase, thought that according to the Gita, violence is justified in countering the evil and evil persons and evil regimes. Tilak finds support for this view from what Krishna says in the verse – Ye yatha mam prapadyante tanstathaiva bhajamyaham (IV.11). Tilak takes it to mean 'eye for an eye', while its usual translation by most is 'In whatever way people worship Me, I reward them accordingly'. Gandhi took the clear stand that the Gita is against using violence as a way of solving the problem of evil in others. There is no dichotomy between dealing with evil within us and dealing with evil in others according to Gandhi's perspective of the Gita. We have to appeal to the sense of what is good even in persons indulging in evil and help to change their heart. There is a battle within us between the forces of the good and evil, and Gandhi treats the Mahabharata war as only an allegory of this struggle. But even Gandhi did not believe in absolute non-violence, since some violence in self-defence in fending off an attack is justified. 
However, such violence cannot be extended as a long-term solution. He had faith in the power of non-violent resistance even against mighty oppression. He found the Gita supportive of this view, because of its stress on control of the mind, equanimity, moral development and dharma, all of which indicate non-violence. Hannah Arendt, however, significantly observes (and some would agree): 'If Gandhi's enormously powerful and successful strategy of nonviolent resistance had met with a different enemy – Stalin's Russia, Hitler's Germany, even prewar Japan, instead of England – the outcome would not have been decolonization, but massacre and submission' (1970: 53). A few lines later, however, she also observes that the use of violence involves a very high price: 'for it is not only paid by the vanquished, it is also paid by the victor in terms of his own power' (1970: 53). Violence can never be a preferred option; nor can it be a lasting solution. Hence the emphasis of the Gita on cleansing one's mind and on appreciating others' problems by asking what we would have done if we were in their position (which is the purport of VI.32). This, not violence, is the Gita's solution to conflict, and it is on this that Gandhi's reading of the Gita's stand on violence as a way of dealing with evil is based. The Gita's attitude to sinners is not spiteful and violent; it gives them a chance to realise their mistake and submit to God. This is evident from verses 30 to 32 in Chapter 9 (as explained further in another context at the beginning of the next section).
A great teaching of the Gita for which it is well known is that of karma as a yoga. It has already been discussed under most of the interpreters, since hardly any of them has ignored it. It is a means of spiritual striving no doubt, but it is also an important part of the social ethics of the Gita. It says that it is impossible to be without karma or work or activity, but one can make one's work more meaningful, purposive and efficient by doing it selflessly and without attachment, and for the welfare or maintenance of the world at large and all its beings (loka-sangraha, in III.20, 25; sarva-bhuta-hita in V.25, XII.4). Of the two means of liberation, renunciation and selfless activism, the Lord clearly shows His preference for the latter (V.2). Though the emphasis of the Gita on karma-yoga is hardly in doubt, a few verses in it do not seem to be consistent with it. In verse XII.16, and again in XIV.15, Krishna says that a devotee who is sarvarambha-parityagi is dear to Him. Gandhi translates it as the one 'who indulges in no undertakings' (Gandhi 1980: 232). Many commentators have also translated it similarly, as the one who has renounced every undertaking (Swarupananda 1982: 284) or one who has given up all initiative or initiation (of action) (Radhakrishnan 1993: 297; Dayananda 2007-b: 167; parentheses in original). Such a translation runs counter to the philosophy of karma-yoga. Palshikar has, however, pointed out that according to Oliville (1987: 16), the word arambha is also used in ascetic literature to mean ritual activities, a meaning which seems more pertinent here. Sarvarambha-parityagi would then mean 'the one who has given up all rituals' (Palshikar 2014: 10). Indulging in rituals does not constitute karma-yoga, as clarified by this verse. A real ardent devotee has no use for rituals, but she or he will continue to serve God through work for the love of God, and God likes it that way. That is the intent of the verse.
Karma-yoga as advocated by the Gita is intended to raise the moral and spiritual status of the individual, as well as to take the society to a higher level of welfare, in a mutually fulfilling way. The Gita transformed the earlier concept of yajna as the ritualistic offering of food in sacrificial fire or animal sacrifice into sharing with others what one has. The philosophy is that we have received from God everything that sustains us, and we repay our debt to Him through yajna by sharing with others, be it food, wealth, knowledge or simply labour or work. The Gita speaks of a virtuous cycle (chakram) of mutual help, which one should not break (III.10–16). Karma is mentioned as an important form of yajna. The word shrama-dan (gift in the form of free labour) may have been coined much later, but its basic principle is to be found in the Gita (Nadkarni 2013: 66). The makers of modern India like Raja Rammohan Roy, Swami Vivekananda, Lokamanya Bal Gangadhar Tilak and Mahatma Gandhi, whose roles and interpretations of the Gita have been discussed earlier (in Chapter 4), were quick to recognise the social significance of the activism preached by the Gita and tried to put this teaching into practice. Satya P. Agarwal (1993) has devoted a whole book of 475 pages to the social significance and role of the Gita. Agarwal observes that there are two stages of loka-sangraha in the Gita – the first, duty-bound and according to dharma, and the second, spontaneous as a way of life (Satya P. Agarwal 1993: 351–84). The second is spiritually higher and more promising, and characterises spiritually and morally mature persons. Such persons are described in the fifth chapter, verse 25 of the Gita, which is translated by Easwaran as follows: 'With all their conflicts healed and all their sins removed, the holy sages work for the good of all beings, and attain the Nirvana of Brahman' (Easwaran 1997, Vol. I: 326).
It does not mean that one has to wait until the person becomes perfect for spontaneity to emerge. It may well emerge naturally once the person is mentally attuned to selfless work. The performance of duties sincerely is itself a sure way to purification of mind leading to perfection (XVIII.45–46).
What distinguishes the two stages is the refinement of mental attitude. A given act, like an act of charity (danam), can be morally of a high order, middle order or low level. An act of charity is morally highest or sattvika only if it is done not only without expecting anything in return but also with all humility, without any conscious sense of doer-ship, and with thoughtful propriety in the choice of the gift and the receiver. An act of charity done expecting something in return is of the middle order morally (rajasika). It can be of a low moral order even if done without expecting a return benefit, if it is done with contempt towards the receiver or to harm the receiver (XVII.20–22). Similarly, a doer or performer of work (karta) is said to be sattvika if he or she is free from emotional attachment and egoism, but at the same time has steadiness, enthusiasm and equanimity about the outcome of work. If he or she is passionately attached to the work and its outcome, and desirous of the fruit of work, such a doer is rajasika. If a doer is lazy, non-serious about work, haphazard or despondent, such a one is called tamasika by the Gita (XVIII.26–28). It is thus clear that the non-attachment commended by the Gita is not a lack of interest in the work and its consequences. The Gita insists on efficiency in work (yogah karmasu kaushalam, II.50); further, work done mindlessly without heed to its consequences (anapekshya) is called tamasika (XVIII.25).
The ethical grading of several activities like work, devotion and danam, and of mental traits like fortitude and happiness, in terms of the trigunas (three gunas) – sattvika, rajasika and tamasika – is one of the most interesting contributions of the Gita. Actually, the Gita contains two ways of ethical grading. One is a simple two-way distinction between the divine (daivee) and devil-like (asuree) qualities in Chapter 16 (verses 1–4), referred to earlier. Human beings, however, are a varying mix of the two qualities. The path of spiritual or moral evolution lies in overcoming devilish qualities and acquiring the divine ones, through deliberate effort at first. Then it becomes a spontaneous process. The purification of mind involved in the process leads one to freedom from bondage. What is even more interesting is the three-way classification into the three gunas in Chapters 14, 17 and 18. The Gita goes to great lengths here in discussing what is most ethical, what is less so and what is least ethical or plainly unethical. The rationale for this additional classification seems to be that no human is purely divine or devilish, and that it is better to spell out in some detail what makes a particular thing more ethical or less ethical. Moreover, when it comes to classifying human nature and activities, there cannot be only pure white and pure black; there is a vast grey zone in the middle. Nor can we say that rajasika is just a mixture of sattvika and tamasika; it has its own independent features.
The origin of the concept of the three gunas goes back to the Sankhya system of philosophy, which is perhaps the most ancient of the six darshanas or schools of philosophy. Though Sage Kapila is considered the father of this school, no text authored by him is available. There is a reference to him in the Gita as the most eminent among the perfected sages (X.26). A systematic exposition of Sankhya philosophy came after the Gita, in the Sankhyakarika of Ishwarakrishna of the second century CE. The Gita took some points from this philosophy as it was extant then and developed them further, especially in its ethical application. While Chapter 14 of the Gita explains the concept and significance of the three gunas, Chapters 17 and 18 apply the concepts to several things like action, renunciation and charity. The three gunas, taken by the Gita as indicating psychological nature, are sattva or sattvika (good, gentle, virtuous, truthful, wise, kind, productive of enduring happiness and enlightenment, benevolent – ethically at the highest level), rajas or rajasika (emotional, passionate, active, dynamic, energetic, outgoing, greedy – ethically at the middle level) and tamasika (or of the quality of tamas – darkness; indolent, dull, passive, apathetic, ignorant – ethically at the lowest level). The Gita indicates, however, that no person is purely of one type, but a combination of the three. In the same person, one nature may dominate at one time, and another at another time. But it helps if a person is aware of his or her nature in each context, particularly to control passion and sloth, and strives to maximise his or her material as well as spiritual well-being. A classification of various things in terms of the three gunas is provided in Table 6.1, based on Chapters 17 and 18 of the Gita.
Table 6.1 Trigunas – sattvika (truthful, sage-like), rajasika (emotional) and tamasika (dismal)

| | Sattvika | Rajasika | Tamasika |
|---|---|---|---|
| Individual's nature | Kind, compassionate, generous, friendly, soft-spoken, calm and composed | Emotional, energetic, active, easily provoked to anger, harsh and critical, passionate | Dull, sleepy, lazy, ignorant, passive |
| Bhakti (devotion to God) | With a pure heart, for the pure joy of loving God and feeling one with Him | Expecting some material reward | Without proper shraddha, half-hearted, or reward expected in the form of harming an enemy |
| Shraddha (commitment and faith) | Essentially in the Divine and humanity | Essentially in acquiring wealth and power | Belief in evil spirits or witchcraft to acquire power to harm others; irrational, superstitious |
| Karta (agent, doer) | Endued with dhriti (fortitude), enthusiasm, humility, detachment, commitment and equanimity in success and failure | Passionately attached to the fruits of work, elated or dejected by the outcome, tending to be aggressive | Indifferent, uncommitted, unskilled, dishonest, ignorant of consequences, malicious, lazy, slow |
| Karma (work) | Done with detachment and for the good of all | Done with a narrow selfish motive, for the good of oneself alone | Done with malice, with no heed to consequences, harmful to oneself and others; duties performed reluctantly |
| Tyaga (renunciation) | Renouncing the fruit of action for oneself alone | Renouncing work because it is difficult or unpleasant | Renouncing obligatory work out of delusion |
| Jnana (knowledge) | Sees the unity behind diversity, synthesises, based on holistic perception | Focuses on diversity or multiplicity, based on analysis | Wrong knowledge that mistakes a part for the whole; indifferent to cause and reasoning; obstinate |
| Buddhi (intellect, discrimination, understanding, perception) | Knows what is to be done and not to be done, what is good and what is bad, the distinction between pravritti and nivritti, and what leads to liberation | Confused, quick to judge | Perverse in attitude, taking right as wrong and wrong as right ('Fair is foul and foul is fair', as in Macbeth) |
| Yajna and other rituals | Done with faith, devotion and understanding of their significance, desiring nothing for oneself but the good of humanity and world peace | Done for the benefit of oneself and family only, and for power and ostentation | Done improperly and without faith, or for harming others, involving no charity |
| Dana (gift, charity, donation) | Given without expecting anything in return, given to the needy or deserving | Expecting something in return, including power and fame; given reluctantly | Given to the undeserving or at the wrong place and time, with contempt for the receiver, or for manipulating and harming the receiver |
| Tapas (penance, austerity) | Performed with faith for the good of all | Performed to gain yogic powers, fame and honour, or worldly things for oneself | Performed with self-torture and/or for harming others |
| Dhriti (fortitude) | High level of moral courage and resoluteness; self-confident, in full control of the mind | Resoluteness used only for a selfish purpose, and ready to compromise for it | Wavering, not resolute; given to grief, diffidence, depression and fear |
| Sukham (happiness) | Based on clear understanding and a clean conscience; long term | Sensual; short term | Based on delusion or perversion |
| Ahara (food) | Soft and juicy, easily digestible; hygienic; contributes to health, life and nourishment; feel-good type | Acidic, spicy, pungent; may be tasty but produces discomfort | Stale, tasteless, unclean; makes one sleepy and indolent; harmful to health |
Note: Based on the interpretation of Chapters 17 and 18 of the Gita.
A few points need to be noted regarding the contents of this table, which throws light on much of the Gita's ethical philosophy. The first is that there can be cross-effects and interaction between things, reinforcing the influence of a particular operating quality on other things too. For example, the Gita believes that the type of food one eats can influence not only physical health but also the psychology and nature of individuals, which in turn can influence attitudes and actions. A sattvika food promotes a sattvika nature, a sattvika nature enables sattvika knowledge and sattvika knowledge leads to sattvika actions. A person is, after all, known by his or her actions, which shape the whole of his or her life. The starting point in the whole process here is right food – a very mundane thing maybe, but one having a significant moral and spiritual impact! Fresh fruit, vegetables, whole grains, nuts and milk are considered sattvika food by tradition, though they are not specifically mentioned as examples of the type in the Gita. Interestingly, there is also no explicit bar on meat, fish or eggs; they are included in neither the rajasika nor the tamasika list (see XVII.9, 10), nor are they recommended as healthy or nutritious (XVII.8). The Gita does not go into too much detail, but, having clarified the criteria of classification, leaves the choice to people.
The second point is the contemporary relevance of triguna analysis even in day-to-day living, and not just for spiritual progress. A tamasika nature can hardly be helpful for success even in mundane matters. One may feel that a rajasika nature is the one most workable in the highly competitive world of today. It all depends on what our real goal is. If it is just the acquisition of endless goods and power, it may appear so. If it is the enduring happiness of oneself and all those around, the relevance of sattvika becomes obvious. An aggressive executive will hardly be successful with those working with him or her. On the other hand, a sympathetic and understanding executive will be far more successful in getting co-operation from colleagues and subordinates. This does not mean that one need not be strict in implementing the right standards, but an executive who has no respect for those working with him can hardly succeed in doing so. Similarly, an executive, or even an ordinary worker, who has a certain amount of healthy detachment about the outcome is likely to be more effective. The Gita does not commend apathy or indifference to the outcome or consequences of one's work or action, which, as it says, would be tamasika. One should have efficiency (kaushalam) and commitment (shraddha) in one's work. The Gita commends, however, that having worked sincerely and put in one's best, one should leave the outcome to God, as He is the karmaphala-data (the dispenser of the fruit of action). Such detachment about the outcome frees a person from anxiety and tension, which not only makes the work more efficient but is also good for the health of the person concerned.
The third point is that an analysis in terms of trigunas can be used as a tool of ethical assessment, with due discrimination, on a much wider scale than what is explicitly envisaged in the Gita. It can, for example, be applied to an economy, a polity, a society or institutions. Just to illustrate, a sattvika economy can be said to be one which is organised and functions ethically, with no poverty, no conspicuous inequality, with safety nets for all, especially the weak and vulnerable, to meet emergencies, and adequate social security for all. It provides equitable access for all to education, health care, amenities and infrastructure. It allows only moderate inflation, and only a sustainable use of natural resources, with pollution minimised to sustainable limits. It ensures employment for all. Even its private sector is not focused on the maximisation of profits or shareholder wealth alone, but takes into account the interests of all stakeholders. In other words, a sattvika economy is sarva-bhuta-hite-rata, that is, engaged in achieving the welfare of all beings. A rajasika economy is focused only on the maximisation of the growth rate of national income, with less attention to the other concerns mentioned earlier. A tamasika economy has neither social justice nor economic growth, but only rampant corruption and illegalities in the running of both public and private sectors, with very ineffective or little intervention to improve the situation. A rider to such a classification of economies is that an actual economy will not be purely of any one of the three types and may in fact have features of all three, but it may be possible to broadly determine which type or nature dominates it more than the other two.
The fourth point is that no individual, institution or economy is permanently of one guna, and there is always a struggle to reach the higher ethical level because it is more satisfying. It need not necessarily be a hard task. Roopa Pai points to simple ways of changing one's state of mind.
If you are feeling depressed and sluggish and Tamasik, force yourself into a state of Rajas, i.e. do something – go for a run, call a friend, pull out your to-do list and finish a couple of tasks on it. You will feel energized, happy, and have a sense of accomplishment at the end of it. If you are stressed from too much Rajasik activity, take a break. Sit down by yourself in a quiet place for 10 minutes, take a few deep breaths, and try and empty your mind of all thoughts. Or simply remove yourself from the scene of activity and sink into glorious Tamas – take a nap, flip channels on your TV, have a snack. It will help you to recharge, and you will soon be ready for Rajas again. From Rajas to Sattva isn't a long journey – any activity that is right and unselfish and good naturally leads to Sattva.
(Pai 2015: 194)
Even tamas can be useful, as hinted by Pai, provided it is within moderate limits. Sattva, rajas and tamas need not always be momentary states of mind and can even characterise more enduring traits of one's personality. That is when tamas has to be minimised and rajas moderated. Those who are already sattvika will have to try to remain so, until it becomes their very nature. The Gita also talks about a state where one transcends all gunas and becomes nirguna, but that is when one's sadhana is ripe and the spiritual goal of liberation is attained.
The discussion of ethics in the Gita so far has been on one aspect, the most important one, that is, dharma, which is also the first of the four purusharthas, human goals or aspirations. The second purushartha is artha (desire for wealth and power), and the third is kama (desire for pleasure). The word kama has been used in tradition mainly to denote sensual pleasure, but in the Gita it is also used in the wider sense of desire for any material thing. For example, the Lord, while identifying Himself with whatever is best in the world, says – dharmaaviruddho bhuteshu kamosmi, that is, 'I am the desire in all beings which is not opposed to dharma' (VII.11). The Buddha considered desire the source of all sorrow in the world. The stand of the Gita is not very different, though in the verse just referred to, it appears that desire is no problem if it is pure and well oriented. Nicholas Lash (Lash 2000: 4, 10 in n-13) raises the question of whether the Gita stands for the suppression of desire or only for its purification. He thinks that the Gita is ambivalent on this issue and refers to verse VII.11, which stands for the purification of desire, and verses III.39 and 43, where the Lord advises Arjuna to kill desire, the formidable enemy of the soul (Jahi shatrum Mahabaho [Arjuna] kamarupam durasadam – III.43). In verse III.41 too, the advice is to kill desire, the destroyer of knowledge and spiritual realisation, by controlling the senses. This verse suggests that Krishna wants only that desire to be curbed which goes against moral and spiritual development, not desire that is consistent with dharma. After all, not all desires can be an enemy of the soul according to the Gita; even on Lash's interpretation, there can be a desire for being virtuous (following dharma) and a desire for God or spiritual realisation (Lash 2000: 6).
Even other desires, such as those for food and basic comforts, need not necessarily be an enemy of the soul, in so far as they are necessary to keep oneself alive and functioning well enough to practise karma-yoga.
The Gita is practical enough to commend moderation; it is against extreme austerity. Here again, the Gita's stand is similar to that of Buddhism. The Gita says clearly that yoga is not for those who either overeat or starve, nor for those who sleep all the while or those who hardly sleep (VI.16). But for one who is moderate in eating, in recreation, in work, in sleeping and in wakefulness, yoga is not difficult (VI.17). The key to yoga is control over one's mind, like an oil lamp burning steadily where there is no disturbing breeze (VI.19). The Gita is not against gaining happiness; it is supportive of gaining happiness which is more meaningful, truly satisfying and lasting. The Gita is not even against recreation of the mind, so long as it is proper and not against dharma (Yukta-cheshta – VI.17). Overall, the basic advice of the Gita is not one of suppressing desires, but one of purifying the mind itself by freeing it from both raga (obsessive attachment) and dvesha (hatred), since it is these two enemies which stimulate uncontrolled desire, anger, greed, jealousy and the like and lead to moral decay. The Gita says that one can move among the objects of the senses without coming under their control, but keeping them under control. The importance of controlling raga and dvesha is emphasised repeatedly, in at least three verses (II.64, III.34 and XVIII.51). Verse XVI.21 regards kama in the sense of lust, krodha (raging anger) and lobha (greed) as enemies of the self, and hence to be rejected or avoided. As Swami Chinmayananda has emphasised, it is the essential purpose of religion 'to lift the limited and selfish human being from his passions, greed, and hatreds to a loftier vision of the world' (Thapan Ed. 2005: 33). That is the purpose of the Gita too.
However, desirelessness as an ideal figures prominently in the karma-yoga commended by the Gita. It comes out clearly in several chapters, from the second to the last, and is reiterated in the discussion of the trigunas. According to verse XVIII.9, a sattvika karma is that which is done without attachment and without desiring its fruit (sangam tyaktva phalam chaiva), but done on an assessment of what needs to be done as duty in the given situation. This is a very difficult ideal. Put aside for a while the second requirement of working without desire for its fruit. What does work without attachment mean? Can such work be effective or good? One may not always be lucky enough to get work which she or he loves, but unless the person begins to love the work she or he gets to do, can it be meaningful and fruitful? An emotional attachment to the work one does is not bad; on the contrary, it has beneficial effects. Is a mother's loving care for her child not sattvika merely because she is emotionally attached to her child, though her care is certainly not selfish? Osho solves this problem by explaining that non-attachment is not its opposite – aversion. 'Even aversion is a kind of attachment – to the opposite of attachment' (2006: 532). Desirelessness too is not aversion, because aversion is a desire for the opposite. Osho says that 'a non-attached mind, according to Krishna, is one who accepts everything unconditionally' (2006: 533). A mother's loving care for her child is non-attached because it is unconditional, and it is therefore sattvika. Non-attachment is just another way of expressing the absence of any narrow or selfish desire. It is not indifference.
Let us now attend to the second requirement, namely desirelessness. What does it mean for a modern economy or society? Workers work for a salary; business enterprises work for profit. Can a doctor or a lawyer work without expecting fees? What is implied by the Gita is that one can certainly work for a legitimate reward, but without greed. A health-care system where the doctors are rewarded by the government, and not by the patients directly, appears to be consistent with karma-yoga. Even under a system of private health care, a doctor should not treat a richer, more paying patient better than a less paying one. A school teacher may be paid for doing evaluation work by the school or the institution concerned, but cannot expect higher remuneration for giving higher marks. A direct link between work and reward can lead to problems in several situations, and an emotional detachment about the reward can increase both fairness and commitment. That is why the Ishopanishad in its very first verse tells us that the world belongs to God, and that we have to take only what is legitimately ours and not covet what is not. This is of course a general philosophical principle, and the question of what is legitimately ours and what is not can be settled only on the basis of goodwill for all, in the Gandhian spirit, and by avoiding covetousness. This is the message of the Gita too. We may pursue our legitimate desires and work to satisfy them, but this has to be within moral limits, without resorting to dishonesty, hypocrisy and greed.
There is great significance in the overall advice of the Gita to keep our desires in check, which would be good not only for one's moral and spiritual development, but also for curbing inequality in consumption, for making more resources available for the needy and for environmentally benign and sustainable development. A situation where endless consumption is the prime engine of economic growth can lead us to environmental disaster. We thus gain guidance from the Gita not only about individual ethics, but also about social ethics and environmental ethics. A few thinkers distinguish between virtues which are 'self-regarding' or individual oriented, and altruistic virtues which benefit others or the society at large (e.g. Yardi 1991: 111). Yardi puts virtues like fearlessness, control over senses and emotions, and modesty under the first category; virtues like truthfulness, kindness and generosity come under the second and promote social ethics. It is not possible to draw a neat line of distinction between the two, because if most people in a society are virtuous even in the 'self-regarding' or individual sense, it would have tremendous benefits for the society; and altruistic virtues, on the other hand, would also benefit the individuals concerned by contributing to their moral and social development. In any case, Yardi's intention is to show that the Gita upholds both types of virtues and promotes both individual and altruistic or social ethics. We have in the Gita a synthesis not only of individual and social virtue ethics, but also of deontological and consequentialist ethics – all in a harmonious blend. Human rights may not be explicitly mentioned, but human duties are, and in the Indian approach, as Gandhi insisted, rights and duties are two sides of the same coin.
There is in the Gita ethics for soldiers in war, for workers in the mundane world, for devotees in love with God, for the wise who seek spiritual realisation through knowledge and even for ascetics who like to transcend the world. What we get from the Gita is holistic, comprehensive ethics, useful in leading our day-to-day lives meaningfully and also transcending it gracefully.
A tricky issue in the Gita is the question of free will. There are certain verses in it suggestive of absolute control by God in all matters, there being no question of choice or free will. The picture of the universal form of God in Chapter 11 shows human beings as merely His instruments, acting according to His will, with their destiny completely in His hands (verses 32–34). Krishna tells Arjuna that all on the opposite side of the army are already destined to be killed, and that Arjuna will be a mere instrument (nimitta-matram) with no choice. Again in the last chapter, Krishna says that the Lord who dwells in the heart of all makes them revolve as if they are all mounted on a machine (XVIII.61). Yet, in the 63rd verse of the same chapter, Krishna gives a choice to Arjuna, asking him to ponder critically over all that was said and then decide as per his wish. Was it a mere courtesy? Why does a mere instrument, a cog in the machine, require courtesy? Further, why does a mere instrument lacking free will need all the teaching about swadharma, controlling desires, detachment and ethics in general? What is the meaning of the law of karma if human beings are all puppets? Karma has a meaning only if they are moral agents. If you take the Gita as a whole, there is no doubt that human beings have at least been granted the power of free will, though this power is not absolute and is subject to limits. Perhaps the distinction made by Shankara between vyavaharika satya (relative or practical reality) and paramarthika satya (ultimate or absolute reality) may help in resolving the issue. There is free will, subject to limits, in the former, but no such question in the latter, where everything is fused into one ultimate reality with no distinctions.
## God and His world
The Gita expects us to go beyond ethics. It is important to be a good person, but it is not enough. One has to have a higher purpose in life, to awaken to the deeper reality and meaning of what we are and of our relation with the Ultimate. The Gita goads us to rise above the merely mundane, find the Divine within and realise our full potential. The Gita in its essence is theistic, and its major purpose is to make us know ourselves and our Maker. Ethics, however, continues to be relevant. It is connected to God in two ways in the Gita. One is that by leading a virtuous and selflessly active life, the mind is purified, which leads to God-realisation. Ethics is quite necessary for sadhana. But the Gita's God is kind even to a sinner (suduracharo) (IX.30). Even if a person falters in ethics, he will be saved by God and brought to a good path if only he takes refuge in Him with the right resolve. There is a heart-warming promise: 'My devotee will never perish' (IX.31). The whole tone of the two verses, IX.30–31, is one of compassion to sinners who surrender to God, but not to sinners considered incorrigible – too arrogant, lustful, hateful and obstinate (XVI.18–19). A distinction between two types of sinners – those having the potential for redemption and those who are incorrigible – is evident. This does not belittle the importance of ethics, but only enhances the significance of the role of God. If only a sinner turns to God, he becomes a dharmatma, a righteous person (IX.31), because it will make him repent and resolve not to behave badly again. 'God holds us, fallen though we be, by the roots of our being and is ready to send His rays of light into our dark and rebellious hearts. The very consciousness of our imperfection and sin betrays the pressure of the Divine on our hearts' (Radhakrishnan 1993: 251).
The primary condition here is that one should give up ego and be open to the Divine, because then 'the Divine takes up the burden and lift the soul into the spiritual plane' (Radhakrishnan 1993: 251). The concept of God here is a personal deity. People need God not so much as an impersonal force running the universe at large, but as the father, mother, friend or all combined, with whom they can talk and confide, to whom they can pray for help and protection in an increasingly uncertain and vulnerable world, made so by man himself in no small measure. Faith in a personal God is a source of great emotional security, even if there may be occasions of disappointment. For a devotee, each disappointment draws him or her closer to Him, enabling not merely survival through a turbulent world but even motivating for success in it.
Lord Krishna himself is a personal God in human form, who according to the Gita itself is an avatar of the Supreme. But the Gita also speaks of the Supreme as a personal but formless God endowed with all good qualities necessary to take care of the world and His devotees. Even as He is the origin and the Lord of all the worlds (sarvaloka-maheshwara), He is at the same time the friend of all beings (sarvabhutanam suhridah) (V.29). This God is described as omniscient (kavi), ageless or ancient (purana), ruler of everything (anushasitara), minuter than an atom (anoraniya), sustainer of all (sarvasya dhatara), of no conceivable form (achintya-rupa), self-luminous like the sun (aditya-varna) and beyond darkness (tamasah parastat) (VIII.9) (Tr. Swarupananda 1982: 184). He is the One to be known, the Pure, Upholder, the syllable Om, the three Vedas (Rik, Sama and Yajur), the Ultimate Refuge and Abode, the [understanding and helping] Friend, the Witness and the immutable substratum too (IX.17, 18) (Tr. Swarupananda 1982: 207). The Supreme is said to have manifold manifestations due to His yoga power (X.7) and is thus the origin of all (X.8). Arjuna acknowledges that Krishna in human form is the same as the Supreme Brahman, the Unborn, the Eternal, to whom all sages have been devoted (X.12). He is the Highest of all beings, Purushottama, a concrete Divine Personality and also the Supreme Self, the Paramatma, who pervades and sustains all the worlds (XV.17).
The concept of personal gods goes back to the Vedas. Vishnu, Indra, Varuna and Agni, for example, were personal gods in the Rigveda. Vishnu was taken as pervading (vish) the whole universe and controlling and taking care of it. Krishna was identified with Vishnu (Radhakrishnan 1993: 26). The Shwetashwatara Upanishad too has a personal god – Shiva – but also identified with the Supreme Brahman (Yardi 1991: 91). The Gita similarly has Krishna as a personal god, also identified with the same Supreme Brahman.
A personal God can be worshipped in any form or even without form as per the inclination of the devotee, who should have the faith that his personal God is only the same Supreme who is the origin and protector of all. The Gita assures that whatever form any devotee wishes to worship with shraddha, the Lord strengthens that shraddha and grants his or her desires (VII.21–22). Moreover, in Hinduism, personal God can be One or several, though it is always insisted that all the personal gods are various forms of the One and the same Supreme. The Gita declares that the worship of different gods also goes to the same Supreme, even if this worship is not done systematically (IX.23). There is thus freedom to conceptualise a personal god as per one's inclination and find fulfilment. The form is not important, but the shraddha and devotion are.
It is clear in the Gita that this personal God, also the Ishwara of all the worlds, is saguna Brahman (the Supreme with attributes). He is identified with the essence, the most significant and the best of everything in the world. He is the radiance of the sun, the sapidity of water, the syllable Om of the Vedas, the life of all beings and intellect of the intelligent (VII.8–10). 'There is no end to the particulars of My manifestations' (nasti anto vistarasya), as Krishna tells Arjuna (X.19) (Tr. Swarupananda 1982: 229). He is the atma existing within all beings. Among the radiant, He is the sun. Among the priests, He is Brihaspati. Of the waterbodies, He is the ocean. He is the Himalaya among the mountains, the Ashwattha (peepul) among the trees, the Ganga among the streams, the letter A among the letters and so on. He is the knowledge of the knowers, and the power of the powerful. He is the knowledge of the Self among all knowledges. He exists supporting the whole world only by a portion of Himself (X.19–42). He is the strength of the strong which is devoid of desire and attachment. He is the desire among beings which is unopposed to dharma (VII.11). He is the very abode of absolute and eternal dharma (XIV.27). Being the father and mother and friend of all the beings (IX.17–18, V.29), He is also the source of limitless love.
Though Ishwara as the personal God is Omnipotent, with no limits to His power, He takes responsibility neither for anyone's sin nor for anyone's merit (nadatte kasyachit papam na chaiva sukritam Vibhuh, V.15). Radhakrishnan explains: 'If the universe consists of active choosing individuals who can be influenced but not controlled, for God is not a dictator, conflict is inevitable. To hold that the world consists of free spirits means that evil is possible and probable. The alternative to a mechanical world is a world of risk and adventure. If all tendencies to error, ugliness and evil are to be excluded, there can be no seeking of the true, the beautiful and the good' (1993: 24). Yet, the Gita's God is vitally interested in ensuring that in the conflict between good and evil, the good ultimately wins. 'He pours out His wealth of love in helping man to resist all that makes for error, ugliness and evil. As God is completely good and His love is boundless, He is concerned about the suffering of the world' (1993: 25).
As personal God, Krishna is not only our parent and friend, but also our Guru. 'He is not a hero who once trod the earth and has now left it, having spoken to His favourite friend and disciple, but is everywhere and in every one of us, as ready to speak to us now as he ever was to anyone else. He is not a bygone personality but the indwelling spirit, an object for our spiritual consciousness' (1993: 31). A special significance of the personal God in the Gita is that He is 'declared to be available to all, irrespective of their social status, gender or rules of ritual purity', or even 'karmic luggage' (Malinar 2009: 7). The Gita can be said to have democratised religion in India as never before, because of this and also by providing easy means like bhakti accessible to all.
Though an important aspect of saguna Brahman is His fond personal relationship with all beings, He is also immanent and has a universal form – Vishwarupa. A whole chapter, the eleventh, is devoted to describing this form of the Supreme. In this form, He appears not as a personal God, but as the one who pervades the universe and as the fierce force behind its relentless and continuous dissolution and renewal. Arjuna wanted to have a cosmic vision of the Lord, His real form, and is given special divine eyes to visualise the magnificent cosmic spectacle of the immanence and unlimited raw power of the Supreme. The impact of this on Arjuna is one of profound awe and even fear. The whole universe is now seen to be the body of the Supreme, in which exist and move all things, sentient and insentient – from celestial bodies to the smallest creatures. The cosmic form has the splendour of a thousand suns and hosts all the multiplicity of the world (pravibhaktam anekadha, XI.13). It is 'the very form of the Lord that includes all the forms', and there is nothing of the universe that is outside it (Dayananda 2011, Vol. 7: 11). Even though awestruck, with his hair standing on end, Arjuna breaks out into a spontaneous and highly poetic praise of the Lord. He exclaims, among other things, in the form of an evocative verse:
Tvam aksharam paramam veditavyam
Tvam asya vishwasya param nidhanam /
Tvam avyayah shashwata-dharma-gopta
Sanatanas tvam purusho mato mey //
(XI.18)
('You are the Imperishable Supreme, the only thing to be known. You are the great refuge of the whole cosmos, the undying Guardian of Eternal Dharma, and the primeval eternal Purusha as I see.'
– Tr. by the author)
The essential idea conveyed by the Vishwarupa chapter is that the world is not separate from God, but is the body of God, though He may be much more than this body. Arjuna visually experiences that the whole cosmos is all one in the Supreme. There is no hint anywhere in the Gita that the cosmic form is unreal or an illusion, or that only the unmanifest is real. Interestingly, the idea of the Cosmos being the body of the Supreme corresponds with the related concept of the human body being the abode or the field (kshetra) of the jiva, who is the knower of the field (Kshetrajna, in the thirteenth chapter). At the cosmic level, the Supreme is the Kshetrajna of the whole Cosmos. Since He pervades it, He is also in us all. Easwaran remarks: 'The greatest wonder is that this tremendous radiance reflected throughout the cosmos shines within us too. As our spiritual experience grows, as our separateness goes and our ego dissolves, we will experience a tremendous effulgence spreading throughout our consciousness, which no experience of the senses can ever help us comprehend' (Easwaran 1997, Vol. II: 276). 'The deeper we probe into the nature of the universe, whether it is the vastness of space or infinitesimal world within the atom, the more we shall see the glory of the Lord revealed' (Easwaran 1997: 277).
With nirguna Brahman, or Parabrahma, one goes deeper into the quest for ultimate reality. Brahman, though an impersonal Absolute, as Radhakrishnan stresses, is not a mere abstraction in the Gita, for It is the very basis of all Reality, including the finite world (1996, Vol. I: 534). In contrast to the detailed description of the Supreme in His manifested cosmic form, there is not much in the Gita by way of explaining Parabrahma, the Unmanifest Supreme. This is understandable, because what is basically ineffable cannot be explained much; it has to be experienced or realised. Among the few verses which deal specifically with the Unmanifest or nirguna Brahman is the third in the twelfth chapter, where the Supreme is said to be Akshara (Eternal), Anirdeshya (Indefinable), Avyakta (Formless, Unmanifest), Sarvatraga (Omnipresent), Achintya (Unthinkable, Unimaginable), Kutastha (Immutable), Achala (Unmoving) and Dhruva (Firm, Constant, Permanent).
Surprisingly, there is explicitly no reiteration of the Upanishadic epithets of Brahman in the Gita – Sat Chit Ananda (truth/existence, consciousness, bliss), in personal, universal or even nirguna form. Indirectly, however, these epithets are clearly implied. The sixteenth verse in the second chapter says that what exists is sat, and the unreal does not and cannot exist. It is the very nature of truth that it alone exists, and falsehood (asat) can never. The second line of the verse says that persons of discrimination know what exists and what does not, what is real and what is unreal. The intention of the verse is not just to convey the truism that what exists, exists, but to indicate that Brahman is sat, and Existence is Its very nature. It is meaningless to call for a proof of the existence of God in the sense of Brahman, because Existence is what it is, and nonexistence is its opposite – asat. The first half of the fifteenth verse in the fifteenth chapter, for example, says that the Lord resides in the heart of all and is responsible for memory and perception, as also their loss (Sarvasya chaham hridi sannivishto mattah smritirjnanam apohanam cha). Memory and perception are suggestive of consciousness or chit. Verses 27 and 28 of Chapter 6 clearly say that supreme bliss comes to the yogi who has realised the Brahman, the resulting happiness from the contact with the Brahman being simply extreme (atyantam sukham). This is clearly about the ananda aspect of the Brahman. The Gita also goes further. Not only is the Brahman of the nature of sat, chit and ananda, It is also the source of energy (ojas) which runs the world, whether the beings of the world are conscious of it or not. The Lord tells Arjuna, 'entering the earth with my energy I support all beings, and nourish all the herbs.... Abiding in the bodies of all beings as digestive fire, working with their very breathing, I absorb the fourfold food' (XV.13–14).
Thus conceptualised, one does not have to seek God outside, be it in idols in temples or places of pilgrimage. The striving yogis find Him in themselves and, that too, firmly established (yatanto yoginah chainam pashyantyatmani vyavasthitam, XV.11).
There are several verses, especially in Chapter 2, which explain evocatively the nature of Atman, and in the Advaita philosophy, Atman and Brahman are the same. Atman or Brahman is that by which all this world is pervaded (yena sarvam idam tatam), which is indestructible (avinashi, anashinah), illimitable (aprameya) and immutable (avyaya) (II.17, 18). It neither slays, nor is slain; it is never born, nor does it die. It is not that, not having been, it comes into being. It is eternal (II.19–21). Like discarding old clothes and wearing new ones, the dehi discards worn out bodies and takes up new ones (II.22). Weapons cannot pierce it, nor can fire burn it. Water cannot soak it, nor can wind dry it (II.23). It is eternal (nityah, sanatanah), omnipresent (sarvagatah), constant (sthanuh) (II.24). It is unmanifest (avyakta), unthinkable (achintya) and unchangeable (avikarya) (II.25). Remarkably, the same epithets are thus used both for Atman and for Brahman in the Gita. A few verses leave no doubt that the indwelling Atman and Brahman are the same. Take this for example:
Upadrashta Anumanta cha Bharta Bhokta Maheshwarah /
Paramatmeti cha api ukto dehesmin Purushah Parah //
(XIII.22)
('And the Supreme Purusha in this body is also called the Looker-on, the Permitter, the Supporter, the Experiencer, the Great Lord, and the Highest Self.'
Tr. Swarupananda 1982: 301)
The remarkable similarity between epithets used for the Brahman and the Atman has been noted earlier showing them to be impersonal. Equally remarkably, the epithets of personal God are, however, different, the emphasis here being on His compassion, love for devotees, protective nature and so on, which are absent among the epithets for Brahman. The utterly impersonal nature of the Brahman, the ultimate reality, can be not only puzzling but also disconcerting to devotees in need of a personal God. The thirty-first verse in Chapter 13 states clearly: 'This inexhaustible Supreme Self, being without beginning and without qualities, does not act and is not tainted, though stationed in the body' (Tr. Radhakrishnan 1996, Vol. I: 535). It is merely a spectator, a looker-on and is not a doer (akarta). Radhakrishnan explains: 'The whole drama of evolution belongs to the object world. Intelligence, mind, senses are looked upon as the developments of the unconscious prakriti, which is able to bring about this ascent on account of the presence of spirit. The subject self is within us calm and equal, uncaught in the external world' (Radhakrishnan 1996: 535). The Brahman may be the source of our very existence and our consciousness. But to a devotee who needs an intervention to tide over the problems of the world, an impersonal non-doer God may serve little purpose. The impersonal God may appeal to a reasoning or rationalist mind, but not to a person who wants a listening and acting God.
We have thus different conceptualisations of God in the Gita, which can appeal to different types of persons. The Lord says in the Gita itself that four types of virtuous persons worship Him: the distressed (artah), the seeker of knowledge (jijnasu), the seeker of wealth (artharthi) and the wise (jnani) (VII.16). The Lord further says that the last, being singularly devoted and liberated [from pettiness, selfishness, etc.], is dearest to Him, just as the Lord is dearest to him or her above everything else in the world (VII.17). He clarifies further that though some of them have a narrower vision and discrimination, all these seekers are noble and good (udarah), and He fulfils their respective desires (VII.18–22). The important point made is that the Lord satisfies all His devotees according to their own respective conceptualisation and perception, some getting their earthly desires fulfilled, which means limited rewards, and some (like the jnanis) getting nothing less than oneness with the Lord Himself and gaining absolute bliss. Thus, the question of what form of God or ultimate reality is most relevant is decided by the devotees themselves according to their own perception and understanding. They get what they seek. If they seek only earthly titbits, their rewards may be limited, but they are of their own making. If the seekers are spiritually more ambitious and aim big, they also get as per their aspirations. We get the God we strive for.
Deepak Chopra observes in this connection, 'In our search for the one and only one God, we pursue the impossible. The issue isn't how many Gods exist, but how completely our own needs can be spiritually fulfilled' (2001: 43). He says that in any case we need God, 'because without a source, our existence has no foundation at all' (2001: 7). But we select a deity based on our own perception or interpretation of our experience with reality. It is not as if our selection of a particular deity or concept of God is fixed and final. It evolves through stages, and Chopra identifies seven stages, with each stage meeting a particular human need. A person who needs protection from threat and danger needs a Protector God. This is the first stage. In the language of the Gita, it is the stage of arta, the distressed. Once the feelings of fear and insecurity of the first stage are transcended, the stage is set for seeking power and wealth – that of artharthi. This is stage two, where the seeker needs an Almighty God. The dominant emotion of stage two is awe about the power of God and the desire to accomplish much in the world with His blessings. In the third stage, we seek a God of Peace, detachment and calm. Life's challenge now is to be actively engaged and detached at the same time, enjoying the inner silence. The third stage can be said to be one of beginning with karma-yoga in the Gita's terminology. The spiritual seeking is no longer self-oriented. In the words of the Gita, the seeker is engaged in loka-sangraha and sarvabhuta-hita, the welfare of all beings. In the fourth stage, God is the Redeemer. We seek forgiveness and love of God. We need a loving and understanding God now. This is the stage of ardent devotion – bhakti in the words of the Gita. It enables the bhakta to get out of the coils of karma, having been aided by karma-yoga. It is still a stage where dualities persist.
In stage five, God is the Creator. In this stage, the sense of duality between the seeker and the sought God is reduced, and he or she joins God in a partnership of co-creation, participating in the continuous renewal and improvement of the world, enhancing its beauty and enjoyableness. One discovers one's creative self and gets engaged in creative expressions in art, literature, music and so on. In this stage, the seeker works with God, in harmony with His benign purpose. In stage six, we have a God of miracles – not in the usual sense of doing abnormal things breaking natural laws created by God Himself, but bringing about deeply transformative or epochal or paradigmatic shifts breaking with past trends. Several revolutionary changes have changed the world irreversibly, in which humans have partnered with God. The development of paper and printing technology in the past and of computers and the Internet in more recent times are examples of these changes. In this stage, the seeker is imbued with rich vision, drawing it from God. Many smaller miracles also take place in our lives, such as a recovery from what was regarded as a hopeless illness, often due to the strength of one's mind and faith. Miracles can and do take place at a spiritual level, when after years of effort, a sudden spiritual awareness dawns effortlessly and stays thereafter, transforming one's life. Such experiences have been recorded in the lives of several spiritual masters such as Shri Ramakrishna Paramahamsa and Shri Ramana Maharshi. The stupendous achievements in the short lives of Adi Shankaracharya and Jnanadev are nothing short of miracles. The final stage, stage seven, is a culmination of spiritual striving, when the realisation of God as Pure Being, 'I am', takes place. Chopra says, 'The God of stage seven is holistic – he encompasses everything. To know him, you would have to possess a mind to match.... 
The God of stage seven is so intangible that he can be defined by no qualities' (2001: 155). He further says that in the ancient Indian tradition, this aspect of spirit is defined only by negation – 'Unborn, Undying, Unchanging, Unmoving, Unmanifest, Immeasurable, Invisible, Intangible, Infinite' (2001: 156). As Chopra says, if this stage had been suddenly reached at the outset, one would not have realised the reality of God. 'You have to climb the spiritual ladder from one rung to the next', and once having reached the summit, no support is needed, not even of the mind (2001: 156). When you become one with the Pure Being, ordinary life, with all its activities, does not cease to exist, says Chopra. Until then, one's identity is 'wrapped up' in karma, in a cycle of desires that lead to actions, actions leaving impressions, which in turn lead to desires. In the final stage, one is freed from these cycles even while engaged in the world, for deep within, there is a genuine feeling of non-doer-ship, complete calm and detachment.
We come now to the question of the relationship of God with the world, including the question of the reality of the world and of us all in it. The Gita does not anywhere say that the world is unreal or that it is an illusion. There is not a single verse suggestive of denial of the world as false or mithya. On the other hand, there are many verses which affirm the reality of it, though not as a reality independent of God. The Lord describes His lower prakriti as eightfold, comprising the five elements beginning with earth, along with mind, intellect and ego (VII.4), which are sustained by His higher prakriti of sat, chit and ananda (VII.5). The implication is that the world is a part of the same Divine, but a dependent part. It is affirmed that God is both the origin and the cause of dissolution of the world (VII.6). There is a brief discussion of the different aspects of the Divine and its relation to the world at the beginning of Chapter 8, in verses 3 and 4, in response to an opening query by Arjuna made in verses 1 and 2. It sounds a bit complicated, but is significant. According to Krishna's reply, the Ultimate transcendental and imperishable reality is the Brahman, which at the same time imparts existence to everything and every creature in the universe. It exists also in the perishable bodies (adhibhuta) as the adhyatma, sustains everything as adhidaiva and receives all offerings (including actions and devotion) as adhiyajna. The different aspects of the Divine may have been given different names, but it is One, and there is no question of a lack of co-ordination! The verses could also be interpreted as speaking of three planes of reality – the perishable material world (adhibhuta), under the care of a personal God (adhidaiva and adhiyajna), both resolving ultimately into the impersonal and imperishable spiritual reality of adhyatma or Brahman. The whole universe is pervaded by Him (IX.4). He projects again and again into the whole multitude of beings (IX.8).
As noted earlier, the whole universe is shown as within His body, in Chapter 11. He moves the whole universe and all the beings in it, including plants, through His energy (ojas) (XV.13). Yet, it is stressed that He is both doer and non-doer, and His actions do not bind Him, and He is unattached (IV.10–11). In other words, the moral responsibility for actions of at least the human beings does not belong to Him but to the humans alone. The Lord has provided them with a mind (manas), intellect (buddhi) and ego (ahamkara) of their own (VII.4). They have the freedom of choice or free will. It is clear from this that the whole world and all the beings in it are real, dependent on the Lord, and yet separate, with at least the human beings having a will of their own. But there is no question of absolute freedom for us, because we are subject not only to the laws of nature, but also to each other's freedom. Thus, the world of the Gita is one in which Arjuna and all beings have enough freedom to operate, play their roles, and choose between good and bad, or between right and wrong. In Shankara's words, it is a world of vyavaharika satya.
Yet the Lord does not expect us to remain bogged down in the mundane world (samsara) all the while and be burdened with its sorrows and problems until eternity. Though the world is His creation, having given free will to humans at least, the Lord knows that this could land them in conflicts and even crises. Man needs freedom of will to realise his full mental and spiritual potential, but the karmic cycle is the side effect of this free will. The Lord expects humans to come out of the coils of karma, and thus transcend samsara or the world, and ultimately realise oneness with Him, and wake up to the ultimate reality, where there is no distinction between God and His world. Thus, the first verse in Chapter 15 compares samsara to the inverted peepul tree, with roots in the air and branches and leaves below, and expects spiritual seekers to cut this tree down with the axe of detachment. This has to be done by each seeker individually, since the tree is perennial, and when cut down by one seeker, it does not mean that it is cut down for others too. Significantly, the tree is stated as having its roots above (in the Lord himself), since the mundane world is also His creation only. But the fact that it can be cut down at the individual level, when the spiritual development of the individual is ripe, means that samsara can vanish in individual cases and the liberation from it is individual, not collective.
The world, thus, is not a completely objective fact, and its nature depends on the individual seeker's perception of it too. It can vary with the level of one's spiritual development. Chopra sees seven stages in this regard too, corresponding to the seven perceptions of God described earlier. In the first stage, the world is seen as full of misery and conflicts, and the response of a person to it is fight or flight. In the second stage, it is a world of competition and opportunities for satisfying one's ambition. In the third stage, it is one of inner self-sufficiency and calm, attained not by being an introvert but by being constructively and altruistically engaged in the affairs of the world and finding inner peace and fulfilment. In the fourth, it is a world of opportunities for insight and understanding, and further personal growth. In the fifth, it is one providing scope for creative response through art, discovery and invention. The sixth stage is one of developing a visionary response to the problems of the world. The seventh and final stage is a transcendent world, where one sees beyond the world and gets to realise the ultimate reality of being one with the Pure Being (Chopra 2001: 177). The world thus is not just one of sorrow and struggle for survival, but is also one full of opportunities for self-development and fulfilment. The approach of the Gita to the world is not one of escapism, but one of active and constructive engagement, and ultimately gaining inner calm and fulfilment.
## Sadhana: spiritual striving
The literal meaning of sadhana is achieving or striving to achieve. Achieving anything requires dedication and commitment or shraddha, which are implicit in the term. The one who is on the path of achieving is called a sadhaka or spiritual seeker. Often the journey is more meaningful and exciting than the destination, and so it is with sadhana. The very task of spiritual seeking, when sincerely practised, purifies and ennobles us, imparts more confidence in us, strengthens our moral fibre, gives us peace of mind and lifts us above the ordinary, gross and mundane. And that is an achievement by itself.
Even then, it is the destination which inspires us and makes us undertake the arduous journey, and it is the destination which makes us choose the path. What is the destination of sadhana? The conventional answer is, by and large, moksha, which literally means liberation, and what is meant is liberation from the karmic cycle of births and deaths, taken as the highest human goal, or purushartha. This is also the ultimate goal set by the Gita – the goal of 'freeing oneself from the fetters of rebirth and reaching that state which is beyond all evil' (Janma-bandha-vinirmuktah padam gachchanty anamayam, II.51). But not everybody may entertain such a distant goal, even if believing in rebirth. Most of us would be content with success in ensuring material prosperity and happiness in the present life itself, for which we may seek the blessings of the Almighty. The Gita does not deplore seeking artha and kama, but expects us not only to gain them through following the path of dharma; it also invites us to go even beyond and find greater, more meaningful and lasting happiness. The Gita does not think greatly of aiming at the goal of heaven and its pleasures, as they are also temporary like the pleasures of the world itself (II.43). It invites us to broaden and raise our concept of happiness itself. The beauty of the goal of liberation is that whether we believe in freedom from rebirth or not, the very path of achieving it promises freedom from bondage to the narrow limitations of the world, and in the process, we find our self expanding in scope, going beyond selfish interests and becoming so inclusive that we would be engaged selflessly in achieving the good of all (sarva-bhuta-hite ratah, V.25 and XII.4). Interestingly, in both verses where there is a reference to those engaged in sarva-bhuta-hita (the welfare of all), there is either a promise of liberation (brahma-nirvanam) or of uniting with the Supreme (prapnuvanti mam). 
Such persons are also described as freed from all blemish (kshina-kalmashah, V.25), with senses under control, and even minded (XII.4). The fact that moksha was not necessarily the preferred goal of sadhakas even in the past is clear from a verse in Shrimad-Bhagavatam:
Naham kamaye rajyam na svargam na cha apunarbhavam /
Praninam duhkha-taptanam kamaye duhkha-nashanam //
('I desire no kingdom, no heaven, not even freedom from rebirth [apunarbhavam or moksha] for myself. I desire only that beings afflicted by sorrow be relieved of it.'
– Tr. by author)
The moksha that is sought here is the liberation from sorrow (duhkha) in others. It is freedom from poverty, illiteracy, homelessness and disease in others, not freedom from rebirth for one's own self, because that would be selfishness. The goal is expanding one's self into such an inclusive entity that it covers a concern for all the beings in the world. And that is genuinely uniting oneself with God, the ultimate reality, which is real moksha.
To avoid a possible misunderstanding that the goal of sadhana is escaping from the material world and seeking spiritual solace thereby, Jayant Kalawar prefers to define sadhana as striving to transcend the gross and seek the subtle, and a sadhaka as the subtle striver. He says that we are all material experiencers (samsarika) and at the same time subtle strivers (Kalawar 2012: 84–85). The value of the Gita's teaching lies precisely in reconciling the material with the spiritual and helping us to move from the gross to the subtle. G. S. Amur, through his book Lokayatre (in Kannada, 2013), shows that not only the Gita but the Mahabharata as a whole does not teach us to reject the material and choose instead the other-worldly spiritual, but to spiritualise our life and material experiencing itself. But what is gross and what is subtle? Kalawar explains that lemonade may be gross matter, but its quality which we savour and enjoy – its aroma, sweetness and cooling effect – is subtle. What we seek is not matter per se, but this subtle quality. The material experience itself is sukshma, that is, subtle (Kalawar 2012: 17). The Gita of course goes much further, and by explaining the anatomy of the self, it shows how we move from the less subtle to the more subtle. It says in verses III.42 and 43: 'The senses are superior (to the body), the mind is superior to the senses, the intellect is superior to the mind, and that which is superior to the intellect is He (the Atman). Thus knowing Him who is superior to the intellect, and restraining the self by the Self, destroy, O mighty-armed, that enemy, the un-seizeable foe – desire' (Tr. by Swarupananda 1982: 94–95). Sadhana thus consists of striving to find the most subtle and important in us, the Atman, and realising our full potentiality. Stated in a different way, the goal is to find Krishna in us all as the witness, inner voice, energiser and protector.
An important part of sadhana is preparing the mind for it, whatever path the seeker chooses. Being a virtuous person for its own sake – respecting moral values like truthfulness, compassion and generosity in day-to-day conduct, and totally avoiding hypocrisy – makes the mind pure and trains it for sadhana. Closely related to this is the need for self-control – the conquest of the shadvairis (six enemies – lust, anger, avarice, conceit, infatuation and jealousy), and what is more, the conquest also over dualities like pain and pleasure, and whimsical likes and dislikes. Conquest does not necessarily mean full suppression or elimination of them; it only means being in full control of oneself and not losing one's balance by these disturbances. A mind prepared thus qualifies not only for spiritual striving, but also for social service and assuming leadership (V.25). As Parthasarathy says, a self-controlled person 'operates with his discriminating intellect rather than whims and fancies of his mind' (2011: 372).
Though apparently a number of alternative ways of sadhana are indicated in the Gita – karma-yoga, jnana-yoga, dhyana-yoga and bhakti-yoga – there are interpreters as we have noted earlier who feel that all the yogas resolve themselves into an integrated one yoga only. However, there is no unanimity about what that one yoga is, or which of the different yogas mentioned in the Gita is the most important or decisive. The Gita starts with Arjuna's dilemma in this regard. He wants nihshreyas, the ultimate good, for himself and thinks that the best way of achieving it is renunciation of work or sannyasa. No, declares Krishna! The world is bound by action and driven by action, and there can be no escape from action. The choice lies not between action and inaction, but between more meaningful or ennobling action and selfish actions, or between action that secures us freedom and actions that bind us to karmic cycles. Work done with detachment, without a desire to appropriate the fruit of it for oneself, and further, done without a sense of agency or doer-ship, is the key to freedom, according to the Gita. This in brief is karma-yoga. The essence of the whole Gita can be said to have been captured by one shloka, which explains its unparalleled popularity. It prescribes karma-yoga and also defines it. It is:
Karmanyevadhikaraste ma phaleshu kadachana/
Ma karma-phala-heturbhu ma te sangostvakarmani //
(II.47)
('You have a right only to work [to perform your duty], but not to its fruits. Don't think of yourself as the cause (hetu) of the outcome of work, but do not avoid work.'
– Tr. by the author)
When one does not consider oneself as the cause of the outcome, because it is only God who produces it, there is no basis left to claim the fruit of work. When there is no link between one's work and getting its fruit for oneself, the karmic cycle is automatically broken. This is the rationale of karma-yoga. Malinar explains why this activity under karma-yoga is 'exempt from karmic retribution and conducive to a quest for liberation'. It amounts to 'equating one's actions with those of the cosmic cause of all activity (called Brahman or prakriti). Anyone who manages to substitute his own agency with "cosmic" agency for the sake of "welfare of all beings" can be liberated' (Malinar 2009: 5–6). Even philanthropic or altruistic work done for the good of others does not necessarily amount to karma-yoga if it is done with the purpose of gaining fame or even for, say, booking a luxury suite in heaven for oneself. The work should not only be selfless and done without arrogating to oneself its agency or doer-ship, but also be sincere, efficient, duly regardful of consequences, and not indifferent or sloppy or harmful to anyone. When one works leaving the outcome to God's will, and with single-minded devotion, it frees one from anxiety and tension and can thus promote efficiency too. These riders or qualifications to karma-yoga are not all mentioned together in one place, but need to be noted. Feeling non-doer-ship in spite of being actively engaged and renouncing doer-ship of all actions to God is stressed in II.19, III.30 and again in V.8. Renunciation of the fruit of action is commended in II.47, detachment (sangam tyaktva) and dedication (yogasthah) in II.48, and dexterity or efficiency in work (karmasu kaushalam) in II.50. The fact that the work is not to be done mindlessly or disregarding harmful consequences is emphasised by calling work done with such indifference tamasika (XVIII.25).
Also tamasika is absent-minded (ayukta), lazy (alasah), unduly prolonged (dirghasutri), reluctant (vishadi), pedestrian (prakritah) or malicious (naishkritika) work (XVIII.28). On the other hand, work done with fortitude and enthusiasm is praised as sattvika (XVIII.26). The only incentive permitted in karma-yoga is that it purifies one's mind, avoids tension and brings peace and fulfilment. It is thus purely a labour of love, to be enjoyed for its own sake. It may not be everybody's cup of tea, and that is why not everyone is a karma-yogi though everyone may be working on something or the other.
But is the recommendation practical at all? The bulk of modern economic activity is motivated by desire for personal gain or profit. Adam Smith, called the father of economics, thought that self-interest is not bad for the world. According to him, when each person acts according to self-interest, there is a mutual balance and natural order in a market system, and the common good is promoted. In an oft-quoted passage which has made him famous even among non-economists, he said: 'It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages' (Smith 1776; as quoted by Sen 1990: 23). Yet, Smith himself in another book, The Theory of Moral Sentiments (1790), chastised philosophers who saw virtue entirely in terms of prudence or self-interest, and he doubted whether prudence was enough for a good society (Sen 1990: 23–24). It has long been realised that a 'natural order' produced exclusively by narrow self-interest can be very unfair, with a lot of exploitation of the weak by the strong, and can also be environmentally unsustainable. The Gita calls for reining in selfishness and narrow desires, if not preventing them altogether, so that they remain within socially or morally acceptable bounds. Moreover, human nature is not entirely selfish. For example, without the selfless love of the mother for her child, humanity would not have even survived. As society has developed over the millennia, human beings have learnt to broaden their self-interest to go beyond their own narrow self and cover larger and larger circles around them, from family to society and ultimately the whole world.
Economic rationality seen narrowly in terms of selfishness alone would be a case of what Amartya Sen called 'rational fools', and not sensible and wise human behaviour (Sen 1982). There is a plurality of motivations governing human behaviour, not self-interest alone (Sen 1990: 19). With the progress of human civilisation, the role of motivations other than narrow self-interest can be said to have been expanding more and more significantly, though it has certainly not obliterated self-interest. The Gita encourages this trend in the interest both of the spiritual advance of individuals themselves and of the overall advance of society. The Gita accepts all the purusharthas including artha and kama, but with due qualifications. In fact, an implication of karma-yoga is that thereby worldly activity, including work for livelihood, is reconciled with sadhana. The Gita does not prohibit even the accumulation of wealth; it prohibits only doing it through dishonest or unjust means (anyayena artha-sanchaya) (XVI.12). To do good to others selflessly, which the Gita recommends, one cannot oneself be destitute after all!
Nevertheless, the Gita recommends work as worship. This idea comes out clearly when it discusses varieties of yajna in Chapter 4 in eleven verses from 23 to 33. There are references to yajna elsewhere too. The Gita transforms the Vedic concept of yajna as a ritual offering in sacred fire with elaborate mantras into a metaphor for offering in any form to either God or humanity. Not only the fruit of work but also the pride of doer-ship is sacrificed or offered. In the yajna described in the Gita, everything and everyone is taken as Brahman – the act of offering, the offering itself, the person offering and the object of offering; it is also offered into the fire of Brahman. The Gita assures that one who performs the yajna in any form in such a spirit reaches Brahman surely (IV.24). The varieties of yajna mentioned in the Gita are work, wealth (by way of donation), austerity, knowledge, prayer, devotional worship, japa, meditation and even pranayama. Yajna is usually translated as sacrifice; it is sacrificing oneself, fruits of one's actions, one's possessions for the sake of world welfare, seeing God in everything and acknowledging everything including oneself as His. All yogas are fused into yajna. All actions performed by a karma-yogi are yajnas, says Swami Bhoomananda (2014, Vol. II: 12). As Parthasarathy explains, 'Yajna denotes conversion of any activity into worship. Each and every activity becomes a prayer' (2011: 306).
An important qualification of karma-yoga is that both the doer-ship of work and its fruit are surrendered to God, but this cannot be done without bhakti or devotion to Him. It means that, according to the Gita at least, there cannot be a 'secular karma-yogi'. This qualification is spelt out in several verses. Verse III.30 calls for renunciation of all works in favour of the Lord with the mind centred on one's spiritual self. Verse V.10 says that karma does not bind one who has dedicated his works to the Brahman with no attachment to them. Verse IX.27 asks a devotee to make an offering to the Lord of whatever one does, eats, sacrifices or gives. The requirement of bhakti dominates even over karma in some of the verses. There are two nearly identical verses, which are considered crucially significant for sadhana by almost all the interpreters:
Manmana bhava madbhakto madyaji mam namaskuru /
Mamevaishyasi yuktaivam atmanam matparayanah //
(IX.34)
('Fill thy mind with Me, be My devotee, sacrifice unto Me, bow down to Me; thus having made thy heart steadfast in Me, taking Me as the Supreme Goal, thou shalt come to Me.'
– Tr. by Swarupananda 1982: 217–18)
Manmana bhava madbhakto madyaji mam namaskuru /
Mamevaishyasi satyam te pratijane priyosi me //
(XVIII.65)
('Occupy thy mind with Me, be devoted to Me, sacrifice to Me, bow down to Me. Thou shalt reach Myself; truly do I promise unto thee, (for) thou art dear to Me.'
– Tr. by Swarupananda 1982: 397). The repetition involved is for emphasis.
Interpreters who have emphasised bhakti above everything have been particularly fond of quoting verse IX.26, which makes God accessible to the poorest of the poor if only he or she has devotion for Him. It says: 'Whatever one offers with devotion and a pure heart, whether it is a leaf, flower, fruit or a bit of water, I accept that as a loving gift' (Tr. by the author). Roopa Pai writes about this verse: 'In one stroke, through this single, deceptively simple verse, the Gita makes it clear God does not belong to the privileged. He does not need gold and fine silks and sumptuous food. All He needs is love' (Pai 2015: 135).
There is also a separate chapter on bhakti-yoga (the twelfth), which commends devotion to God in a personal form, as it is easier. What is more, actions can be dedicated to a personal God one loves, who is amenable to meditation too (XII.6). It is noteworthy here that karma-yoga and bhakti-yoga are not offered as substitutes; they go together closely. There is no question of which of them is better or more important, as both are equally necessary and complement each other. Yet, the Lord leaves the relative emphasis between the two entirely to the personal inclination of the devotee. If the person finds it difficult to fix the mind on the Supreme firmly, he or she can engage in actions, dedicating them with devotion to God, without desiring personal benefit from these actions (XII.9–11). It is clarified that a true devotee has also to be compassionate and friendly to all beings, free from egoism, forgiving, and always of a happy or pleasant disposition (santushtah satatam) (XII.13–14). By implication, it applies to a karma-yogi as well, for the Gita makes no distinction between the two. A devotee, moreover, is equitable both to friend and to foe, and even-minded between honour and dishonour, between praise and censure; nothing deters him or her from steadfast devotion to the Lord (XII.18–19).
Though the Gita commends karma-yoga and bhakti-yoga together almost throughout the dialogue, there is a verse at the end of the advice which appears almost as a concluding summary of Krishna's teaching, but which is intriguing and needs some discussion. The verse is:
Sarva-dharman parityajya mamekam sharanam vraja /
Aham tvam sarva-papebhyo mokshayishyami ma shuchah //
(XVIII.66)
('Relinquishing all Dharmas take refuge in Me alone; I will liberate thee from all sins; grieve not.'
– Tr. Swarupananda 1982: 398)
What is meant by 'dharmas' here? Shankara in his commentary on the verse says that dharmas mean karmas, that is, all works, and renouncing dharma includes renouncing adharma (unrighteousness) also. He explains further that renouncing karma means giving up not only fruits of action but also any sense of doer-ship or agency in oneself. The intention of the Gita according to Shankara is to show that works by themselves cannot ultimately be the means for moksha; only the knowledge of the self leads one to it (Warrier 1983: 616–18). Dayananda Saraswati, interpreting Shankara, clarifies that this does not mean karma-yoga is irrelevant in sadhana, since it is through karma-yoga that the mind is purified and prepared to awaken to self-knowledge and thus to moksha (Dayananda 2011, Vol. 9: 329). In asking Arjuna to give up his dharma/karma, the Lord expects him to give up only his doer-ship by surrendering everything, and himself too, to the Lord. That is how Ramanuja also interprets it. The complete surrender asked for is the giving up of any sense of agency or doer-ship, and of all sense of possessiveness. It is only such a surrender that purifies and cleanses one of all sins and guilt. Ramanuja explains that several expiatory rites were in vogue at the time of the Gita, which meant spending a lot of time and money on them with no assured effect. According to the Gita, none of these is necessary; in fact, such rites have to be given up. It is only surrender with complete devotion to God which ensures expiation of all sins and purifies one, according to this verse in the Gita (Adidevananda 2014: 598–99). Madhva also agrees with such an interpretation and says that the call to abandon all duties (dharmas) is not to give them up physically, but only to give up the fruits thereof. Otherwise, Madhva asks, how can one explain the Lord's injunction to Arjuna to fight the war to the finish? (Sharma 1989: 314). The Lord's intention is to emphasise bhakti, and not necessarily to undermine karma-yoga.
The ultimate goal of a passionate devotee is to be united with God. The Gita assures that one who leaves his or her body and departs remembering only Him attains Him without doubt (VIII.5). But it may not always happen that one meditates on God or remembers Him at the time of death. To ensure that this happens, the Gita suggests cultivating the habit of remembering Him all the time (VIII.7). It means that even if engaged in some work or the other, there has always to be awareness or consciousness of God's presence and benevolence at the back of one's mind. Even normal day-to-day activities like eating and giving are done in the spirit of an offering to God (IX.27). Then there is no question of being lost to God or failing to remember Him at the crucial times including death.
We have thus far discussed only karma-yoga and bhakti-yoga, and found that they go together in the Gita. Where does jnana-yoga come in, then? Jnana-yoga is also referred to as buddhi-yoga in the Gita. It appears that sadhana can be pursued in different ways, depending on one's concept of the ultimate goal, and different goals may suggest different ways. If one's goal is personal fulfilment through unselfish service to humanity, seeing God in every being and serving Him through serving humanity, the chosen path obviously would be karma-yoga. When the goal is seen as personal fulfilment through union with a personal God, one follows bhakti-yoga, the path of love and worship. If, on the other hand, self-realisation is perceived as the ultimate goal of sadhana, the path followed is one of jnana-yoga combined with raja-yoga, as the two go together. The emphasis of karma-yoga is on using the will power, or conation – the capacity to detach oneself from desiring the fruits of one's actions – while the emphasis of bhakti-yoga is on using and developing one's inclination to love and feel, to sympathise and, where necessary, commiserate with others, consoling them and assuring them of the love of God and His protection. The emphasis of jnana-yoga is on using buddhi, the capacity to discriminate between the real and the unreal, the momentary and the everlasting. It needs a certain ability for abstraction, to go beyond multiplicity and detail to unity and synthesis. It is not analytical ability which is demanded, but an ability for holistic conceptualisation. Though all the three paths seem to be individual affairs where the striving is by individuals, and not group affairs of collective effort, the striving even by individuals is not in complete isolation from others. This is because all the three paths require seeing God not only in one's own self but also in the selves of all beings. This is most obvious in karma-yoga, where selfless service for the welfare of all is commended (sarva-bhuta-hita). Bhakti towards a personal God cannot be complete unless this God is seen in all. And jnana cannot similarly be complete unless one sees the Self in all and all in the Self. There is no problem if the Self is substituted by one's concept of God, personal or impersonal.
There is no conflict at all between these three paths, and the Gita seems to prefer their integration or combination. In combination, they would have a synergetic effect, since they mutually help each other. A sadhaka does not at all have to make a neat distinction between karma, bhakti and jnana, following only one exclusively. The Gita itself makes no clear separation of the three paths, as is clear from three verses which are among the concluding ones in the last chapter (XVIII.55–57). They say: 'By loving [bhaktya] Me, he shares in my glory and enters into my boundless being [Verse 55]. All his acts are performed in my service, and through my grace he wins eternal life [Verse 56]. Make every act an offering to Me; regard Me as your only protector. [Resorting to Buddhi-yoga], make every thought an offering to Me; meditate on Me always [Verse 57]' (Tr. Easwaran 1997, Vol. 3: 446 for verses 55 and 56, and 448 for verse 57). Karma-yoga, jnana [buddhi]-yoga and bhakti are all explicitly united here. If sadhana is done in an integrated way, the whole personality and mind of the seeker are raised to a great moral and spiritual height, finding fulfilment, confidence and lasting happiness even when faced with the numerous vicissitudes of the mundane world. This indeed is the clear promise of the Gita. This indeed is moksha, that is, freedom from bondage to desires, from anxieties and tensions, from petty narrowness, problems and foibles, in spite of being engaged in the world. This freedom has no meaning if it comes only after death. It is something which one can expect even while living in the world. This is clear from verses 50 to 57 in the last chapter of the Gita. Moksha while living is not escapism, but is depicted as a life of perfection here. Freedom from the karmic cycle of rebirth is a matter of belief, and not every seeker may bother about it, if the verse from the Srimad-Bhagavatam quoted earlier is an indication. What matters is fulfilment while living the present life itself and making it meaningful.
Most interpreters, particularly Swami Dayananda Saraswati, regard jnana or realisation of the ultimate truth as the final stage of sadhana, karma-yoga and bhakti-yoga being the means of preparing and purifying the mind for it (Dayananda 2011, Vol. 9: 329). Shankara himself composed many hymns of fervent bhakti addressed to personal deities. A very interesting example of such a hymn, addressed to Shiva, which is at the same time an earnest prayer to purify the mind, is as follows:
Ma gatchcha tvam itasthatah Girish bho mayyeva vasam kuru Swaminnadikirata mamaka-manah-kantara-simantare /
Vartante bahusha mriga-mada-jusho matsarya-mohadayah Tan hatva mrigaya vinoda-ruchita labham cha samprapyasi //
(From Shivananda-lahari by Shri Shankaracharya, as quoted in Herur 2001: 356)
('My Lord, the Lord of Mountains, the Primeval Hunter! Do not go here and there, but reside in me! Within the wilderness of my mind, there are many animals – arrogance, jealousy, infatuation, and the like! Have the pleasure of a hunt by hunting them down!'
– Tr. by author)
There is no conflict between bhakti and jnana. But bhakti is expected to lead to jnana, as otherwise it remains at the level of raw sentimentalism or emotion. Similarly, bhakti has to be combined with karma, because bhakti without karma (when one is physically capable of working) would be pure idleness and a burden on the society. Karmayoga is designed to achieve jnana, because karma-yoga has no meaning if the ego is not dropped and all fruits of work and even doer-ship are not surrendered to God in the service of humanity. Thus it would appear that jnana-yoga is not a separate sadhana, but simply a culmination of sadhana through karma-yoga and bhakti.
The purpose of jnana-yoga is realisation of the Ultimate Truth. In Advaita philosophy, the ultimate truth is seen as the identity of one's individual spiritual self, the Atman, with the Parabrahman or Paramatman. In Vishishta-advaita, it is seen as the Atman being a part of the body of Paramatman, but dependent on Him. In Dvaita, it is seen as total dependence of the Atman on the Paramatman, the relationship being one of a servant to the master. The realisation is aided by meditation on the Self/Parabrahman in Advaita, and on the personal deity in the other two schools. The Gita does not make a distinction between the three schools, and the object of meditation is left to the choice of the seeker. It can be a personal deity in a form of one's choice, or it can be the Self in an impersonal and pure form of consciousness, or simply awareness of what goes on in the mind – tracking the mind consciously and bringing it back to focus on the chosen object. In Advaita, the paradox is one of the subject, the Self, itself being the object of meditation. The method of meditation recommended by the Gita is, briefly, 'Fix the mind on the Self and think of nothing else'. The full verse is as follows:
Shanaih shanairuparamet buddhya dhriti-grihitaya /
Atmasamstham manah kritva na kinchid api chintayet //
(VI.25)
('With the intellect endowed with perseverance, may one slowly resolve [quieten] the mind (in atma). Making the mind abide in the self, may one not think of anything else.'
– Tr. by Dayananda 2011, Vol. 5: 147; square brackets added)
Dayananda says that it is not as difficult as one might fear. 'The pursuit of [spiritual] knowledge is not like climbing Mount Everest; it is more like dropping [what] you are holding in your hand. Because it is more dropping than climbing, it is not as difficult as you might think. It is simply a question of dissociating yourself from your own identity of being only so much' (Dayananda 2011: 148). What it means is that one has to simply drop the things that the mind is burdened with, including not only anxieties and fears but also the notion of one's limited self. Dayananda explains that according to Krishna, yoga can be pursued without despair (anirvinna chetasa) (Dayananda 2011: 147).
The science of yoga, particularly as codified by Patanjali, developed after the Gita. But one can see the seeds of it in the Gita. The Gita even explains breath control (pranayama) as a practice which can precede meditation, as it helps in focusing the mind (IV.29). Swami Vivekananda has presented a lucid and detailed exposition of Patanjali's Yogasutras (CWSV 2000, Vol. 1: 110–313), along with the eight steps of raja-yoga. All of them are mentioned in the Gita, though not necessarily in one place one after another, and not in exactly the same words – yama (practising virtues like non-violence and non-possessiveness), niyama (cleanliness – both internal and external – contentment, austerity, etc.), asana (a right but comfortable posture), pranayama (breath control), pratyahara (withdrawing the sense organs from their objects and shutting out external 'noises'), dharana (fixing the mind on an object – it may be a part of one's body such as the tip of one's nose or the middle of the brows, or a mantra, or a favourite form of God), dhyana (meditation proper) and, finally, samadhi (a state of being fully absorbed).
Meditation consists first of observing the mind itself, being aware of all that goes on in the mind, as a sakshi or witness. The sadhaka becomes her or his own psychoanalyst here. It is a deep state of introspection. This is where the Gita's advice to fix the mind on the self and think of nothing else becomes relevant. Once this state is reached, the mind becomes objectless and unburdened, with nothing (shunya) left in it except super-consciousness. This is the stage when the Self is realised, leaving the sadhaka in peace with ineffable joy. When this state is maintained effortlessly and a little longer, it can be said to be samadhi, where a unity with the Ultimate is experienced. This is not a state of deep sleep. Deep sleep is when one is 'beneath consciousness'. In deep meditation, on the other hand, one is all the while not only awake but also fully aware and conscious. Initially, such a state may be experienced for only a fraction of a minute, but the duration can significantly improve with practice, provided that the person has shraddha (perseverance and commitment) and has also been following the mental and moral discipline of keeping the mind and conduct pure.
For those who find it difficult to focus on the self or one's own consciousness as a method of meditation, the Gita recommends japa – repeated recitation of God's name or a mantra in one's mind with awareness, not allowing the mind to stray. Even if it strays, one has to, with practice, try to bring it back on track (VI.35). Krishna regards japa so highly that he identifies himself with it, saying that among the yajnas, He is yajna in the form of japa (X.25). There are a few longish mantras like the Gayatri and Mrityunjaya, which have their own rich benefits. However, for holding the mind fixed in meditation, shorter mantras, preferably those which can be uttered mentally along with breathing, are advisable. For example, while inhaling, utter Om (stretching it longer to synchronise with deep inhaling), and Namah Shivaya while releasing the breath or exhaling. Similarly, Shree Hari can be recited repeatedly. The Gita itself does not prescribe any particular mantra, leaving it to the sadhaka, though it does mention Om tat sat as the triple designation or expression (nirdesha) of Brahman (XVII.23), to be uttered with shraddha while performing all rituals or worship (XVII.24–28). Om is Brahman, and together the three words mean 'Brahman, that (is) the truth'. The Gita explains that sat means both truth and goodness or benignity (XVII.26). In spite of the great popularity of japa in sadhana, it is not considered meditation proper, which consists in focusing on the abstract, formless Self, that is, sat chit ananda (Tejomayananda 1997: 5). However, japa is an invaluable aid in preparing the mind for meditation proper.
Whatever be the object and form of meditation, its great benefits – hinted at in the Gita itself – are widely acknowledged all over the world, and its popularity is soaring. The benefits of meditation are not just momentary, but also lasting and go well beyond the duration of time when it is practised. The enduring benefits claimed are increased ability to concentrate, reduction in stress and anxiety, prevention of depression, control over blood pressure, sound sleep, a healthy check on anger and other emotions, increased clarity of mind with consequent improvement in decision making and a better physical as well as mental health, on the whole resulting in a state of always feeling happy, cheerful, compassionate and friendly to all. I know of cases where through the practice of meditation, people have been able to give up addictions like smoking and drinking. Incidentally, these benefits are at the mundane level, not necessarily spiritual benefits, though in this case they go together.
Is there a conflict between self-realisation and continuing to work in the world for the welfare of all? Does not a self-realised yogi become indifferent to the world, immersed in his newfound bliss? Not so, if we understand the Gita correctly. Self-realisation does not require sitting in meditation all day and night. On the contrary, as Sri Aurobindo warns, such excesses may be self-defeating and unharmonious (Aurobindo 2010: 145). A self-realised person sees God in everyone and finds that his or her own self-realisation is further enriched and fulfilled when he or she serves others selflessly. Such a person has no desire for himself or herself, and all his or her actions become spiritual.
Arjuna puts a question which is relevant to all sadhakas. What if someone, even if endowed with faith, falls short of expectations of sadhana, by wandering off the right path? Does he or she not lose both the worlds – the pleasures of this world and also the promise in the other world, and get destroyed? (VI.37–38). Krishna dispels this anxiety and fear by assuring that there is no destruction whatsoever for such persons and that anyone who has done some good will never perish (na hi kalyanakrit kaschit durgatim tata gatchchati – VI.40). On the contrary, there is ample hope for them of redemption and spiritual success eventually (VI.41–45). One need not be afraid of committing honest mistakes, of faltering on the path to God, for there is always an assurance of help and support.
A significant feature of the different paths of sadhana is their accessibility to all, irrespective of social or economic differences and also irrespective of 'karmic luggage', as Malinar puts it (2009: 7). There is no emphasis on rituals, and any offering which a seeker can afford is accepted and appreciated (IX.26). Even a manual worker working on a wage can be a spiritual seeker. He or she does not have to sit in meditation for hours and forgo remunerative work. Even for one busy all day with work for a livelihood, a few minutes spent remembering God with devotion assures grace from Him. Sadhana is not meant only for the luxury class. Moreover, even a sinner has hope as per the Gita.
## Notes
1 The discussion here about trigunas and Table 6.1 is mainly based on Nadkarni (2013: 58–61).
2 Jayant Kalawar thinks that shraddha is much more than 'faith' as usually translated; according to him, it means 'a disposition to keep acting regardless of obstacles' or 'passionate focused commitment', with which I agree (Kalawar 2012: 97, 150).
3 The discussion here about the sattvika economy is mainly based on Nadkarni (2013: 77, n-9).
4 See the last row under Shankara in the table given as Appendix to Chapter 2 for examples of such verses.
5 This para and the preceding one are a summary of Chopra (2001: 39–174), but at the same time relating the account with the Gita's language and philosophy.
6 This section uses to some extent Chapter 4 on 'Sadhana in Hinduism' in Nadkarni (2013: 78–101), but it is much more than a mere summary of that chapter having several new things to say.
7 About taming desire and selfishness as commended by the Gita, see also the discussion on karma-yoga under the first section of this chapter.
8 The Yoga-shastra mentions different postures for exercise, but in the context of meditation, asana means sitting in an erect posture with the back, neck and the head in one straight line, comfortably (so that one is not distracted by discomfort), but not so comfortably as to go to sleep (as when slouching). The Gita commends sitting cross-legged in a clean and lonely place free from distractions, on a dry and firm floor on a grass mat covered by soft cloth. For elderly people, sitting cross-legged on the floor is not mandatory; they can as well sit on a straight-backed chair with legs drawn in. The trick is in sitting relaxed but being alert and mindful at the same time.
9 There is a pauranik story about Lord Vishnu telling the celestial saint Narada why He values a farmer's devotion as of higher spiritual quality than that of Narada himself because the concerned farmer devotes a few minutes of his busy day for remembering Him, while Narada has no such constraints (Nadkarni 2013: 91–92).
# 7 Criticisms of the Gita and Responses
## Contradictions
Like the Bible, the Gita has attracted a lot of critical scrutiny both in the West and in India. Hinduism has so far at least been quite tolerant of criticism, as it should be, but sometimes so 'tolerant' that even major points of criticism have gone unanswered. Interpreters of the Gita such as Gandhi and Aurobindo did respond to some of them, but by and large the criticisms have been ignored. Such an attitude, however, is not helpful for a proper understanding of the texts criticised. It is wiser to attend to criticism and respond with logic and politeness than to launch abusive attacks or threaten the critics physically. Surely, that kind of response is not in the spirit of Hinduism, or of the Gita itself; the whole dialogue between Arjuna and Krishna is conducted in the spirit of friendly and calm logical discussion. This chapter attempts a fairly comprehensive response to the major criticisms of the Gita, taking into account responses made by others as well. Though the Gita attracted a good deal of criticism, mainly in the West, before India's independence, almost all the criticism made after independence has come from Indians, and indeed from Hindus themselves. This is a healthy sign, indicating the scope for scepticism and openness in Hindu society. Faith should not be forcibly imposed; it should be heartfelt. In any case, criticisms merit response.
German philosopher Karl Jaspers is said to have remarked to an Indian friend that he thought the Gita to be full of contradictions. After a pause, he added: 'That is why it is a great book' (Lash 2000: 1). Arjuna himself was baffled, and charged his mentor, Krishna, with confusing him, asking him to be clear and definite, as recorded in the Gita itself (III.2). Take, for example, the eighteenth verse of the fourth chapter, which says: 'One who sees inaction in action, and action in inaction, even while being engaged (yukta) in all action, is a wise human being.' On the face of it, this is a confusing and self-contradictory verse. Yet going by what is said earlier in the Gita, the verse only means that the Gita expects a yogi to be engaged all the while in action, but with selflessness, with humility free of any sense of doership, and without worrying about the final outcome. To understand the Gita, one needs to understand its intent and perspective. That is also why it is a challenging text for interpreters. The task of deriving a coherent philosophy from it has therefore been exciting and, I should add, fruitful too. It has been fruitful because the Gita has been a marvellous attempt at reconciling seeming opposites and arriving at a synthesis. That is why M. R. Yardi calls his book on the Gita 'As a Synthesis' (1991). There is a lot of conflict and confusion in life, and yet we need to face it and seek enough balance and peace of mind to pursue our chosen goals. Along with conflict and confusion comes complexity too. Many conflicts and contradictions appear so because of complexity. Life is quite complex, the universe no less so. At a superficial level, things may look very simple, but when you begin to probe into them and understand them in depth, complexities emerge at once. Sri Sri Ravi Shankar says, 'life is simple and complex at the same time' (Shankar 2013: 47). He takes the instance of eating a banana. It is so simple.
But its digestion is quite a complex process, and a lot of 'work' is involved. The Gita itself owes its origin to conflict and confusion in the mind of Arjuna. In a way, using Sri Sri Ravi Shankar's words, the Gita too is simple and complex at the same time. We can, while trying to comprehend its complexity, savour its simplicity too. As Hirst observes, 'contradictory statements need not indicate inconsistency or textual additions, but can be seen as part of a process of understanding which drives Arjuna and the reader beyond initial preconceptions' (Hirst 2000: 50).
## Historicity
The criticism about the historicity of the Gita, and by implication its authenticity, was first raised by Western scholars. However, as noted in Chapter 3 of this book, Western scholarship also helped the Gita to go global. It stimulated a lot of critical scholarly writing on the Gita and its translation, both in the West and in India, and both in English and in Indian regional languages. Western scholarship included Christian theologians and missionaries, who also played an important role in disseminating the Gita in the West. Though missionaries studied it mainly to show how it falls far short of the Holy Bible, there were also large-hearted Christian scholars who thought highly of it and considered it a sacred text of Hinduism on a par, in its significance, with the Bible for Christianity. The study of the Gita also occasioned a search for common ground between Hinduism and Christianity. For example, Robinson mentions R. D. Griffith, who in 1849 likened the incarnation of Krishna to that of Christ, and the concepts of the Trimurti and the syllable AUM (Om) in Hinduism to the Trinity in Christianity. Griffith also found several points of similarity between the Bible and the Gita, though he insisted on the primacy of the former and the inadequacy of the latter, and claimed that Hinduism could not meet the challenge of Christianity. Griffith felt that the proselytisation of Hindus would be expedited more by appealing to features common to the two religions than by an arrogant assertion of Christian truth, which could cause resentment (Robinson 2006: 73–74). But contrary to Griffith's expectations, the common features only boosted the confidence of Hindus in their religion and justified their faith in the Gita. Christian missionaries could not in any case obliterate Hinduism, though they won many converts. Hinduism has remained the religion of the bulk of the people of India and has emerged as a global religion in its own right, criticisms notwithstanding.
There have been quite a few points of Christian critique that were taken up even more forcefully by Hindu scholars themselves, which will be considered subsequently. A major point of the Christian critique of the Gita, however, is its lack of historicity, expressed, for example, by J. N. Farquhar (1861–1929), notwithstanding his full praise for it on other counts such as its theism and emphasis on personal morality. According to him, Jesus Christ is a historical personality, while Krishna is not; the incarnation stories of Krishna are myths, and 'the Gita did not come from Krishna' (quoted in Robinson 2006: 78). So the Bible has a historical authenticity which the Gita does not have. Farquhar felt that human nature was such that 'man needs an incarnate saviour', and where there was no such saviour, one would be imagined and a mythological substitute capable of inspiring faith and devotion would be created (Robinson 2006: 79). He saw in Christianity, and in the historic, real person of Jesus Christ, the fulfilment of Hinduism's hopes and dreams (Robinson 2006: 80). As a religion, Hinduism lacks the historical founder which other religions have; its major sacred text also lacks the historicity which the texts of other religions have. Thus, this critique does not stop with the Gita, but covers Hinduism as a whole.
A major problem in establishing the historicity of Hinduism and its texts is their unparalleled antiquity. One has to appreciate that their history starts some two millennia BCE, and dating systems comparable with the present ones were not developed for a long time. But this does not mean that there is no evidence of the historical reality of the Ramayana and the Mahabharata events. The criticism about lack of historicity was made before archaeological work was done on the Mahabharata sites and can therefore be considered outdated now.
B. B. Lal, an eminent archaeologist whose excavation work at Kalibangan in Rajasthan brought to light a prosperous city of the Harappan civilisation, points out that 'all the sites associated with the Mahabharata story continue to bear the same name even to this day', and there is not more than one place having the same name: 'For example, there is only one Hastinapura, one Mathura, one Kurukshetra and so on' (Lal 2013: 60). Hastinapura was the capital of the Kuru (Kaurava) kingdom of the Mahabharata and is located on the right bank of the Ganga in Meerut district of Uttar Pradesh. Excavations at Hastinapura were carried out by Lal during 1950–52, which revealed several layers of the old city indicating at least five historical periods with breaks in between. The first period is characterised by finds of pottery known as Ochre Colour Ware, dating back to pre-1200 BCE; the remains of this period were found on natural soil, showing them to be the oldest. Period II is characterised by Painted Grey Ware, belonging to ca. 1100–800 BCE; period III by Northern Black Polished Ware, dated between the early sixth and early third century BCE; period IV by Shunga terracotta of the early second century BCE to the end of the third century CE; and period V by Early Medieval Ware of the late eleventh to the early fifteenth century CE (Lal 2013: 64). Several things belonging to period II (Painted Grey Ware) have been retrieved, such as bowls, a 'dining set' (consisting of a thali or dish, katoris or small bowls and a lota or tumbler), iron-and-copper objects, a stone mould for casting jewellery, gamesmen pieces and dice used in the game of chaupar. Many of these things had artistic designs painted in black, including swastikas, sigmas and spirals. Even a large house of the same period, consisting of thirteen rooms and a courtyard (with the roof and much of the walls destroyed), was unearthed. Photographs of these finds are presented in Lal's book (Lal 2013: 66–75).
An enormous flood of the Ganga eroded or washed away many things associated with this period. There is evidence of a huge fire ending the third occupation as well (Lal 2013: 73, 77). After giving the matter considerable thought and cross-checking with evidence obtained from other Ochre Colour Ware and Painted Grey Ware sites, Lal concludes that only the Painted Grey Ware culture was associated with the Mahabharata period (1100–800 BCE) (Lal 2013: 83). He estimates the most probable date of the Mahabharata war as between 860 and 900 BCE (Lal 2013: 86). Lal rejects the earlier dates estimated on the basis of astronomical data, since at the dates so arrived at (viz. 3102–3067 BCE), none of the Mahabharata sites such as Hastinapura, Mathura and Kurukshetra had existed (Lal 2013: 87). Thus, there is some concrete archaeological evidence of the Mahabharata period. There are, however, no contemporary inscriptions proving the historicity of either the Mahabharata or of Krishna. But Lal observes that there are also no such inscriptions to prove the historicity of either the Buddha or Mahavira, and the historical reality of Krishna cannot be rejected on this ground (Lal 2013: 89). Lal points out several references to Vasudeva (Krishna) and some of the other characters of the Mahabharata in the ancient literary tradition, right from the Atharva Veda to Kautilya's Arthashastra – texts which are independent of the Mahabharata (Lal 2013: 90). Lal concludes that 'the epic has a basis in history' (Lal 2013: 91); proving this was the purpose of his whole book (Historicity of the Mahabharata – Evidence of Literature, Art & Archaeology). The book also discusses the extensive impact of the epic on art and literature, coins and inscriptions in India and abroad (Laos, Cambodia, Java and Bali) in the ancient as well as the medieval periods, along with many interesting photographs of paintings and sculpture, which include some relating to the Gita.
An eminent Marxist scholar, Damodar D. Kosambi, doubted – if not the historicity of the Mahabharata events – at least the scale of the Kurukshetra war as described in the epic. According to the epic, some four to five million men killed each other in the eighteen-day war, with only a few survivors left. An unrealistically large number of chariots, horses and elephants was reportedly deployed (see Note 3 of Chapter 1). Kosambi felt that there would certainly have been as many camp followers and attendants as fighters. He says: 'A host of this size could not be applied without a total population of 200 million which India did not attain till the British period' (Kosambi 1962: 12–13; as quoted in Desai 2014: 28). Desai adds that exaggeration was a norm in the epics, and with each successive rendition it would have only increased further (Desai 2014: 28). Kosambi criticised the Gita on other grounds too (taken up later), but this criticism about the exaggeration of the scale of the war does not affect either the historicity or the authenticity of the Gita as such.
But is historicity so important as to make religion history-centric? There have been responses to the Christian critique on the ground of historicity, not by proving the historicity of Hinduism or of the Ramayana and the Mahabharata, but by challenging the relevance or the necessity of historicity itself as the basis of religion (or its sacred books) or as the criterion for judging them. Balagangadhara questioned the propriety of applying the characteristics of Semitic religions (like having a single historical founder and a common, historically dated sacred text) to all religions as the necessary criteria of defining religion. He remarked that 'what makes Christianity into religion is not what makes Hinduism into a religion' (1994: 22). Rajiv Malhotra also took up this issue assertively in two books – Being Different (2011) and Indra's Net (2014). He rejects the insistence on history-centrism, that is, 'the mandated belief that God has revealed himself in history only in unique events and only to specific peoples or prophets, and in a way that is forever unavailable to others directly. This dogma demands that the exclusive path can be found only in the literal words of god as heard by specific prophets and mentioned in some particular text that comprises literal history' (Malhotra 2014: 283). By contrast, according to Hinduism, avatars and gurus play the role of 'fresh entrepreneurial start-ups' (Malhotra 2011: 100) from time to time to establish dharma and fight adharma, responding to changing circumstances and providing continuity. Even while recognising the authority of sacred texts, as Radhakrishnan says, Hinduism always tempered this respect by 'the recognition of the truth that God has never finished the revelation of His wisdom and love' (1971: 16). Radhakrishnan adds: 'Hinduism is a movement, not a position; a process, not a result; a growing tradition, not a fixed revelation' (1971: 91).
Realisation of the Ultimate Truth or the Divine is not claimed to be anybody's monopoly and is accessible to all. There is scope for debate, not for fundamentalism. There is 'built-in pluralism and context sensitivity' (Malhotra 2011: 100).
As for the Gita's status and authority as a sacred text, we have already explained the factors contributing to it in the first section of the first chapter of this book. This status does not have to depend upon its dating according to some calendar or the other. Moreover, the sayings or teachings of even the Buddha and Jesus Christ were not as written down by themselves, but were recorded by others. This has not reduced the authenticity of their teaching. The same thing happened in the case of Krishna and the Gita too.
However, the development of Hinduism independent of historical dating involved a price to pay. Though the ancient sages did have a concept of time, no concerted effort was made towards evolving a long-term calendar acceptable widely and with a commonly accepted benchmark until after the Buddha's time. The ancient Hindus had a cyclical – not a linear – concept of time. Not only seasons, but also the names given to years were based on a cycle of sixty samvatsaras. It is not that their time horizon did not extend beyond sixty years, for they had the concept of four yugas each extending over many millennia, with the yugas also moving cyclically. This is a paradox because the ancient Indians had a fascination for astronomy and enumeration, for which there is evidence in the Vedas. Though it is claimed that the Kali-yuga calendar is the oldest having a zero point in the year 3102 BCE, it is not clear when the calendar was actually adopted. As Amartya Sen says, there is always a difference in point of time between the benchmark zero year and the adoption of a calendar, just as the Christian calendar was not adopted right when Christ was born. Sen points out that there is reference to dating according to the Kaliyuga era neither in any of the Vedas, nor even in the Ramayana and the Mahabharata. Sen cites the opinion of the Calendar Reform Committee that it was probably started during the time of Aryabhata in 499 CE. There were other calendars also in ancient India, namely the Buddha Nirvana Era with a zero point of 544 BCE, Mahavira Nirvana Era with a zero point of 527 BCE, Vikrama Samvat with a zero point of 57 BCE, and Shaka calendar with a zero point of 78 CE. These calendars came into vogue after the Gita was composed. Even otherwise, there was no established practice of dating according to any of these calendars in much of the literary work in ancient and even medieval India. 
While the Christian criticism about the lack of historicity may not reduce the authenticity of Hinduism and its texts, it does point to a genuine lack of adequate awareness to properly date the texts, whatever the calendar.
## Is the Gita other-worldly? amoral? deterministic?
One of the major criticisms of Hinduism and other eastern religions by some Christian and even secular Western scholars is that they are world-denying or life-negating. They have not charged the Gita specifically as such, but since it is a major sacred text of Hinduism, we need to consider whether this charge applies to it. This criticism of the eastern religions has been made mainly by Max Weber, Albert Schweitzer and K. W. Kapp, and has been dealt with elsewhere in the necessary detail (Nadkarni 2014: 157–67). Concerning the Gita, we have already noted that it nowhere mentions or hints that the world is unreal or a mere illusion. On the contrary, it has a rich ethical content (presented in the preceding chapter), which would have had no purpose or meaning if the world were unreal, since ethics has application only in this world and not beyond. The Gita's ethical teaching is intended to make our lives more meaningful and purposive. Nor is there any conflict in the Gita between working for a living, or even earning wealth, and spiritual striving, though unjust or dishonest means of earning a living or wealth are not allowed.
R. D. Griffith, however, felt that the Gita, as a general principle, prioritised metaphysical truth over moral truth. He was uncomfortable particularly with what he thought was a reliance on the Divine that detracted from moral responsibility (as quoted in Robinson 2006: 73). Even some of the Hindu interpreters of the Gita, like Aurobindo and later Dayananda Saraswati (of south India), have interpreted the Gita as prioritising the Divine, not as a criticism of the Gita but as an exposition. Both felt that the ultimate purpose of life is to realise the Divine, and that ethics prepares the mind for it by purifying it and raising its potential to realise the goal (Dayananda 2011, Vol. 9: 329). Ethics is part of sadhana, spiritual striving, and as a means it is subordinated to the goal. This does not mean that the significance or role of ethics is negated, undermined or considered dispensable. There is no question of detracting from moral responsibility, because there cannot be a conflict between the Divine and moral responsibility. The confusion is probably caused by misinterpreting the verse Sarva dharman parityajya... (XVIII.66), taking dharma to be moral responsibility (which admittedly is the literal meaning of the word); but as explained in the last section of the preceding chapter, the purport of the verse is not to abandon all morality! If the intention of the verse was to advise relinquishing all moral responsibilities as a general principle, Krishna would not have taken the trouble to emphasise ethics and ethical behaviour so much (see the second section of the preceding chapter). The Gita is not amoral. What the verse expects from a devotee is ultimately a total surrender to the Divine and its will, without any sense of 'I' and doership, forgetting all mundane problems. This is not an advice for inaction either, but only an advice to give up the ego and even the sense of oneself being the cause of any action.
No devout Hindu would interpret the verse as a licence to be amoral; he or she would take it in the right spirit of a call for total submission to the Divine, forgetting one's separate individuality completely and submerging into the one and only ultimate reality in the final and decisive stage of sadhana. Even after attaining this state, the Gita expects the seeker to remain active, and such a person, by the very nature of that state, would also remain moral, carrying out duties automatically if not self-consciously.
Griffith does not seem to give up his criticism, however: if everything is left to God, it amounts to determinism or even fatalism! (Robinson 2006: 73). A few other critics have also charged the Gita with being deterministic. This criticism seems to be based on Krishna's declaring, after assuming his cosmic form (Vishwarupa), that Drona, Bhishma and the other warriors are already killed (or destined to be killed) and that Arjuna would be a mere nimitta, an instrument (XI.33–34). In the last chapter of the Gita again, Krishna says that people behave according to their inborn nature, and even if they wish to act otherwise, they are bound by that nature. Further, the indwelling Ishwara (Lord) causes all beings to move as if they are mounted on a machine (XVIII.60–61).
There is apparently a huge self-contradiction in the Gita on this point, which was briefly discussed at the end of the first section of the preceding chapter. It bears some reiteration, given the importance of the point. If the Gita is deterministic, as implied by the two verses referred to earlier, why indeed does Krishna, in the next-but-one verse (the sixty-third) of the same (eighteenth) chapter, ask Arjuna to ponder critically over the teaching and act as per his choice? Obviously, Arjuna is not regarded as a mere instrument without any decision-making power. What is the meaning of preaching ethics to a mere instrument? Why did Arjuna need so much persuasion if he was a mere cog in the machine? The seeming self-contradiction arises because the problem is not a simple one. There is, after all, a huge debate in philosophy about free will vs. determinism. We seem to have free will, but within severe limits imposed not only by the laws of nature, including our own nature, but also by the similar freedom given to others. That is why the Gita takes the stand that our right or freedom is only in acting; the outcome depends on so many factors, some of which cannot be foreseen, that we cannot presume ourselves to be the cause of the outcome. Ramakrishna Paramahamsa compared our freedom with the freedom of a tethered animal. But the length of the rope by which we are tied appears to be long enough to give us the freedom of moral choice and to perform our moral responsibilities. Such being the case, the seeming self-contradiction in the Gita on this point is understandable. To call the Gita deterministic would, however, be misleading and incorrect.
These difficulties of reconciling sadhana with making a living in the world, aspiring for spiritual liberation with dharma in the world, and free will with surrender to God do no doubt indicate tensions, but the Gita itself shows the way of reconciling and resolving them. As Malinar observes, 'the impact of the BhG [Gita] lies in its attempt to mediate between two opposing referential frameworks of human aspirations: on the one hand, the realm of socio-cosmic relationships encompassed by dharma and based on ritual performances as transmitted in Vedic texts; and on the other, the quest for liberation from this very realm through ascetic practices and the employment of new forms of knowledge' (Malinar 2009: 5). She explains further that this mediation is achieved on two levels: (a) through working in the world in the spirit or attitude of karma-yoga, and (b) through the concept of a single highest God, Krishna, who is both the lord of the world and its dharmic order and the ever-liberated yogi. He mediates between 'ascetic detachment and royal engagement' (Malinar 2009: 6). God is not at all seen as a hindrance to active engagement in the world but as a help, a supporter and a guide, provided that the engagement is according to dharma or moral principles, with this help, support and guidance leading ultimately to spiritual liberation. For a believer, all difficulties and tensions are ultimately resolved in Krishna. An important message of the Gita, especially as hinted in its last verse (Yatra Yogeshwarah Krishno... – XVIII.78), as Roopa Pai points out, is: 'man's actions are incomplete and eventually "unsuccessful" – both in terms of being right and in terms of bringing him lasting joy – if he does not work hand in hand with God, or if he does not have God's blessings (this is a religious text we are talking about, after all!)' (Pai 2015: 259; parentheses in original).
## Is the Gita reactionary?
Dr Ambedkar, Kosambi and a few other rationalists and Marxists consider the Gita a reactionary text, its main purpose being to reverse the social reform initiated by the Buddha and to uphold the dominance of Brahminism, if not of Brahmins alone. It may be recalled (from the first chapter) that Dr Ambedkar takes the Gita as post-Buddhist. The gist of the contention of the critics who charge the Gita with being reactionary is as follows. The Buddha had preached against the chaturvarnya system and also against the violence involved in animal sacrifices. Even Shudras and women could be admitted as monks in the Buddhist sangha. Around the same time as the Gita was being composed, Charvaka's Lokayata philosophy of atheism, which rejected religion and rituals, was gaining ground. Jainism, which too rejected violence and theism, was also spreading. All three influences together broke the hegemony of Brahmins and even threatened their livelihood and their very survival. Brahmins desperately wanted a text which could help revive Brahminism, or the Vedic religion of rituals and priestcraft, and the Gita was the result. The real meaning of karma as commended in the Gita, on this reading, is not work in general but the rituals of karmakanda, so that the priests are assured of their livelihood. The Brahmins cleverly chose Krishna, a Kshatriya, instead of a Brahmin as the formal spokesman of the Gita's preaching, to make it more convincing. The Buddha and Mahavira, who were in the forefront of a revolution against the Vedic religion, were also Kshatriyas, and Krishna was projected to counter them. Thus the Gita in its very essence is counter-revolutionary. It is casteist, as is indicated by the emphasis on swadharma, which is nothing but caste duty. It seeks to consolidate the caste division in society by threatening that there would be no salvation for those who transgress their caste duties.
Its attitude of contempt towards the two lower castes – Vaishyas and Shudras – as well as to women is evident from the verse which puts them together as those with sinful breeds (papa-yonayah) (IX.32) (though, of course, they are all assured of reaching the Supreme Goal if they take refuge in Krishna). The real intention behind karma-yoga is to exploit the working class by indoctrinating them to believe that they should work wholeheartedly without expecting a due reward for work but be content with whatever they get as God-given. This is to avoid any class struggle and ensure submission to the established order under the hegemony of Brahmins.
The criticism is based on a thorough misunderstanding and needs a detailed response. First of all, consider the Gita's stand on the birth-based caste system. A few verses in it do refer to the existing varnas, which by then had become birth-based. In Chapter 1, Arjuna refers to the threat of a mixing of varnas (varna-sankara) as a result of the catastrophic war leading to the breakdown of countless families (verses 41–43), but these verses are part of his misgivings, which Krishna removes later. A key quotation in this context is the first line of verse IV.13, where Krishna tells Arjuna that the four varnas were created by him on the basis of guna (aptitude, nature) and karma (work or occupation) (Chaturvarnyam maya srishtam guna-karma vibhagashah). Krishna does not say here that varna is based on birth, and the omission is deliberate. Kane observes that if Krishna had wanted to make birth the basis of his division of labour, he could easily have said jati-karma vibhagashah or janma-karma vibhagashah instead of guna-karma vibhagashah as actually stated (Kane 1990, Vol. II: 1635–36). Sardar K. M. Panikkar considers the verse, far from supporting caste based on birth, as constituting a devastating attack on it. He says: 'It is the most unequivocal repudiation of divine origin of caste based on birth, the most categorical denial of Brahmin claim to inherent superiority' (Panikkar 1961: 40–41). The second line of the same verse (IV.13) also has some significance. Krishna says in it: 'I am the author (kartara) of this (varna system), I am also its non-doer (a-kartara)'. It implies that though the system of division of labour based on aptitude and work is natural (made by Him), its later transformation into a birth-based system is man-made, and He does not support it. The Uttara-Gita, which is also a dialogue between Krishna and Arjuna, makes the same point explicitly. When Arjuna specifically asks Krishna how varna is determined, he replies:
Na jatih karanam tata gunah kalyanakaranam/
Vratastham api chandalam tam devah brahmanam viduh//
('Birth is not the cause, my friend; it is virtues which are the cause of welfare. Even a chandala observing the vow is considered as a Brahmana by the gods.')
It follows that when Krishna advised Arjuna (in II.31) to do his duty as a Kshatriya, he was asking him to follow his chosen duty as a soldier, not his caste or jati. Not all those fighting in the Kurukshetra war were born Kshatriyas. Arjuna's own guru, Drona, was a Brahmin who took a leading part in the war on the Kaurava side, becoming the chief of the army after Bhishma fell. There were other Brahmins too, like Ashwatthama and Kripa, who fought in the same war.
There is again an explanation in XVIII.41–44 of how the division of labour into the four varnas is based on people's nature. Thus, only those having a pure mind and righteousness can be called Brahmins. It just cannot be the case that only those born in Brahmin families would be considered as having a pure mind; Krishna could surely not have meant such a perversion. Some responsibility and common sense have to be exercised in interpreting the texts, after all. The Gita nowhere says that one's nature or svabhava is based on jati or the community of birth. The verses concerned only classify human nature into four types, which in turn influence people's work. Radhakrishnan says: 'The four-fold order is not peculiar to Hindu society. It is of universal application. The classification depends on types of human nature' (1998: 364). He further quotes Gerald Heard from his book, Man the Master (1942): 'It would seem that there have always been present in human community four types or strata of consciousness.... the Aryan-Sanskrit sociological thought, which first defined and named this four-fold structure of society, is as much ours as India's' (quoted in Radhakrishnan 1998: 367, fn-1).
Radhakrishnan also observes in this regard: 'All men are not equal in their capacities but all men are equally necessary for society, and their contributions from their different stations are of equal value' (Radhakrishnan 1998: 366–67). The Gita also teaches this equality clearly, for example in Chapter 6, where Krishna says in as many as four verses (29–32) that a yogi treats all beings with evenness, sees Krishna in all beings, and considers pleasure and pain everywhere as his own. Such an attitude cannot allow any unequal and exploitative caste system. Krishna reiterates this philosophy of equality again when he declares, samoham sarva-bhuteshu (I am the same to all beings) (IX.29). Much is made of the Gita bracketing together persons of 'sinful breed' (papa-yonayah), women, Vaishyas and Shudras in IX.32, as mentioned earlier under the criticisms summarised. Having declared that He is the same to all beings, it could not just have been Krishna's intention to view women and the two lower castes as of 'sinful breeds', as translated by Kosambi (quoted by Desai 2014: 38). What Krishna says in the verse is only that all those who take refuge in Him, be they sinners, women, Vaishyas or Shudras (or anyone), attain the highest spiritual goal. Kosambi translates the phrase as 'sinful breeds such as women, vaishyas and shudras'. Instead of 'such as', it is more logical and consistent with the intention of Krishna to read it with a comma after 'sinful breeds', as done here, the others being separate categories. Papa-yonayah only means those born with a 'karmic luggage', and they could be anyone, not necessarily women and the low castes. What Krishna meant is that salvation is open to all, sinners as well as those having a low social status. There is absolutely no hint of any support for such a low status given by the society to women and lower castes.
On the contrary, the accessibility of all to the Divine and salvation or liberation is assured regardless of gender and social status.
Now about the alleged attempt by the Gita to revive ritualism and restore the hegemony of Brahmins in religion. According to Dr Ambedkar, the Gita defends the dogmas of counter-revolution by Jaimini's Purva Mimamsa, and '[b]y Karma yoga or action, the Gita means the dogmas contained in Jaimini's Karma kanda' (as quoted in Rodrigues ed. 2004: 195). On another page of the same article, Dr Ambedkar says: 'Jaimini preaches pure and simple Karma yoga [rituals]. The Bhagavad Gita on the other hand preaches anasakti karma. Thus the Gita preaches a doctrine which is fundamentally modified' (2004: 200). He is right in observing that the concept of karma-yoga fundamentally modifies not only the ritualism of Jaimini's Purva Mimamsa but also that of the earlier Yajurveda. While the ritualism of the two texts was meant for achieving certain selfish ends like getting a son or wealth or for ensuring that one goes to heaven after death, the Gita's karma-yoga meant undertaking selfless action in general for the benefit of people and even of all beings (lokasangraha or sarva-bhuta-hita). But Dr Ambedkar is wrong in contending that karma in the Gita simply means ritualism or that the Gita simply intends to revive ritualism and support Jaimini's counter-revolution.
There is no doubt a reference to yajna in several places in the Gita, particularly in Chapter 3, verses 9–15, which Dr Ambedkar cites in support of his argument about the Gita being ritualist. However, the Gita changes the very meaning of yajna to selfless offering, not necessarily ritual sacrifice. Verse 9 itself clearly asks Arjuna to do his work or duty (karma) unattached, since any work other than for the sake of yajna is binding (not liberating). Only if yajna is interpreted as a selfless offering does the word fit properly in the verse. Yajnas were normally performed to earn merit (punya) for oneself and family, for protection against evil spirits, for expiating sins, to seek the birth of a son and also for the welfare of the world. These goals were not exclusive, but often combined. Yajnas always had a purpose and could not have been done without attachment as advised in the Gita. The Gita did not of course prohibit Vedic yajnas, but did not encourage them either, and emphasised selfless action instead as spiritually more fulfilling. It is clear from the previous verse (III.8) that what Krishna had in mind while referring to karma was not ritual sacrifices but action in general. He says here that one cannot even 'travel in body' (do sharira-yatra) or maintain oneself through inaction and asks Arjuna to do his obligatory work (without being tempted by renunciation of work, since that is not possible). The meaning of 'karma' in the Gita is very clear from this verse, as it is elsewhere too. Even in verse III.10, where again there is a reference to yajna, the Gita treats the act of creation itself as a yajna, which makes sense only if it is taken as selfless service or action. Easwaran translates this verse as follows: 'At the time of creation the Lord gave humanity the path of selfless service and said: "Through selfless service you will always be fruitful and find the fulfilment of your desires."' (Easwaran 1997, Vol. I: 160).
In verses 12–15 also, yajna makes much better sense when interpreted as selfless service or action than as a Vedic ritual. The gist of these verses is that one who enjoys without making a selfless offering or service is as good as a thief; he is a sinner (III.12, 13). Even rain which makes it possible for us to have our daily food is the result of a selfless service (III.14). Brahman, as the creative and blissful energy, is present in every selfless act (III.15). The next verse (sixteenth) calls upon us to participate in this virtuous cycle of selfless service, instead of thinking always in terms of indulging in sensual pleasures. These verses together (in fact, the whole of the third chapter, which is on karma-yoga) are a powerful plea for selfless service. It makes little sense here if karma is interpreted narrowly as Vedic rituals.
Rationalists, particularly if they are Marxists, do not give up easily in argument. Well, they say, if karma-yoga is selfless work, it is only a ploy of the elite to exploit the working class and get free labour! This is the contention of Veerabhadrappa, who has devoted a full chapter, 'Work without reward', to it (2004: 154–65). He rightly asks, 'Is it ever thinkable that one can do one's work without having one's eye on the return of one's labour? After the Gita propounded this idea, in any part of the world, including India or any society, has this principle been practised? Is translating this principle into action beneficial to any individual or human society?' (2004: 154). He contends that such a teaching was brought up only as an instrument of the two uppermost castes to indoctrinate the lower but producing and working castes into unprotesting submission. The aim was only to consolidate feudalism.
First of all, karma-yoga is not meant for preaching to others but for following oneself. It is not as if one can free oneself from its obligations and expect only others to follow it. Before an employer expects selfless work from his workers, he too should render selfless service to his workers and society. Second, was the Gita addressed to the lower castes as such? Formally at least, the Gita's teaching is addressed to Arjuna, asking him to fight without attachment to the outcome but as a duty. Even if Arjuna was only a pretext and the teaching was meant for others too, it is accepted by all the interpreters of the Gita that the teaching has a general and universal application. It is not meant for the working class alone in a capitalist economy, but also for capitalists earning interest and making profits. Gandhi addressed his proposal of the trusteeship ideal, to treat wealth as a trust for the benefit of the society, mainly to the capitalists, but the idea is essentially the same. Even before Gandhi, the Communist ideal of 'From each according to his ability; to each according to his needs' was proposed for the working class itself. Is this also not based on the same idea as the Gita's? Is preaching some altruism in a world dominated by unbridled selfishness as unrealistic as Veerabhadrappa and his like think? The questions posed by him were indirectly taken up in the preceding chapter itself. It was argued there that the Gita cannot be interpreted as expecting workers to work, or doctors to treat patients, or capitalists to invest without any due return. The Gita teaches moderation. What it teaches us is to balance our selfishness with some altruism, some regard for the welfare of the society. What would you tell a doctor in a government or municipal hospital who expects a bribe from patients even though he is on a regular salary?
What moral principle would you apply to a minister who wants a cut in every deal or project that is sanctioned by him, or to a prosecution lawyer who is willing to moderate his attack on a criminal against a secret consideration, or to a cricketer who is willing to drop a catch or two against payment? The Gita's teaching of karma-yoga is particularly addressed to such people, who expect an unduly high and morally unacceptable reward for what little they do and tend to hold the society to ransom. To the extent that even the poor have been capable of and inclined to altruism (why not?), they too can be taken as believing in karma-yoga of the Gita.
The Gita is not a reactionary text; on the contrary, it is a revolutionary text no less than Buddhism. It rejected the claim of upper castes for superiority based on birth by taking instead the criteria of quality and aptitude. It brought religion to the masses, making it inclusive. This was by undermining the role of rituals and emphasising the role of simple bhakti as an important path of reaching God. Whatever little one offers, be it a humble leaf, flower or even a little water, is accepted by God if only offered with devotion (IX.26). No expensive or elaborate rituals are necessary. Bhakti-yoga preached by the Gita opened the doors of religion to the lower castes and women. The Gita stressed the equality of all on the ground that the self resides in all (VI.29). It declared that one who sees God in all and all in God is never lost to God, nor is God lost to such a person (VI.30). Even if the Gita is taken as post-Buddhist, it can be accused of trying to take the wind out of the sails of Buddhism, or of borrowing its reformist zeal, but not of thwarting its progressive elements. Buddhism had left a gap by its agnosticism, but the Gita gave a personal God to the masses whom they could pray to, love and aspire to be united with. The Gita fulfilled a mass need and helped to improve their sense of well-being, even if Marxists may say that it provided opium to the masses. By emphasising karma-yoga and altruism, it also opened a way to help the masses. The Gita did not want to make it appear as condescending charity, because whatever you give has to be with no contempt to receivers, and what is more, it has also to be without any feeling of pride or egoism, as it stressed (XVII.21). If I give to anyone with contempt, it only means I have no humility. The Gita played an important role in democratising Hinduism and paved the way for the bhakti movements later.
## The Gita and its deontology
The Gita has been criticised by several critics on several counts, as we have reviewed earlier. An important point of criticism is that the Gita is not against violence and war if duty required them. This owes mainly to the immediate context of the Gita – Arjuna's reluctance to fight his relatives and particularly his teacher, and Krishna's persuasion that Arjuna should do his duty as a soldier. In his book, The Idea of Justice (2009), Amartya Sen questions the wisdom of focusing on duty irrespective of the consequences and makes his criticism in the context of his assessment of deontology – priority for doing one's duty as a candidate principle for the basis of justice. Sen does not of course charge the Gita with inciting violence, but is only critical of its rigid deontology even where it results in the massive violence of war, which certainly was not unexpected. Sen brings out Arjuna's predicament clearly. Arjuna has no doubt that he would be fighting for a just cause and also has no doubt about his victory. His fear is not the fear of defeat (Sen 2009: 209). He is doubtful only whether he would be doing the right thing if it ends up killing all those whom he had loved and respected, causing also large-scale death and carnage. Sen feels that 'Krishna got away with an incomplete and unconvincing argument against Arjuna' (Sen 2009: 212-fn). Doing one's duty regardless of consequences is not acceptable to Sen as a principle. He is aware that several other important issues too are raised in the Gita and that the Gita is not about deontology only. He also concedes that 'there is nothing to prevent a general deontological approach from taking considerable note of consequences' (Sen 2009: 216). What bothers him, however, is that the general principle of doing one's duty is raised in the Gita to a purist or absolute status. Sen takes care to distance his emphasis on consequences from the old debate in the West between deontological and consequential approaches to justice.
In this debate, consequential ethic was considered relativist, with the end justifying any means adopted. For Sen, 'processes' too are important (which Gandhi calls the means).
Much before Sen's critique of the Gita, Gandhi too faced the problem of interpreting Krishna's urging Arjuna to fight the war. Gandhi took the firm view that the Gita preaches non-violence, because not only does it mention ahimsa as a virtue to be practised but also advocates several other virtues which are not consistent with violence. He saw the war background of the Gita only as a metaphor for life's struggle that cannot be avoided but should be faced with equanimity. Gandhi certainly bases his argument on the evidence in the Gita itself. For example, the Gita explicitly advocates ahimsa along with other virtues like truthfulness, compassion, absence of anger, peacefulness, gentleness and humility as the divine qualities to be cultivated (XVI.2). On the other hand, demonical (asuree) qualities emerging from lust and anger are deplored (XVI.12, 13). There is a special denunciation of an asuree attitude of: 'I have eliminated this enemy today and I will eliminate others too. I am the lord, I am powerful, I will succeed and will enjoy' (XVI.14). This is precisely an attitude of violence and warmongering, and precisely what the Gita denounces. We have noted earlier that many other modern and contemporary interpreters also consider the war background of the Gita as a metaphor for the struggle between the forces of good represented by Pandavas and forces of evil represented by Kauravas.
However, even if the war was real and not just a metaphor, we need to appreciate that it was thrust on the Pandavas; it was not their choice. Krishna had himself tried his best to avoid the war and went to the court of the Kauravas as an emissary to negotiate an honourable settlement. He said that the Pandavas wanted peace and would be willing to settle for a village each for the five brothers, so that they could live outside the vicious rule of the Kauravas. Duryodhana arrogantly retorted that he would not part with even a tiny bit of land equal to a needle-top – forget five villages. The elders watched helplessly though sympathetic to the Pandavas. The war became inevitable, as it was a point of self-respect for the Pandavas to fight for their right. The elders, including the common teacher of the Pandavas and Kauravas, being under the patronage of the latter, had to be on their side. Paradoxically, they wished victory for the Pandavas and their cause, though fighting sincerely on the side of the Kauravas. There was no doubt in their minds that the Pandavas were virtuous persons and theirs was a just cause, and that the Kauravas were vicious and greedy, but they felt duty-bound to be on the Kaurava side. For them also, duty prevailed over other considerations of ethics like virtue. They failed to exercise their weight and influence over the Kauravas to accept a peaceful settlement. What is puzzling is that there were several kings around who were common relatives and friends of both the Pandavas and the Kauravas, and they too could not avoid the war, but instead took one side or the other and participated in the war with their huge armies. The war was hardly Krishna's decision or choice. It was as if everyone around was itching for a good fight and willed the war. It was beyond anyone to withdraw from it at the final point when the dialogue of the Gita took place. It may seem that Krishna, being an avatar of God, could have prevented it.
But we have to remember that humans have been granted freedom of will by God, and the war was entirely within the free will of everyone; it would have been against the principle of free will if Krishna had interfered at that late stage and prevented it by a miracle. The freedom of will had simply to work itself out. It is another matter that free will was abused by the Kauravas. A moral lesson that the Mahabharata teaches is that free will has to be exercised with responsibility and at the right time.
Moreover, a pertinent question is whether it is ethically right for a soldier on the battlefront to withdraw from war unilaterally, particularly one as responsible a leader as Arjuna. How should such a withdrawal be interpreted? Krishna warns Arjuna that he would be regarded as a coward if he withdraws at this stage. Is absolute non-violence practicable or even ethical, particularly if there is a risk of its being one-sided? As we noted earlier, Gandhi thought that if Arjuna, and with him the Pandavas, were to withdraw from the war at that stage, they would have been pursued and massacred. Were Russia and France wrong in resisting Hitler when he invaded them?
Further, Sen is not quite correct in thinking that the Gita's deontology is indifferent to consequences. The Gita condemns action taken without regard to consequences as tamasika (XVIII.25). Sen misinterprets the Gita's advice to act without aspiring for fruits as an exhortation to disregard consequences. The exhortation is to only act unselfishly. Moreover, the Gita's concept of duty is not limited to a soldier's fighting in a battle, but goes beyond and covers other concerns too like serving people and promoting their welfare (loka-sangraha) as observed in the preceding chapter. The Gita's emphasis on duty is quite understandable since the bulk of human activity in any civilisation is governed by a sense of duty, as Gandhi observed. Otherwise, no human society can run smoothly. Different kinds of duties evolved in a society precisely because of their expected beneficial consequences. Affection and sentiments alone are not enough. It is the duty of parents to take care of their children, get them educated and cultured, and keep them healthy. It is the duty of a married couple to respect each other, be loyal and help each other. If someone is hit by a vehicle and is lying on the road, it is the duty of passers-by to help that person. It is the duty of government servants to serve people without seeking gratification. The whole system of law and order can be said to be based on duty-ethics. The consequences of ignoring duty will be far more disastrous than those of doing one's duty regardless of consequences. Action guided only by consequences, disregarding one's moral duty, could mean sliding down into a relativist or opportunistic ethic of the end justifying the means.
The Gita is not alone in emphasising duty. The Holy Bible asks us to love our neighbour and be a Good Samaritan in an hour of need. The Ten Commandments are duty centred. Kant, the most respected leader of the Enlightenment Age, formulated his ethical doctrines in terms of what he called 'categorical imperatives'. It is said that Kant's ethical theory is one of the best ever devised (Richter 2008: 139). For him, ethics requires responsibility, not just sentiment. We fulfil this responsibility through 'orders that we give ourselves' or 'imperatives'. He distinguishes 'hypothetical imperatives' from 'categorical imperatives'. 'If you want a pie, then go to a bakery' is an example of the former; it does not come under the ethical domain. Some things must simply be done, regardless of what you want, and these acts come under categorical imperatives or duties, and hence under ethics. Kant further distinguishes between imperfect duties and perfect duties. Imperfect duties are the ones which one may not do all the time, but as much as possible, like helping people. Perfect duties are the ones which one must do all the time. Kant gives an example of a perfect duty – never make a false promise; that is, we must never pretend to be promising to do something which in fact we have no intention of doing (Richter 2008: 127). Some duties may be pleasant and some unpleasant, but the Gita says that we must not attach ourselves to doing only the former and avoiding the latter. We may not know the consequences of doing some duties, but moral duties remain categorical imperatives. Krishna and Kant seem to have a lot of common ground! Sen's call to prevent manifest instances of injustice such as poverty can itself be considered an instance of deontological ethic, though of course he is explicit and insistent on taking into account the consequences of one's action, with which the Gita also agrees.
## Miscellaneous criticisms
A point of criticism which has been made by most of the critics of the Gita is that its arguments are not logically ordered; they are jerky, unconvincing and often lack consistency. It is, for example, unrealistic to argue about the irrelevance of human agency. A conspicuous instance, as pointed out by Dr Ambedkar, is Krishna's argument that Arjuna could fight and kill because the Atman is immortal and he would not be killing the real self of anyone, and even the bodies are after all mortal and he would not be able to prevent them from dying. Dr Ambedkar writes: 'If Krishna were to appear as a lawyer acting for a client who is being tried for murder and pleaded the defence set out by him in the Bhagavad Gita there is not the slightest doubt that he would be sent to the lunatic asylum' (2004: 197). Dr Ambedkar's point is that it was alright to argue that Arjuna could not back out of the war at that stage, but that to use such a logic to justify it was hardly convincing. If Krishna's logic and his theory of the irrelevance of human agency were to prevail, what meaning is left for ahimsa and crime? Desai asks, is death so trivial? He terms such a logic sheer escapism. Under this, 'the responsibility of human action is completely evaded', and if I take a bribe, 'I can convince myself that it is not I who take a bribe nor you who give it to me' (2004: 65). He adds: 'Since certain though my death may be, should I not insist that you the murderer who threatens me, have no right to kill me?... Just because death is inevitable, its timing cannot be of no consequence' (2004: 66–67).
This criticism also is based on a serious misunderstanding of the Gita and a misapplication of its theory. The reality is quite complex, and statements made with a particular meaning and motive in a particular situation cannot be literally applied in all contexts and situations. In such a kind of distorted universalism, the Gita's arguments may sometimes look conflicting and inconsistent. For example, the Gita accepts freedom of will in some situations but denies it in others, as discussed in the preceding chapter. Similarly, human agency or responsibility is certainly accepted in asking man to be ethical and do his duty, but is to be taken as irrelevant in situations such as a soldier fighting in a legally declared war involving killing. A soldier who participated in such a war would be tortured by a heavy conscience all his life if he accepted moral responsibility for the deaths he caused in the war. Arjuna would have had the same problem, but for Krishna's arguments. He wanted to have a clean conscience before going to war, and Krishna enabled him to have it. It does not mean that anyone can indulge in murder with a clean conscience. War is certainly bad, and as pointed out earlier, Krishna tried his best to avoid it and negotiate a solution. The Kauravas forced the war on the Pandavas, but Arjuna could not just forget that those with whom he would be fighting were not his enemies but his near and dear ones, and he wanted to escape from the war when it was too late. Krishna deplored such escapism and inspired him to have the will to fight. Krishna said that you cannot solve problems by escaping from them, but by facing them with a sense of duty. Thus Krishna's logic was not one of escapism as alleged but a remedy for escapism in a particular situation. This cannot be interpreted as Krishna having given a free run to all sins, crimes and murders.
That would amount to complete twisting and distortion, and cannot certainly be accepted as the message of the Gita. It does not mean that the Gita's teachings are all purely contextual and relativist. They do have general significance and relevance, but this has to be interpreted without giving up common sense. If the Gita's philosophy were only a pep talk to Arjuna and had no significance for other situations and other people, it would not have been accepted as a popular sacred text. Even then, it cannot be interpreted mechanically in all circumstances. The theory that the Atman is immortal and does not die with the body has been a source of immense solace to many Hindus who are grieved by the death of their dear ones. People in search of immortality find it in this theory; they believe that there is something immortal in man in spite of the body being perishable, and this gives solace. It may sound irrational to rationalists, but it is a matter of faith. It seems that common people at large have shown more common sense and logic in interpreting the Gita than rationalist intellectuals!
Kosambi points to the variety of interpretations of the Gita, each opposed to the other, and says: 'No question remains of its basic validity if the meaning is so flexible' (quoted approvingly in Desai 2014: 27). It is indeed true that apparently very different interpretations in terms of Advaita, Vishishta-advaita and Dvaita have been made by Indian philosophers, but we have also observed while discussing them that they express different aspects of the Ultimate. Differences are bound to arise while explaining philosophy, particularly when a text like the Gita is not intended to be partisan but aims at achieving a synthesis. The Gita, after all, is not a do-it-yourself manual for assembling a TV set or a car!
Veerabhadrappa takes the Gita to task for being strongly critical of atheists and has devoted a full chapter of his book to explaining why the Gita is anti-rational and wants to promote blind belief in whatever is said in the Gita and in Krishna (2004: 115–23). Far from finding it anti-rational, Charles Wilkins, who first translated the Gita into English, and Warren Hastings, who patronised the translation, saw in the Gita a potential to raise the level of 'vernacular religion' from the 'venality and corruption' it had fallen into and to make it comparable with Christianity (Robinson 2013: 39). They appreciated its being capable of lifting popular religion from superstition to a higher form of religion, more amenable to reason (if only it reaches the masses). The Gita accepts several paths to one goal and even the freedom to conceptualise several forms of God. It is not fanatical and, at least for this reason, should be accepted as more rational than other faiths which are fanatical about both these issues. Robinson observes, 'the Bhagavad-gita can be read as proving that the Hindu tradition is more experiential and less dogmatic than Western faiths' (2006: 152). If you read the Gita with an open mind, you will not find it anti-rational. The fact that Krishna did not want Arjuna to simply and blindly follow him is evident from his almost final advice to Arjuna to think critically over all that was said and then do as per his choice (XVIII.63). The general spirit of the Gita should also be appreciated. It points out different paths of sadhana; it gives the devotee the choice of the form of deity and says that in whatever form you worship, or whomsoever you worship, it reaches the same God, Krishna. The Gita is liberal in its religion, not fanatical. It does not, after all, call upon believers to kill all non-believers or harass them until they become believers!
Moreover, the attack is more on the incorrigibly arrogant, those tending to greedily amass wealth through foul means and those given to indulging in sensual pleasures without compunction, than on atheists (XVI.10–18). The reference to atheists is only in one verse, viz., XVI.8, which may well have been an insertion in the final version of the Gita by someone hating them. How could Krishna, having taught non-hatred elsewhere, permit hatred only for atheists? In the last chapter, Krishna of course says that the Gita is not for those who are devoid of devotion, who do not render any service [to others in the society or to God] and those 'who cavil at Me' (XVIII.67). This hardly amounts to a strong attack on atheists. Krishna does not want anyone to be forced to believe in the Gita, and that is all.
Staunch rationalists are poor psychologists. They do not comprehend the vast scope of human aspirations and confine these aspirations only to the mundane. Anyone who does not fit into the rationalists' narrow conception of the human mind is taken as irrational. To assume that believers are necessarily irrational and only non-believers are rational is itself very unreasonable and irrational.
In any case, too much emphasis on rationality in matters of religion is itself not rational. Religion is not science, and their spheres are different, though they need not conflict. Gandhi thought that religion has no business in matters to be resolved by science, and science and rigid logic have in turn no business to be judgemental about religion. He felt that 'attribution of omnipotence to reason' is as bad as idolatry. He said: 'I do not know a single rationalist who has never done anything in simple faith.... But we all know millions of human beings living their more or less orderly lives because of their child-like faith in the maker of us all.... I plead not for the suppression of reason, but for due recognition of that in us which sanctions reason itself' (quoted in Fischer 1998: 308). The Gita also pleads for the same.
Let us savour a sample of Veerabhadrappa's own rationality. According to him, the Mahabharata war was a conflict between interests of Brahmins represented by Pandavas and Kshatriyas represented by Kauravas (2004: 42–43). He forgets that Pandavas and Kauravas were cousins and that both were Kshatriyas. The war hardly had any caste basis. In fact, Brahmin leaders like Drona, Ashwatthama and Kripa fought on the side of Kauravas. There were no such Brahmin leaders on the Pandava side. Krishna himself was a Kshatriya.
Desai calls the Gita 'toxic' because he thinks that it is casteist, misogynist and even racist, and therefore irrelevant for modern times (Desai 2014: 138). The misunderstanding about its being casteist and misogynist has already been cleared earlier. The charge about its being racist rests on a deliberate misinterpretation of Chapter 16, where Krishna distinguishes between divine and demonical qualities (Desai 2014: 150). Desai thinks that the reference to daivee and asuree is to devas and asuras, whom he considers races. He interprets Krishna as praising devas and condemning non-aryan asuras as respective races. It is beyond Desai's 'rationalist' and 'secular' mind to think that Krishna was referring only to ethical differences between the good (divine) and the bad (demonical or devilish). As a matter of fact, there is absolutely no hint of racism either in the chapter or in the entire Gita.
It is a healthy sign of Hindu society that it has permitted critical scrutiny of its sacred texts. The sacred texts of all religions should be open to such criticism and reinterpretation to keep them relevant to the times, as the Gita has been. However, the teachings of sacred texts, be it the Gita, the Bible or the Koran, have to be interpreted with some responsibility, objectivity, reasonable positiveness of attitude and balance. They should not be distorted merely for the thrill of attacking a text respected as sacred by millions, in the happy thought that one is being progressive. Calling the Gita a 'terrorist tract' or 'toxic' betrays a complete lack of balance, responsibility, sensitivity and even rationality. Criticism of sacred texts, however, should be responded to with logic, not intolerance. The texts should neither be fanatically defended to the point of not permitting any criticism and even threatening the critics, nor be ridiculed by critics to the point of twisting the logic and the context of the teachings, unduly hurting the sentiments of believers. While tolerance of criticism and freedom of thought are necessary for a healthy, humane and progressive society, such freedom should be exercised with responsibility, not recklessly.
## Notes
1 The account of Christian critiques of the Gita here is based mainly on Chapter 3, 'Christian Theological and Missionary Critiques' in Robinson (2006: 71–85).
2 There is obviously a major typing error in the table on page 100 of Malhotra's book (2011). To be consistent, the sub-column titled 'Open Architecture Spiritual Eco-system' should be shifted to the left column under 'Human bottom-up potential independent of history', and the sub-column titled 'Non-negotiable Grand Narrative of History' should be shifted to the right column under 'God makes top-down history'.
3 For a critical discussion of India's own calendars, see Chapter 15 on 'India through its Calendars' in Sen (2005: 317–33).
4 The gist of the arguments of three critics is given here: Kosambi (1968) as quoted in Desai (2014: 37–38), Ambedkar (2004: 193–200) and Veerabhadrappa (2004).
5 As quoted by Sharma (2000: 165). His source is S. V. Oka (1957) The Uttaragita with a Translation into English and Appendices, Poona: Bhandarkar Oriental Research Institute.
6 Dr Ambedkar contends that since the Gita's aim was to defend the ritualism of Jaimini's Purva Mimamsa, the Gita must have been composed thereafter. However, Jaimini is not the founder of ritualism, since the Yajurveda, which was composed much earlier than both the Gita and the Purva Mimamsa, has ritualism as its main content. Dr Ambedkar also contends that Jaimini does not refer to the Gita and must therefore have been prior to it. But the Gita also does not refer to Jaimini or his work, though it refers to the Veda(s).
7 The verse, at least as translated by Easwaran, may sound self-contradictory. How can a selfless work be expected to fulfil one's desires? This point has been addressed in the first section of the preceding chapter while discussing desirelessness in the context of the Gita's teaching, and is taken up again in the next two paras here while responding to the criticism by Veerabhadrappa.
8 This point has been taken up for further elaboration with examples in the next chapter.
9 For a fairly detailed account of the bhakti movements and their role, see Nadkarni (2013: 203–50).
10 The account of Amartya Sen's criticism and my response to it here are developed further from Nadkarni (2014: 136–41).
11 There is an apparent self-contradiction on this point in the Gita. Freedom of will is clear from the pains Krishna has taken to persuade Arjuna to do his duty, and from his advice that Arjuna should critically think over all that was said to him and then do as he liked. If Arjuna were a mere puppet in the hands of God, the Gita would not have been needed. On the other hand, in Chapter 11 of the Gita, Krishna also says, after showing His universal form (Vishwarupa), that all the people with whom Arjuna would be fighting are already destined to be killed and that Arjuna is only an instrument (nimitta). The contradiction can be resolved by distinguishing between two planes of reality. In the practical plane of what Shankara called vyavaharika satya, there is freedom of will. But in the ultimate plane of paramarthika satya, everything is Brahman, and the question of a separate individual freewill does not arise.
# 8 Novel Applications
## The Gita as a guide to leadership, enterprise and management
The Gita was traditionally seen only as a source of spiritual guidance and as a means of earning punya or merit through daily recitation. In applying the Gita to novel situations that were not visualised at the time of its composition, it is important to appreciate that one should transcend the literal meaning of the verses and grasp the general spirit or purport (tatparya) behind them. The application of this requirement begins with the very first verse of the Gita, which includes a reference to the battlefield. A few novel applications of the Gita have already been presented in earlier chapters, such as the efficacy of a business executive and the nature of a national economy in Chapter 6 while discussing Table 6.1. The verses of the Gita have a remarkable profundity that enables this, which is what makes it a timeless and living sacred text, one that continues to inspire millions even two and a half millennia after its composition.
After the first English translation of the Gita by Charles Wilkins appeared in 1785 in London, many Western thinkers saw it as a source of deliverance from excessive materialism. Compared with its long recognition as a sacred book, however, the value of the Gita as a source of inspiration and guidance in mundane problems, at both the national and individual levels, has been realised only relatively recently, since the eighteenth century. This use had to do with the Gita's practical approach to ethics. A practical approach does not mean sacrificing ethics for the sake of convenience. That would be hypocrisy, and the Gita abhors hypocrisy as mithyachara (III.6) or dambha (XVI.10). A practical approach to ethics means having the potential to guide one through the ethical problems faced in day-to-day life. These problems arise at various levels – in the private lives of individuals, in community or national affairs, and of course in business enterprises. The very background of the Gita is set in a battlefield. Gandhi and many others took it as a metaphor: the triumph of the good over the evil, or of justice over injustice, does not take place automatically, but only through a relentless struggle. During the days of India's freedom struggle, the Gita was taken as a direct source of inspiration for the national movement by such luminaries as Bankim Chandra and Bal Gangadhar Tilak. Raja Rammohan Roy used the Gita as a source of support for reforming Hindu society and for eradicating social evils; he used it even to oppose idolatry and superstition. Mahatma Gandhi treated the Gita as his mother, as a source of solace and as a guide in all the practical problems he faced in the freedom struggle he led and the social reforms he launched. He wrote in 1925:
I find a solace in the Bhagavadgita that I miss even in the Sermon on the Mount. When disappointment stares me in the face and all alone I see not a ray of light, I go back to the Bhagavadgita. I find a verse here and a verse there and I immediately begin to smile in the midst of overwhelming tragedies – and my life has been full of external tragedies – and if they have left no visible, no credible scar on me, I owe it to the teachings of the Bhagavadgita.
– Mahatma Gandhi in Young India 1925 (pp. 1078–79); CWMG Vol. 32: 195
Shriranga, an eminent modern writer in Kannada, also known as Adya Rangacharya, takes the Gita as a guide to leadership, as it transformed a confused, indecisive and forlorn Arjuna into a clear-headed, determined and self-confident leader finally prepared to fight his battle. At the critical moment, Arjuna forgot that he was an important leader of his army, which was vitally dependent on him, and allowed his mind to wander to irrelevant things as a common individual, without thinking of the implications of his temptation to give up the battle for his army and for his own honour. Krishna brought him back to an awareness of his duty and gave him a philosophy of selfless duty and dedication suited to a leader (Shriranga 1972: 63, 143–44, 147, 249–50). Shriranga makes another point also. Justice requires that a wrongdoer be duly punished; to desist from punishing because the punishment hurts and amounts to violence would plunge society into deep trouble. As a leader, Arjuna should have been aware of this, but he lapsed into sentimentalism. Krishna removes his confusion by asking him, among other things, to keep his thinking focused on the most relevant, instead of allowing it to get diffused in multiple directions (II.41) (Shriranga 1941: 32, 35). A very important requirement of a leader is to be focused on the relevant. The final decision has to be the leader's, but, as said by Krishna, it has to be based on critical thinking (XVIII.63). Since decisions by leaders affect a large number of people, it is the people's responsibility to see that only persons of sattvika qualities as described in the Gita are selected as leaders in a democracy (Shriranga 1972: 250). Leaders themselves, if they care for a good reputation, should take the initiative to be virtuous and to be persons of enormous self-control who will refuse to yield to the temptations of abusing their power. The Gita has a lot to teach about self-control, as explained under 'Ethics in the Gita' in Chapter 6.
It is no wonder that the Gita has now come to be seen as a source of inspiration for business enterprise and business leaders, and as a guide to management. In a sense, the modern business environment under competitive capitalism also looks like a ruthless battlefield. Starting a new business needs courage, an enterprising spirit and pride in doing it. A timid person can neither start a business enterprise nor run it satisfactorily, as he or she may buckle under the pressure of competition. What verse can be more inspiring and invigorating than the third one in Chapter 2? It says:
Klaibyam masma gamah Partha naitat tvaiyyupapadyate /
Kshudram hridaya daurbalyam tyaktvotthishta Parantapa //
It means: 'Yield not to unmanliness, O Son of Pritha! Ill doth it become thee. Cast off this mean faint-heartedness and arise, O Scorcher of thine enemies!' (Tr. Swarupananda 1982: 28).
Swami Vivekananda considered this verse as containing the whole message of the Gita (CWSV 1998, Vol. IV: 110). He thought it particularly relevant to the then mass of Indians immersed in ignorance and superstition, who needed to struggle for a respectable place in the comity of nations. They had to fight numerous social evils like untouchability and mass illiteracy. But the verse can be considered equally relevant to talented young people, inspiring them to start their own enterprises and create new employment instead of being tamely content with being employed by others. An unenterprising or inert nature (apravritti) is condemned by the Gita as tamasika (of the quality of dullness or passivity) (XIV.13).
However, the enterprising will always find ups and downs, which they have to face with equanimity and patience. 'Have patience' (titikshasva), says the Gita emphatically (II.14). An important message of the Gita is: treat joy and sorrow, profit and loss, success and failure with equipoise and be ready to struggle (II.38). How is this possible? The Gita says, through detachment (sangam tyaktvaa). A certain amount of detachment even while actively engaged in work helps one to gain an evenness of mind (samatvam) against ups and downs, success and failure (II.48). Apart from avoiding stress and depression, detachment equips one to deal with all vicissitudes calmly and efficiently. Detachment does not mean non-seriousness about work or lack of commitment. The Gita is very emphatic about working with dexterity (yogah karmasu kaushalam) (II.50). It considers working with fortitude (dhriti) and enthusiasm (utsaha) as sattvika, the most desired of the three gunas or mental qualities (XVIII.26). There is no question of the Gita accepting indifference to the quality of work as detachment. But detachment as taught by the Gita is a key to both success and survival. An entrepreneur, whether an industrialist or a farmer, has to accept risks and uncertainties as unavoidable facts of life and be ready to face them with boldness and confidence. The tragic suicides of numerous farmers and even of small businessmen are a consequence of not imbibing this teaching of the Gita. Even if a business goes into liquidation, a business person should not lose his or her cool and equipoise, and should be ready to reincarnate himself or herself within this God-given life. There is a famous verse in the Gita (II.22) which has served as a source of solace in the context of the passing away of a dear one. It says: 'Even as a man casts off worn-out clothes, and puts on new ones, so the embodied casts off worn-out bodies, and enters into others which are new' (Tr. Swarupananda 1982: 42).
This verse could as well be applied to situations of failure or liquidation of a business enterprise. An entrepreneur should not lose his or her cool in such situations, but should be ready to learn from experience and start a new enterprise. Professor B. Mahadevan interprets this verse as teaching the need to discard obsolete ideas and experiment with new ones in business; it is a mantra for innovation. Old models which no longer work may have to be given up and new ones tried. Chatterjee observes that 'a core capability that all leaders of all times must possess is the ability to lead change' (2012: 215). He thinks that the Gita teaches business leaders to remain relevant and be useful to the world. 'To do that, they need to deal with discontinuities and question their own mental models' and 'go beyond wishful leadership to wilful leadership' that leads to action. They triumph when they merge their individual will with life's larger purpose, which is what was taught by Krishna (2012: 216–17).
Hindu scriptures have accepted the goal of earning wealth as a valid purushartha (human goal). Earning wealth per se is not regarded as a sin; on the contrary, it is encouraged. It is considered the duty of a householder to earn, take care of the family, be hospitable and help others. The principles of morality are applicable to all, including householders. They have no concessions or exemptions from them just because they need to earn, though sannyasis are subject to even more rigorous moral and spiritual discipline. Similarly, business enterprises enjoy no exemptions from the principles of morality just because they are in business. They have in fact special responsibilities, because they are in a position to affect the lives of others. The Rigveda gave a general piece of advice which is relevant even today, both for individuals and for business enterprises. It is pertinent in the context of what the Gita also has to say further on the issue. The Rigveda (X.31.2) says:
Parichin marto dravinam mamanyad ritasya patha namasa vivaset /
Uta svena kratuna samvadeta shreyamsam daksham manasa jagribhyat
('Let a man/woman ponder well on wealth, earn it through the path of moral law and with humility, consulting one's own conscience, and then heartily gain upright prosperity.'
Tr. by the author)
Wealth does not come on its own. One has to consciously ponder (parichin) over how it has to be earned through the path of moral law or truth (ritasya patha), and not by dishonest means. It has to be earned with humility (namasa), since success depends on the grace of God and one owes it to the society at large for making it possible. Ethical dilemmas are bound to arise, which have to be resolved through consulting one's conscience (kratuna samvadeta) or inner voice as Gandhi called it. Once these qualifications or conditions are respected and followed, one can heartily (manasa) earn wealth and gain well-deserved (daksham) prosperity (shreyamsam).
The Gita not only implicitly accepts this, but also adds that the wealth earned must be shared and used for the welfare of humanity (loka-hita). Earning wealth has to be done in the spirit of a yajna, an offering, and one should enjoy its fruit only after meeting the dues of all; that is, one has a right to eat only the remnants of the yajna. The Gita further explains this by saying that those who cook only for themselves eat sin (III.13). If even the food one prepares has to be shared, what then of earning wealth through business? A further extension of the idea that one has a right only to the remnants after meeting the dues of all is that the moral responsibility of a corporate enterprise is not confined to its shareholders, but extends to other stakeholders as well, such as employees, customers, suppliers, the state and society. Shareholders come last; they are entitled only to what remains after all dues and liabilities are met. Business enterprises also have no right to unsustainable and/or illegal exploitation of nature. If business operations cause negative externalities like depriving some people of their land or livelihoods, those affected need first to be compensated and rehabilitated. If any pollution is involved, the business enterprise has to honestly take steps to avoid or at least minimise pollution within permissible or acceptable limits, and duly compensate the victims of pollution. All these are implied when the Gita says that one has a right only to the remnants of yajna.
The great thing about the Gita is that it does not stop at teaching ethics, important though that is. It goes beyond and teaches how to be effective and efficient too. As a matter of fact, being ethical in business also contributes to efficiency and effectiveness. There is no conflict between ethics and business efficiency. The management should not be guided by short-term gains alone and sacrifice its long-term credibility. Micro-economic theory is developed on the premise that a firm has the goal of profit maximisation. A healthy firm, however, aims at maximising profits over the long run and is not tempted by short-run gains, which harm long-run profitability. We often talk about the brand value of an enterprise, which is essentially a long-term concept. Brand value does not so much depend on the profitability of business as profitability depends on brand value. Brand value depends in the main on the moral integrity with which business is conducted, the confidence customers have in the products and services of the business, the reputation of the enterprise for the treatment of its employees and suppliers, its social welfare projects and also the eco-friendliness of the enterprise. A good management has to ensure all these for success. The secret of success in business management lies in following the Gita's advice: Parasparam bhavayantah shreyah paramavapsyatha (III.11). It means: 'Cherish each other, support each other, and you gain the highest good'. A little further on, in verse 16 of the same chapter, the Gita refers to a cycle (chakra) of good works, involving helping each other and gaining mutual benefit, and warns that one who does not participate in this virtuous cycle lives in vain (mogham jeevati). If this cycle is followed, it contributes to the longevity and brand value of the enterprise. Yes, as explained earlier, the ups and downs, and even the mortality, of business enterprises have to be faced with equanimity.
But that does not mean that management can do nothing about it. Brand value helps a business greatly to tide over ups and downs and promotes its longevity. However, brand value does not drop on an enterprise from the high heavens as a gift; it has to be created and assiduously, patiently built. The secret of boosting brand value lies in following the Gita's twin principles of eating only the remainder after sharing, and cherishing each other. It cannot be ignored that cherishing each other covers our natural environment also. When the environment is protected, it nourishes us too!
Another piece of advice the Gita gives is to have humility: not only to avoid arrogance but also to be careful not to give any such impression to others. Arrogance, even an impression of arrogance, is highly counterproductive. In the old days, many used to think that throwing one's weight around and creating an aura of fear worked best in getting work done. But it also creates, consciously or unconsciously, resistance and an attitude of withdrawal from wholehearted cooperation. A mature way is to prefer being loved to being feared. Amiability and warmth in relationships with all are not so much a way of getting willing obedience from those below; they should essentially be part of the person's very nature. Such a person is quick to give credit to others for both small and big things, and does not appropriate all credit for himself or herself. A good manager, even a CEO, is open to suggestions and advice from others, and attends to complaints promptly and sincerely; even when a complaint has to be rejected, it is done after due consideration and with respect to the complainant. A good and healthy organisation does not depend on the competence of just one or a few persons, but of most. A competent manager creates an environment where all contribute wholeheartedly and is quick to recognise the role of others. The Gita says that it is the deluded arrogant who think that they are the only doers (Ahamkara-vimudhatma kartaram aham manyate) (III.27).
The Gita insists that (even in private enterprises) due procedures and codes of conduct (shastra-vidhi) have to be followed wherever applicable. They should not be flouted under temptation or selfish impulse (kamakaraka), since any such irresponsible behaviour on the part of management does not lead to success (na sa siddhim avapnoti) (XVI.23) and could instead land it in disaster. Legal procedures and codes of conduct tell us what should be done and what should not be, and need to be taken into account (XVI.24). Decisions and their implementation have to be transparent, and in a spirit free from undue personal selfishness, anger and greed (XVI.21).
The Gita hints (in XVIII.20–22) that in taking important decisions or solving problems, it is best to have a total view of things involved, rather than be content with analysing a thing in isolation, even if in depth. Taking a holistic view is expected to give the most satisfactory outcome, since it recognises that there are several dimensions to an issue, all of which may be relevant directly or indirectly. It takes the larger picture into account, which may produce new insights that a purely analytical approach may miss. It will be very harmful for an enterprise to take decisions on the basis of what the Gita calls tamasika knowledge. We have more to discuss about this in the next section. We may note here, however, that taking a total view includes the negative externalities imposed on the environment and other people, which the management cannot ignore. These negative externalities are side effects whose cost is imposed on others of both the present and future generations, but not borne by the enterprise causing them. They take the form of air and water pollution, and depletion of the stock of natural resources beyond sustainable limits. A sattvika management should take responsibility for all these damages and prevent them or duly compensate for them. For example, a factory which has come up on farmland may have paid some price for the land purchased and occupied, but it should also rehabilitate the displaced farmer who has lost his source of livelihood, by employing him or his family members in suitable jobs. There is emphasis on shaucham (cleanliness) and adroha (non-treacherousness) in the Gita (XVI.3), which also means avoiding or minimising air and water pollution and adverse side effects on other people. The implications of the Gita's teaching for environmental management and ethics have been brought out by Swami Ranganathananda in his work on the Gita and have been narrated in the section on him in Chapter 5.
There is a principle of work in the Gita, karma-yoga, which in simple English means selfless service. But if you try to spell it out, particularly for general application, it can seem intriguing in spite of all the attention and publicity it has received. The essence of karma-yoga is considered to be spelt out in Chapter 2, verse 47. It says: 'you have a right only to doing work (Karmanye eva adhikaraste); but never to its outcome or fruit (ma phaleshu kadachana); don't think of yourself as the cause of work (ma karma-phala-heturbhu); but don't abstain from work (ma te sangostvakarmani)'. If you take this verse in isolation, it may sound highly unacceptable, even revolting. If a manager went before his workers with such a harsh injunction, he would be considered a slave-driver. How can anyone be expected to work without anticipating a reward, be they workers or professionals? Let alone workers, how can a manager, who is supposed to be result-oriented and has certain targets to achieve, be indifferent to the outcome of what he or she does? How can you expect entrepreneurs to be indifferent to the profits or success of their enterprises, when their very motive is to make a surplus, at least in the long run, and make a success of the enterprise? Even a spiritual seeker is motivated by the desire for moksha or nirvana, and cannot be entirely desireless.
Actually, however, the Gita does not intend this. Krishna himself motivates Arjuna by saying: 'If you die in the battle, you will attain heaven; if you win, you will enjoy this earth. Stand up, therefore, and resolve to fight' (II.37). How can he, then, just ten verses later in the same chapter, ask the same Arjuna not to desire any fruit of his action? Nor does Krishna advise indifference to the expected outcome. He denounces work done without heeding the consequences as tamasika (XVIII.25). The Lord does not expect any work to be done in an 'unengaged' way (ayukta), that is mindlessly or thoughtlessly, nor lazily, taking one's own sweet time (dirghasutri) (XVIII.28). He insists on commitment (shraddha) and treats the lack of it as tamasika (XVII.13). He emphasises dexterity in work (karmasu kaushalam) (II.50) after recommending work without seeking a gain. Work has also to be done with fortitude and enthusiasm (XVIII.26). It means that one has to enjoy the work. The question is whether anyone can fulfil all these expectations without anticipating anything at all in return. What then does karma-yoga exactly mean? It is easier to explain through illustrations.
Basically, karma-yoga is meant for one's own self to practise, not for asking others to follow while freeing oneself from its obligation. That would amount to hypocrisy and an attempt at slave-driving. Karma-yoga is a mental discipline with practical applications for both efficiency and spiritual advancement. We can think of following karma-yoga at two levels, primary and advanced. We may illustrate it with the example of a doctor or a surgeon, who may charge the normal, higher fee to well-to-do patients, but a much lower fee to the poor. The doctor has to meet the expenses involved in giving a good service and also make a living, and cannot therefore afford to give free medical service to all. But having charged a fee, the doctor will not discriminate between a rich and a poor patient, giving better and more careful service to the former and indifferent service to the poor. In giving the medical service, a good doctor is guided by the motive of professional excellence and pride in work, and by compassion for all patients, irrespective of what they pay. A higher-paying rich indoor patient may be accommodated in a special AC room, and a common patient in a general ward. But as far as the medical service itself is concerned, there will be no discrimination between the two, and even the general ward will be kept as clean and hygienic as the special room. This is karma-yoga at the primary level: doing work with professionalism, pride in the quality of work, complete care and mindfulness, and also of course with compassion for the beneficiaries of the work and for all. The doctor may charge fees, but is not guided only by pecuniary considerations, which in fact are pushed to the background. A more enterprising doctor may intensify his or her social service by charging nothing or only a nominal fee, and meeting the expenses involved through donations from the admiring public, without compromising on the quality of service and professionalism.
What really distinguishes a more mature or higher-level karma-yogi from one at the primary level is that the sense of 'I am doing' totally vanishes in the mature one, who considers himself or herself a mere instrument or puppet in the hands of the Divine, carrying out the Divine Will, not one's own. The selflessness here is on two counts: first, the person does not work for a personal reward; second, he or she drops all feeling of 'I' or 'mine'.
Similarly, a teacher may accept a salary to make a living, but as a karma-yogi, she will be totally lost in teaching, constantly improving herself in the profession, giving her best, and enjoying teaching for its own sake, not working just for a salary. The teacher as a real karma-yogi would feel that she is just an instrument of the Divine, carrying out the Divine Will. Such a teacher cannot be unmindful of the outcome of the teaching, for it has to be ensured that the students absorb the knowledge and skills taught. But a karma-yogi does not judge the outcome in terms of the income gained. To that extent, the teacher is selfless or desireless (anahamvadi, nispraha), a requirement for a karma-yogi.
Can we apply the principle of selfless work to business enterprises in general? The bulk of economic activity in the world is assumed to be motivated by the desire for personal gain or profit. According to Adam Smith, considered the father of economics, self-interest is not necessarily bad for the world: when each person acts according to self-interest, there is a mutual balance and natural order, as in a market, and the common good is protected. Smith's view has not gone unchallenged. A natural order produced exclusively by narrow self-interest can be very unfair, with a lot of exploitation of the weak by the strong, and can also be environmentally unsustainable. What is needed is to rein in selfishness. This can be done either by the government through regulation and control, or by self-restraint, or through both. What the Gita does is to emphasise voluntary self-restraint. This aspect of the Gita was later developed by Mahatma Gandhi, who asked for surplus wealth to be treated as a trust for the benefit of society. Economic rationality seen narrowly in terms of selfishness would be a case of 'rational fools', and not sensible or wise behaviour (Sen 1977). One is free to earn enough to prove one's self-worth, but it is not necessary that all the earnings be spent on oneself and family only. Just as individuals can, and many do, devote part of their wealth to philanthropy, corporate enterprises can also, and do, devote some of their earnings to the benefit of the larger society. This is a part of their social responsibility.
Social responsibility of business enterprises has several dimensions and is in addition to their responsibility to their stakeholders. Only some aspects of social responsibility have direct monetary implications, such as contributing to the country's social welfare and environmental improvement projects, and promoting education and culture. A company can even persuade its highly paid employees to make similar contributions, which creates an environment of social awareness in the company. The other dimensions of social responsibility are avoiding any discrimination against women in employment, including in top positions; deliberately diversifying the social background of employees so that SCs, STs and religious minorities are adequately represented, including in middle and top positions; and giving some preference and proper facilities to physically challenged persons in employment. Doing all these would amount to following the Gita's advice of caring for loka-hita or loka-sangraha (promoting people's welfare). It is no longer left to the sweet will of companies to do this. The Companies Act 2013 stipulates in Section 135 that every company having a net worth of 5,000 million rupees or more, or a net profit of 50 million rupees or more, shall spend at least 2 per cent of its net profit before distribution (averaged over the preceding three years) on social welfare projects as its corporate social responsibility. Do I hear a voice from somewhere that such a provision discourages enterprise and investment? Those having such a feeling should reread the third verse in the second chapter of the Gita quoted earlier (Klaibyam masma gamah...).
In the foregoing part of this section, we have seen the Gita only from the point of view of what guidance it gives to business leaders, enterprises and management. What about ordinary employees or workers? The criticism of the Gita that its advice to work regardless of reward is only meant to deprive workers of their due and to indoctrinate them ideologically to work like slaves has been duly refuted in the previous chapter, and it is not necessary to revisit it here. An important aspect of management is managing human resources. A tamasika approach to personnel management is to view employees narrowly as instruments hired for certain tasks and nothing more. A holistic and sattvika way would be to treat them as whole human beings with aspirations and families, caring for all their concerns. Workers would then get a feeling that they are a vital part of an extended family in the form of the employing organisation, in which they count, resulting in a significant improvement in their loyalty and commitment. The workers have their responsibilities too.
The distinction made in the Gita between sattvika, rajasika and tamasika is a useful source of guidance for workers. Nobody can expect any worker to work without due and reasonable remuneration, nor can any worker be expected to meekly put up with remuneration which is not just and fair. Equally, no enterprise or organisation would expect a worker to be tamasika, which means being lazy, slow, irregular, deceitful, quarrelsome and clumsy in work, and uncivilised towards other workers, particularly women. In other words, such a worker is without any work ethic. The Gita deplores being tamasika. If there is enough proof of a worker having such a record, the employing enterprise is within its rights to terminate his or her services, provided that due legal procedure is followed. Dismissing workers on false charges would amount to cruelty or himsa, and the Gita is definitely against himsa. A rajasika worker works only for the salary, obsessively attached to personal reward, working with just enough efficiency to retain his or her job, but without sincere commitment to the interests of the employing enterprise. Such a worker may survive, but this kind of attitude is not conducive either to the moral and material progress of the worker or to the success and prosperity of the enterprise. Workers indulging in such behaviour believe in the philosophy of the free ride. The crux of the free-riding philosophy is the attitude: 'What if I don't work enough? In the totality of work, my being uncommitted does not matter.' But this philosophy militates against both individual and collective interest. It corrodes the integrity and the very personality of the person believing in it.
A sattvika worker, by contrast, even while working for due remuneration, believes that his or her long-run interest lies in working honestly and efficiently, maintains good interpersonal relationships with all workers as well as with those to whom he or she has to report, and does not subscribe to the philosophy of the free ride. A sattvika worker does not have such a calculating mind as to expect an immediate reward for every little extra thing done. A sattvika attitude will contribute to the welfare not only of the enterprise but also of the worker.
It is not claimed here that the Gita addresses within its 700 verses all the issues of the ethics, art and science of management. But the Gita can be quite inspiring and invigorating to business leaders and managers, when properly understood. The attempt of this chapter has been to promote such an understanding.
## Pursuit of truth in scientific research
A contribution of the Gita which has so far largely gone unnoticed relates to the approach to pursuing true knowledge and scientific research, including the methodology of social science research. The Gita has an astoundingly direct relevance to scientific research and has the potential to make it more insightful and productive. An important problem in research is objectivity. Objectivity does not mean value-neutrality. Research may be oriented to promoting health and human welfare in general, and also social justice. But there is often a risk of pet biases and prejudices, including pet theories which one somehow tries to prove, especially in the social sciences. There is sometimes a risk of research being doctored to promote certain interests. Technocrats interested in carrying out, say, a power generation project, who are asked to do a social cost–benefit analysis, may somehow underestimate costs, especially social costs, and exaggerate benefits, just to get the project approved. The Gita's advice to be emotionally detached from the outcome of work, and also to do it with honesty and efficiency, is useful in ensuring some objectivity.
There is another problem in research: that of adopting the right approach. An approach which compartmentalises a problem, splitting it into several parts and studying each in isolation, is quite common among academics. This goes under the name of rigorous analysis. It is in contrast with the holistic approach, which looks at the problem as a whole, with all its aspects and parts and their cross-connections. Both find their place in the Gita. A common mistake is that we tend to miss the whole while being too obsessed with the parts, as in the famous fable of the blind men and the elephant. It is usually also referred to as 'missing the forest for the trees'. It is not that there is any essential conflict between the two approaches; both may be necessary in full and proper research. Even while a holistic approach takes note of the whole, it cannot ignore the parts, just as a car mechanic cannot afford to ignore the working of individual parts while test-driving the car.
In the Gita, there is a direct and explicit discussion of both these approaches. When contemplating or probing into the nature of reality, the approach adopted in the Upanishads was holistic, irrespective of whether the perceived ultimate reality allowed for diversity or not. But the discussion of these approaches in the Gita is relevant in the context not only of philosophical or metaphysical questions, but also of mundane issues raised in the sciences, including the social sciences, and in governance and management.
The Gita acknowledges that there can be genuine differences in perception and conclusions in the pursuit of knowledge. According to it, there is a unity or consistency between knowledge (jnanam), the object of knowledge (jneyam) and the knower (parijnata), just as there is coherence between the means of action (karanam), action (karma) and the actor or agent of action (karta) (XVIII.18). These influence each other, and differences in the nature of the knower (particularly innate biases or prejudices) and in the means or approach to knowledge may produce differences in the outcome of the process – the perceptions and knowledge produced. The Gita assesses perceptions and approaches in terms of the theory of Gunas, which has already been discussed at length in earlier chapters.
Three verses in Chapter 18 of the Gita (XVIII.20–22) present this assessment directly and in brief. In addition to them, there are other statements in the Gita which are of indirect help and are also taken into account here. The first of the three key verses is as follows:
Sarva-bhuteshu yenaikyam bhavam avyayam ikshate /
Avibhaktam vibhakteshu tat jnanam viddhi sattvikam //
(XVIII.20)
('Understand this to be sattvika or the highest knowledge which sees the enduring unity in different things or the universal in differences.'
– Tr. by the author)
Any understanding or knowledge which views the object of knowledge holistically, finds what is unifying, universal or common in the diversity of particulars, and sees how the different parts relate to each other to constitute the whole, is the highest knowledge, according to the Gita. Knowledge of only the discrete particulars can be descriptive; it does not explain the particulars. Real knowledge is what leads us beyond the particulars and explains their totality. In other words, sattvika knowledge is totalising, synthesising or philosophical knowledge, which finds the meaning that lies behind everything observed. Seeing the universal in the particulars is a part of the holistic approach, but the approach is also much more: it looks at the whole as more than the sum of its parts. It is not necessary that the whole should exist, or be seen, as an organic unity in an undifferentiated way. It does not deny diversity, nor does it have to declare diversity false. In fact, the Gita declares elsewhere that truth can be approached both as one and as separate or manifold (ekatvena prathaktvena bahudha vishwato mukham, IX.15). There are also several other verses in the Gita which emphasise the diverse and pluralistic nature of truth (XI.5, 13; XIII.3, 27 and 30). But truth is fully perceived, and knowledge emerges, only when the unity in diversity is grasped, which is what the sattvika approach is about. The approach can even look at parts as wholes within a whole, each part having its own diversity and yet bound together, either conceptually or ontologically, in a unity. The essence of holism does not depend on the level of aggregation, but on whether, to the maximum extent possible, all facets, components, connections and factors bearing on the object of study are taken into account.
Swami Vivekananda asserts that only when a particular is related to the universal does it lead to knowledge. There can of course be several universals, since more than one generalisation can be drawn from a particular group of observed material. He asks, 'What is knowledge?' and answers, 'Destruction of peculiarity. Suppose a boy goes into a street or a menagerie and sees a peculiarly shaped animal. He does not know what it is. Then he goes to a country where there are hundreds like that one, and he is satisfied; he knows what the species is. Our knowledge is knowing the principle. Our non-knowledge is finding the particular without reference to the principle' (quoted in Vidyatmananda 2006: 12).
Most research is holistic in the sense that it seeks to get a larger picture, or the meaning that lies behind the particulars. It is totalising. Explaining what real research is and how it differs from mere data or information gathering, Kurien gives the example of a crime investigation. A police constable may record all the particulars of the crime scene, which is the first step in the investigation. Research starts when a senior police officer has a close look at the overall scene, studies all the particulars, forms hypotheses and tests them, seeing the larger picture and taking a holistic view of the crime (Kurien 1973). This is not the end of the process; it needs validation in a court of law by a detached judge, who also has to take a holistic view. In such a view, particulars are not ignored, but are related and totalised.
We can thus speak of two ways to holistic research. One is conceptualising the whole as comprising parts, viewing the parts in relation to the whole and to each other, and seeing how the whole characterises the parts taken together, as in studying a forest or an economy. The other is deriving the general from diverse particulars, finding what is common or universal among them, as in studying a set of individuals making up a distinct community or society. Both are valid ways to holistic knowledge, in fact to any meaningful knowledge. In the first, you start from the whole; in the second, you start from the parts. But both take due note of the whole as well as the parts, and of the interrelation between the two.
An approach which stops at the particulars without transcending to the whole is considered by the Gita a lower level of knowledge, which it calls rajasika. The next verse in the Gita (XVIII.21) deals with it. In this context, however, rajasika does not mean emotional or selfish, but simply a stage lower than the highest. If the highest knowledge is totalising, holistic or synthesising, the lower is disaggregating and analytical. While the sattvika transcends the particulars even while grasping them, the rajasika is focused on the particulars and their diversity without seeking the connectivity between them. The verse concerned is:
Prathaktvena tu yat jnanam nana bhavan prathak-vidhan /
Vetti sarveshu bhuteshu tat jnanam viddhi rajasam //
(XVIII.21)
('Understand that knowledge to be rajasik which views different entities separately, treating each as different and separate.'
– Tr. by author)
The rajasika approach is not condemned here. A concern for the particulars may be necessary both in any plan of action and in ascending to the higher (sattvika) approach to knowledge. The method, however, has limitations, a major one being that it stops short of full or holistic knowledge, which alone can provide full insight. What makes a rajasika approach inferior or inadequate is not that it includes analytical techniques, but that it excludes a holistic vision or misses the larger picture. Once it includes the larger picture, it becomes sattvika. A sattvika approach may not only include but may also need analytical techniques. Intuition plays an important role in the sattvika approach, but intuition unsupported by analytical corroboration may not carry conviction. In this sense, the sattvika need not exclude the rajasika, and the two can be complementary.
The Gita cautions against grave mistakes in the pursuit of knowledge and research. The next verse (XVIII.22) describes what can lead to false or misleading knowledge, and in turn to ignorance. The verse is:
Yat tu kritsnavad ekasmin karye saktam ahaitukam /
Atatvarthavat alpam cha tat tamasam udahritam //
(XVIII.22)
('The tamasika is said to be that which treats a small unit or thing as if it is the whole, in a purposeless way or without understanding the objective and without grasping the essence.'
– Tr. by author)
Tamasika means dark, that which sheds no light. It cannot be said to contribute to knowledge; on the contrary, it may mislead. Taking a small sample and examining it as representing the whole is not tamasika by itself. What makes it tamasika is if it is done without a proper awareness of the objectives of or reasons for the investigation (ahaitukam), without any theoretical framework to guide it (atatvarthavat), and if the sample is too small (alpam) to be representative of the whole. Under such conditions, the investigation would be misleading and hence tamasika. This one verse thus captures the essence of sampling theory and cautions against the pitfalls of sample surveys. What makes an approach to knowledge or scientific research tamasika is narrow-mindedness due to conscious or unconscious prejudice, which can lead to ignoring some parts or aspects of the whole, and even the objectives of our search. The outcome is not just ignorance but a misleading or wrong understanding.
It is common knowledge in scientific research that there can be two types of errors in testing hypotheses: a Type I error consists in the rejection of a hypothesis which is true, and a Type II error in the acceptance of a hypothesis which is false. The Gita comes astoundingly close to stating this in the following verse:
Nasato vidyate bhavo nabhavo vidyate satah /
Ubhayorapi drishto'ntastvanayoh tatva-darshibhih //
(II.16)
('The unreal never is. The Real never is not. Men possessed of the knowledge of the Truth fully know both these.'
Tr. by Swarupananda 1982: 37)
The crux of knowledge is to know what is real or true and what is false. According to this verse, truth alone exists. It is the essence of falsehood (asat) that it cannot exist. We may, however, commit the mistake of thinking to be false something that is really true and exists (a Type I error), or of taking to be real and existing something that is really false and does not exist (a Type II error). The Gita wants a seeker of truth to be wary of both these errors. Gandhi captured the original purport of the verse pithily as: 'Truth is God.... God alone is real and all else is unreal' (Gandhi 1927: xi). But the verse is just as relevant to seekers of mundane knowledge. And the Gita's criterion for distinguishing the real from the unreal, presented with clarity and precision, is simply that the former alone exists.
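The two error types mentioned above can be made concrete with a textbook simulation. This is a sketch, not a statement about the Gita: the z-test with known unit variance, the sample size and the effect size of 0.3 are all assumptions chosen so that both error rates are visible.

```python
import random
import statistics

random.seed(0)

def z_test_rejects(sample, mu0=0.0, z_crit=1.96):
    """Two-sided z-test of H0: mean == mu0, with known standard deviation 1
    (a deliberately simplified textbook test)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) * (n ** 0.5)  # sd assumed to be 1
    return abs(z) > z_crit

N, n = 2000, 30
# Type I error: H0 is true (true mean 0) but the test rejects it.
type1 = sum(z_test_rejects([random.gauss(0, 1) for _ in range(n)])
            for _ in range(N)) / N
# Type II error: H0 is false (true mean 0.3) but the test fails to reject it.
type2 = sum(not z_test_rejects([random.gauss(0.3, 1) for _ in range(n)])
            for _ in range(N)) / N
print(f"Type I rate ~ {type1:.3f} (nominal 0.05), Type II rate ~ {type2:.3f}")
```

The simulated Type I rate hovers near the nominal 5 per cent, while the Type II rate stays substantial for this small effect and sample size, showing that the two errors trade off rather than vanish together.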
It is important to appreciate that no backdoor entry to monism is proposed in advancing the case for the holistic method. Monism tends to disregard diversities and disjunctions as either unreal or secondary. By contrast, a holistic method takes full note of them as important parts or aspects of the whole and probes into the interconnectedness and functioning of the system as a whole. When Gandhi said that 'he endeavoured always to look at all the sides of a question' (CWMG 1972, Vol. 53: 441), he was describing a basic requirement of the holistic approach. Holism does not deny that reality is complex and multifaceted, evolving over time, with parts interacting with each other and producing such a synergetic outcome that the whole is more than a mere sum of its parts. In the case of social issues, a holistic approach may be even more important than in the case of a machine like a car. Social reality is evolutionary, and a society or community is more than a collection of individuals.
The type of questions asked may differ qualitatively and quantitatively between a holistic and a purely analytical approach. For instance, when it comes to agriculture and increasing crop yields, the analyst may focus on increasing the dose of inputs like fertilisers, pesticides, weedicides and irrigation. A holist, on the other hand, will think over how to improve the capability or knowledge of the farmer and ensure the sustainability of the ecosystem, with the question of increasing crop yields being a part of the probe. However, a holistic method can be said to have gone astray if it is divorced from parts or particulars. A doctor is welcome to treat a human being as a whole instead of giving a symptomatic treatment, but in the name of holistic treatment, the diseased part or limb can hardly be ignored. A focus only on the whole, ignoring the parts, may not be enlightening. In India, for example, poverty was for a long time viewed mainly in general, or on the whole, in terms of the country's economic backwardness. It was when V. M. Dandekar and Nilakanth Rath came out with their study Poverty in India in 1971 that poverty began to be viewed in its particularities, identifying who and how many were poor. This was a more meaningful approach and helped in formulating policies for poverty alleviation. It is quite possible that even if a country is not economically backward, or its national income and infrastructure are improving, such prosperity on the whole may bypass a significant number who constitute the poor. That is why a seeker of knowledge has to be clear about what he or she wants to know or do, and ask the right type of questions.
The Gita does not propose mere logical rigour or formal correctness of the method employed to seek knowledge. Its insistence on the coherence among the object of knowledge, the knower and knowledge may be recalled here. A cold-blooded murder may be planned by seeking to know the whereabouts and movements of the person selected for killing. Seeking knowledge may not always be innocent or guided by noble motives. The Gita insists on the purity of intention of the knower, her or his selflessness, honesty and moral status in general. Sattvika knowledge implies all these; it is not enough whether it is just holistic.
Employing a sattvika approach, however, need not necessarily lead to unanimity, because the perceptions and situational contexts of knowledge seekers may differ, and their conclusions may not be the same. It would be misleading to adopt unanimity as the criterion of objectivity. A more helpful criterion of objectivity is to see whether the knowledge seeker has his or her own axe to grind or is, on the contrary, unselfish, detached and open in the pursuit of knowledge. Even if sattvika holism need not lead to unanimity, it may promote greater understanding and, what is more, tolerance of and respect for differences in views. It is possible that not all views will stand the scrutiny of critical inquiry, but the inquiry should be honest and detached. That is why the Gita insists on the moral purity of the motive or purpose of the knowledge seeker.
Knowledge may well be sought by a team working together. Different members of the team may be assigned different tasks, but the team leader at least should necessarily have a holistic grasp of the purpose and approach of the research project as a whole. It is more desirable, however, that all the members of the team share this holistic perspective or vision. Otherwise, the different members doing segregated tasks may develop a sense of alienation, making their work joyless and mechanical, which may in turn suppress their creativity. Holism promotes creativity, and if creativity is expected from all members of the team – as it should be – the holistic vision of the project should also be shared by all.
A few examples from the social sciences are given here to see how a holistic perspective makes a difference. A classic case is that of the Great Depression in the world economy, which started in 1929. As employment and prices started crashing, wage cuts were advised in the hope that enterprises would not then cut back on jobs, even if they did not add them. The wage-cut policy was based on the confused reasoning that what applies at the individual or micro level would hold at the aggregate or macro level too – which is what the Gita calls a tamasika approach. As a result of widespread wage cuts, the Depression only deepened and widened. John Maynard Keynes argued that wage cuts made aggregate demand decline and increased unemployment. He recommended deficit budgets and increased public spending to boost aggregate demand and fight the Depression. Thanks to this lesson learnt, the recession of 2008–09, though widespread, did not reach the magnitude of the Great Depression of the 1930s. Another example of the use of the holistic method is Karl Marx's analysis of the working of the capitalist system as a whole, showing how the system produced frequent crises and generated poverty and inequality. Both governments and the working class learnt a good deal from this analysis and took steps in their own respective ways, which moderated the worst evils of capitalism and helped it to improve.
Identifying the well-being of a country mainly in terms of its per capita income and its development mainly in terms of growth of GNP is an instance of what the Gita calls a tamasik approach. A holistic and sattvika view of development, on the other hand, would take into account not only the growth of GNP, but also indicators like reduction in poverty, inequality, illiteracy, gender disparity, environmental pollution and crime rate.
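The contrast between a GNP-only view and a holistic assessment can be sketched as a simple composite index. This is purely illustrative, loosely in the spirit of composite measures like the Human Development Index: the indicator names, the normalised scores and the equal weights are all assumptions, not data.

```python
def composite_wellbeing(indicators, weights=None):
    """Weighted average of normalised indicators (each scaled to 0..1,
    higher = better). Equal weights are assumed when none are given."""
    if weights is None:
        weights = {k: 1 / len(indicators) for k in indicators}
    return sum(indicators[k] * weights[k] for k in indicators)

# A hypothetical country with strong GNP growth but weak social indicators
# scores far lower holistically than its GNP growth alone would suggest.
country = {
    "gnp_growth": 0.9,         # normalised scores, higher = better
    "poverty_reduction": 0.3,
    "literacy": 0.4,
    "gender_parity": 0.35,
    "environment": 0.25,
}
print(round(composite_wellbeing(country), 3))
```

With these illustrative numbers the composite comes out well below the GNP-growth score of 0.9, which is exactly the tamasika error of identifying well-being with a single favourable indicator.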
The holistic method has the advantage of enabling the emergence of new paradigms, which may be needed to solve newly emerging problems. This can lead to better informed and better conceived policies, which will be more effective and welfare-promoting since, as the Gita says, it is the sattvika which leads to knowledge (sattvat sanjayate jnanam, XIV.17) and binds with happiness (sattvam... sukhasangena badhnati, XIV.6). More than formal training in a fixed set of techniques, one awakens to such an approach through an open mind and wide reading, both very necessary for researchers. Even others may benefit by making a habit of them, as they enable wiser decisions in day-to-day life and career.
## Success in career and life
This leads us to the enormous guidance the Gita provides in making our lives and careers more successful and meaningful, quite apart from spiritual striving or sadhana. Our life is a loka-yatra (the travel of the soul in this world), as the Mahabharata terms it. It is in this 'travel' or pilgrimage, as Amur explains the metaphor used in the great epic, that we seek to achieve the four goals or purusharthas (Amur 2013). Amur's scholarly and perceptive book in Kannada, Loka-yatre, covers the major dialogues in the entire Mahabharata, but we are focused here only on the Gita. The Gita calls it sharira-yatra ('travel in the body') (III.8). All of us would like to have an enjoyable and successful ride of life in our body. It is clear from the metaphor that what travels is our jiva or soul, and its vehicle is the body.
First of all, what constitutes success in life? We begin with the narrow context of mundane careers in pursuit of artha (wealth, power, status) and then go on to wider goals. It is certainly good to be ambitious and to realise one's highest potential, but not everyone can ultimately become the president of the United States of America, or a Rabindranath Tagore, an Albert Einstein or a Bill Gates. There is bound to be inequality in power, wealth, creativity and talent. Success in life cannot be measured only in terms of becoming any of these. Vacancies at the top are bound to be limited in any sphere, and all those who fail to make it to the very top, however ambitious, merited and self-confident they may be, cannot be termed failures in life or even in career. Public recognition often eludes many, even the highly merited and deserving. For example, for every Nobel Prize winner, there may be several others who missed it but are no less deserving. They cannot be considered failures in life.
The Gita's approach to success in life is that even when one is working in a given career (sve sve karmani abhiratah), one can attain the highest perfection (samsiddhim labhate) (XVIII.45). This is facilitated when one chooses a career best suited to one's nature and quality or ability (XVIII.47), because that is when the highest potential is reached. All undertakings and careers have shortcomings or hazards, just as fire is accompanied by smoke (XVIII.48), and wisdom lies in choosing the work or career where one's potential is the highest. One need not interpret mechanically what constitutes that career. It need not be seen in absolute terms, but in terms of where one's comparative advantage lies, including the aspect of what the society or economy values most among the various alternative potentials that one has. The statement in the first line of verse XVIII.47 (viz., better to do one's own duty even if it has shortcomings, than to do others' work well) does not at all have to be interpreted as advice to stick to the hereditary work of one's caste, because the Gita speaks of the division of labour according to one's quality, aptitude and chosen work, and not according to jati or caste. This point has already been discussed fully in the preceding chapters. One's potential need not be seen in static or rigid terms. If, as a result of training and acquiring new skills, one's potential changes, there is no bar on changing one's work or career. There is, for example, no bar on a nurse getting herself trained as a doctor, but as long as she is a nurse, she cannot take up the duty of a doctor even if she is good at it. According to the Gita, even if one's job is less paying or less prestigious, one should strive to do it perfectly, with utmost sincerity and selflessly, because that is the way one contributes to the welfare of the society and also to one's own moral, spiritual and even material (why not?) development.
There is more to success in life than success in career and earning, though the latter is certainly a very important ingredient of the former. We also need to be happy and successful in our relationships within and outside the family, making them harmonious, enjoyable and constructive. A happy, mutually loving, supportive, trusting and encouraging family contributes very positively to success in career too. In fact, such a family is an end in itself. The theory of purusharthas insists on a balanced development of the moral, material, emotional and spiritual goals or aspects of life. Moksha is not necessarily liberation from the world physically; it is freedom from bondage to narrowness of mind, self-imposed limitations and complexes. It is when such freedom is achieved that one reaches one's highest potential, and this requires consistency and coherence in the way the purusharthas are achieved. An interesting thing about the Gita is that it shows a way in which there is no conflict between its key to success in work and success in life, and similarly between success in life and spiritual progress. There has to be a large degree of coherence in this regard. A yogi need not necessarily be a sannyasi; a family man and a housewife are also yogis in their own right. Success in life, inclusive of career, can be defined in almost the same terms as success in yoga. The Gita gives this definition, which is also the criterion of success for each individual: that one should feel happy, peaceful, content and blissful within, relaxed within, find his or her light within and experience freedom (V.24; VI.27). It is not that there are no sorrows or pains for such a person, but they are overcome with dignity and without self-destruction (II.65).
The Gita is full of verses which can guide towards success on all the three fronts together – career, life and spiritual striving. We may go through some of these verses and draw out their guidance in this respect. Perhaps the most important advice is to make sure that your self is your friend and not an enemy, and to uplift yourself by your own self (VI.5). This is obviously done by pushing away thoughts that depress you, overcoming any guilt complex, and avoiding brooding over your past mistakes as well as the mistakes of others that have hurt you. The Gita focuses on the present and what can be done here and now. This is implied by the famous verse Vasamsi jirnani yatha vihaya (just as old worn-out clothes are cast off) (II.22), which was used earlier in the context of management as well. One should have self-respect and self-confidence without being conceited, and should acknowledge the self-respect of all others as well, since the same Lord dwells in us all. A person who respects himself or herself as well as others is respected by all. Humility does not require anyone to demean oneself. The first requirement of self-confidence, and of getting rid of any guilty feelings, is to live a moral life and to make it a matter of habit. Verses 13 and 14 in Chapter 12, and again verses 1–4 in Chapter 16, stress the moral qualities one has to try to cultivate: truthfulness, ahimsa, freedom from hatred and egoism, a compassionate and generous disposition, forgiveness, not losing mental balance and calm over pains and sorrows, self-control, dedication to a worthy cause like the welfare of all beings, developing contentment, and absolutely avoiding hypocrisy and treacherousness. The Gita says that neither sentimental attachment nor hatred, but self-control and evenness of mind, take you to a life of peace and happiness (II.64). It teaches tolerance and the ability to stand up to the vicissitudes of life (II.14, VI.7).
Obsessive anxieties over loss and other fears often disturb this evenness, and it is wise to learn to cope with them (II.56). The Gita's recipe for this is to develop detachment from the outcome of all endeavours, leaving them to God. It stresses the importance of cultivating a calm, cool and composed mind (VI.19) as a great asset. The Gita assures that those who inculcate such sattvika qualities tend to rise to great heights; those confined only to selfish pursuits (the rajasika) stagnate; and the tamasika, who are visionless, ignorant and lazy, fall down in life (XIV.18).
A valuable teaching of the Gita is to care for and nurture mutuality in relationships, which is crucial for success in career as well as in personal life. It is by caring for each other (parasparam bhavayantah) that one can gain success (III.11). We should trust others and avoid being habitually cynical and doubting, for that can be counterproductive and self-destructive (samshayatma vinashyati, IV.40). There is a natural virtuous cycle at work in the world contributing to the welfare of all (III.14), and we should be a part of it, contributing to it. A physical piece of evidence of the virtuous cycle at work is a wound healing even without medication. Even when there is medication, we have to depend on the natural healing process. This is evidence of God at work promoting the welfare of all. The Gita expects us to participate in this process as active partners of God. According to the Gita, one who does not do so lives in vain (mogham jivati, III.16). Good begets good: a good deed produces more such deeds, moving in an ever-expanding virtuous cycle. An important way of participating in this cycle, as suggested by the Gita, is to selflessly promote the protection or welfare of people (loka-sangraha) to the best of one's ability (III.20, 25), and to be generous. The Gita's idea of yajna is sharing one's wealth, both material and mental, with others. Genuine generosity is selfless and non-egoistic. It should not, however, be reckless. Help should be given only to the deserving who need and value it, at the right time, with shraddha, and without a trace of disrespect to the recipients (XVII.22, 23). When this is done, the giver is blessed more than the recipient, and the giver's life becomes meaningful and successful.
The Gita advises us also about how we ought to work. In fact, this advice constitutes its most important part, accounting for more verses than any other issue. Karma-yoga has been discussed earlier in several chapters, especially in the sixth on the philosophy of the Gita, but we are not referring to it here as a path of sadhana. Several interpreters such as Tilak, Aurobindo and Gandhi discussed it as a philosophy of life, not necessarily as a part of sadhana. Though the original purpose of advising karma-yoga was to escape from karmic bondage and the cycle of births and deaths, interpreters of the Gita found it valuable in making our work effective and our life purposeful. Our interest here is in seeing what guidance the Gita gives in making our work more effective and yet stress free at the same time. The key to this lies in giving up our obsessive attachment to the fruit of work, while taking care to do it skilfully and with dedication, and also with due regard to its consequences for others. In other words, there should be no desire to selfishly appropriate the fruit of work or even the credit for work done. Share even the credit with others, acknowledging it explicitly from the beginning. This is what a good leader also does. There is no question here of disowning either legal or moral responsibility for the work done; the advice refers to mental or psychological responsibility only. It is a matter of cultivating a mental attitude which helps in avoiding stress. Nor does it mean that one should not enjoy one's work. One can certainly learn to enjoy one's work and even take pride in doing it well, in spite of not being interested in appropriating either the fruit or the exclusive credit for doing it.
However, interpreters like Swami Dayananda have pointed out that it is nearly impossible for anyone to practise karma-yoga with all these qualifications, particularly giving up any feeling of doer-ship or agency, unless one believes deeply in God, surrendering the agency and also the fruit or outcome completely to Him. Only a dedicated devotee of God is able to do so, they feel. The Gita, for example, expects work to be done in the spirit of a yajna, as an offering to God, with detachment but with a sense of freedom (muktah) (IV.23, 33). However, an agnostic can certainly be as unselfish as a fervent devotee of God and even be broad-minded enough to share the credit for work and success with others, though he or she may not be willing to surrender the doer-ship and outcome to God.
Detachment from the outcome by itself may not make our work effective and enjoyable. The Gita has more to teach in this regard too. There are two teachings which may seem mutually contradictory, but can be reconciled by the wise. One is: be focused and single-minded in tackling any problem, setting aside considerations irrelevant to the problem at hand (vyavasayatmika buddhirekeha kuru, II.41). The second is what we discussed in the previous section – to treat the problem at hand holistically, taking all its sides, implications and aspects into account, including the ethical. While evaluating a PhD thesis, for example, the referee has to evaluate all its aspects – the perspective, literature cited, clarity of objectives, methodology adopted, rigour of analysis and the logic of conclusions and policy implications. This is being holistic. But considerations like gender, race, religion, caste, place of origin, mother tongue and wealth of the candidate are all absolutely irrelevant. If there is any 'conflict of interest', such as the candidate being the son or daughter of a close relative, the evaluator should decline to assess the work, so that no irrelevant matter prejudices the evaluation. This is being single-minded. One can be both holistic and single-minded.
The Gita's teaching about being focused, concentrating on the process instead of the outcome, and taking success and failure with equanimity – not allowing them to make one either complacent or depressed – is of the highest relevance in a competitive world. Not only in business, but also in careers like sports, it can make a significant difference. Tenacity and dogged pursuit of one's goal will be difficult otherwise. One cannot afford to lose one's cool amidst adversity. Not allowing any feeling of frustration even in dismal failure is the key to bouncing back to success. The Gita preaches optimism, hope and active engagement, telling you emphatically to avoid despair and escapism.
In my view, the ultimate message of the Gita is summarised in just four verses of Chapter 6, verses 29–32, especially the last of these. It conveys a golden rule for ethics in all walks of life, both private and public. It is to see oneself in others and others in oneself, and to place oneself in others' problems and situations and think how one would have reacted to them. It is a message of not merely sympathy but also empathy and harmony – harmony with other beings, harmony with the environment and harmony with oneself. That really is Advaita – non-separateness. The opposite of Advaita is egocentric separateness, which is the source of all trouble and all violence in the world.
The important teachings of the Gita relevant to success in life may be briefly listed (some of them already discussed earlier):
* Believe in yourself. You are not just a perishable and insignificant piece of matter, but Atman the immortal (II.23, 24). You are not a sinner, you are pure. Be good and true to yourself as well as to others, and thus be your true self.
* Make yourself your friend, not your enemy. Raise yourself by your own self, by your own efforts. You can also destroy your life, your career and yourself by yourself, but never do so (na atmanam avasadayet, VI.5). Obsessive attachment (raga) and hatred (dvesha) are your worst enemies; subjugate them (II.64).
* Respect all beings as equals, as there is divinity in all (VI.30, 31). Respect their life, feelings and right to dignified treatment. Treat others' pains and pleasures as yours (VI.32). God is impartial to all (samoham sarva bhuteshu, IX.29) and cares equally for all (IX.29; XIII.27). The Gita does not teach fanaticism and parochialism; on the contrary, it explicitly teaches respect for differences in faiths and forms of worship (IV.11; VII.21).
* Respect the code of conduct. Be sattvik, even while being active. Sattva is the single most dependable source of lasting happiness (Sattvam sukhe sanjayati, XIV.9) and also of knowledge (XIV.11). Make a good name for yourself, as an example for others to follow (III.21). Even individuals have 'brand' values or reputations, like companies; if one's reputation is gone, it is worse than death (II.34).
* Be assured that a doer of good never perishes; a good deed never goes in vain (VI.40). Participate in the cosmic virtuous cycle (III.11–16).
* Make a distinction between karma (action to be performed), vikarma (forbidden action) and akarma (inaction). Avoid the latter two. Even inaction is karmically binding (sinful), if what is necessary to be done is avoided (IV.15).
* Do your chosen work with dedication and commitment (shraddha). Perseverance with shraddha helps you to get what you aim at (VII.22).
* Do your duty with humility. Don't worry about the outcome of your struggle (II.47, 48). Work has to be performed with detachment (asakta, III.19; XVIII.49), with no desire to appropriate its fruit (II.47). Instead, enjoy the work, doing it with skill (kaushalam) (II.50), with enthusiasm and fortitude (dhrityutsaha-samanvita) (XVIII.26), and with due regard to consequences.
* Don't be attached only to pleasant work, avoiding the unpleasant, but do your duty (XVIII.10). There is nothing superior, nothing inferior in work; all works and their fruits are offered to God (XII.6).
* When faced with a choice between what is perishable or momentary and what is enduring or lasting, choose the lasting. Fight for the lasting (II.18).
* Have a higher purpose in life. Enjoy your life, but don't get bogged down in momentary sensual pleasures. Don't aim only at quick success, however tempting (II.44; IV.12). Through your work, contribute to the protection or welfare of people (loka-sangraha), for that makes your life meaningful (III.20, 25).
* Consciously aim at moderation and make it a habit. Neither extreme abstinence nor indulgence is good for success; they can instead be self-destructive (VI.16–17).
* The mind is fickle, too easily tempted, and can divert you from your higher aims. Proper management of the mind is a key teaching of the Gita, as it leads to success and happiness in life. Manage your anger, as it can cloud your reasoning, lead to confusion and divert you from your path. Try to keep your mind calm, cool and composed. Sometimes anger is justified where it plays a corrective role, but keep it well under control and apologise later for losing your temper. A steady and focused mind is a great asset. Cultivating a sense of detachment, meditation and conscious practice can help in this (VI.25–27, 35). Detachment is neither absent-mindedness nor indifference. Be mindful or aware of what you do. That is what the Lord means when he advises vyavasayatmika buddhi (engaged mind) and avoiding wandering in many directions at the same time (II.41).
* Be focused on your goal (II.41), but take all aspects of the problem at hand into account (II.20–22).
* Don't lose opportunities of fighting for a good cause if you have the aptitude of a fighter. Cowardice is worse than defeat (II.35, 36).
* Have equanimity of mind even while fighting (II.38). Transcend the pairs of opposites like a mature person (II.45; XII.18, 19).
* Learn to adjust to changed circumstances, even adverse ones, and try to innovate by adopting novel ideas, just as you discard old, worn-out clothes and wear new ones (vasamsi jirnani yatha vihaya navani grihnati naroparani, II.22). Be enterprising.
* Depend upon what is wise, without being narrowly selfish (II.49). Be cautious: even knowledge or information may at times be masked by ignorance or misinformation, resulting in confusion and misleading you (V.15). Prejudices – either your own or others' – caused by selfish interests create this confusion (III.39). Beware of them.
We thus have eighteen main teachings of the Gita in this list, which incidentally tally with the number of chapters in the Gita. Significantly, the relevance of the Gita as a guide even to daily life and career, not to mention sadhana, has remained undiminished and has more probably increased. That is why its popularity keeps growing, with more and more people reading it, and more and more saints, philosophers and scholars writing about it, discovering new meanings and new applications. Over time, it has proved itself a timeless and living sacred text.
## Notes
1 A smaller earlier version of the first section of this chapter was published in Nitte Management Review, IX(1), July 2015, pp. 1–13, as an invited article, under the title 'The Bhagavad-gita as an Inspiration to Enterprise and Guide to Management'. The author is grateful to NMR for permission to use the article here.
2 For details, see Agarwal (1993: 19, 48) and Nadkarni (2013: 257).
3 See B. Mahadevan, 'Bhagavad Gita: Ideas for Modern Management', talk delivered in a Seminar on 'Towards a new paradigm of business management: Alternative perspectives from ancient Indian wisdom', at IIM, Bangalore, 12 December 2009, http://www.samskritibookfair.org.archives/882, p. 2.
4 The translation of the verse and its explanation are taken from Nadkarni (2013: 62).
5 For details, see Chapter 10 on 'Ethics in Business', in Nadkarni (2014: 243–70).
6 The explanation of karma-yoga here is based on Nadkarni (2013: 89–94).
7 For details, see Nadkarni (2014: 264–68).
8 For a more detailed discussion in a broader context, see Chapter 10 on 'Ethics in Business' in Nadkarni (2014: 243–70).
9 This section not only draws some points from Chapter 7 in Nadkarni (2014: 169–95), but also has additional inputs. See that chapter for detailed applications to several social and ethical problems of today. Only a few of them are dealt with in the present book, and that too in brief.
10 See the sections on 'Ethics in the Gita' in Chapter 6, and 'Is the Gita reactionary?' in Chapter 7 of the present book.
11 However, not all those who have failed in life or are poor and deprived are visionless, ignorant and lazy. Similarly, not all those who have succeeded in their career and have hoarded a lot of wealth are necessarily sattvik. Life is too complex for easy generalisations. We can only speak of tendencies, other things being equal.
12 See especially verses II.47–51.
13 Work done without heed to expected consequences (including any harm to others) is considered as tamasik (XVIII.25).
# Glossary
Advaita | A school of Vedanta/philosophy which regards the Ultimate Truth as One, where the Supreme/Brahman and the Atman are one and the same; monism.
---|---
Ahamkara | Ego-consciousness; 'I'-maker; arrogance.
Ahimsa | Non-violence.
Ananda | Bliss, blissful joy; everlasting happiness/ecstasy; an attribute of Brahman/the Divine and Atman.
Anasakti | Detachment.
Artha | see Purusharthas.
Atman | The self, identified with one's consciousness (chit).
Avatar/Avatara | Descent of God in the world to solve its problems/to restore dharma.
Bhakta | A spiritual/religious devotee.
Bhakti | Spiritual/religious devotion.
Bhashya | Commentary, gloss.
Bhedabheda/Dvaitadvaita | A school of philosophy which takes the Ultimate Truth vis-à-vis the world as one of difference-cum-non-difference or dualism-cum-monism, as in the case of waves in the sea, both being real.
Bhoga | Sensual enjoyment.
Bhuta-hita | Welfare of (all) beings.
Brahman/Parabrahma | The Ultimate Truth/Reality; the Absolute (not to be confused either with Brahma, the Creator, first of the Trinity of Deities; or with Brahmana, a caste).
Brahma-vidya | Science of knowing Brahman.
Brahma-nirvana/Brahmi-sthiti | Ultimate and lasting happiness/positioning in the Brahman.
Buddhi | Sense of discrimination, wisdom.
Chaitanya | Pure consciousness, pure energy.
Chit | Pure consciousness; an attribute of the Divine and the Atman.
Dharma | Code of moral conduct, system of rules of ethics/justice; duty (See Purusharthas).
Dharma-shastras | Texts like the Manu Smriti which prescribe codes of conduct.
Dharma-yuddha | Fighting for defending a lofty value/s; war justified by the need to protect one's life, property or honour; war fought on the basis of a code of conduct.
Duhkha | Sorrow (opposite of sukha).
Dvaita | A philosophy which takes God, the insentient world and the jivas to be basically different from each other, all being real, but the latter two being dependent on God.
Harsha | Cheer, rejoice.
Hita | Welfare, well-being.
Ishwara | The Supreme with attributes; saguna Brahman; God.
Japa | Repeatedly reciting in mind a holy mantra or a name of God.
Jiva | Embodied Atman.
Jivan-mukti | Liberation while living.
Jnana | Spiritual knowledge; knowledge of the Brahman and the Atman; a path of sadhana.
Kama | Desire; sensual pleasure (See Purusharthas).
Karma | (1) Action or work as in karma-yoga. (2) Accumulated effects of past deeds with potential to affect the present and the future until they are exhausted by their 'enjoyment'.
Karma-marga/Karma-yoga | The path of works, in which work is done selflessly without desiring the fruit of work, or as God's work.
Karta | Agent/doer of action.
Lila | Sport.
Loka-hita | Welfare or well-being of the world; benefiting the people at large.
Loka-sangraha | Promoting the welfare or maintenance of people.
Matha | Monastery, usually headed by a monk.
Maya | Appearance, projection as on a screen; creative power of God.
Mayavada | A doctrine which takes the world as illusory or unreal.
Mithya | Unreal.
Moha | Infatuation, sentimental attachment.
Moksha/Mukti | Liberation from the cycle of births and deaths (See Purusharthas).
Nimitta | Instrument.
Nirguna | Attributeless, formless (opposite of saguna).
Nirvaira | Absence of enmity.
Nirvana | Ultimate and lasting happiness.
Nivritti | Spiritual pursuit (as opposed to pravritti); renunciation; release.
Papa | Sin (opposite of punya).
Paramarthika satya | Spiritual or transcendental truth (opposite of vyavaharika-satya).
Paramatman | The Supreme.
Prapatti | Total surrender to God.
Prasada | (1) Grace; (2) Tranquillity, coolness (Gita II.64, 65).
Prasthana-trayi | The three sacred texts (literally, points of departure in acquiring jnana): the Upanishads, the Bhagavad-Gita and the Brahmasutras.
Pravritti | (1) Material advancement (as opposed to nivritti). (2) Tendency.
Priti | Pleasure (Gita I.36).
Punya | Merit, credit (opposite of papa).
Puranas | Popular religious texts in Sanskrit like the Bhagavata Purana, which narrate stories and glories of the Divine, convey moral lessons and philosophy in easy-to-understand way and promote bhakti.
Purna | Perfect, complete, full, not lacking in anything; an attribute of the Supreme.
Purusharthas | The four human goals/pursuits: dharma (good/just conduct, dutifulness), artha (wealth, power), kama (sensual pleasure, desire) and moksha (liberation).
Rajasika | See Trigunas.
Sadhaka | Person engaged in sadhana.
Sadhana | Spiritual striving.
Saguna | With attributes, with form (opposite of nirguna).
Samsara/Sansara | Cycle of births and deaths; day-to-day life in the mundane world.
Sanatana Dharma | Ancient and eternal dharma; traditional name for Hinduism.
Sannyasa | Renunciation.
Sat | Pure existence, truth, an attribute of Atman and the Divine.
Sattvika | see Trigunas.
Satya | Truth.
Sharira-yatra | Embodied passage/travel through the world (synonym of loka-yatra).
Shastra/s | Science, system of knowledge as in yoga-shastra; texts like Manu Smriti which prescribe codes of conduct as in Dharma-shastras.
Shraddha | Dedication, commitment, faith.
Shreya | What is morally/spiritually most desirable; good/beneficial in the long run.
Shruti | Texts which are taken as authoritative, foundational and sacred (like the Vedas) (as distinct from Smritis, which are secondary or supplementary).
Smritis | see under Shruti.
Sthitaprajna | Person of equipoise; mentally steady, cool and balanced; person of perfection.
Suhrida | Friend.
Sukha | Happiness, pleasure.
Swadeshi | Self-reliance.
Swadharma | Following one's aptitude and skills, and thus realising full human potential for good (not to be confused with caste duty).
Tamasika | See Trigunas.
Tatparya | Purport.
Trigunas | Three characteristics or qualities: sattvika (sattva) (good, upright), rajasika (rajas) (pleasure loving, emotional) and tamasika (tamas) (dull, lethargic).
Tripti | Satisfaction.
Varnas | The four classes based on traditional division of labour: Brahmanas (those learned in Vedic texts, priests), Kshatriyas (soldiers), Vaishyas (traders and agriculturists) and Shudras (labourers). (Not to be confused with birth-based jatis or castes.)
Vedanta | Spiritual knowledge as contained in the Upanishads, the Brahmasutras and the Gita.
Vishishta-advaita | Qualified monism; a philosophy which believes in 'Panorganistic system' in which the Supreme, the world and the jivas, though different, form a unity in the body of the Supreme, the latter two being dependent on the former.
Vyavaharika satya | Practical/empirical truth, mundane truth.
Yajna | Offering, sacrifice.
Yoga | (1) Yoking, joining, striving, a path of sadhana as in bhakti-yoga. (2) A suffix to the title of each chapter in the Gita.
Yoga-kshema | Meeting the needs and providing security or well-being.
Yoga-shastra | Science of spiritual striving.
Yogi | Person engaged in yoga.
Yuga | Era.
# Bibliography
Adidevananda, Svami (Tr.) (2014; first published in 1992). Shri Ramanuja Gita Bhashya (with Text and English Translation). (With an Introduction by Svami Tapasyananda). Madras (Chennai): Sri Ramakrishna Math.
Agarwal, Satya P. (1993). The Social Role of the Gita – How and Why. Delhi: Motilal Banarasidass.
Agrawal, Purushottam (2004). Nija Brahma Vichar – Dharma, Samaj, aur Dharmetar Adhyatm (Hindi). New Delhi: Rajakamal Prakashan.
Agrawal, Purushottam (2006). 'Decoding the Ethics of Srimadbhagavadgita', The Book Review, 30(1 and 2), January–February, p. 29.
Ambedkar, B. R. (2004). 'Krishna and His Gita', in Valerian Rodrigues (Ed.), The Essential Writings of B. R. Ambedkar. New Delhi: Oxford University Press, pp. 193–204.
Amur, G. S. (2013). Lokayatre (Kannada). Bengaluru: Priyadarshini Prakashana.
Anandashram, Swami (2014). 'Advaita Vedanta ani Bhaktiyogu' (from a lecture in Konkani), The Chitrapur Sunbeam, 21, July 7, pp. 5–8.
Anon (2011). Nagarasa Kaviya Karnataka Bhagavadgite (Kannada). Bengaluru: Bharatiya Vidya Bhavan.
Arendt, Hannah (1970). On Violence. San Diego: Harcourt Brace.
Aurobindo, Sri (1996; ninth edition). Essays on the Gita. Pondicherry: Sri Aurobindo Ashram.
Aurobindo, Sri (1999). The Synthesis of Yoga. Pondicherry: Sri Aurobindo Ashram.
Aurobindo, Sri (2010; first published in 1993). Integral Yoga: Sri Aurobindo's Teachings and Method of Practice (Selected Letters of –). Pondicherry: Sri Aurobindo Ashram.
Badrinath, Chaturvedi (2007; first published in 2006). The Mahabharata: An Inquiry in the Human Condition. Hyderabad: Orient Longman.
Balagangadhara, S. N. (1994). 'The Heathen in His Blindness...' Asia, the West and the Dynamic of Religion. Leiden: E. J. Brill (published in India by Manohar, New Delhi, in 2005).
Banavathy, Vinayachandra K. and Anuradha Choudry (2014). 'Understanding Happiness: A Vedantic Perspective', Psychological Studies, 59(2), June, pp. 141–52.
Basham, A. L. (1967; third edition). The Wonder That Was India: A Survey of the History and Culture of the Indian Sub-Continent before the Coming of the Muslims. London: Sidgwick & Jackson.
Besant, Annie (1907). The Bhagavad Gita or the Lord's Song. Madras (Chennai): G. A. Natesan & Co.
Bhaumik, Mani (2005). Code Name God. New Delhi: Penguin.
Bhave, Vinoba (1964; first edition in English in 1958). Talks on the Gita. Varanasi: Sarva-Seva-Sangh Prakashan.
Bhoomananda Tirtha, Swami (2014; first published in 1999). Essential Concepts in Bhagavadgita (6 Volumes). Thrissur: Narayanashrama Tapovanam.
Brockington, John (2002). 'Translating the Sanskrit Epics', Indologica, 27, pp. 97–126.
Chatterjee, Debashis (2012). Timeless Leadership – 18 Leadership Sutras from the Bhagavad Gita. Singapore: John Wiley; New Delhi: Wiley India.
Chatterji, Mohini M. (1960). The Bhagavad Gita or the Lord's Lay. New York: Julian Press.
Chinmayananda, Swami (1978). The Art of Man-making – 114 Short Talks on the Bhagavad Geeta. Mumbai: Central Chinmaya Mission Trust.
Chinmayananda, Swami (Tr. and Commentary) (1996; new edition). The Holy Geeta. Mumbai: Central Chinmaya Mission Trust.
Chopra, Deepak (2001). How to Know God – The Soul's Journey into the Mystery of Mysteries. London: Rider, Random House.
CWMG (Collected Works of Mahatma Gandhi) (98 Volumes, Electronic Book, 1999). New Delhi: Government of India, Publications Division.
CWSV (Complete Works of Swami Vivekananda) (9 Volumes, 1997–2001). Calcutta: Advaita Ashrama.
Dalal, Neil Akshay (2009). Texts beyond Words: Contemplation and Practise in Shankara's Advaita Vedanta. Austin: University of Texas.
Das, Arvind N. (1982). 'Peasants and Peasant Organisations: The Kisan Sabha in Bihar', in Arvind N. Das (Ed.), Agrarian Movements in India: Studies in 20th Century Bihar. London: Frank Cass, pp. 48–87.
Das, Arvind N. (2008). 'Swami and His Friends: Sahajananda Saraswati and Those Who Refused to Let the Past of Bihar's Peasant Movements to Become History', in William R. Pinch (Ed.), Speaking of Peasants in Indian History and Politics – Essays in Honour of Walter Hauser. New Delhi: Manohar, pp. 193–232.
Das, Gurcharan (2009). The Difficulty of Being Good – On the Subtle Art of Dharma. New Delhi: Allen Lane (Penguin).
Dasgupta, Surendra Nath (1975; first edition in 1922). A History of Indian Philosophy (5 Volumes). Delhi: Motilal Banarasidass.
Davis, Richard H. (2015). The Bhagavad Gita: A Biography. Princeton: Princeton University Press.
Dayananda Saraswati, Swami (1989). The Teaching of the Bhagavad Gita. New Delhi: Vision Books.
Dayananda Saraswati, Swami (2007-a). Value of Values. Chennai: Arsha Vidya Research & Publication Trust.
Dayananda Saraswati, Swami (2007-b). Srimad Bhagavad Gita. Chennai: Arsha Vidya Research & Publication Trust.
Dayananda Saraswati, Swami (2011). Bhagavad Gita: Home Study Course (9 Volumes). Chennai: Arsha Vidya Research and Publication Trust.
Desai, Mahadev (Tr. with additional Introduction by) (1946). The Gospel of Selfless Action or the Gita according to Mahatma Gandhi. Ahmedabad: Navjivan Publishing House.
Desai, Meghnad (2014). Who Wrote the Bhagavadgita? A Secular Inquiry into a Sacred Text. Noida: Element (Harper Collins).
Disciples, His Eastern and Western (1989). The Life of Swami Vivekananda (2 Volumes). Calcutta: Advaita Ashrama.
Disciples, His Eastern and Western (2001; seventh edition). The Life of Swami Vivekananda (2 Volumes). Kolkata: Advaita Ashrama.
Diwakar, R. R. (1999; first published in 1953). Mahayogi – Life, Sadhana & Teachings of Sri Aurobindo. Mumbai: Bharatiya Vidya Bhavan.
Easwaran, Eknath (1997). The Bhagavad Gita for Daily Living – Volume I: The End of Sorrow; Volume II: Like a Thousand Sons; Volume III: To Love Is to Know Me. Mumbai: Jaico, in association with Nilgiri Press, Tomales, CA.
Easwaran, Eknath (2012). Essence of the Bhagavad Gita – A Contemporary Guide to Yoga, Meditation and Indian Philosophy. Mumbai: Jaico, in association with Nilgiri Press, Tomales, CA.
Edgerton, Franklin (Ed. and Tr.) (1944). The Bhagavad Gita (2 Volumes). Cambridge, MA: Harvard Oriental Series – 38.
Farquhar, J. N. (1925). 'The Organisation of Sannyasis of Vedanta', The Journal of the Bombay Branch of the Royal Asiatic Society, 1 (new series), July, pp. 479–86.
Fischer, Louis (1998; first published in 1953). The Life of Mahatma Gandhi. Mumbai: Bharatiya Vidya Bhavan.
French, Harold W. (1991). 'Swami Vivekananda's Use of the Bhagavadgita', in Minor (Ed.), pp. 131–46.
Gambhirananda, Swami (Tr. and Ed.) (1984). Bhagavadgita with the Commentary of Sankaracharya. Kolkata: Advaita Ashrama.
Gambhirananda, Swami (Tr.) (1998). Bhagavad-Gita with an Annotation, Gudhartha-Dipika by Madhusudana Saraswati. Kolkata: Advaita Ashrama.
Gandhi, M. K. (1927). An Autobiography or the Story of My Experiments with Truth. Ahmedabad: Navjivan Trust.
Gandhi, M. K. (1972). Collected Works of Mahatma Gandhi. New Delhi: Government of India, Publications Division.
Gandhi, M. K. (1980). The Bhagavadgita. Delhi: Orient Paperback.
Gowda, Nagappa K. (2011). The Bhagavadgita in the Nationalist Discourse. New Delhi: Oxford University Press.
Griffith, R. D. (1849). 'An Essay on the Bhagavat-Geeta', in J. Garret (Ed.), The Bhagavat-Geeta or Dialogues of Krishna and Arjoon in Eighteen Lectures. Bangalore: Wesleyan Missionary Press, pp. xxxvii–lvii.
Gundappa, D. V. (2001). Shrimad-Bhagavad-Gita Tatparya Athava Jeevana Dharma Yoga (Kannada). Mysuru (Mysore): Kavyalaya.
Harder, Hans (2001). Bankimchandra Chattopadhyay's Srimadbhabadgita – Translation and Analysis. Delhi: Manohar.
Harshananda, Swami (2008). A Concise Encyclopaedia of Hinduism (3 Volumes). Bangalore: Ramakrishna Math.
Hauser, Walter (1995). Swami Sahajananda and the Peasants of Jharkhand. New Delhi: Manohar.
Heehs, Peter (1989). Sri Aurobindo: A Brief Biography. New Delhi: Oxford University Press.
Heehs, Peter (Ed.) (1999). The Essential Writings of Sri Aurobindo. New Delhi: Oxford University Press.
Herur, Suresh (2001). Subhashita-Manjari (Script and Tr. in Kannada). Bengaluru: Kannada Sahitya Parishattu.
Hiltebeitel, Alf (2011). Dharma – Its Early History in Law, Religion, and Narrative. New Delhi: Oxford University Press.
Hirst, Jaqueline (2000). 'Upholding the World: Dharma in the Bhagavadgita', in Lipner (Ed.), pp. 48–66.
Iyer, Raghavan (Ed.) (1993). The Essential Writings of Mahatma Gandhi. New Delhi: Oxford University Press.
Jinarajadasa, C. (1996). A Short Biography of Annie Besant. Adyar: TPH (Theosophical Publishing House).
Jordens, J.T.F. (1991). 'Gandhi and the Bhagavadgita', in Minor (Ed.), pp. 88–109.
Judge, William Quan (1969). Bhagavad Gita, Recension Combined with Essays on the Gita. Pasadena: Theosophical University Press.
Kalawar, Jayant (2012). The Advaita Life Practice: Balancing Relationships, Work & Money in the Twenty-first Century. West Windsor, NJ: First Windsor Group LLC.
Kane, P. V. (1990; first published in 1932–62). History of Dharma-shastras (Ancient and Medieval Religious and Civil Law in India). Poona (Pune): Bhandarkar Oriental Research Institute.
Kapoor, J. C. (1983). Bhagavadgita: An International Bibliography of 1785–1979 Imprints. New York: Garland Publishers.
Karve, Irawati (1991). Yuganta – The End of an Epoch. Hyderabad: Disha Books (Orient Longman) (first published in Marathi in 1967, and first English Tr. in 1969).
Khair, Gajanan Shripat (1997; first edition in 1969). Quest for the Original Gita. Mumbai: Somaiya.
Kosambi, D. D. (1962). 'Social and Economic Aspects of the Bhagavad-Gita', in his Myth and Reality: Studies in the Formation of Indian Culture. Bombay: Popular Prakashan, pp. 12–41.
Kumarappa, Bharatan (1979; first published in 1933). The Hindu Conception of the Deity. Delhi: Inter-India.
Kurien, C. T. (1973). Guide to Research in Economics. Madras (Chennai): Sangam Publishers.
Lal, B. B. (2013). Historicity of the Mahabharata: Evidence of Literature, Art & Archaeology. New Delhi: Aryan Books International.
Lash, Nicholas (2000). 'The Purification of Desire', in Lipner (Ed.), pp. 1–10.
Malhotra, Rajiv (2011). Being Different – An Indian Challenge to Western Universalism. Noida: Harper Collins.
Malhotra, Rajiv (2014). Indra's Net – Defending Hinduism's Philosophical Unity. Noida: Harper Collins.
Malinar, Angelika (2009; first published in 2007). The Bhagavadgita – Doctrines and Contexts. Cambridge: Cambridge University Press.
Minor, Robert N. (Ed.) (1991; first edition in New York in 1986). Modern Indian Interpreters of the Bhagavadgita. Delhi: Sri Satguru Publications (A Division of Indian Books Centre).
Minor, Robert N. (1991-a). 'Sri Aurobindo as a Gita-yogin', in Minor (Ed.), pp. 61–87.
Minor, Robert N. (1991-b). 'Sarvepalli Radhakrishnan and "Hinduism" Defined and Defended', in Robert D. Baird (Ed.), Religion in Modern India. New Delhi: Manohar, pp. 421–54.
Monier-Williams, Monier (2001; first published in 1875). Indian Wisdom. New Delhi: Rupa.
Munshi, K. M. (1988; first published in 1947). Bhagavad Gita & Modern Life. Mumbai: Bharatiya Vidya Bhavan.
Nadkarni, M. V. (2013). Handbook of Hinduism – Ancient to Contemporary. New Delhi: Ane Books Pvt Ltd.
Nadkarni, M. V. (2014; second edition). Ethics for Our Times: Essays in Gandhian Perspective. New Delhi: Oxford University Press.
Nehru, Jawaharlal (1981; first published in 1946). The Discovery of India. New Delhi: Oxford University Press.
Neufeldt, Ronald N. (1991). 'A Lesson in Allegory: Theosophical Interpretations of the Bhagavadgita', in Minor (Ed.), pp. 11–33.
Nirodbaran (1990). Sri Aurobindo for All Ages. Pondicherry: Sri Aurobindo Ashram.
Oka, S. V. (1957). The Uttara Gita with a Translation into English with Appendices. Poona (Pune): Bhandarkar Oriental Research Institute.
Osho (2006; first edition in 1980). Krishna – The Man and His Philosophy. Mumbai: Jaico.
Otto, Rudolf (1939; first published in German in 1933). The Original Gita: The Song of the Supreme Exalted One (English Tr. by J. E. Turner). London: George Allen & Unwin.
Pai, Roopa (2015). The Gita for Children. Gurgaon: Hachette India.
Palshikar, Sanjay (2014). Evil and the Philosophy of Retribution – Modern Commentaries on the Bhagavad-gita. New Delhi: Routledge.
Pande, Govind Chandra (1994). Life and Thought of Shankaracharya. Delhi: Motilal Banarasidass.
Pandit, M. P. (1998). Sri Aurobindo. New Delhi: Munshiram Manoharlal.
Panikkar, K. M. (1961). Hindu Society at Cross Roads. Delhi: Asia Publishing House.
Parekh, Bhikhu (2001). Gandhi – A Very Short Introduction. Oxford: Oxford University Press.
Parel, Anthony J. (Ed.) (2009). M. K. Gandhi – Hind Swaraj and Other Writings. New Delhi: Cambridge University Press.
Parthasarathy, A. (2011; first edition in 2008). Bhagavad Gita. Mumbai: A. Parthasarathy.
Patchen, Nancy (2003 Reprint). The Journey of a Master – Swami Chinmayananda – The Man, the Path, the Teaching. Mumbai: Central Chinmaya Mission Trust.
Prabhavananda, Swami and Christopher Isherwood (Trs.) (1944). The Song of God: Bhagavadgita (with an Introduction by Aldous Huxley). Hollywood: M. Rodd Co.
Prabhupada, A. C. Bhakti Vedanta Swami (1983; first published in 1968). The Bhagavad Gita as It Is. Los Angeles: The Bhaktivedanta Book Trust.
Purani, A. B. (1978). The Life of Sri Aurobindo. Pondicherry: Sri Aurobindo Ashram.
Purohit Swami (Tr.) (1935). The Geeta: The Gospel of the Lord Shri Krishna. London: Faber.
Pusalker, A. D. (1955). Studies in the Epics and Puranas. Bombay (Mumbai): Bharatiya Vidya Bhavan.
Radhakrishnan, S. (1939). Eastern Religions and Western Thought. Oxford: Clarendon Press.
Radhakrishnan, S. (1971; first published in 1927). The Hindu View of Life. London: Unwin.
Radhakrishnan, S. (1996; first published in 1923). Indian Philosophy (2 Volumes). New Delhi: Oxford University Press.
Radhakrishnan, S. (1998; first published in 1948). The Bhagavadgita (with an Introductory Essay, Sanskrit Text, English Translation, and Notes). New Delhi: Harper Collins.
Rajagopalachari, C. (2006; first published in 1951). Mahabharata. Mumbai: Bharatiya Vidya Bhavan.
Ramakrishnananda, Swami (1959). Life of Sri Ramanuja. Madras (Chennai): Sri Ramakrishna Math.
Ramdas, Swami (1976; first published in 1966). Gita Sandesh (Message of the Gita). Bombay (Mumbai): Bharatiya Vidya Bhavan.
Ranganathananda, Swami (2000). Universal Message of the Bhagavad Gita – An Exposition of the Gita in the Light of Modern Thought and Modern Needs (3 Volumes). Kolkata: Advaita Ashrama.
Rangaswami, Sudhakshina (Ed.) (2012). The Roots of Vedanta: Selections from Shankara's Writings. New Delhi: Penguin.
Richter, Duncan (2008). Why Be Good? A Historical Introduction to Ethics. New York and Oxford: Oxford University Press.
Robinson, Catherine A. (2013; first published in 2006). Interpretations of the Bhagavad-gita and Images of the Hindu Tradition – The Song of the Lord. London and New York: Routledge.
Row, Subba (1921). The Philosophy of the Bhagavad Gita. Madras (Chennai): Theosophical Publishing House.
Row, Subba (1934). Notes on the Bhagavad Gita. Point Loma, CA: Theosophical University Press.
Sampatkumaran, M. R. (1985). The Gitabhashya of Ramanuja. Mumbai: Ananthacharya Indological Research Institute.
Saraswati, Swami Sahajananda (2000; first published in 1952). Mera Jeevan Sangharsh (My Life Struggle) (Hindi) (Edited by Awadesh Pradhan). Delhi: Granth Shilpi.
Saraswati, Swami Sahajananda (2003). Gita Hridaya. Volume 3 under Swami Sahajananda Saraswati Rachanavali (Hindi) (Edited by Raghav Sharan Sharma, 6 Volumes). New Delhi: Prakashan Samsthan.
Sastri, Alladi Mahadeva (1977). The Bhagavadgita with the Commentary of Sri Sankaracharya. Madras (Chennai): Samata Books.
Sen, Amartya (1982). 'Rational Fools', in his Choice, Measurement and Growth. Oxford: Oxford University Press, pp. 84–106 (First published in 1977 as 'Rational Fools: A Critique of the Behavioural Foundation of Economic Theory', Philosophy and Public Affairs, Vol. 6(4), pp. 317–44.)
Sen, Amartya (1990; first published in 1987). On Ethics and Economics. New Delhi: Oxford University Press.
Sen, Amartya (2005). The Argumentative Indian – Writings on Indian History, Culture and Identity. London and New Delhi: Allen Lane (Penguin).
Sen, Amartya (2009). The Idea of Justice. London and New Delhi: Allen Lane (Penguin).
Seshadri, Kandadai (1996). 'Ramanuja: Social Influence of His Life and Teaching', Economic and Political Weekly, 31(5), February 3, pp. 292–98.
Shankar, Pandit Bhavani (1966). The Doctrine of the Bhagavad Gita – The Path of Initiation. Bombay (Mumbai): Popular Prakashan.
Shankar, Sri Sri Ravi (2013). Bhagavad Gita – Commentary (Chapters 1 to 6). Bangalore: Sri Sri Publications Trust.
Sharma, Arvind (1985). The Hindu Gita: Ancient and Classical Interpretations of the Bhagavadgita. La Salle, IL: Open Court.
Sharma, Arvind (2000). Classical Hindu Thought – An Introduction. New Delhi: Oxford University Press.
Sharma, B.N.K. (1986; first edition in 1962). Philosophy of Shri Madhvacharya. Delhi: Motilal Banarasidass.
Sharma, B.N.K. (1989). The Bhagavadgita Bhashya of Shri Madhvacharya (Tr. with an Introduction). Bangalore: Anandatirtha Pratishthana.
Sharma, B.N.K. (1997; first edition in 1961). Madhva's Teachings in His Own Words. Mumbai: Bharatiya Vidya Bhavan.
Sharma, R. S. (2011). Economic History of Early India. New Delhi: Viva Books.
Shriranga (Adya Rangacharya) (1941). Gita-gambhirya Athava Shrikrishnana Samaja-shastra (Kannada) (The Profundity of the Gita, or Sociology of Shri-krishna). Dharwad: Rangamanga Prakashana.
Shriranga (Adya Rangacharya) (1972). Gita-darpana (Kannada) (A Mirror to the Gita). Sagar: Akshara Prakashana.
Sinha, Mishka (2010). 'Corrigibility, Allegory, Universality – A History of the Gita's Transnational Reception, 1785–1945', Modern Intellectual History, 7(2), pp. 297–317.
Sivananda, Swami (2009). Ethics of the Bhagavad Gita. Shivanandanagar: The Divine Life Society.
Sivananda, Swami (2013). The Bhagavad Gita. Shivanandanagar: The Divine Life Society.
Smith, Adam (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. Reprinted R. H. Campbell and A. S. Skinner (Eds) (1976). Oxford: Clarendon Press.
Smith, Adam (1790; first published in 1759). The Theory of Moral Sentiments. London: T. Cadell. (Republished by Clarendon Press, Oxford, in 1976.)
Srinivasachari, P. N. (2009 reprint of the third edition in 1996; first edition in 1943). The Ethical Philosophy of the Gita. Madras (Chennai): Sri Ramakrishna Math.
Stevenson, Robert N. (1991). 'Tilak and the Bhagavadgita's Doctrine of Karmayoga', in Minor (Ed.), pp. 44–60.
Swarupananda, Swami (1982; thirteenth edition). Srimad Bhagavad-gita (with Text, Word-for-word Translation, English Rendering, Comments, & Index). Calcutta (Kolkata): Advaita Ashrama.
Tapasyananda, Swami (n.d.). Shri Ramanuja – His Life, Religion & Philosophy. Madras (Chennai): Sri Ramakrishna Math.
Tejomayananda, Swami (1997). Dhyanasvarupam – The Principles and Practice of Meditation. Mumbai: Central Chinmaya Mission Trust.
Telang, K. T. (1970; first published in 1882). The Bhagavadgita with the Sanatsujatiya and the Anugita (under the series Sacred Books of the East, Tr. with an Introduction and Edited by Max Muller, Volume VIII). Delhi: Motilal Banarasidass.
Thapan, Anita Raina (Ed.) (2005). Swami Chinmayananda Reader. New Delhi: Penguin.
Tilak, Bal Gangadhar (1936). Shrimad-Bhagavad-gita-rahasya or Karmayoga-shastra (English Tr. by B. S. Sukthankar, 2 Volumes). Poona (Pune): Tilak Bros. [<http://sanskritebooks.org>].
Upadhyaya, Kashi Nath (1971). Early Buddhism and the Bhagavadgita. Delhi: Motilal Banarasidass.
Veerabhadrappa, B. V. (2004). The Bhagavadgita – A Rational Enquiry. Bangalore: Navakarnataka (Tr. from Kannada original published in 1997 by D. K. Seetharama Sastry).
Vidyatmananda, Swami (Ed.) (2006; first published in 1972). What Religion Is: In the Words of Swami Vivekananda. Kolkata: Advaita Ashram.
Vivekananda, Swami (1997–2001). The Complete Works of Swami Vivekananda (CWSV). Kolkata: Advaita Ashram.
Warrier, A. G. Krishna (Tr.) (1983). Srimad Bhagavad Gita Bhashya of Sri Shankaracharya. Madras (Chennai): Sri Ramakrishna Math.
Wilkins, Charles (1785). The Bhagavadgeeta, or Dialogues of Kreeshna and Arjoon; in Eighteen Lectures; with Notes, Translated from the Original in Sanskreet. London: C Nourse.
Yamunacharya, M. (1988; first edition in 1963). Ramanuja's Teachings in His Own Words. Mumbai: Bharatiya Vidya Bhavan.
Yardi, M. R. (1991). The Bhagavadgita as a Synthesis. Poona (Pune): Bhandarkar Oriental Research Institute.
Yardi, M. R. (2011; first edition in 1991). Shri Jnaneshwar's Bhavartha-Dipika Popularly Known as Jnaneshwari (Translated from Marathi). Mumbai: Bharatiya Vidya Bhavan.
Yogananda, Sri Sri Paramahansa (Indian Edition 2002; first edition from Los Angeles in 1995). God Talks with Arjuna: The Bhagavad Gita – Royal Science of God Realisation – The Immortal Dialogue between Soul and Spirit – A New Translation and Commentary (2 Volumes). Kolkata: Yogoda Satsang Society of India.
Zaehner, R. (Ed. and Tr.) (1969). The Bhagavad-Gita. Oxford: Clarendon.
# Name Index
Adidevananda, Svami , , n2,
Agrawal, Purushottam , ,
Agarwal, Satya P. , , ,
Akho
Ambedkar, B. R. xi, , , , , , n6
Amur, G. S. x, ,
Anandashram, Swami
Arendt, Hannah
Arjuna ,
Arnold, Sir Edwin , ,
Aurobindo, Sri , –, ,
Balagangadhara, S. N.
Banavathy, Vinayachandra
Bankimachandra –
Basham, A. L.
Baudhayana ,
Besant, Annie –
Bhaskara
Bhaumik, Mani
Bhave, Vinoba –
Bhoomananda, Swami xi,
BORI ,
Brockington, John , , , ,
Buddha, the , , , , ,
Charvaka
Chatterjee, Debashish
Chatterjee, Mohini
Chinmayananda, Swami –,
Chopra, Deepak , ,
Choudry, Anuradha
Christ, Jesus
Dalal, Neil Akshay
Dandekar, V. M.
Das, Arvind N.
Das, Gurucharan
Dasgupta, S. N. , , , n2
Davis, Richard n5, , , , , ,
Dayananda, Saraswati Swami , –, , , , , ,
Desai, Mahadev , , , ,
Desai, Meghnad , , , , , ,
Easwaran, Eknath –, , , ,
Farquhar, J. N. n6,
Fazl, Abul ,
Fischer, Louis
French, Harold
Gambhirananda, Swami , n2, n6
Gandhi (Mahatma) , , , , , , –, , , , , , ,
Garbe, Richard von
Gowda, Nagappa K. , , , , , ,
Griffith, R. D.
Gundappa, D. V. , –
Harshananda, Swami , , , , n2, n6
Hastings, Warren , ,
Heehs, Peter n11
Hiltebeitel, Alf
Hirst, Jaqueline ,
Huxley, Aldous
Iyer, Raghavan ,
Jaspers, Karl
Jnaneshwar –
Jones, William
Jordens, J. T. F. ,
Judge, W. Q.
Kalawar, Jayant , n2
Kane, P. V.
Kant, Immanuel
Kapila
Kapoor, J. C.
Karve, Irawati ,
Keynes, John Maynard
Khair, Gajanan S. ,
Kosambi, D. D. xi, , , ,
Krishna, Lord , , –; was he right in advising Arjuna to fight? , , , –, ; his love
Kumarappa, Bharatan (J. C.) ,
Kurien, C. T.
Lal, B. B. , ,
Lash, Nicholas ,
Laxman, R. K.
Madhusudana Saraswati
Madhva (Madhvacharya) –
Mahadevan, B.
Malhotra, Rajiv
Malinar, Angelika , , ,
Marx, Karl
Mill, James
Minor, Robert , , , ,
Monier-Williams, Sir M. ,
Muller, F. Max
Munshi, K. M. –
Nadkarni, M. V. n10, , , n9, n10
Nagarasa
Narahari
Nehru, Jawaharlal –
Neufeldt, R. N. , , , , n4
Nimbarka
Osho
Otto, Rudolf , n6, ,
Pai, Roopa , ,
Palshikar, Sanjay
Pande, Govinda Chandra , ,
Pandit, M. P.
Panikkar, K. M.
Parthasarathy, A.
Prabhupada, A. C. Bhaktivedanta Swami , –
Puranik, Hayavadana n1
Pusalker, A. D. , ,
Radhakrishnan S. , n2, , , , , –, , , , , , ,
Rai, Lala Lajpat –
Rajagopalachari, C.
Ramakrishnananda
Ramakrishna Paramahamsa
Ramanuja (Acharya) –
Ramdas, Swami –
Ranganathananda, Swami –,
Rangaswami, Sudhakshina
Rath, Nilkanth
Richter, Duncan
Robinson, Catherine , , , , , , , , ,
Roy, Raja Rammohan –
Sampatkumaran, M. R. n2
Saraswati, Swami Sahajananda –
Sastri, Alladi n2
Schlegel, Friedrich von
Schlegel, Wilhelm von
Sen, Amartya xi, , , , , ,
Shankar, Pandit Bhavani
Shankar, Sri Sri Ravi
Shankara (Acharya) (Adi-) , –
Sharma, Arvind , , , n5
Sharma, B. N. K. , , n2,
Sharma, R. S.
Shriranga ,
Sinha, Mishka , , , n6,
Sivananda, Swami –
Smith, Adam ,
Srinivasachari, P. N.
Stevenson, R. N.
Sukhtankar, V. S.
Swarupananda, Swami , , , , , , , ,
Tapasyananda, Swami , ,
Telang, K. T. , , , , , ,
Thapan, Anita R.
Thoreau, Henry David
Tilak, Bal Gangadhar , , n1, , –,
Tiruvalluvar
Upadhyaya, Kashi Nath , ,
Vallabha (Acharya)
Veerabhadrappa, B. V. , ,
Vivekananda, Swami , , , n7, , –, , ,
Vyasa , , , n5
Warrier, A. G. Krishna , , , n2,
Weber, Max ,
Wilkins, Charles –,
Yamunacharya, M. , , ,
Yardi, M. R. , , , , , , , , ,
Yogananda, Swami , –
Zaehner, R. C. ,
# Subject Index
Note: Page numbers in italics indicate tables.
adharma –,
Advaita , , , ,
Advaita, Dvaita and Vishishtadvaiata: Reconciliation –
ahamkara
akarta ,
akshohinis n3
ananda
Anu-Gita
atheist
Atman , , , ; see also self
avatara ,
bhakti , , , , , , , , , ,
BORI ,
Brahmasutras , , , ,
brahma-vidya ,
brand value –
Buddhism , , ,
caste, caste system , ,
charity
code of conduct
corporate social responsibility
death, perspective on
deontology –
desire, the Gita's attitude to –
detachment , , ,
dharma –,
Dvaita ,
Dvaitadvaita
ethics in the Gita –
free will in the Gita , , , n11
Gita, the: authorship of –; background story (from the Mahabharata) –; contradictions in –; criticisms of xi, –; date of –; dharma in – (see also dharma); Dhyanam ; environmental implications ; first English translation –; Gita-rahasya –; historicity of –; interpolation? –; Jayanti ; Kashmiri Recension ; liberalism (not fanatical or parochial) –; Mahatmyam , ; and Marxism –; modern interpreters of –; the other Gitas –; place in Mahabharata –; and pursuit of happiness –; reactionary? –; in the rest of the Mahabharata and Puranas ; sacred text –; Shruti or Smriti? –; theology of –; and violence –, ,
God (Ishwara) , , ; as impersonal (nirguna) , , , ; personal (saguna) , , , , , , , –; relation with jiva –; stages to –; two ways of looking at , ; and the world –
happiness and the Gita –
Hinduism , , , , , , ,
holistic approach/view ,
humility
jnana-marga (yoga) , , –
Jnaneshwari –
karma-marga (yoga) , , , , , –, , , , , , , , , , , , –, –, –
Karnataka Bhagavadgite
karta
loka-hita/loka-sangraha xi, , , , ,
Mahabharata, the , , ,
Manisha-panchakam ,
Marxism ,
mathas –
maya ,
meditation –, , –, –n8
mind, states of
modern leaders of Indian Renaissance: their three thrusts –
moha , –
moksha/mukti (liberation) , , , , , –, , ,
non-attachment , ; see also detachment
non-violence
pancha-bhedas
personnel management
prapatti ,
Prasthanatriya ,
Puranas ,
purusharthas , , , ,
Renaissance, Indian
Rigveda
rituals , , , ,
sacred text –
Sadhana viii, ix, , , , –; relative roles of ways to , , , , , –, , ,
Sankhya ,
sarva-bhuta-hita xi
Satya: Adhyatmika and Vyavaharika , , , –
self , , ; three selves
shadvairis
sharira-yatra x,
shraddha , n2
shramadana
Shuddhadvaita
sin/sinner, Gita's attitude to , ,
success in life –
svakarma ,
sw(v)adharma , , , , ,
theosophical society –
three stages in the spread of the gita
trigunas (sattvika, rajasika and tamasika) , , –, –; –
truth pursuit in research –
upanishads ,
Uttara-Gita, the
varnas ,
virtuous cycle (chakra) , –,
Vishishtadvaita –
Vishwarupa
work ethics
world: its reality , , , ,
yajna , , –
yoga-shastra ,
display: none;
}
body.docs div.docs {
margin: 0 !important;
border: none !important;
}
{"url":"https:\/\/www.physicsforums.com\/threads\/uncertainty-rule.333528\/","text":"# Uncertainty rule\n\n1. Aug 30, 2009\n\n### Rajini\n\nDear PF members,\nI want to know some accurate informations regarding the time-energy uncertainty principle.\nFrom several websites i got that $$\\Delta$$E$$\\Delta$$t$$\\geq$$$$\\hbar$$\/2 (for e.g., hyperphysics, wiki, etc.).\nBut in some books they use $$\\Delta$$E$$\\Delta$$t$$\\geq$$$$\\hbar$$.\nCan anyone clear this why it is like that...Also is there any small derivation for that?\n\nThanks.\n\n2. Aug 31, 2009\n\n### clem\n\nThe uncertainty is of order hbar. The 1\/2 is the absolute minimum for a Gaussian distribution in time and energy, which is not usually the case for energy and time.\nSome books just don't bother with factors like 1\/'2 when giving order of magnitude lower limits.\n\n3. Aug 31, 2009\n\n### Count Iblis\n\nThere is no time energy uncertainty relation like that at all! See e.g. here:\n\nhttp:\/\/arxiv.org\/abs\/quant-ph\/0609163\n\nPages 6, 7 and 8.\n\n4. Aug 31, 2009\n\n### Count Iblis\n\n5. Aug 31, 2009\n\n### dx\n\nIn quantum mechanics, energy eigenstates have a time dependence of the form $$\\exp(i\\omega t)$$. Since all solutions to the dynamical equation (Schrodinger equation) are superpositions of energy eigenstates (on spacetime), the time dependence of an amplitude will be generally of the form\n\n$$A(t) = \\int_{-\\infty}^{\\infty} \\tilde{A}(\\omega) e^{i\\omega t} d\\omega$$\n\nwhere $$\\tilde{A}$$ is the Fourier transform of A(t). If A(t) is mostly finite only in a region of size \u0394t, then by familiar properties of the Fourier transform, $$\\tilde{A}(\\omega)$$ will be finite in region of size \u0394\u03c9 ~ 1\/\u0394t, or (using $$E = \\hbar \\omega$$)\n\n\u0394E \u0394t ~ h\n\nThe precise constant of proportionality depends on the definition of '\u0394', i.e. what we mean by \"mostly finite only in a region of size \u0394t\".\n\n6. 
Aug 31, 2009\n\n### clem\n\nRegardless of formalism, the natural width of a spectral line is related to the lifetime of the state by $$\\Delta E\\Delta t\\sim\\hbar$$.\n\n7. Sep 7, 2009\n\nHi Dx,","date":"2017-12-15 14:24:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7783397436141968, \"perplexity\": 1833.90289610306}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-51\/segments\/1512948572676.65\/warc\/CC-MAIN-20171215133912-20171215155912-00452.warc.gz\"}"} | null | null |
{"url":"https:\/\/solvedlib.com\/exercise-23-12-condensed-financial-data-of-splish,233320","text":"# Exercise 23-12 Condensed financial data of Splish Company for 2017 and 2016 are presented below. SPLISH...\n\n###### Question:\n\nExercise 23-12\n\nCondensed financial data of Splish Company for 2017 and 2016 are presented below.\n\n SPLISH COMPANY COMPARATIVE BALANCE SHEET AS OF DECEMBER 31, 2017 AND 2016 2017 2016 Cash $1,830$1,140 Receivables 1,720 1,280 Inventory 1,610 1,900 Plant assets 1,870 1,690 Accumulated depreciation (1,210 ) (1,190 ) Long-term investments (held-to-maturity) 1,270 1,410 $7,090$6,230 Accounts payable $1,230$910 Accrued liabilities 200 250 Bonds payable 1,410 1,540 Common stock 1,880 1,710 Retained earnings 2,370 1,820 $7,090$6,230\n SPLISH COMPANY INCOME STATEMENT FOR THE YEAR ENDED DECEMBER 31, 2017 Sales revenue $6,880 Cost of goods sold 4,680 Gross margin 2,200 Selling and administrative expenses 930 Income from operations 1,270 Other revenues and gains Gain on sale of investments 80 Income before tax 1,350 Income tax expense 540 Net income 810 Cash dividends 260 Income retained in business$550\n\n##### FanAUse Appendix F in the textbook to calculate the binding energy of ZH (deuterium). Express your answer using four significant figures:Binding energy 2.243 MeVSubmitPrevious AnswersAnswer Requested\nFanA Use Appendix F in the textbook to calculate the binding energy of ZH (deuterium). Express your answer using four significant figures: Binding energy 2.243 MeV Submit Previous Answers Answer Requested...\n##### What type of nuclear reaction is found on the sun? spontaneous decay artificial transmutationfissionfusion\nWhat type of nuclear reaction is found on the sun? spontaneous decay artificial transmutation fission fusion...\n##### How do you simplify (6 - 1 + 7) using order of operations?\nHow do you simplify (6 - 1 + 7) using order of operations?...\n##### 3. 
During volcanic eruptions, chunks of solid rock can be blasted out of the volcano; these...\n3. During volcanic eruptions, chunks of solid rock can be blasted out of the volcano; these projectiles are called volcanic bombs. Figure shows a cross section of Mt. Fuji, in Japan.a. At what initial speed would a bomb have to be ejected, at angle Theta o = 35 Degree to the horizontal, from the ven...\n##### Please help if u really know Identify the characteristic kind of molecular motion that absorbs electromagnetic...\nplease help if u really know Identify the characteristic kind of molecular motion that absorbs electromagnetic radiation of each type listed below e.g. infrared is for vibration. Motion Radiation Radio frequency Microwave Infrared Vibration Visible and ultraviolet X-ray...\n##### In the United States Capitol there is an elliptical chamber in which a person whispering while standing at one focus can easily be heard by another person standing at the focus. The whispering gallery in the Capitols Statuary Hall is 46 ft wide and 96 ft long:politician noted this feature of the chamber because the desk of the opposing partys floor leader was at one focus How far from the desk should the politician stand to overhear the floor leader's whispered conversation?b) How far from\nIn the United States Capitol there is an elliptical chamber in which a person whispering while standing at one focus can easily be heard by another person standing at the focus. The whispering gallery in the Capitols Statuary Hall is 46 ft wide and 96 ft long: politician noted this feature of the ch...\n##### We found that the marketing research department for the company that manufactures and sells memory chips for microcomputers established the following price-demand and revenue functions: p(x) = 75 - 4x Price-demand function R(x) = xp(x) = x(75 4x) Revenue function where p(x) is the wholesale price in dollars at which million chips can be sold and R(x) is in millions of dollars. 
Both functions have domain <x<15.400400200-20040O(B) Find the output that will produce the maximum revenue_mill\nWe found that the marketing research department for the company that manufactures and sells memory chips for microcomputers established the following price-demand and revenue functions: p(x) = 75 - 4x Price-demand function R(x) = xp(x) = x(75 4x) Revenue function where p(x) is the wholesale price...\nFind the number of 13-blt binary strings that start with Oo1, end wlth 01001, start with O01 and end wlth 101001 Selected Ansvrer;...\n##### ---- --- m uuuca VIILUDLU 0I 2 .11 LICU UIC IIC OILY LIVU ILVCuIILLILL 8-3 in...\n---- --- m uuuca VIILUDLU 0I 2 .11 LICU UIC IIC OILY LIVU ILVCuIILLILL 8-3 in her portfolio, what is her portfolio's beta? REQUIRED RATE OF RETURN Assume that the risk-free rate is 5.5% and the required return on the market is 12%. What is the required rate of return on a stock with a beta of 2?...\n##### How many photons are produced in a laser pulse of 0.177 J at 413 nm? ___________...\nHow many photons are produced in a laser pulse of 0.177 J at 413 nm? ___________ photons...\n##### Question 3 Coronado Corporation sells three different models of a mosquito \"zapper.\" Model A12 sells for...\nQuestion 3 Coronado Corporation sells three different models of a mosquito \"zapper.\" Model A12 sells for $54 and has variable costs of$39. Model B22 sells for $105 and has variable costs of$73. Model C124 sells for $411 and has variable costs of$309. 
The sales mix of the three models is A...","date":"2022-07-07 14:35:14","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2442316710948944, \"perplexity\": 4189.7622784981}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104692018.96\/warc\/CC-MAIN-20220707124050-20220707154050-00054.warc.gz\"}"} | null | null |
{"url":"https:\/\/cs.stackexchange.com\/questions\/128976\/how-to-group-intervals-which-overlap-by-some-amount","text":"# How to group intervals which overlap by some amount?\n\nI have an algorithm that generates a list of intervals. The algorithm is run m times. Lets mark the intervals as tuples (s1, e1), (s2, e2), .., (sn, en). It is possible to add the run ID to the tuple (though I don't think it helps).\n\nThe goal is to \"clean\" spurious ranges (appearing in few runs) and to find groups of at least k almost perfectly overlapping intervals out of m runs of the algorithm, where k is close to m. E.g. if we have 10 runs, k will be 7-9.\n\nBy almost perfectly overlapping I mean >0.95 overlap, but exact requirement is user-defined (won't be 0.5 or such). The overlap should be between all intervals in the group (i.e. intersection). However, since I am trying to translate an eyeball analysis into exact requirements this requirement might be too strong ..\n\nThe differences in the intervals generated by multiple runs of the algorithm stem from a random factor (seed) as well as slightly different ranges may passing requirements, therefore there is some 'wiggling room' in the results. It also detects some ranges infrequently (think local minima), ranges which should be ignored as spurious.\n\nThe origin of the problem is running multiple times some algorithm that searches a range for \"interesting\" areas. By the nature of the algorithm, each run may return slightly different ranges as well as, at times, a range not seen before.\n\nThe intervals can be viewed as integers, though in reality the intervals I get may be real number in any range. I assume I can always use a min-max scaler to, for example, have the ranges have (approximated) integer values in the 0-1000 range or similar.\n\nBelow is a (very simple) example of the problem marked as I would do manually. 
The three green intervals and the three red intervals should be reported as groups, whereas the other three are a group on their own. The overlap of the blue interval is too small. The Yellow interval is not \"similar\" to the red ones in size.\n\nThere may be problems like in the diagram below which I am not sure how to address. The green (bottom) interval and the one above it are certainly \"the same\" as are the red one and the one below, however the green and red are already too far apart to be considered a group.\n\nMy initial idea was to build an interval graph. On that I can greedily find the point at which most intervals intersect, than somehow (no clear idea how yet) I would remove intervals which should not belong to the group. Once done I remove the group from the graph and repeat.\n\nAnother method I thought about, but which is O(N^3) (and not guaranteed to yield a good result) is to calculate the overlap of all pairs, selecting the best and merging (union? intersection? average start\/end?) then repeating until there are no more \"interesting\" overlaps.\n\nI consider an overlap interesting if it is larger than some percentage, e.g. 95%.\n\nAre there any algorithms already achieving something similar? Any direction someone can point me in?\n\n\u2022 @D.W. I added (much) more detail on what I am trying to achieve. Hope this helps. \u2013\u00a0mibm Aug 6 at 6:59\n\u2022 @D.W. I cleaned the post, hopefully clarifying it. I think overlap should be between all intervals in group, but I am trying to translate a visual analysis into an algorithm, so this requirement may be too strong. Naively I can measure intersection of multiple ranges using sets of integer values in them and intersect the sets. Unfortunately this would be O(n^2) worst case. 
Since each run should find mostly the same intervals I hope the groups I need to check intersection between would contain around the same number of intervals as runs (up to a factor of 2) \u2013\u00a0mibm Aug 6 at 12:32\n\u2022 Thank you for all the edits! That helps. (I'm curious: does each run output a single interval, or multiple intervals? If multiple, is there some relationship, like that each run outputs about one interval for each group, or that each run outputs a bunch of intervals usually from the same group?) \u2013\u00a0D.W. Aug 6 at 18:25\n\u2022 @D.W. Expected output is multiple intervals, about one per group (0-2 per group should be the common output), however I don't know how many groups are expected ahead of time. Also some (few) runs may add a unique interval or split one that other runs report as one. These intervals I would like to remove as \"spurious\". \u2013\u00a0mibm Aug 9 at 7:06\n\nHere is one interpretation of your problem:\n\nGiven $$n$$ observed intervals $$I_1,\\dots,I_n$$ and $$k$$, find $$k$$ disjoint inferred intervals $$J_1,\\dots,J_k$$ that maximizes the number of observed intervals are covered by at least one of the inferred intervals. Say that $$I_i$$ is covered by $$J_j$$ if they have at least 95% overlap, where the overlap between $$I_i,J_j$$ is measured as $$|I_i \\cap J_j|\/|J_j|$$ where $$|\\cdot|$$ denotes the length of an interval.\n\nThis problem can be solved with dynamic programming. Sort the endpoints of the observed intervals. For each endpoint $$e$$ and each $$k_0$$ with $$0 \\le k_0 \\le k$$, let $$f(e,k_0)$$ denote the maximum number of observed intervals that can be covered by $$k_0$$ disjoint inferred intervals that are all in $$[-\\infty,e]$$. 
Then you can write a recurrence relation for $$f$$: in particular,\n\n$$f(e',k_0) = \\max(f(e^*,k_0), \\max \\{f(e,k_0-1) + \\eta : e\n\nwhere $$e^*$$ is the endpoint immediately before $$e$$, and $$\\eta$$ is the number of observed intervals that are covered by $$[e+1,e']$$.\n\nThat said, I suspect a more pragmatic approach might be to use some standard clustering algorithm, adapted for this problem. For instance, you might use k-means on the centers of the intervals. Given a set of intervals that have been clustered together, you might use the median of their left endpoints and median of their right endpoints to define a new interval that serves as the clusterhead. You can probably come up with other heuristics. It's plausible that this might be adequate in practice.\n\n\u2022 Interesting approach, I'll try it. For overlap I thought of using Jaccard distance (intersection\/union). Using clustering might be a too big hammer for 1d data; another problem is getting a good approximation of the number of seeds (in most clustering algos). 
\u2013\u00a0mibm Aug 11 at 8:24","date":"2020-10-29 16:51:37","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 22, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6701000332832336, \"perplexity\": 608.7473132260674}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107904834.82\/warc\/CC-MAIN-20201029154446-20201029184446-00534.warc.gz\"}"} | null | null |
Current (https://current.org/2011/09/temporary-hosts-rotate-into-need-to-know-anchor-chair/)
Temporary hosts rotate into Need to Know anchor chair
By | September 12, 2011
WNET's Need to Know will have several temporary hosts, including an NPR veteran, reports the New York Times, in the wake of Alison Stewart's departure. Scott Simon, host of Weekend Edition Saturday, will fill the chair this week. Coming soon will be Maria Hinojosa of Now on PBS, Ray Suarez of PBS NewsHour and Jeff Greenfield, a network news vet who also hosted WTTW's national production CEO Exchange on PBS. WNET programmer Stephen Segaller called it an "interim arrangement" to provide the program "some breathing room" as the station ponders its future.
Also, NTK Executive Producer Shelley Lewis is being replaced by Marc Rosenwasser, whose background includes work on ABC and NBC newsmagazines as well as executive producing WNET's Worldfocus, which was canceled just before NTK premiered last year.
One thought on "Temporary hosts rotate into Need to Know anchor chair"
John Proffitt on September 12, 2011 at 5:11 pm said:
Did the Titanic rotate captains on the way to the bottom of the Atlantic? I'm asking for a friend…
Monster Hunter Rise: Sunbreak Expansion Getting More News Spring 2022
December 27, 2021 admin Gaming 0 Comments
Capcom announces via Twitter that more information on the upcoming Monster Hunter Rise Sunbreak expansion will be revealed in Spring 2022.
Nintendo's fall 2021 Nintendo Direct presentation in September was packed full of major headlines for many of the gaming juggernaut's most popular franchises. The broadcast was highlighted by segments like Bayonetta 3's fantastic gameplay trailer and major reveals like the upcoming Kirby and the Forgotten Land and the cast of the Illumination Super Mario Bros. movie. Monster Hunter fans also got something to celebrate with Monster Hunter Rise confirmed to be receiving its first major expansion in Monster Hunter Rise: Sunbreak.
Monster Hunter Rise, the first main-series title to release on Switch, was a phenomenal success for Capcom following its release in March of this year. The game had already surpassed 7.5 million units sold in just over six months while still being a Switch exclusive. The announcement of the Sunbreak expansion was met with plenty of excitement from fans, with Sunbreak being compared to Monster Hunter World's massive Iceborne expansion in scale. With Sunbreak still several months out, Capcom recently teased when fans will get their next update on the new content.
A post from the official Monster Hunter Twitter account Monday morning teased fans with the promise of new information on the highly-anticipated Sunbreak expansion. Without giving many details on what the news would pertain to, the post announced that more info on Sunbreak would be coming early next year, with the ambiguous date of Spring 2022 given for the next update.
The Japanese Monster Hunter Twitter also revealed there would be new Silkbind attacks coming in Sunbreak, though no video was shown of the upcoming skills. The new Silkbind attacks are just a few of a huge library of content already revealed to be coming in Monster Hunter Rise: Sunbreak. The new "Master Rank" quests will add new monsters and more powerful versions of iconic monsters already in Rise, like the game's cover monster, Magnamalo. Sunbreak will also feature a new storyline and new areas for players to explore and hunt, as well as plenty of new gameplay elements Capcom has teased but has yet to reveal to its fans.
2021 has been a banner year for the Monster Hunter franchise, with both Rise and Monster Hunter Stories 2 releasing through the year to critical and commercial success. Rise received nominations for Best Role-Playing Game and Best Multiplayer Game at the 2021 Game Awards, though it would ultimately lose to Tales of Arise and It Takes Two, respectively. While Rise may not have taken home any awards, the game's popularity among franchise fans should bode well for both the PC version releasing early next year and the eventual success of Sunbreak.
Monster Hunter Rise is available now for Nintendo Switch, with a PC version scheduled to release on January 12, 2022. The Sunbreak expansion is currently in development, planned for Summer 2022.
\section{Introduction}
\label{s_intro}
Globular clusters were once thought to be simple structures made of stars formed at the same time with the same initial chemical composition. This picture has been deeply revised since various sub-groups of stars have been discovered in the vast majority of them. These sub-groups, also known as multiple populations, are detected in both spectroscopy and photometry. Determinations of surface chemical abundances indicate that some stars are enriched in nitrogen, sodium, and aluminum, while being at the same time depleted in carbon, oxygen, and magnesium \citep[e.g.,][]{sneden92,kraft97,car10}. A wide range of enrichment or depletion is usually observed, leading to so-called anticorrelations between nitrogen and carbon, sodium and oxygen, and aluminum and magnesium \citep[][]{yong06,car06,gratton07,car09,marino11,car15}.
Additionally, color-magnitude diagrams (CMDs) of essentially all globular clusters reveal multiple sequences (or at least spreads) in one or several branches (main sequence, MS; turn-off, TO; red giant branch, RGB; asymptotic giant branch, AGB; and horizontal branch, HB). The Hubble Space Telescope pioneered the identification of such sequences \citep[e.g.,][]{bedin04,piotto07,milone10,milone12,piotto15,soto17}, but they are now observed with any high spatial resolution photometric facility \citep{han09,gruy17}.
The origin of the multiple populations observed in globular clusters remains unknown. The chemical abundance patterns all point to nucleosynthesis through the CNO cycle, Ne--Na and Mg--Al chains at high temperature \citep[75 MK,][]{pr07,pr17}. These conditions are encountered in the core of MS massive, very massive and super-massive stars or in the envelope of some AGB stars. This has led to a generation of scenarios invoking a first generation of stars formed from pristine gas. Out of this first generation, some stars \citep[massive or AGB stars;][]{ventura01,dec07,dh14,gieles18} ejected processed material that was subsequently mixed with gas to form a second generation of stars. Depending on the degree of mixing, the stars of the second generation show the observed chemical anticorrelations. The different scenarios proposed to explain the presence of multiple populations partly rely on nucleosynthesis through the CNO cycle and the Ne-Na and Mg-Al chains. As such, they also predict some degree of helium enrichment, which should be observed in stars that formed out of the ejecta of the first-generation stars. When AGB stars are the main polluters, a maximum helium mass fraction of 0.38 is expected \citep{ventura13}, while for scenarios involving massive stars, higher values are not forbidden \citep{chantereau16} and can be limited to 0.4 in the case of super-massive stars if stellar winds are efficient enough \citep{dh14}. However, spectroscopic determinations of the helium content in globular clusters are almost impossible owing to the absence of spectroscopic features in most stars, except for hot HB objects \citep{marino14}. For the latter, complications due to atomic diffusion render abundance determinations uncertain.
Hence, determinations of the helium content of globular clusters stars have mostly been made based on an indirect method: the comparison of theoretical isochrones built with different Y (i.e., helium mass fraction) to observed CMDs. A larger helium content decreases the envelope opacity and increases the mean molecular weight, two effects that combine to make helium-rich isochrones bluer \citep[e.g.,][]{chantereau16}. The method requires the transformation of theoretical Hertzsprung-Russell diagrams into CMDs. This can be done either by direct calculations of synthetic spectra along isochrones, or by use of bolometric corrections \citep[][]{milone13,milone17}.
Most determinations of Y performed so far rely on the color differences between multiple populations: the observed differences in colors between two populations are compared to the color differences between isochrones with different Y. In that sense, these determinations provide an estimate of the \textit{\textup{relative}} helium content between multiple populations. Assuming a value of Y for the less chemically processed population (the first generation
or population), this provides an absolute value for Y for each population. Such a differential analysis usually does not take into account any dispersion in theoretical isochrones: they are plotted as single lines in CMDs. A more physical approach would be to introduce a distribution of colors around the average value of the theoretical isochrone and to take this dispersion into account when performing comparisons to observed populations in CMDs. This would not affect the determination of the Y difference when Y is significantly different between two populations. However, this may be important for small Y differences, when the overlap between two theoretical isochrones due to dispersion is non-negligible.
Another method for constraining the Y content would be to directly compare the position of theoretical isochrones to observed CMDs. This direct approach is more complex than the differential one since it involves uncertainties in the modeling of stellar evolution and atmosphere models, uncertainties that mostly cancel out in a differential approach. However, direct comparisons of isochrones to CMDs do not require any assumption on the chemical composition of the first population.
Directly comparing theoretical isochrones to observed CMDs is also important to constrain the age of globular clusters. Here again, a dispersion around theoretical isochrones must be taken into account to correctly estimate uncertainties on ages. Finally, direct comparisons are useful for testing the physics of evolutionary models and atmosphere models.
In this paper, we present an investigation of the dispersion around theoretical isochrones. Our final goal is to produce theoretical CMDs that can be directly compared to observed CMDs. We plan to produce such theoretical CMDs by drawing artificial stars with parameters centered around those of theoretical isochrones and with a distribution characterized by the uncertainties determined in this work. This should provide an independent view of the properties of globular clusters.
In Sect.\ \ref{s_method} we describe our method and the standard stars we selected. Sect.\ \ref{s_res} describes our results,
which are summarized in Sect.\ \ref{s_conc}.
\section{Method}
\label{s_method}
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{mag_pollux_effect_col.eps}
\caption{Difference between colors of Pollux models with one parameter varied compared to the color of the reference model (red filled triangle). The parameters of the reference model are given in the upper panel, together with the values of the parameters that were varied. Colors based on Johnson photometry (HST WFC3 and ACS) are shown in the upper (lower) panel. C$_{X}$=(275-336)-(336-X), where numbers refer to magnitudes in a given filter (i.e., 275 is the magnitude in the F275W filter), and $X$ is either the F410W or the F438W filter. Gray vertical bars indicate the typical separation between RGB populations in the cluster NGC~6752, according to \citet{milone13}.}
\label{mag_pollux}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{mag_procyon_effect_col.eps}
\caption{Same as bottom panels of Fig.\ \ref{mag_pollux} for Procyon. The gray solid vertical lines in the bottom panel indicate the width of the turn-off of NGC~6752 in the corresponding color, from the data of \citet{milone13}.}
\label{mag_procyon}
\end{figure}
To estimate the dispersion around a theoretical isochrone in CMDs, we need to constrain the color variations that are due to changes in fundamental parameters and surface abundances. We assume that such variations exist in a population that is
theoretically represented by a single isochrone.
\citet{sbo11} have studied the effects of variations of various surface abundances on CMDs. They reported that C, N, and O significantly affect the shape of spectra below $\sim$ 4500 \AA. Conversely, helium has little effect on synthetic spectra at a given effective temperature, but affects the internal structure (see above) and thus \teff. As a consequence, the helium content also affects the shape of theoretical isochrones in CMDs through its effect on effective temperature. This was confirmed by \citet{cas17}, who have quantified the displacement of isochrones that is due to Y changes in synthetic CMDs built from the HST filters F606W and F814W. We thus consider C, N, O, and He as the main sources of color variations that are due to changes in surface abundances. We also take into account color variations that are due to fundamental parameters: effective temperature, surface gravity, and microturbulence. For these parameters, we assume that the dispersion in fundamental parameters is similar to the uncertainties of spectroscopic determination of such parameters. An alternative would be to estimate the dispersion between isochrones produced by different groups and stellar evolution codes. We prefer using spectroscopic determinations as the source of uncertainties since they do not depend on the degree of refinement of the physics included in evolutionary models.
We also provide an estimate of some sources of systematic uncertainties on synthetic photometry: the effect of calibration, extinction, and airmass (for ground-based observations).
\subsection{Selection of stars and stellar parameters}
\label{s_targets}
For our purpose, we first focused on RGB stars since these objects are bright and thus more easily observed in globular clusters. Spectroscopic data are available for abundance determinations. In addition, we concentrated on stars at the bottom of the RGB to avoid additional complications due to stellar evolution in more advanced phases (dredge-up and deep mixing). From these criteria, and considering only bright targets with robust photometry, we selected the K0~III star Pollux ($\beta$~Gem, HD~62509, HR~2990) as representative of this class of objects. Its photometry is stable over time \citep{gray14}, and it is usually considered an RGB star with low luminosity. \citet{auriere15} detected a weak magnetic field of 0.5 G at its surface.
To model the spectral energy distribution, we adopted the effective temperature and surface gravity of \citet{heiter15}. We chose a value of microturbulent velocity $\xi_{t}$ of 1.22 \kms\ from \citet{luck15}. The surface abundances were taken from \citet{luck15} and \citet{jofre15a}. A projected rotational velocity (\vsini) of 2.8 \kms\ was adopted from \citet{auriere15}. \citet{gray14} provides references for the different values of the stellar parameters encountered in the literature, and we refer to this work for further information. We extract from this work the typical uncertainties: 50 to 100 K for \teff\ with modern values closer to 50 K, 0.3 dex on \logg, and 0.3 \kms\ on $\xi_{t}$. Uncertainties on surface abundances depend on the element and are listed in \citet{luck15} and \citet{jofre15a}. They are on the order of 0.10-0.15 dex in units of 12+$\log(\frac{X}{H})$. Consequently, we adopt the following errors: 0.15 dex for carbon, nitrogen, and oxygen \citep[see also][]{adam13}, and 0.10 dex for iron.
In addition to Pollux, we also considered Procyon ($\alpha$~CMi, HD~61421, HR~2943), an F5V-IV star with parameters typical of TO stars in globular clusters. Multiple populations in globular clusters are less easily detected at the TO, but they probably contribute to a widening of this region of the CMD since multiple populations are observed both on the MS and in evolved phases (RGB and AGB). A good knowledge of the uncertainties affecting synthetic photometry is crucial for quantitative determinations of stellar ages.
The effective temperature and surface gravity of Procyon were adopted from \citet{heiter15}, the surface abundances from \citet{jofre15a,jofre15b}. The projected rotational velocity (2.8 \kms) and microturbulent velocity (1.66 \kms) were taken from \citet{jofre15a}.
The adopted stellar parameters for Pollux and Procyon are given in the first line (below the star name) in Table \ref{param_colors}. The corresponding models are referred to as the ``reference models'' in the remainder of the paper.
Pollux and Procyon both have roughly solar metallicities, while stars in most globular clusters have [$\frac{Fe}{H}$] between 0.0 and -2.5 \citep{car09fe}. As stated above, Pollux and Procyon are nearby and relatively standard stars with well-determined stellar parameters and surface abundances. Finding such stars with [$\frac{Fe}{H}$]$\sim$-2.0 is difficult, since they are fainter and thus do not have spectroscopic parameters as well determined as those of nearby objects. However, from the point of view of the determination of stellar parameters from spectroscopy, the only difference between solar metallicity and metal-poor stars is stronger non-local thermodynamic equilibrium (non-LTE) effects in the latter case \citep[e.g.,][]{lind12}. This adds a systematic uncertainty on stellar parameters and surface abundances, with a magnitude that increases at lower [$\frac{Fe}{H}$] \citep{merle11,ruchti13}. The statistical uncertainties (due to statistical uncertainties on \teff, \logg,\ and surface abundances) remain the same, however. Hence, this study strictly speaking applies to the most metal-rich globular clusters. At lower [$\frac{Fe}{H}$], systematic trends on colors are to be expected, in addition to the effects discussed in Sect.\ \ref{s_Vega}.
\subsection{Atmosphere models and synthetic photometry}
We have used the atmosphere code ATLAS12 \citep{kur14} and the spectral synthesis code SYNTHE \citep{kur05} to compute the spectral energy distribution (SED). Photometry in various filters was subsequently calculated from the SED. To do this, we retrieved the Johnson $UBVRI$ filter throughputs from the General Catalogue of Photometric Data \footnote{\url{http://obswww.unige.ch/gcpd/gcpd.html}} \citep[GCPD,][]{merm97}. We also used the Spanish Virtual Observatory\footnote{\url{http://svo2.cab.inta-csic.es/svo/theory/fps3/}} to retrieve the HST/WFC3/UVIS2 filters F275W, F336W, F410M, F438W, and F555W and the HST/ACS WFC filters F606W and F814W for a temperature of -81$^{\circ}$C. For each filter, we convolved the synthetic SED with the filter throughput and calculated the corresponding flux, which was subsequently divided by the zero-point flux to give the synthetic magnitude.
To ensure consistency in our photometry, we recalculated the zero-point fluxes for all filters in the VEGAMAG system. For this purpose, we retrieved the reference spectrum of Vega used in HST calibrations from \url{ftp://ftp.stsci.edu/cdbs/current_calspec/}. We used the spectrum ``alpha\_lyr\_stis\_008.fits'' (see also Sect.\ \ref{s_Vega}).
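The magnitude calculation described above can be sketched as follows. This is a minimal illustration, not the actual ATLAS12/SYNTHE pipeline; the photon-weighted passband average and all names are assumptions of this sketch, and the spectra in the usage example are placeholders:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences for trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synthetic_mag(wave, flux, filt_wave, filt_thru, vega_wave, vega_flux):
    """VEGAMAG magnitude of an SED in one filter.

    The SED is multiplied by the filter throughput and integrated; the
    result is referred to the same integral of a Vega reference spectrum,
    so Vega has magnitude 0 in every filter by construction.
    """
    def band_flux(w, f):
        T = np.interp(w, filt_wave, filt_thru, left=0.0, right=0.0)
        # photon-weighted mean flux through the passband (one common convention)
        return _trapz(f * T * w, w) / _trapz(T * w, w)

    return -2.5 * np.log10(band_flux(wave, flux) / band_flux(vega_wave, vega_flux))
```

Scaling the input SED by a factor of 10 brightens the synthetic magnitude by 2.5 mag, as expected, and feeding the Vega spectrum itself returns 0 in any filter.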
\section{Results}
\label{s_res}
\subsection{Estimates of uncertainties on synthetic photometry}
\label{s_unc}
\subsubsection{Effect of stellar parameters}
\label{s_effparam}
We first studied the effect of variations in stellar parameters on the resulting photometry. We selectively varied the effective temperature, the surface gravity, the microturbulent velocity, and the abundances of carbon, nitrogen, oxygen, and iron. We focused on carbon, nitrogen, and oxygen since they show a wide range of values in globular clusters and affect the SEDs of globular cluster stars most \citep[e.g.,][]{sbo11}. For each parameter, we selected two values bracketing the reference value listed in Sect.\ \ref{s_targets}. These new values correspond to the reference value plus or minus the uncertainty. For instance, the reference value of \teff\ for Pollux is 4858 K, with an uncertainty of about 50 K. We thus ran two models with \teff\ = 4800 and 4900 K, respectively. The results are gathered in Table \ref{param_colors}.
Fig.\ \ref{mag_pollux} shows a graphical representation of some results for Pollux. In the upper panel, the dispersion in color difference is largest in the blue ($U-B$ color), where the effects of effective temperature and microturbulence are the strongest. A difference of 0.04 mag is not unexpected. Table \ref{sig_colors} gathers the dispersion in colors shown in Fig.\ \ref{mag_pollux} and \ref{mag_procyon}. The dispersion is the standard deviation of the 15 models computed for each star. For Pollux, it is 0.022 in ($U-B$). The red part of the spectrum ($R-I$ color) is less sensitive to parameter variations with color differences not larger than 0.01 magnitudes (dispersion 0.005). For the $B$ and $V$ filters, color variations are intermediate, with differences reaching 0.02 magnitude and a dispersion of 0.011 ($B-V$).
The lower panel of Fig.\ \ref{mag_pollux} shows the effects of stellar parameter variations on colors based on HST photometry for Pollux. As above, the changes are greatest in the blue part of the spectrum. For the selected filters, the color differences can reach 0.05 magnitudes. Colors involving the filters F275W, F336W, F410M, and F438W are the most affected by variations in stellar parameters. The dispersion is 0.041 for (275-336)\footnote{The notation (275-336) stands for the magnitude difference between the F275W and F336W filters. Similar notations are used for the other HST filters.} and drops to 0.008 for (606-814). These variations are important in the context of understanding multiple populations in globular clusters since photometry based on two or three of the blue filters is the most efficient in separating multiple populations \citep[][]{milone13,piotto15}. As an illustration, we show in Fig.\ \ref{mag_pollux} some color separations between multiple populations in the globular cluster NGC~6752, which is one of the best-studied clusters \citep{yong05,car05,car07,yong08,villa09,milone10,car12,charb13,car13,krav14,yong15,dotter15,nardiello15,lapenna16,muc17}. The range of parameters we explored leads to a range of colors similar to the typical color difference between populations ``a'' and ``b'' in NGC~6752 according to \citet{milone13} (see their Fig.~12). However, the dispersion formally remains below the color difference between two populations (for the case of NGC~6752 taken as reference here). For instance, the dispersion in the C$_{410}$ index is 0.031, while the difference between the two main populations is on the order of 0.140 mag.
In the particular case studied here, if a theoretical CMD is built by drawing artificial stars with parameters centered on the isochrones that best fit the two populations a and b, and if a dispersion around these theoretical isochrones is included, most artificial stars belong to two groups that are well separated in color (dispersion of 0.031 mag versus an observed separation of 0.140 mag), although some artificial stars from the bluest population may be located at the position of the redder population (the total range of colors being as wide as the separation between populations a and b).
If the separation between populations a and b were instead on the order of the theoretical dispersion (0.031 mag), it would be difficult to infer the difference in properties of the two populations from the theoretical isochrones because the two artificial populations would overlap significantly; this problem does not exist when no dispersion around isochrones is considered.
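For reference, the dispersions quoted here and in Table \ref{sig_colors} are sample standard deviations over the 15 models of Table \ref{param_colors}. A minimal check for the C$_{410}$ index of Pollux, with the magnitudes copied from Table \ref{param_colors}:

```python
import numpy as np

# F275W, F336W, F410M magnitudes of the 15 Pollux models (Table param_colors)
m275 = [4.340, 4.573, 4.181, 4.415, 4.283, 4.407, 4.267, 4.299,
        4.370, 4.324, 4.354, 4.409, 4.281, 4.397, 4.287]
m336 = [2.314, 2.456, 2.215, 2.353, 2.283, 2.375, 2.247, 2.294,
        2.328, 2.312, 2.314, 2.332, 2.296, 2.323, 2.306]
m410 = [2.229, 2.352, 2.143, 2.241, 2.220, 2.261, 2.193, 2.235,
        2.222, 2.232, 2.226, 2.225, 2.231, 2.232, 2.226]

# C_410 = (275-336) - (336-410) = m275 - 2*m336 + m410
c410 = np.array(m275) - 2.0 * np.array(m336) + np.array(m410)
sigma = np.std(c410, ddof=1)   # sample standard deviation: ~0.031 mag
```

This reproduces the 0.031 mag dispersion quoted for C$_{410}$.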
\begin{sidewaystable*}
\begin{center}
\caption{Absolute magnitudes for the Pollux and Procyon models and effect of stellar parameters.}
\label{param_colors}
\begin{tabular}{lcccccccccccccccccc}
\hline
\teff & \logg & \vturb & C/H & N/H & O/H & Fe/H & $U$ & $B$ & $V$ & $R$ & $I$ & F275W & F336W & F410M & F438W & F555W & F606W & F814W \\
$[K]$ & & [\kms] & [$\times 10^4$] & [$\times 10^4$] & [$\times 10^4$] & [$\times 10^5$] & & & & & & & & & & & & \\
\hline
& & & & & & & & Pollux & \\
\hline
4858 & 2.90 & 1.22 & 1.2 & 3.4 & 4.3 & 3.8 & 2.308 & 1.817 & 0.910 & 0.319 & -0.139 & 4.340 & 2.314 & 2.229 & 1.934 & 1.063 & 0.708 & -0.034 \\
4800 & 2.90 & 1.22 & 1.2 & 3.4 & 4.3 & 3.8 & 2.451 & 1.919 & 0.988 & 0.383 & -0.087 & 4.573 & 2.456 & 2.352 & 2.041 & 1.144 & 0.780 & 0.021 \\
4900 & 2.90 & 1.22 & 1.2 & 3.4 & 4.3 & 3.8 & 2.209 & 1.745 & 0.855 & 0.274 & -0.175 & 4.181 & 2.215 & 2.143 & 1.859 & 1.006 & 0.656 & -0.072 \\
4858 & 2.70 & 1.22 & 1.2 & 3.4 & 4.3 & 3.8 & 2.335 & 1.823 & 0.909 & 0.318 & -0.140 & 4.415 & 2.353 & 2.241 & 1.942 & 1.063 & 0.706 & -0.035 \\
4858 & 3.10 & 1.22 & 1.2 & 3.4 & 4.3 & 3.8 & 2.289 & 1.812 & 0.911 & 0.320 & -0.138 & 4.283 & 2.283 & 2.220 & 1.928 & 1.063 & 0.709 & -0.033 \\
4858 & 2.90 & 1.50 & 1.2 & 3.4 & 4.3 & 3.8 & 2.363 & 1.837 & 0.917 & 0.316 & -0.141 & 4.407 & 2.375 & 2.261 & 1.956 & 1.071 & 0.710 & -0.036 \\
4858 & 2.90 & 0.90 & 1.2 & 3.4 & 4.3 & 3.8 & 2.248 & 1.798 & 0.912 & 0.322 & -0.136 & 4.267 & 2.247 & 2.193 & 1.910 & 1.062 & 0.710 & -0.030 \\
4858 & 2.90 & 1.22 & 1.6 & 3.4 & 4.3 & 3.8 & 2.292 & 1.813 & 0.901 & 0.313 & -0.139 & 4.299 & 2.294 & 2.235 & 1.931 & 1.053 & 0.699 & -0.034 \\
4858 & 2.90 & 1.22 & 0.8 & 3.4 & 4.3 & 3.8 & 2.318 & 1.823 & 0.926 & 0.325 & -0.138 & 4.370 & 2.328 & 2.222 & 1.935 & 1.078 & 0.719 & -0.033 \\
4858 & 2.90 & 1.22 & 1.2 & 4.6 & 4.3 & 3.8 & 2.305 & 1.812 & 0.907 & 0.317 & -0.137 & 4.324 & 2.312 & 2.232 & 1.928 & 1.059 & 0.704 & -0.033 \\
4858 & 2.90 & 1.22 & 1.2 & 2.2 & 4.3 & 3.8 & 2.310 & 1.825 & 0.922 & 0.322 & -0.139 & 4.354 & 2.314 & 2.226 & 1.940 & 1.074 & 0.715 & -0.034 \\
4858 & 2.90 & 1.22 & 1.2 & 3.4 & 6.1 & 3.8 & 2.317 & 1.821 & 0.920 & 0.321 & -0.139 & 4.409 & 2.332 & 2.225 & 1.935 & 1.072 & 0.714 & -0.034 \\
4858 & 2.90 & 1.22 & 1.2 & 3.4 & 3.1 & 3.8 & 2.298 & 1.817 & 0.911 & 0.318 & -0.138 & 4.281 & 2.296 & 2.231 & 1.933 & 1.063 & 0.707 & -0.033 \\
4858 & 2.90 & 1.22 & 1.2 & 3.4 & 4.3 & 4.8 & 2.327 & 1.821 & 0.915 & 0.317 & -0.141 & 4.397 & 2.323 & 2.232 & 1.936 & 1.066 & 0.708 & -0.036 \\
4858 & 2.90 & 1.22 & 1.2 & 3.4 & 4.3 & 3.0 & 2.290 & 1.818 & 0.915 & 0.321 & -0.136 & 4.287 & 2.306 & 2.226 & 1.933 & 1.067 & 0.711 & -0.031 \\
\hline
& & & & & & & & Procyon & \\
\hline
6554 & 4.00 & 1.66 & 2.5 & 0.6 & 4.7 & 3.2 & 2.935 & 3.017 & 2.605 & 2.302 & 2.079 & 3.795 & 2.897 & 3.162 & 3.060 & 2.686 & 2.507 & 2.121 \\
6600 & 4.00 & 1.66 & 2.5 & 0.6 & 4.7 & 3.2 & 2.894 & 2.975 & 2.572 & 2.276 & 2.058 & 3.734 & 2.858 & 3.117 & 3.016 & 2.651 & 2.476 & 2.099 \\
6500 & 4.00 & 1.66 & 2.5 & 0.6 & 4.7 & 3.2 & 2.984 & 3.068 & 2.644 & 2.333 & 2.104 & 3.867 & 2.946 & 3.215 & 3.111 & 2.726 & 2.543 & 2.147 \\
6554 & 3.80 & 1.66 & 2.5 & 0.6 & 4.7 & 3.2 & 2.960 & 3.007 & 2.599 & 2.299 & 2.078 & 3.842 & 2.937 & 3.151 & 3.049 & 2.679 & 2.502 & 2.120 \\
6554 & 4.20 & 1.66 & 2.5 & 0.6 & 4.7 & 3.2 & 2.911 & 3.027 & 2.610 & 2.304 & 2.080 & 3.753 & 2.860 & 3.172 & 3.070 & 2.691 & 2.511 & 2.122 \\
6554 & 4.00 & 2.50 & 2.5 & 0.6 & 4.7 & 3.2 & 2.983 & 3.023 & 2.599 & 2.292 & 2.071 & 3.893 & 2.954 & 3.176 & 3.067 & 2.680 & 2.499 & 2.113 \\
6554 & 4.00 & 0.00 & 2.5 & 0.6 & 4.7 & 3.2 & 2.966 & 3.045 & 2.625 & 2.318 & 2.092 & 3.844 & 2.930 & 3.192 & 3.088 & 2.707 & 2.526 & 2.134 \\
6554 & 4.00 & 1.66 & 3.4 & 0.6 & 4.7 & 3.2 & 2.933 & 3.017 & 2.603 & 2.300 & 2.078 & 3.790 & 2.896 & 3.160 & 3.060 & 2.684 & 2.505 & 2.120 \\
6554 & 4.00 & 1.66 & 1.6 & 0.6 & 4.7 & 3.2 & 2.936 & 3.017 & 2.606 & 2.303 & 2.080 & 3.799 & 2.899 & 3.163 & 3.059 & 2.687 & 2.508 & 2.122 \\
6554 & 4.00 & 1.66 & 2.5 & 0.8 & 4.7 & 3.2 & 2.936 & 3.017 & 2.605 & 2.301 & 2.079 & 3.794 & 2.900 & 3.161 & 3.059 & 2.685 & 2.507 & 2.121 \\
6554 & 4.00 & 1.66 & 2.5 & 0.4 & 4.7 & 3.2 & 2.933 & 3.018 & 2.605 & 2.302 & 2.079 & 3.795 & 2.896 & 3.162 & 3.060 & 2.686 & 2.507 & 2.121 \\
6554 & 4.00 & 1.66 & 2.5 & 0.6 & 6.6 & 3.2 & 2.935 & 3.017 & 2.604 & 2.301 & 2.079 & 3.796 & 2.899 & 3.161 & 3.059 & 2.685 & 2.507 & 2.120 \\
6554 & 4.00 & 1.66 & 2.5 & 0.6 & 3.3 & 3.2 & 2.935 & 3.018 & 2.605 & 2.302 & 2.079 & 3.794 & 2.896 & 3.162 & 3.060 & 2.686 & 2.507 & 2.121 \\
6554 & 4.00 & 1.66 & 2.5 & 0.6 & 4.7 & 2.6 & 2.929 & 3.019 & 2.609 & 2.306 & 2.083 & 3.768 & 2.893 & 3.160 & 3.061 & 2.690 & 2.511 & 2.125 \\
6554 & 4.00 & 1.66 & 2.5 & 0.6 & 4.7 & 4.1 & 2.939 & 3.014 & 2.600 & 2.296 & 2.075 & 3.820 & 2.900 & 3.161 & 3.056 & 2.680 & 2.501 & 2.116 \\
\hline
\end{tabular}
\tablefoot{A stellar radius of 9.30 R$_{\odot}$ was assumed for Pollux, according to \citet{auriere15}. For Procyon, a radius of 2.05 R$_{\odot}$ was calculated from the effective temperature and luminosity of \citet{heiter15}.}
\end{center}
\end{sidewaystable*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.47\textwidth]{comp_vega_ref_spectra.eps}
\includegraphics[width=0.47\textwidth]{mag_pollux_effect_VegaRef.eps}
\caption{\textit{Left}: Comparison between two Vega reference spectra. \textit{Right}: Difference in the magnitudes of Pollux caused by photometric calibrations based on the two Vega reference spectra. The gray vertical lines have the same meaning as in Fig.\ \ref{mag_pollux}.}
\label{mag_effectVega}
\end{figure*}
\begin{table}
\begin{center}
\caption{Dispersion in colors shown in Fig.\ \ref{mag_pollux} and \ref{mag_procyon}.}
\label{sig_colors}
\begin{tabular}{lcc}
\hline
Color & Pollux & Procyon\\
\hline
($U-B$) & 0.022 & 0.015 \\
($B-V$) & 0.011 & 0.006 \\
($V-R$) & 0.006 & 0.002 \\
($R-I$) & 0.005 & 0.005 \\
(275-336) & 0.041 & 0.014 \\
(336-410) & 0.020 & 0.020 \\
(336-438) & 0.024 & 0.020 \\
C$_{410}$ & 0.031 & 0.021 \\
C$_{438}$ & 0.030 & 0.020 \\
(555-814) & 0.011 & 0.005 \\
(606-814) & 0.008 & 0.004 \\
\hline
\end{tabular}
\end{center}
\end{table}
\smallskip
In Fig.\ \ref{mag_procyon} we gather the color differences for Procyon. In Johnson photometry, the ($U-B$) color is the most affected by parameter variations (differences of up to 0.04 mag and a dispersion of 0.015). The smallest variation is observed in the ($V-R$) color, with a dispersion of 0.002 magnitude. For ($B-V$), the dispersion is 0.006. The color differences are smaller than in the case of Pollux. In the HST filters, colors involving filters located below 4500 \AA\ are the most affected, with color differences of up to 0.05 mag and dispersions reaching 0.02 mag. In these colors, the dispersion is smaller than the width of the TO in the globular cluster NGC~6752, but the range of colors can be of the same size. For colors based on filters covering redder parts of the spectrum, the dispersion drops below the TO width.
\subsubsection{Photometric calibration: effect of the Vega reference spectrum}
\label{s_Vega}
Synthetic photometry requires calibration on a reference spectrum. In the VEGAMAG system, the star Vega is used for this: its magnitude is set to 0.0 in all filters. In practice, this means that a correction factor (the zero point) must be applied to the integral of the stellar flux over the filter passband. Hence the final photometry depends on the choice of the Vega reference spectrum. In Fig.\ \ref{mag_effectVega} we show the difference in Johnson and HST photometry when using two different Vega reference spectra. The two spectra were retrieved from the HST calibration database\footnote{\url{ftp://ftp.stsci.edu/cdbs/current_calspec/}}. The ``Vega reference STScI'' spectrum was used by \citet{bedin05}. The spectrum ``Alf Lyr STIS 008'' is the one currently used in the calibration of HST data. The difference between them is as follows: the ``Vega reference STScI'' spectrum uses the \citet{hayes85} Vega spectrum in the optical up to 1.05 $\mu$m and an ATLAS12 model (binned to a 25~\AA\ resolution) beyond that limit, while the ``Alf Lyr STIS 008'' spectrum uses the STIS spectrum from 1675 to 5350 \AA\ and an ATLAS12 model with \teff\ = 9400 K outside that range. The two spectra are compared in the left panel of Fig.~\ref{mag_effectVega}. Differences are present especially near the Balmer jump.
The right panel of Fig.\ \ref{mag_effectVega} illustrates the effect of changing the Vega reference spectrum on the photometry of Pollux. The differences are large, reaching 0.07 magnitudes in the C$_{410}$ color index. All colors are affected. It is therefore mandatory to treat the zero points consistently to compare observed to synthetic colors.
\subsubsection{Effect of extinction}
\label{s_effext}
Extinction affects the SED of stars differentially, being stronger at shorter wavelength. Extinction is characterized by two main quantities: the ratio of extinction at wavelength $\lambda$ compared to that at a reference wavelength (usually in the $V$ or $K$ band), this is the extinction law; and the total extinction at the reference wavelength. To quantify the effect of extinction on synthetic photometry, we have used two sets of extinction laws. The first is a combination of the extinction law of \citet{seaton79} in the ultraviolet and of \citet{howarth83} in the optical. The second is the extinction law of \citet{ccm89}.
We have parameterized the total extinction by A$_V = R_V \times\ E(B-V),$ where $R_V$ is the ratio of total to selective extinction, which we held fixed to 3.2, and $E(B-V) = (B-V) - (B-V)_0$ with $(B-V)_0$ the intrinsic color.
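With the adopted value $R_V = 3.2$, the test variation of 0.02 in $E(B-V)$ considered below thus corresponds to
\begin{equation*}
\Delta A_V = 3.2 \times 0.02 \simeq 0.06 \ \mathrm{mag}.
\end{equation*}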
Fig.\ \ref{mag_ext} shows the effect of extinction on synthetic colors. A variation of 0.02 in E($B-V$) translates into variations between 0.015 and 0.10 in colors depending on the filters used. The changes are largest for colors based on filters that are
more separated in wavelength. For a given observed ($B-V$), a variation of 0.02 in E($B-V$) corresponds to an uncertainty of 0.02 in intrinsic ($B-V$)$_0$. For comparison, a K0~III star (spectral type of Pollux) has ($B-V$)$_0$=0.81, while a K1~III star has ($B-V$)$_0$=0.86 \citep{lang93}, or a difference in intrinsic color of 0.05. Hence our test corresponds to an error smaller than one spectral sub-type in spectral classification. The choice of extinction law also affects the resulting colors, the difference between our two laws being $<$ 0.01.
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{mag_pollux_effect_Ext.eps}
\caption{Effect of extinction on colors for Pollux. Red triangles, blue squares, and green hexagons correspond to the extinction laws of \citet{seaton79} in the ultraviolet and of \citet{howarth83} in the optical. The orange empty triangles refer to calculations made with the extinction law of \citet{ccm89}. $\Delta_{color}$ is the color difference relative to the models shown by red triangles.}
\label{mag_ext}
\end{figure}
\subsubsection{Effect of atmospheric correction}
\label{s_airmass}
For ground-based observations a correction for the absorption in the Earth's atmosphere has to be performed. The absorption is stronger at shorter wavelength and increases with airmass. In our calculations, we adopted the correction coefficient for the ESO/La Silla observatory provided by \citet{burki95}. Fig.\ \ref{mag_airmass} shows the effect of airmass on colors based on $UBVRI$ photometry. As expected, colors are bluer when the airmass increases from 1.0 to 1.1. The difference remains below 0.01 magnitude when the $U$ filter is not used. For ($U-B$), the airmass increase leads to a color 0.035 magnitudes bluer. The stars and crosses correspond to a case where photometry was acquired in two different airmass conditions for the filters used in a given color. In this configuration, color differences can reach almost 0.06 magnitude in ($U-B$). They remain below 0.02 magnitude for the other colors.
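For reference, the atmospheric correction has the standard Bouguer form (our notation): the magnitude outside the atmosphere is recovered as
\begin{equation*}
m_0(\lambda) = m_{\mathrm{obs}}(\lambda) - k(\lambda)\, X ,
\end{equation*}
where $k(\lambda)$ is the site-dependent extinction coefficient (here taken from \citet{burki95}) and $X$ the airmass; since $k(\lambda)$ increases toward the blue, a change in airmass affects $U$-based colors most.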
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{mag_pollux_col_airmass.eps}
\caption{Effect of atmospheric correction on colors based on $UBVRI$ photometry. $M^1$ and $M^2$ are the first and second magnitude used to build a given color (e.g., $M^1$=$U$ and $M^2$=$B$ for color ($U-B$)). The subscripts (1.0 and 1.1) correspond to the airmass adopted for the computation of the atmospheric corrections. $\Delta_{color}$ is the color difference relative to the models shown by red triangles.}
\label{mag_airmass}
\end{figure}
\subsection{SED fit}
\label{s_sed}
So far, we have assumed that theoretical spectra perfectly reproduce the SED of Pollux and Procyon. In this section we investigate to what degree this is correct.
\begin{figure}[]
\centering
\includegraphics[width=9cm]{mag_pollux.eps}
\caption{Comparison of observed magnitudes and colors of Pollux (blue circles) with those predicted by the reference model (red triangles). An airmass of 1.0, a radius of 9.3 R$_{\odot}$ and a distance of 10.36 pc are assumed. Solid error bars take into account only uncertainties due to stellar parameters (Table\ \ref{sig_colors}). Gray error bars take into account an additional contribution due to extinction and airmass. The former is set to half the difference in colors between models with E($B-V$)=0.00 and E($B-V$)=0.02, the latter to half the difference between corrections for an airmass of 1.0 and 1.1. In the upper right panel, the black squares show the difference between the observed and predicted magnitudes for the five Johnson filters.}
\label{mag_pol}
\end{figure}
Fig.\ \ref{mag_pol} shows the ground-based $UBVRI$ photometry of Pollux according to \citet{ducati02}. It is identical to that of the GCPD database from which we have retrieved the filters throughputs. We added the magnitudes computed from our reference model, together with error bars adopted from Sect.\ \ref{s_unc}. Our model reproduces the $UBV$ photometry very well, but faces problems with the $R$ and $I$ filters. From the top and bottom right panels, it appears that the model lacks flux in both bands, which translates into a too blue $V-I$ color and a too red $R-I$ color. The problems are most severe in $V-I,$ where the mismatch between model and observations reaches 0.10 mag.
Fig.\ \ref{fit_sed_pollux} shows the comparison of the reference model and two spectra observed from the ground: the medium-resolution spectrum of \citet{valdes04} (left panel), and the low-resolution spectrum of \citet{alek97}. The agreement between the model and the observed spectrum is good. Differences between magnitudes calculated from the spectra presented in Fig.\ \ref{fit_sed_pollux} are shown in Table \ref{col_diff}. The $V$ and $R$ magnitudes are very similar between the synthetic and the observed spectra (within 0.04 magnitudes), regardless of the observed spectrum. In the $B$ band, differences vary from 0.02 to 0.13 magnitude depending on the observed spectrum. This presumably shows that flux calibration is critical since the two observed spectra do not show the same flux level in the blue, while Pollux is supposed to have a stable flux level (see Sect.\ \ref{s_targets}). The $U$ and $I$ bands are almost fully probed only by the spectrum of \citet{alek97}. We calculated the corresponding magnitudes on the wavelength range covered by this spectrum (i.e., we cut the synthetic spectrum below and above the limits of the observed spectrum). The $I$ band is very well reproduced by our model, while a difference of 0.50 magnitude appears in the $U$ band. These results indicate that the ($V-I$) color of our model reproduces, to within less than 0.02 magnitude, the ($V-I$) color obtained from the spectrum of \citet{alek97}, while there is a mismatch in ($V-I$) in Fig.\ \ref{mag_pol}. Conversely, the ($U-B$) color of our model reproduces the observed color in Fig.\ \ref{mag_pol} very
well, while the spectrum of \citet{alek97} has much less flux than our model in the $U$ band.
\smallskip
\begin{figure*}[]
\centering
\includegraphics[width=0.49\textwidth]{fit_sed_pollux_valdes04.eps}
\includegraphics[width=0.49\textwidth]{fit_sed_pollux_Alek97.eps}
\caption{Comparison between the Pollux model computed with the parameters obtained from spectroscopy (red line) and the observed spectrum of \citet{valdes04} (left panel) / the SED of \citet{alek97} (right panel). The model was degraded to the resolution of the observed spectra (R$\sim$10000 in the left panel, R$\sim$100 in the right panel). In addition, the model and the observed spectrum of the left panel were smoothed for clarity of the comparison. In both panels the spectra have been normalized with respect to their flux at 5500 \AA. The dot-dashed line shows the $UBVRI$ filters throughputs. The bottom panels show the difference between model and observation.}
\label{fit_sed_pollux}
\end{figure*}
\begin{figure}[]
\centering
\includegraphics[width=9cm]{mag_procyon.eps}
\caption{Same as Fig.\ \ref{mag_pol} for Procyon. The radius was adjusted so that the $V$ magnitude from the model matches the observed magnitude.}
\label{mag_pro}
\end{figure}
\begin{table}
\begin{center}
\caption{Difference between magnitudes calculated from the synthetic and observed spectrum of Pollux and Procyon.}
\label{col_diff}
\begin{tabular}{lccccc}
\hline
Observed spectrum & $\Delta U$ & $\Delta B$ & $\Delta V$ & $\Delta R$ & $\Delta I$ \\
\hline
& & Pollux & \\
\hline
Valdes & -- & 0.02 & 0.02 & 0.04 & --\\
Alekseeva & 0.50 & 0.13 & 0.02 & 0.01 & 0.01 \\
\hline
& & Procyon & \\
\hline
Alekseeva & 0.11 & 0.03 & 0.00 & 0.00 & 0.00\\
\hline
\end{tabular}
\end{center}
\end{table}
Ground-based $UBVRI$ photometry from our reference model of Procyon is compared to observed photometry in Fig.\ \ref{mag_pro}. A comparison of the Procyon ground-based spectrum of \citet{alek97} and \citet{prusou01} with our model is shown in Fig.\ \ref{fit_sed_procyon}\footnote{There is no Procyon spectrum in the database of \citet{valdes04}.}. From this figure and Table \ref{col_diff}, we conclude that the model reproduces the observed spectrum in the $VRI$ bands very
well and that deviations appear in the $B$ and especially the $U$ band. Fig.\ \ref{mag_pro} confirms that the synthetic colors involving the $U$ and $B$ band are problematic. However, the observed $U$ magnitude indicates a higher flux than predicted, while the opposite is seen in Fig.\ \ref{fit_sed_procyon}, where the spectrum of \citet{alek97} has less flux than our model shortward of 4000 \AA. Fig.\ \ref{mag_pro} shows that the theoretical ($R-I$) color is 0.06 magnitude redder than the color obtained from imaging. This trend is not confirmed by the direct comparison of the Alekseeva et al.\ spectrum in Fig.\ \ref{fit_sed_procyon}: according to Table \ref{col_diff}, the $R$ and $I$ magnitudes calculated from the Alekseeva spectrum are the same as those of our Procyon reference model, hence ($R-I$) is also the same.
\begin{figure*}[]
\centering
\includegraphics[width=0.49\textwidth]{fit_sed_procyon_elodie.eps}
\includegraphics[width=0.49\textwidth]{fit_sed_procyon_Alek97.eps}
\caption{Same as Fig.\ \ref{fit_sed_pollux} for Procyon. The left (right) panel shows the ELODIE spectrum of \citet{prusou01} \citep{alek97}.}
\label{fit_sed_procyon}
\end{figure*}
\smallskip
Our comparisons indicate that the model reproduces flux-calibrated observed spectra reasonably well. When we compare photometry computed from the synthetic spectra to photometry resulting from imaging, discrepancies appear. Given the uncertainties in photometry based on filters with passbands covering (part of) the wavelength range below $\sim$4500 \AA, this is expected for $U$ and $B$ filters. The companion white dwarf to Procyon may explain part of the discrepant ($U-B$) and ($B-V$) colors. However, the mismatch observed for $R$ and $I$ filters is worrisome. The magnitude of the discrepancy between observed and synthetic ($V-I$) (or
($R-I$)) for Pollux (for Procyon) cannot be attributed to incorrect modeling of the spectra of these stars since comparisons to observed SEDs are quantitatively rather good. We speculate that differences between the calibration process of our synthetic photometry and the reduction and calibration of the observed photometry are responsible for the mismatch.
This stresses the need for accurate calibrations and for the publication of all the reduction details in order to minimize systematic errors. This is crucial for performing synthetic photometry at the level of 0.01 mag accuracy, a level required if blue filters, which are best suited to studying multiple populations in globular clusters, are to be used.
\section{Conclusion and future work}
\label{s_conc}
We have presented a study of the uncertainties in synthetic photometry in the context of understanding the properties of globular clusters. Our goal was to provide an estimate of the dispersion that can be used to build artificial populations of stars centered on a theoretical isochrone. Such artificial populations can then
be compared to observed populations in CMDs to infer properties of globular clusters.
We have calculated atmosphere models and synthetic spectra with the codes ATLAS12 and SYNTHE, respectively. We chose two reference stars: Pollux, a K0III star typical of giants at the bottom of the RGB in globular clusters, and Procyon, an F5IV-V dwarf typical of TO stars. Using the best spectroscopic parameters and their uncertainties for these two stars, we studied the effect of effective temperature, surface gravity, microturbulent velocity, C, N, O, and Fe abundances on the resulting photometry. We also estimated the changes in photometry caused by uncertain extinction, by the airmass conditions, and by different calibrations of zero points in the VEGAMAG system.
We provide estimates of the dispersion to be expected in photometry based on $UBVRI$ and the following HST filters: F275W, F336W, F410M, F438W, F555W, F606W, and F814W. We show that uncertainties are larger at shorter wavelength, as was known before. Our results indicate that even if synthetic spectra reproduce flux-calibrated SEDs well, synthetic photometry may not reproduce published $UBVRI$ photometry. This most likely reflects different reduction and
calibration processes and calls for the publication of all the details of such processes. This is crucial if a 0.01 mag accuracy, which is necessary to study the properties of multiple populations in globular clusters, is to be reached by synthetic photometry.
Regardless of these issues, the effects of uncertain stellar and observational parameters on synthetic colors will be used in subsequent studies to produce synthetic CMDs that include a realistic treatment of errors. In practice, artificial populations will be built from theoretical isochrones and the dispersion estimated in the present study. The ability of theoretical isochrones to reproduce the location of multiple populations in globular clusters will be tested. This will be useful to constrain the physics of evolutionary models providing isochrones, the physics of atmosphere models that provide synthetic photometry, and ultimately, it will bring additional constraints to some properties of globular clusters (helium content and age).
\begin{acknowledgements}
We thank an anonymous referee for comments that helped to clarify the goal of this study.
We thank Fiorella Castelli, Robert Kurucz, and Marwan Gebran for help with the codes ATLAS12 and SYNTHE. We thank Corinne Charbonnel and William Chantereau for fruitful discussions. We warmly thank Antonino Milone for sharing his HST photometry of NGC~6752.
This research has made use of the SVO Filter Profile Service supported from the Spanish MINECO through grant AyA2014-55216.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
We thank the French ``Programme National de Physique Stellaire (PNPS)'' of CNRS/INSU for financial support.
\end{acknowledgements}
\bibliographystyle{aa}
Q: Why is $2^{16}=65536$ the only power of $2$ less than $2^{31000}$ that doesn't contain the digits $1$, $2$, $4$ or $8$ in its decimal representation?
$65536$ is the only power of $2$ less than $2^{31000}$ that does not contain the digits $1$, $2$, $4$ or $8$ in its decimal representation.
http://en.wikipedia.org/wiki/65536_%28number%29
A: A simple explanation turns on the apparent randomness of the base-$b$ digits of sufficiently large powers of two, in the sense that they tend to behave like random samples. (However, this leaves the apparent randomness unexplained.)
Thus, let $S_k$ denote the multiset of digits appearing in the numeral of $2^k$, and let $n_k$ be their number; i.e., $n_k = \lfloor 1 + k \cdot \log_{b} 2 \rfloor$. If each $S_k$ were a simple random sample, then, for any $K$ and any subset $D \subset \{0, 1, \ldots, b-1\}$ (e.g., $D=\{1,2,4,8\}, \ b=10$),
$$\begin{align}
P_K &= P(\text{at least one digit from } D \text{ appears in \textit{every} } S_K, S_{K+1}, S_{K+2}, \ldots)\\
&= P\left( \bigcap_{i=K}^\infty C_{i} \right)\\
&= \prod_{i=K}^\infty P(C_{i})\\
&= \prod_{i=K}^\infty(1-q^{n_i})
\end{align}
$$
where
$C_i = \{S_i \cap D \ne \emptyset\}$,
$q = P(\text{digit } \notin D) = 1 - \frac{|D|}{b}$.
Here are some computed cases (rounded) for $b=10$ and $|D|=4$:
\begin{array}{|c|c|} K & P_K \\
\hline
1&0.002\\
10&0.304\\
15&0.575\\
20&0.780\\
50&0.998\\
100&0.999999\\
200&0.9999999999999
\end{array}
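For reproducibility, the table above can be recomputed with a short script, truncating the infinite product after enough factors (the function and parameter names below are my own, not from the answer):

```python
import math

def P_K(K, b=10, d=4, terms=5000):
    """Approximate P_K = prod_{i >= K} (1 - q**n_i), where q = 1 - d/b is the
    chance that a random base-b digit avoids D, and n_i = floor(1 + i*log_b 2)
    is the number of base-b digits of 2**i.  The product is truncated after
    `terms` factors; the omitted factors are indistinguishable from 1."""
    q = 1 - d / b
    p = 1.0
    for i in range(K, K + terms):
        n_i = math.floor(1 + i * math.log(2, b))
        p *= 1 - q ** n_i
    return p

for K in (1, 10, 15, 20, 50):
    print(K, round(P_K(K), 3))
```

Run as-is, this should reproduce the rounded values in the table.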
As examples, I've verified that
- with $D=\{1,2,4,8\}$ and $b=10$, a digit from $D$ occurs among the digits of every $2^k$ for the range $17\le k\le 200000$;
- with $D=\{1,2,3,4\}$ and $b=10$, a digit from $D$ occurs among the digits of every $2^k$ for the range $4\le k\le 200000$.
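Both checks are easy to reproduce with a few lines of code; the bound 2000 below (instead of the 200000 used above) is only to keep the runtime trivial:

```python
def avoids(k, D):
    """True when the decimal representation of 2**k uses no digit from D."""
    return not (set(str(2 ** k)) & set(D))

def exceptions(D, kmax):
    """Exponents 1 <= k <= kmax for which 2**k avoids every digit in D."""
    return [k for k in range(1, kmax + 1) if avoids(k, D)]

print(exceptions("1248", 2000))  # -> [16]  (2**16 = 65536)
print(exceptions("1234", 2000))  # -> [3]   (2**3 = 8)
```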
Therefore, it seems extremely likely that both of these statements are true:
- $2^{16}$ is the only power of two that does not contain a digit from $\{1,2,4,8\}$.
- $2^3$ is the only power of two that does not contain a digit from $\{1,2,3,4\}$.
NB: These are examples of probably true and unprovable "Dyson statements".
Who is Marshawn Lynch? Within days one of the shafts gave way and burst, letting water flow into the plaintiff's mines on the adjoining property. Ryland lynch latest news photos and videos. Are ryland lynch and savannah hudson dating site best online.
How tall is ryland blackinton? Therefore, referential audits are needed if there were any violations in the history of the resource. While Rome used cords of wood as crosses for standing human bodies along its highways in great numbers, you are here using the tree and the rope on occasions.
It should be recalled that inGoogle received about 35 thousand messages about spam from users every month. Take this simple little list of differences and think about them. The term "lynching" is derived from his last name.
During the Ver pelicula idiocracia online dating, the contractor found five long ago abandoned vertical shafts. Defendant owned a mill and decided to build a reservoir.
You must use the DARK skin slaves vs. I have it for are ryland lynch and savannah hudson dating apps years already and I do not have a file named Disavow.
As for the report processing time, it takes some considerable time. William Lynch performed lynchings as a method of extralegalsuppression against uprisings.
Have your wives and children use them, never miss an opportunity.
In the coming months, developers are planning to launch it for a wide audience along with official rules and guidelines. The new feature is primarily targeted at corporate Google Drive users. Between the end of the US Civil War and the s, lynchings of blacks for racial reasons were more likely to occur in the states of the former Confederacy.
The defendant occupied his land near to where the plaintiff operated a coal mine. See the pic rydel lynch kisses her brother39s rumored girlfriend.
I am here to help you solve some of your problems with slaves.
At the same time, he noted that small reports about violations of one page scale are less prioritized for Google. Savannah latimer and taylah winters have been best friends for most of their lives. The reason is that the crawler already scans the content that fast, so the benefits that the browser receives web pages loading time is decreased are not that important.
No, we do not check all spam reports manually. Rydel lynch puts a gender reversal spin on the hugh hefner costume while dressing upnbsp. This can also be the contents of the entire hard disk or the Documents folder.
Gentlemen, you know what your problems are; I do not need to elaborate. He including with his siblingsnbsp. It is important to remember that rejecting links can lead to a decrease in resource positions in the global search results, since many webmasters often reject links that actually help the website, rather than doing any harm to it.
He was invited to the colony of Virginia in to teach his methods to slave owners there.
Where did the name ryland come from? We publicly state that we have factors when it comes to scanning, indexing and ranking. According to Gary Illyes, auditing of links is not necessary for all websites at the present moment.
Do you check each and every report manually? What were the issues of the case Rylands v Fletcher?
This will help them understand how subscribers interact with similar materials. They are not necessary for many website owners and it is better to spend this time on improving the website itself, says Slagg.
For instance, one algorithm can be used to display a letter on the search results page. Nah, I would not worry about that, but do not try to make them as less obtrusive as possible.
Siblings brandon older brother dating ryland lynch savannah is a. Using Canonical, you are telling that two pages should be processes identically. Any member of your family or your overseer can use it.
One of the participants asked Mueller at the meeting: First, I shall thank you, the gentlemen of the Colony of Virginia, for bringing me here.
But when this information can be applied to a number of pages, these reports become more valuable and are prior to be checked. This information was stated by the Google search representative Gary Illyes on Twitter. Don't forget, you must pitch the OLD black male vs. Yes he just doesnt no it yet.
Who was lynched in Marion IN? There is not much information about ryland lynch39s personal life. I don't think that helding too many audits makes sense, because, as you noted, we successfully ignore the links, and if we see that the links are of an organic nature, it is highly unlikely that we will apply manual sanctions to a website.
To date, a new feature is only available for a small number of companies and content authors. It means "land where rye is grown. But when savannah starts dating riker lynch and is dragged into his rocknbsp. Earlier this month it became known that the location of internal links on the page does not affect their weight.
For geotargeting we use mostly the ccTLD or search console setting, so place the server. Her bff savannah hudson who has been in an onagain offagain relationship with ryland lynch shared a photo of their. We discussed this issue for a long time, at least inside the team.
Coelanthum grandiflorum is a species of plant that was described by Ernst Meyer and Edward Fenzl. Coelanthum grandiflorum belongs to the genus Coelanthum and the family Molluginaceae. No subspecies are listed in the Catalogue of Life.

Sources

External links

Molluginaceae
grandiflorum
\section{Introduction}
The purpose of this paper is to construct a model structure on the category of (small) bigroupoids and pseudofunctors. In a nutshell, a model structure provides an environment in which one can do abstract homotopy theory. The notion was first introduced by Quillen in \cite{MR0223432}, but has been further refined over the years. Standard references regarding the theory of model structures are \cite{MR1650134} and \cite{MR1944041}. Some well-known examples of categories carrying a model structure are the category of topological spaces, the category of simplicial sets and the category of (small) groupoids. The latter is closely related to the main category of this paper. As the name suggests, bigroupoids are a second order analog of groupoids. This analogy persists in the model structure we present below, as it is highly similar to the classical model structure on the category of groupoids. The fact that the collection of 1- and 2-cells between two fixed 0-cells in a bigroupoid forms a groupoid even allows us to use the model structure for groupoids to our advantage at several points in the construction.
The model structure on bigroupoids we give here is not the first model structure on a category whose objects are 2-categorical in nature. In \cite{MR1239560}, Moerdijk and Svensson give a model structure on the category of (small) 2-groupoids and 2-functors, and in \cite{MR1931220}, Lack gives one on the category of (small) 2-categories and 2-functors. In \cite{MR2138540} Lack corrects an error made in \cite{MR1931220}, while also giving a model structure on the category of (small) bicategories and strict homomorphisms. A bicategory is a weaker variant of a 2-category, in the same way that a bigroupoid is a weaker variant of a 2-groupoid. So, we see that model structures exist both on categories with weak and categories with strict 2-categorical objects. However, a commonality of the aforementioned categories is that all their morphisms are strict.
The morphisms of the category on which we build a model structure are the pseudofunctors, which are not strict. Pseudofunctors are more general and, in many respects, the more natural notion of morphism to use. This is illustrated in Example 3.1 and Remark 4.4 of \cite{MR1931220}, where morphisms that `should' exist only exist as pseudofunctors, even if everything else is strict. It is also reflected in the fact that the cofibrations in the model structure we give below allow a more straightforward description than those of \cite{MR1239560}, \cite{MR1931220} and \cite{MR2138540}, despite using `the same' fibrations and weak equivalences. Moreover, the constructions in this paper are elementary, in the sense that no sophisticated machinery such as the small object argument or other transfinite constructions is used.
Weak morphisms are generally not as well-behaved as strict ones and can be, for this and other reasons, more difficult to work with. For example: although the category of 2-categories and 2-functors is complete and cocomplete by standard arguments, this argument breaks down if one also considers pseudofunctors. In fact, the category of 2-categories and pseudofunctors is neither complete nor cocomplete \cite{MR1931220}. A similar argument can be made for pseudofunctors in the context of bigroupoids. However, products and coproducts can be computed in the naive way, even in the presence of pseudofunctors, and in this paper we prove that certain pullbacks along pseudofunctors exist as well.
In the process of constructing our model structure, we make use of two coherence theorems, which are proven in their entirety in the appendix. The classical way to understand a coherence theorem is the following, as formulated by Mac Lane in \cite{MR1712872}:
\begin{quote}
\textit{A coherence theorem asserts: ``Every diagram commutes''; more modestly, that every diagram of a certain class commutes.}
\end{quote}
Since Mac Lane proved the first coherence theorem -- for monoidal categories in his case -- views have shifted on what is, or should be, considered a `coherence theorem' \cite{MR985657}, but for us the classical formulation remains the most useful one. At several points in the proofs below, the coherence theorems allow us to recognize that certain diagrams commute at a glance, trivializing computations that would have been very messy and laborious otherwise. The proofs of these coherence theorems draw heavily on \cite{MR723395} and \cite{MR3076451}, which are in turn based on \cite{MR641327} and \cite{MR1250465} respectively.
\section{The category of bigroupoids}
\subsection{Bigroupoids}
Before introducing bigroupoids, we will define a wider class of structures which we imaginatively name \textit{incoherent bigroupoids}. This weaker notion ignores the usual coherence conditions and is exclusively used as a convenient intermediary step in some of the constructions. Unless otherwise specified, the structures in this paper are bigroupoids.
\begin{dfn}
An \textit{incoherent bigroupoid} $\mathcal{B}$ consists of the following data:
\begin{itemize}
\item{A set $\mathcal{B}_0$ (with elements \emph{0-cells} $A, B, \ldots$)}
\item{For every combination of 0-cells $A,B$ a groupoid $\mathcal{B}(A, B)$ (with objects \emph{1-cells} $f, g, \ldots$ and arrows \emph{2-cells} $\alpha, \beta, \ldots$)}
\item{For every combination of 0-cells $A, B, C$ a functor
\begin{align*}
\mathbf{C}_{A, B, C} : \mathcal{B}(B, C) \times \mathcal{B}(A, B) & \longrightarrow \mathcal{B}(A, C)\\
(g, f) & \longmapsto g * f \\
(\beta, \alpha) & \longmapsto \beta * \alpha
\end{align*} }
\item{For every 0-cell $A$ a functor
\begin{align*}
\mathbf{U}_{A} : 1 & \longrightarrow \mathcal{B}(A, A)\\
\bullet & \longmapsto 1_A \\
\mathrm{id}_{\bullet} & \longmapsto \mathrm{id}_{1_A}
\end{align*} }
\item{For every combination of 0-cells $A, B$ a functor
\begin{align*}
\mathbf{I}_{A, B} : \mathcal{B}(A, B) & \longrightarrow \mathcal{B}(B, A)\\
f & \longmapsto f^{*} \\
\alpha & \longmapsto \alpha^{*}
\end{align*} }
\item{For every combination of 0-cells $A, B, C, D$ a natural isomorphism
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{B}(C, D) \times \mathcal{B}(B, C) \times \mathcal{B}(A, B) \arrow[r, "\mathrm{id} \times \mathbf{C}_{A, B, C}"] \arrow[d, swap, "\mathbf{C}_{B, C, D} \times \mathrm{id}"] & \mathcal{B}(C, D) \times \mathcal{B}(A, C) \arrow[d, "\mathbf{C}_{A, C, D}"] \\
\mathcal{B}(B, D) \times \mathcal{B}(A, B) \arrow[r, swap, "\mathbf{C}_{A, B, D}"] \arrow[ru, Rightarrow, shorten >=40pt, shorten <=40pt, "\mathbf{a}_{A, B, C, D}"] & \mathcal{B}(A, D)
\end{tikzcd}
\end{equation*}}
\item{For every combination of 0-cells $A, B$ natural isomorphisms
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{B}(A, B) \times 1 \arrow[d, swap, "\mathrm{id} \times \mathbf{U}_{A}"] \arrow[dr, sloped, "\sim"{name=A}] & &[-40pt] \mathcal{B}(A, B) \arrow[r, "!"] \arrow[d, swap, "{\langle \mathbf{I}_{A,B} , \mathrm{id} \rangle}"] & 1 \arrow[d, "\mathbf{U}_{A}"] \\
\mathcal{B}(A, B) \times \mathcal{B}(A, A) \arrow[r, swap, "\mathbf{C}_{A, A, B}"] & \mathcal{B}(A, B) & \mathcal{B}(B, A) \times \mathcal{B}(A, B) \arrow[r, swap, "\mathbf{C}_{A, B, A}"] \arrow[ru, Rightarrow, shorten >=45pt, shorten <=45pt, "\mathbf{e}_{A, B}"] & \mathcal{B}(A, A) \\[-20pt]
1 \times \mathcal{B}(A, B) \arrow[d, swap, "\mathbf{U}_{B} \times \mathrm{id}"] \arrow[dr, sloped, "\sim"{name=B}] & & \mathcal{B}(A, B) \arrow[r, "{\langle \mathrm{id} , \mathbf{I}_{A,B} \rangle}"] \arrow[d, swap, "!"] & \mathcal{B}(A, B) \times \mathcal{B}(B, A) \arrow[d, "\mathbf{C}_{B, A, B}"] \\
\mathcal{B}(B, B) \times \mathcal{B}(A, B) \arrow[r, swap, "\mathbf{C}_{A, B, B}"] & \mathcal{B}(A, B) & 1 \arrow[r, swap, "\mathbf{U}_{B}"] \arrow[ru, Rightarrow, shorten >=45pt, shorten <=45pt, "\mathbf{i}_{A, B}"] & \mathcal{B}(B, B)
\arrow[Rightarrow, shorten >=10pt, shorten <=10pt, from=2-1, to=A, "\mathbf{r}_{A,B}"]
\arrow[Rightarrow, shorten >=10pt, shorten <=10pt, from=4-1, to=B, "\mathbf{l}_{A,B}"]
\end{tikzcd}
\end{equation*}}
\end{itemize}
\end{dfn}
\begin{rmk} \label{localrmk}
The properties of the groupoids $\mathcal{B}(A, B)$ are referred to as \textit{local} properties. For example, if every $\mathcal{B}(A, B)$ is discrete, it is said that $\mathcal{B}$ is locally discrete.
\end{rmk}
\begin{dfn}
A \textit{bigroupoid} $\mathcal{B}$ is an incoherent bigroupoid satisfying the following extra conditions:
\begin{itemize}
\item{For every combination
\begin{equation*}
A \overset{f}{\longrightarrow} B \overset{g}{\longrightarrow} C \overset{h}{\longrightarrow} D \overset{k}{\longrightarrow} E
\end{equation*}
of composable 1-cells, the following diagram commutes
\begin{equation} \label{coh1}
\begin{tikzcd}[row sep=huge, column sep=huge]
((kh)g)f \arrow[r, "\mathbf{a} * \mathrm{id}" ] \arrow[d, swap, "\mathbf{a}" ] & (k(hg))f \arrow[r, "\mathbf{a}" ] & k((hg)f) \arrow[d, "\mathrm{id} * \mathbf{a}"] \\
(kh)(gf) \arrow[rr, swap, "\mathbf{a}" ] & & k(h(gf))
\end{tikzcd}
\end{equation}}
\item{For every combination
\begin{equation*}
A \overset{f}{\longrightarrow} B \overset{g}{\longrightarrow} C
\end{equation*}
of composable 1-cells, the following diagram commutes
\begin{equation} \label{coh2}
\begin{tikzcd}[row sep=huge, column sep=huge]
(g1)f \arrow[rr, "\mathbf{a}"] \arrow[dr, swap, "\mathbf{r} * \mathrm{id}"] & & g(1f) \arrow[dl, "\mathrm{id} * \mathbf{l}"] \\
& gf &
\end{tikzcd}
\end{equation}}
\item{For every 1-cell
\begin{equation*}
A \overset{f}{\longrightarrow} B
\end{equation*}
the following diagram commutes
\begin{equation} \label{coh3}
\begin{tikzcd}[row sep=huge, column sep=huge]
1f \arrow[r, "\mathbf{i} * \mathrm{id}" ] \arrow[d, swap, "\mathbf{l}"] & (ff^*)f \arrow[r, "\mathbf{a}"] & f(f^*f) \arrow[d, "\mathrm{id} * \mathbf{e}"] \\
f & & f1 \arrow[ll, "\mathbf{r}"]
\end{tikzcd}
\end{equation}}
\end{itemize}
\end{dfn}
\begin{rmk}
We will sometimes write $-*-$ for the functor $\mathbf{C}_{A, B, C}$ and abbreviate $g * f$ to $gf$, for 1-cells $f$ and $g$. The action of the functor $-*-$ on 2-cells is sometimes referred to as \textit{horizontal composition}, to distinguish it from the ordinary composition of 2-cells as arrows in a category, which is in turn referred to as \textit{vertical composition} and is usually denoted by $- \circ -$.
\end{rmk}
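\begin{rmk}
Note that, since each $\mathbf{C}_{A, B, C}$ is a functor, horizontal and vertical composition automatically satisfy the interchange law: whenever the composites below are defined,
\begin{equation*}
(\beta' \circ \beta) * (\alpha' \circ \alpha) = (\beta' * \alpha') \circ (\beta * \alpha).
\end{equation*}
\end{rmk}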
\begin{dfn}
A \textit{strict bigroupoid} or \textit{2-groupoid} is a bigroupoid in which the natural isomorphisms $\mathbf{a}$, $\mathbf{l}$, $\mathbf{r}$, $\mathbf{e}$ and $\mathbf{i}$ are all identities.
\end{dfn}
\subsection{Morphisms of bigroupoids}
As in the previous section, we first introduce a weaker notion of morphism, which ignores coherence conditions.
\begin{dfn}
An \textit{incoherent morphism} $(F, \phi)$ from a (possibly incoherent) bigroupoid $\mathcal{B}$ to a (possibly incoherent) bigroupoid $\mathcal{B}'$ consists of the following data:
\begin{itemize}
\item{A function
\begin{equation*}
F : \mathcal{B}_{0} \longrightarrow \mathcal{B}_{0}'
\end{equation*}}
\item{For every combination of 0-cells $A, B$ in $\mathcal{B}$ a functor
\begin{equation*}
F_{A, B} : \mathcal{B}(A, B) \longrightarrow \mathcal{B}'(FA, FB)
\end{equation*}}
\item{For every combination of 0-cells $A, B, C$ in $\mathcal{B}$ a natural isomorphism
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{B}(B, C) \times \mathcal{B}(A, B) \arrow[r, "\mathbf{C}_{A, B, C}"] \arrow[d, swap, "F_{B, C} \times F_{A, B}"] & \mathcal{B}(A, C) \arrow[d, "F_{A, C}"] \\
\mathcal{B}'(FB, FC) \times \mathcal{B}'(FA, FB) \arrow[r, swap, "\mathbf{C}_{FA, FB, FC}'"] \arrow[ru, Rightarrow, shorten >=40pt, shorten <=40pt, "\phi_{A, B, C}"] & \mathcal{B}'(FA, FC)
\end{tikzcd}
\end{equation*}}
\item{For every 0-cell $A$ in $\mathcal{B}$ a natural isomorphism
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
1 \arrow[r, "\mathbf{U}_{A}"] \arrow[d, swap, "\mathrm{id}"] & \mathcal{B}(A, A) \arrow[d, "F_{A, A}"] \\
1 \arrow[r, swap, "\mathbf{U}_{FA}'"] \arrow[ru, Rightarrow, shorten >=25pt, shorten <=25pt, "\phi_{A}"] & \mathcal{B}'(FA, FA)
\end{tikzcd}
\end{equation*}}
\item{For every combination of 0-cells $A, B$ in $\mathcal{B}$ a natural isomorphism
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{B}(A, B) \arrow[r, "\mathbf{I}_{A, B}"] \arrow[d, swap, "F_{A, B}"] & \mathcal{B}(B, A) \arrow[d, "F_{B, A}"] \\
\mathcal{B}'(FA, FB) \arrow[r, swap, "\mathbf{I}_{FA, FB}'"] \arrow[ru, Rightarrow, shorten >=30pt, shorten <=30pt, "\phi_{A, B}"] & \mathcal{B}'(FB, FA)
\end{tikzcd}
\end{equation*}}
\end{itemize}
\end{dfn}
\begin{rmk}
The properties of the functors $F_{A, B}$ are referred to as \textit{local} properties. For example, if every $F_{A, B}$ is faithful, it is said that $(F, \phi)$ is locally faithful. (This is similar to Remark \ref{localrmk}.)
\end{rmk}
\begin{dfn}
A \textit{morphism} $(F, \phi)$ from a (possibly incoherent) bigroupoid $\mathcal{B}$ to a (possibly incoherent) bigroupoid $\mathcal{B}'$ is an incoherent morphism satisfying the following extra conditions:
\begin{itemize}
\item{For every combination
\begin{equation*}
A \overset{f}{\longrightarrow} B \overset{g}{\longrightarrow} C \overset{h}{\longrightarrow} D
\end{equation*}
of composable 1-cells, the following diagram commutes
\begin{equation}\label{coh4}
\begin{tikzcd}[row sep=huge, column sep=huge]
(Fh * Fg) * Ff \arrow[r, "\phi * \mathrm{id}"] \arrow[d, swap, "\mathbf{a}'"] & F(h * g) * Ff \arrow[r, "\phi"] & F((h * g) * f) \arrow[d, "F\mathbf{a}"] \\
Fh * (Fg * Ff) \arrow[r, swap, "\mathrm{id} * \phi"] & Fh * F(g * f) \arrow[r, swap, "\phi"] & F(h * (g * f))
\end{tikzcd}
\end{equation}}
\item{For every 1-cell
\begin{equation*}
A \overset{f}{\longrightarrow} B
\end{equation*}
the following diagrams commute
\begin{equation}\label{coh5}
\begin{tikzcd}[row sep=huge]
Ff * 1_{FA} \arrow[r, "\mathrm{id} * \phi"] \arrow[d, swap, "\mathbf{r}'"] & Ff * F 1_{A} \arrow[r, "\phi"] & F(f * 1_{A}) \arrow[d, "F \mathbf{r}"] &[-20pt] 1_{FB} * Ff \arrow[r, "\phi * \mathrm{id}"] \arrow[d, swap, "\mathbf{l}'"] & F 1_{B} * Ff \arrow[r, "\phi"] & F(1_{B} * f) \arrow[d, "F \mathbf{l}"] \\
Ff \arrow[rr, swap, "\mathrm{id}"] & & Ff & Ff \arrow[rr, swap, "\mathrm{id}"] & & Ff \\[-20pt]
(Ff)^{*} * Ff \arrow[r, "\phi * \mathrm{id}" ] \arrow[d, swap, "\mathbf{e}'"] & F(f^{*}) * Ff \arrow[r, "\phi"] & F( f^{*} * f) \arrow[d, "F \mathbf{e}"] & 1_{FB} \arrow[d, swap, "\mathbf{i}'" ] \arrow[rr, "\phi"] & & F 1_{B} \arrow[d, "F \mathbf{i}"] \\
1_{FA} \arrow[rr, swap, "\phi"] & & F 1_{A} & Ff * (Ff)^{*} \arrow[r, swap, "\mathrm{id} * \phi"] & Ff * F (f^{*}) \arrow[r, swap, "\phi"] & F(f * f^{*})
\end{tikzcd}
\end{equation}}
\end{itemize}
\end{dfn}
\begin{rmk}
These types of morphisms are sometimes referred to as \textit{pseudofunctors} or \textit{weak 2-functors}, since they are not, in general, structure-preserving maps. A morphism $(F, \phi)$ for which $\phi = \mathrm{id}$, and which therefore does preserve all structure (not just up to isomorphism), is called \textit{strict}.
\end{rmk}
The composition of two (possibly incoherent) morphisms $(F, \phi) : \mathcal{B} \longrightarrow \mathcal{B}'$ and $(G, \psi) : \mathcal{B}' \longrightarrow \mathcal{B}''$ is given by
\begin{equation*}
(G, \psi) \circ (F, \phi) = (G \circ F, G \phi \circ \psi F) : \mathcal{B} \longrightarrow \mathcal{B}''
\end{equation*}
Here, $G \phi \circ \psi F$ represents the pasting of diagrams, as in:
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{B}(A, B) \arrow[r, "\mathbf{I}_{A, B}"] \arrow[d, swap, "F_{A, B}"] & \mathcal{B}(B, A) \arrow[d, "F_{B, A}"] \\
\mathcal{B}'(FA, FB) \arrow[r, "\mathbf{I}_{FA, FB}"] \arrow[d, swap, "G_{FA, FB}"] \arrow[ru, shorten >=35pt, shorten <=35pt, Rightarrow, "\phi_{A, B}"] & \mathcal{B}'(FB, FA) \arrow[d, "G_{FB, FA}"] \\
\mathcal{B}''(GFA, GFB) \arrow[r, swap, "\mathbf{I}_{GFA, GFB}"] \arrow[ru, shorten >=35pt, shorten <=35pt, Rightarrow, "\psi_{FA, FB}"] & \mathcal{B}''(GFB, GFA)
\end{tikzcd}
\end{equation*}
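In components, the pasted natural isomorphism is given, for the inversion structure shown above, by
\begin{equation*}
(G \phi \circ \psi F)_{f} = G( \phi_{f} ) \circ \psi_{Ff}
\end{equation*}
at each 1-cell $f : A \longrightarrow B$, and analogously for the composition and identity structures.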
This operation is clearly associative and unital.
\begin{rmk}
In many of the upcoming proofs, we need to make separate constructions concerning composition, inversion and identity respectively. However, since these three types of constructions are usually highly similar, we will generally only provide the one for composition. We will not mention this omission in every individual proof.
\end{rmk}
Let us prove two useful lemmas which show that maps and structures can `inherit' coherence properties to some extent.
\begin{lem}\label{lem1}
Let
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(F, \phi)}"] \arrow[dr, swap, "{(H, \eta)}"] & \mathcal{B} \arrow[d, "{(G, \gamma)}"] \\
& \mathcal{C}
\end{tikzcd}
\end{equation*}
be a commutative diagram of incoherent morphisms between (possibly incoherent) bigroupoids. If two of the following conditions are satisfied, then so is the third:
\begin{description}
\item[(1)] The diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\gamma F$.
\item[(2)] The diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\phi$, after $G$ is applied to them.
\item[(3)] The diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\eta$.
\end{description}
\end{lem}
\begin{proof}
We only consider $\mathbf{a}$. The proofs for $\mathbf{l}$, $\mathbf{r}$, $\mathbf{e}$ and $\mathbf{i}$ are similar. The left inner rectangle, the right inner rectangle and the perimeter of the following diagram correspond to conditions \textbf{(1)}, \textbf{(2)} and \textbf{(3)}, respectively.
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
& & \cdot \arrow[dr, swap, "\gamma F"] \arrow[drr, "\eta"] & & \\
\cdot \arrow[r, swap, "\gamma F * \mathrm{id}"] \arrow[rru, "\eta * \mathrm{id}"] \arrow[d, swap, "\mathbf{a}"] & \cdot \arrow[r, swap, "\gamma F"] \arrow[ru, swap, "G \phi * \mathrm{id}"] & \cdot \arrow[r, swap, "G( \phi * \mathrm{id})"] \arrow[d, "G \mathbf{a}"] & \cdot \arrow[r, swap, "G \phi"] & \cdot \arrow[d, "GF \mathbf{a}"] \\
\cdot \arrow[r, "\mathrm{id} * \gamma F"] \arrow[rrd, swap, "\mathrm{id} * \eta"] & \cdot \arrow[r, "\gamma F"] \arrow[rd, "\mathrm{id} * G \phi"] & \cdot \arrow[r, "G( \mathrm{id} * \phi)"] & \cdot \arrow[r, "G \phi"] & \cdot \\
& & \cdot \arrow[ru, "\gamma F"] \arrow[rru, swap, "\eta"] & & \\
\end{tikzcd}
\end{equation*}
Since the other components of the diagram commute by naturality of $\gamma$ and the fact that $(G, \gamma) \circ ( F, \phi) = (H, \eta)$, irrespective of the three conditions, this proves the lemma.
\end{proof}
\begin{cor}
Morphisms between bigroupoids are closed under composition, so the collection of bigroupoids forms a category.
\end{cor}
\begin{proof}
This follows directly from $\textbf{(1)} + \textbf{(2)} \Longrightarrow \textbf{(3)}$ of Lemma \ref{lem1}.
\end{proof}
\begin{lem}\label{lem2}
Let $(F, \phi) : \mathcal{A} \longrightarrow \mathcal{B}$ be a morphism between incoherent bigroupoids. Then the following are equivalent:
\begin{description}
\item[(1)] The diagrams (\ref{coh1}), (\ref{coh2}) and (\ref{coh3}) commute for 1-cells in the image of $F$.
\item[(2)] The diagrams (\ref{coh1}), (\ref{coh2}) and (\ref{coh3}) commute, after $F$ is applied to them.
\end{description}
\end{lem}
\begin{proof}
We only consider (\ref{coh2}). The proofs for (\ref{coh1}) and (\ref{coh3}) are similar. The innermost and outermost triangles of the following diagram correspond to conditions \textbf{(1)} and \textbf{(2)}, respectively.
\begin{equation*}
\begin{tikzcd}[row sep=large, column sep=large]
\cdot \arrow[rrrrrrrr, "\mathbf{a}"] \arrow[dr, "(\mathrm{id} * \phi) *\mathrm{id}"] \arrow[dddddddrrrr, swap, "\mathbf{r} * \mathrm{id}"] & & & & & & & & \cdot \arrow[dl, swap, "\mathrm{id} * ( \phi * \mathrm{id} )"] \arrow[dddddddllll, "\mathrm{id} * \mathbf{l}"] \\
& \cdot \arrow[rrrrrr, "\mathbf{a}"] \arrow[dr, "\phi * \mathrm{id}"] \arrow[dddddrrr, "F\mathbf{r} * \mathrm{id}" description] & & & & & & \cdot \arrow[dl, swap, "\mathrm{id} * \phi"] \arrow[dddddlll, swap, "\mathrm{id} * F \mathbf{l}" description] & \\
& & \cdot \arrow[dr, "\phi"] & & & & \cdot \arrow[dl, swap, "\phi"] & & \\
& & & \cdot \arrow[rr, "F \mathbf{a}"] \arrow[dr, swap, "F( \mathbf{r} * \mathrm{id})" description] & & \cdot \arrow[dl, "F( \mathrm{id} * \mathbf{l} )" description] & & & \\
& & & & \cdot & & & & \\
& & & & & & & & \\
& & & & \cdot \arrow[uu, "\phi"] & & & & \\
& & & & \cdot \arrow[u, "\mathrm{id}"] & & & &
\end{tikzcd}
\end{equation*}
Since the other components of the diagram commute by naturality of $\phi$ and the fact that $( F, \phi)$ is a morphism, irrespective of the two conditions, this proves the lemma.
\end{proof}
\section{Model structures}
Since the literature contains multiple nonequivalent definitions of what constitutes a model structure, we briefly describe the notion we use here.
\begin{dfn}
Let $f$ and $g$ be morphisms in a category $\mathcal{C}$. If for every commutative square
\begin{equation*}
\begin{tikzcd} [row sep=large, column sep=large]
\cdot \arrow[r] \arrow[d, swap, "f"] & \cdot \arrow[d, "g"] \\
\cdot \arrow[ru, dashed, "\exists"] \arrow[r] & \cdot
\end{tikzcd}
\end{equation*}
a diagonal arrow exists as indicated in the diagram, then we say that \textit{$f$ has the left lifting property with respect to $g$} or, equivalently, that \textit{$g$ has the right lifting property with respect to $f$}.
\end{dfn}
\begin{dfn}
A \textit{weak factorization system} on a category $\mathcal{C}$ is a pair $( \mathcal{L}, \mathcal{R} )$ of classes of morphisms in $\mathcal{C}$ such that
\begin{description}
\item[(1)] any morphism in $\mathcal{C}$ can be factored as a morphism of $\mathcal{L}$ followed by a morphism of $\mathcal{R}$, and
\item[(2)] $\mathcal{L}$ consists precisely of those morphisms having the left lifting property with respect to every morphism in $\mathcal{R}$, and symmetrically, $\mathcal{R}$ consists precisely of those morphisms having the right lifting property with respect to every morphism in $\mathcal{L}$.
\end{description}
\end{dfn}
\begin{dfn}
A \textit{model structure} on a category $\mathcal{M}$ consists of three classes $\mathcal{F}$, $\mathcal{C}$ and $\mathcal{W}$ of morphisms in $\mathcal{M}$, called \emph{fibrations}, \emph{cofibrations} and \emph{weak equivalences} respectively, such that
\begin{description}
\item[(1)] $\mathcal{W}$ contains all isomorphisms and is closed under $2$-out-of-$3$, meaning that whenever the composition $g \circ f$ is defined and two of $f$, $g$ and $g \circ f$ lie in $\mathcal{W}$, then so does the third, and
\item[(2)] both $( \mathcal{C}, \mathcal{F} \cap \mathcal{W})$ and $( \mathcal{C} \cap \mathcal{W}, \mathcal{F})$ are weak factorization systems on $\mathcal{M}$.
\end{description}
\end{dfn}
\begin{rmk}
The classes $\mathcal{F} \cap \mathcal{W}$ and $\mathcal{C} \cap \mathcal{W}$ are commonly called the \textit{trivial fibrations} and \textit{trivial cofibrations} respectively.
\end{rmk}
We can now formulate the main theorem of this paper.
\begin{thm} \label{mainthm}
The category of bigroupoids and pseudofunctors carries a model structure, with fibrations, cofibrations and weak equivalences as given in Definitions \ref{dfn1}, \ref{dfn2} and \ref{dfn3} below.
\end{thm}
\begin{dfn}\label{dfn1}
A morphism $F : \mathcal{A} \longrightarrow \mathcal{B}$ is said to be a \emph{fibration} if it satisfies the following two conditions:
\begin{description}
\item[(1)] For every 0-cell $A'$ in $\mathcal{A}$ and every 1-cell $b : B \longrightarrow FA'$ in $\mathcal{B}$ there exists a 1-cell $a : A \longrightarrow A'$ in $\mathcal{A}$ such that $FA = B$ and $Fa = b$.
\item[(2)] For every 1-cell $a' : A \longrightarrow A'$ in $\mathcal{A}$ and every 2-cell $\beta : b \longrightarrow Fa'$ there exists a 2-cell $\alpha : a \longrightarrow a'$ in $\mathcal{A}$ such that $Fa = b$ and $F \alpha = \beta$.
\end{description}
\end{dfn}
\begin{dfn}\label{dfn2}
A morphism $F : \mathcal{A} \longrightarrow \mathcal{B}$ is said to be a \emph{cofibration} if it satisfies the following two conditions:
\begin{description}
\item[(1)] The function $F : \mathcal{A}_0 \longrightarrow \mathcal{B}_0$ is injective.
\item[(2)] For every combination of 0-cells $A, A'$ in $\mathcal{A}$, the functor $F_{A, A'} : \mathcal{A}( A, A') \longrightarrow \mathcal{B}( FA, FA')$ is injective on objects.
\end{description}
\end{dfn}
\begin{dfn}\label{dfn3}
A morphism $F : \mathcal{A} \longrightarrow \mathcal{B}$ is said to be a \emph{weak equivalence} if it satisfies the following two conditions:
\begin{description}
\item[(1)] For every 0-cell $B$ in $\mathcal{B}$ there exists a 0-cell $A'$ in $\mathcal{A}$ and a 1-cell $b : B \longrightarrow FA'$ in $\mathcal{B}$.
\item[(2)] For every combination of 0-cells $A, A'$ in $\mathcal{A}$, the functor $F_{A, A'} : \mathcal{A}( A, A') \longrightarrow \mathcal{B}( FA, FA')$ is an equivalence of categories.
\end{description}
\end{dfn}
\begin{rmk}
A morphism satisfying the conditions of Definition \ref{dfn3} is also known as a \textit{biequivalence}. Notice that if a morphism $F : \mathcal{A} \longrightarrow \mathcal{B}$ is in class $\mathcal{X}$ (fibrations, cofibrations, or weak equivalences), then $F$ is locally in the corresponding class $\mathcal{X}$ of the canonical model structure on the category of groupoids. This is precisely the second part of Definitions \ref{dfn1}, \ref{dfn2} and \ref{dfn3}. Also note that the trivial fibrations may be characterized as those weak equivalences that are surjective on 0-cells and locally surjective on objects (1-cells).
\end{rmk}
\begin{lem} \label{lem8}
\hfill
\begin{description}
\item[(1)] Every isomorphism is a weak equivalence.
\item[(2)] The weak equivalences satisfy the \textit{2-out-of-3} property.
\item[(3)] The fibrations, cofibrations and weak equivalences are closed under retracts.
\end{description}
\end{lem}
\begin{proof}
Straightforward.
\end{proof}
\section{The cofibration - trivial fibration WFS}
In this section, we aim to prove the following proposition.
\begin{prop}
The cofibrations and trivial fibrations form a weak factorization system.
\end{prop}
By the retract argument, it suffices to show that the cofibrations have the left lifting property with respect to the trivial fibrations and that every morphism factors as a cofibration followed by a trivial fibration.
\subsection{Lifting property}
\begin{lem}
The cofibrations have the left lifting property with respect to the trivial fibrations.
\end{lem}
\begin{proof}
Given a commutative square
\begin{equation} \label{cotrflift}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(F, \phi)}"] \arrow[d, swap, "{(K, \kappa)}"] & \mathcal{B} \arrow[d, "{(G, \gamma)}"] \\
\mathcal{D} \arrow[r, swap, "{(H, \eta)}"] \arrow[ru, dashed, "{\exists (L, \lambda)}"] & \mathcal{C}
\end{tikzcd}
\end{equation}
in which $K$ is a cofibration and $G$ is a trivial fibration, we construct a diagonal filler $L$, as indicated in the diagram.
Let $L : \mathcal{D}_{0} \longrightarrow \mathcal{B}_{0}$ be a function which makes the diagram
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}_{0} \arrow[r, "F"] \arrow[d, swap, "K"] & \mathcal{B}_{0} \arrow[d, "G"] \\
\mathcal{D}_{0} \arrow[r, swap, "H"] \arrow[ru, dashed, "\exists L"] & \mathcal{C}_{0}
\end{tikzcd}
\end{equation*}
commute. Such a function exists because $K : \mathcal{A}_{0} \longrightarrow \mathcal{D}_{0}$ is injective and $G : \mathcal{B}_{0} \longrightarrow \mathcal{C}_{0}$ is surjective.
Given a pair of 0-cells $D$, $D'$ both in the image of $K$, say $D = KA$ and $D' = KA'$, we define $L_{D, D'} : \mathcal{D}(D, D') \longrightarrow \mathcal{B}(LD, LD')$ by taking a diagonal
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}(A, A') \arrow[r, "F_{A, A'}"] \arrow[d, swap, "K_{A, A'}"] & \mathcal{B}(LD, LD') \arrow[d, "G_{LD, LD'}"] \\
\mathcal{D}(D, D') \arrow[r, swap, "H_{D, D'}"] \arrow[ru, dashed, "\exists L_{D, D'}"] & \mathcal{C}(HD, HD')
\end{tikzcd}
\end{equation*}
which exists by the model structure on the category of groupoids. Given a pair of 0-cells $D, D'$ not both in the image of $K$, we define $L_{D, D'} : \mathcal{D}(D, D') \longrightarrow \mathcal{B}(LD, LD')$ by taking a diagonal
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
0 \arrow[r, "!"] \arrow[d, swap, "!"] & \mathcal{B}(LD, LD') \arrow[d, "G_{LD, LD'}"] \\
\mathcal{D}(D, D') \arrow[r, swap, "H_{D, D'}"] \arrow[ru, dashed, "\exists L_{D, D'}"] & \mathcal{C}(HD, HD')
\end{tikzcd}
\end{equation*}
again using the model structure on the category of groupoids.
To finish the construction of $(L, \lambda)$, we use the local full faithfulness of $G$ to define
\begin{equation*}
\lambda = G^{-1}( \eta \circ (\gamma L)^{-1} ).
\end{equation*}
The calculation
\begin{equation*}
(G, \gamma) \circ (L, \lambda) = (G \circ L, G \lambda \circ \gamma L) = (G \circ L, G G^{-1}( \eta \circ (\gamma L)^{-1} ) \circ \gamma L) = ( H, \eta )
\end{equation*}
demonstrates that the lower right triangle of (\ref{cotrflift}) commutes. To check that the upper left triangle commutes as well, we use the fact that the square (\ref{cotrflift}) commutes to compute
\begin{equation*}
G \phi = H \kappa \circ \eta K \circ (\gamma F)^{-1} = GL \kappa \circ G G^{-1} (\eta K \circ (\gamma F)^{-1}) = G( L \kappa \circ \lambda K),
\end{equation*}
giving the desired result
\begin{equation*}
(F, \phi) = (L \circ K, L \kappa \circ \lambda K) = (L, \lambda) \circ (K, \kappa),
\end{equation*}
by the local faithfulness of $G$.
Lastly, we show that $(L, \lambda)$ is a morphism by verifying that (\ref{coh4}) and (\ref{coh5}) commute for $\lambda$. Since $G$ is locally faithful, it suffices to check that these diagrams commute after $G$ is applied to them. But this follows directly from $\textbf{(1)} + \textbf{(3)} \Longrightarrow \textbf{(2)}$ of Lemma \ref{lem1}.
\end{proof}
\subsection{Factorization}
\begin{lem} \label{lem6}
Given a square of categories which commutes up to a natural isomorphism $\alpha : F H \Longrightarrow F K$
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "K"] \arrow[d, swap, "H"] & \mathcal{B} \arrow[d, "F"] \\
\mathcal{B} \arrow[r, swap, "F"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\alpha"] & \mathcal{C}
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd} [row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, bend right=40, swap, "H"{name=H}] \arrow[r, bend left=40, "K"{name=K}] & \mathcal{B} \arrow[r, "F"] & \mathcal{C}
\arrow[from=H, to=K, dashed, Rightarrow, shorten >=5pt, shorten <=5pt, "\exists ! \beta"]
\end{tikzcd}
\end{equation*}
in which $F$ is an equivalence of categories, there exists a unique natural isomorphism $\beta : H \Longrightarrow K$ such that $F \beta = \alpha$.
\end{lem}
\begin{proof}
By hypothesis, there exists a functor $G : \mathcal{C} \longrightarrow \mathcal{B}$ and a natural isomorphism $\eta : \mathrm{id} \Longrightarrow GF$. For every $A$ in $\mathcal{A}$, the square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
HA \arrow[r, "\beta_{A}"] \arrow[d, swap, "\eta_{HA}"] & KA \arrow[d, "\eta_{KA}"] \\
GFHA \arrow[r, swap, "GF \beta_{A}"] & GFKA
\end{tikzcd}
\end{equation*}
must commute by naturality of $\eta$. Since $F \beta_{A} = \alpha_{A}$ is required as well, this leaves the composite
\begin{equation*}
H \xRightarrow{\eta H} GFH \xRightarrow{G \alpha} GFK \xRightarrow{(\eta K)^{-1}} K
\end{equation*}
as the only possible candidate for $\beta$. We see that the square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
FHA \arrow[r, "\alpha_{A}"] \arrow[d, swap, "F \eta_{HA}"] & FKA \arrow[d, "F \eta_{KA}"] \\
FGFHA \arrow[r, swap, "FG \alpha_{A}"] & FGFKA
\end{tikzcd}
\end{equation*}
commutes by naturality of $\eta$, as $\alpha_{A} = FF^{-1} \alpha_{A}$. This shows that our definition of $\beta$ indeed meets the requirement $F \beta = \alpha$.
\end{proof}
\begin{lem}
Let $(F, \phi) : \mathcal{A} \longrightarrow \mathcal{C}$ be a morphism of bigroupoids. Then there exists a factorization
\begin{equation*}
\mathcal{A} \xrightarrow{(G, \gamma)} \mathcal{B} \xrightarrow{(H, \eta)} \mathcal{C}
\end{equation*}
of $F$, where $G$ is a cofibration and $H$ is a strict trivial fibration.
\end{lem}
\begin{proof}
We define the 0-cells of $\mathcal{B}$ as the disjoint union of those of $\mathcal{A}$ and $\mathcal{C}$, so $\mathcal{B}_{0} = \mathcal{A}_{0} + \mathcal{C}_{0}$. We let $G : \mathcal{A}_{0} \longrightarrow \mathcal{B}_{0}$ be the inclusion map and we take $H = [F, \mathrm{id}] : \mathcal{B}_{0} \longrightarrow \mathcal{C}_{0}$.
To define the groupoids $\mathcal{B}(B, B')$, we factorize each $F_{A, A'} : \mathcal{A}(A, A') \longrightarrow \mathcal{C}(FA, FA')$ as
\begin{equation*}
\mathcal{A}(A, A') \xrightarrow{G_{A,A'}} \mathcal{B}(A, A') \xrightarrow{H_{A, A'}} \mathcal{C}(FA, FA'),
\end{equation*}
where $G_{A,A'}$ is a cofibration and $H_{A,A'}$ is a trivial fibration, using the model structure on the category of groupoids. For pairs of 0-cells of $\mathcal{B}$ not of the form $(A, A')$, we take (disjoint copies of) the groupoids in $\mathcal{C}$ corresponding to their image under $H$:
\begin{equation*}
\mathcal{B}(A, B') = \mathcal{C}(FA, B'), \qquad \mathcal{B}(B, A') = \mathcal{C}(B, FA'), \qquad \mathcal{B}(B, B') = \mathcal{C}(B, B').
\end{equation*}
The functor $H_{B, B'} : \mathcal{B}(B, B') \longrightarrow \mathcal{C}(HB, HB')$ is simply the identity in these last three cases.
We will now provide the functor $\mathbf{C}_{B, B', B''} : \mathcal{B}(B',B'') \times \mathcal{B}(B,B') \longrightarrow \mathcal{B}(B,B'')$ for a given triple of 0-cells $B$, $B'$, $B''$. Since $H_{B, B''} : \mathcal{B}(B, B'') \longrightarrow \mathcal{C}(HB, HB'')$ is a trivial fibration, it has a section $S_{B, B''} : \mathcal{C}(HB, HB'') \longrightarrow \mathcal{B}(B, B'')$. We define $\mathbf{C}_{B, B', B''}$ as the composite
\begin{equation*}
\mathcal{B}(B', B'') \times \mathcal{B}(B, B') \xrightarrow{H \times H} \mathcal{C}(HB', HB'') \times \mathcal{C}(HB, HB') \overset{\mathbf{C}}{\longrightarrow} \mathcal{C}(HB, HB'') \overset{S}{\longrightarrow} \mathcal{B}(B, B'').
\end{equation*}
Note that this makes the square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{B}(B', B'') \times \mathcal{B}(B, B') \arrow[r, "\mathbf{C}"] \arrow[d, swap, "H \times H"] & \mathcal{B}(B, B'') \arrow[d, "H"] \\
\mathcal{C}(HB', HB'') \times \mathcal{C}(HB, HB') \arrow[r, swap, "\mathbf{C}"] & \mathcal{C}(HB, HB'')
\end{tikzcd}
\end{equation*}
commute, which allows us to define $\eta = \mathrm{id}$.
Next, we define $\mathbf{a} = S \mathbf{a} H$. Since $H S \mathbf{a} H = \mathbf{a} H$ and $\eta = \mathrm{id}$, the diagram (\ref{coh4}) commutes for $\eta$. We use a similar definition for $\mathbf{l}$, $\mathbf{r}$, $\mathbf{e}$ and $\mathbf{i}$, so by the same argument the diagrams (\ref{coh5}) commute as well, hence $(H, \eta)$ is a morphism.
To show that $\mathcal{B}$ is a bigroupoid, we verify that the diagrams (\ref{coh1}), (\ref{coh2}) and (\ref{coh3}) commute. Since $H$ is locally faithful, these diagrams commute if and only if they commute after $H$ is applied to them. But this follows directly from $\textbf{(1)} \Longrightarrow \textbf{(2)}$ of Lemma \ref{lem2}.
To define $\gamma$, consider the square
\begin{equation} \label{psidef}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}(A', A'') \times \mathcal{A}(A, A') \arrow[r, "G \circ \mathbf{C}"] \arrow[d, swap, "\mathbf{C} \circ (G \times G)"] & \mathcal{B}(GA, GA'') \arrow[d, "H"] \\
\mathcal{B}(GA, GA'') \arrow[r, swap, "H"] \arrow[ru, Rightarrow, shorten >=35pt, shorten <=35pt, "\phi \circ (\eta G)^{-1}"] & \mathcal{C}(FA, FA'')
\end{tikzcd}
\end{equation}
The calculation
\begin{equation*}
H \circ \mathbf{C} \circ (G \times G) \xRightarrow{(\eta G)^{-1}} \mathbf{C} \circ (H \times H) \circ (G \times G) = \mathbf{C} \circ (F \times F) \xRightarrow{\; \phi \;} F \circ \mathbf{C} = H \circ G \circ \mathbf{C}
\end{equation*}
shows that (\ref{psidef}) indeed commutes up to the natural isomorphism $\phi \circ (\eta G)^{-1}$. Since $H$ in (\ref{psidef}) is an equivalence of categories, Lemma \ref{lem6} provides us with a natural isomorphism
\begin{equation*}
\gamma (= \gamma_{A, A', A''}) : \mathbf{C} \circ ( G \times G ) \Longrightarrow G \circ \mathbf{C}
\end{equation*}
satisfying $H \gamma = \phi \circ (\eta G)^{-1}$. This means that we have indeed factored $(F, \phi)$ as $(H, \eta) \circ (G, \gamma)$.
To show that $(G, \gamma)$ is a morphism, we must verify that (\ref{coh4}) and (\ref{coh5}) commute for $\gamma$. Since $H$ is locally faithful, these diagrams commute if and only if they commute after $H$ is applied to them. But this follows directly from $\textbf{(1)} + \textbf{(3)} \Longrightarrow \textbf{(2)}$ of Lemma \ref{lem1}.
\end{proof}
\section{The trivial cofibration - fibration WFS}
The purpose of this section is to prove the following proposition.
\begin{prop} \label{prop}
The trivial cofibrations and fibrations form a weak factorization system.
\end{prop}
\subsection{Lifting property}
\begin{lem} \label{lem3}
Given a triangle of groupoids that commutes up to a natural isomorphism $\beta : H \Longrightarrow GF$
\begin{equation*}
\begin{tikzcd}[row sep=large, column sep=large]
& & \mathcal{B} \arrow[dd, "G"{name=G}] \\
& & \\
\mathcal{A} \arrow[rr, swap, "H"{name=H}] \arrow[rruu, bend right, "F"{name=F}] \arrow[rruu, dashed, bend left, "\exists F'"{name=F'}] & & \mathcal{C}
\arrow[Rightarrow, dashed, from = F', to=F, shorten >=5pt, shorten <=10pt, "\exists \alpha"]
\arrow[swap, Rightarrow, from=H, to=G, shorten >=15pt, shorten <=15pt, "\beta"]
\end{tikzcd}
\end{equation*}
and in which $G$ is a fibration, there exists a functor $F'$ making the triangle commute, along with a natural isomorphism $\alpha : F' \Longrightarrow F$ such that $G \alpha = \beta$.
\end{lem}
\begin{proof}
For every object $A$ of $\mathcal{A}$, there exists an object $B_{A}$ of $\mathcal{B}$ and an arrow $\alpha_{A} : B_{A} \longrightarrow FA$ such that $GB_{A} = HA$ and $G \alpha_{A} = \beta_{A}$, since $G$ is a fibration. Define $F' A = B_{A}$ and $F'(f : A \longrightarrow A' ) = \alpha_{A'}^{-1} \circ Ff \circ \alpha_{A}$.
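Functoriality of $F'$ follows from a direct computation: for composable 1-cells $f : A \longrightarrow A'$ and $g : A' \longrightarrow A''$,
\begin{equation*}
F'g \circ F'f = \alpha_{A''}^{-1} \circ Fg \circ \alpha_{A'} \circ \alpha_{A'}^{-1} \circ Ff \circ \alpha_{A} = \alpha_{A''}^{-1} \circ F(g \circ f) \circ \alpha_{A} = F'(g \circ f).
\end{equation*}
Moreover, $GF' = H$ holds on objects since $GB_{A} = HA$, and on arrows since $G \alpha_{A} = \beta_{A}$ gives $GF'f = \beta_{A'}^{-1} \circ GFf \circ \beta_{A} = Hf$, by naturality of $\beta$.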
\end{proof}
\begin{lem} \label{lem5}
Given a square of categories which commutes up to a natural isomorphism $\alpha : H G \Longrightarrow K G$
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "G"] \arrow[d, swap, "G"] & \mathcal{B} \arrow[d, "K"] \\
\mathcal{B} \arrow[r, swap, "H"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\alpha"] & \mathcal{C}
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd} [row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "G"] & \mathcal{B} \arrow[r, bend right=40, swap, "H"{name=H}] \arrow[r, bend left=40, "K"{name=K}] & \mathcal{C}
\arrow[from=H, to=K, dashed, Rightarrow, shorten >=5pt, shorten <=5pt, "\exists ! \beta"]
\end{tikzcd}
\end{equation*}
in which $G$ is an equivalence of categories, there exists a unique natural isomorphism $\beta : H \Longrightarrow K$ such that $\beta G = \alpha$.
\end{lem}
\begin{proof}
By hypothesis, there exists a functor $F : \mathcal{B} \longrightarrow \mathcal{A}$ and a natural isomorphism $\eta : \mathrm{id} \Longrightarrow GF$. For every $B$ in $\mathcal{B}$, the square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
H B \arrow[r, "\beta_{B}"] \arrow[d, swap, "H \eta_{B}"] & K B \arrow[d, "K \eta_{B}"] \\
HGFB \arrow[r, swap, "\beta_{GFB}"] & KGFB
\end{tikzcd}
\end{equation*}
must commute by naturality of $\beta$. Since $\beta_{GFB} = \alpha_{FB}$ is required as well, this leaves the composite
\begin{equation*}
H \xRightarrow{H \eta} H G F \xRightarrow{\alpha F} K G F \xRightarrow{(K \eta)^{-1}} K
\end{equation*}
as the only possible candidate for $\beta$. We see that the square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
H GA \arrow[r, "\alpha_{A}"] \arrow[d, swap, "H \eta_{GA}"] & K GA \arrow[d, "K \eta_{GA}"] \\
H GFGA \arrow[r, swap, "\alpha_{FGA}"] & K GFGA
\end{tikzcd}
\end{equation*}
commutes by naturality of $\alpha$, as $H \eta_{GA} = H G G^{-1} \eta_{GA}$ and $K \eta_{GA} = K G G^{-1} \eta_{GA}$. This shows that our definition of $\beta$ indeed meets the requirement $\beta G = \alpha$.
\end{proof}
\begin{lem} \label{muinvlem}
In any diagram of categories
\begin{equation*}
\begin{tikzcd} [row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, bend right=40, swap, "G"{name=G}] \arrow[r, bend left=40, "F"{name=F}] & \mathcal{B} \arrow[r, bend right=40, swap, "K"{name=K}] \arrow[r, bend left=40, "H"{name=H}] & \mathcal{C}
\arrow[from=H, to=K, shift left=2, Rightarrow, shorten >=5pt, shorten <=5pt, "\beta"]
\arrow[from=H, to=K, swap, shift right=2, Rightarrow, shorten >=5pt, shorten <=5pt, "\alpha"]
\arrow[from=F, to=G, swap, Rightarrow, shorten >=5pt, shorten <=5pt, "\mu"]
\end{tikzcd}
\end{equation*}
with natural transformations $\alpha, \beta : H \Longrightarrow K$ and a natural isomorphism $\mu : F \Longrightarrow G$, the equality $\alpha F = \beta F$ holds if and only if the equality $\alpha G = \beta G$ holds.
\end{lem}
\begin{proof}
This follows from the equations
\begin{equation*}
K \mu \circ \alpha F = \alpha G \circ H \mu \qquad \text{and} \qquad K \mu \circ \beta F = \beta G \circ H \mu
\end{equation*}
and the fact that $\mu$ is invertible.
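Spelled out, one direction is the short computation below (a routine manipulation of the two displayed equations); the converse is symmetric, solving for $\alpha F$ and $\beta F$ instead.

```latex
% Assuming \alpha F = \beta F, solve each naturality equation for the G-component:
\alpha G = K \mu \circ \alpha F \circ (H \mu)^{-1}
         = K \mu \circ \beta F \circ (H \mu)^{-1}
         = \beta G .
```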
\end{proof}
\begin{cor} \label{cor1}
Let $(F, \phi) : \mathcal{A} \longrightarrow \mathcal{B}$ be an incoherent morphism between (possibly incoherent) bigroupoids. Suppose furthermore that for every pair of 0-cells $A$, $A'$ of $\mathcal{A}$, two endofunctors $G_{A, A'}, H_{A, A'} : \mathcal{A}(A, A') \longrightarrow \mathcal{A}(A, A')$ are given, together with a natural isomorphism $\mu_{A, A'} : G_{A, A'} \Longrightarrow H_{A, A'}$. Then the diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\phi G$ if and only if they commute for $\phi H$.
\end{cor}
\begin{proof}
This is a direct application of Lemma \ref{muinvlem}.
\end{proof}
\begin{lem} \label{surlift}
Given a commutative square
\begin{equation} \label{lift3}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(F, \phi)}"] \arrow[d, swap, "{(K, \kappa)}"] & \mathcal{B} \arrow[d, "{(G, \gamma)}"] \\
\mathcal{D} \arrow[r, swap, "{(H, \eta)}"] \arrow[ru, dashed, "{\exists (L, \lambda)}"] & \mathcal{C}
\end{tikzcd}
\end{equation}
in which $K$ is a trivial cofibration which is surjective on 0-cells and $G$ is a fibration, there exists a diagonal filler $L$, as indicated in the diagram.
\end{lem}
\begin{proof}
Let $L : \mathcal{D}_{0} \longrightarrow \mathcal{B}_{0}$ be the unique function that makes the diagram
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}_{0} \arrow[r, "F"] \arrow[d, swap, "K"] & \mathcal{B}_{0} \arrow[d, "G"] \\
\mathcal{D}_{0} \arrow[r, swap, "H"] \arrow[ru, dashed, "\exists ! L"] & \mathcal{C}_{0}
\end{tikzcd}
\end{equation*}
commute. This function exists because $K : \mathcal{A}_{0} \longrightarrow \mathcal{D}_{0}$ is bijective; explicitly, $L = F \circ K^{-1}$ on 0-cells.
Given two 0-cells $D = KA$ and $D'= KA'$ in $\mathcal{D}$, we construct the functor
\begin{equation*}
L( = L_{D, D'}) : \mathcal{D}(D, D') \longrightarrow \mathcal{B}(LD, LD')
\end{equation*}
by taking a diagonal
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}(A, A') \arrow[r, "F"] \arrow[d, swap, "K"] & \mathcal{B}(LD, LD') \arrow[d, "G"] \\
\mathcal{D}(D, D') \arrow[r, swap, "H"] \arrow[ru, dashed, "\exists L"] & \mathcal{C}(HD, HD')
\end{tikzcd}
\end{equation*}
which exists by the model structure on the category of groupoids.
To define $\lambda$, consider the square
\begin{equation} \label{lamdef}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}(A', A'') \times \mathcal{A}(A, A') \arrow[r, "K \times K"] \arrow[d, swap, "K \times K"] & \mathcal{D}(D', D'') \times \mathcal{D}(D, D') \arrow[d, "L \circ \mathbf{C}"] \\
\mathcal{D}(D', D'') \times \mathcal{D}(D, D') \arrow[r, swap, "\mathbf{C} \circ (L \times L)"] \arrow[ru, Rightarrow, shorten >=40pt, shorten <=40pt, "(L \kappa)^{-1} \circ \phi"] & \mathcal{B}(LD, LD'')
\end{tikzcd}
\end{equation}
The calculation
\begin{equation*}
\mathbf{C} \circ (L \times L) \circ (K \times K) = \mathbf{C} \circ (F \times F) \xRightarrow{\; \phi \;} F \circ \mathbf{C} = L \circ K \circ \mathbf{C} \xRightarrow{(L \kappa)^{-1}} L \circ \mathbf{C} \circ (K \times K)
\end{equation*}
shows that (\ref{lamdef}) indeed commutes up to the natural isomorphism $(L \kappa)^{-1} \circ \phi$. Since $K \times K$ in (\ref{lamdef}) is an equivalence of categories, Lemma \ref{lem5} provides us with a natural isomorphism
\begin{equation*}
\lambda (= \lambda_{D, D', D''}) : \mathbf{C} \circ (L \times L) \Longrightarrow L \circ \mathbf{C}
\end{equation*}
satisfying $\lambda K = (L \kappa)^{-1} \circ \phi$.
We make the necessary verifications. The left upper triangle of (\ref{lift3}) commutes, since
\begin{equation*}
(L, \lambda) \circ (K, \kappa) = (L \circ K, L \kappa \circ \lambda K) = (F, \phi),
\end{equation*}
as $\lambda K = (L \kappa)^{-1} \circ \phi$. We can also compute
\begin{equation*}
(G \lambda \circ \gamma L)K = G \lambda K \circ \gamma LK = G ( (L \kappa)^{-1} \circ \phi) \circ \gamma F = (H \kappa)^{-1} \circ G \phi \circ \gamma F = \eta K,
\end{equation*}
using $\lambda K = (L \kappa)^{-1} \circ \phi$ as well as the commutativity of the square (\ref{lift3}). Hence
\begin{equation*}
(G, \gamma) \circ (L, \lambda) = (G \circ L, G \lambda \circ \gamma L) = (H, \eta)
\end{equation*}
by the uniqueness requirement of Lemma \ref{lem5}, so the lower right triangle of (\ref{lift3}) commutes as well.
Lastly, we check that the coherence diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\lambda$. Note that for each pair of 0-cells $D$, $D'$ of $\mathcal{D}$, there exists a functor
\begin{equation*}
T_{D, D'} : \mathcal{D}(D, D') \longrightarrow \mathcal{A}(A, A')
\end{equation*}
and a natural isomorphism
\begin{equation*}
\alpha_{D, D'} : \mathrm{id} \Longrightarrow K_{A, A'} \circ T_{D, D'},
\end{equation*}
as each $K_{A, A'}$ is an equivalence of categories. Since $(L, \lambda) \circ (K, \kappa) = (F, \phi)$, it follows that the diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\lambda K$, by $\textbf{(2)} + \textbf{(3)} \Longrightarrow \textbf{(1)}$ of Lemma \ref{lem1}. In particular, they commute for $\lambda K T$. But then they commute for $\lambda$ by Corollary \ref{cor1}.
\end{proof}
\begin{lem} \label{isolift}
Given a commutative square
\begin{equation}\label{eq3}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(F, \phi)}"] \arrow[d, swap, "{(K, \mathrm{id})}"] & \mathcal{B} \arrow[d, "{(G, \mathrm{id})}"] \\
\mathcal{C} \arrow[r, swap, "\mathrm{id}"] \arrow[ru, dashed, "{\exists (L, \lambda)}"] & \mathcal{C}
\end{tikzcd}
\end{equation}
in which $K$ is a strict trivial cofibration which is also a local isomorphism, and $G$ is a strict fibration, there exists a diagonal filler $L$, as indicated in the diagram.
\end{lem}
\begin{proof}
We build $(L, \lambda)$ in three stages, each time `correcting' the previous stage. The morphism $(L^{(1)}, \lambda^{(1)})$ will make the upper-left triangle commute. In addition to this, $(L^{(2)}, \lambda^{(2)})$ will make the diagram commute on the level of 0-cells. And finally $(L^{(3)}, \lambda^{(3)}) = (L, \lambda)$ will make the entire diagram commute.
\textbf{Stage 1.} We construct a left inverse $(T, \tau) : \mathcal{C} \longrightarrow \mathcal{A}$ of $K$. Since $K$ is a trivial cofibration, there exists a function $T : \mathcal{C}_{0} \longrightarrow \mathcal{A}_{0}$ such that $TK = \mathrm{id}$ and for every 0-cell $C$ of $\mathcal{C}$, there exists a 1-cell $p_{C} : C \longrightarrow KTC$. Whenever $KTC = C$, we choose $p_{C} = 1_{C}$. We define members $P_{C, C'}$ of a $\mathcal{C}_{0} \times \mathcal{C}_{0}$-indexed family of functors by:
\begin{itemize}
\item{
$\begin{tikzcd}[column sep=huge]
\mathcal{C}(C, C') \arrow[r, "p_{C'} * ( - * p_{C}^{*})"] & \mathcal{C}(KTC, KTC')
\end{tikzcd}$, if at least one of $C$, $C'$ does not lie in the image of $K$;}
\item{$\begin{tikzcd}[column sep=huge]
\mathcal{C}(C, C') \arrow[r, "\mathrm{id}"] & \mathcal{C}(KTC, KTC')
\end{tikzcd}$, if both $C$ and $C'$ lie in the image of $K$.}
\end{itemize}
We take $T_{C, C'} = K^{-1}_{TC, TC'} \circ P_{C, C'}$.
The natural isomorphism
\begin{equation*}
\tau (= \tau_{C, C', C''}) : \mathbf{C} \circ (T \times T) \Longrightarrow T \circ \mathbf{C}
\end{equation*}
is given by the diagram
\begin{equation} \label{tau}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{C}(C', C'') \times \mathcal{C}(C, C') \arrow[r, "\mathbf{C}"] \arrow[d, swap, "P \times P"] & \mathcal{C}(C, C'') \arrow[d, "P"] \\
\mathcal{C}(KTC', KTC'') \times \mathcal{C}(KTC, KTC') \arrow[r, "\mathbf{C}"] \arrow[d, swap, "K^{-1} \times K^{-1}"] \arrow[ru, Rightarrow, shorten >=50pt, shorten <=50pt, "\mathbf{x}"] & \mathcal{C}(KTC, KTC'') \arrow[d, "K^{-1}"] \\
\mathcal{A}(TC', TC'') \times \mathcal{A}(TC, TC') \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=47pt, shorten <=47pt, "\mathrm{id}"] & \mathcal{A}(TC, TC'')
\end{tikzcd}
\end{equation}
In (\ref{tau}), $\mathbf{x}( = \mathbf{x}_{C, C', C''})$ is the canonical isomorphism (see Definition \ref{fordiagdef}). The diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\tau$ by Theorem \ref{fordiagthm} since $\mathbf{x}$ is canonical and $K$ is a strict local isomorphism. Define $(L^{(1)}, \lambda^{(1)}) = (F, \phi) \circ (T, \tau)$ and note that $(L^{(1)}, \lambda^{(1)}) \circ (K, \mathrm{id}) = (F, \phi)$, as $(T, \tau) \circ (K, \mathrm{id}) = \mathrm{id}$ by construction.
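To see why $(T, \tau) \circ (K, \mathrm{id}) = \mathrm{id}$ holds, it may help to record what the definitions reduce to on the image of $K$ (a sketch, using only the choices made above):

```latex
% On 0-cells: TK = \mathrm{id} by the choice of T, and p_{KA} = 1_{KA} since KTKA = KA.
% On hom-groupoids: both KA and KA' lie in the image of K, so P_{KA, KA'} = \mathrm{id},
% hence
T_{KA, KA'} \circ K_{A, A'} = K^{-1}_{A, A'} \circ \mathrm{id} \circ K_{A, A'} = \mathrm{id}.
% Restricted to such 0-cells, the canonical isomorphism \mathbf{x} in (\ref{tau})
% is the identity, so \tau K = \mathrm{id} as well.
```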
\textbf{Stage 2.} Since $G$ is a fibration, there exists a function $L^{(2)} : \mathcal{C}_{0} \longrightarrow \mathcal{B}_{0}$ such that $L^{(2)}K = L^{(1)}K$, $G L^{(2)} = \mathrm{id}$ and for every 0-cell $C$ of $\mathcal{C}$, there exists a 1-cell $q_{C} : L^{(2)} C \longrightarrow L^{(1)}C$ satisfying $G q_{C} = p_{C}$. Whenever $KTC = C$, we choose $q_{C} = 1_{L^{(2)}C}$. We define members $Q_{C, C'}$ of a $\mathcal{C}_{0} \times \mathcal{C}_{0}$-indexed family of functors by:
\begin{itemize}
\item{
$\begin{tikzcd}[column sep=huge]
\mathcal{B}(L^{(1)}C, L^{(1)}C') \arrow[r, "q_{C'}^{*} * ( - * q_{C})"] & \mathcal{B}(L^{(2)}C, L^{(2)}C')
\end{tikzcd}$, if at least one of $C$, $C'$ does not lie in the image of $K$;}
\item{$\begin{tikzcd}[column sep=huge]
\mathcal{B}(L^{(1)}C, L^{(1)}C') \arrow[r, "\mathrm{id}"] & \mathcal{B}(L^{(2)}C, L^{(2)}C')
\end{tikzcd}$, if both $C$ and $C'$ lie in the image of $K$.}
\end{itemize}
We take $L_{C, C'}^{(2)} = Q_{C, C'} \circ L_{C, C'}^{(1)}$.
The natural isomorphism
\begin{equation*}
\lambda^{(2)} (= \lambda^{(2)}_{C, C', C''}) : \mathbf{C} \circ (L^{(2)} \times L^{(2)}) \Longrightarrow L^{(2)} \circ \mathbf{C}
\end{equation*}
is given by the diagram
\begin{equation} \label{lam2}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{C}(C', C'') \times \mathcal{C}(C, C') \arrow[r, "\mathbf{C}"] \arrow[d, swap, "L^{(1)} \times L^{(1)}"] & \mathcal{C}(C, C'') \arrow[d, "L^{(1)}"] \\
\mathcal{B}(L^{(1)}C', L^{(1)}C'') \times \mathcal{B}(L^{(1)}C, L^{(1)}C') \arrow[r, "\mathbf{C}"] \arrow[d, swap, "Q \times Q"] \arrow[ru, Rightarrow, shorten >=45pt, shorten <=45pt, "\lambda^{(1)}"] & \mathcal{B}(L^{(1)}C, L^{(1)}C'') \arrow[d, "Q"] \\
\mathcal{B}(L^{(2)}C', L^{(2)}C'') \times \mathcal{B}(L^{(2)}C, L^{(2)}C') \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=44pt, shorten <=44pt, "\mathbf{y}"] & \mathcal{B}(L^{(2)}C, L^{(2)}C'')
\end{tikzcd}
\end{equation}
In (\ref{lam2}), $\mathbf{y}( = \mathbf{y}_{C, C', C''})$ is the canonical isomorphism. By Theorem \ref{phifordiagthm} applied to $(L^{(1)}, \lambda^{(1)})$, the diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\lambda^{(2)}$. Note that $(L^{(2)}, \lambda^{(2)}) \circ (K, \mathrm{id}) = (F, \phi)$, as $(L^{(2)}, \lambda^{(2)}) \circ (K, \mathrm{id}) = (L^{(1)}, \lambda^{(1)}) \circ (K, \mathrm{id})$ by construction.
\textbf{Stage 3.} We now modify $(L^{(2)}, \lambda^{(2)})$ to get the desired morphism $(L, \lambda)$. On the level of 0-cells, we make no changes, meaning that $L = L^{(2)} : \mathcal{C}_{0} \longrightarrow \mathcal{B}_{0}$. The need to modify $(L^{(2)}, \lambda^{(2)})$ arises because the triangle
\begin{equation} \label{eq4}
\begin{tikzcd}
& & \mathcal{B}(LC, LC') \arrow[dd, "G"] \\
& & {}\\
\mathcal{C}(C, C') \arrow[rr, swap, "\mathrm{id}"] \arrow[rruu, "L^{(2)}"] & {} \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{z}"] & \mathcal{C}(C, C')
\end{tikzcd}
\end{equation}
will in general only commute up to a canonical isomorphism $\mathbf{z} (= \mathbf{z}_{C, C'})$. Indeed, let us define members $R_{C, C'}$ of a $\mathcal{C}_{0} \times \mathcal{C}_{0}$-indexed family of functors by:
\begin{itemize}
\item{
$\begin{tikzcd}[column sep=huge]
\mathcal{C}(KTC, KTC') \arrow[r, "p_{C'}^{*} * ( - * p_{C})"] & \mathcal{C}(C, C')
\end{tikzcd}$, if at least one of $C$, $C'$ does not lie in the image of $K$;}
\item{$\begin{tikzcd}[column sep=huge]
\mathcal{C}(KTC, KTC') \arrow[r, "\mathrm{id}"] & \mathcal{C}(C, C')
\end{tikzcd}$, if both $C$ and $C'$ lie in the image of $K$.}
\end{itemize}
Using the relations $G q_{C} = p_{C}$, $G q_{C'} = p_{C'}$ and the strictness of $G$, one easily verifies
\begin{equation} \label{GQ=RG}
G_{L^{(2)} C, L^{(2)} C'} \circ Q_{C, C'} = R_{C, C'} \circ G_{L^{(1)} C, L^{(1)} C'}.
\end{equation}
Then, with $G$ and $L^{(2)}$ as in (\ref{eq4}),
\begin{equation} \label{GL}
G \circ L^{(2)} = G \circ Q \circ L^{(1)} = G \circ Q \circ F \circ T = G \circ Q \circ F \circ K^{-1} \circ P,
\end{equation}
all by definition. Now using $G \circ Q = R \circ G$ (by (\ref{GQ=RG})) and $G \circ F = K$ (by (\ref{eq3})), we find that (\ref{GL}) is equal to
\begin{equation*}
R \circ G \circ F \circ K^{-1} \circ P = R \circ K \circ K^{-1} \circ P = R \circ P
\end{equation*}
and clearly there exists a canonical isomorphism $\mathbf{z} : \mathrm{id} \Longrightarrow R \circ P$.
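For concreteness, when at least one of $C$, $C'$ lies outside the image of $K$, the component of $\mathbf{z}$ at a 1-cell $f : C \longrightarrow C'$ mediates between $f$ and the composite obtained by unwinding the definitions of $P$ and $R$:

```latex
% Unwinding P_{C, C'} followed by R_{C, C'}:
(R \circ P)(f)
  \;=\; p_{C'}^{*} * \bigl( \bigl( p_{C'} * (f * p_{C}^{*}) \bigr) * p_{C} \bigr),
% and \mathbf{z}_{f} : f \Longrightarrow (R \circ P)(f) is assembled canonically
% from associators, unitors and the invertibility 2-cells of p_{C} and p_{C'}.
```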
If both $C$ and $C'$ lie in the image of $K$, then $\mathbf{z}$ is the identity and we define $L_{C, C'} = L^{(2)}_{C, C'}$ and $\alpha_{C, C'} = \mathrm{id} : L_{C, C'} \Longrightarrow L^{(2)}_{C, C'}$. In all other cases we apply Lemma \ref{lem3} to obtain a functor $L_{C, C'} : \mathcal{C}(C, C') \longrightarrow \mathcal{B}(LC, LC')$ which does make the triangle (\ref{eq4}) commute, together with a natural isomorphism $\alpha_{C, C'} : L_{C, C'} \Longrightarrow L^{(2)}_{C, C'}$ satisfying $G \alpha = \mathbf{z}$. We define $\lambda$ as the natural isomorphism
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{C}(C', C'') \times \mathcal{C}(C, C') \arrow[r, "\mathbf{C}"] \arrow[d, bend right=55, swap, "L \times L"{name=LL}] \arrow[d, bend left=55, "L^{(2)} \times L^{(2)}"{name=LLB}] & \mathcal{C}(C, C'') \arrow[d, swap, bend right=55, "L^{(2)}"{name=LB}] \arrow[d, bend left=55, "L"{name=L}] \\
\mathcal{B}(LC', LC'') \times \mathcal{B}(LC, LC') \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=40pt, shorten <=40pt, "\lambda^{(2)}" pos=0.6] & \mathcal{B}(LC, LC'')
\arrow[from=LL, to=LLB, Rightarrow, shorten >=10pt, shorten <=10pt, "\alpha \times \alpha"]
\arrow[from=LB, to=L, Rightarrow, shorten >=10pt, shorten <=10pt, "\alpha^{-1}"]
\end{tikzcd}
\end{equation*}
Note that this choice of $(L, \lambda)$ gives $(L, \lambda) \circ (K, \mathrm{id}) = (L^{(2)}, \lambda^{(2)}) \circ (K, \mathrm{id}) = (F, \phi)$ and also ensures that the lower right triangle of (\ref{eq3}) commutes on the level of 0-, 1- and 2-cells.
To verify that the coherence diagram (\ref{coh4}) commutes for $\lambda$, consider the following diagram, whose perimeter is exactly (\ref{coh4}):
\begin{equation*}
\begin{tikzcd}[row sep=large, column sep=huge]
\cdot \arrow[rr, "\lambda * \mathrm{id}"] \arrow[ddd, swap, "\mathbf{a}"] \arrow[dr, "(\alpha * \alpha) * \alpha" description] & & \cdot \arrow[rr, "\lambda"] \arrow[d, swap, "\alpha * \alpha" description] & & \cdot \arrow[ddd, "L\mathbf{a}"] \arrow[dl, swap, "\alpha" description] \\
& \cdot \arrow[r, "\lambda^{(2)} * \mathrm{id}"] \arrow[d, swap, "\mathbf{a}"] & \cdot \arrow[r, "\lambda^{(2)}"] & \cdot \arrow[d, "L^{(2)} \mathbf{a}"] & \\
& \cdot \arrow[r, swap, "\mathrm{id} * \lambda^{(2)}"] & \cdot \arrow[r, swap, "\lambda^{(2)}"] & \cdot & \\
\cdot \arrow[rr, swap, "\mathrm{id} * \lambda"] \arrow[ur, swap, "\alpha * (\alpha * \alpha)" description] & & \cdot \arrow[rr, swap, "\lambda"] \arrow[u, "\alpha * \alpha" description] & & \cdot \arrow[ul, "\alpha" description]
\end{tikzcd}
\end{equation*}
The innermost rectangle is simply diagram (\ref{coh4}) for $\lambda^{(2)}$, which commutes because $(L^{(2)}, \lambda^{(2)})$ is a morphism; the leftmost square commutes by naturality of $\mathbf{a}$; the rightmost square commutes by naturality of $\alpha$ and all other `squares' in the diagram commute by definition of $\lambda$.
All that remains to show is that $G \lambda = \mathrm{id}$. Expand the definition of $\lambda$ to get
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "L \times L"] & \cdot \arrow[d, "L"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "G \times G"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\lambda"] & \cdot \arrow[d, "G"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, bend right=55, swap, "L \times L"{name=LL}] \arrow[d, bend left=55, "{}"{name=LLB}] & \cdot \arrow[d, swap, bend right=55, "{}"{name=LB}] \arrow[d, bend left=55, "L"{name=L}] \\
\cdot \arrow[d, swap, "G \times G"] \arrow[r, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\lambda^{(2)}" pos=0.6] & \cdot \arrow[d, "G"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot
\arrow[from=LL, to=LLB, Rightarrow, shorten >=10pt, shorten <=10pt, "\alpha \times \alpha"]
\arrow[from=LB, to=L, Rightarrow, shorten >=10pt, shorten <=10pt, "\alpha^{-1}"]
\end{tikzcd}
\end{equation*}
Since $G \alpha = \mathbf{z}$, this is the same as
\begin{equation}\label{eq1}
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d] \arrow[dd, bend right=100, swap, "\mathrm{id}"{name=A}] & \cdot \arrow[d] \arrow[dd, bend left=100, "\mathrm{id}"{name=B}] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\lambda^{(2)}"] & \cdot \arrow[d] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot
\arrow[from=A, to=2-1, Rightarrow, shorten >=8pt, shorten <=8pt, "\mathbf{z} \times \mathbf{z}"] \arrow[from=2-2, to=B, Rightarrow, shorten >=8pt, shorten <=8pt, "\mathbf{z}^{-1}"]
\end{tikzcd}
\end{equation}
Now consider the two central squares of (\ref{eq1}):
\begin{equation} \label{botsq}
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "L^{(2)} \times L^{(2)}"] & \cdot \arrow[d, "L^{(2)}"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "G \times G"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\lambda^{(2)}"] & \cdot \arrow[d, "G"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "L^{(1)} \times L^{(1)}"] & \cdot \arrow[d, "L^{(1)}"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "Q \times Q"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\lambda^{(1)}"] & \cdot \arrow[d, "Q"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "G \times G"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{y}"] & \cdot \arrow[d, "G"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "L^{(1)} \times L^{(1)}"] & \cdot \arrow[d, "L^{(1)}"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "G \times G"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\lambda^{(1)}"] & \cdot \arrow[d, "G"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "R \times R"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot \arrow[d, "R"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{w}"] & \cdot
\end{tikzcd}
\end{equation}
The first and second diagrams of (\ref{botsq}) are equal by definition of $(L^{(2)}, \lambda^{(2)})$. In the third diagram, $\mathbf{w}$ is the canonical isomorphism. The bottom two squares in the second diagram of (\ref{botsq}) and the bottom two squares in the third diagram of (\ref{botsq}) both represent a canonical isomorphism, so they must be equal. Using the definition of $(L^{(1)}, \lambda^{(1)})$ and applying $(G, \mathrm{id}) \circ (F, \phi) = (K, \mathrm{id})$, we find that (\ref{botsq}) is equal to
\begin{equation}\label{eq2}
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "P \times P"] & \cdot \arrow[d, "P"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "K^{-1} \times K^{-1}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{x}"] & \cdot \arrow[d, "K^{-1}"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "F \times F"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot \arrow[d, "F"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "G \times G"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\phi"] & \cdot \arrow[d, "G"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "R \times R"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot \arrow[d, "R"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{w}"] & \cdot
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "P \times P"] & \cdot \arrow[d, "P"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "K^{-1} \times K^{-1}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{x}"] & \cdot \arrow[d, "K^{-1}"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "K \times K"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot \arrow[d, "K"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "R \times R"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot \arrow[d, "R"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{w}"] & \cdot
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "P \times P"] & \cdot \arrow[d, "P"] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "R \times R"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{x}"] & \cdot \arrow[d, "R"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{w}"] & \cdot
\end{tikzcd}
\end{equation}
We substitute (\ref{eq2}) back into (\ref{eq1}) to get
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "P \times P"] \arrow[dd, bend right=100, swap, "\mathrm{id}"{name=A}] & \cdot \arrow[d, "P"] \arrow[dd, bend left=100, "\mathrm{id}"{name=B}] \\
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "R \times R"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{x}"] & \cdot \arrow[d, "R"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathbf{w}"] & \cdot
\arrow[from=A, to=2-1, Rightarrow, shorten >=8pt, shorten <=8pt, "\mathbf{z} \times \mathbf{z}"] \arrow[from=2-2, to=B, Rightarrow, shorten >=8pt, shorten <=8pt, "\mathbf{z}^{-1}"]
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[row sep=huge, column sep=huge]
\cdot \arrow[r, "\mathbf{C}"] \arrow[d, swap, "\mathrm{id}"] & \cdot \arrow[d, "\mathrm{id}"] \\
\cdot \arrow[r, swap, "\mathbf{C}"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\mathrm{id}"] & \cdot
\end{tikzcd}
\end{equation*}
by Theorem \ref{fordiagthm}.
\end{proof}
\begin{lem} \label{lem4}
Pullbacks of fibrations along arbitrary morphisms exist. Furthermore, the resulting morphism can be taken to be strict.
\end{lem}
\begin{proof}
Given two morphisms $(F, \phi): \mathcal{B} \longrightarrow \mathcal{C}$ and $(G, \gamma): \mathcal{D} \longrightarrow \mathcal{C}$, with $F$ a fibration, we construct a square
\begin{equation} \label{pullb}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(R, \rho)}"] \arrow[d, swap, "{(P, \pi)}"] & \mathcal{B} \arrow[d, "{(F, \phi)}"] \\
\mathcal{D} \arrow[r, swap, "{(G, \gamma)}"] & \mathcal{C}
\end{tikzcd}
\end{equation}
and demonstrate its universal property. The set of 0-cells $\mathcal{A}_0$, equipped with functions $R : \mathcal{A}_0 \longrightarrow \mathcal{B}_0$ and $P : \mathcal{A}_0 \longrightarrow \mathcal{D}_0$, is given by the pullback square (of sets!)
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}_0 \arrow[r, "R"] \arrow[d, swap, "P"] & \mathcal{B}_0 \arrow[d, "F"] \\
\mathcal{D}_0 \arrow[r, swap, "G"] & \mathcal{C}_0
\end{tikzcd}
\end{equation*}
To reduce clutter, we write $PA = D$, $RA = B$ and $FB = GD = C$ for $A$ in $\mathcal{A}_{0}$. Given a pair of 0-cells $A$, $A'$ of $\mathcal{A}$, the groupoid $\mathcal{A}(A, A')$, equipped with functors $P_{A, A'} : \mathcal{A}(A, A') \longrightarrow \mathcal{D}(D, D')$ and $R_{A, A'} : \mathcal{A}(A, A') \longrightarrow \mathcal{B}(B, B')$, is given by the pullback square (of groupoids!)
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}(A, A') \arrow[r, "R_{A,A'}"] \arrow[d, swap, "P_{A, A'}"] & \mathcal{B}(B, B') \arrow[d, "F_{B,B'}"] \\
\mathcal{D}(D, D') \arrow[r, swap, "G_{D, D'}"] & \mathcal{C}(C, C')
\end{tikzcd}
\end{equation*}
We will now provide the functor $\mathbf{C}_{A, A', A''} : \mathcal{A}(A',A'') \times \mathcal{A}(A,A') \longrightarrow \mathcal{A}(A,A'')$ for a given triple of 0-cells $A$, $A'$, $A''$. Consider the following square:
\begin{equation} \label{cdef}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A}(A',A'') \times \mathcal{A}(A,A') \arrow[r, dashed, bend left=50, "\exists H"{name=H}] \arrow[r, "\mathbf{C} \circ (R \times R)"{name=D}] \arrow[d, swap, "\mathbf{C} \circ (P \times P)"] & \mathcal{B}(B',B'') \times \mathcal{B}(B,B') \arrow[d, "F"] \\
\mathcal{D}(D,D'') \arrow[r, swap, "G"] \arrow[ru, Rightarrow, shorten >=40pt, shorten <=40pt, "\phi R \circ (\gamma P)^{-1}"] & \mathcal{C}(C,C'')
\arrow[from=H, to =D, swap, Rightarrow, dashed, shorten >=5pt, shorten <=5pt, "\exists \alpha"]
\end{tikzcd}
\end{equation}
The calculation
\begin{equation*}
G \circ \mathbf{C} \circ (P \times P) \xRightarrow{(\gamma P)^{-1}} \mathbf{C} \circ (G \times G) \circ (P \times P) = \mathbf{C} \circ (F \times F) \circ (R \times R) \xRightarrow{\; \phi R \;} F \circ \mathbf{C} \circ (R \times R)
\end{equation*}
shows that (\ref{cdef}) indeed commutes up to the natural isomorphism $\phi R \circ (\gamma P)^{-1}$. By Lemma \ref{lem3} there exists a functor $H (= H_{A, A', A''})$ which makes the square commute, along with a natural isomorphism
\begin{equation*}
\alpha (= \alpha_{A, A', A''}) : H \Longrightarrow \mathbf{C} \circ (R \times R)
\end{equation*}
(both indicated by dashed arrows), such that $F \alpha = \phi R \circ (\gamma P)^{-1}$. By the universal property of $\mathcal{A}(A,A'')$, this commuting square (\ref{cdef}) gives rise to the functor we are looking for:
\begin{equation*}
\mathbf{C}_{A, A', A''} = \langle \mathbf{C}_{D, D', D''} \circ (P_{A', A''} \times P_{A, A'}) , H_{A, A', A''} \rangle .
\end{equation*}
We finish the definition of $(P, \pi)$ and $(R, \rho)$ by setting
\begin{equation*}
\pi_{A, A', A''} = \mathrm{id} : \mathbf{C}_{D, D', D''} \circ (P_{A', A''} \times P_{A, A'}) \Longrightarrow P_{A, A''} \circ \mathbf{C}_{A, A', A''}
\end{equation*}
and
\begin{equation*}
\rho_{A, A', A''} = \alpha_{A, A', A''}^{-1} : \mathbf{C}_{B, B', B''} \circ (R_{A', A''} \times R_{A, A'}) \Longrightarrow R_{A, A''} \circ \mathbf{C}_{A, A', A''}.
\end{equation*}
The calculations
\begin{equation*}
(F, \phi) \circ (R, \rho) = (F \circ R, F \rho \circ \phi R) = (F \circ R, F \alpha^{-1} \circ \phi R) = (F \circ R, (\phi R \circ ( \gamma P)^{-1})^{-1} \circ \phi R) = (F \circ R, \gamma P)
\end{equation*}
and
\begin{equation*}
(G, \gamma) \circ (P, \pi) = (G \circ P, G \pi \circ \gamma P) = (G \circ P, \gamma P)
\end{equation*}
show that (\ref{pullb}) commutes.
The definition of $\mathcal{A}$ is finished by letting
\begin{equation*}
\mathbf{a}_{A, A', A'', A'''} : \mathbf{C}_{A, A', A'''} \circ ( \mathbf{C}_{A', A'', A'''} \times \mathrm{id} ) \Longrightarrow \mathbf{C}_{A, A'', A'''} \circ ( \mathrm{id} \times \mathbf{C}_{A, A', A''} )
\end{equation*}
be the unique natural isomorphism such that for any combination
\begin{equation*}
A \overset{a}{\longrightarrow} A' \overset{a'}{\longrightarrow} A'' \overset{a''}{\longrightarrow} A'''
\end{equation*}
of composable 1-cells the diagrams
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
(Pa'' * Pa') * Pa \arrow[r, "\pi * \mathrm{id}"] \arrow[d, swap, "\mathbf{a}"] & P(a'' * a') * Pa \arrow[r, "\pi"] & P((a'' * a') * a) \arrow[d, dashed, "P \mathbf{a}"] \\
Pa'' * (Pa' * Pa) \arrow[r, swap, "\mathrm{id} * \pi"] & Pa'' * P(a' * a) \arrow[r, swap, "\pi"] & P(a'' * (a' * a))
\end{tikzcd}
\end{equation*}
and
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
(Ra'' * Ra') * Ra \arrow[r, "\rho * \mathrm{id}"] \arrow[d, swap, "\mathbf{a}"] & R(a'' * a') * Ra \arrow[r, "\rho"] & R((a'' * a') * a) \arrow[d, dashed, "R \mathbf{a}"] \\
Ra'' * (Ra' * Ra) \arrow[r, swap, "\mathrm{id} * \rho"] & Ra'' * R(a' * a) \arrow[r, swap, "\rho"] & R(a'' * (a' * a))
\end{tikzcd}
\end{equation*}
commute. (The dashed arrows mark the two projections of $\mathbf{a}_{A, A', A'', A'''}$.) In other words, we force the diagram (\ref{coh4}) to commute.
To show that $\mathcal{A}$ is a bigroupoid, we must verify that the diagrams (\ref{coh1}), (\ref{coh2}) and (\ref{coh3}) commute in $\mathcal{A}$. Since a diagram in $\mathcal{A}$ commutes if and only if the projections of this diagram under $P$ and $R$ commute in $\mathcal{D}$ and $\mathcal{B}$ respectively, this follows from $\textbf{(1)} \Longrightarrow \textbf{(2)}$ of Lemma \ref{lem2}.
Lastly, we demonstrate that our square has the desired universal property:
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{E} \arrow[dr, dashed, "{\exists ! (L, \lambda)}"] \arrow[ddr, bend right, swap, "{(S, \sigma)}"] \arrow[drr, bend left, "{(T, \tau)}"]& & \\
& \mathcal{A} \arrow[r, "{(R, \rho)}"] \arrow[d, swap, "{(P, \pi)}"] & \mathcal{B} \arrow[d, "{(F, \phi)}"] \\
& \mathcal{D} \arrow[r, swap, "{(G, \gamma)}"] & \mathcal{C}
\end{tikzcd}
\end{equation*}
It is not difficult to check that there exists a unique incoherent morphism $(L, \lambda): \mathcal{E} \longrightarrow \mathcal{A}$ satisfying
\begin{equation*}
(S, \sigma) = (P, \pi) \circ (L, \lambda) = (P \circ L, P \lambda \circ \pi L) \qquad \text{and} \qquad (T, \tau) = (R, \rho) \circ (L, \lambda) = (R \circ L, R \lambda \circ \rho L),
\end{equation*}
namely
\begin{align*}
L & = \langle S, T \rangle : \mathcal{E}_{0} \longrightarrow \mathcal{A}_{0} \\
L_{E, E'} & = \langle S_{E, E'}, T_{E, E'} \rangle : \mathcal{E}(E, E') \longrightarrow \mathcal{A}(LE, LE') \\
\lambda & = \langle \sigma \circ (\pi L)^{-1}, \tau \circ (\rho L)^{-1} \rangle.
\end{align*}
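Uniqueness forces these formulas; for example, the 2-cell component is recovered directly (a sketch of the computation):

```latex
% From (S, \sigma) = ((P, \pi) \circ (L, \lambda)) = (P \circ L, P \lambda \circ \pi L):
P \lambda \circ \pi L = \sigma
  \;\Longrightarrow\; P \lambda = \sigma \circ (\pi L)^{-1},
% and symmetrically R \lambda = \tau \circ (\rho L)^{-1}; these two projections
% determine \lambda uniquely, since each hom-groupoid of \mathcal{A} is a pullback.
```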
To show that $(L, \lambda)$ is a morphism, we must verify that the diagrams (\ref{coh4}) and (\ref{coh5}) commute for $\lambda$. Again, it suffices that the projections of these diagrams under $P$ and $R$ commute in $\mathcal{D}$ and $\mathcal{B}$. But this follows directly from $\textbf{(1)} + \textbf{(3)} \Longrightarrow \textbf{(2)}$ of Lemma \ref{lem1}.
\end{proof}
\begin{lem} \label{lem11}
\hfill
\begin{description}
\item[(1)] Fibrations are closed under composition.
\item[(2)] Every isomorphism is a fibration.
\item[(3)] Fibrations are closed under pullback.
\end{description}
\end{lem}
\begin{proof}
Straightforward. By \textbf{(1)} and \textbf{(2)}, it suffices to check \textbf{(3)} for the explicit construction made in Lemma \ref{lem4}.
\end{proof}
\begin{lem} \label{lem9}
Let $(F, \phi) : \mathcal{A} \longrightarrow \mathcal{C}$ be a trivial cofibration. Then there exists a factorization
\begin{equation*}
\mathcal{A} \xrightarrow{(G, \gamma)} \mathcal{B} \xrightarrow{(H, \mathrm{id})} \mathcal{C}
\end{equation*}
of $F$, where $G$ is a trivial cofibration which is surjective on 0-cells and $H$ is a strict trivial cofibration which is also a local isomorphism.
\end{lem}
\begin{proof}
Let $\mathcal{B}$ be the sub-bigroupoid of $\mathcal{C}$ consisting of the 0-cells in the image of $F$ with all 1- and 2-cells of $\mathcal{C}$ between them. One easily verifies that the evident morphisms $(G, \gamma) : \mathcal{A} \longrightarrow \mathcal{B}$ and $(H, \mathrm{id}) : \mathcal{B} \longrightarrow \mathcal{C}$ have the desired properties.
\end{proof}
\begin{lem}
The trivial cofibrations have the left lifting property with respect to the fibrations.
\end{lem}
\begin{proof}
Let the lifting problem
\begin{equation} \label{lift1}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(F, \phi)}"] \arrow[d, swap, "{(K, \kappa)}"] & \mathcal{B} \arrow[d, "{(G, \gamma)}"] \\
\mathcal{D} \arrow[r, swap, "{(H, \eta)}"] \arrow[ru, dashed, "?"] & \mathcal{C}
\end{tikzcd}
\end{equation}
be given, in which $K$ is a trivial cofibration and $G$ is a fibration.
Consider the pullback $\mathcal{E}$ of $G$ along $H$, and apply its universal property to obtain
\begin{equation} \label{liftpull}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[rr, bend left, "{(F, \phi)}"] \arrow[r, dashed, swap, "\exists !"] \arrow[d, swap, "{(K, \kappa)}"] & \mathcal{E} \arrow[r] \arrow[d, swap, "{(G', \mathrm{id})}"] & \mathcal{B} \arrow[d, "{(G, \gamma)}"] \\
\mathcal{D} \arrow[r, swap, "\mathrm{id}"] & \mathcal{D} \arrow[r, swap, "{(H, \eta)}"] & \mathcal{C}
\end{tikzcd}
\end{equation}
Note that this pullback exists and yields a strict fibration $G'$ due to Lemma \ref{lem4} and Lemma \ref{lem11}. The observation that a diagonal filler for the left square in (\ref{liftpull}) results in a filler for the original square (\ref{lift1}) establishes that we may assume that (\ref{lift1}) is of the form
\begin{equation} \label{lift2}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(F, \phi)}"] \arrow[d, swap, "{(K, \kappa)}"] & \mathcal{B} \arrow[d, "{(G, \mathrm{id})}"] \\
\mathcal{C} \arrow[r, swap, "\mathrm{id}"] & \mathcal{C}
\end{tikzcd}
\end{equation}
Factorize $(K, \kappa)$ into $(T, \mathrm{id}) \circ (S, \sigma)$, using Lemma \ref{lem9}. Substituting this into (\ref{lift2}) yields the square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[r, "{(F, \phi)}"] \arrow[d, swap, "{(S, \sigma)}"] & \mathcal{B} \arrow[d, "{(G, \mathrm{id})}"] \\
\mathcal{D} \arrow[r, swap, "{(T, \mathrm{id})}"] \arrow[ru, dashed, "{\exists (L, \lambda)}"] & \mathcal{C}
\end{tikzcd}
\end{equation*}
for which the indicated lift $L$ exists by virtue of Lemma \ref{surlift}. Lemma \ref{isolift}, in turn, provides a lift $M$ for the square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{D} \arrow[r, "{(L, \lambda)}"] \arrow[d, swap, "{(T, \mathrm{id})}"] & \mathcal{B} \arrow[d, "{(G, \mathrm{id})}"] \\
\mathcal{C} \arrow[r, swap, "\mathrm{id}"] \arrow[ru, dashed, "{\exists (M, \mu)}"] & \mathcal{C}
\end{tikzcd}
\end{equation*}
as shown. But then $M$ is a diagonal filler for (\ref{lift2}).
\end{proof}
\subsection{Factorization}
\begin{dfn}
A \emph{path object} on a bigroupoid $\mathcal{B}$ is a factorization of the diagonal $\Delta : \mathcal{B} \longrightarrow \mathcal{B} \times \mathcal{B}$ as a weak equivalence $R : \mathcal{B} \longrightarrow \mathcal{PB}$ followed by a fibration $\langle S, T \rangle : \mathcal{PB} \longrightarrow \mathcal{B} \times \mathcal{B}$.
\end{dfn}
The construction for path objects that we give below is basically the same as the one given in \cite{MR2138540} for bicategories.
\begin{lem} \label{lem7}
Every bigroupoid has a path object.
\end{lem}
\begin{proof}
Let $\mathcal{B}$ be a bigroupoid. We construct a path object $\mathcal{PB}$ for $\mathcal{B}$. By virtue of Theorem \ref{fordiagthm}, we allow ourselves to write as if $\mathcal{B}$ were a strict bigroupoid. The set of 0-cells of $\mathcal{PB}$ is the set of all 1-cells of $\mathcal{B}$. Given a pair of 0-cells $a : A \longrightarrow A'$, $b : B \longrightarrow B'$ in $\mathcal{PB}$, a 1-cell $a \longrightarrow b$ is a triple $(f, \phi, f')$, with $f : A \longrightarrow B$, $f' : A' \longrightarrow B'$ and $\phi : f' * a \longrightarrow b * f$. We can visualize such a 1-cell of $\mathcal{PB}$ as a square of 1-cells in $\mathcal{B}$, which commutes up to a 2-cell:
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
A \arrow[r, "f"] \arrow[d, swap, "a"] & B \arrow[d, "b"] \\
A' \arrow[r, swap, "f'"] \arrow[ru, Rightarrow, shorten >=20pt, shorten <=20pt, "\phi"] & B'
\end{tikzcd}
\end{equation*}
A 2-cell from $(f, \phi, f')$ to $(g, \psi, g')$ is a pair $(\alpha, \alpha')$ of 2-cells $\alpha : f \longrightarrow g$, $\alpha' : f' \longrightarrow g'$ in $\mathcal{B}$, such that the diagram
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
f' a \arrow[r, "\phi"] \arrow[d, swap, "\alpha' * \mathrm{id}"] & b f \arrow[d, "\mathrm{id} * \alpha"] \\
g' a \arrow[r, swap, "\psi"] & b g
\end{tikzcd}
\end{equation*}
commutes. One easily checks that $\mathcal{PB}(a, b)$, defined in this way, forms a groupoid.
Next, we define the functor $\mathbf{C}_{a, b, c} : \mathcal{PB}(b, c) \times \mathcal{PB}(a, b) \longrightarrow \mathcal{PB}(a, c)$. Given two 1-cells $(f, \phi, f') : a \longrightarrow b$ and $(g, \psi, g') : b \longrightarrow c$, we define
\begin{equation*}
(g, \psi, g') * (f, \phi, f') = (g * f, \psi * \phi, g'* f').
\end{equation*}
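Concretely, $\psi * \phi$ is the pasting composite (compare the rows of the naturality square below):
\begin{equation*}
g' * f' * a \xrightarrow{\mathrm{id} * \phi} g' * b * f \xrightarrow{\psi * \mathrm{id}} c * g * f.
\end{equation*}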
The composition $\psi * \phi$ makes sense, because we are willfully ignorant about associativity issues. Given four 1-cells
\begin{equation*}
(f_{1}, \phi_{1}, f_{1}'), (f_{2}, \phi_{2}, f_{2}') : a \longrightarrow b \qquad \text{and} \qquad (g_{1}, \psi_{1}, g_{1}'), (g_{2}, \psi_{2}, g_{2}') : b \longrightarrow c
\end{equation*}
and 2-cells
\begin{equation*}
(\alpha, \alpha') : (f_{1}, \phi_{1}, f_{1}') \longrightarrow (f_{2}, \phi_{2}, f_{2}') \qquad \text{and} \qquad (\beta, \beta') : (g_{1}, \psi_{1}, g_{1}') \longrightarrow (g_{2}, \psi_{2}, g_{2}')
\end{equation*}
between them, we define
\begin{equation*}
(\beta, \beta') * (\alpha, \alpha') = (\beta * \alpha, \beta' * \alpha').
\end{equation*}
The commutative diagram
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
g_{1}' f_{1}' a \arrow[r, "\mathrm{id} * \phi_{1}"] \arrow[d, swap, "\beta' * \alpha' * \mathrm{id}"] & g_{1}' b f_{1} \arrow[r, "\psi_{1} * \mathrm{id}"] \arrow[d, swap, "\beta' * \mathrm{id} * \alpha"] & c g_{1} f_{1} \arrow[d, "\mathrm{id} * \beta * \alpha"] \\
g_{2}' f_{2}' a \arrow[r, swap, "\mathrm{id} * \phi_{2}"] & g_{2}' b f_{2} \arrow[r, swap, "\psi_{2} * \mathrm{id}"] & c g_{2} f_{2}
\end{tikzcd}
\end{equation*}
confirms that $(\beta * \alpha, \beta' * \alpha')$ is in fact a 2-cell.
Next, for any four 0-cells $a: A \longrightarrow A'$, $b: B \longrightarrow B'$, $c: C \longrightarrow C'$, $d: D \longrightarrow D'$ in $\mathcal{PB}$, we define the natural isomorphism $\mathbf{a}_{a, b, c, d}$. Given 1-cells $(f, \phi, f') : a \longrightarrow b$, $(g, \psi, g') : b \longrightarrow c$ and $(h, \theta, h') : c \longrightarrow d$, we take
\begin{equation*}
(\mathbf{a}_{a, b, c, d})_{((h, \theta, h'), (g, \psi, g'), (f, \phi, f'))} = ((\mathbf{a}_{A, B, C, D})_{(h, g, f)}, (\mathbf{a}_{A', B', C', D'})_{(h', g', f')}).
\end{equation*}
In order for this to be a genuine 2-cell, the diagram
\begin{equation} \label{pathsq}
\begin{tikzcd}[row sep=huge, column sep=huge]
((h' g') f') a \arrow[r, "(\theta * \psi) * \phi"] \arrow[d, swap, "\mathbf{a} * \mathrm{id}"] & d ((h g) f) \arrow[d, "\mathrm{id} * \mathbf{a}"] \\
(h' (g' f')) a \arrow[r, swap, "\theta * (\psi * \phi)"] & d (h (g f))
\end{tikzcd}
\end{equation}
must commute. Since we may calculate as if $\mathcal{B}$ were strict, we can remove all brackets appearing in (\ref{pathsq}) and set $\mathbf{a} = \mathrm{id}$, resulting in a square that trivially commutes. The diagrams (\ref{coh1}), (\ref{coh2}) and (\ref{coh3}) commute simply because they commute componentwise, hence $\mathcal{PB}$ is a bigroupoid.
The diagonal $\Delta : \mathcal{B} \longrightarrow \mathcal{B} \times \mathcal{B}$ now factors through $\mathcal{PB}$ as the strict morphism $R : \mathcal{B} \longrightarrow \mathcal{PB}$, which
\begin{itemize}
\item{sends a 0-cell $A$ to $1_{A} : A \longrightarrow A$,}
\item{sends a 1-cell $f : A \longrightarrow B$ to $(f, \phi, f)$, with $\phi : f * 1_{A} \longrightarrow 1_{B} * f$ canonical,}
\item{and sends a 2-cell $\alpha : f \longrightarrow g$ to $(\alpha, \alpha)$,}
\end{itemize}
followed by the strict morphism $\langle S, T \rangle : \mathcal{PB} \longrightarrow \mathcal{B} \times \mathcal{B}$, which
\begin{itemize}
\item{sends a 0-cell $a : A \longrightarrow A'$ to $(A, A')$,}
\item{sends a 1-cell $(f, \phi, f')$ to $(f, f')$}
\item{and sends a 2-cell $(\alpha, \alpha')$ to $(\alpha, \alpha')$.}
\end{itemize}
We leave it to the reader to verify that $R$ and $\langle S, T \rangle$ satisfy the necessary conditions.
\end{proof}
The following lemma collects some miscellaneous results to be used in Lemma \ref{lem10}.
\begin{lem}
\hfill
\begin{description}
\item[(1)] Trivial fibrations are closed under pullback.
\item[(2)] For every bigroupoid $\mathcal{B}$, the unique morphism $\mathcal{B} \longrightarrow 1$ is a fibration.
\item[(3)] Every split monomorphism is a cofibration.
\end{description}
\end{lem}
\begin{proof}
Straightforward. For \textbf{(1)}, note that the trivial fibrations form the right class of a weak factorization system.
\end{proof}
The following argument is originally due to Brown \cite{MR0341469}.
\begin{lem} \label{lem10}
Let $(F, \phi) : \mathcal{A} \longrightarrow \mathcal{C}$ be a morphism of bigroupoids. Then there exists a factorization
\begin{equation*}
\mathcal{A} \xrightarrow{(G, \psi)} \mathcal{B} \xrightarrow{(H, \eta)} \mathcal{C}
\end{equation*}
of $F$, where $G$ is a trivial cofibration and $H$ is a fibration.
\end{lem}
\begin{proof}
Since the unique morphism $\mathcal{C} \longrightarrow 1$ is a fibration and fibrations are closed under pullback, the two projections $\mathcal{C} \times \mathcal{C} \longrightarrow \mathcal{C}$ are fibrations as well. Since fibrations are closed under composition, it follows that $S : \mathcal{PC} \longrightarrow \mathcal{C}$ (with $\begin{tikzcd}[column sep=large]
\mathcal{C} \arrow[r, "R" description] &[-15pt] \mathcal{PC} \arrow[r, "{\langle S, T \rangle}" description] & \mathcal{C} \times \mathcal{C}
\end{tikzcd}$ as in Lemma \ref{lem7}) is a fibration. We can therefore take the pullback of $S$ along $F$ and apply its universal property, as depicted below
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{A} \arrow[dr, dashed, "\exists ! G"] \arrow[ddr, bend right, swap, "\mathrm{id}"] \arrow[drr, bend left, "R \circ F"] & & \\
& \mathcal{B} \arrow[r, "Q"] \arrow[d, swap, "P"] & \mathcal{PC} \arrow[d, "S"] \\
& \mathcal{A} \arrow[r, swap, "F"] & \mathcal{C}
\end{tikzcd}
\end{equation*}
Since $S \circ R = \mathrm{id}$ and $R$ is a weak equivalence, 2-out-of-3 implies that $S$ is a weak equivalence and hence a trivial fibration. These are stable under pullback, so $P$ is a trivial fibration as well. The equality $P \circ G = \mathrm{id}$ then shows that $G$ is a weak equivalence, by 2-out-of-3. It also shows that $G$ is a split monomorphism and therefore a (trivial) cofibration. Defining $H = T \circ Q$ yields a factorization $F = H \circ G$. The square
\begin{equation*}
\begin{tikzcd}[row sep=huge, column sep=huge]
\mathcal{B} \arrow[r, "Q"] \arrow[d, swap, "{\langle P, H \rangle}"] & \mathcal{PC} \arrow[d, "{\langle S, T \rangle}"] \\
\mathcal{A} \times \mathcal{C} \arrow[r, swap, "F \times \mathrm{id}"] & \mathcal{C} \times \mathcal{C}
\end{tikzcd}
\end{equation*}
exhibits $\langle P, H \rangle$ as a pullback (by the pullback Lemma) of the fibration $\langle S, T \rangle$, which implies that $H$ is a fibration as well.
\end{proof}
With this, Proposition \ref{prop} is proven, which also finishes the proof of Theorem \ref{mainthm}.
\begin{rmk}
Note that the only place where we seem to make essential use of the fact that we are working with \textit{bigroupoids} and not \textit{bicategories} is Lemma \ref{isolift}. It is quite possible that this may be adapted somehow, resulting in a model structure on the category of (small) bicategories and pseudofunctors.
\end{rmk}
# How does cosmic background radiation affect earth?

It is now roughly equivalent to the heat radiated from an object cooled to 3 kelvin (3 K above absolute zero).
Q: Why does my React website take 43 seconds to load? I'm not sure why my React website is taking so long to load. It takes 43 seconds.
All I have is in index.jsx
import ReactDOM from "react-dom";
import React from "react";
import { HashRouter, Route } from "react-router-dom";
import Home from "./components/Home";
ReactDOM.render(
<HashRouter>
<div>
<Route exact path="/" component={Home} />
</div>
</HashRouter>,
document.getElementById("main")
);
Home.jsx: imports React and renders "hi"
webpack.config.js : https://pastebin.com/raw/zdUws0R8
package.json : https://pastebin.com/raw/VR6pSP44
index.html : https://pastebin.com/raw/9AVNBpTN
A: I checked your website and it seems to be working fine for me now.
For more details, I have added a screenshot of the website request.
You might want to have a look at your SSL certificate, though.
All the best!
A: I think you need to recreate your project via:
npx create-react-app YourProject
and use BrowserRouter instead of HashRouter from 'react-router-dom'. Then start the development server after creating or editing the components via
npm start
Ixtapan del Oro is a municipality in the State of Mexico. Ixtapan del Oro is the municipal seat and main population center of the municipality. The municipality lies in the northwestern part of the State of Mexico. It borders the municipalities of Villa Victoria and Chapa de Mota to the north, Santo Tomás to the south, Michoacán to the west, and Valle de Bravo to the east.
See also
Municipalities of the State of Mexico
References
External links
Portal of the State of Mexico
Municipalities of the State of Mexico
Oaktown is a town in the American state of Indiana, administratively part of Knox County.
Demographics
At the 2000 census, the population was recorded as 633.
In 2006, the United States Census Bureau estimated the population at 607, a decrease of 26 (-4.1%).
Geography
According to the United States Census Bureau, the town covers an area of
0.7 km², all of it land. Oaktown lies at approximately 141 m above sea level.
Nearby places
The figure below shows nearby places within a radius of 20 km around Oaktown.
External link
Place in Indiana
People throughout the ages have used psychoactive drugs for a variety of reasons. They have been, and are, used for recreation, pain relief, psychological escape, or religious expression, to name a few. Only a small percentage of those who have tried drugs go on to develop what are commonly known as substance use disorders, the most common of which is addiction. Drug addiction has been described as "a behavioral pattern of compulsive drug use, characterized by overwhelming involvement with the use of a drug, securing its supply, and a high tendency for relapse after its withdrawal [abstinence]" (Jaffe, 1975).
Jaffe, J.H., (1975). Drug addiction and drug abuse. In: Goodman, L.S., Gilman, A. (eds) The pharmacological basis of therapeutics (pp. 284-324), MacMillan, New York.
Since the mid-1970s there have been a variety of scientifically based treatment approaches used to address addiction. These include counseling, cognitive behavioral therapy, psychotherapy, medications, or a combination of these methods.
"redpajama_set_name": "RedPajamaC4"
} | 9,554 |
# 92 Probability

### Learning Objectives

In this section, you will:

- Construct probability models.
- Compute probabilities of equally likely outcomes.
- Compute probabilities of the union of two events.
- Use the complement rule to find probabilities.
- Compute probability using counting theory.

Residents of the Southeastern United States are all too familiar with charts, known as spaghetti models, such as the one in (Figure). They combine a collection of weather data to predict the most likely path of a hurricane. Each colored line represents one possible path. The group of squiggly lines can begin to resemble strands of spaghetti, hence the name. In this section, we will investigate methods for making these types of predictions.

### Constructing Probability Models

Suppose we roll a six-sided number cube. Rolling a number cube is an example of an experiment, or an activity with an observable result. The numbers on the cube are possible results, or outcomes, of this experiment. The set of all possible outcomes of an experiment is called the sample space of the experiment. The sample space for this experiment is $\{1,2,3,4,5,6\}$. An event is any subset of a sample space.

The likelihood of an event is known as probability. The probability of an event $p$ is a number that always satisfies $0\le p\le 1$, where 0 indicates an impossible event and 1 indicates a certain event. A probability model is a mathematical description of an experiment listing all possible outcomes and their associated probabilities. For instance, if there is a 1% chance of winning a raffle and a 99% chance of losing the raffle, a probability model would look much like (Figure).

| Outcome | Probability |
|---|---|
| Winning the raffle | 1% |
| Losing the raffle | 99% |

The sum of the probabilities listed in a probability model must equal 1, or 100%.

### How To

Given a probability event where each event is equally likely, construct a probability model.

1. Identify every outcome.
2. Determine the total number of possible outcomes.
3. Compare each outcome to the total number of possible outcomes.

### Constructing a Probability Model

Construct a probability model for rolling a single, fair die, with the event being the number shown on the die.

Begin by making a list of all possible outcomes for the experiment. The possible outcomes are the numbers that can be rolled: 1, 2, 3, 4, 5, and 6. There are six possible outcomes that make up the sample space.

Assign probabilities to each outcome in the sample space by determining a ratio of the outcome to the number of possible outcomes. There is one of each of the six numbers on the cube, and there is no reason to think that any particular face is more likely to show up than any other one, so the probability of rolling any number is $\frac{1}{6}$.

| Outcome | Roll of 1 | Roll of 2 | Roll of 3 | Roll of 4 | Roll of 5 | Roll of 6 |
|---|---|---|---|---|---|---|
| Probability | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{6}$ |

Do probabilities always have to be expressed as fractions?

No. Probabilities can be expressed as fractions, decimals, or percents. Probability must always be a number between 0 and 1, inclusive of 0 and 1.

### Try It

Construct a probability model for tossing a fair coin.

| Outcome | Probability |
|---|---|
| Heads | $\frac{1}{2}$ |
| Tails | $\frac{1}{2}$ |

### Computing Probabilities of Equally Likely Outcomes

Let $S$ be a sample space for an experiment. When investigating probability, an event is any subset of $S$. When the outcomes of an experiment are all equally likely, we can find the probability of an event by dividing the number of outcomes in the event by the total number of outcomes in $S$. Suppose a number cube is rolled, and we are interested in finding the probability of the event "rolling a number less than or equal to 4." There are 4 possible outcomes in the event and 6 possible outcomes in $S$, so the probability of the event is $\frac{4}{6}=\frac{2}{3}$.

### Computing the Probability of an Event with Equally Likely Outcomes

The probability of an event $E$ in an experiment with sample space $S$ with equally likely outcomes is given by

$$P(E)=\frac{\text{number of elements in }E}{\text{number of elements in }S}=\frac{n(E)}{n(S)}$$

$E$ is a subset of $S$, so it is always true that $0\le P(E)\le 1$.

### Computing the Probability of an Event with Equally Likely Outcomes

A six-sided number cube is rolled. Find the probability of rolling an odd number.

The event "rolling an odd number" contains three outcomes. There are 6 equally likely outcomes in the sample space. Divide to find the probability of the event.

$$P(E)=\frac{3}{6}=\frac{1}{2}$$

### Try It

A number cube is rolled. Find the probability of rolling a number greater than 2.

$\frac{2}{3}$

### Computing the Probability of the Union of Two Events

We are often interested in finding the probability that one of multiple events occurs. Suppose we are playing a card game, and we will win if the next card drawn is either a heart or a king. We would be interested in finding the probability of the next card being a heart or a king.
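The probability-model and equally-likely-outcome computations above lend themselves to a direct check in code. The helpers below (`probability_model`, `event_probability`) are illustrative names of our own, not part of the text:

```python
from fractions import Fraction

def probability_model(outcomes):
    """Assign equal probability to each outcome of a fair experiment."""
    return {outcome: Fraction(1, len(outcomes)) for outcome in outcomes}

def event_probability(event, sample_space):
    """P(E) = n(E) / n(S) when all outcomes are equally likely."""
    return Fraction(len(set(event) & set(sample_space)), len(sample_space))

die = [1, 2, 3, 4, 5, 6]
model = probability_model(die)
print(sum(model.values()))                   # 1 -- probabilities in a model sum to 1
print(event_probability({1, 3, 5}, die))     # 1/2 -- rolling an odd number
print(event_probability({3, 4, 5, 6}, die))  # 2/3 -- rolling a number greater than 2
```

Using exact `Fraction` arithmetic keeps results in the same form as the text, rather than as floating-point approximations.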
The union of two events $E$ and $F$, written $E\cup F$, is the event that occurs if either or both events occur.

$$P(E\cup F)=P(E)+P(F)-P(E\cap F)$$

Suppose the spinner in (Figure) is spun. We want to find the probability of spinning orange or spinning a $b$.

There are a total of 6 sections, and 3 of them are orange. So the probability of spinning orange is $\frac{3}{6}=\frac{1}{2}$. There are a total of 6 sections, and 2 of them have a $b$. So the probability of spinning a $b$ is $\frac{2}{6}=\frac{1}{3}$. If we added these two probabilities, we would be counting the sector that is both orange and a $b$ twice. To find the probability of spinning an orange or a $b$, we need to subtract the probability that the sector is both orange and has a $b$.

$$\frac{1}{2}+\frac{1}{3}-\frac{1}{6}=\frac{2}{3}$$

The probability of spinning orange or a $b$ is $\frac{2}{3}$.

### Probability of the Union of Two Events

The probability of the union of two events $E$ and $F$ (written $E\cup F$) equals the sum of the probability of $E$ and the probability of $F$ minus the probability of $E$ and $F$ occurring together (which is called the intersection of $E$ and $F$ and is written as $E\cap F$).

$$P(E\cup F)=P(E)+P(F)-P(E\cap F)$$

### Computing the Probability of the Union of Two Events

A card is drawn from a standard deck. Find the probability of drawing a heart or a 7.

A standard deck contains an equal number of hearts, diamonds, clubs, and spades. So the probability of drawing a heart is $\frac{1}{4}$. There are four 7s in a standard deck, and there are a total of 52 cards. So the probability of drawing a 7 is $\frac{1}{13}$.

The only card in the deck that is both a heart and a 7 is the 7 of hearts, so the probability of drawing both a heart and a 7 is $\frac{1}{52}$. Substitute $P(H)=\frac{1}{4}$, $P(7)=\frac{1}{13}$, and $P(H\cap 7)=\frac{1}{52}$ into the formula.

$$P(E\cup F)=P(E)+P(F)-P(E\cap F)=\frac{1}{4}+\frac{1}{13}-\frac{1}{52}=\frac{4}{13}$$

The probability of drawing a heart or a 7 is $\frac{4}{13}$.

### Try It

A card is drawn from a standard deck. Find the probability of drawing a red card or an ace.

$\frac{7}{13}$

### Computing the Probability of Mutually Exclusive Events

Suppose the spinner in (Figure) is spun again, but this time we are interested in the probability of spinning an orange or a $d$. There are no sectors that are both orange and contain a $d$, so these two events have no outcomes in common. Events are said to be mutually exclusive events when they have no outcomes in common. Because there is no overlap, there is nothing to subtract, so the general formula is

$$P(E\cup F)=P(E)+P(F)$$

Notice that with mutually exclusive events, the intersection of $E$ and $F$ is the empty set.
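The heart-or-7 computation above can be checked by direct enumeration of a 52-card deck. This is an illustrative sketch, not part of the text:

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))  # 52 equally likely outcomes

hearts = {card for card in deck if card[1] == "hearts"}
sevens = {card for card in deck if card[0] == "7"}

# Direct count of the union versus the union formula
p_union = Fraction(len(hearts | sevens), len(deck))
p_formula = (Fraction(len(hearts), len(deck))
             + Fraction(len(sevens), len(deck))
             - Fraction(len(hearts & sevens), len(deck)))

print(p_union)               # 4/13
print(p_union == p_formula)  # True
```

The set operations `|` (union) and `&` (intersection) mirror $E\cup F$ and $E\cap F$ exactly, which is why counting the union directly agrees with the formula.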
The probability of spinning an orange is $\frac{3}{6}=\frac{1}{2}$ and the probability of spinning a $d$ is $\frac{1}{6}$. We can find the probability of spinning an orange or a $d$ simply by adding the two probabilities.

$$P(E\cup F)=P(E)+P(F)=\frac{1}{2}+\frac{1}{6}=\frac{2}{3}$$

The probability of spinning an orange or a $d$ is $\frac{2}{3}$.

### Probability of the Union of Mutually Exclusive Events

The probability of the union of two mutually exclusive events $E$ and $F$ is given by

$$P(E\cup F)=P(E)+P(F)$$

### How To

Given a set of events, compute the probability of the union of mutually exclusive events.

1. Determine the total number of outcomes for the first event.
2. Find the probability of the first event.
3. Determine the total number of outcomes for the second event.
4. Find the probability of the second event.

### Computing the Probability of the Union of Mutually Exclusive Events

A card is drawn from a standard deck. Find the probability of drawing a heart or a spade.

The events "drawing a heart" and "drawing a spade" are mutually exclusive because they cannot occur at the same time. The probability of drawing a heart is $\frac{1}{4}$, and the probability of drawing a spade is also $\frac{1}{4}$, so the probability of drawing a heart or a spade is

$$\frac{1}{4}+\frac{1}{4}=\frac{1}{2}$$

### Try It

A card is drawn from a standard deck. Find the probability of drawing an ace or a king.

$\frac{2}{13}$

### Using the Complement Rule to Compute Probabilities

We have discussed how to calculate the probability that an event will happen. Sometimes, we are interested in finding the probability that an event will not happen. The complement of an event $E$, denoted $E'$, is the set of outcomes in the sample space that are not in $E$. For example, suppose we are interested in the probability that a horse will lose a race. If event $W$ is the horse winning the race, then the complement of event $W$ is the horse losing the race.

To find the probability that the horse loses the race, we need to use the fact that the sum of all probabilities in a probability model must be 1.

$$P(E')=1-P(E)$$

The probability of the horse winning added to the probability of the horse losing must be equal to 1. Therefore, if the probability of the horse winning the race is $\frac{1}{9}$, the probability of the horse losing the race is simply

$$1-\frac{1}{9}=\frac{8}{9}$$

### The Complement Rule

The probability that the complement of an event will occur is given by

$$P(E')=1-P(E)$$

### Using the Complement Rule to Calculate Probabilities

Two six-sided number cubes are rolled.

1. Find the probability that the sum of the numbers rolled is less than or equal to 3.
2. Find the probability that the sum of the numbers rolled is greater than 3.

The first step is to identify the sample space, which consists of all the possible outcomes. There are two number cubes, and each number cube has six possible outcomes. Using the Multiplication Principle, we find that there are $6\times 6$, or 36 total possible outcomes. So, for example, 1-1 represents a 1 rolled on each number cube.

| | | | | | |
|---|---|---|---|---|---|
| 1-1 | 1-2 | 1-3 | 1-4 | 1-5 | 1-6 |
| 2-1 | 2-2 | 2-3 | 2-4 | 2-5 | 2-6 |
| 3-1 | 3-2 | 3-3 | 3-4 | 3-5 | 3-6 |
| 4-1 | 4-2 | 4-3 | 4-4 | 4-5 | 4-6 |
| 5-1 | 5-2 | 5-3 | 5-4 | 5-5 | 5-6 |
| 6-1 | 6-2 | 6-3 | 6-4 | 6-5 | 6-6 |

1. We need to count the number of ways to roll a sum of 3 or less. These would include the following outcomes: 1-1, 1-2, and 2-1. So there are only three ways to roll a sum of 3 or less. The probability is

$$\frac{3}{36}=\frac{1}{12}$$

2. Rather than listing all the possibilities, we can use the Complement Rule. Because we have already found the probability of the complement of this event, we can simply subtract that probability from 1 to find the probability that the sum of the numbers rolled is greater than 3.

$$P(E')=1-P(E)=1-\frac{1}{12}=\frac{11}{12}$$

### Try It

Two number cubes are rolled. Use the Complement Rule to find the probability that the sum is less than 10.

$\frac{5}{6}$

### Computing Probability Using Counting Theory

Many interesting probability problems involve counting principles, permutations, and combinations. In these problems, we will use permutations and combinations to find the number of elements in events and sample spaces. These problems can be complicated, but they can be made easier by breaking them down into smaller counting problems.

Assume, for example, that a store has 8 cellular phones and that 3 of those are defective.
We might want to find the probability that a couple purchasing 2 phones receives 2 phones that are not defective. To solve this problem, we need to calculate all of the ways to select 2 phones that are not defective as well as all of the ways to select 2 phones. There are 5 phones that are not defective, so there are $C(5,2)$ ways to select 2 phones that are not defective. There are 8 phones, so there are $C(8,2)$ ways to select 2 phones. The probability of selecting 2 phones that are not defective is:

$\frac{\text{ways to select 2 phones that are not defective}}{\text{ways to select 2 phones}} = \frac{C(5,2)}{C(8,2)} = \frac{10}{28} = \frac{5}{14}$

### Computing Probability Using Counting Theory

A child randomly selects 5 toys from a bin containing 3 bunnies, 5 dogs, and 6 bears.

1. Find the probability that only bears are chosen.
2. Find the probability that 2 bears and 3 dogs are chosen.
3. Find the probability that at least 2 dogs are chosen.

1. We need to count the number of ways to choose only bears and the total number of possible ways to select 5 toys. There are 6 bears, so there are $C(6,5)$ ways to choose 5 bears. There are 14 toys, so there are $C(14,5)$ ways to choose any 5 toys.
$\frac{C(6,5)}{C(14,5)} = \frac{6}{2002} = \frac{3}{1001}$
2. We need to count the number of ways to choose 2 bears and 3 dogs and the total number of possible ways to select 5 toys. There are 6 bears, so there are $C(6,2)$ ways to choose 2 bears. There are 5 dogs, so there are $C(5,3)$ ways to choose 3 dogs. Since we are choosing both bears and dogs at the same time, we will use the Multiplication Principle. There are $C(6,2) \cdot C(5,3)$ ways to choose 2 bears and 3 dogs. We can use this result to find the probability.
$\frac{C(6,2)\,C(5,3)}{C(14,5)} = \frac{15 \cdot 10}{2002} = \frac{75}{1001}$
3. It is often easiest to solve “at least” problems using the Complement Rule. We will begin by finding the probability that fewer than 2 dogs are chosen. If fewer than 2 dogs are chosen, then either no dogs are chosen, or 1 dog is chosen.

When no dogs are chosen, all 5 toys come from the 9 toys that are not dogs. There are $C(9,5)$ ways to choose 5 toys from the 9 toys that are not dogs. Since there are 14 toys, there are $C(14,5)$ ways to choose the 5 toys from all of the toys.

$\frac{C(9,5)}{C(14,5)} = \frac{63}{1001}$

If 1 dog is chosen, then 4 toys must come from the 9 toys that are not dogs, and 1 must come from the 5 dogs. Since we are choosing both dogs and other toys at the same time, we will use the Multiplication Principle. There are $C(5,1) \cdot C(9,4)$ ways to choose 1 dog and 4 other toys.

$\frac{C(5,1)\,C(9,4)}{C(14,5)} = \frac{5 \cdot 126}{2002} = \frac{315}{1001}$

Because these events cannot occur together and are therefore mutually exclusive, we add the probabilities to find the probability that fewer than 2 dogs are chosen.

$\frac{63}{1001} + \frac{315}{1001} = \frac{378}{1001}$

We then subtract that probability from 1 to find the probability that at least 2 dogs are chosen.

$1 - \frac{378}{1001} = \frac{623}{1001}$

### Try It

A child randomly selects 3 gumballs from a container holding 4 purple gumballs, 8 yellow gumballs, and 2 green gumballs.

1. Find the probability that all 3 gumballs selected are purple.
2. Find the probability that no yellow gumballs are selected.
3. Find the probability that at least 1 yellow gumball is selected.

a. $\frac{1}{91}$; b. $\frac{5}{91}$; c. $\frac{86}{91}$

### Key Equations

probability of an event with equally likely outcomes: $P(E) = \frac{n(E)}{n(S)}$
probability of the union of two events: $P(E \cup F) = P(E) + P(F) - P(E \cap F)$
probability of the union of mutually exclusive events: $P(E \cup F) = P(E) + P(F)$
probability of the complement of an event: $P(E') = 1 - P(E)$

### Key Concepts

- Probability is always a number between 0 and 1, where 0 means an event is impossible and 1 means an event is certain.
- The probabilities in a probability model must sum to 1. See (Figure).
- When the outcomes of an experiment are all equally likely, we can find the probability of an event by dividing the number of outcomes in the event by the total number of outcomes in the sample space for the experiment. See (Figure).
- To find the probability of the union of two events, we add the probabilities of the two events and subtract the probability that both events occur simultaneously. See (Figure).
- To find the probability of the union of two mutually exclusive events, we add the probabilities of each of the events. See (Figure).
- The probability of the complement of an event is the difference between 1 and the probability that the event occurs. See (Figure).
- In some probability problems, we need to use permutations and combinations to find the number of elements in events and sample spaces. See (Figure).

### Section Exercises

#### Verbal

What term is used to express the likelihood of an event occurring? Are there restrictions on its values? If so, what are they?
If not, explain.

probability; The probability of an event is restricted to values between 0 and 1, inclusive of 0 and 1.

What is a sample space?

What is an experiment?

An experiment is an activity with an observable result.

What is the difference between events and outcomes? Give an example of both using the sample space of tossing a coin 50 times.

The union of two sets is defined as a set of elements that are present in at least one of the sets. How is this similar to the definition used for the union of two events from a probability model? How is it different?

The probability of the union of two events occurring is a number that describes the likelihood that at least one of the events from a probability model occurs. In both a union of sets $A$ and $B$ and a union of events $A$ and $B$, the union includes either $A$ or $B$ or both. The difference is that a union of sets results in another set, while the union of events is a probability, so it is always a numerical value between 0 and 1.

#### Numeric

For the following exercises, use the spinner shown in (Figure) to find the probabilities indicated.

Landing on red

Landing on a vowel

$\frac{1}{2}$

Not landing on blue

Landing on purple or a vowel

$\frac{5}{8}$

Landing on blue or a vowel

Landing on green or blue

$\frac{1}{2}$

Landing on yellow or a consonant

Not landing on yellow or a consonant

$\frac{3}{8}$

For the following exercises, two coins are tossed.

What is the sample space?

Find the probability of tossing two heads.

$\frac{1}{4}$

Find the probability of tossing exactly one tail.

Find the probability of tossing at least one tail.

$\frac{3}{4}$

For the following exercises, four coins are tossed.

What is the sample space?

Find the probability of tossing exactly two heads.

$\frac{3}{8}$

Find the probability of tossing exactly three heads.

Find the probability of tossing four heads or four tails.

$\frac{1}{8}$

Find the probability of tossing all tails.

Find the probability of tossing not all tails.

$\frac{15}{16}$

Find the probability of tossing exactly two heads or at least two tails.

$\frac{5}{8}$

For the following exercises, one card is drawn from a standard deck of 52 cards. Find the probability of drawing the following:

A club

A two

$\frac{1}{13}$

Six or seven

Red six

$\frac{1}{26}$

An ace or a diamond

A non-ace

$\frac{12}{13}$

A heart or a non-jack

For the following exercises, two dice are rolled, and the results are summed.

Construct a table showing the sample space of outcomes and sums.

|   | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | (1, 1) 2 | (1, 2) 3 | (1, 3) 4 | (1, 4) 5 | (1, 5) 6 | (1, 6) 7 |
| 2 | (2, 1) 3 | (2, 2) 4 | (2, 3) 5 | (2, 4) 6 | (2, 5) 7 | (2, 6) 8 |
| 3 | (3, 1) 4 | (3, 2) 5 | (3, 3) 6 | (3, 4) 7 | (3, 5) 8 | (3, 6) 9 |
| 4 | (4, 1) 5 | (4, 2) 6 | (4, 3) 7 | (4, 4) 8 | (4, 5) 9 | (4, 6) 10 |
| 5 | (5, 1) 6 | (5, 2) 7 | (5, 3) 8 | (5, 4) 9 | (5, 5) 10 | (5, 6) 11 |
| 6 | (6, 1) 7 | (6, 2) 8 | (6, 3) 9 | (6, 4) 10 | (6, 5) 11 | (6, 6) 12 |

Find the probability of rolling a sum of 3.

Find the probability of rolling at least one four or a sum of 8.

$\frac{5}{12}$

Find the probability of rolling an odd sum less than 9.

Find the probability of rolling a sum greater than or equal to 15.

$0$

Find the probability of rolling a sum less than 15.

Find the probability of rolling a sum less than 6 or greater than 9.

$\frac{4}{9}$

Find the probability of rolling a sum
between 6 and 9, inclusive.

Find the probability of rolling a sum of 5 or 6.

$\frac{1}{4}$

Find the probability of rolling any sum other than 5 or 6.

For the following exercises, a coin is tossed, and a card is pulled from a standard deck. Find the probability of the following:

A head on the coin or a club

$\frac{5}{8}$

A tail on the coin or red ace

A head on the coin or a face card

$\frac{8}{13}$

No aces

For the following exercises, use this scenario: a bag of M&Ms contains 12 blue, 6 brown, 10 orange, 8 yellow, 8 red, and 4 green M&Ms. Reaching into the bag, a person grabs 5 M&Ms.

What is the probability of getting all blue M&Ms?

$\frac{C(12,5)}{C(48,5)} = \frac{1}{2162}$

What is the probability of getting 4 blue M&Ms?

What is the probability of getting 3 blue M&Ms?

$\frac{C(12,3)\,C(36,2)}{C(48,5)} = \frac{175}{2162}$

What is the probability of getting no brown M&Ms?

#### Extensions

Use the following scenario for the exercises that follow: In the game of Keno, a player starts by selecting 20 numbers from the numbers 1 to 80. After the player makes his selections, 20 winning numbers are randomly selected from numbers 1 to 80. A win occurs if the player has correctly selected 3, 4, or 5 of the 20 winning numbers. (Round all answers to the nearest hundredth of a percent.)

What is the percent chance that a player selects exactly 3 winning numbers?

$\frac{C(20,3)\,C(60,17)}{C(80,20)} \approx 12.49\%$

What is the percent chance that a player selects exactly 4 winning numbers?

What is the percent chance that a player selects all 5 winning numbers?

$\frac{C(20,5)\,C(60,15)}{C(80,20)} \approx 23.33\%$

What is the percent chance of winning?

How much less is a player’s chance of selecting 3 winning numbers than the chance of selecting either 4 or 5 winning numbers?

$20.50 + 23.33 - 12.49 = 31.34\%$

#### Real-World Applications

Use this data for the exercises that follow: In 2013, there were roughly 317 million citizens in the United States, and about 40 million were elderly (aged 65 and over).[2]

If you meet a U.S. citizen, what is the percent chance that the person is elderly? (Round to the nearest tenth of a percent.)

If you meet five U.S. citizens, what is the percent chance that exactly one is elderly? (Round to the nearest tenth of a percent.)

$\frac{C(40000000,1)\,C(277000000,4)}{C(317000000,5)} \approx 36.78\%$

If you meet five U.S. citizens, what is the percent chance that three are elderly? (Round to the nearest tenth of a percent.)

If you meet five U.S. citizens, what is the percent chance that four are elderly? (Round to the nearest thousandth of a percent.)

$\frac{C(40000000,4)\,C(277000000,1)}{C(317000000,5)} \approx 0.11\%$

It is predicted that by 2030, one in five U.S. citizens will be elderly. How much greater will the chances of meeting an elderly person be at that time?
What policy changes do you foresee if these statistics hold true?

### Chapter Review Exercises

#### Sequences and Their Notation

Write the first four terms of the sequence defined by the recursive formula ${a}_{1}=2$, ${a}_{n}={a}_{n-1}+n$.

$2, 4, 7, 11$

Evaluate $\frac{6!}{(5-3)!\,3!}$.

Write the first four terms of the sequence defined by the explicit formula ${a}_{n}={10}^{n}+3$.

$13, 103, 1003, 10003$

Write the first four terms of the sequence defined by the explicit formula ${a}_{n}=\frac{n!}{n(n+1)}$.

#### Arithmetic Sequences

Is the sequence $\frac{4}{7}, \frac{47}{21}, \frac{82}{21}, \frac{39}{7}, \dots$ arithmetic? If so, find the common difference.

The sequence is arithmetic. The common difference is $d=\frac{5}{3}$.

Is the sequence $2, 4, 8, 16, \dots$ arithmetic? If so, find the common difference.

An arithmetic sequence has the first term ${a}_{1}=18$ and common difference $d=-8$. What are the first five terms?

$18, 10, 2, -6, -14$

An arithmetic sequence has terms ${a}_{3}=11.7$ and ${a}_{8}=-14.6$. What is the first term?

Write a recursive formula for the arithmetic sequence $-20, -10, 0, 10, \dots$

${a}_{1}=-20,\ {a}_{n}={a}_{n-1}+10$

Write a recursive formula for the arithmetic sequence $0, -\frac{1}{2}, -1, -\frac{3}{2}, \dots$ and then find the 31st term.

Write an explicit formula for the arithmetic sequence $\frac{7}{8}, \frac{29}{24}, \frac{37}{24}, \frac{15}{8}, \dots$

${a}_{n}=\frac{1}{3}n+\frac{13}{24}$

How many terms are in the finite arithmetic sequence $12, 20, 28, \dots, 172$?

#### Geometric Sequences

Find the common ratio for the geometric sequence $2.5, 5, 10, 20, \dots$

$r=2$

Is the sequence $4, 16, 28, 40, \dots$ geometric? If so, find the common ratio. If not, explain why.

A geometric sequence has terms ${a}_{7}=16{,}384$ and ${a}_{9}=262{,}144$. What are the first five terms?

$4, 16, 64, 256, 1024$

A geometric sequence has the first term ${a}_{1}=-3$ and common ratio $r=\frac{1}{2}$. What is the 8th term?

What are the first five terms of the geometric sequence ${a}_{1}=3,\ {a}_{n}=4\cdot {a}_{n-1}$?

$3, 12, 48, 192, 768$

Write a recursive formula for the geometric sequence $1, \frac{1}{3}, \frac{1}{9}, \frac{1}{27}, \dots$

Write an explicit formula for the geometric sequence $-\frac{1}{5}, -\frac{1}{15}, -\frac{1}{45}, -\frac{1}{135}, \dots$

${a}_{n}=-\frac{1}{5}\cdot {\left(\frac{1}{3}\right)}^{n-1}$

How many terms are in the finite geometric sequence $-5, -\frac{5}{3}, -\frac{5}{9}, \dots, -\frac{5}{59{,}049}$?

#### Series and Their Notation

Use summation notation to write the sum of terms $\frac{1}{2}m+5$ from $m=0$ to $m=5$.

$\sum_{m=0}^{5}\left(\frac{1}{2}m+5\right)$

Use summation notation to write the sum that results from adding the number 13 twenty times.

Use the formula for the sum of the first $n$ terms of an arithmetic series to find the sum of the first eleven terms of the arithmetic series 2.5, 4, 5.5, … .

${S}_{11}=110$

A ladder has 15 tapered rungs, the lengths of which increase by a common difference. The first rung is 5 inches long, and the last rung is 20 inches long. What is the sum of the lengths of the rungs?

Use the formula for the sum of the first $n$ terms of a geometric series to find ${S}_{9}$ for the series $12, 6, 3, \frac{3}{2}, \dots$

${S}_{9}\approx 23.95$

The fees for the first three years of a hunting club membership are given in (Figure).
If fees continue to rise at the same rate, how much will the total cost be for the first ten years of membership?

| Year | Membership Fees |
|---|---|
| 1 | $1500 |
| 2 | $1950 |
| 3 | $2535 |

Find the sum of the infinite geometric series $\sum_{k=1}^{\infty }45\cdot {\left(-\frac{1}{3}\right)}^{k-1}$.

$S=\frac{135}{4}$

A ball has a bounce-back ratio of $\frac{3}{5}$ the height of the previous bounce. Write a series representing the total distance traveled by the ball, assuming it was initially dropped from a height of 5 feet. What is the total distance? (Hint: the total distance the ball travels on each bounce is the sum of the heights of the rise and the fall.)

Alejandro deposits $80 of his monthly earnings into an annuity that earns 6.25% annual interest, compounded monthly. How much money will he have saved after 5 years?

$5,617.61

The twins Sarah and Scott both opened retirement accounts on their 21st birthday. Sarah deposits $4,800.00 each year, earning 5.5% annual interest, compounded monthly. Scott deposits $3,600.00 each year, earning 8.5% annual interest, compounded monthly. Which twin will earn the most interest by the time they are 55 years old? How much more?

#### Counting Principles

How many ways are there to choose a number from the set $\{-10, -6, 4, 10, 12, 18, 24, 32\}$ that is divisible by either 4 or 6?

6

In a group of 20 musicians, 12 play piano, 7 play trumpet, and 2 play both piano and trumpet. How many musicians play either piano or trumpet?

How many ways are there to construct a 4-digit code if numbers can be repeated?

${10}^{4}=10{,}000$

A palette of water color paints has 3 shades of green, 3 shades of blue, 2 shades of red, 2 shades of yellow, and 1 shade of black. How many ways are there to choose one shade of each color?

Calculate $P(18,4)$.

$P(18,4)=73{,}440$

In a group of 5 freshmen, 10 sophomores, 3 juniors, and 2 seniors, how many ways can a president, vice president, and treasurer be elected?

Calculate $C(15,6)$.

$C(15,6)=5005$

A coffee shop has 7 Guatemalan roasts, 4 Cuban roasts, and 10 Costa Rican roasts. How many ways can the shop choose 2 Guatemalan, 2 Cuban, and 3 Costa Rican roasts for a coffee tasting event?

How many subsets does the set $\{1, 3, 5, \dots, 99\}$ have?

${2}^{50}\approx 1.13\times {10}^{15}$

A day spa charges a basic day rate that includes use of a sauna, pool, and showers. For an extra charge, guests can choose from the following additional services: massage, body scrub, manicure, pedicure, facial, and straight-razor shave. How many ways are there to order additional services at the day spa?

How many distinct ways can the word DEADWOOD be arranged?

$\frac{8!}{3!\,2!}=3360$

How many distinct rearrangements of the letters of the word DEADWOOD are there if the arrangement must begin and end with the letter D?

#### Binomial Theorem

Evaluate the binomial coefficient $\left(\begin{array}{c}23\\ 8\end{array}\right)$.

$490{,}314$

Use the Binomial Theorem to expand ${\left(3x+\frac{1}{2}y\right)}^{6}$.

Use the Binomial Theorem to write the first three terms of ${(2a+b)}^{17}$.

$131{,}072{a}^{17}+1{,}114{,}112{a}^{16}b+4{,}456{,}448{a}^{15}{b}^{2}$

Find the fourth term of ${\left(3{a}^{2}-2b\right)}^{11}$ without fully expanding the binomial.

#### Probability

For the following exercises, assume two dice are rolled.

Construct a table showing the sample space.

|   | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | 1, 1 | 1, 2 | 1, 3 | 1, 4 | 1, 5 | 1, 6 |
| 2 | 2, 1 | 2, 2 | 2, 3 | 2, 4 | 2, 5 | 2, 6 |
| 3 | 3, 1 | 3, 2 | 3, 3 | 3, 4 | 3, 5 | 3, 6 |
| 4 | 4, 1 | 4, 2 | 4, 3 | 4, 4 | 4, 5 | 4, 6 |
| 5 | 5, 1 | 5, 2 | 5, 3 | 5, 4 | 5, 5 | 5, 6 |
| 6 | 6, 1 | 6, 2 | 6, 3 | 6, 4 | 6, 5 | 6, 6 |

What is the probability that a roll includes a 2?

What is the probability of rolling a pair?

$\frac{1}{6}$

What is the probability that a roll includes a 2 or results in a pair?

What is the probability that a roll doesn’t include a 2 or result in a pair?
$\frac{5}{9}$

What is the probability of rolling a 5 or a 6?

What is the probability that a roll includes neither a 5 nor a 6?

$\frac{4}{9}$

For the following exercises, use the following data: An elementary school survey found that 350 of the 500 students preferred soda to milk. Suppose 8 children from the school are attending a birthday party. (Show calculations and round to the nearest tenth of a percent.)

What is the percent chance that all the children attending the party prefer soda?

What is the percent chance that at least one of the children attending the party prefers milk?

$1-\frac{C(350,8)}{C(500,8)}\approx 94.4\%$

What is the percent chance that exactly 3 of the children attending the party prefer soda?

What is the percent chance that exactly 3 of the children attending the party prefer milk?

$\frac{C(150,3)\,C(350,5)}{C(500,8)}\approx 25.6\%$

### Practice Test

Write the first four terms of the sequence defined by the recursive formula ${a}_{1}=-14,\ {a}_{n}=\frac{2+{a}_{n-1}}{2}$.

$-14, -6, -2, 0$

Write the first four terms of the sequence defined by the explicit formula ${a}_{n}=\frac{{n}^{2}-n-1}{n!}$.

Is the sequence $0.3, 1.2, 2.1, 3, \dots$ arithmetic? If so, find the common difference.

The sequence is arithmetic. The common difference is $d=0.9$.

An arithmetic sequence has the first term ${a}_{1}=-4$ and common difference $d=-\frac{4}{3}$. What is the 6th term?

Write a recursive formula for the arithmetic sequence $-2, -\frac{7}{2}, -5, -\frac{13}{2}, \dots$ and then find the 22nd term.

${a}_{1}=-2,\ {a}_{n}={a}_{n-1}-\frac{3}{2};\ {a}_{22}=-\frac{67}{2}$

Write an explicit formula for the arithmetic sequence $15.6, 15, 14.4, 13.8, \dots$ and then find the 32nd term.

Is the sequence $-2, -1, -\frac{1}{2}, -\frac{1}{4}, \dots$ geometric? If so, find the common ratio. If not, explain why.

The sequence is geometric. The common ratio is $r=\frac{1}{2}$.

What is the 11th term of the geometric sequence $-1.5, -3, -6, -12, \dots$?

Write a recursive formula for the geometric sequence $1, -\frac{1}{2}, \frac{1}{4}, -\frac{1}{8}, \dots$

${a}_{1}=1,\ {a}_{n}=-\frac{1}{2}\cdot {a}_{n-1}$

Write an explicit formula for the geometric sequence $4, -\frac{4}{3}, \frac{4}{9}, -\frac{4}{27}, \dots$

Use summation notation to write the sum of terms $3{k}^{2}-\frac{5}{6}k$ from $k=-3$ to $k=15$.

$\sum_{k=-3}^{15}\left(3{k}^{2}-\frac{5}{6}k\right)$

A community baseball stadium has 10 seats in the first row, 13 seats in the second row, 16 seats in the third row, and so on. There are 56 rows in all. What is the seating capacity of the stadium?

Use the formula for the sum of the first $n$ terms of a geometric series to find $\sum_{k=1}^{7}-0.2\cdot {(-5)}^{k-1}$.

${S}_{7}=-2604.2$

Find the sum of the infinite geometric series $\sum_{k=1}^{\infty }\frac{1}{3}\cdot {\left(-\frac{1}{5}\right)}^{k-1}$.

Rachael deposits $3,600 into a retirement fund each year. The fund earns 7.5% annual interest, compounded monthly. If she opened her account when she was 20 years old, how much will she have by the time she’s 55?
How much of that amount was interest earned?

Total in account: $140,355.75; Interest earned: $14,355.75

In a competition of 50 professional ballroom dancers, 22 compete in the fox-trot competition, 18 compete in the tango competition, and 6 compete in both the fox-trot and tango competitions. How many dancers compete in the fox-trot or tango competitions?

A buyer of a new sedan can custom order the car by choosing from 5 different exterior colors, 3 different interior colors, 2 sound systems, 3 motor designs, and either manual or automatic transmission. How many choices does the buyer have?

$5\times 3\times 2\times 3\times 2=180$

To allocate annual bonuses, a manager must choose his top four employees and rank them first to fourth. In how many ways can he create the “Top-Four” list out of the 32 employees?

A rock group needs to choose 3 songs to play at the annual Battle of the Bands. How many ways can they choose their set if they have 15 songs to pick from?

$C(15,3)=455$

A self-serve frozen yogurt shop has 8 candy toppings and 4 fruit toppings to choose from. How many ways are there to top a frozen yogurt?

How many distinct ways can the word EVANESCENCE be arranged if the anagram must end with the letter E?

$\frac{10!}{2!\,3!\,2!}=151{,}200$

Use the Binomial Theorem to expand ${\left(\frac{3}{2}x-\frac{1}{2}y\right)}^{5}$.

Find the seventh term of ${\left({x}^{2}-\frac{1}{2}\right)}^{13}$ without fully expanding the binomial.

$\frac{429{x}^{14}}{16}$

For the following exercises, use the spinner in (Figure).

Construct a probability model showing each possible outcome and its associated probability. (Use the first letter for colors.)

What is the probability of landing on an odd number?

$\frac{4}{7}$

What is the probability of landing on blue?

What is the probability of landing on blue or an odd number?

$\frac{5}{7}$

What is the probability of landing on anything other than blue or an odd number?

A bowl of candy holds 16 peppermint, 14 butterscotch, and 10 strawberry flavored candies. Suppose a person grabs a handful of 7 candies. What is the percent chance that exactly 3 are butterscotch? (Show calculations and round to the nearest tenth of a percent.)

$\frac{C(14,3)\,C(26,4)}{C(40,7)}\approx 29.2\%$

### Glossary

complement of an event
the set of outcomes in the sample space that are not in the event $E$

event
any subset of a sample space

experiment
an activity with an observable result

mutually exclusive events
events that have no outcomes in common

outcomes
the possible results of an experiment

probability
a number from 0 to 1 indicating the likelihood of an event

probability model
a mathematical description of an experiment listing all possible outcomes and their associated probabilities

sample space
the set of all possible outcomes of an experiment

union of two events
the event that occurs if either or both events occur

1. The figure is for illustrative purposes only and does not model any particular storm.
2. United States Census Bureau.
http://www.census.gov
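As a cross-check, the counting-theory quotients of combinations used throughout this module can be evaluated directly with the standard-library `math.comb`. The sketch below (illustrative only) reproduces two of the worked answers: the non-defective-phones example and the butterscotch-candy exercise.

```python
from fractions import Fraction
from math import comb

# Probability of selecting 2 non-defective phones out of 8 (5 non-defective).
p_phones = Fraction(comb(5, 2), comb(8, 2))
print(p_phones)  # 5/14

# Probability that a handful of 7 candies contains exactly 3 butterscotch
# (14 butterscotch among 40 candies total).
p_candy = Fraction(comb(14, 3) * comb(26, 4), comb(40, 7))
print(float(p_candy))  # ≈ 0.292, i.e. about 29.2%
```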
\section{Introduction}
The Small Magellanic Cloud (SMC) is turning out to be an exciting nest of
X-ray binary pulsars. Estimates of the star formation rate (SFR) for the SMC
range between $0.044 M_{\sun}$ yr$^{-1}$ from H$\alpha$ measurements
\citep{ken91} to $0.38 M_{\sun}$ yr$^{-1}$ from supernova birth rates
\citep{fil98}. Using the relation between the X-ray luminosity function
of high-mass X-ray binaries (HMXBs) and the SFR of the host galaxy
\citep{gri03} and the upper and lower star formation rate estimates,
\citet{sg05} predict between 6 and 49 HMXBs with luminosities $\geq 10^{35}$
erg s$^{-1}$ in the SMC. We now know of $\sim 50$ such systems in the SMC
\citep{hab04,coe05}.
Several of these detections have come from {\it Chandra}\ and {\it RXTE} work over
the last couple of years \citep{edg04,lay05}. This large number suggests a
dramatic phase of star birth in the past, probably associated with the most
recent closest approach between the SMC and the Large Magellanic Cloud
\citep[LMC;][]{gar96}. Even more extreme, \citet{naz03} analysed a
$\sim 100$ ks exposure of just one $20\arcmin \times 20\arcmin$ {\it Chandra}\ field
and identified more than 20 probable Be/X-ray binary systems. Scaling
these numbers up to the $\sim2\degr \times 2\degr$ extent of the SMC, and
allowing for $\sim 10$\% X-ray duty cycles, suggests the final number of
Be/X-ray binaries could be well in excess of 1,000. Thus the study of the
SMC is not only providing a great homogeneous sample of HMXBs for study, but
is also providing direct insights into the history of our neighbouring galaxy.
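As a rough order-of-magnitude check, one reading of the numbers quoted above (not a calculation taken from the cited papers) gives

```latex
N \sim 20 \times
    \frac{2\degr \times 2\degr}{20\arcmin \times 20\arcmin} \times
    \frac{1}{0.1}
  = 20 \times 36 \times 10 \approx 7000 ,
```

comfortably in excess of 1,000 even if the duty-cycle correction is generous.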
\begin{figure}
\includegraphics[width=84mm]{fig1.eps}
\caption{The location of the 20 fields studied by Chandra in this work,
overlaid on a neutral hydrogen density image of the SMC \citep{sta99}. The
Wing and Bar of the SMC are marked.}
\label{fig:smc_fields}
\end{figure}
\begin{table*}
\centering
\begin{minipage}{180mm}
\caption{X-ray bright sources in the SMC Wing Survey.}
\label{tab:src}
\begin{tabular}{@{}rlcccccccc}
\hline
Object & Name & RA & Dec. & Error & Counts &
$P_{\rm pulse}$ & Pulsed fraction & Obs ID & Date \\
& & (J2000) & (J2000) & (arcsec) & &
(s) & (per cent) & & \\
\hline
1 & CXOU J005551.5-733110 & 00:55:51.54 & -73:31:10.1 & 0.88 & 231 & -- & -- &
5499 & 2006-03-03 \\
2 & RX J0057.3-7325 & 00:57:27.08 & -73:25:19.5 & 0.83 & 433 & 101.16 &
$29\pm7$ & 5499 & 2006-03-03 \\
3 & CXOU J005754.4-715630 & 00:57:54.41 & -71:56:30.9 & 0.95 & 130 & -- & -- &
5480 & 2006-02-06 \\
4 & CXOU J010014.2-730725 & 01:00:14.22 & -73:07:25.3 & 1.88 & 110 & -- & -- &
5498 & 2006-03-03 \\
5 & CXOU J010206.6-714115 & 01:02:06.69 & -71:41:15.8 & 0.83 & 383 & 700.54 &
$35\pm9$ & 5481 & 2006-02-06 \\
6 & CXOU J010245.0-721521 & 01:02:45.01 & -72:15:21.7 & 0.91 & 81 & -- & -- &
5486 & 2006-02-10 \\
7 & SAX J0103.2-7209 & 01:03:13.94 & -72:09:14.4 & 0.90 & 244 & 337.51 &
$45\pm19$ & 5486 & 2006-02-10 \\
8 & CXOU J010455.4-732555 & 01:04:55.50 & -73:25:55.2 & 1.33 & 58 & -- & -- &
5497 & 2006-03-03 \\
9 & CXOU J010509.6-721146 & 01:05:09.68 & -72:11:46.6 & 1.17 & 50 & -- & -- &
5486 & 2006-02-10 \\
10 & CXOU J010533.0-721331 & 01:05:33.08 & -72:13:31.2 & 1.21 & 80 & -- & -- &
5486 & 2006-02-10 \\
11 & CXOU J010712.6-723533 & 01:07:12.63 & -72:35:33.8 & 0.87 & 1919 & 65.78 &
$37\pm5$ & 5487 & 2006-02-10 \\
12 & CXOU J010735.0-732022 & 01:07:35.00 & -73:20:22.6 & 1.20 & 66 & -- & -- &
5496 & 2006-03-03 \\
13 & CXOU J010836.6-722501 & 01:08:36.65 & -72:25:01.7 & 0.99 & 67 & -- & -- &
5487 & 2006-02-10 \\
14 & CXOU J010849.5-721232 & 01:08:49.51 & -72:12:32.9 & 0.90 & 144 & -- & -- &
5485 & 2006-02-08 \\
15 & CXOU J010855.6-721328 & 01:08:55.64 & -72:13:28.2 & 1.02 & 54 & -- & -- &
5485 & 2006-02-08 \\
16 & CXOU J011021.3-715201 & 01:10:21.31 & -71:52:01.2 & 0.80 & 94 & -- & -- &
5483 & 2006-02-06 \\
17 & CXOU J011050.6-721025 & 01:10:50.62 & -72:10:25.9 & 0.92 & 82 & -- & -- &
5484 & 2006-02-06 \\
18 & CXOU J011154.2-723105 & 01:11:54.28 & -72:31:05.0 & 0.92 & 82 & -- & -- &
5488 & 2006-02-12 \\
19 & CXOU J011303.4-724648 & 01:13:03.46 & -72:46:48.4 & 1.83 & 86 & -- & -- &
5490 & 2006-02-27 \\
20 & CXOU J011744.7-733922 & 01:17:44.77 & -73:39:22.7 & 0.96 & 89 & -- & -- &
5494 & 2006-03-01 \\
21 & CXOU J011832.4-731741 & 01:18:32.44 & -73:17:41.6 & 1.40 & 71 & -- & -- &
5493 & 2006-02-27 \\
22 & CXOU J012027.3-724624 & 01:20:27.31 & -72:46:24.8 & 0.78 & 64 & -- & -- &
5491 & 2005-07-24 \\
23 & CXOU J012223.6-730848 & 01:22:23.65 & -73:08:48.5 & 0.96 & 301 & -- & -- &
5492 & 2005-08-12 \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
It is important to note that, despite appearances, the SMC is very much a
three-dimensional object. Studies of the Cepheid population by \citet{lan86}
have revealed that the depth of the SMC is up to ten times its observed
width. The two main structures, the Bar and the Wing, lie $\sim11$ kpc
behind and $\sim8$ kpc in front of the main body of the SMC, respectively.
To date most of the X-ray studies by {\it Chandra}\ and {\it RXTE} have concentrated on
the Bar which has proved to be a large source of HMXBs. The {\it Chandra}\ work
presented here focuses on a study of the Wing of the SMC. Its intention is to
compare the pulsar population of the Wing with what we already know for the
Bar. \citet{coe05} studied the locations of the detected X-ray pulsars
and identified an apparent relationship between the H{\sevensize I}
intensity distribution and that of the pulsars. They found that the pulsars
seem to lie in regions of low/medium H{\sevensize I} densities, suggesting
that high-mass star formation is well suited to these densities. The
locations of the {\it Chandra}\ observations reported here are based upon this
analysis.
\section{The {\it Chandra} SMC Wing Survey}
We performed a survey of the Wing of the SMC with {\it Chandra}\ between 2005 July
and 2006 March (see Figure~\ref{fig:smc_fields}). A total of 20 fields were
observed using the standard ACIS-I imaging mode configuration, which utilises
chips I0-I3 plus S2 and S3. The exposure time per field ranged
from 8.6--10.3 ks. We performed the data reduction using
{\sevensize CIAO V}3.2. The event files were filtered to restrict the energy
range to 0.5--8.0 keV and we barycentrically corrected the data. Potential
sources were detected using the {\sevensize WAVDETECT} algorithm. Here we
present the analysis of the 23 brightest X-ray sources ($>50$ counts) in the
survey (see Table~\ref{tab:src}). The positional errors quoted are the 95\%
confidence region for the source position that takes into account the
properties of the telescope optics and the source brightness
\citep[see][]{hong05}, combined in quadrature with the boresight error
($\sim 0.7 \arcsec$ at 95\% confidence). A full analysis and catalogue of
sources from all 20 fields will be presented in a future publication
(McGowan et al., in preparation).
\begin{figure*}
\includegraphics[width=120mm]{fig2.eps}
\caption{Temporal analysis of the {\it Chandra}\ data. Left: Lomb-Scargle
periodograms for the sources where a significant peak was detected. The 90\%
(dashed line) and 99\% (dotted line) confidence limits are shown. Right: the
pulse profiles for the sources with error bars, where the uncertainty is the
standard error for the data points in the bin. The fitted sine functions are
shown (dotted line).}
\label{fig:pow_spec_fold}
\end{figure*}
\begin{table*}
\centering
\begin{minipage}{100mm}
\caption{Spectral fits to the X-ray bright sources in the SMC Wing Survey.}
\label{tab:spec}
\begin{tabular}{@{}rlcccc}
\hline
Object & Model & $\Gamma$ & $T$ & $\Delta C$ (dof) &
$f_{X}$ \\
& & & $10^{6}$ K & & ergs cm$^{-2}$ s$^{-1}$ \\
\hline
1 & PL & $1.9^{+0.2}_{-0.2}$ & ... & 0.5 (20) & $3.3 \times 10^{-13}$ \\
& Brem & ... & $41.0^{+20.5}_{-12.0}$ & 0.4 (20) & $2.7 \times 10^{-13}$ \\
& Mekal & ... & $49.4^{+20.0}_{-11.9}$ & 0.6 (20) & $2.9 \times 10^{-13}$ \\
2 & PL & $0.6^{+0.1}_{-0.1}$ & ... & 1.0 (35) & $1.0 \times 10^{-12}$ \\
& Brem & ... & 2314 & 2.3 (35) & $7.5 \times 10^{-13}$ \\
& Mekal & ... & 927 & 2.5 (35) & $7.2 \times 10^{-13}$ \\
3 & PL & $1.3^{+0.2}_{-0.2}$ & ... & 1.1 (55) & $2.5 \times 10^{-13}$ \\
& Brem & ... & $273\pm 192$ & 1.1 (55) & $2.3 \times 10^{-13}$ \\
& Mekal & ... & $270\pm 188$ & 1.1 (55) & $2.4 \times 10^{-13}$ \\
4 & PL & $3.4^{+0.4}_{-0.4}$ & ... & 0.8 (45) & $3.6 \times 10^{-13}$ \\
& Brem & ... & $7.3^{+2.8}_{-1.4}$ & 0.8 (45) & $2.1 \times 10^{-13}$ \\
& Mekal & ... & $14.9^{+1.6}_{-3.4}$ & 1.4 (45) & $1.2 \times 10^{-13}$ \\
5 & PL & $0.4^{+0.1}_{-0.1}$ & ... & 0.8 (34) & $1.4 \times 10^{-12}$ \\
& Brem & ... & 2314 & 3.2 (34) & $8.5 \times 10^{-13}$ \\
& Mekal & ... & 927 & 3.6 (34) & $8.2 \times 10^{-13}$ \\
7 & PL & $0.8^{+0.2}_{-0.2}$ & ... & 1.4 (21) & $5.8 \times 10^{-13}$ \\
& Brem & ... & 2314 & 2.0 (21) & $4.5 \times 10^{-13}$ \\
& Mekal & ... & 927 & 2.2 (21) & $4.6 \times 10^{-13}$ \\
11 & PL & $0.3^{+0.1}_{-0.1}$ & ... & 1.1 (159) & $7.1 \times 10^{-12}$ \\
& PL$^{a}$ & $0.5^{+0.1}_{-0.1}$ & ... & 1.1 (158) & $6.9 \times 10^{-12}$\\
& Brem & ... & 2314 & 4.4 (159) & $4.0 \times 10^{-12}$ \\
& Mekal & ... & 927 & 4.8 (159) & $4.4 \times 10^{-12}$ \\
14 & PL & $1.7^{+0.2}_{-0.2}$ & ... & 1.1 (59) & $2.4 \times 10^{-13}$ \\
& Brem & ... & 133.4 & 2.0 (59) & $2.1 \times 10^{-13}$ \\
& Mekal & ... & $74.1^{+88.5}_{-27.2}$ & 1.1 (59) & $2.3 \times 10^{-13}$ \\
23 & PL & $1.5^{+0.2}_{-0.2}$ & ... & 0.8 (105) & $5.2 \times 10^{-13}$ \\
& Brem & ... & $109.3^{+129.9}_{-41.8}$ & 0.8 (105) & $4.6 \times 10^{-13}$ \\
& Mekal & ... & $105.1^{+172.2}_{-39.0}$ & 0.9 (105) & $4.9 \times 10^{-13}$ \\
\hline
\end{tabular}
\medskip
The data were fitted with absorbed power-law (PL), bremsstrahlung (Brem) and
Mekal models, assuming a neutral hydrogen column density of $N_{\rm H} =
0.06 \times 10^{22}$ cm$^{-2}$, where $\Gamma$ is the photon index and $T$ is
the temperature determined from the fits. The goodness of fit using the
C-statistic ($\Delta C$) and the unabsorbed flux in the 0.3--10 keV range are
also given. $^{a}$ Model fit results in $N_{\rm H} = 0.19^{+0.09}_{-0.08}
\times 10^{22}$ cm$^{-2}$ \\
\end{minipage}
\end{table*}
\subsection{Temporal Analysis}
\label{sect:pulse}
The main goal of the SMC Wing Survey is to detect new pulsars. We created
background subtracted light curves for the 23 sources in our sample and
searched for periodic variations from 6.5--1000 s. The temporal analysis
was performed using the Starlink {\sevensize PERIOD} software and we
generated Lomb-Scargle and Phase Dispersion Minimisation periodograms for
each of our sources.
We determined the 90\% and 99\% confidence levels for the Lomb-Scargle
periodograms from a cumulative probability distribution appropriate for each
data set. Using a Monte Carlo method we generated 10,000 simulated light
curves with the same time sampling and variance as the real data. The
simulated light curves were taken from a Gaussian distribution. A
Lomb-Scargle periodogram was produced for each simulated light curve, and the
peak power was recorded. From these values the probability of obtaining a
given peak power from pure noise can be calculated and the cumulative
distribution function derived.
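The Monte Carlo procedure above can be sketched as follows. This is a minimal numpy illustration, not the Starlink {\sevensize PERIOD} implementation, and the light-curve parameters below are invented for the example:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram, normalised by the variance."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # Scargle's phase offset tau makes the estimate time-shift invariant
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power / y.var()

def peak_power_threshold(t, sigma, freqs, conf=0.99, n_sim=100, seed=1):
    """Empirical peak-power threshold: simulate Gaussian light curves with
    the same time sampling and variance as the data, record the peak power
    of each periodogram, and take the requested quantile of the peaks."""
    rng = np.random.default_rng(seed)
    peaks = [lomb_scargle(t, rng.normal(0.0, sigma, t.size), freqs).max()
             for _ in range(n_sim)]
    return np.quantile(peaks, conf)

# Invented example: a 101.16 s pulsation in a ~3 ks light curve
t = np.arange(0.0, 3000.0, 10.0)
freqs = np.linspace(1.0 / 1000.0, 1.0 / 25.0, 300)
rng = np.random.default_rng(0)
y = 5.0 * np.sin(2.0 * np.pi * t / 101.16) + rng.normal(0.0, 1.0, t.size)
threshold = peak_power_threshold(t, y.std(), freqs)
peak = lomb_scargle(t, y, freqs).max()
detected = peak > threshold
```

A peak power above the simulated 99 per cent quantile is then treated as a significant detection.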
Two sources show a significant peak in the Lomb-Scargle periodogram and a
corresponding strong peak in the Phase Dispersion Minimisation periodogram.
Two of the other sources in the sample are the previously known pulsars,
RX J0057.3-7325 = SXP101 and SAX J0103.2-7209 = SXP348. These two show
pulsations, but at a level below the 90\% confidence limit. The power
spectra for all four sources are shown in Figure~\ref{fig:pow_spec_fold}
(left). We folded the light curves on the detected periods (see
Figure~\ref{fig:pow_spec_fold}, right) and fitted the resulting pulse
profiles with a sine function to determine the pulsed fractions (see
Table~\ref{tab:src}). We define the pulse fraction as $(F_{\rm max} -
F_{\rm min}) / (F_{\rm max} + F_{\rm min})$, where $F_{\rm max}$ and
$F_{\rm min}$ are the maximum and minimum of the fitted pulse light curve.
The errors on the periods were determined using the formula of Kov\'acs
\citep{kov81,hor86}. Using this method the calculated uncertainty takes
into account both the resolution due to the light curve sampling and the
signal-to-noise ratio of the detected modulation.
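The pulsed-fraction definition above can be illustrated with a short numpy sketch: a linear least-squares sine fit to a folded profile, evaluated at its extrema. The synthetic profile values here are invented for the example:

```python
import numpy as np

def pulsed_fraction(phase, flux):
    """Fit flux(phi) = c0 + c1 sin(2 pi phi) + c2 cos(2 pi phi) by linear
    least squares and return (Fmax - Fmin) / (Fmax + Fmin)."""
    A = np.column_stack([np.ones_like(phase),
                         np.sin(2.0 * np.pi * phase),
                         np.cos(2.0 * np.pi * phase)])
    c0, c1, c2 = np.linalg.lstsq(A, flux, rcond=None)[0]
    amp = np.hypot(c1, c2)          # amplitude of the fitted sine
    f_max, f_min = c0 + amp, c0 - amp
    return (f_max - f_min) / (f_max + f_min)

# Synthetic 10-bin pulse profile: mean level 10, sine amplitude 3
phase = (np.arange(10) + 0.5) / 10.0
flux = 10.0 + 3.0 * np.sin(2.0 * np.pi * (phase - 0.2))
pf = pulsed_fraction(phase, flux)   # recovers 3/10 = 0.30
```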
\begin{table*}
\centering
\begin{minipage}{170mm}
\caption{Optical counterparts for the X-ray bright sources in the SMC Wing
Survey.}
\label{tab:opt}
\begin{tabular}{@{}rlllcl@{}}
\hline
& \multicolumn{3}{c}{Optical Counterpart} & & \\
Object & OGLE III & OGLE II & MACHO & Period & Comment \\
& & & & (d) & \\
\hline
1 & SMC106.8 21521 & & & - & Changes of $\sim 0.1$ mag over $\sim 400$ d \\
2 & SMC106.7 15343 & & 211.16415.11 & 21.9 & SXP101. Optical period seen in
both OGLE III \\
& & & & & and MACHO data \\
3 & SMC108.4 384 & & & - & Some variability ($\sim 0.1$ mag) on timescales \\
& & & & & of $\sim 100$ d \\
4 & & & 211.16591.6 & 29.6 & MACHO data only, saturated in OGLE III. Period\\
& & & & & is not strong in MACHO and affected by many \\
& & & & & saturated points \\
5 & SMC114.7 39 & & & 267 & New pulsar, CXOU J010206.6-714115. Overall \\
& & & & & brightness change of 0.5 mag over $\sim 1000$ d \\
6 & & SMC-SC9 168928 & 206.16775.520 & - & No variability \\
7 & & SMC-SC9 173121 & 206.16776.17 & - & SXP348. Variability ($\sim0.1$ mag)\\
& & & & & on timescales of $\sim 400$ d \\
8 & SMC111.7 8943 & & & 1.49 & Sinusoidal folded light curve of 0.04 mag
amplitude \\
9 & SMC113.2 13190 & & 206.16947.35 & - & No variability \\
10 & SMC113.2 13509 & & 206.16946.1089 & - & No variability \\
11 & & SMC-SC11 48835 & 206.17055.21 & - & New pulsar, CXOU J010712.6-723533.
Brightening \\
& & & & & of $\sim 0.02$ mag over $\sim 1200$ d \\
12 & ? & & & ? & Not in OGLE or MACHO fields \\
13 & SMC113.1 9719 & SMC-SC11 116013 & 206.17114.1658 & - & Brightening of
$\sim 0.5$ mag over $\sim 800$ d \\
14 & SMC118.7 galaxy & SMC-SC11 120966 & 206.17175.739 & - & No variability \\
15 & SMC118.7 galaxy & & 206.17174.524 & - & No variability \\
16 & SMC118.5 1160 & & & - & Some variability ($\sim 0.2$ mag) on
timescales\\
& & & & & of $\sim 100$ d \\
17 & SMC118.7 10314 & & & - & No variability \\
18 & SMC115.5 12842 & & & - & No variability \\
19 & SMC115.7 18128 & & & - & Brightening of $\sim 0.4$ mag in $\sim 1500$
d \\
20 & SMC117.4 3274 & & & - & Brightening of $\sim 0.4$ mag in $\sim 1000$
d \\
21 & SMC121.6 galaxy & & & ? & Galaxy \\
22 & SMC120.7 6336 & & & - & No variability \\
23 & SMC121.4 542 & & & - & No variability \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
\subsection{Spectral Analysis}
We have extracted spectra for the sources with $>100$ counts using
{\sevensize CIAO V}3.2 tools. The spectra were regrouped by requiring at
least 10 counts per spectral bin for the brighter sources ($>200$ counts),
and 2 counts per bin for the fainter sources ($100<$ counts $<200$). The
subsequent spectral fitting and analysis were performed using
{\sevensize XSPEC V}12.3.0. We fitted each spectrum with a power-law,
thermal bremsstrahlung and Mekal model. In each fit we included absorption.
Due to the low number of counts detected we fixed the column density at the
SMC value of $6 \times 10^{20}$ cm$^{-2}$; only in the case of
source \#11 (CXOU J010712.6-723533) did we also allow $N_{\rm H}$ to
vary. The small number of detected counts renders chi-squared statistics
invalid. We have therefore used an alternative statistic, the C-statistic
\citep{cas79}, for our model fitting. The goodness of fit given in
Table~\ref{tab:spec}, $\Delta C$, is determined in a similar way to reduced
$\chi^{2}$. The unabsorbed flux in the 0.3--10 keV band has been determined
for each fit. For sources that are members of the SMC, or assumed members
(see Section \ref{sect:ratio}), we have also calculated the luminosity in the
0.3--10 keV range, using a distance to the SMC of 60 kpc (based on the
distance modulus of \citealt{wes97}). The results of the spectral fitting
are summarised in Table~\ref{tab:spec}.
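The flux-to-luminosity conversion used here is the standard isotropic relation $L = 4\pi d^{2} f$. As a sketch, adopting the 60 kpc distance and checking against the source \#1 power-law numbers quoted later in the text:

```python
import math

KPC_CM = 3.086e21          # 1 kpc in cm
D_SMC_CM = 60.0 * KPC_CM   # adopted SMC distance of 60 kpc

def unabsorbed_luminosity(flux_cgs, distance_cm=D_SMC_CM):
    """L = 4 pi d^2 f for an assumed-isotropic source (cgs units)."""
    return 4.0 * math.pi * distance_cm ** 2 * flux_cgs

# Source #1, power-law fit: f_X = 3.3e-13 erg cm^-2 s^-1 -> ~1.4e35 erg s^-1
L = unabsorbed_luminosity(3.3e-13)
```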
\subsection{Long-term Optical Light Curves}
All 23 objects were investigated for possible optical periods (see
Table~\ref{tab:opt}). Such periods have been found in many previous HMXB
systems in the SMC and can either represent the binary period of the system
or non-radial pulsations (NRPs) from the mass-donor star
\citep[for examples of both see][]{edg05,sch06}. Data were collected from as
many of the following sources as possible: OGLE II \citep{szy05,uda97}, OGLE III
and MACHO \citep{alc99}. The OGLE III archive contains $I$-band photometry
spanning 5 years; the data are not yet public. In each case the optical
light curves were searched for periods in the ranges $10-1000$ d and
$1-10$ d by generating Lomb-Scargle periodograms. If a significant period
was found, then the data were folded modulo that period with phase zero set
to the time of maximum light. In one case (source \#5, CXOU
J010206.6-714115) the data were first detrended with a polynomial fit. Several
of the objects revealed long-term variations on timescales of
$100-1000$ d; such changes are similar to the common Type 4 variations
reported by \citet{men02} from Be stars in the SMC. The errors on the
periods were determined using the same method as for the pulse periods (see
Section \ref{sect:pulse}).
\subsection{X-ray to Optical Flux Ratios}
\label{sect:ratio}
Using the results from the spectral fitting and information from optical
catalogues we have constructed X-ray to optical flux ratios for the majority
of the sources in our sample (Table \ref{tab:ratio}). It has been shown
that such ratios are a good discriminator for different classes of sources
(see e.g. \citealt{mac88,hor01}), with typical values for active galactic
nuclei (AGN) lying in the region $\rm log (f_{X}/f_{opt}) = 0.0 \pm 1$. We
have used the flux in the 0.5--2 keV band and Eq. (3) from \citet{hor01} to
calculate the ratio for each source that has a measured $R$ magnitude. In
the case of a source that has too few counts to model in XSPEC, we have
determined the flux given the count rate and assuming a power-law index of
1.6 (see Section \ref{sect:disc}) and neutral hydrogen column density of
$6 \times 10^{20}$ cm$^{-2}$, using PIMMS v3.9a. The results of this
analysis allow us to provisionally classify the sources and determine
whether each object is a member of the SMC (see Section \ref{sect:source}).
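The ratio calculation can be sketched directly from the formula given in the note to Table~\ref{tab:ratio}; the example values for sources \#2 and \#3 are taken from that table:

```python
import math

def log_fx_over_fr(fx_05_2kev, r_mag):
    """log(f_X/f_R) = log f_X + 5.50 + R/2.5, with f_X the unabsorbed
    0.5--2 keV flux in erg cm^-2 s^-1 and R the optical magnitude."""
    return math.log10(fx_05_2kev) + 5.50 + r_mag / 2.5

# Source #2 (SXP101): f_X = 1.0e-13, R = 14.8 -> -1.58 (pulsar regime)
# Source #3:          f_X = 8.3e-14, R = 20.1 ->  0.46 (AGN regime)
ratio_2 = log_fx_over_fr(1.0e-13, 14.8)
ratio_3 = log_fx_over_fr(8.3e-14, 20.1)
```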
\subsection{Quantile Analysis}
We have used the quantile analysis technique of \citet{hong04} to investigate
the X-ray colours of the 23 sources in our sample. In a traditional hardness
ratio the photons are split into predefined energy bands. The quantile
method divides the photon distribution into a given number of equal
proportions, where the quantiles are the energy values that mark the
boundaries between consecutive subsets. This has the advantage, compared to
traditional hardness ratios, that there is no spectral dependence and a
colour can be calculated even for sources with very few counts
\citep[for more details see][]{hong04}.
For each source we determine the median and quartiles of the photon
energy distribution. In Figure~\ref{fig:quantile} we show the quantile-based
colour-colour diagram (QCCD), using the median and the ratio of two
quartiles, for our sample. In the diagram the spectrum hardens as one goes
further right and changes from concave-downward to concave-upward moving from
top to bottom \citep[see Figure 7,][]{hong04}. We have included in the
figure the pulsars detected by \citet{edg04}. The new and known pulsars are
marked.
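A minimal numpy sketch of the quantile colours follows. The axis definitions here reflect our reading of \citet{hong04} and should be checked against that paper; the band limits of 0.5 and 8.0 keV are the survey energy range from Section 2:

```python
import numpy as np

E_LO, E_HI = 0.5, 8.0   # survey energy band in keV

def qccd_coords(energies):
    """Quantile-based colours: normalise the median and quartiles of the
    photon energy distribution to the detector band, then form a
    median-based hardness (x) and a quartile ratio (y)."""
    e25, e50, e75 = np.quantile(energies, [0.25, 0.50, 0.75])
    # normalise each quantile to the [E_LO, E_HI] band
    q25, q50, q75 = [(q - E_LO) / (E_HI - E_LO) for q in (e25, e50, e75)]
    x = np.log10(q50 / (1.0 - q50))   # spectrum hardens to the right
    y = 3.0 * q25 / q75               # concavity of the distribution
    return x, y

# A flat (uniform) spectrum across the band lands at x = 0, y = 1
x, y = qccd_coords(np.linspace(E_LO, E_HI, 1001))
```

Because the quantiles adapt to the data rather than to fixed energy bands, a colour can be computed even for the faintest sources in Table~\ref{tab:src}.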
\section{Individual Sources}
\label{sect:source}
\subsection{CXOU J005551.5-733110}
This source was detected in observation ID 5499 taken on 2006 March 3 (MJD
53797). The temporal analysis does not reveal any significant periodicities.
The location of CXOU J005551.5-733110 is close to source 65 in the {\it ROSAT}
HRI catalogue of the SMC \citep{sas00}, being consistent within errors.
The position of the {\it Chandra}\ source coincides with 2MASS 00555147-7331101
($J=16.3$, $H=15.4$, $K=14.6$) and OGLE III source SMC106.8 21521 ($I=17.7$).
The optical light curve displays changes of $\sim 0.1$ mag over $\sim 400$ d.
The X-ray emission from the source is well fitted by a thermal bremsstrahlung
with a temperature of $4.1 \times 10^{7}$ K; however, the temperature is
poorly constrained (see Table \ref{tab:spec}). A power-law also fits the
data well with an index of 1.9. If this source is a member of the SMC the
unabsorbed luminosities derived from these fits are $1.2 \times 10^{35}$ and
$1.4 \times 10^{35}$ erg s$^{-1}$, respectively.
\begin{table}
\centering
\begin{minipage}{80mm}
\caption{X-ray to optical flux ratios.}
\label{tab:ratio}
\begin{tabular}{@{}rcccc}
\hline
Object & $R$ & $f_{X}$ & $\rm log (f_{X}/f_{R})$ & Class \\
\hline
1 & - & $1.3 \times 10^{-13}$ & - & - \\
2 & 14.8 & $1.0 \times 10^{-13}$ & -1.58 & pulsar \\
3 & 20.1 & $8.3 \times 10^{-14}$ & 0.46 & AGN \\
4 & 10.9 & $1.3 \times 10^{-13}$ & -3.03 & star \\
5 & 15.1 & $9.7 \times 10^{-14}$ & -1.47 & pulsar \\
6 & 18.0 & $3.2 \times 10^{-14}$ & -0.79 & AGN \\
7 & 13.3 & $6.8 \times 10^{-14}$ & -2.34 & pulsar \\
8 & 14.5 & $2.8 \times 10^{-14}$ & -2.25 & star \\
9 & 18.0 & $2.0 \times 10^{-14}$ & -1.00 & - \\
10 & 18.7 & $3.2 \times 10^{-14}$ & -0.51 & AGN \\
11 & 14.9 & $4.4 \times 10^{-13}$ & -0.90 & pulsar \\
12 & - & $2.8 \times 10^{-14}$ & - & - \\
13 & 18.5 & $2.8 \times 10^{-14}$ & -0.65 & AGN \\
14 & 19.4 & $7.7 \times 10^{-14}$ & 0.15 & AGN \\
15 & 18.9 & $2.0 \times 10^{-14}$ & -0.64 & AGN \\
16 & 19.0 & $4.0 \times 10^{-14}$ & -0.30 & AGN \\
17 & - & $3.6 \times 10^{-14}$ & - & AGN \\
18 & 19.3 & $3.2 \times 10^{-14}$ & -0.27 & AGN \\
19 & - & $3.2 \times 10^{-14}$ & - & - \\
20 & 19.6 & $3.6 \times 10^{-14}$ & -0.10 & AGN \\
21 & - & $2.8 \times 10^{-14}$ & - & AGN \\
22 & - & $2.8 \times 10^{-14}$ & - & - \\
23 & 19.1 & $1.4 \times 10^{-13}$ & 0.29 & AGN \\
\hline
\end{tabular}
\medskip
$f_{X}$ is the unabsorbed flux in the 0.5--2 keV range. The X-ray to optical
flux ratio is determined using $\rm log (f_{X}/f_{R}) = \rm log f_{X} + 5.50
+ R/2.5$ \citep[Eq. (3),][]{hor01}.\\
\end{minipage}
\end{table}
\subsection{RX J0057.3-7325 = SXP101}
RX J0057.3-7325 = AX J0057.4-7325 \citep{kah99,tor00}, also known as SXP101
\citep{coe05}, was detected in observation ID 5499 taken on 2006 March 3
(MJD 53797). Coherent pulsations with a period of $101.45 \pm 0.07$ s were
detected in {\it ASCA} data \citep{tor00}. The strongest peak in both period
searches of the {\it Chandra}\ data occurs at $101.16 \pm 0.86$ s (see Figure
\ref{fig:pow_spec_fold}); however, the peak in the Lomb-Scargle periodogram
falls below the 90\% confidence level and would not have been regarded as
significant if the source were not already known.
\citet{edg03} narrowed the optical counterpart down to two sources, D and E in
Table 2 of their paper. Our more precise {\it Chandra}\ position allows us to
determine that the optical counterpart is their source E,
the star MACS J0057-734 10 \citep{tuc96}. This position is also consistent
with 2MASS 00572706-7325192 ($J=15.7$, $H=15.6$, $K=15.6$), MACHO object
211.16415.11 ($V=14.9$, $R=14.8$) and OGLE III source SMC106.7 15343
($I=15.6$). We find a period of $21.94 \pm 0.10$ d in both the OGLE III
and MACHO data (Figure~\ref{fig:opt_lc2}). From the Corbet diagram
\citep{cor86} we would expect a longer period for the source, perhaps twice
the detected period, however there are no strong peaks in that region of the
periodogram. A period of 22.95 d is found in the X-ray data from {\it RXTE}
with $T_{0} =$ MJD 2452111.4 (Galache et al., in preparation). The spectrum
of SXP101 is described well by a power-law with a photon index of 0.6. The
resulting unabsorbed luminosity is $4.3 \times 10^{35}$ erg s$^{-1}$.
\begin{figure*}
\includegraphics[width=130mm]{fig3.eps}
\caption{Quantile-based colour-colour diagram for the X-ray bright sources
in the SMC Wing survey. The median and two quartiles of the photon energy
distribution are given by $m$, $Q_{25}$ and $Q_{75}$, respectively. The
choice of x-axis allows the soft and hard phase space to be explored equally
well \citep[for more details see][]{hong04}. The new and known pulsars in
the SMC Bar \citep[open squares,][]{edg04} and the SMC Wing (filled triangles)
are marked. Source \#6 is a known quasar, and source \#4 is a probable
variable star. The two sources marked with a cross are source \#18
(CXOU J011154.2-723105) and source \#23 (CXOU J012223.6-730848). Both of
these sources display possible pulsations at 19.52 and 140.99 s,
respectively, but their optical magnitudes and lack of long-term variability
seem to rule out classification as Be/X-ray transients, and the nature of
these sources is uncertain.}
\label{fig:quantile}
\end{figure*}
\subsection{CXOU J005754.4-715630}
This source was detected in observation ID 5480 taken on 2006 February 6 (MJD
53772). The temporal analysis does not reveal any significant periodicities.
Within errors the position of CXOU J005754.4-715630 is consistent with
USNO-B1.0 0180-0037995 ($B2=19.2$, $R2=20.1$), 2MASS J00575428-7156306
($J=16.8$, $H=16.4$, $K=14.8$) and OGLE III source SMC108.4 384 ($I = 18.5$).
The OGLE III light curve displays slight variation ($\sim 0.1$ mag) on
timescales of $\sim 100$ d. The X-ray spectrum can be well-fitted
with a power-law with photon index of 1.3. Statistically the data are equally
well-fitted with the thermal models, however the models are very poorly
constrained (see Table \ref{tab:spec}). The X-ray to optical flux ratio for
this source implies that it is a background AGN.
\subsection{CXOU J010014.2-730725}
This source was detected in observation ID 5498 taken on 2006 March 3 (MJD
53797). The temporal analysis does not reveal any significant periodicities.
The position of CXOU J010014.2-730725 is close to the previously detected
source RX J0100.2-7307 \citep{kah99} and source 86 in the {\it ROSAT} HRI
catalogue of the SMC \citep{sas00}. The archival source has a 90\% confidence
positional uncertainty of $10 \arcsec$, which encompasses the {\it Chandra}\ position,
making it probable that they are the same source. \citet{sas00} made a
tentative classification of the source as a foreground star. The counterpart
to CXOU J010014.2-730725 is ISO-MCMS J010014.0-730725, detected by the
Infrared Space Observatory, which has been classified as a long-period red
variable \citep{cio03}. The source position is also consistent with 2MASS
01001398-7307253 ($J=10.9$, $H=10.2$, $K=10.0$) and MACHO source 211.16591.6
($V=12.2$, $R=10.9$). The source is saturated in the OGLE III data.
\citet{cio03} analysed the MACHO data for periodicities. The blue light
curve was considered of poor quality. The red light curve was found to have
a period of 29 d. We have analysed the MACHO data; after removing all
the saturated points we find a weak period at $29.57 \pm 0.35$ d in the red
band (Figure~\ref{fig:opt_lc4}). A larger peak at $367\pm 10$ d is too
close to one year to be considered as astrophysical in origin.
The X-ray emission from CXOU J010014.2-730725 is very soft with almost no
photons above $\sim 2$ keV. The spectrum can be well-fitted with a
power-law with index of 3.4 or a thermal bremsstrahlung with temperature
$7.3 \times 10^{6}$ K (see Table \ref{tab:spec}). The position of the source
in the quantile-based colour-colour diagram suggests that it is a stellar
coronal emission source \citep{hong05}. Assuming this is the case, the X-ray
to optical flux ratio based on the thermal fit is consistent with a Galactic
source. The $V-R$ colour for the source is 1.3, consistent with an M star
(M0-M2), as are the infrared colours. We classify CXOU J010014.2-730725 as a
foreground star.
\subsection{CXOU J010206.6-714115}
Observation ID 5481 took place on 2006 February 6 (MJD 53772). Timing
analysis of this object revealed a period of $700.54 \pm 34.53$ s with a
confidence of $>99$\% (see Figure \ref{fig:pow_spec_fold}). The data were
examined to ensure that this periodicity was not an artifact of the 707 s Z
dithering frequency.
The position of this pulsar coincides with the emission-line star [MA93]
1301 \citep{mey93}, the $V = 14.6$ mag O9 star AzV 294 \citep{mas02}, the
OGLE III object SMC114.7 39 ($I = 14.3$), and 2MASS J01020668-7141161
($J=14.2$, $H=14.0$, $K=13.9$). The $B-V$ colour index is $-0.14$
\citep{mas02} which is consistent with the value expected from the optical
companion of a Be X-ray binary. The OGLE III light curve shows a
strong period at $267.38 \pm 15.10$ d (Figure~\ref{fig:opt_lc5}). This
period is consistent with that predicted from the Corbet diagram
\citep{cor86} for a 700 s Be/X-ray pulsar (following the convention of
\citealt{coe05} this source would be designated SXP700).
The X-ray spectrum of CXOU J010206.6-714115 can be well-fitted with a
power-law with index of 0.4 (see Table \ref{tab:spec}). This model implies an
unabsorbed luminosity of $6.0 \times 10^{35}$ erg s$^{-1}$.
\subsection{CXOU J010245.0-721521}
This source was detected in observation ID 5486 taken on 2006 February 10 (MJD
53776). The position of CXOU J010245.0-721521 is consistent with OGLE II
SMC-SC9 168928 ($B=19.4$, $V=18.9$ and $I=18.4$), a known quasar at $z=1.06$
\citep{dob03}. The source is also detected by MACHO and is designated
206.16775.520 ($V=18.9$, $R=18.0$). The optical light curves do not show any
variability. The position of the source on the QCCD indicates
that the classification as a quasar is correct and the X-ray to optical flux
ratio is consistent with a background AGN.
\begin{figure}
\includegraphics[width=84mm]{fig4.eps}
\caption{OGLE III light curve (top), Lomb-Scargle periodogram with
significant period marked (middle) and folded light curve (bottom) for the
optical counterpart of source \#2, RX J0057.3-7325 = SXP101. The data have
been folded on $P = 21.94$ d using $T_{0} =$ JD 2452124.8625.}
\label{fig:opt_lc2}
\end{figure}
\begin{figure}
\includegraphics[width=84mm]{fig5.eps}
\caption{MACHO light curve (top), Lomb-Scargle periodogram with significant
period marked (middle) and folded light curve (bottom) for the optical
counterpart of source \#4, CXOU J010014.2-730725. The data have been folded
on $P = 29.57 $ d using $T_{0} =$ JD 2450322.5525.}
\label{fig:opt_lc4}
\end{figure}
\subsection{SAX J0103.2-7209 = SXP348}
SAX J0103.2-7209 = 2E 0101.5-7225 = RX J0103.2-7209 was first identified by
{\it BeppoSAX} in 1998 \citep{isr98} and showed pulsations at a period of
$345.2 \pm 0.1$ s. The source, also known as SXP348 \citep{coe05}, was
also seen in {\it ASCA} data taken in 1996, with a reported pulse period of
$348.9 \pm 0.3$ s and period derivative (with respect to {\it BeppoSAX}) of
1.7 s yr$^{-1}$ \citep{yok98}. The pulse period determined from observations
taken in 1999 with {\it Chandra} of $343.5 \pm 0.5$ s indicated that the
pulsar had been spinning up at a constant rate since 1996 \citep{isr00}.
Using serendipitous {\it XMM-Newton} observations of the source from 2000,
\citet{hab04} measured a pulse period of $341.21 \pm 0.50$ s, implying that
the spin-up was continuing.
We observed SXP348 on 2006 February 10 (MJD 53776). Although the highest
peak in the Lomb-Scargle periodogram is at $337.51 \pm 5.17$ s, it falls below
the 90\% confidence limit (see Figure \ref{fig:pow_spec_fold}). While our
results suggest that SXP348 may still be spinning up, due to the
uncertainty on the spin period we are unable to determine whether the
previously observed trend persists.
The pulsar has previously been identified with a $V = 14.8$ mag Be star
\citep{hug94,isr98}, [MA93] 1367 \citep{mey93}. \citet{coe00} analysed OGLE
II data of the proposed optical counterpart and concluded that it was the
likely companion. Timing analysis of the long-term $I$-band OGLE II data
did not show any periodic modulation in the range 1 to 50 d with an upper
limit of $\leq \pm 0.02$ mag. The source position is also coincident with
the MACHO object 206.16776.17 ($V=14.4$, $R=13.3$). Our analysis of the
light curve does not reveal any coherent period in the range 1 to 1000 d.
The source falls in a gap between the chips in OGLE III.
It has been shown that the spectrum of SXP348 can be well-fitted with an
absorbed power-law with photon index $\sim 1.0$ \citep{isr98,yok98,hab04}.
We find that the X-ray emission is well-fitted by a power-law with index
0.8 (see Table \ref{tab:spec}). This fit gives an unabsorbed luminosity of
$2.5 \times 10^{35}$ erg s$^{-1}$.
\subsection{CXOU J010455.4-732555}
This source was detected in observation ID 5497 taken on 2006 March 3 (MJD
53797). The source is located very near to the edge of the chip and only
falls on the chip for part of the observation. We were therefore unable to
perform a search for pulsations.
The position of CXOU J010455.4-732555 is coincident with USNO-B1.0
0165-0046936 ($R1=14.5$, $B2=15.3$, $R2=15.0$), 2MASS 01045550-7325558
($J=12.4$, $H=11.8$, $K=11.8$) and OGLE III source SMC111.7 8943 ($I=13.0$).
The OGLE III light curve reveals a period of $1.4880 \pm 0.0005$ d
(Figure~\ref{fig:opt_lc8}). The next largest peak is at 3.0238 d. It is
unclear whether the true period is 1.4880 or 3.0238 d; however, the values are
not consistent with the shorter period being a harmonic of the longer one. The
peak at $\sim 3$ d could be due to the beating of the true period with the 1
d sampling variation. If the source was a Be/X-ray binary it is hard to
reconcile the detected variation with an orbital period; in this case it
could be a NRP. However, the optical brightness of the object
indicates that the source is likely to be a variable star. The results from
the X-ray to optical flux ratio calculation are consistent with a Galactic
foreground star. Further X-ray observations to search for pulsations and
follow-up optical spectroscopy are needed to determine the nature of this
source.
\subsection{CXOU J010509.6-721146}
This source was detected in observation ID 5486 taken on 2006 February 10 (MJD
53776). The temporal analysis does not reveal any significant periodicities.
The location of CXOU J010509.6-721146 lies within the $1\arcmin$ error
circle of AX J0105-722 \citep{yok98}. A study of the region around
AX J0105-722 by \citet{fil00} resolved several sources. \citet{fil00}
proposed that the most likely counterpart to the {\it ASCA} source was RX
J0105.1-7211. Within errors the {\it Chandra}\ source is also consistent with this
{\it ROSAT} PSPC object. A search for the optical counterpart to the
{\it ASCA} source was carried out by \citet{coe05}. Based on H$\alpha$
observations and temporal analysis of the H$\alpha$ and optical data, they
identified the {\it ASCA} X-ray source, designated SXP3.34, with [MA93] 1506
\citep{mey93}. The position of this optical source also lies within the
{\it ASCA} error circle but it is not consistent with the {\it ROSAT} or
{\it Chandra}\ source. A search for pulsations could not be performed on the
{\it ROSAT} data due to poor statistics, and we did not detect any
significant pulsations in our search of the {\it Chandra}\ data; therefore, no firm
identification with AX J0105-722 can be made. The {\it Chandra}\ position coincides
with 2MASS 01050959-7211470 ($J=16.8$, $H=16.3$, $K=15.3$), MACHO object
206.16947.35 ($V=18.2$, $R=18.0$) and OGLE III source SMC113.2 13190
($I=17.7$). The optical light curves do not show any changes. The X-ray to
optical flux ratio is -1.0, making it difficult to classify the source.
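The X-ray to optical flux ratios quoted throughout can be illustrated with a one-line calculation. The paper does not spell out its convention, so the sketch below assumes the common Maccacaro et al. (1988) form, $\log(f_{\rm X}/f_{V}) = \log f_{\rm X} + V/2.5 + 5.37$; the input X-ray flux is an invented value for illustration, not taken from the paper.

```python
def log_fx_fv(log_fx, v_mag):
    """Assumed Maccacaro et al. (1988) convention:
    log(f_X / f_V) = log10(f_X [erg cm^-2 s^-1]) + V / 2.5 + 5.37.
    Both the constant and the band choice are assumptions here."""
    return log_fx + v_mag / 2.5 + 5.37

# Illustrative numbers only: a faint source with an assumed
# log f_X = -13.0 and a V = 18.0 mag optical counterpart.
ratio = log_fx_fv(-13.0, 18.0)
```

Ratios near or above zero are typical of AGN in this convention, while foreground stars sit at strongly negative values.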
\subsection{CXOU J010533.0-721331}
This source was detected in observation ID 5486 taken on 2006 February 10 (MJD
53776). The temporal analysis does not reveal any significant periodicities.
The position of CXOU J010533.0-721331 is close to RX J0105.5-7213
\citep{fil00}. The statistical (90\%) positional error of $4\arcsec$ added
in quadrature with the systematic error of $7\arcsec$ gives an overall
positional error of $\sim 8\arcsec$ for RX J0105.5-7213. This error circle
encompasses CXOU J010533.0-721331, implying that they are the same source.
The position of the {\it Chandra}\ source coincides with the radio counterpart,
J0105.5-7213 \citep{fil00}, of the {\it ROSAT} source. \citet{fil00} did not
detect any optical counterpart to this source and classified it as an
optically faint background AGN. However, we find
that CXOU J010533.0-721331's position is consistent with MACHO object
206.16946.1089 ($V=19.9$, $R=18.7$) and OGLE III object SMC113.2 13509
($I=19.1$), and we propose that this source is the optical counterpart to
the radio object. There are no long-term variations in the optical light
curves and the X-ray to optical flux ratio is consistent with a background
AGN.
\begin{figure}
\includegraphics[width=84mm]{fig6.eps}
\caption{OGLE III light curve (top), Lomb-Scargle periodogram with
significant period marked (middle) and folded light curve (bottom) for the
optical counterpart of source \#5, CXOU J010206.6-714115. The data
were detrended using a polynomial before the period search was performed
and hence the folded light curve is given in relative $I$ mag. The data
have been folded on $P = 267.38 $ d using $T_{0} =$ JD 2452264.6099.}
\label{fig:opt_lc5}
\end{figure}
\begin{figure}
\includegraphics[width=84mm]{fig7.eps}
\caption{OGLE III light curve (top), Lomb-Scargle periodogram with
significant period marked (middle) and folded light curve (bottom) for the
optical counterpart of source \#8, CXOU J010455.4-732555. The data have
been folded on $P = 1.488 $ d using $T_{0} =$ JD 2452214.6844.}
\label{fig:opt_lc8}
\end{figure}
\subsection{CXOU J010712.6-723533}
This source was detected in observation ID 5487 taken on 2006 February 10 (MJD
53776). Timing analysis of this object revealed a period of $65.78 \pm 0.13$
s with a confidence of $>99$\% (see Figure \ref{fig:pow_spec_fold}).
The position of the source coincides with the emission line star [MA93] 1619
\citep{mey93}. The source is also close to 2E 0105.7-7251 = RX J0107.1-7235
\citep{kah99}; taking into account the 90\% confidence positional uncertainty
of $15\arcsec$ for this source, its position is consistent with CXOU
J010712.6-723533 (following the convention of \citealt{coe05}, this source
would be designated SXP65.8). 2E 0105.7-7251 has been identified with a
$V=16.6$ Be star with a $B-V$ colour index of $-1.2$. CXOU J010712.6-723533's
position is consistent with 2MASS 01071259-7235338 ($J=15.8$, $H=15.4$,
$K=15.2$), MACHO object 206.17055.21 ($V=15.0$, $R=14.9$) and the OGLE II
source SMC-SC11 48835 ($I=15.7$). The long-term light curves show a
brightening of $\sim 0.02$ mag over $\sim 1200$ d, but no periodic modulation
is detected. The source falls in a gap between the chips in OGLE III.
We fitted the spectrum of CXOU J010712.6-723533 with a power-law and a
blackbody, both with photoelectric absorption. Initially we fixed the column
density at $6 \times 10^{20}$ cm$^{-2}$. This resulted in a best-fitting
power-law with index 0.3 and unabsorbed luminosity of $3.1 \times 10^{36}$
erg s$^{-1}$. Since we detected a high number of counts from this source, we
also fitted the column density. In this case the spectrum was again best fit
with a power-law with index 0.5, $N_{\rm H}=1.9 \times 10^{21}$ cm$^{-2}$ and
an unabsorbed luminosity of $3.0 \times 10^{36}$ erg s$^{-1}$ (see Table
\ref{tab:spec}). Statistically the power-law fits cannot be distinguished
from each other.
\subsection{CXOU J010735.0-732022}
This source was detected in observation ID 5496 taken on 2006 March 3 (MJD
53797). The temporal analysis does not reveal any significant periodicities.
There are no optical or infrared matches for this source.
\subsection{CXOU J010836.6-722501}
This source was detected in observation ID 5487 taken on 2006 February 10 (MJD
53776). The temporal analysis does not reveal any significant periodicities.
The position of CXOU J010836.6-722501 is consistent within errors with the
position of source 117 in the {\it ROSAT} HRI SMC catalogue \citep{sas00}.
The spectrum of the source was found to be hard \citep{hab00} and led
\citet{sas00} to classify the source as an X-ray binary or AGN. The position
of the {\it Chandra}\ source also coincides with MACHO object 206.17114.1658
($V=18.8$, $R=18.5$), OGLE II source SMC-SC11 116013 ($I=19.5$) and OGLE III
source SMC113.1 9719 ($I=19.6$). The long-term optical light curve shows a
brightening of $\sim 0.5$ mag over $\sim 800$ d. As we do not detect
pulsations from the source, and taking into account its position on the
QCCD compared to the pulsars we have detected and its X-ray to optical flux
ratio, we conclude it is likely that this source is a background AGN.
\subsection{CXOU J010849.5-721232}
This source was detected in observation ID 5485 taken on 2006 February 8 (MJD
53774). The temporal analysis does not reveal any significant periodicities.
The position of CXOU J010849.5-721232 is consistent with the MACHO object
206.17175.739 ($V=20.3$, $R=19.4$), the OGLE II object SMC-SC11 120966
($I=20.0$) and the OGLE III object SMC118.7 galaxy ($I=20.5$). The long-term
optical light curves do not vary. The X-ray emission from CXOU
J010849.5-721232 can be described by a power-law with a photon index of 1.7
or with a thermal emission model with temperature $7.4 \times 10^{7}$ K (see
Table \ref{tab:spec}). The temperature is poorly constrained and based on
this the non-thermal fit is preferred. In this case the X-ray to optical
ratio indicates that the source is a background AGN.
\subsection{CXOU J010855.6-721328}
This source was detected in observation ID 5485 taken on 2006 February 8 (MJD
53774). The temporal analysis does not reveal any significant periodicities.
The position of CXOU J010855.6-721328 coincides with OGLE III source
SMC118.7 galaxy and MACHO object 206.17174.524 ($V=19.5$, $R=18.9$). There
are no changes apparent in the optical light curves and the X-ray to optical
flux ratio is consistent with a background AGN.
\subsection{CXOU J011021.3-715201}
This source was detected in observation ID 5483 taken on 2006 February 6 (MJD
53772). The temporal analysis does not reveal any significant periodicities.
Within errors the position of CXOU J011021.3-715201 is consistent with
USNO-B1.0 0181-0039286 ($R1=19.0$, $B2=18.8$, $R2=19.1$) and OGLE III source
SMC118.5 1160 ($I=18.8$). The long-term optical light curve shows some
variation ($\sim 0.2$ mag) on $\sim 100$ d timescales. The X-ray to optical
flux ratio implies that the source is a background AGN.
\subsection{CXOU J011050.6-721025}
This source was detected in observation ID 5484 taken on 2006 February 6 (MJD
53772). The temporal analysis does not reveal any significant periodicities.
Within errors the position of CXOU J011050.6-721025 is coincident with the
radio source [FBR2002] J011050-721026 \citep{fil02} and OGLE III source
SMC118.7 10314 ($I=20.5$). Analysis of the long-term OGLE III light curve
does not show any variations. The identification of the {\it Chandra}\ source
with a radio object suggests that CXOU J011050.6-721025 is a background AGN.
\subsection{CXOU J011154.2-723105}
This source was detected in observation ID 5488 taken on 2006 February 12 (MJD
53778). A peak at $19.52 \pm 0.03$ s with $>90\%$ confidence was detected in
the Lomb-Scargle periodogram. A search using Phase Dispersion Minimisation
does not find a similar modulation.
A tentative detection of an X-ray source, designated BKGS 20, $\sim 3\arcsec$
from CXOU J011154.2-723105 has previously been reported \citep{bru87}.
However, a more stringent analysis of the {\it Einstein} data by
\citet{wan92} failed to detect the source. The position of the {\it Chandra}\ source
is coincident with USNO-B1.0 0174-0065503 ($R1=19.3$, $B2=18.9$, $R2=18.5$)
and the OGLE III source SMC115.5 12842 ($I=19.1$).
The position of the source on the quantile analysis plot is intriguing
(Figure \ref{fig:quantile}); it seems to lie between the detected pulsars
and the rest of the sources in our sample. The source shows possible
pulsed emission; however, the magnitudes of the optical counterpart, and its
lack of variability on long timescales, indicate that it is not a Be/X-ray
transient. In addition, the X-ray to optical flux ratio indicates that the
source is a background AGN.
\subsection{CXOU J011303.4-724648}
This source was detected in observation ID 5490 taken on 2006 February 27 (MJD
53793). The temporal analysis does not reveal any significant periodicities.
The position of CXOU J011303.4-724648 is consistent with OGLE III source
SMC115.7 18128 ($I=19.8$), whose light curve exhibits a brightening over
$\sim 1500$ d of $\sim 0.4$ mag.
\subsection{CXOU J011744.7-733922}
This source was detected in observation ID 5494 taken on 2006 March 1 (MJD
53795). The temporal analysis does not reveal any significant periodicities.
A {\it ROSAT} PSPC X-ray source has previously been detected near the
position of CXOU J011744.7-733922. The archival source, [HFP2000] 537
\citep{hab00}, has a positional uncertainty of $\sim 10\arcsec$ which
encompasses the position of the {\it Chandra}\ source. This suggests that they are
the same object. CXOU J011744.7-733922's position is consistent with
USNO-B1.0 0163-0044338 ($R1=19.6$, $B2=19.5$, $R2=19.2$) and OGLE III source
SMC117.4 3274 ($I=18.4$). The OGLE III light curve shows a brightening of
$\sim 0.4$ mag in $\sim 1000$ d. The X-ray to optical flux ratio implies
that the source is a background AGN.
\subsection{CXOU J011832.4-731741}
This source was detected in observation ID 5493 taken on 2006 February 27 (MJD
53793). The temporal analysis does not reveal any significant periodicities.
A {\it ROSAT} PSPC X-ray source has previously been detected near the
position of CXOU J011832.4-731741. The archival source, [HFP2000] 449
\citep{hab00}, has a positional uncertainty of $\sim 9\arcsec$ which
encompasses the position of the {\it Chandra}\ source. It is therefore likely that
they are the same object. CXOU J011832.4-731741's position coincides with
OGLE III source SMC121.6 galaxy, which suggests that it is a background AGN.
\subsection{CXOU J012027.3-724624}
This source was detected in observation ID 5491 taken on 2005 July 24 (MJD
53575). The temporal analysis does not reveal any significant periodicities.
The position of CXOU J012027.3-724624 coincides with OGLE III source
SMC120.7 6336 ($I=20.5$). The long-term optical light curve does not show
any variability.
\subsection{CXOU J012223.6-730848}
This source was detected in observation ID 5492 taken on 2005 August 12 (MJD
53594). A peak with $> 90\%$ confidence was detected in the Lomb-Scargle
periodogram at $140.99 \pm 1.50$ s; however, there is no corresponding strong
peak in the Phase Dispersion Minimisation periodogram. Analysis of the data
with the method used for finding pulsars in {\it RXTE} data \citep{gal06}
reveals a possible periodicity at 282.49 s, approximately twice the value
from the Lomb-Scargle periodogram. In this method the light curve is folded
on the period found in the Lomb-Scargle to produce a pulse profile. The
pulse profile is then used as a template to subtract the pulsations from the
light curve. A Lomb-Scargle periodogram is generated for the cleaned light
curve, and this power spectrum is subtracted from the original. The resulting
power spectrum shows only the contribution of the pulsar to the Lomb-Scargle,
allowing possible harmonics that may have been lost in the noise to be
detected.
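The cleaning procedure described above lends itself to a short numerical sketch. The following is an illustrative reconstruction only, not the authors' pipeline: the synthetic light curve, frequency grid and phase-binning choices are our own assumptions.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic unevenly sampled light curve: a pulse at P0 with a weak
# first harmonic (at P0/2) buried in noise, standing in for real data.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2000.0, 4000))
P0 = 141.0
y = (np.sin(2 * np.pi * t / P0)
     + 0.3 * np.sin(4 * np.pi * t / P0)
     + 0.2 * rng.standard_normal(t.size))

omega = 2 * np.pi * np.linspace(1 / 1000.0, 1 / 50.0, 4000)  # angular freqs
power = lombscargle(t, y - y.mean(), omega)

# 1. Fold on the detected period (here the known P0) to build a
#    mean pulse profile in phase bins.
nbins = 20
bins = np.minimum(((t % P0) / P0 * nbins).astype(int), nbins - 1)
template = np.array([y[bins == b].mean() for b in range(nbins)])

# 2. Use the profile as a template to subtract the pulsations.
y_clean = y - template[bins]

# 3. Periodogram of the cleaned curve, subtracted from the original:
#    what remains is the pulsar's contribution alone, harmonics included.
power_clean = lombscargle(t, y_clean - y_clean.mean(), omega)
pulsar_power = power - power_clean
```

In the difference spectrum `pulsar_power`, harmonics that were lost in the noise of the original periodogram stand out, since the noise contribution largely cancels.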
The position of CXOU J012223.6-730848 is consistent with USNO-B1.0
0168-0053065 ($R1=19.1$, $B2=20.8$, $R2=19.3$) and OGLE III source SMC121.4
542 ($I=18.9$). There are no changes detected in the optical light curves.
The X-ray spectrum is described equally well by a power-law with photon index
1.5 and a bremsstrahlung with a temperature of $1.1 \times 10^{8}$ K (see
Table \ref{tab:spec}). The temperature is poorly constrained and we therefore
prefer the non-thermal fit to the data.
The optical magnitude, lack of long-term variability in the optical light
curve and the position of the source on the QCCD (see Figure
\ref{fig:quantile} and Section 4) suggest that this source may not be a
pulsar. The X-ray to optical flux ratio determined using the non-thermal
model fit implies that the source is a background AGN. Further observations
are needed to verify the existence of pulsations and determine the nature of
the source.
\section{Discussion}
\label{sect:disc}
Observations have shown that the SMC contains a large number of HMXBs,
providing a homogeneous sample with which to investigate the evolution of the
SMC, and to compare with our Galaxy. The HMXBs are tracers of very young
populations and the SMC seems to have a particularly prominent young
population. To date, studies of the SMC have mostly concentrated on the Bar
of the SMC. To obtain a full picture of the history of the SMC we need to
broaden our study and include the outer regions of the SMC.
In this first paper from the SMC Wing Survey we have investigated the X-ray
and optical characteristics of the 23 brightest X-ray sources. The temporal
analysis, combined with identification of the optical counterparts, shows
that our sample contains four pulsars, two newly detected and two previously
known. The statistical significance of this cannot be determined until a
full analysis of the entire SMC Wing Survey has been performed. The other
sources include a quasar and possibly two foreground stars. The
classifications of the remainder are not conclusive, but the lack of
pulsations and long-term periodic variability, together with the optical
identifications and X-ray to optical flux ratios, imply that they are most
likely background AGN. The
classifications of several of the sources from the literature would seem to
agree with this. If the preliminary classifications we have presented in
this paper are confirmed our results indicate that the spectral hardness and
quantile analysis could be used to distinguish between different classes of
object (see below).
We have analysed the spectra of the 11 sources that have $>100$ counts (see
Table \ref{tab:spec}). Apart from source \#4, which is probably a variable
star, most of the objects exhibit non-thermal emission. We have been able to
fit the spectra of all four pulsars in our sample; their power-law indices
display a limited range of values, with an average of 0.5 and a standard
deviation of 0.2. In comparison, the spectra of the remaining seven sources
with $>100$ counts have softer spectra, with an average photon index of
$1.6\pm 0.2$ (excluding \#4). \citet{hab04} found that the distribution
of photon indices for SMC HMXBs, mainly located in the Bar, had an average
of $1.0\pm 0.2$. This is in agreement with the Bar pulsars studied by
\citet{edg04} that have an average photon index of $1.1\pm 0.5$. In general,
the pulsars that we have detected in the Wing have harder spectra than those
in the Bar.
It is likely that the pulsars have a higher intrinsic neutral hydrogen column
density than the AGN; however, it is puzzling why the Wing pulsars as a
group are harder/more absorbed than the Bar group. Could the Wing pulsars
be situated at the back of the SMC? The work of \citet{lan86} would seem to
rule out that possibility as they found that the Wing lies in front of the
main body of the SMC. It is more likely that small number statistics are
contributing to the observed division of sources.
A large fraction of our sources are too faint to extract a meaningful
spectrum. By constructing a quantile-based colour-colour diagram we
have been able to investigate the spectral properties of all 23 sources in
our sample (see Figure~\ref{fig:quantile}). Our analysis shows that the
pulsars we have detected in the Wing fall in a distinct group on the QCCD.
The Bar pulsars from \citet{edg04} also seem to fall in the
harder part of the diagram, but the separation of sources is less clearly
defined, with one source falling amongst the softer sources from the Wing.
There does not seem to be anything remarkable about this particular Wing
pulsar. The softer sources include an identified star, a quasar and a possible
AGN. The source that appears to sit in the transition region between the
majority of the pulsars and the other sources is CXOU J011154.2-723105
(source \#18). This object displays a possible pulsation of 19.52 s, but its
optical magnitude and lack of long-term variability seem to rule out a
Be/X-ray transient, and the nature of the source remains unclear.
The classification of all of the sources will require optical spectroscopy,
but the QCCD may be a useful tool for distinguishing pulsars from other types
of object (stars, quasars, AGN) for the fainter X-ray sources in the SMC Wing
survey.
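Since many sources are too faint for spectral fits, the quantile colours themselves are straightforward to compute from an event list. The sketch below assumes the Hong, Schlegel \& Grindlay (2004) axis conventions and a 0.5-8 keV band; the paper does not restate its exact choices, so treat these as assumptions.

```python
import numpy as np

def quantile_colors(energies_keV, e_lo=0.5, e_hi=8.0):
    """Map a list of photon energies onto QCCD axes (assumed
    Hong et al. 2004 style): x = log10(m / (1 - m)) with m the
    normalised median energy; y = 3 * Q25 / Q75 from the
    normalised quartile energies."""
    e = np.asarray(energies_keV, dtype=float)
    e25, e50, e75 = np.percentile(e, [25, 50, 75])
    m = (e50 - e_lo) / (e_hi - e_lo)
    q25 = (e25 - e_lo) / (e_hi - e_lo)
    q75 = (e75 - e_lo) / (e_hi - e_lo)
    return np.log10(m / (1.0 - m)), 3.0 * q25 / q75

# Toy event lists: a hard (pulsar-like) spectrum sits to the right
# of a soft one along the x axis of the QCCD.
x_soft, y_soft = quantile_colors([0.6, 0.8, 1.0, 1.2, 1.5] * 40)
x_hard, y_hard = quantile_colors([2.0, 3.0, 4.0, 5.0, 6.5] * 40)
```

Because only energy quantiles are needed, this works even for sources with a few tens of counts, which is the appeal of the method for the fainter survey sources.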
\section{Summary}
We have detected two new pulsars, CXOU J010712.6-723533 and CXOU
J010206.6-714115, and observed two previously known pulsars, SXP101 and
SXP348. With the accurate positions provided by {\it Chandra}\ we have been able to
determine new optical identifications for the two new pulsars, CXOU
J010712.6-723533 and CXOU J010206.6-714115, and SXP101. We have found
long-term optical periods of 267 d and 21.9 d for CXOU J010206.6-714115 and
SXP101, respectively.
\section*{Acknowledgments}
AU was supported by the BST grant of the Polish MNSW. RHDC and SL acknowledge
support from Chandra/NASA grant GO5-6042A/NAS8-03060.
This research has made use of the SIMBAD data base, operated by CDS,
Strasbourg, France. This paper utilizes public domain data originally
obtained by the MACHO Project, whose work was performed under the joint
auspices of the U.S. Department of Energy, National Nuclear Security
Administration by the University of California, Lawrence Livermore National
Laboratory under contract No. W-7405-Eng-48, the National Science Foundation
through the Center for Particle Astrophysics of the University of California
under cooperative agreement AST-8809616, and the Mount Stromlo and Siding
Spring Observatory, part of the Australian National University.
Biography
He was the son of Bernabò Visconti, lord of Milan, and of Beatrice della Scala.
In 1379 his father Bernabò divided his domains among his five legitimate sons: Marco, Gianmastino, Rodolfo, Carlo and Ludovico. Gianmastino, who at the time was only nine years old, received Brescia and the Val Camonica.
After the coup d'état carried out by Gian Galeazzo Visconti to make himself lord of Milan, Gianmastino took refuge in Brescia, and then in Chur, where he was hosted by the bishop. As an exile he began a series of journeys that took him as far as Germany. There he had two natural children, Giorgio and Maddalena, by two local women.
The sources are contradictory about the woman who became his wife. In 1385, a few months before the coup, he was certainly betrothed to Cleofe della Scala, daughter of Antonio della Scala, lord of Verona, and of Samaritana da Polenta. Some sources report the marriage and the birth of three children:
Beatrice, who married Prosdocimo de' Conti;
Bernabò, who received Marignano in 1413 and had a daughter, Donnina, who married Franciscolo Castiglioni, son of Cristoforo Castiglione;
Maddalena, who married Giovanni Porro, a Milanese nobleman.
However, other documents record a legitimate daughter, Lucia, born to a certain Elisabetta.
In June 1404 he donated the Valtellina, Chiavenna, Poschiavo and Bormio to Hartmann II, bishop of Chur, as a sign of gratitude. In November of that same year, following the death of Gian Galeazzo, the Visconti claimed the lordship of Milan against Pandolfo Malatesta. From this dispute Gianmastino obtained Bergamo and the Ghiara d'Adda.
He died the following year, on 19 July, and was buried in Bergamo in the church of San Giovanni in the Cittadella, later incorporated into the Giovanni XXIII episcopal seminary.
Ancestry
Notes
See also
Visconti
Milano
Gianmastino
import utils from './utils'
import validators from './validators'
import events from '../events'
import state from '../state'
import calculations from '../../src/calculations'
module.exports = function(req, res, next){
switch (req.body.type){
case 'task-created':
specTaskCreated(req, res, next)
break
case 'task-claimed':
specTaskClaimed(req, res, next)
break
case 'task-boosted':
specTaskBoosted(req, res, next)
break
case 'task-monthly-updated':
specTaskRateUpdated(req, res, next)
break
default:
next()
}
}
//
function specTaskCreated(req, res, next){
let errRes = []
if (
validators.isName(req.body.name, errRes) &&
validators.isNotes(req.body.description, errRes) &&
validators.isAmount(req.body.monthlyValue, errRes) &&
validators.isAmount(req.body.boost, errRes) &&
validators.isAmount(req.body.cap, errRes) &&
validators.isBool(req.body.oneTime, errRes) &&
validators.isFob(req.body.fob, errRes)
){
events.taskCreated(
req.body.name,
req.body.description,
req.body.monthlyValue,
req.body.cap,
req.body.boost,
req.body.fob,
req.body.oneTime,
utils.buildResCallback(res)
)
} else {
    res.status(400).send(errRes) // validation failure: 400, as in specTaskClaimed
}
}
function specTaskClaimed(req, res, next){
let errRes = []
// TODO: this member-fob conversion in earlier middleware, (new name authFob?)
let paid
state.pubState.tasks.forEach( task => {
if (task.taskId == req.body.taskId){
paid = calculations.calculateTaskPayout(task)
}
})
let memberId = utils.memberIdFromFob(req.body.fob)
console.log('payout ready', paid, memberId)
if (
validators.isTaskId(req.body.taskId, errRes) &&
validators.isMemberId(memberId, errRes) &&
validators.isAmount(paid, errRes) &&
validators.isNotes(req.body.notes, errRes)
){
events.taskClaimed(
req.body.taskId,
memberId,
paid,
req.body.notes,
utils.buildResCallback(res)
)
} else {
res.status(400).send(errRes)
}
}
function specTaskRateUpdated(req, res, next){
let errRes = []
if (
validators.isTaskId(req.body.taskId, errRes) &&
validators.isAmount(req.body.amount, errRes) &&
validators.isNotes(req.body.notes, errRes)
){
events.taskMonthlyUpdated(
req.body.taskId,
req.body.amount,
req.body.notes,
utils.buildResCallback(res)
)
} else {
    res.status(400).send(errRes) // validation failure: 400, as in specTaskClaimed
}
}
function specTaskBoosted(req, res, next){
let errRes = []
if (
validators.isTaskId(req.body.taskId, errRes) &&
validators.isAmount(req.body.amount, errRes) &&
validators.isNotes(req.body.notes, errRes)
){
events.taskBoosted(
req.body.taskId,
req.body.amount,
req.body.notes,
utils.buildResCallback(res)
)
} else {
    res.status(400).send(errRes) // validation failure: 400, as in specTaskClaimed
}
}
Panasonic and Toyota to form smart housing company
Japanese companies Toyota and Panasonic are teaming up to develop technology for connected homes and vehicles.
In January the two firms got together to build batteries for electric vehicles, and now they intend to offer services using the internet of things, such as on-demand ride sharing and home energy services.
The company, to be called Prime Life Technologies, will comprise Toyota and Panasonic's housing units. The Nikkei Asian Review reports that Panasonic will transfer all shares in Panasonic Homes to Prime Life. Toyota will transfer all shares in Toyota Housing, as well as those in subsidiary Misawa Home. It is due to start operations in January next year.
Kazuhiro Tsuga, president of Panasonic, said in a statement: "We will put our respective strengths together to offer new value in everyday life."
Akio Toyoda, president of Toyota, told reporters: "I want to take on the challenge of providing a new kind of lifestyle."
CNN observed that the move follows similar deals struck by other car-makers. Volkswagen has allied with Microsoft to develop an "Automotive Cloud" that will integrate apps into cars, and Renault has teamed up with Nissan, Fiat Chrysler and Google, while Volvo is working with Uber on autonomous vehicles.
Toyota entered the housing business in 1975, establishing Toyota Housing in 2003 and acquiring Misawa Home in 2017.
Panasonic Homes has been developing smart lighting and air conditioning systems. Last year it unveiled "Home x", a smart home control system that uses the internet of things to manage homes.
The Japanese housing market is expected to decline as the country's population shrinks, putting a premium on competitiveness.
Image: The race is on to develop systems that integrate, energy, home and transport systems (Dreamstime)
Mercedes follows Tesla in offering battery-powered houses
Chelsea drivers to plug electric cars into lamp posts
Foster + Partners and Nissan reveal self-driving car that can power a home
Reuben Kadish Survey
Timothy Taubes
Artists Choice Museum
The Artists' Choice Museum will present a fifty-year survey exhibition of sculpture and works on paper by Reuben Kadish, to open Saturday, January 11th and continue through Saturday, February 22, 1986. Approximately forty terra cotta and bronze sculptures dating from 1955 to the present and works on paper dating from the early 1930's have been selected by the guest curator, Judd Tully.
In his catalogue essay, Tully writes: "Kadish harbors a passion for cultural anthropology, the bulging figure of the Venus of Willendorf and her ivory sister the 'Venus' of Lespugue. He uses archaic Greek and Egyptian myths and tempers them with Hebraic wisdom, from the 'Wandering Oedipus' to the grieving 'Demeter', a pantheon most mortals have lost touch with. The fantastic stories that cling to their names fit seamlessly into Kadish's world."
Born in Chicago in 1913, Kadish and his family moved to Los Angeles, where his close friendships with Philip Guston and the Pollock brothers, Sandy, Charles and Jackson, took shape. Kadish apprenticed as a fresco painter under David Alfaro Siqueiros and teamed up with Guston to paint major murals in Mexico and California during the 1930's. It wasn't until after World War II and his stint as an Army combat artist that he moved to New York with his family and made the transition from painting to sculpture.
Kadish quickly detoured from the downtown "New York School" route and became a successful dairy farmer in New Jersey. It was there — amidst the fertile soil — that his sculptural vision germinated. Though his sculpture was widely exhibited in the early 1960's and included in the finest of international surveys of contemporary art, Kadish's more recent oeuvre is not as familiar to younger audiences, with the distinct exception of his students at the Cooper Union, where Kadish is a master teacher of sculpture. This survey will merge the past and present work and give for the first time a "wide angle" perspective on this unique artist.
A fully illustrated 24-page catalogue with an essay and bibliography by the curator accompanies the exhibition. A reception for Reuben Kadish will be held Saturday, January 11th, 6:00-8:00PM.
Authors and Publications
Alison Weld
Ariella Budick
Benjamin Genocchio
Elizabeth Wix
Ellen Landau
Helen Harrison
Herman Cherry
Jean Libman Block
Jill Conner
Judd Tully
Kenneth Baker
Kristin Wilson
María Esther Guzmán
Michael Brenson
Suzanne Muchnic
William Zimmer
Arts Magazine
Grace Borgenicht Gallery
Kresge Museum of Art Bulletin
New Jersey State Museum
New York Studio School
Pollock-Krasner House and Study Center
SF Gate
Tamarind Institute
https://rpg.stackexchange.com/questions/149347/pathfinder-warbow-concept-review/149355

# Pathfinder warbow concept review

This is a review request for a warbow, intended for the Pathfinder system. The purpose behind the creation of this particular item was to allow the player to quest for a magic item that was not essentially useless at higher levels, but rather gained abilities as the PC gained levels and powers. It is a unique item, and could possibly be considered artifact status, even though it is intended to be available starting at a low level, slanted towards a horizon walker or similar concept. Description and capabilities are as follows:

[Insert name here] was a distinguished ranger, well known for his willingness to protect those around him when the cause was just, regardless of the consequences to himself. Many of the tales that have grown around him have reached the status of unverifiable legend, but it is certain that his bow was a feared weapon among the unjust. It is rumored that he even ventured onto the Abyss and other planes to retrieve people stolen from their land.

Sometime after he retired, he returned to his woodland home to find his wife murdered and his son missing from their house. Taking up his bow once more, he ventured forth to rescue his son. It is unknown what actually happened, but his son was returned, and [...] and his bow were never seen again. Most believe that he either perished, or made a trade with an infernal being to exchange himself for his son.

Many people claim to have either seen or wielded his bow after that, but none of these claims have been confirmed, and few even believe that the bow existed, counting it as another fantastical tale that surrounds his legacy.

DESCRIPTION

[Insert name here]'s Warbow is dark wood intricately carved with arcane runes, and glows with a faint amber glow in dim light. A white leather grip is secured to the bow with three silver bands, one each at the top, middle and bottom of the grip. To successfully utilize [...] Warbow to its fullest potential, a character must fulfill the following requirements:

Feats Exotic Weapon Proficiency, Far Shot*, Planar favored terrain** (edit for clarification: the planar terrain requirement relates only to the plane shift aspect)

Skills Craft (bowyer) 4 ranks, Spellcraft 4 ranks

ABILITIES TABLE

$$\begin{array}{c c l} \textbf{Character Level} & \textbf{Weapon Level} & \textbf{Weapon Effect} \\ \hline 1^\text{st}-2^\text{nd} & 1^\text{st} & \textit{+1 adaptive war bow} \\ 3^\text{rd}-4^\text{th} & 2^\text{nd} & \text{Activation ring }\textit{spell storing, minor} \\ 5^\text{th}-6^\text{th} & 3^\text{rd} & \text{Enhance arrows} \\ 7^\text{th}-8^\text{th} & 4^\text{th} & \text{Minor displacement (as }\textit{blur}\text{)} \\ 9^\text{th}-10^\text{th} & 5^\text{th} & \textit{+1 adaptive distance* war bow} \\ 11^\text{th}-12^\text{th} & 6^\text{th} & \text{Activation ring }\textit{spell storing, regular} \\ 13^\text{th}-14^\text{th} & 7^\text{th} & \textit{Plane shift}\text{,** 3/day} \\ 15^\text{th}-16^\text{th} & 8^\text{th} & \text{Imbue arrow} \\ 17^\text{th}-18^\text{th} & 9^\text{th} & \text{Phase arrow} \\ 19^\text{th}-20^\text{th} & 10^\text{th} & \text{Activation ring }\textit{spell storing, major} \\ \end{array}$$

1: +1 Adaptive: At first level, the bow gains the +1 adaptive quality. The bow is +1 to hit/damage, and the wielder's Strength bonus applies as well.

2: Spell Storing (Su): At third level, activation of the first ring occurs. This ring activates and functions as a ring of spell storing, minor.

3: Enhance Arrows (Su): Each non-magical arrow fired gains +1, and one of the flaming, shock or burst attributes.

4: Minor Displacement (Su): When wielded, the weapon distorts light as a cloak of displacement, minor, resulting in a 20% miss chance on attacks against the wielder.

5: Distance (Su): Every arrow fired by the bow gains the distance quality. *Note: for this to be used, the wielder must have the Far Shot feat. If this is not available, this ability lies dormant until it is acquired.

6: Spell Storing (Su): Activation of the second ring occurs. This functions as a ring of spell storing, regular.

7: Plane Shift (Su): At 13th level, the leather wrap darkens and close inspection reveals swirls. This can be used to activate and take up to 8 linked people, as in the spell plane shift. If the shift is to a favored terrain, the bow grants the wielder the ability to natively survive while in possession of it. In addition, the accuracy is 1-10 miles from target. In other planes, normal accuracy and conditions apply. **Note: the wielder must have a plane as a favored terrain. If this is not available, the ability lies dormant until it is available.

8: Imbue Arrow (Su): The bow grants the ability to infuse any non-magical arrow with either a known or stored area-of-effect spell. This spell will activate on any hit. If the result of the attack is a miss, the spell is lost without activation.

9: Phase Arrow (Su): The wielder can launch an arrow 2x per day at a target known to him within range, and the arrow travels to the target in a straight path, passing through any nonmagical barrier or wall in its way. (Any magical barrier stops the arrow.) This ability negates cover, concealment, armor, and shield modifiers, but otherwise the attack is rolled normally.

10: Spell Storing (Su): Activation of the third ring occurs. This functions as a ring of spell storing, major.

Aura moderate abjuration; CL 12th; Craft Magic Arms and Armor, creator must possess the arcane pool quality; Price +5 bonus.

I would like to determine whether this is a logical progression, or whether it is not enough in the beginning and progresses to overpowered at the high end.

• To clarify, are you asking if this is balanced at all levels? – william porter Jun 5 at 16:55
• @williamporter - Yes, it should be balanced throughout the entire progression. – JohnP Jun 5 at 16:56
• @ObliviousSage - That is...a good point. I am unsure as well, as I've not encountered that process before. – JohnP Jun 5 at 16:57
• @ObliviousSage Having seen this occur before with the [pathfinder-2e] tag, they retagged all those questions to be [pathfinder-2e-playtest] when that tag existed already, so it should be fine. – william porter Jun 5 at 16:59
• (You may want to review the utility or phrasing of phase arrow in light of, for example, this question. While an archer that possesses the bow may have the capacity to shoot through walls, not many archers possess the X-ray vision that actually enables them to do so!) – Hey I Can Chan Jun 5 at 17:13

A few clarifications up front:

First of all, what is a "war bow"? I can't seem to find any Pathfinder stats for such a weapon. The third-party weapons list includes a "warbow" from Adventuring Classes: A Fistful of Denarii by Robert J. Grady; since it's the closest I can find, I'm using its stats for the rest of this answer.

Second, a price of "+5 bonus" doesn't make sense for a unique weapon. Those types of prices are only for adding magic onto some weapon, which isn't what you are doing here. Specific named weapons just get their value written out in gold pieces, something like "50,450 gp" (i.e. the value of a +5 warbow). Anyway, since you're giving this out early on, and treating it like an artifact, any price doesn't make sense; artifacts don't have values attached to them.

Finally, just to clarify in case there is some confusion, being "essentially useless at higher levels" is not necessarily the fate of all magical items that don't "gain[...] abilities as the PC gained levels and powers." With every item, you can always craft new abilities onto it. That is, a +1 war bow can be improved to a +1 adaptive war bow and then to a +1 adaptive distance war bow just by spending the time/money to improve it. Nothing wrong with an item that grows automatically, for sure, but just so you're aware that other options exist.

Now then,

### Bottom line up front: Costs more than a character could ever afford at all but the latest levels

A 1st-level character doesn't get magic weapons, almost-ever. Even at 1st level, this bow is worth about 2,450-2,950 gp—the guidelines would say 8,450 gp, but I think it's more accurate in this case to evaluate adaptive as being equivalent to a composite bow matching the character, so 0 gp for a Strength of 10 (or less), or 500 gp for a Strength of 18 (very high for a 1st-level archer). Still, 500 gp alone is more than 1st-level characters get, to say nothing of the 2,450 gp worth of +1 warbow.

A +1 adaptive warbow doesn't become reasonable until, at a minimum, 4th level. A minor ring of spell storing cannot be afforded before 5th, and isn't really reasonable until 5th or 6th—and that's still going to be a large chunk of that character's wealth.

Then enhance arrows comes in, and blur, adding at a bare minimum another 30,000 gp—at that point the weapon is worth something like 80,000 gp, which is more than a character should have total until 11th level, and not a reasonable amount to have until about 14th level.

Before you even get there, though, you get a ring of spell storing and at-will plane shift, which easily add another 80,000 gp. That value is probably conceivable around 16th level...

...when you've gotten imbue arrow, which is very good.

Basically, until about 17th level or so, this bow is far more valuable than a character could reasonably afford at the given level. And I'm being generous: I am ignoring adaptive and distance as minor quality-of-life ribbons rather than evaluating them how the game says to, and I am ignoring phase arrow (which is probably fair, since it's pretty poor).

### Concern: Obviates the arcane archer prestige class?

This gets enhance arrows, imbue arrow, and phase arrow, three features of the arcane archer prestige class. More importantly, imbue arrow is just about the only reason to ever play an arcane archer, so if someone with this bow already has that, they got the best part of the arcane archer class "for free."

Ultimately, this doesn't concern me that much, since arcane archer isn't really all that great anyway, and the arcane archer would have imbue arrow at a far lower level. On top of that, typical horizon walkers aren't going to have great spellcasting.

### Concern: This incentivizes a probably-unintended usage

This bow is designed for the horizon walker prestige class, but beyond meeting the bare minimum requirements to use the bow, that class doesn't synergize with the bow in any way. That means that someone who wants the other benefits of this bow—most notably imbue arrow without having to use arcane archer—is encouraged to take just two levels of horizon walker, enough to use this bow, and focus the rest of their levels on something more powerful than horizon walker. An 18th-level wizard/2nd-level horizon walker probably becomes the optimal usage. It seems kind of unlikely that you intended that as the best way to use this bow.

Since this is intended for a particular character, this may be a purely theoretical concern, though.

### Concern: Its weapon properties are quite weak

While adaptive is very good (and basically mandatory for a weapon like this), distance is pretty poor—the warbow already has an enormous range, and the opportunities to use an even greater range tend to be few and far between in most campaigns. That is often intentional, since when long-ranged characters can attack from those ranges, a lot of times they can eliminate encounters without ever being endangered. Even if the enemy has similar range with which to respond, other PCs may well not, and therefore not get to participate.

So, assuming you don't often let this character snipe enemies with impunity, you have basically a +1 adaptive warbow, with flaming, frost, or shock so long as you aren't using magic arrows. That's not great, and the enhancement bonus offered by enhance arrows is entirely wasted (the +1 bonus it puts on arrows overlaps with the +1 bonus on the bow).

Consider having enhance arrows improve, eventually to +5, which will enable the bow to pierce damage reduction. That is very important to an archer. Some additional properties on the bow itself would be well worth considering.

Otherwise, this bow and its fantastic utility will spend a lot of time slung across the character's back while they use a real bow for attacks.

### Concern: At-will plane shift might be somewhat game-warping

As an escape button, it's without parallel. It means your entire party can escape nearly any situation in a round, whenever they need to, and then return to approximately the same location whenever they like. That could seriously change how your campaign plays out. The low accuracy on the return trip could be a problem for the heroes in some cases, but at least some of their quests may well not be overly hampered by it—they could literally plane shift in and out at their leisure, waiting until they get a lucky roll on the "distance from target" check. On a wide-open battlefield, they could appear, blast, and return to any safe haven they might care for.

If they're really clever, they can abuse planes with positive energy for healing, planes with fast time for sleeping and recovering spell slots, and so on. Plane shift is a big deal, especially at-will.

### Concern: The final value of this weapon is at least 414,450 gp

The value of a +4 warbow (accounting for the +1, adaptive, distance, and the flaming, frost, or shock available from enhance arrows), a minor cloak of displacement, a minor ring of spell storing, a ring of spell storing, a major ring of spell storing, and a use-activated item of at-will plane shift is a staggering 414,450 gp. That makes it vastly more expensive than any other item in the game excepting only the champion of the gilded host, a colossal construct made of solid gold and heavily magic'd, besides.

And all of this is ignoring imbue arrow and phase arrow, since there aren't guidelines for how to price those.

Now, a 20th-level character is supposed to have 880,000 gp worth of wealth to their name. Such a character could afford all that. And the major ring of spell storing is certainly the largest chunk of value in the bow (nearly half), so prior to that it is a more reasonable value. But it's definitely something to keep in mind. The other characters in the game may need some substantial gear of their own to compensate here.

## Conclusion

Ultimately, as in at 20th level, this is probably fine—it does provide a lot of value, but at those levels characters can reasonably afford it. It has fantastic utility, but is perhaps not as good as a straight weapon. That's a little awkward; the horizon walker may well want some other bow just for attacking with at high levels.

However, while leveling, it is consistently a minimum of 2 levels ahead of the wealth curve, and that's being pretty generous in how I'm evaluating its features. At a bare minimum I would shift everything up a tier, skipping the phase arrow feature of the 9th-level bow. The 1st-level bow would instead be just a "+0" adaptive warbow ("+0" here meaning "masterwork, but counts as magical"), probably—even that is quite good at 1st, but it's not totally unreasonable.

Something like

$$\begin{array}{c c l} \textbf{Character Level} & \textbf{Weapon Level} & \textbf{Weapon Effect} \\ \hline 1^\text{st}-2^\text{nd} & 1^\text{st} & \textit{“+0” adaptive war bow} \\ 3^\text{rd}-4^\text{th} & 2^\text{nd} & \textit{+1 adaptive war bow} \\ 5^\text{th}-6^\text{th} & 3^\text{rd} & \text{Activation ring }\textit{spell storing, minor} \\ 7^\text{th}-8^\text{th} & 4^\text{th} & \text{Enhance arrows} \\ 9^\text{th}-10^\text{th} & 5^\text{th} & \text{Minor displacement (as }\textit{blur}\text{)} \\ 11^\text{th}-12^\text{th} & 6^\text{th} & \textit{+1 adaptive distance* war bow} \\ 13^\text{th}-14^\text{th} & 7^\text{th} & \text{Activation ring }\textit{spell storing, regular} \\ 15^\text{th}-16^\text{th} & 8^\text{th} & \textit{Plane shift}\text{,** 3/day} \\ 17^\text{th}-18^\text{th} & 9^\text{th} & \text{Imbue arrow} \\ 19^\text{th}-20^\text{th} & 10^\text{th} & \text{Activation ring }\textit{spell storing, major} \\ \end{array}$$

To be clear, this is still very good at each of those levels—plenty would argue that it's still too much. I think it's probably OK enough—you're pushing the envelope, but you have to in order to overcome custom content, and besides, assuming we are talking about a horizon walker here, those aren't very powerful and could use the boost.

But I probably would go further. Toning down the major ring of spell storing and adding some per-day limitation on plane shift (perhaps one that scales with level) are probably your best bets for reining in some of the more problematic elements, and that should give you room to improve the bow's usage as an actual weapon, which at this point is fairly lacking.

• The proposed statblock is a +5 weapon enhancement, based on the last lines. – Ifusaso Jun 5 at 18:29
• @Ifusaso Based on the entire introduction and fluff, I'm thinking no, it isn't, and the "+5 enhancement" price is a mistake or misunderstanding, which I addressed right at the beginning of the answer. (The third "clarification up front.") – KRyan Jun 5 at 18:31
• My bad, I was skimming – Ifusaso Jun 5 at 18:55
• You are correct in your identification of the warbow source. – JohnP Jun 5 at 19:54
• The +5 was a misunderstanding of how the pricing/valuation works. It wasn't really intended to be an artifact, although in actuality it probably needs to be. That can be disregarded. – JohnP Jun 5 at 19:56

## Compared to base items, this bow is extremely valuable at almost every level of play.

• Bow level 1 isn't bad, but is worth more money than you have
• +1 weapons aren't typically affordable until about level 3-4. This is approximately 2,500g for a 1st-2nd level character. Luckily no level 1-2 character can afford a +6 weapon, so this is not really an issue. Because the bow was already magical, it's really just the Adaptive special ability.
• Bow level 2 starts to seem... strong
• A Minor Ring of Spell Storing is worth 18,000g and is one of the most versatile and useful items for martial characters with spellcaster allies, and a solid choice for spellcasters as well. This takes away its only drawback (that you can't equip 2 other rings with it).
• Bow level 3 needs to have its language cleaned up and is probably a large jump in power
• This gives level 5+ characters +3 more weapon enhancement included in the price (worth about 32,000g), assuming the +1 was intended to stack and you meant flaming burst or shocking burst. Taking off the burst portion (and the 2d10+ damage on crits) would bring this more in line with the +2 to +3 weapons others would have in this level range.
• Bow level 4 is just plain awesome in power
• A Minor Cloak of Displacement effect means constant Blur... concealment means you can attempt Stealth anywhere, dodge 1/5 of every attack, and save 24,000g. It also removes this item's greatest drawback... that you can't easily replace a Cloak of Resistance.
• Bow level 5 is fairly reasonable most of the time
• The only concerns with level 5 are that it's another effective +1 on the weapon and the rare instance of the wielder being in a position where they can shoot a quarter of a mile. Most battle situations don't allow taking advantage of this, but a Horizon Walker with the Dimension Door spell-like ability and a fly speed or even feather fall could easily rain arrows for multiple turns at long range with almost no penalty.
• Bow level 6 adds a second Ring of Spell Storing, as written
• A regular Ring of Spell Storing is another 50,000g worth of simply amazing item that does not take up a slot.
• Bow level 7 is a very thematic and fun ability
• The value of Plane Shift, unlike the other abilities, is very context-dependent. I consider this an excellent ability to have on an item like you're describing. I would limit it to a number of uses per day unless you're running a campaign where they need to be able to freely travel the planes.
• Bow level 8 puts a melee-only special ability/an entire class feature on a ranged weapon
• Another +1 enhancement equivalent, but not an overpowering special ability. Definitely not going to sit well with anyone else who built for the Arcane Archer prestige class.
• Bow level 9 needs some serious clarification (does Mage Armor defeat it or only apply its AC? Can the wielder sense viable targets? How/to what extent does it nullify concealment bonuses?) and is very powerful
• The limit of twice per day is the only thing that prevents this ability from being extremely powerful
• Bow level 10 should not be a thing
• A Greater Ring of Spell Storing draws the same complaints I have about its predecessors (including the fact that there's no language against the wielder now having 3 Rings of Spell Storing) and adds 200,000g to the value.

What the warbow is:

• a +2 distance (flaming or shock at will) burst adaptive spell storing* (+8 equivalent) weapon
• infinite teleportation, if you include the planes and back
• 3 rings
• a cloak
• conservatively, 430,000g of value by level 19
• consistently valued at 75-2000% of a character's WBL except at the highest levels of play

What the warbow isn't:

• a perfectly optimized ranged weapon
• balanced against existing gear

## What I would do to fix it

Clarify that it is, in fact, an Artifact and has all of the associated benefits/problems of owning an Artifact, including that only one exists. Ensure that any campaign this is in will have at least a few Artifacts to benefit the party.

Then, give the weapon charges that replenish each day. The charges can be static or can increase with the wielder's level. Different abilities cost X charges. Clarify that only the strongest Spell Storing effect is active.

For example, assuming charges = HD:

• Level 1: Adaptive is constant (no charges)
• Level 2: Expend charges equal to the spell level to cast a spell stored in the bow; can store up to 3 levels of spells at once
• Level 3: Spend a charge as a Swift action to add flaming or shock for (CL) rounds. Spend 2 charges for it to last (CL) minutes, or for flaming burst or shocking burst for (CL) rounds.
• Level 4: Spend 2 charges as an Immediate action to benefit from Blur for (CL) minutes
• Level 5: Distance is constant as long as you have Far Shot (no charges)
• Level 6: Expend charges equal to 1/2 the spell level to cast a spell stored in the bow; can store up to 5 levels of spells at once
• Level 7: Expend 5 charges to Plane Shift (with the listed benefits)
• Level 8: Expend 1 charge as a Swift action to add an area spell effect to the launched arrow
• Level 9: Expend 3 charges as a Swift action to fire an arrow that ignores cover and targets touch AC
• Level 10: Expend charges equal to 1/3 the spell level to cast a spell stored in the bow; can store up to 10 levels of spells at once

# Listed below is my analysis/issues with the weapon. It is in no particular order.

• Planar favored terrain is not a feat. Instead it should be "Favored Terrain (any plane) class feature".
• Rather than your item-level-by-character-level chart, consider using a format similar to scaling magic items, where you just list the level the benefit is gained at.
• Burst is not a weapon enchantment.
• Getting +1 adaptive at level 1 is kind of strong. Using normal wealth by level (WBL), players usually cannot afford a +1 bow until 3, and it wouldn't be until 4 that they can afford a +1 adaptive bow. This would also be over half their WBL at that point; however, as @KRyan pointed out in comments, it's more equivalent to a bow with a Str bonus equal to that of the character, so instead of being 40 times WBL (8,000 gp), it's more like 26-28 times WBL.
• Gaining a minor ring of spell storing at level 3 is huge; normally (from experience), a player wouldn't get one until level 10 or so. The cost of a ring of spell storing is 18,000 gp normally, way over the WBL of a level 3 character. If we use the magic item creation rules to price this item, we come to a value of 30,000 gp (18,000 + 8,000 × 1.5) at this point. This disregards any other loot or money they've gained by this point, and is 10 times the normal WBL of a level 3 character.
• Ignoring the fact that you're giving the wielder the benefit of the Arcane Archer prestige class for free, these are all +1 enchantments, bringing the total value of the bow up to 45,000 gp (18,000 + 18,000 × 1.5). This is only roughly 4.5 times the WBL of a level 5 character now.
• A minor cloak of displacement is normally 24,000 gp; not only are they getting it for free, they're also freeing up their shoulder slot for a cloak of resistance, negating one of the downsides of using a cloak of displacement. The weapon now has a cost of 78,000 gp (24,000 + (18,000 + 18,000) × 1.5), a little over 3 times the WBL of a level 7 character.
• Despite this ability requiring a feat, the distance special ability is a +1 enchantment, and increases the value of the weapon.
• A ring of spell storing normally costs 50,000 gp. This brings the total value of the weapon to 161,000 gp (50,000 + (24,000 + 32,000 + 18,000) × 1.5), almost double the WBL of a level 11 character.
• Use-activated plane shift as a spell-like ability is worth 182,000 gp usually (7 × 13 × 2,000). This brings the value of the bow up to 368,000 gp (182,000 + (50,000 + 18,000 + 32,000 + 24,000) × 1.5), which is about 2.5 times WBL. The fact that it's a supernatural ability, and thus doesn't provoke and can't be counterspelled, makes it worth more.
• Levels 15 and 17 give away more of the Arcane Archer class. I'm unable to place a good price on the value of phase arrows; however, I can approximate the worth of imbue arrows. Using the spell storing special ability as a baseline, we can see that imbue arrows is stronger (no limit on spell level), making it a minimum of a +2 enhancement bonus in value (more likely +3, but we'll assume +2 for now). This would bring the cost up to 428,000 gp (182,000 + (50,000 + 18,000 + 72,000 + 24,000) × 1.5) overall, just a bit over a level 17's WBL. Without accounting for the value of these 2 abilities, it finally goes under WBL at level 17.
• A major ring of spell storing is worth 200,000 gp; this makes the bow worth at least 659,000 gp (200,000 + (182,000 + 50,000 + 24,000 + 32,000 + 18,000) × 1.5) (major ring + (use-activated plane shift + ring + cloak + weapon enhancement bonus + minor ring) × 1.5). This is 96% of a level 19's WBL. Applying the approximated value of imbue arrows, it becomes worth 719,000 gp (200,000 + (182,000 + 72,000 + 50,000 + 24,000 + 18,000) × 1.5), which is over a level 19's WBL.

# My recommendations to fix it

• Make it start as only a +1 bow.
• Drop most of the class features it grants. If they want the benefits of being an arcane archer, make them play one. Don't give them an item that grants those benefits for free. Change the bonus damage enchantment to depend on their planar favored terrain (corrosive for the earth plane, flaming for the fire plane, frost for the water plane, shock for the air plane, etc.), with their choice per arrow if they have multiple planar favored terrains.
• Only have a single ring of spell storing: start off with a minor one, and then upgrade it; don't give all three types.
• Limit the usage of the cloak of displacement effect; give it a set number of rounds it can be used per day.

Applying these changes, I end up with the following item:

[Insert name here]'s Warbow

Pricing and whatnot

This +1 darkwood composite longbow is intricately carved with arcane runes, and glows with a faint amber glow in dim light. A white leather grip is secured to the bow with three silver bands, one each at the top, middle and bottom of the grip.

3rd Level: The bow gains the Adaptive special quality.

6th Level: The bow gains a special ability depending on what plane the wielder has selected as their favored terrain. If the wielder has selected multiple favored terrains, they must select only one to use for this ability. This choice can be changed each time they shoot an arrow. [Insert Table of planes and enchantment]

9th Level: The bow gains the ability to act as a minor ring of spell storing.

12th Level: As a free action, the wielder of the bow can activate it, letting them act as though affected by a blur spell for 1 round. This ability can be used a number of times per day equal to the wielder's character level.

15th Level: The leather wrap darkens and close inspection reveals swirls. The wielder of the bow gains the ability to use plane shift as a spell-like ability once per day with a caster level equal to the wielder's character level. This can only be used to plane shift to a plane the wielder has selected as a favored terrain; if they have not selected a plane as a favored terrain, they cannot use this spell-like ability.

18th Level: The bow now gains the ability to act as a ring of spell storing instead of a minor ring of spell storing.

• I do need to finish my analysis of the issues with it still, but I have other things that I need to work on. I'll edit this again later. – william porter Jun 5 at 18:49
• The 1st-level issue is kind of irrelevant since you need planar favored terrain to use the bow... unless that referred solely to plane shift? Now I'm not sure. Anyway, adaptive may be valued at +1, but realistically characters' Strength scores don't change all that often, particularly for archers, so it's more accurately priced like a composite bow that just happens to match this particular character's Strength. And that should be a lot less than 8,000 gp. – KRyan Jun 5 at 19:12
• Rereading, I see I am wrong about the planar terrain mastery requirement. Ah, that does change some things. – KRyan Jun 5 at 19:13
• I mean, while not bad as a 1st-level item, it's still 40 × WBL at minimum, since it'd be 8,000 gp for a +1 adaptive bow. Remember the bonus Str of a regular bow costs +100 per +1 bonus, IIRC. – william porter Jun 5 at 19:14
• If it was the +1 adaptive on its own as a whole, I wouldn't mind it; it's just that when combined with everything else... – william porter Jun 5 at 19:17
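The running totals in the analysis above follow Pathfinder's rule of thumb for stacking multiple different abilities on one item: the most expensive ability is priced in full, and every other ability at 1.5× its normal price. A minimal Python sketch of that arithmetic (the function name is my own; the component prices are the ones quoted in the answer):

```python
def combined_price(component_prices):
    """Pathfinder multi-ability pricing rule of thumb:
    most expensive component at full price, the rest at 1.5x."""
    prices = sorted(component_prices, reverse=True)
    return prices[0] + sum(1.5 * p for p in prices[1:])

# Bow at 3rd level: minor ring of spell storing (18,000 gp)
# plus the magic bow itself (valued at 8,000 gp in the answer).
print(combined_price([18_000, 8_000]))  # 30000.0

# Bow at 19th level: major ring (200,000), use-activated plane shift
# (182,000), ring of spell storing (50,000), minor cloak of displacement
# (24,000), weapon enhancement (32,000), minor ring (18,000).
print(combined_price([200_000, 182_000, 50_000, 24_000, 32_000, 18_000]))  # 659000.0
```

This reproduces the 30,000 gp and 659,000 gp totals computed in the answer; the other intermediate totals follow the same pattern with their own component lists.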
<!DOCTYPE html>
<html >
<head>
<meta charset="UTF-8">
	<title>Reset Password</title>
<link rel="stylesheet" href="{{ asset('css/normalize.css')}}">
<link rel="stylesheet" href="{{ asset('css/style.css')}}">
</head>
<body>
@if (session('status'))
<div class="alert alert-success">
{{ session('status') }}
</div>
@endif
<div class="logmod">
<div class="logmod__wrapper">
<div class="logmod__container">
<ul class="logmod__tabs reset_password">
<li data-tabtar="lgm-1">Reset Password</li>
</ul>
<div class="logmod__tab-wrapper">
<div class="logmod__tab lgm-1">
<div class="logmod__form">
<form accept-charset="utf-8" role="form" method="POST" action="{{ url('/password/reset') }}" class="simform">
{!! csrf_field() !!}
<input type="hidden" name="token" value="{{ $token }}">
<div class="sminputs">
<div class="input full">
<label class="string optional" for="email">Email*</label>
<input class="string optional" maxlength="255" id="email" name="email" placeholder="Email" type="email" size="50" />
</div>
@if ($errors->has('email'))
<span class="help-block">
<strong>{{ $errors->first('email') }}</strong>
</span>
@endif
</div>
<div class="sminputs">
<div class="input full">
                                            <label class="string optional" for="password">New Password*</label>
<input class="string optional" maxlength="255" id="password" name="password" placeholder="New Password" type="password" size="50" />
</div>
@if ($errors->has('password'))
<span class="help-block">
<strong>{{ $errors->first('password') }}</strong>
</span>
@endif
</div>
<div class="sminputs">
<div class="input full">
<label class="string optional" for="password_confirmation">Confirm Password*</label>
<input class="string optional" maxlength="255" id="password_confirmation" name="password_confirmation" placeholder="Confirm Password" type="password" size="50" />
</div>
@if ($errors->has('password_confirmation'))
<span class="help-block">
<strong>{{ $errors->first('password_confirmation') }}</strong>
</span>
@endif
</div>
<div class="simform__actions">
<button class="sumbit" type="submit">Reset Password</button>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
<script src="{{ asset('js/jquery.js')}}"></script>
<script src="{{ asset('js/index.js') }}"></script>
</body>
</html>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,516 |
\section{Local Interpretation of the Polyak Step-Size}
In this section, we provide two results that shed light on a geometrical interpretation of the Polyak step-size.
First, proposition \ref{prop:nr_step} provides a proximal interpretation for the standard Polyak step-size.
Second, proposition \ref{prop:prox_step} gives a similar result when using a maximal learning-rate, which corresponds to the update used by ALI-G.
\begin{restatable}{proposition}{propnrstep} \label{prop:nr_step}
Suppose that the problem is unconstrained: $\Omega = \mathbb{R}^p$.
Let $\mathbf{w}_{t+1} = \mathbf{w}_{t} - \frac{f(\mathbf{w}_t) - f_\star}{\|\nabla f(\mathbf{w}_t) \|^2 } \nabla f(\mathbf{w}_t)$.
Then $\mathbf{w}_{t+1}$ verifies:
\begin{equation}
\mathbf{w}_{t+1} = \argmin_{\mathbf{w} \in \mathbb{R}^p} \|\mathbf{w} - \mathbf{w}_t\| \text{ subject to: } f(\mathbf{w}_t) + \nabla f(\mathbf{w}_t)^\top (\mathbf{w} - \mathbf{w}_t) = f_\star,
\end{equation}
where we recall that $f_\star$ is the minimum of $f$, and $\mathbf{w} \mapsto f(\mathbf{w}_t) + \nabla f(\mathbf{w}_t)^\top (\mathbf{w} - \mathbf{w}_t)$ is the linearization of $f$ at $\mathbf{w}_t$.
In other words, $\mathbf{w}_{t+1}$ is the closest point to $\mathbf{w}_{t}$ that lies on the hyper-plane $f(\mathbf{w}_t) + \nabla f(\mathbf{w}_t)^\top (\mathbf{w} - \mathbf{w}_t) = f_\star$.
\end{restatable}
\begin{proof}
First we show that $\mathbf{w}_{t+1}$ satisfies the linear equality constraint:
\begin{leftlinebox}[nobreak=false]
\begin{equation}
\begin{split}
&\kern-1em f(\mathbf{w}_t) + \nabla f(\mathbf{w}_t)^\top (\mathbf{w}_{t+1} - \mathbf{w}_t) \\
&=f(\mathbf{w}_t) + \nabla f(\mathbf{w}_t)^\top \left(-\dfrac{f(\mathbf{w}_t) - f_\star}{\|\nabla f(\mathbf{w}_t) \|^2} \nabla f(\mathbf{w}_t) \right), \\
&=f(\mathbf{w}_t) - f(\mathbf{w}_t) + f_\star, \\
&= f_\star.
\end{split}
\end{equation}
\end{leftlinebox}
Now let us show that it has a minimal distance to $\mathbf{w}_t$.
\begin{leftlinebox}[nobreak=false]
We take $\hat{{\bm{w}}} \in \mathbb{R}^p$ a solution of the linear equality constraint, and we will show that $\|\mathbf{w}_{t+1} - \mathbf{w}_t\| \leq \|\hat{{\bm{w}}} - \mathbf{w}_t\|$.
By definition, we have that $\hat{{\bm{w}}}$ satisfies:
\begin{equation}
f(\mathbf{w}_t) + \nabla f(\mathbf{w}_t)^\top (\hat{{\bm{w}}} - \mathbf{w}_t) = f_\star.
\end{equation}
Now we can write:
\begin{equation}
\begin{split}
\|\mathbf{w}_{t+1} - \mathbf{w}_t\|
&= \| \dfrac{f(\mathbf{w}_t) - f_\star}{\|\nabla f(\mathbf{w}_t) \|^2} \nabla f(\mathbf{w}_t) \|, \\
&= \dfrac{f(\mathbf{w}_t) - f_\star}{\|\nabla f(\mathbf{w}_t) \|}, \\
&= \dfrac{|\nabla f(\mathbf{w}_t)^\top (\hat{{\bm{w}}} - \mathbf{w}_t)|}{\|\nabla f(\mathbf{w}_t) \|}, \\
&\leq \dfrac{||\nabla f(\mathbf{w}_t)\| \|\hat{{\bm{w}}} - \mathbf{w}_t\|}{\|\nabla f(\mathbf{w}_t) \|}, \quad \text{(Cauchy-Schwarz)} \\
&= \|\hat{{\bm{w}}} - \mathbf{w}_t\|.
\end{split}
\end{equation}
\end{leftlinebox}
\end{proof}
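To make the geometry concrete, the following Python snippet (our own illustration, not part of the original analysis) checks both claims of the proposition on a toy quadratic with known minimum value $f_\star = 0$: the Polyak update lands exactly on the hyperplane where the linearization equals $f_\star$, and any orthogonal offset along that hyperplane can only increase the distance to $\mathbf{w}_t$.

```python
import numpy as np

# Toy quadratic with known minimum value: f(w) = 0.5 ||A w - b||^2 and
# f_star = 0, since the linear system A w = b is consistent by construction.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
b = A @ rng.standard_normal(3)
f = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
grad = lambda w: A.T @ (A @ w - b)
f_star = 0.0

w_t = rng.standard_normal(3)
g = grad(w_t)
w_next = w_t - (f(w_t) - f_star) / (g @ g) * g     # Polyak step

# (i) the linearization of f at w_t, evaluated at w_next, equals f_star
lin = f(w_t) + g @ (w_next - w_t)

# (ii) w_next is the closest point of that hyperplane to w_t: adding any
# offset v orthogonal to g stays on the hyperplane but increases the distance.
v = rng.standard_normal(3)
v -= (v @ g) / (g @ g) * g                          # project v onto g's orthogonal
w_hat = w_next + v
print(np.isclose(lin, f_star),
      np.linalg.norm(w_next - w_t) <= np.linalg.norm(w_hat - w_t))
```

Both printed values are True: the offset point `w_hat` still satisfies the linear constraint, yet is strictly farther from `w_t`.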
\begin{restatable}{proposition}{thproxstep}[Proximal Interpretation] \label{prop:prox_step}
Suppose that $\Omega = \mathbb{R}^p$ and let $\delta = 0$.
We consider the update performed by SGD: ${\bm{w}}_{t+1}^{\text{SGD}} = {\bm{w}}_t - \eta_t \nabla \ell_{z_t}({\bm{w}}_t)$; and the update performed by ALI-G: ${\bm{w}}_{t+1}^{\text{ALI-G}} = {\bm{w}}_t - \gamma_t \nabla \ell_{z_t}({\bm{w}}_t)$, where $\gamma_t = \min\left\{\frac{\ell_{z_t}({\bm{w}}_t)}{\| \nabla \ell_{z_t}({\bm{w}}_t)\|^2 + \delta}, \eta \right\}$.
Then we have:
\begin{align}
{\bm{w}}_{t+1}^{\text{SGD}} &= \argmin_{{\bm{w}} \in \mathbb{R}^p} \Big\{
\frac{1}{2 \eta_t} \| {\bm{w}} - {\bm{w}}_t \|^2 + \ell_{z_t}({\bm{w}}_t) + \nabla \ell_{z_t}({\bm{w}}_t)^\top ({\bm{w}} - {\bm{w}}_t) \Big\}, \\
{\bm{w}}_{t+1}^{\text{ALI-G}} &= \argmin_{{\bm{w}} \in \mathbb{R}^p} \Big\{
\frac{1}{2 \eta} \| {\bm{w}} - {\bm{w}}_t \|^2 +
\max \left\{\ell_{z_t}({\bm{w}}_t) + \nabla \ell_{z_t}({\bm{w}}_t)^\top ({\bm{w}} - {\bm{w}}_t), 0 \right\} \Big\}. \label{eq:prox_pb}
\end{align}
\end{restatable}
\begin{proof} \
In order to make the notation simpler, we use $\bm{d}_t \triangleq \nabla \ell_{z_t} (\mathbf{w}_t)$ and $l_t \triangleq \ell_{z_t} (\mathbf{w}_t)$. \\
First, let us consider $\bm{d}_t = \bm{0}$.
\begin{leftlinebox}[nobreak=false]
Then we choose $\gamma_t=0$ and it is clear that $\mathbf{w}_{t+1} = \mathbf{w}_t - \gamma_t \bm{d}_t = \mathbf{w}_t$ is the optimal solution of problem (\ref{eq:prox_pb}).
\end{leftlinebox}
We now assume $\bm{d}_t \neq \bm{0}$.
\begin{leftlinebox}[nobreak=false]
We can successively rewrite the proximal problem (\ref{eq:prox_pb}) as:
\begin{align*}
\min_{\mathbf{w} \in \mathbb{R}^p}
&\left\{ \dfrac{1}{2 \eta} \| \mathbf{w} - \mathbf{w}_t \|^2 + \max \left\{\ell_{z_t} (\mathbf{w}_t) + \nabla \ell_{z_t} (\mathbf{w}_t)^\top (\mathbf{w} - \mathbf{w}_t), 0 \right\} \right\}, \\
\min_{\mathbf{w} \in \mathbb{R}^p}
&\left\{ \dfrac{1}{2 \eta} \| \mathbf{w} - \mathbf{w}_t \|^2 + \max \left\{l_t + \bm{d}_t^\top (\mathbf{w} - \mathbf{w}_t), 0 \right\} \right\}, \\
\min_{\mathbf{w} \in \mathbb{R}^p, \upsilon}
&\left\{ \dfrac{1}{2 \eta} \| \mathbf{w} - \mathbf{w}_t \|^2 + \upsilon \right\} \: \text{subject to: } \: \upsilon \geq 0, \: \upsilon \geq l_t + \bm{d}_t^\top (\mathbf{w} - \mathbf{w}_t) \\
\min_{\mathbf{w} \in \mathbb{R}^p, \upsilon} \sup_{\mu, \nu \geq 0}
&\left\{ \dfrac{1}{2 \eta} \| \mathbf{w} - \mathbf{w}_t \|^2 + \upsilon - \mu \upsilon - \nu (\upsilon - l_t - \bm{d}_t^\top (\mathbf{w} - \mathbf{w}_t)) \right\} \\
\sup_{\mu, \nu \geq 0} \min_{\mathbf{w} \in \mathbb{R}^p, \upsilon}
&\left\{ \dfrac{1}{2 \eta} \| \mathbf{w} - \mathbf{w}_t \|^2 + \upsilon - \mu \upsilon - \nu (\upsilon - l_t - \bm{d}_t^\top (\mathbf{w} - \mathbf{w}_t)) \right\}, \stepcounter{equation}\tag{\theequation} \\
\end{align*}
where the last equation uses strong duality, which holds here because the problem is convex with affine inequality constraints.
The inner problem is now smooth in $\mathbf{w}$ and $\upsilon$. We write its KKT conditions:
\begin{equation}
\dfrac{\partial \cdot}{\partial \upsilon} = 0: \quad 1 - \mu - \nu = 0
\end{equation}
\begin{equation}
\dfrac{\partial \cdot}{\partial \mathbf{w}} = 0: \quad \frac{1}{\eta}(\mathbf{w} - \mathbf{w}_t) + \nu \bm{d}_t = \bm{0}
\end{equation}
We plug in these results and obtain:
\begin{align*}
\sup_{\mu, \nu \geq 0}
&\left\{ \dfrac{1}{2 \eta} \| \eta \nu \bm{d}_t \|^2 + \nu (l_t + \bm{d}_t^\top (-\eta \nu \bm{d}_t)) \right\} \\
\text{subject to: } \quad &\mu + \nu = 1 \\
\sup_{\nu \in [0, 1]}
&\left\{ \dfrac{\eta}{2} \nu^2 \| \bm{d}_t \|^2 + \nu l_t - \eta \nu^2 \| \bm{d}_t \|^2 \right\} \\
\sup_{\nu \in [0, 1]}
&\left\{ -\dfrac{\eta}{2} \nu^2 \| \bm{d}_t \|^2 + \nu l_t \right\} \stepcounter{equation}\tag{\theequation}
\end{align*}
This is a one-dimensional quadratic problem in $\nu$.
It can be solved in closed form by finding the global maximum of the quadratic objective and projecting the solution onto $[0, 1]$.
We have:
\begin{equation}
\dfrac{\partial \cdot}{\partial \nu} = 0: - \eta \nu \| \bm{d}_t \|^2 + l_t = 0
\end{equation}
Since $\bm{d}_t \neq \bm{0}$ and $\eta \neq 0$, this gives the optimal solution:
\begin{equation}
\nu = \min \left\{ \max \left\{\dfrac{l_t}{\eta \|\bm{d}_t\|^2}, 0 \right\}, 1 \right\} = \min \left\{\dfrac{l_t}{\eta \|\bm{d}_t\|^2}, 1 \right\},
\end{equation}
since $l_t, \eta, \|\bm{d}_t\|^2 \geq 0$. \\
Plugging this back in the KKT conditions, we obtain that the solution $\mathbf{w}_{t+1}$ of the primal problem can be written as:
\begin{equation}
\begin{split}
\mathbf{w}_{t+1}
&= \mathbf{w}_t - \eta \nu \bm{d}_t, \\
&= \mathbf{w}_t - \eta \min \left\{\dfrac{l_t}{\eta \|\bm{d}_t\|^2}, 1 \right\} \bm{d}_t, \\
&= \mathbf{w}_t - \eta \min \left\{\dfrac{\ell_{z_t}(\mathbf{w}_t)}{\eta \|\nabla \ell_{z_t}(\mathbf{w}_t)\|^2}, 1 \right\} \nabla \ell_{z_t}(\mathbf{w}_t), \\
&= \mathbf{w}_t - \min \left\{\dfrac{\ell_{z_t}(\mathbf{w}_t)}{\|\nabla \ell_{z_t}(\mathbf{w}_t)\|^2}, \eta \right\} \nabla \ell_{z_t}(\mathbf{w}_t). \\
\end{split}
\end{equation}
\end{leftlinebox}
\end{proof}
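As a numerical sanity check of the proximal interpretation (our own sketch; the values standing in for $\ell_{z_t}(\mathbf{w}_t)$ and $\nabla \ell_{z_t}(\mathbf{w}_t)$ are arbitrary), the closed-form ALI-G update should attain a lower value of the proximal objective than any perturbation of it:

```python
import numpy as np

# Hypothetical values standing in for l_{z_t}(w_t) and its gradient d_t.
rng = np.random.default_rng(1)
eta = 0.5
w_t = rng.standard_normal(5)
d = rng.standard_normal(5)
l = 2.0

gamma = min(l / (d @ d), eta)           # ALI-G step-size with delta = 0
w_alig = w_t - gamma * d

def prox_obj(w):
    """Objective of the proximal problem in the proposition."""
    return 0.5 / eta * np.sum((w - w_t) ** 2) + max(l + d @ (w - w_t), 0.0)

# The closed-form update should beat random perturbations of itself.
others = [prox_obj(w_alig + 0.1 * rng.standard_normal(5)) for _ in range(1000)]
print(prox_obj(w_alig) <= min(others))
```

This prints True: the objective is strongly convex, so its unique minimizer — which the proposition identifies with the ALI-G update — dominates every perturbed point.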
\section{Summary of Convergence Results}
\label{app:sec:convergence}
\paragraph{Problem Formulation.}
We recall the problem setting as follows.
The learning task can be expressed as the problem $(\mathcal{P})$ of finding a feasible vector of parameters $\mathbf{w_\star} \in \Omega$ that minimizes $f$:
\begin{equation} \tag{$\mathcal{P}$} \label{eq:main_problem}
\mathbf{w_\star} \in \argmin\limits_{{\bm{w}} \in \Omega} f({\bm{w}}).
\end{equation}
Also note that $f_\star$ refers to the minimum value of $f$ over $\Omega$: $f_\star \triangleq \min_{{\bm{w}} \in \Omega} f({\bm{w}})$.
In the remainder of this section, we give an overview of convergence results of ALI-G in various stochastic settings.
First, we summarize convergence results in the convex setting in section \ref{subsec:summary_cvx_results}.
Notably, these results show convergence for any maximal learning-rate $\eta$, including $\eta = \infty$, which is equivalent to not using any clipping to a maximal value.
Second, we give results for a class of non-convex problems.
These results show that a maximal learning-rate is necessary and sufficient for convergence of the Polyak step-size.
Indeed, we show that the Polyak step-size can oscillate indefinitely without a maximal learning-rate, and that using a maximal learning-rate provably leads to (exponentially fast) convergence.
\subsection{Convex Setting}
\label{subsec:summary_cvx_results}
For simplicity, we assume that we are in the perfect interpolation setting: $\forall z, \: \ell_z(\mathbf{w_\star}) = 0$.
Detailed results with an interpolation tolerance $\varepsilon > 0$ are given in section \ref{app:sec:detailed_cvx_results}.
Since we are in the perfect interpolation setting, note that we can safely set the small constant for numerical stability to zero: $\delta = 0$.
The summary of the results is presented in table \ref{tab:cvx_results}.
\begin{table}[ht]
\centering
\footnotesize
\begin{tabular}{llcc}
\toprule
Assumption on Loss Functions &Distance Considered & \multicolumn{2}{c}{Convergence Rate} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-4}
&& Small $\eta$ & Large $\eta$ (potentially $\infty$) \\
\cmidrule(lr){3-3} \cmidrule(lr){4-4}
Convex and $C$-Lipschitz & $\mathbb{E} \left[f\left(\tfrac{1}{T+ 1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right] - f_\star$ & $\tfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{\eta (T + 1)} + \sqrt{ \tfrac{C^2 \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1}}$ & $\sqrt{ \tfrac{C^2 \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1}} $ \\
Convex and $\beta$-Smooth & $\mathbb{E} \left[f\left(\tfrac{1}{T+ 1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right] - f_\star$ & $\tfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{\eta (T+1)}$ & $\tfrac{2 \beta \|\mathbf{w}_{0} - \mathbf{w_\star}\|^2}{T + 1}$ \\
$\alpha$-Strongly Convex and $\beta$-Smooth & $\mathbb{E}[f(\mathbf{w}_{T+1})] - f_\star$ & $\tfrac{\beta}{2} \exp \left(\tfrac{-\alpha \eta T }{2} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2$ & $\tfrac{\beta}{2} \exp\left(- \tfrac{\alpha T}{4 \beta} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2$ \\
\bottomrule
\end{tabular}
\caption{\em
Summary of convergence rates for convex problems in the perfect interpolation setting.
Recall that $\eta$ denotes the hyper-parameter used by ALI-G to clip its learning-rate to a maximal value.
Our convergence results yield different results when $\eta$ has a small value (middle column), and when $\eta$ has a large, possibly even infinite, value (right column).
The formal statements of these results are available in section \ref{app:sec:detailed_cvx_results}, along with their proofs.
}
\label{tab:cvx_results}
\end{table}
The overall convergence speed is similar to that of \emph{non-stochastic} Polyak step-size, which is itself the same as the optimal rate of \emph{non-stochastic} gradient descent: $\mathcal{O}(1/\sqrt{T})$ for convex Lipschitz functions, $\mathcal{O}(1/T)$ for convex and smooth functions, and $\mathcal{O}(\exp(-kT))$ (for some constant $k$) for smooth and strongly convex functions \citep{Hazan2019}.
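These rates can be observed on a toy problem. The Python sketch below (our own illustration, with losses $\ell_z(\mathbf{w}) = |z^\top \mathbf{w}|$ we chose so that $\mathbf{w_\star} = 0$ interpolates perfectly) runs ALI-G with a large maximal learning-rate and checks that the averaged iterate drives the objective far below its initial value:

```python
import numpy as np

# Toy interpolation problem: losses l_z(w) = |z^T w| all vanish at w_star = 0,
# and each is convex and Lipschitz.
rng = np.random.default_rng(2)
Z = rng.standard_normal((50, 10))
f = lambda w: np.mean(np.abs(Z @ w))

eta, delta = 10.0, 1e-8                 # large maximal learning-rate, tiny delta
w = rng.standard_normal(10)
f0 = f(w)
avg, T = np.zeros(10), 4000
for t in range(T):
    avg += w                            # average the iterates w_0, ..., w_{T-1}
    z = Z[rng.integers(len(Z))]
    loss = abs(z @ w)
    g = np.sign(z @ w) * z              # subgradient of the sampled loss
    gamma = min(loss / (g @ g + delta), eta)   # ALI-G step-size
    w -= gamma * g
avg /= T
print(f0, f(avg))                       # the averaged iterate is far better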
\subsection{Non-Convex Setting}
We also assume that we are in the perfect interpolation setting and thus we set the constant for numerical stability $\delta$ to zero.
We further assume that the problem is unconstrained.
The summary of the results is presented in table \ref{tab:noncvx_results}.
\begin{table}[H]
\centering
\footnotesize
\begin{tabular}{cc}
\toprule
\multicolumn{2}{c}{Convergence Result} \\
\midrule
$0 < \eta \leq \tfrac{2 \alpha}{\beta^2}$ & $\eta=\infty$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2}
$ f(\mathbf{w}_{T+1}) - f_\star \leq \tfrac{\beta}{2} \exp \left( - \kappa T \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2$ & Can Fail to Converge (Proved) \\
\bottomrule
\end{tabular}
\caption{\em
Summary of convergence results for $\alpha$-RSI and $\beta$-smooth loss functions in the perfect interpolation setting.
Recall that $\eta$ denotes the hyper-parameter used by ALI-G to clip its learning-rate to a maximal value.
The constant $\kappa$ depends on $\alpha$, $\beta$ and $\eta$.
These results show that using a maximal learning-rate is necessary and sufficient for convergence.
The formal statements of these results are available in section \ref{app:sec:detailed_noncvx_results}, along with their proofs.
}
\label{tab:noncvx_results}
\end{table}
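The role of the cap can be seen on a hypothetical one-dimensional function of our choosing (purely illustrative of the clipping mechanism; it is not the RSI construction behind the formal statements): on the flat tails of $\ell(w) = 1 - \exp(-w^2)$, the raw Polyak step $\ell(w)/\ell'(w)^2$ is enormous and overshoots wildly, while clipping at $\eta$ keeps the step bounded.

```python
import numpy as np

# Hypothetical 1-D loss with flat tails: l(w) = 1 - exp(-w^2), l_star = 0 at w = 0.
l = lambda w: 1.0 - np.exp(-w * w)
dl = lambda w: 2.0 * w * np.exp(-w * w)

def polyak_step(w, eta=np.inf):
    gamma = min(l(w) / dl(w) ** 2, eta)  # Polyak step-size, clipped at eta
    return w - gamma * dl(w)

w0 = 3.0
print(polyak_step(w0))                   # uncapped: a huge overshoot past w = 0
print(polyak_step(w0, eta=1.0))          # capped: a small, controlled step
```

Starting from `w0 = 3.0`, the loss is close to 1 while the gradient is tiny, so the uncapped step jumps thousands of units away from the minimizer; the capped step barely moves but stays in a controlled neighbourhood.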
\section{Detailed Convex Results}
\label{app:sec:detailed_cvx_results}
\subsection{Lipschitz Convex Functions}
\begin{restatable}{theorem}{thaligcvxlargeeta}\label{th:alig_cvx_large_eta}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is convex and $C$-Lipschitz.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$.
We further assume that $\eta > \frac{\varepsilon}{\delta}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
\begin{split}
\mathbb{E} \left[f\left(\frac{1}{T+ 1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right] - f_\star
&\leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{(\eta - \frac{\varepsilon}{\delta}) (T + 1)} + \dfrac{\varepsilon^2}{\delta (\eta - \frac{\varepsilon}{\delta})} \\
&\quad + \sqrt{ \dfrac{(C^2 + \delta) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1}} + \varepsilon \sqrt{\dfrac{C^2}{\delta} + 1}.
\end{split}
\end{equation}
\end{restatable}
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
We consider the update at time $t$, which we condition on the draw of $z_t \in \mathcal{Z}$:
\begin{align*}
&\kern-1em \| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \\
&= \| \Pi_\Omega(\mathbf{w}_t - \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)) - \mathbf{w_\star} \|^2 \\
&\leq \| \mathbf{w}_t - \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t) - \mathbf{w_\star} \|^2 \eqcomment{$\Pi_\Omega$ projection} \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)^\top(\mathbf{w}_{t} - \mathbf{w_\star}) + \gamma_t^2 \| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)^\top(\mathbf{w}_{t} - \mathbf{w_\star}) + \gamma_t \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 \\
&\quad \eqcomment{because $\gamma_t \leq \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)^\top(\mathbf{w}_{t} - \mathbf{w_\star}) + \gamma_t \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2} \| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 \\
&\quad \eqcomment{because $\ell_{z_t}(\mathbf{w}_t) \geq 0$ and $\delta \geq 0$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w}_t) \eqcomment{convexity of $\ell_{z_t}$} \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}) \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}) \stepcounter{equation}\tag{\theequation}\label{eq:alig_cvx_basic_iterate_bound}
\end{align*}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
We now consider different cases, according to the value that $\gamma_t$ takes: $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$ or $\gamma_t = \eta$.
\begin{leftlinebox}[nobreak=false]
First, suppose that $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$.
Then we have:
\begin{align*}
&\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t \Big( \ell_{z_t}(\mathbf{w}_t) - 2 \ell_{z_t}(\mathbf{w_\star}) \Big)\\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{1}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \Big( \ell_{z_t}(\mathbf{w}_t)^2 - 2 \ell_{z_t}(\mathbf{w}_t) \ell_{z_t}(\mathbf{w_\star}) \Big) \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{1}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \Big( (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2 - \ell_{z_t}(\mathbf{w_\star})^2 \Big) \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} + \dfrac{\ell_{z_t}(\mathbf{w_\star})^2}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{C^2 + \delta} + \dfrac{\ell_{z_t}(\mathbf{w_\star})^2}{\delta} \\
&\quad \eqcomment{because we have $0 \leq \| \nabla \ell_{z_t}(\mathbf{w}_t)\|^2 \leq C^2$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{C^2 + \delta} + \dfrac{\varepsilon^2}{\delta} \eqcomment{definition of $\varepsilon$} \stepcounter{equation}\tag{\theequation} \label{eq:iterate_bound_alig_cvx_1}
\end{align*}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Now suppose $\gamma_t = \eta$ and $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \leq 0$.
We can use $\gamma_t \leq \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$ to write:
\begin{equation}
\begin{split}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \ell_{z_t}(\mathbf{w_\star}),
\end{split}
\end{equation}
where the last inequality uses $\gamma_t \leq \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$, $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \leq 0$ and $\ell_{z_t}(\mathbf{w_\star}) \geq 0$.
Therefore we are exactly in the same situation as the first case (where we used $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$), and thus we have again:
\begin{equation} \label{eq:iterate_bound_alig_cvx_2}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{C^2 + \delta} + \dfrac{\varepsilon^2}{\delta}.
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Now suppose that $\gamma_t = \eta$ and $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$.
The inequality (\ref{eq:alig_cvx_basic_iterate_bound}) gives:
\begin{equation} \label{eq:iterate_bound_alig_cvx_3}
\begin{split}
&\kern-1em \| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \eqcomment{$\gamma_t = \eta$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \varepsilon, \eqcomment{definition of $\varepsilon$, $\gamma_t \geq 0$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \varepsilon \dfrac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}, \\
&\quad \eqcomment{because $\gamma_t \leq \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$, $\varepsilon \geq 0$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \varepsilon \dfrac{\ell_{z_t}(\mathbf{w}_t)}{\delta}, \\
&\quad \eqcomment{because $\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 \geq 0$} \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \varepsilon \dfrac{\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) + \ell_{z_t}(\mathbf{w_\star})}{\delta}, \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \varepsilon \dfrac{\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) + \varepsilon}{\delta}, \\
&\quad \eqcomment{because $\ell_{z_t}(\mathbf{w_\star}) \leq \varepsilon$} \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \left(\eta - \dfrac{\varepsilon}{\delta}\right) (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\varepsilon^2}{\delta}.
\end{split}
\end{equation}
\end{leftlinebox}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
We now introduce $\mathcal{I}_T$ and $\mathcal{J}_T$ as follows:
\begin{equation}
\begin{split}
\mathcal{I}_T &\triangleq \left\{ t \in \{0, ..., T\} : \gamma_t = \eta \ \text{and} \ \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0 \right\} \\
\mathcal{J}_T &\triangleq \{0, ..., T\} \ \backslash \ \mathcal{I}_T
\end{split}
\end{equation}
Then, by combining inequalities (\ref{eq:iterate_bound_alig_cvx_1}), (\ref{eq:iterate_bound_alig_cvx_2}) and (\ref{eq:iterate_bound_alig_cvx_3}), and using a telescopic sum, we obtain:
\begin{equation}
\begin{split}
\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2
&\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \sum\limits_{t \in \mathcal{J}_T} \left( -\dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{C^2 + \delta} + \dfrac{\varepsilon^2}{\delta}\right) \\
&\qquad + \sum\limits_{t \in \mathcal{I}_T} \left( -\left(\eta - \dfrac{\varepsilon}{\delta}\right) (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\varepsilon^2}{\delta} \right)
\end{split}
\end{equation}
Using $\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2 \geq 0$, we obtain:
\begin{equation} \label{eq:alig_cvx_two_terms_bounded}
\begin{split}
\dfrac{1}{C^2 + \delta} \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2 + \left(\eta - \dfrac{\varepsilon}{\delta}\right) \sum\limits_{t \in \mathcal{I}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \\
\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta}
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
In particular, the inequality (\ref{eq:alig_cvx_two_terms_bounded}) gives that:
\begin{equation} \label{eq:alig_cvx_sum_normal}
\left(\eta - \dfrac{\varepsilon}{\delta}\right) \sum\limits_{t \in \mathcal{I}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta}.
\end{equation}
Furthermore, for every $t \in \mathcal{I}_T$, we have $(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \geq 0$, which yields $\left(\eta - \frac{\varepsilon}{\delta}\right) \sum\limits_{t \in \mathcal{I}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \geq 0$ since $\eta > \frac{\varepsilon}{\delta}$.
Thus the inequality (\ref{eq:alig_cvx_two_terms_bounded}) also gives:
\begin{equation}
\dfrac{1}{C^2 + \delta} \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2
\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta}.
\end{equation}
Using the Cauchy-Schwarz inequality, we can further write:
\begin{equation}
\left( \sum\limits_{t \in \mathcal{J}_T} \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \right)^2
\leq |\mathcal{J}_T| \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2.
\end{equation}
Therefore we have:
\begin{equation} \label{eq:alig_cvx_sum_squares}
\begin{split}
\sum\limits_{t \in \mathcal{J}_T} \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})
&\leq \sqrt{|\mathcal{J}_T| \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}, \\
&\leq \sqrt{|\mathcal{J}_T| (C^2 + \delta) \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta} \right)}. \\
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
We can now put together inequalities (\ref{eq:alig_cvx_sum_normal}) and (\ref{eq:alig_cvx_sum_squares}) by writing:
\begin{equation}
\begin{split}
&\kern-1em \sum\limits_{t=0}^T \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \\
&= \sum\limits_{t \in \mathcal{I}_T} \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) + \sum\limits_{t \in \mathcal{J}_T} \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \\
&\leq \dfrac{1}{\eta - \frac{\varepsilon}{\delta}} \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta} \right) \\
&\quad + \sqrt{|\mathcal{J}_T| (C^2 + \delta) \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta} \right)} \\
&\leq \dfrac{1}{\eta - \frac{\varepsilon}{\delta}} \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta} \right) \\
&\quad + \sqrt{(T + 1) (C^2 + \delta) \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) \dfrac{\varepsilon^2}{\delta} \right)}
\end{split}
\end{equation}
Dividing by $T+1$ and taking the expectation (over $z_1, ..., z_T$), we obtain:
\begin{equation}
\begin{split}
&\kern-1em \mathbb{E} \left[f\left(\dfrac{1}{T+1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right] - f_\star \\
&\leq \dfrac{1}{T+1} \sum\limits_{t=0}^T \mathbb{E} [f(\mathbf{w}_t)] - f_\star, \eqcomment{$f$ is convex} \\
&\leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{(\eta - \frac{\varepsilon}{\delta}) (T + 1)} + \dfrac{\varepsilon^2}{\delta (\eta - \frac{\varepsilon}{\delta})} + \sqrt{(C^2 + \delta) \left( \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1} + \dfrac{\varepsilon^2}{\delta} \right)}, \\
&\leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{(\eta - \frac{\varepsilon}{\delta}) (T + 1)} + \dfrac{\varepsilon^2}{\delta (\eta - \frac{\varepsilon}{\delta})} + \sqrt{ \dfrac{(C^2 + \delta) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1}} + \varepsilon \sqrt{\dfrac{C^2}{\delta} + 1}.
\end{split}
\end{equation}
\end{leftlinebox}
\end{proof}
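The bound can be checked empirically in the special case $\varepsilon = 0$, where only the first and third terms survive. The Python sketch below (our own toy losses $\ell_z(\mathbf{w}) = |z^\top(\mathbf{w} - \mathbf{w_\star})|$, interpolated at a random $\mathbf{w_\star}$) runs a single trajectory rather than the expectation, so this is a sanity check, not a verification of the theorem:

```python
import numpy as np

# Toy check of the bound with eps = 0: losses l_z(w) = |z^T (w - w_star)| all
# vanish at w_star, and each is C-Lipschitz with C = max_z ||z||.
rng = np.random.default_rng(3)
p, n, T = 8, 40, 4000
Z = rng.standard_normal((n, p))
w_star = rng.standard_normal(p)
C = np.max(np.linalg.norm(Z, axis=1))
eta, delta = 1.0, 1e-8

w = np.zeros(p)
R2 = np.sum((w - w_star) ** 2)          # ||w_0 - w_star||^2
avg = np.zeros(p)
for t in range(T + 1):
    avg += w                            # average the iterates w_0, ..., w_T
    z = Z[rng.integers(n)]
    r = z @ (w - w_star)
    g = np.sign(r) * z                  # subgradient of the sampled loss
    gamma = min(abs(r) / (g @ g + delta), eta)   # ALI-G step-size
    w = w - gamma * g
avg /= (T + 1)

f = lambda w: np.mean(np.abs(Z @ (w - w_star)))  # f_star = 0
bound = R2 / (eta * (T + 1)) + np.sqrt((C**2 + delta) * R2 / (T + 1))
print(f(avg), bound)                    # observed gap vs. theoretical bound
```

On this run the observed suboptimality of the averaged iterate sits well below the theoretical bound, as expected since the bound is worst-case over all convex Lipschitz losses.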
When $\eta$ is small, the bound of Theorem \ref{th:alig_cvx_large_eta} becomes loose.
The following result corrects this, and is informative in the regime where $\eta$ is small:
\begin{restatable}{theorem}{thaligcvxsmalleta}\label{th:alig_cvx_small_eta}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is convex and $C$-Lipschitz.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
\mathbb{E} \left[f\left(\frac{1}{T+ 1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right] - f_\star
\leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{\eta (T + 1)} + 2 \varepsilon + \sqrt{ \dfrac{(C^2 + \delta) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1}} + 2 \eta \varepsilon \sqrt{C^2 + \delta}.
\end{equation}
\end{restatable}
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
We consider the update at time $t$, which we condition on the draw of $z_t \in \mathcal{Z}$.
We re-use the inequality (\ref{eq:alig_cvx_basic_iterate_bound}) from the proof of Theorem \ref{th:alig_cvx_large_eta}:
\begin{equation} \label{eq:alig_cvx_small_eta_iterate_bound}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \\
\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star})
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
We consider again different cases, according to the value of $\gamma_t$ and the sign of $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})$.
\begin{leftlinebox}[nobreak=false]
Suppose that $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) < 0$.
Then the inequality (\ref{eq:alig_cvx_small_eta_iterate_bound}) gives:
\begin{equation} \label{eq:iterate_bound_alig_cvx_small_eta_1}
\begin{split}
&\kern-1em \| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t \ell_{z_t}(\mathbf{w}_t) + 2 \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 + 2 \gamma_t \ell_{z_t}(\mathbf{w_\star}), \eqcomment{$\gamma_t, \ell_{z_t}(\mathbf{w}_t) \geq 0$}\\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 + 2 \eta \varepsilon, \eqcomment{$\gamma_t \leq \eta$, definition of $\varepsilon$} \\
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Now suppose $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$ and $\gamma_t = \eta$.
Then the inequality (\ref{eq:alig_cvx_small_eta_iterate_bound}) gives:
\begin{equation} \label{eq:iterate_bound_alig_cvx_small_eta_2}
\begin{split}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \eta \ell_{z_t}(\mathbf{w_\star}), \\
&\quad \eqcomment{because $\gamma_t = \eta$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \eta \varepsilon, \\
&\quad \eqcomment{definition of $\varepsilon$, $\eta \geq 0$} \\
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Finally, suppose that $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$ and $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$.
Then the inequality (\ref{eq:alig_cvx_small_eta_iterate_bound}) gives:
\begin{equation} \label{eq:iterate_bound_alig_cvx_small_eta_3}
\begin{split}
&\kern-1em \| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \eta \ell_{z_t}(\mathbf{w_\star}), \\
&\quad \eqcomment{because $\gamma_t \leq \eta$, $\ell_{z_t}(\mathbf{w_\star}) \geq 0$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \eta \varepsilon, \eqcomment{definition of $\varepsilon$, $\eta \geq 0$} \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \eta \varepsilon, \\
&\quad \eqcomment{because $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} + \eta \varepsilon, \\
&\quad \eqcomment{because $\ell_{z_t}(\mathbf{w}_t) \geq \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{C^2 + \delta} + \eta \varepsilon, \eqcomment{$\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 \leq C^2$} \\
\end{split}
\end{equation}
\end{leftlinebox}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
We now introduce $\mathcal{I}_T$ and $\mathcal{J}_T$ as follows:
\begin{equation}
\begin{split}
\mathcal{I}_T &\triangleq \left\{ t \in \{0, \dots, T\} : \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) < 0 \right\} \\
\mathcal{J}_T &\triangleq \left\{ t \in \{0, \dots, T\} : \gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}\ \text{and} \ \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0 \right\} \\
\mathcal{K}_T &\triangleq \{0, \dots, T\} \setminus \left( \mathcal{I}_T \cup \mathcal{J}_T \right)
\end{split}
\end{equation}
Then, by combining inequalities (\ref{eq:iterate_bound_alig_cvx_small_eta_1}), (\ref{eq:iterate_bound_alig_cvx_small_eta_2}) and (\ref{eq:iterate_bound_alig_cvx_small_eta_3}), and using a telescopic sum, we obtain:
\begin{equation}
\begin{split}
\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2
&\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + |\mathcal{I}_T| 2\eta \varepsilon + \sum\limits_{t \in \mathcal{J}_T} \left( -\dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}{C^2 + \delta} + \eta \varepsilon\right) \\
&\qquad + \sum\limits_{t \in \mathcal{K}_T} \left( -\eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \eta \varepsilon \right)
\end{split}
\end{equation}
Using $\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2 \geq 0$, we obtain:
\begin{equation}
\begin{split}
&\kern-1em \dfrac{1}{C^2 + \delta} \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2 + \eta \sum\limits_{t \in \mathcal{K}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \\
&\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + (T + 1) 2 \eta \varepsilon
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Since $\forall t \in \mathcal{K}_T, \ \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$, both terms on the left-hand side are non-negative, and thus each of them is at most the right-hand side:
\begin{equation} \label{eq:alig_cvx_small_eta_sum_normal}
\eta \sum\limits_{t \in \mathcal{K}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + 2 (T + 1) \eta \varepsilon,
\end{equation}
and:
\begin{equation}
\dfrac{1}{C^2 + \delta} \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2
\leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + 2 (T + 1) \eta \varepsilon.
\end{equation}
Using the Cauchy-Schwarz inequality, we can further write:
\begin{equation}
\left( \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \right)^2
\leq |\mathcal{J}_T| \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2.
\end{equation}
Therefore we have:
\begin{equation} \label{eq:alig_cvx_small_eta_sum_squares}
\begin{split}
\sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
&\leq \sqrt{|\mathcal{J}_T| \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))^2}, \\
&\leq \sqrt{|\mathcal{J}_T| (C^2 + \delta) \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + 2 (T + 1) \eta \varepsilon \right)}. \\
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
We can now put together inequalities (\ref{eq:alig_cvx_small_eta_sum_normal}) and (\ref{eq:alig_cvx_small_eta_sum_squares}) by writing:
\begin{equation}
\begin{split}
\kern-1em \sum\limits_{t=0}^T (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
&\leq \sum\limits_{t \in \mathcal{K}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \sum\limits_{t \in \mathcal{J}_T} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})), \eqcomment{only negative contributions in $\mathcal{I}_T$}\\
&\leq \dfrac{1}{\eta} \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + 2 (T + 1) \eta \varepsilon \right) + \sqrt{|\mathcal{J}_T| (C^2 + \delta) \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + 2 (T + 1) \eta \varepsilon \right)}, \\
&\leq \dfrac{1}{\eta} \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + 2 (T + 1) \eta \varepsilon \right) + \sqrt{(T + 1) (C^2 + \delta) \left( \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + 2 (T + 1) \eta \varepsilon \right)}.
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Dividing by $T+1$ and taking the expectation, we obtain:
\begin{equation}
\begin{split}
&\kern-1em \mathbb{E} \left[f\left(\dfrac{1}{T+1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right]- f_\star \\
&\leq \dfrac{1}{T+1} \sum\limits_{t=0}^T \mathbb{E}[f(\mathbf{w}_t)] - f_\star, \eqcomment{$f$ is convex} \\
&\leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{\eta (T + 1)} + 2 \varepsilon + \sqrt{(C^2 + \delta) \left( \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1} + 2 \eta \varepsilon \right)}, \\
&\leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{\eta (T + 1)} + 2 \varepsilon + \sqrt{ \dfrac{(C^2 + \delta) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{T + 1}} + \sqrt{2 \eta \varepsilon (C^2 + \delta)}. \eqcomment{$\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$}
\end{split}
\end{equation}
\end{leftlinebox}
\end{proof}
\subsection{Smooth Convex Functions}
We now tackle the convex and $\beta$-smooth case.
Our proof technique naturally separates into the cases $\eta \geq \frac{1}{2 \beta}$ and $\eta \leq \frac{1}{2 \beta}$.
\begin{lemma} \label{lemma:smooth_bound}
Let $z \in \mathcal{Z}$.
Assume that $\ell_{z}$ is $\beta$-smooth and non-negative on $\mathbb{R}^p$.
Then we have:
\begin{equation} \label{eq:smooth_inequality}
\forall \: \mathbf{w} \in \mathbb{R}^p, \ \ell_{z}(\mathbf{w}) \geq \frac{1}{2 \beta} \| \nabla \ell_{z}(\mathbf{w}) \|^2
\end{equation}
Note that we do not assume that $\ell_z$ is convex.
\end{lemma}
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
Let ${\bm{w}} \in \mathbb{R}^p$. By Lemma 3.4 of \citet{Bubeck2015}, we have:
\begin{equation}
\forall \: {\bm{u}} \in \mathbb{R}^p, \: | \ell_z({\bm{u}}) - \ell_z({\bm{w}}) - \nabla \ell_z({\bm{w}})^\top ({\bm{u}} - {\bm{w}})| \leq \dfrac{\beta}{2} \| {\bm{u}} - {\bm{w}} \|^2.
\end{equation}
Therefore we can write:
\begin{equation}
\forall \: {\bm{u}} \in \mathbb{R}^p, \: \ell_z({\bm{u}}) \leq \ell_z({\bm{w}}) + \nabla \ell_z({\bm{w}})^\top ({\bm{u}} - {\bm{w}}) + \dfrac{\beta}{2} \| {\bm{u}} - {\bm{w}} \|^2.
\end{equation}
And since $\forall \: {\bm{u}} \in \mathbb{R}^p, \: \ell_z({\bm{u}}) \geq 0$, we have:
\begin{equation}
\forall \: {\bm{u}} \in \mathbb{R}^p, \: 0 \leq \ell_z({\bm{w}}) + \nabla \ell_z({\bm{w}})^\top ({\bm{u}} - {\bm{w}}) + \dfrac{\beta}{2} \| {\bm{u}} - {\bm{w}} \|^2.
\end{equation}
We now choose ${\bm{u}} = {\bm{w}} - \dfrac{1}{\beta} \nabla \ell_z({\bm{w}})$, which yields:
\begin{equation}
0 \leq \ell_z({\bm{w}}) - \dfrac{1}{\beta} \| \nabla \ell_z({\bm{w}})\|^2 + \dfrac{1}{2 \beta} \| \nabla \ell_z({\bm{w}}) \|^2,
\end{equation}
which gives the desired result.
\end{leftlinebox}
\end{proof}
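As a sanity check, the inequality is tight for the quadratic loss $\ell_z(\mathbf{w}) = \frac{\beta}{2} \| \mathbf{w} \|^2$, which is $\beta$-smooth and non-negative, and satisfies (\ref{eq:smooth_inequality}) with equality:
\begin{equation}
\dfrac{1}{2 \beta} \| \nabla \ell_z(\mathbf{w}) \|^2 = \dfrac{1}{2 \beta} \| \beta \mathbf{w} \|^2 = \dfrac{\beta}{2} \| \mathbf{w} \|^2 = \ell_z(\mathbf{w}).
\end{equation}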
\begin{lemma} \label{lemma:smooth_gamma_bound}
Let $z \in \mathcal{Z}$.
Assume that $\ell_{z}$ is $\beta$-smooth and non-negative on $\mathbb{R}^p$.
Then we have:
\begin{equation}
\forall \: \mathbf{w} \in \mathbb{R}^p, \ \dfrac{\ell_{z}(\mathbf{w})}{\| \nabla \ell_{z}(\mathbf{w}) \|^2 + \delta} \geq \dfrac{1}{2 \beta} - \dfrac{\delta}{ 4 \beta^2 \ell_{z}(\mathbf{w})}
\end{equation}
\end{lemma}
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
Let $\mathbf{w} \in \mathbb{R}^p$.
We apply Lemma \ref{lemma:smooth_bound} and we write successively:
\begin{equation}
\begin{split}
\dfrac{\ell_{z}(\mathbf{w})}{\| \nabla \ell_{z}(\mathbf{w}) \|^2 + \delta}
&\geq \dfrac{\ell_{z}(\mathbf{w})}{ 2 \beta \ell_{z}(\mathbf{w}) + \delta}, \eqcomment{Lemma \ref{lemma:smooth_bound}} \\
&= \dfrac{\ell_{z}(\mathbf{w}) + \frac{\delta}{2 \beta} - \frac{\delta}{2 \beta}}{ 2 \beta (\ell_{z}(\mathbf{w}) + \frac{\delta}{2 \beta})}, \\
&= \dfrac{1}{2 \beta} - \dfrac{\frac{\delta}{2 \beta}}{ 2 \beta (\ell_{z}(\mathbf{w}) + \frac{\delta}{2 \beta})}, \\
&\geq \dfrac{1}{2 \beta} - \dfrac{\delta}{4 \beta^2 \ell_{z}(\mathbf{w})}. \eqcomment{$\delta \geq 0$} \\
\end{split}
\end{equation}
\end{leftlinebox}
\end{proof}
\begin{restatable}{theorem}{thaligcvxsmoothlargeeta}\label{th:alig_cvx_smooth_large_eta}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is convex and $\beta$-smooth.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$, and suppose that $\delta > 2 \beta \varepsilon$.
Further assume that $\eta \geq \frac{1}{2 \beta}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
\mathbb{E} \left[f\left(\frac{1}{T+ 1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right] - f_\star
\leq \dfrac{\delta}{\beta(1 - \frac{2 \beta \varepsilon}{\delta})} + \dfrac{2 \beta}{1 - \frac{2 \beta \varepsilon}{\delta}} \dfrac{\|\mathbf{w}_{0} - \mathbf{w_\star}\|^2}{T + 1}.
\end{equation}
\end{restatable}
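Note that in the interpolation setting where $\varepsilon = 0$, this bound simplifies to:
\begin{equation}
\mathbb{E} \left[f\left(\frac{1}{T+ 1} \sum\limits_{t=0}^T \mathbf{w}_t \right) \right] - f_\star
\leq \dfrac{\delta}{\beta} + 2 \beta \dfrac{\|\mathbf{w}_{0} - \mathbf{w_\star}\|^2}{T + 1},
\end{equation}
so that $\delta$ controls the accuracy floor while the initialization-dependent term vanishes at a $O(1/T)$ rate.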
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
We re-use the inequality (\ref{eq:alig_cvx_basic_iterate_bound}) from the proof of Theorem \ref{th:alig_cvx_large_eta}:
\begin{equation} \label{eq:smooth_iterate_largeeta_bound}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star})
\end{equation}
\end{leftlinebox}
As previously, we lower bound $\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))$ and upper bound $\gamma_t \ell_{z_t}(\mathbf{w_\star})$ individually.
\begin{leftlinebox}[nobreak=false]
We begin with $\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))$.
We remark that either $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$ or $\gamma_t = \eta$.
\begin{leftlinebox}[nobreak=false]
Suppose $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$ and $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$.
Then we can write:
\begin{equation}
\begin{split}
&\kern-1em \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \\
& = \dfrac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})), \eqcomment{definition of $\gamma_t$} \\
& \geq \left( \dfrac{1}{2 \beta} - \dfrac{\delta}{4 \beta^2 \ell_{z_t}(\mathbf{w}_t)} \right) (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \\
&\quad \eqcomment{using Lemma \ref{lemma:smooth_gamma_bound}, $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$} \\
& = \dfrac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{4 \beta^2} \dfrac{\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})}{\ell_{z_t}(\mathbf{w}_t)}\\
& \geq \dfrac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{4 \beta^2} \eqcomment{$\ell_{z_t}(\mathbf{w_\star}) \geq 0$, $\ell_{z_t}(\mathbf{w}_t) \geq 0$}\\
\end{split}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Now suppose $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$ and $\gamma_t = \eta$.
Then we have:
\begin{align*}
\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
&= \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \\
&\geq \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{4 \beta^2} \\
&\geq \frac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{4 \beta^2} \\
&\quad \eqcomment{because $\eta \geq \frac{1}{2 \beta}$, $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$}.
\stepcounter{equation}\tag{\theequation}
\end{align*}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Now suppose $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \leq 0$.
We have:
\begin{equation} \label{eq:smooth_gamma_t_bound}
\begin{split}
\gamma_t
&\leq \dfrac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \\
&\leq \dfrac{\ell_{z_t}(\mathbf{w_\star})}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \eqcomment{$\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \leq 0$}\\
&\leq \dfrac{\varepsilon}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \eqcomment{definition of $\varepsilon$} \\
&\leq \dfrac{\varepsilon}{\delta} \eqcomment{$\| \nabla \ell_{z_t}(\mathbf{w}_t) \| \geq 0$} \\
&\leq \dfrac{1}{2 \beta} \eqcomment{$\delta \geq 2 \beta \varepsilon$} \\
\end{split}
\end{equation}
We now write:
\begin{equation}
\begin{split}
\gamma_t \left( \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \right)
&\geq \frac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \eqcomment{$\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \leq 0$, $\gamma_t \leq \frac{1}{2 \beta}$} \\
&\geq \frac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{4 \beta^2} \\
\end{split}
\end{equation}
\end{leftlinebox}
In conclusion, in all cases, it holds true that:
\begin{equation} \label{eq:smooth_largeeta_bound_first_term}
\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \geq \frac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{4 \beta^2}
\end{equation}
\begin{leftlinebox}[nobreak=false]
We now upper bound $\gamma_t \ell_{z_t}(\mathbf{w_\star})$:
\begin{equation} \label{eq:smooth_largeeta_bound_second_term}
\begin{split}
\gamma_t \ell_{z_t}(\mathbf{w_\star})
&\leq \dfrac{\ell_{z_t}(\mathbf{w}_t) \ell_{z_t}(\mathbf{w_\star})}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}, \eqcomment{definition of $\gamma_t$ and $\ell_{z_t}(\mathbf{w_\star}) \geq 0$} \\
&\leq \dfrac{\ell_{z_t}(\mathbf{w}_t) \ell_{z_t}(\mathbf{w_\star})}{\delta}, \eqcomment{$\| \nabla \ell_{z_t}(\mathbf{w}_t) \| \geq 0$} \\
&\leq \dfrac{(\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) + \varepsilon)\varepsilon}{\delta}, \eqcomment{definition of $\varepsilon$ twice} \\
&= \dfrac{\varepsilon}{\delta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\varepsilon^2}{\delta}.
\end{split}
\end{equation}
\end{leftlinebox}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
We now put together inequalities (\ref{eq:smooth_iterate_largeeta_bound}), (\ref{eq:smooth_largeeta_bound_first_term}) and (\ref{eq:smooth_largeeta_bound_second_term}):
\begin{equation}
\begin{split}
&\kern-1em \| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \frac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{\delta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\varepsilon^2}{\delta}, \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \left(\frac{1}{2 \beta} - \dfrac{\varepsilon}{\delta} \right) (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon^2}{\delta}.
\end{split}
\end{equation}
Therefore we have:
\begin{equation} \label{eq:smooth_telescopic_sum}
\left( \dfrac{1}{2 \beta} - \dfrac{\varepsilon}{\delta} \right) \left( \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \right) - \left(\dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon^2}{\delta} \right) \leq \|\mathbf{w}_{t} - \mathbf{w_\star}\|^2 - \|\mathbf{w}_{t+1} - \mathbf{w_\star}\|^2.
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
By summing over $t$ and taking the expectation over every $z_t$, we obtain:
\begin{equation}
\begin{split}
&\kern-1em \sum\limits_{t=0}^{T} \left( \dfrac{\delta - 2 \beta \varepsilon}{2 \beta \delta} \left( \mathbb{E}[f(\mathbf{w}_t)] - f(\mathbf{w_\star}) \right) - \dfrac{\delta^2 + 4 \beta^2 \varepsilon^2}{4 \beta^2 \delta} \right) \\
&\leq \|\mathbf{w}_{0} - \mathbf{w_\star}\|^2 - \mathbb{E}\left[\|\mathbf{w}_{T+1} - \mathbf{w_\star}\|^2\right], \\
&\leq \|\mathbf{w}_{0} - \mathbf{w_\star}\|^2.
\end{split}
\end{equation}
By assumption, we have that $\delta - 2 \beta \varepsilon > 0$.
Dividing by $T+1$ and using the convexity of $f$, we finally obtain:
\begin{equation}
\begin{split}
\mathbb{E} \left[f\left(\frac{1}{T+ 1} \sum\limits_{t=0}^T \mathbf{w}_t \right)\right] - f_\star
&\leq \frac{1}{T+ 1} \sum\limits_{t=0}^T \mathbb{E}[f(\mathbf{w}_t)] - f_\star \eqcomment{convexity of $f$}, \\
&\leq \dfrac{2 \beta \delta}{\delta - 2 \beta \varepsilon} \dfrac{\delta^2 + 4 \beta^2 \varepsilon^2}{4 \beta^2 \delta} + \dfrac{2 \beta \delta}{\delta - 2 \beta \varepsilon} \dfrac{\|\mathbf{w}_{0} - \mathbf{w_\star}\|^2}{T + 1}, \\
&= \dfrac{\delta^2 + 4 \beta^2 \varepsilon^2}{2 \beta(\delta - 2 \beta \varepsilon)} + \dfrac{2 \beta \delta}{\delta - 2 \beta \varepsilon} \dfrac{\|\mathbf{w}_{0} - \mathbf{w_\star}\|^2}{T + 1}, \\
&\leq \dfrac{\delta^2}{\beta(\delta - 2 \beta \varepsilon)} + \dfrac{2 \beta \delta}{\delta - 2 \beta \varepsilon} \dfrac{\|\mathbf{w}_{0} - \mathbf{w_\star}\|^2}{T + 1}, \eqcomment{$4 \beta^2 \varepsilon^2 \leq \delta^2$ since $\delta > 2 \beta \varepsilon$} \\
&= \dfrac{\delta}{\beta(1 - \frac{2 \beta \varepsilon}{\delta})} + \dfrac{2 \beta}{1 - \frac{2 \beta \varepsilon}{\delta}} \dfrac{\|\mathbf{w}_{0} - \mathbf{w_\star}\|^2}{T + 1}. \\
\end{split}
\end{equation}
\end{leftlinebox}
\end{proof}
\begin{restatable}{theorem}{thaligcvxsmoothsmalleta}\label{th:alig_cvx_smooth_small_eta}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is convex and $\beta$-smooth.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$, and suppose that $\delta > 2 \beta \varepsilon$.
Further assume that $\eta \leq \frac{1}{2 \beta}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
\mathbb{E} \left[f\left(\dfrac{1}{T+1} \sum\limits_{t=0}^T \mathbf{w}_t \right)\right] - f_\star \leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{\eta (T+1)} + \dfrac{\delta}{2 \beta} + \varepsilon.
\end{equation}
\end{restatable}
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
Similarly to the beginning of previous proofs, we have that:
\begin{equation} \label{eq:smooth_iterate_smalleta_bound}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \gamma_t \ell_{z_t}(\mathbf{w_\star})
\end{equation}
\end{leftlinebox}
As previously, we lower bound $\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))$ and upper bound $\gamma_t \ell_{z_t}(\mathbf{w_\star})$ individually.
\begin{leftlinebox}[nobreak=false]
We begin with $\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))$.
We remark that either $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$ or $\gamma_t = \eta$.
\begin{leftlinebox}[nobreak=false]
Suppose $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$ and $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$.
First we write:
\begin{equation}
\begin{split}
\gamma_t
&= \dfrac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \\
&= \dfrac{\ell_{z_t}(\mathbf{w}_t) + \frac{\delta}{2 \beta}}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} - \dfrac{\frac{\delta}{2 \beta}}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}\\
&\geq \dfrac{\frac{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2}{2 \beta} + \frac{\delta}{2 \beta}}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} - \dfrac{\delta}{2 \beta} \dfrac{1}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \eqcomment{Lemma \ref{lemma:smooth_bound}}\\
&= \dfrac{1}{2 \beta} - \dfrac{\delta}{2 \beta} \dfrac{1}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \\
&\geq \eta - \dfrac{\delta}{2 \beta} \dfrac{1}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \eqcomment{$\eta \leq \frac{1}{2 \beta}$}\\
\end{split}
\end{equation}
Since $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$, this yields:
\begin{equation}
\begin{split}
\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
&\geq \left(\eta - \dfrac{\delta}{2 \beta} \dfrac{1}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \right) (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \\
&= \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{2 \beta} \dfrac{\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \\
&\geq \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\delta}{2 \beta} \dfrac{\ell_{z_t}(\mathbf{w}_t)}{\|\nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \\
&\quad \eqcomment{because $\ell_{z_t}(\mathbf{w_\star}) \geq 0$} \\
\end{split}
\end{equation}
We now notice that in this case $\gamma_t = \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta}$, and since $\gamma_t \leq \eta$ by definition, we have $\frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 + \delta} \leq \eta$.
This gives:
\begin{equation}
\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
\geq \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\eta \delta}{2 \beta}
\end{equation}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Now suppose $\gamma_t = \eta$ and $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \geq 0$.
Then we have:
\begin{align*}
\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}))
&= \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \\
&\geq \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\eta \delta}{2 \beta}.
\stepcounter{equation}\tag{\theequation}
\end{align*}
\end{leftlinebox}
\begin{leftlinebox}[nobreak=false]
Now suppose $\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \leq 0$.
Since $\gamma_t \leq \eta$ by definition, we have that:
\begin{equation}
\begin{split}
\gamma_t \left( \ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \right)
&\geq \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \eqcomment{$\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star}) \leq 0$} \\
&\geq \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\eta \delta}{2 \beta}.
\end{split}
\end{equation}
\end{leftlinebox}
In conclusion, in all cases, it holds true that:
\begin{equation} \label{eq:smooth_smalleta_bound_first_term}
\gamma_t (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) \geq \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) - \dfrac{\eta \delta}{2 \beta}
\end{equation}
We upper bound $\gamma_t \ell_{z_t}(\mathbf{w_\star})$ as follows:
\begin{equation} \label{eq:smooth_smalleta_bound_second_term}
\begin{split}
\gamma_t \ell_{z_t}(\mathbf{w_\star})
&\leq \eta \ell_{z_t}(\mathbf{w_\star}) \eqcomment{$\ell_{z_t}(\mathbf{w_\star}) \geq 0$} \\
&\leq \eta \varepsilon \eqcomment{definition of $\varepsilon$}
\end{split}
\end{equation}
We combine inequalities (\ref{eq:smooth_iterate_smalleta_bound}), (\ref{eq:smooth_smalleta_bound_first_term}) and (\ref{eq:smooth_smalleta_bound_second_term}) and obtain:
\begin{equation}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon.
\end{equation}
By taking the expectation and using a telescopic sum, we obtain:
\begin{equation}
0 \leq \mathbb{E}\left[\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2\right] \leq \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 - \sum\limits_{t=0}^T \eta \left( \mathbb{E}[f(\mathbf{w}_t)] - f_\star \right) + (T + 1) \left( \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon \right).
\end{equation}
Re-arranging and using the convexity of $f$, we finally obtain:
\begin{equation}
\mathbb{E} \left[f\left(\dfrac{1}{T+1} \sum\limits_{t=0}^T \mathbf{w}_t \right)\right] - f_\star \leq \dfrac{\| \mathbf{w}_{0} - \mathbf{w_\star} \|^2}{\eta (T+1)} + \dfrac{\delta}{2 \beta} + \varepsilon.
\end{equation}
\end{leftlinebox}
\end{proof}
\subsection{Smooth and Strongly Convex Functions}
Finally, we consider the $\alpha$-strongly convex and $\beta$-smooth case.
Again, our proof yields a natural separation between $\eta \geq \frac{1}{2 \beta}$ and $\eta \leq \frac{1}{2 \beta}$.
\begin{lemma} \label{lemma:strglycvx_gamma_bound}
Let $z \in \mathcal{Z}$.
Assume that $\ell_{z}$ is $\alpha$-strongly convex, non-negative on $\mathbb{R}^p$, and such that $ \inf \ell_{z} \leq \varepsilon$.
In addition, suppose that $\delta \geq 2 \alpha \varepsilon$.
Then we have:
\begin{equation}
\forall \: \mathbf{w} \in \mathbb{R}^p, \ \dfrac{\ell_{z}(\mathbf{w})}{\| \nabla \ell_{z}(\mathbf{w}) \|^2 + \delta} \leq \dfrac{1}{2 \alpha}.
\end{equation}
\end{lemma}
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
Let $\mathbf{w} \in \mathbb{R}^p$ and suppose that $\ell_{z}$ reaches its minimum at $\underline{\mathbf{w}} \in \mathbb{R}^p$ (this minimum exists because of strong convexity).
By definition of strong convexity, we have that:
\begin{equation}
\forall \ \hat{\mathbf{w}} \in \mathbb{R}^p, \ \ell_{z}(\hat{\mathbf{w}}) \geq \ell_{z}(\mathbf{w}) + \nabla \ell_{z}(\mathbf{w})^\top (\hat{\mathbf{w}} - \mathbf{w}) + \dfrac{\alpha}{2} \| \hat{\mathbf{w}} - \mathbf{w} \|^2
\end{equation}
We minimize the right hand-side over $\hat{\mathbf{w}}$, which gives:
\begin{equation}
\begin{split}
\forall \hat{\mathbf{w}} \in \mathbb{R}^p, \ \ell_{z}(\hat{\mathbf{w}})
&\geq \ell_{z}(\mathbf{w}) + \nabla \ell_{z}(\mathbf{w})^\top (\hat{\mathbf{w}} - \mathbf{w}) + \dfrac{\alpha}{2} \| \hat{\mathbf{w}} - \mathbf{w} \|^2 \\
&\geq \ell_{z}(\mathbf{w}) - \dfrac{1}{2 \alpha} \| \nabla \ell_{z}(\mathbf{w}) \|^2
\end{split}
\end{equation}
Thus by choosing $\hat{\mathbf{w}} = \underline{\mathbf{w}}$ and re-arranging, we obtain the following result (a.k.a. the Polyak-{\L}ojasiewicz inequality):
\begin{equation}
\ell_{z}(\mathbf{w}) - \ell_{z}(\underline{\mathbf{w}}) \leq \dfrac{1}{2 \alpha} \| \nabla \ell_{z}(\mathbf{w}) \|^2
\end{equation}
\end{leftlinebox}
Therefore we can write:
\begin{leftlinebox}[nobreak=false]
\begin{equation}
\dfrac{\ell_{z}(\mathbf{w})}{\| \nabla \ell_{z}(\mathbf{w}) \|^2 + \delta} \leq \dfrac{\ell_{z}(\mathbf{w}) - \ell_{z}(\underline{\mathbf{w}}) + \varepsilon}{\| \nabla \ell_{z}(\mathbf{w}) \|^2 + \delta} \leq \dfrac{\frac{1}{2 \alpha} \| \nabla \ell_{z}(\mathbf{w})\|^2 + \varepsilon}{\| \nabla \ell_{z}(\mathbf{w}) \|^2 + \delta}.
\end{equation}
We introduce the function $\psi: x \in \mathbb{R}^+ \mapsto \dfrac{\frac{1}{2 \alpha} x + \varepsilon}{x + \delta}$, and we compute its derivative:
\begin{equation}
\begin{split}
\psi'(x)
&= \dfrac{\frac{1}{2 \alpha} (x + \delta) - \frac{1}{2 \alpha} x - \varepsilon}{(x + \delta)^2}, \\
&= \dfrac{\frac{\delta}{2 \alpha} - \varepsilon}{(x + \delta)^2} \geq 0. \eqcomment{$\delta \geq 2 \alpha \varepsilon$}
\end{split}
\end{equation}
Therefore $\psi$ is monotonically increasing.
As a result, we have:
\begin{equation}
\forall \ x \in \mathbb{R}^+, \ \psi(x) \leq \lim\limits_{y \to \infty} \psi(y) = \dfrac{1}{2\alpha}.
\end{equation}
Therefore we have that:
\begin{equation}
\dfrac{\frac{1}{2 \alpha} \| \nabla \ell_{z}(\mathbf{w})\|^2 + \varepsilon}{\| \nabla \ell_{z}(\mathbf{w}) \|^2 + \delta} = \psi \left(\| \nabla \ell_{z}(\mathbf{w}) \|^2 \right) \leq \dfrac{1}{2 \alpha},
\end{equation}
which concludes the proof.
\end{leftlinebox}
\end{proof}
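For instance, for the $\alpha$-strongly convex loss $\ell_z(\mathbf{w}) = \frac{\alpha}{2} \| \mathbf{w} \|^2$ (for which $\inf \ell_z = 0 \leq \varepsilon$), the bound can be checked directly:
\begin{equation}
\dfrac{\ell_z(\mathbf{w})}{\| \nabla \ell_z(\mathbf{w}) \|^2 + \delta} = \dfrac{\frac{\alpha}{2} \| \mathbf{w} \|^2}{\alpha^2 \| \mathbf{w} \|^2 + \delta} \leq \dfrac{1}{2 \alpha},
\end{equation}
since $2 \alpha \cdot \frac{\alpha}{2} \| \mathbf{w} \|^2 = \alpha^2 \| \mathbf{w} \|^2 \leq \alpha^2 \| \mathbf{w} \|^2 + \delta$.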
\begin{lemma} \label{lemma:parallelogram_inequality}
For any $a, b \in \mathbb{R}^p$, we have that:
\begin{equation}
\|a \|^2 + \|b \|^2 \geq \dfrac{1}{2} \| a - b\|^2
\end{equation}
\end{lemma}
\begin{proof}
This is a simple application of the parallelogram law, but we give the proof here for completeness.
\begin{align*}
\|a \|^2 + \|b \|^2 - \dfrac{1}{2} \| a - b\|^2
&= \|a \|^2 + \|b \|^2 - \dfrac{1}{2} \| a\|^2 -\dfrac{1}{2} \| b\|^2 + a^\top b \\
&= \dfrac{1}{2} \| a\|^2 + \dfrac{1}{2} \| b\|^2 + a^\top b \\
&= \dfrac{1}{2} \| a + b \|^2 \\
&\geq 0 \\
\end{align*}
\end{proof}
\begin{lemma} \label{lemma:strglycvx_fun_bound}
Let $z \in \mathcal{Z}$.
Assume that $\ell_{z}$ is $\alpha$-strongly convex and achieves its (possibly constrained) minimum at $\mathbf{w_\star} \in \Omega$.
Then we have:
\begin{equation}
\forall \: \mathbf{w} \in \Omega, \ \ell_{z}(\mathbf{w}) - \ell_{z}(\mathbf{w_\star}) \geq \dfrac{\alpha}{2} \| \mathbf{w} - \mathbf{w_\star} \|^2
\end{equation}
\end{lemma}
\begin{proof}
By definition of strong convexity \citep{Bubeck2015}, we have:
\begin{equation}
\forall \: \mathbf{w} \in \Omega, \: \ell_z(\mathbf{w}) - \ell_z(\mathbf{w_\star}) - \nabla \ell_z(\mathbf{w_\star})^\top (\mathbf{w} - \mathbf{w_\star}) \geq \dfrac{\alpha}{2} \| \mathbf{w} - \mathbf{w_\star} \|^2.
\end{equation}
In addition, since $\mathbf{w_\star}$ minimizes $\ell_z$ over $\Omega$, the first-order optimality condition gives:
\begin{equation}
\forall \: \mathbf{w} \in \Omega, \: \nabla \ell_z(\mathbf{w_\star})^\top (\mathbf{w} - \mathbf{w_\star}) \geq 0.
\end{equation}
Combining the two inequalities gives the desired result.
\end{proof}
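In the unconstrained quadratic case $\ell_z(\mathbf{w}) = \frac{\alpha}{2} \| \mathbf{w} - \mathbf{w_\star} \|^2$ with $\Omega = \mathbb{R}^p$, the bound holds with equality:
\begin{equation}
\ell_z(\mathbf{w}) - \ell_z(\mathbf{w_\star}) = \dfrac{\alpha}{2} \| \mathbf{w} - \mathbf{w_\star} \|^2.
\end{equation}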
\begin{restatable}{theorem}{thaligstronglycvxlargeeta}\label{th:alig_cvx_strongly_large_eta}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is $\alpha$-strongly convex and $\beta$-smooth.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$, and suppose that $\delta > 2 \beta \varepsilon$.
Further assume that $\eta \geq \frac{1}{2 \beta}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
\mathbb{E}[f(\mathbf{w}_{T+1})] - f_\star
\leq \beta \exp\left(- \dfrac{\alpha T}{4 \beta} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{\delta}{\alpha} + 2 \dfrac{\beta}{\alpha} \varepsilon + 2 \dfrac{\beta^2}{\alpha^2} \varepsilon.
\end{equation}
\end{restatable}
\begin{proof} \
\begin{leftlinebox}[nobreak=false]
We condition the update on $z_t$ drawn at random.
The beginning of the proof is identical to that of Theorem \ref{th:alig_cvx_smooth_large_eta} (and in particular requires $\delta > 2 \beta \varepsilon$).
In addition, we remark that $\delta > 2 \beta \varepsilon \geq 2 \alpha \varepsilon$, since it always holds that $\beta \geq \alpha$.
Combining inequalities (\ref{eq:alig_cvx_basic_iterate_bound}) and (\ref{eq:smooth_largeeta_bound_first_term}), we obtain:
\begin{align*}
\kern-1em \| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\delta}{4 \beta^2} + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\delta}{4 \beta^2} + \gamma_t \varepsilon, \eqcomment{definition of $\varepsilon$} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{1}{2 \beta} (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha}. \eqcomment{Lemma \ref{lemma:strglycvx_gamma_bound}}
\label{eq:strgly_cvx_iterate_bound} \stepcounter{equation}\tag{\theequation}
\end{align*}
\end{leftlinebox}
Taking the expectation over $z_t | z_{t-1}$, we obtain:
\begin{align*}
\kern-1em \mathbb{E}_{z_t | z_{t-1}}[\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2]
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{1}{2 \beta} (f(\mathbf{w}_t) - f(\mathbf{w_\star})) + \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha}, \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{\alpha}{4 \beta} \| \mathbf{w}_t - \mathbf{w_\star} \|^2 + \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha}. \eqcomment{Lemma \ref{lemma:strglycvx_fun_bound}}
\end{align*}
Now taking the expectation over all $z_t$ and using a straightforward induction over $t$, we write:
\begin{leftlinebox}[nobreak=false]
\begin{align*}
&\kern-1em \mathbb{E} [\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2] \\
&\leq \left( 1 - \dfrac{\alpha}{4 \beta} \right) \mathbb{E} [\| \mathbf{w}_{t} - \mathbf{w_\star} \|^2] + \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha},\\
&\leq \left( 1 - \dfrac{\alpha}{4 \beta} \right)^t \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \sum\limits_{k=0}^{t} \left(1 - \dfrac{\alpha}{4 \beta} \right)^{t -k} \left( \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha} \right), \\
&\leq \left( 1 - \dfrac{\alpha}{4 \beta} \right)^t \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \sum\limits_{k=0}^{\infty} \left(1 - \dfrac{\alpha}{4 \beta} \right)^{k} \left( \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha} \right), \\
&= \left( 1 - \dfrac{\alpha}{4 \beta} \right)^t \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{1}{\frac{\alpha}{4 \beta}} \left( \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha} \right), \\
&= \left( 1 - \dfrac{\alpha}{4 \beta} \right)^t \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{4 \beta}{\alpha} \left( \dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha} \right).
\stepcounter{equation}\tag{\theequation}
\end{align*}
\end{leftlinebox}
Given an arbitrary $\mathbf{w} \in \mathbb{R}^p$, we now wish to relate the distance $\|\mathbf{w} - \mathbf{w_\star} \|^2$ to the function values $f(\mathbf{w}) - f(\mathbf{w_\star})$.
\begin{leftlinebox}
Since each $\ell_z$ is $\alpha$-strongly convex and $\beta$-smooth, so is $f = \mathbb{E}_z[\ell_z]$.
We introduce $\underline{\mathbf{w}}$ the minimizer of $f$ on its unconstrained domain $\mathbb{R}^p$.
Then we can write that for any $\mathbf{w} \in \mathbb{R}^p$:
\begin{align*}
&\kern-1em f(\mathbf{w}) - f(\mathbf{w_\star}) \\
&\leq f(\mathbf{w}) - f(\underline{\mathbf{w}}), \eqcomment{$f(\underline{\mathbf{w}}) \leq f(\mathbf{w_\star})$} \\
&\leq \nabla f(\underline{\mathbf{w}})^\top(\mathbf{w} - \underline{\mathbf{w}}) + \dfrac{\beta}{2} \|\mathbf{w} - \underline{\mathbf{w}} \|^2, \eqcomment{$f$ is $\beta$-smooth}\\
&= \dfrac{\beta}{2} \|\mathbf{w} - \underline{\mathbf{w}} \|^2, \eqcomment{$\nabla f(\underline{\mathbf{w}}) = \bm{0}$} \\
&\leq \beta (\|\mathbf{w} - \mathbf{w_\star} \|^2 + \|\mathbf{w_\star} - \underline{\mathbf{w}} \|^2 ), \eqcomment{Lemma \ref{lemma:parallelogram_inequality}} \\
&\leq \beta \|\mathbf{w} - \mathbf{w_\star} \|^2 + \dfrac{2 \beta}{\alpha} \left(f(\mathbf{w_\star}) - f(\underline{\mathbf{w}}) \right), \eqcomment{$f$ is $\alpha$-strongly convex} \\
&\leq \beta \|\mathbf{w} - \mathbf{w_\star} \|^2 + \dfrac{2 \beta}{\alpha} f(\mathbf{w_\star}), \eqcomment{$0 \leq f(\underline{\mathbf{w}})$} \\
&\leq \beta \|\mathbf{w} - \mathbf{w_\star} \|^2 + 2\dfrac{\beta \varepsilon}{\alpha}, \eqcomment{definition of $\varepsilon$}
\label{eq:strgly_cvx_iterate_distance_to_function_distance} \stepcounter{equation}\tag{\theequation}
\end{align*}
\end{leftlinebox}
Taking the expectation, we combine the previous bounds to obtain the final result:
\begin{leftlinebox}[nobreak=false]
\begin{align*}
\kern-1em \mathbb{E} [f(\mathbf{w}_{t+1})] - f(\mathbf{w_\star})
&\leq \beta \mathbb{E} [\|\mathbf{w}_{t+1} - \mathbf{w_\star} \|^2] + 2\dfrac{\beta \varepsilon}{\alpha}, \\
&\leq \beta \left( \left( 1 - \dfrac{\alpha}{4 \beta} \right)^t \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{4 \beta}{\alpha} \left(\dfrac{\delta}{4 \beta^2} + \dfrac{\varepsilon}{2 \alpha} \right) \right) + 2\dfrac{\beta \varepsilon}{\alpha}, \\
&= \beta \left( 1 - \dfrac{\alpha}{4 \beta} \right)^t \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{4 \beta}{\alpha} \left(\dfrac{\delta}{4 \beta} + \dfrac{\varepsilon \beta}{2 \alpha} \right) + 2\dfrac{\beta \varepsilon}{\alpha}, \\
&= \beta \left( 1 - \dfrac{\alpha}{4 \beta} \right)^t \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{\delta}{\alpha} + 2 \dfrac{\beta}{\alpha} \varepsilon + 2 \dfrac{\beta^2}{\alpha^2} \varepsilon, \\
&\leq \beta \exp\left(- \dfrac{\alpha t}{4 \beta} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{\delta}{\alpha} + 2 \dfrac{\beta}{\alpha} \varepsilon + 2 \dfrac{\beta^2}{\alpha^2} \varepsilon.
\end{align*}
\end{leftlinebox}
\end{proof}
\begin{restatable}{theorem}{thaligstronglycvxsmalleta}\label{th:alig_cvx_strongly_small_eta}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is $\alpha$-strongly convex and $\beta$-smooth.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$, and suppose that $\delta > 2 \beta \varepsilon$.
Further assume that $\eta \leq \frac{1}{2 \beta}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
\mathbb{E}[f(\mathbf{w}_{T+1})] - f_\star
\leq \beta \exp \left(\dfrac{-\alpha \eta T }{2} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{\delta}{ \alpha} + \dfrac{4 \varepsilon \beta}{\alpha}.
\end{equation}
\end{restatable}
\begin{proof}
Re-using inequalities (\ref{eq:smooth_iterate_smalleta_bound}) and (\ref{eq:smooth_smalleta_bound_first_term}) from the proof of Theorem \ref{th:alig_cvx_smooth_small_eta}, we can write:
\begin{equation}
\begin{split}
\kern-1em \| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\eta \delta}{2 \beta} + \gamma_t \ell_{z_t}(\mathbf{w_\star}), \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (\ell_{z_t}(\mathbf{w}_t) - \ell_{z_t}(\mathbf{w_\star})) + \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon \\
&\quad \eqcomment{using $\gamma_t \leq \eta$, $0 \leq \ell_{z_t}(\mathbf{w_\star}) \leq \varepsilon$}.
\end{split}
\end{equation}
Taking the expectation over $z_t|z_{t-1}$, we obtain:
\begin{equation}
\kern-1em \mathbb{E}_{z_t|z_{t-1}} [\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2]
\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \eta (f(\mathbf{w}_t) - f(\mathbf{w_\star})) + \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon.
\end{equation}
Therefore, we can write:
\begin{equation}
\begin{split}
\kern-1em \mathbb{E}_{z_t|z_{t-1}} [\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2]
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - \dfrac{\alpha \eta}{2} \| \mathbf{w}_t - \mathbf{w_\star} \|^2 + \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon, \eqcomment{Lemma \ref{lemma:strglycvx_fun_bound}} \\
&= \left(1 - \dfrac{\alpha \eta}{2} \right) \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 + \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon.
\end{split}
\end{equation}
Then a trivial induction gives that:
\begin{equation}
\begin{split}
\mathbb{E}[\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2]
&\leq \left(1 - \dfrac{\alpha \eta}{2} \right)^T \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \left( \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon \right)\sum\limits_{t=0}^T \left(1 - \dfrac{\alpha \eta}{2} \right)^t, \\
&\leq \left(1 - \dfrac{\alpha \eta}{2} \right)^T \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \left( \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon \right)\sum\limits_{t=0}^\infty \left(1 - \dfrac{\alpha \eta}{2} \right)^t, \\
&= \left(1 - \dfrac{\alpha \eta}{2} \right)^T \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \left( \dfrac{\eta \delta}{2 \beta} + \eta \varepsilon \right) \dfrac{1}{1 - \left(1 - \dfrac{\alpha \eta}{2} \right)}, \\
&= \left(1 - \dfrac{\alpha \eta}{2} \right)^T \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{\delta}{\alpha \beta} + \dfrac{2 \varepsilon}{\alpha}. \\
\end{split}
\end{equation}
We now re-use the inequality (\ref{eq:strgly_cvx_iterate_distance_to_function_distance}) in expectation to write:
\begin{equation}
\begin{split}
\mathbb{E}[f(\mathbf{w}_{T+1})] - f_\star
&\leq \beta \mathbb{E}[\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2] + \dfrac{2 \beta \varepsilon}{\alpha}, \\
&\leq \beta \left(1 - \dfrac{\alpha \eta}{2} \right)^T \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{\delta}{\alpha} + \dfrac{4 \varepsilon \beta}{\alpha}, \\
&\leq \beta \exp \left(\dfrac{-\alpha \eta T }{2} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2 + \dfrac{\delta}{\alpha} + \dfrac{4 \varepsilon \beta}{\alpha}. \\
\end{split}
\end{equation}
\end{proof}
\section{Detailed Non-Convex Results}
\label{app:sec:detailed_noncvx_results}
The Restricted Secant Inequality (RSI) is a milder assumption than convexity.
It can be defined as follows:
\begin{definition}
Let $f: \mathbb{R}^p \to \mathbb{R}$ be a lower-bounded differentiable function achieving its minimum at $\mathbf{w_\star}$.
We say that $f$ satisfies the RSI if there exists $\alpha > 0$ such that:
\begin{equation}
\forall {\bm{w}} \in \mathbb{R}^p, \: \nabla f (\mathbf{w})^\top ({\bm{w}} - \mathbf{w_\star}) \geq \alpha \| {\bm{w}} - \mathbf{w_\star} \|^2.
\end{equation}
\end{definition}
The RSI is sometimes used to prove convergence of optimization algorithms without assuming convexity \citep{Vaswani2019a}.
As we prove below, the Polyak step-size may fail to converge under the RSI assumption, even in a non-stochastic setting with the exact minimum known.
\begin{figure}[ht]
\centering
\footnotesize
\input{include/rsi_plot_arxiv.tex}
\caption{\em Illustration of the function $f$, which satisfies the RSI. When starting at $w = -3/5$, gradient descent with the Polyak step-size oscillates between $w = -3/5$ and $w = 3/5$.}
\label{fig:rsi2}
\end{figure}
\begin{restatable}{proposition}{rsiexample}\label{prop:rsi_example}
Let $f: w \in [\frac{-3}{5}; \frac{3}{5}] \mapsto w^2 - |w|^3$.
Then $f$ satisfies the RSI with $\alpha = \frac{1}{5}$.
\end{restatable}
\begin{proof}
First we note that $f$ achieves its minimum at $w_{\star} = 0$, and that $f(w_{\star}) = 0$.
In addition, we introduce the sign function $\sigma(w)$, which is equal to $1$ if $w \geq 0$, and $-1$ otherwise.
Now let $w \in [\frac{-3}{5}; \frac{3}{5}]$.
Then we have that:
\begin{equation}
\begin{split}
\nabla f (w) (w - w_{\star}) - \frac{1}{5} (w - w_{\star})^2
&= (2 w - 3 \sigma(w) w^2) (w - 0) - \frac{1}{5} (w - 0)^2, \\
&= \frac{9}{5} w^2 - 3 \sigma(w) w^3, \\
&= 3 w^2 (\frac{3}{5} - \sigma(w) w), \\
&\geq 0. \eqcomment{since $\sigma(w) w = |w| \leq \frac{3}{5}$}
\end{split}
\end{equation}
\end{proof}
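This inequality can also be checked numerically on a grid of the domain (a small sanity check, not a substitute for the proof above):

```python
def f(w):
    # f(w) = w^2 - |w|^3 on [-3/5, 3/5]
    return w ** 2 - abs(w) ** 3

def grad_f(w):
    # derivative: 2w - 3*sigma(w)*w^2, with sigma the sign function
    sigma = 1.0 if w >= 0 else -1.0
    return 2 * w - 3 * sigma * w ** 2

# Check the RSI inequality grad_f(w) * (w - w_star) >= alpha * (w - w_star)^2
# with w_star = 0 and alpha = 1/5, on a grid of the domain (small tolerance
# for floating-point rounding, since equality holds at the endpoints).
alpha = 1 / 5
grid = [-3 / 5 + k * (6 / 5) / 1000 for k in range(1001)]
rsi_holds = all(grad_f(w) * w >= alpha * w ** 2 - 1e-12 for w in grid)
```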
\begin{restatable}{proposition}{polyakoscillation}\label{prop:polyak_oscillation}
Assume that we apply the Polyak step-size to $f: w \in [\frac{-3}{5}; \frac{3}{5}] \mapsto w^2 - |w|^3$, starting from the initial point $w_0 = -3/5$.
Then the iterates oscillate between $-3/5$ and $3/5$.
\end{restatable}
\begin{proof}
We show that, starting with $w_0 = -\frac{3}{5}$, we obtain $w_1 = \frac{3}{5}$.
This will prove oscillation of the iterates by symmetry of the problem.
Since $w_0 = \frac{-3}{5}$, we have $f(w_0) = \frac{9}{25} - \frac{27}{125} = \frac{18}{125}$.
Furthermore, $\nabla f(w_0) = 2 (\frac{-3}{5}) + 3 (\frac{9}{25}) = \frac{-3}{25}$.
Therefore:
\begin{equation}
\begin{split}
w_1
&= w_0 - \dfrac{f(w_0)}{(\nabla f (w_0))^2} \nabla f (w_0), \\
&= w_0 - \dfrac{f(w_0)}{\nabla f (w_0)}, \\
&= \frac{-3}{5} + \dfrac{\frac{18}{125}}{\frac{3}{25}}, \\
&= \frac{-3}{5} + \frac{6}{5}, \\
&= \frac{3}{5}.
\end{split}
\end{equation}
\end{proof}
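The oscillation can also be verified numerically (a short sanity-check script, not part of the formal argument):

```python
def f(w):
    return w ** 2 - abs(w) ** 3

def grad_f(w):
    sigma = 1.0 if w >= 0 else -1.0
    return 2 * w - 3 * sigma * w ** 2

# Polyak step-size iteration with f_star = 0:
# w <- w - (f(w) / ||grad_f(w)||^2) * grad_f(w)
w = -3 / 5
iterates = [w]
for _ in range(4):
    g = grad_f(w)
    w = w - (f(w) / g ** 2) * g
    iterates.append(w)
# iterates alternate between -3/5 and 3/5, up to floating-point error
```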
\begin{restatable}{theorem}{thrsismooth}\label{th:alig_rsi}
We assume that $\Omega = \mathbb{R}^p$, and that for every $z \in \mathcal{Z}$, $\ell_z$ is $\beta$-smooth and satisfies the RSI with constant $\alpha$.
We further assume that there exists $\mathbf{w_\star}$ a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) = 0$.
Let $\eta$ be such that $\frac{1}{2 \beta} \leq \eta \leq \frac{2 \alpha}{\beta^2}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
f(\mathbf{w}_{T+1}) - f_\star
\leq \frac{\beta}{2} \exp \left( \left(-\dfrac{\alpha}{\beta} + \dfrac{\eta \beta}{2} \right) T \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2.
\end{equation}
Note: this result assumes perfect interpolation, and thus we set $\delta = 0$ (no small constant for numerical stability).
\end{restatable}
\begin{proof}
We consider the update at time $t$, which we condition on the draw of $z_t \in \mathcal{Z}$.
Since we consider $\delta=0$, we have $\gamma_t = \min \left\{\frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 }, \eta \right\}$. We suppose $\nabla \ell_{z_t}(\mathbf{w}_t) \neq \bm{0}$.
\begin{equation} \label{eq:rsi}
\begin{split}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2
&= \| \Pi_\Omega(\mathbf{w}_t - \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)) - \mathbf{w_\star} \|^2, \\
&\leq \| \mathbf{w}_t - \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t) - \mathbf{w_\star} \|^2, \eqcomment{$\Pi_\Omega$ projection} \\
&= \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)^\top(\mathbf{w}_{t} - \mathbf{w_\star}) + \gamma_t^2 \| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2, \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)^\top(\mathbf{w}_{t} - \mathbf{w_\star}) + \gamma_t \ell_{z_t}(\mathbf{w}_t), \eqcomment{since $\gamma_t \leq \frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2}$}\\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t \nabla \ell_{z_t}(\mathbf{w}_t)^\top(\mathbf{w}_{t} - \mathbf{w_\star}) + \gamma_t \dfrac{\beta}{2} \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2, \eqcomment{Lemma 3.4 of \cite{Bubeck2015}} \\
&\leq \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 - 2 \gamma_t \alpha \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2 + \gamma_t \dfrac{\beta}{2} \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2, \eqcomment{RSI inequality} \\
&= \Big( 1 - 2 \gamma_t \alpha + \gamma_t \dfrac{\beta}{2} \Big) \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2.
\end{split}
\end{equation}
Since we know that $\frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2} \geq \frac{1}{2 \beta}$ (Lemma \ref{lemma:smooth_bound}) and $\eta \geq \frac{1}{2 \beta}$, we have that $\gamma_t \geq \frac{1}{2 \beta}$.
Then, using both $\gamma_t \geq \frac{1}{2 \beta}$ and $\gamma_t \leq \eta$, we can write:
\begin{equation}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2
\leq \left(1 - \dfrac{\alpha}{\beta} + \dfrac{\eta \beta}{2} \right) \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2.
\end{equation}
With a trivial induction we obtain:
\begin{equation}
\begin{split}
\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2
&\leq \left(1 - \dfrac{\alpha}{\beta} + \dfrac{\eta \beta}{2} \right)^T \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2, \\
&\leq \exp \left( \left(-\dfrac{\alpha}{\beta} + \dfrac{\eta \beta}{2} \right) T \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2. \\
\end{split}
\end{equation}
Since $f$ is $\beta$-smooth and the problem is unconstrained by assumption, we have $f(\mathbf{w}_{T+1}) \leq \frac{\beta}{2} \| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2$ (by Lemma 3.4 of \cite{Bubeck2015}), and we obtain the desired result.
\end{proof}
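To illustrate the geometric convergence guaranteed by this theorem, the following sketch runs ALI-G (with $\delta = 0$) on a toy interpolating problem of our own construction: each sample $z$ has loss $\ell_z({\bm{w}}) = \frac{c_z}{2} \| {\bm{w}} - \mathbf{w_\star} \|^2$ with $c_z \in [0.5, 1.5]$, so each $\ell_z$ is $c_z$-smooth, satisfies the RSI with constant $c_z$, and vanishes at the shared minimizer $\mathbf{w_\star}$ ($\alpha = 0.5$, $\beta = 1.5$, and $\eta = 0.4$ lies in the required range):

```python
import math
import random

random.seed(0)

w_star = [1.0, 2.0]
eta = 0.4  # satisfies 1/(2*beta) <= eta <= 2*alpha/beta^2 for alpha=0.5, beta=1.5

def alig_step(w):
    c = random.uniform(0.5, 1.5)  # draw a sample z_t
    diff = [wi - wsi for wi, wsi in zip(w, w_star)]
    loss = 0.5 * c * sum(d * d for d in diff)   # ell_{z_t}(w_t)
    grad = [c * d for d in diff]                # grad ell_{z_t}(w_t)
    sq_norm = sum(g * g for g in grad)
    if sq_norm == 0.0:
        return w
    gamma = min(loss / sq_norm, eta)  # delta = 0: perfect interpolation
    return [wi - gamma * gi for wi, gi in zip(w, grad)]

w = [0.0, 0.0]
for _ in range(100):
    w = alig_step(w)
dist = math.dist(w, w_star)  # contracts geometrically, as the theorem predicts
```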
\begin{restatable}{theorem}{thrsismooth_smalleta}\label{th:alig_rsi_small_eta}
We assume that $\Omega = \mathbb{R}^p$, and that for every $z \in \mathcal{Z}$, $\ell_z$ is $\beta$-smooth and satisfies the RSI with constant $\alpha$.
We further assume that there exists $\mathbf{w_\star}$ a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) = 0$.
Let $\eta$ be such that $0 < \eta \leq \frac{1}{2 \beta}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
f(\mathbf{w}_{T+1}) - f_\star
\leq \frac{\beta}{2} \exp \left( \left(- \eta \left(2 \alpha - \frac{\beta}{2} \right) \right) T \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2.
\end{equation}
Note: this result assumes perfect interpolation, and thus we set $\delta = 0$ (no small constant for numerical stability).
\end{restatable}
\begin{proof}
We consider the update at time $t$, which we condition on the draw of $z_t \in \mathcal{Z}$.
Since we consider $\delta=0$, we have $\gamma_t = \min \left\{\frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2 }, \eta \right\}$.
We suppose $\nabla \ell_{z_t}(\mathbf{w}_t) \neq \bm{0}$.
We re-use equation (\ref{eq:rsi}) to write:
\begin{equation}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2 \leq \Big( 1 - 2 \gamma_t \alpha + \gamma_t \dfrac{\beta}{2} \Big) \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2.
\end{equation}
Since we know that $\frac{\ell_{z_t}(\mathbf{w}_t)}{\| \nabla \ell_{z_t}(\mathbf{w}_t) \|^2} \geq \frac{1}{2 \beta}$ (Lemma \ref{lemma:smooth_bound}) and $\eta \leq \frac{1}{2 \beta}$, we have that $\gamma_t = \eta$ necessarily.
Thus we obtain:
\begin{equation*}
\| \mathbf{w}_{t+1} - \mathbf{w_\star} \|^2
\leq \Big( 1 - 2 \eta \alpha + \eta \dfrac{\beta}{2} \Big) \| \mathbf{w}_{t} - \mathbf{w_\star} \|^2.
\end{equation*}
With a trivial induction we obtain:
\begin{align*}
\| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2
&\leq \left(1 - \eta \left(2 \alpha - \frac{\beta}{2} \right) \right)^T \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2, \\
&\leq \exp \left( \left(- \eta \left(2 \alpha - \frac{\beta}{2} \right) \right) T \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2. \\
\end{align*}
Since $f$ is $\beta$-smooth and the problem is unconstrained by assumption, we have $f(\mathbf{w}_{T+1}) \leq \frac{\beta}{2} \| \mathbf{w}_{T+1} - \mathbf{w_\star} \|^2$ (by Lemma 3.4 of \cite{Bubeck2015}), and we obtain the desired result.
\end{proof}
\vfill
\section{Additional Experimental Details}
\subsection{Standard Deviation of CIFAR Results}
\input{std_results}
\subsection{Additional Details About Training Protocol on ImageNet}
\paragraph{Data Processing.}
We use 1.23M images for training.
As mentioned in the paper, we do not use any data augmentation on this task.
Our data processing can be described as follows.
Each training image is resized so that its smaller dimension is 224 pixels, after which we take a centered square crop of 224 by 224.
The cropped image is then centered and normalized per channel (the mean and standard deviation per channel are computed across all training images), before being fed to the neural network.
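The crop geometry and per-channel normalization can be sketched in plain Python as follows (the helper names are ours for illustration, not those of the actual training code; the initial resize is omitted):

```python
def center_crop_box(height, width, size):
    # coordinates (top, left, bottom, right) of a centered size x size crop
    top = (height - size) // 2
    left = (width - size) // 2
    return top, left, top + size, left + size

def normalize_channel(values, mean, std):
    # per-channel normalization: (x - mean) / std, where mean and std
    # are computed across all training images
    return [(v - mean) / std for v in values]

# a 224 x 300 image (height x width) already resized so its smaller
# dimension is 224: the crop keeps the full height, centered in width
box = center_crop_box(224, 300, 224)
```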
\paragraph{Loss Function.}
We use the top-k truncated cross-entropy \cite{Lapin2016} as our loss function for training the model on ImageNet.
In particular, we use $k=5$ so that we optimize for the commonly used top-5 error, and we use the default temperature parameter $\tau=1$.
Our PyTorch code re-uses the implementation from \url{https://github.com/locuslab/lml}.
\section*{Abstract}
\addcontentsline{toc}{section}{Abstract}
In modern supervised learning, many deep neural networks are able to interpolate the data: the empirical loss can be driven to near zero on all samples simultaneously.
In this work, we explicitly exploit this interpolation property for the design of a new optimization algorithm for deep learning, which we term Adaptive Learning-rates for Interpolation with Gradients (ALI-G).
ALI-G retains the two main advantages of Stochastic Gradient Descent (SGD), which are (i) a low computational cost per iteration and (ii) good generalization performance in practice.
At each iteration, ALI-G exploits the interpolation property to compute an adaptive learning-rate in closed form.
In addition, ALI-G clips the learning-rate to a maximal value, which we prove to be helpful for non-convex problems.
Crucially, in contrast to the learning-rate of SGD, the maximal learning-rate of ALI-G does not require a decay schedule, which makes it considerably easier to tune.
We provide convergence guarantees of ALI-G in various stochastic settings.
Notably, we tackle the realistic case where the interpolation property is satisfied up to some tolerance.
We provide experiments on a variety of architectures and tasks:
(i) learning a differentiable neural computer;
(ii) training a wide residual network on the SVHN data set;
(iii) training a Bi-LSTM on the SNLI data set;
and (iv) training wide residual networks and densely connected networks on the CIFAR data sets.
ALI-G produces state-of-the-art results among adaptive methods, and even yields comparable performance with SGD, which requires manually tuned learning-rate schedules.
Furthermore, ALI-G is simple to implement in any standard deep learning framework and can be used as a drop-in replacement in existing code.
\section{Introduction}
Training a deep neural network is a challenging optimization problem: it involves minimizing the average of many high-dimensional non-convex functions.
In practice, the main algorithms of choice are Stochastic Gradient Descent (SGD) \citep{Robbins1951} and adaptive gradient methods such as AdaGrad \citep{Duchi2011} or Adam \citep{Kingma2015}.
It has been observed that SGD tends to provide better generalization performance than adaptive gradient methods \citep{Wilson2017}.
However, the downside of SGD is that it requires the manual design of a learning-rate schedule, which is widely regarded as an onerous and time-consuming task.
In this work, we alleviate this issue with the design of an adaptive learning-rate algorithm that needs minimal tuning for good performance.
Indeed, we postulate that by using the same descent direction as SGD while automatically adapting its learning-rate, the resulting algorithm can offer similar generalization performance while requiring considerably less tuning.
In this work, we build on the following two ideas.
First, an adaptive learning-rate can be computed for the non-stochastic gradient direction when the minimum value of the objective function is known \citep{Polyak1969,Shor1985,Brannlund1995,Nedic2001, Nedic2001a}.
And second, one such minimum value is usually approximately known for interpolating models: for instance, it is close to zero for a model trained with the cross-entropy loss.
By carefully combining these two ideas, we create a stochastic algorithm that (i) provably converges fast in convex or Restricted Secant Inequality (RSI) settings, and (ii) obtains state-of-the-art empirical results with neural networks.
We refer to this algorithm as Adaptive Learning-rates for Interpolation with Gradients (ALI-G).
Procedurally, ALI-G is close to many existing algorithms, such as Deep Frank-Wolfe \citep{Berrada2019}, {\sc aProx} \citep{Asi2019} and $L_4$ \citep{Rolinek2018}.
And yet uniquely, thanks to its careful design and analysis, the learning-rate of ALI-G effectively requires a single hyper-parameter that does not need to be decayed.
Since ALI-G is easy to implement in any deep learning framework, we believe that it can prove to be a practical and reliable optimization tool for deep learning.
\paragraph{Contributions.}
We summarize the contributions of this work as follows: \\
- We design an adaptive learning-rate algorithm that uses a single hyper-parameter and does not need any decaying schedule.
In contrast, the closely related {\sc aProx} \citep{Asi2019} and $L_4$ \citep{Rolinek2018} use respectively two and four hyper-parameters for their learning-rate.
\\
- We provide convergence rates of ALI-G in various stochastic convex settings.
Importantly, our theoretical results take into account the error in the estimate of the minimum objective value.
To the best of our knowledge, our work is the first to establish convergence rates for interpolation with approximate estimates. \\
- We prove that using a maximal learning-rate helps with convergence for a class of non-convex problems. \\
- We demonstrate state-of-the-art results for ALI-G on learning a differentiable neural computer; training variants of residual networks on the SVHN and CIFAR data sets; and training a Bi-LSTM on the Stanford Natural Language Inference data set.
\section{The Algorithm}
\subsection{Problem Setting}
\label{sec:pb_setting}
\paragraph{Loss Function.}
We consider a supervised learning task where the model is parameterized by ${\bm{w}} \in \mathbb{R}^p$.
Usually, the objective function can be expressed as an expectation over $z \in \mathcal{Z}$, a random variable indexing the samples of the training set:
\begin{equation}
f({\bm{w}}) \triangleq \mathbb{E}_{z \in \mathcal{Z}}[\ell_z({\bm{w}})],
\end{equation}
where each $\ell_z$ is the loss function associated with the sample $z$.
We assume that each $\ell_z$ is non-negative, which is the case for the large majority of loss functions used in machine learning.
For instance, suppose that the model is a deep neural network with weights ${\bm{w}}$ performing classification.
Then for each sample $z$, $\ell_z({\bm{w}})$ can represent the cross-entropy loss, which is always non-negative.
Other non-negative loss functions include the structured or multi-class hinge loss, and the $\ell_1$ or $\ell_2$ loss functions for regression.
\paragraph{Regularization.}
It is often desirable to employ a regularization function $\phi$ in order to promote generalization.
In this work, we incorporate such regularization as a constraint on the feasible domain: $\Omega = \left\{ {\bm{w}} \in \mathbb{R}^p: \ \phi({\bm{w}}) \leq r \right\}$ for some value of $r$.
In the deep learning setting, this will allow us to assume that the objective function can be driven close to zero without unrealistic assumptions about the regularization.
Our framework can handle any constraint set $\Omega$ on which Euclidean projections are computationally efficient.
This includes the feasible set induced by $\ell_2$ regularization: $\Omega = \left\{ {\bm{w}} \in \mathbb{R}^p: \ \| {\bm{w}} \|_2^2 \leq r \right\}$, for which the projection is given by a simple rescaling of ${\bm{w}}$.
Finally, note that if we do not wish to use any regularization, we define $\Omega = \mathbb{R}^p$ and the corresponding projection is the identity.
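The rescaling mentioned above can be written in a few lines (an illustrative reference implementation; the function name is ours):

```python
import math

def project_l2_ball(w, r):
    # Euclidean projection onto {w : ||w||_2^2 <= r}:
    # rescale w onto the sphere of radius sqrt(r) if it lies outside the ball
    norm = math.sqrt(sum(x * x for x in w))
    radius = math.sqrt(r)
    if norm <= radius:
        return list(w)  # already feasible: the projection is the identity
    return [x * radius / norm for x in w]

w_proj = project_l2_ball([6.0, 8.0], 25.0)  # norm 10 rescaled to radius 5
```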
\paragraph{Problem Formulation.}
The learning task can be expressed as the problem $(\mathcal{P})$ of finding a feasible vector of parameters $\mathbf{w_\star} \in \Omega$ that minimizes $f$:
\begin{equation} \tag{$\mathcal{P}$} \label{eq:main_problem}
\mathbf{w_\star} \in \argmin\limits_{{\bm{w}} \in \Omega} f({\bm{w}}).
\end{equation}
Also note that $f_\star$ refers to the minimum value of $f$ over $\Omega$: $f_\star \triangleq \min_{{\bm{w}} \in \Omega} f({\bm{w}})$.
\paragraph{Interpolation.}
We say that the problem (\ref{eq:main_problem}) satisfies the interpolation assumption if there exists a solution $\mathbf{w_\star}$ that simultaneously minimizes all individual loss functions:
\begin{equation}
\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) = 0.
\label{eq:def_interpolation}
\end{equation}
The condition (\ref{eq:def_interpolation}) can be equivalently expressed as $f_\star = 0$.
We also point out that in some cases, it can be more realistic to relax (\ref{eq:def_interpolation}) to $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$ for a small positive $\varepsilon$.
\subsection{The Polyak Step-Size}
\label{sec:polyak_step_size}
Before outlining the ALI-G algorithm, we begin with a brief description of the Polyak step-size, from which ALI-G draws some fundamental ideas.
\paragraph{Setting.}
We assume that $f_\star$ is known and we use non-stochastic updates: at each iteration, the full objective $f$ and its derivative are evaluated.
We denote by $\nabla f({\bm{w}})$ the first-order derivative of $f$ at ${\bm{w}}$ (e.g. $\nabla f({\bm{w}})$ can be a sub-gradient or the gradient).
In addition, $\| \cdot \|$ is the standard Euclidean norm in $\mathbb{R}^p$, and $\Pi_{\Omega}({\bm{w}})$ is the Euclidean projection of the vector ${\bm{w}} \in \mathbb{R}^p$ on the set $\Omega$.
\paragraph{Polyak Step-Size.}
At time-step $t$, using the Polyak step-size \citep{Polyak1969,Shor1985,Brannlund1995,Nedic2001,Nedic2001a} yields the following update:
\begin{equation} \label{eq:polyak_step}
{\bm{w}}_{t+1} = \Pi_\Omega\left({\bm{w}}_{t} - \gamma_t \nabla f({\bm{w}}_t) \right), \text{ where } \gamma_t \triangleq \tfrac{f({\bm{w}}_t) - f_\star}{\|\nabla f({\bm{w}}_t) \|^2},
\end{equation}
where we loosely define $\frac{0}{0}=0$ for simplicity.
\begin{figure}[H]
\centering
\scriptsize
\input{include/plot_step_simple_arxiv.tex}
\caption{\em Illustration of the Polyak step-size in 1D. In this case, and further assuming that $f_\star = 0$, the algorithm coincides with the Newton-Raphson method for finding roots of a function.}
\label{fig:nr_step_simple}
\end{figure}
\paragraph{Interpretation.}
It can be shown that ${\bm{w}}_{t+1}$ lies on the intersection between the linearization of $f$ at ${\bm{w}}_t$ and the horizontal plane $z=f_\star$ (see Figure \ref{fig:nr_step_simple}, more details in Proposition 1 in the appendix).
Note that since $f_\star$ is the minimum of $f$, the Polyak step-size $\gamma_t$ is necessarily non-negative.
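To make the update concrete, consider the following 1-D sketch (our own toy example, with $\Omega = \mathbb{R}$ so that the projection is the identity): on $f(w) = w^2$ with $f_\star = 0$, the Polyak step-size is $\gamma_t = \frac{1}{4}$, so each update halves the iterate.

```python
def f(w):
    return w ** 2

def grad_f(w):
    return 2 * w

def polyak_step(w, f, grad_f, f_star):
    g = grad_f(w)
    if g == 0.0:
        return w  # already at a stationary point (0/0 defined as 0)
    gamma = (f(w) - f_star) / g ** 2  # the Polyak step-size
    return w - gamma * g

# on f(w) = w^2: gamma = w^2 / (2w)^2 = 1/4, so w -> w/2 at each step
trajectory = [8.0]
for _ in range(3):
    trajectory.append(polyak_step(trajectory[-1], f, grad_f, f_star=0.0))
```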
\paragraph{Limitations.}
Equation (\ref{eq:polyak_step}) has two major short-comings that prevent its applicability in a machine learning setting.
First, each update requires a full evaluation of $f$ and its derivative.
Stochastic extensions have been proposed in \citep{Nedic2001,Nedic2001a}, but they still require frequent evaluations of $f$.
This is expensive in the large data setting, and even computationally infeasible when using massive data augmentation.
Second, when applying this method to the non-convex setting of deep neural networks, the method sometimes fails to converge.
Therefore we would like to design an extension of the Polyak step-size that (i) is inexpensive to compute in a stochastic setting (e.g. with a computational cost that is independent of the total number of training samples), and (ii) converges in practice when used with deep neural networks.
The next section introduces the ALI-G algorithm, which achieves these two goals in the interpolation setting.
\subsection{The ALI-G Algorithm}
We now present the ALI-G algorithm.
For this, we suppose that we are in an interpolation setting: the model is assumed to be able to drive the loss function to near zero on all samples simultaneously.
\paragraph{Algorithm.} The main steps of the ALI-G algorithm are provided in Algorithm \ref{algo:prox}. ALI-G iterates over three operations until convergence. First, it computes a stochastic approximation of the learning objective and its derivative (line 3). Second, it computes a step-size decay parameter $\gamma_t$ based on the stochastic information (line 4). Third, it updates the parameters by moving in the negative derivative direction by an amount specified by the step-size, and projecting the resulting vector onto the feasible region (line 5).
\begin{algorithm}[ht]
\caption{\em The ALI-G algorithm}\label{algo:prox}
\begin{algorithmic}[1]
\REQUIRE maximal learning-rate $\eta$, initial feasible ${\bm{w}}_0 \in \Omega$, small constant $\delta > 0$
\STATE $t=0$
\WHILE {not converged}
\STATE Get $\ell_{z_t}({\bm{w}}_t)$, $\nabla \ell_{z_t}({\bm{w}}_t)$ with $z_t$ drawn i.i.d.
\STATE $\gamma_t = \min \left\{ \frac{\ell_{z_t}({\bm{w}}_t)}{\| \nabla \ell_{z_t}({\bm{w}}_t) \|^2 + \delta}, \eta \right\}$
\STATE ${\bm{w}}_{t+1} = \Pi_{\Omega}\left({\bm{w}}_t - \gamma_t \nabla \ell_{z_t}({\bm{w}}_t) \right)$
\STATE $t=t+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
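For concreteness, Algorithm \ref{algo:prox} can be sketched in a few lines of Python for the unconstrained case $\Omega = \mathbb{R}^p$, where the projection is the identity. The noiseless least-squares problem below is an illustrative example of ours, chosen because it satisfies interpolation exactly; the values of $\eta$ and $\delta$ are arbitrary:

```python
import numpy as np

def alig_step(w, loss, grad, eta, delta=1e-8):
    """One ALI-G update for Omega = R^p (projection = identity):
    gamma_t = min( ell(w) / (||grad ell(w)||^2 + delta), eta )."""
    gamma = min(loss / (grad @ grad + delta), eta)
    return w - gamma * grad

# Toy interpolating problem: noiseless least squares, so every
# per-sample loss can be driven to zero simultaneously.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w_true = rng.standard_normal(5)
y = X @ w_true                             # no noise: interpolation holds

w = np.zeros(5)
for t in range(2000):
    i = rng.integers(100)                  # line 3: draw z_t i.i.d.
    r = X[i] @ w - y[i]
    loss = 0.5 * r ** 2                    # ell_{z_t}(w_t)
    grad = r * X[i]                        # nabla ell_{z_t}(w_t)
    w = alig_step(w, loss, grad, eta=0.5)  # lines 4 and 5
```

On this interpolating problem, the iterates converge to $w_{\text{true}}$ without any schedule for the step-size.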
\paragraph{Comparison with the Polyak Step-Size.} There are three main differences to the update in equation (\ref{eq:polyak_step}).
First, each update only uses the loss $\ell_{z_t}$ and its derivative rather than the full objective $f$ and its derivative. Second, the learning-rate $\gamma_t$ is clipped to $\eta$, the maximal learning-rate hyper-parameter.
We emphasize that $\eta$ remains constant throughout the iterations: it is a single hyper-parameter and, unlike the learning-rate of SGD, it does not require a schedule.
Third, the minimum $f_\star$ has been replaced by the lower-bound of $0$.
All these modifications will be justified in the next section.
\paragraph{The ALI-G$^\infty$ Variant.}
When ALI-G uses no maximal learning-rate, we refer to the algorithm as ALI-G$^\infty$, since this is equivalent to using an infinite maximal learning-rate.
Note that ALI-G$^\infty$ requires no hyper-parameter for its step-size.
\paragraph{Momentum.}
In some of our experiments, we accelerate ALI-G with Nesterov momentum.
The update step at line 5 of Algorithm \ref{algo:prox} is then replaced by (i) a velocity update ${\bm{v}}_{t} = \mu {\bm{v}}_{t-1} - \gamma_t \nabla \ell_{z_t}({\bm{w}}_t)$ and (ii) a parameter update ${\bm{w}}_{t+1} = \Pi_{\Omega}\left({\bm{w}}_t + \mu {\bm{v}}_{t} \right)$.
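In the unconstrained case, this variant can be sketched as follows (our own transcription of the two updates above; the momentum coefficient $\mu = 0.9$, the scalar toy loss, and the values of $\eta$ and $\delta$ are illustrative assumptions):

```python
import numpy as np

def alig_momentum_step(w, v, loss, grad, eta, mu=0.9, delta=1e-8):
    """ALI-G step with Nesterov momentum for Omega = R^p:
    (i)  v_t     = mu * v_{t-1} - gamma_t * grad
    (ii) w_{t+1} = w_t + mu * v_t        (projection omitted)."""
    gamma = min(loss / (grad @ grad + delta), eta)
    v = mu * v - gamma * grad
    return w + mu * v, v

# Illustration on the scalar loss ell(w) = 0.5 * w^2.
w = np.array([1.0])
v = np.zeros(1)
for _ in range(500):
    loss = 0.5 * (w @ w)
    grad = w.copy()
    w, v = alig_momentum_step(w, v, loss, grad, eta=0.1)
```

The iterate oscillates due to momentum but still converges towards the minimizer at $0$.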
\section{Justification and Analysis}
\subsection{Stochasticity}
By definition, the interpolation setting gives $f_\star = 0$, which we used in ALI-G to simplify the formula of the learning-rate $\gamma_t$.
More subtly, the interpolation property also allows the updates to rely on the stochastic estimate $\ell_{z_t}({\bm{w}}_t)$ rather than the exact but expensive $f({\bm{w}}_t)$.
Intuitively, this is possible because in the interpolation setting, we know the minimum of the loss function for each individual training sample.
Recall that ALI-G$^\infty$ is the variant of ALI-G that uses no maximal learning-rate.
The following result formalizes the convergence guarantee of ALI-G$^\infty$ in the stochastic convex setting.
\begin{theorem}[Convex and Lipschitz] \label{th:alig_cvx}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is convex and $C$-Lipschitz.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}), and assume that the interpolation property is approximately satisfied: $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$, for some interpolation tolerance $\varepsilon \geq 0$.
Then ALI-G$^\infty$ applied to $f$ satisfies:
\begin{equation}
\mathbb{E} \left[f\left(\tfrac{1}{T+ 1} \sum\limits_{t=0}^T {\bm{w}}_t \right) \right]
\leq \tfrac{ \| {\bm{w}}_{0} - \mathbf{w_\star} \| \sqrt{C^2 + \delta}}{\sqrt{T + 1}} + \varepsilon \sqrt{\left(\tfrac{C^2}{\delta} + 1 \right)}.
\end{equation}
\end{theorem}
In other words, by assuming interpolation, ALI-G provably converges while requiring only $\ell_{z_t}({\bm{w}}_t)$ and $\nabla \ell_{z_t}({\bm{w}}_t)$ (stochastic estimation per sample) to compute its learning-rate.
In contrast, the Polyak step-size would require $f({\bm{w}}_t)$ and $\nabla f({\bm{w}}_t)$ to compute the learning-rate (deterministic computation over all training samples).
This is because the Polyak step-size exploits the knowledge of $f_\star$ only, which is weaker information than knowing the minimum of all individual loss functions $\ell_z$ (as ALI-G does in the interpolation setting).
This difference induces a major computational advantage of ALI-G over the usual Polyak step-size.
We emphasize that in Theorem \ref{th:alig_cvx}, our careful analysis explicitly shows the dependency of the convergence result on the interpolation tolerance $\varepsilon$.
It is reassuring to note that convergence is exact when the interpolation property is exactly satisfied ($\varepsilon = 0$).
In the appendix, we also establish convergence rates of $\mathcal{O}(1 / T)$ for smooth convex functions, and $\mathcal{O}(\exp(-\alpha T / 8 \beta ))$ for $\alpha$-strongly convex and $\beta$-smooth functions.
Similar results can be proved when using a maximal learning-rate $\eta$: the convergence speed then remains unchanged provided that $\eta$ is large enough, and it is lowered when $\eta$ is small.
We refer the interested reader to the appendix for the formal results and their proofs.
\paragraph{Interpolation and Gradient Variance.}
In the literature, most convergence results of SGD depend on the variance of the gradient, which we denote by $\upsilon$ here.
The reader may have noticed that our convergence result depends only on the interpolation tolerance $\varepsilon$ rather than on $\upsilon$.
We briefly compare how these two quantities help convergence in their own distinct ways.
The gradient variance $\upsilon$ globally characterizes how much the gradient direction can differ across individual samples $z$, at any point ${\bm{w}}$ of the parameter space.
In particular, a low value for $\upsilon$ implies that the loss functions $\ell_z$ agree in the steepest descent direction at any point of the trajectory ${\bm{w}}_0, ..., {\bm{w}}_T$.
In contrast, the interpolation tolerance $\varepsilon$ locally characterizes the behavior of all loss functions near a global minimum $\mathbf{w_\star}$ only.
More specifically, a low value for $\varepsilon$ ensures that all loss functions $\ell_z$ agree in a common minimizer $\mathbf{w_\star}$.
Thus these two mechanisms are distinct ways of ensuring convergence of SGD.
Importantly, a low interpolation tolerance $\varepsilon$ does not necessarily imply a low gradient variance $\upsilon$ and vice-versa.
\subsection{Maximal Learning-Rate}
\paragraph{Non-Convexity.}
The Polyak step-size may fail to converge when the objective is non-convex, as Figure \ref{fig:rsi} illustrates: in this (non-convex) setting, gradient descent with the Polyak step-size oscillates between two symmetrical points because its step-size is too large.
A similar behavior can be observed on the non-convex problem of training deep neural networks.
\begin{figure}[h]
\centering
\footnotesize
\input{include/rsi_plot_arxiv.tex}
\caption{\em
A simple example where the Polyak step-size oscillates due to non-convexity.
On this problem, ALI-G converges whenever its maximal learning-rate is lower than $10$.
}
\label{fig:rsi}
\end{figure}
In order to analyze the convergence of ALI-G in a non-convex setting, we introduce the Restricted Secant Inequality (RSI) \cite{Zhang2013}:
\begin{definition}
Let $\phi: \mathbb{R}^p \to \mathbb{R}$ be a lower-bounded differentiable function achieving its minimum at $\mathbf{w_\star}$.
We say that $\phi$ satisfies the RSI if there exists $\alpha > 0$ such that:
\begin{equation}
\forall {\bm{w}} \in \mathbb{R}^p, \: \nabla \phi ({\bm{w}})^\top ({\bm{w}} - \mathbf{w_\star}) \geq \alpha \| {\bm{w}} - \mathbf{w_\star} \|^2.
\end{equation}
\end{definition}
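As a numerical illustration of this definition (our own example, not the function of the figure below), the non-convex function $\phi(w) = w^2 + 3\sin^2(w)$ can be checked against the RSI with $\alpha = 0.5$ on a fine grid:

```python
import numpy as np

# phi(w) = w^2 + 3*sin(w)^2 is non-convex, yet a grid check suggests it
# satisfies the RSI around w_star = 0 with alpha = 0.5.
phi = lambda w: w ** 2 + 3 * np.sin(w) ** 2
phi_grad = lambda w: 2 * w + 3 * np.sin(2 * w)

w_star, alpha = 0.0, 0.5
ws = np.linspace(-20.0, 20.0, 100001)
lhs = phi_grad(ws) * (ws - w_star)      # <grad phi(w), w - w_star>
rhs = alpha * (ws - w_star) ** 2        # alpha * ||w - w_star||^2
rsi_holds_on_grid = bool(np.all(lhs >= rhs))
```

A grid check is of course only a sanity check rather than a proof, but it conveys how a non-convex function can satisfy the inequality.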
The RSI does not require convexity and is a weaker assumption in the sense that all strongly convex functions satisfy the RSI \cite{Zhang2013}.
In particular, the example in figure \ref{fig:rsi} does satisfy the RSI (proof in the appendix).
In other words, the example above shows that the Polyak step-size can fail to converge under the RSI assumption.
In contrast, we prove that with an appropriate maximal learning-rate, ALI-G converges (exponentially fast) on all interpolating problems that satisfy the RSI:
\begin{restatable}{theorem}{thrsismooth}\label{th:alig_rsi}
We assume that $\Omega = \mathbb{R}^p$, and that for every $z \in \mathcal{Z}$, $\ell_z$ is $\beta$-smooth and satisfies the RSI with constant $\alpha$.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) = 0$.
Further assume that $\frac{1}{2 \beta} \leq \eta \leq \frac{2 \alpha}{\beta^2}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
f(\mathbf{w}_{T+1}) - f_\star
\leq \tfrac{\beta}{2} \exp \left( \tfrac{-(2\alpha - \eta \beta^2)T }{2\beta} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2.
\end{equation}
\end{restatable}
Note that the above theorem assumes perfect interpolation, that is, a tolerance of $\varepsilon = 0$.
Nonetheless, it demonstrates the importance of a maximal learning-rate, which does not need a manually designed decay schedule.
It is currently an open question whether a result similar to Theorem \ref{th:alig_rsi} can be proved with some interpolation tolerance $\varepsilon > 0$ on the value of all $\ell_z (\mathbf{w_\star})$.
\paragraph{Proximal Interpretation.}
Interestingly, the use of a maximal learning-rate can be seen as a natural extension of SGD for non-negative loss functions:
\begin{restatable}{proposition}{thproxstep}[Proximal Interpretation] \label{th:prox_step}
Suppose that $\Omega = \mathbb{R}^p$ and let $\delta = 0$.
We consider the update performed by SGD: ${\bm{w}}_{t+1}^{\text{SGD}} = {\bm{w}}_t - \eta_t \nabla \ell_{z_t}({\bm{w}}_t)$; and the update performed by ALI-G: ${\bm{w}}_{t+1}^{\text{ALI-G}} = {\bm{w}}_t - \gamma_t \nabla \ell_{z_t}({\bm{w}}_t)$, where $\gamma_t = \min\left\{\frac{\ell_{z_t}({\bm{w}}_t)}{\| \nabla \ell_{z_t}({\bm{w}}_t)\|^2}, \eta \right\}$.
Then we have:
\begin{equation}
\begin{split}
&{\bm{w}}_{t+1}^{\text{SGD}} = \argmin_{{\bm{w}} \in \mathbb{R}^p} \Big\{ \tfrac{1}{2 \eta_t} \| {\bm{w}} - {\bm{w}}_t \|^2 + \ell_{z_t}({\bm{w}}_t) + \nabla \ell_{z_t}({\bm{w}}_t)^\top ({\bm{w}} - {\bm{w}}_t) \Big\}, \\
&{\bm{w}}_{t+1}^{\text{ALI-G}} = \argmin_{{\bm{w}} \in \mathbb{R}^p} \Big\{ \tfrac{1}{2 \eta} \| {\bm{w}} - {\bm{w}}_t \|^2 + \max \left\{\ell_{z_t}({\bm{w}}_t) + \nabla \ell_{z_t}({\bm{w}}_t)^\top ({\bm{w}} - {\bm{w}}_t), 0 \right\} \Big\}. \label{eq:prox_pb}
\end{split}
\end{equation}
\end{restatable}
In other words, at each iteration, ALI-G solves a proximal problem in closed form in a similar way to SGD.
In both cases, the loss function $\ell_{z_t}$ is locally approximated by a first-order Taylor expansion at ${\bm{w}}_t$.
The difference is that ALI-G also exploits the fact that $\ell_{z_t}$ is non-negative.
This allows ALI-G to use a constant value for $\eta$ in the interpolation setting, while the learning-rate $\eta_t$ of SGD needs to be manually decayed.
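Proposition \ref{th:prox_step} can also be checked numerically. The sketch below (ours, with arbitrary illustrative values for ${\bm{w}}_t$, the loss, the gradient and $\eta$) verifies in 1D that the closed-form ALI-G update attains the minimum of its proximal problem:

```python
import numpy as np

def alig_update(w_t, loss, grad, eta):
    """Closed-form ALI-G step with delta = 0 and Omega = R^p."""
    gamma = min(loss / (grad @ grad), eta)
    return w_t - gamma * grad

# 1D instance with arbitrary values: w_t = 2, ell(w_t) = 4,
# grad ell(w_t) = 1.5, maximal learning-rate eta = 0.5.
w_t, loss, grad, eta = np.array([2.0]), 4.0, np.array([1.5]), 0.5
w_plus = alig_update(w_t, loss, grad, eta)       # = 2 - 0.5 * 1.5 = 1.25

# ALI-G proximal objective, evaluated on a fine grid:
# (1 / 2 eta) * (w - w_t)^2 + max(linearization of the loss, 0).
grid = np.linspace(-5.0, 5.0, 200001)
lin = loss + grad[0] * (grid - w_t[0])           # first-order model
prox = 0.5 / eta * (grid - w_t[0]) ** 2 + np.maximum(lin, 0.0)
best = grid[np.argmin(prox)]                     # grid minimizer
```

The grid minimizer of the proximal objective coincides with the closed-form update, up to the grid resolution.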
\section{Related Work}
\paragraph{Interpolation in Deep Learning.}
As mentioned in the introduction, recent works have successfully exploited the interpolation assumption to prove convergence of SGD in the context of deep learning \citep{Ma2018a,Vaswani2019,Zhou2019}.
Such works are complementary to ours in the sense that they provide a convergence analysis of an existing algorithm for deep learning.
In a different line of work, \citet{Liu2019b} propose to exploit interpolation to prove convergence of a new acceleration method for deep learning.
However, their experiments suggest that the method still requires the use of a hand-designed learning-rate schedule.
\paragraph{Adaptive Gradient Methods.}
Similarly to ALI-G, most adaptive gradient methods rely on tuning a single hyper-parameter, thereby providing a more pragmatic alternative to SGD, which requires the full learning-rate schedule to be specified.
While the most popular ones are Adagrad \citep{Duchi2011}, RMSPROP \citep{Tieleman2012}, Adam \citep{Kingma2015} and AMSGrad \citep{Reddi2018}, there have been many other variants
\citep{Zeiler2012,Orabona2015,Defossez2017,Levy2017,Mukkamala2017,Zheng2017,Bernstein2018,Chen2018,Shazeer2018,Zaheer2018,Chen2019,Loshchilov2019,Luo2019}.
However, as pointed out in \citep{Wilson2017}, adaptive gradient methods tend to give poor generalization in supervised learning.
In our experiments, the results provided by ALI-G are significantly better than those obtained by the most popular adaptive gradient methods.
Recently, \citet{Liu2019a} have proposed to \say{rectify} Adam with a learning-rate warmup, which partly bridges the gap in generalization performance between Adam and SGD.
However, their method still requires a learning-rate schedule, and thus remains difficult to tune on new tasks.
\paragraph{Adaptive Learning-Rate Algorithms.}
\citet{Vaswani2019a} show that one can use line search in a stochastic setting for interpolating models while guaranteeing convergence.
This work is complementary to ours, as it provides convergence results under weaker assumptions on the loss function, but it is less practical since it requires up to four hyper-parameters, instead of one for ALI-G.
Less closely related methods, including second-order ones, adaptively compute the learning-rate without using the minimum \citep{Schaul2013,Martens2015,Tan2016,Zhang2017a,Baydin2018,Wu2018,Li2019,Henriques2019}, but do not demonstrate competitive generalization performance against SGD with a well-tuned hand-designed schedule.
\paragraph{$L_4$ Algorithm.}
The $L_4$ algorithm \citep{Rolinek2018} also uses a modified version of the Polyak step-size.
However, the $L_4$ algorithm computes an online estimate of $f_\star$ rather than relying on a fixed value.
This requires three hyper-parameters, which are in practice sensitive to noise and crucial for empirical convergence of the method.
In addition, $L_4$ does not come with convergence guarantees.
In contrast, by utilizing the interpolation property and a maximal learning-rate, our method is able to (i) provide reliable and accurate minimization with only a single hyper-parameter, and (ii) offer guarantees of convergence in the stochastic convex setting.
\paragraph{Frank-Wolfe Methods.}
The proximal interpretation in Proposition \ref{th:prox_step} allows us to draw additional parallels to existing methods.
In particular, the formula of the learning-rate $\gamma_t$ may remind the reader of the Frank-Wolfe algorithm \citep{Frank1956} in some of its variants \citep{Locatello2017}, or other dual methods \citep{Lacoste-Julien2013,Shalev-Shwartz2016}.
This is because such methods solve in closed form the dual of problem (\ref{eq:prox_pb}), and problems in the form of (\ref{eq:prox_pb}) naturally appear in dual coordinate ascent methods \citep{Shalev-Shwartz2016}.
When no regularization is used, ALI-G and Deep Frank-Wolfe (DFW) \citep{Berrada2019} are procedurally identical algorithms.
This is because in such a setting, one iteration of DFW also amounts to solving (\ref{eq:prox_pb}) in closed-form -- more generally, DFW is designed to train deep neural networks by solving proximal linear support vector machine problems approximately.
However, we point out the two fundamental advantages of ALI-G over DFW:
(i) ALI-G can handle arbitrary (lower-bounded) loss functions, while DFW can only use convex piece-wise linear loss functions;
and (ii) as seen previously, ALI-G provides convergence guarantees in the convex setting.
\paragraph{SGD with Polyak's Learning-Rate.}
\citet{Oberman2019} extend the Polyak step-size to rely on a stochastic estimate of the gradient $\nabla \ell_{z_t}({\bm{w}}_t)$ only, instead of the expensive deterministic gradient $\nabla f({\bm{w}}_t)$.
However, they still require evaluating $f({\bm{w}}_t)$, the objective function over the entire training data set, in order to compute the learning-rate, which makes the method impractical.
In addition, since they exploit neither the interpolation setting nor the fact that regularization can be expressed as a constraint, they also require the knowledge of the optimal objective function value $f_\star$.
We also refer the interested reader to the recent analysis of \cite{Loizou2020}, which appeared after this work and provides a set of improved theoretical results.
\paragraph{{\sc aProx} Algorithm.}
\citet{Asi2019} have recently introduced the {\sc aProx} algorithm, a family of proximal stochastic optimization algorithms for convex problems.
Notably, the {\sc aProx} \say{truncated model} version is similar to ALI-G.
However, there are four clear advantages of our work over \citep{Asi2019} in the interpolation setting, in particular for training neural networks.
First, our work is the first to empirically demonstrate the applicability and usefulness of the algorithm on varied modern deep learning tasks -- most of our experiments use several orders of magnitude more data and model parameters than the small-scale convex problems of \citep{Asi2019}.
Second, our analysis and insights allow us to make more aggressive choices of learning rate than \citep{Asi2019}.
Indeed, \citep{Asi2019} assume that the maximal learning-rate is exponentially decaying, even in the interpolating convex setting.
In contrast, by avoiding the need for an exponential decay, the learning-rate of ALI-G requires only one hyper-parameter instead of two for {\sc aProx}.
Third, our analysis takes into account the interpolation tolerance $\varepsilon \geq 0$ rather than unrealistically assuming the perfect case $\varepsilon = 0$ (that would require infinite weights when using the cross-entropy loss for instance).
Fourth, our analysis proves fast convergence in function space rather than iterate space.
\section{Experiments}
We empirically compare ALI-G to the optimization algorithms most commonly used in deep learning.
Our experiments span a variety of architectures and tasks: (i) learning a differentiable neural computer;
(ii) training wide residual networks on SVHN;
(iii) training a Bi-LSTM on the Stanford Natural Language Inference data set;
and (iv) training wide residual networks and densely connected networks on the CIFAR data sets.
Note that the tasks of training wide residual networks on SVHN and CIFAR-100 are part of the DeepOBS benchmark \citep{Schneider2019}, which aims at standardizing baselines for deep learning optimizers.
In particular, these tasks are among the most difficult ones of the benchmark because the SGD baseline benefits from a manual schedule for the learning rate.
Despite this, our set of experiments demonstrates that ALI-G obtains competitive performance with SGD.
In addition, ALI-G significantly outperforms adaptive gradient methods.
The code to reproduce our results is publicly available\footnote{\url{https://github.com/oval-group/ali-g}}.
In the TensorFlow \citep{Abadi2015} experiment, we use the official and publicly available implementation of $L_4$\footnote{\url{https://github.com/martius-lab/l4-optimizer}}.
In the PyTorch \citep{Paszke2017} experiments, we use our implementation of $L_4$, which we unit-test against the official TensorFlow implementation.
In addition, we employ the official implementation of DFW\footnote{\url{https://github.com/oval-group/dfw}} and we re-use their code for the experiments on SNLI and CIFAR.
All experiments are performed either on a 12-core CPU (differentiable neural computer), on a single GPU (SVHN, SNLI, CIFAR) or on up to 4 GPUs (ImageNet).
We emphasize that all methods have approximately the same cost per iteration.
Consequently, faster convergence in terms of number of iterations or epochs translates into faster convergence in terms of wall-clock time.
\subsection{Differentiable Neural Computers}
\paragraph{Setting.}
The Differentiable Neural Computer (DNC) \citep{Graves2016} is a recurrent neural network that aims at performing computing tasks by learning from examples rather than by executing an explicit program.
In this case, the DNC learns to repeatedly copy a fixed size string given as input.
Although this learning task is relatively simple, the complex architecture of the DNC makes it an interesting benchmark problem for optimization algorithms.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\linewidth]{include/dnc_plot.pdf}
\caption{
\em Final objective function when training a Differentiable Neural Computer for $10$k steps (lower is better).
The intensity of each cell is log-proportional to the value of the objective function (darker is better).
ALI-G obtains good performance for a very large range of $\eta$ ($10^{-1} \leq \eta \leq 10^6$).
}
\label{fig:dnc}
\end{figure}
\paragraph{Methods.}
We use the official and publicly available implementation of DNC\footnote{\url{https://github.com/deepmind/dnc}}.
We vary the initial learning rate as powers of ten between $10^{-4}$ and $10^{4}$ for each method except for $L_4$Adam and $L_4$Mom.
For $L_4$Adam and $L_4$Mom, since the main hyper-parameter $\alpha$ is designed to lie in $(0, 1)$, we vary it between $0.05$ and $0.95$ with a step of $0.1$.
The gradient norm is clipped for all methods except for ALI-G, $L_4$Adam and $L_4$Mom (as recommended by \citep{Rolinek2018}).
\paragraph{Results.} We present the results in Figure \ref{fig:dnc}.
ALI-G provides accurate optimization for any $\eta$ within $[10^{-1}, 10^6]$, and is among the best performing methods, reaching an objective function value of $4 \times 10^{-8}$.
On this task, RMSProp, $L_4$Adam and $L_4$Mom also provide accurate and robust optimization.
In contrast to ALI-G and the $L_4$ methods, the most commonly used algorithms such as SGD, SGD with momentum and Adam are very sensitive to their main learning-rate hyper-parameter.
Note that the difference between well-performing methods is not significant here because these methods reach the numerical precision limit of single-precision floating-point numbers.
\subsection{Wide Residual Networks on SVHN}
\paragraph{Setting.}
The SVHN data set contains 73k training samples, 26k testing samples and 531k additional easier samples.
From the 73k difficult training examples, we select 6k samples for validation; we use all remaining (both difficult and easy) examples for training, for a total of 598k samples.
We train a wide residual network 16-4 following \citep{Zagoruyko2016}.
\paragraph{Method.}
For SGD, we use the manual schedule for the learning rate of \citep{Zagoruyko2016}.
For $L_4$Adam and $L_4$Mom, we cross-validate the main learning-rate hyper-parameter $\alpha$ to be in $\{0.0015, 0.015, 0.15\}$ ($0.15$ is the value recommended by \citep{Rolinek2018}).
For other methods, the learning rate hyper-parameter is tuned as a power of 10.
The $\ell_2$ regularization is cross-validated in $\{0.0001, 0.0005\}$ for all methods but ALI-G.
For ALI-G, the regularization is expressed as a constraint on the $\ell_2$-norm of the parameters, and its maximal value is set to $50$.
SGD, ALI-G and BPGrad use a Nesterov momentum of 0.9.
All methods use a dropout rate of 0.4 and a fixed budget of 160 epochs, following \citep{Zagoruyko2016}.
\begin{table}[ht]
\centering
\begin{tabular}{lc|lc}
\toprule
\multicolumn{4}{c}{Test Accuracy on SVHN (\%)} \\
\midrule
Adagrad & 98.0 & Adam & 97.9 \\
AMSGrad & 97.9 & BPGrad & 98.1 \\
DFW & 98.1 &$L_4$Adam &{\bf 98.2} \\
$L_4$Mom & 19.6 & ALI-G & 98.1 \\
\cmidrule(lr){1-2} \cmidrule(lr){3-4}
{\color{red} SGD} &98.3 &{\color{red} SGD$^\dagger$} & 98.4 \\
\bottomrule
\end{tabular}
\caption{\em
In red, SGD benefits from a hand-designed schedule for its learning-rate.
In black, adaptive methods, including ALI-G, have a single hyper-parameter for their learning-rate.
SGD$^\dagger$ refers to the performance reported by \citep{Zagoruyko2016}.
}
\label{tab:svhn}
\end{table}
\paragraph{Results.}
The results are presented in Table \ref{tab:svhn}.
On this relatively easy task, most methods achieve about 98\% test accuracy.
Despite the cross-validation, $L_4$Mom does not converge on this task.
Even though SGD benefits from a hand-designed schedule, ALI-G and other adaptive methods obtain performance close to it.
\subsection{Bi-LSTM on SNLI}
\paragraph{Setting.}
We train a Bi-LSTM of 47M parameters on the Stanford Natural Language Inference (SNLI) data set \citep{Bowman2015}.
The SNLI data set consists of 570k pairs of sentences, with each pair labeled as entailment, neutral or contradiction.
This large scale data set is commonly used as a pre-training corpus for transfer learning to many other natural language tasks where labeled data is scarcer \citep{Conneau2017} -- much like ImageNet is used for pre-training in computer vision.
We follow the protocol of \citep{Berrada2019}; we also re-use their results for the baselines.
\paragraph{Method.}
For $L_4$Adam and $L_4$Mom, the main hyper-parameter $\alpha$ is cross-validated in $\{0.015, 0.15\}$ -- compared to the recommended value of 0.15, this helped convergence and considerably improved performance.
The SGD algorithm benefits from a hand-designed schedule, where the learning-rate is decreased by a factor of 5 when the validation accuracy does not improve.
Other methods use adaptive learning-rates and do not require such a schedule.
The value of the main hyper-parameter $\eta$ is cross-validated as a power of ten for the ALI-G algorithm and for previously reported adaptive methods.
Following the implementation by \citep{Conneau2017}, no $\ell_2$ regularization is used.
The algorithms are evaluated with the Cross-Entropy (CE) loss and the multi-class hinge loss (SVM), except for DFW which is designed for use with an SVM loss only.
For all optimization algorithms, the model is trained for 20 epochs, following \citep{Conneau2017}.
\begin{table}[h]
\centering
\begin{tabular}{lcc|lcc}
\toprule
\multicolumn{6}{c}{Test Accuracy on SNLI (\%)} \\
\midrule
& CE & SVM & & CE & SVM \\
\cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){5-5} \cmidrule(lr){6-6}
Adagrad$^*$ &83.8 &84.6 &Adam$^*$ &84.5 &85.0 \\
AMSGrad$^*$ &84.2 &85.1 &BPGrad$^*$ &83.6 &84.2 \\
DFW$^*$ & - &{\bf 85.2} & $L_4$Adam &83.3 &82.5 \\
$L_4$Mom &83.7 &83.2 & {\color{blue}ALI-G$^\infty$} &84.6 &84.7 \\
ALI-G & {\bf 84.8} &{\bf 85.2} & & \\
\cmidrule(lr){1-3} \cmidrule(lr){4-6}
{\color{red} SGD$^*$} &84.7 &85.2 &{\color{red} SGD$^\dagger$} &84.5 & - \\
\bottomrule
\end{tabular}
\caption{\em
In red, SGD benefits from a hand-designed schedule for its learning-rate.
In black, adaptive methods have a single hyper-parameter for their learning-rate.
In blue, {\color{blue} ALI-G$^\infty$} does not have any hyper-parameter for its learning-rate.
With an SVM loss, DFW and ALI-G are procedurally identical algorithms
-- but in contrast to DFW, ALI-G can also employ the CE loss.
Methods in the format $X^*$ re-use results from \citep{Berrada2019}.
SGD$^\dagger$ is the result from \citep{Conneau2017}.
}
\label{tab:snli}
\end{table}
\paragraph{Results.} We present the results in Table \ref{tab:snli}.
ALI-G$^\infty$ is the only method that requires no hyper-parameter for its learning-rate.
Despite this, and the fact that SGD employs a learning-rate schedule that has been hand-designed for good validation performance, ALI-G$^\infty$ is still able to obtain results that are competitive with SGD.
Moreover, ALI-G, which requires a single hyper-parameter for the learning-rate, outperforms all other methods for both the SVM and the CE loss functions.
\subsection{Wide Residual Networks and Densely Connected Networks on CIFAR}
\paragraph{Setting.}
We follow the methodology of \citep{Berrada2019}, and we reproduce their results.
We test two architectures: a Wide Residual Network (WRN) 40-4 \citep{Zagoruyko2016} and a bottleneck DenseNet (DN) 40-40 \citep{Huang2017a}.
We use 45k samples for training and 5k for validation.
The images are centered and normalized per channel.
We apply standard data augmentation with random horizontal flipping and random crops.
AMSGrad was selected in \citep{Berrada2019} because it was the best adaptive method on similar tasks, outperforming in particular Adam and Adagrad.
In addition to the baselines from \citep{Berrada2019}, we also provide the performance of $L_4$Adam, $L_4$Mom, AdamW \citep{Loshchilov2019} and Yogi \citep{Zaheer2018}.
\paragraph{Method.}
All optimization methods employ the cross-entropy loss, except for the DFW algorithm, which is designed to use an SVM loss.
For DN and WRN respectively, SGD uses the manual learning rate schedules from \citep{Huang2017a} and \citep{Zagoruyko2016}.
Following \citep{Berrada2019}, the batch-size is cross-validated in $\{64, 128, 256\}$ for the DN architecture, and $\{128, 256, 512\}$ for the WRN architecture.
For $L_4$Adam and $L_4$Mom, the learning-rate hyper-parameter $\alpha$ is cross-validated in $\{0.015, 0.15\}$.
For AMSGrad, AdamW, Yogi, DFW and ALI-G, the learning-rate hyper-parameter $\eta$ is cross-validated as a power of 10 (in practice $\eta \in \{0.1, 1\}$ for ALI-G).
SGD, DFW and ALI-G use a Nesterov momentum of 0.9.
Following \citep{Berrada2019}, for all methods but ALI-G and AdamW, the $\ell_2$ regularization is cross-validated in $\{0.0001, 0.0005\}$ on the WRN architecture, and is set to $0.0001$ for the DN architecture.
For AdamW, the weight-decay is cross-validated as a power of 10.
For ALI-G, $\ell_2$ regularization is expressed as a constraint on the norm of the vector of parameters; its maximal value is set to $100$ for the WRN models, $80$ for DN on CIFAR-10 and $75$ for DN on CIFAR-100.
For all optimization algorithms, the WRN model is trained for 200 epochs and the DN model for 300 epochs, following respectively \citep{Zagoruyko2016} and \citep{Huang2017a}.
\paragraph{Results.}
We present the results in Table \ref{tab:cifar}.
In this setting again, ALI-G obtains competitive performance with manually decayed SGD.
ALI-G largely outperforms AMSGrad, AdamW and Yogi.
\begin{table}[ht]
\centering
\begin{tabular}{lcccc}
\toprule
\multicolumn{5}{c}{Test Accuracy on CIFAR (\%)} \\
\midrule
&\multicolumn{2}{c}{CIFAR-10} &\multicolumn{2}{c}{CIFAR-100} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& WRN & DN & WRN & DN \\
\cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
AMSGrad & 90.8 & 91.7 & 68.7 & 69.4 \\
AdamW & 92.1 & 92.6 & 69.6 & 69.5 \\
Yogi & 91.2 & 92.1 & 68.7 & 69.6 \\
DFW & 94.2 & 94.6 & {\bf 76.0} & 73.2 \\
$L_4$Adam & 90.5 & 90.8 & 61.7 & 60.5 \\
$L_4$Mom & 91.6 & 91.9 & 61.4 & 62.6 \\
ALI-G & {\bf 95.2} & {\bf 95.0} & 75.8 & {\bf 76.3} \\
\midrule
{\color{red} SGD} & 95.3 & 95.1 & 77.8 & 76.3 \\
{\color{red} SGD$^\dagger$} & 95.4 & - & 78.8 & - \\
\bottomrule
\end{tabular}
\caption{\em
In red, SGD benefits from a hand-designed schedule for its learning-rate.
In black, adaptive methods, including ALI-G, have a single hyper-parameter for their learning-rate.
$SGD^\dagger$ refers to the result from \citep{Zagoruyko2016}.
Each reported result is an average over three independent runs;
the standard deviations are reported in Appendix (they are at most $0.3$ for ALI-G and SGD).
}
\label{tab:cifar}
\end{table}
\subsection{Comparing Training Performance on CIFAR-100}
In this section, we empirically assess the performance of ALI-G and its competitors in terms of training objective on CIFAR-100.
In order to have comparable objective functions, the $\ell_2$ regularization is deactivated.
The learning-rate is selected as a power of ten for best final objective value, and the batch-size is set to its default value.
For clarity, we only display the performance of SGD, Adam, Adagrad and ALI-G (DFW does not support the cross-entropy loss).
The $L_4$ methods diverge in this setting.
Here SGD uses a constant learning-rate to emphasize the need for adaptivity.
Therefore all methods use one hyper-parameter for their learning-rate.
All methods use a fixed budget of 200 epochs for WRN-CIFAR-100 and 300 epochs for DN-CIFAR-100.
As can be seen, ALI-G provides better training performance than the baseline algorithms on both tasks.
\begin{figure}[h]
\centering
\footnotesize
\input{include/cifar_train_wrn_cifar100_arxiv.tex}
\input{include/cifar_train_dn_cifar100_arxiv.tex}
\caption{\em
Objective function over the epochs on CIFAR-100 (smoothed with a moving average over 5 epochs).
ALI-G reaches a value that is an order of magnitude better than the baselines.
}
\label{fig:cifar100_training}
\vspace{-10pt}
\end{figure}
\subsection{Training at Large Scale}
We demonstrate the scalability of ALI-G by training a ResNet-18 \citep{He2016} on the ImageNet data set.
In order to satisfy the interpolation assumption, we employ a loss function tailored for top-5 classification \cite{Lapin2016}, and we do not use data augmentation.
Our focus here is on the training objective and accuracy.
ALI-G uses the following training setup: a batch-size of 1024 split over 4 GPUs, a maximal $\ell_2$ norm of 400 for ${\bm{w}}$, a maximal learning-rate of 10 and no momentum.
SGD uses the state-of-the-art hyper-parameters and learning-rate schedule from \cite{He2016}.
As can be seen in Figure \ref{fig:imagenet_training}, ALI-G reaches 99\% top-5 accuracy in 12 epochs (faster than SGD), and minimizes the objective function as well as SGD with its custom schedule.
\begin{figure}[H]
\centering
\footnotesize
\input{include/imagenet_obj_arxiv.tex}
\hspace{20pt}
\input{include/imagenet_acc_arxiv.tex}
\caption{\em
Training a ResNet-18 on ImageNet.
The final performance of ALI-G is as good as that of SGD, even though SGD benefits from a custom learning-rate schedule.
In addition, ALI-G reaches a high training accuracy faster than SGD.
}
\label{fig:imagenet_training}
\end{figure}
\section{Discussion}
We have introduced ALI-G, an optimization algorithm that automatically adapts the learning-rate in the interpolation setting.
ALI-G provides convergence guarantees in the stochastic setting, including for a class of non-convex problems.
By using the same descent direction as SGD, it offers comparable generalization performance while requiring significantly less tuning.
In future work, it would be interesting to extend ALI-G to the non-interpolating setting by adapting the minimum $f_\star$ online while requiring few hyper-parameters.
\subsection*{Acknowledgements}
This work was supported by the EPSRC grants AIMS CDT EP/L015987/1, Seebibyte EP/M013774/1, EP/P020658/1 and TU/B/000048, and by YouGov.
We also thank the Nvidia Corporation for the GPU donation.
\section*{Abstract}
\addcontentsline{toc}{section}{Abstract}
In modern supervised learning, many deep neural networks are able to interpolate the data: the empirical loss can be driven to near zero on all samples simultaneously.
In this work, we explicitly exploit this interpolation property for the design of a new optimization algorithm for deep learning, which we term Adaptive Learning-rates for Interpolation with Gradients (ALI-G).
ALI-G retains the two main advantages of Stochastic Gradient Descent (SGD), which are (i) a low computational cost per iteration and (ii) good generalization performance in practice.
At each iteration, ALI-G exploits the interpolation property to compute an adaptive learning-rate in closed form.
In addition, ALI-G clips the learning-rate to a maximal value, which we prove to be helpful for non-convex problems.
Crucially, in contrast to the learning-rate of SGD, the maximal learning-rate of ALI-G does not require a decay schedule, which makes it considerably easier to tune.
We provide convergence guarantees of ALI-G in various stochastic settings.
Notably, we tackle the realistic case where the interpolation property is satisfied up to some tolerance.
We provide experiments on a variety of architectures and tasks:
(i) learning a differentiable neural computer;
(ii) training a wide residual network on the SVHN data set;
(iii) training a Bi-LSTM on the SNLI data set;
and (iv) training wide residual networks and densely connected networks on the CIFAR data sets.
ALI-G produces state-of-the-art results among adaptive methods, and even yields comparable performance with SGD, which requires manually tuned learning-rate schedules.
Furthermore, ALI-G is simple to implement in any standard deep learning framework and can be used as a drop-in replacement in existing code.
\section{Introduction}
Training a deep neural network is a challenging optimization problem: it involves minimizing the average of many high-dimensional non-convex functions.
In practice, the main algorithms of choice are Stochastic Gradient Descent (SGD) \citep{Robbins1951} and adaptive gradient methods such as AdaGrad \citep{Duchi2011} or Adam \citep{Kingma2015}.
It has been observed that SGD tends to provide better generalization performance than adaptive gradient methods \cite{Wilson2017}.
However, the downside of SGD is that it requires the manual design of a learning-rate schedule, which is widely regarded as an onerous and time consuming task.
In this work, we alleviate this issue with the design of an adaptive learning-rate algorithm that needs minimal tuning for good performance.
Indeed, we postulate that by using the same descent direction as SGD while automatically adapting its learning-rate, the resulting algorithm can offer similar generalization performance while requiring considerably less tuning.
In this work, we build on the following two ideas.
First, an adaptive learning-rate can be computed for the non-stochastic gradient direction when the minimum value of the objective function is known \citep{Polyak1969,Shor1985,Brannlund1995,Nedic2001, Nedic2001a}.
And second, one such minimum value is usually approximately known for interpolating models: for instance, it is close to zero for a model trained with the cross-entropy loss.
By carefully combining these two ideas, we create a stochastic algorithm that (i) provably converges fast in convex or Restricted Secant Inequality (RSI) settings, and (ii) obtains state-of-the-art empirical results with neural networks.
We refer to this algorithm as Adaptive Learning-rates for Interpolation with Gradients (ALI-G).
Procedurally, ALI-G is close to many existing algorithms, such as Deep Frank-Wolfe \citep{Berrada2019}, {\sc aProx} \citep{Asi2019} and $L_4$ \citep{Rolinek2018}.
And yet uniquely, thanks to its careful design and analysis, the learning-rate of ALI-G effectively requires a single hyper-parameter that does not need to be decayed.
Since ALI-G is easy to implement in any deep learning framework, we believe that it can prove to be a practical and reliable optimization tool for deep learning.
\paragraph{Contributions.}
We summarize the contributions of this work as follows: \\
- We design an adaptive learning-rate algorithm that uses a single hyper-parameter and does not need any decay schedule.
In contrast, the closely related {\sc aProx} \citep{Asi2019} and $L_4$ \citep{Rolinek2018} use respectively two and four hyper-parameters for their learning-rate.
\\
- We provide convergence rates of ALI-G in various stochastic convex settings.
Importantly, our theoretical results take into account the error in the estimate of the minimum objective value.
To the best of our knowledge, our work is the first to establish convergence rates for interpolation with approximate estimates. \\
- We prove that using a maximal learning-rate helps with convergence for a class of non-convex problems. \\
- We demonstrate state-of-the-art results for ALI-G on learning a differentiable neural computer; training variants of residual networks on the SVHN and CIFAR data sets; and training a Bi-LSTM on the Stanford Natural Language Inference data set.
\section{The Algorithm}
\subsection{Problem Setting}
\label{sec:pb_setting}
\paragraph{Loss Function.}
We consider a supervised learning task where the model is parameterized by ${\bm{w}} \in \mathbb{R}^p$.
Usually, the objective function can be expressed as an expectation over $z \in \mathcal{Z}$, a random variable indexing the samples of the training set:
\begin{equation}
f({\bm{w}}) \triangleq \mathbb{E}_{z \in \mathcal{Z}}[\ell_z({\bm{w}})],
\end{equation}
where each $\ell_z$ is the loss function associated with the sample $z$.
We assume that each $\ell_z$ is non-negative, which is the case for the large majority of loss functions used in machine learning.
For instance, suppose that the model is a deep neural network with weights ${\bm{w}}$ performing classification.
Then for each sample $z$, $\ell_z({\bm{w}})$ can represent the cross-entropy loss, which is always non-negative.
Other non-negative loss functions include the structured or multi-class hinge loss, and the $\ell_1$ or $\ell_2$ loss functions for regression.
\paragraph{Regularization.}
It is often desirable to employ a regularization function $\phi$ in order to promote generalization.
In this work, we incorporate such regularization as a constraint on the feasible domain: $\Omega = \left\{ {\bm{w}} \in \mathbb{R}^p: \ \phi({\bm{w}}) \leq r \right\}$ for some value of $r$.
In the deep learning setting, this will allow us to assume that the objective function can be driven close to zero without unrealistic assumptions about the regularization.
Our framework can handle any constraint set $\Omega$ on which Euclidean projections are computationally efficient.
This includes the feasible set induced by $\ell_2$ regularization: $\Omega = \left\{ {\bm{w}} \in \mathbb{R}^p: \ \| {\bm{w}} \|_2^2 \leq r \right\}$, for which the projection is given by a simple rescaling of ${\bm{w}}$.
Finally, note that if we do not wish to use any regularization, we define $\Omega = \mathbb{R}^p$ and the corresponding projection is the identity.
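As a concrete illustration, the projection for $\ell_2$ regularization reduces to a simple rescaling, which can be sketched in a few lines of NumPy (the function name \texttt{project\_l2\_ball} is ours, for illustration only):

```python
import numpy as np

def project_l2_ball(w, r):
    # Euclidean projection onto Omega = {w : ||w||_2^2 <= r}.
    # If w is already feasible it is returned unchanged; otherwise
    # it is rescaled onto the boundary of the ball.
    sq_norm = float(np.dot(w, w))
    if sq_norm <= r:
        return w
    return w * np.sqrt(r / sq_norm)
```

For instance, projecting $(3, 4)$ onto the ball of squared radius $4$ rescales it by $\sqrt{4}/5$.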
\paragraph{Problem Formulation.}
The learning task can be expressed as the problem $(\mathcal{P})$ of finding a feasible vector of parameters $\mathbf{w_\star} \in \Omega$ that minimizes $f$:
\begin{equation} \tag{$\mathcal{P}$} \label{eq:main_problem}
\mathbf{w_\star} \in \argmin\limits_{{\bm{w}} \in \Omega} f({\bm{w}}).
\end{equation}
Also note that $f_\star$ refers to the minimum value of $f$ over $\Omega$: $f_\star \triangleq \min_{{\bm{w}} \in \Omega} f({\bm{w}})$.
\paragraph{Interpolation.}
We say that the problem (\ref{eq:main_problem}) satisfies the interpolation assumption if there exists a solution $\mathbf{w_\star}$ that simultaneously minimizes all individual loss functions:
\begin{equation}
\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) = 0.
\label{eq:def_interpolation}
\end{equation}
The condition (\ref{eq:def_interpolation}) can be equivalently expressed as $f_\star = 0$.
We also point out that in some cases, it can be more realistic to relax (\ref{eq:def_interpolation}) to $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$ for a small positive $\varepsilon$.
\subsection{The Polyak Step-Size}
\label{sec:polyak_step_size}
Before outlining the ALI-G algorithm, we begin with a brief description of the Polyak step-size, from which ALI-G draws some fundamental ideas.
\paragraph{Setting.}
We assume that $f_\star$ is known and we use non-stochastic updates: at each iteration, the full objective $f$ and its derivative are evaluated.
We denote by $\nabla f({\bm{w}})$ the first-order derivative of $f$ at ${\bm{w}}$ (e.g. $\nabla f({\bm{w}})$ can be a sub-gradient or the gradient).
In addition, $\| \cdot \|$ is the standard Euclidean norm in $\mathbb{R}^p$, and $\Pi_{\Omega}({\bm{w}})$ is the Euclidean projection of the vector ${\bm{w}} \in \mathbb{R}^p$ on the set $\Omega$.
\paragraph{Polyak Step-Size.}
At time-step $t$, using the Polyak step-size \citep{Polyak1969,Shor1985,Brannlund1995,Nedic2001,Nedic2001a} yields the following update:
\begin{equation} \label{eq:polyak_step}
{\bm{w}}_{t+1} = \Pi_\Omega\left({\bm{w}}_{t} - \gamma_t \nabla f({\bm{w}}_t) \right), \text{ where } \gamma_t \triangleq \tfrac{f({\bm{w}}_t) - f_\star}{\|\nabla f({\bm{w}}_t) \|^2},
\end{equation}
where we loosely define $\frac{0}{0}=0$ for simplicity.
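For concreteness, the non-stochastic update of equation (\ref{eq:polyak_step}) in the unconstrained case $\Omega = \mathbb{R}^p$ can be sketched as follows; this is an illustrative NumPy implementation with names of our choosing, not released code:

```python
import numpy as np

def polyak_gd(f, grad_f, f_star, w0, num_steps):
    # Gradient descent with the Polyak step-size:
    # gamma_t = (f(w_t) - f_star) / ||grad f(w_t)||^2, with 0/0 := 0.
    w = np.asarray(w0, dtype=float)
    for _ in range(num_steps):
        g = grad_f(w)
        sq_norm = float(np.dot(g, g))
        if sq_norm == 0.0:
            break  # stationary point: use the convention 0/0 = 0
        gamma = (f(w) - f_star) / sq_norm
        w = w - gamma * g
    return w
```

On $f({\bm{w}}) = \|{\bm{w}}\|^2$ with $f_\star = 0$, the step-size is constant at $1/4$ and the iterate is exactly halved at every step.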
\begin{figure}[H]
\centering
\scriptsize
\input{include/plot_step_simple.tex}
\caption{\em Illustration of the Polyak step-size in 1D. In this case, and further assuming that $f_\star = 0$, the algorithm coincides with the Newton-Raphson method for finding roots of a function.}
\label{fig:nr_step_simple}
\end{figure}
\paragraph{Interpretation.}
It can be shown that ${\bm{w}}_{t+1}$ lies on the intersection between the linearization of $f$ at ${\bm{w}}_t$ and the horizontal plane $z=f_\star$ (see Figure \ref{fig:nr_step_simple}, more details in Proposition 1 in the supplementary material).
Note that since $f_\star$ is the minimum of $f$, the Polyak step-size $\gamma_t$ is necessarily non-negative.
\paragraph{Limitations.}
Equation (\ref{eq:polyak_step}) has two major shortcomings that prevent its applicability in a machine learning setting.
First, each update requires a full evaluation of $f$ and its derivative.
Stochastic extensions have been proposed in \citep{Nedic2001,Nedic2001a}, but they still require frequent evaluations of $f$.
This is expensive in the large data setting, and even computationally infeasible when using massive data augmentation.
Second, when applying this method to the non-convex setting of deep neural networks, the method sometimes fails to converge.
Therefore we would like to design an extension of the Polyak step-size that (i) is inexpensive to compute in a stochastic setting (e.g. with a computational cost that is independent of the total number of training samples), and (ii) converges in practice when used with deep neural networks.
The next section introduces the ALI-G algorithm, which achieves these two goals in the interpolation setting.
\subsection{The ALI-G Algorithm}
We now present the ALI-G algorithm.
For this, we suppose that we are in an interpolation setting: the model is assumed to be able to drive the loss function to near zero on all samples simultaneously.
\paragraph{Algorithm.} The main steps of the ALI-G algorithm are provided in Algorithm \ref{algo:prox}. ALI-G iterates over three operations until convergence. First, it computes a stochastic approximation of the learning objective and its derivative (line 3). Second, it computes the step-size $\gamma_t$ based on this stochastic information (line 4). Third, it updates the parameters by moving in the negative derivative direction by an amount specified by the step-size, and projecting the resulting vector onto the feasible region (line 5).
\begin{algorithm}[ht]
\caption{\em The ALI-G algorithm}\label{algo:prox}
\begin{algorithmic}[1]
\REQUIRE maximal learning-rate $\eta$, initial feasible ${\bm{w}}_0 \in \Omega$, small constant $\delta > 0$
\STATE $t=0$
\WHILE {not converged}
\STATE Get $\ell_{z_t}({\bm{w}}_t)$, $\nabla \ell_{z_t}({\bm{w}}_t)$ with $z_t$ drawn i.i.d.
\STATE $\gamma_t = \min \left\{ \frac{\ell_{z_t}({\bm{w}}_t)}{\| \nabla \ell_{z_t}({\bm{w}}_t) \|^2 + \delta}, \eta \right\}$
\STATE ${\bm{w}}_{t+1} = \Pi_{\Omega}\left({\bm{w}}_t - \gamma_t \nabla \ell_{z_t}({\bm{w}}_t) \right)$
\STATE $t=t+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
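A single iteration of Algorithm \ref{algo:prox} (lines 3--5) amounts to only a few array operations. The following NumPy sketch is purely illustrative (the function name and signature are ours; the official implementation is the one released with the paper):

```python
import numpy as np

def alig_step(w, loss, grad, eta, delta=1e-5, project=lambda w: w):
    # One ALI-G iteration: a Polyak-like step-size computed from the
    # stochastic loss and gradient, clipped at the maximal
    # learning-rate eta, followed by a projection onto Omega.
    gamma = min(loss / (float(np.dot(grad, grad)) + delta), eta)
    return project(w - gamma * grad)
```

Note that when the loss is small relative to the squared gradient norm, $\gamma_t$ is automatically small; the clipping at $\eta$ only becomes active far from a minimizer.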
\paragraph{Comparison with the Polyak Step-Size.} There are three main differences to the update in equation (\ref{eq:polyak_step}).
First, each update only uses the loss $\ell_{z_t}$ and its derivative rather than the full objective $f$ and its derivative. Second, the learning-rate $\gamma_t$ is clipped to $\eta$, the maximal learning-rate hyper-parameter.
We emphasize that $\eta$ remains constant throughout the iterations; it is therefore a single hyper-parameter and, unlike the learning-rate of SGD, does not need a schedule.
Third, the minimum $f_\star$ has been replaced by the lower-bound of $0$.
All these modifications will be justified in the next section.
\paragraph{The ALI-G$^\infty$ Variant.}
When ALI-G uses no maximal learning-rate, we refer to the algorithm as ALI-G$^\infty$, since it is equivalent to using an infinite maximal learning-rate.
Note that ALI-G$^\infty$ requires no hyper-parameter for its step-size.
\paragraph{Momentum.}
In some of our experiments, we accelerate ALI-G with Nesterov momentum.
The update step at line 5 of Algorithm \ref{algo:prox} is then replaced by (i) a velocity update ${\bm{v}}_{t} = \mu {\bm{v}}_{t-1} - \gamma_t \nabla \ell_{z_t}({\bm{w}}_t)$ and (ii) a parameter update ${\bm{w}}_{t+1} = \Pi_{\Omega}\left({\bm{w}}_t + \mu {\bm{v}}_{t} \right)$.
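The two-step momentum update above translates directly into code; the sketch below mirrors those equations verbatim (the function name is ours, chosen for illustration):

```python
import numpy as np

def alig_momentum_step(w, v, grad, gamma, mu, project=lambda w: w):
    # (i) velocity update, (ii) projected parameter update,
    # exactly as in the two-step description above.
    v = mu * v - gamma * grad
    w = project(w + mu * v)
    return w, v
```
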
\section{Justification and Analysis}
\subsection{Stochasticity}
By definition, the interpolation setting gives $f_\star = 0$, which we used in ALI-G to simplify the formula of the learning-rate $\gamma_t$.
More subtly, the interpolation property also allows the updates to rely on the stochastic estimate $\ell_{z_t}({\bm{w}}_t)$ rather than the exact but expensive $f({\bm{w}}_t)$.
Intuitively, this is possible because in the interpolation setting, we know the minimum of the loss function for each individual training sample.
Recall that ALI-G$^\infty$ is the variant of ALI-G that uses no maximal learning-rate.
The following result formalizes the convergence guarantee of ALI-G$^\infty$ in the stochastic convex setting.
\begin{theorem}[Convex and Lipschitz] \label{th:alig_cvx}
We assume that $\Omega$ is a convex set, and that for every $z \in \mathcal{Z}$, $\ell_z$ is convex and $C$-Lipschitz.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}), and assume that the interpolation property is approximately satisfied: $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) \leq \varepsilon$, for some interpolation tolerance $\varepsilon \geq 0$.
Then ALI-G$^\infty$ applied to $f$ satisfies:
\begin{equation}
\mathbb{E} \left[f\left(\tfrac{1}{T+ 1} \sum\limits_{t=0}^T {\bm{w}}_t \right) \right]
\leq \tfrac{ \| {\bm{w}}_{0} - \mathbf{w_\star} \| \sqrt{C^2 + \delta}}{\sqrt{T + 1}} + \varepsilon \sqrt{\left(\tfrac{C^2}{\delta} + 1 \right)}.
\end{equation}
\end{theorem}
In other words, by assuming interpolation, ALI-G provably converges while requiring only $\ell_{z_t}({\bm{w}}_t)$ and $\nabla \ell_{z_t}({\bm{w}}_t)$ (stochastic estimation per sample) to compute its learning-rate.
In contrast, the Polyak step-size would require $f({\bm{w}}_t)$ and $\nabla f({\bm{w}}_t)$ to compute the learning-rate (deterministic computation over all training samples).
This is because the Polyak step-size exploits the knowledge of $f_\star$ only, which is weaker information than knowing the minimum of all individual loss functions $\ell_z$ (as ALI-G does in the interpolation setting).
This difference induces a major computational advantage of ALI-G over the usual Polyak step-size.
We emphasize that in Theorem \ref{th:alig_cvx}, our careful analysis explicitly shows the dependency of the convergence result on the interpolation tolerance $\varepsilon$.
It is reassuring to note that convergence is exact when the interpolation property is exactly satisfied ($\varepsilon = 0$).
In the supplementary material, we also establish convergence rates of $\mathcal{O}(1 / T)$ for smooth convex functions, and $\mathcal{O}(\exp(-\alpha T / 8 \beta ))$ for $\alpha$-strongly convex and $\beta$-smooth functions.
Similar results can be proved when using a maximal learning-rate $\eta$: the convergence speed then remains unchanged provided that $\eta$ is large enough, and it is lowered when $\eta$ is small.
We refer the interested reader to the supplementary for the formal results and their proofs.
\paragraph{Interpolation and Gradient Variance.}
In the literature, most convergence results of SGD depend on the variance of the gradient, which we denote by $\upsilon$ here.
The reader may have noticed that our convergence results depend only on the interpolation tolerance $\varepsilon$ rather than on $\upsilon$.
We briefly compare how these two quantities help convergence in their own distinct ways.
The gradient variance $\upsilon$ globally characterizes how much the gradient direction can differ across individual samples $z$, at any point ${\bm{w}}$ of the parameter space.
In particular, a low value for $\upsilon$ implies that the loss functions $\ell_z$ agree on the steepest descent direction at any point of the trajectory ${\bm{w}}_0, ..., {\bm{w}}_T$.
In contrast, the interpolation tolerance $\varepsilon$ locally characterizes the behavior of all loss functions near a global minimum $\mathbf{w_\star}$ only.
More specifically, a low value for $\varepsilon$ ensures that all loss functions $\ell_z$ share a common minimizer $\mathbf{w_\star}$.
Thus these two mechanisms are distinct ways of ensuring convergence of SGD.
Importantly, a low interpolation tolerance $\varepsilon$ does not necessarily imply a low gradient variance $\upsilon$ and vice-versa.
\subsection{Maximal Learning-Rate}
\paragraph{Non-Convexity.}
The Polyak step-size may fail to converge when the objective is non-convex, as Figure \ref{fig:rsi} illustrates: in this (non-convex) setting, gradient descent with the Polyak step-size oscillates between two symmetrical points because its step-size is too large.
A similar behavior can be observed on the non-convex problem of training deep neural networks.
\begin{figure}[h]
\centering
\footnotesize
\input{include/rsi_plot.tex}
\caption{\em
A simple example where the Polyak step-size oscillates due to non-convexity.
On this problem, ALI-G converges whenever its maximal learning-rate is lower than $10$.
}
\label{fig:rsi}
\vspace{-20pt}
\end{figure}
In order to analyze the convergence of ALI-G in a non-convex setting, we introduce the Restricted Secant Inequality (RSI) \cite{Zhang2013}:
\begin{definition}
Let $\phi: \mathbb{R}^p \to \mathbb{R}$ be a lower-bounded differentiable function achieving its minimum at $\mathbf{w_\star}$.
We say that $\phi$ satisfies the RSI if there exists $\alpha > 0$ such that:
\begin{equation}
\forall {\bm{w}} \in \mathbb{R}^p, \: \nabla \phi ({\bm{w}})^\top ({\bm{w}} - \mathbf{w_\star}) \geq \alpha \| {\bm{w}} - \mathbf{w_\star} \|^2.
\end{equation}
\end{definition}
The RSI does not require convexity and is a weaker assumption in the sense that all strongly convex functions satisfy the RSI \cite{Zhang2013}.
In particular, the example in Figure \ref{fig:rsi} does satisfy the RSI (proof in the supplementary material).
In other words, the example above shows that the Polyak step-size can fail to converge under the RSI assumption.
In contrast, we prove that with an appropriate maximal learning-rate, ALI-G converges (exponentially fast) on all interpolating problems that satisfy the RSI:
\begin{restatable}{theorem}{thrsismooth}\label{th:alig_rsi}
We assume that $\Omega = \mathbb{R}^p$, and that for every $z \in \mathcal{Z}$, $\ell_z$ is $\beta$-smooth and satisfies the RSI with constant $\mu$.
Let $\mathbf{w_\star}$ be a solution of (\ref{eq:main_problem}) such that $\forall z \in \mathcal{Z}, \: \ell_z(\mathbf{w_\star}) = 0$.
Further assume that $\frac{1}{2 \beta} \leq \eta \leq \frac{2 \mu}{\beta^2}$.
Then if we apply ALI-G with a maximal learning-rate of $\eta$ to $f$, we have:
\begin{equation}
f(\mathbf{w}_{T+1}) - f_\star
\leq \tfrac{\beta}{2} \exp \left( \tfrac{-(2\mu - \eta \beta^2)T }{2\beta} \right) \| \mathbf{w}_{0} - \mathbf{w_\star} \|^2.
\end{equation}
\end{restatable}
Note that the above theorem assumes perfect interpolation, that is, a tolerance $\varepsilon = 0$.
Nonetheless, it demonstrates the importance of a maximal learning-rate, which does not need a manual decay schedule.
It is currently an open question whether a similar result to Theorem \ref{th:alig_rsi} can be proved with some interpolation tolerance $\varepsilon > 0$ on the value of all $\ell_z (\mathbf{w_\star})$.
\paragraph{Proximal Interpretation.}
Interestingly, using a maximal learning-rate can be seen as a natural extension of SGD when using a non-negative loss function:
\begin{restatable}{proposition}{thproxstep}[Proximal Interpretation] \label{th:prox_step}
Suppose that $\Omega = \mathbb{R}^p$ and let $\delta = 0$.
We consider the update performed by SGD: ${\bm{w}}_{t+1}^{\text{SGD}} = {\bm{w}}_t - \eta_t \nabla \ell_{z_t}({\bm{w}}_t)$; and the update performed by ALI-G: ${\bm{w}}_{t+1}^{\text{ALI-G}} = {\bm{w}}_t - \gamma_t \nabla \ell_{z_t}({\bm{w}}_t)$, where $\gamma_t = \min\left\{\frac{\ell_{z_t}({\bm{w}}_t)}{\| \nabla \ell_{z_t}({\bm{w}}_t)\|^2}, \eta \right\}$.
Then we have:
\begin{equation}
\begin{split}
&{\bm{w}}_{t+1}^{\text{SGD}} = \argmin_{{\bm{w}} \in \mathbb{R}^p} \Big\{ \tfrac{1}{2 \eta_t} \| {\bm{w}} - {\bm{w}}_t \|^2 +\\
&\qquad \ell_{z_t}({\bm{w}}_t) + \nabla \ell_{z_t}({\bm{w}}_t)^\top ({\bm{w}} - {\bm{w}}_t) \Big\}, \\
&{\bm{w}}_{t+1}^{\text{ALI-G}} = \argmin_{{\bm{w}} \in \mathbb{R}^p} \Big\{ \tfrac{1}{2 \eta} \| {\bm{w}} - {\bm{w}}_t \|^2 + \\
&\qquad \max \left\{\ell_{z_t}({\bm{w}}_t) + \nabla \ell_{z_t}({\bm{w}}_t)^\top ({\bm{w}} - {\bm{w}}_t), 0 \right\} \Big\}. \label{eq:prox_pb}
\end{split}
\end{equation}
\end{restatable}
In other words, at each iteration, ALI-G solves a proximal problem in closed form in a similar way to SGD.
In both cases, the loss function $\ell_{z_t}$ is locally approximated by a first-order Taylor expansion at ${\bm{w}}_t$.
The difference is that ALI-G also exploits the fact that $\ell_{z_t}$ is non-negative.
This allows ALI-G to use a constant value for $\eta$ in the interpolation setting, while the learning-rate $\eta_t$ of SGD needs to be manually decayed.
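This equivalence is easy to check numerically: the closed-form ALI-G point attains the minimum of the truncated proximal objective in (\ref{eq:prox_pb}). A small NumPy sketch (function names ours, for illustration):

```python
import numpy as np

def prox_objective(w, w_t, loss, grad, eta):
    # Truncated proximal model: quadratic proximity term plus the
    # first-order Taylor model of the loss, clipped at zero.
    lin = loss + float(np.dot(grad, w - w_t))
    return float(np.dot(w - w_t, w - w_t)) / (2.0 * eta) + max(lin, 0.0)

def alig_closed_form(w_t, loss, grad, eta):
    # Closed-form minimizer used by ALI-G (with delta = 0).
    gamma = min(loss / float(np.dot(grad, grad)), eta)
    return w_t - gamma * grad
```

Since the proximal objective is convex, the closed-form point can be verified to yield a lower objective value than any perturbation around it.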
\section{Related Work}
\paragraph{Interpolation in Deep Learning.}
As mentioned in the introduction, recent works have successfully exploited the interpolation assumption to prove convergence of SGD in the context of deep learning \citep{Ma2018a,Vaswani2019,Zhou2019}.
Such works are complementary to ours in the sense that they provide a convergence analysis of an existing algorithm for deep learning.
In a different line of work, \cite{Liu2019b} propose to exploit interpolation to prove convergence of a new acceleration method for deep learning.
However, their experiments suggest that the method still requires the use of a hand-designed learning-rate schedule.
\paragraph{Adaptive Gradient Methods.}
Similarly to ALI-G, most adaptive gradient methods also rely on tuning a single hyper-parameter, thereby providing a more pragmatic alternative to SGD, which requires the full learning-rate schedule to be specified.
While the most popular ones are Adagrad \citep{Duchi2011}, RMSPROP \citep{Tieleman2012}, Adam \citep{Kingma2015} and AMSGrad \citep{Reddi2018}, there have been many other variants
\citep{Zeiler2012,Orabona2015,Defossez2017,Levy2017,Mukkamala2017,Zheng2017,Bernstein2018,Chen2018,Shazeer2018,Zaheer2018,Chen2019,Loshchilov2019,Luo2019}.
However, as pointed out in \citep{Wilson2017}, adaptive gradient methods tend to give poor generalization in supervised learning.
In our experiments, the results provided by ALI-G are significantly better than those obtained by the most popular adaptive gradient methods.
Recently, \citet{Liu2019a} have proposed to \say{rectify} Adam with a learning-rate warmup, which partly bridges the gap in generalization performance between Adam and SGD.
However, their method still requires a learning-rate schedule, and thus remains difficult to tune on new tasks.
\paragraph{Adaptive Learning-Rate Algorithms.}
\citet{Vaswani2019a} show that one can use line search in a stochastic setting for interpolating models while guaranteeing convergence.
This work is complementary to ours, as it provides convergence results with weaker assumptions on the loss function, but is less practically useful as it requires up to four hyper-parameters, instead of one for ALI-G.
Less closely related methods, including second-order ones, adaptively compute the learning-rate without using the minimum \citep{Schaul2013,Martens2015,Tan2016,Zhang2017a,Baydin2018,Wu2018,Li2019,Henriques2019}, but do not demonstrate competitive generalization performance against SGD with a well-tuned hand-designed schedule.
\paragraph{$L_4$ Algorithm.}
The $L_4$ algorithm \citep{Rolinek2018} also uses a modified version of the Polyak step-size.
However, the $L_4$ algorithm computes an online estimate of $f_\star$ rather than relying on a fixed value.
This requires three hyper-parameters, which are in practice sensitive to noise and crucial for empirical convergence of the method.
In addition, $L_4$ does not come with convergence guarantees.
In contrast, by utilizing the interpolation property and a maximal learning-rate, our method is able to (i) provide reliable and accurate minimization with only a single hyper-parameter, and (ii) offer guarantees of convergence in the stochastic convex setting.
\paragraph{Frank-Wolfe Methods.}
The proximal interpretation in Proposition \ref{th:prox_step} allows us to draw additional parallels to existing methods.
In particular, the formula of the learning-rate $\gamma_t$ may remind the reader of the Frank-Wolfe algorithm \citep{Frank1956} in some of its variants \citep{Locatello2017}, or other dual methods \citep{Lacoste-Julien2013,Shalev-Shwartz2016}.
This is because such methods solve in closed form the dual of problem (\ref{eq:prox_pb}), and problems in the form of (\ref{eq:prox_pb}) naturally appear in dual coordinate ascent methods \citep{Shalev-Shwartz2016}.
When no regularization is used, ALI-G and Deep Frank-Wolfe (DFW) \citep{Berrada2019} are procedurally identical algorithms.
This is because in such a setting, one iteration of DFW also amounts to solving (\ref{eq:prox_pb}) in closed-form -- more generally, DFW is designed to train deep neural networks by solving proximal linear support vector machine problems approximately.
However, we point out the two fundamental advantages of ALI-G over DFW:
(i) ALI-G can handle arbitrary (lower-bounded) loss functions, while DFW can only use convex piece-wise linear loss functions;
and (ii) as seen previously, ALI-G provides convergence guarantees in the convex setting.
\paragraph{SGD with Polyak's Learning-Rate.}
\citet{Oberman2019} extend the Polyak step-size to rely on a stochastic estimate of the gradient $\nabla \ell_{z_t}({\bm{w}}_t)$ only, instead of the expensive deterministic gradient $\nabla f({\bm{w}}_t)$.
However, they still require evaluating $f({\bm{w}}_t)$, the objective function over the entire training data set, in order to compute the learning-rate, which makes the method impractical.
In addition, since they exploit neither the interpolation setting nor the fact that regularization can be expressed as a constraint, they also require knowledge of the optimal objective function value $f_\star$.
We also refer the interested reader to the recent analysis of \cite{Loizou2020}, which appeared after this work and provides a set of improved theoretical results.
\paragraph{{\sc aProx} Algorithm.}
\citet{Asi2019} have recently introduced the {\sc aProx} algorithm, a family of proximal stochastic optimization algorithms for convex problems.
Notably, the {\sc aProx} \say{truncated model} version is similar to ALI-G.
However, there are four clear advantages of our work over \citet{Asi2019} in the interpolation setting, in particular for training neural networks.
First, our work is the first to empirically demonstrate the applicability and usefulness of the algorithm on varied modern deep learning tasks -- most of our experiments use several orders of magnitude more data and model parameters than the small-scale convex problems of \citet{Asi2019}.
Second, our analysis and insights allow us to make more aggressive choices of learning-rate than \citet{Asi2019}.
Indeed, \citet{Asi2019} assume that the maximal learning-rate is exponentially decaying, even in the interpolating convex setting.
In contrast, by avoiding the need for an exponential decay, the learning-rate of ALI-G requires only one hyper-parameter instead of two for {\sc aProx}.
Third, our analysis takes into account the interpolation tolerance $\varepsilon \geq 0$ rather than unrealistically assuming the perfect case $\varepsilon = 0$ (that would require infinite weights when using the cross-entropy loss for instance).
Fourth, our analysis proves fast convergence in function space rather than iterate space.
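For concreteness, the adaptive step-size at the heart of ALI-G under the interpolation assumption ($f_\star \approx 0$) can be sketched in a few lines. This is an illustrative re-implementation on a toy problem, not the authors' released code, and the small stabilizing constant `delta` is our assumption:

```python
def alig_step_size(loss_value, grad, eta, delta=1e-8):
    """ALI-G learning rate: min(loss / (||grad||^2 + delta), eta)."""
    grad_sq = sum(g * g for g in grad)
    return min(loss_value / (grad_sq + delta), eta)

# Toy check on f(w) = 0.5 * w^2, whose minimum value is 0 (interpolation).
# The adaptive step evaluates to ~0.5 here, roughly halving the distance
# to the optimum at every iteration without any learning-rate schedule.
w = 4.0
for _ in range(20):
    loss, grad = 0.5 * w * w, [w]
    w -= alig_step_size(loss, grad, eta=10.0) * grad[0]
```

Note that the maximal learning-rate `eta` only caps the step, which is why a single, loosely tuned hyper-parameter suffices.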
\section{Experiments}
We empirically compare ALI-G to the optimization algorithms most commonly used in deep learning.
Our experiments span a variety of architectures and tasks: (i) learning a differentiable neural computer;
(ii) training wide residual networks on SVHN;
(iii) training a Bi-LSTM on the Stanford Natural Language Inference data set;
and (iv) training wide residual networks and densely connected networks on the CIFAR data sets.
Note that the tasks of training wide residual networks on SVHN and CIFAR-100 are part of the DeepOBS benchmark \citep{Schneider2019}, which aims at standardizing baselines for deep learning optimizers.
In particular, these tasks are among the most difficult ones of the benchmark because the SGD baseline benefits from a manual schedule for the learning rate.
Despite this, our set of experiments demonstrates that ALI-G obtains performance competitive with SGD.
In addition, ALI-G significantly outperforms adaptive gradient methods.
The code to reproduce our results is publicly available\footnote{\url{https://github.com/oval-group/ali-g}}.
In the TensorFlow \citep{Abadi2015} experiment, we use the official and publicly available implementation of $L_4$\footnote{\url{https://github.com/martius-lab/l4-optimizer}}.
In the PyTorch \citep{Paszke2017} experiments, we use our implementation of $L_4$, which we unit-test against the official TensorFlow implementation.
In addition, we employ the official implementation of DFW\footnote{\url{https://github.com/oval-group/dfw}} and we re-use their code for the experiments on SNLI and CIFAR.
All experiments are performed either on a 12-core CPU (differentiable neural computer), on a single GPU (SVHN, SNLI, CIFAR) or on up to 4 GPUs (ImageNet).
We emphasize that all methods have approximately the same cost per iteration.
Consequently, faster convergence in terms of number of iterations or epochs translates into faster convergence in terms of wall-clock time.
\subsection{Differentiable Neural Computers}
\paragraph{Setting.}
The Differentiable Neural Computer (DNC) \citep{Graves2016} is a recurrent neural network that aims at performing computing tasks by learning from examples rather than by executing an explicit program.
In this case, the DNC learns to repeatedly copy a fixed size string given as input.
Although this learning task is relatively simple, the complex architecture of the DNC makes it an interesting benchmark problem for optimization algorithms.
\paragraph{Methods.}
We use the official and publicly available implementation of DNC\footnote{\url{https://github.com/deepmind/dnc}}.
We vary the initial learning rate as powers of ten between $10^{-4}$ and $10^{4}$ for each method except for $L_4$Adam and $L_4$Mom.
For $L_4$Adam and $L_4$Mom, since the main hyper-parameter $\alpha$ is designed to lie in $(0, 1)$, we vary it between $0.05$ and $0.95$ with a step of $0.1$.
The gradient norm is clipped for all methods except for ALI-G, $L_4$Adam and $L_4$Mom (as recommended by \citep{Rolinek2018}).
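The search protocol above can be written down explicitly. The exact grid values below are our reading of the text, i.e. a hypothetical sketch rather than the authors' tuning code:

```python
# Hypothetical grids mirroring the protocol described above: powers of
# ten for most methods, and a fine grid in (0, 1) for the L4 variants.
lr_grid = [10.0 ** k for k in range((-4), 5)]               # 1e-4 ... 1e4
alpha_grid = [round(0.05 + 0.1 * i, 2) for i in range(10)]  # 0.05 ... 0.95
```

Each configuration is then trained once and the best final objective value is reported per method.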
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{include/dnc_plot.pdf}
\caption{
\em Final objective function when training a Differentiable Neural Computer for $10$k steps (lower is better).
The intensity of each cell is log-proportional to the value of the objective function (darker is better).
ALI-G obtains good performance for a very large range of $\eta$ ($10^{-1} \leq \eta \leq 10^6$).
}
\label{fig:dnc}
\end{figure}
\paragraph{Results.} We present the results in Figure \ref{fig:dnc}.
ALI-G provides accurate optimization for any $\eta$ within $[10^{-1}, 10^6]$, and is among the best-performing methods, reaching an objective value of $4 \times 10^{-8}$.
On this task, RMSProp, $L_4$Adam and $L_4$Mom also provide accurate and robust optimization.
In contrast to ALI-G and the $L_4$ methods, the most commonly used algorithms such as SGD, SGD with momentum and Adam are very sensitive to their main learning-rate hyper-parameter.
Note that the difference between well-performing methods is not significant here because these reach the numerical precision limit of single-precision float numbers.
\subsection{Wide Residual Networks on SVHN}
\paragraph{Setting.}
The SVHN data set contains 73k training samples, 26k testing samples and 531k additional easier samples.
From the 73k difficult training examples, we select 6k samples for validation; we use all remaining (both difficult and easy) examples for training, for a total of 598k samples.
We train a wide residual network 16-4 following \citep{Zagoruyko2016}.
\paragraph{Method.}
For SGD, we use the manual schedule for the learning rate of \citep{Zagoruyko2016}.
For $L_4$Adam and $L_4$Mom, we cross-validate the main learning-rate hyper-parameter $\alpha$ in $\{0.0015, 0.015, 0.15\}$ ($0.15$ is the value recommended by \citep{Rolinek2018}).
For other methods, the learning rate hyper-parameter is tuned as a power of 10.
The $\ell_2$ regularization is cross-validated in $\{0.0001, 0.0005\}$ for all methods but ALI-G.
For ALI-G, the regularization is expressed as a constraint on the $\ell_2$-norm of the parameters, and its maximal value is set to $50$.
SGD, ALI-G and BPGrad use a Nesterov momentum of 0.9.
All methods use a dropout rate of 0.4 and a fixed budget of 160 epochs, following \citep{Zagoruyko2016}.
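Expressing $\ell_2$ regularization as a constraint, as ALI-G does here, amounts to a Euclidean projection onto the ball $\{{\bm{w}} : \|{\bm{w}}\|_2 \leq \rho\}$ after each update. The following is a minimal sketch of that projection (our illustration, not the released implementation):

```python
def project_l2_ball(w, radius):
    """Project onto {w : ||w||_2 <= radius} by rescaling when outside."""
    norm = sum(x * x for x in w) ** 0.5
    if norm <= radius:
        return list(w)          # already feasible: leave unchanged
    scale = radius / norm
    return [x * scale for x in w]
```

Because the projection only rescales the parameter vector, it preserves the update direction while bounding the model's norm, playing the role of weight decay without an extra regularization coefficient to tune.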
\begin{table}[ht]
\centering
\begin{tabular}{lc|lc}
\toprule
\multicolumn{4}{c}{Test Accuracy on SVHN (\%)} \\
\midrule
Adagrad & 98.0 & Adam & 97.9 \\
AMSGrad & 97.9 & BPGrad & 98.1 \\
DFW & 98.1 &$L_4$Adam &{\bf 98.2} \\
$L_4$Mom & 19.6 & ALI-G & 98.1 \\
\cmidrule(lr){1-2} \cmidrule(lr){3-4}
{\color{red} SGD} &98.3 &{\color{red} SGD$^\dagger$} & 98.4 \\
\bottomrule
\end{tabular}
\caption{\em
In red, SGD benefits from a hand-designed schedule for its learning-rate.
In black, adaptive methods, including ALI-G, have a single hyper-parameter for their learning-rate.
SGD$^\dagger$ refers to the performance reported by \citep{Zagoruyko2016}.
}
\label{tab:svhn}
\end{table}
\paragraph{Results.}
The results are presented in Table \ref{tab:svhn}.
On this relatively easy task, most methods achieve about 98\% test accuracy.
Despite the cross-validation, $L_4$Mom does not converge on this task.
Even though SGD benefits from a hand-designed schedule, ALI-G and other adaptive methods obtain performance close to it.
\subsection{Bi-LSTM on SNLI}
\paragraph{Setting.}
We train a Bi-LSTM of 47M parameters on the Stanford Natural Language Inference (SNLI) data set \citep{Bowman2015}.
The SNLI data set consists of 570k pairs of sentences, with each pair labeled as entailment, neutral or contradiction.
This large scale data set is commonly used as a pre-training corpus for transfer learning to many other natural language tasks where labeled data is scarcer \citep{Conneau2017} -- much like ImageNet is used for pre-training in computer vision.
We follow the protocol of \citep{Berrada2019}; we also re-use their results for the baselines.
\paragraph{Method.}
For $L_4$Adam and $L_4$Mom, the main hyper-parameter $\alpha$ is cross-validated in $\{0.015, 0.15\}$ -- compared to the recommended value of 0.15, this helped convergence and considerably improved performance.
The SGD algorithm benefits from a hand-designed schedule, where the learning-rate is decreased by a factor of 5 when the validation accuracy does not improve.
Other methods use adaptive learning-rates and do not require such a schedule.
The value of the main hyper-parameter $\eta$ is cross-validated as a power of ten for the ALI-G algorithm and for previously reported adaptive methods.
Following the implementation by \citep{Conneau2017}, no $\ell_2$ regularization is used.
The algorithms are evaluated with the Cross-Entropy (CE) loss and the multi-class hinge loss (SVM), except for DFW, which is designed for use with an SVM loss only.
For all optimization algorithms, the model is trained for 20 epochs, following \citep{Conneau2017}.
\begin{table}[h]
\centering
\begin{tabular}{lcc|lcc}
\toprule
\multicolumn{6}{c}{Test Accuracy on SNLI (\%)} \\
\midrule
& CE & SVM & & CE & SVM \\
\cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){5-5} \cmidrule(lr){6-6}
Adagrad$^*$ &83.8 &84.6 &Adam$^*$ &84.5 &85.0 \\
AMSGrad$^*$ &84.2 &85.1 &BPGrad$^*$ &83.6 &84.2 \\
DFW$^*$ & - &{\bf 85.2} & $L_4$Adam &83.3 &82.5 \\
$L_4$Mom &83.7 &83.2 & {\color{blue}ALI-G$^\infty$} &84.6 &84.7 \\
ALI-G & {\bf 84.8} &{\bf 85.2} & & \\
\cmidrule(lr){1-3} \cmidrule(lr){4-6}
{\color{red} SGD$^*$} &84.7 &85.2 &{\color{red} SGD$^\dagger$} &84.5 & - \\
\bottomrule
\end{tabular}
\caption{\em
In red, SGD benefits from a hand-designed schedule for its learning-rate.
In black, adaptive methods have a single hyper-parameter for their learning-rate.
In blue, {\color{blue} ALI-G$^\infty$} does not have any hyper-parameter for its learning-rate.
With an SVM loss, DFW and ALI-G are procedurally identical algorithms
-- but in contrast to DFW, ALI-G can also employ the CE loss.
Methods in the format $X^*$ re-use results from \citep{Berrada2019}.
SGD$^\dagger$ is the result from \citep{Conneau2017}.
}
\label{tab:snli}
\end{table}
\paragraph{Results.} We present the results in Table \ref{tab:snli}.
ALI-G$^\infty$ is the only method that requires no hyper-parameter for its learning-rate.
Despite this, and the fact that SGD employs a learning-rate schedule that has been hand designed for good validation performance, ALI-G$^\infty$ is still able to obtain results that are competitive with SGD.
Moreover, ALI-G, which requires a single hyper-parameter for the learning-rate, outperforms all other methods for both the SVM and the CE loss functions.
\subsection{Wide Residual Networks and Densely Connected Networks on CIFAR}
\paragraph{Setting.}
We follow the methodology of \citep{Berrada2019}, and we reproduce their results.
We test two architectures: a Wide Residual Network (WRN) 40-4 \citep{Zagoruyko2016} and a bottleneck DenseNet (DN) 40-40 \citep{Huang2017a}.
We use 45k samples for training and 5k for validation.
The images are centered and normalized per channel.
We apply standard data augmentation with random horizontal flipping and random crops.
AMSGrad was selected in \citep{Berrada2019} because it was the best adaptive method on similar tasks, outperforming in particular Adam and Adagrad.
In addition to the baselines from \citep{Berrada2019}, we also provide the performance of $L_4$Adam, $L_4$Mom, AdamW \citep{Loshchilov2019} and Yogi \citep{Zaheer2018}.
\paragraph{Method.}
All optimization methods employ the cross-entropy loss, except for the DFW algorithm, which is designed to use an SVM loss.
For DN and WRN respectively, SGD uses the manual learning rate schedules from \citep{Huang2017a} and \citep{Zagoruyko2016}.
Following \citep{Berrada2019}, the batch-size is cross-validated in $\{64, 128, 256\}$ for the DN architecture, and $\{128, 256, 512\}$ for the WRN architecture.
For $L_4$Adam and $L_4$Mom, the learning-rate hyper-parameter $\alpha$ is cross-validated in $\{0.015, 0.15\}$.
For AMSGrad, AdamW, Yogi, DFW and ALI-G, the learning-rate hyper-parameter $\eta$ is cross-validated as a power of 10 (in practice $\eta \in \{0.1, 1\}$ for ALI-G).
SGD, DFW and ALI-G use a Nesterov momentum of 0.9.
Following \citep{Berrada2019}, for all methods but ALI-G and AdamW, the $\ell_2$ regularization is cross-validated in $\{0.0001, 0.0005\}$ on the WRN architecture, and is set to $0.0001$ for the DN architecture.
For AdamW, the weight-decay is cross-validated as a power of 10.
For ALI-G, $\ell_2$ regularization is expressed as a constraint on the norm of the vector of parameters; its maximal value is set to $100$ for the WRN models, $80$ for DN on CIFAR-10 and $75$ for DN on CIFAR-100.
For all optimization algorithms, the WRN model is trained for 200 epochs and the DN model for 300 epochs, following respectively \citep{Zagoruyko2016} and \citep{Huang2017a}.
\paragraph{Results.}
We present the results in Table \ref{tab:cifar}.
In this setting again, ALI-G obtains competitive performance with manually decayed SGD.
ALI-G largely outperforms AMSGrad, AdamW and Yogi.
\begin{table}[ht]
\centering
\begin{tabular}{lcccc}
\toprule
\multicolumn{5}{c}{Test Accuracy on CIFAR (\%)} \\
\midrule
&\multicolumn{2}{c}{CIFAR-10} &\multicolumn{2}{c}{CIFAR-100} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& WRN & DN & WRN & DN \\
\cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
AMSGrad & 90.8 & 91.7 & 68.7 & 69.4 \\
AdamW & 92.1 & 92.6 & 69.6 & 69.5 \\
Yogi & 91.2 & 92.1 & 68.7 & 69.6 \\
DFW & 94.2 & 94.6 & {\bf 76.0} & 73.2 \\
$L_4$Adam & 90.5 & 90.8 & 61.7 & 60.5 \\
$L_4$Mom & 91.6 & 91.9 & 61.4 & 62.6 \\
ALI-G & {\bf 95.2} & {\bf 95.0} & 75.8 & {\bf 76.3} \\
\midrule
{\color{red} SGD} & 95.3 & 95.1 & 77.8 & 76.3 \\
{\color{red} SGD$^\dagger$} & 95.4 & - & 78.8 & - \\
\bottomrule
\end{tabular}
\caption{\em
In red, SGD benefits from a hand-designed schedule for its learning-rate.
In black, adaptive methods, including ALI-G, have a single hyper-parameter for their learning-rate.
SGD$^\dagger$ refers to the result from \citep{Zagoruyko2016}.
Each reported result is an average over three independent runs;
the standard deviations are reported in the Appendix (they are at most $0.3$ for ALI-G and SGD).
}
\label{tab:cifar}
\end{table}
\subsection{Comparing Training Performance on CIFAR-100}
In this section, we empirically assess the performance of ALI-G and its competitors in terms of training objective on CIFAR-100.
In order to have comparable objective functions, the $\ell_2$ regularization is deactivated.
The learning-rate is selected as a power of ten for best final objective value, and the batch-size is set to its default value.
For clarity, we only display the performance of SGD, Adam, Adagrad and ALI-G (DFW does not support the cross-entropy loss).
The $L_4$ methods diverge in this setting.
Here SGD uses a constant learning-rate to emphasize the need for adaptivity.
Therefore all methods use one hyper-parameter for their learning-rate.
All methods use a fixed budget of 200 epochs for WRN-CIFAR-100 and 300 epochs for DN-CIFAR-100.
As can be seen, ALI-G provides better training performance than the baseline algorithms on all tasks.
\begin{figure}[h]
\centering
\footnotesize
\input{include/cifar_train_wrn_cifar100.tex}
\hspace{-10pt}
\input{include/cifar_train_dn_cifar100.tex}
\caption{\em
Objective function over the epochs on CIFAR-100 (smoothed with a moving average over 5 epochs).
ALI-G reaches a value that is an order of magnitude better than the baselines.
}
\label{fig:cifar100_training}
\vspace{-10pt}
\end{figure}
\subsection{Training at Large Scale}
We demonstrate the scalability of ALI-G by training a ResNet-18 \citep{He2016} on the ImageNet data set.
In order to satisfy the interpolation assumption, we employ a loss function tailored for top-5 classification \citep{Lapin2016}, and we do not use data augmentation.
Our focus here is on the training objective and accuracy.
ALI-G uses the following training setup: a batch-size of 1024 split over 4 GPUs, a maximal $\ell_2$ norm of 400 for $\mathbf{w}$, a maximal learning-rate of 10 and no momentum.
SGD uses the state-of-the-art hyper-parameters and learning-rate schedule from \citep{He2016}.
As can be seen in Figure \ref{fig:imagenet_training}, ALI-G reaches 99\% top-5 accuracy in 12 epochs (faster than SGD), and minimizes the objective function as well as SGD does with its custom schedule.
\begin{figure}[H]
\centering
\footnotesize
\hfill
\input{include/imagenet_obj.tex}
\hfill
\input{include/imagenet_acc.tex}
\hfill
\caption{\em
Training a ResNet-18 on ImageNet.
The final performance of ALI-G is as good as that of SGD, even though SGD benefits from a custom learning-rate schedule.
In addition, ALI-G reaches a high training accuracy faster than SGD.
}
\label{fig:imagenet_training}
\vspace{-10pt}
\end{figure}
\section{Discussion}
We have introduced ALI-G, an optimization algorithm that automatically adapts the learning-rate in the interpolation setting.
ALI-G provides convergence guarantees in the stochastic setting, including for a class of non-convex problems.
By using the same descent direction as SGD, it offers comparable generalization performance while requiring significantly less tuning.
In future work, it would be interesting to extend ALI-G to the non-interpolating setting by adapting the minimum $f_\star$ online while requiring few hyper-parameters.
\subsection*{Acknowledgements}
This work was supported by the EPSRC grants AIMS CDT EP/L015987/1, Seebibyte EP/M013774/1, EP/P020658/1 and TU/B/000048, and by YouGov.
We also thank the Nvidia Corporation for the GPU donation.
\subsection{SVHN}
\begin{table}[H]
\centering
\begin{tabular}{lccc}
\toprule
& \multicolumn{3}{c}{Optimal Hyper-Parameter} \\ \cmidrule(lr){2-4}
Optimizer & $\eta$ or $\alpha$ & $\lambda$ & $\rho$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4}
Adagrad & 0.01 & 5e-4 & - \\
Adam & 0.00001 & 5e-4 & - \\
AMSGrad & 0.0001 & 5e-4 & - \\
BPGrad & 0.01 & 5e-4 & - \\
DFW & 0.01 & 1e-4 & - \\
L4Adam & 0.15 & 1e-4 & - \\
L4Mom & 0.0015 & 5e-4 & - \\
ALI-G & 0.01 & - & 50 \\
\bottomrule
\end{tabular}
\caption{\em Cross-validation results on SVHN.}
\end{table}
\subsection{SNLI}
\begin{table}[H]
\centering
\begin{tabular}{lc}
\toprule
\multicolumn{2}{c}{CE Loss} \\ \midrule
&Optimal Hyper-Parameter \\ \cmidrule(lr){2-2}
Optimizer & $\eta$ or $\alpha$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2}
L4Adam & 0.15 \\
L4Mom & 0.015 \\
ALI-G & 1 \\
\bottomrule
\end{tabular}
\vspace{12pt}
\begin{tabular}{lc}
\toprule
\multicolumn{2}{c}{SVM Loss} \\ \midrule
&Optimal Hyper-Parameter \\ \cmidrule(lr){2-2}
Optimizer & $\eta$ or $\alpha$ \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2}
L4Adam & 0.015 \\
L4Mom & 0.015 \\
ALI-G & 1 \\
\bottomrule
\end{tabular}
\caption{\em Cross-validation results on SNLI.}
\end{table}
\subsection{CIFAR}
\begin{table}[H]
\centering
\begin{tabular}{lcccc}
\toprule
\multicolumn{5}{c}{WRN on CIFAR-10} \\ \midrule
& \multicolumn{4}{c}{Optimal Hyper-Parameter} \\ \cmidrule(lr){2-5}
Optimizer & $\eta$ or $\alpha$ & $\lambda$ & $\rho$ &batch-size \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
L4Adam & 0.015 & 1e-4 & - & 128 \\
L4Mom & 0.15 & 1e-4 & - & 128 \\
ALI-G & 1 & - & 100 & 256 \\
\bottomrule
\end{tabular}
\vspace{12pt}
\begin{tabular}{lcccc}
\toprule
\multicolumn{5}{c}{DN on CIFAR-10} \\ \midrule
& \multicolumn{4}{c}{Optimal Hyper-Parameter} \\ \cmidrule(lr){2-5}
Optimizer & $\eta$ or $\alpha$ & $\lambda$ & $\rho$ &batch-size \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
L4Adam & 0.15 & 1e-4 & - & 256 \\
L4Mom & 0.15 & 1e-4 & - & 64 \\
ALI-G & 0.1 & - & 75 & 64 \\
\bottomrule
\end{tabular}
\vspace{12pt}
\begin{tabular}{lcccc}
\toprule
\multicolumn{5}{c}{WRN on CIFAR-100} \\ \midrule
& \multicolumn{4}{c}{Optimal Hyper-Parameter} \\ \cmidrule(lr){2-5}
Optimizer & $\eta$ or $\alpha$ & $\lambda$ & $\rho$ &batch-size \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
L4Adam & 0.015 & 1e-4 & - & 512 \\
L4Mom & 0.015 & 5e-4 & - & 512 \\
ALI-G & 0.1 & - & 100 & 128 \\
\bottomrule
\end{tabular}
\vspace{12pt}
\begin{tabular}{lcccc}
\toprule
\multicolumn{5}{c}{DN on CIFAR-100} \\ \midrule
& \multicolumn{4}{c}{Optimal Hyper-Parameter} \\ \cmidrule(lr){2-5}
Optimizer & $\eta$ or $\alpha$ & $\lambda$ & $\rho$ &batch-size \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
L4Adam & 0.015 & 1e-4 & - & 256 \\
L4Mom & 0.015 & 1e-4 & - & 256 \\
ALI-G & 0.1 & - & 75 & 256 \\
\bottomrule
\end{tabular}
\caption{\em Cross-validation results on CIFAR.}
\end{table}
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
namespace Aurora.DataAccess.Entities
{
public class TaskEntity : InternalEntity
{
public TaskEntity()
{
this.Labels = new HashSet<TaskLabelEntity>();
}
public int BacklogItemId { get; set; }
[ForeignKey("BacklogItemId")]
public BacklogItemEntity BacklogItem { get; set; }
public int UserProjectId { get; set; }
[ForeignKey("UserProjectId")]
public UserProjectEntity UserProject { get; set; }
public string Title { get; set; }
public string Description { get; set; }
public ICollection<TaskLabelEntity> Labels { get; set; }
}
}
Jaú River may refer to:
Jaú River (Amazonas), Brazil
Jaú River (São Paulo), Brazil
See also
Jau (disambiguation)
{"url":"http:\/\/qesn.affarefattotorinoshop.it\/check-if-list-contains-consecutive-numbers.html","text":"# Check If List Contains Consecutive Numbers\n\nRemoves all but the first element from every consecutive group of equivalent elements in the range [first,last). The input to this function is your list. In below example, we have iterate of all the elements one by one and put elements in hash map with counter 1. Generate a list of numbers, choose the first and last numbers and the step between consecutive numbers. check-duplicates. Although floating-point numbers includes integers (e. Any help would be appreciated. For a thorough check, you will need to return to your email and hunt down the confirmation emails in For instance, you can enter \"subject: verify\" to fetch all the emails with subject lines containing the Deseat is another effective method on this list if you are looking for ways to find all accounts linked to. You can choose your starting number and colors (which include many fluorescent colors for extra conspicuity). We compute the number of values, maximum, minimum, sum, and the average of the values. Outside of Office Hours, contact: 613-238-5335. There are two sets of encoded numbers; the first contains the bank's routing number, and the second contains the customer's account number along with the check number of the specific check. There are three different ways to get the number 37 out of the above text using a regular expression. Imagine tha I have a list of names and I want to count how many names contains the letter \"a\" for example. Weekly and biweekly reports include nursing facility data, cases by city\/town, residents subject to COVID-19 quarantine, and data from State facilities. Suppose we are sorting a large number of local phone numbers, for example, all residential phone numbers in the 412 area code region (about 1 million) We sort the numbers without use of comparisons in the following way. 
For example, the check 0 == 2 evaluates to 0. 0 (to distinguish it from the previous informal specifications). The contains method uses \"loose\" comparisons when checking item values, meaning a string with an The partition method may be combined with the list PHP function to separate elements that pass a given truth The values method returns a new collection with the keys reset to consecutive integers. 499999999999999999999999 are treated differently by this script. The list also has a pair of adjacent numbers (18, 17) that are not in the right order to be considered consecutive. Sharing Debugger lets you preview how your content will look when it's shared to Facebook and debug any issues with your Open Graph tags. 2 or true when StdIn. An arithmetic progression of primes is a set of primes of the form for fixed and and consecutive , i. Kotlin lists tutorial shows how to work with lists in Kotlin. x = the smallest x+2 = next to smallest x+4 = next to largest x+6 = largest The sum of four consecutive even numbers So we add them all up by putting in plusses x + (x+2) + (x+4) + (x+6) is the same as. are primes. How to create Kotlin List, how to add, update, replace, remove items in mutable List, iterate over List In this tutorial, I will show you many methods and functions to work with List & MutableList in Kotlin. For instance 8 is 3+5 and 22 is 11+11. isdigit(char) on each character char. 1[0-9][0-9] takes care of 100 to 199. The prime factorization of a number is a term used to describe a list of prime numbers that, when multiplied, results in the number. Populat A: See Answer. This method returns the index of the first occurrence of the. Describe your the best way to win the game, to get an. integer indices into the document columns) or strings that correspond to column names provided When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not to interpret two consecutive quotechar elements INSIDE a. 
Pseudo Code Example 1 is one of the ways that pseudo code can be written. The table Reseller contains one row for each reseller; the table ResellerSales contains multiple rows for each order, each row containing one order for a particular reseller. 2002-10-03: From Stephan:. You need to pay attention to these limitations of both functions and decide in which cases is better to use IsNumeric and when IsNumber. The function cannot alter the properties of the object containing the range of elements (i. If the function intention is better expressed by a signature with an extensible The collection will contain a large number of items. Numbers are added. L The string will begin on a word boundary. Choose General as the format and click on the OK button. Initialize the variable flag to zero. [tb_country] => Mexico [seasonName] => 2019\/2020 [cta_id] => 458 [fts_B] => 4 Before you bet with your bookie, you should analyze the match using H2H stats. Finding consecutive numbers in a list python. Marks the place for a character variable. (Un-checked will normally take less time to output. With modern technology, this letter can be posted in social media sites like Twitter. Home | Utah Legislature. Check if array elements are consecutive in O(n) time and O(1) space (Handles Both Positive and negative numbers) Please write comments if you find the above codes\/algorithms incorrect, or find other ways to solve the same problem. This page contains questions and answers from our readers relating to solar physics. 2 per cent, according to retail experts Springboard's latest data. Tuples can be used as keys if they contain only strings, numbers, or tuples; if a tuple contains any mutable object either directly or indirectly, it cannot be used as a key. 
ESLint checks the file path of named code blocks then ignores those if any overrides entry didn't match the To configure plugins inside of a configuration file, use the plugins key, which contains a list of ESLint comes with a large number of rules. How to create Kotlin List, how to add, update, replace, remove items in mutable List, iterate over List In this tutorial, I will show you many methods and functions to work with List & MutableList in Kotlin. 2020 In 233 In 233. How to Checks Checks if the String contains only digits * @author javaguides. JavaScript offers different ways to perform this operation. After clicking the link, the user will be validated. Let's say that an array is sequential when each successful element has the value of previous element + 1. Third, the FREQUENCY function returns values that are greater than one bin and less than or equal to the next-higher bin. The result is a column vector of the elements in A that are less than 9. contains names, and I'd like to check if the contents of another cell (D1) matches one of the names in my list. Using for loop, check if the input number is divisible by any of the natural numbers starting from 2. i cant make a bash script who check if a input numbers in command line is an power of 2 input #. Repeat values across cells. You can use the PHP strpos() function to check whether a string contains a specific word or not. Check if list contains consecutive numbers in Python, IF you can assume 1) they are integers, 2) they are ascending, and 3) no How to test if a list contains consecutive integers in Mathematica? The output should be a list of tuples as follows. Cells that do not contain letters are hard coded data. Tips on how to set up Blockade 3D Hack Unlimited Coins for Android\/iOS FREE : Strater 5. In this post, we will see how to check if array elements are consecutive. 
I know the OP specifically stated that the list came from a range of cells, but others might stumble This will give you the number where this matches (in this case, the second spot, so 2). The numbers come from a list's indexes that a user selected in UI. Imagine tha I have a list of names and I want to count how many names contains the letter \"a\" for example. can you replace the stars with figures **** x 3 _____ ***** the whole calculation uses each of the digits 0-9 once and once only the 4 figure number contains three consecutive numbers which are not in order. The other two numbers can be written as x+1 and x+2. Values of a list are called items or elements of the list. This can be 1 to 10, or 1 to 1,000,000, or anything in between. We can use the Fill Handle to quickly create this list in a column. Providing safe drinking water is a partnership that involves EPA, the states, tribes, water systems, and water system operators. \u2026will display the line numbers in a file that contain a specified Regular Expression. org or mail your article to [email\u00a0protected] For example, the call count(\"ho-tel\") should return 2. If you check numbers manually and see the numbers one after another, you will spend a lot of time and energy, especially when there are many items in the column. An integer number (>= 0) : can be a number, or a string, or list of digits; Output. Write for higher quality scan or layaway options. For the matched phone numbers, you don\u2019t want to just append group 0. In this article, we'll examine four ways to use Python to check whether a string contains a substring. Silicon (14 Si) has three stable isotopes with consecutive mass numbers. If you need more than 100 consecutive numbers and cannot find that many in our system, please contact our sales team for further assistance. 
There are two sets of encoded numbers; the first contains the bank's routing number, and the second contains the customer's account number along with the check number of the specific check. Shop ZB Marker Strip, Flat, 10-Section, Vertically Labeled, White, 6mm Wide, Consecutive Numbers 61-70 by Phoenix Contact (1051029-0061) at Graybar, your trusted resource for Terminal Block Markers and other Phoenix Contact products. Tips on how to set up Blockade 3D Hack Unlimited Coins for Android\/iOS FREE : Strater 5. I want to replace the entire cell array with zeros. Rolling dice - Sum greater $\\geq$ certain value. The solution of the enumeration problem is given in terms of a rational generating. (2) At least one value occurs more than once in the list. So 1 will be the first word (not 0). $$S_2$$ = sum of last n odd numbers = sum of all odd numbers from 1 to 100 \u2013 sum of the first (50 \u2013 n) odd numbers \u2022 The sum of first n odd natural numbers = $$n^2$$. find the index of the first\/last. Added the ability to create animated sticker sets by specifying the parameter tgs_sticker instead of png_sticker in the method createNewStickerSet. In this article we will discuss different ways to check if a given element exists in list or not. Look at the numbers at the bottom of the check. numbers of the. A wealth of women Australian womens lives from 1788 to the. There is also the Integer Generator which generates the numbers independently of each other (like rolls of a die) and where each number can occur more than once. set::upper_bound. You'll see the number of characters and words increase or decrease as you type, delete, and edit them. The list , number_list , will be populated by the items in range from 0-19 if the item's value is divisible by 2. Here are detailed instructions on how to create the randomly sorted list of numbers. For example: Input: array = {5, 3, 4, 1, 2} Output: true As array contains consecutive elements from. 
Check if a list contains consecutive numbers in Python. Depending on the needs of our data analysis, we may need to check for the presence of sequential numbers in a Python data container. Given a list of numbers, the task is to decide whether its elements form an unbroken run of consecutive integers, that is, values that can be rearranged into n, n+1, n+2, and so on.
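A minimal sketch of such a check (the function name is my own):

```python
def is_consecutive(nums):
    """Return True if nums can be reordered into an unbroken run of integers."""
    s = sorted(nums)
    # every adjacent pair in the sorted list must differ by exactly 1
    return all(b - a == 1 for a, b in zip(s, s[1:]))

print(is_consecutive([5, 3, 4, 1, 2]))  # True
print(is_consecutive([1, 2, 5]))        # False
```

Sorting costs O(n log n), which is perfectly fine for small lists.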
If the list is known to hold distinct integers, we do not have to walk through all the items: it is enough to use the first and last numbers, i.e. the minimum and maximum. Since the elements are distinct, no set other than a set of consecutive integers can fit n elements into a span of max - min = n - 1. A related trick for runs inside an array: keep a running counter C (indicating the subsequent length of 1s seen), initialised to 0, and while iterating over the array increment C whenever you see a 1, resetting it to 0 otherwise.
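Both ideas can be sketched in a few lines (names are my own; the first function relies on the distinct-integers assumption, which it verifies with a set):

```python
def is_consecutive_distinct(nums):
    """True if nums holds distinct integers with max - min == len - 1."""
    if not nums:
        return True
    s = set(nums)
    return len(s) == len(nums) and max(s) - min(s) == len(nums) - 1

def longest_run_of_ones(bits):
    """Running-counter scan: length of the longest run of 1s."""
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b == 1 else 0  # reset on anything but a 1
        best = max(best, cur)
    return best
```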
These checks extend beyond plain integers. For dates stored in SQL, a gaps-and-islands query can aggregate each group "grp" of consecutive dates, counting the number of dates in the group as well as finding the lowest and the highest date within each group. In Java, you can check for consecutive dates with something like this: Calendar c = Calendar.getInstance(); int numConsecutive = 0; Date last = null; then walk the sorted dates, incrementing numConsecutive whenever the current date falls exactly one day after last. A similar question asks for a regular expression that checks whether a string contains 2 consecutive digits.
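A Python equivalent of the consecutive-dates counter (a sketch; it assumes the dates are already sorted):

```python
from datetime import date, timedelta

def longest_consecutive_dates(dates):
    """Length of the longest run of day-after-day dates in a sorted list."""
    if not dates:
        return 0
    best = cur = 1
    for prev, nxt in zip(dates, dates[1:]):
        cur = cur + 1 if nxt - prev == timedelta(days=1) else 1
        best = max(best, cur)
    return best

ds = [date(2021, 1, 1), date(2021, 1, 2), date(2021, 1, 3), date(2021, 1, 5)]
print(longest_consecutive_dates(ds))  # 3
```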
In the programs below we find out whether, among the elements of a list, there are any consecutive numbers. Related problems make good practice: checking whether an array contains three consecutive dates in Java; printing numbers such that no two consecutive numbers are co-prime while every three consecutive numbers are; Consecutive Numbers Sum in C++; and spotting consecutive repeating characters in a string (a Repeating Characters rule is typically not case sensitive, so "mypaSssSword" contains four consecutive repeating characters).
Given a list of numbers, write a Python program to check if the list contains consecutive integers: if n is a number in the list, then the next numbers will be n+1 and n+2, and so on up the run. Closely related exercises include removing consecutive duplicates from a list, joining consecutive K elements, averaging each n-length consecutive segment, and checking whether the list contains three consecutive common numbers, meaning the same value three times in a row.
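The three-consecutive-common check is a one-liner with zip (the function name is mine):

```python
def has_three_consecutive_common(nums):
    """True if some value occurs at three consecutive positions."""
    return any(a == b == c for a, b, c in zip(nums, nums[1:], nums[2:]))

print(has_three_consecutive_common([1, 4, 4, 4, 9]))  # True
print(has_three_consecutive_common([1, 4, 4, 5, 4]))  # False
```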
One property worth remembering: since a set of consecutive integers (positive or negative) is evenly spaced, the mean of the numbers is the median. That makes sums of consecutive runs easy to reason about, because the total is simply the middle value multiplied by the number of terms.
What are consecutive numbers? This question often comes to the mind of maths students in early classes. Consecutive numbers are numbers that follow each other in order, each exactly 1 greater than the one before, such as 23, 24, 25. Problem: given an array, we need to check if the array contains consecutive elements. For example: Input: array = {5, 3, 4, 1, 2}, Output: true, as the array contains the consecutive elements from 1 to 5.
Given that the list contains consecutive integers, it's enough if we find one of the numbers and its position in the list to have all the numbers in the list. The same spacing argument helps locate a duplicate: for example, if an array that should hold consecutive values starting at 0 contains {0, 3, 1, 2, 3}, then the duplicated number is 3, because a set comparison (or the sum) immediately exposes the repeated value.
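That observation can be turned into a small reconstruction helper (a hypothetical function of my own devising, not from the original text):

```python
def reconstruct(value, index, length):
    """Rebuild a consecutive run from one known element and its position."""
    start = value - index          # the run begins index steps before value
    return list(range(start, start + length))

print(reconstruct(7, 2, 5))  # [5, 6, 7, 8, 9]
```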
A simple Python implementation sorts the list and checks if the elements in the list are consecutive; as the original docstring warns, this function does not handle any exceptions (such as non-numeric input):

def check_is_consecutive(l):
    """sorts the list and checks if the elements in the list are consecutive"""
    s = sorted(l)
    return all(s[i] + 1 == s[i + 1] for i in range(len(s) - 1))
For the usual requirement that your algorithm should run in O(n) complexity, iterate over all the elements one by one and put them in a hash set, then extend a run only from each value whose predecessor is absent; the longest run found will match the length of the list exactly when all numbers are consecutive. A related classic: given an array of integers, some positive, some negative, some neither, find the set of consecutive positions in this array with the largest possible sum.
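Sketches of both O(n) approaches (function names are mine; the second is Kadane's algorithm):

```python
def longest_consecutive_run(nums):
    """O(n): longest run of consecutive integer values anywhere in nums."""
    s = set(nums)
    best = 0
    for n in s:
        if n - 1 not in s:          # n starts a run
            m = n
            while m + 1 in s:
                m += 1
            best = max(best, m - n + 1)
    return best

def max_subarray_sum(nums):
    """Kadane's algorithm: largest sum over consecutive positions."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)       # extend the run or start over at x
        best = max(best, cur)
    return best

print(longest_consecutive_run([100, 4, 200, 1, 3, 2]))    # 4
print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```

A list of n distinct values is fully consecutive exactly when longest_consecutive_run returns n.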
Q: Given a sequence of random numbers, find the start and end positions of runs of five or more consecutive numbers that are greater than zero. (Note that for the consecutive-values checks above, all elements in the array should be distinct.)
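A sketch of a run finder (the name and the inclusive-end convention are my own choices):

```python
def positive_runs(seq, min_len=5):
    """(start, end) index pairs for runs of values > 0 at least min_len long."""
    runs, start = [], None
    for i, x in enumerate(seq):
        if x > 0:
            if start is None:
                start = i           # a new run begins here
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(seq) - start >= min_len:
        runs.append((start, len(seq) - 1))   # run reaching the end
    return runs

print(positive_runs([0, 1, 2, 3, 4, 5, 0, 1, 1]))  # [(1, 5)]
```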
A practical variant: I need help assembling a Python script that counts the number of consecutive numbers in a field (Long Int) and writes that count in another field for those numbers. Similar questions recur in other languages, for example checking three consecutive numbers in JavaScript, summing consecutive elements of an array in JavaScript, and checking whether an array contains three consecutive dates in Java.
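One way to compute the per-row counts in plain Python before writing them back (a sketch; the database or cursor plumbing is omitted, and the function name is mine):

```python
def consecutive_run_counts(values):
    """For each position, the length of the consecutive-number run containing it."""
    counts, start = [], 0
    for i in range(1, len(values) + 1):
        # close the current run at the end of the list or at a gap
        if i == len(values) or values[i] != values[i - 1] + 1:
            counts.extend([i - start] * (i - start))
            start = i
    return counts

print(consecutive_run_counts([4, 5, 6, 9, 10]))  # [3, 3, 3, 2, 2]
```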
The same checks matter in databases. Suppose a table has an integer column that is not an identity but contains an externally generated ID that must be consecutive, or a listing of database transaction logs that gets copied to a DR site and that, if complete, is numbered consecutively. The task is then to check for (and print) any missing number in the consecutive range, and to remove duplicate numbers.
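A sketch of the gap-and-duplicate report (the function name is mine; it assumes at least one ID):

```python
def missing_and_duplicates(ids):
    """Missing values in the range min..max, plus any duplicated IDs."""
    seen = set(ids)
    missing = sorted(set(range(min(ids), max(ids) + 1)) - seen)
    dupes = sorted({x for x in ids if ids.count(x) > 1})
    return missing, dupes

print(missing_and_duplicates([101, 102, 104, 104, 106]))
# ([103, 105], [104])
```

ids.count inside the comprehension is quadratic; for long transaction logs, collections.Counter is the better tool.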
As the title suggests, I want to check if any number is missing from an array. Here we find the minimum and maximum element of the array in one traversal; for an array of distinct values, nothing is missing exactly when the maximum minus the minimum equals the length minus one.
Unadorned integer literals (including hex, octal and binary numbers) yield Numeric literals containing a decimal point or an exponent sign yield floating point numbers. It searches for a specific element to see if that element occurs at least once in the collection. This Class number is assigned in accordance with the procedures and criteria of Part 2. ) Utility to check the structure of a database: IsDecimalField() Test if a field is a Decimarl. Hello and I'm quite new to Math SE. \u2013 For dangerous substances of Class 2, the code consists of a number. The first line should contain the number of temperatures in the file and the next line should contain integer temperatures separated by blanks. This provides news about or relevant to public debt management in the Caribbean. Check if elements of a list are numbers or strings. Now try the following. But This Function can't count same string number. There are other types of consecutive patterns or sequences, such as even integers where the List 6 factors of the product of any 3 consecutive even integers. L:\\XML\\CPRT-113-HPRT-RU00-HR83. can you replace the stars with figures **** x 3 _____ ***** the whole calculation uses each of the digits 0-9 once and once only the 4 figure number contains three consecutive numbers which are not in order. You can check a response value against a list of valid options. L The string will begin on a word boundary. Here we find the minimum and maximum element of the array in one traversal. Hi, In an ideal scenario, I will have a listing of db transaction log that gets copied to a DR site and if I have them all, they Sum every 3 consecutive numbers in a column. 40, 45, 50 and 55 are consecutive multiples of 5. Input format. Random infix expression generator. They are not selected or validated by us and can contain inappropriate terms or ideas. no-consecutive-blank-lines - Disallows one or more blank lines in a row. 
data type that creates a field that enables you to choose a value from another table or from a list of values by using a list box or a combo box EX: Customers table with an AccountID field. 32, we know that for any positive real number h, there is some n in S such that n > sup S \u2013 h is incorrect since S only contains integers and is not dense. Q: Given a sequence of random numbers, find the start and end positions of runs of five or more consecutive numbers that are greater than zero. If you add two numbers, the result will be a number However, if the string contains a numeric value , the result will be a number Never write a number with a leading zero (like 07). OSINT footprinting using external APIs, Google If you check 1-2 phone numbers, then this will not be a problem for you. Finding consecutive numbers in a list python. Weekly and biweekly reports include nursing facility data, cases by city\/town, residents subject to COVID-19 quarantine, and data from State facilities. Suppose that my vector numbers contains c(1,2,3,5,7,8), and I wish to find if it contains 3 consecutive numbers, which in this case, are 1,2,3. Since the set contains consecutive integers ( positive or negative but no of terms are more than 11), the mean of the numbers is the median. We know that binary digits, or bits only have two values, either a \"1\" or a \"0\" and conveniently for us, a sign also has only two values, being a \"+\" or a \"-\". Write a Python program to count the number of elements in a list within a specified range. The other two numbers can be written as x+1 and x+2. Note: A randomized sequence does not contain duplicates (the numbers are like raffle tickets drawn from a hat). Use the COUNTIF function to count cells that contain a bit of text. Odd numbers leave a remainder of 1 when divided by two, i. To check to find whether a given array contains three consecutive dates: Convert the given array into a list of type LocalDate. 
An example is the sequence of primes (3, 7, 11), which is given by a n = 3 + 4 n {\\displaystyle a_{n}=3+4n} for 0 \u2264 n \u2264 2 {\\displaystyle 0\\leq n\\leq 2}. From the Number style drop-down list, select a, b, c. 3 Production number. In a set of consecutive numbers, the mean and the median are equal. It takes two arguments. Need to work on your code with various hosted apps from anywhere with multiple devices? Fields splits the slice s around each instance of one or more consecutive white space characters, returning a slice of subslices of s or an empty list if s. Step 2: Check If Column Contains Another Column with Lambda. Know complete list of the IIITs & IIIT fee structure. Find The sum of natural numbers from 11 to 30. Now, we'll check if the input list contains a string element 'john'. Posts about consecutive numbers written by ivasallay. Assume the sum to be 8 bit number so you can ignore carries and store the sum at memory location 2 Sample problem:. You can use these laravel collection methods to work with collection in your own projects. This method scans a List. OSINT footprinting using external APIs, Google If you check 1-2 phone numbers, then this will not be a problem for you. So, I have a list of integers say - set one - 1,2,3,22,34,21 set two - 2,4,5,3,7,8 set three - 5,9,8,1,2,3 I have a function which should return true or false based on if the list contains consecutive three sequence number. 1[0-9][0-9] takes care of 100 to 199. You will need to check with campsite if you wish to use this offer for more than two consecutive nights. After clicking the link, the user will be validated. b) If array is {83, 78, 80, 81, 79, 82}, then the function should return true because the array has consecutive numbers from 78 to 83. A Run of 7 Point Up or Down - Instruction as above; Chart usage. 2 per cent, according to retail experts Springboard's latest data. The phoneNum variable contains a string built from groups 1, 3, 5, and 8 of the matched text. 
Basically, this problem solution is called 'Gaps and Islands' problem. Generic; \/\/ Simple business object. Since the set contains consecutive integers ( positive or negative but no of terms are more than 11), the mean of the numbers is the median. If you have a lot of dialogs, you can Regex is useful in situations where you want to check, match, or validate different inputs, like check if a string contains numbers, or validate if the. Here, I succeeded returning True or False values respectively. Create an array containing the numbers in the given range: 15: Create an array containing the numbers till the given input number: 16: Get the list of all the words possible with the characters provided: 17: Check the List Of Values Contains The Zero: 18: Calculate The Factors: 19: Find The Common Minimum Number Between Two Arrays: 20: Find The. Lets get started with something simple. 0] An array can also be created by explicitly calling ::new with zero, one (the initial size of the Array) or two arguments (the initial size and a default object). Think of an even number and find two primes which add together to make your number. keys() key=list(key) key. The contains() method is Java method to check if String contains another substring or not. Given a list of numbers, write a Python program to check if the list contains consecutive integers. In case of under\/overflow in the value, 0 is returned and the varname will contain -1. As, Average = Sum\/terms. We can use the Fill Handle to quickly create this list in a column. Shop ZB Marker Strip, Flat, 10-Section, Vertically Labeled, White, 6mm Wide, Consecutive Numbers 61-70 by Phoenix Contact (1051029-0061) at Graybar, your trusted resource for Terminal Block Markers and other Phoenix Contact products. Women's fashions of 1914 - 1920 were heavily influenced by world war i (the great war) as well as the women's suffrage. 
Since the set contains consecutive integers ( positive or negative but no of terms are more than 11), the mean of the numbers is the median. else { print(\"No, it doesn't\") }. The second solution is similar to the first - in terms of performance and again if the column contains NaN values they should be filled with default values like Bonus Step: Check If List Column Contains Substring of Another with Function. Week ending September 25, 2020. Managing list variables. The SUSE Linux Enterprise Server Ver 11 for System z Security Technical Implementation Guide (STIG) is published as a tool to improve the security of Department of Defense (DoD) information systems. Simplify all but polynomials of order 3 or greater before returning them and (if check is not False) use the general simplify Allows solve to return a solution for a pattern in terms of other functions that contain that pattern; this is only needed. (Un-checked will normally take less time to output. A continuously updated list of Express Entry draw results, with full comparisons of Express Entry draws in 2015 Check the box to agree to receive Moving2Canada's free Getting Started Guide, plus our This all-program draw represents the fourth consecutive all-program draw, in a pattern resembling. Pre-Algebra, Algebra I, Algebra II, Geometry: homework help by free math tutors, solvers, lessons. Given an array of integers, check if an array is formed by consecutive integers. With the contains() method we can check if a list contains the specified elements. Large number of extensions. For example, the call count(\"ho-tel\") should return 2. The numbers come from a list's indexes that a user selected in UI. This Class number is assigned in accordance with the procedures and criteria of Part 2. The Repeating Characters rule is not case sensitive, so \"mypaSssSword\" contains four consecutive repeating characters (SssS). Actual coins you will receive. 
For example, if you wish to check if a variable is both greater than five and. The number of test cases depends on the experience and imagination of the tester. A small number of folks wrote email, either saying it was a cool idea or an unbelievably dumb one. Strings are concatenated. Thus, 6, 28, 496 are Perfect and correspond to values of 3, 7, and 31 for 2 n-1 in the formula. Go to the editor. And then check for contiguous sequence. Week ending September 25, 2020. Django provides a count() method for precisely this reason. The [NOT] MEMBER OF operator checks if a specified element is contained in a specified persistent collection field. 32, we know that for any positive real number h, there is some n in S such that n > sup S \u2013 h` is incorrect since S only contains integers and is not dense. Check whether the Only for session or Not allowed lists contain the problematic site. We will use two lists, having overlapping values. Subtract three from both sides: 3x = 30. operator==operator!=operatoroperator<=operator>=operator. It takes about 1Mb. ) This modifies how the substitutions are made into. Check this blog post showing one of the solutions: Refactoring Ranges. The first is straightforward. Using len you can check weather map is empty or not. Regex no consecutive numbers. Hint: a positive integer is a Fibonacci number if and only if either (5*n*n + 4) or (5*n*n - 4) is a perfect square. k=min(a,b,c) No problem. Let's say that an array is sequential when each successful element has the value of previous element + 1. As millions of years pass, layers of rock are added to the ground. While searching for primitive value like number or string is relatively easy, searching for objects is slightly During shallow equality check of objects the list of properties of both objects is checked for equality. I am trying to find the largest consecutive sequence of composite numbers. Allows Social Security to give the proper credit to your employees'. 
Miles of beaches, big-name cities, world-class theme parks \u2013 holidays to spain give you a to-do list as long as a tapas menu. We compute the number of values, maximum, minimum, sum, and the average of the values. The first step is to create a list of numbers in sequential order. Obviously if all the numbers are positive the answer is the initial array. If set, the output model contains at least the given number of trees even if the best model is located within these trees. Many times string contains numbers and letters and you want to keep only numbers by removing the non-numeric characters from the string. Things to consider if performance is a worry:-1. There are hundreds of thousands rows and I can't do that by. Product of all Unique elements in a given array. These sheets contains areas for customer information, order number, terms, date, shipping information and salesperson. In number theory, primes in arithmetic progression are any sequence of at least three prime numbers that are consecutive terms in an arithmetic progression. For example, if input string is \"hello65\", then this string will be traversed in for loop and isDigit function will check each character if it is digit. Want to know more? Check out our cookies policy. If substring not found, it returns -1. Excel has an error checking option that can alert you to the presence of cells containing text representations of numbers. 16 for any It turns out that the Orders table has at most three consecutive numbers according to the criteria in. 
I have a column of numbers (max 54 rows) with an index number listed. If you check numbers manually and see the numbers one after another, you will spend a lot of time and energy, especially when there are many items in the column. The sum of the three numbers is x + x+1 + x+2 = 33. We want to check if a certain invoice exists in that column, and return \"YES,\" otherwise return #NA. The next place to seek help is our dedicated Help forum that contains detailed assistance for frequently requested topics. numbers of the. the cells in colum B will be phone numbers. Also note that string positions start at 0, and not 1. If you'd like to use something similar to contains, for now your best bet is to use a third-party library like String. Actually, it will check whether the symbol denoted by the string (its first argument) is already accessible in the package (its second, optional. Large number of extensions. One pound of iron contains an estimated 4,891,500,000,000,000,000,000,000 atoms. Number Each Line. The numbers directly before the word 'balloon' 2. where F1 contains the text >=20, returns the same number. isdigit(char) on each character char. For example, If I enter 9 the output should be: 2,3,4 if i enter 15: 1,2,3,4,5 4,5,6. Check if a string only contains numbers Check if a string only contains numbers Comments. ContainsAnyOf as workaround but you dont want to know how complicated I made it. The input to this function is your list. What is the number of binary arrays of length$n$with at least$k$consecutive$1$'s? More in general, words of length$n$with a finite alphabet$A\\$, that contain (or that avoid, if you like) a given pattern as a factor. The check 2 == 2 evaluates to a 1. Contains 3\/4 of the following items: - Uppercase Letters - Lowercase Letters - Numbers - Symbols. 
The largest I know is: $$90, 91, 92, 93, 94, 95, 96$$ I.","date":"2021-01-24 00:50:50","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 2, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.17462213337421417, \"perplexity\": 830.132331880011}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-04\/segments\/1610703538741.56\/warc\/CC-MAIN-20210123222657-20210124012657-00554.warc.gz\"}"} | null | null |
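Several of the scraped fragments above circle the same two concrete questions: does a list consist of consecutive integers (in any order), and what is the longest consecutive run hidden in it? Here is a minimal Python sketch of both checks; the function names are my own and do not come from any of the quoted sources:

```python
def is_consecutive(nums):
    """True if nums holds consecutive integers in some order (no duplicates)."""
    if not nums:
        return False
    s = set(nums)
    # Duplicates shrink the set; otherwise max-min must span exactly len(nums) values.
    return len(s) == len(nums) and max(s) - min(s) == len(nums) - 1


def longest_run(nums):
    """Length of the longest run of consecutive integers contained in nums."""
    s = set(nums)
    best = 0
    for n in s:
        if n - 1 not in s:          # n is the start of a run
            m = n
            while m + 1 in s:
                m += 1
            best = max(best, m - n + 1)
    return best
```

On the examples quoted above, `is_consecutive([83, 78, 80, 81, 79, 82])` is `True`, and `longest_run([1, 2, 3, 5, 7, 8])` is `3` (the run 1, 2, 3). The set-based run scan is O(n) because each element is visited only as a run start or a run continuation.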
Simpler pension statements
In the House of Commons on Tuesday pensions minister Guy Opperman said that the Department for Work and Pensions expected all providers to be giving their customers easy-to-understand two-page statements in the near future, and hinted that legislation could be passed to force the issue.
A template for the simplified statement was developed by a joint industry group in the wake of a review of auto-enrolment pensions in 2017 and is already being used voluntarily by some pension providers.
The argument for simpler, standardised pension statements is that it will encourage people to engage more actively with their own arrangements, and enable them to make better decisions around providing for their retirement.
At present, different providers use widely varying terminology, and some firms' statement documents run to tens of pages in length.
Critics of the new model, however, argue that in the process of simplification, key information is left out, such as charges levied on savings.
During the debate on 12 March, Opperman said:
"It is my intention that all private-sector businesses that provide pensions will be giving a simple two-page statement to all their customers. Whether that is done on a voluntary basis or by statute is a matter to be decided."
Platt Rushton Chartered Accountants | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,537 |
{"url":"http:\/\/www.highschoolmathandchess.com\/category\/math\/","text":"# Problems: discrete math\n\nI had to change the topic of my post this week once I read the article \"Math is my religion\" on the Portlandtribune.com website. The author, Brian Gentry, is a high school student and self described \"math geek\" who has been taking college level classes. He writes, \"But my interest in math has allowed me to see the holes in our math curriculum. ... Rather than teaching kids integration, we should teach them the math that is most applicable to their life goals.\". Two important courses that Mr Gentry thinks are particularly useful are geometry and discrete math. With respect to geometry, he writes \"A fundamental piece of our geometry class is proofs, and the logic taught through proof-writing is used not only in math, but also in journalism, history and every other field that requires the construction of a logical argument.\".\u00a0 To which I say, \"Amen\"! With respect to discrete math, (singling out number theory) he writes \"I was taught that I need to cite each theorem I use in my proofs and justify each application, just like a history major has to cite quotes and explain how each quote is relevant. I can say without a doubt that this class, if implemented in a high school curriculum, would be beneficial to everyone who took it. \". And that deserves a \"Hallaleujah!\". Brian goes on to quote a teacher, Barry Garelick, about the \"...decreasing number of proofs in geometry textbooks over the decades. He contends that proofs are integral to geometry...In Garelick\u2019s mind, proof-based courses teach students how to construct logical arguments, which I argue is not only central in mathematics but also in a variety of other fields.\". To which I say, \"Testify!\".\n\nAs you can tell, I agree with Mr Gentry but, unfortunately, Brian knows more than the experts who are moving us in the wrong direction. 
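As a concrete taste of the kind of discrete math being advocated here, consider a classic pigeonhole fact: at any party of n >= 2 people, where "knowing" is mutual and nobody knows themselves, some two people must know the same number of others. The brute-force check below is my own Python sketch, not code from the article; it enumerates every possible symmetric "knows" relation on n people and confirms no counterexample exists:

```python
from itertools import combinations, product


def degrees_collide(n):
    """Check every symmetric 'knows' relation on n people (nobody knows
    themselves) and confirm two people always know the same number of others."""
    pairs = list(combinations(range(n), 2))
    for bits in product([0, 1], repeat=len(pairs)):
        deg = [0] * n
        for (a, b), known in zip(pairs, bits):
            if known:
                deg[a] += 1
                deg[b] += 1
        if len(set(deg)) == n:      # all degrees distinct -> counterexample
            return False
    return True
```

The exhaustive search is only feasible for tiny n (there are 2^(n(n-1)/2) relations to try), but the pigeonhole proof covers every n: the n candidate degrees 0, 1, ..., n-1 cannot all occur, because someone who knows everyone (degree n-1) and someone who knows no one (degree 0) cannot coexist at the same party.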
It's not enough to have a good idea in the public school system: you've got to get some \"experts\" on board to change policy, and then the devil is in the details of implementation, which they routinely botch. You see, Brian has the mathematical knowledge that so many \"experts\" are missing. He also has a sincere desire to improve the situation--but no real power to do anything. Our centrally planned model, with its highly paid \"experts,\" doesn't seem to have either. It's no surprise education flounders decade after decade after decade. When you see well-intentioned people get \"Zuckered\" out of 100 million dollars by experts, and when you see in state after state that \"experts\" have set the bar for the mathematical knowledge needed by math teachers at a multiple-choice test which requires a calculator--at the same time they turn away people with STEM degrees--you do tend to question motives. They build failure into the system and look for superficial ways to \"improve\" on some contrived school rating (such as paying for students to take AP exams).\n\nThe fact is I've never talked to anyone with a graduate degree in math who thinks ripping proofs out of the curriculum is a good idea, but that's where the current mathematical curriculum has taken us. The proofs that \"us old folks\" associate with geometry are gone and\/or watered down. Is that because proofs don't prepare students for higher level math? Of course not---but if students can't master proofs, and much of the proof content is replaced and what remains is watered down, test scores might rise. 
But even if they did, many teachers have no idea what discrete math means, and that presents a huge obstacle to implementing Mr Gentry's excellent idea: remember, one third of high school math teachers don't have a degree in mathematics, so there are going to be far fewer who have taken and are qualified to teach discrete mathematics. And finding these people is at odds with the various Bull**** certification requirements that create an artificial teacher shortage in various states. Take a look at the story below on California and Common Core to see the complete lack of planning and the resulting fiasco. Who's accountable for the mess? Nobody. Who pays the price? The kids. It's difficult for me to imagine anything other than centrally planned failure of implementation.\n\nWith that in mind I've posted an example of a discrete math problem which is more understandable\/natural than \"Two parallel lines are cut by a transversal...\". The problem is posted on the Problems page: \"There are $n \\geq 2$ people at a party. Prove that there are two people who know the same number of people.\" Of course, there's a little explanation needed: two people either both know each other or they don't. That is, it's impossible for A to know B but for B not to know A. Also assume that people don't \"know themselves\". A very surprising result that can be proven using mathematical thinking\/logic. It uses the Pigeonhole Principle and, of course, you can relate it to graph theory, too.\n\nHere are some stories that caught my eye this last week:\n\n\u2022 One of my favorite nontechnical math books is \"The Man Who Knew Infinity\". A movie based on the book is finally out. IFLScience talks about Ramanujan and has a clip from the upcoming movie.\n\u2022 The TCEC Season 8 Superfinal is about 60% done and Komodo leads Stockfish with 4 wins, 1 loss, and 65 draws. 
With so many games to play Stockfish has a mathematical chance to win but given the consistent nature of computer play; i.e., computers don't blunder or get tired\/overconfident, this match is essentially over. Nevertheless, you can keep following the match here.\n\u2022 Kevin Knudson with an excellent article on Forbes: \"I then casually mentioned that if you take the harmonic series and throw out the terms whose denominators contain a 9 then the resulting series converges...And, there\u2019s nothing special about 9; you can toss out terms containing any particular digit. In fact, you can pick any finite string of digits, toss out the terms containing those, and the result converges. With that set-up, let\u2019s talk about what all this means and how we can prove it..\". Read the article to find out the details. If you teach AP Calculus, you really should take a look.\n\u2022 Student protests are happening at college campuses all over the country. The Chronicle of Higher Education mentions a bunch here. The Washington Post has an in depth piece on Yale and the student demands, \"The students also are asking Salovey to remove Nicholas and Erika Christakis from their positions at the helm of Silliman College, one of Yale\u2019s 12 undergraduate residential communities. The pair became the subject of students\u2019 ire when Erika Christakis, the associate master and an early childhood educator, sent an e-mail to students encouraging them to view offensive Halloween costumes as a matter of free speech and free expression.\"\n\u2022 ZeroHedge looks into the demands from students at the Amherst College. 
There is a long list of demands but take a look at demand number five: \" President Martin must issue a statement to the Amherst College community at large that states we do not tolerate the actions of student(s) who posted the \u201cAll Lives Matter\u201d posters, and\u00a0the\u00a0\u201cFree Speech\u201d posters that stated that \u201cin memoriam of the true victim of the Missouri Protests: Free Speech.\u201d Also let the student body know that it was racially insensitive to the students of color on our college campus and beyond who are victim to racial harassment and death threats; alert them that Student Affairs may require them to go through the Disciplinary Process if a formal complaint is filed, and that they will be required to attend extensive training for racial and cultural competency.\" Did you get that? People who posted a flyer on how free speech is dead (picture posted) need to be disciplined and re-educated --- along with those who post \"All Lives Matter\"---if these intolerant zealots get their way.\n\u2022 NYDailyNews posts a disturbing and \"chilling\" video in the case of the student on trial for killing his math teacher.\n\u2022 Edsource.org has an all too typical\u00a0 story of Common Core implementation problems. \"During the five years since California adopted the Common Core State Standards for mathematics and English language arts, the search for high-quality textbooks\u00a0and curriculum materials has been a sticking point, in some cases a\u00a0major one, in effectively and speedily implementing the new standards....The root of the problem, argued Phil Daro, a principal author of the Common Core math standards, is that\u00a0\u201cdistricts tried to switch to the Common Core before there were any books aligned with them.\u201dThat, however, was not the fault of districts. 
The state adopted the Common Core in 2010, but the State Board of Education only approved a recommended list of K-8 math textbooks and materials in January 2014 \u2013 and only did so two weeks ago for K-8 materials in English language arts. But focus on the fact that even though Common Core was known to be coming years in advance, and it is now 5 years after it was adopted, they still don't have quality curriculum materials. How bad is the state DOE when teachers still don't have \"the basics\" under control after 5 years, especially when they had years of planning before Common Core was implemented? California's plight is going on in many states and it's a big reason why the educational system doesn't improve much. But with a new election around the corner don't be surprised if another educational model takes its place. Then more years to transition to implement another bad system. More money to spend designing tests, etc. Wash, rinse, repeat.\n\u2022 EAGNews on the teacher arrested for running his own brothel, \"McCrimmon was arrested when authorities shut down his Memphis nightclub, Walt\u2019s Place, over the weekend. Undercover officers allegedly made eight separate prostitution transactions there, including deals organized by McCrimmon himself, before they raided the Parkway Village establishment Saturday, according to The Commercial Appeal. Police allege the club charged patrons a $20 \u201cmembership fee\u201d for events that featured strippers and other activities, but did not have a compensated dance permit. Walt\u2019s Place also served booze without a liquor license and provided a VIP room for $50 sex sessions, police said....McCrimmon has since resigned from his teaching position, My Fox Memphis reports.\". Perhaps he'll be moving to another state? Be on the lookout...\n\u2022 I was shocked to see someone claimed to have solved the Riemann Zeta Hypothesis. Who's that? From where? What the? 
A quick search made it clear someone was full of BS. With no \"reputable\" site proclaiming the amazing story I had to wait to see how it played out. Now Quartz has an article explaining how he \"fooled\" the British media:\u00a0 \"Leading British media, including the BBC and the Daily Telegraph, ran the story of Enoch winning the award, but a little digging suggests they might have jumped the gun.\". Very little digging, in fact. The article continues, \"Enoch has an academia.edu page where the \u201cproof\u201d of the solution to the Riemann Hypothesis has been uploaded\u2014but that has also come in for criticism, as the proof is believed to have been plagiarized.\". So bottom line is it doesn't take much to fool the press---and the Riemann Zeta Hypothesis is still open. The Aperiodical gets more in depth on the deception of what did and did not happen. Hey, at least he's not running a brothel.\n\u2022 EAGNews with reporting the lengthy and somewhat outrageous demands, including \"A mandatory class for everyone, including staff and administrators, about the \u201chistorical racial violence of this University and town \u2026\u201d...Housing and bathrooms that are not segregated by gender.\". LewRockwell hosts a smackdown piece by Fred Reed directed at bad universities like what we see at UNC: \"In all likelihood you will waste these four years of your time and mine in this institution...during which you will take absurd courses of your own devising, courses having nothing to do with the purposes of education, of which you know nothing....When you graduate, a terrible shock awaits you. You will find that employers have no interest in your wearisome righteousness. They will not pay you for Victims\u2019 Studies or\u00a0 contemplation of grievances. They will not care about the high GPAs you got through grade inflation or sleeping with the professor. 
They will expect you to do your job, if there is a job for you to do.\"\n\u2022 With intolerance and free speech under assault at the universities, Reason.com has a video clip from the documentary \"Can We Take a Joke\". I haven't seen the movie, but I'm guessing the answer is no.\n\n# Basics: Simplify Like Terms\n\nI've added another worksheet to The Basics page; this one is on \"Simplifying Like Terms\". In the process of trying to solve a problem in the last worksheet (where Sage turns multiplication into *) I did a search on how to fix the problem and found the answer on this website: latex() the expression. So with that problem solved, I've revised the previous Worksheet on evaluating expressions so that it prints without * for multiplication. If you had downloaded the earlier version you should get the updated version to replace it.\n\nHere are some stories that caught my eye over the past week.\n\n\u2022 Okay, I HAVE to start with a real honest-to-goodness mathematical breakthrough. L\u00e1szl\u00f3 Babai, a mathematician and computer scientist at the University of Chicago, appears to have proven a very important result. Sciencemag.com has the details, \"In the \"graph isomorphism problem,\" the challenge is to determine whether two graphs are really the same or not. Babai has found a new algorithm to solve that problem, as he announced today....For the previous best method, invented in 1983 by Eugene Luks, a mathematician and computer scientist at the University of Oregon in Eugene, the number of steps grows almost exponentially with the number of nodes in the networks to be compared. In Babai's algorithm, the number of steps grows only slightly faster than polynomial time.... If it holds up, the new algorithm simply proves that the tough cases that stymie the current algorithms can also be solved efficiently.\"\n\u2022 A MUST READ article. In an earlier post I mentioned the problems Kentucky was having with Common Core. 
The Federalist looks into the details. Some key passages: "In connection with federal Race to the Top grant applications in 2010 and No Child Left Behind waivers in 2011, states had to demonstrate that their institutions of higher education (IHEs) would “exempt from remedial courses and place into credit-bearing college courses” students who attained a certain score on Common Core-aligned assessments.". So the "Common Core is just a bunch of standards" trope that has become the first line of defense is shown to be false again. Back to the article: "So what happens when those unprepared students matriculate at a college that has already agreed to place them in courses that count towards graduation, without remediation? Exactly what is now happening in Kentucky... students who formerly would have gone through remediation are now to be thrown into credit-bearing courses. But since such students obviously won’t be ready for real college work, the courses will be designated “co-requisite”—meaning lagging students will receive extra help of some sort so they can catch up...By and large, professors weren’t consulted before their colleges and universities signed onto the Common Core scheme. They are only now beginning to understand that Common Core will result in hordes of unprepared students showing up in their freshman classes, and that the professors will be expected to relax or suspend course quality to hide the problem...Common Core’s promise of “college readiness” means nothing if the definition is set not by colleges themselves but rather by the standards-writers.".
• The TCEC Superfinal is nearing the halfway mark. It's 3 wins for Komodo versus 1 win for Stockfish. There are 36 drawn games. You can follow the matches here.
• A connection between quantum physics and $\pi$ isn't as irrational as it sounds.
Phys.org has the incredible details: "In 1655 the English mathematician John Wallis published a book in which he derived a formula for pi as the product of an infinite series of ratios. Now researchers from the University of Rochester, in a surprise discovery, have found the same formula in quantum mechanical calculations of the energy levels of a hydrogen atom.". Continue with Science20: "Friedmann did not set out to look for $\pi$ nor for the Wallis formula. The discovery began in a quantum mechanics course taught by Carl Hagen, a professor of physics at the University of Rochester and one of the six physicists who predicted the existence of the Higgs boson. While the quantum calculations developed by Danish physicist Niels Bohr in the early 20th century give accurate values for the energy states of hydrogen, Hagen wanted his students to use an alternate method--called the variational principle--to approximate the value for the ground state of the hydrogen atom...Addressing the centuries-long gap between the 17th century Wallis formula, the 20th century quantum theory, and the decades that passed from that time to now, Doug Ravenel, a professor of mathematics at the University of Rochester, points out that Friedmann and Hagen used long-established concepts of their fields to arrive at their result, so even mathematicians and physicists who lived many decades ago would have been able to appreciate it. "This is a beautiful connection between pi and quantum mechanics that could have been found 80 years ago, but was not discovered until now," said Ravenel, congratulating the two authors."
• Inside Sources has the article I want to quote in one place: "Two multistate testing groups — the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC) — received $360 million in taxpayer funds to create Common Core-compliant tests.
But there are growing concerns over the program, such as the cost and classroom time consumed by state tests.". It's hard to believe that a committee couldn't have come up with appropriate test questions, for a tenth of the amount, in the years leading up to Common Core's implementation. Open source the problems, cut the costs, and spend the money on the kids, not on the corporations.
• That hysterical, profanity-laden tirade by the Yale student in my last post is just one example of the nonsense going on in public and private education. ZeroHedge has a response by a UNC-Wilmington law professor that's gone viral. Here are some excerpts: "Let’s get something straight right now. You have no right to be unoffended. You have a right to be offended with regularity. It is the price you pay for living in a free society. If you don’t understand that you are confused and dangerously so. In part, I blame your high school teachers for failing to teach you basic civics before you got your diploma. Most of you went to the public high schools, which are a disaster. Don’t tell me that offended you. I went to a public high school....Unbelievably, a student once complained to the Department chairwoman that my mention of God and a Creator was a violation of Separation of Church and State. Let me be as clear as I possibly can: If any of you actually think that my decision to paraphrase the Declaration of Independence in the course syllabus is unconstitutional then you suffer from severe intellectual hernia. Indeed, it takes hard work to become stupid enough to think the Declaration of Independence is unconstitutional. If you agree with the student who made that complaint then you are probably just an anti-religious zealot...".
• Alternet.org with an article that says "According to statistics released by the U.S.
Department of Education and published by NBC News, in the 2011-2012 school year, teachers called the cops on students a total of 31,961 times in the state of California alone, leading to 6,341 arrests. With 175 8-hour school days, that means a cop is called every 2.6 minutes. At one California school district, in particular, East Side Union High School District in San Jose, police were called on students 1,745 times during the 2011-2012 school year. This one school called the police on students more than 10 times a day."
• The University of Kansas gives in to the PC extremists. Infowars notes "A governing body made up of students at the University of Kansas has voted to eliminate their use of gender specific pronouns, stating the terms pose “microaggressions” towards people who don’t fit traditional gender roles....The student government’s move to eliminate gender nouns comes after the National Science Foundation spent $125,000 at KU last year studying how adjectives could be perceived as racist or sexist."
• The University of Missouri has had the spotlight turned on it. ZeroHedge has a piece that says, "For months, black student groups have complained of racial slurs and other slights on the overwhelmingly white (79% white and 8% black) flagship campus of Missouri's four-college system. Today, amid a campus in open revolt and at least 30 black football players announcing that they would not play until the president was gone, AP reports that Mizzou President Tim Wolfe has resigned effective immediately urging students and faculty "to heal and start talking again to make the changes necessary." Protestors demanded that Wolfe "acknowledge his white male privilege," that he is immediately removed, and that the school adopt a mandatory racial-awareness program and hire more black faculty and staff.".
So much is going on there that I have no idea about, but what has gotten my attention is the behavior of faculty in student demonstrations. The Daily Mail has captured a lot of the stupidity: "The civil rights protests at the University of Missouri took an unexpected turn on Monday, when a media teacher was caught on camera harassing journalists trying to cover the national story....Melissa Click, an assistant media professor at the university, is seen coming up to Mark Schierbecker, another photographer recording the exchange, to cover his camera and demand that he leave....She then comes back to the photographer and starts yelling in his face: 'You need to get out. You need to get out'. When he explains that 'this is public property' and he can stay there because 'it's owned by the university', Click takes on a mocking tone. 'That's a really good one. I'm with the communications faculty and I really get that argument - but you need to go. You need to go!' Click says. 'Who wants to help me get this reporter out of here?' she yells out. 'I need some muscle over here.'...Her aggression towards journalists is perhaps strange since just two days earlier she publicly reached out to the media on Facebook to cover the story." The AL.com website indicates she wasn't the only faculty member involved: "Janna Basler, Mizzou's director of Greek life and leadership, tells Tai that he needs to "back off" and "go." She brushes against him, and Tai asks if she's employed by the Office of Greek Life. Basler responds, "My name is 1950." She also tells Tai that he is "impinging on what [its members] need right now, which is to be alone," that report states. "As the students behind Basler begin to push forward, she makes physical contact with Tai, prompting him to object, to which she responds, "I don't have a choice." The students seem to decide that since he's not going to move, they're going to move forward as a human chain, physically pushing him back with their bodies.
A student adds, "It's our right to walk forward."". But there is a consequence for adults who act like kids: The Maneater reports, "Multiple petitions have been created calling for the removal of two MU employees after a video surfaced documenting an incident on Monday, Nov. 9 in which they demanded that journalists leave the Concerned Student 1950 campsite. Assistant Director of Greek Life and Leadership Janna Basler and Assistant Professor of Mass Media Melissa Click can be seen in the video shouting at MU student and former Maneater staffer Tim Tai and other journalists."
• New York Magazine looks at the Missouri situation from the PC angle: "The student protest at the University of Missouri began as a response to a serious problem — outbursts of vile racism on campus — and quickly devolved into an expression of a renewed left-wing hostility to freedom of expression...It is also undeniably true that outbursts of political correctness disproportionately take place in campus settings. In recent weeks, [several campuses] have seen left-wing student activism aimed at shutting down the expression of contrary viewpoints....As far as the students are concerned, they represent the cause of anti-racism, a fact that renders the need for debate irrelevant....People on the left need to stop evading the question of political correctness — by laughing it off as college goofs, or interrogating the motives of p.c. critics, or ignoring it — and make a decision on whether they agree with it.". Well said. Forbes has noted "Melissa Click has become a symbol of what many parents dread when they send their children off to college.
From her bullying of students to her doctoral thesis on the whiteness of Martha Stewart and her classes in “visual literacy,” she crystallizes the view that tuition dollars are spent on nonsense, and sometimes worse....That an assistant professor of “mass media” in the department of communication was unaware of the instantaneous power of YouTube and social media is another reason for parents to wonder about the wisdom of spending their money on Click.". ZeroHedge has a post on Melissa Click's resignation. Huffington Post looks at her apology, line by line, and calls it Bull****. Fox2Now reports that "...Janna Basler has been relieved from her duties as director of Greek Life & Leadership, pending a university investigation into her actions."
• EAGNews on how "...the Buffalo school district spent $5,045,586 on the union’s insurance “cosmetic surgery rider” from July 2014 to June 2015....Just look at what the district has spent over the past four years on cosmetic procedures for teacher union members:
* July 2014-June 2015: $5,045,586
* July 2013-June 2014: $5,439,218
* July 2012-June 2013: $5,221,293
* July 2011-June 2012: $4,966,179
That’s $20,672,276 that has been diverted out of the classroom for expenditures that have nothing to do with educating children."
• BoingBoing asks "What would happen if you mixed a math education tutoring site with a late night 900 number?". Click on the link, do the reading, and watch the video to find out. You'd be very naughty to watch the video at school; wait until you get home. But if you click on the link at the bottom you'll get to BostInno, which says "Joking aside, Solve X 4 U is a legitimate homework help business. According to the startup’s website, they can help with problems pertaining to a wide range of difficult STEM subject areas, including statistics, accounting, economics and chemistry.
Depending on how many customers they’re servicing, Solve X 4 U will come to your aid within about 24 hours, so plan your homework assignments accordingly."

# The Basics: Evaluating Expressions

I've added a worksheet to The Basics page. This is programmed using $\LaTeX$ and the sagetex package so that every time you run $\LaTeX$ on the file it creates another randomized worksheet. You should get a free Sagemath Cloud account to run it (that link is on the sidebar as well). I used a "quick and dirty" approach by generating the same type of problem over and over using a for loop. The problems were built into the Sagetex Test Template created some time back and posted on the Handouts page; you'll need to change the teacher name to avoid any questions about who "Ima Putz" is. The extra wrinkle in the worksheet is the creation of answers. It's done on the fly as each problem is being created. The string outputP holds the typesetting of the Problems and the string outputA holds the typesetting of the Answers. Each time a problem is created, outputP is modified and then outputA has the answer appended to it. In this way you've got an answer key which should be correct if I didn't make any mistakes.

Here are some stories which caught my eye over the past week.

• Who hasn't seen the video of the school officer "choking and slamming" a student who wouldn't hand over her cell phone after she was caught using it in class? Sputnik News has the original report here, followed by a report here on how the officer has a history of problems. The officer was fired, according to the latest report.
• US News and World Report has a report on all the testing: "The drop in proficiency, which is a first-time occurrence in math since the test was first administered in 1990, comes after a series of years in which the country experienced gains in math and reading on NAEP."
• Sott.net with a piece on Dr.
Wendy Bradshaw. Although she "...is far from retirement age and she has a PhD in education, she's leaving her profession because of standardized testing, which she explained in a Facebook post which has been shared more than 44,000 times since it was posted on Oct. 23.". The letter is an indictment of the current system, and although she never mentions Common Core by name, it sounds like she doesn't like it. You decide: "Like many other teachers across the nation, I have become more and more disturbed by the misguided reforms taking place which are robbing my students of a developmentally appropriate education. Developmentally appropriate practice is the bedrock upon which early childhood education best practices are based, and has decades of empirical support behind it. However, the new reforms not only disregard this research, they are actively forcing teachers to engage in practices which are not only ineffective but actively harmful to child development and the learning process. I am absolutely willing to back up these statements with literature from the research base, but I doubt it will be asked for."
• Science Daily with some educational research from Sweden: ""Most digital learning tools used in schools are unsatisfactory and only test the knowledge the pupils already have"..."However, digital learning tools can provide great educational benefits, as long as they do not become books on a screen, but use their digital advantages. This involves providing good feedback, showing that there are different ways of thinking to reach a goal, and presenting consequences that cannot be demonstrated in a book," says Björn Sjödén."
• A nice article in Quartz on Singapore teaching "productive failure" in math: "Students who are presented with unfamiliar concepts, asked to work through them, and then taught the solution significantly outperform those who are taught through formal instruction and problem-solving.
The approach is both utterly intuitive—we learn from mistakes—and completely counter-intuitive: letting kids flail around with unfamiliar math concepts seems both inefficient and potentially damaging to their confidence...On procedural knowledge, or applying the formula, there was no difference between productive failure and direct instruction. But on conceptual understanding—understanding what it means and possessing the ability to adapt the information—the productive failure students dramatically outperform their direct instruction peers..."
• The Baltimore Sun on Maryland's PARCC performance: "The first results of testing on the Partnership for Assessment of Readiness for College and Careers tests — introduced as part of sweeping educational changes begun several years ago — showed only 31 percent of students met the standard for Algebra I and 40 percent of students met the standard for 10th-grade English....Only a quarter of African-Americans, 7 percent of special education students and 23 percent of students who qualify for subsidized lunches met the benchmark in English. The worst performance was by students learning to speak English, many of them immigrants. Only 2.3 percent of those students were proficient."
• The Atlantic with a nice article on the new changes to the SAT: "The college-admissions test is being restructured as an extension of the controversial public-school reading and writing standards.". With respect to the math section, "The math test will consist of nearly 60 questions split between two sections, one that allows a calculator and one that doesn’t." But this passage sounds bad: "“The current SAT asks questions where the material is remarkably simple, but students have to figure out what exactly they are asking for.”" What's wrong with harder math and more straightforward questions?
• NJ PARCC results were mentioned last post.
But Newsworks reports there is a twist: "The proficiency rate for PARCC’s geometry test was only 24 percent. In Algebra II, the proficiency rate was 23 percent...Under state law, students must pass the state’s exit exam to graduate. But under an improvised system put together by the administration for the first three years of the new testing, 12th-graders will have other options....When asked whether the state even has the capacity to handle an appeals process that may include tens of thousands of students, state Education Commissioner David Hespe said the necessary resources will be found: “We’re a big department; we’ll deploy whatever we need.”". Sounds like a big mess.
• The Obama administration has changed its stance on testing in schools. Vox has this story: "After seven years of trying to hold schools and teachers to [standards] — and testing to make sure they meet them — Obama said he's taken it too far. "When I look back on the great teachers who shaped my life, what I remember isn't the way they prepared me to take a standardized test," Obama said, saying he's concerned about "too much testing, and from teachers who feel so much pressure to teach to a test that it takes the joy out of teaching and learning." Beginning immediately, the Education Department is going to start directing states and districts to spend less time testing and to give fewer tests.". More details here: "They've promised to give grants to states to review the tests they're giving and determine which ones to cut. They plan to provide specific instructions on how states can use other federal money to study and cut tests...Most importantly, they're also backing down, at least slightly, on linking test scores to teachers' evaluations."

# PEMDAS: notes

As I mentioned last post, I've been interacting with some middle school students. PEMDAS has been an important topic.
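A quick aside I'll add here (not part of the notes themselves): Python's operator precedence follows the same order of operations as PEMDAS, so a few one-liners can double-check any worked example. A minimal sketch:

```python
# PEMDAS checks: Python's precedence rules match the usual
# order of operations, so these assertions mirror hand work.

# Multiplication before addition: 2 + 12 = 14
assert 2 + 3 * 4 == 14

# Parentheses first: 5 * 4 = 20
assert (2 + 3) * 4 == 20

# Exponentiation is right-associative: 2 ** 9 = 512, not 8 ** 2 = 64
assert 2 ** 3 ** 2 == 512

# Addition/subtraction resolve left to right: (8 - 4) + 2 = 6
assert 8 - 4 + 2 == 6

# Multiplication/division resolve left to right: (12 / 6) * 2 = 4
assert 12 / 6 * 2 == 4

print("all PEMDAS checks passed")
```

A common sticking point worth emphasizing to students: the "MD" and "AS" in PEMDAS are ties resolved left to right, not "multiplication strictly before division" or "addition strictly before subtraction".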
I've put together some brief notes, shown above, that could form the basis of a lesson: warm-up problems (which could then be the basis of discussion), teaching points, and then some more difficult problems involving PEMDAS. The solutions are at the end. The level of difficulty is geared more towards high school students or an "honors" course at the middle school level. I've created a new page called The Basics where you can download the PDF. I will post other introductory material there as needed. The Basics link is located on the sidebar, too.

Here are some stories that caught my eye this week:

• Zero Hedge has commentary on a provocative piece in The Economist magazine: "MICHAEL WANG, a young Californian, came second in his class of 1,002 students; his ACT score was 36, the maximum possible; he sang at Barack Obama’s inauguration; he got third place in a national piano contest; he was in the top 150 of a national maths competition; he was in several national debating-competition finals. But when it came to his university application he faced a serious disappointment for the first time in his glittering career. He was rejected by six of the seven Ivy League colleges to which he applied.". The article has a nice quote from "...Supreme Court Chief Justice Roberts on this subject: “The way to stop discrimination on the basis of race is to stop discrimination on the basis of race.”". So true; but as private institutions they can do what they want. They are, however, undermining the prestige of their schools. The loss of confidence in quality, combined with the cost of a US college education, doesn't bode well for its future.
• Payson Roundup wants to know why math scores of Arizona students have "plunged".
According to the article, "Arizona students are worse in math than students in 33 other states according to data provided by the Nation's Report Card. Arizona is not the only state that is struggling with proficiency in mathematics; the entire United States has poor mathematic proficiency scores. The National Assessment of Educational Progress found that 7 out of 10 students in the U.S. scored at or above basic level in mathematics in 2013."
• Huffington Post's Peter Greene delivers a verbal smack-down to a nonsensical Politico article. Nice job, Mr. Greene!
• The Atlantic has a piece on "The Anti-Free Speech Movement at UCLA". From the beginning of the article: "A half-century ago, student activists at the University of California clashed with administrators during the Berkeley Free Speech Movement, a series of events that would greatly expand free-speech rights of people at public colleges and universities. Today, activists at UCLA are demanding that administrators punish some of their fellow students for expressive behavior that is clearly protected by the First Amendment.". The article dishes out some blame, too: "The San Francisco Chronicle put it this way: “Regent Dick Blum said his wife, U.S. Sen. Dianne Feinstein, D-Calif., ‘is prepared to be critical of this university’ unless UC not only tackles anti-Jewish bigotry but also makes clear that perpetrators will be punished.” The lawyer Ken White wrote that “Blum threatened that his wife … would interfere and make trouble if the Regents didn’t commit to punish people for prohibited speech.”"
• Listverse has "10 Things You Probably Didn't Know about Albert Einstein". What do you know about the Einstein refrigerator?
• California has banned schools from using "Redskins" as a team name or mascot beginning Jan 1, 2017.
"The new law will affect four California high schools in Merced, Calaveras, Tulare and Madera counties.". Does this mean they condone the "Fighting Irish"? Perhaps that's the next lawsuit. Curiously enough, Governor Jerry Brown "...vetoed a separate measure that would have barred public properties from being named after individuals associated with the Confederacy."
• Edsource has an article on poor California test scores: "In fact, only one-third of California students in grades 3-8 and grade 11 met the math standard – compared to 44 percent of students who met the standard in English language arts.". Did you get that? Failure is "the norm". Always remember that the test scores would be lower if it weren't for the many students who get a tutor to help them learn. And take a look at all the it's-no-big-deal talk and the talk about Common Core's challenges. With respect to Common Core: "“Los Angeles started implementing Common Core three years ago,” Dorado said. “It takes time and a tremendous amount of work. In LAUSD, we’re talking 500 elementary schools alone.” Another challenge has been the shortage and quality of curriculum materials aligned with the standards. “Many teachers are in the implementation phase,” said the California Mathematics Council’s Vierra. “Many districts are still getting around to buying the curriculum (materials they need).” “A lot of teachers are cobbling together old materials with lessons they find online and material the district is providing,” she said. “Much of the math curriculum is still very fragmented.”". That's similar to my experience: despite knowing years in advance what was coming, the school systems are still behind the curve years after Common Core was "implemented".

# Sagetex: Indefinite Integrals 6/6b

I've added two more indefinite integrals to the Sagetex: Integrals page.
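As a cross-check I'll add here (not part of the worksheets), the two new antiderivative families have standard closed forms, found by integrating by parts twice; differentiating each right-hand side recovers the integrand:

```latex
\int e^{-\alpha x}\cos(\beta x)\,dx
  = \frac{e^{-\alpha x}\bigl(\beta\sin(\beta x)-\alpha\cos(\beta x)\bigr)}{\alpha^{2}+\beta^{2}} + C,
\qquad
\int e^{-\alpha x}\sin(\beta x)\,dx
  = -\frac{e^{-\alpha x}\bigl(\alpha\sin(\beta x)+\beta\cos(\beta x)\bigr)}{\alpha^{2}+\beta^{2}} + C.
```

Any Sage-generated answer key should agree with these up to algebraic rearrangement.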
The integrals are of the form $\int e^{-\alpha x}\cos(\beta x)\,dx$ and $\int e^{-\alpha x}\sin(\beta x)\,dx$ where $\alpha, \beta$ are random (positive) integers.

Here are some issues that caught my eye over the last week:

• Reason.com has an interview with "...filmmaker Ted Balaker who is currently finishing up his latest documentary, "Can We Take a Joke?". The film, which features comedians Gilbert Gottfried, Jim Norton, Lisa Lampanelli, Adam Carolla, Karith Foster, and Penn Jillette, examines the role of comedy in our culture of constant outrage. "Comedians don't even have the freedom of conscience to just be neutral on something," Balaker told Reason TV's Nick Gillespie. "[They] have to affirm what the cool kids believe."". Finally, someone taking on the PC zealots.
• Infowars has the local news on a Wisconsin school set to randomly drug test its students: "Given that it is actually unconstitutional to randomly drug test students, the school district is using a loophole to do so. Students taking part in extracurricular activities or students who park vehicles on school property will be subject to the random testing. “Participating in extracurriculars, um, in public high schools is a privilege and it’s not a right, as well as parking on our school parking lot,” Dorschner explained. Tests will be conducted by randomly picking student identification numbers via computer every fortnight. Should a student test positive, or refuse to be tested, they will be barred from athletic involvement, mandated to attend counseling, and their parents will be alerted. The school says it will not expel any students or involve police."
• Reason.com on presidential candidate Carly Fiorina's criticism of Common Core. Is this the beginning of its expected prominent place among election topics?
She said, "Common Core may have started out as a set of standards, but what it’s turned into is a program that honestly is being overly influenced by companies that have something to gain, testing companies and textbook companies, and it’s becoming a set of standards, not on what a kid has to learn but instead on how a teacher has to teach and how a student should learn, and that kind of standardization is always going to drive achievement down, not up." I couldn't agree more.
• A new pentagonal tiling has been discovered. See RedOrbit or check out what NPR says: "In other words: It's possible that there are dozens — hundreds, thousands even — of these convex pentagon shapes waiting to be discovered. Up until last month, only 14 had been found, and for all anyone knew, that list could have been final. But last month, a cluster of computers that Von Derau was using to run through different shapes spit out an intriguing possibility...The three mathematicians had discovered the first new convex pentagon able to tile the plane in some 30 years. The scientists had become a part of a legendary history that dates to 1918, when the German mathematician Karl Reinhardt described the first five types of pentagons to be able to tile the plane."
• Reason.com again with a story about "A Minnesota student who had to transfer high schools to avoid an expulsion for an incredibly short, wholly inoffensive Tweet can sue the district for violating his First and Fourteenth Amendment rights, a federal judge ruled. The student, Reid Sagehorn, first landed himself in trouble with Elk River School District administrators in January of 2014, according to Education Week.
He was asked on an internet message board whether he had made out with a certain 28-year-old teacher at Rogers High School; he tweeted his two-word answer: “actually, yeah.” Sagehorn later claimed that he was joking."
• Ozy.com with a good piece on Brazil's Artur Avila, winner of a Fields Medal in mathematics.
• Baltimore Ravens guard Dr. John Urschel decides to test his mathematical skills after receiving a concussion: "No word as to how Urschel performed on the math questions, but we're willing to bet pretty well, concussion or not."
• The Sinquefield Cup, hailed as the highest-rated chess tournament in history last year and won by Caruana in historic fashion, has its first round today. It will continue until September 3. Sportskeeda gives a preview. The rounds can be followed live here.

# Resource: Internet Archive

Who doesn't like quality, free resources? The Internet Archive explains on their About page: "The Internet Archive is a 501(c)(3) non-profit that was founded to build an Internet library. Its purposes include offering permanent access for researchers, historians, scholars, people with disabilities, and the general public to historical collections that exist in digital format.

Founded in 1996 and located in San Francisco, the Archive has been receiving data donations from Alexa Internet and others. In late 1999, the organization started to grow to include more well-rounded collections.
Now the Internet Archive includes […], and software as well as archived web pages in our collections, and provides specialized services for adaptive reading and information access for the blind and other persons with disabilities.".

What separates the Internet Archive from other resources is threefold: the quality of their resources, the range of resources (books, software, audio, video), and the ability to read through many of the books by flipping through the text (as well as downloading them in a variety of formats). Take a look at Kotov's "Grandmaster at Work".

Notice the 2 red boxes? The box at the bottom displays a variety of formats that you can download the book in. The box near the top right surrounds the "Full screen" button. Press that button to go into full screen mode.

Clicking on the right-hand page flips the book forward, while clicking on the left-hand page flips you back. So you can browse the resource online. Notice that the screenshot above has two more red boxes. The one in the top right-hand corner will activate the voice reading of the book. The red square in the bottom left-hand corner is a slider that can quickly get you deep inside the book without flipping each page.

My only complaint is that the Search feature wasn't as helpful in finding the resources. I failed to find some books through searching "mathematics" which turned up in other unrelated searches.

Here are some links to get you started: Spivak's Calculus book and the Supplement for the book; Dugopolski's Precalculus; Beginning and Intermediate Algebra; lots of CK-12 series books (CK-12 Algebra, CK-12 Algebra II with Trigonometry); the MOOCulus Sequence and Series Textbook; Advanced Math 2; Python Programming; Soltis' What It Takes to Become a Chess Master; Botvinnik's Half a Century of Chess; Alburt's Test and Improve Your Chess; Kosikov's Elements of Chess Strategy; lots of old Schaum's books; and so much more!
I've added the link to the Internet Archive to the sidebar.

Here are some stories which caught my eye the last week:

• The Intercept looks at "NO CHILD LEFT UN-MINED? STUDENT PRIVACY AT RISK IN THE AGE OF BIG DATA". From the article, "'What if potential employers can buy the data about you growing up and in school?' asks mathematician Cathy O'Neil, who's finishing a book on big data and blogs at mathbabe.org. In some of the educational tracking systems, which literally log a child's progress on software keystroke by keystroke, 'We're giving a persistence score as young as age 7 -- that is, how easily do you give up or do you keep trying? Once you track this and attach this to [a child's] name, the persistence score will be there somewhere.' O'Neil worries that just as credit scores are now being used in hiring decisions, predictive analytics based on educational metrics may be applied in unintended ways. Such worries came to the fore last week when educational services giant Pearson announced that it was selling the company PowerSchool, which tracks student performance, to a private equity firm for $350 million. The company was started independently; sold to Apple; then to Pearson; and now to Vista Equity Partners. Each owner in turn has to decide how to manage the records of some 15 million students across the globe, according to Pearson."
• Reason.com reports on famous author Judy Blume warning about censorship in today's world.
• Huffington Post has a piece on Dr. John Urschel, professional football player, on why more kids don't like math.
• Microaggressions, which I mentioned in this post, are back again. Although I learned "America is a melting pot", that's now a color blindness microaggression because it denies a person of color's racial/ethnic experience. The College Fix can help get you up-to-date on the latest witch hunt.
From the article, "University of Wisconsin-Stevens Point officials have advised faculty that the term 'America is a melting pot' is a racial microaggression. The common phrase was among a list of examples of so-called racial microaggressions used 'as a discussion item for some new faculty and staff training over the past few years,' a campus official told The College Fix in an email. Other phrases on the list included: 'You are a credit to your race,' 'where are you from,' 'there is only one race, the human race,' 'I believe the most qualified person should get the job' and 'everyone can succeed in this society if they work hard enough.'" Take some time and look at the lists from the University of Wisconsin and the University of California. Lots of the examples on the lists are ambiguous in the sense that they presume you know WHY a comment was made. So saying to a woman of color, "I would never have guessed you were a scientist," is considered a microaggression because it's assumed you said it because she's a woman, which means it could be perceived as insulting her intelligence. Apparently it's okay to say it to a white male, though, because it wouldn't be an attack on his intelligence... wait, what? Or if you've mistaken a faculty member of color for a service worker, then it's assumed you've done it because they are of color and not because of how they were dressed, where they were, or who they looked like. Heck, I've been mistaken for someone working in a store that I was shopping in for who knows what reason. Should I have been insulted? HOLDING AN OPINION that "affirmative action is racist" IS FORBIDDEN because it makes it seem like one group gets extra privileges. And the common practice of empathy, such as someone saying, "As a woman, I know what you go through as a racial minority," is enough to cause a problem. How could a woman possibly know what racial discrimination is like?
Your AMBIGUOUS ACTIONS are now under assault. "A person asks a woman her age and, upon hearing she is 31, looks quickly at her ring finger" is a problem because the reason WHY you did that was you thought "Women should be married during child-bearing ages because that is their primary purpose." If faculty are being taught that examples like the ones they've listed are transgressions, then you get an indication of how today's young are looking at the world. It's a less tolerant, "he/she said this which made me feel ___, therefore they must pay the price". Imagine spending money to get an education and coming out less educated and less tolerant. ZeroHedge has a piece on how "hate speech" is used to destroy "freedom of speech".
• There's an annoying piece that's getting a lot of play. From the Western Morning News we hear "Myth that men are naturally better at maths than women debunked". Now you'd think that such a conclusion would be based on some test scores which would show that women scored just as well as (or better than) men. No such case. From the article, "US psychologist Dr Shane Bench, from Washington State University, who led a study that involved assessing the ability of men and women to predict their performance in maths tests, said: 'Gender gaps in the science, technology, engineering and maths fields are not necessarily the result of women's underestimating their abilities, but rather may be due to men's overestimating their abilities.' His team conducted two studies of 300 undergraduates who were asked to have their maths skill tested before guessing how well they had fared. In the first study, participants received feedback about their real performance before they were again asked to take a test and predict their scores. For the second study, the students only sat one test without receiving any feedback, and were questioned about any plans to pursue maths-related courses. Across both studies, men were consistently found to overestimate the number of problems they solved correctly while women's appraisal of their own abilities was more accurate. After receiving feedback about how well they did in the first study, men were then better at estimating their scores in the second test." Got that? With no information on actual math scores, what does this mean? Suppose, for example, women scored 75% on the test and then estimated they scored about 75%, whereas men scored 80% and estimated they scored 85%. Then the women are more accurate at gauging their performance, but since their performance is worse, how would that debunk the claim that men are better than women at math? And to make things worse, the research said "After receiving feedback about how well they did in the first study, men were then better at estimating their scores in the second test." I'm not sure how this "research" proves anything. It only seems to show that, without feedback on performance, women are better at appraising their performance than men. But with feedback on performance (which is what happens in the real world, as students get feedback on each test throughout the semester), men are better at appraising their performance. And none of this has to do with mathematical expertise.
• The PC climate claims another high school teacher. Reason.com has the report: "An Illinois high school teacher was fired after stepping on the American flag to prove a point about free speech. The teacher, Jordan Parmenter, had been using a flag as a pointer during class on May 15. At least one student accused him of being disrespectful toward the national symbol, so Parmenter dropped the flag on the ground and stomped on it, according to WGNTV.com. Word quickly spread, and soon enough, demonstrators appeared outside Martinsville Junior-Senior High School. Parmenter wrote a letter of apology, but the school board voted 6-0 to fire him....The school board had a golden opportunity to show kids that honoring the values the flag represents is more important than honoring the flag itself. Instead, they imparted a different lesson: that no act of defiance goes unpunished by the government. Perhaps that's an important lesson as well." Beware the angry mob.
• The PC climate claims a college teacher as well. The Advocate has the story of an LSU professor, Teresa Buchanan, fired for using salty language. The teacher is fighting back with a lawsuit. From the article, "She said the university is trying to dictate how she teaches and in the process is impinging on her academic freedom. 'The occasional use of profanity is not sexual harassment,' Buchanan said. 'Nor is the occasional frank discussion of issues related to sexuality, particularly when done in the context of teaching specific issues related to sexuality.' LSU spokesman Ernie Ballard declined comment Friday on Buchanan's dismissal, saying it's a personnel matter and involves possible litigation. Buchanan was fired even though a committee of five faculty members that presided over an 11-hour dismissal review hearing held on March 9 recommended that she keep her job. While the committee found that her adult language and humor violated university policies that protect students and employees from sexual harassment, it found no evidence Buchanan's comments were 'systematically directed at any individual.' The committee recommended she be censured and agree to quit using 'potentially offensive language and jokes' that some found offensive."

# Sagetex: Definite Integrals

I've added two definite integrals to the Sagetex: Integrals page.
The first problem creates two random parabolas opening in different directions, and the area between the two curves must be calculated. This requires students to find the intersection points as well in order to set up the integral properly. The second integral gives a random exponential in e along with (random) endpoints of integration.

Here are some issues that caught my eye this past week:

• USA Today reports "Texas is decriminalizing students' truancy": "Gov. Greg Abbott has signed into law a measure to decriminalize unexcused absences and require school districts to implement preventive measures. It will take effect Sept. 1. Reform advocates say the threat of a heavy fine -- up to $500 plus court costs -- and a criminal record wasn't keeping children in school and was sending those who couldn't pay into a criminal justice system spiral. Under the old law, students as young as 12 could be ordered to court for three unexcused absences in four weeks. Schools were required to file a misdemeanor failure to attend school charge against students with more than 10 unexcused absences in six months. And unpaid fines landed some students behind bars when they turned 17."
• There's a wrinkle in a story from my last post. A teacher who was reported to have been removed from his position because he read from Mark Twain was actually removed for making an inappropriate joke (relating to a Mark Twain passage). The LA Times has the details: "In his first interview since he was pulled from his fifth-grade class, Esquith told The Times on Monday that the controversy stemmed from a joke he made in the classroom. He said he quipped with students that if he could not raise enough money for the annual Shakespearean play, they would all have to perform their parts naked like the king in Mark Twain's 'The Adventures of Huckleberry Finn.' After another teacher complained, he said he explained the context of the joke to his principal at Hobart Boulevard Elementary. The principal, he said, told him he had nothing to worry about. Nonetheless, Esquith was removed from the classroom in April."
• EducationWeek reports that the L.A. Unified budget has reduced spending on police. This was a victory for The Dignity in Schools Campaign which "...demanded that the school district, which is the second largest in the country, redirect $13.1 million in funds it had planned to spend on policing practices during the 2015-16 school year into jobs and programs aimed at improving school climate. (The district is still budgeting about $54 million for school police from other parts of its budget.) Though the district school board adopted the revised budget, campaign organizers don't yet know how much of the redirected money will go toward their specific funding recommendations, which include using $8 million for restorative justice measures like technical assistance and staff training, as well as $5 million for hiring prevention and intervention staff in alternative schools to create counselor-student ratios of 1 to 50. Such investments have been proven to positively transform school climate, whereas school-based policing has not, said Ruth Cusick, an education rights attorney at Public Counsel."
• Huffington Post's piece "Meet the 63rd Black Woman in American History with a Physics Ph.D." provides a glimpse into "the challenges faced by marginalized communities in science".
• Put this on your radar: Reason.com has an update on a case making its way through the legal system: "A little over a year ago, a group of nine California students, with the help of the activist group Students Matter, won an amazing victory in California Superior Court in the case of Vergara v. California. As I reported at the time:

Judge Rolf M. Treu reasoned that the challenged teacher rules -- regarding permanent employment status, dismissal procedures, and a 'last in first out' rule for layoffs -- do indeed damage California children's constitutional right (on the state level) to an education. He wrote that the challenged statutes 'cause the potential and/or unreasonable exposure of grossly ineffective teachers to all California students' and 'to minority and/or low income students in particular, in violation of the equal protection clause of the California constitution.'

Naturally, the losers appealed, and Judge Treu stayed actual enforcement of his ruling pending appeal. Today, the Students Matter side filed their brief in the appeal process in the Court of Appeal for California, 2nd appellate district....In a press conference call this morning announcing the brief, lawyers on the Students Matter side say they still need to wait for the teachers side to file its response brief and then await an actual court date. Once the hearings are over, though, a decision must come within 90 days, but that could still be a very long time away--more's the pity for California public school students." There's a decent video posted on the page.

• The Norway Chess Tournament 2015 ended with victory for Topalov. The tournament was marred by some blunders and a short draw between Anand and Topalov in the final round. But most newsworthy is what Chessbase reports here: "...this is easily the worst tournament ever played by Carlsen after obtaining his GM strength."
• The 43rd Sparkassen Chess Meeting Dortmund 2015 has begun; it features players such as Kramnik, So, Hou Yifan, and Naiditsch.
• American chess lost an icon recently.
The NY Times has a piece on Walter Browne, who passed away in Las Vegas at the age of 66.
• Michael Krieger posts on "Salt 'Black Markets' Emerge in Indiana School System as Students Seek to Avoid Bland Michelle Obama Lunches"

# Problem: "Puzzle math"

I've added the following puzzle to the Problems page: For the 8 squares below (corners aren't included), squares are adjacent up/down/left/right/diagonally. Fill in each square with a number from 1 through 8 (one time each) so that adjacent squares don't contain consecutive integers. I found the problem here. You can reason it out logically, but I ended up using graph theory to get the answer.

Lots of stories this week:

• A MUST READ article by John Bohannon: "I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How". From the article, "It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded." The key to generating bad conclusions from good data is "...If you measure a large number of things about a small number of people, you are almost guaranteed to get a 'statistically significant' result. Our study included 18 different measurements -- weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc. -- from 15 people. (One subject was dropped.) That study design is a recipe for false positives." And the p-value is instrumental in the deception: "It's called p-hacking -- fiddling with your experimental design and data to push p under 0.05 -- and it's a big problem. Most scientists are honest and do it unconsciously. They get negative results, convince themselves they goofed, and repeat the experiment until it 'works'. Or they drop 'outlier' data points." This article is a great resource if you teach statistics.
• Magnus Carlsen won a 3-board blindfold (with clock) exhibition. The video is posted on Chessbase.
• Caruana and Nakamura earned their places in the upcoming Candidates tournament, which determines the next challenger for the World Chess Championship, by taking the top two places at Khanty-Mansiysk 2015.
• The editor in chief of the Lancet, one of the world's most prestigious medical journals, has stirred up some controversy. "Dr. Horton recently published a statement declaring that a lot of published research is in fact unreliable at best, if not completely false. 'The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.'" Yes, statistics isn't really math.
• Forbes has one of those REALLY annoying posts by someone who lacks basic knowledge about what they're writing about -- something all too common in mainstream media, where even robots now generate worthless content. Although posed as a question, "Should We Stop Teaching Calculus in High School?", the author is clearly saying "yes": "The list of high school math courses in the U.S. hasn't changed for decades. My daughters are taking the same courses I took long ago: algebra, geometry, trigonometry, and calculus. These are all fine subjects, but they don't serve the needs of the 21st century." But the author goes on to say, "...the vast majority will never use calculus again. And those who do need it -- future engineers, physicists, and the like -- can take it in college."
So the courses do serve the needs of the 21st century. The author makes the point that we are awash in data today, and so asks, "What math courses do young people really need? Two subjects are head-smackingly obvious: computer science and statistics." Huh? Who believes computer science is math? And anyone reading this blog knows (e.g. see here and here) that statistics isn't math either. Yes, theoretical stats is basically analysis, but the statistics he's talking about (confidence intervals, p-values, etc.) isn't. Whether (see the links) you want to look at schools having a "department of math and statistics", or at the fact that bigger schools have a separate statistics department, or that AMSTAT News says statistics isn't a subfield of math, or, as is mentioned in the second link, that statistics books and teachers have been presenting p-values incorrectly. If you believe stats is math, then please explain what other branch of math teaches you the wrong way to do something, as has been done with p-values? There is a different reasoning process for stats. But back to the article. With respect to statistics: "Most high schools don't offer either one. In the few schools that do, they are usually electives that only a few students take." Let's mention that statistics is covered in Common Core. The Common Core Standards are posted here, and this government site notes, "The recently adopted Common Core State Standards for Mathematics (CCSSM) contain a large amount of statistics in the middle and high school grades and some at the elementary school level." Not being a math teacher, this author is unaware of how much things have changed: more statistics, mathematical proofs largely removed from geometry (I even had to teach probability(!) in my geometry classes), and 4 ways to subtract (which has confounded parents) are some noticeable changes.
Assuming that because his daughters are taking the same courses he took decades ago means there hasn't been any change in content is, at best, sloppy journalism. To be clear, let me agree that computer courses in high school would be great. Python is such a natural choice that could be useful -- but you don't sacrifice core math classes for that; you eliminate or consolidate less relevant courses to make room for it. The removal of most proof content from geometry is tragic because proofs are the essence of mathematics. As Alfred Renyi said, "A mathematician is a device for turning coffee into theorems". The author opting for computer science and stats as math classes while neglecting discrete math is, I suspect, based in the ignorance of not knowing that discrete math is the math of computer science. The author continues, "Convincing schools to give up calculus won't be easy. I imagine that most math educators will scream in protest at the mere suggestion, in fact. In their never-ending competition to look good on a blizzard of standardized tests, schools push students to accelerate in math starting in elementary school, and they offer calculus as early as the tenth grade. This doesn't serve students well: the vast majority will never use calculus again." There's some truth in here, but since he hasn't taught high school he can't properly interpret what's happening. High schools get awarded numerical scores according to a formula for "performance" (which often gets translated into star ratings for the school). Admin look at how they can increase scores (to make their own performance look better). More students taking AP exams means a higher score, regardless of how poorly the students do. That's why school admins get teachers to encourage students to sign up for AP classes, and that's why schools often pay for students to take the AP exam -- it's an easy way to raise the school's score.
The consequence is that it's commonplace for students who struggle with fractions to be taking AP Calculus. But no matter, just require a graphing calculator to give students a chance. The child feels smart, the parents feel proud, admin performance improves, and some business makes a lot of money selling expensive calculators when you can buy a laptop computer for $200. The only problem is the child still has poor math skills; they'd be put to shame by a typical student at a lower grade level in another country who has a fraction of the resources but whose parents make sure kids learn multiplication tables and basics without a calculator as a crutch. But the issue isn't about producing quality in the US, so it's no surprise we never get it. Admins are looking to maximize performance under the rules they've been given, so wasting taxpayer money on improving the school's score gets them credit for improving school quality even though no real improvement has taken place. Same thing with attendance. Some schools have poor scores due in part to poor attendance. Since that poor attendance is, in some cases, predictable (before Christmas break), schools often have incentives (such as a drawing for a free computer) for students who attend. This expenditure of taxpayer money has nothing to do with quality. There's so much to criticize in this article, but you get the idea. His argument that "here's a simple fix: get rid of high school calculus to make way for computer programming and statistics" is nonsense.
• The controversy over why women don't perform as well as men in chess, which I first raised here, continues with a new study (authored in part by women). The data itself is interesting: "'Hard' sciences such as physics and statistics on average have a larger gender gap than social sciences and humanities -- no surprise there. However, this is only part of the story. According to 2013 data from the National Science Foundation in the United States, there is large variation within each category: whereas women earned only 19% of PhDs in physics and 18% in computer science, they earned no less than 54% of PhDs in molecular biology -- amounting to gender parity. Within the humanities and social sciences, those numbers ranged from 78% in art history and 72% in psychology to a dismal 27% in philosophy and 28% in music theory and composition." I think the conclusions drawn from the data are suspect, to put it lightly. "The key claims of Leslie and Cimpian's paper are: 1. that fields vary in the degree to which its practitioners believe that innate ability ('genius' or 'brilliance') is required for success; and 2. that society often promotes the notion that men have greater innate abilities than women." If women were doing well in those fields then I don't see how they would be "bluffed" out of continuing. It makes more sense to me that they weren't doing well, didn't like it, found their "passion" somewhere else (as jobs opened up in other more desirable fields), etc. The fact is that women used to hold a large percentage of computer science jobs. The claim that women would somehow give up pursuing a job in these lucrative white collar fields because society has told them men have greater innate abilities seems demeaning to women and contrary to the general rise of women in the workforce over the past 40 years. Women have faced harassment in many areas and haven't quit. More believable to me is that the radical change in the field took computer science out of their comfort zone: computer science today is so much different than back in the 80's. Note also that the study separates statistics from math (as it should) and the resulting percentages for the two are quite different.
• The chronotope blog comments on the (included) recent John Oliver video on student testing in high schools.
Make sure to check out the video!
• Ravens guard John Urschel analyzes the extra point rule change in football.
• The Hindu notes the passing of chess legend Anand's mother. Grandmaster R.B. Ramesh has a fitting quote: "Without Anand it's tough to imagine Indian chess. Without his mother, it's tough to imagine Anand".
• CinemaBlend posts on Tobey Maguire starring as former World Chess Champion Bobby Fischer in "Pawn Sacrifice". Check out the movie trailer.
• Hundreds of SATs go missing. If they aren't found soon, the teenagers will have to take them again. Let's hope nobody has their college plans derailed.
• GreenBayPressGazette.com reports "Wisconsin may be the first state in the country to certify teachers who don't have bachelor's degrees under a provision put in the state budget....Under the change, anyone with relevant experience could be licensed to teach non-core academic subjects in grades six through 12. They would not need a bachelor's degree and they could even be a high school dropout." This doesn't sound like a good idea.

# Understanding "The Test"

The post Understanding the Prediction explained the mathematics behind Richard Wiseman's brilliant magic trick that's posted on his Quirkology site. It's a great way to introduce graph theory to your class. Richard Wiseman's video "The Test" is a similar type of magic trick that can be explained using digraphs and, once again, is a great way to introduce a class to digraphs. I've added an explanation of "The Test" to the Other page.

Here are some things that caught my eye last week:

• The Discover blog has an article, "The Purpose of Harvard is not to Educate People". From the article, "Don't believe me? Here is the test: when was the last time Harvard made a senior tenure offer to someone because they were a world-class educator, rather than a world-class researcher? Not only is the answer 'never,' the question itself is somewhat laughable."
• Patch.com reports that a "...former interim director of special services for the Brick Township School District, failed to reveal a 1990 conviction on heroin and cocaine charges on his job application with the district, the Ocean County Prosecutor's Office has confirmed. Morgan, 65, was charged Thursday, along with Brick Schools Superintendent Walter Uszenski and Uszenski's daughter, Jacqueline Halsey, in a scheme that supplied Halsey with full-time day care for her preschool child paid for by the Brick Township schools, with official misconduct and theft by deception, said Al Della Fave, spokesman for the Ocean County Prosecutor's Office....According to a 1989 report in the New York Times, Morgan was arrested and charged with selling cocaine on five occasions, in amounts ranging from a half-ounce to more than 3 ounces, to undercover detectives assigned to the Brooklyn District Attorney's investigation unit. Morgan -- who taught English to 9th and 10th graders in a special education program at Canarsie High School -- also was charged with possession of cocaine, as well as with conspiracy to sell heroin, in which he is said to have agreed to travel to Thailand to buy heroin for undercover agents posing as drug dealers. The heroin was supposed to be brought into this country concealed in disposable diapers, the authorities said. Morgan later was convicted of felony drug charges, though a follow-up article in the New York Times does not specify the exact counts. Morgan, who had worked for the school for 20 years, was fired but later won a civil case against the schools over the firing, despite his conviction." So you've got a school official arrested and charged with selling drugs on multiple occasions (details that are such public knowledge they made the NY Times) and he was teaching kids.
So the question becomes \"Was a background check ever done?\" If it was, \"How did it miss such well known information?\". Also ask yourself what it says when Morgan was hired at the \"request and recommendation\" of someone still working in the educational system. \u2022 (Stephen) \"Colbert Fune$800K in Grants for SC teachers\".\n\u2022 Huffington Post has an interview with 2010 Fields Medal winner Cedric Villani.\n\n# Math Models and a Birthday Problem resource\n\nMathematical models is one of those ideas that students should know, but don't. Even after they've studied them. Ask your class the following: \"A coin is tossed. What's the probability that it\u00a0lands as heads?\".\n\nMost students have are quick to say 1\/2 but that's wrong--the correct\u00a0answer is we don't know the probability for any particular coin.\u00a0We could use experimental probability to estimate it but even that's an approximate answer. The probability of heads that the students think is reality is actually a based on a mathematical model with a \"fair coin\". Mathematical models are approximations of reality.\u00a0Unfortunately most students who have had some probability don't know coin flipping is based on a model\u00a0and\u00a0think\u00a0the number of outcomes\u00a0determines the probability (not realizing the equally likely assumption is an assumption which could be false). Some of these problems invariably trace back to the teachers who have taught them incorrectly.\n\nMathematician William Feller\u00a0was a well known expert in probability\u00a0who wrote a\u00a0classic\u00a0book\u00a0An Introduction to Probability Theory and Its Applications\u00a0in which you can find (by click on \"Look Inside\") the following quote on page 19: \"As a matter of fact, whenever refined statistical measures have been used to check on actual coin tossing, the result has invariably been that head and tail are not equally likely. 
And yet we stick to our model of an "ideal" coin even though no good coins exist. We preserve the model not merely for its logical simplicity, but essentially for its usefulness and its applicability.".

The coin flipping model has two assumptions built into it:

1. There are two outcomes (heads and tails)
2. The two outcomes are equally likely.

Since it's possible for coins to balance on their sides (nickels and quarters more easily than a dime), it's possible (though admittedly remote) for a coin to land on its side. And it seems like most people have had the experience of a dropped coin which lands on its side only to roll away. Heck, it's even happened during a football game. This paper estimates the odds of an American nickel landing on its edge as 1/6000. Tested.com says that a study (broken link) indicates "...the 'randomness' of a toss is actually weighted ever so slightly towards the side of the coin that's facing upwards when a flip begins....The paper, written by statistics and math professors from Stanford and UC Santa Cruz, also points out that a perfect coin toss can reproduce the same result 100 percent of the time.".

So coin tossing is a simple example of a mathematical model that students should learn. Another model is the famous "Birthday Problem". As I mentioned in an earlier post:

Answering the Birthday Problem involves creating a mathematical model. The model rests on two assumptions that aren't true and should be discussed with the class:

• there are 365 days in a year (Feb 29th is ignored to simplify the model)
• birthdays are equally likely to be on any given day (Also false. This varies from country to country; in the US birthdays skew towards the middle of the year. Count back 9 months and you've got cold weather. Nothing random there.)

I'm revisiting this post because I ran across a chart showing the distribution of birthdays referred to in the second point above.
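Under the model's two assumptions (365 days, all equally likely), the Birthday Problem answer follows from a short product. Here is a quick standalone sketch, not taken from the post itself, that a class could check by hand:

```javascript
// Probability that at least two of n people share a birthday,
// under the model's two assumptions: 365 days, all equally likely.
function sharedBirthdayProbability(n) {
  let pAllDistinct = 1;
  for (let i = 0; i < n; i++) {
    // The (i+1)-th person must avoid the i birthdays already taken.
    pAllDistinct *= (365 - i) / 365;
  }
  return 1 - pAllDistinct;
}

// The classic result: with just 23 people, a shared birthday is
// more likely than not (about 50.7%).
console.log(sharedBirthdayProbability(23).toFixed(4));
```

The point worth stressing with students is that this number is a property of the model, not of real birthdays, which (as the chart shows) are not uniformly distributed.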
The Gizmodo post "How Common is Your Birthday" says, "The visualization used data from 1973 to 1999 to chart popular birthdays and figured out when the most popular time to pop out babies were." So now you have a source to back up my claim and a nice chart to use in the classroom.

If you teach the Birthday Problem then you should get a copy of the chart or bookmark the page above.

Here are some stories that caught my attention this week:

• In an earlier post I pointed out the case of a high school student who was in prison for almost 3 years because he had been accused of stealing a backpack and would not plead guilty. The charges were eventually dropped and, because he was able to specify the dates of two occasions when he was abused, video has surfaced of these events. Democracy Now! has "Explosive video obtained by The New Yorker depicts extreme violence inside New York City's Rikers Island jail complex." and interviews New Yorker staff writer Jennifer Gonnerman, who has reported on the issue.
• NY Post reports on, "A copy of the state's English Language Arts test that students took last week was leaked online Wednesday in an apparent act of sabotage by anti-testing activists......'This is a political act and it will be interesting to see whether [test-creation company] Pearson or the state Department of Education understands it as that or goes after them for civil or criminal liability,' said Brooklyn College education Professor David Bloomfield, who called the post an act of 'civil disobedience.'"
• The Washington Post covers the widespread resistance of New York to Common Core testing: "Newsday has translated raw numbers into percentages, estimating that over 40 percent of all Long Island 3-8 students refused to take last week's ELA Common Core state tests.
Numbers in some districts reached well over 70 percent, with at least one district exceeding 80 percent. It appears that no more than seven of the 124 districts on the island will meet the testing threshold of 95 percent. And that is before this week's math tests, when opt-out numbers are expected to climb, as they did last year...It seems clear that the final 2015 tally will well exceed 200,000 students. New York State will likely not make the minimum 95 percent federal requirement for testing."
• Shamkir 2015 has ended in victory for Magnus Carlsen. Chessbase has the report here. Anand took second, with Caruana and So tied for third. Anand's second-place performance has put him at number 2 in the world with a 2803.7 Live Chess Rating. I never thought he'd get there again. At 45, with his peak years behind him, he's still in the hunt: incredible!
• My sympathies go out to Nigel Short. Not for his 1.5 - 8.5 versus Kasparov (Chessbase only has part 1 out here) but for the savage "beating" the "PC police" are inflicting on him. The brouhaha, discussed here, starts with Nigel's comment, "Men and women's brains are hard-wired very differently, so why should they function in the same way? I don't have the slightest problem in acknowledging that my wife possesses a much higher degree of emotional intelligence than I do. Likewise, she doesn't feel embarrassed in asking me to manoeuvre the car out of our narrow garage. One is not better than the other, we just have different skills. It would be wonderful to see more girls playing chess, and at a higher level, but rather than fretting about inequality, perhaps we should just gracefully accept it as a fact." which became sensationalized with The Telegraph's article, "Nigel Short: 'Girls just don't have the brains to play chess'".
You can even see Nigel defending his common sense position on Sky News and being given the absurd argument that he's wrong because J. Polgar has a plus record against him. Given that there are distinct differences in the hardwiring of male versus female brains (not to mention the differences between males and between females) and the study Short points out, it's difficult to believe his comments have become so controversial. Sorry, Nigel! You deserve better.
• Huffington Post has an article about Ramanujan, whose birthday was April 26th. The brilliant mathematician was most definitely wired differently than others: "The biggest question is how an untrained teenager, and later young adult who repeatedly flunked out of college in his native south India (generally the area of Madras, today's Chennai), was able to obtain--all on his own--mathematical expressions that later would take some of the world's leading mathematicians years and even decades to ascertain and prove.".
• I've deleted the CTAN Mail Archive link and replaced it with the updated gmane.org link. It announced many changes in LaTeX that never made it to the other link.
The link is CTAN announcements, located on the sidebar.
'use strict';
// Thin wrapper around the Google PageSpeed Insights client.
var pagespeed = require('gpagespeed');
var prependHttp = require('prepend-http');
var output = require('./output').init();

module.exports = function (opts, cb) {
	opts = opts || {};
	cb = cb || function () {};

	if (!opts.url) {
		throw new Error('URL required');
	}

	// Default to the desktop strategy and run keyless unless an API key is supplied.
	opts.strategy = opts.strategy || 'desktop';
	opts.nokey = opts.key === undefined;
	// Allow bare hostnames like `example.com` by prepending a protocol.
	opts.url = prependHttp(opts.url);

	pagespeed(opts, function (err, data) {
		if (err) {
			cb(err);
			return;
		}

		// Format the results, then hand any processing error and the raw data back.
		output.process(opts, data, function (processErr) {
			cb(processErr || null, data);
		});
	});
};
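The option-normalization step at the top of the module can be sketched standalone. This is a minimal illustration, with a simplified inline stand-in for the `prepend-http` package:

```javascript
// Simplified stand-in for the `prepend-http` package: add a protocol
// only when the URL does not already have one.
function prependProtocol(url) {
  return /^https?:\/\//.test(url) ? url : 'http://' + url;
}

// Mirror of the module's option handling: require a URL, default the
// strategy to 'desktop', and flag keyless mode when no API key is given.
function normalizeOpts(opts) {
  opts = opts || {};
  if (!opts.url) {
    throw new Error('URL required');
  }
  opts.strategy = opts.strategy || 'desktop';
  opts.nokey = opts.key === undefined;
  opts.url = prependProtocol(opts.url);
  return opts;
}

// Example: a bare hostname with no strategy or key.
var normalized = normalizeOpts({ url: 'example.com' });
console.log(normalized);
// { url: 'http://example.com', strategy: 'desktop', nokey: true }
```

Separating normalization from the network call like this also makes the defaults easy to unit-test without hitting the PageSpeed API.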
What separates Squash Unlimited from all the other stores both online and in your town/city? Good question!
The product is all the same from store to store, a Tecnifibre Carboflex 125s racquet is the same in Toronto as it is in New York. Same for a Salming or Asics indoor court shoe.
What separates Squash Unlimited from the rest is our passion for the game and the gear. It is also our passion to provide incredible customer service before and after the sale. That doesn't mean we are going to replace that racquet you just smashed into the wall (sorry) but we are here to be fair and responsible with every customer we do business with and find the right gear for your style of game and budget.
We play-test every single racquet we carry and can give you a solid factual opinion based on technical details (weight, balance, flex, etc) as well as our own biased opinion. Of course it is biased 🙂 we all have likes and dislikes in racquets / gear and how they all perform.
We also play-test grips, balls, shoes, glasses, and pretty much everything we sell. In fact, we have tested it well before we decide whether to carry it in the store. Only if it is of good quality and performance will we stock it.
We have a full racquet demo program for players to try. The best advice is to try before you decide. Don't get hung up on the specs too much. While they are important, you need to feel it hit the ball to really know how the racquet plays or feels.
Our product buyer is always looking for new and innovative products in the market. We were the first store to carry Oliver racquets in Canada as well as Salming shoes. We take great pride in being able to pick product winners and bring them to market first. That is why you will quite often find products here first before everyone else jumps on the bandwagon.
Our stringer is a Master Racquet Technician with the United States Racquet Stringers Association and has loads of experience stringing and servicing racquets. We are proud to have been hired as the official stringers for the 2015 Pan Am Games in Toronto and the Squash Canada Women's World Teams Championships in Niagara-on-the-Lake in 2014.
Thanks for reading through all of this. If you have any questions regarding product, policies or anything else feel free to contact me directly. I will be more than happy to help.
The 1993 European Pool Billiard Championship was a pool billiards tournament held by the European governing body EPBF in Siófok. It has been, to date, the only European pool championship held in Hungary. The team events and the 9-Ball events took place in the Norwegian capital Oslo, which had already hosted the championship in 1986.
The disciplines contested were 8-Ball, 9-Ball, and 14.1 continuous. In addition, the men's and women's team European champions were determined.
Germany's Thomas Engert became European champion in 14.1 continuous with a final win over his compatriot Ralf Souquet, the previous year's finalist. In 8-Ball, however, Souquet beat Engert to become 8-Ball European champion for the third time in a row, after 1991 and 1992. In 9-Ball, Oliver Ortmann won the final against Souquet. Germany's Tony Deigner took the bronze medal in 9-Ball, and Austria's Werner Duregger won bronze twice.
In the women's events, Franziska Stark defeated Austria's Gerda Hofstätter in the final to become European champion in 14.1 continuous. Hofstätter became European champion in both 8-Ball and 9-Ball with wins over the respective defending champions, Louise Furberg and Franziska Stark. Germany's Ilona Bernhard won three bronze medals, and Andrea Kroll finished third in 9-Ball.
The German team (Thomas Engert, Oliver Ortmann, Ralf Souquet, Tony Deigner, and Edgar Nickel) won the final against Sweden, becoming European champions for the third time in a row, after 1991 and 1992. Switzerland and Finland took bronze. In the women's team event, Sweden beat Germany in the final, with Austria and Switzerland finishing third.
Medalists
Sources
1993
European Championship
Sporting event in Oslo
Sport (Siófok)
Billiards tournament in Norway
Billiards tournament in Hungary
This entry was posted in lyrical poetry, Mary Kendall poetry, meditative poetry, poetry, tanka and tagged city poems, poems of connection, poems of the human heart. Bookmark the permalink.
Very nice, Mary! This poem eases the heart.
Automated Personalized Phone, Video and Social Media Systems with James O'Hara CW 28 - Transcript
Posted on October 20, 2015 by jtsar
John Tsarpalas: Today my guest on Commonwealthy is James O'Hara, co-founder of Extended Data. You go, "Okay, what does that mean?" Well, this company does some amazing things in the realm of automated phone calls, social media, and video.
You are going, "Oh, no, robo-calls. I don't listen to them. Those are horrible." Well, I think you need to think about that.
Automated calls are part of that portfolio that every campaign manager and campaign strategist needs to think about. They are very cost effective. You get things done very quickly. They have a short turnaround time. And with the personalization that's available through Extended Data, people feel like you are talking to them.
So I think it's worth your while to listen to today's episode, automated personalized phone, video, and social media systems with James O'Hara, Commonwealthy #28.
My guest today is James O'Hara, co-founder of Extended Data Solutions. But James and I are old buddies. When did we meet? 2001? 2002?
James O'Hara: Some of our most early meetings went back to 2001 when I introduced myself to you and let you know that I wanted to run for state representative.
John Tsarpalas: Right. Tina was around then. If you listen to this podcast, you know who Kristina Keats is. She has been on many of these podcasts with me. She's sort of my old partner in- I hate to say crime because we were up to doing good deeds for the world.
James O'Hara: You had just come off helping Senator Mark Kirk get elected into the 10th Congressional district. He won congressman in a very, very tight race. The New Trier Republican organization had a big role in that.
John Tsarpalas: Right. We had just had our first big congressional win with Mark. We were optimistic that we could continue to do well in a very Democrat area, which it was. And you are a Northwestern grad, so then you were living in Evanston.
Back in those days (and we still do) we called it the People's Republic of Evanston because it is so far to the left. You decided you wanted to run for state rep in that area, and just bless you for trying. It was such a hard district in terms of the way it was gerrymandered, and the politics of the people of Evanston are progressive.
James O'Hara: Not just Evanston, which is a town that I do love having gone to Northwestern, but there was a giant section of Roger's Park. I'll never forget after you and Tolbert and Tina and I had sat down and said, "Great. Okay, James, let's have you run. This is going to be a good experience. Let's have a good time with this," the NTRO picnic in Kenilworth occurred a few weeks later.
I met a guy named Dan Proft who had graduated from Northwestern a couple of years behind me. I was like, "Hey, Dan, how is it going? Nice to meet you. I'm running as a Republican in the 18th district of Illinois for state rep." He said, "Why?"
John Tsarpalas: Well, Dan has always been one to cut to the chase. Dan's on the radio here. If you are a Chicago resident, he's heard every morning. He does a conservative talk show here and doing very well with that. He's still around.
Yeah, why? Well, why? Because we needed a candidate and it was worth trying. That's why. But, gosh, it was a tough district.
So you were one of the first to, in my mind, start automating things. You just kind of went with that and created a whole business out of it. You still are at it.
But back in those days, I remember Tina and I had talked about this in one of our previous podcasts about using Access for a voter database. You had taken Access and somehow created a way to make queries much easier for us. That's one of the first things you did for the system.
And then you barcoded Get Out the Vote for us, which was a completely new concept. Then you kept going.
James O'Hara: That was a novel concept, which I think many federal and statewide organizations use today, but I want to say that we were one of the first organized groups that attempted this. Basically what I learned from you and Tina, which every candidate running for office needs to understand… In fact, I'll go back as far as a preliminary meeting I had with Dan Rutherford, whose is well known in the state of Illinois.
John Tsarpalas: He's a former state treasurer. A state senator and then state treasurer.
James O'Hara: So Dan Rutherford had a meeting with a group of candidates. It was put on by the House Republican organization. He did an educational session. He said, "Alright, you got this organization going forward. Let's talk about what you guys are going to do to win. So let me just ask you first (I want you to raise your hands) who's got a database of pluses?"
I don't think anyone in the room knew what a database of pluses actually meant. One of my first lessons was that a database of pluses means that you have a district and you have a voter file. You have individual voter records listed in that database. Within that database, you identify the people who have at least said to you on the phone or when you have knocked on the doors, "I will support you, James."
I don't think anyone really knew what that meant. One of my first lessons was you have to understand who your pluses are. Once you understand who your pluses are, then you can deploy various technologies, systems, and procedures to make sure that those identified supporters are actually going to go out and vote for you on Election Day.
John Tsarpalas: Right. Well said. And we are still preaching that here. You've got to find your pluses. It's all about knocking on those doors and finding pluses.
James O'Hara: It is. And if you don't know who your pluses are, then how do you have an effective GOTV strategy. So without going into too much detail, essentially what we did is we assigned codes to every voter in the district. We identified codes for supporters and pluses.
Then what we did is we put poll watchers at each polling location and would have them send a text message (and this is before smartphones, so we were using old AT&T or Nokia phones that had very basic, rudimentary text messaging capabilities) with the code for the identified plus who actually came to vote.
We would receive that and import it into the Microsoft Access database. Then we would know, "Okay, among our pluses, who has actually voted today?" So then our target became pluses who had not yet voted. With those pluses who had not yet voted, around 3 p.m. we deployed a volunteer and an automated strategy to contact those households and encourage them to vote on Election Day.
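The mid-afternoon targeting step James describes boils down to a set difference: identified pluses minus the codes texted in by poll watchers. A minimal sketch, with hypothetical voter codes and field names (the real system used Microsoft Access):

```javascript
// Given the full list of identified supporters ("pluses") and the codes
// texted in by poll watchers, find supporters who have NOT yet voted.
function remainingGotvTargets(pluses, votedCodes) {
  const voted = new Set(votedCodes);
  return pluses.filter(function (voter) {
    return !voted.has(voter.code);
  });
}

// Hypothetical plus list and poll-watcher reports as of 3 p.m.
const pluses = [
  { code: 'P001', name: 'Smith' },
  { code: 'P002', name: 'Johnson' },
  { code: 'P003', name: 'Jones' }
];
const votedCodes = ['P002'];

console.log(remainingGotvTargets(pluses, votedCodes).map(function (v) { return v.code; }));
// ['P001', 'P003'] -- these households get the 3 p.m. volunteer/automated push
```

The volunteer and automated calls then go only to that remaining list, which is what makes a database of pluses worth building in the first place.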
John Tsarpalas: Right. You rigged it. People need to understand this was primitive back in the day. I don't know how you did it. It was genius. It would call and if they answered, it would hang up. Then it would refer it over to someone to live call it.
James O'Hara: That's right! I forgot about that, John. You are right.
John Tsarpalas: And if no one answered and it went to a voicemail, then you played a recorded call that said, "It's Election Day. It's going to be very close Please show up at your polling place. Please vote today before 7 p.m." So it kicked over to our roomful of volunteers.
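The answer-detection routing John describes can be sketched as a simple dispatch. The detection itself is done by the dialer hardware; the `'live'`/`'machine'` labels and action names here are hypothetical:

```javascript
// Route a dialed call based on answering-machine detection:
// live answers get handed to a volunteer for a live conversation;
// voicemail gets the recorded Election Day message.
function routeCall(detection) {
  if (detection === 'live') {
    return 'transfer-to-volunteer';
  }
  if (detection === 'machine') {
    return 'play-recorded-message'; // "It's Election Day... please vote before 7 p.m."
  }
  return 'no-answer-retry-later';
}

console.log(routeCall('live'));    // 'transfer-to-volunteer'
console.log(routeCall('machine')); // 'play-recorded-message'
```

The design point is that automation handles the cheap case (voicemail) while scarce volunteer time is reserved for the calls a human actually answers.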
Then there is something else I need to just praise you. You did one of the most magnanimous things ever. That's when you got close to the end and you realized that your district just was overwhelming Democrat.
Yet the next district over had a Republican incumbent and it was winnable. You threw all of your resources for Beth and backed off your race. Beth won by 600 votes and it's because of you that she managed to hold that seat.
James O'Hara: That's very nice of you to remember that, John.
John Tsarpalas: Who can forget that? My gosh! Who has that kind of take one for the team spirit, but also the ability to control your own ego and realize that it wasn't going to happen for you. This was down to the last week or two.
We just knew how many pluses you had and how many you needed and how many they had. It just didn't add up. You knew it, and you had people and resources that you threw to Beth, who was literally just down a few hundred votes. In the end she won by 600 votes. Thank you! It was huge.
James O'Hara: It all goes back to you and Tolbert and Tina and a lot of leadership at NTRO who could actually motivate and get people to volunteer and agree to knock on doors and make phone calls. I always say with politics, it's got to be fun.
I think for a successful local campaign, you have to create fun environments for people to volunteer. If you can bring a bunch of likeminded individuals together with some pizza and soda and say, "Hey, let's work on this for three or four hours then talk about the fun stories that we had," you actually can make a difference.
John Tsarpalas: We really did and we really can. And you still can. It's still the same thing. It is about a feeling of spirit and having fun and team camaraderie.
You mentioned Tolbert. I should throw in here Tolbert Chisum, one of the best people I've ever met in my life, mentor, and part of the team of Tina and myself and James and a whole lot of other people back in those New Trier Republican days, which were really great days for Tina and I. And for you, too, I know.
James O'Hara: It was a lot of fun. Some of my great memories from my adulthood stem from being in that office on Park Drive and formulating some great relationships. But again, it all goes back to people having a common interest and a common goal and bonding over that and saying, "Okay, great. Let's put some elbow grease together as team, have some fun, and try to change something." That I think was the most important part.
John Tsarpalas: Right. So let's bring it more to the present day. You got your feet wet, teeth cut (I don't know what the right analogy is) on campaigns and how it works. You've put that together with your technical abilities. You had Extended Data, but you turned it into this technology resource for Republicans and conservatives and free market campaigns.
So let's talk a little bit about that and what technologies are out there and how people can do that. Perhaps we should start with some real simple things. Most people are familiar with the automated phone call. I think that's one of the first areas you moved into, correct?
James O'Hara: Yup.
John Tsarpalas: So let's start. There's the basic recorded message that gets played. But talk to us a little bit more about the details of how that works. It has to have a database behind it, etc.
James O'Hara: Sure. So the automated phone call business has been around for quite some time. It's exactly what you described, John. You record a message, you have a database, and you basically blast out the same message. I can remember you and I having conversations back in the day about this.
As a state rep candidate or running for school board or municipal office or city council or whatever it might be, you don't spend a lot of money on television and radio or a campaign strategist. You basically spend money on phone and mail.
So that's what I did when I ran for state representative. I would ask my friends, people I knew who lived in the district, "Hey, did you get my phone call?" or "Did you receive my mail piece?" Invariably the answer was, "No, when I get those automated phone calls, I often times will just hang up or if it is on my answering machine, I hit the delete button." With mail they said, "Once I realized it is political, I just kind of toss it in the garbage."
I thought, "This is kind of ironic." We spend our time doing two things, which you have to do: doing persuasive activities, like knocking on doors and making phone calls and going to debates and convincing people that you are the right candidate, getting volunteers, etc., and then raising money, like asking people to donate to your campaign.
You raise this money. You put in some money yourself. What do you spend it on if you are running in a local office? You spend it on mail and phones.
John Tsarpalas: Right. And people are deleting it, hanging up on it, or throwing it in the garbage.
James O'Hara: Exactly! So I thought, "This is crazy." What I did in 2001 was, as we talked about, I was living in Evanston, Illinois with my young family. I took my database and I just took the Smith families. I did have a four-port dialer, which means I could make four phone calls with an automated phone delivery system at the same time.
I took all the database's Smith families, uploaded them to the dialer, and recorded a message which said, "Hi, Smith family. This is James O'Hara, a Republican candidate for state representative. It's a beautiful Saturday morning in Evanston. Let me tell you why I am running." And then I would give my points.
I did that and then I did it for the Johnson family and the Jones family. And then I called the Johns and the Marys and the Toms. I did it for several days. I did the same thing for Evanston and Wilmette.
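Batching a voter file by last name so that each batch shares one recorded greeting, as James describes, is a one-pass grouping. Field names and numbers here are illustrative:

```javascript
// Group voters by last name so each group can be sent through the
// dialer with a single "Hi, Smith family..." recording.
function batchByLastName(voters) {
  const batches = {};
  voters.forEach(function (v) {
    (batches[v.lastName] = batches[v.lastName] || []).push(v.phone);
  });
  return batches;
}

// Hypothetical slice of a voter file.
const voters = [
  { lastName: 'Smith', phone: '847-555-0101' },
  { lastName: 'Smith', phone: '847-555-0102' },
  { lastName: 'Johnson', phone: '847-555-0103' }
];

console.log(batchByLastName(voters));
// { Smith: ['847-555-0101', '847-555-0102'], Johnson: ['847-555-0103'] }
```

One recording per batch is what makes the personalization affordable: the candidate records each common surname or first name once, and the dialer reuses it across every matching household.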
John Tsarpalas: How did you think of that? That was genius.
James O'Hara: I don't remember how I thought of that.
John Tsarpalas: You are the first person to do that.
James O'Hara: I think we were in the political arena. I just realized if people are going to say, "When I get an automated phone call, I am going to hang up," I thought what if I speak their name? They say you only have a few seconds when you are marketing yourself, whether it is the corporate world or the political world. You only have a few seconds to capture someone's imagination.
So I thought if I greeted them by their name immediately, they would at least pay attention. I went back to some of those people and said, "Hey did you get my message?" They were like, "I heard it. That was awesome. I heard your message."
I kept doing that. We did it with mail as well. We would personalize mail with the household's first name and their town. We put graphics for their town, be it Wilmette, Winnetka, or Kenilworth. I think we got traction that we might not have otherwise gotten.
John Tsarpalas: Right. I remember you would have variable pictures. It would print different backgrounds so it looked like it came for just that town or that school area. I remember you even going by schools and kind of relate people to their neighborhood school in some of the literature you were doing. It was really brilliant.
James O'Hara: Yeah, that philosophy is still very, very much attend of what we do today. I think Tip O'Neill said, "All politics are local." People care about local initiatives. They care about federal. They care about state. But they also care about what's going on in their backyards. If somehow or another you can appear to be showing that you have a focus towards the local angle, you are going to capture their imagination more effectively than if it is something generic.
John Tsarpalas: Let's give them a little bit more of an example. For instance, did you have Mike Ditka record this? Did we do that? Can we talk about that?
James O'Hara: Sure.
John Tsarpalas: I remember people saying to me, "I can't believe Mike Ditka called me." Your equipment senses if it is voicemail or it is live, correct?
James O'Hara: Yup, absolutely
John Tsarpalas: So it would hang up if someone answered. But if it was voicemail, it would leave a message. It would be like, "Hi, James, Mike Ditka. I am calling people on Washington Avenue today because I am supporting John Doe for Congress. I am sorry I missed you." People think Mike Ditka really called them. People were talking about that for a long time.
James O'Hara: A few follow ups there. So Congressman Peter Roskam is the only person to date that has recorded street names. I think he fully understands the capability of the technology. What we did is we provided him a list of the top street names in his districts. So he would record messages.
He recorded "I'm reaching out to you and your neighbors on Main Street," "to you and your neighbors on Oak Street," "to you and your neighbors on First Street," and so on. He actually did that. I believe that is a very effective use of the technology. I need to do a better job of helping others understand.
Sure, you have to spend maybe thirty minutes out of your busy day. Pick a day, a weekend or a weekday or whatever it might be, and let's record the streets in your districts. But it is very, very effective. When you get a phone call from your congressman saying, "I am reaching out to you and your neighbors on Wilmette Avenue," that's going to be meaningful.
James O'Hara: The other thing, too, regarding live versus answering machine detection. We first started off (you are exactly right, John) believing that if the person answers live, then you don't want to say, "Hi, John" or "Hi, Marybeth" or "Hi, Stephanie" because if the person who is answering is not Marybeth or John, then it is going to confuse them and be ineffective.
What we realized (and this goes back to another point you just raised), we are actually not trying to dope them into believing that the person has actually called their home. Certainly people do think that has happened. But what we are trying to do is capture their attention.
If I get a call at home for Stephanie and the recorded message says, "Hi, Stephanie," and it's actually me, James, answering the phone, at least I know this call is intended for my household. So I am not going to hang up. I am going to pay attention to what is this message for Stephanie.
So we changed that very, very early on. So we will speak the voter's first name. If it is the last name, like "Hi, Smith family" or "Hi, Johnson family," it's a no brainer. But if it is just a voter that we are targeting the household, we'll speak that name even in a live answer because we know all we are trying to do is get them to listen and pay attention.
Most importantly, then we can track how long these people who answered live listened to the message. What percentage of people listened to the entire message? What was the average message listen rate among all live listeners?
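The listen-rate metrics James mentions reduce to two aggregates over the live answers. A sketch with hypothetical per-call records (seconds listened per live answer, against the full message length):

```javascript
// Compute listen metrics for live-answered calls.
// liveCalls: seconds each live listener stayed on; messageLength: full length.
function listenMetrics(liveCalls, messageLength) {
  const complete = liveCalls.filter(function (s) { return s >= messageLength; }).length;
  const total = liveCalls.reduce(function (a, b) { return a + b; }, 0);
  return {
    pctListenedFully: (100 * complete) / liveCalls.length,
    avgSecondsListened: total / liveCalls.length
  };
}

// Four live answers to a 30-second message:
console.log(listenMetrics([30, 12, 30, 6], 30));
// { pctListenedFully: 50, avgSecondsListened: 19.5 }
```

Metrics like these are what let a campaign compare a generic script against a personalized one and see whether the name greeting actually holds listeners longer.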
John Tsarpalas: Tina always says we need to ask questions, even if it is an automated call. You want to get their attention somehow. So scripts are really important here. Do you have some secrets to scripts?
James O'Hara: Yeah, again I'll go back to Congressman Roskam, who is brilliant. He did a fundraising call where they sent him a letter. The letter arrived in people's homes. We timed this personalized phone call to arrive at the same time that we expected the mail to arrive. What he did as effectively as anyone I've ever worked with is to sound very colloquial in his message.
So he leaves this message and says, "Hi, Smith family" or "Hi, Davis family. It's Peter Roskam. I just want to let you know we just dropped a note to your home about our current fundraising efforts. If you should receive it- Let's see, today's Wednesday, so you should receive it either tomorrow, Thursday, or Friday." Very colloquial and very, very natural sounding.
I think that's one of the secrets. You can't sound like it's a staged recording. It has to sound colloquial. Then when you bring the variable components together, whether it be a reference to their street name or their hometown or their last name or first name, the end product has to sound very user-friendly and natural.
John Tsarpalas: I want to bring up some thoughts on why phones are so important, this type of phone. Number one, they are fairly inexpensive. If you are just doing a simple message, they are downright cheap. When they are variable, there's a lot more work involved. But they are still effective and inexpensive.
The other thing that happens with a phone system is it can respond quickly. If you've got that October surprise and you're attacked one week before Election Day or the weekend before and there is no time to respond, a phone system can respond within an hour or half-hour.
So I really think it is important for a campaign to have a relationship with someone who is a vendor for these products, such as yourself. I also think it is important to have recorded, just as Peter Roskam has, all those street names and variables so that a variable recording can happen. Do one earlier on, but then you've also got the firepower if you need to drop something quickly.
James O'Hara: Absolutely. Most candidates need to understand that you need to be able to respond to issues on a very timely basis. Mail is great. Mail is a great medium. Radio ads, if you can afford them, are a great medium. But sometimes you need to respond very, very quickly, as you just mentioned, John, with a very timely response and message. Phones offer you a way to do it.
With variable voice technology, what's great is the way we set it up you can record the household's first name or last name. You can make a reference to their street or their hometown. And then you build the script so that whatever your message is, it can be used in a variety of different situations, whether it is Get Out the Vote, early voting, a response to an attack ad, or fundraising. So it is going to fit in any effort.
So the candidate at that point, having done the upfront work, which again I always say takes less than one hour to record these variable, personalized components, can in a couple minutes record the main message, if you will, for what they want to deliver. They can then have that go out.
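To make the assembly step concrete, here is a minimal sketch of how pre-recorded personalized components could be stitched together with a freshly recorded main message. Everything here is hypothetical (the file names, the fallback logic); it is not Extended Data's actual system, just an illustration of the idea described above.

```python
# Hypothetical sketch: build the segment list for one household's call by
# combining a pre-recorded personalized greeting with the candidate's
# freshly recorded main message. A real system would concatenate audio;
# here segments are represented as file labels.

PRERECORDED_GREETINGS = {"Smith", "Davis", "Johnson"}  # recorded once, up front

def build_call_segments(household_name, main_message_file):
    """Return the ordered audio segments for one household's call."""
    if household_name in PRERECORDED_GREETINGS:
        greeting = "greeting_{}.wav".format(household_name)
    else:
        # No personalized recording on file: fall back to a generic opener.
        greeting = "greeting_generic.wav"
    return [greeting, main_message_file]

# The candidate records the timely main message once, in a couple of minutes,
# and it is reused across every household:
segments = build_call_segments("Smith", "early_voting_message.wav")
```

The point of the design is that only the short main message has to be recorded at crisis time; the personalized components already exist.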
Obviously in this day and age in 2015, there's a lot of negative feelings towards automated phone calls. We don't feel like a personalized, targeted phone call is the same as a traditional robo-call. You can actually provide value.
In many Midwestern states, the concept of early voting is still new. When does early voting start? Where do I vote early? Is it at my regular polling place? Is it at my church? Is it at my elementary school?
So if you can provide a personalized, targeted phone call which provides information which is value added to the voter, regardless of how they are going to vote, then that is something that is not in the same league or ballpark as a traditional robo-call.
John Tsarpalas: I agree. I think that's really important. For instance, here in Illinois it starts thirty days before, but it is much more limited in where you can vote. Most people don't know where those places are. They might know where their regular polling place is, at the local school, church, or whatever. But the early voting sites are farther apart and fewer. That's really a great use, reminding them.
James O'Hara: The information just isn't there. The intel isn't there. These early voting locations change all the time from year to year. As you just mentioned, John, they are not at your traditional polling place. The times change based upon the day of the week. And it's a huge value to the voter to be able to say, "Hey, I am not going to wait in line on Election Day. I am going to vote early."
An automated phone call which provides that voter and that household the details of where their early voting location is, is a great benefit to a citizen and a voter. But also if you are running a campaign and you've got your identified pluses, why wouldn't you tell your identified pluses where they can vote early or when they can vote early to help get out your vote before Election Day?
John Tsarpalas: Right. And the phone system, the database can be current. It could be the people you identified up to the day you send it over to the phone system, versus if you are going to mail this, it's got to go to a printer, get barcoded, and go through the postal system. So you have missed three, four, five days of people that were identified, versus the phone call, which is going to be working from a database that is probably an hour old, or at most less than a day old.
James O'Hara: Absolutely. I think that also helps with volunteer recruitment. If you are going to ask people to go out and knock on doors and make phone calls for you in the campaign days, which is great, it's really a value-added statement or a benefit to that volunteer to be able to say, "Hey, besides having a party, we are actually going to leverage your work. The pluses that you identify, we are going to use some techniques which are going to ensure those people get to the polls."
You know, John, polls mean nothing. It's how you can get your people to the polls on Election Day. You can be the biggest supporter for candidate A for the school board or candidate B for state rep, but if you don't actually go to the polls on Election Day, it doesn't matter.
John Tsarpalas: Right, it doesn't count. We have not gotten to Get Out the Vote in our podcasts. We are sort of working our way there chronologically, talking right now about systems. We just talked about data systems not long ago. We are talking about your system. We are going to be talking about mail and mail vending, things like that, in the next few podcasts.
So you can do other things with phones, though. You mentioned polling. There are automated robo-polls. Do you get into that?
James O'Hara: Absolutely. I am glad you brought that up. Automated polls, right now we are in the midst of every day reading articles about what is the latest poll in Iowa. What's the latest poll in South Carolina or Florida or New Hampshire?
A lot of these polls are done by human beings that are sitting in a call center and going through a script and asking people. They are not just asking them, "Are you voting for Trump or Bush or Cruz?" They are asking them questions about their background, their ethnicity, their age, how likely they are to vote, and they are marrying that with actual voter history data.
Did this person vote in the last election cycle? How many times in the last four years? Do they vote Republican? Do they not vote in primaries? Companies like Real Clear Politics are able to provide some fairly sophisticated analysis.
But all of that is expensive. When you put someone in a calling center, a professional individual in a calling center, and staff that calling center, it's pricey. What we are able to do, and very effectively, is the same thing but with an automated phone call.
Again, it goes back to engagement: you have to engage that person. You have to make sure that they are going to answer the questions that you are asking. We have found a way to do that with our methodology of greeting the household by name or the voter by name.
We will even go so far as to say, "This call is for John. Am I speaking with John?" So if John answers the phone, he might press one, but if it is Mary Beth, she might hit two. We verify that we've got the right voter. We can track his or her voter history in the databases.
And then we go through a series of questions. Again, without going into a lot of detail, there are two metrics that we use. The first metric is what percent of people who take a personalized, automated survey will answer at least one question. So the first question that you ask is probably the one that surveyors are most interested in; you need to get at least that one.
But you can have a survey, which we've done, with twenty or twenty-five questions. So the next metric is what percent of people will answer all questions, take the full survey. We've done municipal and statewide and federal campaigns polling where we use those metrics.
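As a rough illustration of the two metrics just described, they can be computed from per-respondent answer counts. The numbers below are made up for the example; they are not actual campaign data.

```python
# Hypothetical sketch of the two survey metrics: the share of respondents
# who answered at least one question, and the share who answered all of
# them. Each entry counts how many questions that respondent answered.

TOTAL_QUESTIONS = 20
answers_per_respondent = [0, 1, 20, 20, 5, 0, 20]  # made-up call outcomes

answered_at_least_one = sum(1 for n in answers_per_respondent if n >= 1)
answered_all = sum(1 for n in answers_per_respondent if n == TOTAL_QUESTIONS)

rate_any = answered_at_least_one / len(answers_per_respondent)  # metric 1
rate_all = answered_all / len(answers_per_respondent)           # metric 2
```

With these sample outcomes, five of seven respondents answered at least one question and three of seven completed the full survey.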
The answer is yes, when you can speak their name and make a reference to something that is meaningful to them with a personalized phone call up front, before you start asking the questions, your response rate is significantly higher. It really represents a hybrid, we like to say, between the more expensive personalized but live call, where you are paying for someone who is in a call center, and a less expensive personalized experience that is automated.
We feel that we can get very similar results to what a live solution would provide, as you would imagine, at a far lower price.
John Tsarpalas: Okay. How big of a universe or how small of a universe is this worth bothering with? Is it ten thousand households? Ten thousand people? A hundred thousand? What scales of economy does this work for?
James O'Hara: Another great question, John. So we've worked with scales of five hundred people to ten million people when we've done federal races. For the client, if I am running for school board and I am trying to contact five hundred pluses, it is going to work for them.
It's a question of a partnership: does it work for both sides? So what we try to do is make sure that is something that is affordable, meaningful for the client, and it is going to be meaningful for us as well. As you mentioned earlier, there is a lot of work with setting these up.
Compare the cost of what we do with the price of a stamp and a mail piece. At the end of the day, you just have to pick and choose which weapons you are going to use to get out your message. No matter what the universe is, we are always more effective than even a generic mail piece.
John Tsarpalas: Oh, absolutely. The other thought is push polls. You can do negative messaging via the pretense of a poll, correct?
James O'Hara: Yes and we've done it.
John Tsarpalas: I know. I have done it, too. That's why I am bringing it up. "Would you vote for John Smith even if you knew he'd embezzled more than one hundred million dollars?" Stuff like that.
James O'Hara: Yeah, we had a congressional candidate in Illinois. I give him a ton of credit. He had all these ideas of things that we could do. So they are very effective. They are above board. There is nothing which can be scrutinized in a negative way or called out as being improper.
There are many things you can do to kind of spin the message if you will, as you just suggested with the push poll, in a way that is going to be favorable to your client.
John Tsarpalas: I think I should define what a push poll is. A push poll is when you are trying to sway someone's opinion under the pretense that this is a poll. So you are using negative terms about your opponent and positive terms about yourself, if you are using both sides in the same poll. Or they could be separate; one's just a negative and one's just a positive. Then they are separate polls.
But you are trying to push the voter's thinking in the poll, versus a true poll, which is neutral and is trying to find out what you are thinking, not trying to sway your thinking. That's what a push poll is, and that's the difference.
James O'Hara: And if you look at society today, many, many forms and mediums of marketing are all push polls. It is not necessarily candidate A versus candidate B. But a lot of the messaging that we receive today as Americans is constructed in a way to kind of guide the recipient's thinking in a certain direction, which is really no different than what a push poll does.
John Tsarpalas: Right. And we see that every day with our news media and their liberal bias. They are trying to guide us to where they want us to go versus the neutral position and letting us make up our own minds.
James O'Hara: Exactly.
John Tsarpalas: So now you've also taken this to social media, correct?
James O'Hara: Absolutely.
John Tsarpalas: Tell me more because I haven't seen a lot of this.
James O'Hara: Really, the layer before social media, John, is video, which you referenced up front. It's the same concept. Again, the universe of landlines is dwindling. More and more of my friends do not have landlines anymore. We have a landline at our home because we've got two children who don't have cell phones. If there is an emergency, we need a landline.
The reality is that as a country, we are moving away from landlines and moving more towards cell phone only households. We have to be able to be diverse and progressive in how we get our client's message out.
Certainly video is a big way to do that. Variable interactive video, more specifically, is the way to do that. We have to do that in a kind of ubiquitous manner, so that is computers and laptops and cell phones and tablets. That's how people are communicating today.
It doesn't matter whether you are running for federal office or school board, you have to think about how you are going to communicate with the constituents in your district. Video, again, (it sounds crazy to say this) but corporate America is moving towards video communications as a way of marketing and getting their message out. In the political arena, the same thing is happening.
So we are doing that, but we are doing it I would say in a three-point fashion. The first is the same thing we talked about with voice, with personalized video and greeting the viewer by name. Or maybe not just greeting by name; that's a prerequisite. Maybe we refer to the hometown. Maybe we know something about their interests or their profiles, so talking about specific issues that are going to be relevant to them. So personalization is one.
The second one is interactivity. So within the video itself, you have the ability to allow the viewer to press a button, which indicates they are interested in lower taxes versus creating jobs versus rights for gun owners versus protecting life. So interactively allowing a viewer to choose options on the screen and then having the video content which follows be specific to what they are interested in.
Finally all the way down to capturing data. So someone could say, "Yes, I want to volunteer. Here's my email address. I am interested in a yard sign or making phone calls." And even donating. So within the video itself (it sounds crazy) we can have the candidate or a voiceover encourage someone to donate to the campaign. And within the video itself, allow them to enter their credit card number and their billing address and hit the donate button and capture that information.
So we call it variable interactive video. We feel that this is the next frontier in the political domain.
John Tsarpalas: How much data can you supply to people who haven't done this before? For instance, are you able to get lists of hunting licenses and things like that, so that you can add that into the database to figure out who might be a gun owner?
James O'Hara: That's a fabulous question. I think going back to President Bush's first election-
John Tsarpalas: That's when it all started.
James O'Hara: That's when it all started. You remember you had a representative from the firm that helped President Bush in his first election cycle talk about… He came in when you were the head of the Illinois Republican Party. He came in and he talked about what they did to aggregate all of this information.
That's a great example you just gave, John. Someone who subscribes to Field and Stream or who buys at a Field and Stream type of store-
John Tsarpalas: Right, Bass Pro Shop or Gander Mountain or one of those outdoor sporting goods places.
James O'Hara: So this data is collected and aggregated. It's called micro-targeting. You can basically, without knowing if this household or if this voter in this household is going to support Republican candidate A versus Democrat B or Independent C, leverage this data to make assumptions.
Say, "Hey, we've never contacted this household. They've never answered the door. They've never taken a phone call. They've never taken a survey. But they subscribe to these magazines and have these profile characteristics and they do vote. We know from their public voting history they actually vote. So we are going to assume they are on our side."
So to answer your question, John, we don't do that. Our particular firm doesn't do that. But there are a lot of really good organizations that provide that. That intelligence is absolutely critical to formulating a voter contact strategy.
John Tsarpalas: So you can buy it. It has a price, but you can buy it for your campaign if you want to go that way and if you have that kind of money and those kinds of funds.
John Tsarpalas: I probably should have a whole podcast on the whole micro-targeting concept. Although for a local race, this is expensive. It takes it to a whole other level. But it is fascinating.
When I heard about what they were doing in 2004… One of the reasons why Bush won big in 2004 was micro-targeting. It was a brilliant concept. Of course the other side's caught up on it now.
James O'Hara: I don't think the term before 2004 of micro-targeting existed.
John Tsarpalas: No, it didn't. It did not.
James O'Hara: It didn't exist. So what these early firms were doing with micro-targeting, as you said, is now in a different perspective and different landscape. But the same concepts apply.
John Tsarpalas: Right. A real simple one is that 98% of people who drive Subarus vote Democrat. It's that simple. So don't mail to Subaru drivers if you are a Republican.
James O'Hara: What about Volvos? Who do they vote for?
John Tsarpalas: Volvos used to be bigger. Now it's switched to Subaru.
James O'Hara: Okay.
John Tsarpalas: In 2004 Volvo drivers were the problem. I had a Volvo and I was like, "Hey, wait a minute!" But anyway.
James O'Hara: The other one that I recently heard was that 72% of men who buy their suits at Joseph A. Bank are Democrat. I am like, "Hey, I've got three Joseph A. Bank suits!" So I don't know what that means.
John Tsarpalas: Yeah, I don't know what that means either. So anyway, micro-targeting data is available. That helps to append to your file so that you have even more ability.
But what I wanted to also say was this is a reason why a campaign if it starts early or keeps going can be out doing issue IDing of voters. You are not only calling to say, "Are you voting for John Doe?"
You are calling up in an off year to say, "Do you support lower taxes for your school? Are you concerned about higher property taxes?" Or whatever some issues are that you can identify. You get that in your file so that come election time, you've got that data to target those people with specifics.
James O'Hara: Absolutely, John. Your narrative is perfect. I know the timing of your podcast and the type of interface you are providing; the timing is perfect. Your Get Out the Vote efforts have to be based on data.
As we just talked about, you can buy that data and do different things. But in the actual campaign in May through Labor Day and into Halloween time periods, collecting information about the voters in your district is absolutely key.
We did a program several years ago in the state of Ohio where the ballot issue was "Should the state of Ohio allow legalized gambling in X number of sites in the state: Cincinnati, Dayton, Cleveland, Columbus, etc.?" One of the larger kind of P.R. organizations in the state hired us to ask people, "Where do you stand on this gambling issue?"
So we asked a bunch of questions. "From one to five what do you think about this? And what about this? And would this motivate you?" So what we did was capture a bunch of intelligence down to the household level. That intelligence then drives your GOTV efforts.
So it is all related. They are all tied in to one another. Whether you are running for school board or state rep or state senator or Congress or President, your strategy has to be a cohesive strategy over a period of time.
What you are talking about now is collecting that data up front, identifying issues: what's important to you, what's not important to you, where do you stand in the middle of the road. And then when it comes time to deploy your Get Out the Vote strategies, leverage that intelligence to effectively drive your GOTV efforts.
John Tsarpalas: Correct. And also your persuasion efforts, even before GOTV. If you know they are worried about property taxes and you are the candidate running on that issue, they need to know your name and they need to be supporting you. So you can then target them with information about you to win them over to supporting you.
James O'Hara: You nailed it. As you and I both know, what motivates one household is going to be different from another household. So if you can collect that information upfront, and then sell them and say, "Hey, I am the candidate who is going to…"
Let's go back to school board. There are things that certain families care about and things they don't care about. So you are exactly right; if you can then capture that information and then deliver it back to them in a way to say, "Hey, I am going to fight for this particular issue," it is going to cause them to be more inclined to vote for you.
John Tsarpalas: Correct, and if you can get it down to each person in that household because there are houses that are divided on issues as well, which is frightening but true.
James O'Hara: It's very frightening and it is tough.
John Tsarpalas: It is tough.
James O'Hara: It is easier when the households are united.
John Tsarpalas: Yes. So back to what you are up to. So we talked about video. We never did get fully into social media. You can also do targeting with social media, correct?
James O'Hara: Absolutely. So the fun thing with social media is that you have the ability to essentially allow the campaign to have an application. That application can interface with social media platforms like Facebook and Twitter. If you have an individual who is willing to provide you with access to their social media profile, then you can do a lot of fun things.
We've done programs with Congressman Kinzinger, for example, in which we launched a personalized variable video Facebook campaign that would allow a Facebook user to view a video from Congressman Kinzinger on a variety of different topics and then select a friend from their friend list to send it to.
So I would send a video from me, James, to you, John. Congressman Kinzinger would say, "Hey, John, I heard from your friend James. He said you were interested in this issue about guns" or taxation or creating jobs. So it creates this kind of viral message opportunity to share issues that the candidate believes in, subscribes to, and allows voters or constituents to then in a viral fashion spread that word among people they know are going to be receptive.
John Tsarpalas: Cool. There's a lot here. It sounds a little more complicated than it is. Well, it is and isn't. On the tech end it is very complicated, but none of us have to worry about that. What we need to think about is what is our strategy? What is our budget? Who are we trying to reach? When? Do we have some backup plans in case problems arise, like attacks?
Anyway, there are these different techniques and possibilities. How do they best fit your campaign and your plan? Think about them. Investigate them. Ask questions. Feel free to reach out to James. James, how can people get a hold of you to ask questions about your systems?
James O'Hara: I can always be reached directly in my office at 312-953-9560 or via email johara@extendeddata.com.
John Tsarpalas: They have a beautiful website, by the way. Extendeddata.com, take a look there. It really is nice.
James O'Hara: Thank you. As we've been focusing more on what we are doing with personalized and targeted communications, there is a link to our secondary site, which is Personicom.com. We go into a lot more detail about some of the things that we are doing with variable video, variable voice, and interactive video on our Personicom site.
John Tsarpalas: First of all, it has been fun to talk to you. Second of all, I think we gave people a lot of ideas and thoughts. I think it has been not only educational, but also informative.
I just feel like it is one of the first podcasts where we are really starting to get to strategy as well as the tools and putting them together. That's what these are. You offer a lot of different tools and they come in different price ranges and they come ideally for different situations.
People need to think more about these tools, find out more about these tools, and put this into their plan early on and work with them. James, thanks. I really appreciate it. It's been so fun. We've got to hang out together soon. I don't know when we are going to be in the same city again.
James O'Hara: I am coming up for homecoming.
John Tsarpalas: Good! Okay, great. We'll have to get together. I look forward to it. Thanks!
James O'Hara: Thank you so much, John. This has been really fun and entertaining.
John Tsarpalas: If you think your campaign can use automated personalized phone, video, or social media systems, check out ExtendedData.com. James is offering a fifteen percent discount to listeners of the Commonwealthy podcast. Just use the offer code COMMONWEALTY.
As always, we have transcripts, show notes, and links to everything we talked about in this episode. So feel free to check us out at Commonwealthy.com. Please pass on the word about Commonwealthy to your friends and those that are politically active. We'd love to grow our community and we are here to help you. We are here to answer your questions. Thanks for listening!
Automated Personalized Phone, Video and Social Media Systems with James O'Hara CW 28
TR17-061 | 3rd April 2017

Communication Complexity of Correlated Equilibrium in Two-Player Games

Authors: Anat Ganor, Karthik C. S.
Publication: 10th April 2017

We show a communication complexity lower bound for finding a correlated equilibrium of a two-player game. More precisely, we define a two-player $N \times N$ game called the 2-cycle game and show that the randomized communication complexity of finding a 1/poly($N$)-approximate correlated equilibrium of the 2-cycle game is $\Omega(N)$. For small approximation values, this answers an open question of Babichenko and Rubinstein (STOC 2017). Our lower bound is obtained via a direct reduction from the unique set disjointness problem.
Q: $a$ and $b$ are nonzero unequal real numbers and $\frac{a-b}{a}=\frac{b}{a-b}$. What is the sum of all possible values of $\frac{a}{b}$?
I have tried cross-multiplying (which works since $a\neq b$), but all I ended up getting was $a^2-3ab+b^2=0$, which I can't figure out how to use to my benefit. Other than this, I can only think of bashing out possibilities, but I'll probably miss something if I do that. Can anyone help?
Thanks!
A: Hint: Take the fractions on both sides, and divide the top and bottom by $b$:
$$
\frac{a-b}{a} = \frac{b}{a-b} \implies \frac{(a/b)-(b/b)}{a/b} = \frac{(b/b)}{(a/b)-(b/b)}\\
\implies \frac{x - 1}{x} = \frac{1}{x - 1},
$$
where $x = a/b$.
Alternatively, taking your expanded equation $a^2-3ab+b^2=0$ and dividing both sides by $b^2$ gets you the same result.
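To finish the hint (my addition, not part of the original answer): clearing denominators gives a quadratic in $x = a/b$, and Vieta's formulas then give the requested sum without solving for the roots:

```latex
\frac{x-1}{x}=\frac{1}{x-1}
\;\Longrightarrow\; (x-1)^2 = x
\;\Longrightarrow\; x^2 - 3x + 1 = 0 .
```

Both roots are real (the discriminant is $9-4=5>0$), neither is $0$ (the constant term is $1$), and neither is $1$ (which would force $a=b$, since $1-3+1=-1\neq 0$). So both roots are admissible, and by Vieta's formulas their sum is $3$.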
Steady Motions of a Drawn Cable
O. M. O'Reilly
Department of Mechanical Engineering, University of California at Berkeley, Berkeley, CA 94720-1740
J. Appl. Mech. Mar 1996, 63(1): 180-189 (10 pages)
O'Reilly, O. M. (March 1, 1996). "Steady Motions of a Drawn Cable." ASME. J. Appl. Mech. March 1996; 63(1): 180–189. https://doi.org/10.1115/1.2787196
The steady motions of nonlinearly elastic and inextensible strings which are being drawn between two fixed points and are subject to a gravitational load are examined. It is shown that, dependent on the boundary conditions, constitutive equations and a reference drawing speed, multiple co-existent steady motions are possible in certain situations. Using a variational method, stability criteria are also established for some of these motions.
Boundary-value problems, Cables, Constitutive equations, Stability, Stress, String, Variational techniques
\chapter{Introduction}\label{ch1}
"Data is the new oil" is a quote dating back to 2006, credited to mathematician Clive Humby. It has recently gained more traction after The Economist published a 2017 report~\cite{theeconomist} titled "The world's most valuable resource is no longer oil, but data". This thesis focuses on tabular data, the most popular type of data used for analysis in the industry~\cite{sun2019supertml}.
Unfortunately, extracting insights from tabular data risks compromising personal privacy and can result in unjustified analyses~\cite{narayanan2008}. Thus, strict privacy regulations enforced via the European General Data Protection Regulation (GDPR) prevent the misuse of personal data. This calls for innovative technologies that enable data usage without breaching privacy. Hence, privacy-preserving data solutions have become increasingly important and have the potential to push the contribution of the data economy to the EU GDP by up to 4\%~\cite{data-sharing}.
One such emerging solution is to leverage Generative Adversarial Networks (GANs)~\cite{gan}. GANs are first trained on a real dataset and are then used to generate synthetic data resembling the original data distribution. Beyond successfully generating images, GANs have recently been applied to generate high-quality tabular datasets~\cite{ctgan, tablegan}. Moreover, since a generative model can synthesize as much data as desired, it is especially useful when the available data is limited, e.g., in the case of online learning~\cite{rad}.
Currently, the state-of-the-art tabular generators~\cite{ctgan} use the conditional GAN architecture and handle only two types of variables, namely continuous and categorical. However, an important class of "mixed" data-types is overlooked. In addition, existing solutions cannot effectively handle highly skewed continuous variables. Finally, the empirical robustness of existing methods to withstand malicious privacy attacks remains unexplored.
In this thesis, we design a tabular data synthesizer that addresses the limitations of the prior work by: (i) efficiently encoding "mixed" data-types consisting of both continuous and categorical variables, (ii) efficiently modeling skewed continuous variables and (iii) enhancing robustness against privacy attacks. Therefore, we propose a novel conditional tabular generative adversarial network, CTAB-GAN, that is further extended to be trained with strict privacy guarantees.
Thus, this chapter begins with Sec.~\ref{Ch1:Syn} elaborating on the necessity of synthetic data with robust privacy guarantees along with beneficial use-cases in the industry. Next, Sec.~\ref{Ch1:motivation} specifies the key scientific motivations for this research. This is followed by the main research questions posed in Sec.~\ref{Ch1:research_question} along with the main results, collaborations and contributions of this thesis in Sec.~\ref{Ch1:Res} \& Sec.~\ref{Ch1:contri}, respectively. Finally, Sec.~\ref{Ch1:org} ends with an outline of how this research is organised.
\section{Privacy Preserving Synthetic Tabular Data}
\label{Ch1:Syn}
Tabular data plays a key role in a wide range of industries for gaining valuable insights and making data-driven decisions. For example, consider the recommendation systems employed on our favourite websites such as Netflix or Bol.com, or the corona patient risk models developed by our health-care providers. These all intimately rely on tabular data.
But unfortunately, using the real tabular data may be perilous because: (i) the \textit{privacy} of real data may be compromised, (ii) the \textit{quality} of real data may be poor due to rows with incomplete information and (iii) the \textit{amount} of real data representing anomalous events (e.g., data-rows representing "fraud") may be heavily imbalanced as compared to normal events (e.g., data-rows representing "no fraud").
These factors necessitate the use of synthetic tabular data to ensure that the data does not contain any real user-sensitive information compromising \textit{privacy} or missing values degrading \textit{quality}, and that it contains a balanced \textit{quantity} of class labels (e.g., an equal number of data-rows representing "fraud" and "no fraud", respectively).
Additionally, due to the recent rise in machine learning solutions that rely on real user-data, there has been an equally important demand for ensuring greater data security against malicious privacy attacks targeted towards machine learning algorithms.
In light of this, privacy-preserving techniques such as differential privacy~\cite{dwork2008differential} serve as an effective framework to limit the influence of individual data points and to provide strict privacy guarantees preventing the loss of personal information. In recent times, tech giants such as Apple~\cite{tang2017privacy} have successfully used this technique to effectively deal with privacy leaks. \\
\\
Thus, synthetic tabular data generated using strict differential privacy guarantees serves data-driven industries with the following gains:
\begin{itemize}
\item \textbf{Collaboration across stakeholders}- Synthetic data with reliable privacy guarantees serves to enable efficient and safe data-disclosure. This boosts collaboration among different parties and fosters innovation. For example, to build a stronger fraudulent insurance claim detector, a multi-national insurance company can benefit from information stored across divisions located around the world. However, privacy restrictions do not allow the real data to be shared. Thus, synthetic data can be used instead to capture the shared characteristics of fraudulent insurance claims across the world.
\item \textbf{Data Optimization}- Synthetic data generators can effectively learn the distribution of the real data thereby enabling end-users to encapsulate the real information in a more compressed form. This enables easily storing and generating large amounts of data more efficiently. Moreover, synthetic data generators can generate datasets based on user-specified constraints and do not contain missing values by design~\cite{ctgan}.
\item \textbf{Model Optimization}- To improve the performance of machine learning algorithms, synthetic data can be used for performing data-augmentation to effectively re-balance datasets with imbalanced class labels~\cite{engelmann2020conditional}. Moreover, the synthetic data can be used as a proxy validation-dataset to tweak and validate the most optimal hyper-parameters thereby allowing a more efficient usage of the real data for training machine learning models~\cite{fintz2021synthetic}.
\end{itemize}
\section{Motivation}\label{Ch1:motivation}
The industrial datasets (at stakeholders like banks, insurance companies, and health-care providers) present multi-fold challenges. First of all, such datasets are organized in tables and populated with both continuous and categorical variables, or a mix of the two; e.g., missing values can be considered categorical elements embedded in continuous variables as they are clearly separate from the continuous variable's original distribution. Here, such variables are termed "mixed" variables. Secondly, numeric data variables often have a wide range of values as well as skewed distributions, e.g., the distribution of credit-card transaction amounts: most transactions lie between 0 and 500 dollars (i.e., daily shopping for food and clothes), but exceptionally high transaction amounts surely exist. And last but not least, training tabular GANs on sensitive datasets risks leaking privacy through malicious privacy attacks.
\\
\\
In summary, dealing with the following challenges formed the main motivation of this research:
\begin{itemize}
\item Tabular data comprises of "mixed" variables that consist of both a continuous and a categorical component.
\item Continuous variables exhibit heavily skewed distributions which are difficult to model and reproduce authentically.
\item Tabular GANs compromise the privacy of the original dataset used for training.
\end{itemize}
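To make the notion of a "mixed" variable concrete, the following minimal Python sketch (column values and variable names are purely illustrative, not taken from any dataset used in this thesis) splits such a column into its categorical and continuous components, which a generator could then encode separately:

```python
# Hypothetical "mixed" column: continuous transaction amounts in which
# missing entries (None) act as a categorical element clearly separate
# from the continuous distribution.
col = [120.5, None, 89.0, None, 450.2, 15.75]

# Split the column into (i) a categorical indicator for the special value
# and (ii) the continuous component, which can then be modelled with the
# usual continuous-variable encoding.
is_special = [1 if v is None else 0 for v in col]
continuous = [0.0 if v is None else float(v) for v in col]
```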
\section{Research Questions}\label{Ch1:research_question}
Building on the motivations established in Sec.~\ref{Ch1:motivation}, the thesis revolves around three main research questions:
\begin{itemize}
\item What are the performance capabilities of existing tabular GANs?
\item How to improve upon the tabular generation quality of state-of-the-art tabular GANs?
\item How to train tabular GANs in a privacy-preserving manner?
\end{itemize}
The main research questions are then further divided into the following sub-questions:
\begin{enumerate}
\item What are the performance capabilities of existing tabular GANs?
\begin{enumerate}
\item What is the statistical similarity, ML utility and privacy risk concerning synthetically produced datasets in terms of their corresponding original datasets?
\item What are the challenges faced by existing tabular GANs?
\end{enumerate}
\item How to improve upon the tabular generation quality of state-of-the-art tabular GANs?
\begin{enumerate}
\item How to handle "mixed" variables in tabular data?
\item How to deal with skewed continuous variables?
\end{enumerate}
\item How to prevent privacy leakage for tabular GANs?
\begin{enumerate}
\item How can differential privacy guarantees be instilled for synthetic tabular data generation?
\item Do the theoretical privacy guarantees successfully prevent privacy leakage?
\end{enumerate}
\end{enumerate}
\section{Main Results and Collaborations}
\label{Ch1:Res}
\subsection{Publications}
The work developed in this thesis has led to several contributions which have been submitted to various venues:
\begin{itemize}
\item Aditya Kunar, Robert Birke, Zilong Zhao, Lydia Y. Chen. \textbf{DTGAN: Differential Private Training for Tabular GANs.}, under review \cite{dtgan}.
\item Zilong Zhao, Aditya Kunar, Robert Birke, Lydia Chen. \textbf{CTAB-GAN: Effective Table Data Synthesizing}, under review \cite{ctabgan}. \vspace{-0.5em}
\item Zilong Zhao, Aditya Kunar, Robert Birke, Lydia Chen. \textbf{FedTGAN: Federated Learning Framework for Synthesizing Tabular Data}, under review. \vspace{-0.5em}
\end{itemize}
\subsection{Collaborations}
These works have been conducted thanks to fruitful collaborations with:
\begin{itemize}
\item Dr. Lydia Y. Chen (TU Delft) on tabular GAN algorithm, differential privacy and distributed GAN algorithm,\vspace{-0.5em}
\item Dr. Robert Birke (ABB Research) on tabular GAN algorithm, differential privacy and distributed GAN algorithm, \vspace{-0.5em}
\item Dr. Zilong Zhao (TU Delft) on tabular GAN algorithm, differential privacy and distributed GAN algorithm,\vspace{-0.5em}
\end{itemize}
\section{Contribution of thesis}\label{Ch1:contri}
The primary contributions of this thesis are the following:
\begin{itemize}
\item An extensive benchmark of 4 state-of-the-art tabular GANs in terms of statistical similarity, ML utility and privacy, emphasizing important issues faced by existing methods.
\item A novel conditional generative adversarial network, CTAB-GAN, that can effectively handle "mixed" data-types and skewed continuous variables.
\item Differential private training of CTAB-GAN and rigorous privacy risk evaluation against membership and attribute inference attacks.
\end{itemize}
\section{Report Outline}\label{Ch1:org}
The thesis is organised as follows. In~\autoref{ch2}, the relevant related work and core concepts pertaining to generating privacy-preserving synthetic data are highlighted. In~\autoref{ch3}, an exploratory study quantitatively evaluating 4 state-of-the-art tabular GAN approaches is elucidated; the chapter also highlights challenges faced by existing methods. In~\autoref{ch4}, a novel conditional table generative adversarial network, CTAB-GAN, is proposed to address these challenges. In~\autoref{ch5}, the application of differential privacy in the context of tabular GANs is examined and the empirical robustness against privacy attacks is studied. Lastly, in~\autoref{ch6}, we summarise this thesis by reviewing the research questions established in this chapter, identifying limitations of CTAB-GAN and defining avenues for future work.
\chapter{Differential Privacy for Tabular Data Generators }\label{ch5}
\section{Introduction}
The previous chapters illustrated the efficacy of tabular GANs for learning the training data distributions and generating high utility synthetic datasets. However, utilising privacy sensitive real datasets to train tabular GANs poses a range of privacy issues. Recent studies have shown that GANs may fall prey to membership and attribute inference attacks which greatly endanger the personal information present in the real training data~\cite{gan_leak,priv_mirage}. Therefore, it is imperative to safeguard the training of tabular GANs such that it remains protected against malicious privacy attacks to ensure that synthetic data can be stored and shared across different parties without harm.
The limited existing work~\cite{pategan,long2019scalable,torkzadehmahani2019dp,torfi2020differentially} rely on Differential Privacy (DP)~\cite{dwork2008differential} for training tabular GANs in a privacy preserving manner. DP is a mathematical framework that provides theoretical guarantees bounding the statistical difference between any resulting tabular GAN model trained regardless of the existence of any particular individual's information in the original training dataset. Typically, this is achieved by (i) clipping the gradients for bounding the sensitivity and (ii) injecting noise while updating the parameters of a network during back-propagation~\cite{backprop}. However, the main challenge found in prior work is to calibrate the training of differential private tabular GANs so as to maintain the utility of synthetic datasets for analysis while providing strict theoretical privacy guarantees. Moreover, the existing literature rarely investigates the empirical robustness of their differential private GANs against privacy attacks.
In this chapter, two variants of differential private CTAB-GAN are proposed based on the ideas presented in prior work, most notably, the work done by the authors of DP-WGAN~\cite{xie2018differentially} and GS-WGAN~\cite{chen2020gs}. Furthermore, a rigorous empirical evaluation is conducted to investigate the usefulness of differential private tabular GANs in terms of their utility for analysis given their constraints to preserve privacy especially against malicious privacy attacks such as the membership and attribute inference attacks.
The rest of this chapter is organized as follows: the two main approaches used to employ differential privacy in CTAB-GAN are elucidated in Sec.~\ref{Ch5:dpctabgan}. Then in Sec.~\ref{Ch5:EA}, a rigorous empirical examination of DP-CTABGAN is provided. Finally, Sec.~\ref{Ch5:Conclusion} ends the chapter with a succinct summary of the results and provides directions for further research.
\section{DP-CTABGAN}
\label{Ch5:dpctabgan}
DP-CTABGAN is a novel approach to generate tabular datasets with strong DP guarantees. It utilizes the DP-SGD framework introduced by \cite{abadi2016deep} and the subsampled RDP moments accountant technique~\cite{mironov2017renyi,wang2019subsampled} to preserve privacy and account for its cost, respectively. In addition, it makes use of the Wasserstein loss with gradient penalty~\cite{gulrajani2017improved} to effectively bound the gradient norms with an analytically derived optimal clipping value as shown in the work of \cite{chen2020gan}. The rest of this section is organised as follows: First, Sec.~\ref{Ch5:WGAN_real} highlights the updated training objective of DP-CTABGAN. Next, Sec.~\ref{Ch5:DPD} and Sec.~\ref{Ch5:DPG} present two variants of DP-CTABGAN: Sec.~\ref{Ch5:DPD} details the implementation and privacy analysis for training the discriminator network with DP guarantees, whereas Sec.~\ref{Ch5:DPG} describes the generator network. Both approaches are studied to obtain the most optimal configuration for training DP-CTABGAN.
\subsection{Wasserstein Loss with Gradient Penalty~\cite{gulrajani2017improved}}
\label{Ch5:WGAN_real}
One of the biggest challenges with using DP-SGD is tuning the clipping parameter, $C$, for bounding the gradient norms. Since clipping greatly degrades the information stored in the original gradients~\cite{chen2020gs}, choosing an optimal clipping value that does not significantly impact utility is crucial.
However, tuning the clipping parameter is laborious as the optimal value fluctuates depending on network hyperparameters (i.e., model architecture, learning rate)~\cite{abadi2016deep}. Therefore, inspired by the work of \cite{chen2020gs}, the Wasserstein loss with gradient penalty~\cite{gulrajani2017improved} (refer to Sec.~\ref{Ch5:WGAN}) is chosen as a suitable loss function for training both variants of DP-CTABGAN.
The gradient penalty term is especially useful as it ensures that the discriminator generates bounded gradient norms which are close to 1 under real and generated distributions. Therefore, an optimal clipping threshold of $C=1$ is obtained analytically, avoiding an intensive hyper-parameter search and thereby better preserving the information stored in the gradients after clipping.
Note that the prior implementation of CTAB-GAN made use of batch normalization to help improve the flow of gradients in both the generator and the discriminator network. However, batch normalization is no longer valid under the updated gradient penalty training objective, which penalizes the gradients for each input data point independently. Therefore, \cite{gulrajani2017improved} recommends utilising layer normalization~\cite{ba2016layer} as a drop-in replacement as it doesn't induce any correlations between data points. Indeed, layer normalisation was found to significantly improve the flow of gradient information during training in preliminary experiments.
Additionally, computing the gradient penalty with a simple linear interpolation between real and synthetic data points relies on the assumption that data points form a uniformly distributed hypercube. Since this assumption may not always hold in practice, spherical interpolates~\cite{shoemake1985animating} are used in this work to account for the possible curvature of the latent space; this was found to yield better data utility in preliminary experiments.
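As an illustration of the spherical interpolation referred to above, a minimal plain-Python sketch (the function name, flat-list vectors and fallback threshold are our own simplifications of the framework-based implementation):

```python
import math

def slerp(a, b, t):
    """Spherical linear interpolation between vectors a and b,
    falling back to linear interpolation when they are (near-)parallel."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    dot = sum(x * y for x, y in zip(a, b)) / (na * nb)
    dot = max(-1.0, min(1.0, dot))           # guard against rounding error
    omega = math.acos(dot)                    # angle between a and b
    if omega < 1e-8:                          # (near-)parallel vectors
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    sa = math.sin((1 - t) * omega) / math.sin(omega)
    sb = math.sin(t * omega) / math.sin(omega)
    return [sa * x + sb * y for x, y in zip(a, b)]
```

Interpolates for the gradient penalty would then be drawn as `slerp(x_real, x_fake, t)` with `t` sampled uniformly in $[0,1]$.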
\subsection{DP-Discriminator}
\label{Ch5:DPD}
In the first variant, DP-CTABGAN trains the discriminator using differential private SGD as outlined in algorithm~\ref{Ch5:DPSGD-Algo}, where the total number of iterations $T$ is determined based on the total privacy budget $(\epsilon,\delta)$. Thus, to compute the number of iterations, the privacy budget spent in every iteration must be bounded and accumulated over the training iterations $T$. The subsampled RDP analytical moments accountant technique~\cite{wang2019subsampled} is used for this purpose. The theoretical analysis of the privacy cost is presented below:
\\
\\
\textbf{Theorem 5.2.1} Each discriminator update satisfies $(\lambda,2B\lambda/\sigma^{2})$-RDP where B is the batch size.
\\
\emph{Proof.} Let $f=clip({\bar{g}_D},C)$ be the clipped gradient of the discriminator before adding noise. The sensitivity is derived via the triangle inequality:
\begin{equation}
\Delta_{2}f = \max_{S,S'}||f(S)-f(S')||_{2} \leq 2C
\end{equation}
Since $C=1$ as a consequence of the Wasserstein loss with gradient penalty~\cite{gulrajani2017improved} and by using definition 2.2.3 in Sec.~\ref{Ch5:background}, the DP-SGD procedure denoted as $\mathcal{M}_{\sigma,C}$ parameterized by noise scale $\sigma$ and clipping parameter $C$ may be represented as being $(\lambda,2\lambda/\sigma^{2})$-RDP.
Furthermore, each discriminator update for a batch of real data points $\{x_i,..,x_B\}$ can be represented as
\begin{equation}
\Tilde{g}_D = \frac{1}{B}\sum_{i=1}^{B}\mathcal{M}_{\sigma,C}(\nabla_{\theta_D}\mathcal{L}_{D}(\theta_D,x_i))
\end{equation}
where $\tilde{g}_{D}$ and $\theta_{D}$ represents the perturbed gradients and the weights of the discriminator network, respectively. This may be regarded as a composition of B Gaussian mechanisms. And so, by using theorem 2.2.1 in Sec.~\ref{Ch5:background}, the privacy cost for a single gradient update step for the discriminator can be expressed as $(\lambda,\sum_{i=1}^{B}2\lambda/\sigma^{2})$ or equivalently $(\lambda,2B\lambda/\sigma^{2})$. \tiny$\blacksquare$
\normalsize Note that $\mathcal{M}_{\sigma,C}$ is only applied for those gradients that are computed with respect to the real training dataset~\cite{abadi2016deep,zhang2018differentially}. Hence, the gradients computed with respect to the synthetic data and the gradient penalty term are left undisturbed.
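For illustration, a minimal DP-SGD update step in plain Python (a sketch only: gradients are flat lists, the function name is our own, and the framework-based per-sample machinery is elided). It clips each per-sample gradient to L2 norm $C$, sums, perturbs with the sensitivity-scaled Gaussian mechanism, and averages:

```python
import math
import random

def dp_sgd_step(per_sample_grads, C=1.0, sigma=1.0, rng=random):
    """Clip each per-sample gradient to L2 norm C, sum the clipped
    gradients, add Gaussian noise with std sigma*C, and average over
    the batch (the core of a DP-SGD parameter update)."""
    B = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, C / norm) if norm > 0 else 1.0  # clip to norm C
        for i in range(dim):
            summed[i] += g[i] * scale
    # Gaussian mechanism: noise std proportional to the sensitivity C
    return [(summed[i] + rng.gauss(0.0, sigma * C)) / B for i in range(dim)]
```

With the gradient penalty objective, $C=1$ is used, so clipping distorts the real-data gradients only mildly.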
Next, to further amplify the privacy protection of the discriminator, theorem 2.2.3 defined in Sec.~\ref{Ch5:background} is used where the subsampling rate is defined as $\gamma =B/N$ where $B$ is the batch size and $N$ is the size of the training dataset. Intuitively, subsampling adds another layer of randomness and enhances privacy by decreasing the chances of leaking information about particular individuals who are not included in any given subsample of the dataset.
Lastly, it is worth mentioning that the Wasserstein loss with gradient penalty~\cite{gulrajani2017improved} training objective has one major pitfall with respect to the privacy cost: it encourages the use of a stronger discriminator network to provide more meaningful gradient updates to the generator. This requires performing multiple updates to the discriminator for each corresponding update to the generator, leading to a faster consumption of the overall privacy budget.
\subsection{DP-Generator}
\label{Ch5:DPG}
In the second variant, DP-CTABGAN trains the generator network with DP guarantees. To do so, the gradients flowing from the discriminator and classifier networks (i.e., $g_{G}^{Disc}$ $\&$ $g_{G}^{Class}$), which interact with the original training data, are selectively perturbed (i.e., $\tilde{g}_{G}^{Disc}$ $\&$ $\tilde{g}_{G}^{Class}$) via the familiar DP-SGD procedure, represented as a randomized mechanism $\mathcal{M}_{\sigma,C}$ parameterized by noise scale $\sigma$ and clipping parameter $C$, before updating the generator's weights (i.e., $\theta_{G}$), as shown in Fig.~\ref{fig:Ch5Gen} below. The selective perturbation of the gradients is necessary as the combined training objective of the generator, i.e., the classification, information and generator losses (refer to Sec.~\ref{Ch4:TP}), doesn't entirely depend on the original training data. As an example, consider the generator loss: it is only used to ensure that the generated data exactly matches the constraint given by the conditional vector sampled randomly during training, and is thus independent of the real training data itself.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.1]{Figures/Ch_5/Gen.png}
\caption{\centering Privacy Preserving Generator Training where $G$,$D$ and $\mathcal{C}$ denote the generator, discriminator and classifier networks with weights $\theta_G$, $\theta_D$ and $\theta_{\mathcal{C}}$, respectively.
}
\label{fig:Ch5Gen}
\end{figure}
With this in mind, the privacy analysis for training the generator via DP-SGD~\cite{abadi2016deep} utilizing the aforementioned subsampled RDP moments accountant~\cite{wang2019subsampled} is presented.
\\
\\
\textbf{Theorem 5.2.2} Each generator update satisfies $(\lambda,6B\lambda/\sigma^{2})$-RDP where B is the batch size.
\\
\emph{Proof.} Let $f_{Disc}=clip({\bar{g}_G^{Disc}},C)$ be the clipped gradient of the generator computed with respect to $\mathcal{L}_{G}$ before adding noise. The sensitivity is derived via the triangle inequality:
\begin{equation}
\Delta_{2}f_{Disc} = \max_{S,S'}||f_{Disc}(S)-f_{Disc}(S')||_{2} \leq 2C
\end{equation}
Since $C=1$ as before and by using definition 2.2.3 in Sec.~\ref{Ch5:background}, the randomized mechanism $\mathcal{M}_{\sigma,C}$ may similarly be represented as being $(\lambda,2\lambda/\sigma^{2})$-RDP.
However, due to the addition of the information loss denoted as $\mathcal{L}_{I}$, the generator requires an additional fetch of gradients from the discriminator (i.e., $g_G^{Disc}$) computed with respect to $\mathcal{L}_{I}$, which in turn doubles the number of times $\mathcal{M}_{\sigma,C}$ is applied. Note that the sensitivity remains the same, leading to an identical privacy cost (i.e., $(\lambda,2\lambda/\sigma^{2})$-RDP).
Likewise for the classifier loss expressed as $\mathcal{L}_{C}$, let $f_{Class}=clip({\bar{g}_G^{Class}},C)$ be the clipped gradient of the generator back-propagated from the classifier before adding noise. The sensitivity is similarly derived via the triangle inequality:
\begin{equation}
\Delta_{2}f_{Class} = \max_{S,S'}||f_{Class}(S)-f_{Class}(S')||_{2} \leq 2C
\end{equation}
For ease of derivation, the clipping parameter for the classifier module is also, $C=1$. Thus, by using definition 2.2.3 in Sec.~\ref{Ch5:background} once again, $\mathcal{M}_{\sigma,C}$ is $(\lambda,2\lambda/\sigma^{2})$-RDP.
Thus, to do a single update of the generator's weights $\theta_{G}$, the randomized mechanism $\mathcal{M}_{\sigma,C}$ is applied twice for the discriminator network and once for the classifier network, each application incurring a privacy cost of $(\lambda,2\lambda/\sigma^{2})$-RDP. Formally, this can be expressed as
\begin{equation}
\tilde{g}_G = \sum_{i=1}^{|L|} \mathcal{M}_{\sigma,C}(\nabla_{\theta_G}\mathcal{L}_{i}(\theta_G))
\end{equation}
where $L$ represents the set of losses for which the gradients are computed (i.e., \{${\mathcal{L}_{G}, \mathcal{L}_{I}, \mathcal{L}_{C}}$\}) and $\tilde{g}_{G}$ $\&$ $\theta_{G}$ represent the perturbed gradients and the weights of the generator network, respectively. This sequence can once again be interpreted as a composition of Gaussian mechanisms which allows the use of theorem 2.2.1 defined in Sec.~\ref{Ch5:background}, to express the cost for an individual data point as $(\lambda,\sum_{j=1}^{3}2\lambda/\sigma^{2})$-RDP. And, the privacy cost for a batch of data points $\{x_i,..,x_B\}$ can be similarly extended to be $(\lambda,\sum_{i=1}^{B}\sum_{j=1}^{3}2\lambda/\sigma^{2})$ or equivalently $(\lambda,6B\lambda/\sigma^{2})$. \tiny$\blacksquare$
\normalsize Next, to amplify the privacy protection for the generator, theorem 2.2.3 defined in Sec.~\ref{Ch5:background} is analogously used. However, in this case, the original training dataset is divided into disjoint subsets of equal size, where a unique discriminator is trained on each subset independently. The size of each subsampled dataset is $N/N_{d}$ where $N_{d}$ is the total number of discriminators and $N$ is the size of the full training dataset. Thus, during training, one of the discriminators is chosen randomly in every iteration to provide gradient updates to the generator on the basis of its corresponding subsampled dataset. In this way, the subsampling rate for the generator is defined to be $\gamma = 1/N_{d}$.
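The disjoint partitioning behind this multiple-discriminator setup can be sketched as follows (an illustrative helper of our own; any remainder rows are dropped to keep the subsets equally sized):

```python
import random

def partition(data, n_disc, rng=random):
    """Shuffle and split a dataset into n_disc disjoint, equally sized
    subsets, one per discriminator; each training iteration then picks
    one discriminator (and hence one subset) uniformly at random."""
    idx = list(range(len(data)))
    rng.shuffle(idx)
    size = len(data) // n_disc
    return [[data[i] for i in idx[k * size:(k + 1) * size]]
            for k in range(n_disc)]
```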
Unfortunately, training multiple discriminators on smaller subsamples is problematic due to the lack of enough training iterations for any given discriminator in comparison to the generator. Moreover, reducing the number of samples via subsampling increases the potential of over-fitting the discriminators on their respective subsamples. \cite{chen2020gs} recommends alleviating the first problem by pre-training the multiple discriminator networks with a standard generator without DP, since pre-training the discriminators does not breach the DP guarantees of the generator. In practice, however, the results were not found to be affected by the presence of pre-trained discriminators in preliminary experiments.
Lastly, definition 2.2.2 defined in~Sec.\ref{Ch5:background} is used to convert the overall cumulative privacy cost computed in terms of RDP back to $(\epsilon,\delta)$-DP for both approaches. Practically, these computations are performed via the official implementation\footnote{\url{https://github.com/yuxiangw/autodp}} provided by \cite{wang2019subsampled}.
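The conversion step itself is simple. A sketch of the standard RDP-to-$(\epsilon,\delta)$-DP conversion follows (the accumulated RDP curve and the candidate orders are assumed given; the actual computations in this thesis use the autodp implementation cited above):

```python
import math

def rdp_to_dp(rdp_eps, lam, delta):
    """Convert an accumulated RDP guarantee (lam, rdp_eps) to
    (eps, delta)-DP: eps = rdp_eps + log(1/delta) / (lam - 1)."""
    return rdp_eps + math.log(1.0 / delta) / (lam - 1)

def best_eps(rdp_curve, delta):
    """Given RDP costs at several orders [(lam, rdp_eps), ...],
    report the tightest (smallest) converted epsilon."""
    return min(rdp_to_dp(e, lam, delta) for lam, e in rdp_curve)
```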
\section{Experimental Analysis}
\label{Ch5:EA}
\subsection{Experimental Setup}
\label{Ch5:ES}
\textbf{Datasets}- To evaluate DP-CTABGAN, 3 out of the 5 datasets introduced in Sec.~\ref{Ch3:DD} are used i.e., Adult~\cite{UCIdataset}, Credit~\cite{kagglecredit} and Loan~\cite{kaggleloan}. Refer to Tab.~\ref{table:DDE} detailing each dataset.
\\
\textbf{Baselines}- Both variants of DP-CTABGAN are compared with 2 state-of-the-art architectures: PATE-GAN~\cite{pategan}\footnote{\url{https://github.com/vanderschaarlab/mlforhealthlabpub/tree/main/alg/pategan}} and DP-WGAN~\cite{xie2018differentially}\footnote{\url{https://github.com/BorealisAI/private-data-generation/blob/master/models/dp_wgan.py}}. Additionally, to present a fair comparison between DP-WGAN and PATE-GAN, a common network architecture for both the generator and discriminator is used (refer to Sec.~\ref{appendix:1}). Tab.~\ref{Ch5:tab1} outlines the salient features of all methods used in this evaluation.
Lastly, it is important to note that for DP-WGAN, the authors originally derive the privacy cost using the moments accountant technique~\cite{abadi2016deep}. However, in this work, to compare fairly across different approaches that all make use of DP-SGD with Gaussian mechanisms, the more optimal subsampled RDP accountant~\cite{mironov2017renyi,wang2019subsampled} is used. This is because the RDP accountant allows for even tighter bounds on the privacy budget than the moments accountant, enabling less noise to be added during training while ensuring similar privacy guarantees.
\subsection{Evaluation Metrics}
\textbf{Statistical Similarity $\&$ ML Utility}- The evaluation metrics concerning statistical similarity and ML utility are borrowed from Sec.~\ref{Ch3:metrics}. However, there are a few notable differences worth mentioning.
Firstly, with respect to the statistical similarity, unlike previous chapters, the WD is calculated after performing a min-max normalisation\footnote{\url{https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html}} of both the real and synthetic values using the real maximum and minimum values of the corresponding column. This allows the Wasserstein distances to be averaged more reliably across columns with drastically varying scales.
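This normalised WD can be sketched as follows. The sketch below is a minimal NumPy illustration: scikit-learn's MinMaxScaler is replaced by a direct rescaling with the real column's min/max, and it uses the fact that, for equal-size 1-D samples, the Wasserstein-1 distance equals the mean absolute difference of the sorted samples.

```python
import numpy as np

def normalised_wd(real_col, fake_col):
    """Wasserstein distance after min-max scaling BOTH columns with the real
    column's min/max, so distances are comparable across column scales.
    Assumes equal-size 1-D samples (illustrative sketch only)."""
    real = np.asarray(real_col, dtype=float)
    fake = np.asarray(fake_col, dtype=float)
    lo, hi = real.min(), real.max()
    span = (hi - lo) if hi > lo else 1.0
    r, f = (real - lo) / span, (fake - lo) / span
    # For equal-size samples, W1 = mean |sorted(r) - sorted(f)|.
    return float(np.mean(np.abs(np.sort(r) - np.sort(f))))
```

Because both columns are rescaled with the real column's statistics, a per-column distance of, say, 0.1 means the same thing for a salary column as for an age column, which is what makes the cross-column average meaningful.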
Secondly, for evaluating the ML utility, the average precision score (APR)\footnote{\url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html}} is introduced, as it provides a more reliable measure of performance than the AUC given the class imbalance in the datasets used. Moreover, the SVM model is eliminated from the study due to practical limitations in outputting predicted probabilities in a time-efficient manner. Lastly, min-max normalisation is used as a pre-processing step before training the ML models, as in the evaluation done by \cite{pategan}. \\
\\
\textbf{Inference Attacks}- This chapter introduces two new metrics for evaluating the empirical robustness of GANs against malicious privacy attacks. More specifically, the membership and attribute inference attacks are launched against each model to expose the risk of privacy loss based on the rigorous framework provided by \cite{priv_mirage}.\\
\\
\textbf{The membership inference attack~\cite{chen2020gan}} is a binary classification problem in which an attacker tries to predict if a particular target data point $t$ has been used to train a victim generative model. This work assumes that the attacker only needs access to a black-box tabular GAN model, a reference dataset $\mathcal{R}$ and $t$ for which the inference must be made~\cite{priv_mirage}.
\begin{figure}[htb]
\centering
\includegraphics[scale=.06]{Figures/Ch_5/MI_Attack.png}
\caption{\centering Membership Inference Attack Pipeline}
\label{fig:MIE}
\end{figure}
As illustrated in Fig.~\ref{fig:MIE}, to launch an attack, the attacker prepares two training datasets with and without the target record $t$ using the reference dataset $\mathcal{R}$ (i.e., $\mathcal{R}$, $\mathcal{R}\oplus t$). Next, the attacker uses black-box access to the model for training two separate models on each dataset. The attacker then uses these to generate $s$ batches of synthetic data each consisting of $r$ rows, represented as $\mathcal{S}^{s}_{r}$. The synthetic batches are assigned a label of 0 and 1, respectively, based on the presence of $t$ in the training dataset.
Thereafter, each batch of synthetic data is processed by a feature extraction method summarizing the information contained in the batch into a single vector. This is done in two ways: (i) naive extraction, which computes the mean, median and variance of every continuous column, as well as the number of unique categories and the most and least frequently occurring category of every categorical column; (ii) correlation extraction, which computes the pairwise correlations between all columns, where the categorical columns are dummy-encoded.
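As an illustration, the naive extraction step can be sketched as below. The dict-based batch layout and column names are assumptions made here for clarity; the framework of \cite{priv_mirage} defines the exact interface.

```python
import numpy as np

def naive_features(batch, categorical_cols):
    """Summarize one synthetic batch (dict: column name -> 1-D values) into a
    single flat feature vector, mirroring the 'naive' extraction above."""
    feats = []
    for col, values in batch.items():
        if col in categorical_cols:
            cats, counts = np.unique(values, return_counts=True)
            feats += [len(cats),                # number of unique categories
                      cats[np.argmax(counts)],  # most frequent category
                      cats[np.argmin(counts)]]  # least frequent category
        else:
            v = np.asarray(values, dtype=float)
            feats += [v.mean(), np.median(v), v.var()]
    return feats
```

Each batch of $r$ rows thus collapses into one fixed-length vector, and the $s$ labeled vectors form the training/test data for the attack classifier.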
This leads to the creation of a final dataset containing an equal number of processed samples per class. This dataset is split into train and test sets. An attack model is trained on the training set and used to compute the privacy gain as $P_{Gain}=\frac{(P_{Real} - P_{Fake})}{2}$, where $P_{Fake}$ is the attack model's average probability of successfully predicting the correct label on the test set and $P_{Real}=1$, since having access to the original training data ensures full knowledge of $t$'s presence~\cite{priv_mirage}.
To conduct the membership inference evaluation, 4000 rows of real data were sampled from each dataset to form the reference dataset (i.e., $\mathcal{R}$) used to train the synthetic models. Each batch for feature extraction was chosen to be of size $r=400$, and $s=1200$ batches were generated such that the training dataset was of size 1000 with balanced classes and the test set contained 200 samples, also with balanced classes. To train the attack model, the Random-Forest-Classifier\footnote{\url{https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html}} was used. The experiments were repeated 5 times with 5 different target records $t$ for each dataset and the results were averaged.
\\
\\
\textbf{An attribute inference attack~\cite{priv_mirage}} is defined as a regression problem in which the attacker attempts to predict the values of a sensitive target column provided they have black-box access to a generative model.
\begin{figure}[htb]
\centering
\includegraphics[scale=.062]{Figures/Ch_5/AI_Attack.png}
\caption{\centering Attribute Inference Attack Pipeline}
\label{fig:MIA1}
\end{figure}
To launch an attribute inference attack and evaluate the privacy risk (refer to Fig.~\ref{fig:MIA1}), a dataset $\mathcal{R}$ sampled from the real distribution is split into train and test datasets (i.e., $\mathcal{R}_{Train}$ $\&$ $\mathcal{R}_{Test}$). $\mathcal{R}_{Train}$ is fed into a generative model for generating a corresponding synthetic training dataset (i.e., $\mathcal{G}_{Train}$).
A linear regression model\footnote{\url{https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html}} is then used for estimating the relationship between the independent variables known to the attacker and the dependent sensitive variable for both $\mathcal{R}_{Train}$ and $\mathcal{G}_{Train}$. Then, to evaluate the privacy risk, the privacy gain is computed analogously as $P_{Gain} = \frac{(P_{Real}-P_{Fake})}{2}$, where $P_{Real}$ $\&$ $P_{Fake}$ denote the average posterior probabilities of correctly predicting the sensitive attribute on the real test set given the linear models fitted on $\mathcal{R}_{Train}$ and $\mathcal{G}_{Train}$, respectively~\cite{priv_mirage}.
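A minimal sketch of this scoring is given below. It uses an ordinary-least-squares fit in place of scikit-learn's LinearRegression and turns residuals on the real test set into an average "posterior probability" via a Gaussian pseudo-likelihood; this scoring rule is one plausible instantiation assumed here for illustration, and the exact computation in \cite{priv_mirage} may differ.

```python
import numpy as np

def attack_success(train_X, train_y, test_X, test_y):
    """Fit OLS on the (real or synthetic) training data, then score it on the
    real test set as an average Gaussian pseudo-likelihood in (0, 1].
    ASSUMPTION: this scoring rule stands in for the framework's exact one."""
    A = np.column_stack([train_X, np.ones(len(train_X))])   # add intercept
    w, *_ = np.linalg.lstsq(A, train_y, rcond=None)
    pred = np.column_stack([test_X, np.ones(len(test_X))]) @ w
    resid = np.asarray(test_y, dtype=float) - pred
    sigma = max(float(np.std(resid)), 1e-6)                 # avoid div by 0
    return float(np.mean(np.exp(-0.5 * (resid / sigma) ** 2)))

def privacy_gain(p_real, p_fake):
    # P_Gain = (P_Real - P_Fake) / 2, exactly as defined above.
    return (p_real - p_fake) / 2
```

A gain near 0 then means the synthetic-trained regressor predicts the sensitive column about as well as the real-trained one.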
For performing the attribute inference evaluation, 5000 real samples (i.e., $\mathcal{R}$) were sampled from each dataset, where 4900 samples were used for creating the training dataset (i.e., $\mathcal{R}_{Train}$) and 100 for the testing dataset (i.e., $\mathcal{R}_{Test}$). Moreover, the sensitive attribute for the Adult, Loan and Credit datasets was chosen to be ``Age'', ``Age'' and ``Amount'', respectively. The experiment was repeated 5 times and the average results are presented.
\subsection{Results}
This section presents the results for all baselines based on the criteria established previously. Note that for measuring the statistical similarity and ML efficacy, the privacy budget $\epsilon$ is varied between 1 and 100 to study the influence of a strong vs. a weak privacy constraint, respectively.
However, for evaluating the risk of privacy loss via membership and attribute inference attacks, a strict privacy budget of $\epsilon = 1$ is chosen as commonly used in prior work~\cite{pategan}. This is done to thoroughly test the effectiveness of DP techniques offering strong theoretical guarantees empirically. Refer to Sec.~\ref{appendix:2} of the appendix for details concerning hyper-parameters used to generate samples from all baselines for conducting the experiments.
Lastly, all result tables feature DP-CTABGAN with no privacy budget (i.e.,$\epsilon=\infty$) simply denoted as CTAB-GAN to be used as a reference point for examining the influence of differential privacy for training CTAB-GAN. Note that $\delta=1e-5$ is fixed across all experiments and the best results are highlighted in bold among only those models that are trained with finite privacy budgets.\\
\\
\textbf{Statistical Similarity $\&$ ML Utility}-
\label{Ch5:Res}
\begin{enumerate}
\item \textbf{Statistical Similarity}- As shown in Tab.~\ref{table:SS_allE1} and Tab.~\ref{table:SS_allE2}, among all baseline models, D-DP-CTABGAN is the only model which consistently improves across all three metrics when the privacy budget is increased.
Similarly, G-DP-CTABGAN sees an improvement across both the Avg-JSD and Avg-WD. However, PATE-GAN and DP-WGAN do not show signs of improvement consistently across any of the metrics. Moreover, they perform worse than both variants of DP-CTABGAN at both levels of epsilon.
This highlights their inability to capture the statistical distributions during training despite a loose privacy budget purely due to the lack of an effective training framework.
Lastly, it is worth noting that G-DP-CTABGAN features the best correlation distance at $\epsilon=1$ and $\epsilon=100$, showcasing that training the discriminator reliably is hugely beneficial for capturing correlations in the data as compared to D-DP-CTABGAN. Naturally, there is still a large performance gap between CTAB-GAN and both variants of DP-CTABGAN due to the application of DP.\\
\begin{table}[htb]
\centering
\caption{\centering Statistical similarity: 3 measures averaged over 3 datasets with a privacy budget of $\epsilon=1$}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Avg JSD} & \textbf{Avg NWD} & \textbf{Diff. Corr.} \\
\hline
\small{PATE-GAN} & 0.487 & 0.259 & 3.982 \\
\small{DP-WGAN} &0.299 & 0.232 & 3.834 \\
\small{D-DP-CTABGAN} & \textbf{0.246} & \textbf{0.063} & 4.168 \\
\small{G-DP-CTABGAN} & 0.376 & 0.189 & \textbf{3.065} \\
\small{CTAB-GAN} & 0.028 & 0.01 & 1.607 \\
\hline
\end{tabular}
\label{table:SS_allE1}
\end{table}
\begin{table}[htb]
\centering
\caption{\centering Statistical similarity: 3 measures averaged over 3 datasets with a privacy budget of $\epsilon=100$ }
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Avg JSD} & \textbf{Avg NWD} & \textbf{Diff. Corr.} \\
\hline
\small{PATE-GAN} & 0.358 & 0.259 & 4.837 \\
\small{DP-WGAN} & 0.304 & 0.222 & 4.57 \\
\small{D-DP-CTABGAN} & \textbf{0.127} & \textbf{0.047} & 3.648 \\
\small{G-DP-CTABGAN} & 0.389 & 0.174 & \textbf{3.21} \\
\small{CTAB-GAN} & 0.028 & 0.01 & 1.607 \\
\hline
\end{tabular}
\label{table:SS_allE2}
\end{table}
\item \textbf{ML Efficacy}- From the results presented in Tab.~\ref{table:ML_allE1} and Tab.~\ref{table:ML_allE2}, surprisingly, PATE-GAN performs worse in terms of ML utility with a looser privacy budget. This is mainly because the student discriminator is trained solely with generated samples of poor statistical similarity, as found in Tab.~\ref{table:SS_allE2}.
Moreover, as before, only the D-DP-CTABGAN model consistently improves across all metrics with a looser privacy budget, and it showcases the best performance for both the F1-score and APR metrics across all baselines at both privacy budgets. This finding suggests that, based on the implementations of D-DP-CTABGAN and G-DP-CTABGAN used in this work, training the discriminator with DP guarantees is more optimal. This is in line with the challenges faced by G-DP-CTABGAN due to subsampling, which hugely degrades performance by training multiple discriminators each using a smaller number of samples.
Finally, the performance increase of D-DP-CTABGAN in comparison to other baselines can be explained by its sophisticated neural network architecture (i.e., conditional GAN) and improved training objective (i.e., Wasserstein loss with gradient penalty). However, as a consequence of the application of DP, the performance decrease in comparison to CTAB-GAN is noticeably large.\\
\begin{table}[htb]
\centering
\caption{\centering Difference of ML accuracy (\%), F1-score, AUC and APR between original and synthetic data: average over 3 different datasets and a privacy budget $\epsilon=1$}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Accuracy} & \textbf{AUC} & \textbf{APR}& \textbf{F1-Score} \\
\hline
\small{PATE-GAN} & 10.8\% & \textbf{0.246} & 0.576 & 0.367 \\
\small{DP-WGAN} & \textbf{8.2\%} & 0.408 & 0.58 & 0.368 \\
\small{D-DP-CTABGAN} & 16.1\%& 0.302& \textbf{0.483} & \textbf{0.34}\\
\small{G-DP-CTABGAN} & 32.3\%& 0.377 & 0.604 & 0.454 \\
\small{CTABGAN} & 2.6\% & 0.042 & 0.143 & 0.097 \\
\hline
\end{tabular}
\label{table:ML_allE1}
\end{table}
\begin{table}[htb]
\centering
\caption{\centering Difference of ML accuracy (\%), F1-score, AUC and APR between original and synthetic data: average over 3 different datasets and a privacy budget $\epsilon=100$}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Accuracy} & \textbf{AUC} & \textbf{APR}& \textbf{F1-Score} \\
\hline
\small{PATE-GAN} & 37.4\% & 0.416 & 0.566 & 0.412 \\
\small{DP-WGAN} & \textbf{10.8\%} & 0.373 & 0.592 & 0.364 \\
\small{D-DP-CTABGAN} & 13\% & \textbf{0.265} & \textbf{0.475} & \textbf{0.262} \\
\small{G-DP-CTABGAN} & 13.7\% & 0.387 & 0.565 & 0.374 \\
\small{CTABGAN} & 2.6\% & 0.042 & 0.143 & 0.097 \\
\hline
\end{tabular}
\label{table:ML_allE2}
\end{table}
\end{enumerate}
\textbf{Privacy Impact Against Inference Attacks}-
\begin{enumerate}
\item \textbf{Membership Inference Attack}- From the results shown in Tab.~\ref{table:PP_allE}, it is found that all DP baselines provide an empirical privacy gain close to 0.25 for both feature extraction methods. This indicates that differentially private methods provide strong privacy protection against membership attacks, ensuring that the average probability of success for any attack is close to the attacker's original prior, i.e., 0.5. Furthermore, it is found that D-DP-CTABGAN and G-DP-CTABGAN provide the highest security against a membership attack with the naive and correlation feature extraction methods, respectively.
Moreover, there is a clear decrease in the privacy gain achieved by CTABGAN, showcasing that DP is needed to provide a stronger defense against membership inference attacks.
\item \textbf{Attribute Inference Attack}- Tab.~\ref{table:PP_allE} shows that PATE-GAN provides the greatest security. Both versions of DP-CTABGAN provide less privacy protection than the other baselines, and CTABGAN provides the worst security. This is due to the superior quality of the synthetic data offered by CTABGAN and its DP variants, which enhances the attacker's probability of successfully inferring sensitive information. These results highlight the inherent trade-off between privacy and data utility, i.e., increasing the utility directly worsens the privacy and vice versa.
It is worth noting that the privacy gain for the attribute inference attack is close to 0 for all baselines, suggesting that the overall privacy protection offered against attribute inference attacks is quite low. However, the privacy gain is computed with respect to the real data. Thus, if the real data itself yields a low probability of successfully inferring the correct target values for a sensitive attribute, the synthetic dataset will perform similarly, resulting in a privacy gain close to 0.
\end{enumerate}
\begin{table}[htb]
\centering
\caption{\centering Empirical privacy gain against membership attack with naive $\&$ correlation feature extraction and attribute inference attack: average over 3 different datasets with a privacy budget $\epsilon=1$}
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Naive Privacy Gain} & \textbf{Correlation Privacy Gain} & \textbf{Attribute Inference Privacy Gain}\\
\hline
\small{PATE-GAN} & 0.25 & 0.25 & \textbf{0.042} \\
\small{DP-WGAN} & 0.255 & 0.256 & 0.04 \\
\small{D-DP-CTABGAN}& \textbf{0.266} & 0.248 & 0.037 \\
\small{G-DP-CTABGAN}& 0.245 & \textbf{0.26} & 0.038 \\
\small{CTABGAN}& 0.238 & 0.233 & -2e-4 \\
\hline
\end{tabular}
}
\label{table:PP_allE}
\end{table}
\section{Conclusion}
\label{Ch5:Conclusion}
In this chapter, two variants of DP-CTABGAN were proposed and their corresponding privacy analyses were outlined. Based on theoretical derivations and empirical results, D-DP-CTABGAN proved to be the superior configuration for integrating DP guarantees into CTAB-GAN. Moreover, D-DP-CTABGAN consistently outperformed existing state-of-the-art baselines in generated sample quality in terms of both statistical similarity and ML utility metrics.
Additionally, both variants of DP-CTABGAN were found to be resilient towards membership and attribute inference attacks. Therefore, this work showcases the effectiveness of DP for protecting the privacy of sensitive datasets being used for training tabular GANs.
However, further enhancement of the quality of synthetic data at strict privacy budgets (i.e., $\epsilon \leq 1$) is still needed. Ultimately, there is an inherent trade-off between privacy and utility and obtaining the most optimal balance between both is left for future work.
\chapter{Conclusion}\label{ch6}
Tabular data is a key asset for data-driven industries that are fueled by modern advancements in the field of machine learning. However, utilising real tabular data risks leaking private information about individuals. Therefore, tabular GANs have gained vital importance as a viable solution to utilise tabular data without breaching privacy.
\\
This thesis dealt with three main research questions pertaining to tabular GANs:
\begin{itemize}
\item \textit{"What are the performance capabilities of existing tabular GANs?"}- To answer this research question, 4 state-of-the-art tabular GAN models were extensively evaluated on 5 datasets in terms of their ML utility, statistical similarity and privacy, and their major strengths and weaknesses were highlighted.
\item \textit{"How to improve upon the tabular generation quality of state-of-the-art tabular GANs?"}- Based on the exposed difficulties of existing methods, this work developed a novel conditional tabular GAN architecture, CTAB-GAN. CTAB-GAN was shown to effectively handle "mixed" data types and skewed variables, and it improved upon prior work in data utility for ML applications by up to 17\% in accuracy for 5 ML models on complex datasets while maintaining a safer privacy distance.
\item \textit{"How to prevent privacy leakage for tabular GANs?"}- The use of differential privacy for enhancing the privacy of tabular GAN training was examined. Moreover, CTABGAN with DP guarantees was rigorously tested alongside state-of-the-art DP-GANs with respect to generation quality and privacy protection against membership and attribute inference attacks. Our results using 3 datasets and 4 ML models showed that DP-CTABGAN maintains the highest data utility, by up to 18\% in terms of the average precision score as compared to prior work, while reliably withstanding privacy attacks.
\end{itemize}
To conclude the thesis, a few important limitations of this work and corresponding future directions are highlighted:
\begin{itemize}
\item CTAB-GAN makes use of convolution operations that rely on a square matrix representation of the input data. This requires additional padding that adds useless information to the data. Therefore, the use of rectangular kernel operations that can be executed directly on rectangular-shaped input data can be further looked into.
\item CTAB-GAN suffers from poor convergence on small-sized datasets. Therefore, effectively reducing the training complexity of CTAB-GAN for smaller datasets is needed, and simpler data transformations that allow learning dependencies between variables without increasing the input dimensionality need further exploration.
\item There is a large gap between the data utility of synthetic data generated with and without strict privacy guarantees. Moreover, determining the most optimal privacy budget $\epsilon$ that best balances the privacy/utility trade-off requires future consideration.
\end{itemize}
\afterpage{\blankpage}
\chapter{CTAB-GAN: Effective Tabular Data Synthesizing}\label{ch4}
\section{Introduction}
CTAB-GAN is a novel tabular data generator designed to overcome the challenges outlined in Sec.~\ref{Ch3:Challenges}. In CTAB-GAN we invent a \textit{Mixed-type Encoder} based on the \textit{mode-specific normalization (MSN)} introduced in the work of \cite{ctgan}. The \textit{Mixed-type Encoder} can better represent a mix of categorical and continuous variables as well as deal with missing values. Moreover, CTAB-GAN is based on a \textit{conditional GAN (CGAN)} and utilizes \textit{training-by-sampling} to efficiently treat imbalanced data variables. Additionally, it features the \textit{classification}, \textit{information} and \textit{generator losses}~\cite{tablegan,ctgan} for training the generator, improving semantic integrity and training stability. Furthermore, CTAB-GAN makes use of the underlying \textit{DCGAN architecture}~\cite{radford2015unsupervised} for enhancing the quality of generated samples. Lastly, CTAB-GAN utilizes a light-weight \textit{log-transformation} to overcome the mode collapse problem for heavy long-tailed numerical variables.
Hence, in Sec.~\ref{Ch4:Design}, the novel design aspects of CTAB-GAN are highlighted, and Sec.~\ref{Ch4:exp} provides an experimental analysis comparing CTAB-GAN with the state-of-the-art methods introduced in Sec.~\ref{Ch3:Baselines}. Lastly, Sec.~\ref{Ch4:Conclusion} ends with a short conclusion of the chapter.
\section{Design of CTAB-GAN}
\label{Ch4:Design}
\subsection{Network Structure}
The structure of CTAB-GAN comprises three blocks: Generator $\mathcal{G}$, Discriminator $\mathcal{D}$ and an auxiliary Classifier $\mathcal{C}$. Moreover, since our algorithm is based on a \textit{conditional GAN}, the generator requires a noise vector plus a \textit{conditional vector} as input (refer to Sec.~\ref{Ch4:cgan}). Additionally, the discriminator is fed both the real and synthetic data after concatenating them with their corresponding \textit{conditional vectors} as input (see Fig.~\ref{fig:STD_1}).
$\mathcal{G}$ and $\mathcal{D}$ are implemented using the \textit{DCGAN neural network architecture}~\cite{radford2015unsupervised} (refer to section \ref{Ch4:dcgan}) inspired from the work of \cite{tablegan}. This architecture has shown promising results in terms of generating synthetic data with high ML utility and was found to most optimally capture the correlations in the original data (refer to Sec.~\ref{Ch3:Results}). Therefore, it is used as the underlying neural network architecture for training CTAB-GAN.
$\mathcal{C}$ (refer to Sec.~\ref{Ch4:tabloss}) consists of 4 fully connected layers with $256$ nodes each, which are all followed by a \textit{LeakyReLU layer} with a leaky ratio of $0.20$ and are trained using \textit{dropout regularization} with a probability parameter of $0.5$. Note that the final ($5^{th}$) output layer of the classifier is adapted to deal with both \textit{binary} $\&$ \textit{multi-class classification} problems.
An important distinction concerning the classification loss as presented in the work of \cite{tablegan} is that this work utilizes an MLP neural architecture\footnote{The MLP architecture was chosen as it led to superior performance in preliminary experiments.} for the \textit{auxiliary classifier} and caters to both \textit{binary} and \textit{multi-class classification} problems. In contrast, TableGAN features an \textit{auxiliary classifier} with the same neural architecture as the discriminator and can only deal with binary classification problems.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{Figures/Ch_4/STD2.png}
\caption{\centering Synthetic Tabular Data Generation via CTAB-GAN}
\label{fig:STD_1}
\end{figure}
\subsection{Data Representation}
\label{Ch4:data_representation}
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\linewidth]{Figures/Ch_4/bi-mode.png}
\caption{\centering Mixed type variable distribution with VGM estimation}
\label{fig:gmm_distribution_mixed}
\end{figure}
The original tabular training data is encoded variable by variable. This work distinguishes between three types of variables: \textit{categorical}, \textit{continuous} $\&$ \textit{mixed}.
\textbf{Mixed variables} are those that contain both \textit{categorical} and \textit{continuous} values, an example is a \textit{continuous variable} with missing values. The missing values clearly do not belong to the continuous domain. Thus, they are treated separately as a \textit{categorical component} of a \textit{mixed variable}.
The novel \textit{mixed-type encoder} is proposed to deal with such a variable. With this encoder, values of \textit{mixed variables} are seen as concatenated value-mode pairs based on the \textit{MSN technique} (refer to Sec.~\ref{Ch4:msn}) introduced by \cite{ctgan}. The encoding is illustrated via the exemplary distribution of a \textit{mixed variable} shown in red in Fig.~\ref{fig:gmm_distribution_mixed}. One can see that values can either be exactly $\mu_0$ or $\mu_3$ (the \textit{categorical part}) or distributed around two peaks in $\mu_1$ and $\mu_2$ (the \textit{continuous part}). The \textit{continuous part} has been explained in Sec.~\ref{Ch4:msn}.
The \textit{categorical part} (e.g., $\mu_0$ or $\mu_3$) in Fig.~\ref{fig:gmm_distribution_mixed} is treated similarly, except $\alpha$ is directly set to 0 as the category is determined only by the one-hot encoding representing the modes. For example, for a value in $\mu_3$, the final encoding is given by $0 \bigoplus [0, 0, 0, 1]$.
Finally, \textit{categorical variables} are encoded via a one-hot vector $\gamma$. Missing values in this case are simply treated as a separate unique class, and an extra bit is added to the one-hot vector to account for it.
Thus, a row with $[1, \dots, N]$ variables is encoded by concatenation of the encoding of all variables, i.e. either $(\alpha \bigoplus \beta)$ for \textit{continuous} $\&$ \textit{mixed variables} or $\gamma$ for \textit{categorical variables}. Having $n$ \textit{continuous/mixed variables} and $m$ \textit{categorical variables} ($n + m = N$) the final encoding can be expressed as:
\begin{equation}
\label{condvec}
\bigoplus_{i=1}^{n} \alpha_i\mathsmaller{\bigoplus} \beta_{i} \;
\bigoplus_{j=n+1}^{N} \gamma_{j}
\end{equation}
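A minimal sketch of this concatenated per-variable encoding is given below. It assumes a fitted VGM has already produced each scalar $\alpha$ and mode index; the spec layout (tuples per variable) is an illustrative assumption, not the thesis implementation.

```python
import numpy as np

def encode_row(specs):
    """Concatenate per-variable encodings into one row vector.
    Continuous/mixed variables give (alpha, mode_index, n_modes) -> alpha (+) beta;
    categorical variables give (category_index, n_categories) -> gamma."""
    parts = []
    for kind, spec in zip(specs["kinds"], specs["params"]):
        if kind in ("continuous", "mixed"):
            alpha, mode, n_modes = spec
            beta = np.zeros(n_modes)
            beta[mode] = 1.0                       # one-hot mode selection
            parts.append(np.concatenate([[alpha], beta]))
        else:                                      # categorical
            cat, n_cats = spec
            gamma = np.zeros(n_cats)
            gamma[cat] = 1.0                       # one-hot category
            parts.append(gamma)
    return np.concatenate(parts)
```

For instance, the $\mu_3$ example above (a categorical component of a mixed variable with 4 modes) encodes as $0 \bigoplus [0, 0, 0, 1]$, i.e., the first five entries of the test row below.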
\subsection{Counter Imbalanced Variables}
\label{Ch4:condvec}
In CTAB-GAN, the conditional GAN with training-by-sampling (refer to Sec.~\ref{Ch4:cgan}) inspired from the work of \cite{ctgan} is used. However, in contrast to their work, the conditional vector of CTAB-GAN is further extended to include the one-hot-vectors corresponding to the modes used to represent continuous and mixed columns (refer to Sec.~\ref{Ch4:data_representation}). Thus, the extended conditional vector $ex\_cond$ is a bit vector given by the concatenation of all one-hot encodings $\beta$ (for continuous $\&$ mixed variables) along with all categorical one-hot encodings $\gamma$ for all variables present in Eq.~\eqref{condvec}. For example, $ex\_cond$ is shown in Fig.~\ref{fig:condvec} with three variables, one continuous ($C_1$), one mixed ($C_2$) and one categorical ($C_3$), with class 2 selected for $C_3$.
Extending the conditional vector to include the continuous $\&$ mixed variables helps deal with imbalance in the frequency of modes used to represent them. Moreover, the generator is conditioned on all data-types during training enhancing the learned correlation between all variables (refer to Sec.~\ref{Ch4:Results} $\&$ Sec.~\ref{Ch4:motivation_response}).
\begin{figure}[htb]
\centering
\includegraphics[scale=.18]{Figures/Ch_4/condvec.png}
\caption{\centering Conditional vector: example selects class 2 from third variable out of three}
\label{fig:condvec}
\end{figure}
\subsection{Treat Long Tails}
\label{Ch4:longgtail}
To encode continuous values, a variational Gaussian mixture model is used (as explained in Sec.~\ref{Ch4:msn}). However, Gaussian mixtures cannot deal with all types of data distributions, notably distributions with a long tail where a few rare points lie far from the bulk of the data. The VGM especially faces great difficulty encoding the values towards the tail.
To counter this issue, we pre-process continuous variables with long tail distributions with a log-transform. For such a variable having values with lower bound $l$, we replace each value $\tau$ with compressed $\tau^c$:
\begin{equation}
\label{eq:preprossesing}
\tau^c = \begin{cases}
\log(\tau) & \text{if } l > 0 \\
\log(\tau - l + \epsilon) & \text{if } l \leqslant 0 \text{, where } \epsilon > 0
\end{cases}
\end{equation}
The log-transform compresses the distance between the tail and the bulk of the data, making it easier for the VGM to encode all values, including those present towards the end of the long tail. We show the impact of this simple yet effective method in Sec.~\ref{Ch4:motivation_response}.
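The transform in Eq.~\eqref{eq:preprossesing} can be sketched in a few lines (the function name and default $\epsilon$ are illustrative choices):

```python
import numpy as np

def compress_long_tail(col, eps=1e-3):
    """Log-compress a long-tailed column: plain log when the lower bound l is
    positive, shifted log otherwise (eps keeps the argument strictly positive)."""
    col = np.asarray(col, dtype=float)
    l = col.min()
    return np.log(col) if l > 0 else np.log(col - l + eps)
```

After sampling, the generated values would be mapped back with the corresponding inverse (exponentiation and un-shifting).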
\subsection{Training Procedure}
\label{Ch4:TP}
To train CTAB-GAN, one must overcome 2 major difficulties, both caused by the use of a \textit{convolution} based GAN architecture (i.e., \textit{DCGAN}~\cite{radford2015unsupervised}). The first is to be \textbf{data compatible} with the \textit{DCGAN architecture}, which expects a square matrix commonly used to represent images. The second is to account for the presence of \textbf{multiple data types}, as the proposed \textit{DCGAN} is not designed to handle categorical variables. This sub-section explains how to overcome these issues in detail. Additionally, it briefly covers the training objectives used to train CTAB-GAN.
First, each row belonging to the original dataset is transformed as explained in Sec.~\ref{Ch4:data_representation}. Let the size of such a transformed row $r$ be defined as $1 \times T$ where $T$ is the length of each transformed data-row. Next, the novel \textit{extended conditional vector} (i.e., $ex\_cond$) is sampled as illustrated in Sec.~\ref{Ch4:condvec}. Let the size of $ex\_cond$ be $1 \times E$. The extended conditional vector and its corresponding real data-row are further concatenated to form a vector (i.e., $r \oplus ex\_cond$) of size $1 \times (T+E)$. \\
\\
To deal with \textbf{data compatibility}, each data-row and its \textit{conditional vector}, stored as a vector of size $1\times(T+E)$, is wrapped into the closest square matrix of dimensions $1\times d \times d$, where $d$ is the ceiling of the square root of the data-row dimensionality (i.e., $T+E$), and unfilled entries of the square matrix are padded with zeros. For example, a data-row with 8 variables is converted into a square matrix of dimension $3\times3$, where the last missing entry corresponding to an additional $9^{th}$ column is filled with a zero.
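The wrapping step can be sketched as follows (a NumPy illustration of the zero-padding described above; a training framework would do the equivalent on tensors):

```python
import math
import numpy as np

def to_square(row):
    """Wrap a flat encoded row (length T+E) into the smallest d x d square,
    zero-padding the unfilled entries, to fit the DCGAN's image-like input."""
    d = math.ceil(math.sqrt(len(row)))
    padded = np.zeros(d * d)
    padded[:len(row)] = row
    return padded.reshape(1, d, d)   # 1 channel, height d, width d
```

Running it on an 8-entry row reproduces the $3\times3$ example above, with the ninth cell set to zero.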
This square-shaped image-like format is then used to define the input layer of $\mathcal{D}$, which takes inputs of shape $1 \times d \times d$, where 1 is the number of channels and $d$ is the height and width, respectively. The generator, on the other hand, is initialized to take as input a random noise vector $z$ of arbitrary size $s$ coupled with its corresponding conditional vector $ex\_cond$ of size $E$ (i.e., $z \oplus ex\_cond$ of size $(s+E)\times1\times1$) and to output a square matrix of shape $1 \times g \times g$, where $g$ is the ceiling of the square root of $T$ (i.e., the generator is not required to generate data concatenated with conditional vectors).\\
\\
To account for \textbf{multiple data-types}, the output of the generator is converted back into the shape of the original tabular encoding $1 \times T$ after discarding the additional columns gained as a result of converting to a square matrix. Subsequently, the final activation is applied. For the scalar values $\alpha$ for mixed $\&$ continuous variables, a \textit{Tanh} final activation is used. And for one-hot-encodings used for representing the modes (i.e., $\beta$) and the categorical variables (i.e., $\gamma$) (refer to Sec.~\ref{Ch4:data_representation}), the \textit{Gumbel-softmax activation} function with a temperature parameter of $0.20$ is used. This is based on the work of \cite{ctgan}, where the different activations account for the difference in data-types (a non-issue for generating images).
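These per-segment activations can be sketched as below. This is a plain NumPy forward pass of the Gumbel-softmax (temperature $\tau=0.20$ as above), without the straight-through gradient handling a GAN framework such as PyTorch would provide; the `seed` argument is an illustrative convenience.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.2, seed=0):
    """Differentiable relaxation of one-hot sampling: perturb logits with
    Gumbel(0,1) noise, then apply a temperature-scaled softmax."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(1e-12, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                     # Gumbel(0,1) noise
    y = (np.asarray(logits, dtype=float) + g) / tau
    y -= y.max()                                # numerical stability
    e = np.exp(y)
    return e / e.sum()

def final_activation(alpha_raw, beta_logits, gamma_logits):
    # Tanh for the scalar alpha; Gumbel-softmax for the one-hot segments.
    return (np.tanh(alpha_raw),
            gumbel_softmax(beta_logits),
            gumbel_softmax(gamma_logits))
```

With the low temperature of $0.20$, the softmax outputs are pushed close to one-hot vectors, which is what makes them usable as discrete mode/category encodings while remaining differentiable.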
The resulting generated tabular data-row $\hat{r}$ of size $1\times T$ is concatenated with its corresponding conditional vector $ex\_cond$ of size $1\times E$ (i.e., $\hat{r}\oplus ex\_cond$ of size $1\times(T+E)$). This is similarly converted back to a square of size $1 \times d \times d$ to be passed to the discriminator. \\
\\
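The final activation described above can be sketched as follows (a minimal NumPy version; in the real model these operations are applied inside the network graph, and `gumbel_softmax`/`final_activation` are illustrative names, not the thesis API):

```python
import numpy as np

def gumbel_softmax(logits, tau=0.20, rng=None):
    """Softmax over (logits + Gumbel noise) / tau; with a low temperature
    such as 0.20 the output approaches a one-hot vector."""
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = y - y.max()                       # numerical stability
    e = np.exp(y)
    return e / e.sum()

def final_activation(raw, scalar_idx, onehot_spans):
    """Tanh for the scalar components (alpha), Gumbel softmax for each
    one-hot span (beta and gamma)."""
    out = raw.astype(float).copy()
    out[scalar_idx] = np.tanh(out[scalar_idx])
    for lo, hi in onehot_spans:
        out[lo:hi] = gumbel_softmax(out[lo:hi])
    return out

# One scalar followed by a 4-class one-hot span.
raw = np.array([2.5, -1.0, 0.3, 1.2, 0.1])
out = final_activation(raw, [0], [(1, 5)])
```

The scalar component ends up in $[-1,1]$ and each one-hot span sums to one, matching the roles of the two activations.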
Finally, to account for the training objectives, let $\mathcal{L}^{D}_{orig}$ and $\mathcal{L}^{G}_{orig}$ denote the original GAN loss functions from~\cite{gan} described in Sec.~\ref{Ch4:dcgan}, used to train the discriminator $\mathcal{D}$ and generator $\mathcal{G}$, respectively. For the generator $\mathcal{G}$, the complete training objective is the combination of the classification, information and generator losses (refer to Sec.~\ref{Ch4:TB}). Thus, the training objective can be formally expressed as $\mathcal{L}^{G}=\mathcal{L}^{G}_{orig}+\mathcal{L}_{class}^{G}+\mathcal{L}_{info}^{G} + \mathcal{L}_{generator}^{G}$, while for $\mathcal{D}$ it remains unchanged, i.e. $\mathcal{L}^{D}_{orig}$. \\
\\
Lastly, it is important to note that when utilising the classifier module, the conditional vectors are not concatenated to either real or generated samples. Furthermore, the generated data is not converted back into a square form after applying the final activation and is used as is. In this way, the classifier takes as input the tabular encoded data representation of real/synthetic data expressed in Sec.~\ref{Ch4:data_representation}.
\section{Experimental Analysis}
\label{Ch4:exp}
To show the efficacy of the proposed CTAB-GAN model, the experimental analysis introduced in Sec. \ref{Ch3:EC} is extended to include CTAB-GAN. Hence the same experimental setup is used to compare the performance of CTAB-GAN with respect to the baselines set by the four state-of-the-art GAN generators introduced therein in terms of the resulting ML utility, statistical similarity to the real data, and privacy distance. Additionally, we provide an ablation analysis to highlight the efficacy of the unique components of CTAB-GAN.
\subsection{Results analysis}
\label{Ch4:Results}
\begin{figure}[htb]
\begin{center}
\subfloat[\centering Adult]{
\includegraphics[width=0.3\textwidth]{Figures/Ch_4/Adult.png}
\label{fig:result_adult}
}
\hspace{\fill}
\subfloat[\centering Covertype]{
\includegraphics[width=0.3\textwidth]{Figures/Ch_4/Covtype.jpg}
\label{fig:result_covtype}
}
\hspace{\fill}
\subfloat[\centering Credit]{
\includegraphics[width=0.3\textwidth]{Figures/Ch_4/Credit.png}
\label{fig:result_credit}
}
\hspace{\fill}
\subfloat[\centering Intrusion]{
\includegraphics[width=0.3\textwidth]{Figures/Ch_4/Intrusion.jpg}
\label{fig:result_intrusion}
}
\quad
\subfloat[\centering Loan]{
\includegraphics[width=0.3\textwidth]{Figures/Ch_4/Loan.jpg}
\label{fig:result_loan}
}
\caption{ML utility differences (i.e., AUC and F1-score) for five ML algorithms using five synthetic data generators on all 5 datasets}
\label{fig:ml_whole}
\end{center}
\end{figure}
\begin{enumerate}
\item \textbf{ML Utility}- Tab.~\ref{table:ML_all} shows the averaged ML utility differences between real and synthetic data in terms of accuracy, F1-score, and AUC. A better synthetic dataset is expected to show smaller differences. It can be seen that CTAB-GAN outperforms all other state-of-the-art methods in terms of accuracy, F1-score and AUC. Accuracy is the most commonly used classification metric, but to account for imbalanced target variables, the F1-score and AUC are more reliable metrics for evaluating performance. CTAB-GAN substantially reduces the AUC difference from 0.169 (the best among the state-of-the-art) to 0.094.
\begin{table}[htb]
\centering
\caption{\centering Difference of ML accuracy (\%), F1-score, and AUC between original and synthetic data: average over 5 different datasets and 3 replications.}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Accuracy} & \textbf{F1-score} & \textbf{AUC} \\
\hline
\small{CTAB-GAN} & \textbf{8.90\%} & \textbf{0.107} & \textbf{0.094} \\
\small{CTGAN} &21.51\% & 0.274 & 0.253 \\
\small{TableGAN} &11.40\% & 0.130 & 0.169 \\
\small{MedGAN} & 14.11\%& 0.282& 0.285 \\
\small{CW-GAN} & 20.06\%& 0.354 & 0.299 \\
\hline
\end{tabular}
\label{table:ML_all}
\end{table}
To obtain a better understanding, Fig.~\ref{fig:ml_whole} plots the F1-score (x-axis) against the AUC (y-axis) for all 5 ML models on all datasets.
Fig.~\ref{fig:ml_whole}(a,b $\&$ c) show that for the Adult, Covertype and Credit datasets, the results of CTAB-GAN and TableGAN are largely similar and clearly better than the rest. This is due to the more stable \textit{DCGAN architecture}, which trains reliably and therefore generates high-utility datasets.
Fig.~\ref{fig:ml_whole}(d) shows that for the Intrusion dataset, CTAB-GAN largely outperforms all others across all ML models used for evaluation. This can be explained by the use of the \textit{conditional GAN architecture} that helps deal with imbalanced variables and the added \textit{information loss} which greatly helps stabilize training (refer to Sec.~\ref{Ch4:ablation}).
Fig.~\ref{fig:ml_whole}(e) however shows that TableGAN outperforms CTAB-GAN on the Loan dataset. The Loan dataset is significantly smaller than the other four, and we find that the encoding method in CTAB-GAN, which works well for complex cases, also increases the dimensionality of the input data. This results in a failure to converge to a better optimum for smaller datasets. TableGAN's encoding, in contrast, doesn't increase the dimensionality of the raw data, as \textit{categorical variables} are simply treated as \textit{continuous} and no \textit{MSN} (refer to Sec.~\ref{Ch4:msn}) is used. This simpler representation makes it easier for the TableGAN model to learn effectively from smaller datasets.
\item \textbf{Statistical similarity}- Statistical similarity results are reported in Tab.~\ref{table:SS_all}. CTAB-GAN stands out again across all comparisons.
For \textit{categorical variables} (i.e. average JSD), CTAB-GAN outperforms CTGAN and TableGAN by 13.5\% and 28.4\%, respectively. Although both CTGAN and CTAB-GAN rely on a conditional GAN with training-by-sampling to deal with categorical imbalance, the addition of the superior \textit{DCGAN neural network architecture} and the additional generator loss terms, namely the \textit{classification} and \textit{information losses}, enable CTAB-GAN to outperform its predecessor.
For \textit{continuous variables} (i.e. average WD), CTAB-GAN benefits from the design of the \textit{mixed-encoder} to deal with \textit{mixed data variables}. Moreover, the use of an \textit{extended conditional vector} helps to better reproduce \textit{skewed multi-modal numerical distributions}, and the \textit{log-transform} allows it to better capture \textit{long-tailed distributions} (refer to Sec.~\ref{Ch4:motivation_response}). It is worth pointing out that the average WD column shows some extreme numbers, such as 46257 and 238155, compared to 1197 for CTAB-GAN; these algorithms generate extremely large values for long-tailed variables.
Besides divergence and distance, CTAB-GAN's synthetic data also maintains the best \textit{correlation}. The \textit{extended conditional vector} allows the generator to produce samples conditioned even on a given \textit{VGM mode} for \textit{continuous variables}. This increases the capacity to learn the \textit{conditional distribution} for \textit{continuous variables} and hence leads to an improvement in the overall feature interactions captured by the model.
\begin{table}[htb]
\centering
\caption{\centering Statistical similarity: three measures averaged over 5 datasets and three repetitions.}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Avg JSD} & \textbf{Avg WD} & \textbf{Diff. Corr.} \\
\hline
\small{CTAB-GAN} & \textbf{0.062}& \textbf{1197} & \textbf{2.09} \\
\small{CTGAN} & 0.0704& 1769 & 2.73 \\
\small{TableGAN} & 0.0796& 2117 & 2.30 \\
\small{MedGAN} & 0.2135& 46257 & 5.48 \\
\small{CW-GAN} & 0.1318& 238155 & 5.82 \\
\hline
\end{tabular}
\label{table:SS_all}
\end{table}
\item \textbf{Privacy preservability}- The privacy results are shown in Tab.~\ref{table:PP_all}. The DCR and NNDR between real and synthetic data both indicate that the data generated by TableGAN has the shortest distance to the real data (highest privacy risk).
Moreover, as we use distance-based algorithms to give an overview of privacy, the evaluation of privacy is relative to the utility. On the one hand, if the distance between real and synthetic data is too large, it simply means that the quality of the generated data is poor. On the other hand, if the distance is too small, there is a risk of revealing sensitive information from the training data.
Thus, the algorithm which allows for greater distances between real and synthetic data under equivalent ML utility and statistical similarity should be preferred. In that case, CTAB-GAN not only outperforms TableGAN in ML utility and statistical similarity, but also in all privacy preservability metrics, by 11.6\% and 4.5\% for DCR and NNDR, respectively.
\begin{table}[htb]
\centering
\caption{\centering Privacy impact: between real and synthetic data (R\&S) and within real data (R) and synthetic data (S).}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c|}{\textbf{DCR}} & \multicolumn{3}{c|}{\textbf{NNDR}} \\
\cline{2-7}
& \textbf{R\&S} & \textbf{R} & \textbf{S} & \textbf{R\&S} & \textbf{R} & \textbf{S}\\
\hline
CTAB-GAN & 1.118 & 0.428& 0.937 & 0.713 & 0.414 &0.591\\
CTGAN & 1.517 & 0.428& 1.026 & 0.763 & 0.414 &0.624 \\
TableGAN & 0.988 & 0.428& 0.920 & 0.681 & 0.414 &0.632\\
MedGAN & 1.918 & 0.428& 0.254 & 0.871 & 0.414 &0.393 \\
CW-GAN & 2.197 & 0.428& 1.124 & 0.847 & 0.414 &0.675\\
\hline
\end{tabular}
\label{table:PP_all}
\end{table}
\end{enumerate}
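The two distance-based privacy metrics used above can be sketched as follows (a simplified NumPy version assuming Euclidean distance on already-encoded rows; the actual evaluation pipeline may scale or encode variables differently, and the function name is illustrative):

```python
import numpy as np

def dcr_nndr(synthetic, real):
    """Distance to Closest Record (DCR) and Nearest-Neighbour Distance
    Ratio (NNDR), averaged over all synthetic rows."""
    dcr, nndr = [], []
    for s in synthetic:
        d = np.sort(np.linalg.norm(real - s, axis=1))
        dcr.append(d[0])                       # closest real record
        nndr.append(d[0] / (d[1] + 1e-12))     # ratio to 2nd closest
    return float(np.mean(dcr)), float(np.mean(nndr))

real = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
leaked = np.array([[0.0, 0.0]])   # a memorised row: DCR collapses to 0
dcr, nndr = dcr_nndr(leaked, real)
```

A DCR of zero signals a verbatim copy of a training record, which is why larger distances at equivalent utility are preferred.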
\subsection{Ablation analysis}
\label{Ch4:ablation}
To illustrate the efficacy of each strategy, we conduct an ablation study which removes the components of CTAB-GAN one by one:
\begin{enumerate}
\item \textbf{w/o $\mathcal{C}$}- In this experiment, Classifier $\mathcal{C}$ and the corresponding classification loss for Generator $\mathcal{G}$ are removed from CTAB-GAN.
\item \textbf{w/o I. loss} (information loss)- In this experiment, we remove the information loss from CTAB-GAN.
\item \textbf{w/o MSN}- In this case, we substitute the mode-specific normalization based on VGM for continuous variables with min-max normalization and use simple one-hot encoding for categorical variables. Here the conditional vector is the same as for CTGAN.
\item \textbf{w/o LT} (long tail)- In this experiment, the long tail treatment is no longer applied. This only affects the datasets with long-tailed columns, i.e. Credit and Intrusion.
\end{enumerate}
The results are compared with the baseline implementing all strategies. All experiments are repeated 3 times, and results are evaluated using the same 5 machine learning algorithms introduced in Sec.~\ref{Ch3:ml_efficacy}. We report the F1-score difference between CTAB-GAN and each of the above-mentioned experiments, where the test datasets and evaluation flow are the same as in Sec.~\ref{Ch3:EC} and Sec.~\ref{Ch3:metrics}. Tab.~\ref{table:ablation} shows the results.
\begin{table}[htb]
\centering
\caption{\centering F1-score difference to CTAB-GAN. CTAB-GAN column reports the absolute averaged F1-score as baseline.}
\label{table:ablation}
\begin{tabular}{ |c|c|c|c|c|c|}
\hline
\textbf{Dataset} & \textbf{CTAB-GAN} & \textbf{w/o $\mathcal{C}$} & \textbf{w/o I. Loss} & \textbf{w/o MSN} & \textbf{w/o LT} \\
\hline
Adult & 0.704 & -0.01& -0.037 & -0.05 & - \\
Covertype & 0.532 & -0.018 & -0.184& -0.118 & - \\
Credit & 0.710 & +0.011& -0.177& +0.06 & 0.00 \\
Intrusion & 0.842&-0.031&-0.437&+0.003 & -0.074 \\
Loan & 0.803 &-0.044&+0.028&+0.013 & - \\
\hline
\end{tabular}
\end{table}
Each part of CTAB-GAN has different impacts on different datasets as follows:
\begin{enumerate}
\item \textbf{w/o $\mathcal{C}$}- has a negative impact on all datasets except Credit. Since Credit has only 30 continuous variables and one target variable, the semantic check provided by the classifier cannot be very effective.
\item \textbf{w/o I. loss}- has a positive impact for Loan, but results degrade for all other datasets; it even renders the model unusable for Intrusion. This suggests that the information loss hurts performance on smaller datasets but is beneficial for larger ones.
\item \textbf{w/o MSN}- performs worse for Covertype and Adult, has little impact for Intrusion and provides better results for the Credit and Loan datasets than the original CTAB-GAN. This is because, out of the 30 continuous variables in the Credit dataset, 28 are nearly single-mode Gaussian distributed. Thus, initializing a high number of modes, i.e. 10, for each continuous variable (the same setting as in CTGAN) degrades the estimation quality. Likewise, for the Loan dataset, the MSN encoding greatly increases the input data dimensionality, thereby increasing the difficulty of learning from a smaller-sized dataset such as Loan.
\item \textbf{w/o LT}- has the biggest impact on Intrusion, since it contains 2 long tail columns which are seemingly important predictors for the target column. For Credit, the influence is limited. Even if the long tail treatment fits the \textit{amount} column well (see Sec.~\ref{Ch4:motivation_response}), this variable doesn't seem to be a strong predictor for the target column.
\end{enumerate}
In general, averaging the column values across all ablation tests shows a negative impact on performance, which justifies our design choices for CTAB-GAN.
\subsection{Results for Motivation Cases}
\label{Ch4:motivation_response}
After reviewing all the metrics, let us recall the three motivation cases from Sec.~\ref{Ch3:Challenges}.
\begin{enumerate}
\item \textbf{Mixed data type variables}- Fig.~\ref{fig:motivationcases_response}(a) compares the real and CTAB-GAN generated data for the variable \textit{Mortgage} in the Loan dataset. CTAB-GAN encodes this variable as a mixed data-type. We can see that CTAB-GAN generates clear 0 values, and their frequency is similar to that of the real distribution. This is a result of using the mixed encoder combined with the extended conditional vector, which controls the sampling of the categorical component so that it corresponds to the original data with greater similarity.
\item \textbf{Long tail distributions}- Fig.~\ref{fig:motivationcases_response}(b) compares the cumulative frequency graphs for the \textit{Amount} variable in Credit. This variable follows a typical long tail distribution. One can see that CTAB-GAN perfectly recovers the real distribution. Due to the log-transform data pre-processing, CTAB-GAN learns this structure significantly better than the state-of-the-art methods shown in Fig.~\ref{fig:motivationcases}(b).
\item \textbf{Skewed multi-mode continuous variables}- Fig.~\ref{fig:motivationcases_response}(c) compares the frequency distribution for the continuous variable \textit{Hours-per-week} from Adult. Apart from the dominant peak at 40, there are many side peaks. Fig.~\ref{fig:motivationcases}(c) shows that TableGAN, CW-GAN and MedGAN struggle, since they can learn only a simple Gaussian distribution due to the lack of any special treatment for continuous variables. CTGAN, which also uses VGM, can detect other modes, but not as well as CTAB-GAN. The reason is that CTGAN does not include the modes of continuous variables in its conditional vector. By incorporating them, we can apply training-by-sampling and the logarithm frequency also to the modes. This gives modes with less weight a greater chance of appearing during training and avoids mode collapse.
\end{enumerate}
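A minimal sketch of the log-transform pre-processing used for long-tailed variables (the shift for non-positive values is a simplifying assumption, and `log_encode`/`log_decode` are illustrative names, not the thesis API):

```python
import numpy as np

def log_encode(x, eps=1e-6):
    """Compress a long-tailed variable before mode-specific normalisation.
    A shift keeps the transform defined when the variable can be zero or
    negative; the exact treatment in the pipeline may differ."""
    lower = float(x.min())
    shift = 0.0 if lower > 0 else -lower + eps
    return np.log(x + shift), shift

def log_decode(y, shift):
    """Invert the transform after generation."""
    return np.exp(y) - shift

# A long-tailed positive variable round-trips exactly.
x = np.array([1.0, 10.0, 1000.0])
y, shift = log_encode(x)
```

Compressing the tail this way lets the VGM fit the bulk of the distribution instead of being dominated by extreme values.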
\begin{figure}[htb]
\begin{center}
\subfloat[\centering Mortgage in Loan dataset~\cite{kaggleloan}]{
\includegraphics[width=0.33\textwidth]{Figures/Ch_4/bimodal_ctabgan_update.png}
\label{fig:ctabgan_bimodal}
}
\subfloat[\centering Amount in Credit dataset~\cite{kagglecredit}]{
\includegraphics[width=0.3\textwidth]{Figures/Ch_4/result_longtail.png}
\label{fig:longtail_result}
}
\subfloat[\centering Hours-per-week in Adult dataset~\cite{UCIdataset}]{
\includegraphics[width=0.335\textwidth]{Figures/Ch_4/result_gmm.png}
\label{fig:gmm_result}
}
\caption{\centering CTAB-GAN results on the three motivation cases: (a) mixed data type, (b) long tail distribution, and (c) skewed multi-modal data}
\label{fig:motivationcases_response}
\end{center}
\end{figure}
\section{Conclusion}
\label{Ch4:Conclusion}
Motivated by the importance of data sharing and fulfillment of governmental regulations, we propose CTAB-GAN -- a novel conditional GAN based tabular data generator. CTAB-GAN advances beyond the prior state-of-the-art methods by modeling mixed data-type variables and provides strong generation capabilities for long-tailed continuous variables and continuous variables with complex distributions.
To such ends, the core features of CTAB-GAN include (i) introduction of the classification and information loss into the conditional DCGAN, (ii) effective data encoding for mixed data-type variables, and (iii) a novel construction of conditional vectors.
We exhaustively evaluate CTAB-GAN against four tabular data generators on a wide range of metrics, namely ML utility, statistical similarity and privacy preservation. The results show that the synthetic data of CTAB-GAN results in high utility, high similarity and a reasonable privacy guarantee compared to existing state-of-the-art techniques. The improvement on complex datasets is up to 17\% in accuracy compared to all state-of-the-art algorithms.
\chapter{Related Work $\&$ Preliminaries}\label{ch2}
This chapter begins with Sec.~\ref{Ch2:Related_work} discussing the relevant literature for tabular GANs and their differential private variants. And, Sec.~\ref{Ch2:TC} provides a brief primer on generative adversarial networks (GANs) and differential privacy (DP) in the context of tabular data.
\section{Related Work}
\label{Ch2:Related_work}
\resizebox{\columnwidth}{!}{
\begin{forest}
for tree={
align=center,
edge+={thick},
draw,
fill=pink!60,
rounded corners=2pt,
drop shadow,
},
if level=0{
tikz={\draw [thick] (.children first) (.children last);}
}{},
[\textbf{GAN}
[
Tabular GAN\\~[Non-conditional GAN\\~[MedGAN\cite{choi2017generating}][TableGAN\cite{tablegan}]]
[Conditional GAN\\~[CTGAN\cite{ctgan}]
[CW-GAN\cite{engelmann2020conditional}]]
]
[DP-GAN\\~[PATE\cite{papernot2016semi}\\~[PATE-GAN\cite{pategan}]]
[DP-SGD\cite{abadi2016deep}\\~[Generator\\~[GS-WGAN\cite{chen2020gs}]]
[Discriminator\\~[DP-WGAN\cite{xie2018differentially}]]
]
]
]
]
]
\end{forest}
}
\subsection{Tabular GANs}
\label{Ch3:SOTA}
In this section, the focus is on GAN-based methods that deal with tabular data generation. These methods are featured extensively in the work done in \autoref{ch3} and \autoref{ch4}. Tab.~\ref{table:sota} details key features for each method.\\
\\
\textbf{MedGAN}- In the work done by \cite{choi2017generating} (2017), the authors propose a novel mechanism to synthetically generate Electronic Health Records (EHR) consisting of high-dimensional categorical variables. Their model combines an auto-encoder with a generative adversarial network. They show that their model is capable of producing realistic synthetic patient records, as evaluated via a qualitative medical expert review. Additionally, they empirically analyze the risk of violating privacy via identity and attribute disclosure attacks and conclude that the risk is manageable.
However, the MedGAN model cannot generate synthetic datasets outside the medical domain of Electronic Health Records which contain only categorical variables.\\
\\
\textbf{TableGAN}- In the work done by \cite{tablegan} (2018), the authors develop a tabular data synthesizer that is based on the \textit{DCGAN architecture} (refer to Sec.~\ref{Ch4:dcgan}). Their approach utilizes a separate classifier module in addition to the discriminator and generator modules commonly used in GAN-based frameworks. Moreover, their method relies on additional loss objective for the generator known as the \textit{classification} $\&$ \textit{information losses}, respectively (refer to Sec.~\ref{Ch4:tabloss}).
However, TableGAN doesn't deal with generating categorical variables in a principled manner. This is because their approach involves mapping categorical variables to integers and treating them purely numerically.\\
\\
\textbf{CTGAN}- In the work done by \cite{ctgan} (2019), they introduce a novel conditional tabular GAN architecture as well as the training-by-sampling method (refer to Sec.~\ref{Ch4:cgan}). These improvements allow the generator to more efficiently produce realistic samples for the minority categories found in discrete columns thereby producing synthetic records which match the real data distribution more closely. In addition, they introduce the mode-specific-normalization technique (refer to Sec.~\ref{Ch4:msn}) for learning complex numerical distributions. Lastly, their discriminator network is trained with WGAN loss with gradient penalty~\cite{gulrajani2017improved} (refer to Sec.~\ref{Ch5:WGAN}) for improved training of GANs.
However, the CTGAN model is incapable of dealing with missing values. This limits its applicability in real-world scenarios where the data is often impure and contains a large number of missing values. Moreover, the authors also convey that learning from small training sets severely affects performance. \\
\\
\textbf{Conditional Wasserstein GAN}- In the work done by \cite{engelmann2020conditional} (2020), they propose the conditional Wasserstein GAN primarily for the purposes of data augmentation. Similar to TableGAN, they exploit the classification loss and augment it to the generator's loss objective. Moreover, their model also integrates cross-layers~\cite{wang2017deep} in both the discriminator and generator networks.
However, in their work, they do not make use of any activation functions for generating samples for numerical attributes. This makes it inherently difficult to constrain the generator's output to a meaningful range of values and can also severely destabilize the training process.
\begin{table}[htb]
\centering
\caption[BB]{\centering State-of-the-Art Blueprint\footnotemark.}
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Data format} & \textbf{Training method} & \textbf{Privacy analysis} & \textbf{Designated output} \\
\hline
MedGAN & Categorical Only & Auto-encoder + GAN & Yes & No \\
TableGAN & Categorical $\&$ Continuous & GAN + Classifier & Yes & No \\
CTGAN & Categorical $\&$ Continuous & Conditional-WGAN & No & Yes \\
CW-GAN & Categorical $\&$ Continuous & Conditional-WGAN + Classifier & No & Yes \\
\hline
\end{tabular}
}
\label{table:sota}
\end{table}
\footnotetext{ Note that the designated output column is used to identify models which have the capacity to sample data-instances with user-defined categorical attributes.
}
\subsection{Differential Private GANs}
\label{Ch5:related_work}
In this section, relevant differential private GAN models are reviewed in relation to the work done in \autoref{ch5}. Tab.~\ref{Ch5:tab1} highlights key details of each model. \\
\\
\textbf{PATE-GAN}- In the work done by \cite{pategan} (2019), the authors devise a technique for integrating DP guarantees in tabular GANs via the Private Aggregation of Teacher Ensembles (PATE) framework~\cite{papernot2016semi}. In their approach, multiple teacher discriminators are trained using disjoint subsets of the training data along with a student discriminator where the aggregation of the teacher ensemble is done after perturbing the predictions of teacher discriminators using Laplacian noise.
However, PATE-GAN suffers from the following limitations: (i) the student discriminator is trained solely on generated samples and never sees any real samples. This is problematic because, if the student discriminator only has access to unrealistic samples produced by the generator, it cannot provide reliable feedback for the generator to improve its sample quality (refer to Sec.~\ref{Ch5:Res} for experimental evidence); and (ii) the PATE-GAN framework requires careful hyper-parameter tuning to select the number of teacher discriminators. \\
\\
\textbf{DP-WGAN}- In the work done by \cite{xie2018differentially} (2018), the authors incorporate differential privacy guarantees for Wasserstein GANs wherein DP-SGD~\cite{abadi2016deep} is applied for training the discriminator. Moreover, they make use of weight clipping to enforce the Lipschitz constraint on the discriminator so as to be compatible with the Wasserstein loss.
However, the challenge with this approach lies in the fact that calibrating the DP-specific hyper-parameters (i.e., the gradient norm clipping value) varies drastically with differences in network architectures and training procedures. Additionally, clipping the weights of the discriminator has been found to cause convergence issues~\cite{gulrajani2017improved}. \\
\\
\textbf{GS-WGAN}- In the work done by \cite{chen2020gs} (2020), the authors work towards training image GANs in a privacy preserving manner. Their novel contributions include utilising the Wasserstein loss with gradient penalty~\cite{gulrajani2017improved} to avoid hyper-parameter tuning of the clipping parameter, and employing DP guarantees via DP-SGD~\cite{abadi2016deep} for the generator network rather than the discriminator network. This is motivated by the fact that only the generator network is made publicly available after training the GAN model. Moreover, they propose a more precise approach for perturbing the gradients of the generator by only manipulating those that are computed with respect to the training data, so as to minimize the loss of gradient information. Lastly, they make use of the subsampled R\'enyi Differential Privacy (RDP) accountant~\cite{wang2019subsampled} to compute the privacy loss during training.
However, their work focuses only on DP training of the generator and offers no experimental analysis against privacy attacks.
\begin{table}[htb]
\centering
\caption{\centering Outline of all methodologies in this work.}
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{ |c|c|c|c|c|c|c|c| }
\hline
\textbf{Model} & \textbf{Loss} &\textbf{DP Site} & \textbf{Noise}& \textbf{Accountant} & \textbf{Data Format}\\
\hline
PATE-GAN & KL Divergence Loss & Discriminator & Laplacian & PATE Accountant & Table\\
DP-WGAN & Wasserstein Loss + Weight Clipping & Discriminator & Gaussian & RDP-Accountant & Image $\&$ Table \\
D-DP-CTABGAN & Wasserstein Loss + Gradient Penalty & Discriminator & Gaussian & RDP-Accountant & Table\\
G-DP-CTABGAN & Wasserstein Loss + Gradient Penalty & Generator & Gaussian & RDP-Accountant & Table\\
\hline
\end{tabular}
}
\label{Ch5:tab1}
\end{table}
\section{Background}
\label{Ch2:TC}
\subsection{GAN Designs}
\label{Ch4:TB}
\textbf{GAN}- \textbf{GANs} are a relatively recent breakthrough in machine learning and generative modelling. Unlike conventional machine learning algorithms that learn a conditional distribution of a class variable given the predictor variables (e.g., to solve a binary classification problem), the main purpose of GANs is to learn the joint distribution of the entire input data. In this way, using the learned joint distribution, the generated samples can be drawn to resemble the original input data.
GANs make use of two neural networks: the generator and the discriminator. The generator takes as input a random noise vector to synthesize data that closely resembles the real data. The discriminator takes as input real/generated samples and acts as a teacher, assessing the output of the generator and judging whether the generated samples are real or fake, much like a supervisor providing feedback on a student's work. The two models are trained together via an adversarial min-max game, minimizing the loss of the generator while maximizing the loss of the discriminator, as expressed below~\cite{gan}:
\begin{equation}
\label{eq:gan}
\min_{\mathcal{G}}\max_{\mathcal{D}}V(\mathcal{G},\mathcal{D}) = \mathbb{E}_{x\sim p_{data}(x)}[\log\mathcal{D}(x)] + \mathbb{E}_{z \sim p(z)}[\log(1-\mathcal{D}(\mathcal{G}(z)))]
\end{equation}
where $\mathcal{G}$ $\&$ $\mathcal{D}$ represent the generator and discriminator networks, respectively. Furthermore, $p_{data}$ denotes the real data distribution and $p(z)$ denotes a prior distribution (i.e., $\mathcal{N}(0,I)$) over the latent vector $z$. And, $\mathcal{D}$ outputs a scalar in the range $[0,1]$.
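Given discriminator outputs on batches of real and generated samples, the value function $V(\mathcal{G},\mathcal{D})$ can be estimated empirically as follows (a hypothetical NumPy sketch; the function name is illustrative):

```python
import numpy as np

def gan_value(d_real, d_fake, eps=1e-12):
    """Monte-Carlo estimate of V(G, D): E[log D(x)] + E[log(1 - D(G(z)))].
    The discriminator maximises this quantity; the generator minimises it."""
    return float(np.mean(np.log(d_real + eps))
                 + np.mean(np.log(1.0 - d_fake + eps)))
```

A perfect discriminator (outputs 1 on real, 0 on fake) attains the maximum value of 0, while an undecided one (0.5 everywhere) yields $2\log 0.5$, the equilibrium value of the game.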
\textbf{Tabular GANs} are simply GANs that are used to generate tabular formatted datasets. As an example, consider an SQL table used to store employee information. In this setting, each entry in the table is an independent sample obtained from the joint distribution of all the employees. The goal of tabular GANs is to learn such a joint distribution to subsequently synthesize data that matches the original. Fig.~\ref{fig:STD} illustrates this process.
\begin{figure}[htb]
\centering
\includegraphics[scale=.20]{Figures/Ch_3/STD.png}
\caption{\centering Synthetic Tabular Data Generation via GANs}
\label{fig:STD}
\end{figure}\\
\textbf{DCGAN\cite{radford2015unsupervised}}-\label{Ch4:dcgan} The \textbf{DCGAN architecture} is an extension of the standard GAN architecture which makes use of \textit{convolutional} and \textit{convolutional-transpose layers} in the discriminator and generator, respectively. It is a widely used stable GAN architecture that has proven to be useful for generating images as well as tabular data~\cite{tablegan}.
The generator network of \textit{DCGAN} consists of stacks of \textit{strided 2D convolutional transpose layers} followed by a \textit{2d batch norm layer} and a \textit{ReLU activation function}. The final output of the generator is passed through a \textit{Tanh activation function} to bring the values in the original range of $[-1,1]$ (for representing images). The generator takes as input a random noise vector of arbitrary length and returns an image with the same spatial dimensions as the original dataset.
Whereas, the discriminator network is composed of stacks of \textit{2d convolutional layers}, \textit{2d batch norm} and \textit{LeakyReLU layers} with a leaky ratio of $0.20$ with the final output being passed through a \textit{sigmoid activation function}. The discriminator takes as input real/fake images and outputs the probability of any particular sample being real or synthetic.
Lastly, it is worth noting that the presence of \textit{batch normalisation} in both the generator and discriminator networks is a key contribution of the authors that leads to a stable flow of gradients for training \textit{DCGAN} reliably. \\
\\
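The way the stacked strided transposed convolutions scale spatial resolution follows the standard output-size formula; the kernel/stride/padding values below are the common DCGAN choices and are assumptions, not values stated here:

```python
def convT_out(size, kernel=4, stride=2, padding=1):
    """Spatial size after one strided 2D transposed convolution:
    out = (in - 1) * stride - 2 * padding + kernel."""
    return (size - 1) * stride - 2 * padding + kernel

# With kernel=4, stride=2, padding=1 every generator block doubles
# the spatial resolution:
sizes = [4]
for _ in range(3):
    sizes.append(convT_out(sizes[-1]))
print(sizes)  # [4, 8, 16, 32]
```

The discriminator's strided convolutions apply the inverse progression, halving the resolution at each block.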
\textbf{Conditional GAN $\&$ Training-by-Sampling\cite{ctgan}}-\label{Ch4:cgan} To address the problem of imbalanced categorical variables in real-world datasets, the \textit{conditional generator}, \textit{generator loss} and \textit{training-by-sampling} are introduced in the work of \cite{ctgan}. The main idea behind these techniques stems from the use of an additional vector, termed the \textit{conditional vector}, to represent the classes of categorical variables. This vector is both fed to the generator and used to bound the sampling of the real training samples to the subsets satisfying the condition. Moreover, the conditions are sampled so as to give minority classes a higher chance of appearing while training the model. These concepts are explained in greater detail below. \\
\\
The \textbf{Conditional GAN} features a \textbf{conditional generator} whose generated samples come from a \textit{conditional probability distribution} $\hat{r} \sim \mathbb{P}_{\mathcal{G}}(row|C_{i^{*}}=c^{*})$, where $c^{*}$ is a particular class within the selected categorical variable $C_{i^{*}}$. Intuitively, this corresponds to generating a row given a chosen class for a selected categorical variable. To represent this condition (i.e., $C_{i^{*}}=c^{*}$), the \textit{conditional vector} is used.
To construct the \textit{conditional vector}, \cite{ctgan} treats all categorical variables $C_{1},...,C_{N_c}$ as one-hot vectors $c_1,...,c_{N_c}$ where $N_c$ represents the total number of categorical variables. Let the $i^{th}$ one-hot vector and its corresponding $mask$ vector be denoted as $c_i=[c_i^{(k)}]$, for $k = 1,...,|C_i|$ $\&$ $m_{i}=[m_i^{(k)}]$, for $k = 1,...,|C_i|$, respectively. The condition is then expressed using the $masks$ for each one-hot vector as: $m_i^{(k)} = \{1,$ if $ i=i^{*}$ $\&$ $k=k^{*}$, else $0\}$ where $i^{*}$ is the chosen categorical variable and $k^{*}$ is the index of the selected class $c^{*}$ within variable $i^{*}$. Thus, the \textit{conditional vector} is represented as: $cond = m_{1}\oplus...\oplus m_{N_{c}}$ where $\oplus$ is the concatenation operator. As an example, consider two categorical variables $C_1= [0,1]$ and $C_2=[0,1]$. If the condition is $C_1=0$, the corresponding $masks$ are $m_1 = [1,0]$ $\&$ $m_2=[0,0]$, resulting in the \textit{conditional vector} $cond=[1,0,0,0]$. \\
\\
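The construction of $cond$ can be sketched directly from the mask definitions above; the helper name and its interface below are hypothetical, with \texttt{category\_sizes} giving $|C_i|$ for each categorical variable.

```python
import numpy as np

def conditional_vector(category_sizes, i_star, k_star):
    """Build cond = m_1 + ... + m_Nc (concatenation) for the condition
    C_{i*} = class k*: all masks are zero except m_{i*}[k*] = 1."""
    masks = [np.zeros(size, dtype=int) for size in category_sizes]
    masks[i_star][k_star] = 1
    return np.concatenate(masks)

# Two binary variables C_1, C_2 with the condition C_1 = 0 (the example from the text):
print(conditional_vector([2, 2], i_star=0, k_star=0))  # [1 0 0 0]
```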
Next, \cite{ctgan} uses the \textbf{generator loss} to ensure that the \textit{conditional generator} produces samples that match the constraint provided by the \textit{conditional vector}. As an example, consider the $mask$ $m_i = [1,0]$ for a particular categorical variable $i$. Given this condition, the \textit{conditional generator} should output a data row in which the $0^{th}$ class of the $i^{th}$ categorical variable is produced, leading to a matching generated $mask$ $\hat{m}_{i} = [1,0]$. Thus, if $m_{i}$ represents the conditional $mask$ for the selected one-hot-encoded variable $i$ associated with a given data row and $\hat{m}_{i}$ denotes the corresponding generated $mask$, the \textit{generator loss} $\mathcal{L}_{generator}^{G}$ is formally expressed as $H(m_{i}, \hat{m}_{i})$ where $H(.)$ is the \textit{cross-entropy loss}. In this manner, the added loss acts as a soft constraint enforcing that generated samples are aligned with their corresponding \textit{conditional vectors}.\\
\\
Finally, the \textbf{training-by-sampling} method is used to sample the \textit{conditional vector} such that the model evenly explores all possible classes of the categorical variables during training. The sampling procedure for generating a condition is as follows:
\begin{enumerate}
\item Out of $N_c$ categorical variables, a column $C_{i^{*}}$ is uniformly chosen at random with probability $1/N_{c}$.
\item Based on the chosen column $C_{i^{*}}$, a probability mass function (PMF) is created after applying a \textit{log-transformation} to the frequency of individual classes within column $C_{i^{*}}$ where the log-transform naturally leads to an over-sampling of minority classes.
\item On the basis of the constructed PMF described above, a class $c^{*}$ is sampled for the selected column $C_{i^{*}}$. Thus, the condition $C_{i^{*}}=c^{*}$ and its corresponding \textit{conditional vector} $cond$ are formed.
\end{enumerate}
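The log-transformed PMF of step 2 can be sketched as below. Note that $\log(\mathrm{count}+1)$ is an illustrative variant chosen here so that every observed class keeps a non-zero probability; it is not necessarily the exact transform used in the cited work.

```python
import numpy as np

def condition_pmf(column):
    """PMF over the classes of one categorical column, built from
    log-transformed class counts (over-samples minority classes)."""
    _, counts = np.unique(column, return_counts=True)
    log_freq = np.log(counts + 1)   # illustrative variant of the log-transform
    return log_freq / log_freq.sum()

# A 90/10 imbalanced binary column: raw frequencies give [0.9, 0.1],
# while the log-transformed PMF boosts the minority class.
pmf = condition_pmf(np.array([0] * 90 + [1] * 10))
print(pmf.round(3))   # [0.653 0.347]
```

A column $i^{*}$ is then drawn uniformly at random and a class is sampled from this PMF, e.g. via `np.random.default_rng().choice(len(pmf), p=pmf)`.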
\subsection{GAN Loss Objectives}
\textbf{Wasserstein Loss with Gradient Penalty\cite{gulrajani2017improved}}-\label{Ch5:WGAN} The \textbf{Wasserstein loss}, first proposed in the work of \cite{pmlr-v70-arjovsky17a} (2017), provides greater stability for training GANs as compared to the classical KL divergence loss (as shown in Eq.~\ref{eq:gan}). In contrast to the KL divergence loss, the proposed loss function remains continuous and differentiable when measuring the similarity of probability distributions with non-overlapping support. Therefore, it is capable of providing more meaningful gradients for training the generator, especially in cases where the probability distribution of generated samples is highly dissimilar to the real probability distribution. Formally, the Wasserstein loss may be expressed as minimizing the integral probability metric (IPM)
$\sup_{f\in\mathcal{F}}|\int_{M}fd\mathbb{P}_r - \int_{M}fd\mathbb{P}_g|$ between the real ($\mathbb{P}_r$) and generated ($\mathbb{P}_g$) data distributions, where $\mathcal{F}=\{f:||f||_{L}\leq 1\}$ enforces the discriminator function $f$ to be 1-Lipschitz continuous. In practice, \cite{pmlr-v70-arjovsky17a} proposed weight clipping to enforce the Lipschitz constraint on the discriminator by clamping its weights to lie within a compact space $[-c,c]$, where $c$ is the clipping threshold.
However, \cite{gulrajani2017improved} proposed the \textbf{gradient penalty term} as an alternative to weight clipping, as they found that weight clipping may lead to convergence issues by biasing the discriminator towards simpler functions or causing exploding/vanishing gradients. Motivated by their theoretical result that an optimal discriminator possesses a gradient norm of 1 almost everywhere under the real and generated distributions, $\mathbb{P}_r$ and $\mathbb{P}_g$ respectively, the authors note that the discriminator is 1-Lipschitz continuous if and only if it has gradients with norm at most 1 everywhere. They then enforce 1-Lipschitz continuity by adding a soft constraint during training that penalizes the gradient norm of the discriminator's output with respect to its input. The authors define this input (i.e., random samples $\hat{x}\sim\mathbb{P}_{\hat{x}}$) by sampling along straight lines between pairs of points drawn from the original data distribution $\mathbb{P}_{r}$ and the generator distribution $\mathbb{P}_{g}$. Thus, the training objectives $\mathcal{L}_D$ $\&$ $\mathcal{L}_G$ for the discriminator and generator are expressed as:
\begin{equation}
\mathcal{L}_{D} = \underbrace{\mathbb{E}_{\Tilde{x}\sim\mathbb{P}_g}[D(\Tilde{x})]-\mathbb{E}_{x\sim\mathbb{P}_{r}}[D(x)]}_{\text{Wasserstein loss}}+\underbrace{\tau\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}[(||\nabla_{\hat{x}}D(\hat{x})||_{2}-1)^{2}]}_{\text{Gradient penalty}}
\end{equation}
\begin{equation}
\mathcal{L}_{G}= -\mathbb{E}_{\Tilde{x}\sim\mathbb{P}_g}[D(\Tilde{x})]
\end{equation}
where $D$ is the discriminator network, constrained to the set of 1-Lipschitz functions, $G$ represents the generator network and $\tau$ is the penalty coefficient.\\
\\
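To make the two terms of $\mathcal{L}_D$ concrete without an autodiff framework, the sketch below assumes a deliberately simple linear discriminator $D(x)=w^{\top}x$, whose input gradient is exactly $w$, so the gradient penalty has a closed form. A real implementation would compute $\nabla_{\hat{x}}D(\hat{x})$ via automatic differentiation; all values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([3.0, 4.0])                 # linear "discriminator" D(x) = w @ x, grad_x D(x) = w
real = rng.normal(0.0, 1.0, size=(256, 2))
fake = rng.normal(2.0, 1.0, size=(256, 2))

# x-hat: interpolate on straight lines between real/fake pairs.
eps = rng.uniform(size=(256, 1))
x_hat = eps * real + (1 - eps) * fake

tau = 10.0
wasserstein = (fake @ w).mean() - (real @ w).mean()
grad_norm = np.linalg.norm(np.broadcast_to(w, x_hat.shape), axis=1)  # = ||w|| = 5 everywhere
penalty = tau * ((grad_norm - 1.0) ** 2).mean()
L_D = wasserstein + penalty
print(round(float(penalty), 4))   # tau * (5 - 1)^2 = 160.0
```

Because $\|w\|_2 = 5 \neq 1$, the penalty is large; minimizing $\mathcal{L}_D$ pushes the gradient norm towards 1.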
\textbf{Classification $\&$ Information Losses\cite{tablegan}}-\label{Ch4:tabloss} The \textbf{classification loss} requires adding an \textit{auxiliary classifier} to the GAN architecture, in parallel to the discriminator. The \textit{auxiliary classifier} is trained alongside the discriminator and generator and usually features the same neural architecture as the discriminator~\cite{acgan}. It is primarily used to output predicted class labels for each synthesized record.
The \textit{classification loss} quantifies the discrepancy between the synthesized and predicted class labels. This helps to increase the semantic integrity of synthetic records. For instance, (sex=female, disease=prostate cancer) is not a semantically correct record, as women do not have a prostate; no such record appears in the original data, so it is not learnt by the classifier\cite{tablegan}. The loss therefore provides the generator a useful signal for generating valid class labels in synthetic data records.
$\mathcal{L}_{class}^{C} = \mathbb{E}[|l(x)-\mathcal{C}(fe(x))|]_{x \sim p_{data}(x)}$
$\&$ $\mathcal{L}_{class}^{G} = \mathbb{E}[|l(G(z))-\mathcal{C}(fe(G(z)))|]_{z \sim p(z)}$ correspond to training the classifier (i.e., $\mathcal{C}$) and generator (i.e., $\mathcal{G}$), respectively, where $l(.)$ is a function that returns the class label of any given data row and $fe(.)$ deletes the class feature of that data row.\\
\\
The \textbf{information loss} penalizes the discrepancy between statistics of the generated data and the real data. This helps to generate data which is statistically closer to the real one. Moreover, the information loss stabilizes the training of the generator by providing a new objective for the generator that prevents it from over-training on the current discriminator~\cite{salimans2016improved}.
For computing the \textit{information loss}, let $f_x$ and $f_{\mathcal{G}(z)}$ denote the resulting features obtained from the penultimate layer of a discriminator denoted as $\mathcal{D}$ for a real and generated sample, respectively. Thus, the \textit{information loss} for the generator (i.e., $\mathcal{G}$) is expressed as:
$\mathcal{L}_{info}^{G}= \mathcal{L}_{mean} + \mathcal{L}_{sd}$ where $\mathcal{L}_{mean} = ||\mathbb{E}[f_x]_{x \sim p_{data}(x)} - \mathbb{E}[f_{\mathcal{G}(z)}]_{z \sim p(z)}||_{2}$ and $\mathcal{L}_{sd} = ||\mathbb{SD}[f_x]_{x \sim p_{data}(x)} - \mathbb{SD}[f_{\mathcal{G}(z)}]_{z \sim p(z)}||_{2}$. Here, $\mathbb{E}$ and $\mathbb{SD}$ denote the mean and standard deviation of the features, respectively.
Note that for all the above loss equations, $p_{data}$ is used to denote the real data distribution and $p(z)$ is a prior distribution over the latent noise vector $z$ that is fed to the generator.
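As an illustration of the information loss, the sketch below computes $\mathcal{L}_{mean}$ and $\mathcal{L}_{sd}$ over two feature batches. The random matrices merely stand in for penultimate-layer discriminator activations; all shapes and values are hypothetical.

```python
import numpy as np

def information_loss(f_real, f_fake):
    """L_info = ||E[f_x] - E[f_G(z)]||_2 + ||SD[f_x] - SD[f_G(z)]||_2,
    computed over batches of penultimate-layer discriminator features."""
    l_mean = np.linalg.norm(f_real.mean(axis=0) - f_fake.mean(axis=0))
    l_sd = np.linalg.norm(f_real.std(axis=0) - f_fake.std(axis=0))
    return l_mean + l_sd

rng = np.random.default_rng(2)
f_real = rng.normal(0.0, 1.0, size=(512, 8))   # stand-in features of a real batch
f_fake = rng.normal(0.5, 1.5, size=(512, 8))   # stand-in features of a generated batch
loss = information_loss(f_real, f_fake)        # > 0: first and second moments differ
```

The loss vanishes exactly when the generated feature batch matches the real batch in both mean and standard deviation.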
\subsection{Data Transformation}
\textbf{Mode-Specific Normalisation}-\label{Ch4:msn} The \textbf{mode-specific normalisation (MSN)} technique, introduced in the work of \cite{ctgan}, is designed to deal with multiple peaks in \textit{multi-modal} continuous variables. MSN acts as a \textit{reversible transformation} that helps to represent complicated numerical distributions and generate synthetic data with greater fidelity.
\begin{figure}[htb]
\begin{center}
\subfloat[\centering Fitting VGM on a continuous variable ]{
\includegraphics[width=0.20\columnwidth]{Figures/Ch_4/fit_vgm.png}
\label{fig:vgm}
}
\subfloat[\centering Selecting a mode for a single value in a continuous variable]{
\includegraphics[width=0.20\columnwidth]{Figures/Ch_4/select_mode.png}
\label{fig:vgm_single}
}
\caption{\centering MSN encoding for continuous variables}
\label{fig:gmm_distribution_continuous}
\end{center}
\end{figure}
A continuous variable is processed using a \textit{variational Gaussian mixture model (VGM)}~\cite{prml}, which estimates the number of modes $k$, e.g., $k=2$ in the example provided (see Fig.~\ref{fig:gmm_distribution_continuous}(a)), and fits a Gaussian mixture model. The learned Gaussian mixture model can be formally expressed as: $\mathbb{P} = \sum_{k=1}^{2} \omega_k \mathcal{N}(\mu_k, \sigma_k)$, where $\mathcal{N}$ is the normal distribution and $\omega_k$, $\mu_k$ and $\sigma_k$ are the weight, mean and standard deviation of each mode, respectively.
To encode the values of a continuous variable, each value is assigned to, and normalized with respect to, the mode it has the highest probability of belonging to (see Fig.~\ref{fig:gmm_distribution_continuous}(b)). Let $\rho_1$ and $\rho_2$ be the probability densities of the two modes at the value $\tau$ to be encoded; the mode with the highest density is selected. In the provided example $\rho_1$ is higher, so mode $1$ is used to normalize $\tau$. The normalized value $\alpha$ is: $\alpha = \frac{\tau - \mu_1}{4\sigma_1}$. Moreover, the mode used to encode $\tau$ is tracked via a one-hot encoding $\beta$, e.g., $\beta = [1,0]$ in the given example. The final encoding is given by the concatenation of $\alpha$ and $\beta$: $\alpha \oplus \beta$, where $\oplus$ is the vector concatenation operator.
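Assuming the VGM parameters (weights, means and standard deviations) have already been fitted, the per-value encoding step can be sketched as below; the two-mode parameters are purely illustrative.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def msn_encode(tau, weights, mus, sigmas):
    """Mode-specific normalisation of one value, given fitted VGM parameters:
    pick the most likely mode, normalise by 4*sigma, one-hot encode the mode."""
    dens = np.array([w * normal_pdf(tau, m, s)
                     for w, m, s in zip(weights, mus, sigmas)])
    k = int(np.argmax(dens))                  # mode with the highest density
    alpha = (tau - mus[k]) / (4 * sigmas[k])  # normalised value
    beta = np.eye(len(mus), dtype=int)[k]     # one-hot mode indicator
    return alpha, beta

# Two-mode example: a value lying near the first mode.
alpha, beta = msn_encode(1.2, weights=[0.6, 0.4], mus=[1.0, 5.0], sigmas=[0.5, 1.0])
print(round(alpha, 2), beta)   # 0.1 [1 0]
```

The final representation is the concatenation $\alpha \oplus \beta$; decoding reverses the affine map for the indicated mode, which makes the transformation reversible.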
\subsection{Differential Privacy}
\label{Ch5:background}
This section presents formal definitions and theorems pertaining to differential privacy that are relevant for this work.\\
\\
\textbf{Definition 2.2.1} (Differential Privacy\cite{dwork2008differential}) A randomized mechanism \( \mathcal{M} \) with range \( \mathcal{R} \) is $(\epsilon,\delta)$-DP, if
\begin{equation}
P[\mathcal{M}(S)\in \mathcal{O}] \leq e^{\epsilon}\cdot P[\mathcal{M}(S')\in \mathcal{O}] + \delta
\end{equation}
holds for any subset of outputs $\mathcal{O} \subseteq \mathcal{R}$ and for any adjacent datasets $S$ and $S'$, where $S$ and $S'$ differ in only one training example.
Note that, $\mathcal{M}$, for the purposes of this work corresponds to a tabular GAN model and $(\epsilon,\delta)$ represents the privacy budget. Intuitively, DP tries to minimize the influence of any individual data point on the training of tabular GANs with lower values of $(\epsilon,\delta)$ providing greater privacy protection.
\\
\\
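As a quick numerical sanity check of Definition 2.2.1 (a textbook example, not part of this work), the classic randomized-response mechanism, which answers truthfully with probability $3/4$ and lies with probability $1/4$, satisfies $(\ln 3, 0)$-DP:

```python
import math

# Randomized response on a yes/no question: flip a fair coin; on heads answer
# truthfully, on tails answer with a second fair coin flip. Then
# P[output=yes | truth=yes] = 3/4 and P[output=yes | truth=no] = 1/4,
# so the worst-case ratio of output probabilities on adjacent inputs is 3.
p_yes_given_yes = 0.75
p_yes_given_no = 0.25
epsilon = math.log(p_yes_given_yes / p_yes_given_no)
print(round(epsilon, 4))   # 1.0986 (= ln 3), with delta = 0
```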
\textbf{Definition 2.2.2} (R\'enyi Differential Privacy (RDP)\cite{mironov2017renyi}) A randomized mechanism \( \mathcal{M} \) is $(\lambda,\epsilon)$-RDP with order $\lambda$, if
\begin{equation}
D_{\lambda}(\mathcal{M}(S)||\mathcal{M}(S')) = \frac{1}{\lambda-1}\log\mathbb{E}_{x\sim\mathcal{M}(S)} \left[ \left(\frac{P[\mathcal{M}(S)=x]}{P[\mathcal{M}(S')=x]} \right)^{\lambda-1} \right]\leq\epsilon
\end{equation}
holds for any adjacent datasets S and S', where
\small
$D_\lambda(P||Q)=\frac{1}{\lambda-1}\log\mathbb{E}_{x\sim Q}[(P(x)/Q(x))^\lambda]$ \normalsize represents the R\'enyi divergence. In addition, a $(\lambda,\epsilon)$-RDP mechanism \( \mathcal{M} \) also satisfies $(\epsilon+\frac{\log(1/\delta)}{\lambda-1},\delta)$-DP for any $0<\delta<1$.
RDP was proposed to alleviate the shortcomings of DP when dealing with the composition of randomized mechanisms that rely on the application of Gaussian noise. RDP is a strictly stronger privacy definition than DP, as it provides tighter bounds for tracking the cumulative privacy loss over a sequence of mechanisms such as differentially private stochastic gradient descent, which is applied repeatedly during training.
\\
\\
\textbf{Theorem 2.2.1} (Composition\cite{mironov2017renyi}) For a sequence of mechanisms $ \mathcal{M}_{1},...,\mathcal{M}_{k}$ such that $\mathcal{M}_{i}$ is $(\lambda,\epsilon_i)$-RDP $\forall i$, the composition $\mathcal{M}_{1}\circ ... \circ \mathcal{M}_{k}$ is $(\lambda,\sum_{i}\epsilon_{i})$-RDP.
\\
\\
\textbf{Definition 2.2.3} (Gaussian Mechanism\cite{dwork2014algorithmic,mironov2017renyi}) Let $f : X \rightarrow \mathbb{R}^{d}$ be an arbitrary d-dimensional function with sensitivity being:
\begin{equation}
\Delta_{2}f = \max_{S,S'}||f(S) - f(S')||_{2}
\end{equation}
over all adjacent datasets $S$ and $S'$. The Gaussian mechanism \( \mathcal{M}_{\sigma} \), parameterized by $\sigma$, adds Gaussian noise into the output of $f$, i.e.,
\begin{equation}
\mathcal{M}_{\sigma}(x) = f(x) + \mathcal{N}(0,\sigma^{2}I)
\end{equation}
where $\mathcal{N}$ denotes a Gaussian distribution with mean 0 and covariance $\sigma^{2}I$. Thus, \( \mathcal{M}_{\sigma} \) satisfies $(\lambda,\frac{\lambda\Delta_{2}f^{2}}{2\sigma^{2}})$-RDP.
The Gaussian mechanism described above forms the basis on which differential privacy is integrated for training tabular GANs in this work.
\\
\\
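Definitions 2.2.2--2.2.3 and Theorem 2.2.1 combine into a simple privacy accountant. The sketch below tracks the RDP of $T=100$ compositions of a Gaussian mechanism at a single order $\lambda$ and converts the result to $(\epsilon,\delta)$-DP; the parameter values are illustrative, and a practical accountant would minimise the converted $\epsilon$ over many orders.

```python
import math

def gaussian_rdp_eps(lam, sensitivity, sigma):
    """RDP epsilon of the Gaussian mechanism at order lambda (Def 2.2.3)."""
    return lam * sensitivity ** 2 / (2 * sigma ** 2)

def rdp_to_dp(lam, eps_rdp, delta):
    """Convert (lambda, eps)-RDP into (eps + log(1/delta)/(lambda-1), delta)-DP."""
    return eps_rdp + math.log(1 / delta) / (lam - 1)

# Compose T = 100 applications of the same Gaussian mechanism (Theorem 2.2.1):
# the per-step RDP epsilons simply add up at a fixed order lambda.
lam, sigma, delta = 32, 20.0, 1e-5
eps_total_rdp = 100 * gaussian_rdp_eps(lam, sensitivity=1.0, sigma=sigma)
print(round(rdp_to_dp(lam, eps_total_rdp, delta), 3))   # 4.371
```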
\textbf{Theorem 2.2.2} (Post Processing\cite{dwork2014algorithmic}) If \( \mathcal{M} \) satisfies $(\epsilon,\delta)$-DP, $F\circ\mathcal{M}$ will satisfy $(\epsilon,\delta)$-DP
for any function F with $\circ$ denoting the composition operator.
As a result of the post processing theorem, it suffices to ensure that one of the networks of a tabular GAN (i.e., either the discriminator or the generator network) is trained with DP guarantees for the overall algorithm to satisfy differential privacy.
\\
\\
\textbf{Theorem 2.2.3} (RDP for Subsampled Mechanisms\cite{wang2019subsampled}) Given a dataset containing $n$ data points with domain $\mathcal{X}$ and a randomized mechanism $\mathcal{M}$ that takes an input from $\mathcal{X}^{m}$ for $m \leq n$, let the randomized algorithm $\mathcal{M}\circ\textbf{subsample}$ be defined as: (i) \textbf{subsample}: subsample without replacement $m$ data points of the database (with subsampling rate $\gamma = m/n$); (ii) apply $\mathcal{M}$: a randomized algorithm taking the subsampled dataset as the input.
Thus, for all integers $\lambda \geq 2 $, if $\mathcal{M}$ is $(\lambda,\epsilon(\lambda))$-RDP, then $\mathcal{M}\circ\textbf{subsample}$ is $(\lambda,\epsilon'(\lambda))$-RDP where
\begin{equation}
\begin{aligned}
& \epsilon'(\lambda) & \leq \frac{1}{\lambda-1}\log\bigg(1 + \gamma^{2}\binom{\lambda}{2}\min\left\{4(e^{\epsilon(2)}-1),e^{\epsilon(2)}\min\{2,(e^{\epsilon(\infty)}-1)^{2}\}\right\} \\
& & + \sum_{j=3}^{\lambda}\gamma^{j} \binom{\lambda}{j}e^{(j-1)\epsilon(j)}\min\left\{2,(e^{\epsilon(\infty)}-1)^{j}\right\}\bigg)
\end{aligned}
\end{equation}
Subsampling is a useful technique to strengthen the privacy guarantees offered by a randomized mechanism $\mathcal{M}$.
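The bound of Theorem 2.2.3 can be evaluated numerically. The sketch below plugs in the RDP curve of a Gaussian mechanism with sensitivity 1, for which $\epsilon(\infty)=\infty$ and the inner $\min$ terms therefore reduce to 2; the parameter values are illustrative.

```python
from math import comb, exp, log

def subsampled_rdp_eps(lam, gamma, eps):
    """Upper bound on eps'(lambda) for M∘subsample (Theorem 2.2.3),
    assuming eps(inf) is infinite so min{2, (e^eps(inf)-1)^j} = 2."""
    term2 = gamma ** 2 * comb(lam, 2) * min(4 * (exp(eps(2)) - 1),
                                            exp(eps(2)) * 2)
    tail = sum(gamma ** j * comb(lam, j) * exp((j - 1) * eps(j)) * 2
               for j in range(3, lam + 1))
    return log(1 + term2 + tail) / (lam - 1)

# Gaussian mechanism with sigma = 5 and sensitivity 1: eps(j) = j / (2 sigma^2).
eps_curve = lambda j: j / (2 * 5.0 ** 2)
amplified = subsampled_rdp_eps(lam=8, gamma=0.01, eps=eps_curve)
base = eps_curve(8)
print(amplified < base)   # True: subsampling strengthens the guarantee
```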
\subsection{DP via Differential Private SGD\cite{abadi2016deep}}
\label{Ch5:DPSGD}
The DP-SGD technique enables training neural networks with differential privacy guarantees, using noisy stochastic gradient descent to limit the influence of individual training samples. Algorithm 1 specifies how this technique is used for training a network with parameters $\theta$ by minimizing the empirical loss function $\mathcal{L}(\theta)$. At every iteration of SGD, the gradients $\nabla_{\theta}\mathcal{L}(\theta; x_i)$ are computed for a random subset of training points, the $L_2$ norm of each per-sample gradient is clipped, noise is added to the aggregated gradients to preserve privacy, and the parameters $\theta$ are updated via gradient descent.
\begin{algorithm}[htb]
\textbf{Input:} Data points $\{x_1,...,x_N\}$, loss function \small$\mathbb{\mathcal{L}}(\theta) = \frac{1}{N}\sum_{i}\mathbb{\mathcal{L}}(\theta,x_i)$. \normalsize
Hyper-parameters: learning rate $\eta_{t}$, noise scale $\sigma$, batch size $B$, gradient norm bound $C$.\\
\textbf{Initialize} $\theta_{0}$ randomly\;
\For{$t \in [T]$}{
Take a random sample $B_t$ with sampling probability $B/N$\;
\textbf{Compute Gradient}\:
For each $i \in B_{t}$, compute $g_{t}(x_{i})\leftarrow\nabla_{\theta_{t}}\mathcal{L}(\theta_{t},x_{i})$\:
\textbf{Clip gradient}\:
$\bar{g}_{t}(x_i)\leftarrow g_{t}(x_i)/\max(1,\frac{||g_t(x_i)||_{2}}{C})$
\textbf{Add noise}\:
$\Tilde{g}_{t}\leftarrow\frac{1}{B}(\sum_{i}\bar{g}_{t}(x_i)+\mathcal{N}(0,\sigma^{2}C^{2}I))$
\textbf{Descent}\:
$\theta_{t+1}\leftarrow\theta_t-\eta_t\Tilde{g}_t$
}
\textbf{Output:}$\theta_{T}$ and final privacy cost $(\epsilon,\delta)$ computed using a privacy accountant.
\caption{\centering Differential Private SGD\cite{abadi2016deep}}
\label{Ch5:DPSGD-Algo}
\end{algorithm}
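A minimal sketch of one iteration of Algorithm 1, assuming the per-sample gradients are already available as vectors; a real implementation would obtain them from an autodiff framework such as PyTorch, and a privacy accountant would track the cumulative $(\epsilon,\delta)$.

```python
import numpy as np

def dpsgd_step(theta, per_sample_grads, eta, C, sigma, rng):
    """One DP-SGD update: clip each per-sample gradient to L2 norm C,
    sum, add Gaussian noise N(0, sigma^2 C^2 I), average, and descend."""
    B = len(per_sample_grads)
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in per_sample_grads]
    noisy = (np.sum(clipped, axis=0)
             + rng.normal(0.0, sigma * C, size=theta.shape)) / B
    return theta - eta * noisy

rng = np.random.default_rng(3)
theta = np.zeros(4)
grads = [rng.normal(size=4) * 10 for _ in range(64)]   # deliberately large gradients
theta = dpsgd_step(theta, grads, eta=0.1, C=1.0, sigma=1.0, rng=rng)
```

With $\sigma=0$ the update reduces to plain SGD on clipped gradients, which makes the clipping step easy to verify in isolation.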
\chapter{Exploratory Study of Related Work}\label{ch3}
\section{Introduction}
This chapter tackles the first research question introduced in Sec.~\ref{Ch1:research_question}. Sec.~\ref{Ch3:EC} presents an empirical quantitative analysis of four state-of-the-art tabular GANs with respect to three core evaluation criteria:
\begin{itemize}
\item Statistical similarity with original data
\item Utility for Machine Learning (ML) applications
\item Privacy preservability
\end{itemize}
Sec.~\ref{Ch3:Challenges} then introduces the major challenges faced by current state-of-the-art methods. Finally, Sec.~\ref{Ch3:Conclusion} ends the chapter with a brief summary.
\section{Empirical Comparison}
\label{Ch3:EC}
\subsection{Datasets}
\label{Ch3:DD}
Five commonly used machine learning datasets were used to perform this experimental study. Three of them -- \href{http://archive.ics.uci.edu/ml/datasets/adult}{Adult}, \href{https://archive.ics.uci.edu/ml/datasets/covertype}{Covertype} and \href{http://archive.ics.uci.edu/ml/datasets/kdd+cup+1999+data}{Intrusion} -- are from the UCI machine learning repository~\cite{UCIdataset}. The other two -- \href{https://www.kaggle.com/mlg-ulb/creditcardfraud}{Credit} and \href{https://www.kaggle.com/itsmesunil/bank-loan-modelling}{Loan} -- are from Kaggle\footnote{https://www.kaggle.com/datasets}. All five tabular datasets have a target variable, for which the rest of the variables are used to perform classification. Due to computing resource limitations, 50K rows of data are sampled randomly in a stratified manner with respect to the target variable for the Covertype, Credit and Intrusion datasets.
However, the Adult and Loan datasets are not sampled. The details of each dataset are shown in Tab.~\ref{table:DDE}. Note that we assume the user already knows the data type of each variable in every dataset before training; the same assumption is made in \cite{ctgan}.
\begin{table}[htb]
\centering
\caption[DD]{\centering Description of Datasets\footnotemark.}
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{ |c|c|c|c|c|c|c|c| }
\hline
\textbf{Dataset} & \textbf{Train/Test Split} &\textbf{Target Variable} & \textbf{$\mbox{Continuous}$} & \textbf{$\mbox{Binary}$} & \textbf{$\mbox{Multi-class}$} & \textbf{$\mbox{Mixed-type}$}& \textbf{$\mbox{Long-tail}$}\\
\hline
{Adult} & 39k/9k & 'income' & 3 & 2 & 7 & 2 & 0\\
\hline
Covertype & 45k/5k & 'Cover\_Type' & 10 & 44 & 1 & 0 & 0\\\hline
Credit & 40k/10k & 'Class' & 30 & 1 & 0 & 0 & 1 \\\hline
Intrusion & 45k/5k & 'Class' & 22 & 6 & 14 & 0 & 2 \\\hline
Loan & 4k/1k & 'PersonalLoan' & 5 & 5 & 2 & 1 & 0 \\
\hline
\end{tabular}
}
\label{table:DDE}
\end{table}
\footnotetext{ Refer to Sec.~\ref{Ch4:data_representation} $\&$ Sec.~\ref{Ch4:longgtail} for details for Mixed-type and Long-tail, respectively. Note that these data-types are simply treated as continuous with respect to the baselines evaluated in this chapter.
}
\subsection{Baselines}
\label{Ch3:Baselines}
We evaluate 4 state-of-the-art GAN-based tabular data generators: CTGAN, TableGAN, CW-GAN $\&$ MedGAN. Tab.~\ref{table:sota} highlights the key features of each baseline.
To have a fair comparison, all algorithms are implemented in PyTorch, with the generator and discriminator structures matching the descriptions provided in their respective papers, with the exception of the MedGAN model, which was extended to deal with continuous variables as well.\footnote{Note that the code-base for all the models was found here: \url{https://github.com/sdv-dev/SDGym}} All algorithms are trained using a batch size of 500 rows for 150 epochs on the Adult, Covertype, Credit and Intrusion datasets, and for 300 epochs on the Loan dataset. This is because the Loan dataset is significantly smaller than the others, containing only 5000 rows, and requires a longer training time to converge. Lastly, each experiment is repeated 3 times.
\subsection{Environment}
Experiments are run under Ubuntu 20.04 on a machine equipped with 32 GB memory, a GeForce RTX 2080 Ti GPU and a 10-core Intel i9 CPU.
\subsection{Evaluation metrics}
\label{Ch3:metrics}
The evaluation is conducted on three dimensions: (1) machine learning (ML) utility, (2) statistical similarity and (3) privacy preservability. The first two are used to evaluate if the synthetic data can be used as a good proxy of the original data. The third criterion sheds light on the nearest neighbour distances within and between the original and synthetic datasets, respectively.
\begin{enumerate}
\item \textbf{Machine learning (ML) utility}-
\label{Ch3:ml_efficacy}
To quantify the ML utility, we compare the performance achieved by 5 widely used machine learning algorithms on real versus synthetic data: decision tree classifier, linear support-vector-machine (SVM), random forest classifier, multinomial logistic regression and MLP. We use Python and scikit-learn 0.24.2.
We set max-depth to 28 for the decision tree and random forest models. The MLP uses one hidden layer of 128 neurons. All other hyper-parameters use their default values. For a fair comparison, all hyper-parameters and ML models are fixed across all datasets. Consequently, our results may differ slightly from~\cite{ctgan}, where the authors use different ML models and hyper-parameters for each dataset.
First we split the original data into training and test sets (see Fig.~\ref{fig:settingA}). The training set is used as real data to train the GAN models. Once the training is finished, we use each model to synthesize data of the same size as the training set. The synthetic and real training datasets are then used to train two separate instances of the 5 machine learning models from above. The ML utility is measured via the difference in accuracy, F1-score and area under the ROC between model pairs trained on the real and synthetic data. The aim of this design is to test how close the ML utility is when a machine learning model is trained on the synthetic data versus the real data.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\columnwidth]{Figures/Ch_3/testing_flow_design.png}
\caption{\centering Evaluation flows for ML utility}
\label{fig:settingA}
\vspace{-0.5em}
\end{figure}
\item \textbf{Statistical Similarity}- Three metrics are used to quantitatively measure the statistical similarity between the real and synthetic data.
\textit{Jensen-Shannon divergence (JSD)}~\cite{jsd}- The JSD provides a measure to quantify the difference between the probability mass distributions of individual categorical variables belonging to the real and synthetic datasets, respectively. Moreover, this metric is bounded between 0 and 1 and is symmetric allowing for an easy interpretation of results.
\textit{Wasserstein distance (WD)}~\cite{wgan_test}- In a similar vein, the Wasserstein distance is used to capture how well the distributions of individual continuous/mixed variables are emulated by the synthetically produced datasets relative to the real datasets. We use the WD because we found the JSD metric to be numerically unstable for evaluating the quality of continuous variables, especially when there is no overlap between the synthetic and original dataset.
\textit{Difference in pair-wise correlation (Diff. Corr.)}-
To evaluate how well feature interactions are preserved in the synthetic datasets, we first compute the pair-wise correlation matrix for the columns within the real and synthetic datasets individually. To measure the correlation between any two continuous features, the Pearson correlation coefficient is used. It ranges between $[-1,+1]$. Similarly, the Theil uncertainty coefficient is used to measure the correlation between any two categorical features. It ranges between $[0,1]$. Lastly, the correlation ratio between categorical and continuous variables is used. It also ranges between $[0,1]$. Note that the dython\footnote{\url{http://shakedzy.xyz/dython/modules/nominal/\#compute\_associations}} library is used to compute these metrics. Finally, the difference between the pair-wise correlation matrices of the real and synthetic datasets is computed.
\item \textbf{Privacy preservability}- To quantify the privacy preservability, we resort to distance metrics (instead of differential privacy~\cite{pategan}) as they are intuitive and easy to understand by data science practitioners. Specifically, the following two metrics are used to evaluate the privacy risk associated with synthetic datasets.
\textit{Distance to Closest Record (DCR)}- The DCR measures the Euclidean distance between any synthetic record and its closest corresponding real neighbour. Ideally, the higher the DCR, the lower the risk of privacy breach. Furthermore, the $5^{th}$ percentile of this metric is computed to provide a robust estimate of the privacy risk.
\textit{Nearest Neighbour Distance Ratio (NNDR)}~\cite{nndr}- Instead of only measuring the closest neighbour, the NNDR measures the ratio between the Euclidean distance for the closest and second closest real neighbour to any corresponding synthetic record. This ratio is within $[0,1]$. Higher values indicate better privacy. Low NNDR values between synthetic and real data may reveal sensitive information from the closest real data record. Fig.~\ref{fig:nndr} illustrates the case. Hence, this ratio helps to evaluate the privacy risk with greater depth and better certainty. Note that the $5^{th}$ percentile is computed here as well.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\columnwidth]{Figures/Ch_3/NNDR.png}
\caption{ \centering Illustration of NNDR metric with its privacy risk implications}
\label{fig:nndr}
\end{figure}
\end{enumerate}
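The two distance-based privacy metrics above can be sketched with brute-force pairwise distances, which is adequate for small samples; the dataset sizes and values below are hypothetical, and a production implementation would use an optimised nearest-neighbour search.

```python
import numpy as np

def dcr_nndr(real, synth, percentile=5):
    """5th-percentile DCR and NNDR of a synthetic dataset w.r.t. a real one
    (Euclidean distances; brute-force small-scale sketch)."""
    # pairwise distance matrix: one row per synthetic record
    d = np.linalg.norm(synth[:, None, :] - real[None, :, :], axis=2)
    d.sort(axis=1)
    dcr = d[:, 0]             # distance to the closest real record
    nndr = d[:, 0] / d[:, 1]  # ratio of closest to second-closest distance, in [0, 1]
    return np.percentile(dcr, percentile), np.percentile(nndr, percentile)

rng = np.random.default_rng(4)
real = rng.normal(size=(200, 3))
synth = rng.normal(size=(100, 3))
dcr5, nndr5 = dcr_nndr(real, synth)   # higher values indicate lower privacy risk
```

A synthetic dataset that simply copies the real one yields DCR and NNDR of exactly 0, the worst possible privacy score under both metrics.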
\subsection{Results}
\label{Ch3:Results}
In this sub-section, the experimental results for each data-synthesizer are shown based on the aforementioned evaluation criteria.
\begin{enumerate}
\item \textbf{ML Utility}- Tab. \ref{table:ML_allE} shows that the TableGAN model outperforms the other models by achieving the smallest differences in all three metrics used to measure ML utility (i.e., accuracy, F1-score and AUC). This surprising result shows that it can even outperform the more recent conditional GAN architectures such as CTGAN and CW-GAN. The results suggest that the deep convolutional architecture employed in the TableGAN model achieves the strongest performance. Therefore, it is worth exploring the benefits of utilising this type of architecture to further improve the performance of other models such as CTGAN.
\begin{table}[htb]
\centering
\caption{\centering Difference of ML accuracy (\%), F1-score, and AUC between original and synthetic data: average over 5 different datasets and 3 replications.}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Accuracy} & \textbf{F1-score} & \textbf{AUC} \\
\hline
\small{CTGAN} &21.51\% & 0.274 & 0.253 \\
\small{TableGAN} &\textbf{11.40\%} & \textbf{0.130} & \textbf{0.169} \\
\small{MedGAN} & 14.11\%& 0.282& 0.285 \\
\small{CW-GAN} & 20.06\%& 0.354 & 0.299 \\
\hline
\end{tabular}
\label{table:ML_allE}
\end{table}
\item \textbf{Statistical similarity}- Tab. \ref{table:SS_allE} shows that the CTGAN data-synthesizer achieves the best average JSD for categorical columns along with the best average Wasserstein distance for continuous columns. This highlights that a conditional architecture accompanied by the training-by-sampling method, along with mode-specific normalisation for continuous variables, is directly beneficial for improving the statistical similarity of synthetically produced datasets. However, it should be noted that the TableGAN model performs best in terms of maintaining the smallest correlation distance, with the CTGAN model performing slightly worse in second place. This slight difference could yet again be attributed to the \textit{DCGAN neural network architecture}, which makes use of \textit{strided convolutions} that let the receptive field grow larger after each layer, thereby extracting useful global correlations in the data.
\begin{table}[htb]
\centering
\caption{\centering Statistical similarity: three measures averaged over 5 datasets and three repetitions.}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Avg JSD} & \textbf{Avg WD} & \textbf{Diff. Corr.} \\
\hline
\small{CTGAN} & \textbf{0.0704} & \textbf{1769} & 2.73 \\
\small{TableGAN} & 0.0796& 2117 & \textbf{2.30} \\
\small{MedGAN} & 0.2135& 46257 & 5.48 \\
\small{CW-GAN} & 0.1318& 238155 & 5.82 \\
\hline
\end{tabular}
\label{table:SS_allE}
\vspace{-0.5em}
\end{table}
\item \textbf{Privacy Impact}- Tab. \ref{table:PP_allCh3} highlights that the CW-GAN and MedGAN models maintain the safest distances between real and synthetic datasets in terms of the DCR and NNDR metrics. Furthermore, by analyzing the DCR and NNDR metrics within the synthetic data, we see that the CW-GAN model produces the most diverse samples whereas the MedGAN model produces the least diverse samples among all the data-synthesizers, suggesting that the latter most likely suffers from mode-collapse. Lastly, it is worth mentioning that the results also show that privacy and ML utility are inversely related: models such as CTGAN and TableGAN, which perform well in terms of ML utility, are naturally worse in terms of privacy.
\begin{table}[htb]
\centering
\caption{\centering Privacy impact: between real and synthetic data (R\&S) and within real data (R) and synthetic data (S).}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c|}{\textbf{DCR}} & \multicolumn{3}{c|}{\textbf{NNDR}} \\
\cline{2-7}
& \textbf{R\&S} & \textbf{R} & \textbf{S} & \textbf{R\&S} & \textbf{R} & \textbf{S}\\
\hline
CTGAN & 1.517 & 0.428& 1.026 & 0.763 & 0.414 &0.624 \\
TableGAN & 0.988 & 0.428& 0.920 & 0.681 & 0.414 &0.632\\
MedGAN & 1.918 & 0.428& 0.254 & \textbf{0.871} & 0.414 &0.393 \\
CW-GAN & \textbf{2.197} & 0.428& 1.124 & 0.847 & 0.414 &0.675\\
\hline
\end{tabular}
\label{table:PP_allCh3}
\end{table}
\end{enumerate}
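As a concrete illustration of the four evaluation measures discussed above, the sketch below computes the JSD and Wasserstein distance for a pair of toy columns, and the DCR and NNDR between a toy real and synthetic table. The toy data, the 5th-percentile aggregation, and the use of SciPy/scikit-learn are illustrative assumptions, not the exact evaluation pipeline of this chapter.

```python
# Toy illustration of the four measures: JSD and Wasserstein distance
# for statistical similarity, DCR and NNDR for privacy. Toy data and
# the 5th-percentile aggregation are assumptions for this sketch.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
real_cat = rng.choice(["A", "B", "C"], size=500, p=[0.6, 0.3, 0.1])
fake_cat = rng.choice(["A", "B", "C"], size=500, p=[0.5, 0.35, 0.15])
real_num = rng.normal(50, 10, size=(500, 1))
fake_num = rng.normal(52, 12, size=(500, 1))

# Statistical similarity: compare category frequencies / value distributions.
cats = ["A", "B", "C"]
p = np.array([np.mean(real_cat == c) for c in cats])
q = np.array([np.mean(fake_cat == c) for c in cats])
jsd = jensenshannon(p, q, base=2)                     # bounded in [0, 1]
wd = wasserstein_distance(real_num[:, 0], fake_num[:, 0])

# Privacy: nearest-neighbour distances from synthetic rows to real rows.
d, _ = NearestNeighbors(n_neighbors=2).fit(real_num).kneighbors(fake_num)
dcr = np.percentile(d[:, 0], 5)             # distance to closest real record
nndr = np.percentile(d[:, 0] / d[:, 1], 5)  # 1st-to-2nd neighbour ratio
```

A larger DCR/NNDR between real and synthetic rows indicates that synthetic records are not mere copies of real ones, matching the reading of Tab.~\ref{table:PP_allCh3}.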
\section{Challenges faced by Existing Solutions}
\label{Ch3:Challenges}
In this section, we empirically demonstrate how the prior state-of-the-art methods fall short in solving challenges in industrial data sets.
\begin{enumerate}
\item \textbf{Mixed data type variables}- To the best of our knowledge, existing GAN-based tabular generators only consider data variables as either categorical or continuous. However, in reality, a variable can be a mix of these two types, and often variables have missing values. The \textit{Mortgage} variable from the Loan dataset is a good example of a mixed variable. Fig.~\ref{fig:mortgage_column_motivation} shows the distribution of the original and synthetic data generated by 4 state-of-the-art algorithms for this variable.
According to the data description, a loan holder can either have no mortgage (0 value) or a mortgage (any positive value). In appearance this variable is not a categorical type, due to the numeric nature of the data. So all 4 state-of-the-art algorithms treat this variable as a continuous type, without capturing the special meaning of the value zero. Hence, all 4 algorithms generate values around 0 instead of exactly 0, and the negative values generated for Mortgage are meaningless in the real world.
\item \textbf{Long tail distributions}- Many real-world datasets have long tail distributions, where most of the occurrences happen near the start of the distribution's range, with rare cases towards the end. As an example, Fig.~\ref{fig:amount_result_motivation} plots the cumulative frequency for the original (top) and synthetic (bottom) data generated by 4 state-of-the-art algorithms for the \textit{Amount} variable in the Credit dataset. This variable represents the transaction amount when using credit cards. One can imagine that most transactions have small amounts, ranging from a few dollars to thousands of dollars. However, there also exists a very small number of transactions with large amounts. Note that for ease of comparison both plots use the same x-axis, but the real data has no negative values.
Thus, the real data clearly has 99\% of occurrences happening at the start of the range, but the distribution extends until around $25000$. In comparison, none of the synthetic data generators is able to learn and imitate this behavior.
\item \textbf{Skewed multi-modal continuous variables}- The term \textit{multi-mode} is borrowed from Variational Gaussian Mixtures (VGM)~\cite{prml} (refer to Sec.~\ref{Ch4:msn}), which are used to model Gaussian distributions with multiple peaks. The intuition behind using multiple modes can be easily grasped from Fig.~\ref{fig:gmm_result_motivation}. The figure plots in each row the distribution of the working \textit{Hours-per-week} variable from the Adult dataset. This is not a typical Gaussian distribution. There is an obvious peak at 40 hours, but with several other lower peaks, e.g. at 50, 20 and 45. Also, the number of people working 20 hours per week is higher than the number working 10 or 30 hours per week.
This behavior is difficult for the state-of-the-art data generators to capture (see the subsequent rows in Fig.~\ref{fig:gmm_result_motivation}). The closest results are obtained by CTGAN, which uses Gaussian mixture estimation for continuous variables. However, CTGAN loses some modes compared to the original distribution.
\end{enumerate}
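To make the notion of mode-specific normalisation concrete, the sketch below fits a variational Gaussian mixture (via scikit-learn's \texttt{BayesianGaussianMixture}) to a bimodal toy column and normalises each value within its most probable mode, in the spirit of CTGAN's encoding. The toy column, the 10-component cap and the $4\sigma$ scaling follow common practice but are assumptions here, not this chapter's exact implementation.

```python
# Sketch of mode-specific normalisation with a variational Gaussian
# mixture; the bimodal toy column and 4*sigma scaling are assumptions.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
hours = np.concatenate([rng.normal(40, 2, 700), rng.normal(20, 3, 300)])

vgm = BayesianGaussianMixture(
    n_components=10,                  # upper bound; surplus modes shrink away
    weight_concentration_prior=1e-3,  # encourages dropping unused modes
    max_iter=200,
    random_state=0,
).fit(hours.reshape(-1, 1))

# Represent each value by its most probable mode k and a scalar
# normalised within that mode: alpha = (x - mu_k) / (4 * sigma_k).
mode = vgm.predict_proba(hours.reshape(-1, 1)).argmax(axis=1)
mu = vgm.means_[mode, 0]
sigma = np.sqrt(vgm.covariances_[mode, 0, 0])
alpha = (hours - mu) / (4 * sigma)
```

Because each value is normalised relative to its own mode, skewed multi-modal columns such as \textit{Hours-per-week} are no longer squashed into a single Gaussian scale.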
\begin{figure}[htb]
\begin{center}
\subfloat[\centering Mortgage in Loan dataset~\cite{kaggleloan}]{
\includegraphics[width=0.33\textwidth]{Figures/Ch_3/mixed_motivation_case.png}
\label{fig:mortgage_column_motivation}
}
\subfloat[\centering Amount in Credit dataset~\cite{kagglecredit}]{
\includegraphics[width=0.3\textwidth]{Figures/Ch_3/long_tail_motivation_case.png}
\label{fig:amount_result_motivation}
}
\subfloat[\centering Hours-per-week in Adult dataset~\cite{UCIdataset}]{
\includegraphics[width=0.335\textwidth]{Figures/Ch_3/extended_condvec_motivation_case.png}
\label{fig:gmm_result_motivation}
}
\caption{\centering Challenges of modeling industrial datasets using existing Tabular GANs: (a) Mixed data-type, (b) Long tail distribution, and (c) Skewed multi-modal continuous variable}
\label{fig:motivationcases}
\end{center}
\end{figure}
\section{Conclusion}
\label{Ch3:Conclusion}
In this exploratory study, we shed light on some of the latest works on GAN-based tabular data-synthesizers. Additionally, we executed an in-depth empirical evaluation to benchmark their performance. Based on our findings, we summarize the key points as follows:
\begin{itemize}
\item Firstly, the TableGAN model outperforms state-of-the-art approaches with respect to the utility for ML applications as it maintains the least difference in all ML metrics.
\item Secondly, it is observed that the best average JSD for categorical variables and the best average Wasserstein distance for continuous variables are achieved by CTGAN.
\item Thirdly, in terms of privacy risk, all data-synthesizers produce datasets whose DCR and NNDR metrics between real and synthetic datasets are greater than those within either the real or the synthetic datasets. This suggests that the privacy risk for all data-synthesizers, as measured via these metrics, is limited.
\item Lastly, current techniques fail to account for mixed data-types, heavy long-tailed distributions and skewed multi-modal numerical distributions.
\end{itemize}
\chapter*{Abstract}
\addcontentsline{toc}{chapter}{Abstract}
\setheader{Abstract}
\vspace*{1.5cm}
\begin{quote}
While data sharing is crucial for knowledge development, privacy concerns and strict regulation (e.g., the European General Data Protection Regulation (GDPR)) unfortunately limit its full effectiveness. Synthetic tabular data emerges as an alternative to enable data sharing while fulfilling regulatory and privacy constraints. The state-of-the-art tabular data synthesizers draw methodologies from Generative Adversarial Networks (GANs). In this thesis, we develop CTAB-GAN, a novel conditional table GAN architecture that can effectively model diverse data types with complex distributions. CTAB-GAN is extensively evaluated against state-of-the-art GANs that generate synthetic tables, in terms of data similarity and analysis utility. The results on five datasets show that the synthetic data of CTAB-GAN remarkably resembles the real data for all three types of variables and results in higher accuracy for five machine learning algorithms, by up to 17\%.
Additionally, to ensure greater security for training tabular GANs against malicious privacy attacks, differential privacy (DP) is studied and used to train CTAB-GAN with strict privacy guarantees. DP-CTAB-GAN is rigorously evaluated against state-of-the-art DP tabular GANs in terms of data utility and privacy robustness against membership and attribute inference attacks. Our results on three datasets indicate that strict theoretical differential privacy guarantees come only at the cost of severely degraded data utility. However, it is shown empirically that these guarantees help provide a stronger defence against privacy attacks. Overall, it is found that DP-CTAB-GAN is robust to privacy attacks while maintaining the highest data utility compared to prior work, by up to 18\% in terms of the average precision score.
\begin{flushright}
{\makeatletter\itshape
\@author \\
Delft, August 2021
\makeatother}
\end{flushright}
\end{quote}
\chapter*{Preface}
\addcontentsline{toc}{chapter}{Preface}
\setheader{Preface}
\vspace*{1.5cm}
\begin{quote}
My thesis builds upon a background in adversarial machine learning, with a focus on tabular data generation and its use as both an imminent and eminent privacy-preserving technology. The research begins by attempting to improve state-of-the-art GAN-based tabular data synthesizers by understanding their fundamental strengths and weaknesses in~\autoref{ch3}. Based on the conclusions drawn from this study, existing weaknesses are addressed and the strengths of various methodologies are combined to enhance performance, giving rise to a novel tabular data generator, CTAB-GAN, in~\autoref{ch4}. Additionally, privacy concerns for synthetic tabular data generation are addressed by studying the use of differential privacy, and an empirical investigation of privacy exposure is carried out using membership and attribute inference attacks in~\autoref{ch5}. \\
This research is a product of the wonderful guidance of my supervisor Dr. Lydia Chen and my daily supervisor Dr. Zilong Zhao. I owe a great debt of gratitude to both of them for the extensive knowledge they willingly shared throughout the period of my master's thesis. Finally, I offer my thanks to my family and friends, who have been a pillar of support.\\
I would also like to express my gratitude towards Mr. Hiek Scheer and Mr. James Gnanasekaran, who graciously accepted to be a part of the defence committee.\\
\begin{flushright}
{\makeatletter\itshape
\@author \\
Delft, August 2020
\makeatother}
\end{flushright}
\end{quote}
\chapter{Differential Privacy Experimental Setup}
The supplementary material highlights the network architecture shared between PATE-GAN~\cite{pategan} and DP-WGAN~\cite{xie2018differentially}, as mentioned in Sec.~\ref{Ch5:ES}. Additionally, it provides the hyper-parameters used for the data utility (i.e., statistical similarity $\&$ ML utility) experiments as well as the membership and attribute inference attack experiments.
\section{Network Architecture}
\label{appendix:1}
The network architecture for training PATE-GAN is identical to the original implementation provided on GitHub\footnote{\url{https://github.com/vanderschaarlab/mlforhealthlabpub/tree/main/alg/pategan}}. The network structure of DP-WGAN\footnote{\url{https://github.com/BorealisAI/private-data-generation/blob/master/models/dp_wgan.py}} used in the experiments has been modified from the original so that the discriminator and generator networks have exactly the same neural network architecture as those of PATE-GAN. This is done to study the performance of DP-WGAN relative to PATE-GAN.
The generator network of PATE-GAN comprises a shallow neural network with 3 fully connected layers of $4*l$ nodes each, where $l$ is the length of each row in the original data. The first 2 fully connected layers are followed by a \textit{Tanh} activation, whereas the last layer uses a \textit{Sigmoid} activation. This brings the generated values into the range $[0,1]$, the same range as the normalised data used for training.
The student discriminator network of PATE-GAN comprises a shallow neural network with 2 fully connected layers of $l$ nodes. The first layer is followed by a \textit{ReLU} activation function, whereas the output of the second layer is used directly to compute the KL divergence loss of the discriminator, as shown in Eq.~\ref{eq:gan}.
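A minimal NumPy sketch of the layer shapes just described is given below. The text specifies $4*l$ nodes per generator layer; we read the final layer as projecting back to row length $l$ so that the sigmoid output matches the normalised data, and give the student a single output unit for the loss. Both readings are our assumptions, not code from the original repositories.

```python
# NumPy-only sketch of the generator and student-discriminator shapes
# described above. The text gives 4*l nodes per generator layer; we read
# the final layer as projecting back to row length l (an assumption).
import numpy as np

rng = np.random.default_rng(0)
l = 16                                   # example row length (assumption)

def dense(n_in, n_out):                  # weight matrix for one FC layer
    return rng.normal(0.0, 0.1, (n_in, n_out))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(z, 0.0)

# Generator: Tanh, Tanh, then Sigmoid into [0, 1].
G1, G2, G3 = dense(l, 4 * l), dense(4 * l, 4 * l), dense(4 * l, l)
def generator(z):
    return sigmoid(np.tanh(np.tanh(z @ G1) @ G2) @ G3)

# Student discriminator: ReLU, then a raw score used by the loss.
S1, S2 = dense(l, l), dense(l, 1)
def student(x):
    return relu(x @ S1) @ S2

fake = generator(rng.normal(size=(3, l)))   # three noise rows
score = student(fake)
```

The final sigmoid keeps every generated value inside $[0,1]$, matching the normalised training data described above.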
\section{Network Hyper-parameters}
\label{appendix:2}
Across all baselines, the batch size was set to 64. Moreover, for PATE-GAN and DP-WGAN, the default hyper-parameters found in the code-bases were used. Thus, PATE-GAN uses 10 teacher discriminators for all experiments, and DP-WGAN clamps the discriminator weights to $[-0.01,0.01]$ and uses $0.1$ as the gradient norm bound $C$.
Additionally, Tab.~\ref{tab:app0} and Tab.~\ref{tab:app1} provide details concerning the differential-privacy hyper-parameters, such as the noise scale used and the number of training epochs\footnote{Note that in the original implementation of PATE-GAN, the privacy budget $\epsilon=1$ is expended with just one iteration over a single batch. Therefore, the epochs columns display the number of iterations over a single batch.}, required for generating synthetic tabular data with the corresponding privacy budget (i.e., $\epsilon$) for the data utility and privacy attack experiments in Sec.~\ref{Ch5:ES}.
\begin{table}[htb]
\centering
\caption{\centering Differential privacy hyper-parameters for conducting statistical similarity and ML utility experiments.}
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Model} & \textbf{Dataset} & \textbf{No. of Discriminators} & \textbf{Noise Scale} & \textbf{Epochs} & \textbf{Epsilon} \\
\hline
PATE-GAN & Adult & 1 & 1 & 1 & 1 \\
PATE-GAN & Credit & 1 & 1 & 1 & 1 \\
PATE-GAN & Loan & 1 & 1 & 1 & 1 \\
DP-WGAN & Adult & 1 & 1.012 & 1 & 1 \\
DP-WGAN & Credit & 1 & 1.012 & 1 & 1 \\
DP-WGAN & Loan & 1 & 1.33 & 1 & 1 \\
D-DP-CTABGAN & Adult & 1 & 1.06 & 1 & 1 \\
D-DP-CTABGAN & Credit & 1 & 1.06 & 1 & 1 \\
D-DP-CTABGAN & Loan & 1 & 1.58 & 1 & 1 \\
G-DP-CTABGAN & Adult & 1000 & 3.518 & 1 & 1 \\
G-DP-CTABGAN & Credit & 1000 & 3.53 & 1 & 1 \\
G-DP-CTABGAN & Loan & 1000 & 1.28 & 1 & 1 \\
PATE-GAN & Adult & 1 & 1 & 795 & 100 \\
PATE-GAN & Credit & 1 & 1 & 795 & 100 \\
PATE-GAN & Loan & 1 & 1 & 795 & 100 \\
DP-WGAN & Adult & 1 & 0.33 & 6 & 100 \\
DP-WGAN & Credit & 1 & 0.33 & 6 & 100 \\
DP-WGAN & Loan & 1 & 0.38 & 7 & 100 \\
D-DP-CTABGAN & Adult & 1 & 0.36 & 5 & 100 \\
D-DP-CTABGAN & Credit & 1 & 0.36 & 5 & 100 \\
D-DP-CTABGAN & Loan & 1 & 0.42 & 4 & 100 \\
G-DP-CTABGAN & Adult & 50 & 0.867 & 1 & 100 \\
G-DP-CTABGAN & Credit & 100 & 0.874 & 1 & 100 \\
G-DP-CTABGAN & Loan & 100 & 1.089 & 4 & 100 \\
\hline
\end{tabular}
}
\label{tab:app0}
\end{table}
\begin{table}[htb]
\centering
\caption{\centering Differential privacy hyper-parameters for conducting membership and attribute inference attacks.}
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Model} & \textbf{Dataset} & \textbf{No of Discriminators} & \textbf{Noise Scale (Membership)} & \textbf{Noise Scale (Attribute)} & \textbf{Epochs} & \textbf{Epsilon} \\ \hline
PATE-GAN & Adult & 1 & 1 & 1 & 1 & 1 \\
PATE-GAN & Credit & 1 & 1 & 1 & 1 & 1 \\
PATE-GAN & Loan & 1 & 1 & 1 & 1 & 1 \\
DP-WGAN & Adult & 1 & 1.33 & 1.25 & 1 & 1 \\
DP-WGAN & Credit & 1 & 1.33 & 1.25 & 1 & 1 \\
DP-WGAN & Loan & 1 & 1.33 & 1.25 & 1 & 1 \\
D-DP-CTABGAN & Adult & 1 & 1.67 & 1.56 & 1 & 1 \\
D-DP-CTABGAN & Credit & 1 & 1.67 & 1.56 & 1 & 1 \\
D-DP-CTABGAN & Loan & 1 & 1.67 & 1.56 & 1 & 1 \\
G-DP-CTABGAN & Adult & 1000 & 1.28 & 1.37 & 1 & 1 \\
G-DP-CTABGAN & Credit & 1000 & 1.28 & 1.37 & 1 & 1 \\
G-DP-CTABGAN & Loan & 1000 & 1.28 & 1.37 & 1 & 1 \\
\hline
\end{tabular}
}
\label{tab:app1}
\end{table}
Lost Caper Music Studio
Welcome to The Lost Caper Music Studio
Original and Cover Music
Eventually there will be Folk, Country, Rock, Celtic music and videos from various local artists, including myself.
This site is new so be sure to check back for new exciting developments at The Lost Caper Studio, right here in Cape Breton, Nova Scotia, Canada.
This site is made by a Cape Breton musician to help local musicians. So with no further ado, please enjoy and support our site.
My Cape Breton Home
Tip Your Musician!
We added a tip jar to keep the Lost Caper Studio up and running, thus helping local musicians.
The minimum tip is C$2.00
Air fa la la lo. (partly sung in Gaelic)
Add a tip to help us keep making music.
Tip Me.
This is a Celtic song composed in the mid-1700s by Duncan Ban MacIntyre. It was originally written and sung in Gaelic and was a song of romantic love, but through translation and over the course of 300 years it has been reworked into a song of brotherhood and love for all mankind. It is now sung partly in Gaelic and partly in English.
Perhaps in these unpredictable and evil times it should be sung and shared throughout the world until it is embedded in our minds as to why the good Lord put us here.
So here is my interpretation of brotherhood and love as expressed in the 300-year-old song Air fa la la lo.
You can download this song for 99 cents by clicking below.
My Cape Breton Home. 2:50
Air Fal Al Al O (Chorus sung in Gaelic) 4:27
Ray Cape Breton
Jan 1 2021 1:46 PM
I will be adding more tracks and videos on a monthly basis. Check back later.
Hope you enjoy my site. Feel free to share the link www.lostcaper.com.
Bandzoogle's Music feature allows you to sell your albums and tracks - and we never take a percentage of your sales. Choose from set prices, free downloads, or even give away a track in exchange for a mailing list signup!
A solar symbol is a drawing or sign used to represent the Sun or its attributes in a given culture. Common solar symbols include circles with rays, crosses or spirals. In religious iconography, personifications of the Sun or solar attributes are indicated by means of a halo or a radiate crown.
When the systematic study of comparative mythology became popular during the 19th century, scholarly opinion tended to interpret historical myths and iconography in terms of "solar symbolism". This was especially the case with Max Müller and his followers from the 1860s onward, in the context of Indo-European studies. Many of the "solar symbols" claimed in that period, such as the swastika, the triskele, the sun cross, etc., have since tended to be interpreted more conservatively.
Solar disk
The basic element of most solar symbols is the circular solar disk.
The disk can be modified in various ways, notably by adding rays (found in Bronze Age Egyptian depictions of Aten) or a cross. In the ancient Near East, the solar disk could also be modified by the addition of the Uraeus (rearing cobra), and in ancient Mesopotamia it was shown winged.
Bronze Age
Egyptian hieroglyphs have a large inventory of solar symbolism owing to the central position of solar deities (Ra, Horus, Aten, etc.) in Egyptian religion.
The main ideogram for "Sun" was a depiction of the solar disk (Gardiner N5), with a variant that includes the Uraeus (Gardiner N6). The "Sun" ideogram in early Chinese writing, beginning with the oracle bone script (12th century BC), likewise shows the solar disk with a central dot (whence the modern character 日), analogous to the Egyptian hieroglyph.
Astronomical symbol
The modern astronomical symbol for the Sun (circled dot, Unicode U+2609 ☉; cf. U+2299 ⊙ "circled dot operator") was first used in the Renaissance. A diagram in the Compendium of Astrology of Johannes Kamateros shows the Sun represented by a circle with one ray. Bianchini's planisphere has a circle with rays radiating from it.
Depictions with rays
A circular disk with alternating triangular and wavy rays emanating from it is a frequent symbol or artistic depiction of the sun.
Antiquity
The ancient Mesopotamian "star of Shamash" could be represented with either eight wavy rays, or with four wavy and four triangular rays.
The Vergina Sun (also known as the Star of Vergina, Macedonian Star, or Argead Star) is a rayed solar symbol appearing in ancient Greek art from the 6th to 2nd centuries BC. The Vergina Sun appears in art variously with sixteen, twelve, or eight triangular rays.
Sun with face
The iconographic tradition of depicting the Sun with rays and with a human face developed in Western tradition in the high medieval period and became widespread in the Renaissance, harking back to the Sun god (Sol/Helios) wearing a radiate crown in classical antiquity.
Sunburst
The sunburst was the badge of King Edward III of England, and it has accordingly become the badge of the office of Windsor Herald.
Modern emblems
Official insignia incorporating rayed solar symbols include the emblem of the Jesuits, the flag of Uruguay, the flag of Kiribati, some versions of the flag of Argentina, the badge of the Irish Defence Forces, and the 1959–1965 coat of arms of Iraq.
The depictions of the sun on the flags of the Republic of China (Taiwan), Kazakhstan, Kurdistan and Nepal have only straight (triangular) rays; that of Kyrgyzstan has only curved rays; while that of the Philippines has short divergent rays grouped in clusters.
Another form of sunburst has simple radial lines dividing the background into two colours, as in the military flags of Japan and the current flag of North Macedonia, and in the upper portions of the flags of Tibet and Arizona.
The flag of New Mexico is based on the Zia sun symbol, which has four groups of four parallel rays emanating symmetrically from a central circle.
Modern pictogram
The modern pictogram representing the Sun as a circle with rays, often eight in number (indicated by either straight lines or triangles; Unicode Miscellaneous Symbols ☀ U+2600; ☼ U+263C), indicates "fair weather" in weather forecasts, originally in television forecasts in the 1970s. The Unicode 6.0 block Miscellaneous Symbols and Pictographs introduced another set of weather pictograms, including a "white sun" without rays, U+1F323 🌣, as well as "sun with face", U+1F31E 🌞.
The "sun with rays" pictogram is also used to represent the "high brightness" setting on display devices, encoded separately in Unicode 6.0 as U+1F506 (Miscellaneous Symbols and Pictographs).
Crosses
The "sun cross" or "solar wheel" (🜨) is often taken to represent the four seasons and the tropical year, and therefore the Sun (although as a modern astronomical symbol it stands for the Earth). In the prehistoric religion of Bronze Age Europe, crosses in circles appear frequently on artefacts identified as cult objects. One example from the Nordic Bronze Age is a "miniature standard" with an amber inlay that reveals a cross shape when held against the light (National Museum of Denmark). The Bronze Age symbol has also been linked to the spoked chariot wheel, which at the time was four-spoked (compare the Linear B ideogram 243, "wheel", 𐃏). In the context of a culture that celebrated the sun chariot, the wheel may thus have carried a solar connotation (see the Trundholm sun chariot).
The Arevakhach (solar cross) symbol often found on Armenian memorial steles is claimed to be an ancient Armenian solar symbol of eternity and light.
Some Sami shamanic drums bear a symbol of the Sami sun deity Beiwe that resembles a sun cross.
The swastika may derive from the sun cross and is another solar symbol in some contexts. It is used among Buddhists ("manji"), Jains and Hindus, and in many other cultures, though not necessarily as a solar symbol. See also the Malkh festival.
Some forms of the triple-spiral or triskelion signs have also been claimed as solar symbols.
The "Black Sun" (German: Schwarze Sonne) is a symbol of esoteric and occult significance, based on a sun-wheel mosaic with twelvefold rotational symmetry set into a floor of Wewelsburg Castle during the Nazi era, whose fluid form drew on swastika-like designs on Migration Period Zierscheiben. The Kolovrat (Polish "Kołowrót") represents the Sun in Slavic neopaganism.
Circular symbols
Most symbols related to the Sun are based on the circle, as for example on the flag of Japan. In astrology, the symbol is a circle with a dot at its centre, probably derived from the Egyptian hieroglyph for the Sun, or for the god Ra who represented it.
Three-armed symbols
The circle can be divided evenly into three arms, giving rise to new solar symbols such as the triskele and all its variants. Doubling the symbol yields circles with six arms, which are the origin of the so-called flower of life, a complex symbol used by several European cultures.
Four-armed symbols
The circle can also carry crosses or inner arms, forming specific symbols:
the sun cross (a Neolithic symbol designating totality, as the sum of the cardinal points and the Sun), which refers to the wheel of the chariot that carried the sun according to mythology
the swastika appears to be a derivation of this solar wheel or cross
Sometimes the arms are doubled, forming a circle with eight inner lines, again the wheel of the solar chariot. The same symbol is an attribute of several gods, such as the Gaulish Taranis. It gave rise to the eight-pointed stars found on Russian flags.
See also
Sun cross
Celtic cross
Swastika
Taranis
External links
Symbols.com list and description of sun symbols
Origins and Meanings of the Eight-point Star
Enter the total number of lumens and the total area of a room into the calculator to determine the lumens per square foot.

## Lumens Per Square Foot Formula

LPSF = TL / A

- Where LPSF is the lumens per square foot
- TL is the total number of lumens
- A is the room area (ft^2)

To calculate the lumens per square foot, divide the total number of lumens by the room area.

## Lumens Definition

A lumen is a unit of measure that describes luminous flux, or in other words the light emitted per second.

## Lumens Per Square Foot Example

How to calculate lumens per square foot?

1. First, determine the total number of lumens. Calculate the total number of lumens produced by the light sources of the room.
2. Next, determine the width. Measure the width of the room in feet.
3. Next, determine the length. Measure the length of the room in feet.
4. Finally, calculate the lumens per square foot. Calculate the LPSF using the equation above.

## FAQ

What is a lumen?

A lumen is a unit of measure of the amount of luminous flux.
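The formula above is a one-liner in code; the bulb counts and room dimensions in the example are hypothetical.

```python
# LPSF = TL / A, as defined above.
def lumens_per_square_foot(total_lumens: float, area_sqft: float) -> float:
    return total_lumens / area_sqft

# e.g. four 800-lumen bulbs in a 10 ft x 16 ft room (hypothetical numbers):
print(lumens_per_square_foot(4 * 800, 10 * 16))  # 20.0
```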
package com.paypal.api.payments;
import java.io.UnsupportedEncodingException;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLDecoder;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.google.gson.GsonBuilder;
import com.paypal.base.Constants;
import com.paypal.base.rest.APIContext;
import com.paypal.base.rest.HttpMethod;
import com.paypal.base.rest.PayPalRESTException;
import com.paypal.base.rest.PayPalResource;
import com.paypal.base.rest.RESTUtil;
import com.paypal.base.sdk.info.SDKVersionImpl;
public class Agreement extends PayPalResource {
/**
* Identifier of the agreement.
*/
private String id;
/**
* State of the agreement
*/
private String state;
/**
* Name of the agreement.
*/
private String name;
/**
* Description of the agreement.
*/
private String description;
/**
* Start date of the agreement. Date format yyyy-MM-dd z, as defined in [ISO8601](http://tools.ietf.org/html/rfc3339#section-5.6).
*/
private String startDate;
/**
* Details of the agreement.
*/
private AgreementDetails agreementDetails;
/**
* Details of the buyer who is enrolling in this agreement. This information is gathered from execution of the approval URL.
*/
private Payer payer;
/**
* Shipping address object of the agreement, which should be provided if it is different from the default address.
*/
private Address shippingAddress;
/**
* Default merchant preferences from the billing plan are used, unless override preferences are provided here.
*/
private MerchantPreferences overrideMerchantPreferences;
/**
* Array of override_charge_model for this agreement if needed to change the default models from the billing plan.
*/
private List<OverrideChargeModel> overrideChargeModels;
/**
* Plan details for this agreement.
*/
private Plan plan;
/**
* Date and time that this resource was created. Date format yyyy-MM-dd z, as defined in [ISO8601](http://tools.ietf.org/html/rfc3339#section-5.6).
*/
private String createTime;
/**
* Date and time that this resource was updated. Date format yyyy-MM-dd z, as defined in [ISO8601](http://tools.ietf.org/html/rfc3339#section-5.6).
*/
private String updateTime;
/**
* Payment token
*/
private String token;
/**
*
*/
private List<Links> links;
/**
* Default Constructor
*/
public Agreement() {
}
/**
* Parameterized Constructor
*/
public Agreement(String name, String description, String startDate, Payer payer, Plan plan) {
this.name = name;
this.description = description;
this.startDate = startDate;
this.payer = payer;
this.plan = plan;
}
/**
* Setter for id
*/
public Agreement setId(String id) {
this.id = id;
return this;
}
/**
* Getter for id
*/
public String getId() {
return this.id;
}
/**
* Setter for state
*/
public Agreement setState(String state) {
this.state = state;
return this;
}
/**
* Getter for state
*/
public String getState() {
return this.state;
}
/**
* Setter for name
*/
public Agreement setName(String name) {
this.name = name;
return this;
}
/**
* Getter for name
*/
public String getName() {
return this.name;
}
/**
* Setter for description
*/
public Agreement setDescription(String description) {
this.description = description;
return this;
}
/**
* Getter for description
*/
public String getDescription() {
return this.description;
}
/**
* Setter for startDate
*/
public Agreement setStartDate(String startDate) {
this.startDate = startDate;
return this;
}
/**
* Getter for startDate
*/
public String getStartDate() {
return this.startDate;
}
/**
* Setter for agreementDetails
*/
public Agreement setAgreementDetails(AgreementDetails agreementDetails) {
this.agreementDetails = agreementDetails;
return this;
}
/**
* Getter for agreementDetails
*/
public AgreementDetails getAgreementDetails() {
return this.agreementDetails;
}
/**
* Setter for payer
*/
public Agreement setPayer(Payer payer) {
this.payer = payer;
return this;
}
/**
* Getter for payer
*/
public Payer getPayer() {
return this.payer;
}
/**
* Setter for shippingAddress
*/
public Agreement setShippingAddress(Address shippingAddress) {
this.shippingAddress = shippingAddress;
return this;
}
/**
* Getter for shippingAddress
*/
public Address getShippingAddress() {
return this.shippingAddress;
}
/**
* Setter for overrideMerchantPreferences
*/
public Agreement setOverrideMerchantPreferences(MerchantPreferences overrideMerchantPreferences) {
this.overrideMerchantPreferences = overrideMerchantPreferences;
return this;
}
/**
* Getter for overrideMerchantPreferences
*/
public MerchantPreferences getOverrideMerchantPreferences() {
return this.overrideMerchantPreferences;
}
/**
* Setter for overrideChargeModels
*/
public Agreement setOverrideChargeModels(List<OverrideChargeModel> overrideChargeModels) {
this.overrideChargeModels = overrideChargeModels;
return this;
}
/**
* Getter for overrideChargeModels
*/
public List<OverrideChargeModel> getOverrideChargeModels() {
return this.overrideChargeModels;
}
/**
* Setter for plan
*/
public Agreement setPlan(Plan plan) {
this.plan = plan;
return this;
}
/**
* Getter for plan
*/
public Plan getPlan() {
return this.plan;
}
/**
* Setter for createTime
*/
public Agreement setCreateTime(String createTime) {
this.createTime = createTime;
return this;
}
/**
* Getter for createTime
*/
public String getCreateTime() {
return this.createTime;
}
/**
* Setter for updateTime
*/
public Agreement setUpdateTime(String updateTime) {
this.updateTime = updateTime;
return this;
}
/**
* Getter for updateTime
*/
public String getUpdateTime() {
return this.updateTime;
}
/**
* Setter for token
*/
public Agreement setToken(String token) {
this.token = token;
return this;
}
/**
* Getter for token
*/
public String getToken() {
return this.token;
}
/**
* Setter for links
*/
public Agreement setLinks(List<Links> links) {
this.links = links;
return this;
}
/**
* Getter for links
*/
public List<Links> getLinks() {
return this.links;
}
/**
* Create a new billing agreement by passing the details for the agreement, including the name, description, start date, payer, and billing plan in the request JSON.
* @param accessToken
* Access Token used for the API call.
* @return Agreement
* @throws PayPalRESTException
* @throws UnsupportedEncodingException
* @throws MalformedURLException
*/
public Agreement create(String accessToken) throws PayPalRESTException, MalformedURLException, UnsupportedEncodingException {
APIContext apiContext = new APIContext(accessToken);
return create(apiContext);
}
/**
* Create a new billing agreement by passing the details for the agreement, including the name, description, start date, payer, and billing plan in the request JSON.
* @param apiContext
* {@link APIContext} used for the API call.
* @return Agreement
* @throws PayPalRESTException
* @throws MalformedURLException
* @throws UnsupportedEncodingException
*/
public Agreement create(APIContext apiContext) throws PayPalRESTException, MalformedURLException, UnsupportedEncodingException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
String resourcePath = "v1/payments/billing-agreements";
String payLoad = this.toJSON();
Agreement agreement = configureAndExecute(apiContext, HttpMethod.POST, resourcePath, payLoad, Agreement.class);
for (Links links : agreement.getLinks()) {
if ("approval_url".equals(links.getRel())) {
URL url = new URL(links.getHref());
agreement.setToken(splitQuery(url).get("token"));
break;
}
}
return agreement;
}
/**
* Helper method to parse the query part of a URL
* @param url
* URL whose query string is to be parsed
* @return name-value pairs decoded from the query part of the given URL
* @throws UnsupportedEncodingException
*/
private static Map<String, String> splitQuery(URL url) throws UnsupportedEncodingException {
Map<String, String> queryPairs = new HashMap<String, String>();
String query = url.getQuery();
String[] pairs = query.split("&");
for (String pair : pairs) {
int idx = pair.indexOf("=");
queryPairs.put(URLDecoder.decode(pair.substring(0, idx), "UTF-8"), URLDecoder.decode(pair.substring(idx + 1), "UTF-8"));
}
return queryPairs;
}
/**
* Execute a billing agreement after buyer approval by passing the payment token to the request URI.
* @param accessToken
* Access Token used for the API call.
* @return Agreement
* @throws PayPalRESTException
*/
public Agreement execute(String accessToken) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
return execute(apiContext, this.getToken());
}
/**
* Execute a billing agreement after buyer approval by passing the payment token to the request URI.
* @param apiContext
* {@link APIContext} used for the API call.
* @param token
* payment token (e.g., EC-0JP008296V451950C)
* @return Agreement
* @throws PayPalRESTException
*/
public static Agreement execute(APIContext apiContext, String token) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
Object[] parameters = new Object[] { token };
String pattern = "v1/payments/billing-agreements/{0}/agreement-execute";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = "";
return configureAndExecute(apiContext, HttpMethod.POST, resourcePath, payLoad, Agreement.class);
}
/**
* Retrieve details for a particular billing agreement by passing the ID of the agreement to the request URI.
* @param accessToken
* Access Token used for the API call.
* @param agreementId
* String
* @return Agreement
* @throws PayPalRESTException
*/
public static Agreement get(String accessToken, String agreementId) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
return get(apiContext, agreementId);
}
/**
* Retrieve details for a particular billing agreement by passing the ID of the agreement to the request URI.
* @param apiContext
* {@link APIContext} used for the API call.
* @param agreementId
* String
* @return Agreement
* @throws PayPalRESTException
*/
public static Agreement get(APIContext apiContext, String agreementId) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (agreementId == null) {
throw new IllegalArgumentException("agreementId cannot be null");
}
Object[] parameters = new Object[] {agreementId};
String pattern = "v1/payments/billing-agreements/{0}";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = "";
return configureAndExecute(apiContext, HttpMethod.GET, resourcePath, payLoad, Agreement.class);
}
/**
* Update details of a billing agreement, such as the description, shipping address, and start date, by passing the ID of the agreement to the request URI.
* @param accessToken
* Access Token used for the API call.
* @param patchRequest
* List of {@link Patch} objects describing the fields to update
* @return Agreement
* @throws PayPalRESTException
*/
public Agreement update(String accessToken, List<Patch> patchRequest) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
return update(apiContext, patchRequest);
}
/**
* Update details of a billing agreement, such as the description, shipping address, and start date, by passing the ID of the agreement to the request URI.
* @param apiContext
* {@link APIContext} used for the API call.
* @param patchRequest
* List of {@link Patch} objects describing the fields to update
* @return Agreement
* @throws PayPalRESTException
*/
public Agreement update(APIContext apiContext, List<Patch> patchRequest) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (this.getId() == null) {
throw new IllegalArgumentException("Id cannot be null");
}
if (patchRequest == null) {
throw new IllegalArgumentException("patchRequest cannot be null");
}
Object[] parameters = new Object[] {this.getId()};
String pattern = "v1/payments/billing-agreements/{0}";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = new GsonBuilder().create().toJson(patchRequest);
return configureAndExecute(apiContext, HttpMethod.PATCH, resourcePath, payLoad, Agreement.class);
}
/**
* Suspend a particular billing agreement by passing the ID of the agreement to the request URI.
* @param accessToken
* Access Token used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void suspend(String accessToken, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
suspend(apiContext, agreementStateDescriptor);
return;
}
/**
* Suspend a particular billing agreement by passing the ID of the agreement to the request URI.
* @param apiContext
* {@link APIContext} used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void suspend(APIContext apiContext, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (this.getId() == null) {
throw new IllegalArgumentException("Id cannot be null");
}
if (agreementStateDescriptor == null) {
throw new IllegalArgumentException("agreementStateDescriptor cannot be null");
}
Object[] parameters = new Object[] {this.getId()};
String pattern = "v1/payments/billing-agreements/{0}/suspend";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = agreementStateDescriptor.toJSON();
configureAndExecute(apiContext, HttpMethod.POST, resourcePath, payLoad, null);
return;
}
/**
* Reactivate a suspended billing agreement by passing the ID of the agreement to the appropriate URI. In addition, pass an agreement_state_descriptor object in the request JSON that includes a note about the reason for changing the state of the agreement and the amount and currency for the agreement.
* @param accessToken
* Access Token used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void reActivate(String accessToken, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
reActivate(apiContext, agreementStateDescriptor);
return;
}
/**
* Reactivate a suspended billing agreement by passing the ID of the agreement to the appropriate URI. In addition, pass an agreement_state_descriptor object in the request JSON that includes a note about the reason for changing the state of the agreement and the amount and currency for the agreement.
* @param apiContext
* {@link APIContext} used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void reActivate(APIContext apiContext, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (this.getId() == null) {
throw new IllegalArgumentException("Id cannot be null");
}
if (agreementStateDescriptor == null) {
throw new IllegalArgumentException("agreementStateDescriptor cannot be null");
}
Object[] parameters = new Object[] {this.getId()};
String pattern = "v1/payments/billing-agreements/{0}/re-activate";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = agreementStateDescriptor.toJSON();
configureAndExecute(apiContext, HttpMethod.POST, resourcePath, payLoad, null);
return;
}
/**
* Cancel a billing agreement by passing the ID of the agreement to the request URI. In addition, pass an agreement_state_descriptor object in the request JSON that includes a note about the reason for changing the state of the agreement and the amount and currency for the agreement.
* @param accessToken
* Access Token used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void cancel(String accessToken, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
cancel(apiContext, agreementStateDescriptor);
return;
}
/**
* Cancel a billing agreement by passing the ID of the agreement to the request URI. In addition, pass an agreement_state_descriptor object in the request JSON that includes a note about the reason for changing the state of the agreement and the amount and currency for the agreement.
* @param apiContext
* {@link APIContext} used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void cancel(APIContext apiContext, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (this.getId() == null) {
throw new IllegalArgumentException("Id cannot be null");
}
if (agreementStateDescriptor == null) {
throw new IllegalArgumentException("agreementStateDescriptor cannot be null");
}
Object[] parameters = new Object[] {this.getId()};
String pattern = "v1/payments/billing-agreements/{0}/cancel";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = agreementStateDescriptor.toJSON();
configureAndExecute(apiContext, HttpMethod.POST, resourcePath, payLoad, null);
return;
}
/**
* Bill an outstanding amount for an agreement by passing the ID of the agreement to the request URI. In addition, pass an agreement_state_descriptor object in the request JSON that includes a note about the reason for changing the state of the agreement and the amount and currency for the agreement.
* @param accessToken
* Access Token used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void billBalance(String accessToken, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
billBalance(apiContext, agreementStateDescriptor);
return;
}
/**
* Bill an outstanding amount for an agreement by passing the ID of the agreement to the request URI. In addition, pass an agreement_state_descriptor object in the request JSON that includes a note about the reason for changing the state of the agreement and the amount and currency for the agreement.
* @param apiContext
* {@link APIContext} used for the API call.
* @param agreementStateDescriptor
* AgreementStateDescriptor
* @throws PayPalRESTException
*/
public void billBalance(APIContext apiContext, AgreementStateDescriptor agreementStateDescriptor) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (this.getId() == null) {
throw new IllegalArgumentException("Id cannot be null");
}
if (agreementStateDescriptor == null) {
throw new IllegalArgumentException("agreementStateDescriptor cannot be null");
}
Object[] parameters = new Object[] {this.getId()};
String pattern = "v1/payments/billing-agreements/{0}/bill-balance";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = agreementStateDescriptor.toJSON();
configureAndExecute(apiContext, HttpMethod.POST, resourcePath, payLoad, null);
return;
}
/**
* Set the balance for an agreement by passing the ID of the agreement to the request URI. In addition, pass a common_currency object in the request JSON that specifies the currency type and value of the balance.
* @param accessToken
* Access Token used for the API call.
* @param currency
* Currency
* @throws PayPalRESTException
*/
public void setBalance(String accessToken, Currency currency) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
setBalance(apiContext, currency);
return;
}
/**
* Set the balance for an agreement by passing the ID of the agreement to the request URI. In addition, pass a common_currency object in the request JSON that specifies the currency type and value of the balance.
* @param apiContext
* {@link APIContext} used for the API call.
* @param currency
* Currency
* @throws PayPalRESTException
*/
public void setBalance(APIContext apiContext, Currency currency) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (this.getId() == null) {
throw new IllegalArgumentException("Id cannot be null");
}
if (currency == null) {
throw new IllegalArgumentException("currency cannot be null");
}
Object[] parameters = new Object[] {this.getId()};
String pattern = "v1/payments/billing-agreements/{0}/set-balance";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = currency.toJSON();
configureAndExecute(apiContext, HttpMethod.POST, resourcePath, payLoad, null);
return;
}
/**
* List transactions for a billing agreement by passing the ID of the agreement, as well as the start and end dates of the range of transactions to list, to the request URI.
* @param accessToken
* Access Token used for the API call.
* @param agreementId
* String
* @param startDate
* start of the transaction date range, formatted as yyyy-MM-dd in the request
* @param endDate
* end of the transaction date range, formatted as yyyy-MM-dd in the request
* @return AgreementTransactions
* @throws PayPalRESTException
*/
public static AgreementTransactions transactions(String accessToken, String agreementId, Date startDate, Date endDate) throws PayPalRESTException {
APIContext apiContext = new APIContext(accessToken);
return transactions(apiContext, agreementId, startDate, endDate);
}
/**
* List transactions for a billing agreement by passing the ID of the agreement, as well as the start and end dates of the range of transactions to list, to the request URI.
* @param apiContext
* {@link APIContext} used for the API call.
* @param agreementId
* String
* @param startDate
* start of the transaction date range, formatted as yyyy-MM-dd in the request
* @param endDate
* end of the transaction date range, formatted as yyyy-MM-dd in the request
* @return AgreementTransactions
* @throws PayPalRESTException
*/
public static AgreementTransactions transactions(APIContext apiContext, String agreementId, Date startDate, Date endDate) throws PayPalRESTException {
if (apiContext == null) {
throw new IllegalArgumentException("APIContext cannot be null");
}
if (apiContext.getAccessToken() == null || apiContext.getAccessToken().trim().length() <= 0) {
throw new IllegalArgumentException("AccessToken cannot be null or empty");
}
if (apiContext.getHTTPHeaders() == null) {
apiContext.setHTTPHeaders(new HashMap<String, String>());
}
if (startDate == null) {
throw new IllegalArgumentException("startDate cannot be null");
}
if (endDate == null) {
throw new IllegalArgumentException("endDate cannot be null");
}
apiContext.getHTTPHeaders().put(Constants.HTTP_CONTENT_TYPE_HEADER, Constants.HTTP_CONTENT_TYPE_JSON);
apiContext.setSdkVersion(new SDKVersionImpl());
if (agreementId == null) {
throw new IllegalArgumentException("agreementId cannot be null");
}
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
String sDate = dateFormat.format(startDate);
String eDate = dateFormat.format(endDate);
Object[] parameters = new Object[] {agreementId, sDate, eDate};
String pattern = "v1/payments/billing-agreements/{0}/transactions?start_date={1}&end_date={2}";
String resourcePath = RESTUtil.formatURIPath(pattern, parameters);
String payLoad = "";
return configureAndExecute(apiContext, HttpMethod.GET, resourcePath, payLoad, AgreementTransactions.class);
}
}
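For readers of this class in isolation: the private `splitQuery` helper above is what extracts the EC token from the `approval_url` link that `create` returns. Below is a minimal, self-contained sketch of the same parsing logic using only the JDK; the class name `QuerySplitterDemo` and the example URL are illustrative, not part of the SDK.

```java
import java.io.UnsupportedEncodingException;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

public class QuerySplitterDemo {

    // Stand-alone copy of the parsing logic in Agreement.splitQuery:
    // split the query string on '&', then URL-decode each name=value pair.
    public static Map<String, String> splitQuery(URL url) throws UnsupportedEncodingException {
        Map<String, String> queryPairs = new HashMap<String, String>();
        for (String pair : url.getQuery().split("&")) {
            int idx = pair.indexOf("=");
            queryPairs.put(URLDecoder.decode(pair.substring(0, idx), "UTF-8"),
                    URLDecoder.decode(pair.substring(idx + 1), "UTF-8"));
        }
        return queryPairs;
    }

    public static void main(String[] args) throws MalformedURLException, UnsupportedEncodingException {
        // Hypothetical approval URL of the shape carried by the "approval_url" link.
        URL approvalUrl = new URL(
                "https://www.example.com/cgi-bin/webscr?cmd=_express-checkout&token=EC-0JP008296V451950C");
        Map<String, String> query = splitQuery(approvalUrl);
        System.out.println(query.get("token")); // prints EC-0JP008296V451950C
    }
}
```

Note that, like the SDK helper, this sketch assumes every pair contains an `=`; a query fragment without one would make `pair.substring(0, -1)` throw a `StringIndexOutOfBoundsException`.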
RV Dealer News
Ontario's Al Robinson Wins 2020 Canadian RV Dealer of the Year Award
Marney Carmichael
The Recreation Vehicle Dealers Association of Canada (RVDA) and RV Lifestyle Magazine/RV Dealer News are pleased to announce that Al Robinson of Great Canadian RV in Peterborough, ON has won the 2020 Walter Paseska Memorial Canadian RV Dealer of the Year Award.
The presentation of the Award was a virtual one this year, replacing the official ceremony that is usually held to coincide with the RVDA's annual general meeting. The Award is given to the Canadian RV dealer who best exemplifies the professionalism and community spirit of RV dealers throughout the country.
This year's other nominees were Mike Porter of Bluenose RV Centre in Bridgewater, Nova Scotia and Bob Verwey of Owasco RV Centre in Bowmanville, Ontario. Each year nominees are submitted to the RVDA selection committee, which consists of previous Canadian RV Dealer of the Year winners and representatives from the RVDA of Canada and founding sponsor RV Lifestyle Magazine.
A proud Al Robinson displays his Canadian RV Dealer of the Year Award (photo courtesy Stacey Robinson).
Upon winning the Award Al Robinson said: "I am really honoured, I truly appreciate it. Congratulations to the nominees, really anyone could have won the Award. We have such an incredible industry, so many relationships… We are just so blessed to have great customers in our industry and such great relationships that we have built out of this. Dealer of the Year is not just one person, it's our dealership. We couldn't do it without the staff that we have… and I couldn't have done it without Stacey. Once again, I thank you for this honour."
Al Robinson has worked in the RV industry for 48 years – 35 of them as a dealer principal – and opened Great Canadian RV, whose motto is "The Better Way to Get Away!", in 2012. In October 2020 he won the Ontario RV Dealers' Association (ORVDA) Dealer of the Year Award. He works side by side with his wife Stacey Robinson and employs a staff of 10, one of whom has been with Al for over three decades.
The dealership boasts a four-bay service shop and the largest RV parts and accessories store in its region, and proudly carries Coachmen Catalina and Freedom Express, East to West Della Terra and Alta, Forest River Impression, and Cameo fifth wheels. Al is the number-one Coachmen Catalina dealer in the country and strongly believes in education; he sends his staff to every learning opportunity available, making Great Canadian RV both a progressive workplace and place of business. The company is built on the core values of quality product, exceptional customer service, straightforward business practice, and community involvement.
Al Robinson celebrated his October ORVDA win with his mother, 91-year-old Marg, his wife Stacey and staff of Great Canadian RV.
Mike Gaeddert, GM, Coachmen Catalina, has had a relationship with Al for over seven years and said: "I have known both Al and the dealership to hold themselves to a very high standard on customer and community service. [They] have continually worked with us to deliver the best possible customer experience from both sales and service."
An executive of EAST TO WEST, a Forest River, Inc. company (and a child of Forest River founder and CEO Pete Liegl), added: "The Robinson's dealership takes pride in serving not only their customers but also their community. Al Robinson has personally worked with my father, Pete Liegl, for over 35 years and I have worked with Al and his dealership since our inception in 2018. It has been a privilege to work with Al and Stacey as they exude true passion for the RV lifestyle and they are dedicated to serving the RV industry and its people. Great Canadian RV continues to surpass the expectations of being a Forest River dealer and this dealership is always fair, courteous and above all, professional in every respect. EAST TO WEST and Forest River's partnership with Great Canadian RV is exemplary of the model to which we would like all dealers to aspire."
Al has been a lifetime member of ORVDA and is currently chairman of the government relations committee where he has made invaluable connections and contributions on behalf of the RV industry. He has also served on the Ontario Private Campground Association Board of Directors and has been involved in a multitude of local and regional groups such as the Kawartha Lake Associated District Chamber of Commerce and the Trent-Severn Waterway Steering Committee.
Al grew up camping with his family and his parents Marg and Ken opened M&K's Beaver Park in Omemee, Kawartha Lakes, in 1972. It was here that he discovered the outdoors and learned about running a business, selling RVs and servicing customers from a very young age, seeing first-hand the enjoyment and magic families experienced while camping.
In 1977 Al fulfilled his goal of becoming a police officer and served on the Metro Toronto Police Force until 1983, when he made the decision to return to the growing family business at Beaver Park to work alongside his parents. During this time Al completed an Honours degree.
Having developed a particular passion for the RV industry, Al opened his own dealership in 1986 and named it Open Road Trailer Sales. Based in Lindsay, ON, Open Road quickly became the number-one selling Cobra (now Forest River) dealer in Canada. It was here where Al began to form relationships with manufacturers, campers, and park owners. Open Road was consolidated in 1992 with Bailey's Bay Resort and Al successfully grew the business until it was purchased by Parkbridge Resorts; it remains a popular cottage and RV spot to this day.
Al and Great Canadian RV have shown tremendous community involvement over the years, from hauling trailers for hurricane relief to sponsoring youth teams and supporting causes such as the Kinsmen Club, the Food Bank, the Canadian Guide Dogs for the Blind, The Boys and Girls Club, and the Make-A-Wish Foundation, to which the dealership donated a complete 2019 Coachmen Catalina trailer package with extended warranty for a young girl and her family. In the spring and summer of 2020, Al and Stacey assisted nearly 50 frontline workers across Ontario and into New York to safely self-isolate in RVs during the COVID-19 crisis.
Al has a true passion for the RV lifestyle. He loves to camp and kayak and spend time with his family. Outside of his Great Canadian RV family Al and Stacey have a total of five children and one grandchild.
Most of all, Al Robinson truly cares for his clients. A returning customer, Mike Norman, said: "Al is someone who always listens – really listens, to your concerns or issues; and he makes sure every detail of what you need is looked at. It is great to deal with someone who cares so deeply about the folks he does business with. There have been a few times we have been out camping and had questions about our trailer or how to deal with something in the trailer, and Al is always quick to call back with his expertise."
The Canadian RV Dealer of the Year Award was established in 1989 in memory of the late Walter Paseska of Walt's Trailer Sales in Headingley, Manitoba. Walt dedicated many years to the Canadian RV industry and was instrumental in bringing the RVDA to his province. The Award was conceived and sponsored by Camping Canada – RV Lifestyle Magazine, Vie en Plein Air, and RV Dealer News.
Each year, in conjunction with RV Lifestyle Magazine/RV Dealer News, the RVDA of Canada acknowledges the excellent work of RV dealers who have been leaders in the RVDA movement either regionally or at a provincial or national level; who have exhibited a long-term dedication to the RV community; and who have made substantial contributions to their communities.
© RV Dealer News Magazine, Taylor Publishing Group, 268-44 Crawford Crescent, Campbellville, ON, L0P 1B0. Tel: 905-844-8214
\section*{Abstract}
{\bf
Several tensor networks are built of isometric tensors, i.e.\ tensors satisfying $W^{\dagger} W = \mathbbm{1}$.
Prominent examples include matrix product states (MPS) in canonical form, the multiscale entanglement renormalization ansatz (MERA), and quantum circuits in general, such as those needed in state preparation and quantum variational eigensolvers.
We show how gradient-based optimization methods on Riemannian manifolds can be used to optimize tensor networks of isometries to represent e.g.\ ground states of 1D quantum Hamiltonians.
We discuss the geometry of Grassmann and Stiefel manifolds, the Riemannian manifolds of isometric tensors, and review how state-of-the-art optimization methods like nonlinear conjugate gradient and quasi-Newton algorithms can be implemented in this context.
We apply these methods in the context of infinite MPS and MERA, and show benchmark results in which they outperform the best previously-known optimization methods, which are tailor-made for those specific variational classes.
We also provide open-source implementations of our algorithms.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}%
\label{sec:introduction}
Tensor networks can be used to efficiently represent vectors and operators in very large tensor product spaces, assuming they have a restricted structure of correlations.
This makes them well-suited as ansätze for ground states of local quantum Hamiltonians and other quantum states with limited entanglement~\cite{verstraete_mpsreview_2008,schollwoeck_densitymatrix_2011,vidal_class_2008}; as compact representations of partition functions of large systems in classical statistical mechanics~\cite{levin_trg_2007,evenbly_tnr_2015}; and as representations of tensors of various kinds in other applications~\cite{oseledets2011tensor}, such as machine learning~\cite{cichocki_era_2014,stoudenmire_supervised_2016}.
Many tensor networks have constraints applied to their tensors, most common of them being the requirement of isometricity, i.e.\ the property that $W^{\dagger} W = \mathbbm{1}$ when the tensor $W$ is interpreted as a linear map from the tensor product space associated with a subset of its indices to the space associated with the complementary set of indices.
This constraint arises from removing redundant gauge freedom from the network in the case of canonical forms of matrix product states (MPS)~\cite{schollwoeck_densitymatrix_2011} and tree tensor networks (TTN)~\cite{shi_classical_2006}, but is inherent in the definition of the multiscale entanglement renormalization ansatz (MERA)~\cite{vidal_class_2008}.
Even for projected entangled-pair states (PEPS)~\cite{verstraete_mpsreview_2008,verstraete2004renormalization}, where an isometry constraint does not arise naturally, it might be interesting to consider the restricted set with isometric tensors, as this simplifies certain calculations~\cite{zaletel2020isometric,soejima2020isometric,tepaske2020three}.
Furthermore, tensor networks constructed from isometric tensors are equivalent to quantum circuits that could potentially be implemented on a quantum computer, and have attracted recent attention from this point of view~\cite{peruzzo2014variational,li2017efficient,barratt2020parallel,lin2020real}.
To find a tensor network approximation of an unknown state of interest, e.g.\ a ground state of some local Hamiltonian, the variational principle is invoked, i.e.\ the ground state approximation is identified with the point on the tensor network manifold that minimizes the energy.
The first algorithm for finding such an approximation was the density matrix renormalization group (DMRG)~\cite{white_density_1992}, which optimizes the energy over the set of MPS (although the MPS structure was only implicit in the original formulation of DMRG).
The one-site DMRG algorithm in particular optimizes each tensor in turn, iterating the procedure until convergence, a technique known as alternating least squares optimization.
A similar alternating optimization strategy is also the basis for the standard energy minimization algorithm for MERA~\cite{evenbly_algorithms_2009}, which we refer to as the Evenbly-Vidal algorithm, although the local problem is in this case solved differently in order to respect the isometry condition.
Another paradigm for finding minimal energy tensor networks is based on the idea of imaginary time evolution, using either Trotter decompositions~\cite{vidal2004efficient,orus2008infinite} or the time-dependent variational principle (TDVP)~\cite{hackl2020geometry,haegeman2011time}.
Trotter-based imaginary time evolution has been the prevailing algorithm for the optimization of infinite PEPS until recently~\cite{corboz2016variational,vanderstraeten2016gradient}.
In the context of optimizing unitary or isometric tensor networks, yet another strategy is based on flow equations, as proposed in Ref.~\onlinecite{dawson2008unifying}.
Also in the context of quantum computational tasks, classical optimization of the unitary gates in the quantum circuit with respect to a given cost function is often required, as e.g.\ in Ref.~\onlinecite{lin2020real}.
Well-known gradient-based algorithms for nonlinear optimization have not received a great deal of attention for the optimization of tensor networks, likely due to the astounding efficiency of the DMRG algorithm for the case of MPS\@.
Promising results for using the standard (i.e.\ Euclidean) version of the nonlinear conjugate gradient algorithm were reported for translation-invariant MPS~\cite{milsted2013matrix} and PEPS~\cite{vanderstraeten2016gradient} in the thermodynamic limit.
In this manuscript, we propose to use the well-established Riemannian generalization of the nonlinear conjugate gradient and quasi-Newton algorithms to optimize over manifolds of isometric tensor networks.
We furthermore construct a specific preconditioner for these algorithms, derived from the Hilbert space geometry of the tensor network manifold, and show that the resulting methods can outperform tailor-made optimization algorithms, such as the Evenbly-Vidal algorithm for MERA and the variational uniform MPS (VUMPS) algorithm~\cite{zauner2018variational} for infinite MPS\@.
This manuscript is structured as follows:
Section~\ref{sec:geometry} provides an overview of the Riemannian geometry of complex Grassmann and Stiefel manifolds, the manifolds of isometric matrices and tensors.
In Section~\ref{sec:optimization}, we briefly review the basics of Riemannian extensions of gradient-based optimization methods such as the gradient descent, nonlinear conjugate gradient and quasi-Newton algorithms, and discuss the role of preconditioners in this setting.
In Sections~\ref{sec:mera} and~\ref{sec:mps}, we show how these methods can be applied in the context of MERA and MPS, respectively, and demonstrate how they outperform previous methods in many situations.
Section~\ref{sec:discussion} provides some further discussion and an outlook.
The algorithms presented below are available in open source software packages written in the scientific programming language Julia~\cite{bezanson2017julia}.
The most high-level and user-facing packages are MPSKit.jl~\cite{MPSKit.jl} and MERAKit.jl~\cite{MERAKit.jl}.
The ancillary files in \href{https://arxiv.org/src/2007.03638}{arxiv.org/src/2007.03638} include scripts that use these packages to reproduce all the benchmark results that we show.
\section{Riemannian geometry of isometric tensors}%
\label{sec:geometry}
Throughout this section, we focus on a single isometric matrix $W$ that fulfills $W^{\dagger} W = \mathbbm{1}$.
This could for instance be an isometry or disentangler of a MERA, with its top and bottom indices each combined into a single matrix index, or an MPS tensor in left or right canonical form.
In contrast to most literature in numerical optimization, we focus on complex isometric matrices.
Isometric matrices of a given size $n \times p$ form a manifold, called the Stiefel manifold, that can be naturally embedded in the Euclidean vector space $\mathbb{C}^{n\times p}$ of general complex $n\times p$ matrices:
\begin{equation}
\label{eq:stiefel_definition}
\Stiefel{n}{p} = \{ W \in \mathbb{C}^{n \times p} \;|\; W^{\dagger} W = \mathbbm{1} \},
\end{equation}
where we have assumed the necessary condition $n \geq p$.
The case $n = p$ yields the manifold of unitary matrices $\Unitary{n}$, which is thus included as a special case.
For instance, for the isometries of a ternary MERA with bond dimension $D$, $n = D^3$ and $p = D$, whereas the corresponding disentanglers have $n = p = D^2$.
In the case of left or right canonical MPS with physical dimension $d$ and bond dimension $D$, $n = dD$ and $p = D$.
The isometry constraint imposes $p^2$ independent real-valued constraints and thus $\Stiefel{n}{p}$ is a real manifold of dimension $(2n - p)p$.
Note that as the isometry constraint is not holomorphic, $\Stiefel{n}{p}$ cannot be understood as a complex manifold, and its tangent space cannot be given the structure of a complex subspace of $\mathbb{C}^{n \times p}$, a point to which we return below.
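As a concrete illustration (a minimal Python/NumPy sketch, independent of the Julia packages discussed later), a point on the complex Stiefel manifold can be generated by QR-decomposing a random complex matrix, and the dimension count above can be verified:

```python
import numpy as np

def random_isometry(n, p, seed=None):
    """Random W in St(n, p): an n x p complex matrix with W^H W = I_p."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
    W, _ = np.linalg.qr(M)          # thin QR: the Q factor is an isometry
    return W

n, p = 6, 2
W = random_isometry(n, p, seed=0)
assert np.allclose(W.conj().T @ W, np.eye(p))       # isometry condition
# p^2 independent real constraints on 2np real parameters:
assert 2 * n * p - p**2 == (2 * n - p) * p          # dim St(n, p) = (2n - p)p
```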
In many situations, what is of interest is not the exact isometry $W$ itself, but rather the subspace which it defines by the span of its $p$ columns.
In those cases, one should identify $W$ with $WU$, where $U$ can be an arbitrary $p \times p$ unitary, and consider the equivalence class $[W] = \{\, W U \;|\; U \in \Unitary{p} \}$.
In a tensor network, this happens whenever the columns of $W$ correspond to a single virtual index, in which case a gauge transformation $U$ can be applied to it, while $U^\dagger$ can be absorbed into the leg of the tensor to which $W$ is connected.
The manifold of such equivalence classes of isometric tensors $[W]$ is a quotient manifold known as the Grassmann manifold $\Grassmann{n}{p} = \Stiefel{n}{p}/\Unitary{p}$.
While $\Grassmann{n}{p}$ is here defined as the quotient manifold of two manifolds without complex structure, $\Grassmann{n}{p}$ is itself a proper complex manifold with complex dimension $(n - p)p$, or equivalently, real dimension $2(n - p)p$.
This can be understood by noticing that the isometry condition is not necessary to define a subspace, so that $\Grassmann{n}{p}$ can also be defined as $\Grassmann{n}{p} = \GeneralLinear{n} / (\GeneralLinear{p} \times \GeneralLinear{n-p})$, with $\GeneralLinear{n}$ the general linear group of invertible complex $n\times n$ matrices.
In fact, $\Grassmann{n}{p}$ can then be given the structure of a K\"{a}hler manifold, which can be important when studying time evolution~\cite{hackl2020geometry}.
In contrast, optimization of real-valued functions on a manifold is only concerned with the Riemannian structure (and not with possible complex, symplectic, or K\"{a}hler structures), for which only the structure as real manifold is relevant, as we make more explicit below.
Throughout the remainder of this manuscript, we will denote elements from Grassmann manifolds using a single representative $W$ of the corresponding equivalence class $[W]$, and assume that $W$ is isometric.
We briefly review the basic properties of Grassmann and Stiefel manifolds which are required to apply gradient-based optimization methods.
For a more thorough introduction to the properties of Grassmann and Stiefel manifolds, see for instance Refs.~\onlinecite{edelman_geometry_1998,zhu_riemannian_2017}.
Note though, that these references only consider real-valued matrices, whereas we review here the complex case.
\subsection{Tangent vectors}%
\label{sec:geometry_tangents}
For an isometric matrix $W \in \Stiefel{n}{p}$, the tangent space at $W$ consists of all matrices $X$ for which $W^{\dagger} X$ is skew-hermitian.
In other words,
\begin{equation}%
\label{eq:stiefel_tangent}
X = W A + W_\perp B, \;\text{where}\; A = -A^{\dagger}.
\end{equation}
Here $W_\perp$ is an $n \times (n-p)$ isometric matrix such that $WW^{\dagger} + W_\perp W_\perp^{\dagger} = \mathbbm{1}$, i.e.\ it is a unitary completion of $W$ (which is not unique).
$B$ is an arbitrary $(n-p) \times p$ matrix.
The skew-hermiticity condition on $A$ implies that the tangent space only allows for linear combinations with real-valued scalar coefficients, i.e.\ it is a vector space over $\mathbb{R}$, as mentioned above.
Because optimization algorithms are formulated using only real-valued linear combinations of tangent vectors, this does not pose any restriction.
For a point on the Grassmann manifold represented by $W$, we can use the unitary gauge freedom to impose that tangent vectors satisfy the holomorphic condition $W^{\dagger} X = 0$.
This amounts to restricting to tangent vectors with $A=0$, and thus the tangent vectors on a Grassmann manifold can be parameterized as
\begin{equation}
\label{eq:grassmann_tangent}
X = W_\perp B,
\end{equation}
with $B$ again being an arbitrary $(n-p) \times p$ matrix.
Note that the $A=0$ condition is preserved under complex linear combinations, as one would expect given the complex structure of $\Grassmann{n}{p}$.
In both cases, Stiefel and Grassmann, we denote the tangent space at $W$ by $\tangentspace{W}$, to which we can append the manifold if we want to distinguish explicitly between the two cases.
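The two parameterizations above can be made concrete in a short NumPy sketch (an illustration only; the matrix sizes are arbitrary), which constructs tangent vectors of both kinds and checks the defining conditions $W^{\dagger}X = -X^{\dagger}W$ (Stiefel) and $W^{\dagger}X = 0$ (Grassmann):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2
# Build W and a unitary completion W_perp from the QR of a random n x n matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
W, W_perp = Q[:, :p], Q[:, p:]

A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
A = A - A.conj().T                               # skew-hermitian: A = -A^H
B = rng.standard_normal((n - p, p)) + 1j * rng.standard_normal((n - p, p))

X_stiefel = W @ A + W_perp @ B                   # tangent vector on St(n, p)
X_grassmann = W_perp @ B                         # tangent vector on Gr(n, p), A = 0 gauge

WX = W.conj().T @ X_stiefel
assert np.allclose(WX, -WX.conj().T)             # W^H X is skew-hermitian
assert np.allclose(W.conj().T @ X_grassmann, 0)  # W^H X vanishes
```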
\subsection{Metric}%
\label{sec:geometry_metric}
Implicit in most gradient methods is the idea to use the partial derivatives of the cost function, which constitute a dual vector in the cotangent space, as a direction (i.e.\ a tangent vector) along which to update the state.
This identification is immediate in Euclidean space, but on a general manifold it requires a metric.
A natural metric for $\tangentspace{W}$, regardless of whether we are on a Stiefel or Grassmann manifold, is the Euclidean metric $g_W(X, Y) = \Re \Tr [X^{\dagger} Y]$, i.e.\ the real part of the Frobenius inner product in the embedding space $\mathbb{C}^{n \times p}$.
Note that the real part of the inner product of a complex space defines a metric (a real symmetric bilinear form), whereas the imaginary part defines a symplectic form.
While a general metric depends on the base point $W$, for $g_W$ this dependence is not explicit.
Another natural metric for the Stiefel manifold is given by what is known as canonical metric, for which we refer to Ref.~\onlinecite{edelman_geometry_1998}.
In this manuscript we use the Euclidean $g_W$, as we found little difference between the two choices in our simulations, and the Euclidean metric is more closely related to the Hilbert space inner product and the preconditioning schemes for the tensor networks that we consider in later sections.
A metric allows one to map cotangent vectors to tangent vectors.
In a case like ours, where the manifold is embedded in a Euclidean space, it more generally allows one to construct an orthogonal projection from the embedding space to the tangent space.
For a given complex matrix $D\in\mathbb{C}^{n \times p}$, we define its orthogonal projection onto $\tangentspace{W}$ as the tangent vector $G$ for which $g_W(G, X) = \Re \Tr [D^{\dagger} X]$, for all $X \in \tangentspace{W}$.
The solution for this projection is
\begin{align}
\label{eq:stiefel_gradient}
G &= D - \frac{1}{2} W(W^{\dagger} D + D^{\dagger} W) & \text{if } W &\in \Stiefel{n}{p},\\%
\label{eq:grassmann_gradient}
G &= D - WW^{\dagger} D & \text{if } W &\in \Grassmann{n}{p}.
\end{align}
$D\mapsto G$ is a complex linear map for the Grassmann manifold, but only real linear for the Stiefel manifold.
Although the names $D$ and $G$ purposefully refer to derivatives and gradients, note that Eqs.~\eqref{eq:stiefel_gradient} and~\eqref{eq:grassmann_gradient} can be used to project any arbitrary matrix from $\mathbb{C}^{n \times p}$ onto the tangent space $\tangentspace{W}$.
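The two projection formulas translate directly into code. The following NumPy sketch (an illustration with arbitrary matrix sizes) implements Eqs.~\eqref{eq:stiefel_gradient} and~\eqref{eq:grassmann_gradient} and checks that the results are valid tangent vectors and that the projections are idempotent:

```python
import numpy as np

def project_stiefel(W, D):
    """Orthogonal projection of D onto the Stiefel tangent space at W."""
    WD = W.conj().T @ D
    return D - 0.5 * W @ (WD + WD.conj().T)

def project_grassmann(W, D):
    """Orthogonal projection of D onto the Grassmann tangent space at W."""
    return D - W @ (W.conj().T @ D)

rng = np.random.default_rng(2)
n, p = 6, 2
W, _ = np.linalg.qr(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))
D = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))

G_st = project_stiefel(W, D)
G_gr = project_grassmann(W, D)
WG = W.conj().T @ G_st
assert np.allclose(WG, -WG.conj().T)              # a valid Stiefel tangent vector
assert np.allclose(W.conj().T @ G_gr, 0)          # a valid Grassmann tangent vector
assert np.allclose(project_stiefel(W, G_st), G_st)    # projections are idempotent
assert np.allclose(project_grassmann(W, G_gr), G_gr)
```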
\subsection{Gradients, retraction, and transport}%
\label{sec:geometry_gradients_retraction_transport}
For gradient optimization of a cost function $C(W)$, we can first compute the partial derivatives
\begin{align}
D_{ij} = \frac{\partial C}{\partial \Re W_{ij}} + i \frac{\partial C}{\partial \Im W_{ij}} = 2 \frac{\partial C}{\partial W^\ast_{ij}}
\end{align}
without taking the isometry condition into account.
The complex linear combination here is chosen such that
\begin{align}
\left.\frac{\mathrm{d} C(W + \epsilon X)}{\mathrm{d} \epsilon}\right\vert_{\epsilon=0} = \Re \Tr [D^{\dagger} X], \quad \forall X \in \mathbb{C}^{n \times p}
\end{align}
(assuming that the cost-function can meaningfully be extended or continued to non-isometric matrices in such a way that the above derivative is well defined).
Projecting $D$ onto the tangent space with Eq.~\eqref{eq:stiefel_gradient} or~\eqref{eq:grassmann_gradient} yields $G$, which is the tangent vector such that
\begin{align}
\label{eq:stiefel_grassmann_gradient_condition}
g_W(G, X) = \left.\frac{\mathrm{d} C(W + \epsilon X)}{\mathrm{d} \epsilon}\right\vert_{\epsilon=0},\quad \forall X \in \tangentspace{W}.
\end{align}
$G$ will henceforth be referred to as the \emph{gradient} of $C$.
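For a toy cost function $C(W) = \Tr[W^{\dagger} H W]$ with $H$ hermitian (a stand-in for an energy expectation value, used purely for illustration), the gradient condition of Eq.~\eqref{eq:stiefel_grassmann_gradient_condition} can be verified numerically against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 5, 2
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Hm = M + M.conj().T                          # hermitian matrix playing the role of H

def C(W):                                    # extends smoothly to non-isometric W
    return np.real(np.trace(W.conj().T @ Hm @ W))

W, _ = np.linalg.qr(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))
D = 2 * Hm @ W                               # D_ij = 2 dC / dW*_ij for this cost
G = D - W @ (W.conj().T @ D)                 # Grassmann projection of D

Z = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
X = Z - W @ (W.conj().T @ Z)                 # random Grassmann tangent vector
eps = 1e-6                                   # central finite difference of C along X
fd = (C(W + eps * X) - C(W - eps * X)) / (2 * eps)
assert abs(np.real(np.trace(G.conj().T @ X)) - fd) < 1e-5
```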
This brings us to the next point, which is that we would often like to change our isometry $W$ by moving in the direction of a tangent vector $X\in \tangentspace{W}$, but $W + \epsilon X$ will only respect the isometry condition up to first order in $\epsilon$.
To travel further in the direction of $X$ while staying on the manifold, Riemannian optimization algorithms employ the concept of \emph{retraction}.
A retraction $\retraction{W}{X}{\alpha}$ is a curve, parameterized by $\alpha \in \mathbb{R}$, that starts at the initial point $W$, i.e.\ $\retraction{W}{X}{0} = W$, with initial direction $X \in \tangentspace{W}$, i.e.\ $\frac{\partial}{\partial \alpha} \retraction{W}{X}{\alpha}|_{\alpha=0} = X$, and that lies exactly within the manifold for all values of $\alpha$ in some interval containing $\alpha = 0$ (preferably all of $\mathbb{R}^+$).
For both Stiefel and Grassmann manifolds, several retraction functions exist, even if we impose the requirement that we must be able to numerically compute them efficiently.
One natural choice to consider are geodesics, since the notion of retraction can be seen as a generalization thereof.
Given a tangent vector $X = WA + W_\perp B$, the retraction
\begin{equation}
\label{eq:retraction}
\retraction{W}{X}{\alpha} = e^{\alpha \, Q_X} \, W \, ,
\quad \text{where } Q_X =
\begin{bmatrix}
W & W_\perp
\end{bmatrix}
\begin{bmatrix}
A & -B^{\dagger} \\%
B & 0
\end{bmatrix}
\begin{bmatrix}
W^{\dagger} \\%
W_\perp^{\dagger}
\end{bmatrix},
\end{equation}
is indeed a geodesic for the Grassmann manifold (where $A=0$), but is not a geodesic with respect to the Euclidean metric for the Stiefel manifold (where $A = -A^{\dagger}$).
It is however a geodesic with respect to the canonical metric of the Stiefel manifold, and can certainly be used as viable retraction also in combination with the Euclidean metric.\footnote{%
A closed form expression for the geodesics of the Stiefel manifold with respect to the Euclidean metric is also known, but cannot be written using a unitary applied to $W$; we refer to Ref.~\onlinecite{edelman_geometry_1998} for further details.
}
The retraction in Eq.~\eqref{eq:retraction} requires the matrix exponential of $Q_X$, which can be evaluated with $O(n p^2 + p^3)$ operations (compared to a naive $O(n^3)$ implementation) by exploiting the fact that the maximal rank of $Q_X$ is $2p$.\footnote{%
How this is done depends slightly on the manifold.
In the simpler Grassmann case, the exponential in Eq.~\eqref{eq:retraction} reduces to sines and cosines of singular values of $X$, and we can avoid constructing $W_\perp$ explicitly~\cite{edelman_geometry_1998}.
In the Stiefel case, we need to extract $W_\perp$ from a QR decomposition of $\begin{bmatrix} W & Z \end{bmatrix}$, where $Z = (\mathbbm{1} - WW^{\dagger})X = W_\perp B$, and compute the matrix exponential of $
\begin{bmatrix}
A & -B^{\dagger} \\%
B & 0
\end{bmatrix}
$.
The full details can be found in the source code of the TensorKitManifolds.jl~\cite{TensorKitManifolds.jl} package.
}
Another notable option for retraction is to replace the exponential in Eq.~\eqref{eq:retraction} by a Cayley transform, which can then exploit the reduced rank via the Sherman–Morrison–Woodbury formula, see Ref.~\onlinecite{zhu_riemannian_2017} for details.
While the latter can be somewhat faster, we use the retraction in Eq.~\eqref{eq:retraction} throughout this manuscript.
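A naive $O(n^3)$ version of the retraction of Eq.~\eqref{eq:retraction} can be sketched in a few lines of NumPy (an illustration only; the efficient low-rank implementation is in the TensorKitManifolds.jl package). It uses the identity $Q_X = X W^{\dagger} - W X^{\dagger} - W (W^{\dagger}X) W^{\dagger}$, which reproduces the block expression in the text without constructing $W_\perp$ explicitly, and evaluates the exponential of the skew-hermitian $Q_X$ through the hermitian eigendecomposition of $iQ_X$:

```python
import numpy as np

def expm_skew(Q, alpha=1.0):
    """e^{alpha Q} for skew-hermitian Q, via the eigendecomposition of iQ."""
    lam, V = np.linalg.eigh(1j * Q)                  # iQ is hermitian
    return (V * np.exp(-1j * alpha * lam)) @ V.conj().T

def retract(W, X, alpha):
    """Naive O(n^3) evaluation of the retraction of Eq. (retraction)."""
    Q = X @ W.conj().T - W @ X.conj().T - W @ (W.conj().T @ X) @ W.conj().T
    return expm_skew(Q, alpha) @ W

rng = np.random.default_rng(4)
n, p = 6, 2
W, _ = np.linalg.qr(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))
X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
X = X - W @ (W.conj().T @ X)                         # Grassmann tangent, W^H X = 0

W1 = retract(W, X, 0.3)
assert np.allclose(W1.conj().T @ W1, np.eye(p))      # the curve stays on the manifold
eps = 1e-6                                           # initial velocity equals X
assert np.allclose((retract(W, X, eps) - W) / eps, X, atol=1e-4)
```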
The above definitions constitute the bare minimum to formulate a Riemannian gradient descent algorithm on a Stiefel or Grassmann manifold.
To exploit information from previous optimization steps, as happens in the conjugate gradient and quasi-Newton algorithms, one more ingredient is needed: a vector transport to transport gradients and other tangent vectors from previous points on the manifold to the current point.
A vector transport generalizes the concept of parallel transport, and needs to be compatible with the chosen retraction.
If $V = \retraction{W}{X}{\alpha}$ is the end point of a retraction, a vector transport maps a tangent vector $Y \in \tangentspace{W}$ at the initial point to a tangent vector $\transport{Y}{W}{X}{\alpha} \in \tangentspace{V}$.
As with the retraction, many choices are possible, but we use the transport
\begin{align}
\label{eq:transport}
\transport{Y}{W}{X}{\alpha} = e^{\alpha \, Q_X} \, Y \, ,
\end{align}
where $Q_X$ is as in Eq.~\eqref{eq:retraction}, both for the Stiefel and the Grassmann case.
This choice can be implemented efficiently, again by exploiting the low-rank property of $Q_X$.
It has the additional benefit that it is a metric connection, which is to say it preserves inner products between tangent vectors, i.e.\ $g_W(Y_1,Y_2) = g_V(\transport{Y_1}{W}{X}{\alpha}, \,\transport{Y_2}{W}{X}{\alpha})$.
This simplifies some steps of the optimization algorithms and guarantees desirable convergence properties~\cite{zhu_riemannian_2017}.
Note that Eq.~\eqref{eq:transport} is not the parallel transport with respect to the Euclidean metric $g$ (nor with respect to the canonical metric), as it corresponds to a metric connection which has torsion, but this does not hinder its usage in optimization algorithms.
Alternatively, one could again replace the exponential in Eq.~\eqref{eq:transport} by a Cayley transform, if this was also done in the retraction.
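The metric-preserving property of the transport of Eq.~\eqref{eq:transport} is easy to check numerically. The sketch below (again a naive NumPy illustration, not exploiting the low-rank structure of $Q_X$) transports two tangent vectors along a retraction and verifies that their inner product is unchanged and that they remain tangent at the end point:

```python
import numpy as np

def expm_skew(Q, alpha=1.0):
    """e^{alpha Q} for skew-hermitian Q (numpy only)."""
    lam, V = np.linalg.eigh(1j * Q)
    return (V * np.exp(-1j * alpha * lam)) @ V.conj().T

def transport(Y, W, X, alpha):
    """Naive evaluation of the vector transport of Eq. (transport)."""
    Q = X @ W.conj().T - W @ X.conj().T - W @ (W.conj().T @ X) @ W.conj().T
    return expm_skew(Q, alpha) @ Y

def g(X, Y):
    """Euclidean metric g_W(X, Y) = Re Tr[X^H Y]."""
    return np.real(np.trace(X.conj().T @ Y))

rng = np.random.default_rng(5)
n, p = 6, 2
W, _ = np.linalg.qr(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))

def tangent():                                   # random Grassmann tangent at W
    Z = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
    return Z - W @ (W.conj().T @ Z)

X, Y1, Y2 = tangent(), tangent(), tangent()
T1, T2 = transport(Y1, W, X, 0.7), transport(Y2, W, X, 0.7)
assert np.isclose(g(Y1, Y2), g(T1, T2))          # inner products are preserved
V = transport(W, W, X, 0.7)                      # end point of the retraction
assert np.allclose(V.conj().T @ T1, 0, atol=1e-12)  # tangent at the new base point
```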
\subsection{Product manifolds}%
\label{sec:geometry_product_manifolds}
Note, finally, that a function depending on several isometries or unitaries corresponds to a function on the product manifold $\Stiefel{n_1}{p_1} \times\, \Stiefel{n_2}{p_2} \times \ldots$ (with $\times$ being the Cartesian product), where some of the factors could also be Grassmann manifolds instead.
The corresponding tangent space is the Cartesian product of the individual tangent spaces (which corresponds to the direct sum as long as the number of tensors remains finite) and all of the above structures and constructions extend trivially.
\section{Riemannian gradient optimization}%
\label{sec:optimization}
Having established the Riemannian geometry of Grassmann and Stiefel manifolds (and products thereof) in the previous section, we can now discuss how to implement Riemannian versions of some well-known gradient-based optimization algorithms, all of which are described in the literature~\cite{smith_optimization_1994,edelman_geometry_1998,absil2009optimization,ring2012optimization,huang2015broyden,zhu_riemannian_2017}.
We aim to minimize a cost function $C(W)$ defined on our manifold, where we consider a single argument $W$ for notational simplicity.
The simplest approach is the Riemannian formulation of gradient descent, often also referred to as steepest descent.
It is an iterative procedure which at every step computes the gradient of $C$ at the current point on the manifold, and then uses the chosen retraction in the direction of the negative gradient to find the next point.
In steepest descent, the step size $\alpha$ is chosen so as to minimize $C$ along the retraction $\alpha \mapsto \retraction{W}{X}{\alpha}$ with $X = - G$.
Finding $\alpha$ is known as the linesearch, and various algorithms and strategies exist for it.
It is often unnecessary or even prohibitive to determine the minimum accurately; rather an approximate step size $\alpha$ that satisfies the Wolfe conditions~\cite{nocedal2006numerical} is sufficient to guarantee convergence.
If we define $W' = \retraction{W}{X}{\alpha}$ to be the new isometry, $G'$ the gradient at $W'$, and $X' = \mathrm{d} \retraction{W}{X}{\alpha} / \mathrm{d} \alpha$ the local tangent to the retraction, then the Wolfe conditions are
\begin{align}
\label{eq:wolfe_1}
C(W') &\leq C(W) + c_1 \, \alpha \, g_W(G, X),\\%
\label{eq:wolfe_2}
g_{W'}(G',X') &\geq c_2 \, g_W(G,X),
\end{align}
with $0 < c_1 < c_2 < 1$ being free parameters~\cite{ring2012optimization,huang2015broyden}.
Eq.~\eqref{eq:wolfe_1} states that the cost function should decrease sufficiently, while Eq.~\eqref{eq:wolfe_2} says that its slope (which starts out negative for a descent direction) should increase sufficiently.
Throughout our simulations, we use the linesearch algorithm described in Refs.~\onlinecite{hager2006algorithm,hager_new_2005}, which also takes into account that the descent property of Eq.~\eqref{eq:wolfe_1} (also known as the Armijo rule) cannot be evaluated accurately close to convergence due to finite machine precision, and switches to an approximate but numerically more stable condition when necessary.
In practice, a small number (often two or three) function evaluations suffice to determine a suitable step size $\alpha$.
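The full Riemannian gradient descent loop can be sketched end to end on a toy problem (a self-contained Python/NumPy illustration, not the Julia implementation used for the benchmarks): minimizing $C(W) = \Tr[W^{\dagger} H W]$ over the Grassmann manifold, whose minimum is the sum of the $p$ smallest eigenvalues of the hermitian matrix $H$. For brevity the linesearch only enforces the sufficient-decrease condition of Eq.~\eqref{eq:wolfe_1} by backtracking, rather than the full Wolfe conditions:

```python
import numpy as np

def expm_skew(Q, alpha=1.0):
    """e^{alpha Q} for skew-hermitian Q (numpy only)."""
    lam, V = np.linalg.eigh(1j * Q)
    return (V * np.exp(-1j * alpha * lam)) @ V.conj().T

def cost_and_grad(W, H):
    """C(W) = Tr[W^H H W] and its Grassmann gradient G = (1 - W W^H) 2 H W."""
    D = 2 * H @ W
    return np.real(np.trace(W.conj().T @ H @ W)), D - W @ (W.conj().T @ D)

def retract(W, X, alpha):
    """Grassmann geodesic retraction e^{alpha (X W^H - W X^H)} W."""
    return expm_skew(X @ W.conj().T - W @ X.conj().T, alpha) @ W

def gradient_descent(H, p, steps=1000, c1=1e-4):
    """Riemannian gradient descent with an Armijo backtracking linesearch."""
    rng = np.random.default_rng(0)
    n = H.shape[0]
    W, _ = np.linalg.qr(rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p)))
    for _ in range(steps):
        C, G = cost_and_grad(W, H)
        slope = -np.real(np.trace(G.conj().T @ G))   # g_W(G, X) for X = -G
        if slope > -1e-24:
            break                                    # gradient is essentially zero
        alpha, W_new = 1.0, retract(W, -G, 1.0)
        while cost_and_grad(W_new, H)[0] > C + c1 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5                             # backtrack until Eq. (wolfe_1) holds
            W_new = retract(W, -G, alpha)
        W = W_new
    return W

H = np.diag(np.arange(1.0, 7.0))     # toy "Hamiltonian" with eigenvalues 1..6
W = gradient_descent(H, p=2)
E = np.real(np.trace(W.conj().T @ H @ W))
assert abs(E - 3.0) < 1e-6           # sum of the two smallest eigenvalues
```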
While (Riemannian) gradient descent with step sizes that satisfy the Wolfe conditions converges in theory, this convergence is only linear and can be prohibitively slow, especially for systems of physical interest exhibiting strong correlations (e.g.\ critical systems)~\footnote{
This can be argued by noting that the Hessian of the corresponding energy function is often related to the dispersion relation of the physical excitations in the system~\cite{haegeman2012variational,haegeman2013post}, and thus has (near)-zero modes for such systems.
}.
An improved algorithm with nearly the same cost is the nonlinear conjugate gradient algorithm, which dates back to the work of Hestenes and Stiefel.
In conjugate gradient the search direction is a linear combination of the (negative) gradient and the previous search direction, a concept known as \enquote{momentum} in the context of optimizers for machine learning.
Various schemes exist for the choice of the $\beta$ coefficient in this linear combination, see Ref.~\onlinecite{hager_survey_2006} and references therein.
All these schemes can be applied in the Riemannian case, although the inner products that need to be computed as part of $\beta$'s definition need to be replaced by the metric $g$.
Furthermore, to build a linear combination between the current gradient and the previous search direction, one needs to invoke the vector transport $\mathcal{T}$ from Sec.~\ref{sec:geometry} for the latter to represent a valid tangent vector at the new base point.
In the simulations below, we use the conjugate gradient scheme of Hager and Zhang~\cite{hager2006algorithm,hager_new_2005}.
From a second order expansion of the cost function around the current point, one arrives at Newton's method, which suggests taking a step of length $1$ in the direction of $-H^{-1}(G)$, where $H$ is the Hessian, i.e.\ the matrix of second derivatives.
While Newton's method has a theoretical quadratic convergence rate close to the minimum, computing $H$ and its inverse is often prohibitively expensive and has various other issues.
The Hessian might not be positive definite far away from the minimum, and furthermore depends on the second order behaviour of the retraction when formulating a Riemannian generalization of Newton's method.
Quasi-Newton methods, on the other hand, construct an approximation to $H^{-1}$ using only gradients, computed at the successive points $W_k$ along the optimization.
The most commonly used is the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm~\cite{nocedal1980updating,nocedal2006numerical}, which keeps a low-rank, positive semi-definite approximation of $H^{-1}$ in memory.
The Riemannian formulation of it also depends on the vector transport and has been well established, see Refs.~\onlinecite{ring2012optimization, huang2015broyden} and references therein.
Both the conjugate gradient and L-BFGS algorithms converge to a local minimum at a rate that is somewhere between the linear convergence of gradient descent and quadratic convergence of Newton's method.
Which one is to be preferred often depends on the application.
L-BFGS requires a few more vector operations per iteration than conjugate gradient, but can use these to scale the approximate inverse Hessian so that the step size $\alpha=1$ is typically accepted and no linesearch is needed in most iterations.
Despite the speedup provided by the conjugate gradient and L-BFGS algorithms, it is often beneficial to apply a \emph{preconditioner} to the optimisation.
A preconditioner is a transformation that maps one tangent vector to another, $X \mapsto \tilde{X}$, and that is applied when choosing the search direction.
Using a preconditioner with gradient descent simply means retracting in the direction of the negative \emph{preconditioned} gradient $-\tilde{G}$, instead of $-G$.
Using preconditioners with conjugate gradient and quasi-Newton methods is not much more complicated, and we direct the reader to the numerical optimisation literature~\cite{nocedal2006numerical,nash1985preconditioning,desterck_nonlinearly_2018} for the details.
The choice of the preconditioner $X \mapsto \tilde{X}$ is typically guided by trying to capture some structure of the Hessian.
The inverse Hessian $\tilde{X} = H^{-1}(X)$ would often be an ideal preconditioner, and while it is usually infeasible to implement, using some approximation to it may already help convergence significantly.
A preconditioner (assumed to be positive definite) can also be seen as changing the metric in the problem, hopefully in such a way that the optimisation landscape becomes less singular and hence easier to navigate for the chosen optimisation algorithm.
This geometrical viewpoint is illustrated in Fig.~\ref{fig:preconditioning_geometry}, and is what we will use to justify the preconditioners we use in our tensor network optimisations.
Note that the same effect could be achieved by actually defining a new metric on the relevant Stiefel or Grassmann manifold, and repeating the steps in Sec.~\ref{sec:geometry} again for this metric.
However, we find that using the Euclidean inner product with an additional explicit preconditioning step gives greater flexibility without complicating e.g.\ the metric condition for the vector transport.
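The cartoon of Fig.~\ref{fig:preconditioning_geometry} can be reproduced numerically. The hypothetical two-dimensional example below (Euclidean, for illustration only) minimizes a quadratic cost with an ill-conditioned Hessian; preconditioning the gradient with the inverse Hessian removes the zig-zagging and converges in a single step:

```python
import numpy as np

# Hypothetical ill-conditioned quadratic cost C(x) = x^T H x / 2
H = np.diag([100.0, 1.0])
P_inv = np.diag([0.01, 1.0])         # preconditioner: here exactly H^{-1}

def descend(x, steps, precondition):
    for _ in range(steps):
        grad = H @ x                               # Euclidean gradient of C
        if np.linalg.norm(grad) < 1e-14:
            break
        d = P_inv @ grad if precondition else grad
        alpha = (grad @ d) / (d @ H @ d)           # exact linesearch for a quadratic
        x = x - alpha * d
    return x

x0 = np.array([1.0, 1.0])
x_plain = descend(x0, 10, precondition=False)
x_prec = descend(x0, 10, precondition=True)
assert np.linalg.norm(x_prec) < 1e-12              # preconditioned: one-step convergence
assert np.linalg.norm(x_prec) < np.linalg.norm(x_plain)
```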
\begin{figure}[tbp]
\centering
\includegraphics[width=0.95\linewidth]{preconditioning_geometry.pdf}
\caption{%
On the left, the grey ovals are the contour lines of the cost function in this 2-dimensional optimisation problem.
The purple arrow $X$ is the negative gradient, and the zig-zag line emanating from it is the path that gradient descent takes.
The relatively slow convergence of the gradient descent path is a consequence of the near-singular geometry of the contour lines, where the cost function varies much more along one axis than the other.
The green arrow $\tilde{X}$ would be the optimal choice for the preconditioned search direction, the one that takes us to the optimum in a single retraction.
By changing the geometry (i.e.\ the metric) of the problem, in this case by a simple rescaling of the axes, we can map to the problem on the right, where the geometry of the contour lines has become less singular.
In this new geometry $\tilde{X}$ is in fact the negative gradient.
This suggests that a preconditioner that implements this change of geometry would probably be beneficial for convergence.
While the above is a cartoon example, redefining the metric to make the optimisation landscape less singular can be a useful way to design preconditioners more generally.
}%
\label{fig:preconditioning_geometry}
\end{figure}
In the context of tensor networks, the cost function $C$ will typically be $C(W) = \bra{\psi(W)} H \ket{\psi(W)}$, where $H$ is a local Hamiltonian, and $\ket{\psi(W)}$ is a tensor network state dependent on the isometry $W$.
A tangent vector $X \in \tangentspace{W}$ can then be related to a state $\ket{\Phi_W(X)} = X^i \ket{\partial_i \psi(W)}$ in Hilbert space, which yields an induced inner product $\braket{\Phi_W(X)}{\Phi_W(Y)}$ between tangent vectors $X, Y \in \tangentspace{W}$.
A suitable preconditioner can then be extracted from the explicit expression of $\braket{\Phi_W(X)}{\Phi_W(Y)}$, or some approximation thereof.
As discussed in the applications below, we assume that this inner product can be written as $\braket{\Phi_W(X)}{\Phi_W(Y)} \approx \Tr[X^{\dagger} Y \rho_W]$ for some $W$-dependent, hermitian, positive (semi)-definite $\rho_W$ of size $p \times p$.
We can then implement a preconditioning step $X \mapsto \tilde{X} \in \tangentspace{W}$ such that (henceforth omitting the $W$ dependence)
\begin{equation}%
\label{eq:preconditioner}
\Re \Tr[Y^{\dagger} \tilde{X} \rho] = \Re \Tr[Y^{\dagger} X] \quad \forall \, Y \in \tangentspace{W}.
\end{equation}
In other words, the Euclidean inner product with $X$ equals the more physically motivated inner product with $\tilde{X}$.
If we express $X$ as $X = WA + W_\perp B$, where $A$ is skew-hermitian (Stiefel) or zero (Grassmann), then the solution to Eq.~\eqref{eq:preconditioner} is
\begin{align}%
\label{eq:preconditioner_solution}
& \tilde{X} = W \tilde{A} + W_\perp \tilde{B},\\%
\text{where} \; & \tilde{A} \rho + \rho \tilde{A} = 2 A \;\text{and}\; \tilde{B} = B \rho^{-1}.
\end{align}
The equation for $\tilde{A}$ is a Sylvester equation, which can be solved easily and efficiently using e.g.\ an eigendecomposition of $\rho$ at a cost $O(p^3)$.
The matrix $\rho$ may often be quite ill-conditioned, and in practice we have found the regularized inverse ${\left(\rho^2 + \mathbbm{1} \delta^2\right)}^{-\frac{1}{2}}$ to work well.
We discuss the choice of $\delta$ in the applications below.
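To make Eq.~\eqref{eq:preconditioner_solution} concrete, the preconditioning step can be sketched in a few lines of NumPy. This is an illustrative sketch, not our production implementation: the function name is ours, the Sylvester denominator is left unregularized for brevity (the text above notes an equivalent regularization can be used there), and the inverse of $\rho$ uses the regularization quoted above.

```python
import numpy as np

def precondition(A, B, rho, delta=0.0):
    """Sketch of the preconditioning step: solve the Sylvester equation
    A~ rho + rho A~ = 2A via an eigendecomposition of the hermitian rho
    (cost O(p^3)), and set B~ = B rho^{-1}, using the regularized inverse
    (rho^2 + delta^2 I)^{-1/2}.  For Grassmann tangents A is zero and
    only the B~ part is needed."""
    w, V = np.linalg.eigh(rho)  # rho = V diag(w) V^dag
    # Sylvester equation in the eigenbasis of rho: A~_ij = 2 A_ij / (w_i + w_j)
    A_eig = V.conj().T @ A @ V
    A_tilde = V @ (2 * A_eig / (w[:, None] + w[None, :])) @ V.conj().T
    # regularized inverse of rho, applied from the right to B
    rho_inv = (V * (w**2 + delta**2) ** -0.5) @ V.conj().T
    B_tilde = B @ rho_inv
    return A_tilde, B_tilde
```

For a well-conditioned $\rho$ and $\delta = 0$ this reduces to the exact solution of Eq.~\eqref{eq:preconditioner_solution}.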
Note that this preconditioner accounts for the structure of the physical state space, i.e.\ it corresponds to the induced metric of the variational manifold in Hilbert space.
When implemented exactly, the preconditioned gradient corresponds to the direction in which a state would evolve under imaginary time evolution as implemented by the time-dependent variational principles of Dirac, Frenkel or McLachlan (see Refs.~\onlinecite{hackl2020geometry,Yuan_2019} and references therein) and has been used with MPS as such~\cite{haegeman2011time}.
This choice, or (block)-diagonal approximations thereof, as discussed in the next section for the case of MERA, was recently referred to as the ``quantum natural gradient'' in the context of variational quantum circuits~\cite{Stokes_2020}.
This choice of preconditioner is independent of the Hamiltonian, and it is conceivable that a much bigger speedup can be obtained by explicitly taking the Hamiltonian into account.
Such an improved preconditioner can probably not be implemented efficiently without resorting to an iterative linear solver, such as the linear conjugate gradient method.
Such a scheme would be close in spirit to the set of optimization methods known as truncated Newton algorithms~\cite{nash1985preconditioning,nash2000survey}.
The above preconditioner can then still prove useful to speed up this inner linear problem.
We elaborate on this in the discussion in Section~\ref{sec:discussion}.
\section{Application: MERA}%
\label{sec:mera}
In this section we show how Riemannian optimization methods can be applied to the multiscale entanglement renormalization ansatz (MERA), and demonstrate that the resulting algorithm outperforms the usual Evenbly-Vidal optimization method used for MERA\@.
Specifically, we concentrate on a one-dimensional, infinite, scale invariant, ternary MERA, but the generalization to other types of MERAs is straightforward.
A MERA is a tensor network of the form
\begin{equation}
\label{eq:mera}
\includegraphics[scale=1,raise=-1.4em]{mera.pdf}
\; .
\end{equation}
Each tensor in a MERA is isometric in the sense that
\begin{equation}
\label{eq:mera_isometricity}
\includegraphics[scale=1,raise=-0.6em]{mera_isometricity_a.pdf}
\; = \;
\includegraphics[scale=1,raise=-0.6em]{mera_isometricity_b.pdf}
\qquad \text{and} \qquad
\includegraphics[scale=1,raise=-0.6em]{mera_isometricity_c.pdf}
\; = \;
\includegraphics[scale=1,raise=-0.6em]{mera_isometricity_d.pdf}
\;,
\end{equation}
where red borders denote complex conjugation.
The network defines a quantum state $\ket{\text{MERA}}$ living on the lattice at the bottom legs in Eq.~\eqref{eq:mera}.
In the example MERA from Eq.~\eqref{eq:mera}, there are two distinct layers:
There is one transition layer at the bottom, followed by a scale invariant layer, copies of which repeat upwards to infinity.
Each layer $i$ is translation invariant and defined by two tensors, the disentangler $u_i =
\includegraphics[scale=1,raise=-0.2em]{mera_u.pdf}\,$
and the isometry
$w_i =
\includegraphics[scale=1,raise=-0.2em]{mera_w.pdf}\,$.
The cost function we are trying to minimize is $\bra{\text{MERA}} H \ket{\text{MERA}}$, where $H = \sum_i h_i$ is a given local Hamiltonian.
In our benchmark simulations we use the critical Ising Hamiltonian
\begin{align}
h_i = -X_i X_{i+1} - Z_i.
\end{align}
The parameter space in which we are optimizing is $\bigtimes_v M_v$, where $\bigtimes_v$ denotes the Cartesian product over all the different tensors $v = u_1, w_1, u_2, w_2, \dots$, and $M_v$ is the Stiefel or Grassmann manifold of each tensor $v$.
Any unitary one-site rotation on the top index of an isometry $w_i$ can be absorbed into the disentangler $u_{i+1}$ above it, and hence the natural manifold for $w$'s is the Grassmann manifold: $M_{w_i} = \mathrm{Gr}$.\footnote{%
Note that we are not saying here that any member of the equivalence class $[w_i] = \{ w_i U \;|\; UU^{\dagger} = U^{\dagger} U = \mathbbm{1}\}$ leads to the same MERA\@: This is obviously not the case.
Instead what we are saying is that changes of the form $w_i \mapsto w_i U$ are degenerate from the point of view of our optimization, as they can be cancelled by a corresponding change in one of the disentanglers.
In other words, any tangent directions that correspond to changes of the type $w_i \mapsto w_i U$ are of no interest to us, and can be projected out.
}
The same is not true for the disentanglers, for which similar unitary rotations would entangle the two top indices, and hence we treat them as points on Stiefel manifolds: $M_{u_i} = \mathrm{St}$.\footnote{
Indeed, $\Grassmann{n}{n}$ is the trivial singleton manifold $[\mathbbm{1}]$.
}
We have omitted the dimensions of the manifolds, since they depend on the physical site state space dimension $d$ and the bond dimension $D$ of the upper layers.
As discussed at the end of Section~\ref{sec:geometry}, the tangent space is the Cartesian product of the tangent spaces of the individual tensors, $\bigtimes_v \tangentspace{v}$, which corresponds to a direct sum structure, and the Riemannian geometry and associated operations extend trivially.
The inner product, in particular, is the sum of the inner products on the individual manifolds.
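Concretely, if a tangent vector of the product manifold is stored as a list of per-tensor components, the total inner product is just a sum of traces. The following minimal sketch (function names are ours) illustrates this direct-sum structure:

```python
import numpy as np

def product_inner(Xs, Ys):
    """Euclidean inner product on a Cartesian product of manifolds:
    the sum of the per-tensor inner products Re Tr[X_v^dag Y_v]."""
    return sum(np.real(np.trace(X.conj().T @ Y)) for X, Y in zip(Xs, Ys))

def product_norm(Xs):
    """Norm induced by the product inner product."""
    return product_inner(Xs, Xs) ** 0.5
```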
To compute the gradients, we first discuss the partial derivatives.
To this end, we denote the partial derivative of the state $\ket{\text{MERA}}$ with respect to a tensor $v$ by $\partial_v \ket{\text{MERA}}$.
Since each tensor appears several times in the network, $\partial_v \ket{\text{MERA}}$ has several terms in it, e.g.
\begin{gather}
\label{eq:mera_derivative}
\partial_{w_1} \ket{\text{MERA}} = \;
\includegraphics[scale=1,raise=-1.2em]{mera_derivative_a.pdf}
\; + \;
\includegraphics[scale=1,raise=-1.2em]{mera_derivative_b.pdf}
\; + \;
\includegraphics[scale=1,raise=-1.2em]{mera_derivative_c.pdf}
\; + \; \dots.
\end{gather}
The partial derivative of the cost function is then $D_v = 2 \partial_{v^{\dagger}} \bra{\text{MERA}} H \ket{\text{MERA}}$.
Up to a scalar factor, the same object arises in the context of the usual Evenbly-Vidal optimization algorithm, where it is called the \enquote{environment} of tensor $v$.
These environments can be computed efficiently, and we refer the reader to Ref.~\onlinecite{evenbly_algorithms_2009} for how to do so.
Extra care needs to be taken when dealing with the scale invariant layer, something we discuss in Appendix~\ref{app:mera_scale_invariant_layer}.
The gradient $G_v$ is the projection of the partial derivative $D_v$ onto the tangent space $\tangentspace{v}$, as in Eq.~\eqref{eq:stiefel_gradient} and~\eqref{eq:grassmann_gradient}.
The total gradient $G$ of the whole parameter space is $G = (G_{u_1}, G_{w_1}, G_{u_2}, \dots) \in \bigtimes_v \tangentspace{v}$.
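For the Euclidean metric, these tangent-space projections take a standard form, which we sketch below as a stand-in for Eqs.~\eqref{eq:stiefel_gradient} and~\eqref{eq:grassmann_gradient} (defined in Sec.~\ref{sec:geometry}): on the Stiefel manifold one removes the hermitian part of $W^{\dagger} D$, while on the Grassmann manifold one keeps only the component orthogonal to the columns of $W$. The function names are ours.

```python
import numpy as np

def grad_stiefel(W, D):
    """Project the partial derivative D onto the Stiefel tangent space
    at the isometry W (Euclidean metric): subtract W times the hermitian
    part of W^dag D, so that W^dag G is skew-hermitian."""
    WdD = W.conj().T @ D
    return D - W @ (WdD + WdD.conj().T) / 2

def grad_grassmann(W, D):
    """Project onto the Grassmann tangent space at [W]: keep only the
    component of D orthogonal to the columns of W."""
    return D - W @ (W.conj().T @ D)
```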
As mentioned above, the inner product between two tangents $X, Y \in \bigtimes_v \tangentspace{v}$ is $\sum_v g_v(X_v, Y_v)$, where $g$ is the Euclidean metric.
However, each $X_v$ is associated with a state in the physical Hilbert space, schematically denoted as $\frac{\partial \ket{\text{MERA}}}{\partial v} X_v$, and we would like to implement a preconditioning that amounts to using instead a metric arising from the physical inner product, namely
\begin{equation}%
\label{eq:mera_full_inner_product}
\sum_{v, v' \in \{u_1, w_1, \dots\}} X_v^{\dagger} \frac{\partial^2 \braket{\text{MERA}}{\text{MERA}}}{\partial v^{\dagger} \partial v'} Y_{v'}\;.
\end{equation}
The cross-terms in this sum are quite expensive to compute, so we settle instead for the diagonal version
\begin{equation}%
\label{eq:mera_diagonal_inner_product}
\sum_{v \in \{u_1, w_1, \dots\}} X_v^{\dagger} \frac{\partial^2 \braket{\text{MERA}}{\text{MERA}}}{\partial v^{\dagger} \partial v} Y_{v}
\;\, = \sum_{v \in \{u_1, w_1, \dots\}} \Tr[X_v^{\dagger} Y_v \rho_v],
\end{equation}
where $\rho_v$ is the reduced density matrix on the top index or indices of $v$.
As discussed at the end of Sec.~\ref{sec:optimization}, preconditioning with this type of metric can be efficiently implemented for both Stiefel and Grassmann tangents.
The regularization parameter $\delta$ used in computing the regularized inverse of $\rho$ (or the equivalent thereof for the Sylvester problem) in the preconditioner can also be allowed to vary.
In particular, using a very small value of $\delta$ can be detrimental to the optimization in the beginning, when we are far from the minimum, and we have found $\delta = \|X_v\|$ to be a good choice.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\linewidth]{mera_results.pdf}
\caption{%
A comparison of convergence in optimising a MERA using the Evenbly-Vidal algorithm (solid green lines) and L-BFGS on Riemannian manifolds (dashed blue lines).
Displayed here are the ground state energy error compared to the exact value (top) and the norm of the gradient (bottom).
The benchmark model in question is the critical Ising model.
In all simulations the MERA is a bond dimension $8$ ternary MERA with two transition layers, with the $\mathbb{Z}_2$ symmetry enforced.
For both algorithms three different simulations are shown, corresponding to three different starting points:
One was a MERA initialized with random isometries and identity disentanglers, the two others were MERAs optimized to convergence at a lower bond dimension, $D=3$ and $D=6$, and then expanded to the full bond dimension $D=8$.
This kind of slow ramping up of the bond dimension can be useful both for speed of convergence and for avoiding local minima.
As the energy error plot shows, here, too, some simulations converge to a local minimum instead of the global one.
In all cases the convergence speed of L-BFGS algorithm clearly outperforms the Evenbly-Vidal algorithm.
}%
\label{fig:mera_results}
\end{figure}
To the best of our knowledge, the only algorithm that has systematically been used to minimize $\bra{\text{MERA}}H\ket{\text{MERA}}$ is the Evenbly-Vidal algorithm, described in detail in Ref.~\onlinecite{evenbly_algorithms_2009}.
Fig.~\ref{fig:mera_results} shows benchmark results comparing the Evenbly-Vidal algorithm and an L-BFGS optimization, using the above preconditioning.
The L-BFGS optimization converges significantly faster for all the simulations displayed in the figure.
Note the logarithmic scale of the horizontal axis, which makes it possible to visualize both the initial and final parts of the convergence.
Individual iterations take somewhat longer to run with L-BFGS (though the asymptotic complexity remains the same, $O(D^8)$ for the ternary MERA), typically 1.5--2 times longer in our simulations, but this effect is more than compensated for by the faster rate of convergence~\footnote{%
Note that the speed difference between the Evenbly-Vidal algorithm and gradient methods depends somewhat on the type of MERA\@.
The most costly operations inherent to the gradient methods (retractions, vector transport and applying the preconditioner) scale as $O(D^6)$, whereas the leading-order cost of both algorithms (computing energy and gradients or environments) is $O(D^7)$ for modified binary, $O(D^8)$ for ternary, and $O(D^9)$ for binary MERA\@.
The higher scaling of e.g.\ binary versus ternary MERA is compensated for by ternary MERAs typically needing higher bond dimensions to achieve the same accuracy, which shows up as proportionally higher subleading costs.
}.
While our benchmark is the Ising model with a ternary MERA, we find qualitatively similar results for binary MERAs, and for different models such as the XXZ model.
Moreover, we show results for the L-BFGS algorithm as they are slightly better than those of the conjugate gradient method, but the difference is not drastic.
Other small changes, such as treating the isometries as elements of Stiefel manifolds, or using different retractions or the canonical metric, have limited effects on the results.
The use of preconditioning with the Hilbert space inner product, however, is crucial, which suggests that further improvements could be made by refining the preconditioner.
Note that MERA optimizations are somewhat prone to getting stuck in local minima, especially at higher bond dimensions, something that affects all optimization methods we have tried.
The strategy of the Evenbly-Vidal algorithm is similar to alternating least-squares algorithms:
At every step a single tensor of the network is updated, while considering the other tensors as independent of it.
The specific update needs to account for the isometry condition and is reviewed in Appendix~\ref{app:ev_steplimit}.
An update like this typically brings down the energy at every step, and the procedure is then iterated over all the different tensors until convergence.
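Schematically, and up to the conventions detailed in Ref.~\onlinecite{evenbly_algorithms_2009} and Appendix~\ref{app:ev_steplimit}, the single-tensor update picks the isometry that minimizes the energy linearized in that tensor, which is obtained from a singular value decomposition of the environment. A minimal sketch (treating the environment as a matrix, with our function name):

```python
import numpy as np

def ev_update(env):
    """Schematic Evenbly-Vidal single-tensor update: return the isometry
    w minimizing Re Tr[w^dag env] over all isometries, i.e. w = -U V^dag
    where env = U S V^dag is a singular value decomposition."""
    U, _, Vh = np.linalg.svd(env, full_matrices=False)
    return -U @ Vh
```

At the minimizer, $\Re \Tr[w^{\dagger} \, \text{env}]$ equals minus the sum of the singular values of the environment.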
At first this seems entirely different from gradient optimization:
The Evenbly-Vidal algorithm makes discontinuous jumps from one point in the parameter space to another, one tensor at a time, whereas gradient methods perform smooth retractions of all the tensors at once.
However, hidden in the Evenbly-Vidal update is in fact a kind of step size parameter, namely the additive scale of the effective Hamiltonian.
In Appendix~\ref{app:ev_steplimit} we show that there is a particular limit in which the Evenbly-Vidal algorithm reduces to gradient descent preconditioned with the metric from Eq.~\eqref{eq:mera_diagonal_inner_product}.
Although the algorithm is not typically run in this limit, this relation to a first-order optimization method gives some intuition for how a quasi-Newton or conjugate gradient method could outperform it.
\section{Application: MPS}%
\label{sec:mps}
In this section we show how gradient optimization methods on Riemannian manifolds can be applied to optimize a matrix product state (MPS)\@.
The MPS is kept in its left-canonical form, where each tensor is an isometry from its physical index and left virtual index to its right virtual index.
Such an MPS can be depicted as
\begin{equation}
\label{eq:mps}
\includegraphics[scale=1,raise=-1.4em]{mps.pdf}
\; ,
\end{equation}
where
\begin{equation}
\label{eq:mps_isometricity}
\includegraphics[scale=1,raise=-1.4em]{mps_isometricity_a_v2.pdf}
\, = \,
\includegraphics[scale=1,raise=-1.4em]{mps_isometricity_b_v2.pdf}
\;,
\end{equation}
and red borders denote complex conjugation.
Every injective MPS can be gauge-transformed into this form.
For simplicity's sake we concentrate on the case of an infinite MPS with one-site translation symmetry~\cite{zauner2018variational,vanderstraeten2019tangent}.
Such an MPS is defined by a single isometry.
However, the generalization to a finite MPS or to one with a larger unit cell is straightforward.
We consider the tensor
$\,\includegraphics[scale=1,raise=-0.45em]{mps_tensor.pdf}$
defining the MPS as a point on a Grassmann manifold, since unitary rotations on the right virtual index of each tensor are mere gauge transformations, which can be absorbed in the next tensor without changing the physical state.
The inner product between two tangent tensors, as well as retraction and transport functions are as explained in Sec.~\ref{sec:geometry}, but see also Ref.~\onlinecite{haegeman2014geometry} for further details about the Riemannian geometry of MPS manifolds.
The cost function is the expectation value of a Hamiltonian, which we represent as a matrix product operator (MPO)
\begin{equation}
\label{eq:mpo}
\includegraphics[scale=1,raise=-0.85em]{mpo.pdf}
\;\; .
\end{equation}
The partial derivative of the cost function with respect to the isometry can be computed as
\begin{equation}
\label{eq:mps_derivative}
2 \cdot\; \includegraphics[scale=1,raise=-1.33em]{mps_derivative.pdf}
\;\; ,
\end{equation}
where $H_l$ and $H_r$ are the left and right energy environments, which can be computed efficiently as outlined in Refs.~\onlinecite{schollwoeck_densitymatrix_2011,zauner2018variational,vanderstraeten2019tangent}.
The partial derivative can then be projected onto the tangent space of the Grassmann manifold, as in Eq.~\eqref{eq:grassmann_gradient}, to obtain the gradient.
For preconditioning, we want the effective inner product between two tangent vectors for an individual site,
$\,\includegraphics[scale=1,raise=-0.45em]{mps_impurity_a.pdf}$
and
$\,\includegraphics[scale=1,raise=-0.45em]{mps_impurity_b.pdf}$,
to be
\begin{align}
\sum_{n=-\infty}^{\infty} &\includegraphics[scale=1,raise=-1.45em]{mps_inner_a.pdf}
\; = \;
\includegraphics[scale=1,raise=-1.45em]{mps_inner_b.pdf}
\; = \;
\includegraphics[scale=1,raise=-1.45em]{mps_inner_c.pdf}
\;.
\label{eq:mps_inner}
\end{align}
Here $n$ is the separation between the sites, and the first equality follows from the fact that Grassmann tangent vectors are orthogonal to the Grassmann points they are at, i.e.\ Eq.~\eqref{eq:grassmann_tangent}.
This is known as the left gauge condition for tangent vectors in the context of MPS~\cite{haegeman2014geometry,vanderstraeten2019tangent}.
The tensor at the very right in Eq.~\eqref{eq:mps_inner} is the dominant right eigenvector of the MPS transfer matrix,
\begin{align}
\label{eq:mps_right_transfermatrix}
\includegraphics[scale=1,raise=-1.45em]{mps_with_right_transfermatrix.pdf}
\; = \;
\includegraphics[scale=1,raise=-1.45em]{mps_right_transfermatrix.pdf}
\;,
\end{align}
and plays the role of $\rho$ from Eq.~\eqref{eq:preconditioner}.
In contrast to the MERA case, this expression corresponds to the exact Hilbert space inner product between tangent vectors, without approximations.
Implementing preconditioning with this inner product requires only implementing the map
\begin{align}
\label{eq:mps_preconditioning}
\includegraphics[scale=1,raise=-0.70em]{mps_impurity.pdf}
\; \mapsto \;
\includegraphics[scale=1,raise=-0.70em]{mps_impurity_with_preconditioning.pdf}
\;.
\end{align}
As with MERA, regularising the inverse of the right eigenvector is paramount for performance, especially during the initial iterations of the optimization process.
In the MPS case we use the regularisation $\left(\includegraphics[scale=1,raise=-0.3em]{right_transfermatrix.pdf} + \mathbbm{1} \delta \right)^{-1}$ with $\delta = \left\|\includegraphics[scale=1,raise=-0.5em]{mps_impurity_a.pdf}\right\|^2$.
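A minimal sketch of this preconditioning step, Eq.~\eqref{eq:mps_preconditioning} with the regularization just stated (our function name; the tangent tensor is stored with index order left virtual, physical, right virtual):

```python
import numpy as np

def mps_precondition(X, rho_r):
    """Sketch of the MPS preconditioner: act with the regularized inverse
    (rho_r + delta I)^{-1}, delta = ||X||^2, on the right virtual index
    of the tangent tensor X, where rho_r is the dominant right eigenvector
    of the MPS transfer matrix (a D x D density matrix)."""
    delta = np.linalg.norm(X) ** 2
    D = rho_r.shape[0]
    return X @ np.linalg.inv(rho_r + delta * np.eye(D))
```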
We would like to note that, with an exact inverse (i.e.\ $\delta = 0$) in Eq.~\eqref{eq:mps_preconditioning}, standard gradient descent in the limit of a small step size $\alpha\to 0$ amounts to imaginary time evolution, implemented using the TDVP~\cite{haegeman2011time}.
This is a consequence of the K\"{a}hler structure of the MPS manifold~\cite{hackl2020geometry,haegeman2014geometry,vanderstraeten2019tangent}.
With the above building blocks, we are ready to use Riemannian gradient methods for a uniform MPS\@.
For benchmarking, we compare against the well-established VUMPS algorithm~\cite{zauner2018variational}.
We are \emph{not} able to consistently outperform VUMPS for all MPS problems, but we are able to do so for some problems.
As an example of a case where gradient optimization performs well, we consider the triangular lattice antiferromagnetic spin-$\frac{1}{2}$ Heisenberg model on a cylinder.
The classical analogue of this model is disordered, but quantum fluctuations restore order in the infinite 2D plane.
It is an example of order from disorder and has been studied extensively~\cite{chubukov1992,kojima_quantum_2018,zheng_excitation_2006,chernyshev_spin_2009,mourigal_dynamical_2013}.
Considering the cylinder as a 1D system with longer range couplings (\enquote{coiling} around the cylinder), the Hamiltonian can be written as
\begin{equation}%
H = \sum_i (h_{i,i+1} + h_{i, i+c} + h_{i, i+c+1}), \qquad h_{i,j} = X_i X_j + Y_i Y_j + Z_i Z_j,
\end{equation}
where $X$, $Y$, and $Z$ are the spin operators.
Here $c$ is the width of the cylinder, which we fix to $c=6$ for our benchmark.
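As a concrete illustration of this coiled-cylinder geometry, the two-site coupling and the bond pattern can be written down directly. This is a sketch with our variable names, using spin-$\frac{1}{2}$ operators (Pauli matrices divided by two); the two-site term has the familiar singlet/triplet spectrum $\{-\frac{3}{4}, \frac{1}{4}\}$.

```python
import numpy as np

# spin-1/2 operators X, Y, Z (Pauli matrices divided by two)
X = np.array([[0, 1], [1, 0]]) / 2
Y = np.array([[0, -1j], [1j, 0]]) / 2
Z = np.array([[1, 0], [0, -1]]) / 2

# two-site Heisenberg coupling h_{ij} = X_i X_j + Y_i Y_j + Z_i Z_j
h = (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)).real

def bonds(i, c=6):
    """Couplings of site i when the width-c cylinder is coiled into a chain:
    nearest neighbour plus the two c-separated diagonal bonds."""
    return [(i, i + 1), (i, i + c), (i, i + c + 1)]
```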
The appropriate MPS ansatz for this model is a uniform MPS with $c$-site unit cell.
We also enforce the $\mathrm{SU}(2)$ symmetry of the MPS, since continuous symmetry breaking does not take place for finite $c$.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\linewidth]{mps_results.pdf}
\caption{%
A comparison of convergence in optimising an infinite MPS using VUMPS (solid green lines), conjugate gradient (dashed blue lines), gradient descent (dotted red lines), and the \enquote{switch} method that combines VUMPS and conjugate gradient (dash-dotted purple lines).
The benchmark model in question is a triangular lattice antiferromagnetic spin-$\frac{1}{2}$ Heisenberg model on a cylinder of width $6$.
Results are shown for MPS bond dimensions 1100 (darker colors) and 1900 (lighter colors).
SU(2) symmetry of the tensors is enforced.
VUMPS clearly performs the best at the start of the optimization, but its asymptotic convergence rate is roughly the same as that of gradient descent, whereas conjugate gradient can be seen to converge significantly faster.
A best-of-both-worlds solution is the switch method, which does 30 iterations of VUMPS at the start and then switches over to conjugate gradient.
L-BFGS produces results roughly comparable to those of conjugate gradient, but we do not show them here.
}%
\label{fig:mps_results}
\end{figure}
In Fig.~\ref{fig:mps_results} we show results comparing VUMPS with both gradient descent and conjugate gradient optimizations, with the above preconditioner.
VUMPS does clearly better in the beginning of the optimization, which starts from a randomly initialized MPS\@.
However, its convergence speed after the initial burst is similar to that of gradient descent, whereas conjugate gradient converges at a clearly faster rate.
This is to be expected, as VUMPS was inspired by imaginary time evolution using the TDVP (that is, Riemannian gradient descent), and should become equivalent to it for small step sizes, i.e.\ when the algorithm is close to convergence.
Note that convergence in Fig.~\ref{fig:mps_results} is shown with respect to number of iterations, not actual running time.
VUMPS iterations, which internally use an iterative eigenvalue solver, take roughly 1.5 times as long as conjugate gradient iterations, thus increasing the gap between the two methods when plotting with respect to running time.
Finally, we have also included results for a method labeled \enquote{switch}, where we use VUMPS for the first few iterations, and then switch over to conjugate gradient, which outperforms both of the individual methods.
As mentioned, the Riemannian optimization methods explained here can be easily applied to a finite MPS as well.
Preliminary benchmarks indicate that for some models, gradient methods can outperform the DMRG algorithm~\cite{white_density_1992}.
The qualitative picture is similar to what we observe with infinite MPS, where variational methods like VUMPS and DMRG are superbly fast at making progress early in the optimization, but if the problem is difficult and a slow convergence sets in, the asymptotic convergence rate of preconditioned conjugate gradient or L-BFGS is often better.
We leave, however, a more detailed study of finite MPS optimization for future work.
\section{Conclusion}%
\label{sec:discussion}
The MERA and MPS results of Secs.~\ref{sec:mera} and~\ref{sec:mps} illustrate that Riemannian gradient-based optimization can be a competitive method for optimising tensor network ansätze.
Partial derivatives of the energy with respect to a given tensor give rise to tensor network diagrams that also appear in current algorithms such as the Evenbly-Vidal algorithm for MERA and the VUMPS algorithm for infinite MPS\@.
Implementing these methods is thus only a matter of computing an actual update direction from the computed gradient using the recipe of the chosen method (gradient descent, conjugate gradient or L-BFGS quasi-Newton), and replacing the update step with a retraction.
Vector transport is subsequently used to bring data from the previous iteration(s), such as former gradients, to the tangent spaces at the current iterate.
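To make this recipe concrete, a bare-bones Riemannian gradient descent loop is sketched below. It uses the Grassmann tangent projection together with a QR-based retraction, which is one standard choice of retraction and not necessarily the one used in our simulations; the function names, step size, and iteration count are illustrative.

```python
import numpy as np

def qr_retract(W, X):
    """QR-based retraction on the Stiefel/Grassmann manifold: the Q
    factor of W + X, with column signs fixed so that the R factor has
    a positive diagonal."""
    Q, R = np.linalg.qr(W + X)
    return Q * np.sign(np.diag(R))

def riemannian_gd(W, dC, alpha=0.1, steps=500):
    """Bare-bones Riemannian gradient descent: project the Euclidean
    partial derivative dC(W) onto the (Grassmann) tangent space, take
    a step of size alpha, and retract back onto the manifold."""
    for _ in range(steps):
        D = dC(W)
        G = D - W @ (W.conj().T @ D)  # Grassmann tangent projection
        W = qr_retract(W, -alpha * G)
    return W
```

As a sanity check, minimizing $\Tr[W^{\dagger} H W]$ for a hermitian $H$ with this loop converges to the lowest eigenvector subspace of $H$.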
This approach is fully compatible with exploiting the sparse structure of tensors arising from symmetries, such as $\mathbb{Z}_2$, $\mathsf{U}_1$ or even non-abelian symmetries such as $\mathsf{SU}_2$.
The isometry condition defines how the tensor should be interpreted as a linear map, so that, when using symmetric tensors, they take a block diagonal form in a basis of fused representations, according to Schur's lemma.
The isometry condition itself, the projection onto the tangent space, the retraction, and the vector transport then all apply at the level of those individual diagonal blocks, and can easily be implemented as such.
Indeed, as mentioned, $\mathbb{Z}_2$ symmetry was used in the MERA results and $\mathsf{SU}_2$ symmetry in the MPS results presented above.
We have demonstrated the usefulness of gradient optimization for MPS and MERA, but there are other tensor network methods that also involve isometric tensors.
Notable cases we have not discussed are tree tensor networks, i.e.\ MERA without the disentanglers, and the tensor network renormalization algorithm~\cite{evenbly_tnr_2015} (TNR), which is closely related to MERA, and for which the usual optimization method is a variant of the Evenbly-Vidal algorithm.
We expect that in both these cases gradient methods could provide similar advantages as they do for MERA\@.
While we focused here on the application of Riemannian gradient-based optimization methods for tensor networks with isometry constraints, even their Euclidean counterparts have not received a great deal of attention as an alternative to the standard recipe of optimizing individual tensors in an alternating sequence using only local information (i.e.\ from the current iteration, not relying on the history of previous iterations).
While the latter can be expected to work extremely well when correlations are relatively short-ranged, there is no particular reason that gradient-based methods which optimize all tensors simultaneously could not replicate this behaviour in this regime, when provided with a suitable preconditioner.
However, gradient-based methods, in particular those that use a history of previous iterations, such as conjugate gradient and quasi-Newton algorithms, have the potential to also work in the regime with long-range and critical correlations.
These conditions typically imply very small eigenvalues in the Hessian, which is detrimental for methods that only use first order information of the current iterate.
A specific example includes situations of low particle density, for which specific multigrid algorithms have been explored~\cite{dolfi2012multigrid}.
It would be interesting to see if gradient-based algorithms would alleviate the problems that plague DMRG in this regime.
Related to this is the case of continuous MPS~\cite{verstraete2010continuous}, where the state is not even a linear or homogeneous function of the matrices containing the variational parameters and DMRG- or VUMPS-like algorithms are unavailable.
In those cases, gradient-based methods are the only alternative~\cite{ganahl2017continuous,tuybens2020variational}.
For all of these applications, a well-considered preconditioner is of paramount importance.
A suitably preconditioned gradient descent can easily outperform a conjugate gradient or quasi-Newton algorithm with ill-chosen parameterization.
In the case of MPS-specific methods such as DMRG or VUMPS, this is implicit in using what is known as the center-gauge.
For gradient methods, the same effect is accomplished by using the reduced density matrix which appears in the physical inner product of these tangent vectors in Hilbert space.
However, it is conceivable that there is plenty of room for improvement by using information of the actual Hamiltonian in constructing a preconditioner, i.e.\ by using its matrix elements with respect to the tangent vectors rather than those of the identity operator.
While the full Hessian needed for Newton's algorithm can be computed for the case of MPS~\cite{haegeman2013post}, this comes with a large cost and would likely be inefficient.
A single application of the Hessian to a given tangent vector requires solving several non-hermitian linear problems with iterative solvers (e.g.\ the generalized minimal residual algorithm) in order to obtain cubic scaling in the bond dimension.
Hence, Newton's method would amount to three nested levels of iterative algorithms.
A local positive definite approximation of the Hessian that can be applied to a given vector efficiently and directly can be constructed by (i) ignoring contributions from taking both partial derivatives in the ket or in the bra (somewhat similar to the Gauss-Newton or Levenberg–Marquardt algorithms), and (ii) discarding non-local contributions, similar to how we ignored off-diagonal contributions in the inner product of MERA tangent vectors.
Such a preconditioner would still need an iterative solver (e.g.\ linear conjugate gradient) to be applied efficiently, but the improvement over the metric or preconditioner constructed here might be sufficiently significant to overcome this overhead.
Indeed, such a scheme is similar in spirit to truncated Newton algorithms~\cite{nash2000survey}, for which dedicated implementations of the inner conjugate gradient method exist, which detect the absence of positive definiteness and produce valid descent directions at every step.
A related strategy might be to directly use the solution of the local problem from DMRG, VUMPS or the Evenbly-Vidal algorithm as some kind of nonlinear preconditioner, as outlined in Ref.~\onlinecite{desterck_nonlinearly_2018}.
These ideas will be explored in a forthcoming paper.
As a final remark, we would like to point out that the techniques explored in this manuscript are relevant beyond the case of tensor network representations of ground states of many body systems.
Various tasks in quantum computation also rely on the classical optimization of the gates in a unitary circuit as a precursory step, and this particular classical task can likewise benefit from the Riemannian optimization methods on which we have reported.
\emph{Note:}
Near completion of this work, the preprint \enquote{Riemannian optimization and automatic differentiation for complex quantum architectures} by Luchnikov, Krechetov, and Filippov~\cite{luchnikov_riemannian_2020} appeared on the arXiv, which also proposes the use of Riemannian optimization techniques for applications involving isometric tensor networks, quantum control and state tomography.
In particular, they also consider Stiefel manifolds to perform gradient optimization on a (finite) MERA, although with different gradient-based algorithms inspired by machine learning.
They do not consider the use of preconditioners nor applications to MPS, so that the two articles complement each other and pave the way for a bright future for Riemannian gradient-based optimization of tensor networks.
\section*{Acknowledgements}
We thank Glen Evenbly, Andrew Hallam, Laurens Vanderstraeten, and Frank Verstraete for useful discussions.
We also thank Miles Stoudenmire and an anonymous referee for helpful feedback.
\paragraph{Funding information}
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreements No 715861 (ERQUAF) and 647905 (QUTE)), and from Research Foundation Flanders (FWO) via grant GOE1520N and via a postdoctoral fellowship of MH\@.
\begin{appendix}
\section{Evenbly-Vidal algorithm and its relation to gradient descent}%
\label{app:ev_steplimit}
This appendix summarizes the local update in the Evenbly-Vidal algorithm, illustrates the implicit notion of a step size it contains, and relates it to a preconditioned gradient descent in the limit of small step size.
For a Hamiltonian $H$ the MERA cost function is
\begin{equation}
C(W) = \bra{\text{MERA}(W)} H \ket{\text{MERA}(W)},
\end{equation}
where we have chosen to focus on a single isometry or disentangler $W$ only.
Note that no normalization is necessary, as the state is properly normalized due to the isometry conditions on the tensors.
Because $C$ is a homogeneous function of $W$, $C(W) \propto \Tr[W^{\dagger} D]$, where $D = 2 \partial_{W^\ast} C$ is the partial derivative that we used in the gradient optimization as well, also called the \emph{environment} of $W$.
Given this linear approximation of the cost function, where we assume $D$ to be independent of $W$ (which it in reality is not), the choice of $W$ that extremises it is $W = \pm Q$, where $D = QP$ is the polar decomposition, or as the original paper~\cite{evenbly_algorithms_2009} expresses this, $Q = UV^{\dagger}$ where $D = U S V^{\dagger}$ is the singular value decomposition.
While the sign of $W$ matters for the linearized cost function, it does not for $C$, as $C$ contains only even powers of $W$.
Although the assumption of $D$ being independent of $W$ is clearly false, the update that sets $W = Q$ still works as an iterative step that in most situations increases $\|C\|$.
This step can then be repeated, and performed in turn for each of the different tensors that make up the MERA, to converge to a local maximum of $\| C \|$.
This algorithm has fixed points where $D = W P$, i.e.\ when $W$ equals the polar factor of $D$.
In that case, it can easily be verified that the gradient $G$ associated to $D$ by orthogonal projection onto the Stiefel tangent space vanishes, which confirms the necessary condition that this scheme converges to local extrema.
However, in order to ensure that maximizing $\| C \|$ amounts to minimizing $C$, the Hamiltonian is redefined as $H_\gamma = H - \gamma \mathbbm{1}$, with $\gamma$ sufficiently large, e.g.\ so as to make $H_\gamma$ negative definite.
In that case, the ground state approximation is indeed the state that maximizes $\| C \|$.
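The local update can be condensed into a few lines of code. The following NumPy sketch (our own illustration, with a random matrix standing in for the true environment; the function name is ours) computes the polar factor via the singular value decomposition and shows that the resulting isometry extremizes the linearized cost, with $\Tr[W^{\dagger}D]$ attaining the sum of singular values of $D$:

```python
import numpy as np

def ev_update(D):
    """Evenbly-Vidal local update: replace the tensor W by the polar
    factor Q of its environment D = U S V^dagger, i.e. Q = U V^dagger.
    This choice extremizes the linearized cost Tr[W^dagger D]."""
    U, _, Vh = np.linalg.svd(D, full_matrices=False)
    return U @ Vh

# A random complex "environment" standing in for the true D:
rng = np.random.default_rng(0)
D = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
W = ev_update(D)
```

In an actual MERA optimization, $D$ would be obtained by contracting the network around the tensor being updated, and this update would be swept over all tensors in turn.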
Although $\gamma$ was introduced to shift $H$ by a constant to make it sufficiently negative, it turns out to play the role of an inverse step size.
To see this, first note that
\begin{equation}
C_\gamma \, = \, \bra{\text{MERA}(W)} H_\gamma \ket{\text{MERA}(W)}
\, = \, C - \gamma \Tr[W^{\dagger} W \rho],
\end{equation}
where $\rho$ is the reduced density matrix at the top index or indices of $W$.
Consequently, $D_\gamma = D - \gamma W \rho$.
Now decompose $D$ as $D = W (A + S) + W_\perp B$, where $A$ and $S$ are the skew-hermitian and hermitian parts of $W^{\dagger} D$, and thus
\begin{equation}
D_\gamma \, = \, W(A + S - \gamma \rho) + W_\perp B
\, = \, W(S - \gamma \rho) + G.
\end{equation}
Here $G = W A + W_\perp B$ is the gradient, obtained by projecting $D$ onto the Stiefel tangent space at base point $W$.
As expected, the term in the Hamiltonian $H_\gamma$ that is proportional to the identity operator does not contribute to the Stiefel gradient.
At convergence, $A$ and $B$ will be zero and the role of $\gamma$ is clearly to shift the eigenvalues of $S$ so as to have a fixed sign.
Now consider a small but non-zero $G$, i.e.\ when the algorithm is close to convergence, and treat it as a perturbation to $W(S - \gamma \rho)$.
To see how the Evenbly-Vidal update behaves in this case, we need to understand perturbation theory of the polar decomposition.
If $X = QP$ is the polar decomposition of some arbitrary matrix $X$, and we perturb it as $X + \,\mathrm{d} X$, then an exercise that we omit here shows that
\begin{align}%
\label{eq:polar_perturbation}
X + \,\mathrm{d} X = (Q + \,\mathrm{d} Q) (P + \,\mathrm{d} P)
\end{align}
where $\,\mathrm{d} P$ is some hermitian matrix we do not care about, and
\begin{align}%
\label{eq:polar_perturbation_dQ}
&\,\mathrm{d} Q = Q A_X + Q_\perp B_X\\%
\text{where}\quad & A_X P + P A_X = Q^{\dagger} \,\mathrm{d} X - \,\mathrm{d} X^{\dagger} Q\\%
\text{and}\quad & B_X = Q_\perp^{\dagger} \,\mathrm{d} X P^{-1}.
\end{align}
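These perturbative formulas are straightforward to verify numerically. In the sketch below (our own construction; the matrix sizes and conditioning are chosen purely for illustration), the Sylvester equation for $A_X$ is solved with \texttt{scipy.linalg.solve\_sylvester}, and the predicted first-order change of the polar factor is compared against a finite difference:

```python
import numpy as np
from scipy.linalg import polar, solve_sylvester, null_space

rng = np.random.default_rng(1)
m, n = 5, 3
# A well-conditioned X (singular values 3, 2, 1) and a random perturbation:
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = U0 @ np.diag([3.0, 2.0, 1.0]) @ V0.T
dX = rng.standard_normal((m, n))

Q, P = polar(X)          # X = Q P with Q isometric, P hermitian positive
Qp = null_space(Q.T)     # orthonormal basis of the complement of ran(Q)

# First-order change of the polar factor predicted by the formulas:
C = Q.T @ dX - dX.T @ Q
A_X = solve_sylvester(P, P, C)       # solves A_X P + P A_X = C
B_X = Qp.T @ dX @ np.linalg.inv(P)
dQ_pred = Q @ A_X + Qp @ B_X

# Finite-difference reference:
eps = 1e-6
Q_eps, _ = polar(X + eps * dX)
dQ_fd = (Q_eps - Q) / eps
```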
Matching this up with our case,
\begin{align}%
\label{eq:polar_perturbation_ev}
& D_\gamma = \underbrace{W}_{=Q} \underbrace{(S - \gamma \rho)}_{=P} + \underbrace{W A + W_\perp B}_{= \,\mathrm{d} X},
\end{align}
we obtain
\begin{align}%
& \,\mathrm{d} Q = \,\mathrm{d} W = WA_X + W_\perp B_X\\%
\text{where}\quad & A_X (S - \gamma \rho) + (S - \gamma \rho) A_X = 2 A\\%
\text{and}\quad & B_X = B (S - \gamma \rho)^{-1}.
\end{align}
If we assume that $\gamma$ is sufficiently large so that $S$ is negligible compared to it, this becomes
\begin{align}%
& \,\mathrm{d} W = -\frac{1}{\gamma} (W \tilde{A}_X + W_\perp \tilde{B}_X)\\%
\text{where}\quad & \tilde{A}_X \rho + \rho \tilde{A}_X = 2 A\\%
\text{and}\quad & \tilde{B}_X = B \rho^{-1}.
\end{align}
Comparing this with Eqs.~\eqref{eq:preconditioner_solution} and~\eqref{eq:mera_diagonal_inner_product}, we can identify this with $\,\mathrm{d} W = -\frac{1}{\gamma}\tilde{G}$, where $\gamma^{-1}$ thus plays the role of a step size in the Evenbly-Vidal algorithm, and $\tilde{G}$ is the gradient preconditioned with the same metric that was used in our gradient optimization in Sec.~\ref{sec:mera}.
Indeed, this observation further motivates our specific choice of preconditioner.
Note that in practice, the Evenbly-Vidal algorithm might not satisfy the assumption of large $\gamma$.
The analysis above remains valid up to the final assumption, and might thus give an indication of a better preconditioner for MERA optimization that includes information from the Hamiltonian, yet can still be implemented efficiently.
Instead of $\rho$, we could use $\rho - \gamma^{-1} S$, with $S$ the hermitian part of $W^{\dagger} D$ and $\gamma$ chosen sufficiently large to ensure positive definiteness.
We leave this proposal for future work.
\section{Efficient computation with the scale invariant layer of a MERA}%
\label{app:mera_scale_invariant_layer}
In the optimization of an infinite MERA, the scale invariant layers at the top need to be treated somewhat differently from the rest.
To discuss this, we first need to lay down some notation.
We denote the local Hamiltonian term ascended to the lowest scale invariant layer by $h$.
We often think of $h$ not as an operator $V \to V$, but as a vector in $V \otimes \bar{V}$, and denote this vector $\bra{h}$.
Similarly, we denote the local scale invariant density matrix $\rho$, and its vectorized version by $\ket{\rho}$.
Finally, we call $A$ the ascending superoperator, thought of as a linear operator $V \otimes \bar{V} \to V \otimes \bar{V}$.
Right-multiplying a vector like $\bra{h}$ by $A$ corresponds to raising it by a layer, and left-multiplying a vector like $\ket{\rho}$ by $A$ corresponds to lowering it by a layer.
There are two problems that need to be solved for $A$ at every iteration of the optimization.
First, to find $\ket{\rho}$, we must solve the eigenvalue equation $A \ket{\rho} = \ket{\rho}$.
Second, when computing the gradient, we need to take the partial derivative $\partial_v \Tr[h \rho] = \partial_v \braket{h}{\rho}$, where $v$ is either the disentangler or the isometry of the scale invariant layer.
Expanding the dependence of $\ket{\rho}$, through $A$, on $v$, one finds
\begin{equation}
\label{eq:scale_invariant_partial}
\partial_v \braket{h}{\rho} = \sum_{i=0}^\infty \bra{h} A^i (\partial_v A) \ket{\rho}.
\end{equation}
To evaluate this we need to find the value of the series $\sum_{i=0}^\infty \bra{h} A^i$.
At face value, this diverges if $\bra{h}$ has overlap with $\bra{\mathbbm{1}}$ (the vectorized version of the identity matrix), since $\bra{\mathbbm{1}} A = \bra{\mathbbm{1}}$.
However, it turns out that any contributions to the partial derivative that are of the form $\bra{\mathbbm{1}}(\partial_v A)\ket{\rho}$ are orthogonal to the Grassmann/Stiefel tangent plane and thus projected out, because they correspond to shifting the cost function by a constant.
Hence we can define $A' = A - \ket{\rho}\bra{\mathbbm{1}}$ and replace the above series by $\sum_{i=0}^\infty \bra{h} A'^i$, which converges like a geometric series, since all eigenvalues of $A'$ are smaller than $1$ in modulus.
Indeed, this can similarly be understood as regular perturbation theory for the eigenvector $\rho$ of the (non-hermitian) operator $A$, whose eigenvalue $1$ does not change under the perturbation.
All of the above is well-known from the original MERA papers~\cite{vidal_class_2008,evenbly_algorithms_2009}, and comes down to solving two relatively simple linear algebra problems.
The reason this is worth mentioning is that multiplication by $A$ is the leading-order cost of the whole MERA optimization, and thus as few such operations as possible should be performed.
With the traditional Evenbly-Vidal optimization, approximations have often been used, such as approximating $\bra{h}$ at iteration $i$ as $\bra{h_i} = \bra{h_{i-1}} + \bra{h_{i-1}} A'$, to save computation time~\cite{evenbly_quantum_2011}.
With gradient algorithms like the ones presented here, these kinds of approximations may not be feasible, since the gradient needs to be computed to good accuracy at every step to be able to perform a line search.
We have found, however, that using Krylov subspace methods for the eigenvalue problem $A \ket{\rho} = \ket{\rho}$ and for the linear problem of solving $\bra{h}\sum_{i=0}^\infty A'^i$ from $(\bra{h} \sum_{i=0}^\infty A'^i) (\mathbbm{1} - A) = \bra{h}$, with a small Krylov space dimension (e.g.\ $4$) and with the solution from the previous iteration as the initial guess, usually leads to accurate results with very few applications of $A$.
This helps make the MERA gradient optimization methods competitive with the Evenbly-Vidal algorithm.
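As a toy illustration of this linear-algebra structure (with a random column-stochastic matrix standing in for the actual ascending superoperator, and the all-ones vector playing the role of $\bra{\mathbbm{1}}$), the following sketch finds the fixed point $\ket{\rho}$, forms $A' = A - \ket{\rho}\bra{\mathbbm{1}}$, and evaluates the geometric series $\bra{h}\sum_{i=0}^{\infty} A'^{i}$ through a single linear solve; in a real implementation this solve would be performed with a Krylov method seeded with the previous iteration's solution:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
# Toy stand-in for the ascending superoperator: column-stochastic, so the
# all-ones vector (playing the role of <1|) is a fixed left eigenvector.
A = rng.random((d, d))
A /= A.sum(axis=0)
one = np.ones(d)

# Fixed point A|rho> = |rho>, normalized so that <1|rho> = 1:
w, V = np.linalg.eig(A)
rho = np.real(V[:, np.argmax(np.abs(w))])
rho /= one @ rho

Ap = A - np.outer(rho, one)   # A' = A - |rho><1|
h = rng.random(d)

# <h| sum_i A'^i from the linear problem x^T (1 - A') = h^T; in practice
# this is done with a Krylov solver (e.g. GMRES) and a warm start.
x = np.linalg.solve((np.eye(d) - Ap).T, h)

# Reference: truncated geometric series, which converges since all
# eigenvalues of A' have modulus smaller than 1.
series, term = h.copy(), h.copy()
for _ in range(500):
    term = term @ Ap
    series += term
```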
\end{appendix}
\section*{Introduction}
\setcounter{section}{0}
\setcounter{equation}{0}
\subsection{Operator error estimates}
The paper concerns homogenization theory of periodic differential operators (DOs). First of all,
we mention the books \cite{BeLP, BaPa, ZhKO}.
In a series of papers \cite{BSu1,BSu2,BSu3} by Birman and Suslina, an operator-theoretic (spectral) approach to homogenization problems was developed. In $L_2({\mathbb R}^d; {\mathbb C}^n)$, a wide class of
matrix strongly elliptic second order DOs ${\mathcal A}_\varepsilon$ was studied.
The operator ${\mathcal A}_\varepsilon$ is given by
\begin{equation}
\label{A_eps}
{\mathcal A}_\varepsilon = b(\mathbf{D})^* g(\mathbf{x}/\varepsilon) b(\mathbf{D}), \quad \varepsilon >0,
\end{equation}
where $g(\mathbf{x})$ is a bounded and positive definite $(m\times m)$-matrix-valued function
periodic with respect to some lattice \hbox{$\Gamma \subset {\mathbb R}^d$}, and $b(\mathbf{D}) = \sum_{l=1}^d b_l D_l$ is a first order DO. Here $b_l$ are constant $(m \times n)$-matrices. It is assumed that $m \geqslant n $ and the symbol $b(\boldsymbol{\xi})$ has maximal rank.
In \cite{BSu1}, it was shown that the resolvent $({\mathcal A}_\varepsilon +I)^{-1}$ converges in the $(L_2 \to L_2)$-operator norm to the resolvent of the effective operator ${\mathcal A}^0$, and
\begin{equation}
\label{est_A_eps}
\bigl\| ({\mathcal A}_\varepsilon +I)^{-1} - ({\mathcal A}^0+I)^{-1} \bigr\|_{L_2(\mathbb{R}^d)\to L_2(\mathbb{R}^d)} \leqslant C \varepsilon.
\end{equation}
The effective operator is given by ${\mathcal A}^0= b(\mathbf{D})^* g^0 b(\mathbf{D})$, where $g^0$ is a constant positive matrix called the \textit{effective} matrix. In \cite{Su1}, a similar result was obtained for the parabolic semigroup:
\begin{equation}
\label{parab_est_A_eps}
\bigl\| e^{- \tau {\mathcal A}_\varepsilon} - e^{-\tau {\mathcal A}^0} \bigr\|_{L_2(\mathbb{R}^d)\to L_2(\mathbb{R}^d)} \leqslant C(\tau) \varepsilon,\quad \tau >0.
\end{equation}
Estimates \eqref{est_A_eps} and \eqref{parab_est_A_eps} are order-sharp. Such inequalities are called \textit{operator error estimates} in homogenization theory.
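The flavour of estimate \eqref{est_A_eps} can already be observed numerically in the simplest one-dimensional scalar situation, where the effective coefficient is the harmonic mean of the periodic coefficient. The following self-contained finite-difference sketch (a toy example of our own, not taken from the works cited here) solves $\bigl(-\frac{d}{dx}\, a(x/\varepsilon) \frac{d}{dx} + I\bigr) u_\varepsilon = f$ on the unit torus for $a(y) = 2 + \sin 2\pi y$, whose harmonic mean equals $\sqrt{3}$, and checks that $u_\varepsilon$ approaches the homogenized solution as $\varepsilon$ decreases:

```python
import numpy as np

def solve_resolvent(a_mid, f, h):
    """Second-order finite differences for (-d/dx a(x) d/dx + 1) u = f on
    the unit torus; a_mid[i] is the coefficient at the midpoint (i+1/2)h."""
    N = len(f)
    M = np.zeros((N, N))
    for i in range(N):
        ap, am = a_mid[i], a_mid[i - 1]
        M[i, (i + 1) % N] -= ap / h**2
        M[i, i - 1] -= am / h**2
        M[i, i] += (ap + am) / h**2 + 1.0
    return np.linalg.solve(M, f)

N = 1024
h = 1.0 / N
x = np.arange(N) * h
f = np.cos(2 * np.pi * x)

# Homogenized solution: the effective coefficient is the harmonic mean of
# a(y) = 2 + sin(2 pi y), which equals sqrt(3):
u0 = f / (np.sqrt(3.0) * (2 * np.pi) ** 2 + 1.0)

errs = []
for eps in (1 / 8, 1 / 16):
    a_mid = 2.0 + np.sin(2 * np.pi * (x + h / 2) / eps)
    u_eps = solve_resolvent(a_mid, f, h)
    errs.append(np.sqrt(h) * np.linalg.norm(u_eps - u0))  # discrete L2 error
```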
A different approach to operator error estimates (the shift method) was developed by Zhikov and Pastukhova. In \cite{Zh2,ZhPas1,ZhPas2}, estimates of the form \eqref{est_A_eps}, \eqref{parab_est_A_eps} were obtained for the operators of acoustics and elasticity. Further results were discussed in a survey \cite{ZhPas3}.
The operator error estimates for the nonstationary Schr{\"o}dinger-type equations and hyperbolic equations were studied in \cite{BSu4} and in the recent works \cite{Su3, Su4, M1, M2, DSu1, DSu2, D, DSu4}.
In operator terms, the behavior of the operator-valued functions
$e^{-i \tau {\mathcal A}_\varepsilon}$,
$\cos (\tau {\mathcal A}_\varepsilon^{1/2})$,
${\mathcal A}_\varepsilon^{-1/2} \sin (\tau {\mathcal A}_\varepsilon^{1/2})$, $\tau \in \mathbb{R}$, was investigated. It turned out that the nature of the results differs from the case of elliptic and parabolic equations: the type of the operator norm must be changed.
Let us dwell on the hyperbolic case. In \cite{BSu4}, the following sharp order estimate was proved:
\begin{equation}
\label{est_cos_A_eps}
\bigl\| \cos (\tau {\mathcal A}_\varepsilon^{1/2}) - \cos (\tau ({\mathcal A}^0)^{1/2}) \bigr\|_{H^2(\mathbb{R}^d)\to L_2(\mathbb{R}^d)} \leqslant C(1+ |\tau|) \varepsilon.
\end{equation}
A similar result for the operator ${\mathcal A}_\varepsilon^{-1/2} \sin (\tau {\mathcal A}_\varepsilon^{1/2})$
together with approximation in the energy norm was obtained in \cite{M1, M2}:
\begin{align}
\label{est_sin_A_eps}
\bigl\| {\mathcal A}_\varepsilon^{-1/2} \sin (\tau {\mathcal A}_\varepsilon^{1/2}) -
({\mathcal A}^0)^{-1/2} \sin (\tau ({\mathcal A}^0)^{1/2}) \bigr\|_{H^1(\mathbb{R}^d)\to L_2(\mathbb{R}^d)} \leqslant
C(1 + |\tau|) \varepsilon,
\\
\label{est_sin_A_eps2}
\bigl\| {\mathcal A}_\varepsilon^{-1/2} \sin (\tau {\mathcal A}_\varepsilon^{1/2}) -
({\mathcal A}^0)^{-1/2} \sin (\tau ({\mathcal A}^0)^{1/2}) - \varepsilon K(\varepsilon;\tau) \bigr\|_{H^2(\mathbb{R}^d)\to H^1(\mathbb{R}^d)} \leqslant C(1 + |\tau|) \varepsilon.
\end{align}
Here $K(\varepsilon;\tau)$ is the corresponding corrector.
In \cite{DSu1, DSu2, DSu4}, it was shown that in the general case the results
\eqref{est_cos_A_eps}--\eqref{est_sin_A_eps2} are sharp both regarding the type of the operator norm and regarding the dependence of estimates on $\tau$ (it is impossible to replace $(1+|\tau|)$ on the right by
$(1+|\tau|)^\alpha$ with $\alpha<1$). On the other hand, under some additional assumptions the results admit the following improvement:
\begin{gather}
\label{usilenie_est_cos_A_eps}
\bigl\| \cos (\tau {\mathcal A}_\varepsilon^{1/2}) - \cos (\tau ({\mathcal A}^0)^{1/2}) \bigr\|_{H^{3/2}(\mathbb{R}^d)\to L_2(\mathbb{R}^d)} \leqslant C(1+ |\tau|)^{1/2} \varepsilon,
\\
\label{usilenie_est_sin_A_eps}
\bigl\| {\mathcal A}_\varepsilon^{-1/2} \sin (\tau {\mathcal A}_\varepsilon^{1/2}) -
({\mathcal A}^0)^{-1/2} \sin (\tau ({\mathcal A}^0)^{1/2}) \bigr\|_{H^{1/2}(\mathbb{R}^d)\to L_2(\mathbb{R}^d)}
\leqslant C(1+ |\tau|)^{1/2} \varepsilon,
\\
\label{usilenie_est_sin_A_eps2}
\bigl\| {\mathcal A}_\varepsilon^{-1/2} \sin (\tau {\mathcal A}_\varepsilon^{1/2}) -
({\mathcal A}^0)^{-1/2} \sin (\tau ({\mathcal A}^0)^{1/2}) - \varepsilon K(\varepsilon;\tau)
\bigr\|_{H^{3/2}(\mathbb{R}^d)\to H^1(\mathbb{R}^d)} \leqslant C(1+ |\tau|)^{1/2} \varepsilon.
\end{gather}
The additional assumptions are formulated in terms of the spectral characteristics of the operator
${\mathcal A}= b(\mathbf{D})^* g(\mathbf{x}) b(\mathbf{D})$ at the bottom of the spectrum.
Similar results for the exponential $e^{-i \tau {\mathcal A}_\varepsilon}$ were previously
obtained in \cite{Su3, Su4, D}.
\subsection{Main results}
In the present paper, we apply the results of \cite{BSu4, M1,M2,DSu2, DSu4} to the \textit{model operator of electrodynamics} acting in $L_2({\mathbb R}^3;{\mathbb C}^3)$ and given by the expression
\begin{equation}
\label{L_eps_intr}
{\mathcal L}_\varepsilon = \mu_0^{-1/2} \operatorname{curl} \eta(\mathbf{x}/\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2} - \mu_0^{1/2} \nabla \nu(\mathbf{x}/\varepsilon) \operatorname{div} \mu_0^{1/2}, \quad \varepsilon >0.
\end{equation}
Here $\mu_0$ is a constant positive matrix, $\eta(\mathbf{x})$ is a matrix-valued function, and $\nu(\mathbf{x})$ is a real-valued function. It is assumed that $\eta(\mathbf{x})$ and $\nu(\mathbf{x})$ are periodic, bounded and positive definite. The operator \eqref{L_eps_intr} is a particular case of the operator \eqref{A_eps} with $m=4$ and $n=3$. The specific feature is that the operator ${\mathcal L}_\varepsilon$ is reduced by the orthogonal decomposition of $L_2({\mathbb R}^3;{\mathbb C}^3)$ into the divergence-free and the gradient subspaces (the Weyl decomposition). We are mainly interested in the divergence-free part ${\mathcal L}_{J,\varepsilon}$
of the operator ${\mathcal L}_\varepsilon$. For ${\mathcal L}_{J,\varepsilon}$ we obtain estimates of the form \eqref{est_cos_A_eps}--\eqref{est_sin_A_eps2}. We show that in the general case these results cannot be improved. On the other hand, under some additional assumptions we obtain estimates of the form \eqref{usilenie_est_cos_A_eps}--\eqref{usilenie_est_sin_A_eps2}. Some examples of both situations are discussed.
The results are applied to homogenization of the Cauchy problem for the nonstationary Maxwell system in the case where the magnetic permeability is equal to $\mu_0$ and the dielectric permittivity is given by the matrix $\eta(\mathbf{x}/\varepsilon)$.
Some partial results in this direction were obtained in the previous paper \cite{DSu3} by the authors (in the case where $\mu_0 = {\mathbf 1}$).
The method is based on the scaling transformation, the Floquet--Bloch theory, and the analytic perturbation theory. An important role is played by the spectral characteristics of the operator ${\mathcal L}$ (given by \eqref{L_eps_intr} with $\varepsilon=1$) at the bottom of the spectrum.
We also rely on the papers \cite{Su2, BSu-FAA, Su-AA18} about homogenization of the stationary periodic Maxwell system.
\subsection{Plan of the paper}
In \S 1, we introduce the operator $\mathcal L$ acting in $L_2({\mathbb R}^3; {\mathbb C}^3)$; describe
its reduction by the Weyl decomposition; describe the expansion of $\mathcal L$ in the direct integral of the operators ${\mathcal L}(\mathbf{k})$ acting in $L_2({\Omega}; {\mathbb C}^3)$ (where $\Omega$ is the cell of the lattice $\Gamma$) and depending on the parameter ${\mathbf k} \in {\mathbb R}^3$ (the quasimomentum). In \S 2, the effective characteristics of the operator $\mathcal L$ are introduced. In \S 3, main results of the paper on homogenization of the operators ${\mathcal L}_\varepsilon$ and ${\mathcal L}_{J,\varepsilon}$ are obtained.
In \S 4, we apply the results to homogenization of the solutions of the Cauchy problem for the nonstationary Maxwell system.
\subsection{Notation}
Let $\mathfrak{H}$ and $\mathfrak{H}_*$ be complex separable Hilbert spaces. By $\Vert \cdot \Vert _{\mathfrak{H}}$ we denote the norm in $\mathfrak{H}$; the symbol $\Vert \cdot \Vert _{\mathfrak{H}\rightarrow \mathfrak{H}_*}$ denotes the norm of a linear continuous operator acting from $\mathfrak{H}$ to $\mathfrak{H}_*$.
The inner product and the norm in $\mathbb{C}^n$ are denoted by $\langle \cdot ,\cdot \rangle$ and $\vert \cdot \vert$, respectively, $\mathbf{1}_n = \mathbf{1}$ is the unit $(n\times n)$-matrix.
If $a$ is a matrix of size $n\times n$, then $\vert a\vert$ stands for the norm of $a$ viewed as
an operator in $\mathbb{C}^n$.
We denote $\mathbf{x}=(x_1,x_2,x_3)\in \mathbb{R}^3$, $iD_j=\partial /\partial x_j$, $j=1,2,3$, $\mathbf{D}=-i\nabla =(D_1,D_2,D_3)$.
The class $L_2$ of $\mathbb{C}^n$-valued functions in a domain $\mathcal{O}\subset \mathbb{R}^d$ is denoted by $L_2(\mathcal{O};\mathbb{C}^n)$. The Sobolev classes of $\mathbb{C}^n$-valued functions in a domain $\mathcal{O}$ are denoted by $H^s(\mathcal{O};\mathbb{C}^n)$. For $n=1$, we write simply $L_2(\mathcal{O})$, $H^s(\mathcal{O})$, but sometimes we use such simple notation also for the spaces of vector-valued or matrix-valued functions.
\subsection{Acknowledgement} M.~A.~Dorodnyi is a Young Russian Mathematics award winner and would like to thank its sponsors and jury.
\section{The model second order operator}
\label{Section Preliminaries}
\setcounter{section}{1}
\setcounter{equation}{0}
\subsection{Lattices. The Gelfand transformation}\label{Subsection Lattices}
Let $\Gamma$ be a lattice in $\mathbb{R}^3$ generated by the basis $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$:
\begin{equation*}
\Gamma=\biggl\{ \mathbf{a}\in \mathbb{R}^3 \,:\ \mathbf{a}=\sum \limits _{j=1}^3 q_j \mathbf{a}_j,\ q _j\in\mathbb{Z}\biggr\}.
\end{equation*}
Let $\Omega$ be the elementary cell of the lattice $\Gamma$:
\begin{equation*}
\Omega = \biggl \{ \mathbf{x}\in\mathbb{R}^3 \ :\, \mathbf{x}=\sum \limits _{j=1}^3 \xi_j\mathbf{a}_j,\;
0 <\xi_j < 1
\biggr \}.
\end{equation*}
The basis $\mathbf{b}_1, \mathbf{b}_2,\mathbf{b}_3\in \mathbb{R}^3$ dual to $\mathbf{a}_1, \mathbf{a}_2,\mathbf{a}_3$ is defined by the relations $\langle \mathbf{b}_j,\mathbf{a}_i\rangle =2\pi \delta _{ji}$. This basis generates the lattice $\widetilde{\Gamma}$ dual to ${\Gamma}$. Let $\widetilde{\Omega}$ be the central Brillouin zone of the lattice $\widetilde{\Gamma}$:
\begin{equation*}
\widetilde{\Omega}=\bigl \{ \mathbf{k}\in\mathbb{R}^3:\ \vert \mathbf{k}\vert <\vert \mathbf{k}-\mathbf{b}\vert ,\ 0\neq \mathbf{b}\in \widetilde{\Gamma}\bigr\}.
\end{equation*}
Let $r_0$ be the radius of the ball inscribed in $\mathrm{clos}\,\widetilde{\Omega}$, i.~e.,
$2r_0=\min_{0\ne {\mathbf b} \in \widetilde{\Gamma}} |{\mathbf b}|$.
For $\Gamma$-periodic measurable matrix-valued functions, we use the following notation:
$$
f^\varepsilon (\mathbf{x}):=f(\mathbf{x}/\varepsilon), \ \varepsilon >0;\quad
\overline{f}:=\vert \Omega\vert ^{-1}\int _\Omega f(\mathbf{x})\,d\mathbf{x},\quad
\underline{f}:=\left(\vert \Omega\vert ^{-1}\int _\Omega f(\mathbf{x})^{-1}\,d\mathbf{x}\right)^{-1}.
$$
In the definition of $\overline{f}$ it is assumed that $f\in L_{1,\mathrm{loc}}(\mathbb{R}^3)$, and in the definition of
$\underline{f}$ it is assumed that $f(\mathbf{x})$ is a square nondegenerate matrix such that $f^{-1}\in L_{1,\mathrm{loc}}(\mathbb{R}^3)$.
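For a positive scalar $f$, the quantities $\overline{f}$ and $\underline{f}$ are simply the arithmetic and harmonic means over the cell, and one always has $\underline{f} \leqslant \overline{f}$. A quick numerical illustration of our own: for $f(x) = 5/2 + 2\sin 2\pi x$ the harmonic mean can be computed in closed form and equals $3/2$.

```python
import numpy as np

# Equispaced samples over one period of a positive periodic scalar f:
x = np.linspace(0.0, 1.0, 200000, endpoint=False)
f = 2.5 + 2.0 * np.sin(2 * np.pi * x)

f_bar = f.mean()                  # arithmetic mean  \overline{f}
f_under = 1.0 / (1.0 / f).mean()  # harmonic mean    \underline{f}
```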
Let $\widetilde{H}^1(\Omega;\mathbb{C}^n)$ be the subspace of
$H^1(\Omega;\mathbb{C}^n)$ consisting of functions whose $\Gamma$-periodic extension to $\mathbb{R}^3$ belongs to $H^1_{\textnormal{loc}}(\mathbb{R}^3;\mathbb{C}^n)$.
Now, we introduce the \textit{Gelfand transformation} $\mathcal{U}$. First, $\mathcal{U}$ is defined on the Schwartz class by the following relation:
\begin{equation*}
\begin{split}
(\mathcal{U}\mathbf{f})(\mathbf{k}, \mathbf{x}) =
\widetilde{\mathbf{f}}(\mathbf{k}, \mathbf{x}) :=
|\widetilde{\Omega}|^{-1/2} \sum_{\mathbf{a} \in \Gamma} e^{-i \langle \mathbf{k}, \mathbf{x} + \mathbf{a} \rangle} \mathbf{f}(\mathbf{x} + \mathbf{a}),
\quad \mathbf{f} \in \mathcal{S}(\mathbb{R}^3; \mathbb{C}^3), \ \mathbf{x} \in \Omega, \ \mathbf{k} \in \widetilde{\Omega}.
\end{split}
\end{equation*}
Next, it is extended to a unitary transformation
\begin{equation*}
\mathcal{U}: L_2(\mathbb{R}^3; \mathbb{C}^3) \to \int_{\widetilde{\Omega}} \oplus L_2(\Omega; \mathbb{C}^3) \, d \mathbf{k} =: \mathcal{K}.
\end{equation*}
The relation ${\mathbf f} \in H^1(\mathbb{R}^3;\mathbb{C}^3)$ is equivalent to the fact that $\widetilde{\mathbf{f}}(\mathbf{k}, \cdot) \in \widetilde{H}^1(\Omega;\mathbb{C}^3)$ for almost all \hbox{$\mathbf{k} \in \widetilde{\Omega}$}~and
$$
\int_{\widetilde{\Omega}} \int_\Omega \left( |(\mathbf{D} + \mathbf{k}) \widetilde{\mathbf{f}}(\mathbf{k}, \mathbf{x}) |^2 +
| \widetilde{\mathbf{f}}(\mathbf{k}, \mathbf{x}) |^2 \right) \, d\mathbf{x}\, d \mathbf{k} < \infty.
$$
Under the transform $\mathcal{U}$, the operator in $L_2(\mathbb{R}^3;\mathbb{C}^3)$ acting as multiplication by a bounded periodic matrix-valued function
turns into multiplication by the same function on the fibers of the direct integral $\mathcal K$.
The action of the first order DO $b(\mathbf{D})$ on
${\mathbf f} \in H^1(\mathbb{R}^3;\mathbb{C}^3)$ turns into the action of the operators
$b({\mathbf D}+{\mathbf k})$ on $\widetilde{\mathbf f}(\mathbf{k},\cdot) \in \widetilde{H}^1(\Omega;\mathbb{C}^3)$ on the fibers of the direct integral.
\subsection{The operator $\mathcal{L}$}\label{Subsection Operator L}
Let $\mu_0$ be a symmetric positive $(3 \times 3)$-matrix with real entries.
Suppose that $\eta(\mathbf{x})$ is a symmetric $(3 \times 3)$-matrix-valued function in ${\mathbb R}^3$ with real entries
and $\nu(\mathbf{x})$ is a real-valued function in ${\mathbb R}^3$. We assume that $\eta(\mathbf{x})$ and
$\nu(\mathbf{x})$ are periodic with respect to the lattice $\Gamma$ and such that
\begin{align}
\label{eta}
\eta(\mathbf{x}) &> 0; \quad \eta, \eta^{-1} \in L_\infty;\\
\label{nu}
\nu(\mathbf{x}) &> 0; \quad \nu, \nu^{-1} \in L_\infty.
\end{align}
In $L_2(\mathbb{R}^3; \mathbb{C}^3)$, we consider the operator $\mathcal{L}$ formally given by the differential expression
\begin{equation}
\label{L}
\mathcal{L} = \mu_0^{-1/2}\operatorname{curl} \eta(\mathbf{x})^{-1} \operatorname{curl} \mu_0^{-1/2}
- \mu_0^{1/2} \nabla \nu(\mathbf{x}) \operatorname{div} \mu_0^{1/2}.
\end{equation}
The operator~(\ref{L}) can be represented as $\mathcal{L} = b(\mathbf{D})^* g(\mathbf{x}) b(\mathbf{D})$, where
\begin{equation*}
b(\mathbf{D}) = \begin{pmatrix}
- i \operatorname{curl} \mu_0^{-1/2} \\
- i \operatorname{div} \mu_0^{1/2}
\end{pmatrix}, \quad g(\mathbf{x}) = \begin{pmatrix}
\eta(\mathbf{x})^{-1} & 0 \\
0 & \nu(\mathbf{x})\end{pmatrix}.
\end{equation*}
The symbol $b(\boldsymbol{\xi})$ of the operator $b({\mathbf D})$ is given by
\begin{equation}
\label{symbol}
b(\boldsymbol{\xi}) = \begin{pmatrix} r(\boldsymbol{\xi}) \mu_0^{-1/2} \\ \boldsymbol{\xi}^t \mu_0^{1/2}\end{pmatrix},
\quad
r(\boldsymbol{\xi}) =
\begin{pmatrix}
0 & -\xi_3 & \xi_2 \\
\xi_3 & 0 & -\xi_1 \\
-\xi_2 & \xi_1 & 0
\end{pmatrix}, \quad \boldsymbol{\xi}^t = \begin{pmatrix} \xi_1 & \xi_2 & \xi_3 \end{pmatrix}.
\end{equation}
We have
$$
\operatorname{rank} b(\boldsymbol{\xi}) =3, \quad 0 \ne \boldsymbol{\xi} \in {\mathbb R}^3.
$$
This condition is equivalent to the estimates
\begin{equation}
\label{DSu1}
\alpha_0 \mathbf{1}_3 \leqslant b(\boldsymbol{\xi})^* b(\boldsymbol{\xi}) \leqslant \alpha_1 \mathbf{1}_3,
\quad |\boldsymbol{\xi}| =1,
\end{equation}
with some positive constants $\alpha_0, \alpha_1$. It is easily seen that \eqref{DSu1}
holds with the constants
$$
\alpha_0 = \min \{ |\mu_0|^{-1}; |\mu_0^{-1}|^{-1}\}, \quad \alpha_1 = |\mu_0| + |\mu_0^{-1}|.
$$
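These algebraic facts are easy to confirm numerically. The sketch below (with a sample diagonal $\mu_0$; the function names are ours) checks that $r(\boldsymbol{\xi})$ is the matrix of the vector product, that $b(\boldsymbol{\xi})$ has rank $3$ for $\boldsymbol{\xi} \neq 0$, and that the eigenvalues of $b(\boldsymbol{\xi})^{*} b(\boldsymbol{\xi})$ on the unit sphere lie between the stated constants $\alpha_0$ and $\alpha_1$:

```python
import numpy as np

def r(xi):
    """Matrix of the vector product: r(xi) @ v == np.cross(xi, v)."""
    x1, x2, x3 = xi
    return np.array([[0.0, -x3, x2],
                     [x3, 0.0, -x1],
                     [-x2, x1, 0.0]])

def b(xi, mu0):
    """The (4 x 3) symbol b(xi) = [[r(xi) mu0^{-1/2}], [xi^t mu0^{1/2}]]."""
    w, V = np.linalg.eigh(mu0)
    mu_h = V @ np.diag(np.sqrt(w)) @ V.T
    mu_mh = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return np.vstack([r(xi) @ mu_mh, (xi @ mu_h)[None, :]])

mu0 = np.diag([1.0, 2.0, 3.0])   # a sample positive matrix
mu0_inv = np.linalg.inv(mu0)
alpha0 = min(1 / np.linalg.norm(mu0, 2), 1 / np.linalg.norm(mu0_inv, 2))
alpha1 = np.linalg.norm(mu0, 2) + np.linalg.norm(mu0_inv, 2)

rng = np.random.default_rng(3)
xis = rng.standard_normal((50, 3))
xis /= np.linalg.norm(xis, axis=1, keepdims=True)   # unit vectors
eigs = np.array([np.linalg.eigvalsh(b(xi, mu0).T @ b(xi, mu0)) for xi in xis])
```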
From \eqref{eta} and \eqref{nu} it follows that $g({\mathbf x})$ is positive definite and bounded. Obviously,
$$
\|g\|_{L_\infty} = \max \{ \|\eta^{-1}\|_{L_\infty}, \|\nu\|_{L_\infty}\}, \quad
\|g^{-1} \|_{L_\infty} = \max \{ \|\eta\|_{L_\infty}, \|\nu^{-1}\|_{L_\infty}\}.
$$
The precise definition of the operator $\mathcal{L}$ is given in terms of the quadratic form
\begin{equation*}
\begin{aligned}
&\mathfrak{l}[\mathbf{u},\mathbf{u}] :=
\int_{{\mathbb R}^3} \langle g({\mathbf x}) b({\mathbf D}) \mathbf{u}, b({\mathbf D}) \mathbf{u} \rangle \, d{\mathbf x} \\
&=\int_{\mathbb{R}^3} \left( \left\langle \eta(\mathbf{x})^{-1} \operatorname{curl} \mu_0^{-1/2} \mathbf{u}, \operatorname{curl} \mu_0^{-1/2} \mathbf{u} \right\rangle + \nu(\mathbf{x}) \bigl| \operatorname{div} \mu_0^{1/2} \mathbf{u} \bigr|^2\right) \, d\mathbf{x},
\quad \mathbf{u} \in H^1(\mathbb{R}^3; \mathbb{C}^3).
\end{aligned}
\end{equation*}
Under our assumptions,
\begin{equation}
\label{estimates}
\begin{aligned}
c_0 \| {\mathbf D} {\mathbf u} \|^2_{L_2({\mathbb R}^3)} \leqslant \mathfrak{l}[\mathbf{u},\mathbf{u}] \leqslant c_1 \| {\mathbf D} {\mathbf u}\|^2_{L_2({\mathbb R}^3)},
\quad {\mathbf u} \in H^1({\mathbb R}^3;{\mathbb C}^3),
\\
c_0 = \alpha_0 \|g^{-1}\|^{-1}_{L_\infty},\quad c_1 = \alpha_1 \|g \|_{L_\infty}.
\end{aligned}
\end{equation}
Thus, the form $\mathfrak{l}[{\mathbf u},{\mathbf u}]$ is closed and nonnegative.
By definition, $\mathcal L$ is a selfadjoint operator in $L_2({\mathbb R}^3;{\mathbb C}^3)$ generated by this form.
So, the operator $\mathcal L$ is a particular case of the operator $\mathcal A$ (see Introduction),
and we can apply the general results for this class of operators.
\subsection{The Weyl decomposition. Reduction of the operator $\mathcal{L}$}\label{Weyl}
In $L_2({\mathbb R}^3;{\mathbb C}^3)$, we introduce the \textquotedblleft gradient\textquotedblright \ subspace
\begin{equation*}
G(\mu_0) := \left\lbrace \mathbf{u} = \mu_0^{1/2} \nabla \phi \colon\ \phi \in H_{\mathrm{loc}}^1 (\mathbb{R}^3), \ \nabla \phi \in L_2(\mathbb{R}^3; \mathbb{C}^3) \right\rbrace.
\end{equation*}
The \textquotedblleft divergence-free\textquotedblright \ subspace $J(\mu_0)$ is defined as the orthogonal complement to $G(\mu_0)$.
So, we have the following Weyl decomposition
\begin{equation}
\label{Weyl_decomp}
L_2(\mathbb{R}^3; \mathbb{C}^3) = J(\mu_0) \oplus G(\mu_0).
\end{equation}
The subspace $J(\mu_0)$ consists of the functions $\mathbf{u} \in L_2(\mathbb{R}^3; \mathbb{C}^3) $ satisfying \hbox{$\operatorname{div} \mu_0^{1/2}\mathbf{u} = 0$} (in the sense of distributions).
By $\mathcal{P}(\mu_0)$ we denote the orthogonal projection onto $J(\mu_0)$.
\begin{remark}
\label{PJ}
It is easily seen that {\rm (}see, e.~g., \cite[Chapter 7, Section 2.4]{BSu1}{\rm )}
for $s>0$ the operator $\mathcal{P}(\mu_0)$ restricted to $H^s(\mathbb{R}^3;\mathbb{C}^3)$
is the orthogonal projection of the space $H^s(\mathbb{R}^3;\mathbb{C}^3)$ onto the subspace $J^s(\mu_0) :=
J(\mu_0) \cap H^s(\mathbb{R}^3;\mathbb{C}^3)$. The operator $I -\mathcal{P}(\mu_0)$ restricted to $H^s(\mathbb{R}^3;\mathbb{C}^3)$ is the orthogonal projection of
$H^s(\mathbb{R}^3;\mathbb{C}^3)$ onto the subspace
$G^s(\mu_0) := G(\mu_0) \cap H^s(\mathbb{R}^3;\mathbb{C}^3)$.
\end{remark}
The operator \eqref{L} is reduced by the decomposition~\eqref{Weyl_decomp}: $\mathcal{L}= \mathcal{L}_J \oplus \mathcal{L}_G$. The part $\mathcal{L}_J$ acting in $J(\mu_0)$ is formally given by the differential expression
$\mu_0^{-1/2}\operatorname{curl} \eta(\mathbf{x})^{-1} \operatorname{curl} \mu_0^{-1/2}$,
and the part $\mathcal{L}_G$ acting in $G(\mu_0)$
is given by $- \mu_0^{1/2} \nabla \nu(\mathbf{x}) \operatorname{div} \mu_0^{1/2}$.
\subsection{The operators $\mathcal{L}(\mathbf{k})$}
In $L_2 (\Omega; \mathbb{C}^3)$, we consider the operator
$\mathcal{L}(\mathbf{k})$ depending on the parameter $\mathbf{k}\in {\mathbb R}^3$ (called the quasimomentum)
and formally given by
\begin{equation*}
\mathcal{L}(\mathbf{k}) =
\mu_0^{-1/2} \operatorname{curl}_\mathbf{k} \eta(\mathbf{x})^{-1} \operatorname{curl}_\mathbf{k} \mu_0^{-1/2}
- \mu_0^{1/2} \nabla_\mathbf{k} \nu(\mathbf{x}) \operatorname{div}_\mathbf{k} \mu_0^{1/2}
\end{equation*}
with periodic boundary conditions. Here
\begin{equation*}
\nabla_\mathbf{k} \phi := \nabla \phi + i \mathbf{k} \phi, \quad \operatorname{div}_\mathbf{k} \mathbf{f} := \operatorname{div} \mathbf{f} + i \, \mathbf{k} \cdot \mathbf{f}, \quad \operatorname{curl}_\mathbf{k} \mathbf{f} := \operatorname{curl} \mathbf{f} + i \, \mathbf{k} \times \mathbf{f}
\end{equation*}
($\mathbf{k} \cdot \mathbf{f}$ is the inner product and $\mathbf{k} \times \mathbf{f}$ is the vector product).
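Note that these modified operations are obtained from the usual ones by conjugation with the plane wave $e^{i \langle \mathbf{k}, \mathbf{x} \rangle}$; for instance,
\begin{equation*}
\nabla_\mathbf{k} \phi = e^{-i \langle \mathbf{k}, \mathbf{x} \rangle} \nabla \bigl( e^{i \langle \mathbf{k}, \mathbf{x} \rangle} \phi \bigr),
\quad
\operatorname{curl}_\mathbf{k} \mathbf{f} = e^{-i \langle \mathbf{k}, \mathbf{x} \rangle} \operatorname{curl} \bigl( e^{i \langle \mathbf{k}, \mathbf{x} \rangle} \mathbf{f} \bigr),
\end{equation*}
and similarly for $\operatorname{div}_\mathbf{k}$.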
Strictly speaking, $\mathcal{L}(\mathbf{k})$ is a selfadjoint operator in $L_2 (\Omega; \mathbb{C}^3)$ generated by the closed nonnegative quadratic form
\begin{equation*}
\begin{aligned}
\mathfrak{l}(\mathbf{k})[\mathbf{u}, \mathbf{u}] =
\int_{\Omega} \left\langle \eta(\mathbf{x})^{-1} \operatorname{curl}_\mathbf{k} \mu_0^{-1/2}\mathbf{u}, \operatorname{curl}_\mathbf{k} \mu_0^{-1/2}\mathbf{u} \right\rangle \, d {\mathbf x}
\\
+ \int_\Omega \nu(\mathbf{x}) \bigl| \operatorname{div}_\mathbf{k} \mu_0^{1/2}\mathbf{u} \bigr|^2 \, d\mathbf{x},
\quad \mathbf{u} \in \widetilde{H}^1(\Omega; \mathbb{C}^3).
\end{aligned}
\end{equation*}
Using the Fourier series expansion for a function $\mathbf{u}$, it is easily seen that
\begin{equation}
\label{l(k)_form_estimate}
\begin{split}
c_0 \|(\mathbf{D} + \mathbf{k}) \mathbf{u}\|_{L_2 (\Omega)}^2 \leqslant \mathfrak{l}(\mathbf{k})[\mathbf{u}, \mathbf{u}] \leqslant c_1 \|(\mathbf{D} + \mathbf{k}) \mathbf{u}\|_{L_2 (\Omega)}^2, \quad
\mathbf{u} \in \widetilde{H}^1(\Omega; \mathbb{C}^3),
\end{split}
\end{equation}
where the constants $c_0, c_1$ are the same as in \eqref{estimates}.
By the lower estimate~(\ref{l(k)_form_estimate}),
\begin{equation}
\label{c*}
\mathcal{L}(\mathbf{k}) \geqslant c_0 |\mathbf{k}|^2 I, \quad \mathbf{k} \in \widetilde{\Omega}.
\end{equation}
\subsection{Reduction of the operators ${\mathcal L}(\mathbf{k})$}
In $L_2 (\Omega; \mathbb{C}^3)$, we define the
\textquotedblleft gradient\textquotedblright \ subspace (depending on the parameter $\mathbf{k} \in \mathbb{R}^3$)
\begin{equation*}
G(\mathbf{k};\mu_0) := \{\mathbf{u} = \mu_0^{1/2} \nabla_\mathbf{k} \phi \colon\ \phi \in \widetilde{H}^1(\Omega) \}.
\end{equation*}
The \textquotedblleft divergence-free\textquotedblright \ subspace $J(\mathbf{k}; \mu_0)$ is defined as the orthogonal complement to $G(\mathbf{k};\mu_0)$:
\begin{equation}
\label{H_Weyl_decomp}
L_2 (\Omega; \mathbb{C}^3) = J(\mathbf{k};\mu_0) \oplus G(\mathbf{k}; \mu_0).
\end{equation}
The subspace $J(\mathbf{k}; \mu_0)$ consists of the functions $\mathbf{u} \in L_2 (\Omega; \mathbb{C}^3)$ satisfying $\operatorname{div}_\mathbf{k} \mu_0^{1/2}\check{\mathbf{u}} = 0$ (in the sense of distributions), where $\check{\mathbf{u}}$ is the $\Gamma$-periodic extension of a function
$\mathbf{u}$ to ${\mathbb R}^3$. Let $\mathcal{P}(\mathbf{k};\mu_0)$ be the orthogonal projection onto
$J(\mathbf{k}; \mu_0)$.
The operator $\mathcal{L}(\mathbf{k})$ is reduced by decomposition~\eqref{H_Weyl_decomp}.
The part $\mathcal{L}_J(\mathbf{k})$ acting in $J(\mathbf{k};\mu_0)$ is formally given by the expression
$\mu_0^{-1/2}\operatorname{curl}_{\mathbf{k}} \eta(\mathbf{x})^{-1} \operatorname{curl}_{\mathbf{k}} \mu_0^{-1/2}$ (with periodic boundary conditions), and the part $\mathcal{L}_G(\mathbf{k})$ acting in
$G(\mathbf{k}; \mu_0)$ is given by
$- \mu_0^{1/2}\nabla_{\mathbf{k}} \nu(\mathbf{x}) \operatorname{div}_{\mathbf{k}} \mu_0^{1/2}$.
\subsection{Direct integral expansion for the operator $\mathcal{L}$}
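Recall that the Gelfand transformation $\mathcal{U}$ is defined initially on functions of the Schwartz class by (in the standard normalization; see, e.~g., \cite{BSu1})
\begin{equation*}
(\mathcal{U} \mathbf{v})(\mathbf{k}, \mathbf{x}) = |\widetilde{\Omega}|^{-1/2} \sum_{\mathbf{a} \in \Gamma} e^{-i \langle \mathbf{k}, \mathbf{x} + \mathbf{a} \rangle} \mathbf{v}(\mathbf{x} + \mathbf{a}), \quad \mathbf{x} \in \Omega, \ \mathbf{k} \in \widetilde{\Omega},
\end{equation*}
and extends by continuity to a unitary mapping of $L_2(\mathbb{R}^3; \mathbb{C}^3)$ onto $\mathcal{K} := \int_{\widetilde{\Omega}} \oplus L_2(\Omega; \mathbb{C}^3) \, d\mathbf{k}$.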
Under the Gelfand transformation $\mathcal{U}$, the operator $\mathcal{L}$ expands in the direct integral of the operators $\mathcal{L} (\mathbf{k})$:
\begin{equation*}
\mathcal{U} \mathcal{L} \mathcal{U}^{-1} = \int_{\widetilde \Omega} \oplus \mathcal{L} (\mathbf{k}) \, d \mathbf{k}.
\end{equation*}
This means the following. Let $\mathbf{v} \in H^1(\mathbb{R}^3; \mathbb{C}^3)$. Then
\begin{align}
\label{Gelfand_indetail_1}
\widetilde{\mathbf{v}}(\mathbf{k}, \cdot) &\in \widetilde{H}^1(\Omega; \mathbb{C}^3) \quad \text{for almost all} \; \mathbf{k} \in \widetilde \Omega, \\
\label{Gelfand_indetail_2}
\mathfrak{l}[\mathbf{v}, \mathbf{v}] &= \int_{\widetilde{\Omega}} \mathfrak{l}(\mathbf{k}) [\widetilde{\mathbf{v}}(\mathbf{k}, \cdot), \widetilde{\mathbf{v}}(\mathbf{k}, \cdot)] \, d \mathbf{k} .
\end{align}
Conversely, if $\widetilde{\mathbf{v}} \in \mathcal{K}$ satisfies~\eqref{Gelfand_indetail_1} and the
integral in~\eqref{Gelfand_indetail_2} is finite, then $\mathbf{v} \in H^1(\mathbb{R}^3; \mathbb{C}^3)$
and~\eqref{Gelfand_indetail_2} holds.
Under the Gelfand transformation, the orthogonal projection $\mathcal{P}(\mu_0)$ expands in the direct integral of the orthogonal projections $\mathcal{P}(\mathbf{k}; \mu_0)$; see \cite{Su2}. Therefore, the operator $\mathcal{L} \mathcal{P}(\mu_0) = \mathcal{L}_J \oplus \mathbf{0}_{G(\mu_0)}$ expands in the direct integral of the operators $\mathcal{L}(\mathbf{k}) \mathcal{P}(\mathbf{k};\mu_0) = \mathcal{L}_J (\mathbf{k}) \oplus \mathbf{0}_{G(\mathbf{k};\mu_0)}$:
\begin{equation*}
\mathcal{U} \mathcal{L} \mathcal{P}(\mu_0) \mathcal{U}^{-1} = \int_{\widetilde{\Omega}} \oplus \mathcal{L}(\mathbf{k}) \mathcal{P}(\mathbf{k};\mu_0) \, d \mathbf{k}.
\end{equation*}
\section{Effective characteristics}
\subsection{The analytic branches of eigenvalues and eigenvectors}
According to \cite{BSu1}, we put
$$
\mathbf{k} = t \boldsymbol{\theta},\quad t= |\mathbf{k}|,\quad \boldsymbol{\theta}\in {\mathbb S}^2,
$$
and denote $\mathcal{L} (\mathbf{k})= \mathcal{L}(t \boldsymbol{\theta})=: L(t ;\boldsymbol{\theta})$.
The operator family $L(t ;\boldsymbol{\theta})$ depends analytically on the one-dimensional parameter $t$
and has discrete spectrum (since $\mathcal{L} (\mathbf{k})$ is an elliptic operator in a bounded domain).
We can apply analytic perturbation theory (see \cite{K}).
For $t=0$ the \textquotedblleft unperturbed\textquotedblright \ operator $\mathcal{L}(0)$ has an isolated eigenvalue $\lambda_0=0$ of multiplicity three.
The corresponding eigenspace consists of constant vector-valued functions:
\begin{equation}
\label{frakN}
\mathfrak{N} := \operatorname{Ker} \mathcal{L} (0) = \left\lbrace \mathbf{u} \in L_2(\Omega;\mathbb{C}^3)
\colon\ \mathbf{u} = \mathbf{c} \in \mathbb{C}^3 \right\rbrace.
\end{equation}
Let $P$ be the orthogonal projection of $L_2(\Omega;\mathbb{C}^3)$ onto the subspace $\mathfrak{N}$:
\begin{equation*}
P \mathbf{u} = |\Omega|^{-1} \int_\Omega \mathbf{u}(\mathbf{x}) \, d\mathbf{x}.
\end{equation*}
We put
\begin{equation*}
\delta:= \frac{r_0^2}{4} \alpha_0 \| g^{-1}\|^{-1}_{L_\infty},
\quad
t^0 := \frac{r_0}{2} \alpha_0^{1/2} \alpha_1^{-1/2} \|g\|^{-1/2}_{L_\infty} \| g^{-1}\|^{-1/2}_{L_\infty}.
\end{equation*}
As was shown in \cite{BSu1}, for $t\leqslant t^0$ the operator $L(t ;\boldsymbol{\theta})$ has exactly three eigenvalues
(counted with multiplicities)
$\lambda_l(t ;\boldsymbol{\theta})$, $l=1,2,3,$ belonging to the interval $[0,\delta]$,
while the interval $(\delta, 3\delta)$ is free of the spectrum. By
${\mathfrak F}(\mathbf{k})={\mathfrak F}(t ;\boldsymbol{\theta})$ we denote the eigenspace of the operator
$L(t ;\boldsymbol{\theta})$ for the interval $[0,\delta]$.
According to the analytic perturbation theory, for $t \leqslant t^0$ the eigenvalues
$\lambda_l(t ;\boldsymbol{\theta})$, $l=1,2,3,$ can be enumerated in such a way that they
are real-analytic functions of $t$
(for each fixed $\boldsymbol{\theta} \in \mathbb{S}^2$) and the corresponding eigenvectors
$\boldsymbol{\varphi}_l(t;\boldsymbol{\theta})$, $l = 1,2,3$, which are orthonormal in $L_2(\Omega;\mathbb{C}^3)$,
are also real-analytic in $t$. Thus,
\begin{equation*}
L(t;\boldsymbol{\theta}) \boldsymbol{\varphi}_l(t;\boldsymbol{\theta}) = \lambda_l(t; \boldsymbol{\theta}) \boldsymbol{\varphi}_l(t;\boldsymbol{\theta}), \quad l = 1,2,3, \quad 0 \leqslant t \leqslant t^0,
\end{equation*}
and the set $\boldsymbol{\varphi}_l(t;\boldsymbol{\theta})$, $l = 1,2,3$, forms an orthonormal basis in the subspace
${\mathfrak F}(t ;\boldsymbol{\theta})$.
For sufficiently small $t_* = t_*(\boldsymbol{\theta})$ with $0 < t_* \leqslant t^0$, and for $t \leqslant t_*(\boldsymbol{\theta})$, we have
the following convergent power series expansions:
\begin{align}
\label{eigenvalues_series}
\lambda_l(t; \boldsymbol{\theta}) &= \gamma_l(\boldsymbol{\theta})t^2 + \mu_l(\boldsymbol{\theta})t^3 + \ldots, \qquad l = 1,2,3, \\
\label{eigenvectors_series}
\boldsymbol{\varphi}_l(t;\boldsymbol{\theta}) &= \boldsymbol{\omega}_l(\boldsymbol{\theta}) + t \boldsymbol{\psi}_l(\boldsymbol{\theta}) + \ldots, \qquad l = 1,2,3.
\end{align}
The vectors $\boldsymbol{\omega}_l(\boldsymbol{\theta})$, $l = 1,2,3$, form an orthonormal basis
in the subspace $\mathfrak{N}$.
By \eqref{c*}, \hbox{$\gamma_l(\boldsymbol{\theta}) \geqslant c_0 >0$}; in general, the coefficients $\mu_l(\boldsymbol{\theta}) \in \mathbb{R}$ may be nonzero.
The coefficients of the power series expansions \eqref{eigenvalues_series}, \eqref{eigenvectors_series} are called the \textit{threshold characteristics} of the operator $\mathcal{L}$ at the bottom of the spectrum.
\subsection{The spectral germ. The effective matrix\label{sec_effective}}
The key role is played by the \textit{spectral germ} $S (\boldsymbol{\theta})$ of the operator
$L(t; \boldsymbol{\theta})$; see \cite{BSu1}.
Let us give the spectral definition of the germ: $S (\boldsymbol{\theta})$ \textit{is a selfadjoint operator
in the space $\mathfrak{N}$ such that the numbers $\gamma_l(\boldsymbol{\theta})$ and the elements
$\boldsymbol{\omega}_l(\boldsymbol{\theta})$ are its eigenvalues and eigenvectors}:
\begin{equation*}
S(\boldsymbol{\theta})\boldsymbol{\omega}_l(\boldsymbol{\theta})=\gamma_l(\boldsymbol{\theta})
\boldsymbol{\omega}_l(\boldsymbol{\theta}),\quad l=1,2,3.
\end{equation*}
In \cite{BSu1}, the following invariant representation for the germ was obtained:
\begin{equation}
\label{germ}
S (\boldsymbol{\theta}) = b(\boldsymbol{\theta})^* g^0 b(\boldsymbol{\theta}), \quad \boldsymbol{\theta} \in \mathbb{S}^{2},
\end{equation}
where $b(\boldsymbol{\theta})$~is the symbol of the operator $b(\mathbf{D})$, and $g^0$~is the so-called effective matrix. The constant positive ($4 \times 4$)-matrix $g^0$ is defined as follows. Let $\Lambda \in \widetilde{H}^1 (\Omega)$~be the ($3 \times 4$)-matrix-valued function which is the $\Gamma$-periodic solution of the problem
\begin{equation}
\label{equation_for_Lambda}
b(\mathbf{D})^* g(\mathbf{x}) (b(\mathbf{D}) \Lambda (\mathbf{x}) + \mathbf{1}_4) = 0, \quad \int_{\Omega} \Lambda (\mathbf{x}) \, d \mathbf{x} = 0.
\end{equation}
The effective matrix $g^0$ is defined in terms of the matrix $\Lambda (\mathbf{x})$:
\begin{gather}
\label{g_tilde}
\widetilde{g} (\mathbf{x}) := g(\mathbf{x})( b(\mathbf{D}) \Lambda (\mathbf{x}) + \mathbf{1}_4), \\
\label{g0}
g^0 = | \Omega |^{-1} \int_{\Omega} \widetilde{g} (\mathbf{x}) \, d \mathbf{x}.
\end{gather}
It turns out that the matrix $g^0$ is positive definite.
The effective characteristics for the operator
$L(t; \boldsymbol{\theta})$ were calculated in \cite{BSu-FAA} and \cite{Su-AA18}.
First, we introduce the effective matrix $\eta^0$~for the scalar elliptic operator $- \operatorname{div} \eta(\mathbf{x}) \nabla = \mathbf{D}^* \eta(\mathbf{x}) \mathbf{D}$.
Recall the definition of $\eta^0$. Let $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$ be the standard orthonormal basis in $\mathbb{R}^3$. Let $\Phi_j(\mathbf{x})$ be the $\Gamma$-periodic solution of the problem
\begin{equation}
\label{2.8a}
\operatorname{div} \eta(\mathbf{x}) (\nabla \Phi_j(\mathbf{x}) + \mathbf{e}_j)=0,
\quad \int_{\Omega} \Phi_j (\mathbf{x}) \, d \mathbf{x} = 0.
\end{equation}
Consider the matrix $\Sigma_{\circ}({\mathbf x})$ with the columns $\nabla \Phi_j({\mathbf x})$, $j=1,2,3$. We put
$$
\widetilde{\eta} (\mathbf{x}):= \eta({\mathbf x}) (\Sigma_\circ({\mathbf x}) + {\mathbf 1}_3).
$$
Then
$$
\eta^0 = | \Omega |^{-1} \int_{\Omega} \widetilde{\eta} (\mathbf{x}) \, d \mathbf{x}.
$$
\begin{remark}
\label{eta0_properties}
Note that the matrix $\eta^0$ has the following properties\emph{:}
\noindent $1^\circ$. We have $\underline{\eta} \leqslant \eta^0 \leqslant \overline{\eta}$ {\rm (}these estimates are known as the Voigt--Reuss bracketing{\rm )}.
It follows that $|\eta^0|\leqslant \|\eta\|_{L_\infty}$, $|(\eta^0)^{-1}| \leqslant \|\eta^{-1}\|_{L_\infty}$.
\noindent $2^\circ$. The identity $\eta^0 = \overline{\eta}$ is equivalent to the fact that the columns
$\boldsymbol{\eta}_j ({\mathbf x})$, $j=1,2,3$, of the matrix $\eta({\mathbf x})$ are divergence-free{\rm :} $\operatorname{div}\, \boldsymbol{\eta}_j ({\mathbf x})=0$.
In this case, the solution of problem \eqref{2.8a} is trivial{\rm :} \hbox{$\Phi_j(\mathbf{x})=0$}, $j=1,2,3$.
\noindent $3^\circ$. The identity $\eta^0 = \underline{\eta}$ is equivalent to the
fact that the columns
$\mathbf{l}_j ({\mathbf x})$, $j=1,2,3$, of the matrix
$\eta({\mathbf x})^{-1}$ can be represented as
$\mathbf{l}_j ({\mathbf x}) = \nabla \phi_j(\mathbf{x}) + \mathbf{l}_j^0$
for some $\phi_j \in \widetilde{H}^1(\Omega)$ and $\mathbf{l}_j^0 \in \mathbb{R}^3$.
In this case we have $\widetilde{\eta}(\mathbf{x}) = \eta^0 = \underline{\eta}$.
\end{remark}
We put ${\mathbf c}_j= (\eta^0)^{-1} {\mathbf e}_j$, $j=1,2,3$.
Let $\widetilde{\Phi}_j(\mathbf{x})$ be the $\Gamma$-periodic solution of the problem
\begin{equation}
\label{2.8aaa}
\operatorname{div} \eta(\mathbf{x}) (\nabla \widetilde{\Phi}_j(\mathbf{x}) + \mathbf{c}_j)=0,
\quad \int_{\Omega} \widetilde{\Phi}_j (\mathbf{x}) \, d \mathbf{x} = 0.
\end{equation}
Let ${\mathbf p}_j \in \widetilde{H}^1(\Omega;{\mathbb C}^3)$ (where $j=1,2,3$) be the $\Gamma$-periodic solution of the problem
$$
\begin{aligned}
\operatorname{curl} (\mu_0^{-1} \operatorname{curl} {\mathbf p}_j({\mathbf x})) = {\eta}({\mathbf x}) (\nabla \widetilde{\Phi}_j({\mathbf x}) + {\mathbf c}_j) -{\mathbf e}_j,
\\
\operatorname{div} {\mathbf p}_j( {\mathbf x}) =0, \quad \int_\Omega {\mathbf p}_j({\mathbf x}) \,d {\mathbf x} =0.
\end{aligned}
$$
Let $\rho \in \widetilde{H}^1(\Omega)$ be the $\Gamma$-periodic solution of the problem
$$
- \operatorname{div} (\mu_0 \nabla \rho({\mathbf x})) = 1 - \underline{\nu} \, \nu({\mathbf x})^{-1}, \quad \int_\Omega \rho({\mathbf x}) \,d {\mathbf x} =0.
$$
Then the $(3 \times 4)$-matrix $\Lambda({\mathbf x})$ takes the form
\begin{equation*}
\Lambda({\mathbf x}) = i \begin{pmatrix}
\mu_0^{-1/2} \Psi({\mathbf x}) & \mu_0^{1/2} \nabla \rho({\mathbf x})
\end{pmatrix},
\end{equation*}
where $\Psi({\mathbf x})$ is the $(3 \times 3)$-matrix with the columns $\operatorname{curl} {\mathbf p}_j({\mathbf x})$, $j=1,2,3$.
Next, the matrix $\widetilde{g} ({\mathbf x}) = g({\mathbf x}) (b({\mathbf D}) \Lambda({\mathbf x}) + {\mathbf 1}_4)$ is given by
$$
\widetilde{g}({\mathbf x}) = \begin{pmatrix} (\eta^0)^{-1} + \Sigma({\mathbf x}) & 0 \\ 0 & \underline{\nu} \end{pmatrix},
$$
where $\Sigma({\mathbf x})$ is the matrix with the columns $\nabla \widetilde{\Phi}_j({\mathbf x})$, $j=1,2,3$. Note that
$\Sigma({\mathbf x}) = \Sigma_\circ({\mathbf x}) (\eta^0)^{-1}$.
According to \eqref{g0}, we obtain
\begin{equation}
\label{g00}
g^0 = \begin{pmatrix}
(\eta^0)^{-1} & 0 \\
0 & \underline{\nu}
\end{pmatrix}.
\end{equation}
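Indeed, since each $\widetilde{\Phi}_j \in \widetilde{H}^1(\Omega)$ is $\Gamma$-periodic, the columns of $\Sigma(\mathbf{x})$ have zero mean value over the cell:
\begin{equation*}
|\Omega|^{-1} \int_\Omega \nabla \widetilde{\Phi}_j(\mathbf{x}) \, d\mathbf{x} = 0, \quad j = 1,2,3,
\end{equation*}
so only the constant entries of $\widetilde{g}(\mathbf{x})$ survive the averaging in \eqref{g0}.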
By \eqref{germ} and \eqref{g00}, the germ $S(\boldsymbol{\theta})$ can be written as
\begin{equation}
\label{S_decomp}
S(\boldsymbol{\theta}) = \mu_0^{-1/2} r(\boldsymbol{\theta})^t (\eta^0)^{-1} r(\boldsymbol{\theta}) \mu_0^{-1/2} + \underline{\nu} \, \mu_0 ^{1/2} \boldsymbol{\theta} \boldsymbol{\theta}^t \mu_0^{1/2},
\end{equation}
where the symbol $r(\boldsymbol{\theta})$ is defined by \eqref{symbol}.
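Representation \eqref{S_decomp} can be checked directly on the vector $\mu_0^{1/2} \boldsymbol{\theta}$: the identity $r(\boldsymbol{\theta}) \boldsymbol{\theta} = 0$ annihilates the first term, while the second term gives
\begin{equation*}
S(\boldsymbol{\theta}) \, \mu_0^{1/2} \boldsymbol{\theta} = \underline{\nu} \, \mu_0^{1/2} \boldsymbol{\theta} \, (\boldsymbol{\theta}^t \mu_0 \boldsymbol{\theta}) = \underline{\nu} \, \langle \mu_0 \boldsymbol{\theta}, \boldsymbol{\theta} \rangle \, \mu_0^{1/2} \boldsymbol{\theta},
\end{equation*}
which agrees with the eigenvalue $\gamma_3(\boldsymbol{\theta})$ found in the next subsection.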
\subsection{Decomposition of the spectral germ}
Consider the following orthogonal decomposition of the three-dimensional space~(\ref{frakN}) depending on the parameter \hbox{$\boldsymbol{\theta} \in \mathbb{S}^2$}:
\begin{equation}
\label{N_Weyl_decomp}
\mathfrak{N} = J_{\boldsymbol{\theta}}^0 \oplus G_{\boldsymbol{\theta}}^0,
\end{equation}
where
\begin{gather*}
J_{\boldsymbol{\theta}}^0 = \{ \mu_0^{1/2} \mathbf{c} \in \mathbb{C}^3 \colon \langle \mu_0 \mathbf{c}, \boldsymbol{\theta} \rangle =0 \}, \\
G_{\boldsymbol{\theta}}^0 = \{ \mathbf{c} = \alpha \mu_0^{1/2}\boldsymbol{\theta} \colon \alpha \in \mathbb{C} \}.
\end{gather*}
Obviously, the operator $S(\boldsymbol{\theta})$ is reduced by decomposition~(\ref{N_Weyl_decomp}).
The part $S_J(\boldsymbol{\theta})$ of $S(\boldsymbol{\theta})$ in $J_{\boldsymbol{\theta}}^0$ corresponds to the first term in (\ref{S_decomp}), and the part $S_G(\boldsymbol{\theta})$ of $S(\boldsymbol{\theta})$ in $G_{\boldsymbol{\theta}}^0$ corresponds to the second term.
The operator $S(\boldsymbol{\theta})$ has a unique eigenvalue in the subspace $G_{\boldsymbol{\theta}}^0$:
\begin{equation}
\label{gamma3}
\gamma_3 (\boldsymbol{\theta}) = \underline{\nu} \langle \mu_0 \boldsymbol{\theta}, \boldsymbol{\theta} \rangle.
\end{equation}
The corresponding normed eigenvector is given by
\begin{equation}
\label{omega3}
\boldsymbol{\omega}_3(\boldsymbol{\theta}) = |\Omega|^{-1/2} \langle \mu_0 \boldsymbol{\theta}, \boldsymbol{\theta}\rangle^{-1/2} \mu_0^{1/2} \boldsymbol{\theta}.
\end{equation}
In the subspace $J_{\boldsymbol{\theta}}^0$~the germ has two eigenvalues $\gamma_1 (\boldsymbol{\theta})$ and $\gamma_2 (\boldsymbol{\theta})$ corresponding to the algebraic problem
\begin{equation}
\label{solenoid_germ_spec_probl}
r(\boldsymbol{\theta})^t (\eta^0)^{-1} r(\boldsymbol{\theta}) \mathbf{c} = \gamma \mu_0
\mathbf{c}, \quad \mu_0 \mathbf{c} \perp \boldsymbol{\theta}.
\end{equation}
We have the following simple estimates:
\begin{equation}
\label{gammaj_est1}
\begin{aligned}
&\gamma_j(\boldsymbol{\theta}) \leqslant |\mu_0^{-1}| | (\eta^0)^{-1}| \leqslant |\mu_0^{-1}| \| \eta^{-1}\|_{L_\infty},
\quad \boldsymbol{\theta} \in \mathbb{S}^2, \quad j=1,2;
\\
&\gamma_3(\boldsymbol{\theta}) \geqslant \underline{\nu} |\mu_0^{-1}|^{-1},\quad \boldsymbol{\theta} \in \mathbb{S}^2.
\end{aligned}
\end{equation}
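These bounds can be obtained as follows. For $j=1,2$, the eigenvalues of problem \eqref{solenoid_germ_spec_probl} do not exceed the Rayleigh quotient
\begin{equation*}
\frac{\langle (\eta^0)^{-1} r(\boldsymbol{\theta}) \mathbf{c}, r(\boldsymbol{\theta}) \mathbf{c} \rangle}{\langle \mu_0 \mathbf{c}, \mathbf{c} \rangle}
\leqslant \frac{|(\eta^0)^{-1}| \, |\mathbf{c}|^2}{|\mu_0^{-1}|^{-1} \, |\mathbf{c}|^2} = |\mu_0^{-1}| \, |(\eta^0)^{-1}|,
\end{equation*}
where we used $|r(\boldsymbol{\theta})\mathbf{c}| \leqslant |\mathbf{c}|$ for $|\boldsymbol{\theta}| = 1$ together with $\langle \mu_0 \mathbf{c}, \mathbf{c} \rangle \geqslant |\mu_0^{-1}|^{-1} |\mathbf{c}|^2$; the second inequality in \eqref{gammaj_est1} then follows from $|(\eta^0)^{-1}| \leqslant \|\eta^{-1}\|_{L_\infty}$ (see Remark \ref{eta0_properties}$(1^\circ)$). The lower bound for $\gamma_3(\boldsymbol{\theta})$ follows from \eqref{gamma3} and $\langle \mu_0 \boldsymbol{\theta}, \boldsymbol{\theta} \rangle \geqslant |\mu_0^{-1}|^{-1}$.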
\begin{remark}
As was mentioned in \cite[Remark 4.5]{Su2}, we can always choose the analytic branches of eigenvalues and eigenvectors of the operator $L(t; \boldsymbol{\theta})$, $t \in [0, t^0]$, in such a way that one of the eigenvectors \emph{(}let it be $\boldsymbol{\varphi}_3 (t; \boldsymbol{\theta})$\emph{)} belongs to the \textquotedblleft gradient\textquotedblright \ subspace $G(t\boldsymbol{\theta}; \mu_0)$ for \hbox{$t \ne 0$}, and then \emph{(}automatically\emph{)} the other two eigenvectors $\boldsymbol{\varphi}_1 (t; \boldsymbol{\theta})$, $\boldsymbol{\varphi}_2 (t; \boldsymbol{\theta})$ belong to the \textquotedblleft divergence-free\textquotedblright \ subspace $J(t \boldsymbol{\theta};\mu_0)$. The coefficient $\gamma_3(\boldsymbol{\theta})$ in
expansion~\emph{(\ref{eigenvalues_series})} for $\lambda_3 (t; \boldsymbol{\theta})$ is the eigenvalue of the part of the germ $S(\boldsymbol{\theta})$ in the subspace $G_{\boldsymbol{\theta}}^0$.
The \textquotedblleft embryo\textquotedblright \ $\boldsymbol{\omega}_3(\boldsymbol{\theta})$ in
expansion~\emph{(\ref{eigenvectors_series})} for $\boldsymbol{\varphi}_3 (t; \boldsymbol{\theta})$ is given by
\eqref{omega3}.
The coefficients $\gamma_1(\boldsymbol{\theta})$ and $\gamma_2(\boldsymbol{\theta})$ in
expansions~\emph{(\ref{eigenvalues_series})} for $\lambda_1 (t; \boldsymbol{\theta})$, $\lambda_2 (t; \boldsymbol{\theta})$ are eigenvalues of
$S_J(\boldsymbol{\theta})$ and correspond to the algebraic problem~\emph{(\ref{solenoid_germ_spec_probl})}. The \textquotedblleft embryos\textquotedblright \ $\boldsymbol{\omega}_1 (\boldsymbol{\theta})$ and
$\boldsymbol{\omega}_2 (\boldsymbol{\theta})$ in expansions~\emph{(\ref{eigenvectors_series})} for $\boldsymbol{\varphi}_1 (t; \boldsymbol{\theta})$ and $\boldsymbol{\varphi}_2 (t; \boldsymbol{\theta})$ belong to $J_{\boldsymbol{\theta}}^0$ and are the eigenvectors of problem~\eqref{solenoid_germ_spec_probl}.
If $\gamma_1(\boldsymbol{\theta}) \ne \gamma_2(\boldsymbol{\theta})$, then $\boldsymbol{\omega}_1 (\boldsymbol{\theta})$ and $\boldsymbol{\omega}_2 (\boldsymbol{\theta})$ are defined uniquely \emph{(}up to phase factors\emph{)}.
For $t=0$ all three eigenvectors belong to the \textquotedblleft divergence-free\textquotedblright \ subspace $J(0;\mu_0)$: $\boldsymbol{\varphi}_l (0;\boldsymbol{\theta}) = \boldsymbol{\omega}_l(\boldsymbol{\theta}) \in \mathfrak{N}$, $l = 1,2,3$. Note also that, if $\gamma_1(\boldsymbol{\theta}) = \gamma_2(\boldsymbol{\theta})$, then the knowledge of the germ $S(\boldsymbol{\theta})$ is not sufficient to determine the \textquotedblleft embryos\textquotedblright \ $\boldsymbol{\omega}_1 (\boldsymbol{\theta})$, $\boldsymbol{\omega}_2 (\boldsymbol{\theta})$.
\end{remark}
\subsection{The operator $N(\boldsymbol{\theta})$}
We also need the operator $N(\boldsymbol{\theta})$ acting in the space $\mathfrak{N}$ and defined in terms of the coefficients of the power series expansions \eqref{eigenvalues_series}, \eqref{eigenvectors_series} as follows:
\begin{align}
\nonumber
N(\boldsymbol{\theta}) &= N_0(\boldsymbol{\theta}) + N_*(\boldsymbol{\theta}),
\\
\label{N0}
N_0(\boldsymbol{\theta}) &= \sum_{l=1}^3 \mu_l(\boldsymbol{\theta}) (\cdot,\boldsymbol{\omega}_l(\boldsymbol{\theta}) )_{L_2(\Omega)}\boldsymbol{\omega}_l(\boldsymbol{\theta}),
\\
\nonumber
N_*(\boldsymbol{\theta})
&= \sum_{l=1}^3 \gamma_l(\boldsymbol{\theta}) \left((\cdot,P \boldsymbol{\psi}_l(\boldsymbol{\theta}) )_{L_2(\Omega)}
\boldsymbol{\omega}_l(\boldsymbol{\theta}) + (\cdot,\boldsymbol{\omega}_l(\boldsymbol{\theta}))_{L_2(\Omega)} P \boldsymbol{\psi}_l(\boldsymbol{\theta}) \right).
\end{align}
For more details, see \cite{BSu2}.
\begin{remark}
\label{rem_N}
In the basis $\{\boldsymbol{\omega}_l(\boldsymbol{\theta})\}_{l=1}^3$, the operator $N_0(\boldsymbol{\theta})$ is diagonal, while the diagonal entries of $N_*(\boldsymbol{\theta})$ are equal to zero. We have
\begin{align}
\label{2.13a}
(N(\boldsymbol{\theta})\boldsymbol{\omega}_l(\boldsymbol{\theta}),\boldsymbol{\omega}_l(\boldsymbol{\theta}))_{L_2(\Omega)}&=
(N_0(\boldsymbol{\theta})\boldsymbol{\omega}_l(\boldsymbol{\theta}),\boldsymbol{\omega}_l(\boldsymbol{\theta}))_{L_2(\Omega)}
=\mu_l(\boldsymbol{\theta}), \quad l=1,2,3,
\\
\nonumber
\begin{split}
(N(\boldsymbol{\theta})\boldsymbol{\omega}_l(\boldsymbol{\theta}),\boldsymbol{\omega}_j(\boldsymbol{\theta}))_{L_2(\Omega)}&=
(N_*(\boldsymbol{\theta})\boldsymbol{\omega}_l(\boldsymbol{\theta}),\boldsymbol{\omega}_j(\boldsymbol{\theta}))_{L_2(\Omega)}
\\
&=(\gamma_l(\boldsymbol{\theta}) - \gamma_j(\boldsymbol{\theta})) (P \boldsymbol{\psi}_l(\boldsymbol{\theta}),
\boldsymbol{\omega}_j(\boldsymbol{\theta})), \quad l\ne j.
\end{split}
\end{align}
\end{remark}
In~\cite[\S4]{BSu2}, the following invariant representation for the operator $N (\boldsymbol{\theta})$ was obtained:
\begin{equation*}
N (\boldsymbol{\theta}) = b(\boldsymbol{\theta})^* M(\boldsymbol{\theta}) b(\boldsymbol{\theta}),
\end{equation*}
where $M (\boldsymbol{\theta})$~is the ($4 \times 4$)-matrix given by
\begin{equation*}
M (\boldsymbol{\theta}) = | \Omega |^{-1} \int_{\Omega} (\Lambda (\mathbf{x})^* b(\boldsymbol{\theta})^* \widetilde{g}(\mathbf{x}) + \widetilde{g}(\mathbf{x})^* b(\boldsymbol{\theta}) \Lambda (\mathbf{x}) ) \, d \mathbf{x}.
\end{equation*}
Here $\Lambda (\mathbf{x})$~is the $\Gamma$-periodic solution of problem~(\ref{equation_for_Lambda}), and $\widetilde{g}(\mathbf{x})$~is given by~(\ref{g_tilde}).
For $L(t; \boldsymbol{\theta})$, the operator $N(\boldsymbol{\theta})$ was calculated in~\cite[Section~14.3]{BSu2} (in the case where $\mu_0 = {\mathbf 1}$). Adapting that calculation to the case of a constant matrix $\mu_0$, one easily shows that
\begin{equation}
\label{N_operator}
N(\boldsymbol{\theta}) = - i f(\boldsymbol{\theta}) \mu_0^{-1/2} r(\boldsymbol{\theta}) \mu_0^{-1/2},
\end{equation}
where the matrix $r(\boldsymbol{\theta})$ is defined by \eqref{symbol}, and
\begin{equation}
\label{Mjk}
\begin{split}
f(\boldsymbol{\theta}) :=& (\rho_{12}(\boldsymbol{\theta}) - \rho_{21}(\boldsymbol{\theta})) \theta_3 +
(\rho_{31}(\boldsymbol{\theta}) - \rho_{13}(\boldsymbol{\theta})) \theta_2 +
(\rho_{23}(\boldsymbol{\theta}) - \rho_{32}(\boldsymbol{\theta})) \theta_1,
\\
\rho_{jk}(\boldsymbol{\theta}) :=& | \Omega |^{-1} \int_{\Omega} \widetilde{\Phi}_j (\mathbf{x}) \bigl\langle
\eta (\mathbf{x}) (\nabla \widetilde{\Phi}_k (\mathbf{x}) + \mathbf{c}_k), \boldsymbol{\theta} \bigr\rangle \, d\mathbf{x}.
\end{split}
\end{equation}
Obviously, the operator $N(\boldsymbol{\theta})$ is reduced by decomposition~(\ref{N_Weyl_decomp}).
The part of $N(\boldsymbol{\theta})$ in the subspace $G_{\boldsymbol{\theta}}^0$ is equal to zero.
\begin{remark}
\label{rem_NN}
Since $\boldsymbol{\omega}_3 (\boldsymbol{\theta}) = \alpha \mu_0^{1/2} \boldsymbol{\theta}$ \emph{(}see \eqref{omega3}\emph{)}, relations \eqref{N_operator} and the obvious identity $r(\boldsymbol{\theta}) \boldsymbol{\theta} =0$ imply that
\begin{equation*}
(N (\boldsymbol{\theta}) \boldsymbol{\omega}_3(\boldsymbol{\theta}), \boldsymbol{\omega}_j (\boldsymbol{\theta})) =
(N (\boldsymbol{\theta}) \boldsymbol{\omega}_j (\boldsymbol{\theta}), \boldsymbol{\omega}_3 (\boldsymbol{\theta})) = 0,
\quad j=1,2,3,\quad \boldsymbol{\theta} \in \mathbb{S}^2.
\end{equation*}
It follows {\rm (}see \eqref{2.13a}{\rm )} that the coefficient $\mu_3(\boldsymbol{\theta})$ in expansion \eqref{eigenvalues_series} of the eigenvalue
$\lambda_3(t;\boldsymbol{\theta})$, which corresponds to the \textquotedblleft gradient\textquotedblright \ subspace
$G(t\boldsymbol{\theta};\mu_0)$, is equal to zero{\rm :}
\begin{equation*}
\mu_3(\boldsymbol{\theta})= 0,
\quad \boldsymbol{\theta} \in \mathbb{S}^2.
\end{equation*}
\end{remark}
\begin{remark}
\label{N=0}
$1^\circ$.
Suppose that $\eta^0 = \overline{\eta}$ \emph{(}see Remark \emph{\ref{eta0_properties}$(2^\circ)$)}.
Then the columns of the matrix
$\eta(\mathbf{x})$ are divergence-free, whence the periodic solutions $\widetilde{\Phi}_j({\mathbf x})$ $(j=1,2,3)$ of problems \eqref{2.8aaa} are equal to zero. In this case, $N(\boldsymbol{\theta})=0$ for any $\boldsymbol{\theta} \in \mathbb{S}^2$.
In particular, this is the case if the matrix $\eta(\mathbf{x})$ is constant.
\noindent
$2^\circ$. Suppose that $\eta^0 = \underline{\eta}$ \emph{(}see Remark \emph{\ref{eta0_properties}$(3^\circ)$)}.
Then the vector-functions
$\eta (\mathbf{x}) (\nabla \widetilde{\Phi}_k (\mathbf{x}) + \mathbf{c}_k)$ $(k=1,2,3)$ are constant. Hence, by
\eqref{N_operator}, \eqref{Mjk}, we have $N(\boldsymbol{\theta})=0$ for any $\boldsymbol{\theta} \in \mathbb{S}^2$.
\end{remark}
\begin{remark}
\label{rem2_5}
$1^\circ$. According to \cite[Proposition 4.2]{BSu2}, if $b(\boldsymbol{\theta})$ and $g(\mathbf{x})$ are matrices with real entries
{\rm (}which is satisfied for the operator $\mathcal L${\rm )} and the vectors $\boldsymbol{\omega}_l (\boldsymbol{\theta})$, $l=1,2,3$, can be chosen real
{\rm(}for fixed $\boldsymbol{\theta} \in \mathbb{S}^2${\rm)}, then $N_0(\boldsymbol{\theta})=0$.
These conditions are ensured provided that $\gamma_1(\boldsymbol{\theta}) \ne \gamma_2(\boldsymbol{\theta})$,
because the vector $\boldsymbol{\omega}_3(\boldsymbol{\theta})$ is real \emph{(}see \eqref{omega3}\emph{)}, and, in the case under consideration, the eigenvectors of problem \eqref{solenoid_germ_spec_probl}
are determined uniquely {\rm (}up to phase factors{\rm )} and can be chosen real. For such $\boldsymbol{\theta}$ we have $N(\boldsymbol{\theta}) = N_*(\boldsymbol{\theta})$ and $\mu_l(\boldsymbol{\theta})=0$, $l=1,2,3$.
\noindent
$2^\circ$. If $\gamma_1(\boldsymbol{\theta}_0) = \gamma_2(\boldsymbol{\theta}_0)$ for some $\boldsymbol{\theta}_0 \in \mathbb{S}^2$, then, by Remarks {\rm \ref{rem_N}} and
{\rm \ref{rem_NN}}, we have $N_*(\boldsymbol{\theta}_0)=0$ and $N(\boldsymbol{\theta}_0)= N_0(\boldsymbol{\theta}_0)$.
In this case, $\mu_{1}(\boldsymbol{\theta}_0)$ and $\mu_{2}(\boldsymbol{\theta}_0)$ are the eigenvalues of the operator \eqref{N_operator} in the subspace $J^0_{\boldsymbol{\theta}_0}$; they are given by
$$
\mu_{1,2}(\boldsymbol{\theta}_0) = \pm f(\boldsymbol{\theta}_0)
\frac{\langle \mu_0 \boldsymbol{\theta}_0, \boldsymbol{\theta}_0 \rangle^{1/2}}{( \operatorname{det}\mu_0)^{1/2}}.
$$
If $f(\boldsymbol{\theta}_0) \ne 0$ \emph{(}and then also $\mu_{1,2}(\boldsymbol{\theta}_0) \ne 0$\emph{)},
then the vectors $\boldsymbol{\omega}_{1,2}(\boldsymbol{\theta}_0)$ are determined uniquely {\rm (}up to phase factors{\rm )} and coincide with the eigenvectors of the matrix $\mu_0^{-1/2} r(\boldsymbol{\theta}_0) \mu_0^{-1/2}$ corresponding to the eigenvalues $\pm i \frac{\langle \mu_0 \boldsymbol{\theta}_0, \boldsymbol{\theta}_0 \rangle^{1/2}}{( \operatorname{det}\mu_0)^{1/2}}$.
\end{remark}
\subsection{The effective operator}
We put
\begin{equation}
\label{effective_oper_symb}
S (\mathbf{k}) := t^2 S (\boldsymbol{\theta}) = b(\mathbf{k})^* g^0 b(\mathbf{k}), \quad \mathbf{k} \in \mathbb{R}^{3}.
\end{equation}
Expression~(\ref{effective_oper_symb}) is the symbol of the DO
\begin{equation}
\label{L0}
\mathcal{L}^0 = b(\mathbf{D})^* g^0 b(\mathbf{D}) = \mu_0^{-1/2} \operatorname{curl} (\eta^0)^{-1} \operatorname{curl} \mu_0^{-1/2} - \mu_0^{1/2}\nabla \underline{\nu} \operatorname{div} \mu_0^{1/2},
\end{equation}
acting in $L_2(\mathbb{R}^3;\mathbb{C}^3)$ on the domain
$H^2(\mathbb{R}^3;\mathbb{C}^3)$
and called the \emph{effective operator} for the operator $\mathcal{L}$.
\section{Homogenization of the operator $\mathcal{L}_\varepsilon$}
\label{L_eps_approx_section}
\subsection{The operator $\mathcal{L}_\varepsilon$}
\emph{Our main object}~is the operator $\mathcal{L}_\varepsilon$ acting in $L_2(\mathbb{R}^3;\mathbb{C}^3)$ and formally given by
\begin{equation}
\label{L_eps}
\mathcal{L}_\varepsilon = \mu_0^{-1/2} \operatorname{curl} (\eta^\varepsilon(\mathbf{x}))^{-1}
\operatorname{curl} \mu_0^{-1/2} - \mu_0^{1/2} \nabla \nu^\varepsilon(\mathbf{x}) \operatorname{div} \mu_0^{1/2} = b(\mathbf{D})^* g^{\varepsilon}(\mathbf{x}) b(\mathbf{D}).
\end{equation}
The precise definition is given in terms of the corresponding quadratic form (cf.~Subsection~\ref{Subsection Operator L}).
The coefficients of the operator~(\ref{L_eps}) oscillate rapidly as $\varepsilon \to 0$.
We obtain approximations of the operators $\cos(\tau \mathcal{L}_\varepsilon^{1/2})$ and $\mathcal{L}_\varepsilon^{-1/2} \sin(\tau \mathcal{L}_\varepsilon^{1/2})$ for small $\varepsilon$.
Like $\mathcal L$, the operator \eqref{L_eps} is reduced by the Weyl decomposition
\eqref{Weyl_decomp}. Its parts in the divergence-free and the gradient subspaces are denoted by
${\mathcal L}_{J,\varepsilon}$ and ${\mathcal L}_{G,\varepsilon}$, respectively.
Using that $\mathcal{L}_\varepsilon$ and $\mathcal{L}^0$ are simultaneously reduced by the Weyl decomposition \eqref{Weyl_decomp} and taking Remark \ref{PJ} into account, we obtain the following simple statement.
\begin{lemma}\label{lemma}
Suppose that $\mathcal{L}_\varepsilon$~is the operator~\emph{(\ref{L_eps})}, and $\mathcal{L}^0$~is the effective operator~\emph{(\ref{L0})}.
Let $\mathcal{L}_{J,\varepsilon}$, $\mathcal{L}_{G,\varepsilon}$ be the parts of $\mathcal{L}_{\varepsilon}$ in the subspaces $J(\mu_0)$ and $G(\mu_0)$, respectively. Let $\mathcal{L}_{J}^0$, $\mathcal{L}_{G}^0$ be the parts of the operator $\mathcal{L}^0$ in the subspaces $J(\mu_0)$ and
$G(\mu_0)$, respectively.
\noindent $1^\circ$. The estimate
\begin{equation*}
\bigl\| \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}
\end{equation*}
with some $s\geqslant 0$ and $\sigma \geqslant 0$ is equivalent to the pair of inequalities
\begin{align*}
\bigl\| \cos( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}_J^0)^{1/2}) \bigr\|_{J^s \to J} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma},
\\
\bigl\| \cos( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^s \to G} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}.
\end{align*}
Here for brevity we denote $J:=J(\mu_0)$, $G:=G(\mu_0)$, $J^s := J^s(\mu_0)$, $G^s := G^s(\mu_0)$.
\noindent
$2^\circ$. The estimate
\begin{equation*}
\bigl\| \mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2}) - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant {\mathcal{C}}(\tau) \varepsilon^{\sigma}
\end{equation*}
with some $s\geqslant 0$ and $\sigma \geqslant 0$ is equivalent to the pair of inequalities
\begin{align*}
\bigl\| \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - (\mathcal{L}_J^0)^{-1/2}\sin( \tau (\mathcal{L}_J^0)^{1/2}) \bigr\|_{J^s \to J} \leqslant
{\mathcal{C}}(\tau) \varepsilon^{\sigma},
\\
\bigl\| \mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) -
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^s \to G} \leqslant
{\mathcal{C}}(\tau) \varepsilon^{\sigma}.
\end{align*}
$3^\circ$. The estimate
\begin{equation*}
\bigl\| \mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2})D_j - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) D_j \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant {\mathcal{C}}(\tau) \varepsilon^{\sigma}
\end{equation*}
with some $s\geqslant 0$ and $\sigma \geqslant 0$ is equivalent to the pair of inequalities
\begin{align*}
\bigl\| \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) D_j - (\mathcal{L}_J^0)^{-1/2}\sin( \tau (\mathcal{L}_J^0)^{1/2}) D_j \bigr\|_{J^s \to J} \leqslant
{\mathcal{C}}(\tau) \varepsilon^{\sigma},
\\
\bigl\| \mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) D_j -
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) D_j \bigr\|_{G^s \to G} \leqslant
{\mathcal{C}}(\tau) \varepsilon^{\sigma}.
\end{align*}
Here $j=1,2,3$.
\end{lemma}
\subsection{Approximation for the operator-valued functions of $\mathcal{L}_\varepsilon$ in the principal order}
For convenience of further reference, the following set of parameters is called the \textit{problem data}:
\begin{equation}
\label{problem_data}
|\mu_0|,\ |\mu_0^{-1}|,\
\| \eta \|_{L_\infty}, \ \| \eta^{-1} \|_{L_\infty},\ \| \nu \|_{L_\infty},\ \| \nu^{-1} \|_{L_\infty}; \ \text{the parameters of the lattice}\ \Gamma.
\end{equation}
The following theorem is a consequence of the general results for the class of operators
$\mathcal{A}_\varepsilon$.
\begin{theorem}
\label{cos_thrm1}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Then for $\tau \in \mathbb{R}$ and $\varepsilon >0$ we have
\begin{gather}
\label{cos_est}
\bigl\| \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^2 (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant {C}_1 (1+|\tau|) \varepsilon,
\\
\label{sin_est}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^1 (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant {C}_2 (1+ |\tau| ) \varepsilon.
\end{gather}
The constants ${C}_1$ and ${C}_2$ depend only on the problem data \eqref{problem_data}.
\end{theorem}
Estimate \eqref{cos_est} was obtained in \cite[Theorem 13.1]{BSu4}, and estimate \eqref{sin_est} was proved in \cite[Theorem 9.1]{M2} (see also \cite{M1}).
By interpolation, Theorem \ref{cos_thrm1} implies the following result (see \cite[Theorem 13.2]{BSu4} and \cite[Corollary 15.3]{DSu4}).
\begin{theorem}
\label{cos_thrm2}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Then for $0 \leqslant s \leqslant 2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{equation*}
\bigl\| \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant \mathcal{C}_1(s) (1+|\tau|)^{s/2} \varepsilon^{s/2},
\end{equation*}
\begin{equation*}
\begin{split}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) D_j - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) D_j \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant \mathcal{C}_2 (s) (1+ |\tau|)^{s/2} \varepsilon^{s/2},
\end{split}
\end{equation*}
$j=1,2,3$. The constants $\mathcal{C}_1(s)$ and $\mathcal{C}_2(s)$ depend on the problem data \eqref{problem_data} and on $s$.
\end{theorem}
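Let us sketch the interpolation argument behind the first estimate of Theorem \ref{cos_thrm2} (a schematic of the standard scheme; see \cite[Theorem 13.2]{BSu4} for the precise form of the constants). Denote $T_\varepsilon(\tau) := \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2})$. By the spectral theorem, both cosines are contractions in $L_2(\mathbb{R}^3;\mathbb{C}^3)$, so $\| T_\varepsilon(\tau) \|_{L_2 \to L_2} \leqslant 2$, while \eqref{cos_est} gives $\| T_\varepsilon(\tau) \|_{H^2 \to L_2} \leqslant C_1 (1+|\tau|) \varepsilon$. Since $[L_2, H^2]_{s/2} = H^s$, interpolating between these two bounds yields
$$
\| T_\varepsilon(\tau) \|_{H^s \to L_2} \leqslant \| T_\varepsilon(\tau) \|_{L_2 \to L_2}^{1-s/2} \, \| T_\varepsilon(\tau) \|_{H^2 \to L_2}^{s/2}
\leqslant 2^{1-s/2} C_1^{s/2} (1+|\tau|)^{s/2} \varepsilon^{s/2}, \quad 0 \leqslant s \leqslant 2.
$$
The second estimate of the theorem, and the interpolation results below, are obtained by the same scheme with the appropriate endpoint bounds.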
As was shown in \cite{DSu1,DSu2,DSu4}, under some additional assumptions, the results of Theorems \ref{cos_thrm1} and \ref{cos_thrm2} can be improved.
\begin{condition}
\label{cond1}
Let~$N(\boldsymbol{\theta})$ be the operator defined by~\emph{(\ref{N_operator})}, \eqref{Mjk}. Suppose that \hbox{$N(\boldsymbol{\theta}) = 0$} for any $\boldsymbol{\theta} \in \mathbb{S}^{2}$; this is equivalent to the assumption that $f(\boldsymbol{\theta}) \equiv 0$.
\end{condition}
Theorem 15.2 from \cite{DSu4} directly implies the following result.
\begin{theorem}
\label{cos_thrm3a}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Suppose that Condition \emph{\ref{cond1}} is satisfied.
Then for $\tau \in \mathbb{R}$ and $\varepsilon >0$ we have
\begin{equation}
\label{cos_est3}
\bigl\| \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^{3/2} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant {C}_3 (1+|\tau|)^{1/2} \varepsilon,
\end{equation}
\begin{equation}
\label{sin_est3}
\begin{aligned}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{1/2} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant {C}_4 (1+ |\tau| )^{1/2} \varepsilon.
\end{aligned}
\end{equation}
The constants ${C}_3$ and ${C}_4$ depend only on the problem data \eqref{problem_data}.
\end{theorem}
By interpolation, Theorem \ref{cos_thrm3a} implies the following result (see \cite[Corollary 15.4]{DSu4}).
\begin{theorem}
\label{cos_thrm4}
Suppose that the assumptions of Theorem \emph{\ref{cos_thrm3a}} are satisfied.
Then for \hbox{$0 \leqslant s \leqslant 3/2$}, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{equation}
\label{cos_est4}
\bigl\| \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant \mathcal{C}_3(s) (1+|\tau|)^{s/3} \varepsilon^{2s/3},
\end{equation}
\begin{equation}
\label{sin_est4}
\begin{split}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) D_j - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) D_j \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant \mathcal{C}_4 (s) (1+ |\tau|)^{s/3} \varepsilon^{2s/3},
\end{split}
\end{equation}
$j=1,2,3$. The constants $\mathcal{C}_3(s)$ and $\mathcal{C}_4(s)$ depend on the problem data \eqref{problem_data} and on $s$.
\end{theorem}
Note that the operators $\mathcal{L}_{J,\varepsilon}$ and $\mathcal{L}^0_J$ depend on the coefficient $\eta(\mathbf{x})$, but not on $\nu({\mathbf x})$.
Conversely, $\mathcal{L}_{G,\varepsilon}$ and $\mathcal{L}^0_G$ depend on the coefficient $\nu(\mathbf{x})$, but not on $\eta({\mathbf x})$.
Consider the operator $\check{\mathcal L}_\varepsilon$ with the initial coefficients $\nu(\mathbf{x})$, $\mu_0$
and the constant coefficient
$\check{\eta}$ (for simplicity, let $\check{\eta}= {\mathbf 1}_3$). By Remark \ref{N=0}($1^\circ$), such an operator satisfies Condition \ref{cond1}. Then, by Theorems \ref{cos_thrm3a} and \ref{cos_thrm4}, the operator
$\check{\mathcal L}_\varepsilon$ satisfies
estimates of the form \eqref{cos_est3}--\eqref{sin_est4}. Using Lemma \ref{lemma}, we arrive at the following statement.
\begin{corollary}
\label{corollary}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}.
Let $\mathcal{L}_{G,\varepsilon}$ and $\mathcal{L}_{G}^0$ be the parts of the operators $\mathcal{L}_{\varepsilon}$ and $\mathcal{L}^0$ in the subspace $G(\mu_0)$.
Then for $\tau \in {\mathbb R}$ and $\varepsilon >0$ we have
\begin{gather*}
\bigl\| \cos( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}_G^0)^{1/2})
\bigr\|_{G^{3/2} \to G} \leqslant \check{C}_3 (1+| \tau|)^{1/2} \varepsilon,
\\
\bigl\|\mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) -
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^{1/2} \to G} \leqslant \check{C}_4 (1+ |\tau|)^{1/2} \varepsilon.
\end{gather*}
For $0\leqslant s \leqslant 3/2$, $\tau \in {\mathbb R}$, and $\varepsilon >0$ we have
\begin{gather*}
\bigl\| \cos( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}_G^0)^{1/2})
\bigr\|_{G^s \to G} \leqslant \check{\mathcal{C}}_3 (s) (1+ |\tau| )^{s/3} \varepsilon^{2s/3},
\\
\bigl\|\mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) D_j -
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) D_j \bigr\|_{G^s \to G}
\leqslant \check{\mathcal{C}}_4 (s)(1+| \tau|)^{s/3} \varepsilon^{2s/3}, \quad j=1,2,3.
\end{gather*}
The constants $\check{C}_3$ and $\check{C}_4$ are controlled in terms of $|\mu_0|$, $|\mu_0^{-1}|$, $\|\nu\|_{L_\infty}$, $\|\nu^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$.
The constants $\check{\mathcal C}_3(s)$ and $\check{\mathcal C}_4(s)$ depend on the same parameters and on $s$.
\end{corollary}
Now we abandon the condition $N(\boldsymbol{\theta})\equiv 0$ and instead assume that
$N_0(\boldsymbol{\theta}) \equiv 0$. However, in this case we have to impose an additional condition on the spectrum of the germ $S(\boldsymbol{\theta})$.
\begin{condition}
\label{cond2}
$1^\circ$. The operator $N_0(\boldsymbol{\theta})$ is equal to zero{\rm :} $N_0(\boldsymbol{\theta}) = 0$ for any $\boldsymbol{\theta} \in \mathbb{S}^2$. This is equivalent to
$\mu_1(\boldsymbol{\theta}) = \mu_2(\boldsymbol{\theta}) = 0$ for any $\boldsymbol{\theta} \in \mathbb{S}^2$.
$2^\circ$. The branches of the eigenvalues $\gamma_1 (\boldsymbol{\theta})$ and $\gamma_2 (\boldsymbol{\theta})$ of problem \eqref{solenoid_germ_spec_probl} either do not intersect or coincide identically.
\end{condition}
Note that the intersection of the branch
$\gamma_3 (\boldsymbol{\theta}) = \underline{\nu} \langle \mu_0 \boldsymbol{\theta}, \boldsymbol{\theta} \rangle$
(see \eqref{gamma3}) with the branches
$\gamma_1 (\boldsymbol{\theta})$ and $\gamma_2 (\boldsymbol{\theta})$ is allowed.
Under Condition \ref{cond2}, in the case where $\gamma_1 (\boldsymbol{\theta})$ and $\gamma_2 (\boldsymbol{\theta})$ do not intersect, we denote
$$
c^\circ:= \min_{\boldsymbol{\theta}\in \mathbb{S}^2} |\gamma_1 (\boldsymbol{\theta}) - \gamma_2 (\boldsymbol{\theta})|.
$$
By Remark \ref{rem2_5}, if
$\gamma_1 (\boldsymbol{\theta})$ and $\gamma_2 (\boldsymbol{\theta})$ do not intersect, then $N_0(\boldsymbol{\theta}) \equiv 0$ and Condition \ref{cond2} is valid automatically.
The following result is deduced from \cite[Theorem 15.2]{DSu4}.
\begin{theorem}
\label{cos_thrm3aa}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})} and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Suppose that Condition \emph{\ref{cond2}} is satisfied.
Then for $\tau \in \mathbb{R}$ and $\varepsilon >0$ we have
\begin{equation}
\label{cos_est6}
\bigl\| \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^{3/2} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant {C}_5 (1+|\tau|)^{1/2} \varepsilon,
\end{equation}
\begin{equation}
\label{sin_est6}
\begin{aligned}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{1/2} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant {C}_6 (1+ |\tau| )^{1/2} \varepsilon.
\end{aligned}
\end{equation}
The constants ${C}_5$ and ${C}_6$ depend on the problem data \eqref{problem_data}
and also on the parameter $c^\circ$.
\end{theorem}
\begin{proof}
By Lemma \ref{lemma}, the required estimates \eqref{cos_est6} and \eqref{sin_est6} are equivalent to similar estimates for the divergence-free and the gradient parts of the operator
${\mathcal L}_\varepsilon$. According to Corollary \ref{corollary}, these estimates are valid for the gradient part.
So, the problem is reduced to the proof of the following estimates:
\begin{gather}
\label{312}
\bigl\| \cos( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^{3/2} \to J} \leqslant {C}_5 (1+|\tau|)^{1/2} \varepsilon,
\\
\label{313}
\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - (\mathcal{L}_J^0)^{-1/2} \sin( \tau (\mathcal{L}_J^0)^{1/2}) \bigr\|_{J^{1/2} \to J} \leqslant {C}_6 (1+ |\tau| )^{1/2} \varepsilon.
\end{gather}
Consider the operator $\widehat{\mathcal L}_\varepsilon$ with the initial coefficients $\mu_0$, $\eta(\mathbf{x})$ and the constant coefficient $\widehat{\nu} = 2 |\mu_0^{-1}|^2 \|\eta^{-1}\|_{L_\infty}$. By \eqref{gammaj_est1}, such a choice of the coefficient $\widehat{\nu}$ ensures that the branch
$\widehat{\gamma}_3(\boldsymbol{\theta})= \widehat{\nu} \langle \mu_0 \boldsymbol{\theta},\boldsymbol{\theta}\rangle$ does not intersect with $\gamma_1(\boldsymbol{\theta})$ and $\gamma_2(\boldsymbol{\theta})$.
Together with Condition \ref{cond2}, this ensures that Condition 9.7 from \cite{DSu4} is satisfied
(this condition means that $N_0(\boldsymbol{\theta}) \equiv 0$ and the multiplicity of the spectrum of the germ $S(\boldsymbol{\theta})$ does not depend on $\boldsymbol{\theta}$). Then, by Theorem 15.2 from \cite{DSu4},
estimates of the form \eqref{cos_est6}, \eqref{sin_est6}
are valid for the operator $\widehat{\mathcal L}_\varepsilon$.
Applying Lemma \ref{lemma} and taking into account that the divergence-free parts of the operators ${\mathcal L}_\varepsilon$ and $\widehat{\mathcal L}_\varepsilon$ coincide, we arrive at the required estimates \eqref{312}, \eqref{313}.
\end{proof}
By interpolation, we obtain the following result (it is deduced from \cite[Corollary 15.4]{DSu4} by analogy with the proof of Theorem \ref{cos_thrm3aa}).
\begin{theorem}
\label{cos_thrm6}
Suppose that the assumptions of Theorem \emph{\ref{cos_thrm3aa}} are satisfied.
Then for \hbox{$0 \leqslant s \leqslant 3/2$}, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{equation*}
\bigl\| \cos( \tau \mathcal{L}_{\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant \mathcal{C}_5(s) (1+|\tau|)^{s/3} \varepsilon^{2s/3},
\end{equation*}
\begin{equation*}
\begin{split}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) D_j - (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) D_j \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant \mathcal{C}_6 (s) (1+ |\tau|)^{s/3} \varepsilon^{2s/3},
\end{split}
\end{equation*}
$j=1,2,3$. The constants $\mathcal{C}_5(s)$ and $\mathcal{C}_6(s)$ depend on the problem data \eqref{problem_data}, on $s$, and on the parameter~$c^\circ$.
\end{theorem}
\subsection{Approximation for the operator $\mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2})$ in the energy norm}
Approximation for the operator-valued function $\mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2})$ in the ``energy'' norm (i.~e., the norm of operators acting from $H^s$ to $H^1$) follows from the results of \cite{M2}, where the general class of the operators ${\mathcal A}_\varepsilon$ was considered.
In this approximation, a corrector is taken into account. In the general case, the corrector involves an auxiliary smoothing operator. However, under the additional assumption that the solution
$\Lambda({\mathbf x})$ of problem \eqref{equation_for_Lambda} is a multiplier from $H^{2}$ to $H^1$,
we can get rid of the smoothing operator. In dimension $d\leqslant 4$, this condition holds automatically.
We are also interested in approximation of the so-called ``flux'', i.~e., of the operator
$g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2})$ in the $(H^s \to L_2)$-norm.
From \cite[Theorem 9.8]{M2} we deduce the following result.
\begin{theorem}
\label{th1_corrector}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}.
Then for $\tau \in \mathbb{R}$ and $0< \varepsilon \leqslant 1$ we have
\begin{equation*}
\begin{aligned}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl(I + \varepsilon \Lambda^\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{2} (\mathbb{R}^3) \to H^1 (\mathbb{R}^3)}
\leqslant {C}_7 (1+ |\tau| ) \varepsilon,
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\bigl\| g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\widetilde{g}^\varepsilon b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{2} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant {C}_8 (1+ |\tau| ) \varepsilon.
\end{aligned}
\end{equation*}
The constants ${C}_7$ and ${C}_8$ depend only on the problem data \eqref{problem_data}.
\end{theorem}
We have
$$
\Lambda^\varepsilon b({\mathbf D}) = \mu_0^{-1/2} \Psi^\varepsilon \operatorname{curl} \mu_0^{-1/2} + \mu_0^{1/2}
(\nabla \rho)^\varepsilon \operatorname{div} \mu_0^{1/2}.
$$
Obviously, the first term is equal to zero on $G(\mu_0)$, and the second is equal to zero on $J(\mu_0)$.
Next,
$$
g^\varepsilon b({\mathbf D}) = -i \begin{pmatrix} (\eta^\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2} \\ {\nu}^\varepsilon \operatorname{div} \mu_0^{1/2} \end{pmatrix},
\quad
\widetilde{g}^\varepsilon b({\mathbf D}) = -i \begin{pmatrix} \bigl((\eta^0)^{-1} + \Sigma^\varepsilon \bigr) \operatorname{curl} \mu_0^{-1/2} \\ \underline{\nu}\, \operatorname{div} \mu_0^{1/2} \end{pmatrix}.
$$
Using these relations, it is easy to check the following analog of Lemma~\ref{lemma}.
\begin{lemma}
\label{lemma2}
$1^\circ$. The estimate
\begin{equation*}
\bigl\| \mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2}) -
\bigl(I + \varepsilon \Lambda^\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to H^1 (\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}
\end{equation*}
with some $s\geqslant 0$ and $\sigma \geqslant 0$ is equivalent to the pair of inequalities
\begin{align*}
\bigl\| \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) -
\bigl(I + \varepsilon \mu_0^{-1/2} \Psi^\varepsilon \operatorname{curl} \mu_0^{-1/2} \bigr) (\mathcal{L}_J^0)^{-1/2} \sin( \tau (\mathcal{L}_J^0)^{1/2}) \bigr\|_{J^s \to H^1} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma},
\\
\bigl\| \mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) -
\bigl(I +\varepsilon \mu_0^{1/2} (\nabla \rho)^\varepsilon \operatorname{div} \mu_0^{1/2} \bigr) (\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^s \to H^1} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}.
\end{align*}
$2^\circ$. The estimate
\begin{equation*}
\bigl\| g^\varepsilon b({\mathbf D} )\mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2}) -
\widetilde{g}^\varepsilon b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}
\end{equation*}
with some $s\geqslant 0$ and $\sigma \geqslant 0$ is equivalent to the pair of inequalities
\begin{align*}
\begin{split}
&\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2} \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) -
\bigl((\eta^0)^{-1} + \Sigma^\varepsilon \bigr) \operatorname{curl} \mu_0^{-1/2} (\mathcal{L}_J^0)^{-1/2} \sin( \tau (\mathcal{L}_J^0)^{1/2}) \bigr\|_{J^s \to L_2}
\\
&\qquad \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma},
\end{split}
\\
&\bigl\| \nu^\varepsilon \operatorname{div} \mu_0^{1/2} \mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) -
\underline{\nu} \, \operatorname{div} \mu_0^{1/2} (\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^s \to L_2} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}.
\end{align*}
\end{lemma}
Next, under some additional assumptions (for instance, under Condition \ref{cond1}), the results of Theorem
\ref{th1_corrector} can be improved; see \cite{DSu4}. Now, in order to remove the smoothing operator in the corrector, it suffices to assume that the solution
$\Lambda({\mathbf x})$ of problem \eqref{equation_for_Lambda} is a multiplier from $H^{3/2}$ to $H^1$.
In dimension $d\leqslant 3$, this condition is valid automatically (see \cite[Proposition 14.25]{DSu4}).
From \cite[Theorem 15.36]{DSu4} we obtain the following result.
\begin{theorem}
\label{th2_corrector}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Suppose that Condition \emph{\ref{cond1}} is satisfied.
Then for $\tau \in \mathbb{R}$ and $0< \varepsilon \leqslant 1$ we have
\begin{equation*}
\begin{aligned}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl( I + \varepsilon \Lambda^\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{3/2} (\mathbb{R}^3) \to H^1 (\mathbb{R}^3)}
\leqslant {C}_9 (1+ |\tau| )^{1/2} \varepsilon,
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\bigl\| g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\widetilde{g}^\varepsilon b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{3/2} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant {C}_{10} (1+ |\tau| )^{1/2} \varepsilon.
\end{aligned}
\end{equation*}
The constants ${C}_9$ and ${C}_{10}$ depend only on the problem data \eqref{problem_data}.
\end{theorem}
By analogy with the proof of Corollary \ref{corollary}, from Theorem \ref{th2_corrector} and Lemma \ref{lemma2} we deduce the following corollary.
\begin{corollary}
\label{corollary25}
Let $\mathcal{L}_\varepsilon$~be the operator~\eqref{L_eps}, and let $\mathcal{L}^0$~be the effective operator~\eqref{L0}.
Let $\mathcal{L}_{G,\varepsilon}$ and $\mathcal{L}_{G}^0$ be the parts of the operators $\mathcal{L}_{\varepsilon}$ and $\mathcal{L}^0$ in the subspace $G(\mu_0)$.
Then for $\tau \in {\mathbb R}$ and $0< \varepsilon \leqslant 1$ we have
\begin{equation}
\label{sin_est_corrector3}
\begin{aligned}
&\bigl\|\mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) -
\bigl(I + \varepsilon \mu_0^{1/2} (\nabla \rho)^\varepsilon \operatorname{div} \mu_0^{1/2} \bigr)
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2})
\bigr\|_{G^{3/2} \to H^1}
\\
& \qquad \leqslant \check{C}_9 (1+ |\tau| )^{1/2} \varepsilon,
\end{aligned}
\end{equation}
\begin{equation*}
\begin{aligned}
\bigl\| \nu^\varepsilon \operatorname{div} \mu_0^{1/2} \mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \underline{\nu} \, \operatorname{div} \mu_0^{1/2} (\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^{3/2} \to L_2}
\leqslant \check{C}_{10} (1+ |\tau| )^{1/2} \varepsilon.
\end{aligned}
\end{equation*}
The constants $\check{C}_9$ and $\check{C}_{10}$ depend only on $|\mu_0|$, $|\mu_0^{-1}|$,
$\|\nu\|_{L_\infty}$, $\|\nu^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$.
\end{corollary}
Now, using \cite[Theorem 15.36]{DSu4} together with Lemma \ref{lemma2} and Corollary \ref{corollary25}, we obtain the following result; cf. the proof of Theorem \ref{cos_thrm3aa}.
\begin{theorem}
\label{th3_corrector}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Suppose that Condition \emph{\ref{cond2}} is satisfied.
Then for $\tau \in \mathbb{R}$ and $0< \varepsilon \leqslant 1$ we have
\begin{equation*}
\begin{aligned}
\bigl\|\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl( I + \varepsilon \Lambda^\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{3/2} (\mathbb{R}^3) \to H^1 (\mathbb{R}^3)}
\leqslant {C}_{11} (1+ |\tau| )^{1/2} \varepsilon,
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\bigl\| g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\widetilde{g}^\varepsilon b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{3/2} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant {C}_{12} (1+ |\tau| )^{1/2} \varepsilon.
\end{aligned}
\end{equation*}
The constants ${C}_{11}$ and ${C}_{12}$ depend only on the problem data \eqref{problem_data} and $c^\circ$.
\end{theorem}
In the interpolation results about approximation of the operator
$\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2})$
in the energy norm, we use the smoothing operator $\Pi_\varepsilon$ acting in
$L_2({\mathbb R}^3;{\mathbb C}^4)$ and given by
$$
(\Pi_\varepsilon {\mathbf u})({\mathbf x}) = (2\pi)^{-3/2} \intop_{\widetilde{\Omega}/ \varepsilon } e^{i \langle {\mathbf x}, \boldsymbol{\xi}\rangle}
\widehat{{\mathbf u}}(\boldsymbol{\xi}) \, d\boldsymbol{\xi}.
$$
Here $\widehat{{\mathbf u}}(\boldsymbol{\xi})$ is the Fourier transform of the function ${\mathbf u}({\mathbf x})$.
\begin{theorem}
\label{th4_corrector}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}.
Then for $0\leqslant s \leqslant 2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{equation}
\label{sin_est_corrector7}
\begin{aligned}
&\bigl\| {\mathbf D} \left(\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl( I + \varepsilon \Lambda^\varepsilon \Pi_\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \right)
\bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\\
&\qquad \leqslant \mathcal{C}_{7}(s) (1+ |\tau| )^{s/2} \varepsilon^{s/2},
\end{aligned}
\end{equation}
\begin{equation}
\label{sin_est_corrector8}
\begin{aligned}
&\bigl\| g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl( g^0 + (\widetilde{g}^\varepsilon - g^0) \Pi_\varepsilon \bigr) b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\\
&\qquad \leqslant \mathcal{C}_{8}(s) (1+ |\tau| )^{s/2} \varepsilon^{s/2}.
\end{aligned}
\end{equation}
The constants $\mathcal{C}_{7}(s)$ and $\mathcal{C}_{8}(s)$ depend only on the problem data \eqref{problem_data} and on~$s$.
\end{theorem}
\begin{proof}
Corollary 15.9 from \cite{DSu4} directly implies estimate \eqref{sin_est_corrector7} together with the inequality
\begin{equation}
\label{sin_est_corrector8a}
\begin{aligned}
& \bigl\| g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\widetilde{g}^\varepsilon \Pi_\varepsilon b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\\
&\qquad \leqslant \widetilde{\mathcal{C}}_{8}(s) (1+ |\tau| )^{s/2} \varepsilon^{s/2}.
\end{aligned}
\end{equation}
Take into account that
\begin{equation}
\label{sin_est_corrector8b}
\begin{split}
\bigl\|{g}^0 (I- \Pi_\varepsilon ) b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\leqslant \|g\|_{L_\infty}^{1/2} \|I- \Pi_\varepsilon \|_{H^s({\mathbb R}^3) \to L_2({\mathbb R}^3)}.
\end{split}
\end{equation}
Since $|\boldsymbol{\xi}| \geqslant r_0 \varepsilon^{-1}$ for $\boldsymbol{\xi} \in {\mathbb R}^3 \setminus (\widetilde{\Omega}/ \varepsilon)$, we have
$$
\|(I- \Pi_\varepsilon ) {\mathbf u} \|^2_{L_2({\mathbb R}^3)} = \intop_{{\mathbb R}^3 \setminus (\widetilde{\Omega}/ \varepsilon )}
|\widehat{{\mathbf u}}(\boldsymbol{\xi})|^2
\, d\boldsymbol{\xi} \leqslant r_0^{-2\sigma} \varepsilon^{2\sigma} \| {\mathbf u}\|^2_{H^\sigma({\mathbb R}^3)},
$$
whence
\begin{equation}
\label{sin_est_corrector8c}
\| I- \Pi_\varepsilon \|_{H^s({\mathbb R}^3) \to L_2({\mathbb R}^3)} \leqslant
r_0^{-\sigma} \varepsilon^\sigma, \quad 0\leqslant \sigma \leqslant s.
\end{equation}
Relations \eqref{sin_est_corrector8a}, \eqref{sin_est_corrector8b}, and \eqref{sin_est_corrector8c} (with $\sigma = s/2$)
imply \eqref{sin_est_corrector8}.
\end{proof}
We need the following analog of Lemma \ref{lemma2}.
\begin{lemma}
\label{lemma3}
$1^\circ$. The estimate
\begin{equation*}
\bigl\| {\mathbf D} \left( \mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2}) -
\bigl(I + \varepsilon \Lambda^\varepsilon \Pi_\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \right) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}
\end{equation*}
with some $s\geqslant 0$ and $\sigma \geqslant 0$ is equivalent to the pair of inequalities
\begin{align*}
\bigl\| {\mathbf D} \left( \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) -
\bigl(I + \varepsilon \mu_0^{-1/2} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} \mu_0^{-1/2} \bigr)
(\mathcal{L}_J^0)^{-1/2} \sin( \tau (\mathcal{L}_J^0)^{1/2}) \right)
\bigr\|_{J^s \to L_2} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma},
\\
\bigl\| {\mathbf D} \left(\mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) -
\bigl(I + \varepsilon \mu_0^{1/2} (\nabla \rho)^\varepsilon \Pi_\varepsilon \operatorname{div} \mu_0^{1/2} \bigr)
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \right) \bigr\|_{G^s \to L_2} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}.
\end{align*}
$2^\circ$. The estimate
\begin{equation*}
\bigl\| g^\varepsilon b({\mathbf D} )\mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2}) -
\bigl( g^0 + (\widetilde{g}^\varepsilon - g^0) \Pi_\varepsilon \bigr) b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}
\end{equation*}
with some $s\geqslant 0$ and $\sigma \geqslant 0$ is equivalent to the pair of inequalities
\begin{align*}
&\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2}
\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) -
\bigl((\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon\bigr) \operatorname{curl} \mu_0^{-1/2} (\mathcal{L}_J^0)^{-1/2} \sin( \tau (\mathcal{L}_J^0)^{1/2}) \bigr\|_{J^s \to L_2}
\\
&\qquad \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma},
\\
& \bigl\| \nu^\varepsilon \operatorname{div} \mu_0^{1/2} \mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \underline{\nu} \, \operatorname{div} \mu_0^{1/2}
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^s \to L_2} \leqslant \mathcal{C}(\tau) \varepsilon^{\sigma}.
\end{align*}
\end{lemma}
Using \cite[Corollary 15.12]{DSu4} and taking into account \eqref{sin_est_corrector8b} and \eqref{sin_est_corrector8c}
(with $\sigma = 2 s/3$), we deduce the following result.
\begin{theorem}
\label{th5_corrector}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Suppose that Condition \emph{\ref{cond1}} is satisfied.
Then for $0\leqslant s \leqslant 3/2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{equation*}
\begin{aligned}
&\bigl\| {\mathbf D} \left(\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl( I + \varepsilon \Lambda^\varepsilon \Pi_\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \right)
\bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\\
&\qquad \leqslant \mathcal{C}_{9}(s) (1+ |\tau| )^{s/3} \varepsilon^{2s/3},
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
&\bigl\| g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl( g^0 + (\widetilde{g}^\varepsilon - g^0) \Pi_\varepsilon \bigr) b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\\
&\qquad \leqslant \mathcal{C}_{10}(s) (1+ |\tau| )^{s/3} \varepsilon^{2s/3}.
\end{aligned}
\end{equation*}
The constants $\mathcal{C}_{9}(s)$ and $\mathcal{C}_{10}(s)$ depend only on the problem data \eqref{problem_data} and on~$s$.
\end{theorem}
Similarly to the proof of Corollary \ref{corollary}, from Theorem \ref{th5_corrector} and Lemma \ref{lemma3} we deduce the following corollary.
\begin{corollary}
\label{corollary3}
Let $\mathcal{L}_\varepsilon$~be the operator~\eqref{L_eps}, and let $\mathcal{L}^0$~be the effective operator~\eqref{L0}.
Let $\mathcal{L}_{G,\varepsilon}$ and $\mathcal{L}_{G}^0$ be the parts of the operators $\mathcal{L}_{\varepsilon}$ and $\mathcal{L}^0$ in the subspace $G(\mu_0)$, respectively.
Then for $0 \leqslant s \leqslant 3/2$, $\tau \in {\mathbb R}$, and $\varepsilon >0$ we have
\begin{equation*}
\begin{aligned}
&\bigl\| {\mathbf D} \left(\mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) -
\bigl(I + \varepsilon \mu_0^{1/2} (\nabla \rho)^\varepsilon \Pi_\varepsilon \operatorname{div} \mu_0^{1/2} \bigr)
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \right)\bigr\|_{G^{s} \to L_2}
\\
&\qquad \leqslant \check{\mathcal C}_9(s) (1+ |\tau| )^{s/3} \varepsilon^{2s/3},
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
&\bigl\| \nu^\varepsilon \operatorname{div} \mu_0^{1/2} \mathcal{L}_{G,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \underline{\nu} \, \operatorname{div} \mu_0^{1/2}
(\mathcal{L}_G^0)^{-1/2} \sin( \tau (\mathcal{L}_G^0)^{1/2}) \bigr\|_{G^s \to L_2}
\\
&\qquad \leqslant \check{\mathcal C}_{10}(s) (1+ |\tau| )^{s/3} \varepsilon^{2s/3}.
\end{aligned}
\end{equation*}
The constants $\check{\mathcal C}_9(s)$ and $\check{\mathcal C}_{10}(s)$ depend only on $|\mu_0|$, $|\mu_0^{-1}|$, $\|\nu\|_{L_\infty}$, $\|\nu^{-1}\|_{L_\infty}$, the parameters of the lattice $\Gamma$, and on $s$.
\end{corollary}
Combining \cite[Corollary 15.12]{DSu4}, Lemma \ref{lemma3}, and Corollary \ref{corollary3}, we deduce the following result; cf. the proof of Theorem \ref{cos_thrm3aa}.
\begin{theorem}
\label{th6_corrector}
Let $\mathcal{L}_\varepsilon$~be the operator~\emph{(\ref{L_eps})}, and let $\mathcal{L}^0$~be the effective operator~\emph{(\ref{L0})}. Suppose that Condition \emph{\ref{cond2}} is satisfied.
Then for $0\leqslant s \leqslant 3/2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{equation*}
\begin{aligned}
& \bigl\| {\mathbf D} \left(\mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl(I + \varepsilon \Lambda^\varepsilon \Pi_\varepsilon b({\mathbf D}) \bigr) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2}) \right)
\bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\\
&\qquad \leqslant \mathcal{C}_{11}(s) (1+ |\tau| )^{s/3} \varepsilon^{2s/3},
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
& \bigl\| g^\varepsilon b({\mathbf D}) \mathcal{L}_\varepsilon^{-1/2} \sin( \tau \mathcal{L}_\varepsilon^{1/2}) - \bigl( g^0 +
(\widetilde{g}^\varepsilon - g^0) \Pi_\varepsilon \bigr) b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{s} (\mathbb{R}^3) \to L_2 (\mathbb{R}^3)}
\\
&\qquad \leqslant \mathcal{C}_{12}(s) (1+ |\tau| )^{s/3} \varepsilon^{2s/3}.
\end{aligned}
\end{equation*}
The constants $\mathcal{C}_{11}(s)$ and $\mathcal{C}_{12}(s)$ depend only on the problem data \eqref{problem_data}, on~$s$, and on $c^\circ$.
\end{theorem}
\subsection{Approximation for the operator-valued functions of $\mathcal{L}_{J,\varepsilon}$}
Using Lemma \ref{lemma} and applying Theorems \ref{cos_thrm1}, \ref{cos_thrm2}, \ref{cos_thrm3a}, \ref{cos_thrm4},
\ref{cos_thrm3aa}, \ref{cos_thrm6} to the operator
$\widehat{\mathcal L}_\varepsilon$ with the initial coefficients $\mu_0, \eta(\mathbf{x})$ and the constant coefficient
$\widehat{\nu} = 2 |\mu_0^{-1}|^2 \|\eta^{-1}\|_{L_\infty}$, we obtain the following (combined) result.
\begin{theorem}
\label{cos_thrm1_J}
Let $\mathcal{L}_{J,\varepsilon}$ be the part of the operator~\eqref{L_eps} in the subspace $J(\mu_0)$, and let $\mathcal{L}_{J}^0$ be the part of the effective operator~\eqref{L0} in the subspace $J(\mu_0)$.
\noindent$1^\circ$.
For $\tau \in \mathbb{R}$ and $\varepsilon >0$ we have
\begin{gather}
\label{cos_thrm1_H^2_L2_est}
\bigl\| \cos( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^2 \to J} \leqslant
\widehat{C}_1 (1+ |\tau|) \varepsilon,
\\
\label{sin_thrm1_H^1_L2_est}
\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})
\bigr\|_{J^1 \to J} \leqslant \widehat{C}_2 (1+ |\tau|) \varepsilon.
\end{gather}
For $0 \leqslant s \leqslant 2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{gather}
\label{cos_thrm1_H^s_L2_est}
\bigl\| \cos( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^s \to J} \leqslant
\widehat{\mathcal{C}}_1 (s) (1+|\tau|)^{s/2} \varepsilon^{s/2},
\\
\label{sin_thrm1_H^s_L2_est}
\begin{split}
\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) D_j
- (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2}) D_j
\bigr\|_{J^s \to J}
\leqslant \widehat{\mathcal{C}}_2 (s) (1+ |\tau|)^{s/2} \varepsilon^{s/2},\quad j=1,2,3.
\end{split}
\end{gather}
The constants $\widehat{C}_1$ and $\widehat{C}_2$ are controlled in terms of the norms $|\mu_0|$, $|\mu_0^{-1}|$,
$\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$. The constants
$\widehat{\mathcal{C}}_1(s)$ and $\widehat{\mathcal{C}}_2 (s)$ depend on the same parameters and on~$s$.
\noindent$2^\circ$. Suppose that Condition \emph{\ref{cond1}} or Condition \emph{\ref{cond2}} is satisfied.
Then for $\tau \in \mathbb{R}$ and $\varepsilon >0$ we have
\begin{gather}
\label{cos_thrm1_H^3/2_L2_est}
\bigl\| \cos( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^{3/2} \to J} \leqslant
\widehat{C}_3 (1+ |\tau|)^{1/2} \varepsilon,
\\
\label{sin_thrm1_H^1/2_L2_est}
\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})
\bigr\|_{J^{1/2} \to J} \leqslant \widehat{C}_4 (1+ |\tau|)^{1/2} \varepsilon.
\end{gather}
For $0 \leqslant s \leqslant 3/2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{gather}
\label{cos111}
\bigl\| \cos( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) - \cos( \tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^s \to J} \leqslant
\widehat{\mathcal{C}}_3 (s) (1+|\tau|)^{s/3} \varepsilon^{2s/3},
\\
\label{sin111}
\begin{split}
\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2}) D_j
- (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2}) D_j
\bigr\|_{J^s \to J}
\leqslant \widehat{\mathcal{C}}_4 (s) (1+|\tau|)^{s/3} \varepsilon^{2s/3}, \quad j=1,2,3.
\end{split}
\end{gather}
Under Condition \emph{\ref{cond1}} the constants $\widehat{C}_3$ and $\widehat{C}_4$ are controlled in terms of the norms $|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$\emph{;} the constants $\widehat{\mathcal{C}}_3 (s)$ and
$\widehat{\mathcal{C}}_4 (s)$ depend on the same parameters and on $s$.
Under Condition \emph{\ref{cond2}} the constants depend also on $c^\circ$.
\end{theorem}
Similarly, using Lemmas \ref{lemma2}, \ref{lemma3} and applying Theorems \ref{th1_corrector}, \ref{th2_corrector}, \ref{th3_corrector}, \ref{th4_corrector}, \ref{th5_corrector}, and \ref{th6_corrector} to the operator
$\widehat{\mathcal L}_\varepsilon$ with the initial coefficients $\mu_0, \eta(\mathbf{x})$ and the constant coefficient
$\widehat{\nu} = 2 |\mu_0^{-1}|^2 \|\eta^{-1}\|_{L_\infty}$, we obtain the following (combined) result.
\begin{theorem}
\label{cos_thrm2_J}
Let $\mathcal{L}_{J,\varepsilon}$ be the part of the operator~\eqref{L_eps} in the subspace
$J(\mu_0)$ and let $\mathcal{L}_{J}^0$ be the part of the effective operator~\eqref{L0} in the subspace $J(\mu_0)$.
\noindent$1^\circ$.
For $\tau \in \mathbb{R}$ and $0< \varepsilon \leqslant 1$ we have
\begin{gather}
\label{sin_thrm1_corr1}
\begin{split}
&\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl(I + \varepsilon \mu_0^{-1/2} \Psi^\varepsilon \operatorname{curl} \mu_0^{-1/2} \bigr)(\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^2 \to H^1}
\\
&\qquad \leqslant \widehat{C}_7 (1+ |\tau|) \varepsilon,
\end{split}
\\
\label{sin_thrm1_corr2}
\begin{split}
&\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2} \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl( (\eta^0)^{-1} + \Sigma^\varepsilon \bigr) \operatorname{curl} \mu_0^{-1/2} (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})
\bigr\|_{J^2 \to L_2} \\
&\qquad \leqslant \widehat{C}_8 (1+ |\tau|) \varepsilon.
\end{split}
\end{gather}
For $0 \leqslant s \leqslant 2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{gather}
\label{sin_thrm1_corr3}
\begin{split}
&\bigl\| {\mathbf D} \left(\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl(I + \varepsilon \mu_0^{-1/2} \Psi^\varepsilon\Pi_\varepsilon \operatorname{curl} \mu_0^{-1/2}\bigr) (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})\right)
\bigr\|_{J^s \to L_2}
\\
&\qquad \leqslant \widehat{\mathcal C}_7(s) (1+ |\tau|)^{s/2} \varepsilon^{s/2},
\end{split}
\\
\label{sin_thrm1_corr4}
\begin{split}
&\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2} \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl( (\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon \bigr) \operatorname{curl} \mu_0^{-1/2} (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})
\bigr\|_{J^s \to L_2} \\
&\qquad \leqslant \widehat{\mathcal C}_8(s) (1+ |\tau|)^{s/2} \varepsilon^{s/2}.
\end{split}
\end{gather}
The constants $\widehat{C}_7$ and $\widehat{C}_8$ are controlled in terms of the norms $|\mu_0|$, $|\mu_0^{-1}|$,
$\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$. The constants
$\widehat{\mathcal{C}}_7(s)$ and $\widehat{\mathcal{C}}_8 (s)$ depend on the same parameters and on~$s$.
\noindent$2^\circ$. Suppose that Condition \emph{\ref{cond1}} or Condition \emph{\ref{cond2}} is satisfied.
Then for $\tau \in \mathbb{R}$ and $0< \varepsilon \leqslant 1$ we have
\begin{gather}
\label{sin_thrm1_corr5}
\begin{split}
&\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl(I + \varepsilon \mu_0^{-1/2} \Psi^\varepsilon \operatorname{curl} \mu_0^{-1/2}\bigr) (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^{3/2} \to H^1}
\\
&\qquad \leqslant \widehat{C}_9 (1+ |\tau|)^{1/2} \varepsilon,
\end{split}
\\
\label{sin_thrm1_corr6}
\begin{split}
&\bigl\| (\eta^\varepsilon )^{-1} \operatorname{curl} \mu_0^{-1/2} \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl( (\eta^0)^{-1} + \Sigma^\varepsilon \bigr) \operatorname{curl} \mu_0^{-1/2} (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})
\bigr\|_{J^{3/2} \to L_2} \\
&\qquad \leqslant \widehat{C}_{10} (1+ |\tau|)^{1/2} \varepsilon.
\end{split}
\end{gather}
For $0 \leqslant s \leqslant 3/2$, $\tau \in \mathbb{R}$, and $\varepsilon >0$ we have
\begin{gather}
\label{sin_thrm1_corr7}
\begin{split}
&\bigl\| {\mathbf D} \left(\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl(I + \varepsilon \mu_0^{-1/2} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} \mu_0^{-1/2}\bigr)(\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})\right)
\bigr\|_{J^s \to L_2}
\\
&\qquad \leqslant \widehat{\mathcal C}_9(s) (1+ |\tau|)^{s/3} \varepsilon^{2s/3},
\end{split}
\\
\label{sin_thrm1_corr8}
\begin{split}
&\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2} \mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl( (\eta^0)^{-1}+ \Sigma^\varepsilon \Pi_\varepsilon \bigr) \operatorname{curl} \mu_0^{-1/2} (\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})
\bigr\|_{J^s \to L_2} \\
&\qquad \leqslant \widehat{\mathcal C}_{10}(s) (1+ |\tau|)^{s/3} \varepsilon^{2 s/3}.
\end{split}
\end{gather}
Under Condition \emph{\ref{cond1}} the constants $\widehat{C}_9$ and $\widehat{C}_{10}$ are controlled in terms of the norms $|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$\emph{;} the constants $\widehat{\mathcal{C}}_9 (s)$ and
$\widehat{\mathcal{C}}_{10} (s)$ depend on the same parameters and on $s$.
Under Condition \emph{\ref{cond2}} these constants depend also on~$c^\circ$.
\end{theorem}
\begin{remark}
Tracking the dependence of the estimates on $\tau$, we can obtain qualified estimates for small $\varepsilon$ and large $|\tau|$, which is of independent interest.
\noindent{\emph{1)}} Under the assumptions of Theorem \emph{\ref{cos_thrm1_J}}$(1^\circ)$ or Theorem \emph{\ref{cos_thrm2_J}}$(1^\circ)$, we can take $\tau = O(\varepsilon^{-\alpha})$,
$0< \alpha < 1$. Then the norms in \eqref{cos_thrm1_H^2_L2_est}, \eqref{sin_thrm1_H^1_L2_est},
\eqref{sin_thrm1_corr1}, \eqref{sin_thrm1_corr2} are estimated by $O(\varepsilon^{1-\alpha})$, and the norms in \eqref{cos_thrm1_H^s_L2_est}, \eqref{sin_thrm1_H^s_L2_est}, \eqref{sin_thrm1_corr3}, \eqref{sin_thrm1_corr4} are of order $O(\varepsilon^{s(1-\alpha)/2})$.
\noindent{\emph{2)}} Under the assumptions of Theorem \emph{\ref{cos_thrm1_J}}$(2^\circ)$ or Theorem \emph{\ref{cos_thrm2_J}}$(2^\circ)$, we can take $\tau = O(\varepsilon^{-\alpha})$,
$0< \alpha < 2$. Then the norms in \eqref{cos_thrm1_H^3/2_L2_est}, \eqref{sin_thrm1_H^1/2_L2_est},
\eqref{sin_thrm1_corr5}, \eqref{sin_thrm1_corr6} are estimated by $O(\varepsilon^{1-\alpha/2})$, and the norms in \eqref{cos111}, \eqref{sin111}, \eqref{sin_thrm1_corr7}, \eqref{sin_thrm1_corr8} are of order $O(\varepsilon^{s(2-\alpha)/3})$.
\end{remark}
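For convenience, the arithmetic of exponents behind this remark can be written out explicitly. Assuming $\tau = \varepsilon^{-\alpha}$ and $0 < \varepsilon \leqslant 1$ (so that $1 \leqslant \varepsilon^{-\alpha}$), we have
\begin{align*}
(1+|\tau|)\,\varepsilon &\leqslant 2 \varepsilon^{-\alpha} \varepsilon = 2\,\varepsilon^{1-\alpha},
&
(1+|\tau|)^{s/2}\,\varepsilon^{s/2} &\leqslant 2^{s/2}\,\varepsilon^{s(1-\alpha)/2},
\\
(1+|\tau|)^{1/2}\,\varepsilon &\leqslant 2^{1/2}\,\varepsilon^{1-\alpha/2},
&
(1+|\tau|)^{s/3}\,\varepsilon^{2s/3} &\leqslant 2^{s/3}\,\varepsilon^{s(2-\alpha)/3},
\end{align*}
which yields the orders $O(\varepsilon^{1-\alpha})$, $O(\varepsilon^{s(1-\alpha)/2})$, $O(\varepsilon^{1-\alpha/2})$, and $O(\varepsilon^{s(2-\alpha)/3})$ from the remark.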
\subsection{The sharpness of the results}
Applying Theorem 13.6 from \cite{DSu2} and Theorem 15.15 from \cite{DSu4}, we arrive at the following statement confirming that, in the general case, Theorems \ref{cos_thrm1} and \ref{th1_corrector} are sharp regarding the type of the operator norm.
\begin{theorem}
\label{s<2_thrm}
Let $N_0 (\boldsymbol{\theta})$ be the operator defined by~\emph{(\ref{N0})}. Suppose that
$N_0 (\boldsymbol{\theta}_0) \ne 0$ at least for one point $\boldsymbol{\theta}_0 \in \mathbb{S}^{2}$. Then the following is true.
\noindent $1^\circ$. Let $0 \ne \tau \in \mathbb{R}$ and $0 \leqslant s < 2$. Then there does not exist a constant
$\mathcal{C} (\tau) > 0$ such that the estimate
\begin{equation}
\label{s<2_est_imp}
\bigl\| \cos(\tau \mathcal{L}_\varepsilon^{1/2}) - \cos(\tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s(\mathbb{R}^3) \to L_2(\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon
\end{equation}
holds for all sufficiently small $\varepsilon > 0$.
\noindent $2^\circ$. Let $0 \ne \tau \in \mathbb{R}$ and $0 \leqslant r < 1$. Then there does not exist a constant $\mathcal{C} (\tau) > 0$ such that the estimate
\begin{equation}
\label{sharp2}
\bigl\| \mathcal{L}_\varepsilon^{-1/2} \sin(\tau \mathcal{L}_\varepsilon^{1/2}) -
(\mathcal{L}^0)^{-1/2} \sin(\tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^r(\mathbb{R}^3) \to L_2(\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon
\end{equation}
holds for all sufficiently small $\varepsilon > 0$.
\noindent $3^\circ$. Let $0 \ne \tau \in \mathbb{R}$ and $0 \leqslant s < 2$. Then there does not exist a constant $\mathcal{C} (\tau) > 0$ such that the estimate
\begin{equation}
\label{sharp3}
\bigl\| \mathcal{L}_\varepsilon^{-1/2} \sin(\tau \mathcal{L}_\varepsilon^{1/2}) -
\bigl( I + \varepsilon \Lambda^\varepsilon \Pi_\varepsilon b({\mathbf D})\bigr) (\mathcal{L}^0)^{-1/2} \sin(\tau (\mathcal{L}^0)^{1/2}) \bigr\|_{H^s(\mathbb{R}^3) \to L_2(\mathbb{R}^3)} \leqslant \mathcal{C}(\tau) \varepsilon
\end{equation}
holds for all sufficiently small $\varepsilon > 0$.
\end{theorem}
By Remark \ref{rem2_5}, the condition $N_0 (\boldsymbol{\theta}_0) \ne 0$ is equivalent to the relations
$\gamma_1(\boldsymbol{\theta}_0) = \gamma_2(\boldsymbol{\theta}_0)$ and
$f(\boldsymbol{\theta}_0) \ne 0$, where $f(\boldsymbol{\theta})$ is defined by \eqref{Mjk}.
Now, from Theorem \ref{s<2_thrm} we deduce a similar result for the operator $\mathcal{L}_{J,\varepsilon}$ confirming the sharpness of Theorems
\ref{cos_thrm1_J}($1^\circ$) and \ref{cos_thrm2_J}$(1^\circ)$.
\begin{theorem}
\label{s<2_thrm_J}
Let $\mathcal{L}_{J,\varepsilon}$ be the part of the operator~\eqref{L_eps} in the subspace $J(\mu_0)$, and let $\mathcal{L}_{J}^0$ be the part of the effective operator~\eqref{L0} in the subspace $J(\mu_0)$. Let $N_0 (\boldsymbol{\theta})$ be the operator defined by~\emph{(\ref{N0})}. Suppose that $N_0 (\boldsymbol{\theta}_0) \ne 0$ at least for one point $\boldsymbol{\theta}_0 \in \mathbb{S}^{2}$.
\noindent $1^\circ$. Let $0 \ne \tau \in \mathbb{R}$ and $0 \leqslant s < 2$. Then there does not exist a constant $\widetilde{\mathcal{C}} (\tau) > 0$ such that the estimate
\begin{equation}
\label{s<2_est_imp_J}
\bigl\| \cos(\tau \mathcal{L}_{J,\varepsilon}^{1/2}) - \cos(\tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^s \to J} \leqslant
\widetilde{\mathcal C}(\tau) \varepsilon
\end{equation}
holds for all sufficiently small $\varepsilon > 0$.
\noindent $2^\circ$. Let $0 \ne \tau \in \mathbb{R}$ and $0 \leqslant r < 1$. Then there does not exist a constant $\widetilde{\mathcal{C}} (\tau) > 0$ such that the estimate
\begin{equation}
\label{sharp4}
\bigl\| \mathcal{L}_{J,\varepsilon}^{-1/2} \sin(\tau \mathcal{L}_{J,\varepsilon}^{1/2}) -
(\mathcal{L}^0_J)^{-1/2} \sin(\tau (\mathcal{L}^0_J)^{1/2}) \bigr\|_{J^r \to J} \leqslant
\widetilde{\mathcal C}(\tau) \varepsilon
\end{equation}
holds for all sufficiently small $\varepsilon > 0$.
\noindent $3^\circ$. Let $0 \ne \tau \in \mathbb{R}$ and $0 \leqslant s < 2$. Then there does not exist a constant $\widetilde{\mathcal{C}} (\tau) > 0$ such that the estimate
\begin{equation}
\label{sharp5}
\bigl\|\mathcal{L}_{J,\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{J,\varepsilon}^{1/2})
- \bigl(I + \varepsilon \mu_0^{-1/2} \Psi^\varepsilon \operatorname{curl} \mu_0^{-1/2} \bigr)(\mathcal{L}^0_J)^{-1/2} \sin( \tau (\mathcal{L}^0_J)^{1/2})
\bigr\|_{J^s \to H^1}
\leqslant \widetilde{\mathcal C}(\tau) \varepsilon
\end{equation}
holds for all sufficiently small $\varepsilon > 0$.
\end{theorem}
\begin{proof} Let us check statement $1^\circ$.
It suffices to assume that $3/2 \leqslant s <2$.
We argue by contradiction: suppose that for some $3/2 \leqslant s < 2$ and \hbox{$\tau \ne 0$} estimate
\eqref{s<2_est_imp_J} holds. By Corollary \ref{corollary}, the estimate
\begin{equation}
\label{s<2_est_imp_G}
\bigl\| \cos(\tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \cos(\tau (\mathcal{L}^0_G)^{1/2}) \bigr\|_{G^s \to G} \leqslant
\check{\mathcal C}(\tau) \varepsilon
\end{equation}
is also valid. According to Lemma \ref{lemma}, relations \eqref{s<2_est_imp_J} and \eqref{s<2_est_imp_G} imply
\eqref{s<2_est_imp} with the constant $\mathcal{C}(\tau) = \max\{\widetilde{\mathcal C}(\tau),\check{\mathcal C}(\tau)\}$. But this contradicts statement $1^\circ$ of Theorem \ref{s<2_thrm}.
Statement $2^\circ$ is proved similarly.
Let us check statement $3^\circ$. It suffices to assume that $3/2 \leqslant s <2$.
Suppose that for some $3/2 \leqslant s <2$ and \hbox{$\tau \ne 0$} estimate
\eqref{sharp5} is satisfied for sufficiently small $\varepsilon$.
By Corollary \ref{corollary25}, estimate \eqref{sin_est_corrector3} holds.
Then from Lemma \ref{lemma2} it follows that the estimate
\begin{equation}
\label{sharp6}
\bigl\|\mathcal{L}_{\varepsilon}^{-1/2} \sin( \tau \mathcal{L}_{\varepsilon}^{1/2})
- \bigl(I + \varepsilon \Lambda^\varepsilon b({\mathbf D}) \bigr)(\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^s \to H^1}
\leqslant {\mathcal C}(\tau) \varepsilon
\end{equation}
holds for sufficiently small $\varepsilon$.
It remains to take into account the following estimate proved in \cite[Section 14.7]{DSu4}:
\begin{equation}
\label{sharp7}
\bigl\| \varepsilon \Lambda^\varepsilon (I - \Pi_\varepsilon) b({\mathbf D}) (\mathcal{L}^0)^{-1/2} \sin( \tau (\mathcal{L}^0)^{1/2})
\bigr\|_{H^{3/2} \to H^1} \leqslant C \varepsilon,\quad 0< \varepsilon \leqslant 1.
\end{equation}
By \eqref{sharp6} and \eqref{sharp7}, we conclude that estimate \eqref{sharp3} is valid for sufficiently small
$\varepsilon$. But this contradicts statement $3^\circ$ of Theorem \ref{s<2_thrm}.
\end{proof}
Applying Theorem 15.17 from \cite{DSu4}, we obtain the following result confirming the sharpness of Theorems \ref{cos_thrm1} and \ref{th1_corrector} regarding the dependence of the estimates on $\tau$.
\begin{theorem}
\label{th_time_sharp}
Let $N_0 (\boldsymbol{\theta})$ be the operator defined by~\emph{(\ref{N0})}. Suppose that $N_0 (\boldsymbol{\theta}_0) \ne 0$ at least for one point $\boldsymbol{\theta}_0 \in \mathbb{S}^{2}$.
Then the following is true.
\noindent $1^\circ$. Let $s \geqslant 2$. Then there does not exist a positive function $\mathcal{C} (\tau)$ such that
$\lim_{\tau \to \infty} \mathcal{C}(\tau)/ |\tau| =0$ and estimate \eqref{s<2_est_imp} holds for $\tau \in {\mathbb R}$ and sufficiently small $\varepsilon>0$.
\noindent $2^\circ$. Let $r \geqslant 1$. Then there does not exist a positive function $\mathcal{C} (\tau)$ such that $\lim_{\tau \to \infty} \mathcal{C}(\tau)/ |\tau| =0$ and estimate \eqref{sharp2} holds for $\tau \in {\mathbb R}$ and sufficiently small $\varepsilon>0$.
\noindent $3^\circ$. Let $s \geqslant 2$. Then there does not exist a positive function $\mathcal{C} (\tau)$ such that
$\lim_{\tau \to \infty} \mathcal{C}(\tau)/ |\tau| =0$ and estimate \eqref{sharp3} holds for $\tau \in {\mathbb R}$
and sufficiently small $\varepsilon>0$.
\end{theorem}
Theorem \ref{th_time_sharp} implies a similar result for the operator $\mathcal{L}_{J,\varepsilon}$ confirming that Theorems
\ref{cos_thrm1_J}($1^\circ$) and \ref{cos_thrm2_J}($1^\circ$) are sharp regarding the dependence of the estimates on $\tau$.
\begin{theorem}
\label{th_time_sharp2}
Let $\mathcal{L}_{J,\varepsilon}$ be the part of the operator~\eqref{L_eps} in the subspace $J(\mu_0)$, and let $\mathcal{L}_{J}^0$ be the part of the effective operator~\eqref{L0} in the subspace $J(\mu_0)$. Let $N_0 (\boldsymbol{\theta})$ be the operator defined by~\emph{(\ref{N0})}. Suppose that
$N_0 (\boldsymbol{\theta}_0) \ne 0$ at least for one point $\boldsymbol{\theta}_0 \in \mathbb{S}^{2}$.
\noindent $1^\circ$. Let $s \geqslant 2$. Then there does not exist a positive function $\widetilde{\mathcal{C}} (\tau)$ such that $\lim_{\tau \to \infty} \widetilde{\mathcal{C}}(\tau)/ |\tau| =0$ and estimate \eqref{s<2_est_imp_J} holds for $\tau \in {\mathbb R}$ and sufficiently small $\varepsilon>0$.
\noindent $2^\circ$. Let $r \geqslant 1$. Then there does not exist a positive function $\widetilde{\mathcal{C}} (\tau)$ such that
$\lim_{\tau \to \infty} \widetilde{\mathcal{C}}(\tau)/ |\tau| =0$ and estimate \eqref{sharp4} holds for $\tau \in {\mathbb R}$ and sufficiently small $\varepsilon >0$.
\noindent $3^\circ$. Let $s \geqslant 2$. Then there does not exist a positive function $\widetilde{\mathcal{C}} (\tau)$ such that
$\lim_{\tau \to \infty} \widetilde{\mathcal{C}}(\tau)/ |\tau| =0$ and estimate \eqref{sharp5} holds for $\tau \in {\mathbb R}$ and sufficiently small $\varepsilon >0$.
\end{theorem}
\begin{proof}
Let us check statement $1^\circ$. We argue by contradiction. Suppose that for some
$s \geqslant 2$ there exists a positive function
$\widetilde{\mathcal{C}} (\tau)$ such that
$\lim_{\tau \to \infty} \widetilde{\mathcal{C}}(\tau)/ |\tau| =0$ and estimate \eqref{s<2_est_imp_J} holds for $\tau \in {\mathbb R}$ and sufficiently small $\varepsilon>0$.
By Corollary \ref{corollary}, the estimate
\begin{equation}
\label{s<2_est_imp_G2}
\bigl\| \cos(\tau \mathcal{L}_{G,\varepsilon}^{1/2}) - \cos(\tau (\mathcal{L}^0_G)^{1/2}) \bigr\|_{G^s \to G} \leqslant
\check{C}_3(1+ |\tau|)^{1/2} \varepsilon
\end{equation}
is also satisfied.
By Lemma \ref{lemma}, relations \eqref{s<2_est_imp_J} and \eqref{s<2_est_imp_G2} imply
\eqref{s<2_est_imp} with $\mathcal{C}(\tau) = \max\{\widetilde{\mathcal C}(\tau),\check{C}_3(1+|\tau|)^{1/2}\}$. We have $\lim_{\tau \to \infty} {\mathcal{C}}(\tau)/ |\tau| =0$.
But this contradicts statement $1^\circ$ of Theorem \ref{th_time_sharp}.
Statement $2^\circ$ is proved similarly.
Let us check statement $3^\circ$. Suppose that for some $s \geqslant 2$ there exists a positive function
$\widetilde{\mathcal{C}} (\tau)$ such that
$\lim_{\tau \to \infty} \widetilde{\mathcal{C}}(\tau)/ |\tau| =0$ and estimate \eqref{sharp5} holds for $\tau \in {\mathbb R}$ and sufficiently small $\varepsilon >0$.
By Corollary \ref{corollary25}, estimate \eqref{sin_est_corrector3} is satisfied.
Combining this with Lemma \ref{lemma2}, we conclude that \eqref{sharp6} holds with
$\mathcal{C}(\tau) = \max\{\widetilde{\mathcal C}(\tau),\check{C}_9(1+|\tau|)^{1/2}\}$. We have
$\lim_{\tau \to \infty} {\mathcal{C}}(\tau)/ |\tau| =0$. It remains to take \eqref{sharp7} into account.
Relations \eqref{sharp6} and \eqref{sharp7} imply estimate \eqref{sharp3}. But this contradicts statement $3^\circ$ of Theorem \ref{th_time_sharp}.
\end{proof}
\subsection{Examples}
Concrete examples of both situations were given in \cite[\S 4]{DSu3}.
1) Let $\Gamma = (2 \pi \mathbb{Z})^3$. Assume that $\mu_0 = \mathbf{1}$.
Suppose that the matrix $\eta(\mathbf{x})$ depends only on $x_1$ and is given by
\begin{equation*}
\eta(\mathbf{x}) = \begin{pmatrix}
\eta_1(x_1) & \eta_2(x_1) & 0 \\
\eta_2(x_1) & \eta_3(x_1) & 0 \\
0 & 0 & \eta_4(x_1)
\end{pmatrix},
\end{equation*}
where $\eta_j(x_1)$, $j = 1,2,3,4$, are $(2 \pi)$-periodic real-valued functions. It is assumed that the
matrix-valued function $\eta(\mathbf{x})$ is bounded and uniformly positive definite.
In \cite[Section 4.1]{DSu3}, it was shown that the functions $\eta_j(x_1)$, $j=1,2,3,4,$ can be chosen so that
$\gamma_1(\boldsymbol{\theta}_0) = \gamma_2(\boldsymbol{\theta}_0)$
and $\mu_1(\boldsymbol{\theta}_0) = -\mu_2(\boldsymbol{\theta}_0) \ne 0$
for some $\boldsymbol{\theta}_0 \in \mathbb{S}^2$.
Then $N_0(\boldsymbol{\theta}_0) \ne 0$. The general results (Theorems \ref{cos_thrm1_J}$(1^\circ)$ and \ref{cos_thrm2_J}$(1^\circ)$) apply, and they are sharp both with respect to the type of the operator norm and with respect to the dependence of the estimates on $\tau$.
2) Recall that some cases where $N(\boldsymbol{\theta}) \equiv 0$ were distinguished in Remark \ref{N=0}.
One more example borrowed from~\cite{Zh1} was discussed in \cite[Section 4.2]{DSu3}.
Suppose that $\mu_0 = \mathbf{1}$.
Let $\Gamma = (2\pi \mathbb{Z})^3$, and choose the cell centred at zero: $\Omega = (-\pi, \pi)^3$.
Let $B_1 = \{|\mathbf{x}| \leqslant 1\}$~be the unit ball, and let $B_\vartheta$~be the ball concentric with $B_1$ such that
$|B_\vartheta| = \vartheta |B_1|$, $0 < \vartheta < 1$. Let $\eta(\mathbf{x})$ be the $\Gamma$-periodic matrix-valued function given on the cell $\Omega$ by
\begin{equation*}
\eta(\mathbf{x}) = a(\mathbf{x})I, \qquad a(\mathbf{x}) = \left\lbrace \begin{aligned}
&\kappa , & &\text{for} \; \mathbf{x} \in B_\vartheta,\\
&1, & &\text{for} \; \mathbf{x} \in B_1 \setminus B_\vartheta,\\
&1 + \tfrac{3 \vartheta (\kappa -1)}{3 + (1 - \vartheta)(\kappa -1)}, & &\text{for} \; \mathbf{x} \in \Omega \setminus B_1,
\end{aligned} \right.
\end{equation*}
where $\kappa >0$.
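Note that the constant value of $a$ on $\Omega \setminus B_1$ can be rewritten as
\begin{equation*}
1 + \frac{3 \vartheta (\kappa -1)}{3 + (1 - \vartheta)(\kappa -1)} = \frac{3 + (1 + 2\vartheta)(\kappa -1)}{3 + (1 - \vartheta)(\kappa -1)},
\end{equation*}
and an elementary check shows that the numerator and the denominator are positive for all $\kappa > 0$ and $0 < \vartheta < 1$. Hence $a(\mathbf{x})$ is bounded and uniformly positive, so that $\eta(\mathbf{x}) = a(\mathbf{x}) I$ satisfies the standing assumptions on the coefficients.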
As was shown in \cite[Section 4.2]{DSu3}, in this example $N (\boldsymbol{\theta}) = 0$ for any $\boldsymbol{\theta}\in \mathbb{S}^2$.
In the case where $N (\boldsymbol{\theta}) \equiv 0$, the general results can be improved:
we can apply Theorems \ref{cos_thrm1_J}$(2^\circ)$ and \ref{cos_thrm2_J}$(2^\circ)$.
\section{Homogenization of the nonstationary Maxwell system}
\subsection{Statement of the problem\label{sec4.1}}
Suppose that the dielectric permittivity is given by the rapidly oscillating matrix $\eta^\varepsilon ({\mathbf x})$,
and the magnetic permeability is equal to the constant matrix $\mu_0$.
Suppose that $\eta({\mathbf x})$ and $\mu_0$ satisfy the assumptions of Subsection \ref{Subsection Operator L}.
We use the following notation for the physical fields:
${\mathbf u}_\varepsilon ({\mathbf x},\tau)$ is the intensity of the electric field;
${\mathbf w}_\varepsilon ({\mathbf x},\tau) = \eta^\varepsilon({\mathbf x}) {\mathbf u}_\varepsilon({\mathbf x},\tau)$ is the electric displacement vector;
${\mathbf v}_\varepsilon ({\mathbf x},\tau)$ is the intensity of the magnetic field;
${\mathbf z}_\varepsilon({\mathbf x},\tau) = \mu_0 {\mathbf v}_\varepsilon ({\mathbf x},\tau)$ is the magnetic displacement vector.
Consider the following Cauchy problem for the nonstationary Maxwell system:
\begin{equation}
\label{63}
\left\{
\begin{aligned}
&\partial_\tau {\mathbf u}_\varepsilon({\mathbf x},\tau) = (\eta^\varepsilon ({\mathbf x}))^{-1} \operatorname{curl}
{\mathbf v}_\varepsilon ({\mathbf x},\tau),
\quad \operatorname{div} \, \eta^\varepsilon( {\mathbf x}) {\mathbf u}_\varepsilon ({\mathbf x},\tau) =0,
\quad {\mathbf x} \in {\mathbb R}^3,\ \tau \in {\mathbb R};
\\
&\partial_\tau {\mathbf v}_\varepsilon({\mathbf x},\tau) = - \mu_0^{-1} \operatorname{curl} {\mathbf u}_\varepsilon ({\mathbf x},\tau),
\quad \operatorname{div} \, \mu_0 {\mathbf v}_\varepsilon ({\mathbf x},\tau) =0,
\quad {\mathbf x} \in {\mathbb R}^3,\ \tau \in {\mathbb R};
\\
&{\mathbf u}_\varepsilon({\mathbf x},0) = (P_\varepsilon {\mathbf f})({\mathbf x}),\quad {\mathbf v}_\varepsilon({\mathbf x},0) = {\boldsymbol \phi}({\mathbf x}), \quad {\mathbf x} \in {\mathbb R}^3.
\end{aligned}
\right.
\end{equation}
Here ${\boldsymbol \phi} \in L_2({\mathbb R}^3;{\mathbb C}^3)$ and $ \operatorname{div} \mu_0 {\boldsymbol \phi}({\mathbf x}) =0$
(this relation is understood in the sense of distributions). Next, ${\mathbf f} \in L_2({\mathbb R}^3;{\mathbb C}^3)$ and $P_\varepsilon$ is the orthogonal projection of the weighted space $L_2({\mathbb R}^3;{\mathbb C}^3; \eta^\varepsilon )$ onto the subspace
$$
\{ {\mathbf u} \in L_2({\mathbb R}^3;{\mathbb C}^3): \ \operatorname{div} \, \eta^\varepsilon ({\mathbf x}) {\mathbf u}({\mathbf x}) =0\}.
$$
The projection $P_\varepsilon$ acts as follows:
$(P_\varepsilon {\mathbf f})({\mathbf x}) = {\mathbf f}({\mathbf x}) - \nabla \omega_\varepsilon ({\mathbf x})$, where $\omega_\varepsilon$ is the solution of the equation
$\operatorname{div} \, \eta^\varepsilon \nabla \omega_\varepsilon = \operatorname{div} \, \eta^\varepsilon {\mathbf f}$ (understood in the generalized sense): $\omega_\varepsilon \in L_{2,\,\text{loc}}({\mathbb R}^3)$, $\nabla \omega_\varepsilon \in L_2({\mathbb R}^3;{\mathbb C}^3)$, and
$$
\intop_{{\mathbb R}^3} \langle \eta^\varepsilon ({\mathbf x}) ({\mathbf f}({\mathbf x}) - \nabla \omega_\varepsilon({\mathbf x})), \nabla \chi( {\mathbf x})\rangle \, d {\mathbf x}=0,
\quad \chi \in L_{2,\,\text{loc}} ({\mathbb R}^3),\ \nabla \chi \in L_2({\mathbb R}^3;{\mathbb C}^3).
$$
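Indeed, taking $\chi = \omega_\varepsilon$ in the last identity shows that the decomposition ${\mathbf f} = ({\mathbf f} - \nabla \omega_\varepsilon) + \nabla \omega_\varepsilon$ is orthogonal in the weighted space:
$$
\intop_{{\mathbb R}^3} \langle \eta^\varepsilon ({\mathbf x}) ({\mathbf f}({\mathbf x}) - \nabla \omega_\varepsilon({\mathbf x})), \nabla \omega_\varepsilon( {\mathbf x})\rangle \, d {\mathbf x}=0,
$$
so $P_\varepsilon {\mathbf f}$ is the element of the divergence-free subspace nearest to ${\mathbf f}$ in the norm of $L_2({\mathbb R}^3;{\mathbb C}^3; \eta^\varepsilon )$.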
\subsection{The homogenized Maxwell system\label{sec4.2}}
We use the following notation for the homogenized physical fields:
${\mathbf u}_0({\mathbf x},\tau)$ is the intensity of the electric field;
${\mathbf w}_0({\mathbf x},\tau) = \eta^0 {\mathbf u}_0({\mathbf x},\tau)$ is the electric displacement vector;
${\mathbf v}_0({\mathbf x},\tau)$ is the intensity of the magnetic field;
${\mathbf z}_0({\mathbf x},\tau) = \mu_0 {\mathbf v}_0({\mathbf x},\tau)$ is the magnetic displacement vector.
Here $\eta^0$ is the effective matrix defined in Subsection \ref{sec_effective}.
The homogenized problem is given by
\begin{equation}
\label{64}
\left\{
\begin{aligned}
&\partial_\tau {\mathbf u}_0({\mathbf x},\tau) = (\eta^0)^{-1} \operatorname{curl} {\mathbf v}_0({\mathbf x},\tau),
\quad \operatorname{div} \, \eta^0 {\mathbf u}_0({\mathbf x},\tau) =0, \quad {\mathbf x} \in {\mathbb R}^3,\ \tau \in {\mathbb R};
\\
&\partial_\tau {\mathbf v}_0({\mathbf x},\tau) = - \mu_0^{-1} \operatorname{curl} {\mathbf u}_0({\mathbf x},\tau),\quad
\operatorname{div} \, \mu_0 {\mathbf v}_0({\mathbf x},\tau) =0, \quad
{\mathbf x} \in {\mathbb R}^3,\ \tau \in {\mathbb R};
\\
&{\mathbf u}_0({\mathbf x},0) = (P_0 {\mathbf f})({\mathbf x}),\quad {\mathbf v}_0({\mathbf x},0) = {\boldsymbol \phi}({\mathbf x}), \quad {\mathbf x} \in {\mathbb R}^3.
\end{aligned}
\right.
\end{equation}
Here $P_0$ is the orthogonal projection of the weighted space $L_2({\mathbb R}^3;{\mathbb C}^3; \eta^0)$ onto the subspace
$$
\{ {\mathbf u} \in L_2({\mathbb R}^3;{\mathbb C}^3): \ \operatorname{div} \, \eta^0 {\mathbf u}({\mathbf x}) =0\}.
$$
The projection $P_0$ acts as follows:
$(P_0 {\mathbf f})({\mathbf x}) = {\mathbf f} ({\mathbf x}) - \nabla \omega_0({\mathbf x})$, where $\omega_0$ is the solution of the equation
$\operatorname{div} \, \eta^0 \nabla \omega_0 = \operatorname{div} \, \eta^0 {\mathbf f}$ (understood in the weak sense):
$\omega_0 \in L_{2,\,\text{loc}}({\mathbb R}^3)$, $\nabla \omega_0 \in L_2({\mathbb R}^3;{\mathbb C}^3)$, and
$$
\intop_{{\mathbb R}^3} \langle \eta^0 ({\mathbf f}({\mathbf x}) - \nabla \omega_0({\mathbf x})), \nabla \chi({\mathbf x})\rangle \, d{\mathbf x} =0,
\quad \chi \in L_{2,\,\text{loc}}({\mathbb R}^3),\ \nabla \chi \in L_2({\mathbb R}^3;{\mathbb C}^3).
$$
\begin{remark}
Since $P_\varepsilon {\mathbf f} = {\mathbf f} - \nabla \omega_\varepsilon$ and $P_0 {\mathbf f} = {\mathbf f} - \nabla \omega_0$, we have
$$
\operatorname{curl} P_\varepsilon {\mathbf f} = \operatorname{curl} P_0 {\mathbf f} = \operatorname{curl} {\mathbf f},
$$
which is understood in the sense of distributions.
\end{remark}
\subsection{Reduction to a second-order equation}
From \eqref{63} we obtain a second-order equation for ${\mathbf v}_\varepsilon$:
$$
\partial^2_\tau {\mathbf v}_\varepsilon ({\mathbf x},\tau) = - \mu_0^{-1} \operatorname{curl} \partial_\tau {\mathbf u}_\varepsilon ({\mathbf x},\tau)=
- \mu_0^{-1} \operatorname{curl} (\eta^\varepsilon ({\mathbf x}))^{-1}\operatorname{curl} {\mathbf v}_\varepsilon ({\mathbf x},\tau),
$$
with the initial conditions
$$
{\mathbf v}_\varepsilon ({\mathbf x},0) = {\boldsymbol \phi}({\mathbf x}), \quad
\partial_\tau {\mathbf v}_\varepsilon ({\mathbf x},0) = - \mu_0^{-1} \operatorname{curl} {\mathbf u}_\varepsilon ({\mathbf x},0) =
- \mu_0^{-1} \operatorname{curl} (P_\varepsilon {\mathbf f})({\mathbf x}) = - \mu_0^{-1} \operatorname{curl} {\mathbf f}({\mathbf x}).
$$
Thus, the magnetic intensity ${\mathbf v}_\varepsilon ({\mathbf x},\tau)$ is the generalized solution of the following Cauchy problem:
\begin{equation}
\label{51}
\left\{
\begin{aligned}
&\mu_0 \,\partial_\tau^2 {\mathbf v}_\varepsilon ({\mathbf x},\tau) = - \operatorname{curl} (\eta^\varepsilon(\mathbf{x}))^{-1} \operatorname{curl} {\mathbf v}_\varepsilon ({\mathbf x},\tau),
\quad \operatorname{div} \, \mu_0 {\mathbf v}_\varepsilon ({\mathbf x},\tau) =0, \quad {\mathbf x} \in {\mathbb R}^3,\ \tau \in {\mathbb R};
\\
& {\mathbf v}_\varepsilon ({\mathbf x},0) = {\boldsymbol \phi}({\mathbf x}),\quad \mu_0 \,\partial_\tau {\mathbf v}_\varepsilon ({\mathbf x},0) = {\boldsymbol \psi}({\mathbf x}),
\quad {\mathbf x} \in {\mathbb R}^3,
\end{aligned}
\right.
\end{equation}
where $\boldsymbol{\psi}:= - \operatorname{curl} {\mathbf f}$.
Other fields are expressed in terms of ${\mathbf v}_\varepsilon$ as follows:
\begin{equation}
\label{51a}
\begin{aligned}
{\mathbf z}_\varepsilon ( {\mathbf x},\tau) &= \mu_0 {\mathbf v}_\varepsilon ({\mathbf x},\tau),
\\
{\mathbf u}_\varepsilon ({\mathbf x},\tau) - {\mathbf u}_\varepsilon({\mathbf x},0) &= \int_0^\tau (\eta^\varepsilon ({\mathbf x}))^{-1} \operatorname{curl} {\mathbf v}_\varepsilon ({\mathbf x},\widetilde{\tau}) \, d\widetilde{\tau},
\\
{\mathbf w}_\varepsilon ({\mathbf x},\tau) - {\mathbf w}_\varepsilon({\mathbf x},0) &= \int_0^\tau \operatorname{curl} {\mathbf v}_\varepsilon ({\mathbf x},\widetilde{\tau}) \, d\widetilde{\tau}.
\end{aligned}
\end{equation}
We substitute $\mu_0^{1/2} {\mathbf v}_\varepsilon = \boldsymbol{\varphi}_\varepsilon$.
Then $ \boldsymbol{\varphi}_\varepsilon$ is the solution of the problem
\begin{equation*}
\left\{
\begin{aligned}
&\partial_\tau^2 \boldsymbol{\varphi}_\varepsilon ({\mathbf x},\tau) = - \mu_0^{-1/2} \!\operatorname{curl} (\eta^\varepsilon(\mathbf{x}))^{-1}\! \operatorname{curl} \mu_0^{-1/2} \boldsymbol{\varphi}_\varepsilon({\mathbf x},\tau),
\ \;
\operatorname{div} \mu_0^{1/2} \!\boldsymbol{\varphi}_\varepsilon({\mathbf x},\tau) =0, \ \; {\mathbf x} \in {\mathbb R}^3,\ \tau \in {\mathbb R};
\\
& \boldsymbol{\varphi}_\varepsilon({\mathbf x},0) = \mu_0^{1/2}{\boldsymbol \phi}({\mathbf x}),\quad
\partial_\tau \boldsymbol{\varphi}_\varepsilon ({\mathbf x},0) = \mu_0^{-1/2}{\boldsymbol \psi}({\mathbf x}), \quad {\mathbf x} \in {\mathbb R}^3.
\end{aligned}
\right.
\end{equation*}
The solution is represented as
$$
\boldsymbol{\varphi}_\varepsilon = \cos (\tau {\mathcal L}_{J,\varepsilon}^{1/2}) \mu_0^{1/2}{\boldsymbol \phi}
+ {\mathcal L}_{J,\varepsilon}^{-1/2} \sin (\tau {\mathcal L}_{J,\varepsilon}^{1/2}) \mu_0^{-1/2}{\boldsymbol \psi}.
$$
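Here we use the standard spectral representation of the solution of an abstract hyperbolic problem: for a nonnegative selfadjoint operator $A$, the function $\boldsymbol{\varphi}(\tau) = \cos (\tau A^{1/2}) a + A^{-1/2} \sin (\tau A^{1/2}) b$ satisfies
$$
\partial_\tau^2 \boldsymbol{\varphi}(\tau) = - A \boldsymbol{\varphi}(\tau), \quad \boldsymbol{\varphi}(0) = a, \quad \partial_\tau \boldsymbol{\varphi}(0) = b;
$$
it is applied with $A = {\mathcal L}_{J,\varepsilon}$, $a = \mu_0^{1/2}{\boldsymbol \phi}$, and $b = \mu_0^{-1/2}{\boldsymbol \psi}$.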
Hence,
\begin{equation}
\label{51b}
{\mathbf v}_\varepsilon (\cdot,\tau) = \mu_0^{-1/2} \cos (\tau {\mathcal L}_{J,\varepsilon}^{1/2}) \mu_0^{1/2}{\boldsymbol \phi}
+ \mu_0^{-1/2}{\mathcal L}_{J,\varepsilon}^{-1/2} \sin (\tau {\mathcal L}_{J,\varepsilon}^{1/2}) \mu_0^{-1/2}{\boldsymbol \psi}.
\end{equation}
Similarly, the homogenized Maxwell system \eqref{64} is reduced to the following problem for
${\mathbf v}_0$:
\begin{equation*}
\left\{
\begin{aligned}
& \mu_0\, \partial_\tau^2 {\mathbf v}_0({\mathbf x},\tau) = - \operatorname{curl} (\eta^0)^{-1} \operatorname{curl} {\mathbf v}_0({\mathbf x},\tau),
\quad
\operatorname{div} \, \mu_0 {\mathbf v}_0({\mathbf x},\tau) =0, \quad {\mathbf x} \in {\mathbb R}^3,\ \tau \in {\mathbb R};
\\
& {\mathbf v}_0({\mathbf x},0) = {\boldsymbol \phi}({\mathbf x}),\quad \mu_0 \, \partial_\tau {\mathbf v}_0({\mathbf x},0) = {\boldsymbol \psi}({\mathbf x}),
\quad {\mathbf x} \in {\mathbb R}^3.
\end{aligned}
\right.
\end{equation*}
Other homogenized fields are expressed in terms of ${\mathbf v}_0$ as follows:
\begin{equation}
\label{53a}
\begin{aligned}
& {\mathbf z}_0({\mathbf x},\tau) = \mu_0 {\mathbf v}_0({\mathbf x},\tau),
\\
& {\mathbf u}_0({\mathbf x},\tau) - {\mathbf u}_0({\mathbf x},0) = \int_0^\tau (\eta^0)^{-1} \operatorname{curl} {\mathbf v}_0({\mathbf x},\widetilde{\tau}) \, d\widetilde{\tau},
\\
& {\mathbf w}_0({\mathbf x},\tau) - {\mathbf w}_0({\mathbf x},0) = \int_0^\tau \operatorname{curl} {\mathbf v}_0({\mathbf x},\widetilde{\tau}) \, d\widetilde{\tau}.
\end{aligned}
\end{equation}
Similarly to \eqref{51b}, we have
\begin{equation}
\label{54}
{\mathbf v}_0(\cdot, \tau)= \mu_0^{-1/2} \cos(\tau (\mathcal{L}_{J}^0)^{1/2}) \mu_0^{1/2} {\boldsymbol \phi} + \mu_0^{-1/2}
(\mathcal{L}_{J}^0)^{-1/2} \sin( \tau (\mathcal{L}_{J}^0)^{1/2}) \mu_0^{-1/2} {\boldsymbol \psi}.
\end{equation}
\subsection{The results on homogenization of the Maxwell system}
From Theorem~\ref{cos_thrm1_J}($1^\circ$) we deduce the following result.
\begin{theorem}
\label{thrm_51}
Under the assumptions of Subsections \emph{\ref{sec4.1}, \ref{sec4.2}}, the following statements hold for the magnetic intensity
${\mathbf v}_\varepsilon({\mathbf x},\tau)$ and the magnetic displacement vector ${\mathbf z}_\varepsilon({\mathbf x},\tau)$.
\noindent $1^\circ$. Let ${\boldsymbol \phi}, {\mathbf f} \in H^2({\mathbb R}^3;{\mathbb C}^3)$, and
$\operatorname{div} \mu_0 {\boldsymbol \phi} =0$.
Then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align}
\label{55}
\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_1(1+|\tau|)\varepsilon
\left( \| {\boldsymbol \phi} \|_{H^2({\mathbb R}^3)} +
\| {\mathbf f} \|_{H^2({\mathbb R}^3)}\right),
\\
\label{55aaa}
\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_2 (1+|\tau|)\varepsilon
\left( \| {\boldsymbol \phi} \|_{H^2({\mathbb R}^3)} + \| {\mathbf f} \|_{H^2({\mathbb R}^3)}\right).
\end{align}
The constants $\mathfrak{C}_1$ and $\mathfrak{C}_2$ depend on $|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$.
\noindent $2^\circ$. Let ${\boldsymbol \phi}, {\mathbf f} \in H^s({\mathbb R}^3;{\mathbb C}^3)$, where $0\leqslant s \leqslant 2$, and $\operatorname{div} \mu_0 {\boldsymbol \phi} =0$.
Then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align}
\label{55a}
\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_3(s) (1+|\tau|)^{s/2}\varepsilon^{s/2}
\left( \| {\boldsymbol \phi} \|_{H^s({\mathbb R}^3)} + \| {\mathbf f} \|_{H^s({\mathbb R}^3)}\right),
\\
\nonumber
\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_4(s) (1+|\tau|)^{s/2}\varepsilon^{s/2}
\left( \| {\boldsymbol \phi} \|_{H^s({\mathbb R}^3)} + \| {\mathbf f} \|_{H^s({\mathbb R}^3)}\right).
\end{align}
The constants $\mathfrak{C}_3(s)$ and $\mathfrak{C}_4(s)$ depend on
$|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$,
the parameters of the lattice $\Gamma$, and on $s$.
\noindent $3^\circ$. If ${\boldsymbol \phi}, {\mathbf f} \in L_2({\mathbb R}^3;{\mathbb C}^3)$, and
$\operatorname{div} \mu_0 {\boldsymbol \phi} =0$, then
\begin{align}
\label{57}
\lim_{\varepsilon \to 0} \| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} =0,\quad \tau \in {\mathbb R},
\\
\label{57b}
\lim_{\varepsilon \to 0} \| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} =0,\quad \tau \in {\mathbb R}.
\end{align}
\end{theorem}
\begin{proof}
Inequality \eqref{55} follows directly from \eqref{cos_thrm1_H^2_L2_est},
\eqref{sin_thrm1_H^1_L2_est} and representations \eqref{51b}, \eqref{54}. Similarly, estimate \eqref{55a} is deduced from \eqref{cos_thrm1_H^s_L2_est} and \eqref{sin_thrm1_H^s_L2_est}.
The results for ${\mathbf z}_\varepsilon$ directly follow from the results for ${\mathbf v}_\varepsilon$, since
${\mathbf z}_\varepsilon = \mu_0 {\mathbf v}_\varepsilon$ and ${\mathbf z}_0 = \mu_0 {\mathbf v}_0$.
Estimate \eqref{55a} with $s=0$ shows that the norm on the left is uniformly bounded provided that
${\boldsymbol \phi}, {\mathbf f} \in L_2({\mathbb R}^3;{\mathbb C}^3)$ (and $\operatorname{div} \mu_0 {\boldsymbol \phi} =0$).
Applying \eqref{55a} with $s=0$ together with \eqref{55}, using the density of $H^2$ in $L_2$ and the density of the set $\{ {\mathbf u} \in H^2: \operatorname{div} \mu_0 {\mathbf u} =0\}$ in the space $\{ {\mathbf u} \in L_2:
\operatorname{div} \mu_0 {\mathbf u} =0\}$,
and invoking the Banach--Steinhaus theorem, we obtain \eqref{57}.
Relation \eqref{57b} follows from \eqref{57}.
\end{proof}
Theorems \ref{s<2_thrm_J}($1^\circ,2^\circ$) and \ref{th_time_sharp2}($1^\circ, 2^\circ$) show that, in the general case, estimates \eqref{55} and \eqref{55aaa} are sharp both with respect to the type of the norm and with respect to the dependence on
$\tau$.
However, under some additional assumptions, statements $1^\circ, 2^\circ$ of Theorem \ref{thrm_51} can be improved. This follows from Theorem~\ref{cos_thrm1_J}($2^\circ$).
\begin{theorem}
\label{thrm_52}
Suppose that the assumptions of Theorem \emph{\ref{thrm_51}} are satisfied.
Suppose that Condition \emph{\ref{cond1}} or Condition \emph{\ref{cond2}} is satisfied.
\noindent $1^\circ$.
Let ${\boldsymbol \phi}, {\mathbf f} \in H^{3/2}({\mathbb R}^3;{\mathbb C}^3)$, and $\operatorname{div} \mu_0 {\boldsymbol \phi} =0$.
Then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align*}
\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_5 (1+|\tau|)^{1/2}\varepsilon
\left( \| {\boldsymbol \phi} \|_{H^{3/2}({\mathbb R}^3)} +
\| {\mathbf f} \|_{H^{3/2}({\mathbb R}^3)}\right),
\\
\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_6 (1+|\tau|)^{1/2}\varepsilon
\left( \| {\boldsymbol \phi} \|_{H^{3/2}({\mathbb R}^3)} + \| {\mathbf f} \|_{H^{3/2}({\mathbb R}^3)}\right).
\end{align*}
Under Condition \emph{\ref{cond1}} the constants $\mathfrak{C}_5$ and $\mathfrak{C}_6$ depend on $|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$.
Under Condition \emph{\ref{cond2}} these constants depend also on $c^\circ$.
\noindent $2^\circ$. Let ${\boldsymbol \phi}, {\mathbf f} \in H^s({\mathbb R}^3;{\mathbb C}^3)$, where $0\leqslant s \leqslant 3/2$, and $\operatorname{div} \mu_0 {\boldsymbol \phi} =0$.
Then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align*}
\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_7(s) (1+|\tau|)^{s/3}\varepsilon^{2s/3}
\left( \| {\boldsymbol \phi} \|_{H^s({\mathbb R}^3)} + \| {\mathbf f} \|_{H^s({\mathbb R}^3)}\right),
\\
\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2({\mathbb R}^3)} \leqslant \mathfrak{C}_8(s) (1+|\tau|)^{s/3}\varepsilon^{2s/3}
\left( \| {\boldsymbol \phi} \|_{H^s({\mathbb R}^3)} + \| {\mathbf f} \|_{H^s({\mathbb R}^3)}\right).
\end{align*}
Under Condition \emph{\ref{cond1}} the constants $\mathfrak{C}_7(s)$ and $\mathfrak{C}_8(s)$ depend on
$|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$,
the parameters of the lattice $\Gamma$, and $s$.
Under Condition \emph{\ref{cond2}} these constants depend also on $c^\circ$.
\end{theorem}
\begin{remark}
In the case where ${\boldsymbol \phi} \ne 0$, we are not able to derive an
approximation for the fields ${\mathbf u}_\varepsilon$ and ${\mathbf w}_\varepsilon$ from the known results for the operator ${\mathcal L}_{J,\varepsilon}$: ${\mathbf u}_\varepsilon$ and ${\mathbf w}_\varepsilon$ are expressed in terms of the derivatives of ${\mathbf v}_\varepsilon$ \emph{(}see \eqref{51a}\emph{)}, but we do not have an approximation for the operator
$\cos (\tau {\mathcal L}_{J,\varepsilon}^{1/2})$ in the energy norm.
\end{remark}
In the case where ${\boldsymbol \phi}=0$, we obtain approximations for all four fields by applying
Theorem \ref{cos_thrm2_J}($1^\circ$).
\begin{theorem}
\label{thrm_53}
Under the assumptions of Subsections \emph{\ref{sec4.1} and \ref{sec4.2}}, suppose in addition that
${\boldsymbol \phi}=0$.
\noindent $1^\circ$.
If ${\mathbf f} \in H^3({\mathbb R}^3;{\mathbb C}^3)$, then for $\tau \in {\mathbb R}$ and $0< \varepsilon\leqslant 1$ we have the following approximations for the fields ${\mathbf v}_\varepsilon$ and ${\mathbf z}_\varepsilon$ in the energy norm\emph{:}
\begin{align}
\label{4.20}
\bigl\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{H^1({\mathbb R}^3)}
\leqslant {\mathfrak C}_9 (1+ |\tau|) \varepsilon \| {\mathbf f} \|_{H^3({\mathbb R}^3)},
\\
\label{4.21}
\bigl\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon
\operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{H^1({\mathbb R}^3)}
\leqslant {\mathfrak C}_{10} (1+ |\tau|) \varepsilon \| {\mathbf f} \|_{H^3({\mathbb R}^3)},
\\
\label{4.22}
\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon(\cdot, \tau) - ((\eta^0)^{-1} + \Sigma^\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{11} (1+ |\tau|) \varepsilon \| {\mathbf f} \|_{H^3({\mathbb R}^3)}.
\end{align}
If ${\mathbf f} \in H^3({\mathbb R}^3;{\mathbb C}^3)$, then for $\tau \in {\mathbb R}$ and $0< \varepsilon \leqslant 1$ we have the following approximations for the fields ${\mathbf u}_\varepsilon$ and ${\mathbf w}_\varepsilon$ in $L_2$\emph{:}
\begin{align}
\label{4.23}
\bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl({\mathbf 1}+ \Sigma_\circ^\varepsilon \bigr) \bigl( {\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{11} |\tau|(1+ |\tau|) \varepsilon \| {\mathbf f} \|_{H^3({\mathbb R}^3)},
\\
\label{4.24}
\bigl\| \bigl( {\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0)\bigr) -
\widetilde{\eta}^\varepsilon (\eta^0)^{-1} \bigl( {\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{12} |\tau|(1+ |\tau|) \varepsilon \| {\mathbf f} \|_{H^3({\mathbb R}^3)}.
\end{align}
The constants $\mathfrak{C}_9, \mathfrak{C}_{10}, \mathfrak{C}_{11}, \mathfrak{C}_{12}$ depend on $|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$.
\noindent $2^\circ$.
Let ${\mathbf f} \in H^{1+s}({\mathbb R}^3;{\mathbb C}^3)$, where $0\leqslant s \leqslant 2$. Then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align*}
\bigl\| {\mathbf D} \bigl({\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)
\bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{13}(s) (1+ |\tau|)^{s/2} \varepsilon^{s/2} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)},
\\
\bigl\| {\mathbf D} \bigl( {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)\bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{14}(s) (1+ |\tau|)^{s/2} \varepsilon^{s/2} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)},
\\
\bigl\| (\eta^\varepsilon )^{-1} \operatorname{curl} {\mathbf v}_\varepsilon (\cdot, \tau) - ( (\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{15}(s) (1+ |\tau|)^{s/2} \varepsilon^{s/2} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)}.
\end{align*}
If ${\mathbf f} \in H^{1+s}({\mathbb R}^3;{\mathbb C}^3)$, where $0\leqslant s \leqslant 2$, then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align*}
\begin{split}
\bigl\| \bigl({\mathbf u}_\varepsilon (\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl(I+ \Sigma_\circ^\varepsilon \Pi_\varepsilon \bigr) \bigl({\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
\\
\leqslant {\mathfrak C}_{15}(s) |\tau|(1+ |\tau|)^{s/2} \varepsilon^{s/2} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)},
\end{split}
\\
\begin{split}
\bigl\| \bigl( {\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0)\bigr) -
\bigl( I +(\widetilde{\eta}^\varepsilon (\eta^0)^{-1} - {\mathbf 1})\Pi_\varepsilon \bigr) \bigl( {\mathbf w}_0(\cdot, \tau) -
{\mathbf w}_0(\cdot, 0) \bigr)
\bigr\|_{L_2({\mathbb R}^3)}
\\
\leqslant {\mathfrak C}_{16}(s) |\tau|(1+ |\tau|)^{s/2} \varepsilon^{s/2} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)}.
\end{split}
\end{align*}
The constants $\mathfrak{C}_{13}(s), \mathfrak{C}_{14}(s), \mathfrak{C}_{15}(s), \mathfrak{C}_{16}(s)$ depend on
$|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, the parameters of the lattice $\Gamma$, and $s$.
\noindent $3^\circ$.
If ${\mathbf f} \in H^{1}({\mathbb R}^3;{\mathbb C}^3)$, then for $\tau \in {\mathbb R}$ we have
$$
\begin{aligned}
&\lim_{\varepsilon \to 0} \bigl\| {\mathbf D} \bigl( {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)\bigr\|_{L_2({\mathbb R}^3)} =0,
\\
&\lim_{\varepsilon \to 0}
\bigl\| {\mathbf D} \bigl( {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau)
\bigr) \bigr\|_{L_2({\mathbb R}^3)}
=0,
\\
&\lim_{\varepsilon \to 0}
\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon(\cdot, \tau) - ( (\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau)
\bigr\|_{L_2({\mathbb R}^3)}
=0,
\\
&\lim_{\varepsilon \to 0}
\bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl(I+ \Sigma_\circ^\varepsilon \Pi_\varepsilon \bigr) \bigl({\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
=0,
\\
&\lim_{\varepsilon \to 0}
\bigl\| \bigl( {\mathbf w}_\varepsilon (\cdot, \tau) - {\mathbf w}_\varepsilon (\cdot,0)\bigr) -
\bigl(I +(\widetilde{\eta}^\varepsilon (\eta^0)^{-1} - {\mathbf 1})\Pi_\varepsilon \bigr) \bigl( {\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)} =0.
\end{aligned}
$$
\end{theorem}
\begin{proof}
Estimates \eqref{4.20} and \eqref{4.22} follow directly from
\eqref{sin_thrm1_corr1}, \eqref{sin_thrm1_corr2}, representations
\eqref{51b}, \eqref{54}, and the relation $\boldsymbol{\psi} = - \operatorname{curl} {\mathbf f}$.
Inequality \eqref{4.21} follows from \eqref{4.20} and the relations ${\mathbf z}_\varepsilon = \mu_0 {\mathbf v}_\varepsilon$, ${\mathbf z}_0 = \mu_0 {\mathbf v}_0$.
Next, integrating \eqref{4.22} in time and taking \eqref{51a} and \eqref{53a} into account, we obtain \eqref{4.23}.
We have used that $\Sigma({\mathbf x}) \eta^0 = \Sigma_\circ({\mathbf x})$. Estimate \eqref{4.24} follows from \eqref{4.23} and the relations ${\mathbf w}_\varepsilon = \eta^\varepsilon {\mathbf u}_\varepsilon$,
${\mathbf w}_0 = \eta^0 {\mathbf u}_0$.
Statement $2^\circ$ is proved similarly with the help of \eqref{sin_thrm1_corr3} and \eqref{sin_thrm1_corr4}.
Statement $3^\circ$ follows from statement $2^\circ$, by the Banach--Steinhaus theorem.
\end{proof}
In \cite[Lemma 8.6]{BSu2}, it was shown that the weak
$(L_2 \to L_2)$-limit of the operator $[Y^\varepsilon] \Pi_{\varepsilon}$ is equal to zero if $Y({\mathbf x})$ is a
$\Gamma$-periodic matrix-valued function with zero mean value.
Using this property, we deduce the following corollary from statement $3^\circ$ of Theorem \ref{thrm_53}.
\begin{corollary}
If ${\boldsymbol \phi}=0$ and ${\mathbf f} \in H^{1}({\mathbb R}^3;{\mathbb C}^3)$, then for $\tau \in {\mathbb R}$ and $\varepsilon \to 0$ we have
$$
\begin{aligned}
& {\mathbf v}_\varepsilon(\cdot, \tau) \to {\mathbf v}_0(\cdot, \tau)\ \text{weakly in}\ H^1({\mathbb R}^3;{\mathbb C}^3);
\\
&{\mathbf z}_\varepsilon(\cdot, \tau) \to {\mathbf z}_0(\cdot, \tau)\ \text{weakly in}\ H^1({\mathbb R}^3;{\mathbb C}^3);
\\
&(\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon (\cdot, \tau) \to (\eta^0)^{-1}
\operatorname{curl} {\mathbf v}_0(\cdot, \tau)\ \text{weakly in}\ L_2({\mathbb R}^3;{\mathbb C}^3);
\\
&{\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0) \to {\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot,0)\ \text{weakly in}\ L_2({\mathbb R}^3;{\mathbb C}^3);
\\
& {\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0) \to {\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot,0)\ \text{weakly in}\ L_2({\mathbb R}^3;{\mathbb C}^3).
\end{aligned}
$$
\end{corollary}
Theorems \ref{s<2_thrm_J}($3^\circ$) and \ref{th_time_sharp2}($3^\circ$) show that, in the general case, estimates \eqref{4.20} and \eqref{4.21} are sharp both with respect to the type of the norm and with respect to the dependence on $\tau$. However,
statements $1^\circ$ and $2^\circ$ of Theorem \ref{thrm_53} can be improved under some additional assumptions. The following result is deduced from Theorem \ref{cos_thrm2_J}($2^\circ$).
\begin{theorem}
\label{thrm_54}
Under the assumptions of Subsections \emph{\ref{sec4.1}, \ref{sec4.2}}, suppose in addition that
${\boldsymbol \phi}=0$.
Suppose that Condition \emph{\ref{cond1}} or Condition \emph{\ref{cond2}} is satisfied.
\noindent $1^\circ$.
If ${\mathbf f} \in H^{5/2}({\mathbb R}^3;{\mathbb C}^3)$, then for $\tau \in {\mathbb R}$ and $0< \varepsilon\leqslant 1$ we have the following approximations for the fields ${\mathbf v}_\varepsilon$ and ${\mathbf z}_\varepsilon$ in the energy norm\emph{:}
\begin{align*}
\bigl\| {\mathbf v}_\varepsilon (\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon
\operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{H^1({\mathbb R}^3)}
\leqslant {\mathfrak C}_{17} (1+ |\tau|)^{1/2} \varepsilon \| {\mathbf f} \|_{H^{5/2}({\mathbb R}^3)},
\\
\bigl\| {\mathbf z}_\varepsilon (\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon
\operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{H^1({\mathbb R}^3)}
\leqslant {\mathfrak C}_{18} (1+ |\tau|)^{1/2} \varepsilon \| {\mathbf f} \|_{H^{5/2}({\mathbb R}^3)},
\\
\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon (\cdot, \tau) - ((\eta^0)^{-1} + \Sigma^\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{19} (1+ |\tau|)^{1/2} \varepsilon \| {\mathbf f} \|_{H^{5/2}({\mathbb R}^3)}.
\end{align*}
If ${\mathbf f} \in H^{5/2}({\mathbb R}^3;{\mathbb C}^3)$, then for $\tau \in {\mathbb R}$ and $0< \varepsilon\leqslant 1$ we have the following approximations for the fields ${\mathbf u}_\varepsilon$ and ${\mathbf w}_\varepsilon$ in $L_2$\emph{:}
\begin{align*}
\bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl({\mathbf 1} + \Sigma_\circ^\varepsilon \bigr) \bigl({\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{19} |\tau|(1+ |\tau|)^{1/2} \varepsilon \| {\mathbf f} \|_{H^{5/2}({\mathbb R}^3)},
\\
\bigl\| \bigl( {\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0)\bigr) -
\widetilde{\eta}^\varepsilon (\eta^0)^{-1} \bigl({\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{20} |\tau|(1+ |\tau|)^{1/2} \varepsilon \| {\mathbf f} \|_{H^{5/2}({\mathbb R}^3)}.
\end{align*}
Under Condition \emph{\ref{cond1}}, the constants $\mathfrak{C}_{17}, \mathfrak{C}_{18}, \mathfrak{C}_{19}, \mathfrak{C}_{20}$ depend on $|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, and the parameters of the lattice $\Gamma$. Under Condition \emph{\ref{cond2}}, these constants depend also on $c^\circ$.
\noindent $2^\circ$.
Let ${\mathbf f} \in H^{1+s}({\mathbb R}^3;{\mathbb C}^3)$, where $0\leqslant s \leqslant 3/2$.
Then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align*}
\bigl\| {\mathbf D} \bigl( {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)
\bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{21}(s) (1+ |\tau|)^{s/3} \varepsilon^{2s/3} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)},
\\
\bigl\| {\mathbf D} \bigl( {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)
\bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{22}(s) (1+ |\tau|)^{s/3} \varepsilon^{2s/3} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)},
\\
\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon(\cdot, \tau) - ( (\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{L_2({\mathbb R}^3)}
\leqslant {\mathfrak C}_{23}(s) (1+ |\tau|)^{s/3} \varepsilon^{2s/3} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)}.
\end{align*}
If ${\mathbf f} \in H^{1+s}({\mathbb R}^3;{\mathbb C}^3)$, where $0\leqslant s \leqslant 3/2$, then for $\tau \in {\mathbb R}$ and $\varepsilon>0$ we have
\begin{align*}
\begin{split}
\bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl(I+ \Sigma_\circ^\varepsilon \Pi_\varepsilon \bigr) \bigl( {\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
\\
\leqslant {\mathfrak C}_{23}(s) |\tau|(1+ |\tau|)^{s/3} \varepsilon^{2s/3} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)},
\end{split}
\\
\begin{split}
\bigl\| \bigl({\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0)\bigr) -
\bigl(I +(\widetilde{\eta}^\varepsilon (\eta^0)^{-1} - {\mathbf 1})\Pi_\varepsilon \bigr) \bigl( {\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
\\
\leqslant {\mathfrak C}_{24}(s) |\tau|(1+ |\tau|)^{s/3} \varepsilon^{2s/3} \| {\mathbf f} \|_{H^{1+s}({\mathbb R}^3)}.
\end{split}
\end{align*}
Under Condition \emph{\ref{cond1}}, the constants $\mathfrak{C}_{21}(s), \mathfrak{C}_{22}(s), \mathfrak{C}_{23}(s), \mathfrak{C}_{24}(s)$ depend on
$|\mu_0|$, $|\mu_0^{-1}|$, $\|\eta\|_{L_\infty}$, $\|\eta^{-1}\|_{L_\infty}$, the parameters of the lattice $\Gamma$, and $s$. Under Condition \emph{\ref{cond2}}, these constants depend also on $c^\circ$.
\end{theorem}
\begin{remark}
$1^\circ$. In the estimates from Theorems \emph{\ref{thrm_51}}, \emph{\ref{thrm_52}}, \emph{\ref{thrm_53}}, \emph{\ref{thrm_54}}, the norm $\| {\mathbf f} \|_{H^s}$ can be replaced by $\| \operatorname{curl} {\mathbf f} \|_{H^{s-1}}$,
because these theorems are deduced from the results for problem \eqref{51} with the initial data
$\boldsymbol{\psi} =- \operatorname{curl} {\mathbf f}$.
$2^\circ$. Tracking the dependence of the estimates on $\tau$ allows us to obtain qualified error bounds for small $\varepsilon$ and large $\tau$. Under the assumptions of Theorem \emph{\ref{thrm_51}($1^\circ$)} we have
$$
\begin{aligned}
&\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{1-\alpha}),
\\
& \| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{1-\alpha}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <1.
\end{aligned}
$$
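These rates follow by direct substitution: if $|\tau| \leqslant c\, \varepsilon^{-\alpha}$ with $0< \alpha <1$, then the right-hand side of \eqref{55} admits the bound
$$
\mathfrak{C}_1 (1+|\tau|)\varepsilon \leqslant \mathfrak{C}_1 (\varepsilon + c\, \varepsilon^{1-\alpha}) = O(\varepsilon^{1-\alpha}), \quad \varepsilon \to 0.
$$
The remaining rates in this remark are obtained in the same way from the factors $(1+|\tau|)^{s/2}\varepsilon^{s/2}$, $(1+|\tau|)^{1/2}\varepsilon$, and $(1+|\tau|)^{s/3}\varepsilon^{2s/3}$ in the corresponding estimates.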
Under the assumptions of Theorem \emph{\ref{thrm_51}($2^\circ$)} we have
$$
\begin{aligned}
&\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{(1-\alpha)s/2}),
\\
&\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{(1-\alpha)s/2}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <1.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_52}($1^\circ$)} we have
$$
\begin{aligned}
&\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{1-\alpha/2}),
\\
&\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{1-\alpha/2}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <2.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_52}($2^\circ$)} we have
$$
\begin{aligned}
&\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{(2-\alpha) s/3}),
\\
&\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) \|_{L_2} = O(\varepsilon^{(2-\alpha)s/3}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <2.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_53}($1^\circ$)} for ${\mathbf v}_\varepsilon$ and ${\mathbf z}_\varepsilon$ we have
$$
\begin{aligned}
&\bigl\| {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon
\operatorname{curl} {\mathbf v}_0(\cdot, \tau)
\bigr\|_{H^1({\mathbb R}^3)}
= O(\varepsilon^{1- \alpha}),
\\
& \bigl\| {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon
\operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{H^1({\mathbb R}^3)}
= O(\varepsilon^{1- \alpha}),
\\
&\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon(\cdot, \tau) - ((\eta^0)^{-1} + \Sigma^\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau)\|_{L_2({\mathbb R}^3)}=
O(\varepsilon^{1- \alpha}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <1.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_53}($1^\circ$)} for ${\mathbf u}_\varepsilon$ and
${\mathbf w}_\varepsilon$ we have
$$
\begin{aligned}
& \bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon (\cdot,0)\bigr) -
\bigl({\mathbf 1} + \Sigma_\circ^\varepsilon \bigr) \bigl({\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{1- 2 \alpha}),
\\
& \bigl\| \bigl( {\mathbf w}_\varepsilon (\cdot, \tau) - {\mathbf w}_\varepsilon (\cdot,0)\bigr) -
\widetilde{\eta}^\varepsilon (\eta^0)^{-1} \bigl( {\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{1- 2 \alpha}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <1/2.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_53}($2^\circ$)} for ${\mathbf v}_\varepsilon$ and ${\mathbf z}_\varepsilon$ we have
$$
\begin{aligned}
& \bigl\| {\mathbf D} \bigl( {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)\bigr\|_{L_2({\mathbb R}^3)} = O(\varepsilon^{(1- \alpha)s/2}),
\\
& \bigl\| {\mathbf D} \bigl( {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)
\bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{(1- \alpha)s/2}),
\\
& \bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon(\cdot, \tau) - ((\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau)
\bigr\|_{L_2({\mathbb R}^3)}=
O(\varepsilon^{(1- \alpha)s/2}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <1.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_53}($2^\circ$)} for ${\mathbf u}_\varepsilon$ and
${\mathbf w}_\varepsilon$ we have
$$
\begin{aligned}
& \bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl({\mathbf 1} + \Sigma_\circ^\varepsilon \bigr) \bigl({\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{(1- \alpha)s/2 - \alpha}),
\\
& \bigl\| \bigl( {\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0)\bigr) -
\widetilde{\eta}^\varepsilon (\eta^0)^{-1} \bigl( {\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{(1- \alpha)s/2 - \alpha}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <\frac{s}{s+2}.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_54}($1^\circ$)} for ${\mathbf v}_\varepsilon$ and ${\mathbf z}_\varepsilon$ we have
$$
\begin{aligned}
& \bigl\| {\mathbf D} \bigl( {\mathbf v}_\varepsilon (\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)\bigr\|_{L_2({\mathbb R}^3)} = O( \varepsilon^{1- \alpha/2}),
\\
&\bigl\| {\mathbf D} \bigl( {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) - \varepsilon \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{1- \alpha/2}),
\\
&\bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon(\cdot, \tau) - ((\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau)\bigr\|_{L_2({\mathbb R}^3)}=
O(\varepsilon^{1- \alpha /2}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <2.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_54}($1^\circ$)} for ${\mathbf u}_\varepsilon$ and
${\mathbf w}_\varepsilon$ we have
$$
\begin{aligned}
&\bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl({\mathbf 1} + \Sigma_\circ^\varepsilon \bigr) \bigl({\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{1- 3 \alpha/2}),
\\
& \bigl\| \bigl( {\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0)\bigr) -
\widetilde{\eta}^\varepsilon (\eta^0)^{-1} \bigl( {\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{1- 3 \alpha/2}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <2/3.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_54}($2^\circ$)} for ${\mathbf v}_\varepsilon$ and ${\mathbf z}_\varepsilon$ we have
$$
\begin{aligned}
& \bigl\| {\mathbf D} \bigl( {\mathbf v}_\varepsilon(\cdot, \tau) - {\mathbf v}_0(\cdot, \tau) - \varepsilon \mu_0^{-1} \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)\bigr\|_{L_2({\mathbb R}^3)} = O(\varepsilon^{(2- \alpha)s/3}),
\\
& \bigl\| {\mathbf D} \bigl( {\mathbf z}_\varepsilon(\cdot, \tau) - {\mathbf z}_0(\cdot, \tau) -\varepsilon \Psi^\varepsilon \Pi_\varepsilon \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr)
\bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{(2- \alpha)s/3}),
\\
& \bigl\| (\eta^\varepsilon)^{-1} \operatorname{curl} {\mathbf v}_\varepsilon(\cdot, \tau) - ((\eta^0)^{-1} + \Sigma^\varepsilon \Pi_\varepsilon) \operatorname{curl} {\mathbf v}_0(\cdot, \tau) \bigr\|_{L_2({\mathbb R}^3)}=
O(\varepsilon^{(2- \alpha)s/3}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <2.
\end{aligned}
$$
Under the assumptions of Theorem \emph{\ref{thrm_54}($2^\circ$)} for ${\mathbf u}_\varepsilon$ and
${\mathbf w}_\varepsilon$ we have
$$
\begin{aligned}
&\bigl\| \bigl({\mathbf u}_\varepsilon(\cdot, \tau) - {\mathbf u}_\varepsilon(\cdot,0)\bigr) -
\bigl( {\mathbf 1} + \Sigma_\circ^\varepsilon \bigr) \bigl({\mathbf u}_0(\cdot, \tau) - {\mathbf u}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{(2- \alpha)s/3 - \alpha}),
\\
& \bigl\| \bigl({\mathbf w}_\varepsilon(\cdot, \tau) - {\mathbf w}_\varepsilon(\cdot,0)\bigr) -
\widetilde{\eta}^\varepsilon (\eta^0)^{-1} \bigl({\mathbf w}_0(\cdot, \tau) - {\mathbf w}_0(\cdot, 0) \bigr) \bigr\|_{L_2({\mathbb R}^3)}
= O(\varepsilon^{(2- \alpha)s/3 - \alpha}),
\\
&\qquad \text{for}\ \tau = O(\varepsilon^{-\alpha}),\ 0< \alpha <\frac{2s}{s+3}.
\end{aligned}
$$
\end{remark}
Andrea Agnelli (born in Turin, Piedmont) is an Italian entrepreneur and sports executive.
He was president of the Turin football club Juventus from 2010, as well as a director of the companies Exor and Stellantis. On 28 November 2022, he resigned as president of Juventus.
Biography
Education and career
The son of Umberto Agnelli and Allegra Caracciolo, he was educated at St Clare's International College in Oxford and at Bocconi University in Milan.
After his studies, he took on a variety of roles abroad and in Italy, working in marketing for companies such as Piaggio and Auchan. From 2001 to 2004 he worked for Philip Morris International. In 2007 he founded a financial holding company, Lamse S.p.A., of which he is managing director.
He also cultivates a passion for golf, and in 2008 became a member of the Royal Park Golf & Country Club I Roveri. He was later appointed a federal councillor of the Italian Golf Federation.
He has also kept up ties with the Fiat group, founded by his great-grandfather Giovanni Agnelli. From 2005 to 2006 he worked on strategic development at IFIL. He has since sat on Fiat's board of directors, and he is also a member of UEFA's executive committee.
President of Juventus
Like his father (who was also president of the club between 1955 and 1962), Andrea Agnelli inherited a strong passion for Juventus, where he had already worked as an assistant in the commercial department.
He became the new president of the Piedmontese club, succeeding the Frenchman Jean-Claude Blanc, although he only formally took up the role some time afterwards.
He made marketing development a priority, diversified the club's activities, notably into the hotel sector, and created the club's museum. The most visible change was probably the redesign of the Juventus logo in 2017. The 2011-2020 period was also marked by success at the national level, after five turbulent years. His recruitment strategy relied on trading young players (buying them with a view to selling at a profit) and targeted star players with strong marketing potential, such as Paul Pogba in 2012 and Cristiano Ronaldo in 2018. Juventus's market capitalization illustrates this trajectory: from 162 million euros in 2010, it reached 1.25 billion euros in 2020.
European Club Association
From 2017 to 2021 he was president of the European Club Association.
He resigned following the announcement of the Super League project, which the association opposed.
European Football Super League
He was among the initiators of the European football Super League project, chaired by Florentino Pérez, and is one of the organization's four vice-presidents.
For the daily L'Équipe, which helped create the European Champion Clubs' Cup, Agnelli is "one of the men doing the most harm to the universality of football".
Michel Platini describes him as "the right person if you consider football to be a business".
Private life
He married Emma Winter, with whom he has two children: Baya Agnelli and Giacomo Agnelli.
He points to his children's experience to justify his vision of football. According to him, "They don't have the patience to sit through a 90-minute match; we have to adapt to the habits of tomorrow's supporters." This reading of the aspirations of potential future customers reinforced his belief in a Super League bringing together teams of stars.
Considered a close friend of UEFA president Aleksander Čeferin, who is godfather to one of his daughters, he is said to have concealed from him until the very end his plans for a private competition outside the European federation.
Notes and references
Related articles
Agnelli family
Fiat
Juventus Football Club
Born in Turin
Born in December 1975
Stellantis Group people
21st-century Italian people
Italian businessmen
Bocconi University alumni
Fiat
Italian football executives
Juventus FC presidents
Agnelli family
NEW INDIE MUSIC: 10 SONGS TO BE GRATEFUL FOR THANKSGIVING
by Jess Grant
Photo by Samuel Rios on Unsplash
The countdown to Thanksgiving is on, but before I dig out my eating pants, it's time for the latest edition of We Are: The Guard's New Indie Music. That's right – I'm pleased to say I've managed to take a break from sobbing on my kitchen floor to Adele's 30 to curate the best songs this side of "I Drink Wine." Just don't forget to send me some mashed potatoes in return for serving you the following tracks from Wallice, Jensen McRae, Big Thief, and seven other favorites!
WALLICE – WISDOM TOOTH
Having recently signed to Dirty Hit – the musical home to the likes of The 1975 and Rina Sawayama – Los Angeles act Wallice is back today with the crunching "Wisdom Tooth." Produced by We Are: The Guard favorite Marinelli, "Wisdom Tooth" is a coming-of-age anthem set ablaze by fuzz and teenage angst, with Wallice comparing the loss of her relationship to the loss of her third molars: "You got me so messed up/I'm so bad at love/You're like a wisdom tooth/You hurt so much out of the blue."
JENSEN MCRAE – MY EGO DIES AT THE END
In January, Jensen McRae went viral on Twitter with a Phoebe Bridgers parody song about getting the COVID-19 vaccine. Since then, the Santa Monica singer-songwriter has released a debut EP and collaborated with the likes of Joy Oladokun and Chiiild, and today, Jensen is rounding out 2021 with "My Ego Dies at the End." Produced alongside Rahki, it's an anguished ode to the loss of self that a lot of us have felt over recent months, with Jensen's mellifluous croon coming set against a lilt of Americana.
BIG THIEF – TIME ESCAPING
Coinciding with the announcement of their double album Dragon New Warm Mountain I Believe In You – a 20-song collection due out on February 11th – Brooklyn's Big Thief has shared "Time Escaping." The follow-up to October's "Change" is a ramshackling expedition through Adrianne Lenker's musings on time, nature, and existence. "Everything, everything, everything for free/As it all eventually/Turns to dust and petal/Molten rock and meadow," sings Adrianne against a backdrop of careening riffs.
OLIVER MALCOLM – ROLLING STONE
Oliver Malcolm is embracing his inner bad boy on his latest single "Rolling Stone." Having won the blogosphere over with his debut EP Are You Living in the Real World? – an eight-song collection that featured hits such as "Helen" and "Skywalker" – the Swedish-born, Britain-based artist is continuing to hone his strange breed of soul on "Rolling Stone." With groovy licks and Vangelis-esque electronics underpinning his rasp, "Rolling Stone" sounds like the workings of a twisted rock star in the making.
FKA TWIGS (FEAT. CENTRAL CEE) – MEASURE OF A MAN
It's been over two years since FKA twigs released MAGDALENE. While we're still holding out for further news on the follow-up that she reportedly recorded while in lockdown, Tahliah Barnett is nevertheless making her return today with "Measure of a Man." Written for the upcoming movie The King's Man, "Measure of a Man" is a lush, theatrical listen that sounds like a trip-hop answer to James Bond. "Only you can truly understand/The measure of a hero is the measure of a man," sings Tahliah. Enjoy!
100 GECS – MEMEME
Just under a year on from releasing their Christmas song "sympathy 4 the grinch," St. Louis outfit 100 gecs is back today with a far less festive bop in the form of the frenetic "mememe." The first single to be lifted from their forthcoming album 10000 gecs finds Dylan Brady and Laura Les doubling down on their chaotic glitchcore, with 100 gecs coming for undeserving lovers in the absolute rip of a chorus: "You'll never really know, know-know-know, know-know-know/Anything about me, me-me-me, me-me-me."
GORDI – GRASS IS BLUE (DOLLY PARTON COVER)
Earlier this year, Gordi teamed up with Alex Lahey on "Dino's," and today, the Australian singer-songwriter is back with an absolutely gorgeous cover of Dolly Parton's "The Grass Is Blue." "I wanted to cover it because Dolly funded the Moderna vaccine and is an all-around queen," Gordi says. While the original is a string-swept country lament, listen as Gordi transforms it into a resounding, piano-driven indie hymn, with Sophie Payten plunging into the song's bottomless pit of despair. Just gorgeous!
ALICE GLASS – BABY TEETH
Halloween is over, but Alice Glass is keeping the spooky season going with "BABY TEETH." The latest single to be taken from Alice's forthcoming debut album PREY//IV – due out on January 28th – "BABY TEETH" is a throbbing exploration of trauma. "'Baby Teeth' is about embracing despair," says Alice. "It understands that violence against the vulnerable is inevitable, and it probably always will be." With Alice's vocals shattering like glass against pulsating laser beams, "BABY TEETH" is a darkcore essential.
JACKIE – MY BEST YEARS
The last few years have been a blur for so many of us, but Canada's jackie is reminding us that there are better times ahead on the ecstatic "My Best Years." Featured on jackie's forthcoming EP Hey Angel, "My Best Years" is about that moment when you decide to rise above the toxic human beings in your life. "My best years are callin'/My fears are far behind/And I loved you, but I'm sorry/You can't have me anymore," belts Jackie Mohr on the rhapsodic rocker – the kind made for stadium-sized sing-alongs.
HORSEGIRL – BILLY
Introducing Horsegirl, the Chicago outfit making their debut on We Are: The Guard today with the clamorous "Billy." "There was a period of last year where the three of us spent every day together writing and recording," says Horsegirl. "It was during this time, when we practically lived in Penelope's basement, that 'Billy' was written." With three-part harmonies fusing with detuned guitars, "Billy" is a tumultuous din that submerges you in fuzz and distortion until there's no room for other thoughts.
Have a fantastic Thanksgiving, everyone! xox
Jess Grant is a frustrated writer hailing from London, England. When she isn't tasked with disentangling her thoughts from her brain and putting them on paper, Jess can generally be found listening to The Beatles, or cooking vegetarian food.
Shortly after birth the Princess Royal had been whisked upstairs to somewhat different surroundings – the attic story, far from frescoed staircases and damask chambers – to forge an intimate relationship with a mother of two named Mrs Muttlebury, who had been selected as her wet-nurse.
But Mrs Muttlebury remained somewhat bewildered by the honour done her. "She told Mama she had not the least notion of anything she was to do," recorded Lady Charlotte's daughter Sophia, "and begged her to tell her…" She was surprised to hear she must provide a maid – "I suppose from a notion of having people do everything for her," commented Miss Sophia. "Mama told her of several other expenses, viz providing her own washing, always wearing silk gowns morning and evening…" The royal baby should come into contact only with superior materials – tussore and brocade and Mechlin lace for ruffles, as supplied by Lady Charlotte.
It was a world unto itself, that of the Princess Royal and Mrs Muttlebury. The wet-nurse was allowed no visitors, not even her own children, to divert her from her duty.
Mrs Muttlebury stopped breastfeeding her charge after six months, but continued working as a nanny for the royal family until 1770, for £200 a year. The Queen also paid school expenses for one of her sons.
* she was the royal governess.
International fans get first glimpse of actual costumes worn in Gentleman Jack at Halifax museum
12 of the costumes worn by Suranne Jones, Sophie Rundle and others are on display at Bankfield Museum
A couple of Gentleman Jack superfans were some of the first people to see a display of costumes used in the show at a Halifax museum.
Shantel Smith, from North Carolina in the USA, and Jane Monballiu from Antwerp in Belgium, were invited to Bankfield Museum for a sneak preview of the new collection, which opened yesterday (Friday).
Fans can see 12 costumes actually worn by actors in the BBC show on display until Saturday October 26.
They are on loan from the production company behind the Sally Wainwright series and include the sleek black dress and top hat worn by Suranne Jones' Anne Lister as well as the puffy pink gown Sophie Rundle's Ann Walker often bustles around in.
Jane and Shantel are two of a handful of international superfans who have arrived in Halifax over the past few weeks to make pilgrimages to places visited in the show and attend some of the various Gentleman Jack events taking place in the area.
The 12 costumes on display at Bankfield Museum in Halifax
Jane, 26, who has even bought an annual pass to Shibden Hall, said: "I knew of Anne Lister way before the series started and I had been admiring Suranne's and Sally's work for years so to see them come together and do something about Anne Lister was just so exciting.
"I immediately fell in love with the series and just had to come here and experience it all for myself."
Shantel, 37, said: "I knew nothing of Suranne Jones, Sally or Anne Lister so I saw the show on HBO.
"I had already planned a trip to London in May which was like three weeks after it premiered so at that time I was like 'Oh my gosh, I'm going to be right down the street, let's drive up to Halifax for a day.
"I did that in May, fell in love with the place, started reading more about Anne Lister, started reading the books. So outside of the TV show the whole Anne Lister diaries - I fell in love with it - her story, being a lesbian back then, the way she kept her diaries. I was just hooked."
Anne Lister's black dress and Ann Walker's puffy pink gown
Shantel is due to fly back to the USA in four days but she said she will push that return flight back again if Calderdale Museums put on another event like the costume launch.
Costume designer Tom Pye visited Bankfield Museum's vast collection of authentic 19th Century clothing - which are on display alongside the show costumes - when he created the costumes for Gentleman Jack.
He said: "I really wanted to see as many pieces of original clothing as I could so I made appointments at museums with good fashion collections in Bath, Chertsey, Bankfield Museum in Halifax, Winchester as well as the John Bright collection in Cosprop, London.
"It's so much better seeing [fashion collections] in person. True fabrics and construction. In your hands you can see how they were made inside out."
The exhibition is free to enter at Bankfield Museum during museum opening hours (10am to 4pm, Tuesday to Saturday).
For more information visit museums.calderdale.gov.uk
Delias baracasa is a butterfly species in the family Pieridae (the whites), subfamily Pierinae.
Delias baracasa was described in 1890 by Semper, G.
Whites
In Living Color: Andy Warhol and Contemporary Printmaking
from the Collections of Jordan D. Schnitzer and His Family Foundation
Andy Warhol (1928-1987) depicted the world with the volume turned up. Employing a seemingly endless palette, his work has challenged our perceptions of popular culture, politics, and consumerism for more than fifty years. Warhol was the central figure of American Pop Art, a genre that emerged in the late 1950s in reaction to the heroism of Abstract Expressionism. For Pop artists, social and political turbulence coupled with unprecedented consumerism meant that art was no longer about the persona of the heroic individual artist, as it had been in the years immediately following World War II. Warhol and his contemporaries sought to eradicate the notion of the "genius artist" and downplay the role of originality in art, adopting mechanical means of generating images, such as screen-printing, which theoretically allowed for an endless production of images. In drawing inspiration from the rapidly changing world around them, Pop artists sought to be more inclusive in their subjects, and more aware of the day-to-day conditions of contemporary existence.
Spanning three decades of Warhol's career, this exhibition features some of the artist's most iconic screen prints, including his portraits of Marilyn Monroe and Mao Zedong, the splashy camouflage series, and the controversial Electric Chair portfolio. Drawn exclusively from the rich collections of Jordan Schnitzer and his Family Foundation, In Living Color is divided into five sections: experimentation, emotion, experience, subversion, and attitude. In each, Warhol's work is placed in conversation with other artists of the postwar era who use color as a tool to shape how we interpret and respond to images.
Organized by the Joslyn Art Museum, Omaha, Nebraska
An audio guide produced by Karin Campbell, curator at the Joslyn Art Museum, with contributions by John Hutcheson, master printer and instructor of printmaking at UNF, accompanies the exhibition.
In Living Color: Stop 301
(American, 1928-1987)
Camouflage, 1987
Collection of Jordan D. Schnitzer
(American, born Germany, 1888-1976)
Homage to the Square, 1967
Louisiana Bendolph
(American, b. 1960)
Triangles (After Annie E. Pettway), 2005
Color aquatint, spit bite aquatint, and soft ground etching
Loretta Bennett
Forever (For Old Lady Sally), 2006
(American, born 1936)
Sinjerli Variations, 1977
Lithographs and screen prints
(Canadian, b. 1932)
Locus, 1972
Aquatints
Mao, 1972
Collection of the Jordan Schnitzer Family Foundation
Pop Shop V, 1989
Cliché: North American Indian (Red), 1995
Color lithograph and screen print
Sulfur Sails, 1969
Sunset, 1972
Madame Butterfly, 2000
Electric Chair, 1971
Bauhaus artist and teacher Josef Albers was instrumental in bringing ideas of European Modernism to America. His 1963 book Interaction of Color provided one of the most comprehensive analysis of the function and perception of color to date. Albers is renowned for his compositions that explore the relationships of color through a single, simple form, usually the square. His series Homage to the Square, produced from 1949 until his death, used a single geometric shape to systematically explore the vast range of visual effects that could be achieved through color and spatial relationships alone.
Josef Albers in his studio, Orange, Connecticut, 1968-1969. Silver gelatin print. Courtesy the Josef and Anni Albers Foundation. Photograph by Sedat Pakay.
In the late 1960s, John Baldessari began combining Pop Art's use of imagery from mass media with Conceptual Art's use of language. Early in his career, Baldessari incorporated images and texts in his photo-based art. By appropriating photographs from advertising and movie stills, he went on to juxtapose, edit, and crop them in conjunction with written texts. The layered, often humorous compositions, where colored dots mask figures, carry disparate connotations, underscoring how relative meaning can be.
Courtesy of the artist and Marian Goodman Gallery.
Commonly recognized for her work with quilts, Louisiana Bendolph is among the younger generation of quilt makers. Her abstract style consists of asymmetrical shapes, bright colors, and framed edges, resulting in images that are stunning gateways into both the past and the future. Her work was included in the national touring exhibition Gee's Bend: The Architecture of the Quilt.
Louisiana Bendolph in the Paulson Bott Press Studio, 2007. Courtesy of the Artist and Paulson Bott Press.
New York-based artist Ross Bleckner is known for painting a spectrum of subjects-from pulsating lines in his resurrection of Op Art in the 1980s to the magnified cellular structures of autoimmune diseases in the 1990s.
Photo copyright: David Seidner. Courtesy: Mary Boone Gallery, New York.
Using the body as a primary form, internationally acclaimed Louise Bourgeois explored the full range of the human condition. A Paris native, her early work is associated with Surrealism as seen in the integration of fantastic elements into her prints and sculptures. Bourgeois moved to New York in 1938, where she focused primarily on sculpture, crafting biomorphic forms. Her complex body of work includes poetic drawings to room-sized installations.
Louise Bourgeois in 1980. Photo: Mark Setteducati, © The Easton Foundation/Licensed by VAGA, NY.
Chuck Close is renowned for his highly inventive techniques of painting the human face and is best known for his large-scale, photo-based portrait paintings. In 1988, Close was paralyzed following a rare spinal artery collapse; he continues to paint using a brush-holding device strapped to his wrist and forearm. His practice extends beyond painting to encompass printmaking, photography, and, most recently, tapestries based on Polaroids.
Photo by: Gianfranco Gorgoni.
West Coast American painter Richard Diebenkorn came to define the California School of Abstract Expressionism during the early 1950s. His seductive colors and surfaces, found in works teetering between figuration and abstraction, are most commonly associated with Abstract Expressionism and the Bay Area Figurative Movement of the 1950s and 1960s.
Photo credit is Leo Holub.
Sam Francis is regarded as one of the leading interpreters of color and light in postwar America. After graduating from California Berkeley in 1950 with a degree in art, Francis moved to Paris, where he would go on to be named by Time magazine as "the hottest American painter in Paris these days." The Southern California climate-from its sun, sea, and skies-pervades his vibrant oils and translucent watercolors, where abstract forms float sensuously and weightlessly.
Sam Francis in Tokyo studio, c. 1957. Photo by Francois-Rene Roland, courtesy Sam Francis Foundation, California.
Helen Frankenthaler was among the most influential artists of the mid-twentieth century. Influenced by Abstract Expressionist painting practices, particularly those of Jackson Pollock and Franz Kline, Frankenthaler invented the "soak-stain" technique, in which she poured turpentine-thinned paint onto canvas, producing luminous color washes that appeared to merge with the canvas and deny any hint of three-dimensional illusionism. Her breakthrough use of paint on canvas gave rise to Color Field Painting.
Helen Frankenthaler in front of 'Freefall', Tyler Graphics Ltd., Mount Kisco, New York, 1993. Photographer: Marabeth Cohen-Tyler. National Gallery of Australia, Canberra. Gift of Kenneth Tyler 2002. Image courtesy of the Helen Frankenthaler Foundation.
Keith Haring rose to prominence in the early 1980s with his graffiti drawings made in the subways and on the sidewalks of New York City. He developed a distinct Pop-graffiti aesthetic centered on fluid, bold outlines against a dense, rhythmic overspread of imagery of crawling children, barking dogs, and dancing figures, all set in motion by staccato-like lines. Haring is regarded as a leading figure in New York's East Village Art scene in the 1970s and 1980s.
Photo courtesy of Keith Haring Foundation.
British-Indian sculptor Anish Kapoor is most recognized for public sculptures that are equal parts sculptural form and feat of engineering. Whether using the materials of classical sculpture like stone and bronze or newly applied forms of aluminum, pigment, enamel, resin, polymer, and PVC, Kapoor's sculpture seems to disappear, dissolve, levitate, or extend beyond a space the viewer can perceive. His sculptures and installations use pioneering technology to address absence and void as sites of potential.
© Anish Kapoor; Courtesy Lisson Gallery. Photography: Ji-Youn Lee.
Beginning in 1950, abstract painter Dorothea Rockburne attended the legendary Black Mountain College in Asheville, North Carolina, where classes with Merce Cunningham, John Cage, and perhaps most significantly, mathematician Max Dehn, had a seminal influence on her work. Dehn's teachings often merged the mathematical world and the natural world, providing Rockburne with new and complex approaches to her work. Working with varied materials including industrial wrinkle-finish paint, tar, and metal, Rockburne paints, cuts, draws, folds and calculates to create complex works of art built upon mathematical foundations.
Image courtesy of the Artist.
Defying categorization, Ed Ruscha's photography, drawing, painting, and artist books record the shifting emblems of American life in the last half century. His deadpan representations of Hollywood logos, stylized gas stations, and archetypal landscapes distil the imagery of popular culture into a language of cinematic and typographical codes that are as accessible as they are profound. Inspired by the ironies and idiosyncrasies of life in Los Angeles, Ruscha inserts glib words and phrases, from colloquial and consumerist usage, atop photographic images or fields of color in his work.
Photo by Aubrey Mayer. Courtesy of Ed Ruscha and Gagosian Gallery.
From free-standing sculptures to architectural sites, Frank Stella's unyielding experimentation has made him a key figure in American modernism, leading to such developments as Minimalism, Post-Painterly Abstraction, and Color Field Painting. An early practitioner of nonrepresentational painting, Stella gained immediate recognition in 1959 with his series of coolly impersonal black-striped paintings that turned the gestural brushwork and existential angst of Abstract Expressionism on its head.
Photo by Kristine Larsen
Mickalene Thomas
New York-based artist Mickalene Thomas is best known for her elaborate paintings composed of rhinestones, acrylic, and enamel. Thomas introduces a complex vision of what it means to be a woman and expands common definitions of beauty. Her work stems from her long study of art history and the classical genres of portraiture, landscape, and still life.
Francois Meyer, Mickalene Thomas, 2013. Courtesy Francois Meyer. Image © Francois Meyer.
Pop artist Andy Warhol (1928-1987) was one of the most influential artists of the twentieth century. Obsessed with celebrity, consumer culture, and death and disaster, he revisited those themes throughout his career in his iconic images of Marilyn Monroe and the electric chair series. Fascinated with mechanical reproduction, Warhol used the medium of silkscreen printmaking to achieve his characteristic hard edges and flat areas of color. His infamous quips, "art is what you can get away with" and "everyone will be famous for 15 minutes," outlive the artist today.
Andy Warhol promotional photo for his book A, a Novel, 1970s. Gelatin silver print, 8 x 10 in. (20.3 x 25.4 cm.). The Andy Warhol Museum, Pittsburgh; Founding Collection, Contribution The Andy Warhol Foundation for the Visual Arts, Inc. © 2016 The Andy Warhol Museum / Artists Rights Society (ARS), New York.
Judy Eisen
Scottie and Winfield Gartner
Todd Sack and Barbara Sharp
A tribute to John Hutcheson
Printmaking in Frank Stella's house
Around the mulberry bush with Helen Frankenthaler
In Living Color and Time Zones