\section{Introduction} In Q4’19, Yelp had 36 million unique users and seated 30 million diners. It is not a stretch to say that poor Yelp reviews can "kill" a restaurant in today's high-review, high-traffic world. Luca (2016) found that a "one-star increase in Yelp rating leads to a 5-9 percent increase in revenue" with a primary effect for independent restaurants. Furthermore, Luca identified that "consumers do not use all available information and are more responsive to...changes that are...visible" such as images \cite{Luca}. Small businesses tend to operate on razor-thin margins in the food industry and cannot afford the advertising budgets of chain restaurants. On Yelp, a restaurant controls very few aspects of the experience. One such aspect, though, is the images they are able to upload. Business owners currently have no easy way of concretely deciding whether an image is a "good" image to bring in customers. They also do not have a clear sense of what a "good" image looks like compared to a "bad" image. We seek to address this problem by creating a dual set of tools that restaurant owners can use to assess their business. First, we build a classifier that assesses the quality of a restaurant's online photographic representation. This classifier accepts restaurant-related images as inputs and predicts Yelp star review ratings as an output. A below-average image receives a 1-3.5 star classification, an average image a 4 star classification, and an above-average image a 4.5-5 star classification. Then, we implement a GAN trained on average and above-average rated images to provide qualitative analysis regarding how business owners can increase the quality of their business's online representation. The images generated by the GAN capture important features of high-quality images and can be used as sources of guidance and comparison for business owners. 
\section{Related Work} Recent papers have focused on improving technical accuracy for deep learning related tasks with food. Liu et al (2016) demonstrated the use of a deep convolutional network based on LeNet-5 \cite{lecun1998gradient}, AlexNet \cite{krizhevsky2012imagenet}, and GoogLeNet \cite{szegedy2015going} to show better-than-previous classification accuracy \cite{phoneFood} \cite{liu2016deepfood}. Their approach involved pretraining on ImageNet followed by fine-tuning. Hassannejad et al (2016) used Google's Inception network as inspiration to train on well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256 \cite{hassannejad2016food}. This group achieved the best results for most efficient computation on those datasets at the time. It is clear that classifying images of food using neural networks is a growing area of research. We believe that the existing literature provides a good foundation for this piece of our project. Generative adversarial networks (GANs) have recently been gaining traction in both academia and the public eye. The now-viral thispersondoesnotexist.com project provides a good demonstration of the capabilities of well-constructed GANs to produce images nearly indistinguishable from real images to the human eye \cite{karras2019analyzing}. Ito et al (2018) demonstrated the use of conditional GANs (cGANs) to produce both recipe and ramen photos \cite{ito2018food}. They showed that "dish discriminator and WGAN-GP are effective for food image domain." One constraint of this study was that its data were restricted to a single, mostly uniform type of food. Our work extends this with a dataset comprised of many extremely different dishes without written context labels for training. \section{Methods} This project has two parts: 1) classification and 2) GAN. \subsection{Classification} \subsubsection{Baseline} For our baseline, we use ResNet-18 \cite{he2016deep} to do transfer learning and predict star rating. 
We use the existing model and only change the final fully connected layer to map to our 9 possible star rating buckets. \subsubsection{Our Approach} We use transfer learning as our starting point. Since our dataset is on the order of about 100,000 images sorted into 9 unevenly distributed categories, we tested various hyperparameter combinations, loss functions \cite{janocha2017loss}, and optimizers, as well as whether fine-tuning the ConvNet or using it as a feature extractor is better. \paragraph{Loss Functions} It is important to recognize that the 9 classes in our classification are not equidistant. If the true label of a restaurant image is '1.5 stars', the model should be punished less for predicting the label '2 stars' than for predicting the label '5 stars'. Class proximity is therefore a property we decided to test. We first try a distance-based loss, Mean-Squared Error (MSE) Loss, which can be modeled by $$ l(x,y) = \mathrm{mean}( \{l_1, l_2, \ldots, l_N\}^{\top}), \quad l_n = (x_n-y_n)^2, $$ where $l(x,y)$ is our loss, $x$ represents our model outputs, and $y$ represents our target values. By construction, predicted values numerically more distant from the expected value yield higher losses. Our preprocessing remaps the 9 classes from the range [1,5] to [1,10], which ensures all $l_n$ values are whole numbers, since fractional differences would not scale as well under a squaring function. The second loss function we tested was Cross-Entropy Loss, modeled by the equation $$ loss(x,class) = weight[class] \cdot (- \log (\frac{\exp(x[class])}{\sum_j \exp(x[j])})) $$ where $loss(x,class)$ models the loss per class for each output. For Cross-Entropy Loss, the output shape is batch size $\times$ \# of classes, so the expected target value is the index of the correct class for the specific example. 
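The contrast between the two losses can be checked numerically with a small pure-Python sketch (the function names are ours; the actual experiments use the PyTorch implementations):

```python
import math

def mse_loss(preds, targets):
    # Mean of l_n = (x_n - y_n)^2 over the batch.
    return sum((x - y) ** 2 for x, y in zip(preds, targets)) / len(preds)

def cross_entropy_loss(logits, target_class):
    # -log softmax(logits)[target_class], unweighted.
    m = max(logits)  # subtract the max for numerical stability
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum_exp - logits[target_class]
```

With a target bucket of 2, predicting 3 under MSE costs 1 while predicting 10 costs 64, so distance from the true class matters; cross-entropy depends only on the score assigned to the correct index, not on how far a competing class is from it.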
Cross-Entropy Loss predicts the probability of being in a specific class and calculates log loss based on this. Therefore, there is no preservation of the true distance from the predicted class to the real class. \paragraph{Optimizers} First, we test stochastic gradient descent (SGD) with momentum \cite{keskar2017improving}. Since we used PyTorch \cite{NEURIPS2019_9015}, the SGD with momentum updates can be modeled by the equations $$ v_{t+1}= \mu \cdot v_t +g_{t+1} $$ $$ p_{t+1} = p_t - lr \cdot v_{t+1} $$ where $p$ denotes the parameters, $g$ the gradient, $v$ the velocity, and $\mu$ the momentum. SGD performs weight updates in the direction that will reduce a mini-batch's error. Second, we test Adam \cite{kingma2014adam}. Adam computes two moment estimates and then bias-corrects both. Once these internal metrics are calculated, Adam performs weight updates. \begin{center} $m = \beta_1 \cdot m + (1-\beta_1) \cdot (dx)$\\ $m_t = m / (1-(\beta_1)^t)$\\ $v = \beta_2 \cdot v + (1-\beta_2)\cdot((dx)^2)$\\ $v_t = v / (1-(\beta_2)^t)$\\ $x += - learning\_rate \cdot m_t / ((v_t)^{1/2} + eps)$ \end{center} Adam combines principles of RMSProp and AdaGrad to compute dynamic learning rates for its different parameters. One notable issue with Adam is that it has been known to fail to converge for certain parameter settings. \paragraph{Hyperparameter Tuning} We perform hyperparameter tuning on learning rates, learning rate decay, weights, momentum, batch size, etc. by first doing a coarse-grained search over orders of 10 and then doing a fine-grained search using random initialization within a defined range. \paragraph{Fine Tuning vs Feature Extractor} When fine tuning \cite{peters2019tune} a pretrained model (in our case ResNet-18), the model is initialized with pretrained weights and then trained normally on our provided images. For feature extraction, the model is initialized with pretrained weights and frozen, and only the final fully connected layer is updated during training. 
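The two update rules above can be written out directly for a single scalar parameter (a minimal sketch of the stated equations, not of the PyTorch internals; function names are ours):

```python
def sgd_momentum_step(p, v, g, lr=0.01, mu=0.9):
    # v_{t+1} = mu * v_t + g_{t+1};  p_{t+1} = p_t - lr * v_{t+1}
    v = mu * v + g
    return p - lr * v, v

def adam_step(p, m, v, g, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Two moment estimates, bias-corrected, then the parameter update.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * (g * g)
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    return p - lr * m_hat / (v_hat ** 0.5 + eps), m, v
```

On the first step ($t=1$) with gradient 1, Adam moves the parameter by almost exactly $-lr$, since the bias corrections cancel the decay factors.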
\paragraph{Model Outputs} Although we originally trained a model to produce outputs in [1, 10], we found that the difference between adjacent star ratings was somewhat arbitrary and would not be beneficial to a potential user. To improve the usefulness and simplicity of our model, we hypothesize that users see any review in [1,3.5] stars as below average, any review at 4.0 stars as average, and any review in [4.5,5] as above average. This is based on our own experience and the relative star distributions in Figure 2. We train a new classifier which accepts the same input images as the baseline but reduces the number of output classes to 3 to reflect these intuitive representations. \subsection{GAN} We use a GAN architecture for the generation of our restaurant images. We look first to the classic GAN framework, which involves a generator and a discriminator "competing" to decrease their losses. The generator takes random noise and attempts to create an image that matches the distribution of the true images. The discriminator attempts to predict whether or not an image is a member of the original dataset. Essentially, they are playing a min-max game as follows: $$ \min_{G} \max_{D} V(D, G) = $$ $$ \mathbb{E}_{x\sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log (1 - D(G(z)))] $$ We generally follow this classic GAN procedure of training our generator and discriminator in a zero-sum manner. Because we want to produce different classes of images based on an input star rating, we train different models on the different subsets of real images. \\ The principal framework we looked to was StyleGAN2. Both it and its predecessor employ a unique architecture where the input latent code is also transformed into an intermediate latent code, which allows for the creation of styles and the use of adaptive instance normalization. 
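The three-bucket hypothesis can be stated as a small helper (our own sketch; the function name and bucket labels are illustrative):

```python
def bucket_rating(stars):
    """Map a half-star rating in [1, 5] to one of the three perceived
    quality groups hypothesized in the text."""
    if stars <= 3.5:
        return "below average"
    if stars == 4.0:
        return "average"
    return "above average"
```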
\section{Dataset} We pull our data from the Yelp Academic Dataset, which is typically used for NLP research but also contains a folder of over 200,000 images. Associating each image with a business and that business's star rating took considerable pre-processing. Documentation for the original dataset is available \href{https://www.yelp.com/dataset/documentation/main}{here}. \subsection{Attributes} \begin{figure} [h] \begin{center} \fbox{\includegraphics[width = 200px]{image_samples.png}} \end{center} \caption{Sample photos from each of our 5 sub-datasets} \label{fig:short} \end{figure} This particular dataset comprises 8,021,122 reviews by 1,968,703 users of 209,393 businesses located in 10 metropolitan areas. Among the many available attributes is a lightly annotated bank of 200,000 images tied to those businesses. The metropolitan areas represented are Montreal, Calgary, Toronto, Pittsburgh, Charlotte, Urbana-Champaign, Phoenix, Las Vegas, Madison, and Cleveland. All of the text-based data is available across several json files (\eg business.json, photo.json, review.json, etc.). Within each json, the relevant fields are captured and clustered around two specific tags: \textit{business\_id} and \textit{user\_id}. The \textit{stars} field contains a float star rating for each business between 1.0 and 5.0, rounded to the nearest half star. Importantly, photo.json includes a \textit{label} field which categorizes each image into one of five specific classes: "food", "drink", "menu", "inside" or "outside". Because each category of photos is qualitatively dissimilar from the others, we decided to create 5 different datasets, one for each category. Altogether, the dataset is about 20 GB worth of information. \subsection{Pre-processing} In order to reduce the feature space size, we select the specific fields of interest to us across multiple json files taken from the Yelp dataset. 
We keep \textit{business\_id}, \textit{photo\_id}, \textit{label}, and \textit{stars} only. For classification, we first map each photo\_ID to a business\_ID, which is in turn mapped to its star rating. Once these two maps are constructed, we use Pillow to open images using their photo\_ID, convert the images to numpy arrays, and finally pad and reduce the images to a constant size. All pre-processed images are thus stored as int8 arrays of dimension (3, 144, 200). The processed image array and star rating are stored together in a final numpy array that is saved to disk. This process is repeated for all 5 datasets, resulting in two numpy arrays saved to disk for each label category. We implement a custom Dataset class which interacts with these saved arrays and is used by a PyTorch DataLoader. \begin{table}[h] \begin{center} \begin{tabular}{|c c c c|} \hline & Train & Val & Test \\ [0.5ex] \hline Food & 106,737 & 5,929 & 5,931\\ \hline Menu & 755 & 41 & 42 \\ \hline Drink & 11,050 & 613 & 615\\ \hline Inside & 48,415 & 2,689 & 2,691\\ \hline Outside & 12,286 & 682 & 684\\ [1ex] \hline \end{tabular} \caption{Number of images in each subset.} \end{center} \end{table} For GAN training, we separated images into new directories by label and star rating (e.g. one folder containing all 5-star food images, and another containing all 2-star inside images). Once these directories were created, no further preprocessing was required. All of this is encoded in our CustomDataset class. \subsection{Limitations} \begin{figure}[h] \begin{center} \fbox{\includegraphics[width = 193px]{histograms.png}} \end{center} \caption{Histogram of star rating across each label. The star rating for a business was broadcasted to all of its photos.} \label{fig:short} \end{figure} One primary consideration is that we make the explicit choice to assign the same star value to all images from a restaurant. 
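The pad-and-reduce step can be sketched for a single channel with plain lists; `pad_channel` is a hypothetical helper mirroring the constant (144, 200) size, while the real pipeline operates on Pillow images and numpy arrays:

```python
def pad_channel(channel, target_h=144, target_w=200, fill=0):
    """Crop or zero-pad one 2D channel to a constant (target_h, target_w)
    size (illustrative stand-in for the Pillow/numpy pre-processing)."""
    # Fix the width of every row: crop long rows, pad short ones.
    rows = [row[:target_w] + [fill] * max(0, target_w - len(row))
            for row in channel]
    # Fix the height: append blank rows, then crop any excess.
    rows += [[fill] * target_w
             for _ in range(max(0, target_h - len(rows)))]
    return rows[:target_h]
```

Applying this to each of the three colour channels yields the (3, 144, 200) arrays described above.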
This neglects the likelihood that image quality may vary drastically even within the same business. A second consideration is that all of the locations represented are mid-sized cities in North America, and therefore any results we obtain will likely not be applicable to preferences in other settings. Another concern is the distribution of ratings. As can be seen in Figure 2, star ratings severely skew left. The distributions are not normal or uniform. Finally, one consideration to keep in mind is that all of these reviews come from Yelp users, who make up only a subset of the customer population, and their reviews may not necessarily reflect any factors outside of their own preferences. \section{Results \& Discussion} \begin{figure*}[h] \begin{center} \fbox{\includegraphics[width = 450px]{combined_graphs.png}} \end{center} \caption{Our results vs Baseline results for 4 of the datasets. Tb=training baseline, Vb=validation baseline, Tm=training with our modifications, Vm=validation with our modifications} \label{fig:short} \end{figure*} \subsection{Classification} For our baseline of ResNet-18 with a modified final FC layer, Figure 3 displays both Top-1 accuracy and loss. Table 2 displays these metrics as well. Across labels, we see similar trends. The best accuracies demonstrate a considerable improvement over random assignment into 9 classes, which would yield an accuracy of roughly 11\%. These values show us that there isn't significant overfitting on our training data. After about 7 epochs, loss and accuracy both converge. Even prior to this point, we see noisy convergence around the final levels, so learning rate decay applied at this point appears to be significantly helpful. 
\begin{table}[h] \begin{center} \title{Baseline Results} \label{tab:title} \begin{tabular}{|c| c c c c c|} \hline & Food & \multicolumn{1}{c}{Drink} & Menu & \multicolumn{1}{c}{Outside} & Inside \\ [0.5ex] \hline \begin{tabular}[c]{@{}c@{}}Train\\ Loss\end{tabular} & 1.879 & 1.5312 & 1.3647 & 1.5690 & 1.5401 \\ \hline \begin{tabular}[c]{@{}c@{}}Val\\ Loss\end{tabular} & 1.942 & 1.5092 & 1.2613 & 0.3860 & 0.3740 \\ \hline \begin{tabular}[c]{@{}c@{}}Train\\ Acc\end{tabular} & .3124 & 0.3740 & 0.4338 & 0.3575 & 0.3580 \\ \hline \begin{tabular}[c]{@{}c@{}}Val\\ Acc\end{tabular} & .3207 & 0.3860 & 0.4952 & 0.3678 & 0.3692\\ [1ex] \hline \end{tabular} \caption{ResNet-18 results when trained on all 5 datasets across 25 epochs. Acc is Top-1 best accuracy and Loss values are from the epoch with the best val acc value. } \end{center} \end{table} Some of this high accuracy may be attributable to the distribution of ratings we see in Figure 2. Since the mean rating is clustered at 4 and the ratings are severely skewed left, the model has a higher incentive to predict a higher star rating. This effectively could reduce the overall number of classes the model is outputting for. Additionally, we see significantly higher validation accuracy for our \textit{menu} dataset. This is likely due to it being the smallest dataset and having the least variation. Qualitatively, most menus did not look too different, which could result in this behavior. Through model checking and verifying outputs, we confirmed that the models were not just outputting a 4 star rating for every image to reach these accuracy levels. The models output different star values for different images. We notice that there is not significant overfitting or underfitting of the baseline model, as the training and validation accuracies track each other fairly well for most of the datasets. 
Strangely, we see for these datasets that validation loss tends to be lower than training loss and validation accuracy tends to be higher than training accuracy. This could arise from the lack of regularization at testing/validation time, as the batch normalization layers in ResNet make use of means and variances that vary from batch to batch. We tested learning rates from 0.1 to 1e-8 by orders of 10 and then fine-tuned as detailed in our Methods section. In a similar manner, we tested with different momentum values, betas, learning rate decay, learning rate schedules, weights, etc. and found no significant difference beyond convergence time. In our testing with different optimizers, we found no significant difference in best accuracy between Adam and SGD with momentum, except that Adam was noisier at times. MSELoss performed very poorly compared with Cross-Entropy Loss. \begin{table}[h] \begin{center} \title{Best Results} \label{tab:title} \begin{tabular}{|r|r|r|r|r|l|} \hline & Food & Drink & Menu & Outside & Inside \\ \hline Train Loss & 0.228 & 0.963 & 1.254 & 1.177 & 1.079 \\ \hline Val Loss & 0.238 & 1.146 & 1.439 & 1.181 & 1.085 \\ \hline Test Loss & 0.586 & 0.552 & 0.124 & 0.347 & 0.208 \\ \hline \multicolumn{1}{|l|}{Train Acc} & \multicolumn{1}{l|}{0.905} & \multicolumn{1}{l|}{0.927} & \multicolumn{1}{l|}{0.993} & \multicolumn{1}{l|}{0.967} & 0.984 \\ \hline \multicolumn{1}{|l|}{Val Acc} & \multicolumn{1}{l|}{0.895} & \multicolumn{1}{l|}{0.918} & \multicolumn{1}{l|}{0.951} & \multicolumn{1}{l|}{0.969} & 0.981 \\ \hline \multicolumn{1}{|l|}{Test Acc} & \multicolumn{1}{l|}{0.901} & \multicolumn{1}{l|}{0.924} & \multicolumn{1}{l|}{0.976} & \multicolumn{1}{l|}{0.953} & 0.985 \\ \hline \end{tabular} \caption{Best results with newly defined buckets.} \end{center} \end{table} We tested to see what would happen when we followed the simplified output approach listed in our Methods section. 
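The coarse-then-fine learning-rate search described above can be sketched as follows; `coarse_then_fine` and its arguments are our own illustrative names, and in practice `evaluate` would train briefly and return a validation loss rather than a toy score:

```python
import math
import random

def coarse_then_fine(evaluate, exponents=range(-8, 0), n_fine=5, seed=0):
    """Coarse search over powers of ten, then a random fine search
    around the best coarse exponent (hypothetical helper)."""
    rng = random.Random(seed)
    # Coarse stage: try 1e-8, 1e-7, ..., 1e-1 and keep the best.
    best = min((10.0 ** e for e in exponents), key=evaluate)
    centre = math.log10(best)
    # Fine stage: random exponents within +/- 0.5 of the coarse winner.
    candidates = [10.0 ** (centre + rng.uniform(-0.5, 0.5))
                  for _ in range(n_fine)]
    return min(candidates + [best], key=evaluate)
```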
In this approach, we determined that customers were more sensitive to the perceived "quality group" a restaurant belonged to rather than an individual decimal star rating. We found the bucketed approach brought significantly higher accuracy to our datasets. In Figure 3, we can see that the accuracy curves are generally smooth and concave down, as we would like to see. It is interesting to note that convergence takes longer. Additionally, loss curves are still generally noisy. Table 3 holds these results. The relative accuracies of each of our 5 classes are notable. Note that in order of highest to lowest test accuracy we have Inside, Menu, Outside, Drink, Food. As we saw in Table 1, our Food dataset is approximately twice as large as the next largest dataset and ~150x larger than our Menu dataset. Clearly, the size of the training data isn't the only factor leading to the differences in accuracy. We believe this ordering of class accuracy can be attributed to a combination of dataset size and in-class variation. The average appearance of a menu is likely far less variable than what a dish in a store may look like. This likely causes menu to have such high accuracy. The insides of most establishments are also likely not too varied, and the sheer number of training examples in this category likely helped the model cope with what variation exists. Food, drinks, and the outdoor conditions of a restaurant all tend to be more variable. Especially in the food class, very few non-chain restaurants will make food that looks the same. In the next section, we report how this may have led to better GAN outputs for certain classes than others. \subsection{GAN} \textit{Note: More generated photos can be found in the appendix.} We created datasets of each class within each star rating (e.g. all 4.0 star outside images). Then, we trained our GAN using these images. 
\begin{figure}[h] \begin{center} \fbox{\includegraphics[width = 200px]{outside-47-mr.jpg}} \end{center} \caption{The image generated by our GAN at its 47th checkpoint on the outside 4.0 images.} \label{fig:short} \end{figure} After training on 4-star outside images, we observe a few specific qualities that are correlated with positive reviews. Large windows, clear blue skies, and clear storefronts are prevalent. Additionally, the generated images display storefronts in conjunction with geographical features or landmarks in view. This suggests that restaurant location and the surrounding environment ambience are important to consumers. We also notice that restaurants in other generated photos are placed in a natural outdoor setting. Although business owners might be limited by their location, beautifying the restaurant's surroundings with natural elements would likely increase attractiveness. \begin{figure}[h] \begin{center} \fbox{\includegraphics[width = 100px]{7.jpg}} \fbox{\includegraphics[width = 100px]{15.jpg}} \fbox{\includegraphics[width = 100px]{27.jpg}} \fbox{\includegraphics[width = 100px]{45.jpg}} \end{center} \caption{Images generated at checkpoints 7, 15, 27, and 45, from left to right, then top down (5.0 Food Images)} \label{fig:short} \end{figure} In Figure 5, we can see that food images did not perform well when generated. Again, we attribute this to the great variation in food appearance and style, not just between cuisines and restaurants, but fundamentally in how dishes of food look. We believe far better quality outputs may be generated if the dataset is divided more significantly along some relevant features. \\ In general, we saw a trend across all sets of images where both the visual quality and generator accuracy of model-generated images would plateau and then decrease. We see an example of this in checkpoint 56 in Figure 8. 
We predict this was caused by the small size of the datasets, as each type-star rating subset only had a few thousand images apiece. \section{Conclusion} In this project, we created a set of tools to assist business owners in boosting their Yelp reviews through their image advertising. Our classifiers achieved >90\% accuracy on menu, drink, inside, and outside pictures using our intuitive reduced-class approach. For our best results, we used Cross-Entropy Loss, SGD with momentum, and batch sizes of 16. The optimal learning rates for each category were: 1e-7, 1e-6, 2.5e-8, 1e-8, and 1e-8 for drink, menu, outside, inside, and food, respectively. Our GAN outputs enabled us to visualize key potential indicators of a great photo in each of these categories. These qualitative results can be utilized to stage advertising photos to appeal to consumer tastes. \begin{figure}[h] \begin{center} \fbox{\includegraphics[width = 200px]{food1.jpg}} \end{center} \caption{Example of a 1-star rated restaurant's food image} \label{fig:short} \end{figure} At this point, we wish to explore higher fidelity generated images. Additionally, we think a further breakdown by cuisine or meal type is necessary. Figure 6 shows an objectively well-styled (i.e. professionally created) food image. However, this image was rated by customers as from a 1-star restaurant. The image is clearly an advertising photo and not exactly what was served in the restaurant. We believe there can be many confounding variables here, and so many questions are raised: Are fast-food or "cheaper" restaurants inherently lower-rated? Do customers prefer real photos to advertising photos? Are certain cuisines higher rated than others? We also wish to explore the customer perception side of advertising. Why are ratings so severely skewed left? Does photo captioning contribute to star rating and, if so, how? 
Do customers actually care about the images, or is the correlation we identified not actually involved in restaurant perception? \section{Appendix} The contents of this section can be found on pages 8-9. \begin{figure*}[h] \begin{center} \fbox{\includegraphics[width = 200px]{23.jpg}} \fbox{\includegraphics[width = 200px]{27_1.jpg}} \fbox{\includegraphics[width = 200px]{outside-41-ema.jpg}} \fbox{\includegraphics[width = 200px]{outside-49-mr.jpg}} \end{center} \caption{Images generated at checkpoints 23, 27, 41, and 49 for 4.0 Outside Images, from left to right, then top down} \label{fig:short} \end{figure*} \begin{figure*}[h] \begin{center} \fbox{\includegraphics[width = 100px]{drink-23-ema.jpg}} \fbox{\includegraphics[width = 100px]{24.jpg}} \fbox{\includegraphics[width = 100px]{drink-30-mr.jpg}} \fbox{\includegraphics[width = 100px]{drink-41-ema.jpg}} \fbox{\includegraphics[width = 200px]{drink-56.jpg}} \fbox{\includegraphics[width = 200px]{drink-57-ema.jpg}} \end{center} \caption{Images generated at checkpoints 23, 24, 30, 41, 56, and 57, from left to right, then top down (4.0 Drink Images)} \label{fig:short} \end{figure*} \section{Contributions \& Acknowledgements} GB worked on classification, baseline, and visualization work. SS worked on preprocessing, baseline, and classification. YM worked on GANs. All authors contributed to the writeup. {\small \bibliographystyle{ieee}
{ "timestamp": "2020-11-04T02:09:23", "yymm": "2011", "arxiv_id": "2011.01434", "language": "en", "url": "https://arxiv.org/abs/2011.01434" }
\section{Introduction} \label{sec_intto} High-dimensional data are ubiquitous and commonly used in various real-world applications such as computer vision and image processing. Oftentimes, such data have latent low-dimensional structures rather than being uniformly distributed. To illustrate this, we show a simple example in \cref{fig_subspace}. Such phenomena are often seen in real-world applications. For example, face images lie in a high-dimensional space; however, they belong to a small number of subjects and form clear low-dimensional structures. \begin{figure}[h] \centering {\includegraphics[width=0.6\columnwidth]{fig1.png}} \caption{ Example of high-dimensional data lying in low-dimensional subspaces. It is seen that rather than being uniformly distributed in the 3-dimensional space, these data points lie on the union of two lines and one plane. } \label{fig_subspace} \end{figure} This inspires us to effectively represent high-dimensional data in low-dimensional subspaces \cite{peng2015subspace,liu2010robust}. To recover such low-dimensional subspaces, it is usually required to cluster the data into different groups. Each of these groups can be fitted with a subspace, and this procedure is known as subspace clustering or subspace segmentation. During the last decade, various types of subspace clustering algorithms have been developed. These methods can be roughly categorized into 4 groups: algebraic methods \cite{boult1991factorization,vidal2003generalized,ma2008estimation}, statistical methods \cite{gruber2004multibody,rao2010motion,ma2007segmentation}, iterative methods \cite{ho2003clustering,zhang2009median}, and spectral clustering based methods \cite{CHEN2018107,SUI2019261,CHEN2020107441}; see \cite{vidal2011subspace} for a review. Among them, spectral clustering-based methods have been popular with great success. 
Typical methods such as low-rank representation (LRR) \cite{liu2013robust,favaro2011closed} and sparse subspace clustering (SSC) \cite{elhamifar2013sparse} have drawn considerable attention due to their efficiency and elegant theories. The basic idea of LRR and SSC is the self-expressive property of the data, which suggests that each example of the data can be represented by the data itself as a dictionary. With specific structure requirements on the representation matrix, the representation coefficient matrices learned by LRR and SSC admit low-rankness and sparsity, respectively. In the ideal case, such a low-rank or sparse structure clearly shows the group information of the data. Recent work also attempts to merge the advantages of simultaneous low-rank and sparse learning with both low-rank and sparse regularization terms \cite{BRBIC2018247}. It has been pointed out that the nuclear norm is not an accurate rank approximation, which makes LRR less effective in learning the accurate structure of the data \cite{peng2015subspace}. To overcome this drawback, recent works develop more accurate non-convex approximations to the rank function, such as the log-determinant rank approximation, which significantly improves the learning performance \cite{peng2015subspace}. Some studies demonstrate the importance of feature learning for subspace clustering \cite{peng2016feature,patel2013latent}. For example, \cite{peng2016feature,peng2017integrating} seek a low-rank representation with respect to a subset of features, which lessens the dependence on rank approximation; \cite{patel2013latent} seeks a sparse representation of projected data in a latent low-dimensional space such that hidden structures of the data provide useful information. To consider nonlinear structures of the data, various approaches have been attempted. 
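For reference, the standard noiseless objectives of these two self-expressive methods, with $X$ denoting the data matrix and $C$ the representation coefficient matrix, are $$ \min_{C} \|C\|_* \ \ \text{s.t.}\ \ X = XC \ \ \text{(LRR)}, \qquad \min_{C} \|C\|_1 \ \ \text{s.t.}\ \ X = XC,\ \mathrm{diag}(C) = 0 \ \ \text{(SSC)}, $$ where $\|\cdot\|_*$ is the nuclear norm, the convex surrogate of the rank, and $\|\cdot\|_1$ promotes sparsity.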
For example, the graph Laplacian is introduced into LRR \cite{liu2014enhancing}, which accounts for nonlinear relationships of the data on a manifold; kernel methods are adopted in LRR \cite{xiao2016robust} and SSC \cite{patel2014kernel}, respectively, which seek representations of the data in a nonlinear feature space. Other types of representation have also proven successful in subspace clustering, such as thresholding ridge regression \cite{peng2015robust} and simplex representation \cite{Xu2019Scaled}. Rather than handling noise with a data reconstruction term, \cite{peng2015robust} alleviates the noise effect by zeroing out the small values in the coefficients obtained from a ridge regression model. Simplex representation is similar to ridge regression in that it seeks a representation matrix of the data with additional constraints on it \cite{Xu2019Scaled}. Subspace clustering is often used in problems that deal with 2-dimensional (2D) data, where each example is a matrix. Unfortunately, the above methods suffer from a common issue when dealing with such data: they usually convert all examples to vectors in a pre-processing step, which severely damages the inherent structural information of the data. This strategy omits the inherent structures and correlations of the original data, which are essentially important, and building models with vectorized data is not effective in filtering out noise, occlusions, or redundant information \cite{fu2016tensor}. To better handle 2D data, tensor-based methods have been considered in many areas, such as non-negative tensor factorization \cite{Benaroya2018Binaural}, tensor robust principal component analysis \cite{LIU2020107252,lu2016tensor}, and tensor subspace learning \cite{Pan2019Tensor,zhang2015low}. Tensor methods often require a tensor decomposition, where the main techniques include CANDECOMP/PARAFAC decomposition (CPD) and Tucker decomposition (TD).
Tensor-based subspace clustering methods usually involve flattening and folding operations, which may not measure the true structures of the data \cite{Zhou2019Tensor}. More importantly, tensor methods usually suffer from the following major issues: 1) for CPD-based methods, it is generally NP-hard to compute the CP rank \cite{lu2016tensor,kolda2009tensor}; 2) TD is not unique \cite{kolda2009tensor}; 3) the application of a core tensor and a high-order tensor product would incur loss of spatial details \cite{letexier2008noise}. Besides tensor-based methods, some other approaches have been developed to handle 2D data, such as 2-dimensional principal component analysis (2DPCA) \cite{yang2004two}, 2-dimensional semi-nonnegative matrix factorization \cite{Peng2020TwoDimensionalSM}, and nuclear norm-based 2DPCA \cite{zhang2014nuclear}. 2DPCA uses a projection matrix to extract the most representative spatial information from 2D data, which inspires us to recover low-dimensional subspaces of 2D data with such features. Thus, to overcome the above-mentioned key drawbacks of current subspace clustering methods, we propose a novel method for 2-dimensional data, which directly applies a projection matrix to the original 2D data, such that the rich structural information of the data can be maximally exploited in the learning process.
We briefly summarize the key contributions of the paper as follows: 1) Unlike existing methods that vectorize 2D data in a pre-processing step, we propose to learn a 2D projection matrix such that the most expressive structural information is retained in the spanned subspaces; 2) The learning of the projection and the construction of the representation are seamlessly integrated, such that the two tasks mutually enhance each other and lead to a powerful representation; 3) A kernel method for 2D data is introduced into our model, which explicitly accounts for nonlinear structures of the data; 4) An efficient optimization algorithm is developed with a provable convergence guarantee; 5) The algorithm does not rely on augmented Lagrangian multiplier (ALM)-type optimization as existing methods usually do, so we do not need to introduce the additional parameters of the ALM framework; 6) Extensive experiments confirm the effectiveness of our method. The rest of this paper is organized as follows. We briefly review some closely related methods in \cref{sec_related}. Then we introduce the proposed method, develop its optimization to obtain the representation matrix, and present how to perform clustering with the learned representation matrix in \cref{sec_proposed}. We conduct extensive experiments to verify the effectiveness of the proposed method in \cref{sec_experiment}. Finally, we conclude the paper in \cref{sec_conclusion}. \section{Related Work} \label{sec_related} In this section, we briefly review some closely related subspace clustering methods.
Given the data matrix $A=[a_1,\cdots,a_n]\in\mathcal{R}^{d\times n}$ with each sample $a_i\in\mathcal{R}^d$, LRR seeks a low-rank representation of the data with the following minimization problem: \begin{equation} \min_{Z} \|A-AZ\|_{2,1} + \lambda \|Z\|_*, \end{equation} where $\|\cdot\|_{2,1}$ is the sum of column-wise $\ell_2$ norms of a matrix, $\|\cdot\|_*$ is the nuclear norm, $\lambda$ is a balancing parameter, and $Z\in\mathcal{R}^{n\times n}$ is the representation to be sought. Instead of seeking a low-rank representation, SSC assumes a sparse representation of the data, which leads to the following: \begin{equation} \begin{aligned} & \min_{Z,S,E} \|E\|_F^2 + \lambda \|Z\|_1 + \gamma \|S\|_1, \\ & s.t. \quad A = AZ + S + E, \textit{diag}(Z) = 0, \end{aligned} \end{equation} where $\gamma$ is a balancing parameter and the constraint $\textit{diag}(Z) = 0$ avoids the trivial solution to SSC. The above models seek the representation of the data under the self-expressiveness assumption. Various developments have been made based on LRR and SSC, such as nonlinear extensions \cite{Yin2016Laplacian,patel2014kernel} and feature integration approaches \cite{liu2011latent}. \section{Kernel Two-Dimensional Ridge Regression} \label{sec_proposed} In this section, we develop a new subspace clustering model based on ridge regression, presenting its formulation, optimization, and the clustering algorithm, respectively. \subsection{Formulation of Kernel Two-Dimensional Ridge Regression} Ridge regression-based data representation has been shown to be successful for high-dimensional data in both supervised \cite{peng2015robust} and unsupervised learning problems \cite{peng2020discriminative}.
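To make the self-expressive ridge regression idea concrete: the unconstrained problem $\min_Z \|A-AZ\|_F^2 + \lambda\|Z\|_F^2$ admits the closed form $Z = (A^TA+\lambda I)^{-1}A^TA$ from its first-order optimality condition. A minimal numerical sketch (the function name and toy data are ours, for illustration only):

```python
import numpy as np

def ridge_representation(A, lam):
    """Self-expressive ridge regression: min_Z ||A - A Z||_F^2 + lam ||Z||_F^2.

    Closed form from the first-order condition: Z = (A^T A + lam I)^{-1} A^T A.
    """
    n = A.shape[1]
    G = A.T @ A                      # n x n Gram matrix of the samples
    return np.linalg.solve(G + lam * np.eye(n), G)

# Toy check: with lam -> 0 and full-column-rank A, A Z reconstructs A closely.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
Z = ridge_representation(A, 1e-8)
assert np.allclose(A @ Z, A, atol=1e-4)
```

For $\lambda > 0$ the solution is unique and dense, which is why spectral post-processing rather than the raw coefficients is used for clustering.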
For a collection of examples $\{X_i\}_{i=1}^{n}$ with each example $X_i\in\mathcal{R}^{a\times b}$ being a matrix, inspired by \cite{peng2015robust,peng2020discriminative}, we seek a low-dimensional representation of the data with the following ridge regression model: \begin{equation} \label{eq_trr_2d} \min_{Z} \sum_{i=1}^{n} \|X_i - \sum_{j=1}^{n} X_j Z_{ji} \|_F^2 + \lambda \|Z\|_F^2, \end{equation} where $\|\cdot\|_F$ is the Frobenius norm, and $\lambda\ge0$ is a balancing parameter. Here, unlike \cite{peng2015robust,elhamifar2009sparse}, which vanish the diagonal elements of $Z$, we impose no such constraint, for the following two reasons: 1) the example $X_i$ lies in its own intra-subspace, so it is meaningful to allow $Z_{ii}\not=0$; 2) $\lambda>0$ excludes potentially trivial solutions such as $I_n$, where $I_n$ is an identity matrix of size $n\times n$. It is straightforward to see that \cref{eq_trr_2d} is equivalent to seeking the representation $Z$ with vectorized data, due to the element-wise nature of the squared Frobenius norm. To retain the inherent spatial information of the data in the learning process, we introduce a projection vector $p\in\mathcal{R}^b$, i.e., a direction, which projects the data to a subspace in which the most expressive 2D feature of the data is retained. That is, each example $X_i$ is projected as $X_ipp^T$ onto the subspace spanned by $p$. To mutually enhance the learning of the projection and of the representation, we propose to simultaneously seek the representation with the projected data as follows: \begin{equation} \label{eq_obj_pp} \begin{aligned} \min_{p^Tp = 1,Z} & \sum_{i=1}^{n}\Big\| X_ipp^T - \sum_{j=1}^{n} X_j pp^T z_{ji} \Big\|_F^2 \\ & + \lambda \sum_{i=1}^{n}\|X_i - X_i pp^T \|_F^2 + \gamma \|Z\|_F^2, \end{aligned} \end{equation} where $\gamma \ge 0$ is a balancing parameter.
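The claimed equivalence between \cref{eq_trr_2d} and its vectorized counterpart can be checked numerically. The sketch below (a toy setup of our own) solves the vectorized ridge regression in closed form and evaluates the objective both with the 2D examples and with the vectorized data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, b = 6, 4, 3
X = [rng.standard_normal((a, b)) for _ in range(n)]   # 2D examples X_i
lam = 0.5

# Vectorize each X_i; the squared Frobenius norm is element-wise, so the
# 2D ridge regression model reduces to ordinary vector ridge regression.
A = np.stack([Xi.ravel() for Xi in X], axis=1)        # (a*b) x n
G = A.T @ A
Z = np.linalg.solve(G + lam * np.eye(n), G)

# Objective evaluated with the 2D examples directly:
obj_2d = sum(np.linalg.norm(X[i] - sum(X[j] * Z[j, i] for j in range(n)))**2
             for i in range(n)) + lam * np.linalg.norm(Z)**2
# Same objective with the vectorized data:
obj_vec = np.linalg.norm(A - A @ Z)**2 + lam * np.linalg.norm(Z)**2
assert np.isclose(obj_2d, obj_vec)
```

This is precisely why \cref{eq_trr_2d} alone does not exploit 2D structure, motivating the projection introduced above.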
It is seen that the projection vector $p$ captures spatial information of the data and the representation is sought with the projected data and thus benefits from spatial information of the data. The first term of \cref{eq_obj_pp} can be derived as $ \sum_{i=1}^n \| X_ipp^T - \sum_{j=1}^{n} X_j pp^T z_{ji} \|_F^2 = \sum_{i=1}^n \| (X_i - \sum_{j=1}^{n} X_j z_{ji} ) pp^T \|_F^2 = \sum_{i=1}^n \| (X_i - \sum_{j=1}^{n} X_j z_{ji} ) p \|_2^2 = \sum_{i=1}^n \| X_ip - \sum_{j=1}^{n} X_j p z_{ji} \|_2^2.$ Thus, \cref{eq_obj_pp} can be mathematically simplified as \begin{equation} \label{eq_obj_p} \begin{aligned} \min_{p^Tp = 1,Z} & \sum_{i=1}^{n}\Big\| X_ip - \sum_{j=1}^{n} X_j p z_{ji} \Big\|_2^2 \\ & + \lambda \sum_{i=1}^{n}\|X_i - X_i pp^T \|_F^2 + \gamma \|Z\|_F^2. \end{aligned} \end{equation} Usually, it is not enough to seek a single projection vector in real-world applications and multiple projection directions are often needed. Major information of the data may exist in several distinct subspaces and recovering multiple subspaces may allow us to better understand the data. To seek multiple projection directions or feature subspaces, we define a projection matrix $P=[p_1,p_2,\cdots,p_r]\in\mathcal{R}^{b\times r}$ with $p_i$ being a projection direction satisfying that $p_i^Tp_i=1$ and $p_i^Tp_j=0$ for $i\not= j$. With $P$, we expand \cref{eq_obj_p} to construct the representation with simultaneous learning of multiple projection directions: \begin{equation} \label{eq_obj_P} \begin{aligned} \min_{P^TP = I_r,Z} & \sum_{i=1}^{n}\Big\| X_iP - \sum_{j=1}^{n} X_j P z_{ji} \Big\|_F^2 \\ &+ \lambda \sum_{i=1}^{n}\|X_i - X_i PP^T \|_F^2 + \gamma \|Z\|_F^2, \end{aligned} \end{equation} where $I_r$ is an identity matrix of size $r\times r$. 
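The orthonormality constraint is what makes the simplification from \cref{eq_obj_pp} to \cref{eq_obj_p}, and its multi-direction analogue in \cref{eq_obj_P}, work: $\|Rpp^T\|_F = \|Rp\|_2$ for a unit vector $p$, and $\|RPP^T\|_F = \|RP\|_F$ when $P^TP = I_r$. A quick numerical check (toy data of our own):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, r = 5, 4, 2
R = rng.standard_normal((a, b))                # stands in for X_i - sum_j X_j z_ji

# Single unit direction p: ||R p p^T||_F == ||R p||_2 since p^T p = 1.
p = rng.standard_normal(b)
p /= np.linalg.norm(p)
assert np.isclose(np.linalg.norm(R @ np.outer(p, p)), np.linalg.norm(R @ p))

# Orthonormal P (via QR): ||R P P^T||_F == ||R P||_F, the multi-direction analogue.
P, _ = np.linalg.qr(rng.standard_normal((b, r)))
assert np.isclose(np.linalg.norm(R @ P @ P.T), np.linalg.norm(R @ P))
```

Both identities follow from $\textbf{Tr}(PP^TR^TRPP^T) = \textbf{Tr}(P^TR^TRP)$ under $P^TP = I_r$.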
It is seen that in \cref{eq_obj_P} the coefficient matrix $Z$ is constructed using the projected features $X_jP$, or equivalently the projected samples $X_jPP^T$, which contain the most expressive information in the orthogonal subspaces $p_jp_j^T$ spanned by the projection vectors $p_j$. The projection in our model performs dimension reduction, which we justify from the following two perspectives: 1) The original examples have size $a\times b$ and the projection reduces the size of the examples to $a\times r$. 2) The original examples have $c=\min\{a,b\}$ 2D component features; with the projection, only up to $r$ 2D component features are used in the construction of the representation matrix $Z$. In this paper, we consider the number of 2D features as the dimension; thus the projection indeed extracts the most expressive 2D features of the data and performs dimension reduction. For ease of presentation, we define \begin{equation} \label{eq_JP} \mathcal{J} = \sum_{i=1}^{n} \Bigg\{\|X_iP - \sum_{j=1}^{n} X_j P z_{ji} \|_F^2 + \lambda \|X_i - X_i PP^T \|_F^2\Bigg\}, \end{equation} and thus \cref{eq_obj_P} can be written as \begin{equation} \label{eq_obj_linear} \begin{aligned} \min_{P^TP = I_r,Z} & \mathcal{J} + \gamma \|Z\|_F^2. \end{aligned} \end{equation} Up to now, model \cref{eq_obj_linear} only considers linear relationships of the projected data in Euclidean space. In real-world problems, nonlinear relationships of the data often exist and should be taken into account. To directly incorporate nonlinear relationships of 2D data, we adopt a kernel approach for 2D data and develop a nonlinear model in the remainder of this section. Inspired by \cite{nhat2007kernel}, we define nonlinear mappings of the data as follows. For a 2D example $M\in\mathcal{R}^{a\times b}$, we define $m_i \in \mathcal{R}^{a \times 1}$ to be its column instance vectors, i.e., \begin{equation} M = \begin{bmatrix} m_1 & \cdots & m_b \end{bmatrix}.
\end{equation} We define $\phi:\mathcal{R}^{a\times b} \rightarrow \mathcal{R}^{f_a \times b}$ with $f_a \ge a$ to be a column-wise nonlinear mapping, i.e., it maps the columns of a matrix into a nonlinear feature space: \begin{equation} \label{eq_mapping_phi} \phi(M) = \begin{bmatrix} \phi(m_1) & \cdots & \phi(m_b) \end{bmatrix}, \end{equation} where $\phi(M) \in \mathcal{R}^{f_a \times b}$ and $\phi(m_i) \in \mathcal{R}^{f_a \times 1}$. For two matrices of the same size, $U = [u_1,\cdots, u_b] \in\mathcal{R}^{a\times b}$ and $V = [v_1,\cdots, v_b] \in\mathcal{R}^{a\times b}$, simple algebra gives the following product: \begin{equation} \label{eq_kernel_phi} \begin{aligned} \phi^T(U) \phi(V) = \begin{bmatrix} \phi^T(u_1)\phi(v_1) & \cdots & \phi^T(u_1)\phi(v_b) \\ \vdots & \ddots & \vdots \\ \phi^T(u_b)\phi(v_1) & \cdots & \phi^T(u_b)\phi(v_b) \end{bmatrix}, \end{aligned} \end{equation} where $\phi^T(\cdot)$ denotes $(\phi(\cdot))^T$ for simplicity, and $u_i$, $v_j$ are columns of $U$ and $V$, respectively. It is seen that each element of \cref{eq_kernel_phi} is an inner product of mapped instance vectors and thus can be calculated as $ k(u_i,v_j) = \phi^T(u_i)\phi(v_j)$, where $k:\mathcal{R}^{a} \times \mathcal{R}^{a}\rightarrow \mathcal{R}$ is a kernel function.
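For instance, with an RBF kernel over the column instance vectors, the $b\times b$ matrix $\phi^T(X_i)\phi(X_j)$ of \cref{eq_kernel_phi} can be computed without ever forming $\phi$ explicitly. A sketch under our own naming (the function and its parameters are illustrative, not from the paper):

```python
import numpy as np

def column_kernel_matrix(Xi, Xj, sigma=1.0):
    """The b x b matrix whose (s, t) entry is k(column s of Xi, column t of Xj),
    here with an RBF kernel k(u, v) = exp(-||u - v||^2 / (2 sigma^2))."""
    # Pairwise squared distances between the columns of Xi and Xj.
    D = (np.sum(Xi**2, axis=0)[:, None] + np.sum(Xj**2, axis=0)[None, :]
         - 2.0 * Xi.T @ Xj)
    return np.exp(-D / (2.0 * sigma**2))

rng = np.random.default_rng(3)
a, b = 5, 4
Xi, Xj = rng.standard_normal((a, b)), rng.standard_normal((a, b))
K = column_kernel_matrix(Xi, Xj)
assert K.shape == (b, b)
# The transpose relation K_ij^T = K_ji, used later in the P-subproblem.
assert np.allclose(column_kernel_matrix(Xi, Xj).T, column_kernel_matrix(Xj, Xi))
```

Any positive-definite kernel over $\mathcal{R}^a$ can be substituted here; only the $b\times b$ Gram blocks are ever needed.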
By defining $\mathcal{K}^{\phi}_{ij} = \phi^T(X_i)\phi(X_j) \in\mathcal{R}^{b\times b}$, we can see that $\mathcal{J}$ can be extended to its nonlinear version $\mathcal{J}^{\phi}$ in the kernel space: \begin{equation} \label{eq_JP_kernel} \begin{aligned} \mathcal{J}^{\phi} = & \sum_{i=1}^{n} \Bigg\{\|\phi(X_i)P - \sum_{j=1}^{n} \phi(X_j) P z_{ji} \|_F^2 \\ &+ \lambda \|\phi(X_i) - \phi(X_i) PP^T \|_F^2 \Bigg\} \\ = & \sum_{i=1}^{n}\textbf{Tr}\Bigg\{ P^T\phi^T(X_i) \phi(X_i)P \\ & -P^T\sum_{j=1}^{n} \phi^T(X_i) \phi(X_j) P z_{ji} \\ & \quad- \sum_{j=1}^{n} P^T\phi^T(X_j) \phi(X_i) z_{ji} P \\ \nonumber\end{aligned}\end{equation}\begin{equation}\begin{aligned} & + \sum_{s=1}^{n}\sum_{t=1}^{n} P^T\phi^T(X_s)\phi(X_t) P z_{si} z_{ti} \Bigg\} \\ & + \lambda \sum_{i=1}^{n} \textbf{Tr}\Bigg\{ \phi^T(X_i) \phi(X_i) - \phi^T(X_i) \phi(X_i) PP^T \Bigg\} \\ = & \sum_{i=1}^{n}\textbf{Tr}\Bigg\{ P^T \mathcal{K}^{\phi}_{ii} P -P^T\sum_{j=1}^{n} \mathcal{K}^{\phi}_{ij} P z_{ji} \\ & - \sum_{j=1}^{n} P^T \mathcal{K}^{\phi}_{ji} z_{ji} P + \sum_{s=1}^{n}\sum_{t=1}^{n} P^T \mathcal{K}^{\phi}_{st} P z_{si} z_{ti} \Bigg\} \\ & + \lambda \sum_{i=1}^{n} \textbf{Tr}\Bigg\{ \mathcal{K}^{\phi}_{ii} - \mathcal{K}^{\phi}_{ii} PP^T \Bigg\}. \\ \end{aligned} \end{equation} Therefore, by extending $\mathcal{J}$ to kernel version, we extend \cref{eq_obj_P} to the following nonlinear model, which is named Kernel Two-dimensional Ridge Regression (KTRR): \begin{equation} \label{eq_obj_kernel} \begin{aligned} \min_{P^TP = I_r, Z} \mathcal{J}^{\phi} + \gamma \|Z\|_F^2. \end{aligned} \end{equation} It is seen that the representation $Z$ is sought with the nonlinear similarity matrices of the examples. It is worth pointing out that the integrated projection $P$ extracts spatial information of the data from the right side, i.e., in vertical direction. 
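The kernelized expression of $\mathcal{J}^{\phi}$ can be sanity-checked against a direct feature-space evaluation by taking the linear kernel, for which $\phi$ is the identity and $\mathcal{K}^{\phi}_{ij} = X_i^TX_j$. The following sketch (toy data and naming of our own) verifies that the two evaluations agree:

```python
import numpy as np

rng = np.random.default_rng(4)
n, a, b, r = 5, 4, 3, 2
X = [rng.standard_normal((a, b)) for _ in range(n)]
Z = rng.standard_normal((n, n))
P, _ = np.linalg.qr(rng.standard_normal((b, r)))   # orthonormal columns
lam = 0.3

# Linear kernel: phi is the identity, so K_ij = X_i^T X_j.
K = [[X[i].T @ X[j] for j in range(n)] for i in range(n)]

# J evaluated directly in "feature space" (phi = identity):
direct = sum(
    np.linalg.norm(X[i] @ P - sum(X[j] @ P * Z[j, i] for j in range(n)))**2
    + lam * np.linalg.norm(X[i] - X[i] @ P @ P.T)**2
    for i in range(n))

# J evaluated via the kernel expression; the two cross terms combine since
# Tr(P^T K_ji P) = Tr(P^T K_ij P) for the linear kernel.
kernelized = 0.0
for i in range(n):
    kernelized += np.trace(P.T @ K[i][i] @ P)
    kernelized -= 2 * sum(np.trace(P.T @ K[i][j] @ P) * Z[j, i] for j in range(n))
    kernelized += sum(np.trace(P.T @ K[s][t] @ P) * Z[s, i] * Z[t, i]
                      for s in range(n) for t in range(n))
    kernelized += lam * (np.trace(K[i][i]) - np.trace(P.T @ K[i][i] @ P))
assert np.isclose(direct, kernelized)
```

The same identity holds for any kernel once $\mathcal{K}^{\phi}_{ij}$ is computed from the kernel function, since only inner products of mapped columns appear.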
It is straightforward to extend the above model by introducing another projection matrix $Q\in\mathcal{R}^{a\times r}$ to project the data from the left side, such that spatial information from both the vertical and horizontal directions can be retained. However, the current model \cref{eq_obj_kernel} already conveys the key idea and contribution of the paper, i.e., seeking the representation with 2D features in a nonlinear space, and extending \cref{eq_obj_kernel} with $Q$ is beyond the main scope of this paper. Thus, we focus on \cref{eq_obj_kernel} and do not fully expand the model to the bi-directional case. We discuss the optimization of \cref{eq_obj_kernel} in the rest of this section. \subsection{Optimization of \cref{eq_obj_kernel} } \label{sec_optimization} In the above subsection, we proposed a new subspace clustering model for 2D data. In this subsection, we develop an alternating minimization algorithm for its optimization. Specifically, we alternately solve the sub-problem associated with a single variable while keeping the others fixed, and repeat the procedure until convergence. It is worth mentioning that the optimization does not rely on ALM-type schemes, and thus, unlike existing methods, no additional parameters are introduced. We regard this as an advantage because such parameters usually affect the solution and take effort to tune. The detailed optimization strategy is discussed as follows. \subsubsection{$P$-minimization} The sub-problem associated with $P$-minimization is \begin{equation} \label{eq_sub_P} \begin{aligned} \min_{P^TP = I_r} \mathcal{J}^{\phi}.
\end{aligned} \end{equation} It is seen that \begin{equation} \label{eq_sub_P_rewrite} \begin{aligned} \mathcal{J}^{\phi} = & \textbf{Tr}\Bigg\{ P^T (\sum_{i=1}^{n} \mathcal{K}^{\phi}_{ii}) P \Bigg\} \\ & - \textbf{Tr}\Bigg\{P^T (\sum_{i=1}^{n}\sum_{j=1}^{n} (\mathcal{K}^{\phi}_{ij} + \mathcal{K}^{\phi}_{ji})z_{ji}) P\Bigg\} \\ & + \textbf{Tr}\Bigg\{ P^T (\sum_{s=1}^{n}\sum_{t=1}^{n}\mathcal{K}^{\phi}_{st} z_{\bar{s}} z_{\bar{t}}^T) P \Bigg\} \\ & + \lambda \textbf{Tr}\Bigg\{ \sum_{i=1}^{n} \mathcal{K}^{\phi}_{ii} - \sum_{i=1}^{n} \mathcal{K}^{\phi}_{ii} PP^T \Bigg\} \\ = & \textbf{Tr}\Bigg\{ P^T \Big( (1-\lambda)\mathcal{H}^{\phi}_1 + \mathcal{H}^{\phi}_2 - \mathcal{H}^{\phi}_3 \Big) P \Bigg\} + \xi^{\phi}, \end{aligned} \end{equation} where we define \begin{equation} \label{eq_sub_H_phi} \begin{aligned} \mathcal{H}^{\phi}_1 = & \sum_{i=1}^{n} \mathcal{K}^{\phi}_{ii},\\ \mathcal{H}^{\phi}_2 = & \sum_{s=1}^{n}\sum_{t=1}^{n}\mathcal{K}^{\phi}_{st} z_{\bar{s}} z_{\bar{t}}^T = \sum_{i=1}^{n}\sum_{j=1}^{n}\mathcal{K}^{\phi}_{ij} z_{\bar{i}} z_{\bar{j}}^T, \\ \mathcal{H}^{\phi}_3 = & \sum_{i=1}^{n}\sum_{j=1}^{n} (\mathcal{K}^{\phi}_{ij} + \mathcal{K}^{\phi}_{ji})z_{ji}, \\ \xi^{\phi} = & \lambda \textbf{Tr} \Big\{ \sum_{i=1}^{n} \mathcal{K}^{\phi}_{ii} \Big\}. \end{aligned} \end{equation} Here, $z_{\bar{s}}$ and $z_{\bar{t}}$ denote the $s$-th and $t$-th rows of matrix $Z$, respectively. It is easy to check that the matrices $\mathcal{H}^{\phi}_1$, $\mathcal{H}^{\phi}_2$, and $\mathcal{H}^{\phi}_3$ defined in \cref{eq_sub_H_phi} are real symmetric. 
Hence, $(1-\lambda)\mathcal{H}^{\phi}_1 + \mathcal{H}^{\phi}_2 - \mathcal{H}^{\phi}_3$ is real symmetric and $P$ can be obtained by performing the standard eigenvalue decomposition: \begin{equation} \label{eq_sol_P} P = \textbf{eig}_r \big( (1-\lambda)\mathcal{H}^{\phi}_1 + \mathcal{H}^{\phi}_2 - \mathcal{H}^{\phi}_3 \big), \end{equation} where $\textbf{eig}_r(\cdot)$ is an operator that returns the eigenvectors of the input matrix associated with its $r$ smallest eigenvalues. \subsubsection{$Z$-minimization} Fixing $P$, the $Z$-minimization problem is \begin{equation} \label{eq_sub_Z} \begin{aligned} \min_{Z} \mathcal{J}^{\phi} + \gamma \|Z\|_F^2. \end{aligned} \end{equation} To simplify the notation of the $Z$-minimization, we define the operators $\bar{\phi}_P(X_i)$ and $\bar{\phi}_P(\textbf{X})$ as \begin{equation} \bar{\phi}_P(X_i) = \begin{bmatrix} \phi(X_i)p_1 \\ \vdots \\ \phi(X_i)p_r \end{bmatrix} \in \mathcal{R}^{f_a r \times 1}, \end{equation} and \begin{equation} \bar{\phi}_P(\textbf{X}) = \begin{bmatrix} \bar{\phi}_P(X_1) & \cdots & \bar{\phi}_P(X_n) \end{bmatrix} \in \mathcal{R}^{f_a r \times n}. \end{equation} Then, up to an additive constant (the $\lambda$-term of $\mathcal{J}^{\phi}$, which does not depend on $Z$), \cref{eq_sub_Z} can be mathematically derived as \begin{equation} \label{eq_JP_phi_Z} \begin{aligned} & \mathcal{J}^{\phi} + \gamma \|Z\|_F^2 \\ = & \sum_{i=1}^{n} \Big\|\phi(X_i)P - \sum_{j=1}^{n} \phi(X_j) P z_{ji} \Big\|_F^2 + \gamma \|Z\|_F^2 \\ = & \sum_{i=1}^{n} \Big\|\bar{\phi}_P(X_i) - \sum_{j=1}^{n} \bar{\phi}_P(X_j) z_{ji} \Big\|_F^2 + \gamma \|Z\|_F^2 \\ = & \Big\| \bar{\phi}_P(\textbf{X}) - \bar{\phi}_P(\textbf{X}) Z \Big\|_F^2 + \gamma \|Z\|_F^2. \end{aligned} \end{equation} It is seen that the $Z$-subproblem is quadratic and convex, and thus admits a closed-form solution via its first-order optimality condition.
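Putting the two sub-problems together, one alternating pass can be sketched as follows, again with the linear kernel so that the stacked features $\bar{\phi}_P(X_i)$ are computable explicitly. Function names and the toy setup are ours; this is a minimal sketch, not the reference implementation:

```python
import numpy as np

def update_P(K, Z, lam, r):
    """P-update: eigenvectors of the r smallest eigenvalues of
    (1 - lam) H1 + H2 - H3, with H1, H2, H3 built from the kernel blocks."""
    n = len(K)
    H1 = sum(K[i][i] for i in range(n))
    H2 = sum(K[i][j] * (Z[i] @ Z[j]) for i in range(n) for j in range(n))
    H3 = sum((K[i][j] + K[j][i]) * Z[j, i] for i in range(n) for j in range(n))
    w, V = np.linalg.eigh((1 - lam) * H1 + H2 - H3)   # ascending eigenvalues
    return V[:, :r]

def update_Z(X, P, gamma):
    """Closed-form Z from the first-order condition of the convex Z-subproblem,
    shown here for the linear kernel (phi = identity)."""
    Phi = np.stack([(Xi @ P).ravel() for Xi in X], axis=1)  # stacked features
    G = Phi.T @ Phi                     # entries equal Tr(P^T X_i^T X_j P)
    return np.linalg.solve(G + gamma * np.eye(len(X)), G)

rng = np.random.default_rng(5)
n, a, b, r = 6, 5, 4, 2
X = [rng.standard_normal((a, b)) for _ in range(n)]
K = [[X[i].T @ X[j] for j in range(n)] for i in range(n)]
lam, gamma = 0.3, 0.5

def objective(P, Z):
    J = sum(np.linalg.norm(X[i] @ P - sum(X[j] @ P * Z[j, i] for j in range(n)))**2
            + lam * np.linalg.norm(X[i] - X[i] @ P @ P.T)**2 for i in range(n))
    return J + gamma * np.linalg.norm(Z)**2

Z = np.zeros((n, n))
vals = []
for _ in range(5):
    P = update_P(K, Z, lam, r)
    Z = update_Z(X, P, gamma)
    vals.append(objective(P, Z))
# Each sub-problem is solved exactly, so the objective never increases.
assert all(v2 <= v1 + 1e-8 for v1, v2 in zip(vals, vals[1:]))
```

Since each sub-problem is solved exactly, the objective values form a non-increasing sequence, which is the content of the convergence theorem below.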
Hence, $Z$ is solved by \begin{equation} \label{eq_sol_Z} Z = \Bigg( \bar{\phi}_P^T(\textbf{X}) \bar{\phi}_P(\textbf{X}) + \gamma I_n \Bigg)^{-1} \Bigg( \bar{\phi}_P^T(\textbf{X}) \bar{\phi}_P(\textbf{X}) \Bigg). \end{equation} To explicitly expand \cref{eq_sol_Z} and give the precise solution of $Z$, we define the matrix $\bar{\mathcal{K}}^{\phi}\in\mathcal{R}^{n\times n}$ as follows: \begin{equation} \label{eq_bK_phi} \begin{aligned} \bar{\mathcal{K}}^{\phi} = & \bar{\phi}_P^T (\textbf{X})\bar{\phi}_P(\textbf{X}) \\ = & \begin{bmatrix} \bar{\phi}_P^T (X_1) \\ \vdots \\ \bar{\phi}_P^T (X_n) \end{bmatrix} \begin{bmatrix} \bar{\phi}_P(X_1) & \cdots & \bar{\phi}_P(X_n) \end{bmatrix} \\ = & \begin{bmatrix} \sum_{s=1}^{r} p_s^T \mathcal{K}^{\phi}_{11} p_s & \cdots & \sum_{s=1}^{r} p_s^T \mathcal{K}^{\phi}_{1n} p_s \\ \vdots & \ddots & \vdots \\ \sum_{s=1}^{r} p_s^T \mathcal{K}^{\phi}_{n1} p_s & \cdots & \sum_{s=1}^{r} p_s^T \mathcal{K}^{\phi}_{nn} p_s \end{bmatrix} \\ \nonumber\end{aligned}\end{equation}\begin{equation}\begin{aligned} = & \begin{bmatrix} \textbf{Tr}(P^T\mathcal{K}^{\phi}_{11}P) & \cdots & \textbf{Tr}(P^T\mathcal{K}^{\phi}_{1n}P) \\ \vdots & \ddots & \vdots \\ \textbf{Tr}(P^T\mathcal{K}^{\phi}_{n1}P) & \cdots & \textbf{Tr}(P^T\mathcal{K}^{\phi}_{nn}P) \end{bmatrix}. \end{aligned} \end{equation} Incorporating \cref{eq_bK_phi} into \cref{eq_sol_Z}, we obtain the solution of $Z$ with the explicit expression \begin{equation} \label{eq_sol_Z_K} \begin{aligned} Z = ( \bar{\mathcal{K}}^{\phi} + \gamma I_n )^{-1}\bar{\mathcal{K}}^{\phi}. \end{aligned} \end{equation} For clarity, we summarize the optimization steps in \algref{alg_optimization}. Regarding the optimization of KTRR, we have the following theorem, which guarantees convergence. \begin{theorem} Denote the objective function of \cref{eq_obj_kernel} by $g(P,Z)$; then the value sequence $\{ g(P^t,Z^t) \}_{t=1}^{\infty}$ is non-increasing under the update rules of \cref{eq_sol_P,eq_sol_Z_K} and converges.
\end{theorem} \begin{proof} Since the $P$- and $Z$-subproblems are solved exactly, it holds that \begin{equation} g(P^{t+1},Z^{t+1}) \le g(P^{t+1},Z^{t}) \le g(P^{t},Z^{t}), \end{equation} hence the value sequence $\{ g(P^t,Z^t) \}$ is non-increasing under the update rules of \cref{eq_sol_P,eq_sol_Z_K}. Moreover, it is straightforward to verify the nonnegativity of $\{ g(P^t,Z^t) \}$ from the definition of $g(P,Z)$ in \cref{eq_obj_kernel}; hence $\{ g(P^t,Z^t) \}$ is bounded from below and thus converges. \end{proof} \begin{rem} We analyze the time complexity of KTRR as follows. Computing each kernel matrix $\mathcal{K}^{\phi}_{ij}$ takes $O(ab^2)$ operations, and thus $O(n^2ab^2)$ for all of them. At each iteration, the cost comes from the calculation of $P^t$ and $Z^t$. According to \cref{eq_sub_H_phi,eq_sol_P}, it takes $O(n^3 + n^2 b^2 + b^2 r)$ operations to solve the $P$-subproblem per iteration. For the $Z^t$-update, it takes $O(n^2 br)$ operations to obtain $\bar{\mathcal{K}}^{\phi}$ in \cref{eq_bK_phi} and $O(n^3)$ operations to solve the $n\times n$ system in \cref{eq_sol_Z_K}. Thus, since $r\le b$, the overall complexity per iteration of KTRR is $O(n^3 + n^2 b^2 + b^2 r + n^2 br) = O(n^3+n^2b^2+b^3)$. \end{rem} { \scriptsize \begin{algorithm}[!tb] \algsetup{linenosize=\small } \small \caption{ Solving \cref{eq_obj_kernel}: Kernel Two-dimensional Ridge Regression (KTRR) } \vspace{1mm} \begin{algorithmic}[1] \STATE \textbf{Input}: $\textbf{X}$, $\lambda$, $\gamma$, $\epsilon$ (convergence tolerance), $t_{max}$ \STATE \textbf{Initialize:} $Z^0$, $P^0$, $t=0$. \STATE Construct kernel matrices $\mathcal{K}^{\phi}_{ij}$. \REPEAT \STATE Update $P^t$ by \cref{eq_sol_P}. \STATE Update $Z^t$ by \cref{eq_sol_Z_K}. \STATE $t=t+1$.
\UNTIL $t\geq t_{max}$ or $\{\mathcal{J}^{\phi}(P^t,Z^t)\}$ converges \STATE \textbf{Output}: $Z$, $P$ \vspace{1mm} \end{algorithmic} \label{alg_optimization} \end{algorithm} } \subsection{ Subspace Clustering Algorithm via KTRR } After obtaining the representation matrix $Z$ by solving \cref{eq_obj_kernel}, we construct an affinity matrix $\textbf{A}$ in a post-processing step, as is commonly done for spectral clustering-based subspace clustering methods \cite{peng2015subspace,liu2013robust}. Following \cite{peng2015subspace,liu2013robust}, we construct $\textbf{A}$ with the following steps: \begin{itemize} \item[1)] Let $Z = U\Sigma V^{T}$ be the skinny SVD of $Z$. Define $\bar{Z} = U\Sigma^{1/2}$ to be the weighted column space of $Z$. \item[2)] Obtain $\bar{U}$ by normalizing each row of $\bar{Z}$. \item[3)] Construct the affinity matrix $\textbf{A}$ as $[\textbf{A}]_{ij}=\left(|[\bar{U}\bar{U}^{T}]_{ij}|\right)^{\alpha}$, where the exponent $\alpha\ge 1$ controls the sharpness of the affinities between data points\footnote{In this paper, we follow \cite{liu2010robust} and set $\alpha=4$ for fair comparison. }. \end{itemize} Subsequently, we perform Normalized Cut (NCut) \cite{shi2000normalized} on $\textbf{A}$ in a way similar to \cite{agarwal2004k,peng2015subspace}. We present the detailed experimental results in the following section. \section{Experiment} \label{sec_experiment} In this section, we conduct extensive experiments to verify the effectiveness of the proposed method. In particular, we compare our method with several state-of-the-art subspace clustering algorithms, including LRR \cite{liu2013robust}, LapLRR \cite{liu2014enhancing}, SCLA \cite{peng2015subspace}, SSC \cite{elhamifar2013sparse}, S$^{3}$C \cite{li2015structured}, TLRR \cite{Zhou2019Tensor}, SSRSC \cite{Xu2019Scaled}, and DSCN \cite{pan2017deep}.
Seven data sets are used in our experiments, including Jaffe \cite{lyons1998japanese}, PIX \cite{hond1997distinctive}, Yale \cite{belhumeur1997eigenfaces}, Opticalpen, Alphadigit, ORL \cite{samaria1994parameterisation}, and PIE. Three evaluation metrics are adopted, including clustering accuracy, normalized mutual information (NMI), and purity, whose detailed definitions can be found in \cite{peng2018integrate,peng2017integrating}. In the rest of this section, we introduce the subspace clustering methods, the benchmark data sets, and the detailed clustering performance and analysis, respectively. For reproducibility, we will provide our code at xxx (available after acceptance). \subsection{Dataset} For the data sets used in our experiments, we show some examples in \cref{fig_examples}. We briefly describe these data sets as follows: \textbf{1) Yale}. It contains 165 gray-scale images of 15 persons, with 11 images of size 32$\times$32 per person. \textbf{2) JAFFE}. 10 Japanese female models posed 7 facial expressions and 213 images were collected. Each image has been rated on 6 emotion adjectives by 60 Japanese subjects. \textbf{3) PIX}. 100 gray-scale images of $100\times100$ pixels from 10 objects were collected. \textbf{4) Alphadigit} is a binary data set, which collects handwritten digits 0-9 and letters A-Z. In total, there are 36 classes with 39 samples per class, and each example has a size of 20$\times$16 pixels. \textbf{5) Opticalpen} collects handwritten pen digits 0-9. In total, there are 1797 images of size 8$\times$8 in this data set. \textbf{6) ORL} contains face images of size $32\times 32$ pixels from 40 individuals. Each individual has 10 images taken at different times, with varying facial expressions, facial details, and lighting conditions. \textbf{7) PIE} has face images of 68 persons with different poses, illumination conditions, and expressions. For each person, we select the first 5 images.
All images are resized to $32\times 32$ pixels. \begin{figure*}[!] \centering {\includegraphics[width=2\columnwidth]{fig2.png}} \caption{ Examples of the data sets used in our experiments. From left to right are examples from the Yale, PIX, Jaffe, ORL, PIE, Opticalpen, and Alphadigit data sets, respectively. } \label{fig_examples} \end{figure*} \subsection{Methods in Comparison} \label{sec_methods} To evaluate the performance of our method, we compare it with several state-of-the-art subspace clustering methods. We briefly describe the baseline methods and KTRR as follows: \begin{itemize} \item LRR seeks a low-rank representation of the data by minimizing the nuclear norm of the representation matrix. We vary its balancing parameter within the set $\{0.001,0.01,0.1,1,10,100,1000\}$; \item LapLRR is a nonlinear extension of LRR, which exploits nonlinear relationships of the data on a manifold \cite{Yin2016Laplacian}. We follow \cite{peng2018integrate} and keep 5 neighbors on the graph, where the binary and radial basis function (RBF) kernels are used, respectively, with the radial parameter varying in $\{0.001, 0.01, 0.1, 1, 10, 100, 1000\}$; \item SCLA is a non-convex variant of LRR, which seeks a low-rank representation of the data by minimizing the non-convex log-determinant rank approximation of the representation matrix. For its balancing parameters that control the sparsity of noise and the low-rankness of the representation, we vary them within the set $\{0.001,0.01,0.1,1,10,100,1000\}$; \item SSC seeks a sparse representation of the data by minimizing the $\ell_1$ norm of the representation matrix. We tune the regularization parameters within the set $\{0.001,0.01,0.1,1,10,100,1000\}$; \item S$^3$C is an extension of SSC, which seeks a sparse representation of the data in a latent space. Moreover, S$^3$C improves the clustering capability by considering nonlinear relationships of the data.
For its balancing parameter, we set it within the set $\{0.001,0.01,0.1,1,10,100,1000\}$. For its parameter that balances the sparsity and nonlinear structure of the representation, we set it within the set $\{0.1,0.15,0.2,0.25\}$; \item TLRR seeks a low-rank representation of tensor-type data, where it recovers a clean low-rank tensor while inferring the cluster structure of the data. We vary its balancing parameter within the set $\{0.001,0.01,0.1,1,10,100,1000\}$; \item SSRSC recovers a physically meaningful and more discriminative coefficient matrix by restricting the coefficients to be non-negative and constraining the sum of each coefficient vector to a scalar less than 1. For its parameters, including the sum of the coefficient vectors, the penalty parameter of the ADMM framework, and the iteration number, we follow the original paper and set them to 0.5, 0.5, and 5, respectively. We vary its balancing parameter within the set $\{0.001,0.01,0.1,1,10,100,1000\}$. \item DSCN \cite{pan2017deep} constructs a representation matrix with deep neural networks, where it maps given samples using explicit hierarchical transformations and simultaneously learns the reconstruction coefficients. For the network, we conduct experiments with different kernel sizes and three-layer network depths. The kernel size and network depth are chosen within the sets $\{[3,3,3], [5,5,3]\}$ and $\{[10,20,30], [10,20,40], [20,30,40]\}$, respectively. \item KTRR seeks the least squares representation of the data with 2D features. Both RBF and polynomial kernels are used, where we set the radial and power parameters within the sets $\{0.001, 0.01, 0.1, 1, 10, 100, 1000\}$ and $\{1,2,$ $3,4,5,8,10\}$, respectively. We set the number of projections within $\{1,3,5,7,9\}$ and vary the other balancing parameters within $\{0.001,$ $ 0.01, 0.1, 1,$ $ 10, 100, 1000\}$.
\end{itemize} \subsection{Comparison of Clustering Performance} \label{sec_exp_performance} In this section, we present a detailed comparison of KTRR and the baseline methods. To provide a more comprehensive evaluation of KTRR, we conduct experiments in a way similar to \cite{peng2015subspace,peng2017nonnegative,peng2020nonnegative}. Specifically, the experimental setting is as follows. For each data set, we conduct experiments using its subsets with different numbers of clusters. In particular, for a data set with a total of $\bar{N}$ clusters, we consider its subsets with $N$ clusters, where $N$ may range over a set of values. For example, in the ORL data, $\bar{N}=40$ and we consider its subsets with 5, 10, 15, 20, 25, 30, 35, and 40 clusters, respectively. It is clear that there are $\frac{\bar{N}!}{(\bar{N}-N)!N!}$ possible subsets for a specific $N$ value, and we randomly choose 10 of them in the experiment. We report the results in \cref{tab_per_jaffe,tab_per_pix,tab_per_opticalpen,tab_per_alphadigit,tab_per_yale,tab_per_orl,tab_per_pie}, where the average performance over the 10 subsets is reported for each $N$ value. \input{results_all.tex} Generally, we observe that KTRR achieves the leading performance among all methods. In particular, we have the following observations: 1) KTRR has the best performance in all cases on the Jaffe data set. 2) KTRR has the best performance in almost all cases on the PIX, Yale, and PIE data sets. 3) KTRR is the best on the Alphadigit data set, where it obtains a top-two performance in almost all cases, among which more than half are the best. 4) KTRR is the second-best method on the Opticalpen and ORL data sets with quite competitive performance. For each observation, we provide a detailed discussion and analysis in the following. \subsubsection{Observation 1)} It is seen that KTRR clusters the Jaffe data set correctly in all cases, whereas the baseline methods cannot.
Besides KTRR, SSRSC has the best performance, achieving a top-two performance in all cases. LRR, LapLRR, SCLA, and SSC are also very competitive on this data set, with averaged performance higher than 99\%. However, these methods are less competitive on other data sets, which will be discussed in later sections. It should be noted that although some methods show promising performance, KTRR is the only method that achieves 100\% accuracy in all cases. \subsubsection{Observation 2)} It is seen that KTRR achieves the top performance in 32 out of 36, 17 out of 21, and 24 out of 27 cases on the Yale, PIE, and PIX data sets, respectively. Moreover, KTRR also achieves the second-best performance on these data sets: it obtains the second-best performance in 2, 2, and 3 cases on Yale, PIE, and PIX, respectively, which indicates that KTRR has a top-two performance in 34 out of 36, 19 out of 21, and 27 out of 27 cases on these data sets. On these data sets, the most competitive baselines include DSCN, SSRSC, LRR, LapLRR, and SCLA. Compared with these methods, KTRR improves the averaged clustering accuracy, NMI, and purity by at least 7\%, 6\%, and 6\% on the Yale data set. The improvement can be even more significant when comparing KTRR with each baseline method individually. For example, KTRR improves the averaged NMI by about 10\% compared with SSRSC and DSCN. On the Yale data set, LRR, LapLRR, and SCLA are comparable to each other, obtaining a top-two performance in 11, 10, and 9 out of 36 cases, respectively. However, such performance is still significantly inferior to that of KTRR. On the PIX and PIE data sets, the most competitive baselines include DSCN and S$^3$C. Observations similar to those on the Yale data set can be made: the most competitive baselines outperform the other baselines but remain inferior to KTRR. 
Moreover, methods such as SSRSC and S$^3$C do not always show competitive performance on all these data sets, whereas KTRR is consistently the best. These observations indicate the superior performance of KTRR. \subsubsection{Observation 3)} On the Alphadigit data set, KTRR achieves the best, second-best, and third-best performance in 13, 8, and 3 out of 24 cases, respectively. That is, KTRR obtains more than half of the best and almost all of the top-two performances on this data set. Among the baseline methods, DSCN, LRR, and SCLA achieve the top performance in 6, 4, and 1 cases, respectively. Moreover, DSCN achieves the second-best performance in 4 cases. Generally, DSCN is the second-best method on the Alphadigit data set, but its performance is less promising than that of KTRR. In general, we may conclude that KTRR outperforms DSCN, as well as the other methods, on the Alphadigit data set. \subsubsection{Observation 4)} On the ORL data set, DSCN and KTRR are the most competitive methods. DSCN obtains a top-two performance in 18 out of 24 cases, among which 15 are the best and 3 are the second best. KTRR obtains a top-two performance in all cases, including 6 cases with the best performance. Moreover, on average, DSCN outperforms KTRR in accuracy and purity by about 1-2\%, whereas KTRR outperforms DSCN in NMI by about 4\%. Among the other methods, SCLA obtains a top-two performance in 7 cases, which is the best among the rest. These observations indicate that KTRR is competitive with DSCN while superior to the other baseline methods on the ORL data set. On the Opticalpen data set, SSRSC, LRR, and KTRR are the most competitive methods, among which SSRSC is the best. SSRSC achieves the best performance in 17 out of 27 cases, which suggests its superiority over the other methods on the Opticalpen data set. 
Among the other methods, KTRR is the best, obtaining the best and second-best performance in 3 and 13 cases, respectively. Overall, SSRSC and KTRR achieve a top-two performance in 18 and 16 cases, respectively. Moreover, LRR has the second-best performance in 8 cases but no best ones, showing inferior performance to KTRR. These observations indicate that although KTRR is not the best on the Opticalpen data set, it is quite competitive with SSRSC and superior to the other methods. \subsubsection{Discussion} It is observed that although KTRR outperforms the other methods on the Alphadigit data set, the improvements are relatively less significant than on other data sets such as Yale. Moreover, although KTRR has the best performance in several cases on the Opticalpen data set, it is generally inferior to SSRSC on this data set. One reasonable explanation is as follows. The Alphadigit and Opticalpen data sets consist of pendigit images, while the others consist of face images. Pendigit images contain less structural information than face images, so it is relatively more difficult to extract rich and useful structural information with the projection when constructing the representation matrix. Nevertheless, KTRR still outperforms or is comparable to the baseline methods on these data sets. In general, all algorithms perform relatively better on ``easy" data sets such as Jaffe and PIX than on ``hard" ones such as ORL and Alphadigit. The reason is that the Jaffe and PIX data sets have fewer variations, while the other data sets are more complicated. For example, face images in the PIE data set may differ in angle, facial expression, lighting conditions, and accessories; some images in the Alphadigit data set have similar shapes but belong to different categories, such as the digit ``0" and the letter ``O". These properties of the data sets make the corresponding clustering task more challenging. 
In general, we can see that the baseline methods may obtain the best performance on some data sets, but they do not consistently show superior performance to KTRR across data sets. For example, DSCN is the best method on the ORL data set, but KTRR outperforms DSCN on the other data sets. These observations suggest the effectiveness and superior clustering performance of KTRR relative to the baseline methods. In the following subsections, we further evaluate KTRR with more detailed tests. \begin{figure}[!t] \centering {\includegraphics[width=1\columnwidth]{fig3.png}} \caption{ Example of the learned representation matrix $Z$ (top) and the constructed affinity matrix $\textbf{A}$ (bottom) on the Jaffe data. } \label{fig_Z} \end{figure} \subsection{Learned Representation} In the above tests, we have conducted extensive experiments to evaluate the clustering performance of all methods, which has confirmed the effectiveness of KTRR. To better understand the clustering behavior of KTRR, in this test we visually show some examples of the learned representation matrix $Z$ as well as the affinity matrix $\textbf{A}$ constructed in the post-processing. Without loss of generality, we show the matrices on the Jaffe data, where we consider the cases of $N=7, 8, 9,$ and 10, respectively. We visualize these matrices in \cref{fig_Z}. It is seen that the learned representation matrices have a clear block-diagonal structure, which clearly reveals the group information of the data. The post-processing step makes the structured representation sharper, leading to even stronger structural effects. Hence, the proposed method performs clustering effectively with such representation matrices. \subsection{Convergence Study} \label{sec_exp_conv} In \cref{sec_optimization}, we theoretically analyzed the convergence of the objective value. To better understand the convergence behavior of the proposed algorithm, we empirically show some convergence results. 
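As a schematic illustration of such an empirical convergence check (not the KTRR solver itself), the monitoring loop might look like the following, with a toy quadratic objective and gradient step standing in for the actual KTRR updates:

```python
# Sketch of an empirical convergence check for an iterative solver.
# The quadratic objective and gradient step below are placeholders for
# the actual KTRR updates; only the monitoring logic mirrors the text.

def run_and_monitor(x0, step=0.1, iters=50):
    """Iterate a toy solver and record the objective value per iteration."""
    objective = lambda x: (x - 3.0) ** 2          # placeholder objective
    gradient = lambda x: 2.0 * (x - 3.0)
    x, history = x0, []
    for _ in range(iters):
        history.append(objective(x))
        x -= step * gradient(x)                   # one solver update
    return history

history = run_and_monitor(x0=10.0)
# Convergence in objective value: no iteration should increase it.
assert all(a >= b for a, b in zip(history, history[1:]))
```

Plotting such a recorded history against the iteration index yields convergence curves of the kind shown in the following figures.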
In this test, we use the Jaffe and Alphadigit data sets for illustration. To empirically verify the convergence of KTRR in objective value, without loss of generality, we fix $r=5, \lambda = 0.1, \gamma = 0.1$ and run the algorithm for 50 iterations. We plot the objective values in \cref{fig_conv_f}. It is observed that the proposed algorithm converges in objective value within a few iterations. \begin{figure}[!t] \centering {\includegraphics[width=1\columnwidth]{fig4.png}} \caption{ Examples of convergence curves of the objective value on the Jaffe and Alphadigit data sets. The linear kernel is used, and the other parameters are fixed as $r = 5$, $\lambda = 0.1$, and $\gamma = 0.1$. } \label{fig_conv_f} \end{figure} Moreover, since it is difficult to provide theoretical results on the convergence of the variables, in this test we show some experimental results to verify it. To show the convergence of $\{Z_t\}$ and $\{P_t\}$, we plot the sequences $\{\|Z_{t+1}-Z_{t}\|_F\}_{t=0}^{\infty}$ and $\{\|P_{t+1}-P_{t}\|_F\}_{t=0}^{\infty}$, i.e., the differences of consecutive updates of the variables. We retain the above settings and show the results in \cref{fig_conv_zp}. It is observed that the proposed algorithm converges within a few iterations in both $\{P_t\}$ and $\{Z_t\}$, which implies fast convergence of the proposed method in the variable sequences. Similar convergence patterns can be observed on other data sets with various parameters. These observations suggest fast convergence and efficiency of KTRR and its potential applicability in real-world applications. \begin{figure}[!h] \centering {\includegraphics[width=1\columnwidth]{fig5.png}} \caption{ Examples of convergence curves of the variables on the Jaffe and Alphadigit data sets. The linear kernel is used, and the other parameters are fixed as $r = 5$, $\lambda = 0.1$, and $\gamma = 0.1$. 
} \label{fig_conv_zp} \end{figure} \subsection{Feature Extraction and Data Reconstruction} In this subsection, we show some results on how the learned projection matrix works. We use the Yale data and adopt the linear kernel for illustration. Without loss of generality, we fix $r = 30$, $\lambda = 0.01$, and $\gamma = 0.01$ and obtain the projection matrix $P$. We show the features extracted and the examples reconstructed by $P$ in \cref{fig_reconstruction}. It is seen that the key features of the examples can be captured with a few projection directions. These key features reconstruct the original example well, suggesting the effectiveness of the proposed method in feature extraction. \begin{figure}[!t] \centering \subfigure[Example 1]{\includegraphics[width=1\columnwidth]{fig6a.png}} \subfigure[Example 2]{\includegraphics[width=1\columnwidth]{fig6b.png}} \caption{ Examples of reconstruction on the Yale data set. In each panel, the top left is the original sample image. For the rest, the top row shows the extracted $j$-th feature $Xp_jp_j^T$, while the bottom row shows the image reconstructed using the top $j$ features, $\sum_{s=1}^{j}Xp_sp_s^T$. From left to right, $j=$ 1, 3, 5, 9, and 15, respectively. The linear kernel is used for reconstruction, and the other parameters are fixed as $\lambda = 1$, $\gamma = 0.01$, and $r=15$. } \label{fig_reconstruction} \end{figure} \begin{figure*}[!] \centering {\includegraphics[width=1.3\columnwidth]{fig7.png}} \caption{ Examples of how the number of projections affects the performance of KTRR in accuracy, NMI, and purity on the Yale (top) and Jaffe (bottom) data sets. For a specific $r$ value, we report the best performance obtained by tuning all the other parameters in a grid-search scheme as in \cref{sec_exp_performance}. } \label{fig_r} \end{figure*} To further test how the projection works, we investigate how the clustering performance of KTRR changes with respect to the $r$ value. 
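The top-$j$ reconstruction used in \cref{fig_reconstruction}, i.e., summing the per-direction features $Xp_sp_s^T$, can be sketched as follows; here random data and its SVD directions stand in for a real image and the learned projection $P$:

```python
import numpy as np

# Sketch of 2D feature extraction/reconstruction via projection directions,
# summing the rank-one features X p_s p_s^T over the top-j directions.
# The directions here are right singular vectors of a random matrix, an
# illustrative stand-in for the learned projection P.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))                  # one 2D example
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt.T                                         # columns p_s: directions

def reconstruct(X, P, j):
    """Reconstruct X from its top-j projection directions."""
    return sum(np.outer(X @ P[:, s], P[:, s]) for s in range(j))
```

With such directions, the reconstruction error shrinks as $j$ grows and vanishes once all directions are used, mirroring the qualitative behavior shown in the figure.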
Without loss of generality, we use the Yale and PIX data sets for illustration. For each data set, we consider two types of kernels with the same parameter settings as in the previous test. For each kernel, we vary $r\in\{1,3,5,7,9,11,13,15\}$. For a fixed $r$ value, we vary all the other parameters within the set $\{0.001,0.01,0.1,1,10,100,1000\}$, record the highest performance, and report it in \cref{fig_r}. For each metric, two curves are obtained, corresponding to the RBF and polynomial kernels, respectively. For both kernels, KTRR reaches its best performance with small $r$ in all metrics. With larger $r$ values, the performance of our method is not further improved, implying that a few projection directions can sufficiently extract the key features of the data and lead to promising clustering performance. Thus, our method can also be applied as a powerful dimension-reduction technique for 2D data. \subsection{KTRR vs. TRR} In this test, we conduct experiments on the Yale and PIE data sets to verify the importance of learning nonlinear structures of the data. For the Yale and PIE data sets, we use the same subsets as in \cref{sec_exp_performance}. To show the importance of learning nonlinear structures with kernels, we compare the performance of KTRR with general kernels and with the linear kernel as two cases. For the linear case, we use a linear kernel for KTRR and denote it as TRR, tuning the other parameters in a grid-search scheme as in \cref{sec_exp_performance}. For KTRR, we use general kernels as described in \cref{sec_methods} and tune the other parameters in the same way as for TRR. We report the best performance of KTRR as well as TRR with respect to the number of clusters in \cref{fig_ktrr_trr}. It is seen that KTRR with general kernels generally outperforms TRR with the linear kernel, with significant improvements in many cases. 
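The two kernel choices being compared can be made concrete with a small sketch; the vectors are toy data, and how the kernel enters the KTRR objective is omitted here:

```python
import math

# Sketch of the two kernel choices discussed above: the linear kernel
# (the TRR case) and the RBF kernel (one of the general kernels used by
# KTRR). Toy vectors only; the KTRR objective itself is not modeled.

def linear_kernel(x, y):
    """k(x, y) = x . y"""
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=0.1):
    """k(x, y) = exp(-gamma * ||x - y||^2)"""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

x, y = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]
assert rbf_kernel(x, y) == 1.0       # identical points: maximal similarity
assert linear_kernel(x, y) == 14.0
```

The RBF kernel corresponds to a nonlinear feature mapping, which is why it can capture nonlinear structures that the linear kernel (and hence TRR) cannot.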
In fact, it is natural that KTRR always performs no worse than TRR, because TRR is the special case of KTRR with a linear kernel, which ensures that KTRR attains at least the same performance as TRR. Generally, much better performance is observed when general kernels are used, because they correspond to more complicated nonlinear mappings, which may capture the nonlinear structures of the data better than a linear mapping. \begin{figure*}[!] \centering {\includegraphics[width=1.3\columnwidth]{fig8.png}} \caption{ Examples of how the proposed method performs with general kernels (denoted as KTRR) and the linear kernel (denoted as TRR) in accuracy, NMI, and purity on the Yale (top) and PIE (bottom) data sets. For a specific $N$ value, we report the best performance of KTRR and TRR obtained by tuning all the other parameters in a grid-search scheme as in \cref{sec_exp_performance}. } \label{fig_ktrr_trr} \end{figure*} \section{Conclusions} \label{sec_conclusion} In this paper, we propose a novel subspace clustering method named KTRR for 2D data. KTRR provides a way, different from tensor methods, to learn the most representative 2D features from 2D data when learning the data representation. KTRR performs 2D feature learning and low-dimensional representation construction simultaneously, which allows the two tasks to mutually enhance each other. The 2D kernel endows KTRR with an enhanced capability of capturing nonlinear relationships in the data. An efficient algorithm is developed for its optimization, with a provably decreasing and convergent objective value. Extensive experimental results confirm the effectiveness and efficiency of our method. Besides the strengths of KTRR, we should also note its weaknesses and possible future research directions, which are summarized as follows. 
1) KTRR captures spatial information in the horizontal direction by multiplying a single projection matrix $P$ on the right-hand side, which omits spatial information in the vertical direction. Thus, it is interesting to introduce another projection $Q$ on the left-hand side of the data examples to extract both horizontal and vertical spatial information. 2) In KTRR, we need to provide a value for $r$, which determines the number of projection directions to seek. After extending KTRR to the bi-directional case, we would need to provide the number of projections for both sides. It is interesting to develop KTRR such that it automatically determines the optimal number of projection directions for $P$ and $Q$, respectively, in a self-learning way. 3) The clustering performance of KTRR relies on the kernel selection. However, the optimal type of kernel function and its parameters are not always available. Thus, it is meaningful to develop a multi-kernel model based on KTRR such that it automatically learns an optimal kernel from a set of kernel functions. \section*{Acknowledgment} This work is supported by the National Natural Science Foundation of China (NSFC) under Grants 61806106, 61802215, and 61806045, and by the Shandong Provincial Natural Science Foundation, China, under Grants ZR2019QF009 and ZR2019BF011; Q.C. is supported by NIH UH3 NS100606-03. \bibliographystyle{ieeetran}
\section{Introduction} India is predominantly an agrarian economy, with nearly 70\% of its rural households depending primarily on agriculture for their livelihood. Approximately 82\% of farmers are small and marginal landowners (FAO, India at a glance~\cite{FAO}), a share that is estimated to grow to 91\% by 2030. Marginal landholdings, combined with traditional modes of farming and external factors such as irregular rainfall and depletion of groundwater, result in crop yields in India that are still below the world average (Economic Survey 2015-16~\cite{unionbudget}). Existing approaches for estimating yield rely on manual surveys during the growing season. The enormous costs and manual effort required to conduct such surveys make them a cumbersome way to predict crop yields. At the same time, the lack of reliable and up-to-date information on crop yield affects supply-demand stocks and export options. Thus, an in-season crop yield forecast can help farmers improve production and enable government agencies to devise appropriate plans. Remote sensing data is becoming an increasingly popular source of data for developing models for various applications such as {poverty estimation}~\cite{xie}, {income prediction}~\cite{tushar}, and {yield estimation}~\cite{aaai}. Easy and inexpensive access to high-resolution imagery, combined with increasingly sophisticated modeling techniques, makes it a viable solution for many of these problems. In particular, multi-spectral satellite imagery carries information across a wide spectrum of wavelengths that abundantly encodes land use such as vegetation, water bodies, and urban areas. In this paper, we propose an approach for crop yield estimation from satellite imagery using deep learning techniques that have found success in traditional computer vision tasks. 
Unlike prior methods that extract hand-crafted features or rudimentary features such as histograms, our approach works directly on the satellite images, allowing the model to learn representations that are useful for the yield prediction task. Yield estimates in a geographical area often depend on other factors such as nearby water bodies and urbanization. While prior approaches to yield prediction do not take these factors into account, we incorporate them by explicitly weaving land use classification data into our model. We model the temporal features in the data through a deep LSTM model, which allows our model to automatically identify the relevance of the different growing stages and the satellite image bands for the yield prediction task. We evaluate and validate our approach on the task of tehsil (block) level wheat yield prediction for seven states in India. We use MODIS surface reflectance multi-spectral satellite images, along with land use classification maps, to train the proposed deep learning models. The experimental results show that our model outperforms traditional remote-sensing based methods by 70\% and recently introduced deep learning models by 54\%. To the best of our knowledge, this is the first method that yields promising results for crop yield prediction in the Indian context. \section{Related Work} In recent years, remote sensing data has been widely used in various sustainability applications such as land use classification~\cite{albert}, infrastructure quality prediction~\cite{kdd}, poverty estimation~\cite{xie}, population estimation~\cite{population}, and income level prediction~\cite{tushar}. Crop yield estimation using remote sensing data has also been explored over the past few years. Prasad et al. 
~\cite{anoop} employ a piece-wise linear regression and a non-linear Quasi-Newton multi-variate regression model to predict soybean yield in the state of Iowa using the normalized difference vegetation index (NDVI), soil moisture, surface temperature, and rainfall data. Kuwata and Shibasaki~\cite{kuwata} estimate county-level crop yield for the state of Illinois from MOD09A1-derived EVI (Enhanced Vegetation Index), climate, and other environmental data using a deep neural network and an SVM. Johnson et al.~\cite{jonson} learn models to predict corn and soybean yields from NDVI and daytime land surface temperature data (derived from the Aqua MODIS sensor product MYD11A2) using a regression tree. Mallick et al.~\cite{rice} use the Vegetation Condition Index (VCI), derived from NDVI and the Normalised Difference Wetness Index (NDWI), for rice yield prediction in India, while Dubey et al.~\cite{sugarcane-fasal} use VCI to model sugarcane yield variability in 52 Indian districts. The Indian national-level program called FASAL (Forecasting Agriculture using Space, Agro-meteorology, and Land-based observations) has been operational since 2006. FASAL aims at providing pre-harvest crop production forecasts at the national/state/district level~\cite{ray14a}. However, information about these forecasts is scarcely available in the public domain. All these prior approaches model yield using some form of vegetation index derived from multi-spectral satellite imagery rather than directly employing the satellite imagery, and they mostly utilize the 2 or 3 bands traditionally used in the generation of these indices. In contrast, we let our model automatically learn the utility of the different bands during the crop growing season. The model also learns to implicitly estimate the importance of the multi-spectral satellite images belonging to different phases of the crop growing season. 
Our study is inspired by the work of You et al.~\cite{aaai} on corn yield prediction using remote sensing data and deep learning models. You et al. extract histograms of crop pixel intensities estimated using crop masks on multi-spectral satellite images. The series of histograms obtained over the growing season of the corn crop is modeled using an LSTM and Gaussian processes to predict the crop yield. In contrast, our approach works directly on the raw multi-spectral satellite images to learn the representations that are crucial for crop yield prediction. We also incorporate additional information on nearby water bodies and urban built-up areas to train deep neural network models that yield better results. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{s_area1.png} \caption{Study area with all the 948 tehsils belonging to 7 states of India} \label{fig:studyarea} \end{figure} \section{Problem Setting and Data Description} The primary objective of this work is to build a crop yield prediction model for the wheat crop using multi-spectral satellite imagery. A series of satellite images captured during the growing season of wheat, before the harvest, is given as input to the model. Wheat is typically grown and harvested in the Rabi season (October-April) in India; hence we focus on this growing period for learning the model. We have collected the crop yield statistics from the open government data platform~\cite{OGD}. In India, the lowest administrative unit for which crop yield statistics are available is the district level. However, the average size of a satellite image required to cover a district would be too large for training the model ($>1024\times1024$). We therefore split the yield of a district across smaller administrative units called tehsils, taking into account the agricultural area in each tehsil. 
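One plausible reading of this area-weighted split is apportioning a district-level total in proportion to each tehsil's agricultural area; the exact scheme is not spelled out in the text, and all names and numbers below are hypothetical:

```python
# Hypothetical sketch of apportioning a district-level figure across its
# tehsils in proportion to agricultural area. The paper's exact weighting
# scheme is not detailed; the tehsil names and numbers are made up.

def split_by_area(district_total, tehsil_areas):
    """Apportion district_total (e.g. production in kg) by agricultural area."""
    total_area = sum(tehsil_areas.values())
    return {name: district_total * area / total_area
            for name, area in tehsil_areas.items()}

shares = split_by_area(120000.0, {"tehsil_a": 100.0, "tehsil_b": 300.0})
assert abs(sum(shares.values()) - 120000.0) < 1e-6
assert shares["tehsil_b"] == 3 * shares["tehsil_a"]
```

The shares sum back to the district total by construction, so no production is lost or double-counted in the split.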
A tehsil (also known as a mandal or taluk) is an administrative division in India that comprises multiple villages in rural areas and various blocks in urban areas. The maximum size of a satellite image required to cover a tehsil is 300$\times$300. Predicting the crop yield at the tehsil level will also help agencies devise customized plans for improved utilization of resources. In this study, we focus on the seven major wheat-growing states that together account for more than 90\% of the total wheat production in India. The crop yield data for these states at the district level are only available for 2001-2011. There are a total of 948 tehsils in our study, with a tehsil having an average geographical spread of over 35,000 hectares. The state-wise distribution of these tehsils and the average wheat crop yield for the year 2011 are provided in Table \ref{tab:statistics}. The geographical spread of the study area is illustrated in Figure \ref{fig:studyarea}. \begin{table} \centering \small \caption{Statistics of tehsils and wheat crop yield for the year 2011 in the dataset.} \begin{tabular} {|l|c|p{1.5cm}|p{1.5cm}|} \hline \hline State & No. of tehsils & Average area (in hectares) & Average yield (kg/hectare)\\ \hline \hline Gujarat & 215 & 7040.0 & 505.3 \\ \hline Bihar & 53 & 39123.9 & 2001.8\\ \hline Haryana & 46 & 51392.9 & 2354.5\\ \hline Madhya Pradesh & 167 & 28283.4 & 698.5\\ \hline Uttar Pradesh & 209 & 44361.6 & 1133.2\\ \hline Rajasthan & 211 & 14588.6 & 5768.6\\ \hline Punjab & 47 & 67125.0 & 2404.7\\ \hline \hline \end{tabular} \label{tab:statistics} \end{table} The proposed work uses publicly available satellite data from the following MODIS sensors onboard NASA's Terra and Aqua satellites~\cite{lpdaac}: \begin{itemize} \item MOD09A1 -- This is also referred to as the MODIS Surface Reflectance 8-Day L3 Global product. 
It provides an estimate of the surface spectral reflectance as it would be measured at ground level in the absence of atmospheric scattering or absorption, with a spatial resolution of 500m. \item MYD11A2 -- An eight-day composite thermal product from the Aqua MODIS sensor. \item MODIS Land Cover -- The primary land cover scheme incorporated by the MODIS Terra+Aqua Combined Land Cover product identifies 17 classes defined by the IGBP (International Geosphere-Biosphere Programme), including 11 natural vegetation classes, three human-altered classes, and three non-vegetated classes, with a spatial resolution of 500m. A pixel is assigned to a class if 60\% or more of the area covered by the pixel belongs to that class. In our study, we only consider pixels that have been classified as agriculture, water bodies, and urban built-up. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{bihar-tehsil.png} \caption{[Best viewed in color] Visualization of the different satellite image bands for a tehsil.} \label{fig:bands} \end{figure} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{arr_yield.png} \caption{[Best viewed in color] Visual images for different yield levels in the seven states} \label{fig:variationinstates} \end{figure*} Each multi-spectral satellite image, $S_t$, consists of the 7 bands of the MODIS land surface reflectance image MOD09A1, two bands of MODIS land surface temperature, and three binary bands derived from the MODIS land cover image corresponding to water bodies, agricultural land, and urban built-up. These bands are illustrated in Figure \ref{fig:bands} for a tehsil from the state of Bihar. Prior approaches use vegetation indices derived mostly from bands 1 and 2. Figure \ref{fig:variationinstates} shows representative visual images for low-, medium-, and high-yielding tehsils for each of the seven states. A significant variation in these images is observed across all the states. 
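Assembling the 12-band input $S_t$ described above amounts to stacking the three sources along the channel axis; in this sketch, random arrays stand in for real MODIS rasters, and 300$\times$300 is the stated maximum tehsil image size:

```python
import numpy as np

# Sketch of assembling the 12-band input S_t described above: 7 surface
# reflectance bands (MOD09A1), 2 temperature bands (MYD11A2), and 3 binary
# land-cover masks (water, agriculture, urban built-up). Random arrays
# stand in for real MODIS rasters.
H = W = 300
rng = np.random.default_rng(0)
reflectance = rng.random((H, W, 7))
temperature = rng.random((H, W, 2))
land_cover = (rng.random((H, W, 3)) > 0.5).astype(np.float32)  # binary masks

S_t = np.concatenate([reflectance, temperature, land_cover], axis=-1)
assert S_t.shape == (300, 300, 12)
```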
The variance is to such an extent that a couple of states do not have a high-yielding tehsil. Further, we observe a lot of variation in the vegetation landscape across tehsils that are supposed to produce a similar yield. For example, tehsils with medium yield in Punjab and Haryana appear to be much greener than those in states such as Rajasthan and Gujarat. This level of heterogeneity in the data led us to model the yield in each state independently. We also show through our experiments the difficulty of predicting the yield of a state using a model trained on data from a different state. \section{Methodology} \subsection{Preliminaries} We first give a brief overview of the deep neural network models that are the building blocks of our crop yield estimator before describing the final model architecture. \subsubsection{Deep Convolutional Neural Networks} Deep Convolutional Neural Networks (CNNs)~\cite{cnn} can be viewed as a large composition of complex nonlinear functions that learn hierarchical representations of the data. A CNN typically consists of two types of layers: fully connected and convolutional layers. A fully connected layer consists of multiple nodes. Each node takes a vector $\textbf{x}\in \mathcal{R}^D$ as input and outputs a scalar that is a nonlinear transformation of the weighted sum of the inputs: \begin{equation} z = f\left(b + \textbf{w}^T \textbf{x}\right) \end{equation} where $\textbf{w}$ are the weights, $b$ is the scalar bias term, and $f(\cdot)$ is the nonlinear transformation (usually a rectified linear unit (ReLU) or tanh). A convolutional layer typically consists of three main operations: convolution, nonlinear activation, and pooling. The convolution operation is performed using a filter with shared parameters, which results in a significant reduction in the number of parameters. 
The filter $\textbf{W} \in \mathcal{R}^{k\times k\times D}$ is convolved with an input tensor $\textbf{X}\in \mathcal{R}^{M\times N \times D}$. These filters are trainable and often learn various local patterns present in the input tensor. The convolution operation is followed by a nonlinear function; ReLU is the popular choice when working with images. The resulting output can be represented as \begin{equation} \textbf{Z} = f\left(b + \textbf{W} * \textbf{X}\right) \end{equation} where $f$ represents the nonlinear function and $*$ represents the convolution operation. This is often succeeded by the pooling operation. Pooling can be viewed as a sampling process that summarizes the information present in the input. The most common pooling operation is max pooling, which outputs the maximum of all inputs within a window of size $k\times k$. The output of the convolutional layer is referred to as a feature map. A deep CNN has a large number of stacked convolutional and fully connected layers, with the output of one layer acting as the input to the next. A large number of layers helps the CNN learn global patterns present in the input. The weights at each layer are learned using the backpropagation algorithm, which follows a standard gradient descent approach to minimize the overall loss. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{LSTM-early.PNG} \caption{The proposed CNN-LSTM architecture for predicting crop yield from a sequence of multi-spectral satellite imagery} \label{fig:architecture} \end{figure*} \subsubsection{Recurrent Neural Networks} Recurrent neural networks (RNNs)~\cite{mikolov2010recurrent} are a special type of neural network for learning from sequential data. An RNN can remember an encoded representation of its past, making it suitable for modeling sequential data. 
Given sequential data $\textbf{x}_1, \textbf{x}_2,\ldots, \textbf{x}_T$ for $T$ time steps, the output $\textbf{y}_t$ at time step $t$ is a function of the input $\textbf{x}_t$ at time step $t$ and the hidden state $\textbf{z}_{t-1}$ at time step $t-1$, defined as follows: \begin{equation} \textbf{z}_t = f(\textbf{w}^T \textbf{x}_t+ \textbf{u}^T\textbf{z}_{t-1}) \end{equation} \begin{equation} \textbf{y}_t = g(\textbf{v}^T\textbf{z}_t) \end{equation} where $\textbf{w}$, $\textbf{u}$, and $\textbf{v}$ are the weights applied to $\textbf{x}_t$, $\textbf{z}_{t-1}$, and $\textbf{z}_t$, respectively, and $f$ and $g$ are nonlinear activation functions. As the output depends on the hidden states of the previous time steps, the backpropagation-through-time algorithm for updating the weights can suffer from vanishing or exploding gradients~\cite{bengio1994learning}. The \textbf{LSTM}~\cite{lstm}, a special kind of RNN, was introduced to overcome this issue by integrating a gradient superhighway in the form of a cell state $\textbf{c}$, in addition to the hidden state $\textbf{h}$. The LSTM model has gates that provide the ability to add and remove information from the cell state. 
The forget gate decides the information to be deleted from the cell state and can be defined as follows \begin{equation} \textbf{f}_t = \sigma (\textbf{w}_f^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_f) \end{equation} The input gate that determines the information to be added to the cell state is defined as \begin{equation} \textbf{i}_t = \sigma (\textbf{w}_i^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_i) \end{equation} The cell state $\textbf{c}_t$ is obtained by using both $\textbf{f}_t$ and $\textbf{i}_t$ in the following manner \begin{eqnarray} \tilde{\textbf{c}}_t & = & \tanh(\textbf{w}_c^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_c)\\ \textbf{c}_t & = & \textbf{f}_t \odot \textbf{c}_{t-1} + \textbf{i}_t \odot \tilde{\textbf{c}}_t \end{eqnarray} where $\odot$ denotes the elementwise product. Similarly, the output gate $\textbf{o}_t$ and the hidden state $\textbf{h}_t$ of the LSTM are defined as \begin{eqnarray} \textbf{o}_t & = & \sigma (\textbf{w}_o^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_o)\\ \textbf{h}_t & = & \textbf{o}_t \odot \tanh(\textbf{c}_t) \end{eqnarray} An LSTM is more effective in modeling long sequences than a simple RNN due to a more effective gradient flow during backpropagation. \subsection{Crop Yield Prediction Model Architecture} We directly input the multi-spectral satellite imagery to our deep neural network model. The motivation for using the raw imagery is to extract features relating to the spatial location of crop pixels and the properties of neighboring regions such as water bodies, urban landscapes, etc. We hypothesize that these factors influence the crop yield. The proposed deep network has three modules. The first module is a CNN that learns to extract relevant features from the images. The second module is an LSTM that models the temporal relationships during the crop growing season. The third module is a fully connected network that finally predicts the crop yield. The proposed CNN-LSTM architecture is illustrated in Figure \ref{fig:architecture}.
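The LSTM gate equations above translate directly into code. The following single-step NumPy sketch is illustrative (toy weight shapes and zero biases are our assumptions); $[\textbf{h}_{t-1}, \textbf{x}_t]$ is implemented as a concatenation, and gate-state products are elementwise.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step following the gate equations above. [h, x] is the
    concatenation of the previous hidden state and the current input;
    all gate-state products are elementwise."""
    W_f, b_f, W_i, b_i, W_c, b_c, W_o, b_o = params
    hx = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W_f.T @ hx + b_f)       # forget gate
    i_t = sigmoid(W_i.T @ hx + b_i)       # input gate
    c_tilde = np.tanh(W_c.T @ hx + b_c)   # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde    # updated cell state
    o_t = sigmoid(W_o.T @ hx + b_o)       # output gate
    h_t = o_t * np.tanh(c_t)              # updated hidden state
    return h_t, c_t

# Toy dimensions: input size 4, hidden size 3, so [h, x] has size 7.
rng = np.random.default_rng(1)
params = tuple(x for _ in range(4) for x in (rng.normal(size=(7, 3)), np.zeros(3)))
h, c = np.zeros(3), np.zeros(3)
for t in range(6):
    h, c = lstm_step(rng.normal(size=4), h, c, params)
```

Because the cell state is updated additively, gradients can flow through $\textbf{c}_t$ with less attenuation than through the multiplicative recurrence of a vanilla RNN.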
The input to the network is a sequence $S_1, S_2, \ldots, S_{24}$, where $S_t$ is a multi-spectral image of size $300 \times 300 \times b$ at time $t$, with $b$ referring to the number of bands. In the proposed model, we use 12 bands. The entire sequence is used during training and validation. During testing, we vary the sequence length between 1 and 23. The image $S_t$ at every time step $t$ is first passed to the CNN feature extractor to extract the features $f_t^s$ present in the image. The CNN feature extractor consists of 5 convolutional layers, each having 16 filters of size $[3\times 3]$ with a stride of $[2\times 2]$ and Leaky-ReLU as the activation function. The choice of the number of convolutional layers and filters in each layer was constrained by the available computational resources. There is no pooling operation due to the use of strided convolutions. The output of the convolutional feature extractor is flattened into a 1024-dimensional vector. The features extracted for each of the $T$ time steps are stacked and passed on to the LSTM model. The LSTM model is used to encode the temporal properties across the growing season. The model consists of 3 LSTM layers. Each LSTM layer contains 512 nodes that use Leaky-ReLU as the activation function. Dropout with a keep probability of $75\%$ is applied to the output of each LSTM layer. The 512-dimensional feature vector obtained from the last LSTM layer is passed to the yield predictor. The yield predictor consists of 3 fully connected layers, with the first two layers using Leaky-ReLU as the activation function. The yield predictor outputs $\hat{y}_t$, the crop yield in kilograms per hectare, for the input sequence up to time step $t$. An L2-loss is applied to the prediction at each time step against the actual output $y_t$. Note that the actual yield at every time step is the same as the yield at the last time step.
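A quick calculation confirms the flattened feature dimension implied by five $3\times3$, stride-2 convolutional layers with 16 filters applied to a $300\times300$ input. This assumes 'valid' (no) padding, which the text does not state explicitly.

```python
def conv_out(n, k=3, s=2):
    """Spatial size after a k x k 'valid' convolution with stride s."""
    return (n - k) // s + 1

n = 300
for _ in range(5):       # the 5 strided convolutional layers
    n = conv_out(n)      # 300 -> 149 -> 74 -> 36 -> 17 -> 8
flat_dim = n * n * 16    # 16 filters in the final layer
```

This recovers the stated 1024-dimensional flattened vector ($8 \times 8 \times 16 = 1024$).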
The overall loss of the entire CNN-LSTM network is defined as follows \begin{equation} Loss = \sum_{t=1}^{24}(\hat{y}_t - y_t)^2 \end{equation} Applying the L2-loss at each time step increases the flow of gradients to the shared LSTM weights, improving prediction accuracy and speeding up convergence. It also allows us to predict the yield at intermediate stages of the growing season. Given a test sequence of 24 images, the overall yield is obtained by averaging the yield predicted by the CNN-LSTM model at every time step. Figure \ref{fig:trainingerror} presents the decrease in the training and validation loss as a function of epochs. We use the model with the lowest validation error for predicting the yield on the test set. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{training.jpg} \caption{Progression of training and validation error as a function of epochs} \label{fig:trainingerror} \end{figure} \begin{table*}[t] \centering \caption{Comparison of RMSE (in kgs/hectare) for the CNN-LSTM-12 approach against prior and state-of-the-art approaches} \begin{tabular}{ | p{1.4cm} | p{1.3cm} | p{1.5cm} | p{1.3cm} | p{1.3cm} | p{1.5cm} | p{1.4cm} | p{1.5cm} | } \hline \hline State & Decision Forest (NDVI) & Decision Tree (NDVI) & Step Regression (VCI) & Ridge Regression (NDVI) & LSTM + GP (Histogram) & CNN-LSTM-9 & \textbf{CNN-LSTM-12}\\ \hline \hline Gujarat & 219 & 259 & 290 & 233 & 140 & 80 & \textbf{48}\\ Bihar & 835 & 1042 & 775 & 809 & 480 & 460 & \textbf{330}\\ Haryana & 980 & 1205 & 978 & 1026 & 590 & 234 & \textbf{103.7}\\ MP & 491 & 602 & 543 & 470 & 370 & 194 & \textbf{161}\\ UP & 516 & 637 & 509 & 497 & 800 & 138 & \textbf{76}\\ Rajasthan & 207 & 272 & 222 & 210 & 150 & 117 & \textbf{84}\\ Punjab & 1065 & 1061 & 1219 & 1061 & 690 & 184 & \textbf{100}\\ \hline \hline \end{tabular} \label{tab:baselinecomparison} \end{table*} \section{Experiments and Results} \subsection{Comparison Against Baselines} We compare the performance of the
proposed model against approaches in the literature that use handcrafted features derived from the satellite imagery, such as NDVI and VCI. We train Decision Trees \cite{jonson}, Random Forests, and Ridge Regression models \cite{bolton} using a feature vector of NDVI values derived from each of the 24 satellite images spanning the entire growing season. We also perform step-wise regression with VCI \cite{rice}. We further compare our approach against the LSTM+Gaussian Process model \cite{aaai} trained on histograms of crop pixels. The parameters of all these approaches were fine-tuned using cross-validation. We denote our proposed model, which uses raw satellite imagery and contextual information such as water bodies, agricultural area, and urban landscape, as CNN-LSTM-12. We use root mean square error (RMSE) in kgs/hectare to compare the performance of the different models. The training set consists of data from the years 2001-2009, the validation set used for tuning the parameters is from the year 2010, and the test set consists of data from the year 2011. The results for the different states are presented in Table \ref{tab:baselinecomparison}. It can be observed that the proposed approach performs significantly better (by over 70\%) than the methods that use NDVI and VCI features. Further, our approach performs better than the LSTM+GP approach of You et al. by over 54\%. We attribute this improvement in the performance of the CNN-LSTM-12 model to its ability to learn features relevant to the task of crop yield prediction, instead of relying on handcrafted features like histograms. The crop yield error plots at the tehsil level for every state are presented in Figure \ref{fig:tehsil-heatmap}. It can be observed that for a majority of the tehsils across all the states, the CNN-LSTM-12 model marginally under-predicts the yield. This is further verified by the plot on the left-hand side in Figure \ref{fig:tehsil-yield}.
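For completeness, the evaluation metric used throughout can be stated precisely; a minimal helper, in the same kgs/hectare units reported in the tables (an illustrative sketch, not the paper's evaluation code):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between actual and predicted yields,
    reported in the same units as the yield (kgs/hectare)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```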
This plot compares the error against the size of the tehsil. We observe that for large tehsils, the model consistently underestimates the yield. However, the number of such large-area tehsils is small. The relationship between the actual and predicted yield is presented in the right-hand side plot in Figure \ref{fig:tehsil-yield}. This relationship is mostly linear with a slope of $47^\circ$, indicating that the average performance of the CNN-LSTM-12 model is good. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{tehsil-prediction.png} \caption{Tehsil level error heat maps for all the 7 states.} \label{fig:tehsil-heatmap} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{analysis_yield.png} \caption{The left figure plots the difference between predicted and actual yield against the area of the tehsils; the right figure illustrates the relationship between predicted and actual yield for all tehsils.} \label{fig:tehsil-yield} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{early_2.png} \caption{Accuracy of early prediction} \label{fig:early-prediction} \end{figure} \subsection{Early Crop Yield Prediction} Another aim of our project is to achieve real-time predictions throughout the growing season. Early crop yield predictions help government agencies plan for contingencies. The CNN-LSTM-12 model has already been trained to predict the yield at every time step. To perform early prediction, we pass only a sub-sequence of the satellite images $(S_1, S_2, \ldots, S_t)$ with $t<24$ to the CNN-LSTM-12 model. Figure \ref{fig:early-prediction} shows the performance (RMSE in kgs/hectare) when the prediction is made using only a sub-sequence in an online manner. We observe that the model has a higher error in the early months, as there is initially not enough information on the growth of the plants.
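The early-prediction evaluation described above reuses the trained model on prefixes of the image sequence and averages the per-time-step predictions, as is done for full sequences. In the sketch below, `predict_per_step` is a hypothetical stand-in for the trained CNN-LSTM-12 model that returns one yield estimate per time step of its input.

```python
import numpy as np

def early_yield_estimate(predict_per_step, images, t):
    """Estimate the yield from only the first t images (t < 24) by averaging
    the model's per-time-step predictions over the prefix."""
    per_step = predict_per_step(images[:t])  # one prediction per time step
    return float(np.mean(per_step))

# A dummy stand-in model, purely for illustration:
dummy_model = lambda seq: [1000.0 + 10.0 * len(seq)] * len(seq)
estimate = early_yield_estimate(dummy_model, list(range(24)), 8)
```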
However, as more data becomes available, we notice an improvement in the quality of the prediction for all the states. The error reduces significantly and consistently at every step until around the $8^{th}$ time step, beyond which there is only a gradual change. This translates to approximately 2 months from the beginning of the sowing season. We further observe a slight increase in the error towards the last time step. The final few time steps represent the harvesting part of the crop season. Harvesting is performed over many weeks and is not uniform across or within a tehsil. As a result, we expect to see inconsistencies in the images between areas where harvesting has been completed and those where it has not. We suspect this to be the reason for the marginal increase in the error towards the end of the crop season. \subsection{Importance of Contextual Information} One of our hypotheses is that integrating contextual information such as the location of water bodies, farmlands, and urban landscape helps the CNN-LSTM-12 model predict the crop yield more accurately. To test this hypothesis, we train another model without this information. Specifically, we train a model using only the nine image bands, excluding the last three bands that encode the contextual information. This new model is denoted CNN-LSTM-9. We also mask out regions in these nine bands that do not correspond to agricultural land as encoded in the land use data. The column named CNN-LSTM-9 in Table \ref{tab:baselinecomparison} presents the average RMSE for the tehsils of all the states in the study for this model. It is evident that the model that uses information about water bodies, farmlands, and urban landscape performs significantly better (by over 17\%) than the model that does not use this information. This trend is observed across all the states, indicating the importance of the contextual information.
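The masking step used for the CNN-LSTM-9 variant can be sketched as follows. This is a sketch under the assumption that non-agricultural pixels are zero-filled; the function name is ours, not the paper's.

```python
import numpy as np

def mask_non_agriculture(bands9, ag_mask):
    """Zero out pixels outside agricultural land in the nine spectral and
    thermal bands, keeping only pixels flagged as agriculture in the binary
    land-use mask (assumed fill value: zero)."""
    return bands9 * ag_mask[..., None]  # broadcast mask over the band axis

# Toy 2 x 2 example with 9 bands; the mask keeps the diagonal pixels.
bands = np.ones((2, 2, 9))
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
masked = mask_non_agriculture(bands, mask)
```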
\begin{figure*} \centering \includegraphics[width=0.7\textwidth]{importance.png} \caption{Increase in the RMSE when the images of a specific month are replaced with random noise.} \label{fig:month} \end{figure*} \begin{figure} \includegraphics[width=0.4\textwidth]{generalization.png} \centering \caption{Error of models trained and tested on different states} \label{fig:gen} \end{figure} \subsection{Importance of Image Bands and the Months in the Growing Season} We perform experiments to analyze how our model utilizes the input data: the different periods in the growing season and the various bands of the multi-spectral satellite images. The entire growing season spans six months. Every month, we have four satellite images, captured approximately every eight days. To analyze the utility of a given month in the growing season, we replace the four images of that month with random Gaussian noise when passing the images to the yield prediction model. We quantify the increase in the RMSE for yield prediction due to this change to estimate the utility of the month in the growing season. The increase in the RMSE for every state and every month is presented in the right-hand side plot in Figure \ref{fig:month}. We observe that the satellite images belonging to the initial month of October are given the maximum importance. This is consistent with observations in the literature that sowing time is an essential factor in wheat production~\cite{sow-dates}. This further supports our earlier observation on the decrease in the prediction error when the information about the initial two months is made available. As the model sees more satellite images, the increase in the error is only marginal. We also analyze how the bands are utilized month-wise.
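The month-level importance probe described above can be sketched as follows. Here `eval_rmse` is a hypothetical stand-in for evaluating the trained model on a (possibly perturbed) image sequence; the sketch only illustrates the noise-replacement bookkeeping.

```python
import numpy as np

def month_importance(eval_rmse, images, month, rng):
    """Replace the four images of the given month (0-indexed within the
    growing season) with Gaussian noise and report the increase in RMSE."""
    perturbed = images.copy()
    start = 4 * month                  # four satellite images per month
    perturbed[start:start + 4] = rng.normal(size=perturbed[start:start + 4].shape)
    return eval_rmse(perturbed) - eval_rmse(images)

# Dummy evaluator, purely for illustration: error grows with signal energy.
images = np.zeros((24, 4, 4))
eval_rmse = lambda seq: float(np.abs(seq).sum())
delta = month_importance(eval_rmse, images, 0, np.random.default_rng(0))
```

A larger increase in error for a given month indicates that the model relies more heavily on that month's images.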
To assess the overall importance given to the various bands in the crop yield prediction task, we iteratively replace individual bands of a given month with Gaussian noise and observe the increase in the error. During October, the model has only seen the first four satellite images of a test data point, which are insufficient for accurate yield prediction. Therefore, the model gives maximum importance to bands 10, 11, and 12, which mark the pixels belonging to water bodies, agriculture, and urban built-up. As the model sees more satellite images, it has already recognized the type and the context of each pixel in the sequence of satellite images. Hence, it starts giving less importance to the last three bands. The trend visible in all the states is that when a band is given significance in a particular month, it is immediately given less importance in the subsequent month, as the model shifts its attention to other bands. The temperature band is consistently given high importance in the later months. \subsection{Generalizability} We also examine how similar different states are in terms of weather patterns, soil type, farming methods, etc. For this, we train our model on one state and test it on the remaining states. The results are presented in Figure \ref{fig:gen}. We observe that the results are quite poor when the test state differs from the training state, with a significant increase in the error (over 1000 in some cases). This further supports our original idea of modeling each state independently. We performed a couple of experiments before settling on state-wise models. A single model was trained using data from 10 bands for all the states; however, this model had a high loss ($>$520), while the average loss of the state-wise models was $<$120 ($<$100 for some states). The LSTM+GP model on the entire dataset also gave similar losses.
\section{Conclusion} We introduce a reliable and inexpensive method to predict crop yields from publicly available satellite imagery. Specifically, we learn a deep neural network model for predicting the wheat crop yield for tehsils in India. The proposed method works directly on raw satellite imagery without the need to extract any hand-crafted features or perform dimensionality reduction on the images. We have created a new dataset consisting of a sequence of satellite images and the exact crop yield for the years 2001-2011, covering a total of 948 tehsils. We use this dataset to train and evaluate the proposed approach on tehsil-level wheat predictions. Our model outperforms existing methods by over 50\%. We also show that incorporating additional contextual information such as the location of farmlands, water bodies, and urban areas helps in improving the yield estimates. \section{Acknowledgement} We are grateful to Dr. Reet Kamal Tiwari and Akshar Tripathi for their inputs and assistance in understanding and collecting the satellite data. We are also grateful to NVIDIA Corporation for supporting this research through an academic hardware grant. \section{Introduction} India is predominantly an agrarian economy, with nearly 70\% of its rural households depending primarily on agriculture for their livelihood. Approximately 82\% of farmers are small and marginal landowners (FAO, India at a glance~\cite{FAO}), a share that is estimated to grow to 91\% by 2030. The marginal landholdings, combined with traditional modes of farming and external factors such as irregular rainfall and depletion of groundwater, result in crop yields in India still being below the world average (Economic Survey 2015-16~\cite{unionbudget}). Existing approaches for estimating yield rely on manual surveys during the growing season.
The enormous costs and manual effort required to conduct such surveys make them a cumbersome way to predict crop yields. At the same time, the lack of reliable and up-to-date information on crop yield affects supply-demand stocks and export options. Thus, an in-season crop yield forecast can help farmers improve production and enable government agencies to devise appropriate plans. Remote sensing data is becoming an increasingly popular source of data for developing models for various applications such as {poverty estimation}~\cite{xie}, {income prediction}~\cite{tushar}, {yield estimation}~\cite{aaai}, etc. Easy and inexpensive access to high-resolution imagery, combined with increasingly sophisticated modeling techniques, makes it a viable solution for many of these problems. In particular, multi-spectral satellite imagery contains information across a wide spectrum of wavelengths that abundantly encodes information related to land use such as vegetation, water bodies, urban areas, etc. In this paper, we propose an approach for crop yield estimation from satellite imagery using deep learning techniques that have found success in traditional computer vision tasks. Unlike prior methods that involve extracting hand-crafted features or rudimentary features such as histograms, our approach works directly on the satellite images. This allows the model to learn representations that are useful for the yield prediction task. Yield estimates in a geographical area often depend on other factors such as nearby water bodies, urbanization, etc. While prior approaches to yield prediction do not take these factors into account, we incorporate them by explicitly weaving the land use classification data into our model. We model the temporal features in the data through a deep LSTM model. This allows our model to automatically identify the relevance of the different stages of the growing season and of the satellite image bands towards the yield prediction task.
We evaluate and validate our approach on the task of tehsil (block) level wheat prediction for seven states in India. We use the MODIS surface reflectance multi-spectral satellite images, along with the land use classification maps, to train the proposed deep learning models. The experimental results show that our model outperforms traditional remote-sensing based methods by 70\% and recently introduced deep learning models by 54\%. To the best of our knowledge, this is the first method that yields promising results for crop yield prediction in the Indian context. \section{Related Work} In recent years, remote sensing data has been widely used in various sustainability applications such as land use classification~\cite{albert}, infrastructure quality prediction~\cite{kdd}, poverty estimation~\cite{xie}, population estimation~\cite{population}, and income level prediction~\cite{tushar}. Crop yield estimation using remote sensing data has also been explored over the past few years. Prasad et al.~\cite{anoop} employ a piece-wise linear regression and a non-linear Quasi-Newton multi-variate regression model to predict soybean yield in the state of Iowa using normalized difference vegetation index (NDVI), soil moisture, surface temperature, and rainfall data. Kuwata and Shibasaki~\cite{kuwata} estimate county-level crop yield for the state of Illinois using MOD09A1-derived EVI (Enhanced Vegetation Index), climate, and other environmental data by employing deep neural networks and SVMs. Johnson et al.~\cite{jonson} learn models to predict corn and soybean yields from NDVI and daytime land surface temperature data (derived from the Aqua MODIS sensor product MYD11A2) using a regression tree. Mallick et al.~\cite{rice} use the Vegetation Condition Index (VCI), derived from NDVI, and the Normalized Difference Wetness Index (NDWI) for rice yield prediction in India, while Dubey et al.
~\cite{sugarcane-fasal} use VCI to model sugarcane yield variability in 52 Indian districts. The Indian national-level program FASAL (Forecasting Agriculture using Space, Agro-meteorology, and Land-based observations) has been operational since 2006. FASAL aims at providing pre-harvest crop production forecasts at the National/State/District level~\cite{ray14a}. However, information about these forecasts is scarcely available in the public domain. All these prior approaches learn to model yield using some form of vegetation index derived from multi-spectral satellite imagery rather than directly employing the satellite imagery. These approaches mostly utilize 2 or 3 bands that are traditionally used in the generation of these indices. In contrast, we let our model automatically learn the utility of the different bands during the crop growing season. The model also learns to implicitly estimate the importance of the multi-spectral satellite images belonging to different phases of the crop growing season. Our study is inspired by the work of You et al.~\cite{aaai} on corn yield prediction using remote sensing data and deep learning models. You et al. extract histograms of crop pixel intensities estimated using crop masks on multi-spectral satellite images. The series of histograms obtained during the growing season of the corn crop is modeled using an LSTM and Gaussian processes to predict the crop yield. In contrast, our approach works directly on the raw multi-spectral satellite images to learn the representations that are crucial for crop yield prediction. We also incorporate additional information on nearby water bodies and urban built-up to train deep neural network models that yield better results.
\begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{s_area1.png} \caption{Study Area with all the 948 tehsils belonging to 7 states of India} \label{fig:studyarea} \end{figure} \section{Problem Setting and Data Description} The primary objective of this work is to build a crop yield prediction model for the wheat crop using multi-spectral satellite imagery. A series of satellite images captured during the growing season of wheat, before the harvest, is given as input to the model. Wheat is typically grown and harvested in the Rabi season (October-April) in India. Hence, we focus on this growing period for learning the model. We have collected the statistics of crop yield from the open government data platform~\cite{OGD}. In India, the lowest administrative unit for which the statistics of crop yield are available is the district level. However, the average size of a satellite image required to cover a district would be too large for training the model ($>1024\times1024$). We, therefore, split the yield of a district across smaller administrative units called tehsils, taking into account the agricultural area in each tehsil. A tehsil (also known as a mandal or taluk) is an administrative division in India that comprises multiple villages in rural areas and various blocks in urban areas. The maximum size of a satellite image required to cover a tehsil is 300$\times$300. Predicting the crop yield at the tehsil level will also help the agencies devise customized plans for improved utilization of resources. In this study, we focus on the seven major wheat-growing states that together account for more than 90\% of the total wheat production in India. The crop yield data for these states at the district level is only available from 2001-2011. There are a total of 948 tehsils in our study, with a tehsil having an average geographical spread of over 35,000 hectares.
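One plausible reading of the disaggregation step above is that district-level production is allocated to tehsils in proportion to their agricultural area; the following sketch illustrates that reading (the function name, argument names, and units are our illustrative assumptions, not the paper's code):

```python
def split_district_production(district_production, tehsil_ag_area):
    """Allocate a district's production across its tehsils in proportion
    to each tehsil's agricultural area (one plausible reading of the
    disaggregation described above)."""
    total_area = sum(tehsil_ag_area.values())
    return {name: district_production * area / total_area
            for name, area in tehsil_ag_area.items()}
```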
The state-wise distribution of these tehsils and the average wheat crop yield for the year 2011 are provided in Table \ref{tab:statistics}. The geographical spread of the study area is illustrated in Figure \ref{fig:studyarea}. \begin{table} \centering \small \caption{Statistics of tehsils and wheat crop yield for the year 2011 in the dataset.} \begin{tabular} {|l|c|p{1.5cm}|p{1.5cm}|} \hline \hline State & No. of tehsils & Average area (in hectares) & Average yield (kgs/hectare)\\ \hline \hline Gujarat & 215 & 7040.0 & 505.3 \\ \hline Bihar & 53 & 39123.9 & 2001.8\\ \hline Haryana & 46 & 51392.9 & 2354.5\\ \hline Madhya Pradesh & 167 & 28283.4 & 698.5\\ \hline Uttar Pradesh & 209 & 44361.6 & 1133.2\\ \hline Rajasthan & 211 & 14588.6 & 5768.6\\ \hline Punjab & 47 & 67125.0 & 2404.7\\ \hline \hline \end{tabular} \label{tab:statistics} \end{table} The proposed work uses publicly available satellite data from the following MODIS sensors onboard NASA's Terra and Aqua satellites~\cite{lpdaac}: \begin{itemize} \item MOD09A1: Also referred to as the MODIS Surface Reflectance 8-Day L3 Global product, it provides an estimate of the surface spectral reflectance as it would be measured at ground level in the absence of atmospheric scattering or absorption, with a spatial resolution of 500m. \item MYD11A2: An eight-day composite thermal product from the Aqua MODIS sensor. \item MODIS Land Cover: The primary land cover scheme incorporated by the MODIS Terra+Aqua Combined Land Cover product identifies 17 classes defined by the IGBP (International Geosphere-Biosphere Programme), including 11 natural vegetation classes, three human-altered classes, and three non-vegetated classes, with a spatial resolution of 500m. A pixel is assigned to a class if 60\% or more of the area covered by the pixel belongs to that class. In our study, we only consider pixels that have been classified as agriculture, water bodies, and urban built-up.
\end{itemize} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{bihar-tehsil.png} \caption{[Best Viewed in Color] Visualization of the different satellite image bands for a tehsil.} \label{fig:bands} \end{figure} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{arr_yield.png} \caption{[Best Viewed in Color] Visual images for different yield levels for the seven states} \label{fig:variationinstates} \end{figure*} Each multi-spectral satellite image, $S_t$, consists of 7 bands of the MODIS land surface reflectance image MOD09A1, two bands of MODIS land surface temperature, and three binary bands derived from the MODIS land cover image corresponding to water bodies, agricultural land, and urban built-up. These bands are illustrated in Figure \ref{fig:bands} for a tehsil from the state of Bihar. Prior approaches use vegetation indices derived mostly from bands 1 and 2. Figure \ref{fig:variationinstates} illustrates representative visual images for low-, medium-, and high-yielding tehsils for each of the seven states. A significant variation in these images is observed across all the states. The variation is so large that a couple of states do not have a single high-yielding tehsil. Further, we observe a lot of variation in the vegetation landscape across tehsils that are supposed to have similar yields. For example, tehsils with medium yield in Punjab and Haryana appear a lot greener than those in states such as Rajasthan and Gujarat. This level of heterogeneity in the data led us to model the yield in each state independently. We also show through our experiments the difficulty of predicting the yield of a state using a model that has been trained on data from a different state. \section{Methodology} \subsection{Preliminaries} We first give a brief overview of the deep neural network models that are the building blocks of our crop yield estimator before describing the final model architecture.
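The 12-band input image $S_t$ described above can be assembled by stacking the three sources along the channel axis; a minimal sketch (the array names are our illustrative assumptions):

```python
import numpy as np

def build_input_image(reflectance, temperature, land_cover):
    """Stack 7 surface-reflectance bands, 2 land-surface-temperature bands,
    and 3 binary land-cover bands (water, agriculture, urban built-up)
    into one 12-band image on a common 300 x 300 grid."""
    S_t = np.dstack([reflectance, temperature, land_cover])
    assert S_t.shape[-1] == 12  # 7 + 2 + 3 bands
    return S_t

# Zero-filled placeholders standing in for resampled MODIS rasters.
S_t = build_input_image(np.zeros((300, 300, 7)),
                        np.zeros((300, 300, 2)),
                        np.zeros((300, 300, 3)))
```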
\subsubsection{Deep Convolutional Neural Networks} Deep Convolutional Neural Networks (CNN)~\cite{cnn} can be viewed as a large composition of complex nonlinear functions that learn hierarchical representations of the data. A CNN typically consists of two types of layers: fully connected and convolutional layers. A fully connected layer consists of multiple nodes. Each node takes a vector, $\textbf{x}\in \mathcal{R}^D$, as input and outputs a scalar that is a nonlinear transformation of the weighted sum of the inputs in the following manner. \begin{equation} z = f\left(b + \textbf{w}^T \textbf{x}\right) \end{equation} where $\textbf{w}$ are the weights, $b$ is the scalar bias term, and $f(.)$ is the non-linear transformation (usually a rectified linear unit (ReLU) or tanh). A convolutional layer typically consists of three main operations: convolution, nonlinear activation, and pooling. The convolution operation is performed using a filter with shared parameters that results in significant reduction in the number of parameters.
Deep CNN has a large number of stacked up convolutional and fully connected layers with the output of one layer acting as the input to the next layer. A large number of layers help CNN learn global patterns present in the input. The weights at each layer are learned using the backpropagation algorithm that follows a standard gradient descent approach to minimizing the overall loss. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{LSTM-early.PNG} \caption{The proposed CNN-LSTM architecture for predicting crop yield from a sequence of multi-spectral satellite imagery} \label{fig:architecture} \end{figure*} \subsubsection{Recurrent Neural Networks} Recurrent neural networks (RNN)\cite{mikolov2010recurrent} are a special type of neural networks for learning sequential data. RNN can remember an encoded representation of its past, thus making it suitable for modeling sequential data. Given a sequential data $\textbf{x}_1, \textbf{x}_2,\ldots, \textbf{x}_T$ for $T$ time steps, the output $\textbf{y}_t$ at time step $t$, is a function of the input at time step $\textbf{x}_t$ and the hidden state $\textbf{z}_{t-1}$ at time step $t-1$, can be defined as follows \begin{equation} \textbf{z}_t = f(\textbf{w}^T_t \textbf{x}_t+ \textbf{u}^T\textbf{z}_{t-1}) \end{equation} \begin{equation} \textbf{y}_t = g(\textbf{v}^T\textbf{z}_t) \end{equation} where, $\textbf{w}$, $\textbf{u}$ and $\textbf{v}$ are the weights applied on $\textbf{x}_t$, $\textbf{z}_{t-1}$ and $\textbf{z}_t$ respectively and $f$ and $g$ are the non-linear activation functions. As, output is dependent on the hidden states of the previous time steps, the back propagation through time algorithm for updating the weights can result in the problem of vanishing or exploding gradients \cite{bengio1994learning}. 
\textbf{LSTM}~\cite{lstm}, a special kind of RNN, was introduced to overcome this issue by integrating a gradient superhighway in the form of a cell state $\textbf{c}$, in addition to the hidden state $\textbf{h}$. The LSTM model has gates that provide the ability to add and remove information from the cell state. The forget gate decides the information to be deleted from the cell state and is defined as follows \begin{equation} \textbf{f}_t = \sigma (\textbf{w}_f^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_f) \end{equation} The input gate, which determines the information that should be added to the cell state, is defined as \begin{equation} \textbf{i}_t = \sigma (\textbf{w}_i^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_i) \end{equation} The cell state $\textbf{c}_t$ is obtained by using both $\textbf{f}_t$ and $\textbf{i}_t$ in the following manner \begin{eqnarray} \tilde{\textbf{c}}_t & = & \tanh(\textbf{w}_c^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_c)\\ \textbf{c}_t & = & \textbf{f}_t \odot \textbf{c}_{t-1} + \textbf{i}_t \odot \tilde{\textbf{c}}_t \end{eqnarray} where $\odot$ denotes the elementwise product. Similarly, the output gate $\textbf{o}_t$ and the hidden state $\textbf{h}_t$ of the LSTM are defined as \begin{eqnarray} \textbf{o}_t & = & \sigma (\textbf{w}_o^T [\textbf{h}_{t-1}, \textbf{x}_t] + b_o)\\ \textbf{h}_t & = & \textbf{o}_t \odot \tanh(\textbf{c}_t) \end{eqnarray} An LSTM is more effective than a simple RNN at modeling longer sequences due to a more effective gradient flow during backpropagation. \subsection{Crop Yield Prediction Model Architecture} We directly input the multi-spectral satellite imagery to our deep neural network model. The motivation for using the raw imagery is to extract features relating to the spatial location of crop pixels and the properties of neighboring regions, such as water bodies and urban landscapes. We hypothesize that these factors influence the crop yield. The proposed deep network has three modules. The first module is a CNN that learns to extract relevant features from the images.
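The gate equations above can be sketched as a single scalar LSTM step in plain Python (a toy version with scalar weights on the concatenated $[\textbf{h}_{t-1}, \textbf{x}_t]$ input; all parameter values are made up):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_step(x_t, h_prev, c_prev, p):
    """One scalar LSTM step following the gate equations in the text.
    p maps each gate name to (weight on h, weight on x, bias)."""
    f = sigmoid(p['f'][0] * h_prev + p['f'][1] * x_t + p['f'][2])  # forget gate
    i = sigmoid(p['i'][0] * h_prev + p['i'][1] * x_t + p['i'][2])  # input gate
    c_tilde = math.tanh(p['c'][0] * h_prev + p['c'][1] * x_t + p['c'][2])
    c = f * c_prev + i * c_tilde                                   # new cell state
    o = sigmoid(p['o'][0] * h_prev + p['o'][1] * x_t + p['o'][2])  # output gate
    h = o * math.tanh(c)                                           # new hidden state
    return h, c

params = {g: (0.5, 0.5, 0.0) for g in 'fico'}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, params)
```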
The second module is an LSTM that captures the temporal relationships during the crop growing season. The third module is a fully connected network that finally predicts the crop yield. The proposed CNN-LSTM architecture is illustrated in Figure \ref{fig:architecture}. The input to the network is a sequence $S_1, S_2, \ldots, S_{24}$, where $S_t$ is a multi-spectral image of size $300 \times 300 \times b$ at time $t$ and $b$ refers to the number of bands. In the proposed model, we use 12 bands. The entire sequence is used during training and validation; during testing, we vary the sequence length between 1 and 23. The image $S_t$ at every time step $t$ is first passed to the CNN feature extractor to extract the features $f_t^s$ present in the image. The CNN feature extractor consists of 5 convolutional layers, each having 16 filters of size $[3\times 3]$ with a stride of $[2\times 2]$ and Leaky-ReLU as the activation function. The choice of the number of convolutional layers and filters per layer was constrained by the available computational resources. There is no pooling operation due to the use of strided convolutions. The output of the convolutional feature extractor is flattened into a 1024-dimensional vector. The features extracted for each of the $T$ time steps are stacked and passed on to the LSTM model, which encodes the temporal properties across the growing season. The LSTM model consists of 3 layers, each containing 512 nodes that use Leaky-ReLU as the activation function. Dropout with a keep probability of $75\%$ is applied to the output of each LSTM layer. The 512-dimensional feature vector obtained from the last LSTM layer is passed to the yield predictor, which consists of 3 fully connected layers, the first two using Leaky-ReLU as the activation function. The yield predictor outputs $\hat{y}_t$, the predicted crop yield in kilograms per hectare for the input sequence up to time step $t$.
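The 1024-dimensional flattened vector quoted above can be checked with a quick shape calculation: five $3\times 3$ convolutions with stride 2 and no padding reduce a $300\times 300$ input to $8\times 8$, which with 16 filters flattens to 1024 values (a sketch assuming "valid", i.e. unpadded, convolutions, which is consistent with the stated output size):

```python
def conv_out(size, kernel=3, stride=2):
    """Spatial output size of a 'valid' (unpadded) strided convolution."""
    return (size - kernel) // stride + 1

size = 300
for _ in range(5):            # 300 -> 149 -> 74 -> 36 -> 17 -> 8
    size = conv_out(size)
flat = size * size * 16       # 8 * 8 * 16 = 1024
```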
An L2-loss is applied to the prediction at each time step against the actual yield $y_t$. Note that the actual yield at every time step is the same as the yield at the last time step. The overall loss of the entire CNN-LSTM network is defined as follows \begin{equation} Loss = \sum_{t=1}^{24}(\hat{y}_t - y_t)^2 \end{equation} Applying the L2-loss at each time step increases the flow of gradients to the shared LSTM weights, leading to higher prediction accuracy and faster convergence. It also allows us to predict the yield at intermediate stages of the growing season. Given a test sequence of 24 images, the overall yield is obtained by averaging the yields predicted by the CNN-LSTM model at every time step. Figure \ref{fig:trainingerror} presents the decrease in the training and validation loss as a function of epochs. We use the model with the lowest validation error for predicting the yield on the test set. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{training.jpg} \caption{Progression of training and validation error as a function of epochs} \label{fig:trainingerror} \end{figure} \begin{table*}[t] \centering \caption{Comparison of RMSE (in kg/hectare) for the CNN-LSTM-12 approach against prior and state-of-the-art approaches} \begin{tabular}{ | p{1.4cm} | p{1.3cm} | p{1.5cm} | p{1.3cm} | p{1.3cm} | p{1.5cm} | p{1.4cm} | p{1.5cm} | } \hline \hline State & Decision Forest (NDVI) & Decision Tree (NDVI) & Step Regression (VCI) & Ridge Regression (NDVI) & LSTM + GP (Histogram) & CNN-LSTM-9 & \textbf{CNN-LSTM-12}\\ \hline \hline Gujarat & 219 & 259 & 290 & 233 & 140 & 80 & \textbf{48}\\ Bihar & 835 & 1042 & 775 & 809 & 480 & 460 & \textbf{330}\\ Haryana & 980 & 1205 & 978 & 1026 & 590 & 234 & \textbf{103.7}\\ MP & 491 & 602 & 543 & 470 & 370 & 194 & \textbf{161}\\ UP & 516 & 637 & 509 & 497 & 800 & 138 & \textbf{76}\\ Rajasthan & 207 & 272 & 222 & 210 & 150 & 117 & \textbf{84}\\ Punjab & 1065 & 1061 & 1219 & 1061 & 690 & 184 &
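The per-time-step loss and the test-time averaging can be sketched as follows (the yield numbers are hypothetical; as described above, the target $y$ is the season-end yield repeated at every step):

```python
def sequence_l2_loss(preds, y):
    """Sum of squared errors of per-step predictions against the final yield."""
    return sum((p - y) ** 2 for p in preds)

def season_yield(preds):
    """Test-time estimate: average of the per-step predictions."""
    return sum(preds) / len(preds)

preds = [2900.0, 3050.0, 3100.0]   # hypothetical per-step predictions (kg/hectare)
y = 3000.0                         # actual season yield
loss = sequence_l2_loss(preds, y)  # 10000 + 2500 + 10000 = 22500
estimate = season_yield(preds)
```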
\textbf{100}\\ \hline \hline \end{tabular} \label{tab:baselinecomparison} \end{table*} \section{Experiments and Results} \subsection{Comparison Against Baselines} We compare the performance of the proposed model against approaches in the literature that use handcrafted features derived from satellite imagery, such as NDVI and VCI. We train Decision Trees \cite{jonson}, Random Forests, and Ridge Regression models \cite{bolton} using a feature vector of NDVI values derived from each of the 24 satellite images spanning the entire growing season. We also perform step-wise regression with VCI \cite{rice}. In addition, we compare our approach against the LSTM+Gaussian Process model \cite{aaai} trained on histograms of crop pixels. The parameters of all these approaches were fine-tuned using cross-validation. We denote our proposed model, which uses raw satellite imagery along with contextual information such as water bodies, agricultural areas, and urban landscape, as CNN-LSTM-12. We use the root mean square error (RMSE) in kg/hectare to compare the performance of the different models. The training set consists of data from the years 2001-2009, the validation set used for tuning the parameters is from the year 2010, and the test set consists of data from the year 2011. The results for the different states are presented in Table \ref{tab:baselinecomparison}. It can be observed that the proposed approach performs better than the methods that use NDVI and VCI features by over 70\%. Further, our approach performs better than the LSTM+GP approach of You et al. by over 54\%. We attribute this improvement to the CNN-LSTM-12 model's ability to learn features relevant to the task of crop yield prediction, instead of relying on handcrafted features such as histograms. The crop yield error plots at the tehsil level for every state are presented in Figure \ref{fig:tehsil-heatmap}.
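The RMSE metric used throughout the comparison can be computed as follows (a standard definition, shown with hypothetical tehsil-level yields, not data from the paper):

```python
import math

def rmse(actual, predicted):
    """Root mean square error, in the same units as the yield (kg/hectare)."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Hypothetical tehsil-level yields (kg/hectare).
err = rmse([3000.0, 2500.0, 2800.0], [3100.0, 2450.0, 2850.0])
```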
It can be observed that for a majority of the tehsils across all the states, the CNN-LSTM-12 model marginally under-predicts the yield. This is further verified by the left-hand plot in Figure \ref{fig:tehsil-yield}, which compares the error against the size of the tehsil. We observe that for large tehsils the model consistently underestimated the yield; however, the number of such large-area tehsils is small. The relationship between the actual and predicted yield is presented in the right-hand plot in Figure \ref{fig:tehsil-yield}. This relationship is mostly linear with a slope of $47^\circ$, close to the ideal $45^\circ$, indicating that the average performance of the CNN-LSTM-12 model is good. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{tehsil-prediction.png} \caption{Tehsil level error heat maps for all the 7 states.} \label{fig:tehsil-heatmap} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{analysis_yield.png} \caption{Left: difference between predicted and actual yield against the area of the tehsils. Right: relationship between the predicted and actual yield of all tehsils.} \label{fig:tehsil-yield} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{early_2.png} \caption{Accuracy of early prediction} \label{fig:early-prediction} \end{figure} \subsection{Early Crop Yield Prediction} Another aim of our project is to achieve real-time predictions throughout the growing season. Early crop yield predictions help government agencies plan for contingencies. The CNN-LSTM-12 model has already been trained to predict the yield at every time step. To perform early prediction, we pass only a sub-sequence of the satellite images $(S_1, S_2, \ldots, S_t)$ with $t<24$ to the CNN-LSTM-12 model.
Figure \ref{fig:early-prediction} shows the performance (RMSE in kg/hectare) when the prediction is made using only a sub-sequence in an online manner. We observe that the model has a higher error in the early months, as there is initially little information on the growth of the plants. However, as more data becomes available, the quality of the prediction improves for all the states. The error reduces significantly and consistently at every step until around the $8^{th}$ time step, beyond which there is only a gradual change. This translates to approximately 2 months from the beginning of the sowing season. We further observe a slight increase in the error towards the last few time steps, which represent the harvesting part of the crop season. Harvesting is spread over many weeks and is not uniform across or within a tehsil. As a result, we expect inconsistencies in the images between areas where harvesting has been completed and those where it has not. We suspect this to be the reason for the marginal increase in the error towards the end of the crop season. \subsection{Importance of Contextual Information} One of our hypotheses is that integrating contextual information, such as the location of water bodies, farmlands, and urban landscape, helps the CNN-LSTM-12 model predict the crop yield more accurately. To test this hypothesis, we train another model without this information. Specifically, we train a model using only nine image bands, excluding the last three bands that encode the contextual information. This model is denoted CNN-LSTM-9. We also mask out regions in these nine bands that do not correspond to agricultural land as encoded in the land use data. The column named CNN-LSTM-9 in Table \ref{tab:baselinecomparison} presents the average RMSE for the tehsils of all the states in the study for this model.
It is evident that the model that uses information about water bodies, farmlands, and urban landscape performs significantly better (by over 17\%) than the model that does not. This trend is observed across all the states, indicating the importance of the contextual information. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{importance.png} \caption{Increase in the RMSE when the images of a specific month are replaced with random noise.} \label{fig:month} \end{figure*} \begin{figure} \includegraphics[width=0.4\textwidth]{generalization.png} \centering \caption{Error of models trained and tested on different states} \label{fig:gen} \end{figure} \subsection{Importance of Image Bands and the Months in the Growing Season} We perform experiments to analyze how our model utilizes the input data: the different periods in the growing season and the various bands of the multi-spectral satellite images. The entire growing season spans six months, and for every month we have four satellite images, captured approximately every eight days. To analyze the utility of a given month in the growing season, we replace the four images of that month with random Gaussian noise when passing the images to the yield prediction model. We quantify the resulting increase in the RMSE of the yield prediction to estimate the utility of that month. The increase in the RMSE for every state and every month is presented in the right-hand plot in Figure \ref{fig:month}. We observe that the satellite images belonging to the initial month of October are given the maximum importance. This is consistent with observations in the literature that sowing time is an essential factor in wheat production~\cite{sow-dates}. This further supports our earlier observation of the decrease in the prediction error when information about the initial two months is made available.
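The month-importance probe described above can be sketched as follows: replace one month's four images with Gaussian noise, re-run the model, and record the error increase. The model and data here are toy stand-ins (a mean-pixel "predictor" over tiny 3-pixel "images"); only the probing procedure mirrors the text:

```python
import random

random.seed(0)

def perturb_month(seq, month, mu=0.0, sigma=1.0):
    """Return a copy of the image sequence with the 4 images of one month
    (time steps 4*month .. 4*month+3) replaced by Gaussian noise."""
    out = list(seq)
    for t in range(4 * month, 4 * month + 4):
        out[t] = [random.gauss(mu, sigma) for _ in seq[t]]
    return out

def importance(model, seq, actual):
    """Error increase per month when that month's images are noised out."""
    base = abs(model(seq) - actual)
    return [abs(model(perturb_month(seq, m)) - actual) - base for m in range(6)]

# Toy stand-in model: predicts yield from the mean pixel of the sequence.
toy_model = lambda seq: 100.0 * sum(sum(img) / len(img) for img in seq) / len(seq)
season = [[1.0, 2.0, 3.0] for _ in range(24)]  # 24 "images" of 3 pixels each
scores = importance(toy_model, season, actual=200.0)
```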
As the model sees more satellite images, the increase in the error is only marginal. We also analyze how the bands are utilized month-wise. To see the overall importance given to the various bands in the crop yield prediction task, we iteratively send Gaussian noise in place of individual bands for a given month and observe the increase in the error. During October, the model has only seen the first four satellite images of a test data point, which are insufficient for accurate yield prediction. Therefore, the model gives maximum importance to bands 10, 11, and 12, which signify the pixels belonging to water bodies, agriculture, and urban built-up areas. As the model sees more satellite images, it has already recognized the type and context of each pixel in the sequence, and hence it starts giving less importance to the last three bands. A trend visible in all the states is that when a band is given significance in a particular month, it is given less importance in the subsequent month, as the model shifts its attention to other bands. The temperature band is consistently given high importance in the later months. \subsection{Generalizability} We also examine how similar the different states are in terms of the heterogeneity of weather patterns, soil type, farming methods, etc. For this, we train our model on one state and test it on the remaining states. The results are presented in Figure \ref{fig:gen}. We observe that the results when the test state differs from the training state are quite poor, with a significant increase in the error (over 1000 kg/hectare in some cases). This further supports our original idea of modeling each state independently. Before settling on state-wise models, we had performed a couple of experiments. A single model was trained using data from 10 bands for all the states; however, this model had a high loss of $>$520, while the average loss of the state-wise models was $<$120 ($<$100 for some states).
The LSTM+GP model trained on the entire dataset also gave similar losses. \section{Conclusion} We introduce a reliable and inexpensive method to predict crop yields from publicly available satellite imagery. Specifically, we learn a deep neural network model for predicting the wheat crop yield for tehsils in India. The proposed method works directly on raw satellite imagery without the need to extract hand-crafted features or perform dimensionality reduction on the images. We have created a new dataset consisting of sequences of satellite images and the corresponding crop yields for the years 2001-2011, covering a total of 948 tehsils. We use this dataset to train and evaluate the proposed approach on tehsil-level wheat yield prediction. Our model outperforms existing methods by over 50\%. We also show that incorporating additional contextual information, such as the location of farmlands, water bodies, and urban areas, helps improve the yield estimates. \section{Acknowledgement} We are grateful to Dr. Reet Kamal Tiwari and Akshar Tripathi for their inputs and assistance in understanding and collecting the satellite data. We are also grateful to NVIDIA Corporation for supporting this research through an academic hardware grant. {\small \bibliographystyle{ieee}
% Source: arXiv:2011.01498 (2020-11-04), https://arxiv.org/abs/2011.01498