\section{Introduction} Emotion recognition is an important ability for good interpersonal relations and plays an important role in effective interpersonal communication. Recognizing emotions, however, can be hard; even among human beings, the ability to recognize emotions varies from person to person. The aim of this work is to recognize emotions in audio and in audio+video using deep neural networks. We attempt to understand bottlenecks in existing architectures and input data, and explore novel additions on top of existing architectures to increase emotion recognition accuracy. The dataset we use is IEMOCAP\cite{IEMOCAP}, which contains 12 hours of audiovisual data of 10 actors (5 female, 5 male) speaking in anger, happiness, excitement, sadness, frustration, fear, surprise, other, and the neutral state. Our work consists of two stages. First, we build neural networks to recognize emotions in audio by replicating and expanding upon the work of \cite{inproceedings}. The input to these models is the audio spectrogram converted from the recording of an actor speaking a sentence, and the single output is the emotion the actor expresses in that sentence. The models predict one of four emotions--happiness, anger, sadness, or the neutral state--which were chosen for comparison with \cite{inproceedings}. The deep learning architectures we explore are CNN, CNN+RNN, and CNN+LSTM. After achieving accuracy on audio comparable to \cite{inproceedings}, we build models that predict emotions from the audio spectrogram and the video frames of a video, since we believe video frames contain additional emotion-related information that can help us achieve better prediction performance. The inputs to these models are the audio spectrogram and video frames, converted and extracted from the sound and images of a video recording of an actor speaking one sentence. The output is still one of the four emotions mentioned above. Inspired by the work of \cite{DBLP:journals/corr/TorfiIND17}, we explore a model made of two sub-networks: the first is a 3D CNN which takes in the video frames, and the second is a CNN+RNN which takes in the audio spectrogram; the last layers of the two sub-networks are concatenated and followed by a fully connected layer that outputs the prediction. The metric we use for evaluation is the overall accuracy, for both the audio and audio+video models.

\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=6cm]{Ang_3_Sec_With_Noise.PNG} \centering } \end{center} \caption{Example of audio spectrogram of anger emotion. Original time scale without noise cleanup.} \label{fig:long} \label{fig:onecol1} \end{figure}

\section{Related Work} Emotion recognition is an important research area that many researchers have worked on in recent years using various methods. Speech signals\cite{kwon2003emotion}, facial expressions\cite{gouta2000emotion}, and physiological changes\cite{kim2008emotion} are among the common modalities used to approach the emotion recognition problem. In this work, we use audio spectrograms and video frames for emotion recognition. It has been shown that emotion recognition accuracy can be improved through statistical learning of low-level features (frequency \& signal power intensity) by the different layers of a deep network. Mel-scale spectrograms were demonstrated to be useful for speech recognition in \cite{deng_2014}.
State-of-the-art speech recognition methods use linearly spaced audio spectrograms, as described in \cite{AmodeiABCCCCCCD15} \cite{HannunCCCDEPSSCN14}. Our work on emotion recognition using audio spectrograms follows the approach described in \cite{inproceedings}. An audio spectrogram is an image of an audio signal with three main components: time on the x-axis, frequency on the y-axis, and power intensity on a color scale, typically in decibels (dB), as shown in Fig. 1. \cite{sahu} covers machine learning methods to extract temporal features from audio signals. The strength of classical machine learning models is their low training \& prediction latency, but their prediction accuracy is low; a CNN model that uses audio spectrograms to detect emotion achieves better prediction accuracy. Comparing the CNN networks used in \cite{inproceedings} \& \cite{DBLP:journals/corr/TorfiIND17} for training on audio spectrograms, \cite{inproceedings} uses a wider kernel window with zero padding, while \cite{DBLP:journals/corr/TorfiIND17} uses a smaller window and no zero padding. A wider kernel window sees a larger portion of the input, which allows for more expressive power; to avoid losing features at the borders, zero padding then becomes important. In the architecture used in \cite{inproceedings}, the zero padding decreases as the number of CNN layers increases. \cite{DBLP:journals/corr/TorfiIND17} avoids zero padding so as not to consume extra virtual zero-energy coefficients, which are not useful for extracting local features. One drawback we see in \cite{DBLP:journals/corr/TorfiIND17} is that it does not compare the performance of the audio-only model against the audio+video model. One strength of \cite{DBLP:journals/corr/TorfiIND17} is that it does not perform noise removal on the audio input data, while \cite{inproceedings} applies noise removal to the audio spectrograms before training. To achieve better prediction accuracy, a natural progression of emotion recognition using audio spectrograms is to include facial features extracted from video frames. \cite{DBLP:journals/corr/abs-1902-01019} \& \cite{article_facial_video} implement facial emotion recognition using images and video frames respectively, but without audio. \cite{DBLP:journals/corr/TorfiIND17} \& \cite{DBLP:journals/corr/abs-1807-00230} implement neural network architectures that process audio spectrograms \& video frames to recognize emotion. Both \cite{DBLP:journals/corr/TorfiIND17} and \cite{DBLP:journals/corr/abs-1807-00230} implement a self-supervised model for cooperative learning of the audio \& video models on different datasets; \cite{DBLP:journals/corr/abs-1807-00230} additionally performs supervised learning on the pre-trained model to do classification. The models proposed by \cite{DBLP:journals/corr/TorfiIND17} and \cite{DBLP:journals/corr/abs-1807-00230} are very similar; both are two-stream models that contain one stream for audio data and one for video data. The only differences are in how the kernel sizes, the number of layers, and the input data dimensions are set. These hyperparameters are set differently because the input data is different: \cite{DBLP:journals/corr/TorfiIND17} tends to use smaller input and kernel sizes because its input images capture only the mouth, which does not contain as much information as the images capturing the whole person's movement used in \cite{DBLP:journals/corr/abs-1807-00230}.
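To make this kernel-size and padding trade-off concrete, the following is a minimal PyTorch sketch; the channel counts and kernel sizes are illustrative assumptions, not the settings used in the cited works.
\begin{verbatim}
# Illustrative only: channel counts and kernel sizes are hypothetical.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 200, 300)  # one 200x300 RGB spectrogram

# Wide kernel with zero padding: border features are preserved and the
# spatial size is unchanged.
wide = nn.Conv2d(3, 8, kernel_size=11, padding=5)
print(wide(x).shape)   # torch.Size([1, 8, 200, 300])

# Small kernel without padding: no virtual zero-energy coefficients, but
# the feature map shrinks at the borders.
small = nn.Conv2d(3, 8, kernel_size=3, padding=0)
print(small(x).shape)  # torch.Size([1, 8, 198, 298])
\end{verbatim}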
\section{Dataset \& Features} \subsection{Dataset} The dataset we use is the IEMOCAP \cite{IEMOCAP} corpus, as it is the best-known comprehensively labeled public corpus of emotional speech by actors. \cite{lee2015} used IEMOCAP to generate state-of-the-art results at the time. IEMOCAP contains 12 hours of audio and visual data of two-person conversations (1 female and 1 male per conversation, with 5 females and 5 males in total), where each sentence in a conversation is labelled with one emotion--anger, happiness, excitement, sadness, frustration, fear, surprise, other, or neutral state. \subsection{Data pre-processing} \subsubsection{Audio Data Pre-processing} The IEMOCAP corpus contains audio WAV files of various lengths, each marked with the ground-truth emotion label for the corresponding time segment. The audio WAV files in IEMOCAP are recorded at a sample rate of 22 kHz. The audio spectrogram is extracted from each WAV file using the librosa\footnote{https://librosa.github.io/librosa/index.html} Python package with a sample rate of 44 kHz. We used 44 kHz because, per the Nyquist-Shannon sampling theorem\footnote{https://en.wikipedia.org/wiki/Nyquist\%E2\%80\%93Shannon\_sampling\_theorem}, fully recovering a signal requires a sampling frequency at least twice the signal frequency; audible signal frequencies range from 20 Hz to 20 kHz, so 44 kHz is a commonly used sampling rate. The spectrograms were generated in two variants: 1. the original time length of each utterance, and 2. each utterance clipped into 3-second clips. Each variant was further generated with and without noise cleanup. We name these four segmentations DS I, DS II, DS III \& DS IV; they are summarized in Table 1. Model training is done on these data segments separately.

\begin{table} \begin{small} \begin{center} \begin{tabular}{|l|c|c|} \hline Dataset segmentation type & Noise Cleanup & Name\\ \hline\hline Original time length of utterance & No & DS I\\ Clip each utterance into 3 second clips & No & DS II\\ Original time length of utterance & Yes & DS III\\ Clip each utterance into 3 second clips & Yes & DS IV\\ \hline \end{tabular} \end{center} \caption{Segmentation of input data generation.} \end{small} \end{table}

\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=6cm]{images/Ang_3_Sec_Without_Noise.PNG} \centering } \end{center} \caption{Example of audio spectrogram of anger emotion. 3 sec audio clip with noise cleanup. Compare with Fig.1} \label{fig:long1} \label{fig:onecol} \end{figure}

To remove the background noise, we applied a bandpass filter\footnote{\small{https://timsainburg.com/noise-reduction-python.html}} \footnote{\small{https://github.com/julieeF/CS231N-Project/blob/master/load\_single\_wav\_file.py}} between 1 Hz and 30 kHz. Denoising of the input audio signal for data augmentation is also used by \cite{AmodeiABCCCCCCD15}. Sentence utterances shorter than 3 seconds are padded with noise so that the noise frequency and amplitude remain uniform with respect to the noise in the rest of the signal. We initially also experimented with zero padding to reach the 3-second time scale, followed by adding noise at a signal-to-noise ratio (SNR) of 1 throughout the signal, but this distorted the original audio signal. The resulting signal is then denoised.
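As a rough sketch of the spectrogram extraction step described above (the file name, STFT settings, and figure size are illustrative assumptions, not our exact pipeline):
\begin{verbatim}
# Sketch: load a wav at 44 kHz and save an axis-free dB spectrogram image.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("utterance.wav", sr=44100)   # resample to 44 kHz

S = librosa.stft(y)                               # short-time Fourier transform
S_db = librosa.amplitude_to_db(np.abs(S), ref=np.max)

fig, ax = plt.subplots(figsize=(3, 2))
# Fixed intensity scale so all emotions share the same color mapping.
librosa.display.specshow(S_db, sr=sr, vmin=-60, vmax=0, ax=ax)
ax.set_axis_off()                                 # drop axes and colorbar
fig.savefig("utterance_spec.png", dpi=100, bbox_inches="tight", pad_inches=0)
\end{verbatim}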
Denoising helps make the frequency, time-scale, and amplitude features of the input audio signal more visible, in the hope of achieving better prediction accuracy per emotion. All audio spectrograms are generated with the same colorbar intensity scale (+/- 60 dB) to maintain uniformity across the different emotions; this is similar to normalizing the data. As seen in Fig. 2, after denoising only the signal that contains actual information remains at high power intensity; the other regions of the spectrogram remain at low power intensity relative to the signal of interest. Compare this with Fig. 1, where some intensity is observed throughout the time scale, which is actually noise. The generated spectrogram images are 200x300 pixels. The total count of 3-second audio spectrograms across the 4 emotions is summarized in Table 2. As observed, the happy emotion count is significantly low, so we duplicated the happy data to reach a total count of 1600; the anger count was similarly duplicated, and the sad \& neutral counts were reduced to 1600 data points each. A total of 6400 images is used for training the model; data balance is crucial for the model to train well. 400 images from each emotion are held out for validation and are never part of the training set. At first we generated audio spectrograms that contained the xy axes and colorbar scale, but we removed them after learning that including the axes \& scale could contribute negatively to prediction accuracy. To improve class accuracy, the input audio spectrograms were augmented by cropping and rotation; a minimal sketch of these augmentations appears at the end of this subsection. Each image was cropped by 10 pixels from the top and resized back to 200x300 pixels; this cropping simulates a small change in the frequency of the emotion. Similarly, each image was rotated by +/- 10 degrees. Rotation also simulates a frequency change, but it shifts the time scale as well; since augmentation that changes the time scale is not preferred, the rotation was kept to a small 10 degrees. With cropping and rotation, the total count of training data becomes 19200. Model training was done separately on the original images and on the augmented images for comparison. Horizontal flips were avoided, as flipping the time scale enacts a person speaking in reverse, which would lower model prediction accuracy. Model training on spectrograms of the full time length, rather than 3-second clips, was done separately; each 3-second spectrogram was replaced with the corresponding full-time-length spectrogram, thus maintaining the data counts for balancing. A visual analysis of around 100 audio spectrograms was done, and the maximum frequency observed among them was around 8 kHz. This means around 60\% of each spectrogram image is blue and carries no information from the emotion perspective, so all input audio spectrograms were cropped from the top by 60\% and resized back to 200x300 pixels. An ideal method would be to generate spectrograms with a fixed frequency scale if the frequency range is known a priori.
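A minimal sketch of the cropping and rotation augmentations with PIL (file names and resampling defaults are illustrative assumptions):
\begin{verbatim}
# Sketch: crop 10 px from the top (small frequency shift) and rotate +/-10
# degrees, keeping the final size at 300x200 (width x height).
from PIL import Image

img = Image.open("utterance_spec.png").convert("RGB").resize((300, 200))
w, h = img.size

cropped = img.crop((0, 10, w, h)).resize((w, h))  # top-crop, resize back
rot_pos = img.rotate(10)                          # +10 degrees
rot_neg = img.rotate(-10)                         # -10 degrees

for i, aug in enumerate([cropped, rot_pos, rot_neg]):
    aug.save(f"utterance_spec_aug{i}.png")
\end{verbatim}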
\begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Emotion & Count of data points\\ \hline\hline Happy & 786\\ Sad & 1752 \\ Anger & 1458\\ Neutral & 2118 \\ \hline \end{tabular} \end{center} \caption{Data count of each emotion.} \end{table}

\subsubsection{Video Data Pre-processing} Since our work also includes a video model, to see what room for improvement it offers in emotion recognition accuracy, we pre-processed the video data as well. We first clipped each video file into sentences, mirroring how we processed the audio files; this ensures that we query exactly the part of the video file that corresponds to a given audio spectrogram. We then extracted 20 frames per 3 seconds from each AVI video file corresponding to a 3-second audio spectrogram. Each frame contains both actors, so the frames were cropped from the left or right to capture only the actor whose emotion is being recorded, and then cropped further to cover the actor's face/head. The final resolution of the video frames is 60x100. One limitation of the dataset is that the actors do not speak facing the camera, so the full facial expression corresponding to a given emotion is not visible. While extracting the audio spectrograms and video frames, we observed memory usage above 12 GB on the machine, which led to crashes. Therefore, each audio and video file was processed individually in batches: a Python script\footnote{\small{https://github.com/julieeF/CS231N-Project/blob/master/load\_single\_Video.py}} was launched per file through a Unix shell script.

\section{Methods \& Model Architecture} In this section, we describe the models we built for emotion recognition in audio (the `Audio Models' subsection) and for emotion recognition in audio+video (the `Audio+Video Models' subsection). \subsection{Audio Models} By replicating and expanding upon the network architecture used in \cite{inproceedings}, we formulate three different models. The first model is a CNN, consisting of three 2D convolutional layers with maxpooling followed by two fully connected layers, as shown in Fig.\ref{fig:audiom}. The second architecture adds an LSTM layer after the convolutional layers of the CNN model; we call this model CNN+LSTM in this work. In the third model, we replace the LSTM layer with a vanilla RNN layer; this model is named CNN+RNN. A diagram of the CNN+RNN architecture is shown in Fig.\ref{fig:audiom}. The loss we use for training is the cross-entropy loss,
\begin{align} L_{\text{cross entropy}}=\frac{1}{N}\sum_{n=1}^N-\log\left(\frac{\exp(x_c^n)}{\sum_j \exp(x_j^n)}\right) \end{align}
where $N$ is the number of data points in the dataset, $x_c^n$ is the score of the true class for the $n$-th data point, and $x_j^n$ is the score of the $j$-th class for the $n$-th input. Minimizing the cross-entropy loss forces our model to learn emotion-related features from the audio spectrogram, because the loss is minimized only when, for each data point, the score of the true class is markedly larger than the scores of all other classes.
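The following is a minimal PyTorch sketch of a CNN+RNN of this shape trained with the cross-entropy loss of Equation 1; the channel counts, kernel sizes, and hidden dimension are illustrative guesses, not our exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class CNNRNN(nn.Module):
    def __init__(self, n_classes=4, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.RNN(input_size=64 * 25, hidden_size=hidden,
                          batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (B, 3, 200, 300)
        z = self.conv(x)                      # (B, 64, 25, 37)
        z = z.permute(0, 3, 1, 2).flatten(2)  # width (time) as sequence axis
        _, h = self.rnn(z)                    # h: (1, B, hidden)
        return self.fc(h[-1])                 # class scores

model = CNNRNN()
scores = model(torch.randn(8, 3, 200, 300))
loss = nn.CrossEntropyLoss()(scores, torch.randint(0, 4, (8,)))  # Eq. (1)
loss.backward()
\end{verbatim}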
\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=10cm]{images/CNN_CNN_plus_RNN.png} } \end{center} \caption{Audio model architectures.} \label{fig:audiom} \end{figure}

\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=12cm]{images/3dcnn_cnn_rnn.PNG} } \end{center} \caption{Audio+Video model architectures.} \label{fig:videom} \end{figure}

\subsection{Audio+Video Models} Inspired by the work of \cite{DBLP:journals/corr/TorfiIND17}, our audio+video model is a two-stream network consisting of two sub-networks, as shown in Fig.\ref{fig:videom}(a). The first sub-network is the audio model, for which we use the best-performing audio model we built--CNN+RNN, as shown in Fig.\ref{fig:audiom}. The architecture of this sub-network is the same as the audio model except that it drops the original output layer, in order to expose high-level features of the audio spectrograms, as shown in Fig.\ref{fig:videom}(CNN+RNN). The second sub-network is the video model, made of four 3D convolutional layers and three 3D maxpooling layers, followed by two fully connected layers, as shown in Fig.\ref{fig:videom}(3D CNN). Finally, the last layers of the two sub-networks are concatenated together, followed by one output layer, as shown in Fig.\ref{fig:videom}(a). We train this audio+video model using two different methods--semi-supervised training and supervised training. For the semi-supervised method, we first pre-train our model using video frames and audio spectrograms from the same video and from different videos, as shown in Fig.\ref{fig:videom}(b). This forces the model to learn the correlation between the visual and auditory elements of a video. The input to the pre-training process has three distinct types: positive (the audio spectrogram and video frames are from the same video); hard negative (the audio spectrogram and video frames are from different videos with different emotions); and super hard negative (the audio spectrogram and video frames are from different videos with the same emotion). The loss function we use for pre-training is the contrastive loss, $$L_{\text{contrastive loss}}=\frac{1}{N}\sum_{n=1}^N \left(L_1^n+L_2^n\right)$$ where $$L_1^n=(y^n)\left\|f_v(v^n)-f_a(a^n)\right\|_2^2$$ $$L_2^n=(1-y^n)\max(\eta-\left\|f_v(v^n)-f_a(a^n)\right\|_2,0)^2$$ $N$ is the number of data points in the dataset, $v^n, a^n$ are the video frames and audio spectrogram of the $n$-th data point, $f_v, f_a$ are the video and audio sub-networks, and $y^n$ is one if the video frames and audio spectrogram are from the same video, and zero otherwise. $\eta$ is the margin hyperparameter. $\left\|f_v(v^n)-f_a(a^n)\right\|_2$ should be small when the video frames and audio spectrogram are from the same video, and large when they come from different videos. Therefore, by minimizing the contrastive loss, the audio and video models are forced to output similar values when their inputs are from the same video, and very distinct values when they are not. This allows the model to learn the connection between the audio and visual elements of the same video. After pre-training is done, we perform supervised learning on the pre-trained model, where the input is the audio spectrogram and video frames of a video and the output is the predicted emotion, as shown in Fig.\ref{fig:videom}(a). The loss of our model is the cross-entropy; the formula is the same as in Equation 1. The second training method is supervised training directly on the model, without the pre-training process.
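A short PyTorch sketch of the contrastive loss above; the embedding dimension is illustrative, and fv_out, fa_out stand for the outputs of the video (3D CNN) and audio (CNN+RNN) sub-networks.
\begin{verbatim}
import torch

def contrastive_loss(fv_out, fa_out, y, eta=1.0):
    # fv_out, fa_out: (N, D) embeddings; y: (N,), 1 if the pair comes from
    # the same video, else 0; eta is the margin hyperparameter.
    dist = torch.norm(fv_out - fa_out, p=2, dim=1)        # ||f_v - f_a||_2
    l1 = y * dist.pow(2)                                  # pull genuine pairs
    l2 = (1 - y) * torch.clamp(eta - dist, min=0).pow(2)  # push impostors
    return (l1 + l2).mean()

fv_out, fa_out = torch.randn(16, 64), torch.randn(16, 64)
y = torch.randint(0, 2, (16,)).float()
print(contrastive_loss(fv_out, fa_out, y))
\end{verbatim}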
\section{Experiments \& Results} For model evaluation, prediction accuracy is the key metric. For comparison, our accuracy is compared with the accuracy reported in \cite{inproceedings}. Since we balanced the data counts, the overall accuracy and the class accuracy reported in \cite{inproceedings} are mathematically equal in our work. We aimed to achieve a prediction accuracy of around 60\% considering 4 emotions. We trained the model on all 4 dataset segmentations and observed that the data with the original time scale and without noise cleanup gives the best accuracy; the results reported are based on this dataset. Spectrograms with noise removed sound promising in theory, but they did not work, for two possible reasons. First, the algorithm used to remove noise reduces the signal amplitude, which may suppress some features; an algorithm that amplifies the signal back needs to be explored. Some techniques, e.g., subtracting noise from the signal and multiplying the final signal by a constant, were explored, but they all resulted in signal distortion. Second, noise in the spectrogram simulates a realistic scenario, and during model training the noise could indirectly act as a regularizer. \cite{DBLP:journals/corr/TorfiIND17} also does not remove noise from the input audio spectrograms. \subsection{Hyperparameters} We started with prediction on 4 emotions, and most of the work, results, and analysis are based on these 4 emotions. Our validation accuracy did not go beyond 54.00\%, and we saw overfitting during model training beyond this point. This led us to experiment with various hyperparameters in the optimizer and in the network layers, e.g., kernel size, the input and output size of each layer, dropout, batchnorm, data augmentation, and L1 \& L2 regularization. The Adam optimizer with a learning rate of 1e-4 was used to train the model, as this gave the best accuracy; we experimented with 1e-3 \& 1e-5 and observed that the model did not train well with those settings. We observed that a weight decay (the parameter that controls L2 regularization) of 0.01 in the Adam optimizer improved accuracy by approximately 1\%; weight decay values of 0.005 and 0.02 were also tried but did not help. All other optimizer parameters were kept at their defaults. Enabling L1 regularization, data augmentation by rotation and cropping, or batchnorm resulted in no change or improvement in accuracy, possibly because the model had already learned all the features it could from the available data given the architecture. The dropout probability was also tuned, and optimal values of 0.2 for the last fully connected layer and 0.1 for the dropout in the RNN layer were obtained. The input \& output dimensions of the audio network layers were doubled \& quadrupled, which improved accuracy by 1-2\%. Increasing these dimensions also resulted in high memory usage during training; we attempted to extend this finding to the video network, but the machine's limited 12 GB of memory prevented the experiment. This leads us to strongly believe there is room for improvement that needs more experimentation on a machine with larger memory. The accuracy improvement is also evident across the model architectures we used, from CNN to CNN+RNN to CNN+RNN+3DCNN, which have progressively more parameters with which to learn features. 80\% of the total data points were used for training and the rest for validation.
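For reference, the optimizer configuration just described can be written as below; the stand-in network is a placeholder, not our architecture.
\begin{verbatim}
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 200 * 300, 4))  # stand-in

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,           # 1e-3 and 1e-5 did not train well
    weight_decay=0.01, # L2 regularization; gave roughly 1% improvement
)
\end{verbatim}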
A batch of 64 data points per iteration is used to train the model; larger batches resulted in longer iteration times and higher memory usage, so 64 was picked. Normalizing all images in the input transformation, with mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225], improved accuracy by 0.37\%. \subsection{Validation Set Accuracy} Table \ref{fig:tab} summarizes the validation set accuracy obtained with the different architectures.

\begin{table} \begin{small} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Architecture & Accuracy (\%) & Data Aug. & Emotions \\ \hline\hline CNN & 52.23 & No & H,S,A,N\\ CNN & 51.90 & Yes & H,S,A,N\\ CNN+LSTM & 39.77 & No & H,S,A,N\\ CNN+LSTM & 39.65 & Yes & H,S,A,N\\ CNN+RNN & 54.00 & No & H,S,A,N\\ CNN+RNN & 70.25 & No & S,A,N\\ CNN+RNN+3DCNN & 51.94 & No & H,S,A,N\\ CNN+RNN+3DCNN & 71.75 & No & S,A,N\\ \hline \end{tabular} \end{center} \caption{Validation set accuracy of CNN, CNN+LSTM, CNN+RNN \& CNN+RNN+3DCNN over 4 and 3 emotions. \small{H=Happy, S=Sad, A=Angry, N=Neutral}} \label{fig:tab} \end{small} \end{table}

\subsection{Loss \& Classification Accuracy History on CNN+RNN+3DCNN} Fig. \ref{fig:long_loss4} shows the contrastive loss curve obtained during self-supervised model training. We ran the self-supervised model for 5 epochs and for 10 epochs separately, and fed the weights learned in these two experiments into the CNN+RNN+3DCNN model for classification training. The 5-epoch run gave classification accuracy 0.5\% better than the 10-epoch run, which could be attributed to overfitting of the weights learned during the longer self-supervised run.

\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=6cm]{images/SS_LOSS.PNG} \centering } \end{center} \caption{Contrastive loss history curve on CNN+RNN+3DCNN.} \label{fig:long_loss4} \end{figure}

Fig. \ref{fig:long_loss} shows the softmax/cross-entropy loss curve obtained with the best model, CNN+RNN+3DCNN. Since the loss is reported per iteration it appears noisy, but per epoch we observed it decreasing on a logarithmic scale.

\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=6cm]{images/loss_71_75.PNG} \centering } \end{center} \caption{Loss history curve on CNN+RNN+3DCNN. The curve is noisy because it is generated per iteration.} \label{fig:long_loss} \end{figure}

Fig. \ref{fig:long_acc} shows the classification accuracy history of the best model, CNN+RNN+3DCNN. We obtained a best validation accuracy of 71.75\% considering 3 emotions (sad, anger, neutral).

\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=6cm]{images/Accuracy_71_75.PNG} \centering } \end{center} \caption{Classification accuracy history on CNN+RNN+3DCNN after every 20 iterations for a total of 1200 iterations.} \label{fig:long_acc} \end{figure}

\subsection{Confusion Matrix on CNN+RNN \& CNN+RNN+3DCNN} Fig. \ref{fig:long_conf} shows the confusion matrix obtained with CNN+RNN. From it we see that only the happy emotion is predicted poorly compared to the other emotions. This led us to explore the CNN+RNN \& CNN+RNN+3DCNN architectures on only 3 emotions (instead of 4), to understand whether we see a performance improvement when switching from audio-only inputs to audio+video inputs. Fig. \ref{fig:long_conf_3_emo} shows the confusion matrix obtained with the best model, CNN+RNN+3DCNN.
\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=6cm]{images/54_per_CF.PNG} \centering } \end{center} \caption{Confusion matrix of true class vs. prediction in CNN+RNN.} \label{fig:long_conf} \end{figure}

\begin{figure}[t] \begin{center} \fbox{ \includegraphics[width=8cm,height=6cm]{images/CM_71_75.PNG} \centering } \end{center} \caption{Confusion matrix of true class vs. prediction in CNN+RNN+3DCNN.} \label{fig:long_conf_3_emo} \end{figure}

\subsection{Results Analysis} From Table \ref{fig:tab}, considering 4 emotions, we can see that CNN+RNN is the best-performing architecture and that data augmentation does not improve accuracy. CNN does not work as well as CNN+RNN because it has the same architecture as the first few layers of CNN+RNN and is comparatively simple; CNN+RNN learns higher-level features and therefore performs better. CNN+LSTM does have a more complex architecture; however, while tuning the hyperparameters we found that accuracy improved slightly when increasing the dropout probability in CNN+LSTM, indicating that CNN+LSTM may be overly complex for our dataset and training purpose. Moreover, added model complexity requires more careful hyperparameter tuning, and since CNN+RNN already gives a relatively good performance compared with \cite{inproceedings}, we decided not to pursue further adjustment of CNN+LSTM. From Table \ref{fig:tab} it is also evident that the CNN+RNN+3DCNN architecture, which uses video frames along with the audio spectrogram, is the best when considering 3 emotions, but its accuracy did not improve significantly over CNN+RNN. This is because the cropping window used to focus on the face/head for recognizing facial emotion was large, as the actors do not face the camera and they move during their speech. Automatically detecting the face/head with a detection model and cropping based on the bounding box would be ideal, and accuracy would be expected to increase significantly. Considering 4 emotions, CNN+RNN+3DCNN performed worse than CNN+RNN because the prediction accuracy for the happy emotion is already poor due to its low data count; adding video frames, which capture facial expressions only from the side, further confuses the model. That data augmentation does not increase validation accuracy, and even makes the model slightly worse, could be because the images generated by cropping and rotation lose some emotion-related features, since these operations alter the frequency and time scales--similar to altering the pitch of the audio or reversing the audio of a sentence--and could confuse the model. From the confusion matrix, we observed that happiness is predicted poorly compared to the other emotions. One possible reason is that the happiness data count is very low compared to the other emotions, and over-sampling the happiness data by repetition is not enough; more happiness data would be expected to improve happiness prediction accuracy. Comparing our results with \cite{inproceedings}, we lag their class accuracy by 5.4\%; however, comparing overall accuracy considering 3 emotions, our work achieved 71.75\%, which is better by 2.95\%. \section{Conclusion/Future Work} Our work demonstrated emotion recognition from audio spectrograms through various deep neural networks--CNN, CNN+RNN \& CNN+LSTM--on the IEMOCAP\cite{IEMOCAP} dataset. We then explored combining audio with video to achieve better accuracy through CNN+RNN+3DCNN.
We demonstrated that CNN+RNN+3DCNN performs better because it learns emotion features from the audio signal (CNN+RNN) and from facial expressions in the video frames (3DCNN), with the two complementing each other. To further improve the accuracy of our model, we plan to explore several directions. We want to explore more noise removal algorithms and generate audio spectrograms without noise; this will help us analyze whether removing noise actually helps, or whether the noise acts as a regularizer and does not need to be removed. We also want to explore how the model predicts emotion when multiple people are speaking. Next, we want to explore automatic cropping around the face/head in video frames; we strongly believe it will significantly improve prediction accuracy. As far as data augmentation is concerned, even though none of the direct data augmentation methods proved useful, adding a signal with very low amplitude and varying frequency onto the speech signal and then generating the audio spectrogram from the resulting signal would create unique data points and help reduce model overfitting. Given machines/GPUs with more memory, we would experiment with increasing the input and output dimensions of each layer in the network to find the optimal point; there is definitely room for better accuracy with this method. We then want to study prediction latency across the different models and their architecture sizes. We also want to experiment more with the CNN+LSTM network and fine-tune it to see the best accuracy we can achieve with this model. We tried transfer learning using ResNet18 but did not achieve good results; more experimentation is needed on how to transfer-learn from existing models. Lastly, we want to try the model on all the emotion labels in the dataset, understand the bottlenecks, and come up with neural network solutions that can predict with high accuracy. \section{Link to github code} $\text{https://github.com/julieeF/CS231N-Project}$ \section{Contributions \& Acknowledgements} \href{https://www.linkedin.com/in/smandeep/}{\color{blue}Mandeep Singh}:\newline Mandeep is a student at Stanford under SCPD. He worked at Intel as a Design Automation Engineer for 8 years; prior to joining Intel, he completed a master's in electrical engineering specializing in analog \& mixed-signal design at SJSU. Yuan Fang:\newline Yuan is a master's student at Stanford in the ICME department. Her interests lie in machine learning \& deep learning. We would like to thank the CS231N Teaching Staff for guiding us through the project. We also want to thank Google Cloud Platform and Google Colaboratory for providing free resources to carry out the experimentation involved in this work. {\small \bibliographystyle{ieee}
{ "timestamp": "2020-06-16T02:24:20", "yymm": "2006", "arxiv_id": "2006.08129", "language": "en", "url": "https://arxiv.org/abs/2006.08129" }
\section{Introduction} Gradient computation is the methodological backbone of deep learning, but computing gradients is not always easy. Gradients with respect to parameters of the density of an integral are generally intractable, and one must resort to gradient estimators \citep{asmussen2007stochastic, mohamed2019gradientest}. Typical examples of objectives over densities are returns in reinforcement learning \citep{sutton2018reinforcement} or variational objectives for latent variable models \cite[e.g.,][]{kingma2014auto, rezende2014stochastic}. In this paper, we address {\em gradient estimation for discrete distributions} with an emphasis on latent variable models. We introduce a relaxed gradient estimation framework for combinatorial discrete distributions that generalizes the Gumbel-Softmax and related estimators \citep{maddison2016concrete, jang2016categorical}. Relaxed gradient estimators incorporate bias in order to reduce variance. Most relaxed estimators are based on the Gumbel-Max trick \citep{luce1959individual, maddison2014astarsamp}, which reparameterizes distributions over one-hot binary vectors. The Gumbel-Softmax estimator is the simplest; it continuously approximates the Gumbel-Max trick to admit a reparameterization gradient \citep{kingma2014auto, rezende2014stochastic, ruiz2016generalized}. This is used to optimize the ``soft'' approximation of the loss as a surrogate for the ``hard'' discrete objective. Adding structured latent variables to deep learning models is a promising direction for addressing a number of challenges:~improving interpretability (e.g., via latent variables for subset selection \citep{chen2018learning} or parse trees \cite{corro2018differentiable}), incorporating problem-specific constraints (e.g., via enforcing alignments \cite{mena2018learning}), and improving generalization (e.g., by modeling known algorithmic structure \cite{graves2014neural}). Unfortunately, the vanilla Gumbel-Softmax cannot scale to distributions over large state spaces, and the development of structured relaxations has been piecemeal. We introduce \emph{stochastic softmax tricks} (SSTs), which are a unified framework for designing structured relaxations of combinatorial distributions. They include relaxations for the above applications, as well as many novel ones. To use an SST, a modeler chooses from a class of models that we call \emph{stochastic argmax tricks} (SMTs). These are instances of perturbation models \citep[e.g.,][]{papandreou2011perturb, hazan2012partition, tarlow2012randoms, gane2014learning}, and they induce a distribution over a finite set $\discreteset$ by optimizing a linear objective (defined by random utility $U \in \R^n$) over $\discreteset$. An SST relaxes this SMT by combining a strongly convex regularizer with the random linear objective. The regularizer makes the solution a continuous, a.e. differentiable function of $U$ and appropriate for estimating gradients with respect to $U$'s parameters. The Gumbel-Softmax is a special case. Fig. \ref{fig:intro} provides a summary. We test our relaxations in the Neural Relational Inference (NRI) \citep{kipf2018neural} and L2X \cite{chen2018learning} frameworks. Both NRI and L2X use variational losses over latent combinatorial distributions. When the latent structure in the model matches the true latent structure, we find that our relaxations encourage the unsupervised discovery of this combinatorial structure.
This leads to models that are more interpretable and achieve stronger performance than less structured baselines. All proofs are in the Appendix. \begin{figure}[t] \centering \begin{subfigure}[b]{0.246\textwidth} \includegraphics{figures/intro_polytope.pdf} \caption*{Finite set} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/intro_random_cost.pdf} \caption*{Random utility} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/intro_smt.pdf} \caption*{Stoch. Argmax Trick} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/intro_rsmt.pdf} \caption*{Stoch. Softmax Trick} \end{subfigure} \caption{Stochastic softmax tricks relax discrete distributions that can be reparameterized as random linear programs. $X$ is the solution of a random linear program defined by a finite set $\discreteset$ and a random utility $U$ with parameters $\theta \in \R^m$. To design relaxed gradient estimators with respect to $\theta$, $X_{\temp}$ is the solution of a random convex program that continuously approximates $X$ from within the convex hull of $\discreteset$. The Gumbel-Softmax \citep{maddison2016concrete, jang2016categorical} is an example of a stochastic softmax trick.}\label{fig:intro} \vspace{-0.5\baselineskip} \end{figure} \section{Problem Statement} \label{sec:problemstmt} Let $\abstractset$ be a non-empty, finite set of combinatorial objects, e.g. the spanning trees of a graph. To represent $\abstractset$, define the embeddings $\discreteset \subseteq \R^n$ of $\abstractset$ to be the image $\{ \embed(y) \mid y \in \abstractset\}$ of some embedding function $\embed : \abstractset \to \R^n$.\footnote{This is equivalent to the notion of sufficient statistics \cite{wainwright2008graphical}. We draw a distinction only to avoid confusion, because the distributions $p_{\theta}$ that we ultimately consider are not necessarily from the exponential family.} For example, if $\abstractset$ is the set of spanning trees of a graph with edges $E$, then we could enumerate $y_1, \ldots, y_{|\abstractset|}$ in $\abstractset$ and let $\embed(y)$ be the one-hot binary vector of length $|\abstractset|$, with $\embed(y)_i = 1$ iff $y = y_i$. This requires a very large ambient dimension $n = |\abstractset|$. Alternatively, in this case we could use a more efficient, structured representation: $\embed(y)$ could be a binary indicator vector of length $|E| \ll |\abstractset|$, with $\embed(y)_e = 1$ iff edge $e$ is in the tree $y$. See Fig. \ref{fig:embeddings} for visualizations and additional examples of structured binary representations. We assume that $\discreteset$ is convex independent.\footnote{Convex independence is the analog of linear independence for convex combinations.} Given a probability mass function $p_{\theta} : \discreteset \to (0, 1]$ that is differentiable in $\theta \in \R^m$, a loss function $\loss : \R^n \to \R$, and $X \sim p_{\theta}$, our ultimate goal is gradient-based optimization of $\expect[\loss(X)]$. Thus, we are concerned in this paper with the problem of estimating the derivatives of the expected loss, \begin{equation} \label{eq:problem} \frac{d}{d \theta}\expect[\loss(X)] = \frac{d}{d \theta} \left(\sum\nolimits_{x \in \discreteset} \loss(x) p_{\theta}(x)\right). 
\end{equation} \section{Background on Gradient Estimation} \label{sec:background} Relaxed gradient estimators assume that $\loss$ is differentiable and use a change of variables to remove the dependence of $p_{\theta}$ on $\theta$, known as the reparameterization trick \citep{kingma2014auto, rezende2014stochastic}. The Gumbel-Softmax trick (GST) \citep{maddison2016concrete, jang2016categorical} is a simple relaxed gradient estimator for one-hot embeddings, which is based on the Gumbel-Max trick (GMT) \citep{luce1959individual, maddison2014astarsamp}. Let $\discreteset$ be the one-hot embeddings of $\abstractset$ and $p_{\theta}(x) \propto \exp(x^T\theta)$. The GMT is the following identity: for $X\sim p_{\theta}$ and $G_i + \theta_i \sim \Gumbel(\theta_i)$ indep., \begin{align} \label{eq:gumbelmaxtrick} X \overset{d}{=} \arg \max\nolimits_{x \in \discreteset} \, (G+\theta)^T x. \end{align} Ideally, one would have a reparameterization estimator, $\expect[d \loss(X)/d \theta] = d \expect[\loss(X)]/d \theta$,\footnote{For a function $f(x_1, x_2)$, $\partial f(z_1, z_2) / \partial x_1$ is the partial derivative (e.g., a gradient vector) of $f$ in the first variable evaluated at $z_1, z_2$. $d f(z_1, z_2) / d x_1$ is the total derivative of $f$ in $x_1$ evaluated at $z_1, z_2$. For example, if $x = f(\theta)$, then $ d g(x, \theta)/ d\theta = (\partial g(x, \theta)/\partial x) (d f(\theta)/ d\theta) + \partial g(x, \theta)/\partial\theta$.} using the right-hand expression in \eqref{eq:gumbelmaxtrick}. Unfortunately, this fails. The problem is not the lack of differentiability, as normally reported. In fact, the argmax is differentiable almost everywhere. Instead it is the jump discontinuities in the argmax that invalidate this particular exchange of expectation and differentiation \citep[][Chap. 7.2]{lee2018reparameterization, asmussen2007stochastic}. The GST estimator \citep{maddison2016concrete, jang2016categorical} overcomes this by using the tempered softmax, $\softmax_{\temp}(u)_i = \exp(u_i/\temp) / \sum_{j=1}^n \exp(u_j/\temp)$ for $u \in \R^n, \temp > 0$, to continuously approximate $X$, \begin{align} \label{eq:gumbelsoftmaxestimator} X_{\temp} = \softmax_{\temp}(G + \theta). \end{align} The relaxed estimator is $d \loss(X_{\temp}) / d \theta$. While this is a biased estimator of \eqref{eq:problem}, it is an unbiased estimator of $d\expect[\loss(X_{\temp})]/d\theta$, and $X_{\temp} \to X$ a.s. as $\temp \to 0$. Thus, $d \loss(X_{\temp}) / d \theta$ is used for optimizing $\expect[\loss(X_{\temp})]$ as a surrogate for $\expect[\loss(X)]$, on which the final model is evaluated. The score function estimator \citep{glynn1990likelihood, williams1992simple}, $\loss(X) \, \partial \log p_{\theta}(X) / \partial \theta$, is the classical alternative. It is a simple, unbiased estimator, but without highly engineered control variates, it suffers from high variance \citep{mnih2014neural}. Building on the score function estimator are a variety of estimators that require multiple evaluations of $\loss$ to reduce variance \citep{DBLP:journals/corr/GuLSM15, tucker2017rebar, grathwohl2018backpropagation, yin2018arm, Kool2020Estimating, aueb2015local}. The advantages of relaxed estimators are the following: they only require a single evaluation of $\loss$, they are easy to implement using modern software packages \citep{abadi2016tensorflow, paszke2017automatic, jax2018github}, and, as reparameterization gradients, they tend to have low variance \citep{gal2016uncertainty}.
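For concreteness, the following is a minimal PyTorch sketch of the GMT and its softmax relaxation; the dimension, temperature, and surrogate loss are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

theta = torch.randn(5, requires_grad=True)   # parameters of p_theta
G = -torch.log(-torch.log(torch.rand(5)))    # standard Gumbel noise

# Gumbel-Max trick: a one-hot sample X ~ p_theta.
X = F.one_hot((G + theta).argmax(), num_classes=5).float()

# Gumbel-Softmax: the relaxed sample X_t, differentiable in theta.
t = 0.5
X_t = F.softmax((G + theta) / t, dim=0)

loss = (X_t ** 2).sum()   # stand-in for a differentiable loss L
loss.backward()           # d L(X_t) / d theta is well defined
print(theta.grad)
\end{verbatim}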
\begin{figure}[t] \centering \tabskip=0pt \valign{#\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/onehotstate.pdf} \caption*{One-hot vector} \end{subfigure} } \vfill \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/khotstate.pdf} \caption*{$k$-hot vector} \end{subfigure} }\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/permmatstate.pdf} \caption*{Permutation matrix} \end{subfigure} }\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/spanningtreestate.pdf} \caption*{Spanning tree adj. matrix} \end{subfigure} }\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/arborstate.pdf} \caption*{Arborescence adj. matrix} \end{subfigure} }\cr } \caption{Structured discrete objects can be represented by binary arrays. In these graphical representations, color indicates 1 and no color indicates 0. For example, ``Spanning tree'' is the adjacency matrix of an undirected spanning tree over 6 nodes; ``Arborescence'' is the adjacency matrix of a directed spanning tree rooted at node 3.} \label{fig:embeddings} \end{figure}

\section{Stochastic Argmax Tricks} \label{sec:smts} Simulating a GST requires enumerating $|\abstractset|$ random variables, so it cannot scale. We overcome this by identifying generalizations of the GMT that can be relaxed and that scale to large $\abstractset$s by exploiting structured embeddings $\discreteset$. We call these \emph{stochastic argmax tricks} (SMTs), because they are perturbation models \citep{tarlow2012randoms, gane2014learning}, which can be relaxed into stochastic softmax tricks (Section \ref{sec:rsmts}). \begin{definition} \label{def:smt} Given a non-empty, convex independent, finite set $\discreteset \subseteq \R^n$ and a random utility $U$ whose distribution is parameterized by $\theta \in \R^m$, a {\em stochastic argmax trick} for $X$ is the linear program, \begin{equation} \label{eq:smt} X = \arg \max\nolimits_{x \in \discreteset} \, U^T x. \end{equation} \end{definition} The GMT is recovered with one-hot $\discreteset$ and $U \sim \Gumbel(\theta)$. We assume that \eqref{eq:smt} is a.s.~unique, which is guaranteed if $U$ a.s.~never lands in any particular lower dimensional subspace (Prop. \ref{prop:noisedistribution}, App. \ref{supp:sec:proofs}). Because efficient linear solvers are known for many structured $\discreteset$, SMTs are capable of scaling to very large $\abstractset$ \citep{schrijver2003combinatorial, kolmogorov2006convergent, koller2009probabilistic}. For example, if $\discreteset$ are the edge indicator vectors of spanning trees $\abstractset$, then \eqref{eq:smt} is the maximum spanning tree problem, which is solved by Kruskal's algorithm \citep{kruskal1956shortest}; a short sketch of this example appears below. The role of the SMT in our framework is to reparameterize $p_{\theta}$ in \eqref{eq:problem}. Ideally, \emph{given} $p_{\theta}$, there would be an efficient (e.g., $\mathcal{O}(n)$) method for simulating \emph{some} $U$ such that the marginal of $X$ in \eqref{eq:smt} is $p_{\theta}$. The GMT shows that this is possible for one-hot $\discreteset$, but the situation is not so simple for structured $\discreteset$. Characterizing the marginal of $X$ in general is difficult \cite{tarlow2012randoms, hazan2013perturb}, but $U$ that are efficient to sample from typically induce conditional independencies in $p_{\theta}$ \citep{gane2014learning}.
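As a concrete illustration of Def. \ref{def:smt}, the following sketch samples a random spanning tree of a small complete graph by perturbing edge utilities with Gumbel noise and solving a maximum spanning tree problem; it uses SciPy's minimum spanning tree on negated, shifted weights, and the sizes are illustrative.
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
n = 6
mask = np.triu(np.ones((n, n), dtype=bool), k=1)  # undirected edges of K_n

theta = rng.normal(size=(n, n))                   # utility parameters
U = np.where(mask, theta + rng.gumbel(size=(n, n)), 0.0)  # random utility

# The SMT is a *maximum* spanning tree problem in U. SciPy computes a
# minimum spanning tree and treats zeros as missing edges, so flip the
# sign and shift so that every real edge weight stays strictly positive.
W = np.where(mask, U.max() + 1.0 - U, 0.0)
tree = minimum_spanning_tree(W).toarray()
X = ((tree + tree.T) != 0).astype(float)          # adjacency matrix sample
print(X)
\end{verbatim}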
Given these constraints, we are not able to reparameterize an arbitrary $p_{\theta}$ on structured $\discreteset$. Instead, for structured $\discreteset$ we \emph{assume} that $p_{\theta}$ is reparameterized by \eqref{eq:smt}, and treat $U$ as a modeling choice. Thus, we caution against the standard approach of taking $U \sim \Gumbel(\theta)$ or $U \sim \Normal(\theta, \sigma^2I)$ without further analysis. Practically, in experiments we show that the choice of noise distribution can have a large impact on quantitative results. Theoretically, we show in App. \ref{supp:sec:fieldguide} that an SMT over directed spanning trees with negative exponential utilities has a more interpretable structure than the same SMT with Gumbel utilities. \section{Stochastic Softmax Tricks} \label{sec:rsmts} If we assume that $X \sim p_{\theta}$ is reparameterized as an SMT, then a stochastic softmax trick (SST) is a random convex program with a solution that relaxes $X$. An SST has a valid reparameterization gradient estimator. Thus, we propose using SSTs as surrogates for estimating gradients of \eqref{eq:problem}, a generalization of the Gumbel-Softmax approach. Because we want gradients with respect to $\theta$, we assume that $U$ is also reparameterizable. Given an SMT, an SST incorporates a strongly convex regularizer into the linear objective, and expands the state space to the convex hull of the embeddings $\discreteset = \{x_1, \ldots, x_m\} \subseteq \R^n$, \begin{equation} P := \hull(\discreteset) := \left\{\sum\nolimits_{i=1}^m \lambda_i x_i \, \middle\vert \, \lambda_i \geq 0, \, \sum\nolimits_{i=1}^m \lambda_i = 1\right\}. \end{equation} Expanding the state space to a convex polytope makes it path-connected, and the strongly convex regularizer ensures that the solutions are continuous over the polytope. \begin{definition} \label{def:rsmt} Given a stochastic argmax trick $(\discreteset, U)$ where $P := \hull(\discreteset)$ and a proper, closed, strongly convex function $f : \R^n \to \{\R, \infty\}$ whose domain contains the relative interior of $P$, a {\em stochastic softmax trick} for $X$ at temperature $\temp > 0$ is the convex program, \begin{equation} \label{eq:rsmt} X_{\temp} = \arg \max_{x \in P} \, U^T x - \temp f(x) \end{equation} \end{definition} For one-hot $\discreteset$, the Gumbel-Softmax is a special case of an SST where $P$ is the probability simplex, $U \sim \Gumbel(\theta)$, and $f(x) = \sum_{i} x_i \log(x_i)$. Objectives like \eqref{eq:rsmt} have a long history in convex analysis \citep[e.g.,][Chap. 12]{rockafellar1970convex} and machine learning \citep[e.g.,][Chap. 3]{wainwright2008graphical}. In general, the difficulty of computing the SST will depend on the interaction between $f$ and $\discreteset$. $X_{\temp}$ is suitable as an approximation of $X$. At positive temperatures $\temp$, $X_{\temp}$ is a function of $U$ that ranges over the faces and relative interior of $P$. The degree of approximation is controlled by the temperature parameter, and as $\temp \to 0^+$, $X_{\temp}$ is driven to $X$ a.s. \begin{restatable}{proposition}{approximation} \label{prop:approximation} If $X$ in Def. \ref{def:smt} is a.s. unique, then for $X_{\temp}$ in Def. \ref{def:rsmt}, $\lim_{\temp \to 0^+} X_{\temp} = X$ a.s. If additionally $\loss : P \to \R$ is bounded and continuous, then $\lim_{\temp \to 0^+} \expect[\loss(X_{\temp})] = \expect[\loss(X)]$. \end{restatable} It is common to consider temperature parameters that interpolate between marginal inference and a deterministic, most probable state.
While superficially similar, our relaxation framework is different; as $\temp \to 0^+$, an SST approaches \emph{a sample from the SMT model} as opposed to a deterministic state. $X_{\temp}$ also admits a reparameterization trick. The SST reparameterization gradient estimator is given by \begin{equation} \label{eq:solution} \frac{d \loss(X_{\temp})}{d \theta} = \frac{\partial \loss(X_{\temp})}{\partial X_{\temp}} \frac{\partial X_{\temp}}{\partial U} \frac{d U}{d \theta}. \end{equation} If $\loss$ is differentiable on $P$, then this is an unbiased estimator\footnote{Technically, one needs an additional local Lipschitz condition for $\loss(X_{\temp})$ in $\theta$ \citep[Prop. 2.3, Chap. 7]{asmussen2007stochastic}.} of the gradient $d\expect[\loss(X_{\temp})] / d \theta$, because $X_{\temp}$ is continuous and a.e. differentiable: \begin{restatable}{proposition}{relaxation} \label{prop:relaxation} $X_{\temp}$ in Def. \ref{def:rsmt} exists, is unique, and is a.e. differentiable and continuous in $U$. \end{restatable} In general, the Jacobian $\partial X_{\temp} / \partial U$ will need to be derived separately given a choice of $f$ and $\discreteset$. However, as pointed out by \citep{domke2010impdiff}, because the Jacobian of $X_{\temp}$ is symmetric \citep[][Cor. 2.9]{rockafellar1999second}, local finite difference approximations can be used to approximate $d \loss(X_{\temp})/ d U$ (App. \ref{supp:sec:exp_details}). These finite difference approximations only require two additional calls to a solver for \eqref{eq:rsmt} and do not require additional evaluations of $\loss$. We found them to be helpful in a few experiments (cf. Section \ref{sec:experiments}). There are many well-studied $f$ for which \eqref{eq:rsmt} is efficiently solvable. If $f(x) = \lVert x \rVert^2 / 2$, then $X_{\temp}$ is the Euclidean projection of $U/\temp$ onto $P$. Efficient projection algorithms exist for some convex sets \citep[see][and references therein]{wolfe1976finding, duchi2008efficient, liu2009efficient, blondel2019structured}, and more generic algorithms exist that only call linear solvers as subroutines \citep{niculae2018sparsemap}. In some of the settings we consider, generic negative-entropy-based relaxations are also applicable. We refer to relaxations with $f(x) = \sum\nolimits_{i=1}^n x_i \log(x_i)$ as \emph{categorical entropy relaxations} \citep[e.g.,][]{blondel2019structured, blondel2020learning}. We refer to relaxations with $f(x) = \sum\nolimits_{i=1}^n x_i \log (x_i) + (1-x_i) \log(1-x_i)$ as \emph{binary entropy relaxations} \cite[e.g.,][]{amos2019limited}. Marginal inference in exponential families is a rich source of SST relaxations. Consider an exponential family over the finite set $\discreteset$ with natural parameters $u/\temp \in \R^n$ such that the probability of $x \in \discreteset$ is proportional to $\exp(u^Tx/\temp)$. The \emph{marginals} $\mu_{\temp} : \R^n \to \hull(\discreteset)$ of this family are solutions of a convex program in exactly the form \eqref{eq:rsmt} \citep{wainwright2008graphical}, i.e., there exists $A^* : \hull(\discreteset) \to \{\R, \infty\}$ such that, \begin{equation} \label{eq:expfamilymarg} \mu_{\temp}(u) := \sum\nolimits_{x \in \discreteset} \frac{x\exp(u^Tx/\temp)}{\sum_{y \in \discreteset} \exp(u^T y/\temp)} = \arg\max_{x \in P} u^Tx - \temp A^*(x). \end{equation} The definition of $A^*$, which generates $\mu_{\temp}$ in \eqref{eq:expfamilymarg}, can be found in \citep[][Thm. 3.4]{wainwright2008graphical}.
There are many well-studied $f$ for which \eqref{eq:rsmt} is efficiently solvable. If $f(x) = \lVert x \rVert^2 / 2$, then $X_{\temp}$ is the Euclidean projection of $U/\temp$ onto $P$. Efficient projection algorithms exist for some convex sets \citep[see][and references therein]{wolfe1976finding, duchi2008efficient, liu2009efficient, blondel2019structured}, and more generic algorithms exist that only call linear solvers as subroutines \citep{niculae2018sparsemap}. In some of the settings we consider, generic negative-entropy-based relaxations are also applicable. We refer to relaxations with $f(x) = \sum\nolimits_{i=1}^n x_i \log(x_i)$ as \emph{categorical entropy relaxations} \citep[e.g.,][]{blondel2019structured, blondel2020learning}. We refer to relaxations with $f(x) = \sum\nolimits_{i=1}^n x_i \log (x_i) + (1-x_i) \log(1-x_i)$ as \emph{binary entropy relaxations} \cite[e.g.,][]{amos2019limited}. Marginal inference in exponential families is a rich source of SST{} relaxations. Consider an exponential family over the finite set $\discreteset$ with natural parameters $u/\temp \in \R^n$ such that the probability of $x \in \discreteset$ is proportional to $\exp(u^Tx/\temp)$. The \emph{marginals} $\mu_{\temp} : \R^n \to \hull(\discreteset)$ of this family are solutions of a convex program in exactly the form \eqref{eq:rsmt} \citep{wainwright2008graphical}, i.e., there exists $A^* : \hull(\discreteset) \to \R \cup \{\infty\}$ such that, \begin{equation} \label{eq:expfamilymarg} \mu_{\temp}(u) := \sum\nolimits_{x \in \discreteset} \frac{x\exp(u^Tx/\temp)}{\sum_{y \in \discreteset} \exp(u^T y/\temp)} = \arg\max_{x \in P} u^Tx - \temp A^*(x). \end{equation} The definition of $A^*$, which generates $\mu_{\temp}$ in \eqref{eq:expfamilymarg}, can be found in \citep[][Thm. 3.4]{wainwright2008graphical}. $A^*$ is a kind of negative entropy, and in our case it satisfies the assumptions in Def. \ref{def:rsmt}. Computing $\mu_{\temp}$ amounts to marginal inference in the exponential family, and efficient algorithms are known in many cases \citep[see][]{wainwright2008graphical, koller2009probabilistic}, including those we consider. We call $X_{\temp} = \mu_{\temp}(U)$ the \emph{exponential family entropy relaxation}. Taken together, Prop. \ref{prop:approximation} and \ref{prop:relaxation} suggest our proposed use for SST{s}: optimize $\expect[\loss(X_{\temp})]$ at a positive temperature, where unbiased gradient estimation is available, but evaluate $\expect[\loss(X)]$. We find that this works well in practice if the temperature used during optimization is treated as a hyperparameter and selected over a validation set. It is worth emphasizing that the choice of relaxation is unrelated to the distribution $p_{\theta}$ of $X$ in the corresponding SMT{}. $f$ is not a modeling choice; it is a computational choice that will affect the cost of computing \eqref{eq:rsmt} and the quality of the gradient estimator. \section{Examples of Stochastic Softmax Tricks} \label{sec:examples} \begin{figure}[t] \centering \includegraphics[scale=0.66666]{figures/spanningtree-corrected.pdf} \caption{An example realization of a spanning tree SST{} for an undirected graph. Middle: Random undirected edge utilities. Left: The random soft spanning tree $X_{\temp}$, represented as a weighted adjacency matrix, can be computed via Kirchhoff's Matrix-Tree theorem. Right: The random spanning tree $X$, represented as an adjacency matrix, can be computed with Kruskal's algorithm.} \label{fig:spanningtree} \vspace{-0.5\baselineskip} \end{figure} The Gumbel-Softmax \citep{maddison2016concrete, jang2016categorical} introduced neither the Gumbel-Max trick nor the softmax. The novelty of this work is neither the perturbation model framework nor the relaxation framework in isolation, but their combined use for gradient estimation. Here we lay out some example SST{s}, organized by the set $\abstractset$ with a choice of embeddings $\discreteset$. Bold italics indicates previously described relaxations, most of which are bespoke and not describable in our framework. Italics indicates our novel SST{s} used in our experiments; some of these are also novel perturbation models. A complete discussion is in App. \ref{supp:sec:fieldguide}. \textbf{Subset selection.} $\discreteset$ is the set of binary vectors indicating membership in the subsets of a finite set $S$. \emph{Indep. $S$} uses $U \sim \Logistic(\theta)$ and a binary entropy relaxation. $X$ and $X_{\temp}$ are computed with a dimension-wise step function or sigmoid, respectively. \textbf{$\mathbf{k}$-Subset selection.} $\discreteset$ is the set of $k$-hot binary vectors indicating membership in a $k$-subset of a finite set $S$. All of the following SMT{s} use $U \sim \Gumbel(\theta)$. Our SST{s} use the following relaxations: Euclidean \citep{amos2017optnet}, and categorical \citep{martins2017learning}, binary \citep{amos2019limited}, and exponential family \citep{swersky2012cardinality} entropies. $X$ is computed by sorting $U$ and setting the top $k$ elements to 1 \citep{blondel2019structured}. \emph{$R$ Top $k$} refers to our SST{} with relaxation $R$. \emph{\textbf{L2X}} \citep{chen2018learning} and \emph{\textbf{SoftSub}} \citep{xie2019reparameterizable} are bespoke relaxations.
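As a sketch of the $k$-subset case, continuing the NumPy sketch above, the code below draws the SMT{} sample by sorting and computes the Euclidean relaxation by projecting onto $\{x \in [0,1]^n : \sum_i x_i = k\}$ via bisection on the dual variable; the names and iteration count are ours, and released implementations may differ.
\begin{verbatim}
def topk_smt(theta, k, rng):
    u = rng.gumbel(loc=theta)          # Gumbel utilities, one per element of S
    x = np.zeros_like(u)
    x[np.argsort(u)[-k:]] = 1.0        # X: set the top-k utilities to 1
    return u, x

def euclid_topk_relaxation(u, temp, k, iters=60):
    # X_t for f(x) = ||x||^2 / 2: project u/temp onto the k-hot polytope.
    # The projection is clip(v - lam, 0, 1) for the lam giving sum = k.
    v = u / temp
    lo, hi = v.min() - 1.0, v.max()
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        if np.clip(v - lam, 0.0, 1.0).sum() > k:
            lo = lam                   # total mass too large: shift down more
        else:
            hi = lam
    return np.clip(v - (lo + hi) / 2.0, 0.0, 1.0)
\end{verbatim}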
\textbf{Correlated $\mathbf{k}$-subset selection.} $\discreteset$ is the set of $(2n-1)$-dimensional binary vectors with a $k$-hot cardinality constraint on the first $n$ dimensions and a constraint that the last $n-1$ dimensions indicate correlations between adjacent dimensions in the first $n$, i.e., the vertices of the correlation polytope of a chain \citep[][Ex. 3.8]{wainwright2008graphical} with an added cardinality constraint \citep{mezuman2013tighter}. \emph{Corr. Top $k$} uses $U_{1:n} \sim \Gumbel(\theta_{1:n})$, $U_{n+1:2n-1} = \theta_{n+1:2n-1}$, and the exponential family entropy relaxation. $X$ and $X_{\temp}$ can be computed with dynamic programs \citep{tarlow2012fast}, see App. \ref{supp:sec:fieldguide}. \textbf{Perfect Bipartite Matchings.} $\discreteset$ is the set of $n \times n$ permutation matrices representing the perfect matchings of the complete bipartite graph $K_{n,n}$. The \emph{\textbf{Gumbel-Sinkhorn}} \citep{mena2018learning} uses $U \sim \Gumbel(\theta)$ and a Shannon entropy relaxation. $X$ can be computed with the Hungarian method \citep{kuhn1955hungarian} and $X_{\temp}$ with the Sinkhorn algorithm \citep{sinkhorn1967concerning}. \emph{\textbf{Stochastic NeuralSort}} \citep{grover2018stochastic} uses correlated Gumbel-based utilities that induce a Plackett-Luce model and a bespoke relaxation. \textbf{Undirected spanning trees.} Given a graph $(V, E)$, $\discreteset$ is the set of binary indicator vectors of the edge sets $T \subseteq E$ of undirected spanning trees. \emph{Spanning Tree} uses $U \sim \Gumbel(\theta)$ and the exponential family entropy relaxation. $X$ can be computed with Kruskal's algorithm \citep{kruskal1956shortest}, $X_{\temp}$ with Kirchhoff's matrix-tree theorem \citep[][Sec. 3.3]{koo2007matrixtree}, and both are represented as adjacency matrices, Fig. \ref{fig:spanningtree}.
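For intuition, here is a sketch pairing the two computations in Fig. \ref{fig:spanningtree} for a complete graph, continuing the NumPy sketches above. It uses SciPy's spanning tree solver for $X$ and evaluates the matrix-tree marginals for $X_{\temp}$ through the classical identity that an edge's marginal equals its weight times its effective resistance; the function names are illustrative.
\begin{verbatim}
from scipy.sparse.csgraph import minimum_spanning_tree

def spanning_tree_smt(theta, rng):
    # theta: (n, n) edge utility parameters; the upper triangle defines the graph.
    u = np.triu(rng.gumbel(loc=theta), k=1)
    # Flip the order with positive weights so that the minimum spanning tree
    # of w is the maximum spanning tree of u (a Kruskal-style solve).
    w = np.triu(u.max() + 1.0 - u, k=1)
    t = minimum_spanning_tree(w).toarray() != 0
    return u + u.T, (t | t.T).astype(float)   # X as a symmetric adjacency matrix

def spanning_tree_relaxation(u, temp):
    # X_t via Kirchhoff's matrix-tree theorem: with w = exp(u / temp), the
    # edge marginals are w_ij times the effective resistance between i and j.
    w = np.exp((u - u.max()) / temp)          # constant shift leaves X_t unchanged
    w = np.triu(w, k=1); w = w + w.T
    lap = np.diag(w.sum(axis=1)) - w          # weighted graph Laplacian
    linv = np.linalg.pinv(lap)
    d = np.diag(linv)
    return w * (d[:, None] + d[None, :] - 2.0 * linv)
\end{verbatim}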
\textbf{Rooted directed spanning trees.} Given a graph $(V, E)$, $\discreteset$ is the set of binary indicator vectors of the edge sets $T \subseteq E$ of $r$-rooted, directed spanning trees. \emph{Arborescence} uses $U \sim \Gumbel(\theta)$ or $-U \sim \exponential(\theta)$ or $U \sim \Normal(\theta, I)$ and an exponential family entropy relaxation. $X$ can be computed with the Chu-Liu-Edmonds algorithm \citep{chu1965shortest, edmonds1967optimum}, $X_{\temp}$ with a directed version of Kirchhoff's matrix-tree theorem \citep[][Sec. 3.3]{koo2007matrixtree}, and both are represented as adjacency matrices. \emph{\textbf{Perturb \& Parse}} \citep{corro2018differentiable} further restricts $\discreteset$ to be projective trees, uses $U \sim \Gumbel(\theta)$, and uses a bespoke relaxation. \section{Related Work} Here we review perturbation models (PMs) and methods for relaxation more generally. SMT{s} are a subclass of PMs, which draw samples by optimizing a random objective. Perhaps the earliest example comes from Thurstonian ranking models \cite{thurstone1927law}, where a distribution over rankings is formed by sorting a vector of noisy scores. Perturb \& MAP models \cite{papandreou2011perturb,hazan2012partition} were designed to approximate the Gibbs distribution over a combinatorial output space using low-order, additive Gumbel noise. Randomized Optimum models \cite{tarlow2012randoms,gane2014learning} are the most general class, which includes non-additive noise distributions and non-linear objectives. Recent work \citep{lorberbom2019direct} uses PMs to construct finite difference approximations of the gradient of the expected loss. It requires optimizing a non-linear objective over $\discreteset$, and making this applicable to our settings would require significant innovation. Using SST{s} for gradient estimation requires differentiating through a convex program. This idea is not ours, and it is currently enjoying renewed interest \citep{cvxpylayers2019, agrawal2019differentiating, amos2019differentiable}. In addition, specialized solutions have been proposed for quadratic programs \cite{amos2017optnet, martins2016softmax, blondel2020fast} and linear programs with entropic regularizers over various domains \cite{martins2017learning, amos2019limited, adams2011ranking, mena2018learning, blondel2020fast}. In graphical modeling, several works have explored differentiating through marginal inference \cite{domke2010impdiff,ross-cvpr-11,poon2011sum,domke2013learning,swersky2012cardinality,djolonga2017differentiable}, and our exponential family entropy relaxation builds on this work. The most superficially similar work is \citep{2020arXiv200208676B}, which uses noisy utilities to smooth the solutions of linear programs. In \citep{2020arXiv200208676B}, the noise is a tool for approximately relaxing a deterministic linear program. Our framework uses relaxations to approximate \emph{stochastic} linear programs. \section{Experiments} \label{sec:experiments} Our goal in these experiments was to evaluate the use of SST{s} for learning distributions over structured latent spaces in deep structured models. We chose frameworks (NRI \citep{kipf2018neural}, L2X \citep{chen2018learning}, and a latent parse tree task) in which relaxed gradient estimators are the methods of choice, and investigated the effects of $\discreteset$, $f$, and $U$ on the task objective and on unsupervised structure discovery. For NRI, we also implemented the standard single-loss-evaluation score function estimators (REINFORCE \citep{williams1992simple} and NVIL \citep{mnih2014neural}), and the best SST{} outperformed these baselines in terms of both average performance and variance; see App. \ref{supp:sec:addresults}. All SST{} models were trained with the ``soft'' SST{} and evaluated with the ``hard'' SMT{}. We optimized hyperparameters (including the fixed training temperature $\temp$) using random search over multiple independent runs. We selected models on a validation set according to the best objective value obtained during training. All reported values are measured on a test set. Error bars are bootstrap standard errors over the model selection process. We refer to SST{s} defined in Section~\ref{sec:examples} with italics. Details are in App. \ref{supp:sec:exp_details}. Code is available at \url{https://github.com/choidami/sst}. \subsection{Neural Relational Inference (NRI) for Graph Layout} \begin{table}[t] \refstepcounter{figure} \label{fig:graph_layout} \captionsetup{labelformat=andfigure} \caption{\emph{Spanning Tree} performs best on structure recovery, despite being trained on the ELBO. Test ELBO and structure recovery metrics are shown from models selected on valid. ELBO. Below: Test set example where \emph{Spanning Tree} recovers the ground truth latent graph perfectly. \vspace{5pt}} \label{table:graph_layout} \begin{subfigure}{\textwidth} \centering \begin{small} \adjustbox{max width=\textwidth}{ \begin{tabular}{@{}lcccccc@{}} \toprule & \multicolumn{3}{c}{$T=10$} & \multicolumn{3}{c}{$T=20$} \\ \cmidrule(l){2-4} \cmidrule(l){5-7} Edge Distribution & ELBO & Edge Prec. & Edge Rec. & ELBO & Edge Prec. & Edge Rec. \\ \midrule
\emph{Indep. Directed Edges} \citep{kipf2018neural} & $-1370 \pm 20$ & $48 \pm 2$ & $\mathbf{93 \pm 1}$ & $-1340 \pm 160$ & $97 \pm 3$ & $\mathbf{99 \pm 1}$ \\ \emph{E.F. Ent. Top $|V|-1$} & $-2100 \pm 20$ & $41 \pm 1$ & $41 \pm 1$ & $-1700 \pm 320$ & $98\pm6$ & $98\pm6$ \\ \emph{Spanning Tree} & $\mathbf{-1080\pm110}$ & $\mathbf{91\pm3}$ & $91\pm3$ & $\mathbf{-1280\pm10}$ & $\mathbf{99\pm1}$ & $\mathbf{99\pm1}$ \\ \bottomrule \end{tabular}} \end{small} \end{subfigure} \begin{subfigure}{\textwidth} \centering \begin{subfigure}[b]{0.246\textwidth} \includegraphics{figures/layout_gt_test.pdf} \caption*{Ground Truth} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/layout_ie_test.pdf} \caption*{\emph{Indep. Directed Edges}} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/layout_tk_test.pdf} \caption*{\emph{E.F. Ent. Top $|V|-1$}} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/layout_st_test.pdf} \caption*{\emph{Spanning Tree}} \end{subfigure} \end{subfigure} \vspace{-\baselineskip} \end{table} With NRI we investigated the use of SST{s} for latent structure recovery and final performance. NRI is a graph neural network (GNN) model that samples a latent interaction graph $G = (V, E)$ and runs messages over the adjacency matrix to produce a distribution over an interacting particle system. NRI is trained as a variational autoencoder to maximize a lower bound (ELBO) on the marginal log-likelihood of the time series. We experimented with three SST{s} for the encoder distribution: \emph{Indep. Directed Edges} (independent binary edges), which is the baseline NRI encoder \citep{kipf2018neural}, \emph{E.F. Ent. Top $|V|-1$} over undirected edges, and \emph{Spanning Tree} over undirected edges. We computed the KL with respect to the random utility $U$ for all SST{s}; see App. \ref{supp:sec:exp_details} for details. Our dataset consisted of latent spanning trees over 10 vertices sampled from a $\Gumbel(0)$ prior. Given a tree, we embedded the vertices in $\R^2$ by applying $T \in \{10, 20\}$ iterations of a force-directed algorithm \citep{fruchterman1991graph}. The model saw particle locations at each iteration, not the underlying spanning tree. We found that \emph{Spanning Tree} performed best, improving on both the ELBO and the recovery of latent structure over the baseline \citep{kipf2018neural}. For structure recovery, we measured edge precision and recall against the ground truth adjacency matrix. \emph{Spanning Tree} recovered the edge structure well even when given only a short time series ($T=10$, Fig. \ref{fig:graph_layout}). Less structured baselines were only competitive on longer time series. \subsection{Unsupervised Parsing on ListOps} We investigated the effect of $\discreteset{}$'s structure and of the utility distribution in a latent parse tree task. We used a simplified variant of the ListOps dataset \cite{nangia2018listops}, which contains sequences of prefix arithmetic expressions, e.g., \texttt{max[ 3 min[ 8 2 ]]}, that evaluate to an integer in $[0, 9]$. The arithmetic syntax induces a directed spanning tree rooted at the first token with directed edges from operators to operands. We modified the data by removing the \texttt{summod} operator, capping the maximum depth of the ground truth dependency parse, and capping the maximum length of a sequence. This simplifies the task considerably, but it makes the problem accessible to GNN models of fixed depth.
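To illustrate the structure being recovered (an illustrative construction, not our data pipeline), the ground truth parse of a prefix expression can be read off with a stack:
\begin{verbatim}
def ground_truth_edges(tokens):
    # Each operator or operand receives an edge from the operator whose
    # scope it falls in; e.g. ["max[", "3", "min[", "8", "2", "]", "]"]
    # yields the arborescence {(0, 1), (0, 2), (2, 3), (2, 4)}.
    edges, stack = [], []
    for i, tok in enumerate(tokens):
        if tok == "]":
            stack.pop()                   # close the current operator's scope
            continue
        if stack:
            edges.append((stack[-1], i))  # edge: operator -> operand
        if tok.endswith("["):
            stack.append(i)               # operators open a new scope
    return edges
\end{verbatim}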
Our models used a bi-LSTM encoder to produce a distribution over edges (directed or undirected) between all pairs of tokens, which induced a latent (di)graph. Predictions were made from the final embedding of the first token after passing messages in a GNN architecture over the latent graph. For undirected graphs, messages were passed in both directions. We experimented with the following SST{s} for the edge distribution: \emph{Indep. Undirected Edges}, \emph{Spanning Tree}, \emph{Indep. Directed Edges}, and \emph{Arborescence} (with three separate utility distributions). \emph{Arborescence} was rooted at the first token. For baselines we used an unstructured LSTM and the GNN over the ground truth parse. All models were trained with cross-entropy to predict the integer evaluation of the sequence. The best performing models were structured models whose structure better matched the true latent structure (Table \ref{table:listops_perform}). For each model, we measured the accuracy of its prediction (task accuracy). We measured both precision and recall with respect to the ground truth parse's adjacency matrix.\footnote{We exclude edges to and from the closing symbol ``$]$''. Its edge assignments cannot be learnt from the task objective, because the correct evaluation of an operation does not depend on the closing symbol.} Both tree-structured SST{s} outperformed their independent edge counterparts on all metrics. Overall, \emph{Arborescence} achieved the best performance in terms of task accuracy and structure recovery. We found that the utility distribution significantly affected performance (Table \ref{table:listops_perform}). For example, while negative exponential utilities induce an interpretable distribution over arborescences (App. \ref{supp:sec:fieldguide}), we found that the multiplicative parameterization of exponentials made it difficult to train competitive models. Although the LSTM baseline performed well on task accuracy, \emph{Arborescence} additionally learned to recover much of the latent parse tree.
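The three utility choices for \emph{Arborescence} can be sketched as follows; treating $\theta$ as the rate of the exponential (its multiplicative parameterization) is our reading, and the helper is illustrative:
\begin{verbatim}
def arborescence_utilities(theta, kind, rng):
    if kind == "gumbel":                    # U ~ Gumbel(theta)
        return rng.gumbel(loc=theta)
    if kind == "gaussian":                  # U ~ Normal(theta, I)
        return theta + rng.standard_normal(theta.shape)
    if kind == "neg_exp":                   # -U ~ Exponential(theta), theta > 0
        return -rng.exponential(scale=1.0 / theta)
    raise ValueError(kind)
\end{verbatim}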
\begin{table} \caption{Matching ground truth structure (non-tree $\to$ tree) improves performance on ListOps. The utility distribution impacts performance. Test task accuracy and structure recovery metrics are shown from models selected on valid. task accuracy. Note that because we exclude edges to and from the closing symbol ``$]$'', recall is not equal to twice the precision for \emph{Spanning Tree}, and precision is not equal to recall for \emph{Arborescence}.} \label{table:listops_perform} \begin{center} \begin{small} \begin{tabular}{@{}llccc@{}} \toprule Model & Edge Distribution & Task Acc. & Edge Precision & Edge Recall \\ \midrule LSTM & --- & $92.1 \pm 0.2$ & --- & --- \\ \cmidrule[0.15pt]{1-5} \multirow{2}{*}{\shortstack[l]{GNN on\\latent graph}} & \emph{Indep. Undirected Edges} & $89.4\pm0.6$ & $20.1\pm2.1$ & $45.4\pm6.5$ \\ & \emph{Spanning Tree} & $91.2\pm 1.8$ & $33.1\pm 2.9$ & $47.9\pm 5.2$ \\ \cmidrule[0.15pt]{1-5} \multirow{6}{*}{\shortstack[l]{GNN on\\latent digraph}} &\emph{Indep. Directed Edges} & $90.1\pm0.5$ & $13.0\pm2.0$ & $56.4\pm6.7$ \\ &\emph{Arborescence} & & & \\ &\hspace{2mm} - Neg. Exp. & $71.5 \pm 1.4$ & $23.2 \pm 10.2$ & $20.0 \pm 6.0$ \\ &\hspace{2mm} - Gaussian & $\mathbf{95.0 \pm 2.2}$ & $65.3 \pm 3.7$ & $60.8 \pm 7.3$ \\ &\hspace{2mm} - Gumbel & $\mathbf{95.0 \pm 3.0}$ & $\mathbf{75.5 \pm 7.0}$ & $\mathbf{71.9 \pm 12.4}$ \\ \cmidrule[0.15pt]{2-5} & Ground Truth Edges & $98.1 \pm 0.1$ & 100 & 100 \\ \bottomrule \end{tabular} \end{small} \end{center} \vspace{-\baselineskip} \end{table} \subsection{Learning To Explain (L2X) Aspect Ratings} With L2X we investigated the effect of the choice of relaxation. We used the BeerAdvocate dataset \citep{mcauley2012learning}, which contains reviews comprising free-text feedback and ratings for multiple aspects (appearance, aroma, palate, and taste; Fig. \ref{fig:l2x:review}). Each sentence in the test set is annotated with the aspects that it describes, allowing us to define structure recovery metrics. We considered the L2X task of learning a distribution over $k$-subsets of words that best explain a given aspect rating.\footnote{While originally proposed for model interpretability, we used the original aspect ratings. This allowed us to use the sentence-level annotations for each aspect to facilitate comparisons between subset distributions.} Our model used word embeddings from \citep{lei2016rationalizing} and convolutional neural networks with one (simple) and three (complex) layers to produce a distribution over $k$-hot binary latent masks. Given the latent masks, our model used a convolutional net to make predictions from masked embeddings. We used $k \in \{5, 10, 15\}$ and the following SST{s} for the subset distribution: \{\emph{{Euclid., Cat. Ent., Bin. Ent., E.F. Ent.}}\} \emph{Top $k$} and \emph{Corr. Top $k$}. For baselines, we used bespoke relaxations designed for this task: \emph{{L2X}} \citep{chen2018learning} and \emph{{SoftSub}} \citep{xie2019reparameterizable}. We trained separate models for each aspect using mean squared error (MSE). We found that SST{s} improve over bespoke relaxations (Table \ref{table:l2x_beer_aroma} for the aroma aspect; others in App. \ref{supp:sec:addresults}). For unsupervised discovery, we used the sentence-level annotations for each aspect to define ground truth subsets against which the precision of the $k$-subsets was measured. SST{s} tended to select subsets with higher precision across different architectures and cardinalities, and achieved modest improvements in MSE. We did not find significant differences arising from the choice of regularizer $f$. Overall, the most structured SST{}, \emph{Corr. Top $k$}, achieved the lowest MSE, the highest precision, and improved interpretability: the correlations in the model allowed it to select contiguous words, while subsets from less structured distributions were scattered (Fig. \ref{fig:l2x:review}). \newcommand{\corrtopk}[1]{\hlmyred{#1}} \newcommand{\topk}[1]{\hlmyblue{#1}} \newcommand{\both}[1]{\hlmypurple{#1}} \begin{table}[t] \centering \refstepcounter{figure} \label{fig:l2x:review} \captionsetup{labelformat=andfigure} \caption{For $k$-subset selection on the aroma aspect, SST{s} tend to outperform baseline relaxations. Test set MSE ($\times 10^{-2}$) and subset precision (\%) are shown for models selected on valid. MSE. Bottom: \emph{Corr.
Top $k$} (red) selects contiguous words while \emph{Top $k$} (blue) picks scattered words.} \label{table:l2x_beer_aroma} \begin{small} \adjustbox{max width=\textwidth}{ \begin{tabular}{@{}llcccccc@{}} \toprule & & \multicolumn{2}{c}{$k=5$} & \multicolumn{2}{c}{$k=10$} & \multicolumn{2}{c}{$k=15$} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} Model & Relaxation & MSE & Subs. Prec. & MSE & Subs. Prec. & MSE & Subs. Prec. \\ \midrule \multirow{7}{*}{\shortstack[l]{Simple}} & \emph{L2X} \citep{chen2018learning} & $3.6 \pm 0.1$ & $28.3 \pm 1.7$ & $3.0 \pm 0.1$ & $25.5 \pm 1.2$ & $2.6 \pm 0.1$ & $25.5 \pm 0.4$ \\ & \emph{SoftSub} \citep{xie2019reparameterizable} & $3.6 \pm 0.1$ & $27.2 \pm 0.7$ & $3.0 \pm 0.1$ & $26.1 \pm 1.1$ & $2.6 \pm 0.1$ & $25.1 \pm 1.0$ \\ \cmidrule[0.15pt]{2-8} & \emph{Euclid. Top $k$} & $3.5 \pm 0.1$ & $25.8 \pm 0.8$ & $2.8 \pm 0.1$ & $32.9 \pm 1.2$ & $2.5 \pm 0.1$ & $29.0 \pm 0.3$ \\ & \emph{Cat. Ent. Top $k$} & $3.5 \pm 0.1$ & $26.4 \pm 2.0$ & $2.9 \pm 0.1$ & $32.1 \pm 0.4$ & $2.6 \pm 0.1$ & $28.7 \pm 0.5$ \\ & \emph{Bin. Ent. Top $k$} & $3.5 \pm 0.1$ & $29.2 \pm 2.0$ & $2.7 \pm 0.1$ & $33.6 \pm 0.6$ & $2.6 \pm 0.1$ & $28.8 \pm 0.4$ \\ & \emph{E.F. Ent. Top $k$} & $3.5 \pm 0.1$ & $28.8 \pm 1.7$ & $2.7 \pm 0.1$ & $32.8 \pm 0.5$ & $2.5 \pm 0.1$ & $29.2 \pm 0.8$ \\ \cmidrule[0.15pt]{2-8} & \emph{Corr. Top $k$} & $\mathbf{2.9 \pm 0.1}$ & $\mathbf{63.1 \pm 5.3}$ & $\mathbf{2.5 \pm 0.1}$ & $\mathbf{53.1 \pm 0.9}$ & $\mathbf{2.4 \pm 0.1}$ & $\mathbf{45.5 \pm 2.7}$ \\ \cmidrule[0.15pt]{1-8} \multirow{7}{*}{\shortstack[l]{Complex}} & \emph{L2X} \citep{chen2018learning} & $2.7 \pm 0.1$ & $50.5 \pm 1.0$ & $2.6 \pm 0.1$ & $44.1 \pm 1.7$ & $2.4 \pm 0.1$ & $44.4 \pm 0.9$ \\ & \emph{SoftSub} \citep{xie2019reparameterizable} & $2.7 \pm 0.1$ & $57.1 \pm 3.6$ & $\mathbf{2.3 \pm 0.1}$ & $50.2 \pm 3.3$ & $2.3 \pm 0.1$ & $43.0 \pm 1.1$ \\ \cmidrule[0.15pt]{2-8} & \emph{Euclid. Top $k$} & $2.7 \pm 0.1$ & $61.3 \pm 1.2$ & $2.4 \pm 0.1$ & $52.8 \pm 1.1$ & $2.3 \pm 0.1$ & $44.1 \pm 1.2$ \\ & \emph{Cat. Ent. Top $k$} & $2.7 \pm 0.1$ & $61.9 \pm 1.2$ & $\mathbf{2.3 \pm 0.1}$ & $52.8 \pm 1.0$ & $2.3 \pm 0.1$ & $44.5 \pm 1.0$ \\ & \emph{Bin. Ent. Top $k$} & $2.6 \pm 0.1$ & $62.1 \pm 0.7$ & $\mathbf{2.3 \pm 0.1}$ & $50.7 \pm 0.9$ & $2.3 \pm 0.1$ & $44.8 \pm 0.8$ \\ & \emph{E.F. Ent. Top $k$} & $2.6 \pm 0.1$ & $59.5 \pm 0.9$ & $\mathbf{2.3 \pm 0.1}$ & $54.6 \pm 0.6$ & $2.2 \pm 0.1$ & $44.9 \pm 0.9$ \\ \cmidrule[0.15pt]{2-8} & \emph{Corr. Top $k$} & $\mathbf{2.5 \pm 0.1}$ & $\mathbf{67.9 \pm 0.6}$ & $\mathbf{2.3 \pm 0.1}$ & $\mathbf{60.2 \pm 1.3}$ & $\mathbf{2.1 \pm 0.1}$ & $\mathbf{57.7 \pm 3.8}$ \\ \bottomrule \end{tabular}} \end{small} \vspace{2pt} \setlength{\fboxrule}{\heavyrulewidth} \begingroup\fboxsep=0.0025\textwidth \fbox{\parbox{0.995\textwidth}{ \small{ Pours a \topk{\strut slight tangerine} orange and \topk{\strut straw} yellow. The head is \topk{\strut nice} and bubbly but fades very quickly with a little lacing. \both{\strut Smells} \corrtopk{\strut{} like Wheat and European hops}, a little yeast in there too. There is some \topk{\strut fruit} in there too, but you have to take a good \topk{\strut whiff} to get it. The taste is of wheat, a bit of malt, and \corrtopk{\strut a little } \both{\strut fruit} \corrtopk{ \strut flavour} in there too. Almost feels like drinking \topk{\strut Champagne}, medium mouthful otherwise. Easy to drink, but \topk{\strut not} something I'd be trying every night. 
\begin{center} \begin{tabular}{ccccc} Appearance: 3.5 & \textbf{Aroma: 4.0} & Palate: 4.5 & Taste: 4.0 & Overall: 4.0 \end{tabular} \end{center} }}} \endgroup \vspace{-\baselineskip} \end{table} \section{Conclusion} We introduced stochastic softmax tricks, which are random convex programs that capture a large class of relaxed distributions over structured, combinatorial spaces. We designed stochastic softmax tricks for subset selection and a variety of spanning tree distributions. We tested their use in deep latent variable models, and found that they can be used to improve performance and to encourage the unsupervised discovery of true latent structure. There are future directions in this line of work. The relaxation framework can be generalized by modifying the constraint set or the utility distribution at positive temperatures. Some combinatorial objects might benefit from a more careful design of the utility distribution, while others, e.g., matchings, are still waiting to have their tricks designed. \section*{Broader Impact} This work introduces methods and theory that have the potential for improving the interpretability of latent variable models. While unfavorable consequences cannot be excluded, increased interpretability is generally considered a desirable property of machine learning models. Given that this is foundational, methodologically-driven research, we refrain from speculating further. \section*{Acknowledgements and Disclosure of Funding} We thank Daniel Johnson and Francisco Ruiz for their time and insightful feedback. We also thank Tamir Hazan, Yoon Kim, Andriy Mnih, and Rich Zemel for their valuable comments. MBP gratefully acknowledges support from the Max Planck ETH Center for Learning Systems. CJM is grateful for the support of the James D. Wolfensohn Fund at the Institute of Advanced Studies in Princeton, NJ. Resources used in preparing this research were provided, in part, by the Sustainable Chemical Processes through Catalysis (Suchcat) National Center of Competence in Research (NCCR), the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
This is used to optimize the ``soft'' approximation of the loss as a surrogate for the ``hard'' discrete objective. Adding structured latent variables to deep learning models is a promising direction for addressing a number of challenges:~improving interpretability (e.g., via latent variables for subset selection \citep{chen2018learning} or parse trees \cite{corro2018differentiable}), incorporating problem-specific constraints (e.g., via enforcing alignments \cite{mena2018learning}), and improving generalization (e.g., by modeling known algorithmic structure \cite{graves2014neural}). Unfortunately, the vanilla Gumbel-Softmax cannot scale to distributions over large state spaces, and the development of structured relaxations has been piecemeal. We introduce \emph{stochastic softmax tricks} (SST s), which are a unified framework for designing structured relaxations of combinatorial distributions. They include relaxations for the above applications, as well as many novel ones. To use an SST{,} a modeler chooses from a class of models that we call \emph{stochastic argmax tricks} (SMT{}). These are instances of perturbation models \citep[e.g.,][]{papandreou2011perturb, hazan2012partition, tarlow2012randoms, gane2014learning}, and they induce a distribution over a finite set $\discreteset$ by optimizing a linear objective (defined by random utility $U \in \R^n$) over $\discreteset$. An SST{} relaxes this SMT{} by combining a strongly convex regularizer with the random linear objective. The regularizer makes the solution a continuous, a.e. differentiable function of $U$ and appropriate for estimating gradients with respect to $U$'s parameters. The Gumbel-Softmax is a special case. Fig. \ref{fig:intro} provides a summary. We test our relaxations in the Neural Relational Inference (NRI) \citep{kipf2018neural} and L2X \cite{chen2018learning} frameworks. Both NRI and L2X use variational losses over latent combinatorial distributions. When the latent structure in the model matches the true latent structure, we find that our relaxations encourage the unsupervised discovery of this combinatorial structure. This leads to models that are more interpretable and achieve stronger performance than less structured baselines. All proofs are in the Appendix. \begin{figure}[t] \centering \begin{subfigure}[b]{0.246\textwidth} \includegraphics{figures/intro_polytope.pdf} \caption*{Finite set} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/intro_random_cost.pdf} \caption*{Random utility} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/intro_smt.pdf} \caption*{Stoch. Argmax Trick} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/intro_rsmt.pdf} \caption*{Stoch. Softmax Trick} \end{subfigure} \caption{Stochastic softmax tricks relax discrete distributions that can be reparameterized as random linear programs. $X$ is the solution of a random linear program defined by a finite set $\discreteset$ and a random utility $U$ with parameters $\theta \in \R^m$. To design relaxed gradient estimators with respect to $\theta$, $X_{\temp}$ is the solution of a random convex program that continuously approximates $X$ from within the convex hull of $\discreteset$. 
The Gumbel-Softmax \citep{maddison2016concrete, jang2016categorical} is an example of a stochastic softmax trick.}\label{fig:intro} \vspace{-0.5\baselineskip} \end{figure} \section{Problem Statement} \label{sec:problemstmt} Let $\abstractset$ be a non-empty, finite set of combinatorial objects, e.g. the spanning trees of a graph. To represent $\abstractset$, define the embeddings $\discreteset \subseteq \R^n$ of $\abstractset$ to be the image $\{ \embed(y) \mid y \in \abstractset\}$ of some embedding function $\embed : \abstractset \to \R^n$.\footnote{This is equivalent to the notion of sufficient statistics \cite{wainwright2008graphical}. We draw a distinction only to avoid confusion, because the distributions $p_{\theta}$ that we ultimately consider are not necessarily from the exponential family.} For example, if $\abstractset$ is the set of spanning trees of a graph with edges $E$, then we could enumerate $y_1, \ldots, y_{|\abstractset|}$ in $\abstractset$ and let $\embed(y)$ be the one-hot binary vector of length $|\abstractset|$, with $\embed(y)_i = 1$ iff $y = y_i$. This requires a very large ambient dimension $n = |\abstractset|$. Alternatively, in this case we could use a more efficient, structured representation: $\embed(y)$ could be a binary indicator vector of length $|E| \ll |\abstractset|$, with $\embed(y)_e = 1$ iff edge $e$ is in the tree $y$. See Fig. \ref{fig:embeddings} for visualizations and additional examples of structured binary representations. We assume that $\discreteset$ is convex independent.\footnote{Convex independence is the analog of linear independence for convex combinations.} Given a probability mass function $p_{\theta} : \discreteset \to (0, 1]$ that is differentiable in $\theta \in \R^m$, a loss function $\loss : \R^n \to \R$, and $X \sim p_{\theta}$, our ultimate goal is gradient-based optimization of $\expect[\loss(X)]$. Thus, we are concerned in this paper with the problem of estimating the derivatives of the expected loss, \begin{equation} \label{eq:problem} \frac{d}{d \theta}\expect[\loss(X)] = \frac{d}{d \theta} \left(\sum\nolimits_{x \in \discreteset} \loss(x) p_{\theta}(x)\right). \end{equation} \section{Background on Gradient Estimation} \label{sec:background} Relaxed gradient estimators assume that $\loss$ is differentiable and use a change of variables to remove the dependence of $p_{\theta}$ on $\theta$, known as the reparameterization trick \citep{kingma2014auto, rezende2014stochastic}. The Gumbel-Softmax trick (GST) \citep{maddison2016concrete, jang2016categorical} is a simple relaxed gradient estimator for one-hot embeddings, which is based on the Gumbel-Max trick (GMT) \citep{luce1959individual, maddison2014astarsamp}. Let $\discreteset$ be the one-hot embeddings of $\abstractset$ and $p_{\theta}(x) \propto \exp(x^T\theta)$. The GMT is the following identity: for $X\sim p_{\theta}$ and $G_i + \theta_i \sim \Gumbel(\theta_i)$ indep., \begin{align} \label{eq:gumbelmaxtrick} X \overset{d}{=} \arg \max\nolimits_{x \in \discreteset} \, (G+\theta)^T x. \end{align} Ideally, one would have a reparameterization estimator, $\expect[d \loss(X)/d \theta] = d \expect[\loss(X)]/d \theta$,\footnote{For a function $f(x_1, x_2)$, $\partial f(z_1, z_2) / \partial x_1$ is the partial derivative (e.g., a gradient vector) of $f$ in the first variable evaluated at $z_1, z_2$. $d f(z_1, z_2) / d x_1$ is the total derivative of $f$ in $x_1$ evaluated at $z_1, z_2$. 
For example, if $x = f(\theta)$, then $ d g(x, \theta)/ d\theta = (\partial g(x, \theta)/\partial x) (d f(\theta)/ d\theta) + \partial g(x, \theta)/\partial\theta$.} using the right-hand expression in \eqref{eq:gumbelmaxtrick}. Unfortunately, this fails. The problem is not the lack of differentiability, as normally reported. In fact, the argmax is differentiable almost everywhere. Instead it is the jump discontinuities in the argmax that invalidate this particular exchange of expectation and differentiation \citep[][Chap. 7.2]{lee2018reparameterization, asmussen2007stochastic}. The GST estimator \citep{maddison2016concrete, jang2016categorical} overcomes this by using the tempered softmax, $\softmax_{\temp}(u)_i = \exp(u_i/\temp) / \sum_{j=1}^n \exp(u_j/\temp)$ for $u \in \R^n, \temp > 0$, to continuously approximate $X$, \begin{align} \label{eq:gumbelsoftmaxestimator} X_{\temp} = \softmax_{\temp}(G + \theta). \end{align} The relaxed estimator is $d \loss(X_t) / d \theta$. While this is a biased estimator of \eqref{eq:problem}, it is an unbiased estimator of $d\mathbb{E}[\loss(X_{\temp})]/d\theta$ and $X_t \to X$ a.s. as $t \to 0$. Thus, $d \loss(X_t) / d \theta$ is used for optimizing $\expect[\loss(X_{\temp})]$ as a surrogate for $\expect[\loss(X)]$, on which the final model is evaluated. The score function estimator \citep{glynn1990likelihood, williams1992simple}, $\loss(X) \, \partial \log p_{\theta}(X) / \partial \theta$, is the classical alternative. It is a simple, unbiased estimator, but without highly engineered control variates, it suffers from high variance \citep{mnih2014neural}. Building on the score function estimator are a variety of estimators that require multiple evaluations of $\loss$ to reduce variance \citep{DBLP:journals/corr/GuLSM15, tucker2017rebar, grathwohl2018backpropagation, yin2018arm, Kool2020Estimating, aueb2015local}. The advantages of relaxed estimators are the following: they only require a single evaluation of $\loss$, they are easy to implement using modern software packages \citep{abadi2016tensorflow, paszke2017automatic, jax2018github}, and, as reparameterization gradients, they tend to have low variance \citep{gal2016uncertainty}. \begin{figure}[t] \centering \tabskip=0pt \valign{#\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/onehotstate.pdf} \caption*{One-hot vector} \end{subfigure} } \vfill \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/khotstate.pdf} \caption*{$k$-hot vector} \end{subfigure} }\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/permmatstate.pdf} \caption*{Permutation matrix} \end{subfigure} }\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/spanningtreestate.pdf} \caption*{Spanning tree adj. matrix} \end{subfigure} }\cr \hbox{ \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=0.75in]{figures/arborstate.pdf} \caption*{Arborescence adj. matrix} \end{subfigure} }\cr } \caption{Structured discrete objects can be represented by binary arrays. In these graphical representations, color indicates 1 and no color indicates 0. 
For example, ``Spanning tree'' is the adjacency matrix of an undirected spanning tree over 6 nodes; ``Arborescence'' is the adjacency matrix of a directed spanning tree rooted at node 3.} \label{fig:embeddings} \end{figure} \section{Stochastic Argmax Tricks} \label{sec:smts} Simulating a GST requires enumerating $|\abstractset|$ random variables, so it cannot scale. We overcome this by identifying generalizations of the GMT that can be relaxed and that scale to large $\abstractset$s by exploiting structured embeddings $\discreteset$. We call these \emph{stochastic argmax tricks} (SMT{s}), because they are perturbation models \citep{tarlow2012randoms, gane2014learning}, which can be relaxed into stochastic softmax tricks (Section \ref{sec:rsmts}). \begin{definition} \label{def:smt} Given a non-empty, convex independent, finite set $\discreteset \subseteq \R^n$ and a random utility $U$ whose distribution is parameterized by $\theta \in \R^m$, a {\em stochastic argmax trick} for $X$ is the linear program, \begin{equation} \label{eq:smt} X = \arg \max\nolimits_{x \in \discreteset} \, U^T x. \end{equation} \end{definition} The GMT is recovered with one-hot $\discreteset$ and $U \sim \Gumbel(\theta)$. We assume that \eqref{eq:smt} is a.s.~unique, which is guaranteed if $U$ a.s.~never lands in any particular lower dimensional subspace (Prop. \ref{prop:noisedistribution}, App. \ref{supp:sec:proofs}). Because efficient linear solvers are known for many structured $\discreteset$, SMT{s} are capable of scaling to very large $\abstractset$ \citep{schrijver2003combinatorial, kolmogorov2006convergent, koller2009probabilistic}. For example, if $\discreteset$ are the edge indicator vectors of spanning trees $\abstractset$, then \eqref{eq:smt} is the maximum spanning tree problem, which is solved by Kruskal's algorithm \citep{kruskal1956shortest}. The role of the SMT{} in our framework is to reparameterize $p_{\theta}$ in \eqref{eq:problem}. Ideally, \emph{given} $p_{\theta}$, there would be an efficient (e.g., $\mathcal{O}(n)$) method for simulating \emph{some} $U$ such that the marginal of $X$ in \eqref{eq:smt} is $p_{\theta}$. The GMT shows that this is possible for one-hot $\discreteset$, but the situation is not so simple for structured $\discreteset$. Characterizing the marginal of $X$ in general is difficult \cite{tarlow2012randoms, hazan2013perturb}, but $U$ that are efficient to sample from typically induce conditional independencies in $p_{\theta}$ \citep{gane2014learning}. Therefore, we are not able to reparameterize an arbitrary $p_{\theta}$ on structured $\discreteset$. Instead, for structured $\discreteset$ we \emph{assume} that $p_{\theta}$ is reparameterized by \eqref{eq:smt}, and treat $U$ as a modeling choice. Thus, we caution against the standard approach of taking $U \sim \Gumbel(\theta)$ or $U \sim \Normal(\theta, \sigma^2I)$ without further analysis. Practically, in experiments we show that the difference in noise distribution can have a large impact on quantitative results. Theoretically, we show in App. \ref{supp:sec:fieldguide} that an SMT{} over directed spanning trees with negative exponential utilities has a more interpretable structure than the same SMT{} with Gumbel utilities. \section{Stochastic Softmax Tricks} \label{sec:rsmts} If we assume that $X \sim p_{\theta}$ is reparameterized as an SMT{}, then a stochastic softmax trick (SST{)} is a random convex program with a solution that relaxes $X$. An SST{} has a valid reparameterization gradient estimator. 
Thus, we propose using SST{s} as surrogates for estimating gradients of \eqref{eq:problem}, a generalization of the Gumbel-Softmax approach. Because we want gradients with respect to $\theta$, we assume that $U$ is also reparameterizable. Given an SMT{}, an SST{} incorporates a strongly convex regularizer to the linear objective, and expands the state space to the convex hull of the embeddings $\discreteset = \{x_1, \ldots, x_m\} \subseteq \R^n$, \begin{equation} P := \hull(\discreteset) := \left\{\sum\nolimits_{i=1}^m \lambda_i x_i \, \middle\vert \, \lambda_i \geq 0, \, \sum\nolimits_{i=1}^m \lambda_i = 1\right\}. \end{equation} Expanding the state space to a convex polytope makes it path-connected, and the strongly convex regularizer ensures that the solutions are continuous over the polytope. \begin{definition} \label{def:rsmt} Given a stochastic argmax trick $(\discreteset, U)$ where $P := \hull(\discreteset)$ and a proper, closed, strongly convex function $f : \R^n \to \{\R, \infty\}$ whose domain contains the relative interior of $P$, a {\em stochastic softmax trick} for $X$ at temperature $\temp > 0$ is the convex program, \begin{equation} \label{eq:rsmt} X_{\temp} = \arg \max_{x \in P} \, U^T x - \temp f(x) \end{equation} \end{definition} For one-hot $\discreteset$, the Gumbel-Softmax is a special case of an SST{} where $P$ is the probability simplex, $U \sim \Gumbel(\theta)$, and $f(x) = \sum_{i} x_i \log(x_i)$. Objectives like \eqref{eq:rsmt} have a long history in convex analysis \citep[e.g.,][Chap. 12]{rockafellar1970convex} and machine learning \citep[e.g.,][Chap. 3]{wainwright2008graphical}. In general, the difficulty of computing the SST{} will depend on the interaction between $f$ and $\discreteset$. $X_{\temp}$ is suitable as an approximation of $X$. At positive temperatures $\temp$, $X_{\temp}$ is a function of $U$ that ranges over the faces and relative interior of $P$. The degree of approximation is controlled by the temperature parameter, and as $\temp \to 0^+$, $X_{\temp}$ is driven to $X$ a.s. \begin{restatable}{proposition}{approximation} \label{prop:approximation} If $X$ in Def. \ref{def:smt} is a.s. unique, then for $X_t$ in Def. \ref{def:rsmt}, $\lim_{t \to 0^+} X_{\temp} = X$ a.s. If additionally $\loss : P \to \R$ is bounded and continuous, then $\lim_{t \to 0^+} \expect[\loss(X_{\temp})] = \expect[\loss(X)]$. \end{restatable} It is common to consider temperature parameters that interpolate between marginal inference and a deterministic, most probable state. While superficially similar, our relaxation framework is different; as $\temp \to 0^+$, an SST{} approaches \emph{a sample from the SMT{} model} as opposed to a deterministic state. $X_{\temp}$ also admits a reparameterization trick. The SST{} reparameterization gradient estimator given by, \begin{equation} \label{eq:solution} \frac{d \loss(X_{\temp})}{d \theta} = \frac{\partial \loss(X_{\temp})}{\partial X_{\temp}} \frac{\partial X_{\temp}}{\partial U} \frac{d U}{d \theta}. \end{equation} If $\loss$ is differentiable on $P$, then this is an unbiased estimator\footnote{Technically, one needs an additional local Lipschitz condition for $\loss(X_{\temp})$ in $\theta$ \citep[Prop. 2.3, Chap. 7]{asmussen2007stochastic}.} of the gradient $d\mathbb{E}[\loss(X_{\temp})] / d \theta$, because $X_{\temp}$ is continuous and a.e. differentiable: \begin{restatable}{proposition}{relaxation} \label{prop:relaxation} $X_{\temp}$ in Def. \ref{def:rsmt} exists, is unique, and is a.e. differentiable and continuous in $U$. 
\end{restatable} In general, the Jacobian $\partial X_{\temp} / \partial U$ will need to be derived separately given a choice of $f$ and $\discreteset$. However, as pointed out by \citep{domke2010impdiff}, because the Jacobian of $X_{\temp}$ symmetric \citep[][Cor. 2.9]{rockafellar1999second}, local finite difference approximations can be used to approximate $d \loss(X_{\temp})/ d U$ (App. \ref{supp:sec:exp_details}). These finite difference approximations only require two additional calls to a solver for \eqref{eq:rsmt} and do not require additional evaluations of $\loss$. We found them to be helpful in a few experiments (c.f., Section \ref{sec:experiments}). There are many, well-studied $f$ for which \eqref{eq:rsmt} is efficiently solvable. If $f(x) = \lVert x \rVert^2 / 2$, then $X_{\temp}$ is the Euclidean projection of $U/t$ onto $P$. Efficient projection algorithms exist for some convex sets \citep[see][and references therein]{wolfe1976finding, duchi2008efficient, liu2009efficient, blondel2019structured}, and more generic algorithms exist that only call linear solvers as subroutines \citep{niculae2018sparsemap}. In some of the settings we consider, generic negative-entropy-based relaxations are also applicable. We refer to relaxations with $f(x) = \sum\nolimits_{i=1}^n x_i \log(x_i)$ as \emph{categorical entropy relaxations} \citep[e.g.,][]{blondel2019structured, blondel2020learning}. We refer to relaxations with $f(x) = \sum\nolimits_{i=1}^n x_i \log (x_i) + (1-x_i) \log(1-x_i)$ as \emph{binary entropy relaxations} \cite[e.g.,][]{amos2019limited}. Marginal inference in exponential families is a rich source of SST{} relaxations. Consider an exponential family over the finite set $\discreteset$ with natural parameters $u/\temp \in \R^n$ such that the probability of $x \in \discreteset$ is proportional to $\exp(u^Tx/\temp)$. The \emph{marginals} $\mu_{\temp} : \R^n \to \hull(\discreteset)$ of this family are solutions of a convex program in exactly the form \eqref{eq:rsmt} \citep{wainwright2008graphical}, i.e., there exists $A^* : \hull(\discreteset) \to \{\R, \infty\}$ such that, \begin{equation} \label{eq:expfamilymarg} \mu_{\temp}(u) := \sum\nolimits_{x \in \discreteset} \frac{x\exp(u^Tx/\temp)}{\sum_{y \in \discreteset} \exp(u^T y/\temp)} = \arg\max_{x \in P} u^Tx - \temp A^*(x). \end{equation} The definition of $A^*$, which generates $\mu_{\temp}$ in \eqref{eq:expfamilymarg}, can be found in \citep[][Thm. 3.4]{wainwright2008graphical}. $A^*$ is a kind of negative entropy and in our case it satisfies the assumptions in Def. \ref{def:rsmt}. Computing $\mu_{\temp}$ amounts to marginal inference in the exponential family, and efficient algorithms are known in many cases \citep[see][]{wainwright2008graphical, koller2009probabilistic}, including those we consider. We call $X_{\temp} = \mu_{\temp}(U)$ the \emph{exponential family entropy relaxation}. Taken together, Prop. \ref{prop:approximation} and \ref{prop:relaxation} suggest our proposed use for SST s: optimize $\expect[\loss(X_{\temp})]$ at a positive temperature, where unbiased gradient estimation is available, but evaluate $\expect[\loss(X)]$. We find that this works well in practice if the temperature used during optimization is treated as a hyperparameter and selected over a validation set. It is worth emphasizing that the choice of relaxation is unrelated to the distribution $p_{\theta}$ of $X$ in the corresponding SMT{}. 
$f$ is not only a modeling choice; it is a computational choice that will affect the cost of computing \eqref{eq:rsmt} and the quality of the gradient estimator. \section{Examples of Stochastic Softmax Tricks} \label{sec:examples} \begin{figure}[t] \centering \includegraphics[scale=0.66666]{figures/spanningtree-corrected.pdf} \caption{An example realization of a spanning tree SST{} for an undirected graph. Middle: Random undirected edge utilities. Left: The random soft spanning tree $X_{\temp}$, represented as a weighted adjacency matrix, can be computed via Kirchhoff's Matrix-Tree theorem. Right: The random spanning tree $X$, represented as an adjacency matrix, can be computed with Kruskal's algorithm.} \label{fig:spanningtree} \vspace{-0.5\baselineskip} \end{figure} The Gumbel-Softmax \citep{maddison2016concrete, jang2016categorical} introduced neither the Gumbel-Max trick nor the softmax. The novelty of this work is neither the pertubation model framework nor the relaxation framework in isolation, but their combined use for gradient estimation. Here we layout some example SST{s}, organized by the set $\abstractset$ with a choice of embeddings $\discreteset$. Bold italics indicates previously described relaxations, most of which are bespoke and not describable in our framework. Italics indicates our novel SST s used in our experiments; some of these are also novel perturbation models. A complete discussion is in App. \ref{supp:sec:fieldguide}. \textbf{Subset selection.} $\discreteset$ is the set of binary vectors indicating membership in the subsets of a finite set $S$. \emph{Indep. $S$} uses $U \sim \Logistic(\theta)$ and a binary entropy relaxation. $X$ and $X_{\temp}$ are computed with a dimension-wise step function or sigmoid, resp. \textbf{$\mathbf{k}$-Subset selection.} $\discreteset$ is the set of binary vectors with a $k$-hot binary vectors indicating membership in a $k$-subset of a finite set $S$. All of the following SMT s use $U \sim \Gumbel(\theta)$. Our SST{s} use the following relaxations: euclidean \citep{amos2017optnet} and categorical \citep{martins2017learning}, binary \citep{amos2019limited}, and exponential family \citep{swersky2012cardinality} entropies. $X$ is computed by sorting $U$ and setting the top $k$ elements to 1 \citep{blondel2019structured}. \emph{$R$ Top $k$} refers to our SST{} with relaxation $R$. \emph{\textbf{L2X}} \citep{chen2018learning} and \emph{\textbf{SoftSub}} \citep{xie2019reparameterizable} are bespoke relaxations. \textbf{Correlated $\mathbf{k}$-subset selection.} $\discreteset$ is the set of $(2n-1)$-dimensional binary vectors with a $k$-hot cardinality constraint on the first $n$ dimensions and a constraint that the $n-1$ dimensions indicate correlations between adjacent dimensions in the first $n$, i.e. the vertices of the correlation polytope of a chain \citep[][Ex. 3.8]{wainwright2008graphical} with an added cardinality constraint \citep{mezuman2013tighter}. \emph{Corr. Top $k$} uses $U_{1:n} \sim \Gumbel(\theta_{1:n})$, $U_{n+1:2n-1} = \theta_{n+1:2n-1}$, and the exponential family entropy relaxation. $X$ and $X_{\temp}$ can be computed with dynamic programs \citep{tarlow2012fast}, see App. \ref{supp:sec:fieldguide}. \textbf{Perfect Bipartite Matchings.} $\discreteset$ is the set of $n \times n$ permutation matrices representing the perfect matchings of the complete bipartite graph $K_{n,n}$. The \emph{\textbf{Gumbel-Sinkhorn}} \citep{mena2018learning} uses $U \sim \Gumbel(\theta)$ and a Shannon entropy relaxation. 
$X$ can be computed with the Hungarian method \citep{kuhn1955hungarian} and $X_{\temp}$ with the Sinkhorn algorithm \citep{sinkhorn1967concerning}. \emph{\textbf{Stochastic NeuralSort}} \citep{grover2018stochastic} uses correlated Gumbel-based utilities that induce a Plackett-Luce model and a bespoke relaxation. \textbf{Undirected spanning trees.} Given a graph $(V, E)$, $\discreteset$ is the set of binary indicator vectors of the edge sets $T \subseteq E$ of undirected spanning trees. \emph{Spanning Tree} uses $U \sim \Gumbel(\theta)$ and the exponential family entropy relaxation. $X$ can be computed with Kruskal's algorithm \citep{kruskal1956shortest}, $X_{\temp}$ with Kirchhoff's matrix-tree theorem \citep[][Sec. 3.3]{koo2007matrixtree}, and both are represented as adjacency matrices, Fig. \ref{fig:spanningtree}. \textbf{Rooted directed spanning trees.} Given a graph $(V, E)$, $\discreteset$ is the set of binary indicator vectors of the edge sets $T \subseteq E$ of $r$-rooted, directed spanning trees. \emph{Arborescence} uses $U \sim \Gumbel(\theta)$ or $-U \sim \exponential(\theta)$ or $U\sim \Normal(\theta, I)$ and an exponential family entropy relaxation. $X$ can be computed with the Chu-Liu-Edmonds algorithm \citep{chu1965shortest, edmonds1967optimum}, $X_{\temp}$ with a directed version of Kirchhoff's matrix-tree theorem \citep[][Sec. 3.3]{koo2007matrixtree}, and both are represented as adjacency matrices. \emph{\textbf{Perturb \& Parse}} \citep{corro2018differentiable} further restricts $\discreteset$ to be projective trees, uses $U \sim \Gumbel(\theta)$, and uses a bespoke relaxation. \section{Related Work} Here we review perturbation models (PMs) and methods for relaxation more generally. SMT{s} are a subclass of PMs, which draw samples by optimizing a random objective. Perhaps the earliest example comes from Thurstonian ranking models \cite{thurstone1927law}, where a distribution over rankings is formed by sorting a vector of noisy scores. Perturb \& MAP models \cite{papandreou2011perturb,hazan2012partition} were designed to approximate the Gibbs distribution over a combinatorial output space using low-order, additive Gumbel noise. Randomized Optimum models \cite{tarlow2012randoms,gane2014learning} are the most general class, which include non-additive noise distributions and non-linear objectives. Recent work \citep{lorberbom2019direct} uses PMs to construct finite difference approximations of the expected loss' gradient. It requires optimizing a non-linear objective over $\discreteset$, and making this applicable to our settings would require significant innovation. Using SST{}s for gradient estimation requires differentiating through a convex program. This idea is not ours and is enjoying renewed interest in \cite{cvxpylayers2019, agrawal2019differentiating, amos2019differentiable}. In addition, specialized solutions have been proposed for quadratic programs \cite{amos2017optnet, martins2016softmax, blondel2020fast} and linear programs with entropic regularizers over various domains \cite{martins2017learning, amos2019limited, adams2011ranking, mena2018learning, blondel2020fast}. In graphical modeling, several works have explored differentiating through marginal inference \cite{domke2010impdiff,ross-cvpr-11,poon2011sum,domke2013learning,swersky2012cardinality,djolonga2017differentiable} and our exponential family entropy relaxation builds on this work. 
The most superficially similar work is \citep{2020arXiv200208676B}, which uses noisy utilities to smooth the solutions of linear programs. In \citep{2020arXiv200208676B}, the noise is a tool for approximately relaxing a deterministic linear program. Our framework uses relaxations to approximate \emph{stochastic} linear programs. \section{Experiments} \label{sec:experiments} Our goal in these experiments was to evaluate the use of SST{s} for learning distributions over structured latent spaces in deep structured models. We chose frameworks (NRI \citep{kipf2018neural}, L2X \citep{chen2018learning}, and a latent parse tree task) in which relaxed gradient estimators are the methods of choice, and investigated the effects of $\discreteset$, $f$, and $U$ on the task objective and on the unsupervised structure discovery. For NRI, we also implemented the standard single-loss-evaluation score function estimators (REINFORCE \citep{williams1992simple} and NVIL \citep{mnih2014neural}), and the best SST{} outperformed these baselines both in terms of average performance and variance, see App. \ref{supp:sec:addresults}. All SST{} models were trained with the ``soft'' SST{} and evaluated with the ``hard'' SMT{}. We optimized hyperparameters (including fixed training temperature $\temp$) using random search over multiple independent runs. We selected models on a validation set according to the best objective value obtained during training. All reported values are measured on a test set. Error bars are bootstrap standard errors over the model selection process. We refer to SST{s} defined in Section ~\ref{sec:examples} with italics. Details are in App. \ref{supp:sec:exp_details}. Code is available at \url{https://github.com/choidami/sst}. \subsection{Neural Relational Inference (NRI) for Graph Layout} \begin{table}[t] \refstepcounter{figure} \label{fig:graph_layout} \captionsetup{labelformat=andfigure} \caption{\emph{Spanning Tree} performs best on structure recovery, despite being trained on the ELBO. Test ELBO and structure recovery metrics are shown from models selected on valid. ELBO. Below: Test set example where \emph{Spanning Tree} recovers the ground truth latent graph perfectly. \vspace{5pt}} \label{table:graph_layout} \begin{subfigure}{\textwidth} \centering \begin{small} \adjustbox{max width=\textwidth}{ \begin{tabular}{@{}lcccccc@{}} \toprule & \multicolumn{3}{c}{$T=10$} & \multicolumn{3}{c}{$T=20$} \\ \cmidrule(l){2-4} \cmidrule(l){5-7} Edge Distribution & ELBO & Edge Prec. & Edge Rec. & ELBO & Edge Prec. & Edge Rec. \\ \midrule \emph{Indep. Directed Edges} \citep{kipf2018neural} & $-1370 \pm 20$ & $48 \pm 2$ & $\mathbf{93 \pm 1}$ & $-1340 \pm160$ & $97 \pm 3$ & $\mathbf{99 \pm 1}$ \\ \emph{E.F. Ent. Top $|V|-1$} & $-2100 \pm 20$ & $41 \pm 1$ & $41 \pm 1$ & $-1700 \pm 320$ & $98\pm6$ & $98\pm6$ \\ \emph{Spanning Tree} & $\mathbf{-1080\pm110}$ & $\mathbf{91\pm3}$ & $91\pm3$ & $\mathbf{-1280\pm10}$ & $\mathbf{99\pm1}$ & $\mathbf{99\pm1}$ \\ \bottomrule \end{tabular}} \end{small} \end{subfigure} \begin{subfigure}{\textwidth} \centering \begin{subfigure}[b]{0.246\textwidth} \includegraphics{figures/layout_gt_test.pdf} \caption*{Ground Truth} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/layout_ie_test.pdf} \caption*{\emph{Indep. Directed Edges}} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/layout_tk_test.pdf} \caption*{\emph{E.F. Ent. 
Top $|V|-1$}} \end{subfigure} \hfill \begin{subfigure}[b]{0.245\textwidth} \includegraphics{figures/layout_st_test.pdf} \caption*{\emph{Spanning Tree}} \end{subfigure} \end{subfigure} \vspace{-\baselineskip} \end{table}

With NRI we investigated the use of SST{}s for latent structure recovery and final performance. NRI is a graph neural network (GNN) model that samples a latent interaction graph $G = (V, E)$ and passes messages over the adjacency matrix to produce a distribution over the trajectories of an interacting particle system. NRI is trained as a variational autoencoder to maximize a lower bound (ELBO) on the marginal log-likelihood of the time series. We experimented with three SST{s} for the encoder distribution: \emph{Indep. Binary} over directed edges, which is the baseline NRI encoder \citep{kipf2018neural}, \emph{E.F. Ent. Top $|V|-1$} over undirected edges, and \emph{Spanning Tree} over undirected edges. We computed the KL with respect to the random utility $U$ for all SST{s}; see App. \ref{supp:sec:exp_details} for details. Our dataset consisted of latent spanning trees over 10 vertices sampled from the $\Gumbel(0)$ prior. Given a tree, we embedded the vertices in $\R^2$ by applying $T \in \{10, 20\}$ iterations of a force-directed algorithm \citep{fruchterman1991graph}. The model saw particle locations at each iteration, not the underlying spanning tree.

We found that \emph{Spanning Tree} performed best, improving on both the ELBO and the recovery of latent structure over the baseline \citep{kipf2018neural}. For structure recovery, we measured edge precision and recall against the ground truth adjacency matrix. \emph{Spanning Tree} recovered the edge structure well even when given only a short series ($T=10$, Fig. \ref{fig:graph_layout}); less structured baselines were only competitive on longer time series.

\subsection{Unsupervised Parsing on ListOps}

We investigated the effect of $\discreteset{}$'s structure and of the utility distribution in a latent parse tree task. We used a simplified variant of the ListOps dataset \cite{nangia2018listops}, which contains sequences of prefix arithmetic expressions, e.g., \texttt{max[ 3 min[ 8 2 ]]}, that evaluate to an integer in $[0, 9]$. The arithmetic syntax induces a directed spanning tree rooted at the first token, with directed edges from operators to operands. We modified the data by removing the \texttt{summod} operator, capping the maximum depth of the ground truth dependency parse, and capping the maximum length of a sequence. While this simplifies the task considerably, it makes the problem accessible to GNN models of fixed depth.

Our models used a bi-LSTM encoder to produce a distribution over edges (directed or undirected) between all pairs of tokens, which induced a latent (di)graph. Predictions were made from the final embedding of the first token after passing messages in a GNN architecture over the latent graph. For undirected graphs, messages were passed in both directions. We experimented with the following SST{s} for the edge distribution: \emph{Indep. Undirected Edges}, \emph{Spanning Tree}, \emph{Indep. Directed Edges}, and \emph{Arborescence} (with three separate utility distributions). \emph{Arborescence} was rooted at the first token. For baselines, we used an unstructured LSTM and a GNN over the ground truth parse. All models were trained with cross-entropy to predict the integer evaluation of the sequence.
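As a concrete sketch of the \emph{Arborescence} sampler used here, the following draws a hard sample $X$ by perturbing the utilities and running the Chu-Liu-Edmonds algorithm; we rely on NetworkX's implementation and force the root by deleting its incoming edges. The construction and names are our own illustration, not the released code.

\begin{verbatim}
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def sample_arborescence(theta, root=0):
    # theta[u, v] parameterizes the directed edge u -> v.
    U = theta + rng.gumbel(size=theta.shape)  # U ~ Gumbel(theta)
    n = theta.shape[0]
    G = nx.DiGraph()
    for u in range(n):
        for v in range(n):
            # Dropping every edge into `root` forces all spanning
            # arborescences of G to be rooted at `root`.
            if u != v and v != root:
                G.add_edge(u, v, weight=U[u, v])
    T = nx.maximum_spanning_arborescence(G)  # Chu-Liu-Edmonds
    X = np.zeros_like(theta)
    for u, v in T.edges:
        X[u, v] = 1.0
    return X
\end{verbatim}

Replacing the Gumbel call with negated exponential or Gaussian noise gives the other two utility distributions compared below; the relaxed sample $X_{\temp}$ would come from the directed matrix-tree theorem, analogously to the undirected sketch above.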
The best performing models were structured models whose structure better matched the true latent structure (Table \ref{table:listops_perform}). For each model, we measured the accuracy of its prediction (task accuracy) and measured precision and recall with respect to the ground truth parse's adjacency matrix.\footnote{We exclude edges to and from the closing symbol ``$]$''. Its edge assignments cannot be learnt from the task objective, because the correct evaluation of an operation does not depend on the closing symbol.} Both tree-structured SST{s} outperformed their independent edge counterparts on all metrics. Overall, \emph{Arborescence} achieved the best performance in terms of task accuracy and structure recovery. We found that the utility distribution significantly affected performance (Table \ref{table:listops_perform}). For example, while negative exponential utilities induce an interpretable distribution over arborescences (App. \ref{supp:sec:fieldguide}), the multiplicative parameterization of exponentials made it difficult to train competitive models. While the LSTM baseline performed well on task accuracy, \emph{Arborescence} additionally recovered much of the latent parse tree.

\begin{table} \caption{Matching ground truth structure (non-tree $\to$ tree) improves performance on ListOps. The utility distribution impacts performance. Test task accuracy and structure recovery metrics are shown from models selected on valid. task accuracy. Note that because we exclude edges to and from the closing symbol ``$]$'', recall is not equal to twice the precision for \emph{Spanning Tree} and precision is not equal to recall for \emph{Arborescence}.} \label{table:listops_perform} \begin{center} \begin{small} \begin{tabular}{@{}llccc@{}} \toprule Model & Edge Distribution & Task Acc. & Edge Precision & Edge Recall \\ \midrule LSTM & --- & $92.1 \pm 0.2$ & --- & --- \\ \cmidrule[0.15pt]{1-5} \multirow{2}{*}{\shortstack[l]{GNN on\\latent graph}} & \emph{Indep. Undirected Edges} & $89.4 \pm 0.6$ & $20.1 \pm 2.1$ & $45.4 \pm 6.5$ \\ & \emph{Spanning Tree} & $91.2 \pm 1.8$ & $33.1 \pm 2.9$ & $47.9 \pm 5.2$ \\ \cmidrule[0.15pt]{1-5} \multirow{6}{*}{\shortstack[l]{GNN on\\latent digraph}} & \emph{Indep. Directed Edges} & $90.1 \pm 0.5$ & $13.0 \pm 2.0$ & $56.4 \pm 6.7$ \\ & \emph{Arborescence} & & & \\ & \hspace{2mm} - Neg. Exp. & $71.5 \pm 1.4$ & $23.2 \pm 10.2$ & $20.0 \pm 6.0$ \\ & \hspace{2mm} - Gaussian & $\mathbf{95.0 \pm 2.2}$ & $65.3 \pm 3.7$ & $60.8 \pm 7.3$ \\ & \hspace{2mm} - Gumbel & $\mathbf{95.0 \pm 3.0}$ & $\mathbf{75.5 \pm 7.0}$ & $\mathbf{71.9 \pm 12.4}$ \\ \cmidrule[0.15pt]{2-5} & Ground Truth Edges & $98.1 \pm 0.1$ & 100 & 100 \\ \bottomrule \end{tabular} \end{small} \end{center} \vspace{-\baselineskip} \end{table}

\subsection{Learning To Explain (L2X) Aspect Ratings}

With L2X we investigated the effect of the choice of relaxation. We used the BeerAdvocate dataset \citep{mcauley2012learning}, which contains reviews comprising free-text feedback and ratings for multiple aspects (appearance, aroma, palate, and taste; Fig. \ref{fig:l2x:review}). Each sentence in the test set is annotated with the aspects it describes, allowing us to define structure recovery metrics. We considered the L2X task of learning a distribution over $k$-subsets of words that best explain a given aspect rating.\footnote{While L2X was originally proposed for model interpretability, we used the original aspect ratings.
This allowed us to use the sentence-level annotations for each aspect to facilitate comparisons between subset distributions.} Our model used word embeddings from \citet{lei2016rationalizing} and convolutional neural networks with one (simple) and three (complex) layers to produce a distribution over $k$-hot binary latent masks. Given the latent masks, our model used a convolutional net to make predictions from the masked embeddings. We used $k \in \{5, 10, 15\}$ and the following SST{s} for the subset distribution: \{\emph{{Euclid., Cat. Ent., Bin. Ent., E.F. Ent.}}\} \emph{Top $k$} and \emph{Corr. Top $k$}. For baselines, we used bespoke relaxations designed for this task: \emph{{L2X}} \citep{chen2018learning} and \emph{{SoftSub}} \citep{xie2019reparameterizable}. We trained separate models for each aspect using mean squared error (MSE).

We found that SST{s} improve over the bespoke relaxations (Table \ref{table:l2x_beer_aroma} for the aroma aspect; others in App. \ref{supp:sec:addresults}). For unsupervised discovery, we used the sentence-level annotations for each aspect to define ground truth subsets against which the precision of the $k$-subsets was measured. SST{s} tended to select subsets with higher precision across different architectures and cardinalities, and they achieved modest improvements in MSE. We did not find significant differences arising from the choice of regularizer $f$ (a sketch of the binary entropy case follows Table \ref{table:l2x_beer_aroma}). Overall, the most structured SST{}, \emph{Corr. Top $k$}, achieved the lowest MSE, the highest precision, and improved interpretability: the correlations in the model allowed it to select contiguous words, while subsets from less structured distributions were scattered (Fig. \ref{fig:l2x:review}).

\newcommand{\corrtopk}[1]{\hlmyred{#1}} \newcommand{\topk}[1]{\hlmyblue{#1}} \newcommand{\both}[1]{\hlmypurple{#1}}

\begin{table}[t] \centering \refstepcounter{figure} \label{fig:l2x:review} \captionsetup{labelformat=andfigure} \caption{For $k$-subset selection on the aroma aspect, SST{s} tend to outperform baseline relaxations. Test set MSE ($\times 10^{-2}$) and subset precision (\%) are shown for models selected on valid. MSE. Bottom: \emph{Corr. Top $k$} (red) selects contiguous words while \emph{Top $k$} (blue) picks scattered words.} \label{table:l2x_beer_aroma} \begin{small} \adjustbox{max width=\textwidth}{ \begin{tabular}{@{}llcccccc@{}} \toprule & & \multicolumn{2}{c}{$k=5$} & \multicolumn{2}{c}{$k=10$} & \multicolumn{2}{c}{$k=15$} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} Model & Relaxation & MSE & Subs. Prec. & MSE & Subs. Prec. & MSE & Subs. Prec. \\ \midrule \multirow{7}{*}{\shortstack[l]{Simple}} & \emph{L2X} \citep{chen2018learning} & $3.6 \pm 0.1$ & $28.3 \pm 1.7$ & $3.0 \pm 0.1$ & $25.5 \pm 1.2$ & $2.6 \pm 0.1$ & $25.5 \pm 0.4$ \\ & \emph{SoftSub} \citep{xie2019reparameterizable} & $3.6 \pm 0.1$ & $27.2 \pm 0.7$ & $3.0 \pm 0.1$ & $26.1 \pm 1.1$ & $2.6 \pm 0.1$ & $25.1 \pm 1.0$ \\ \cmidrule[0.15pt]{2-8} & \emph{Euclid. Top $k$} & $3.5 \pm 0.1$ & $25.8 \pm 0.8$ & $2.8 \pm 0.1$ & $32.9 \pm 1.2$ & $2.5 \pm 0.1$ & $29.0 \pm 0.3$ \\ & \emph{Cat. Ent. Top $k$} & $3.5 \pm 0.1$ & $26.4 \pm 2.0$ & $2.9 \pm 0.1$ & $32.1 \pm 0.4$ & $2.6 \pm 0.1$ & $28.7 \pm 0.5$ \\ & \emph{Bin. Ent. Top $k$} & $3.5 \pm 0.1$ & $29.2 \pm 2.0$ & $2.7 \pm 0.1$ & $33.6 \pm 0.6$ & $2.6 \pm 0.1$ & $28.8 \pm 0.4$ \\ & \emph{E.F. Ent. Top $k$} & $3.5 \pm 0.1$ & $28.8 \pm 1.7$ & $2.7 \pm 0.1$ & $32.8 \pm 0.5$ & $2.5 \pm 0.1$ & $29.2 \pm 0.8$ \\ \cmidrule[0.15pt]{2-8} & \emph{Corr.
Top $k$} & $\mathbf{2.9 \pm 0.1}$ & $\mathbf{63.1 \pm 5.3}$ & $\mathbf{2.5 \pm 0.1}$ & $\mathbf{53.1 \pm 0.9}$ & $\mathbf{2.4 \pm 0.1}$ & $\mathbf{45.5 \pm 2.7}$ \\ \cmidrule[0.15pt]{1-8} \multirow{7}{*}{\shortstack[l]{Complex}} & \emph{L2X} \citep{chen2018learning} & $2.7 \pm 0.1$ & $50.5 \pm 1.0$ & $2.6 \pm 0.1$ & $44.1 \pm 1.7$ & $2.4 \pm 0.1$ & $44.4 \pm 0.9$ \\ & \emph{SoftSub} \citep{xie2019reparameterizable} & $2.7 \pm 0.1$ & $57.1 \pm 3.6$ & $\mathbf{2.3 \pm 0.1}$ & $50.2 \pm 3.3$ & $2.3 \pm 0.1$ & $43.0 \pm 1.1$ \\ \cmidrule[0.15pt]{2-8} & \emph{Euclid. Top $k$} & $2.7 \pm 0.1$ & $61.3 \pm 1.2$ & $2.4 \pm 0.1$ & $52.8 \pm 1.1$ & $2.3 \pm 0.1$ & $44.1 \pm 1.2$ \\ & \emph{Cat. Ent. Top $k$} & $2.7 \pm 0.1$ & $61.9 \pm 1.2$ & $\mathbf{2.3 \pm 0.1}$ & $52.8 \pm 1.0$ & $2.3 \pm 0.1$ & $44.5 \pm 1.0$ \\ & \emph{Bin. Ent. Top $k$} & $2.6 \pm 0.1$ & $62.1 \pm 0.7$ & $\mathbf{2.3 \pm 0.1}$ & $50.7 \pm 0.9$ & $2.3 \pm 0.1$ & $44.8 \pm 0.8$ \\ & \emph{E.F. Ent. Top $k$} & $2.6 \pm 0.1$ & $59.5 \pm 0.9$ & $\mathbf{2.3 \pm 0.1}$ & $54.6 \pm 0.6$ & $2.2 \pm 0.1$ & $44.9 \pm 0.9$ \\ \cmidrule[0.15pt]{2-8} & \emph{Corr. Top $k$} & $\mathbf{2.5 \pm 0.1}$ & $\mathbf{67.9 \pm 0.6}$ & $\mathbf{2.3 \pm 0.1}$ & $\mathbf{60.2 \pm 1.3}$ & $\mathbf{2.1 \pm 0.1}$ & $\mathbf{57.7 \pm 3.8}$ \\ \bottomrule \end{tabular}} \end{small} \vspace{2pt} \setlength{\fboxrule}{\heavyrulewidth} \begingroup\fboxsep=0.0025\textwidth \fbox{\parbox{0.995\textwidth}{ \small{ Pours a \topk{\strut slight tangerine} orange and \topk{\strut straw} yellow. The head is \topk{\strut nice} and bubbly but fades very quickly with a little lacing. \both{\strut Smells} \corrtopk{\strut{} like Wheat and European hops}, a little yeast in there too. There is some \topk{\strut fruit} in there too, but you have to take a good \topk{\strut whiff} to get it. The taste is of wheat, a bit of malt, and \corrtopk{\strut a little } \both{\strut fruit} \corrtopk{ \strut flavour} in there too. Almost feels like drinking \topk{\strut Champagne}, medium mouthful otherwise. Easy to drink, but \topk{\strut not} something I'd be trying every night. \begin{center} \begin{tabular}{ccccc} Appearance: 3.5 & \textbf{Aroma: 4.0} & Palate: 4.5 & Taste: 4.0 & Overall: 4.0 \end{tabular} \end{center} }}} \endgroup \vspace{-\baselineskip} \end{table}
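For intuition about the Top $k$ relaxations compared above, note that with the binary entropy regularizer the relaxed program has a particularly simple solution: each coordinate is a sigmoid of a shifted utility, with the shift chosen so that the coordinates sum to $k$. The sketch below is our own illustrative reconstruction of this standard derivation, not the released implementation; \texttt{u} denotes the realized utilities $U$.

\begin{verbatim}
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def hard_topk(u, k):
    # X = argmax: indicator vector of the k largest utilities.
    x = np.zeros_like(u)
    x[np.argpartition(u, -k)[-k:]] = 1.0
    return x

def soft_topk(u, k, temp, iters=60):
    # X_temp = sigmoid((u - nu) / temp), where the threshold nu is found
    # by bisection so that X_temp sums to k (the sum is monotonically
    # decreasing in nu). Assumes 0 < k < len(u).
    lo, hi = u.min() - 50.0 * temp, u.max() + 50.0 * temp
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        if expit((u - nu) / temp).sum() > k:
            lo = nu
        else:
            hi = nu
    return expit((u - nu) / temp)
\end{verbatim}

As $\temp \to 0$ the soft sample approaches \texttt{hard\_topk(u, k)}; adding Gumbel noise to deterministic scores before calling \texttt{soft\_topk} turns this deterministic relaxation into the corresponding SST.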
\section{Conclusion}

We introduced stochastic softmax tricks, which are random convex programs that capture a large class of relaxed distributions over structured, combinatorial spaces. We designed stochastic softmax tricks for subset selection and a variety of spanning tree distributions. We tested their use in deep latent variable models and found that they can be used to improve performance and to encourage the unsupervised discovery of true latent structure. There are several future directions for this line of work. The relaxation framework can be generalized by modifying the constraint set or the utility distribution at positive temperatures. Some combinatorial objects might benefit from a more careful design of the utility distribution, while others, e.g., matchings, are still waiting to have their tricks designed.

\section*{Broader Impact}

This work introduces methods and theory that have the potential to improve the interpretability of latent variable models. While unfavorable consequences cannot be excluded, increased interpretability is generally considered a desirable property of machine learning models. Given that this is foundational, methodologically driven research, we refrain from speculating further.

\section*{Acknowledgements and Disclosure of Funding}

We thank Daniel Johnson and Francisco Ruiz for their time and insightful feedback. We also thank Tamir Hazan, Yoon Kim, Andriy Mnih, and Rich Zemel for their valuable comments. MBP gratefully acknowledges support from the Max Planck ETH Center for Learning Systems. CJM is grateful for the support of the James D. Wolfensohn Fund at the Institute for Advanced Study in Princeton, NJ. Resources used in preparing this research were provided, in part, by the Sustainable Chemical Processes through Catalysis (Suchcat) National Center of Competence in Research (NCCR), the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
{ "timestamp": "2021-03-02T02:34:27", "yymm": "2006", "arxiv_id": "2006.08063", "language": "en", "url": "https://arxiv.org/abs/2006.08063" }
"\\section{Introduction}\n\nIn this work we explore the Bell like-inequalities over the strict proba(...TRUNCATED)
{"timestamp":"2020-06-16T02:22:19","yymm":"2006","arxiv_id":"2006.08066","language":"en","url":"http(...TRUNCATED)
"\n\\section{Introduction}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.34\\te(...TRUNCATED)
{"timestamp":"2021-02-09T02:37:06","yymm":"2006","arxiv_id":"2006.08039","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\nCataclysmic variable stars \\citep{warner95} are close binary \nsystems (...TRUNCATED)
{"timestamp":"2020-06-16T02:22:31","yymm":"2006","arxiv_id":"2006.08078","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\nEstimating the support size of a discrete distribution is an important the(...TRUNCATED)
{"timestamp":"2020-06-16T02:20:12","yymm":"2006","arxiv_id":"2006.07999","language":"en","url":"http(...TRUNCATED)
"\\section*{Introduction}\n\nA \\definedterm{dynamical system} is a pair $\\pair{X}{f}$, where $X$ i(...TRUNCATED)
{"timestamp":"2020-09-14T02:13:07","yymm":"2006","arxiv_id":"2006.08277","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction\\label{sec:Introduction}}\n\nIt has been recently shown that the path integr(...TRUNCATED)
{"timestamp":"2020-11-04T02:06:52","yymm":"2006","arxiv_id":"2006.08216","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\label{sec:introduction}\n\\IEEEPARstart{M}{odern} electric trains can bo(...TRUNCATED)
{"timestamp":"2020-06-16T02:24:09","yymm":"2006","arxiv_id":"2006.08119","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\\label{sec:intro}\n\nEMV, named after its founders Europay, Mastercard, and(...TRUNCATED)
{"timestamp":"2021-02-18T02:19:11","yymm":"2006","arxiv_id":"2006.08249","language":"en","url":"http(...TRUNCATED)
End of preview.

No dataset card yet

Downloads last month
3