A lattice filter model of the visual pathway

Karol Gregor, Dmitri B. Chklovskii
Janelia Farm Research Campus, HHMI
19700 Helix Drive, Ashburn, VA
{gregork, mitya}@janelia.hhmi.org

Abstract

Early stages of visual processing are thought to decorrelate, or whiten, the incoming temporally varying signals. Motivated by the cascade structure of the visual pathway (retina → lateral geniculate nucleus (LGN) → primary visual cortex, V1), we propose to model its function using lattice filters, signal processing devices for stage-wise decorrelation of temporal signals. Lattice filter models predict neuronal responses consistent with physiological recordings in cats and primates. In particular, they predict temporal receptive fields of two different types resembling the so-called lagged and non-lagged cells of the LGN. Moreover, connection weights in the lattice filter can be learned using Hebbian rules in a stage-wise sequential manner reminiscent of the neuro-developmental sequence in mammals. In addition, lattice filters can model visual processing in insects. Therefore, the lattice filter is a useful abstraction that captures temporal aspects of visual processing.

Our sensory organs face an ongoing barrage of stimuli from the world and must transmit as much information about them as possible to the rest of the brain [1]. This is a formidable task because, in sensory modalities such as vision, the dynamic range of natural stimuli (more than three orders of magnitude) greatly exceeds the dynamic range of relay neurons (less than two orders of magnitude) [2]. High-fidelity transmission is possible at all only because the continuity of objects in the physical world leads to correlations in natural stimuli, which imply redundancy. In turn, such redundancy can be eliminated by compression performed by the front end of the visual system, leading to the reduction of the dynamic range [3, 4].
A compression strategy appropriate for redundant natural stimuli is called predictive coding [5, 6, 7]. In predictive coding, a prediction of the incoming signal value is computed from past values delayed in the circuit. This prediction is subtracted from the actual signal value and only the prediction error is transmitted. In the absence of transmission noise, such compression is lossless, as the original signal can be decoded on the receiving end by inverting the encoder. If predictions are accurate, the dynamic range of the error is much smaller than that of the natural stimuli. Therefore, minimizing dynamic range using predictive coding reduces to optimizing prediction.

Experimental support for viewing the front end of the visual system as a predictive encoder comes from measurements of receptive fields [6, 7]. In particular, predictive coding suggests that, for natural stimuli, temporal receptive fields should be biphasic and spatial receptive fields center-surround. These predictions are borne out by experimental measurements in retinal ganglion cells [8], lateral geniculate nucleus (LGN) neurons [9], and fly second-order visual neurons called large monopolar cells (LMCs) [2]. In addition, the experimentally measured receptive fields vary with signal-to-noise ratio as would be expected from optimal prediction theory [6]. Furthermore, experimentally observed whitening of the transmitted signal [10] is consistent with removing correlated components from the incoming signals [11]. As natural stimuli contain correlations on time scales greater than a hundred milliseconds, experimentally measured receptive fields of LGN neurons are equally long [12]. Decorrelation over such long time scales requires equally long delays. How can such extended receptive fields be produced by biological neurons and synapses whose time constants are typically less than a hundred milliseconds [13]?
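The encode-and-subtract loop of predictive coding can be made concrete in a few lines. The sketch below uses a fixed first-order predictor with an illustrative weight (0.9, an assumption made here for demonstration, not a value from the text): the encoder transmits only prediction errors, and the decoder recovers the original signal losslessly by inverting the encoder.

```python
# Lossless predictive coding round-trip. The first-order predictor
# weight w = 0.9 is an illustrative assumption, not a fitted value.
def encode(x, w=0.9):
    # Transmit only the prediction error e_t = x_t - w * x_{t-1}.
    prev = 0.0
    errors = []
    for xt in x:
        errors.append(xt - w * prev)
        prev = xt
    return errors

def decode(errors, w=0.9):
    # Invert the encoder: x_t = e_t + w * x_{t-1}.
    prev = 0.0
    decoded = []
    for et in errors:
        xt = et + w * prev
        decoded.append(xt)
        prev = xt
    return decoded

# A slowly varying (correlated) signal: the transmitted errors span a
# much smaller range than the signal itself, reducing dynamic range.
signal = [t / 100.0 for t in range(100)]
errs = encode(signal)
recovered = decode(errs)
```

Because the decoder sees the same past it used for prediction, no information is lost; only the dynamic range of the transmitted quantity shrinks.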
The field of signal processing offers a solution to this problem in the form of a device called a lattice filter, which decorrelates signals in stages, sequentially adding longer and longer delays [14, 15, 16, 17]. Motivated by the cascade structure of visual systems [18], we propose to model decorrelation in them by lattice filters. Naturally, visual systems are more complex than lattice filters and perform many other operations. However, we show that the lattice filter model explains several existing observations in vertebrate and invertebrate visual systems and makes testable predictions. Therefore, we believe that lattice filters provide a convenient abstraction for modeling temporal aspects of visual processing.

This paper is organized as follows. First, we briefly summarize relevant results from linear prediction theory. Second, we explain the operation of the lattice filter in discrete and continuous time. Third, we compare lattice filter predictions with physiological measurements.

1 Linear prediction theory

Despite the non-linear nature of neurons and synapses, the operation of some neural circuits in vertebrates [19] and invertebrates [20] can be described by linear systems theory. The advantage of linear systems is that optimal circuit parameters may be obtained analytically and the results are often intuitively clear. Perhaps not surprisingly, the field of signal processing relies heavily on linear prediction theory, which offers a convenient framework [15, 16, 17]. Below, we summarize the results from linear prediction that will be used to explain the operation of the lattice filter.

Consider a scalar sequence $y = \{y_t\}$ where time $t = 1, \ldots, n$. Suppose that $y_t$ at each time point depends on side information provided by a vector $z_t$. Our goal is to generate a series of linear predictions $\hat{y}_t$ from the vector $z_t$: $\hat{y}_t = w \cdot z_t$.
We define the prediction error as:
$$e_t = y_t - \hat{y}_t = y_t - w \cdot z_t \quad (1)$$
and look for values of $w$ that minimize the mean squared error:
$$\langle e^2 \rangle = \frac{1}{n} \sum_t e_t^2 = \frac{1}{n} \sum_t (y_t - w \cdot z_t)^2. \quad (2)$$
The weight vector $w$ is optimal for prediction of sequence $y$ from sequence $z$ if and only if the prediction-error sequence $e = y - w \cdot z$ is orthogonal to each component of the vector $z$:
$$\langle e z \rangle = 0. \quad (3)$$
When the whole series $y$ is given in advance, i.e. in the offline setting, these so-called normal equations can be solved for $w$, for example, by Gaussian elimination [21]. However, in signal processing and neuroscience applications, another setting, called online, is more relevant: at every time step $t$, the prediction $\hat{y}_t$ must be made using only the current values of $z_t$ and $w$. Furthermore, after a prediction is made, $w$ is updated based on the prediction $\hat{y}_t$ and the observed $y_t$, $z_t$. In the online setting, an algorithm called stochastic gradient descent is often used, where, at each time step, $w$ is updated in the direction of the negative gradient of $e_t^2$:
$$w \to w - \eta \nabla_w (y_t - w \cdot z_t)^2. \quad (4)$$
This leads to the following weight update, known as least mean squares (LMS) [15], for predicting sequence $y$ from sequence $z$:
$$w \to w + \eta e_t z_t, \quad (5)$$
where $\eta$ is the learning rate. The value of $\eta$ represents the relative influence of more recent observations compared to more distant ones: the larger the learning rate, the faster the system adapts to recent observations and the less of the past it remembers.

In this paper, we are interested in predicting the current value $x_t$ of a sequence $x$ from its past values $x_{t-1}, \ldots, x_{t-k}$, restricted by the prediction order $k > 0$:
$$\hat{x}_t = w^k \cdot (x_{t-1}, \ldots, x_{t-k})^T. \quad (6)$$
This problem is a special case of the online linear prediction framework above, where $y_t = x_t$, $z_t = (x_{t-1}, \ldots, x_{t-k})^T$. Then the gradient update is given by:
$$w^k \to w^k + \eta e_t (x_{t-1}, \ldots, x_{t-k})^T. \quad (7)$$
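The LMS update applied to order-$k$ prediction of a sequence from its own past can be sketched as follows (the sinusoidal test signal, order $k = 2$, and learning rate are illustrative choices made here, not values from the text):

```python
# Online least-mean-squares (LMS) prediction of x_t from its k previous
# values, a minimal sketch of updates (5)-(7). Signal, order k and
# learning rate eta are illustrative.
import math

def lms_predict(x, k=2, eta=0.05):
    w = [0.0] * k                                       # weight vector w^k
    errors = []
    for t in range(k, len(x)):
        z = [x[t - j] for j in range(1, k + 1)]         # (x_{t-1}, ..., x_{t-k})
        e = x[t] - sum(wi * zi for wi, zi in zip(w, z)) # prediction error
        w = [wi + eta * e * zi for wi, zi in zip(w, z)] # LMS update (7)
        errors.append(e)
    return w, errors

# A pure sinusoid obeys a second-order linear recursion, so a k = 2
# predictor can drive the error toward zero as the weights adapt.
x = [math.sin(0.3 * t) for t in range(2000)]
w, errors = lms_predict(x)
```

Early errors are as large as the signal itself (the weights start at zero); late errors are far smaller once the weights have adapted.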
While the LMS algorithm can find the weights that optimize the linear prediction (6), the filter $w^k$ has a long temporal extent, making it difficult to implement with neurons and synapses.

2 Lattice filters

One way to generate long receptive fields in circuits of biological neurons is to use a cascade architecture, known as the lattice filter, which calculates optimal linear predictions for temporal sequences and transmits prediction errors [14, 15, 16, 17]. In this section, we explain the operation of a discrete-time lattice filter, then adapt it to continuous-time operation.

2.1 Discrete-time implementation

The first stage of the lattice filter, Figure 1, calculates the error of the first-order optimal prediction (i.e. using only the preceding element of the sequence), the second stage uses the output of the first stage and calculates the error of the second-order optimal prediction (i.e. using only the two previous values), etc. To make such stage-wise error computations possible, the lattice filter calculates at every stage not only the error of optimal prediction of $x_t$ from past values $x_{t-1}, \ldots, x_{t-k}$, called the forward error,
$$f_t^k = x_t - w^k \cdot (x_{t-1}, \ldots, x_{t-k})^T, \quad (8)$$
but, perhaps non-intuitively, also the error of optimal prediction of a past value $x_{t-k}$ from the more recent values $x_{t-k+1}, \ldots, x_t$, called the backward error:
$$b_t^k = x_{t-k} - w'^k \cdot (x_{t-k+1}, \ldots, x_t)^T, \quad (9)$$
where $w^k$ and $w'^k$ are the weights of the optimal prediction. For example, the first stage of the filter calculates the forward error $f_t^1$ of optimal prediction of $x_t$ from $x_{t-1}$: $f_t^1 = x_t - u_1 x_{t-1}$, as well as the backward error $b_t^1$ of optimal prediction of $x_{t-1}$ from $x_t$: $b_t^1 = x_{t-1} - v_1 x_t$, Figure 1. Here, we assume that the coefficients $u_1$ and $v_1$ that give optimal linear prediction are known and return to learning them below. Each following stage of the lattice filter performs a stereotypic operation on its inputs, Figure 1.
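The first-stage computation can be sketched directly from its two defining equations (the input sequence and coefficients $u_1 = v_1 = 0.5$ below are illustrative placeholders, not optimal values for any particular signal):

```python
# First lattice-filter stage: forward error f1_t = x_t - u1 * x_{t-1}
# and backward error b1_t = x_{t-1} - v1 * x_t.
# Illustrative input and coefficients; not fitted values.
def first_stage(x, u1, v1):
    f1, b1 = [], []
    for t in range(1, len(x)):
        f1.append(x[t] - u1 * x[t - 1])    # forward prediction error
        b1.append(x[t - 1] - v1 * x[t])    # backward prediction error
    return f1, b1

x = [0.0, 2.0, 1.0, 3.0]
f1, b1 = first_stage(x, u1=0.5, v1=0.5)
# f1 = [2.0, 0.0, 2.5], b1 = [-1.0, 1.5, -0.5]
```

Note the symmetry: the forward branch predicts the present from the past, the backward branch predicts the past from the present; both transmit only the residual.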
The $k$-th stage ($k > 1$) receives forward, $f_t^{k-1}$, and backward, $b_t^{k-1}$, errors from the previous stage, delays the backward error by one time step, and computes a forward error:
$$f_t^k = f_t^{k-1} - u_k b_{t-1}^{k-1} \quad (10)$$
of the optimal linear prediction of $f_t^{k-1}$ from $b_{t-1}^{k-1}$. In addition, each stage computes a backward error
$$b_t^k = b_{t-1}^{k-1} - v_k f_t^{k-1} \quad (11)$$
of the optimal linear prediction of $b_{t-1}^{k-1}$ from $f_t^{k-1}$.

As can be seen in Figure 1, the lattice filter contains forward prediction error (top) and backward prediction error (bottom) branches, which interact at every stage via cross-links. The operation of the lattice filter can be characterized by the linear filters acting on the input, $x$, to compute forward or backward errors of consecutive order, the so-called prediction-error filters (blue bars in Figure 1). Because of the delays in the backward-error branch, the temporal extent of the filters grows from stage to stage. In the next section, we will argue that prediction-error filters correspond to the measurements of temporal receptive fields in neurons. For detailed comparison with physiological measurements we will use the result that, for bi-phasic prediction-error filters, such as the ones in Figure 1, the first bar of the forward prediction-error filter has a larger weight, in absolute value, than the combined weights of the remaining coefficients of the corresponding filter. Similarly, in backward prediction-error filters, the last bar has greater weight than the rest of them combined. This fact arises from the observation that forward prediction-error filters are minimum phase, while backward prediction-error filters are maximum phase [16, 17].

Figure 1: Discrete-time lattice filter performs stage-wise computation of forward and backward prediction errors. In the first stage, the optimal prediction of $x_t$ from $x_{t-1}$ is computed by delaying the input by one time step and multiplying it by $u_1$.
The upper summation unit subtracts the predicted $x_t$ from the actual value and outputs the prediction error $f_t^1$. Similarly, the optimal prediction of $x_{t-1}$ from $x_t$ is computed by multiplying the input by $v_1$. The lower summation unit subtracts the optimal prediction from the actual value and outputs the backward error $b_t^1$. In each following stage $k$, the optimal prediction of $f_t^{k-1}$ from $b_{t-1}^{k-1}$ is computed by delaying $b_t^{k-1}$ by one time step and multiplying it by $u_k$. The upper summation unit subtracts the prediction from the actual $f_t^{k-1}$ and outputs the prediction error $f_t^k$. Similarly, the optimal prediction of $b_{t-1}^{k-1}$ from $f_t^{k-1}$ is computed by multiplying it by $v_k$. The lower summation unit subtracts the optimal prediction from the actual value and outputs the backward error $b_t^k$. Black connections have unitary weights and red connections have learnable negative weights. One can view the forward and backward error calculations as applications of the so-called prediction-error filters (blue) to the input sequence. Note that the temporal extent of the filters gets longer from stage to stage.

Next, we derive a learning rule for finding the optimal coefficients $u$ and $v$ in the online setting. The coefficient $u_k$ is used for predicting $f_t^{k-1}$ from $b_{t-1}^{k-1}$ to obtain the error $f_t^k$. By substituting $y_t = f_t^{k-1}$, $z_t = b_{t-1}^{k-1}$ and $e_t = f_t^k$ into (5), the update of $u_k$ becomes
$$u_k \to u_k + \eta f_t^k b_{t-1}^{k-1}. \quad (12)$$
Similarly, $v_k$ is updated by
$$v_k \to v_k + \eta b_t^k f_t^{k-1}. \quad (13)$$
Interestingly, the updates of the weights are given by the product of the activities of the outgoing and incoming nodes of the corresponding cross-links. Such updates are known as Hebbian learning rules, thought to be used by biological neurons [22, 23].

Finally, we give a simple proof that, in the offline setting when the entire sequence $x$ is known, $f^k$ and $b^k$, given by equations (10, 11), are indeed the errors of optimal $k$-th order linear prediction. Let $D$ be the one-step time delay operator: $(Dx)_t = x_{t-1}$.
The induction statement at $k$ is that $f^k$ and $b^k$ are the $k$-th order forward and backward errors of optimal linear prediction, which is equivalent to $f^k$ and $b^k$ being of the form $f^k = x - w_1^k Dx - \ldots - w_k^k D^k x$ and $b^k = D^k x - w_1'^k D^{k-1} x - \ldots - w_k'^k x$ and, from the normal equations (3), satisfying $\langle f^k D^i x \rangle = 0$ and $\langle D b^k D^i x \rangle = \langle b^k D^{i-1} x \rangle = 0$ for $i = 1, \ldots, k$. That this is true for $k = 1$ follows directly from the definitions of $f^1$ and $b^1$. Now we assume that this is true for $k - 1 \geq 1$ and show it is true for $k$. It is easy to see from the forms of $f^{k-1}$ and $b^{k-1}$ and from $f^k = f^{k-1} - u_k D b^{k-1}$ that $f^k$ has the correct form $f^k = x - w_1^k Dx - \ldots - w_k^k D^k x$. Regarding orthogonality, for $i = 1, \ldots, k-1$ we have $\langle f^k D^i x \rangle = \langle (f^{k-1} - u_k D b^{k-1}) D^i x \rangle = \langle f^{k-1} D^i x \rangle - u_k \langle (D b^{k-1}) D^i x \rangle = 0$, using the induction assumptions of orthogonality at $k - 1$. For the remaining $i = k$, we note that $f^k$ is the error of the optimal linear prediction of $f^{k-1}$ from $D b^{k-1}$ and therefore $0 = \langle f^k D b^{k-1} \rangle = \langle f^k (D^k x - w_1'^{k-1} D^{k-1} x - \ldots - w_{k-1}'^{k-1} D x) \rangle = \langle f^k D^k x \rangle$, as desired. The $b^k$ case can be proven similarly.

2.2 Continuous-time implementation

The last hurdle remaining for modeling neuronal circuits, which operate in continuous time, with a lattice filter is its discrete-time operation. To obtain a continuous-time implementation of the lattice filter we cannot simply take the time step size to zero, as the prediction-error filters would become infinitesimally short. Here, we adapt the discrete-time lattice filter to continuous-time operation in two steps. First, we introduce a discrete-time Laguerre lattice filter [24, 17], which uses Laguerre polynomials rather than the shift operator to generate its basis functions, Figure 2. The input signal passes through a leaky integrator whose leakage constant $\alpha$ defines a time-scale distinct from the time step (14).
A delay, $D$, at every stage is replaced by an all-pass filter, $L$, (15) with the same constant $\alpha$, which preserves the magnitude of every Fourier component of the input but shifts its phase in a frequency-dependent manner. Such an all-pass filter reduces to a single time-step delay when $\alpha = 0$. The optimality of a general discrete-time Laguerre lattice filter can be proven similarly to that of the discrete-time filter, simply by replacing the operator $D$ with $L$ in the proof of Section 2.1.

Figure 2: Continuous-time lattice filter using Laguerre polynomials. Compared to the discrete-time version, it contains a leaky integrator, $L_0$, (16) and replaces delays with all-pass filters, $L$, (17).

Second, we obtain a continuous-time formulation of the lattice filter by replacing $t - 1 \to t - \delta t$, defining the inverse time scale $\gamma = (1 - \alpha)/\delta t$ and taking the limit $\delta t \to 0$ while keeping $\gamma$ fixed. As a result, $L_0$ and $L$ are given by:

Discrete time:
$$L_0(x)_t = \alpha L_0(x)_{t-1} + x_t \quad (14)$$
$$L(x)_t = \alpha (L(x)_{t-1} - x_t) + x_{t-1} \quad (15)$$

Continuous time:
$$dL_0(x)/dt = -\gamma L_0(x) + x \quad (16)$$
$$L(x) = x - 2\gamma L_0(x) \quad (17)$$

Representative impulse responses of the continuous Laguerre filter are shown in Figure 2. Note that, similarly to the discrete-time case, the area under the first (peak) phase is greater than the area under the second (rebound) phase in the forward branch, and the opposite is true in the backward branch. Moreover, the temporal extent of the rebound is greater than that of the peak, not just in the forward branch as in the basic discrete-time implementation, but also in the backward branch. As will be seen in the next section, these predictions are confirmed by physiological recordings.

3 Experimental evidence for the lattice filter in visual pathways

In this section we demonstrate that physiological measurements from visual pathways in vertebrates and invertebrates are consistent with the predictions of the lattice filter model.
For the purpose of modeling visual pathways, we identify the summation units of the lattice filter with neurons and propose that neural activity represents forward and backward errors. In the fly visual pathway, neuronal activity is represented by continuously varying graded potentials. In the vertebrate visual system, all neurons starting with ganglion cells are spiking, and we identify their firing rate with the activity in the lattice filter.

3.1 Mammalian visual pathway

In mammals, visual processing is performed in stages. In the retina, photoreceptors synapse onto bipolar cells, which in turn synapse onto retinal ganglion cells (RGCs). RGCs send axons to the LGN, where they synapse onto LGN relay neurons projecting to the primary visual cortex, V1. In addition to this feedforward pathway, at each stage there are local circuits involving (usually inhibitory) inter-neurons, such as horizontal and amacrine cells in the retina. Neurons of each class come in many types, which differ in their connectivity, morphology and physiological response. The bewildering complexity of these circuits has posed a major challenge to visual neuroscience.

Figure 3: Electrophysiologically measured temporal receptive fields get progressively longer along the cat visual pathway. Left: A cat LGN cell (red) has a longer receptive field than a corresponding RGC cell (blue) (adapted from [12], which also reports population data). Right (A, B): The extent of the temporal receptive fields of simple cells in cat V1 is greater than that of corresponding LGN cells, as quantified by the peak (A) and zero-crossing (B) times. Right (C): In the temporal receptive fields of cat LGN and V1 cells, the peak can be stronger or weaker than the rebound (adapted from [25]).

Here, we point out several experimental observations related to temporal processing in the visual system that are consistent with the lattice filter model. First, measurements of temporal receptive fields demonstrate that they get progressively longer at each consecutive stage: i) LGN neurons have longer receptive fields than corresponding pre-synaptic ganglion cells [12], Figure 3 left; ii) simple cells in V1 have longer receptive fields than corresponding pre-synaptic LGN neurons [25], Figure 3 right (A, B). These observations are consistent with the progressively greater temporal extent of the prediction-error filters (blue plots in Figure 2). Second, the weight of the peak (integrated area under the curve) may be either greater or less than that of the rebound, both in LGN relay cells [26] and simple cells of V1 [25], Figure 3 right (C).
Neurons with peak weight exceeding that of the rebound are often referred to as non-lagged, while the others are known as lagged; both are found in cat [27, 28, 29] and monkey [30]. The reason for this terminology becomes clear from the response to a step stimulus, Figure 4 (top). By comparing experimentally measured receptive fields with those of the continuous lattice filter, Figure 4, we identify non-lagged neurons with the forward branch and lagged neurons with the backward branch. Another way to characterize the step-stimulus response is whether the sign of the transient is the same (non-lagged) or different (lagged) relative to the sustained response.

Third, measurements of cross-correlation between RGC and LGN cell spikes in lagged and non-lagged neurons reveal a difference in the transfer function indicative of a difference in underlying circuitry [31]. This is consistent with the backward branch circuit of the Laguerre lattice filter, Figure 2, being different from that of the forward branch (which results in a different transfer function). In particular, a combination of different glutamate receptors, such as AMPA and NMDA, as well as GABA receptors, are thought to be responsible for the observed responses in lagged cells [32]. However, further investigation of the corresponding circuitry, perhaps using connectomics technology, is desirable.

Fourth, the cross-link weights of the lattice filter can be learned using Hebbian rules (12, 13), which are biologically plausible [22, 23].
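A two-stage discrete-time lattice filter with Hebbian cross-link learning, equations (10)-(13), can be sketched as follows. The input signal, learning rate, and stage count are illustrative choices made here, not values from the text:

```python
# Two-stage discrete-time lattice filter (equations 10-11) whose
# cross-link weights u_k, v_k are learned online with the Hebbian
# updates (12)-(13). Signal, eta and stage count are illustrative.
import math
import random

def lattice_filter_hebbian(x, n_stages=2, eta=0.01):
    u = [0.0] * n_stages
    v = [0.0] * n_stages
    b_delayed = [0.0] * n_stages      # stores b^{k-1}_{t-1} per stage
    f_out = []                        # forward error of the last stage
    for xt in x:
        f, b = xt, xt                 # stage inputs: f^0_t = b^0_t = x_t
        for k in range(n_stages):
            f_new = f - u[k] * b_delayed[k]       # forward error (10)
            b_new = b_delayed[k] - v[k] * f       # backward error (11)
            u[k] += eta * f_new * b_delayed[k]    # Hebbian update (12)
            v[k] += eta * b_new * f               # Hebbian update (13)
            b_delayed[k] = b                      # delay b^{k-1} one step
            f, b = f_new, b_new
        f_out.append(f)
    return u, v, f_out

random.seed(0)
# Correlated input: a slow sinusoid plus a little noise.
x = [math.sin(0.2 * t) + 0.1 * random.gauss(0, 1) for t in range(3000)]
u, v, f_out = lattice_filter_hebbian(x)
```

After adaptation, the transmitted forward error has a much smaller variance (dynamic range) than the correlated input, and each update uses only the activities at the two ends of the corresponding cross-link, as a Hebbian rule requires.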
Interestingly, if these weights are learned sequentially, starting from the first stage, they do not need to be re-learned when additional stages are added or learned. This property maps naturally onto the fact that, in the course of mammalian development, the visual pathway matures in a stage-wise fashion, starting with the retina, then the LGN, then V1, implying that the more peripheral structures do not need to adapt to the maturation of the downstream ones.

Figure 4: Comparison of electrophysiologically measured responses of cat LGN cells with the continuous-time lattice filter model. Top: Experimentally measured temporal receptive fields and step-stimulus responses of LGN cells (adapted from [26]). Bottom: Typical examples of responses in the continuous-time lattice filter model. Lattice filter coefficients were $u_1 = v_1 = 0.4$, $u_2 = v_2 = 0.2$ and $1/\gamma = 50$ ms to model the non-lagged cell, and $u_1 = v_1 = u_2 = v_2 = 0.2$ and $1/\gamma = 60$ ms to model the lagged cell. To model the photoreceptor contribution to the responses, an additional leaky integrator $L_0$ was added to the circuit of Figure 2.

While Hebbian rules are biologically plausible, one may get the impression from Figure 2 that they must apply to inhibitory cross-links. We point out that this circuit is meant to represent only the computation performed, rather than the specific implementation in terms of neurons. As the same linear computation can be performed by circuits with different arrangements of the same components, there are multiple implementations of the lattice filter. For example, the activity of non-lagged OFF cells may be seen as representing minus the forward error. Then the cross-links between the non-lagged OFF pathway and the lagged ON pathway would be excitatory. In general, the classification of cells into lagged and non-lagged seems independent of their ON/OFF and X/Y classification [31, 28, 29], but see [33].
3.2 Insect visual pathway

In insects, two cell types, L1 and L2, both post-synaptic to photoreceptors, play an important role in visual processing. Physiological responses of L1 and L2 indicate that they decorrelate visual signals by subtracting their predictable parts. In fact, the receptive fields of these neurons were used as the first examples of predictive coding in neuroscience [6]. Yet, as the numbers of synapses from photoreceptors to L1 and L2 are the same [34] and their physiological properties are similar, it has been a mystery why insects have not just one but a pair of such seemingly redundant neurons per facet. Previously, it was suggested that L1 and L2 provide inputs to two pathways that map onto the ON and OFF pathways of the vertebrate retina [35, 36].

Here, we put forward the hypothesis that the role of L1 and L2 in visual processing is similar to that of the two branches of the lattice filter. We do not incorporate the ON/OFF distinction into the effectively linear lattice filter model but anticipate that such a combined description will materialize in the future. As was argued in Section 2, in forward prediction-error filters the peak has greater weight than the rebound, while in backward prediction-error filters the opposite is true. This difference implies that, in response to a step stimulus, the signs of the sustained responses relative to the initial transients differ between the branches. Indeed, Ca2+ imaging shows that the responses of L1 and L2 to a step stimulus differ as predicted by the lattice filter model [35], Figure 5b. Interestingly, the activity of L1 seems to represent minus the forward error and L2 plus the backward error, suggesting that the lattice filter cross-links are excitatory. To summarize, the predictions of the lattice filter model seem to be consistent with the physiological measurements in the fly visual system and may help understand its operation.
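The sign difference between the branches can be checked numerically with the first-order discrete-time lattice filter driven by a step stimulus (the coefficients $u_1 = v_1 = 0.4$ are illustrative):

```python
# Step response of a first-order discrete-time lattice filter: the
# forward-error transient has the same sign as its sustained response,
# while the backward-error transient has the opposite sign.
# Coefficients u1 = v1 = 0.4 are illustrative.
def step_response(u1, v1, n=20):
    x = [0.0] * 5 + [1.0] * n          # step stimulus
    f = [x[t] - u1 * x[t - 1] for t in range(1, len(x))]
    b = [x[t - 1] - v1 * x[t] for t in range(1, len(x))]
    return f, b

f, b = step_response(0.4, 0.4)
# Forward branch: transient 1.0, sustained 0.6 (same sign).
# Backward branch: transient -0.4, sustained 0.6 (opposite sign).
```

This is the qualitative pattern reported for L1 (forward-error-like) versus L2 (backward-error-like) responses.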
Figure 5: Response of the lattice filter and fruit fly LMCs to a step stimulus. Left: Responses of the first-order discrete-time lattice filter to a step stimulus. Right: Responses of fly L1 and L2 cells to a moving step stimulus (adapted from [35]). The predicted and the experimentally measured responses have qualitatively the same shape: a transient followed by a sustained response, which has the same sign for the forward error and L1, and the opposite sign for the backward error and L2.

4 Discussion

Motivated by the cascade structure of the visual pathway, we propose to model its operation with the lattice filter. We demonstrate that the predictions of the continuous-time lattice filter model are consistent with the course of neural development and with physiological measurements in the LGN and V1 of cat and monkey, as well as in fly LMC neurons. Therefore, lattice filters may offer a useful abstraction for understanding aspects of temporal processing in the visual systems of vertebrates and invertebrates.

Previously, [11] proposed that lagged and non-lagged cells could be a result of rectification by spiking neurons. Although we agree with [11] that the LGN performs temporal decorrelation, our explanation does not rely on non-linear processing but rather on the cascade architecture and, hence, is fundamentally different. Our model generates the following predictions that are not obvious in [11]: i) not only are LGN receptive fields longer than RGC receptive fields, but V1 receptive fields are also longer than those of the LGN; ii) even a linear model can generate a difference in the peak/rebound ratio; iii) the circuit from RGC to LGN should be different for lagged and non-lagged cells, consistent with [31]; iv) the lattice filter circuit can self-organize using Hebbian rules, which gives a mechanistic explanation of receptive fields beyond the normative framework of [11].
In light of the redundancy reduction arguments given in the introduction, we note that, if the only goal of the system were to compress incoming signals using a given number of lattice filter stages, then after the compression is performed only one kind of prediction error, forward or backward, needs to be transmitted. Therefore, having two channels may seem redundant in the absence of noise. However, transmitting both forward and backward errors gives one the flexibility to continue decorrelation further by adding stages performing relatively simple operations. We are grateful to D.A. Butts, E. Callaway, M. Carandini, D.A. Clark, J.A. Hirsch, T. Hu, S.B. Laughlin, D.N. Mastronarde, R.C. Reid, H. Rouault, A. Saul, L. Scheffer, F.T. Sommer, X. Wang for helpful discussions.
References
[1] F. Rieke, D. Warland, R.R. van Steveninck, and W. Bialek. Spikes: exploring the neural code. MIT Press, 1999. [2] S.B. Laughlin. Matching coding, circuits, cells, and molecules to signals: general principles of retinal design in the fly’s eye. Progress in Retinal and Eye Research, 13(1):165–196, 1994. [3] F. Attneave. Some informational aspects of visual perception. Psychological Review, 61(3):183, 1954. [4] H. Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12(3):241–253, 2001. [5] R.M. Gray. Linear Predictive Coding and the Internet Protocol. Now Publishers, 2010. [6] M.V. Srinivasan, S.B. Laughlin, and A. Dubs. Predictive coding: a fresh view of inhibition in the retina. Proceedings of the Royal Society of London. Series B. Biological Sciences, 216(1205):427–459, 1982. [7] T. Hosoya, S.A. Baccus, and M. Meister. Dynamic predictive coding by the retina. Nature, 436:71, 2005. [8] H.K. Hartline, H.G. Wagner, and E.F. MacNichol Jr. The peripheral origin of nervous activity in the visual system. Studies on excitation and inhibition in the retina: a collection of papers from the laboratories of H. Keffer Hartline, page 99, 1974. [9] N.A. Lesica, J. Jin, C.
Weng, C.I. Yeh, D.A. Butts, G.B. Stanley, and J.M. Alonso. Adaptation to stimulus contrast and correlations during natural visual stimulation. Neuron, 55(3):479–491, 2007. [10] Y. Dan, J.J. Atick, and R.C. Reid. Efficient coding of natural scenes in the lateral geniculate nucleus: experimental test of a computational theory. The Journal of Neuroscience, 16(10):3351–3362, 1996. [11] D.W. Dong and J.J. Atick. Statistics of natural time-varying images. Network: Computation in Neural Systems, 6(3):345–358, 1995. [12] X. Wang, J.A. Hirsch, and F.T. Sommer. Recoding of sensory information across the retinothalamic synapse. The Journal of Neuroscience, 30(41):13567–13577, 2010. [13] C. Koch. Biophysics of computation: information processing in single neurons. Oxford Univ Press, 2005. [14] F. Itakura and S. Saito. On the optimum quantization of feature parameters in the parcor speech synthesizer. In Conference Record, 1972 International Conference on Speech Communication and Processing, Boston, MA, pages 434–437, 1972. [15] B. Widrow and S.D. Stearns. Adaptive signal processing. Prentice-Hall, Inc. Englewood Cliffs, NJ, 1985. [16] S. Haykin. Adaptive filter theory. Prentice-Hall, Englewood-Cliffs, NJ, 2003. [17] A.H. Sayed. Fundamentals of adaptive filtering. Wiley-IEEE Press, 2003. [18] D.J. Felleman and D.C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral cortex, 1(1):1–47, 1991. [19] X. Wang, F.T. Sommer, and J.A. Hirsch. Inhibitory circuits for visual processing in thalamus. Current Opinion in Neurobiology, 2011. [20] SB Laughlin, J. Howard, and B. Blakeslee. Synaptic limitations to contrast coding in the retina of the blowfly calliphora. Proceedings of the Royal society of London. Series B. Biological sciences, 231(1265):437–467, 1987. [21] D.C. Lay. Linear Algebra and Its Applications. Addison-Wesley/Longman, New York/London, 2000. [22] D.O. Hebb. The organization of behavior: A neuropsychological theory. 
Lawrence Erlbaum, 2002. [23] O. Paulsen and T.J. Sejnowski. Natural patterns of activity and long-term synaptic plasticity. Current opinion in neurobiology, 10(2):172–180, 2000. [24] Z. Fejzo and H. Lev-Ari. Adaptive laguerre-lattice filters. Signal Processing, IEEE Transactions on, 45(12):3006–3016, 1997. [25] J.M. Alonso, W.M. Usrey, and R.C. Reid. Rules of connectivity between geniculate cells and simple cells in cat primary visual cortex. The Journal of Neuroscience, 21(11):4002–4015, 2001. [26] D. Cai, G.C. Deangelis, and R.D. Freeman. Spatiotemporal receptive field organization in the lateral geniculate nucleus of cats and kittens. Journal of Neurophysiology, 78(2):1045–1061, 1997. [27] D.N. Mastronarde. Two classes of single-input x-cells in cat lateral geniculate nucleus. i. receptive-field properties and classification of cells. Journal of Neurophysiology, 57(2):357–380, 1987. [28] J. Wolfe and L.A. Palmer. Temporal diversity in the lateral geniculate nucleus of cat. Visual neuroscience, 15(04):653–675, 1998. [29] AB Saul and AL Humphrey. Spatial and temporal response properties of lagged and nonlagged cells in cat lateral geniculate nucleus. Journal of Neurophysiology, 64(1):206–224, 1990. [30] A.B. Saul. Lagged cells in alert monkey lateral geniculate nucleus. Visual neurosci, 25:647–659, 2008. [31] D.N. Mastronarde. Two classes of single-input x-cells in cat lateral geniculate nucleus. ii. retinal inputs and the generation of receptive-field properties. Journal of Neurophysiology, 57(2):381–413, 1987. [32] P. Heggelund and E. Hartveit. Neurotransmitter receptors mediating excitatory input to cells in the cat lateral geniculate nucleus. i. lagged cells. Journal of neurophysiology, 63(6):1347–1360, 1990. [33] J. Jin, Y. Wang, R. Lashgari, H.A. Swadlow, and J.M. Alonso. Faster thalamocortical processing for dark than light visual targets. The Journal of Neuroscience, 31(48):17471–17479, 2011. [34] M. Rivera-Alba, S.N. Vitaladevuni, Y. Mischenko, Z. Lu, S. 
Takemura, L. Scheffer, I.A. Meinertzhagen, D.B. Chklovskii, and G.G. de Polavieja. Wiring economy and volume exclusion determine neuronal placement in the Drosophila brain. Current Biology, 21(23):2000–5, 2011. [35] D.A. Clark, L. Bursztyn, M.A. Horowitz, M.J. Schnitzer, and T.R. Clandinin. Defining the computational structure of the motion detector in Drosophila. Neuron, 70(6):1165–1177, 2011. [36] M. Joesch, B. Schnell, S.V. Raghu, D.F. Reiff, and A. Borst. ON and OFF pathways in Drosophila motion vision. Nature, 468(7321):300–304, 2010.
|
2012
|
82
|
4,801
|
Multi-scale Hyper-time Hardware Emulation of Human Motor Nervous System Based on Spiking Neurons using FPGA
C. Minos Niu, Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, minos.niu@sangerlab.net
Sirish K. Nandyala, Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, nandyala@usc.edu
Won Joon Sohn, Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, wonjsohn@gmail.com
Terence D. Sanger, Departments of Biomedical Engineering, Neurology, and Biokinesiology, University of Southern California, Los Angeles, CA 90089, terry@sangerlab.net
Abstract
Our central goal is to quantify the long-term progression of pediatric neurological diseases, such as the typical 10-15 year progression of childhood dystonia. To this end, quantitative models are convincing only if they can provide multi-scale details ranging from neuron spikes to limb biomechanics. The models also need to be evaluated in hyper-time, i.e. significantly faster than real-time, to produce useful predictions. We designed a platform with digital VLSI hardware for multi-scale hyper-time emulation of the human motor nervous system. The platform is constructed on a scalable, distributed array of Field Programmable Gate Array (FPGA) devices. All devices operate asynchronously with 1 millisecond time granularity, and the overall system is accelerated to 365x real-time. Each physiological component is implemented using models from well-documented studies and can be flexibly modified. Thus the validity of the emulation can be readily assessed by neurophysiologists and clinicians. To maximize the speed of emulation, all calculations are implemented in combinational logic instead of clocked iterative circuits. This paper presents the methodology of building FPGA modules emulating a monosynaptic spinal loop. Emulated activities are qualitatively similar to real human data.
Also discussed is the rationale of approximating neural circuitry by organizing neurons with sparse interconnections. In conclusion, our platform allows emulating pathological abnormalities such that motor symptoms will emerge and can be analyzed. It enables us to probe the origins of childhood motor disorders and predict their long-term progressions.
1 Challenges of studying developmental motor disorders
There is currently no quantitative model of how a neuropathological condition, which mainly affects the function of neurons, ends up causing the functional abnormalities identified in clinical examinations. The gap in knowledge is particularly evident for disorders in developing human nervous systems, i.e. childhood neurological diseases. In these cases, the ultimate clinical effect of cellular injury is compounded by a complex interplay among the child's injury, development, behavior, experience, plasticity, etc. Clinical experience has provided qualitative insight into the association between particular types of injury and particular types of outcome. Their quantitative linkages, nevertheless, have yet to be established, neither in the clinic nor in cellular physiological tests. This discrepancy is even more prominent for individual child patients, which makes it very difficult to estimate the efficacy of treatment plans. In order to understand the consequences of injury and discover new treatments, it is necessary to create a modeling toolset with certain design guidelines, such that childhood neurological diseases can be quantitatively analyzed. Perhaps more than any other organ, the brain necessarily operates on multiple spatial and temporal scales. On the one hand, it is the neurons that perform fundamental computations, but neurons have to interact with large-scale organs (ears, eyes, skeletal muscles, etc.) to achieve global functions.
This multi-scale nature deserves particular attention for injuries, where the overall deficits depend on both the cellular effects of the injury and its propagated consequences. On the other hand, neural processes in developmental diseases operate on drastically different time scales, e.g. spinal reflexes in milliseconds versus learning in years. Thus, when studying motor nervous systems, mathematical modeling is convincing only if it can provide multi-scale details, ranging from neuron spikes to limb biomechanics; moreover, the models should be evaluated with time granularity as small as 1 millisecond, while the evaluation needs to continue for trillions of cycles in order to cover years of life. It is particularly challenging to describe the multi-scale nature of the human nervous system when modeling childhood movement disorders. Note that for a child who suffered brain injury at birth, the full development of all motor symptoms may easily take more than 10 years. Therefore the millisecond-based model needs to be evaluated significantly faster than real-time, otherwise the model will fail to produce any useful predictions in time. We have implemented realistic models for spiking motoneurons, sensory neurons, neural circuitry, muscle fibers and proprioceptors using VLSI and programmable logic technologies. All models are computed in Field Programmable Gate Array (FPGA) hardware at 365 times real-time. Therefore one year of disease progression can be assessed after one day of emulation. This paper presents the methodology of building the emulation platform. The results demonstrate that our platform is capable of producing physiologically realistic multi-scale signals, which are usually scarce in experiments. Successful emulations enabled by this platform will be used to verify theories of neuropathology. New treatment mechanisms and drug effects can also be emulated before animal experiments or clinical trials.
2 Methodology of multi-scale neural emulation
Figure 1: Illustration of the multi-scale nature of the motor nervous system. A: Human arm. B: Monosynaptic spinal loop. C: Inner structure of the muscle spindle (bag 1, bag 2 and chain fibers; gamma dynamic and gamma static inputs; primary and secondary outputs).
The motor part of the human nervous system is responsible for maintaining body postures and generating voluntary movements. The multi-scale nature of the motor nervous system is demonstrated in Fig.1. When the elbow (Fig.1A) is maintaining a posture or performing a movement, a force is established by the involved muscle based on how much spiking excitation the muscle receives from its α-motoneurons (Fig.1B). The α-motoneurons are regulated by a variety of sensory inputs, part of which comes directly from the proprioceptors in the muscle. As the primary proprioceptor found in skeletal muscles, a muscle spindle is another complex system that has its own microscopic Multiple-Input-Multiple-Output structure (Fig.1C). Spindles continuously provide information about the length and lengthening speed of the muscle fiber. A muscle with its regulating motoneurons, sensory neurons and proprioceptors constitutes a monosynaptic spinal loop. This minimalist neurophysiological structure is used as an example for explaining the multi-scale hyper-time emulation in hardware. Additional structures can be added to the backbone set-up using similar methodologies.
2.1 Modularized architecture for multi-scale models
Decades of studies on neurophysiology have provided an abundance of models characterizing different components of the human motor nervous system. The informational characteristics of physiological components allowed us to model them as functional structures, i.e. black boxes, each of which converts input signals to certain outputs.
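To make the interface discipline concrete, here is a minimal Python sketch of such a chain of black-box components, following the functional signatures formalized in Table 1. The internal dynamics (gains, thresholds, the numerical inputs) are toy stand-ins of our own choosing; only the input/output interfaces follow the design described here.

```python
# Sketch of the modular signal chain: stretch -> spindle -> sensory neuron
# -> synapse -> motoneuron -> muscle. Each component is a black box mapping
# inputs to outputs; all gains and thresholds below are illustrative.
def spindle(L, dL, gamma_dyn=0.0, gamma_static=0.0):
    return max(0.0, 2.0 * L + 0.5 * dL)   # afferent activity A(t), toy gain

def neuron(I, threshold=1.0):
    return 1 if I >= threshold else 0      # binary spike S(t)

def synapse(S, gain=1.5):
    return gain * S                        # post-synaptic current I(t)

def muscle(S, L, dL, max_force=10.0):
    return max_force * S                   # force T(t), toy twitch

# One pass around the monosynaptic loop for a stretched muscle:
L, dL = 1.2, 0.3
A = spindle(L, dL)
S_sensory = neuron(A)
I_post = synapse(S_sensory)
S_motor = neuron(I_post)
T = muscle(S_motor, L, dL)
```

The point of the sketch is that any component can be swapped for a more faithful model (e.g. the Izhikevich neuron below) without touching the rest of the chain, which is the property the modular architecture is designed to provide.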
In particular, within the monosynaptic spinal loop illustrated in Fig.1B, stretching the muscle will elicit a chain of physiological activities: muscle stretch ⇒ spindle ⇒ sensory neuron ⇒ synapse ⇒ motoneuron ⇒ muscle contraction. Adjacent components must have compatible interfaces, and the interfacing variables must also be physiologically realistic. In our design, each component is mathematically described in Table 1:
Table 1: Functional definition of neural models
Neuron: S(t) = fneuron(I, t)
Synapse: I(t) = fsynapse(S, t)
Muscle: T(t) = fmuscle(S, L, L̇, t)
Spindle: A(t) = fspindle(L, L̇, Γdynamic, Γstatic, t)
All components are modeled as black-box functions that map the inputs to the outputs. The meanings of these mathematical definitions are explained below. This design allows existing physiological models to be easily inserted and switched. In all models the input signals are time-varying, e.g. I = I(t), L = L(t), etc. The argument t of the input signals is omitted throughout this paper.
2.2 Selection of models for emulation
Models were selected in consideration of their computational cost, their physiological verisimilitude, and whether they could be adapted to the mathematical form defined in Table 1.
Model of Neuron. The informational process for a neuron is to take the post-synaptic current I as the input and produce a binary spike train S as the output. The neuron model adopted in the emulation was developed by Izhikevich [1]:
v′ = 0.04v² + 5v + 140 − u + I (1)
u′ = a(bv − u) (2)
if v ≥ 30 mV, then v ← c, u ← u + d
where a, b, c, d are free parameters tuned to achieve certain firing patterns. The membrane potential v directly determines the binary spike train S(t): S(t) = 1 if v ≥ 30, otherwise S(t) = 0. Note that v in the Izhikevich model is in millivolts and time t is in milliseconds. Therefore the coefficients in eq.1 need to be adjusted in correspondence to SI units.
Model of Synapse. When a pre-synaptic neuron spikes, i.e.
S(0) = 1, an excitatory synapse subsequently issues an Excitatory Post-Synaptic Current (EPSC) that drives the post-synaptic neuron. Neural recordings of hair cells in rats [2] provided evidence that the time profile of the EPSC can be well characterized by the equation below:
I(t) = Vm × (e^(−t/τd) − e^(−t/τr)) if t ≥ 0, and I(t) = 0 otherwise (3)
The key parameters in a synapse model are the time constants for rising (τr) and decaying (τd). In our emulation τr = 0.001 s and τd = 0.003 s.
Model of Muscle force and electromyograph (EMG). The primary effect of skeletal muscle is to convert α-motoneuron spikes S into force T, depending on the muscle's instantaneous length L and lengthening speed L̇. We used Hill's muscle model in the emulation, with parameter tuning described in [3]. Another measurable output of muscle is the electromyograph (EMG). EMG is the small skin current polarized by the motor unit action potential (MUAP) as it travels along muscle fibers. Models exist to describe the typical waveform picked up by surface EMG electrodes; in this project we chose to implement the one described in [4].
Model of Proprioceptor. The spindle is a sensory organ that provides the main source of proprioceptive information. As can be seen in Fig.1C, a spindle typically produces two afferent outputs (primary Ia and secondary II) according to its gamma fusimotor drives (Γdynamic and Γstatic) and muscle states (L and L̇). There is currently no closed-form model describing spindle function due to the spindle's significant nonlinearity. One representative model that numerically approximates the spindle dynamics was developed by Mileusnic et al. [5]. The model uses differential equations to characterize a typical cat soleus spindle. Eqs.4-10 present a subset of this model for one type of spindle fiber (bag1):
ẋ0 = (Γdynamic / (Γdynamic + Ω²bag1) − x0) / τ (4)
ẋ1 = x2 (5)
ẋ2 = (1/M) [TSR − TB − TPR − Γ1 x0] (6)
where
TSR = KSR (L − x1 − LSR0) (7)
TB = (B0 + B1 x0) · (x1 − R) · CSS · |x2|^0.3 (8)
TPR = KPR (x1 − LPR0) (9)
CSS = 2 / (1 + e^(−1000 x2)) − 1 (10)
Eqs.8 and 10 show that evaluating the spindle model requires multiplication and division as well as more complex arithmetic such as polynomials and exponentials. The implementation details are described in Section 3.
2.3 Neuron connectivity with sparse interconnections
Although the number of spinal neurons (~1 billion) is significantly smaller than that of cortical neurons (~100 billion), a fully connected spinal network would still contain approximately 2 trillion synaptic endings [6]. Implementing such a huge number of synapses poses a major, if not insurmountable, challenge given limited hardware resources. In this platform we approximated the neural connectivity by sparsely connecting sensory neurons to motoneurons as parallel pathways; we do not attempt to reproduce the full connectivity. The rationale is that in a neural control system, the effect of a single neuron can be considered as mapping the current state x to the change in state ẋ through a band-limited channel. Therefore when a collection of neurons fires stochastically, the probability of ẋ depends on both x and the firing behavior s (s = 1 when spiking, otherwise s = 0) of each neuron:
p(ẋ|x, s) = p(ẋ|s = 1)p(s = 1|x) + p(ẋ|s = 0)p(s = 0|x) (11)
Eq.11 is a master equation that determines a probability flow on the state. From the Kramers-Moyal expansion we can associate this probability flow with a partial differential equation:
∂p(x, t)/∂t = Σ_{i=1}^{∞} (−∂/∂x)^i [D^(i)(x) p(x, t)] (12)
where D^(i)(x) is a time-invariant term that modifies the change of the probability density based on its i-th gradient.
Under certain conditions [7, 8], D^(i)(x) for i > 2 all vanish and the probability flow can therefore be described deterministically using a linear operator L:
∂p(x, t)/∂t = [−(∂/∂x) D^(1)(x) + (∂²/∂x²) D^(2)(x)] p(x, t) = L p(x, t) (13)
This means that various Ls can be superimposed to achieve complex system dynamics (illustrated in Fig.2A).
Figure 2: Functions of a neuron population can be described as the combination of linear operators (A). Therefore the original neural function can be equivalently produced by sparsely connected neurons forming parallel pathways (B).
As a consequence, the statistical effect of two fully connected neuron populations is equivalent to that of populations that are only sparsely connected, as long as the probability flow can be described by the same L. For a movement task, in particular, it is the statistical effect of the neuron ensemble on the skeletal muscles that determines the global behavior. Therefore we argue that it is feasible to approximate the spinal cord connectivity by sparsely interconnecting sensory and motor neurons (Fig.2B). Here a pool of homogeneous sensory neurons projects to another pool of homogeneous α-motoneurons. Pseudorandom noise is added to the input of all homogeneous neurons within a population. It is worth noting that this approximation significantly reduces the number of synapses that need to be implemented in hardware.
3 Hardware implementation on FPGA
We selected the FPGA as the implementation device due to its inherent parallelism, which resembles the nervous system. FPGAs are favored over GPUs or clustered CPUs because it is relatively easy to network hundreds of nodes under flexible protocols. The platform is distributed over multiple nodes of Xilinx Spartan-6 devices. The interfacing among FPGAs and computers is implemented using OpalKelly XEM6010 development boards.
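The neuron model of Eqs. 1-2 can be cross-checked in floating-point software before it is committed to fixed-point hardware. The sketch below is our own illustration, not the platform's code: it uses simple forward Euler steps at 1 ms (the platform itself uses a backward Euler integrator), and the regular-spiking parameters together with the constant input I = 10 are illustrative choices.

```python
# Forward-Euler sketch of the Izhikevich neuron (Eqs. 1-2); dt in ms, v in mV.
# a, b, c, d are regular-spiking parameters from Izhikevich [1]; the constant
# input I = 10 is an illustrative choice, not a value from the paper.
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, steps=1000, dt=1.0):
    v, u = c, b * c          # membrane potential and recovery variable
    S = []                   # binary spike train S(t)
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike detected: emit 1 and reset
            S.append(1)
            v, u = c, u + d
        else:
            S.append(0)
    return S

S = izhikevich()             # one second of simulated activity
```

A regular-spiking cell driven by a sustained current should fire repeatedly, so the returned train contains both zeros and occasional ones.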
The dynamic range of variables is tight in the models of the Izhikevich neuron, the synapse and the EMG. This helps maintain the accuracy of the models even when they are evaluated in 32-bit fixed-point arithmetic. The spindle model, in contrast, requires floating-point arithmetic due to its wide dynamic range and complex calculations (see eqs.4-10). Hyper-time computations with floating-point numbers are resource consuming and therefore need to be implemented with special attention.
3.1 Floating-point arithmetic in combinational logic
Our arithmetic implementations are compatible with the IEEE-754 standard. Typical floating-point arithmetic IP cores are either pipelined or based on iterative algorithms such as CORDIC, all of which require clocks to schedule the calculation. In our platform, no clock is provided for model evaluations, so all arithmetic needs to be executed in pure combinational logic. Taking advantage of combinational logic allows all model evaluations to be 1) fast: the evaluation time depends entirely on the propagation and settling time of signals, which is on the order of microseconds; and 2) parallel: each model is evaluated on its own circuit without waiting for any other results. Our implementations of the adder and multiplier are inspired by the open source project “Free Floating-Point Madness”, available at http://www.hmc.edu/chips/. Please contact the authors of this paper if the modified code is needed.
Fast combinational floating-point division. Floating-point division is even more resource demanding than multiplication. We avoided directly implementing a division algorithm by approximating it with additions and multiplications. Our approach is inspired by an algorithm described in [9], which provides a good approximation of the inverse square root for any positive number x within one Newton-Raphson iteration:
Q(x) = 1/√x ≈ x(1.5 − (x/2) · x²) (x > 0) (14)
Q(x) can be implemented using only floating-point adders and multipliers.
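A software sketch of this scheme (our own illustration, not the hardware netlist): the Newton-Raphson step of Eq. 14 is written here with an explicit initial estimate y0, since in the hardware that estimate would come from the bit-level trick of [9]; the iteration count n = 4 is an illustrative choice.

```python
# Newton-Raphson refinement of 1/sqrt(x) using only adds and multiplies (Eq. 14),
# then division as a * Q(b) * Q(b) (Eq. 15). y0 is an initial estimate of
# 1/sqrt(x); convergence requires 0 < y0 < sqrt(3/x).
def q_refine(x, y0, n=4):
    y = y0
    for _ in range(n):
        y = y * (1.5 - 0.5 * x * y * y)   # one add/mul-only refinement step
    return y

def divide(a, b, y0, n=4):
    q = q_refine(b, y0, n)                # q ~= 1/sqrt(b), b > 0
    return a * q * q                      # a/b = a * Q(b) * Q(b)

inv_sqrt = q_refine(4.0, 0.3)             # converges toward 0.5
ratio = divide(3.0, 4.0, 0.3)             # converges toward 0.75
```

Each refinement step costs one addition and four multiplications, which maps directly onto the combinational adder and multiplier blocks described above.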
Thereby any division with a positive divisor can be achieved by concatenating two blocks of Q(x):
a/b = a/(√b · √b) = a · Q(b) · Q(b) (b > 0) (15)
This algorithm has been adjusted to also work with negative divisors (b < 0).
Numerical integrators for differential equations. Evaluating the instantaneous states of differential equation models requires a fixed-step numerical integrator. The backward Euler method was chosen to balance numerical error and FPGA usage:
ẋ = f(x, t) (16)
x(n+1) = x(n) + T · f(x(n+1), t(n+1)) (17)
where T is the sampling interval and f(x, t) is the derivative function for state variable x.
3.2 Asynchronous spike-based communication between FPGA chips
Figure 3: Timing diagram of asynchronous spike-based communication.
FPGA nodes are networked by transferring 1-bit binary spikes to each other. Our design allows the sender and the receiver to operate on independent clocks without having to synchronize. The timing diagram of the spike-based communication is shown in Fig.3. The sender issues Spike with a pulse width of 1/(365 × Femu) seconds. Each Spike then triggers a counting event on the receiver, while each Clock first reads the accumulated spike count and subsequently clears the counter. Note that the phase difference between Spike and Clock is not predictable due to the asynchronicity.
3.3 Serialize neuron evaluations within a homogeneous population
Different neuron populations are instantiated as standalone circuits. Within each population, however, the homogeneous neurons mentioned in Section 2.3 are evaluated in series in order to optimize FPGA usage. Within each FPGA node all modules operate with a central clock, which is the only source allowed to trigger any updating event.
Therefore the maximal number of neurons that can be serialized (Nserial) is constrained by the following relationship:
Ffpga = C × Nserial × 365 × Femu (18)
Here Ffpga is the fastest clock rate that an FPGA can operate at; C = 4 is the minimal number of clock cycles needed for updating each state variable in the on-chip memory; Femu = 1 kHz is the time granularity of the emulation (1 millisecond); and 365 × Femu represents 365x real-time. Considering that Xilinx Spartan-6 FPGA devices peak at a 200 MHz central clock frequency, the theoretical maximum number of neurons that can be serialized is
Nserial ⩽ 200 MHz/(4 × 365 × 1 kHz) ≈ 137 (19)
In the current design we chose Nserial = 128.
4 Results: emulated activities of motor nervous system
Figure 4 shows the implemented monosynaptic spinal loop in schematics and in operation. Each FPGA node is able to emulate monosynaptic spinal loops consisting of 1,024 sensory and 1,024 motor neurons, i.e. 2,048 neurons in total. The spike-based asynchronous communication is successful between two FPGA nodes. Note that the emulation has to be significantly slowed down for on-line plotting; when the emulation is at full speed (365x real-time) the software front-end is not able to visualize the signals due to limited data throughput.
Figure 4: The neural emulation platform in operation. Left: Neural circuits implemented for each FPGA node, comprising 2,048 neurons in 8 parallel pathways of 128 SNs and 128 αMNs each (SN = Sensory Neuron; αMN = α-motoneuron). Center: One working FPGA node. Right: Two FPGA nodes networked using the asynchronous spiking protocol.
The emulation platform successfully created multi-scale information when the muscle was externally stretched (Fig.5A). We also tested whether our emulated motor system is able to reproduce the recruitment order and size principles observed in real physiological data.
It is well known that when a voluntary motor command is sent to the α-motoneuron pool, the motor units are recruited in an order in which small ones are recruited first, followed by the big ones [10]. The comparison between our results and real data is shown in Fig.5B, where the top panel shows 20 motor unit activities emulated using our platform, and the bottom panel shows decoded motor unit activities from real human EMG [11]. No qualitative difference was found.
5 Discussion and future work
We designed a hardware platform for emulating multi-scale motor nervous activities in hyper-time. We managed to use a single Xilinx Spartan-6 FPGA node to emulate monosynaptic spinal loops consisting of 2,048 neurons, associated muscles and proprioceptors. The neurons are organized as parallel pathways with sparse interconnections. The emulation is successfully accelerated to 365x real-time. The platform can be scaled by networking multiple FPGA nodes, which is enabled by an asynchronous spike-based communication protocol. The emulated monosynaptic spinal loops are capable of producing reflex-like activities in response to muscle stretch. Our results on motor unit recruitment order are compatible with the physiological data collected in real human subjects. There is a question of whether this stochastic system turns out to be chaotic, especially with accumulated errors from the backward Euler integrator. Note that the firing property of a neuron population is usually stable even with explicit noise [8], and spindle inputs are measured from real robots so the integrator errors are corrected at every iteration. To our knowledge, the system is not critically sensitive to the initial conditions or integrator errors. This question, however, is both interesting and important for in-depth investigation in the future. It has been shown [12] that replicating classic types of spinal interneurons (propriospinal, Ia-excitatory, Ia-inhibitory, Renshaw, etc.)
is sufficient to produce stabilizing responses and rapid reaching movements in a wrist. Our platform will introduce those interneurons to describe the known spinal circuitry in further detail. Physiological models will also be refined as needed. For the purpose of modeling movement behavior or diseases, the Izhikevich model is a good balance between verisimilitude and computational cost. Nevertheless, when testing drug effects along disease progression, neuron models are expected to cover sufficient molecular detail, including how neurotransmitters affect various ion channels. As programmable semiconductor technology advances, we expect to upgrade our neuron model to the Hodgkin-Huxley type. For the muscle models, Hill-type models do not fit the muscle properties accurately enough when the muscle is being shortened, so alternative models will be tested. Other studies have shown that the functional dexterity of human limbs, especially the hands, is critically enabled by the tendon configurations and joint geometry [13]. As a result, if our platform is to be used to understand whether known neurophysiology and biomechanics are sufficient to produce able and pathological movements, it will be necessary to use it to control human-like limbs. Since the emulation speed can be flexibly adjusted from arbitrarily slow to 365x real-time, when run at exactly 1x real-time the platform will function as a digital controller with a 1 kHz refresh rate. The main purpose of the emulation is to learn how certain motor disorders progress during childhood development. This first requires the platform to reproduce motor symptoms that are compatible with clinical observations. For example, it has been suggested that muscle spasticity in rats is associated with decreased soma size of α-motoneurons [14], which presumably reduces the firing threshold of the neurons.
Thus when a lower firing threshold is introduced to the emulated motoneuron pool, EMG patterns similar to those in [15] should be observed. It is also necessary for the symptoms to evolve with neural plasticity. In the current version we presume that the structure of each component remains time invariant. In future work, Spike Timing Dependent Plasticity (STDP) will be introduced such that all components are subject to temporal modification.
Figure 5: A) Physiological activity emulated by each model when the muscle is sinusoidally stretched (stretch, spindle Ia, muscle force, EMG, sensory post-synaptic current, motoneurons). B) Comparing the emulated motor unit recruitment order with real experimental data.
Acknowledgments
The authors thank Dr. Gerald Loeb for helping set up the emulation of the spindle models. This project is supported by NIH NINDS grant R01NS069214-02.
References
[1] Izhikevich, E. M. Simple model of spiking neurons. IEEE Transactions on Neural Networks 14, 1569–1572 (2003). [2] Glowatzki, E. & Fuchs, P. A. Transmitter release at the hair cell ribbon synapse. Nature Neuroscience 5, 147–154 (2002). [3] Shadmehr, R. & Wise, S. P. A Mathematical Muscle Model. In Supplementary documents for “Computational Neurobiology of Reaching and Pointing”, 1–18 (MIT Press, Cambridge, MA, 2005). [4] Fuglevand, A. J., Winter, D. A. & Patla, A. E. Models of recruitment and rate coding organization in motor-unit pools. Journal of Neurophysiology 70, 2470–2488 (1993). [5] Mileusnic, M. P., Brown, I. E., Lan, N. & Loeb, G. E. Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. Journal of Neurophysiology 96, 1772–1788 (2006). [6] Gelfan, S., Kao, G. & Ruchkin, D. S. The dendritic tree of spinal neurons. The Journal of Comparative Neurology 139, 385–411 (1970). [7] Sanger, T. D.
Neuro-mechanical control using differential stochastic operators. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, 4494–4497 (2010). [8] Sanger, T. D. Distributed control of uncertain systems using superpositions of linear operators. Neural computation 23, 1911–1934 (2011). [9] Lomont, C. Fast inverse square root (2003). URL http://www.lomont.org/Math/Papers/ 2003/InvSqrt.pdf. [10] Henneman, E. Relation between size of neurons and their susceptibility to discharge. Science (New York, N.Y.) 126, 1345–1347 (1957). [11] De Luca, C. J. & Hostage, E. C. Relationship between firing rate and recruitment threshold of motoneurons in voluntary isometric contractions. Journal of neurophysiology 104, 1034–1046 (2010). [12] Raphael, G., Tsianos, G. A. & Loeb, G. E. Spinal-like regulator facilitates control of a two-degree-offreedom wrist. The Journal of neuroscience : the official journal of the Society for Neuroscience 30, 9431–9444 (2010). [13] Valero-Cuevas, F. J. et al. The tendon network of the fingers performs anatomical computation at a macroscopic scale. IEEE transactions on bio-medical engineering 54, 1161–1166 (2007). [14] Brashear, A. & Elovic, E. Spasticity: Diagnosis and Management (Demos Medical, 2010), 1 edn. [15] Levin, M. F. & Feldman, A. G. The role of stretch reflex threshold regulation in normal and impaired motor control. Brain research 657, 23–30 (1994). 9
Recognizing Activities by Attribute Dynamics

Weixin Li, Nuno Vasconcelos
Department of Electrical and Computer Engineering
University of California, San Diego
La Jolla, CA 92093, United States
{wel017, nvasconcelos}@ucsd.edu

Abstract

In this work, we consider the problem of modeling the dynamic structure of human activities in the attributes space. A video sequence is first represented in a semantic feature space, where each feature encodes the probability of occurrence of an activity attribute at a given time. A generative model, denoted the binary dynamic system (BDS), is proposed to learn both the distribution and dynamics of different activities in this space. The BDS is a non-linear dynamic system, which extends both the binary principal component analysis (PCA) and classical linear dynamic systems (LDS), by combining binary observation variables with a hidden Gauss-Markov state process. In this way, it integrates the representation power of semantic modeling with the ability of dynamic systems to capture the temporal structure of time-varying processes. An algorithm for learning BDS parameters, inspired by a popular LDS learning method from dynamic textures, is proposed. A similarity measure between BDSs, which generalizes the Binet-Cauchy kernel for LDS, is then introduced and used to design activity classifiers. The proposed method is shown to outperform similar classifiers derived from the kernel dynamic system (KDS) and state-of-the-art approaches for dynamics-based or attribute-based action recognition.

1 Introduction

Human activity understanding has been a research topic of substantial interest in computer vision [1]. Inspired by the success of the popular bag-of-features (BoF) representation on image classification problems, it is frequently based on the characterization of video as a collection of orderless spatiotemporal features [2, 3]. Recently, there have been attempts to extend this representation along two dimensions that we explore in this work.
The first is to introduce richer models for the temporal structure, also known as dynamics, of human actions [4, 5, 6, 7]. This aims to exploit the fact that actions are usually defined as sequences of poses, gestures, or other events over time. While desirable, modeling action dynamics can be a complex proposition, and this can sometimes compromise the robustness of recognition algorithms, or sacrifice their generality, e.g., it is not uncommon for dynamic models to require features specific to certain datasets or action classes [5, 6], or non-trivial forms of pre-processing, such as tracking [8], manual annotation [7], etc. The second dimension, again inspired by recent developments in image classification [9, 10], is to represent actions in terms of intermediate-level semantic concepts, or attributes [11, 12]. This introduces a layer of abstraction that improves the generalization of the representation, enables modeling of contextual relationships [13], and simplifies knowledge transfer across activity classes [11]. In this work, we propose a representation that combines the benefits of these two types of extensions. This consists of modeling the dynamics of human activities in the attributes space. The idea is to exploit the fact that an activity is usually defined as a sequence of semantic events. For example, the activity "storing an object in a box" is defined as the sequence of the action attributes "remove (hand from box)", "grab (object)", "insert (hand in box)", and "drop (object)". The representation of the action as a sequence of these attributes makes the characterization of the "storing object in box" activity more robust (to confounding factors such as diversity of grabbing styles, hand motion speeds, or camera motions) than dynamic representations based on low-level features.
It is also more discriminant than semantic representations that ignore dynamics, i.e., that simply record the occurrence (or frequency) of the action attributes "remove", "grab", "insert", and "drop". In the absence of information about the sequence in which these attributes occur, the "store object in box" activity cannot be distinguished from the "retrieve object from box" activity, defined as the sequence "insert (hand in box)", "grab (object)", "remove (hand from box)", and "drop (object)". In summary, the modeling of attribute dynamics is 1) more robust and flexible than the modeling of visual (low-level) dynamics, and 2) more discriminant than the modeling of attribute frequencies. In this work, we address the problem of modeling attribute dynamics for activities. As is usual in semantics-based recognition [11], we start by representing video in a semantic feature space, where each feature encodes the probability of occurrence of an action attribute in the video, at a given time. We then propose a generative model, denoted the binary dynamic system (BDS), to learn both the distribution and dynamics of different activities in this space. The BDS is a non-linear dynamic system, which combines binary observation variables with a hidden Gauss-Markov state process. It can be interpreted as either 1) a generalization of binary principal component analysis (binary PCA) [14], which accounts for data dynamics, or 2) an extension of the classical linear dynamic system (LDS), which operates on a binary observation space. For activity recognition, the BDS has the appeal of accounting for the two distinguishing properties of the semantic activity representation: 1) that semantic vectors define probability distributions over a space of binary attributes; and 2) that these distributions evolve according to smooth trajectories that reflect the dynamics of the underlying activity.
Its advantages over previous representations are illustrated by the introduction of BDS-based activity classifiers. For this, we start by proposing an efficient BDS learning algorithm, which combines binary PCA and a least squares problem, inspired by the learning procedure in dynamic textures [15]. We then derive a similarity measure between BDSs, which generalizes the Binet-Cauchy kernel from the LDS literature [16]. This is finally used to design activity classifiers, which are shown to outperform similar classifiers derived from the kernel dynamic systems (KDS) [6], and state-of-the-art approaches for dynamics-based [4] and attribute-based [11] action recognition.

2 Prior Work

One of the most popular representations for activity recognition is the BoF, which reduces video to a collection of orderless spatiotemporal descriptors [2, 3]. While robust, the BoF ignores the temporal structure of activities, and has limited power for fine-grained activity discrimination. A number of approaches have been proposed to characterize this structure. One possibility is to represent actions in terms of limb or torso motions, spatiotemporal shape models, or motion templates [17, 18]. Since they require detection, segmentation, tracking, or 3D structure recovery of body parts, these representations can be fragile. A robust alternative is to model the temporal structure of the BoF. This can be achieved with generalizations of popular still image recognition methods. For example, Laptev et al. extend pyramid matching to video, using a 3D binning scheme that roughly characterizes the spatio-temporal structure of video [3]. Niebles et al. employ a latent SVM that augments the BoF with temporal context, which they show to be critical for understanding realistic motion [4]. All these approaches have relatively coarse modeling of dynamics. More elaborate models are usually based on generative representations. For example, Laxton et al.
model a combination of object contexts and action sequences with a dynamic Bayesian network [5], while Gaidon et al. reduce each activity to three atomic actions and model their temporal distributions [7]. These methods rely on action-class specific features and require detailed manual supervision. Alternatively, several researchers have proposed to model BoF dynamics with LDSs. For example, Kellokumpu et al. combine dynamic textures [15] and local binary patterns [19], Li et al. perform a discriminant canonical correlation analysis on the space of action dynamics [8], and Chaudhry et al. map frame-wise motion histograms to a reproducing kernel Hilbert space (RKHS), where they learn a KDS [6]. Recent research in image recognition has shown that various limitations of the BoF can be overcome with representations of higher semantic level [10]. The features that underlie these representations are confidence scores for the appearance of pre-defined visual concepts in images. These concepts can be object attributes [9], object classes [20, 21], contextual classes [13], or generic visual concepts [22]. Lately, semantic attributes have also been used for action recognition [11], demonstrating the benefits of a mid-level semantic characterization for the analysis of complex human activities.

Figure 1: Left: key frames of activities "hurdle race" (top) and "long jump" (bottom); Right: attribute transition probabilities of the two activities ("hurdle race" / "long jump") for attributes "run", "jump", and "land".

The work also suggests that, for action categorization, supervised attribute learning is far more useful than unsupervised learning, resembling a similar observation from image recognition [20].
However, all of these representations are BoF-like, in the sense that they represent actions as orderless feature collections, reducing an entire video sequence to an attribute vector. For this reason, we denote them holistic attribute representations. The temporal evolution of semantic concepts, throughout a video sequence, has not yet been exploited as a cue for action understanding. There has, however, been some progress towards this type of modeling in the text analysis literature, where temporal extensions of latent Dirichlet allocation (LDA) have been proposed. Two representatives are the dynamic topic model (DTM) [23] and the topic over time (TOT) model [24]. Although modeling topic dynamics, these models are not necessarily applicable to semantic action recognition. First, like the underlying LDA, they are unsupervised models, and thus likely to underperform in recognition tasks [11, 10]. Second, the joint goal of topic discovery and modeling topic dynamics requires a complex graphical model. This is at odds with tractability, which is usually achieved by sacrificing the expressiveness of the temporal model component.

3 Modeling the Dynamics of Activity Attributes

In this section, we introduce a new model, the binary dynamic system, for joint representation of the distribution and dynamics of activities in action attribute space.

3.1 Semantic Representation

Semantic representations characterize video as a collection of descriptors with explicit semantics [10, 11]. They are obtained by defining a set of semantic concepts (or attributes, scene classes, etc.), and learning a classifier to detect each of those concepts. Given a video v ∈ X to analyze, each classifier produces a confidence score for the presence of the associated concept. The ensemble of classifiers maps the video to a semantic space S, according to π : X → S = [0, 1]^K, π(v) = (π_1(v), ..., π_K(v))^T, where π_i(v) is the confidence score for the presence of the i-th concept.
In this work, the classification score is the posterior probability of a concept c given video v, i.e., π_c(v) = p(c|v) under a certain video representation, e.g., the popular BoF histogram of spatiotemporal descriptors. As the video sequence v progresses with time t, the semantic encoding defines a trajectory {π_t(v)} ⊂ S. The benefits of semantic representations for recognition, namely a higher level of abstraction (which leads to better generalization than appearance-based representations), substantial robustness to the performance of the visual classifiers π_i(v), and intrinsic ability to account for contextual relationships between concepts, have been previously documented in the literature [13]. No attention has, however, been devoted to modeling the dynamics of semantic encodings of video. Figure 1 motivates the importance of such modeling for action recognition, by considering two activity categories ("long jump" and "hurdle race"), which involve the same attributes, with roughly the same probabilities, but span very different trajectories in S. Modeling these dynamics can substantially enhance the ability of a classifier to discriminate between complex activities.

3.2 Binary PCA

The proposed representation is a generalization of binary PCA [14], a dimensionality reduction technique for binary data, belonging to the generalized exponential family PCA [25]. It fits a linear model to binary observations, by embedding the natural parameters of Bernoulli distributions in a low-dimensional subspace. Let Y denote a K × τ binary matrix (Y_kt ∈ {0, 1}, e.g., the indicator of occurrence of attribute k at time t) where each column is a vector of K binary observations sampled from a multivariate Bernoulli distribution

Y_kt ~ B(y_kt; π_kt) = π_kt^{y_kt} (1 − π_kt)^{1−y_kt} = σ(θ_kt)^{y_kt} σ(−θ_kt)^{1−y_kt},  y_kt ∈ {0, 1}.   (1)

The log-odds θ = log(π/(1−π)) is the natural parameter of the Bernoulli distribution, and σ(θ) = (1 + e^{−θ})^{−1} is the logistic function.
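The log-odds parameterization in (1) is easy to check numerically; a small sketch (the score values are arbitrary):

```python
import numpy as np

def sigmoid(theta):
    return 1.0 / (1.0 + np.exp(-theta))

pi = np.array([0.1, 0.5, 0.9])      # Bernoulli means (attribute scores)
theta = np.log(pi / (1.0 - pi))     # natural parameters (log-odds)

# sigma(theta) recovers the mean, and the two pmf forms in (1) agree:
assert np.allclose(sigmoid(theta), pi)
for y in (0.0, 1.0):
    pmf_mean = pi**y * (1.0 - pi)**(1.0 - y)
    pmf_natural = sigmoid(theta)**y * sigmoid(-theta)**(1.0 - y)
    assert np.allclose(pmf_mean, pmf_natural)
```

The second assertion holds because σ(−θ) = 1 − σ(θ) = 1 − π.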
Binary PCA finds an L-dimensional (L ≪ K) embedding of the natural parameters, by maximizing the log-likelihood of the binary matrix Y

L = log P(Y; Θ) = Σ_{k,t} [ Y_kt log σ(Θ_kt) + (1 − Y_kt) log σ(−Θ_kt) ]   (2)

under the constraint

Θ = CX + u1^T,   (3)

where C ∈ R^{K×L}, X ∈ R^{L×τ}, u ∈ R^K and 1 ∈ R^τ is the vector of all ones. Each column of C is a basis vector of a latent subspace and the t-th column of X contains the coordinates of the t-th binary vector in this basis (up to a translation by u). Binary PCA is not directly applicable to attribute-based recognition, where the goal is to fit the vectors of confidence scores {π_t} produced by a set of K attribute classifiers (and not a sample of binary attribute vectors per se). To overcome this problem, we maximize the expected log-likelihood of the data Y (which is the lower bound to the log expected likelihood of the data Y, by Jensen's inequality). Since E[y_t] = π_t, it follows from (2) that

E_Y[L] = Σ_{k,t} [ π_kt log σ(Θ_kt) + (1 − π_kt) log σ(−Θ_kt) ].   (4)

The proposed extension of binary PCA consists of maximizing this expected log-likelihood under the constraint of (3). It can be shown that, in the absence of the constraint, the maximum occurs when σ(Θ_kt) = π_kt, ∀k, t. As in PCA, (3) forces σ(Θ_kt) to lie on a subspace of S, i.e.,

σ(Θ_kt) = π̂_kt ≈ π_kt.   (5)

The difference between the expected log-likelihood of the true scores {π_t} and the binary PCA scores {σ(θ_t) = σ(Cx_t + u)} (σ(θ) ≡ [σ(θ_1), ..., σ(θ_K)]^T) is

E[ΔL({π_t}; {σ(θ_t)})] = E_Y log P(Y; {π_t}) − E_Y log P(Y; {σ(θ_t)})   (6)
= Σ_{k,t} [ π_kt log (π_kt / σ(Θ_kt)) + (1 − π_kt) log ((1 − π_kt) / σ(−Θ_kt)) ]   (7)
= Σ_t KL[B(y; π_t) || B(y; σ(θ_t))],   (8)

where KL(B(y; π) || B(y; π′)) is the Kullback-Leibler (KL) divergence between two multivariate Bernoulli distributions of parameters π and π′. By maximizing the expected log-likelihood (4), the optimal projection {θ*_t} of the attribute score vectors {π_t} on the subspace of (3) also minimizes the KL divergence of (8).
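The identity (6)-(8) — that the drop in expected log-likelihood equals the summed KL divergence — can be verified directly; a sketch with random score vectors and an arbitrary candidate Θ:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def expected_loglik(pi, theta):
    """Expected Bernoulli log-likelihood (4) of scores pi under parameters theta."""
    return np.sum(pi * np.log(sigmoid(theta)) + (1 - pi) * np.log(sigmoid(-theta)))

def kl_bernoulli(p, q):
    """Summed KL divergence (8) between multivariate Bernoullis with means p, q."""
    return np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

rng = np.random.default_rng(0)
pi = rng.uniform(0.05, 0.95, size=(30, 8))   # K x tau attribute scores
theta = rng.normal(size=pi.shape)            # some candidate natural parameters

# Unconstrained optimum is theta* = logit(pi); any other theta loses exactly
# the KL divergence between the two Bernoulli models.
delta = expected_loglik(pi, np.log(pi / (1 - pi))) - expected_loglik(pi, theta)
```

Here `delta` matches `kl_bernoulli(pi, sigmoid(theta))` up to floating-point error, and is strictly positive whenever θ differs from the logit of π.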
Hence, for the optimal natural parameters {θ*_t}, the approximation of (5) is the best in the sense of KL divergence, the natural similarity measure between probability distributions.

3.3 Binary Dynamic Systems

A discrete time linear dynamic system (LDS) is defined by

x_{t+1} = A x_t + v_t,  y_t = C x_t + w_t + u,   (9)

where x_t ∈ R^L and y_t ∈ R^K (of mean u) are the hidden state and observation variable at time t, respectively; A ∈ R^{L×L} is the state transition matrix that encodes the underlying dynamics; C ∈ R^{K×L} the observation matrix that linearly maps the state to the observation space; and x_1 = μ_0 + v_0 ~ N(μ_0, S_0) an initial condition. Both state and observations are subject to additive Gaussian noise processes v_t ~ N(0, Q) and w_t ~ N(0, R). Since the noise is Gaussian and L < K, the matrix C can be interpreted as a PCA basis for the observation space (L eigenvectors of the observation covariance). The state vector x_t then encodes the trajectory of the PCA coefficients (projection on this basis) of the observed data over time. This interpretation is, in fact, at the core of the popular dynamic texture (DT) [15] representation for video. While the LDS parameters can be learned by maximum likelihood, using an expectation-maximization (EM) algorithm [26], the DT decouples the learning of observation and state variables. Observation parameters are first learned by PCA, and state parameters are then learned with a least squares procedure. This simple approximate learning algorithm tends to perform very well, and is widely used in computer vision.

Algorithm 1: Learning a binary dynamic system
Input: a sequence of attribute score vectors {π_t}_{t=1}^τ, state space dimension n.
1. Binary PCA: {C, X, u} = B-PCA({π_t}_{t=1}^τ, n) using the method of [14].
2. Estimate state parameters (X_{t1}^{t2} ≡ [x_{t1}, ..., x_{t2}]):
   A = X_2^τ (X_1^{τ−1})†;  V = X_2^τ − A X_1^{τ−1};  Q = (1/(τ−1)) V V^T;
   μ_0 = (1/τ) Σ_{t=1}^τ x_t;  S_0 = (1/(τ−1)) Σ_{t=1}^τ (x_t − μ_0)(x_t − μ_0)^T.
Output: {A, C, Q, u, μ_0, S_0}
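The decoupled DT learning described here can be sketched for the Gaussian case as follows. This is our illustration of the generic procedure of [15] (not the authors' binary variant, which replaces the PCA step with binary PCA); the synthetic data are noiseless, and the rotation dynamics are chosen only so that the latent states are exactly zero-mean, making the mean-subtraction step exact.

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, tau = 20, 2, 100

# Synthetic noiseless LDS: x_{t+1} = A x_t, y_t = C x_t + u.  A 90-degree
# rotation has period 4, so over tau = 100 steps the states sum to zero.
A_true = np.array([[0.0, -1.0], [1.0, 0.0]])
C_true = rng.normal(size=(K, L))
u_true = rng.normal(size=K)
X_cols = [rng.normal(size=L)]
for _ in range(tau - 1):
    X_cols.append(A_true @ X_cols[-1])
Y = C_true @ np.stack(X_cols, axis=1) + u_true[:, None]

# Step 1: observation component by PCA (SVD of the centered data).
u_hat = Y.mean(axis=1)
U, S, Vt = np.linalg.svd(Y - u_hat[:, None], full_matrices=False)
C_hat = U[:, :L]                    # PCA basis (observation matrix)
X_hat = np.diag(S[:L]) @ Vt[:L]     # latent state trajectory

# Step 2: state dynamics by least squares, A = X_2^tau (X_1^{tau-1})^dagger.
A_hat = X_hat[:, 1:] @ np.linalg.pinv(X_hat[:, :-1])
residual = np.linalg.norm(X_hat[:, 1:] - A_hat @ X_hat[:, :-1])
```

On this noiseless rank-L data the PCA step reconstructs Y exactly and the least squares fit has essentially zero residual; with noisy data both steps become approximations, which is the point of the DT shortcut.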
The proposed binary dynamic system (BDS) is defined as

x_{t+1} = A x_t + v_t,  y_t ~ B(y; σ(C x_t + u)),   (10)

where x_t ∈ R^L and u ∈ R^K are the hidden state variable and observation bias, respectively; A ∈ R^{L×L} is the state transition matrix; and C ∈ R^{K×L} the observation matrix. The initial condition is given by x_1 = μ_0 + v_0 ~ N(μ_0, S_0); and the state noise process is v_t ~ N(0, Q). Like the LDS of (9), the BDS can be interpreted as combining a (now binary) PCA observation component with a Gauss-Markov process for the state sequence. As in binary PCA, for attribute-based recognition the binary observations y_t are replaced by the attribute scores π_t, their log-likelihood under (10) by the expected log-likelihood, and the optimal solution minimizes the approximation of (5) for the most natural definition of similarity (KL divergence) between probability distributions. This is conceptually equivalent to the behavior of the canonical LDS of (9), which determines the subspace that best approximates the observations in the Euclidean sense, the natural similarity measure for Gaussian data. Note that other extensions of the LDS, e.g., kernel dynamic systems (KDS) that rely on a non-linear kernel PCA (KPCA) [27] of the observation space but still assume a Euclidean measure (Gaussian noise) [28, 6], do not share this property. We will see, in the experimental section, that the BDS is a better model of attribute dynamics.

3.4 Learning

Since the Gaussian state distribution of an LDS is a conjugate prior for the (Gaussian) conditional distribution of its observations given the state, maximum-likelihood estimates of LDS parameters are tractable. The LDS parameters Ω_LDS = {A, C, Q, R, μ_0, S_0, u} of (9) can thus be estimated with an EM algorithm [26]. For the BDS, where the state is Gaussian but the observations are not, the expectation step is intractable. Hence, approximate inference is required to learn the parameters Ω_BDS = {A, C, Q, μ_0, S_0, u} of (10).
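Sampling from the generative model (10) is straightforward; a sketch with illustrative parameter values (the function name and the particular A, C, Q, u chosen here are ours):

```python
import numpy as np

def sample_bds(A, C, u, Q, mu0, S0, tau, rng):
    """Draw a state trajectory and binary observations from the BDS of (10)."""
    L, K = A.shape[0], C.shape[0]
    X = np.zeros((L, tau))
    Y = np.zeros((K, tau))
    x = rng.multivariate_normal(mu0, S0)            # x_1 ~ N(mu0, S0)
    for t in range(tau):
        X[:, t] = x
        p = 1.0 / (1.0 + np.exp(-(C @ x + u)))       # sigma(C x_t + u)
        Y[:, t] = (rng.uniform(size=K) < p).astype(float)  # y_t ~ B(y; p)
        x = A @ x + rng.multivariate_normal(np.zeros(L), Q)  # state update
    return X, Y

rng = np.random.default_rng(2)
L, K, tau = 3, 10, 50
A = 0.9 * np.eye(L)                                  # stable, contracting dynamics
C = rng.normal(size=(K, L))
X, Y = sample_bds(A, C, rng.normal(size=K), 0.1 * np.eye(L),
                  np.zeros(L), np.eye(L), tau, rng)
```

Each column of Y is a binary attribute vector whose occurrence probabilities drift smoothly with the hidden state, which is exactly the structure the model is meant to capture.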
In this work, we resort to the approximate DT learning procedure, where observation and state components are learned separately [15]. The binary PCA basis is learned first, by maximizing the expected log-likelihood of (4) subject to the constraint of (3). Since the Bernoulli distribution is a member of the exponential family, (4) is concave in Θ, but not in C, X and u jointly. We rely on a procedure introduced by [14], which iterates between the optimization with respect to one of the variables C, X and u, with the remaining two held constant. Each iteration is a convex sub-problem that can be solved efficiently with a fixed-point auxiliary function (see [14] for details). Once the latent embedding C*, X* and u* of the attribute sequence in the optimal subspace is recovered, the remaining parameters are estimated by solving a least-squares problem for A and Q, and using standard maximum likelihood estimates for the Gaussian parameters of the initial condition (μ_0 and S_0) [15]. The procedure is summarized in Algorithm 1.

4 Measuring Distances between BDSs

The design of classifiers that account for attribute dynamics requires the ability to quantify similarity between BDSs. In this section, we derive the BDS counterpart to the popular Binet-Cauchy kernel (BCK) for the LDS, which evaluates the similarity of the output sequences of two LDSs. Given LDSs Ω_a and Ω_b driven by identical noise processes v_t and w_t with observation sequences y^{(a)} and y^{(b)}, [16] propose a family of BCKs

K_BC(Ω_a, Ω_b) = E_{v,w} [ Σ_{t=0}^∞ e^{−λt} (y_t^{(a)})^T W y_t^{(b)} ],   (11)

where W is a positive semi-definite weight matrix and λ ⩾ 0 a temporal discounting factor. To extend (11) to BDSs Ω_a and Ω_b, we note that (y_t^{(a)})^T W y_t^{(b)} is the inner product of a Euclidean output space of metric d²(y_t^{(a)}, y_t^{(b)}) = (y_t^{(a)} − y_t^{(b)})^T W (y_t^{(a)} − y_t^{(b)}).
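Ignoring the expectation over the noise processes (i.e., comparing two given output trajectories over a finite horizon), the discounted sum in (11) is straightforward to evaluate; a sketch, with illustrative function and variable names:

```python
import numpy as np

def bck_similarity(Ya, Yb, lam=0.1, W=None):
    """Discounted Binet-Cauchy-style similarity, as in (11), between two finite
    output trajectories (K x T arrays), with the noise expectation dropped."""
    K, T = Ya.shape
    if W is None:
        W = np.eye(K)                       # default: unweighted inner product
    weights = np.exp(-lam * np.arange(T))   # temporal discount e^{-lam * t}
    return sum(w * (Ya[:, t] @ W @ Yb[:, t]) for t, w in enumerate(weights))

rng = np.random.default_rng(3)
Ya, Yb = rng.normal(size=(4, 30)), rng.normal(size=(4, 30))
s_ab = bck_similarity(Ya, Yb)
```

For W symmetric this similarity is symmetric in its two arguments, and with λ = 0 and identical inputs it reduces to the summed squared norms of the trajectory.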
For BDSs, whose observations y_t are Bernoulli distributed with parameters {σ(θ_t^{(a)})}, for Ω_a, and {σ(θ_t^{(b)})}, for Ω_b, this distance measure is naturally replaced by the KL divergence between Bernoulli distributions

D_BC(Ω_a, Ω_b) = E_v [ Σ_{t=0}^∞ e^{−λt} ( KL(B(σ(θ_t^{(a)})) || B(σ(θ_t^{(b)}))) + KL(B(σ(θ_t^{(b)})) || B(σ(θ_t^{(a)}))) ) ]
= E_v [ Σ_{t=0}^∞ e^{−λt} (σ(θ_t^{(a)}) − σ(θ_t^{(b)}))^T (θ_t^{(a)} − θ_t^{(b)}) ],   (12)

where θ_t = C x_t + u. The distance term at time t can be rewritten as

(σ(θ_t^{(a)}) − σ(θ_t^{(b)}))^T (θ_t^{(a)} − θ_t^{(b)}) = (θ_t^{(a)} − θ_t^{(b)})^T Ŵ_t (θ_t^{(a)} − θ_t^{(b)}),   (13)

with Ŵ_t a diagonal matrix whose k-th diagonal element is Ŵ_{t,k} = (σ(Θ_{t,k}^{(a)}) − σ(Θ_{t,k}^{(b)})) / (Θ_{t,k}^{(a)} − Θ_{t,k}^{(b)}) = σ′(Θ̂_{t,k}^{(a,b)}) (where, by the mean value theorem, Θ̂_{t,k}^{(a,b)} is some real value between Θ_{t,k}^{(a)} and Θ_{t,k}^{(b)}). This reduces (13) to a form similar to (11), although with a time varying weight matrix Ŵ_t. It is unclear whether (12) can be computed in closed-form. We currently rely on the approximation D_BC(Ω_a, Ω_b) ≈ Σ_{t=0}^∞ e^{−λt} (σ(θ̄_t^{(a)}) − σ(θ̄_t^{(b)}))^T (θ̄_t^{(a)} − θ̄_t^{(b)}), where θ̄ is the mean of θ.

5 Experiments

Several experiments were conducted to evaluate the BDS as a model of activity attribute dynamics. In all cases, the BoF was used as low-level video representation, interest points were detected with [2], and HoG/HoF descriptors [3] computed at their locations. A codebook of 3000 visual words was learned via k-means, from the entire training set, and a binary SVM with histogram intersection kernel (HIK) and probability outputs [29] trained to detect each attribute, using the same attribute definitions as [11]. The probability for attribute k at time t was used as attribute score π_tk, which was computed over a window of 20 frames, sliding across a video.

5.1 Weizmann Activities

To obtain some intuition on the performance of the different algorithms considered, we first used complex activity sequences synthesized from the Weizmann dataset [17].
This contains 10 atomic action classes (e.g., skipping, walking) annotated with respect to 30 lower-level attributes (e.g., "one-arm-motion"), and performed by 9 people. We created activity sequences by concatenating Weizmann actions. A sequence of degree n (n = 4, 5, 6) is composed of n atomic actions, performed by the same person. The row of images at the top of Figure 2 presents an example of an activity sequence of degree 5. The images shown at the top of the figure are keyframes from the atomic actions ("walk", "pjump", "wave1", "wave2", "wave2") that compose this activity sequence. The black curve (labeled "Sem. Seq.") in the plot at the bottom of the figure shows the score of the "two-arms-motion" attribute, as a function of time. 40 activity categories were defined per degree n (total of 120 activity categories) and a dataset was assembled per category, containing one activity sequence per person (9 people, 1080 sequences in total). Overall, the activity sequences differ in the number, category, and temporal order of atomic actions. Since the attribute ground truth is available for all atomic actions in this dataset, it is possible to train clean attribute models. Hence, all performance variations can be attributed to the quality of the attribute-based inference of different approaches. We started by comparing the binary PCA representation that underlies the BDS to the PCA and KPCA decompositions of the LDS and KDS. In all cases we projected a set of attribute score vectors {π_t} into the low-dimensional PCA subspace, computed the reconstructed score vectors {π̂_t}, and the KL divergence KL(B(y, π_t) || B(y, π̂_t)), as reported in Figure 3. The kernel used for KPCA was the logit kernel K(π_1, π_2) = σ^{−1}(π_1)^T σ^{−1}(π_2), where σ^{−1}(·) is the element-wise logit function. Figure 3 shows the average log-KL divergence, over the entire dataset, as a function of the number of PCA components used in the reconstruction. Binary PCA outperformed both PCA and KPCA. The improvements over KPCA are particularly interesting since the latter uses the logistic transformation that distinguishes binary PCA from PCA. This is explained by the Euclidean similarity measure that underlies the assumption of Gaussian noise in KPCA, as discussed in Section 3.3. To gain some more insight on the different models, a KDS and a BDS were learned from the 30-dimensional attribute score vectors of the activity sequence in Figure 2. A new set of attribute score vectors were then sampled from each model. The evolution of the scores sampled for the "two-arms-motion" attribute are shown in the figure (in red/blue for BDS/KDS). Note how the scores sampled from the BDS approximate the original attribute scores better than those sampled from the KDS, which is confirmed by the KL-divergences between the original attribute scores and those sampled from the two models (also shown in the figure).

Figure 2: Top: key frames from the activity sequence class "walk-pjump-wave1-wave2-wave2". Bottom: score of "two-arms-motion" attribute for video of this activity. True scores in black, and scores sampled from the BDS (red) and KDS (blue). Also shown is the KL-divergence between sampled and original scores, for both models.

Figure 3: Log KL-divergence between original and reconstructed attribute scores, vs. number of PCA components n, on Weizmann activities for PCA, kernel PCA, and binary PCA.

Table 1: Classification accuracy on Weizmann Activities and Olympic Sports datasets

Dataset              BoF     Holistic Attr.  DTM     TOT     KDS     BDS
Weizmann Activities  57.8%   72.6%           84.6%   88.2%   90.2%   94.8%
Olympic Sports       56.8%   63.5%           47.1%   53.3%   62.3%   65.7%

We next evaluated the benefits of different dynamics representations for activity recognition.
Recognition rates were obtained with a 9-fold leave-one-out cross-validation (LOOCV), where, per trial, the activities of one subject were used as test set and those of the remaining 8 as training set. We compared the performance of classifiers based on the KDS and BDS with a BoF classifier, a holistic attribute classifier that ignores attribute dynamics (using a single attribute score vector computed from the entire video sequence), and the dynamic topic models DTM [23] and TOT [24] from the text literature. For the latter, the topics were equated to the activity attributes and learned with supervision (using the SVMs discussed above). Unsupervised versions of the topic models had worse performance and are omitted. Classification was performed with Bayes rule for topic models, and a nearest-neighbor classifier for the remaining methods. For BDS, distances were measured with (12), while for the KDS we tried the Binet-Cauchy, χ², intersection and logit kernels, and reported the best results. The χ² distance was used for the BoF and holistic attribute classifiers. The classification accuracy of all classifiers is shown in Table 1. BDS and KDS had the best performance, followed by the dynamic topic models, and the dynamics-insensitive methods (BoF and holistic). Note that the difference between the holistic classifier and the best dynamic model is approximately 22%. This shows that while attributes are important (14.8% improvement over BoF) they are not the whole story. Problems involving fine-grained activity classification, i.e., discrimination between activities composed of similar actions executed in different sequence, require modeling of attribute dynamics. Among dynamic models, the BDS outperformed the KDS, and the topic models DTM and TOT.

5.2 Olympic Sports

The second set of experiments was performed on the Olympic Sports dataset [4]. This contains YouTube videos of 16 sport activities, with a total of 783 sequences.
Some activities are sequences of atomic actions, whose temporal structure is critical for discrimination from other classes (e.g., "clean and jerk" vs. "snatch", and "long-jump" vs. "triple-jump"). Since attribute labels are only available for whole sequences, the training sets of the attribute classifiers are much noisier than in the previous experiment. This degrades the quality of attribute models. The dataset was split into 5 subsets, of roughly the same size, and results reported by 5-fold cross-validation. The DTM and TOT classifiers were as above, and all others were implemented with an SVM of kernel K_α(i, j) = exp(−(1/α) d²(i, j)), based on the distance measures d(i, j) of the previous section. Table 1 shows that dynamic modeling again has the best performance. However, the gains over the holistic attribute classifier are smaller than in Weizmann. This is due to two factors. First, the noisy attributes make the dynamics harder to model. Note that the robustness of the dynamic models to this noise varies substantially. As before, topic models have the weakest performance and the BDS outperforms the KDS. Second, since fine-grained discrimination is not needed for all categories, attribute dynamics are not always necessary.

Table 2: Fine-grained classification accuracy on Olympic Sports by BDS

Method    clean&jerk (snatch)  long-jump (triple-jump)  snatch (clean&jerk)  triple-jump (long-jump)
BDS       85% (9%)             80% (2%)                 78% (10%)            62% (14%)
Holistic  73% (21%)            72% (20%)                65% (27%)            38% (43%)

Table 3: Mean average precision on Olympic Sports dataset

Laptev et al. [3] (BoF)  Niebles et al. [4] (BDS)  Liu et al. [11] (Attr. / B+A)  B+A+D
62.0% (67.8%)            72.1% (73.2%)             74.4% (72.9% / 73.3%)          76.5%

Figure 4: Scatter plot of accuracy gain on Olympic Sports by BDS.
This is confirmed by Figure 4, which presents a scatter plot of the gain (difference in accuracy) of the BDS classifier over the holistic classifier, as a function of the accuracy of the latter. Each point corresponds to an activity. Note the strong negative correlation between the two factors: the largest gains occur for the classes that are most difficult for the holistic classifier. Table 2 details these results for the two pairs of classes with the most confusable attributes. Numbers outside brackets correspond to the ground-truth category, numbers in brackets to the confusing class (percentage of ground-truth examples assigned to it). BDS has dramatically better performance for these classes. Overall, despite the attribute noise and the fact that dynamics are not always required for discrimination, the BDS achieves the best performance on this dataset. Finally, we compare the BDS classifier to classifiers from the literature. Three approaches, representative of the state-of-the-art in classification with the BoF [3], dynamic representations [4], and attributes [11], were selected as benchmarks. These were compared to our implementation of BoF (kernel using only word histograms), attributes (the holistic classifier of Table 1), dynamics (the BDS classifier), and multiple kernel classifiers combining 1) BoF and attributes (B+A), and 2) BoF, attributes, and dynamics (B+A+D). All multiple kernel combinations were determined by cross-validation. The mean average precisions of all 1-vs-all classifiers are reported in Table 3. The numbers in each column correspond to directly comparable classifiers; e.g., B+A is directly comparable to [11], which jointly classifies BoF histograms and holistic attribute vectors with a latent SVM. Note that the BDS classifier outperforms the state-of-the-art in dynamic classifiers (Niebles et al. [4]), which accounts for the dynamics of the BoF but not of action attributes.
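The multiple kernel classifiers (B+A, B+A+D) combine precomputed kernels. One simple combination, sketched here under our own assumption that the kernels are summed with cross-validated convex weights (the paper does not specify the exact combination rule):

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Convex combination of precomputed kernel matrices,
    e.g. K = w1*K_bof + w2*K_attr + w3*K_dyn for a B+A+D classifier.
    The weights are assumed to be selected by cross-validation."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()  # normalize to a convex combination
    return sum(w * K for w, K in zip(weights, kernels))

# Toy 2x2 kernels for a BoF channel and an attribute channel.
K_bof = np.array([[1.0, 0.2], [0.2, 1.0]])
K_attr = np.array([[1.0, 0.6], [0.6, 1.0]])
K_comb = combine_kernels([K_bof, K_attr], [1.0, 1.0])
```

A convex combination of positive-semidefinite kernels is itself a valid kernel, so K_comb can be fed to an SVM unchanged.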
This holds despite the fact that our attribute categories (only 40 specified attributes) and classifiers (simple SVMs) are much simpler than the best in the literature [11], which uses both data-driven attributes and the same 40 specified attributes as ours, plus a latent SVM as the classifier. The use of a stronger attribute detection architecture could potentially further improve these results. Note also that the addition of the BDS kernel to the simple attribute representation (B+A+D) far outperforms the more sophisticated attribute classifier of [11], which does not account for attribute dynamics. This illustrates the benefits of modeling the dynamics of attributes. The combination of BoF, attributes, and attribute dynamics achieves the overall best performance on this dataset.

Acknowledgements

This work was partially supported by NSF under Grant CCF-0830535. We also thank Jingen Liu for providing the attribute annotations.

References

[1] J. K. Aggarwal and M. S. Ryoo, "Human activity analysis: A review," ACM Computing Surveys, vol. 43, no. 16, pp. 1–16, 2011.
[2] P. Dollár, V. Rabaud, G. Cottrell, and S. Belongie, "Behavior recognition via sparse spatio-temporal features," ICCV VS-PETS, 2005.
[3] I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld, "Learning realistic human actions from movies," CVPR, 2008.
[4] J. C. Niebles, C.-W. Chen, and L. Fei-Fei, "Modeling temporal structure of decomposable motion segments for activity classification," ECCV, 2010.
[5] B. Laxton, J. Lim, and D. Kriegman, "Leveraging temporal, contextual and ordering constraints for recognizing complex activities in video," CVPR, 2007.
[6] R. Chaudhry, A. Ravichandran, G. Hager, and R. Vidal, "Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions," CVPR, 2009.
[7] A. Gaidon, Z. Harchaoui, and C. Schmid, "Actom sequence models for efficient action detection," CVPR, 2011.
[8] B. Li, M. Ayazoglu, T. Mao, O. Camps, and M. Sznaier, "Activity recognition using dynamic subspace angles," CVPR, 2011.
[9] C. H. Lampert, H. Nickisch, and S. Harmeling, "Learning to detect unseen object classes by between-class attribute transfer," CVPR, 2009.
[10] N. Rasiwasia and N. Vasconcelos, "Holistic context models for visual recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 902–917, 2012.
[11] J. Liu, B. Kuipers, and S. Savarese, "Recognizing human actions by attributes," CVPR, 2011.
[12] A. Fathi and G. Mori, "Action recognition by learning mid-level motion features," CVPR, 2008.
[13] N. Rasiwasia and N. Vasconcelos, "Holistic context modeling using semantic co-occurrences," CVPR, 2009.
[14] A. I. Schein, L. K. Saul, and L. H. Ungar, "A generalized linear model for principal component analysis of binary data," AISTATS, 2003.
[15] G. Doretto, A. Chiuso, Y. N. Wu, and S. Soatto, "Dynamic textures," Int'l J. Computer Vision, vol. 51, no. 2, pp. 91–109, 2003.
[16] S. V. N. Vishwanathan, A. J. Smola, and R. Vidal, "Binet-Cauchy kernels on dynamical systems and its application to the analysis of dynamic scenes," Int'l J. Computer Vision, vol. 73, no. 1, pp. 95–119, 2006.
[17] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2247–2253, 2007.
[18] N. İkizler and D. A. Forsyth, "Searching for complex human activities with no visual examples," Int'l J. Computer Vision, vol. 80, no. 3, pp. 337–357, 2008.
[19] V. Kellokumpu, G. Zhao, and M. Pietikäinen, "Human activity recognition using a dynamic texture based method," BMVC, 2008.
[20] N. Rasiwasia and N. Vasconcelos, "Scene classification with low-dimensional semantic spaces and weak supervision," CVPR, 2008.
[21] A. Quattoni, M. Collins, and T. Darrell, "Learning visual representations using images with captions," CVPR, 2007.
[22] N. Rasiwasia, P. J. Moreno, and N. Vasconcelos, "Bridging the gap: Query by semantic example," IEEE Trans. Multimedia, vol. 9, no. 5, pp. 923–938, 2007.
[23] D. M. Blei and J. D. Lafferty, "Dynamic topic models," ICML, 2006.
[24] X. Wang and A. McCallum, "Topics over time: a non-markov continuous-time model of topical trends," ACM SIGKDD, 2006.
[25] M. Collins, S. Dasgupta, and R. E. Schapire, "A generalization of principal component analysis to the exponential family," NIPS, 2002.
[26] R. H. Shumway and D. S. Stoffer, "An approach to time series smoothing and forecasting using the em algorithm," Journal of Time Series Analysis, vol. 3, no. 4, pp. 253–264, 1982.
[27] B. Schölkopf, A. Smola, and K.-R. Müller, "Nonlinear component analysis as a kernel eigenvalue problem," Neural Computation, vol. 10, pp. 1299–1319, 1998.
[28] A. B. Chan and N. Vasconcelos, "Classifying video with kernel dynamic textures," CVPR, 2007.
[29] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. on Intelligent Systems and Technology, vol. 2, no. 3, pp. 27:1–27:27, 2011.
Statistical Consistency of Ranking Methods in A Rank-Differentiable Probability Space

Yanyan Lan, Institute of Computing Technology, Chinese Academy of Sciences, lanyanyan@ict.ac.cn
Jiafeng Guo, Institute of Computing Technology, Chinese Academy of Sciences, guojiafeng@ict.ac.cn
Xueqi Cheng, Institute of Computing Technology, Chinese Academy of Sciences, cxq@ict.ac.cn
Tie-Yan Liu, Microsoft Research Asia, Tie-Yan.Liu@microsoft.com

Abstract

This paper is concerned with the statistical consistency of ranking methods. Recently, it was proven that many commonly used pairwise ranking methods are inconsistent with the weighted pairwise disagreement loss (WPDL), which can be viewed as the true loss of ranking, even in a low-noise setting. This result is interesting but also surprising, given that the pairwise ranking methods have been shown to be very effective in practice. In this paper, we argue that the aforementioned result might not be conclusive, depending on what kind of assumptions are used. We give a new assumption that the labels of objects to rank lie in a rank-differentiable probability space (RDPS), and prove that the pairwise ranking methods become consistent with WPDL under this assumption. What is especially inspiring is that RDPS is actually not stronger than but similar to the low-noise setting. Our studies provide theoretical justifications of some empirical findings on pairwise ranking methods that were unexplained before, which bridges the gap between theory and applications.

1 Introduction

Ranking is a central problem in many applications, such as document retrieval, meta search, and collaborative filtering. In recent years, machine learning technologies called 'learning to rank' have been successfully applied. A learning-to-rank process can be described as follows. In training, a number of sets (queries) of objects (documents) are given, and within each set the objects are labeled by assessors, mainly based on multi-level ratings.
The target of learning is to create a model that provides a ranking over the objects that best respects the observed labels. In testing, given a new set of objects, the trained model is applied to generate a ranked list of the objects. Ideally, the learning process should be guided by minimizing a true loss such as the weighted pairwise disagreement loss (WPDL) [11], which encodes people's knowledge on ranking evaluation. However, the minimization can be very difficult due to the nonconvexity of the true loss. Alternatively, many learning-to-rank methods minimize surrogate loss functions. For example, RankSVM [14], RankBoost [12], and RankNet [3] minimize the hinge loss, the exponential loss, and the cross-entropy loss, respectively. In machine learning, statistical consistency is regarded as a desired property of a learning method [1, 21, 20], which reveals the statistical connection between a surrogate loss function and the true loss. Statistical consistency in the context of ranking has been actively studied in recent years [8, 9, 19, 11, 2, 18]. According to the studies in [11], many existing pairwise ranking methods are, surprisingly, inconsistent with WPDL, even in a low-noise setting. However, as we know, the pairwise ranking methods have been shown to work very well in practice, and are regarded as state-of-the-art even today [15, 16, 17]. For example, the experimental results in [2] show that a weighted preorder loss in RankSVM [4] can outperform a consistent surrogate loss in terms of NDCG (see Table 2 in [2]). The contradiction between theory and application inspires us to revisit the statistical consistency of pairwise ranking methods. In particular, we will study whether there exists a new assumption on the probability space that can make statistical consistency naturally hold, and how this new assumption compares with the low-noise setting used in [11].
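The pairwise surrogate losses named above all have the margin form φ(f(xi) − f(xj)) applied to preference pairs. A minimal sketch of our own (function names and the uniform default weight are illustrative choices, not from the paper):

```python
import math

# Margin losses phi(z) applied to z = f(x_i) - f(x_j) for each pair with r_i > r_j.
def hinge(z):        # RankSVM
    return max(0.0, 1.0 - z)

def exponential(z):  # RankBoost
    return math.exp(-z)

def logistic(z):     # RankNet (cross-entropy form)
    return math.log(1.0 + math.exp(-z))

def pairwise_surrogate(scores, ratings, phi, D=lambda ri, rj: 1.0):
    """Weighted pairwise surrogate loss:
    sum over pairs with r_i > r_j of D(r_i, r_j) * phi(f(x_i) - f(x_j))."""
    total = 0.0
    for si, ri in zip(scores, ratings):
        for sj, rj in zip(scores, ratings):
            if ri > rj:
                total += D(ri, rj) * phi(si - sj)
    return total
```

For scores that already respect the ratings with margin at least 1, the hinge version is exactly zero, while the exponential and logistic versions merely become small.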
To perform our study, we first derive a sufficient condition for the statistical consistency of ranking methods, called rank-consistency, which is in nature very similar to edge-consistency in [11] and the order-preserving property in [2]. Then we give an assumption on the probability space from which the ratings (labels) of objects come, which we call a rank-differentiable probability space (RDPS). Intuitively, RDPS reveals the reason why an object (denoted as object A) should be ranked higher than another object (denoted as object B). That is, the probability of any ratings consistent with the preference (see Footnote 1) is larger than that of its dual ratings (obtained by exchanging the labels of object A and object B while keeping the others unchanged). We then prove that with the RDPS assumption, the weighted pairwise surrogate loss, which is a generalization of many surrogate loss functions used in existing pairwise ranking methods (e.g., the preorder loss in RankSVM [2], the exponential loss in RankBoost [12], and the logistic loss in RankNet [3]), is statistically consistent with WPDL. Please note that our theoretical result contradicts the result obtained in [11], mainly due to the different assumptions used. What is interesting, and to some extent inspiring, is that our RDPS assumption is not stronger than the low-noise setting used in [11], and in some sense they are very similar to each other (although they focus on different aspects of the probability space). We then conduct detailed comparisons between them to gain more insights on what affects the consistency of ranking. According to our theoretical analysis, we argue that it is not yet appropriate to draw any conclusion about the inconsistency of pairwise ranking methods, especially because it is hard to know what the probability space really is. In this sense, we think the pairwise ranking methods are still good choices for real ranking applications, due to their good empirical performances.
The rest of this paper is organized as follows. Section 2 defines the consistency problem formally and provides a sufficient condition under which consistency with WPDL is achieved for ranking methods. Section 3 gives the main theoretical results, including the formal definition of RDPS and conditions for the statistical consistency of pairwise ranking methods. Further discussions on whether RDPS is a strong assumption and why our results contradict those in [11] are presented in Section 4. Conclusions are presented in Section 5.

2 Preliminaries of Statistical Consistency

Let x = {x1, · · · , xm} be a set of objects to be ranked. Suppose the labels of the objects are given as multi-level ratings r = (r1, · · · , rm) from space R, where ri denotes the label of xi. Without loss of generality, we adopt the K-level ratings used in [7], that is, ri ∈ {0, 1, · · · , K − 1}. If ri > rj, xi should be ranked higher than xj. Assume that (x, r) is a random variable of space X × R according to a probability measure P. Following the existing literature, let f be a ranking function that gives a score to each object to produce a ranked list, and denote by F the space of all ranking functions. In this paper, we adopt the weighted pairwise disagreement loss (WPDL) defined in [11, 10] as the true loss to evaluate f:

$$l_0(\alpha, G) = \sum_{i<j} a_{ij}^G \mathbf{1}_{\{\alpha_i \le \alpha_j\}} + \sum_{i>j} a_{ij}^G \mathbf{1}_{\{\alpha_i < \alpha_j\}}, \qquad (1)$$

where α = (α1, · · · , αm) = (f(x1), · · · , f(xm)), G is a directed acyclic graph (DAG for short) with edge i → j representing the preference that xi should be ranked higher than xj, and $a_{ij}^G$ is a non-negative penalty indexed by i → j on graph G.

Footnote 1: Here, consistency with the preference means that the rating of object A is larger than that of object B.

Specifically, in the setting of multi-level ratings, i → j is constructed between pairs (i, j) with ri > rj, and $a_{ij}^G$ is thus just relevant to the labels of the two objects.
For ease of representation (see Footnote 2), we replace $a_{ij}^G$ with D(ri, rj), and WPDL becomes the following form:

$$l_0(f; x, r) = \sum_{i,j:\, r_i > r_j} D(r_i, r_j)\, \mathbf{1}_{\{f(x_i) - f(x_j) \le 0\}}, \qquad (2)$$

where 1{·} is an indicator function (see Footnote 3) and D(ri, rj) is a weight function such that: (1) ∀ri ≠ rj, D(ri, rj) > 0; (2) ∀ri, rj, D(ri, rj) = D(rj, ri); (3) ∀ri < rj < rk, D(ri, rj) ≤ D(ri, rk) and D(rj, rk) ≤ D(ri, rk). The conditional expected true risk and the expected true risk of f are then defined as:

$$R_0(f|x) = E_{r|x}\, l_0(f; x, r) = \sum_{r \in R} l_0(f; x, r)\, P(r|x), \qquad R_0(f) = E_x\big[E_{r|x}\, l_0(f; x, r)\big]. \qquad (3)$$

Due to the nonconvexity of the true loss, it is infeasible to minimize the true risk in Eq. (3). As is done in the machine learning literature, we adopt a surrogate loss $l_\Phi$ and minimize it in place of $l_0$. The conditional expected surrogate risk and the expected surrogate risk of f are then defined as:

$$R_\Phi(f|x) = E_{r|x}\, l_\Phi(f; x, r) = \sum_{r \in R} l_\Phi(f; x, r)\, P(r|x), \qquad R_\Phi(f) = E_x\big[E_{r|x}\, l_\Phi(f; x, r)\big]. \qquad (4)$$

Statistical consistency is a desired property for a good surrogate loss, which measures whether the expected true risk of the ranking function obtained by minimizing a surrogate loss converges to the expected true risk of the optimal ranking in the large sample limit.

Definition 1. We say a ranking method that minimizes a surrogate loss $l_\Phi$ is statistically consistent with respect to the true loss $l_0$, if ∀ϵ1 > 0, ∃ϵ2 > 0, such that for any ranking function f ∈ F, $R_\Phi(f) \le \inf_{h \in F} R_\Phi(h) + \epsilon_2$ implies $R_0(f) \le \inf_{h \in F} R_0(h) + \epsilon_1$.

We then introduce a property of the surrogate loss, called rank-consistency, which is a sufficient condition for the statistical consistency of the surrogate loss, as indicated by Theorem 1.

Definition 2. We say a surrogate loss $l_\Phi$ is rank-consistent with the true loss $l_0$, if ∀x, for any ranking function f ∈ F such that $R_0(f|x) > \inf_{h \in F} R_0(h|x)$, the following inequality holds:

$$\inf_{h \in F} R_\Phi(h|x) < \inf\{R_\Phi(g|x) : g \in F,\ g(x_i) \le g(x_j)\ \text{for } (i, j)\ \text{where } f(x_i) \le f(x_j)\}. \qquad (5)$$

Rank-consistency can be viewed as a generalization to ranking on a set of objects of the infinite sample consistency for classification proposed in [20] (also referred to as 'classification-calibrated' in [1]). It is also similar to edge-consistency in [11] and the order-preserving property in [2].

Theorem 1. If a surrogate loss $l_\Phi$ is rank-consistent with the true loss $l_0$ on the function space F, then it is statistically consistent with the true loss $l_0$ on F.

We omit the proof since it is a straightforward extension of the proof of Theorem 3 in [20]. The proof is also similar to Lemmas 3, 4, and 5 and Theorem 6 in [11].

3 Main Results

In this section, we present our main theoretical results: with a new assumption on the probability space, many commonly used pairwise ranking algorithms can be proved consistent with WPDL.

3.1 A Rank-Differentiable Probability Space

First, we give a new assumption, named a rank-differentiable probability space (RDPS for short), under which many pairwise ranking methods are rank-consistent with WPDL. Hereafter, we will also refer to data from an RDPS as having the rank-differentiable property.

Footnote 2: Here we do not distinguish i > j and i < j, because they are just introduced to avoid minor technical issues, as stated in [11]. Furthermore, this will not influence the consistency results.
Footnote 3: 1_A = 1 if A is true and 1_A = 0 if A is false.

Before introducing the definition of RDPS, we give two definitions: an equivalence class of ratings, and dual ratings. Intuitively, we say two ratings are equivalent if they induce the same ranking or preference relationships. And we say two ratings are dual with respect to a pair of objects if they exchange the ratings of the two objects while keeping the ratings of the other objects unchanged. The formal definitions are given as follows.

Definition 3. A ratings r is called equivalent to ˜r, denoted as r ∼ ˜r, if P(r) = P(˜r).
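The true loss in Eq. (2) is cheap to evaluate directly. A minimal sketch of ours; the default weight D(ri, rj) = |ri − rj| is an illustrative choice that satisfies conditions (1)-(3) on D, not the paper's prescription:

```python
def wpdl(scores, ratings, D=lambda ri, rj: abs(ri - rj)):
    """Weighted pairwise disagreement loss, Eq. (2):
    sum over pairs with r_i > r_j of D(r_i, r_j) * 1{f(x_i) - f(x_j) <= 0}.
    D = |r_i - r_j| is positive for r_i != r_j, symmetric, and monotone
    in the rating gap, so it satisfies conditions (1)-(3)."""
    loss = 0.0
    for si, ri in zip(scores, ratings):
        for sj, rj in zip(scores, ratings):
            if ri > rj and si - sj <= 0.0:
                loss += D(ri, rj)
    return loss
```

A score vector that ranks every higher-rated object above every lower-rated one incurs zero loss; each violated pair adds its weight.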
Here P(r) = {(i, j) : ri > rj} and P(˜r) = {(i, j) : ˜ri > ˜rj} stand for the preference relationships induced by r and ˜r, respectively. The equivalence class of the ratings r, denoted as [r], is then defined as the set of ratings which are equivalent to r, that is, [r] = {˜r ∈ R : ˜r ∼ r}.

Definition 4. Let R(i, j) = {r ∈ R : ri > rj}. r′ is called the dual ratings of r ∈ R(i, j) with respect to (i, j) if r′j = ri, r′i = rj, and r′k = rk, ∀k ≠ i, j.

Now we give the definition of RDPS. An intuitive explanation of this definition is that there exists a unique equivalence class of ratings such that, for each pairwise preference relationship it induces, the probability separates the two dual ratings with respect to that pair.

Definition 5. Let R(i, j) = {r ∈ R : ri > rj}. A probability space is called rank-differentiable with (i, j) if for any r ∈ R(i, j), P(r|x) ≥ P(r′|x), and there exists at least one ratings r ∈ R(i, j) s.t. P(r|x) > P(r′|x), where r′ is the dual ratings of r.

Definition 6. A probability space is called rank-differentiable if there exists an equivalence class [r∗] s.t. P(r∗) = {(i, j) : the probability space is rank-differentiable with (i, j)}, where P(r∗) = {(i, j) : r∗i > r∗j}. We will also call this probability space an RDPS, or rank-differentiable with [r∗].

Please note that [r∗] in Definition 6 is unique, which can be directly proved from Definition 3. Definition 5 implies that if a probability space is rank-differentiable with (i, j), the optimal ranking function will rank xi higher than xj, as shown in the following theorem. The proof is similar to that of Theorem 4, so we omit it here due to space limitations. Hereafter, we will call this property 'separability on pairs'.

Theorem 2. ∀x ∈ X, let f∗ ∈ F be an optimal ranking function such that R0(f∗|x) = inf f∈F R0(f|x). If the probability space is rank-differentiable with (i, j), we have f∗(xi) > f∗(xj).
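Definitions 4 and 5 can be checked mechanically for a small, fully enumerated probability space. The sketch below is ours: representing the distribution as a dict over rating tuples and suppressing the conditioning on x are assumptions of the illustration:

```python
def dual(r, i, j):
    """Dual ratings: swap the labels of objects i and j (Definition 4)."""
    r = list(r)
    r[i], r[j] = r[j], r[i]
    return tuple(r)

def rank_differentiable_with(P, i, j):
    """Definition 5: for every ratings r with r_i > r_j,
    P(r) >= P(dual of r), with strict inequality for at least one such r.
    P maps rating tuples to probabilities."""
    strict = False
    for r, p in P.items():
        if r[i] > r[j]:
            q = P.get(dual(r, i, j), 0.0)
            if p < q:
                return False
            if p > q:
                strict = True
    return strict

# Toy distribution over the six permutation-like ratings of three objects.
P = {(2, 1, 0): 0.30, (1, 2, 0): 0.20, (2, 0, 1): 0.20,
     (0, 2, 1): 0.10, (1, 0, 2): 0.15, (0, 1, 2): 0.05}
```

For this P the space is rank-differentiable with (0, 1) and (1, 2) but not with (1, 0), so the equivalence class [r∗] of Definition 6 is the one induced by (2, 1, 0).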
Further considering the 'transitivity over pairs' (see Footnote 4) of a ranking function, Definition 6 implies that if a probability space is rank-differentiable with [r∗], the optimal ranking function will induce the same preference relationships, as shown in the following theorem.

Theorem 3. ∀x ∈ X, let f∗ ∈ F be an optimal ranking function such that R0(f∗|x) = inf f∈F R0(f|x). If the probability space is rank-differentiable with [r∗], then for any (i, j) ∈ P(r∗), we have f∗(xi) > f∗(xj), where P(r∗) = {(i, j) : r∗i > r∗j}.

3.2 Conditions of Statistical Consistency

With RDPS as the new assumption, we study the statistical consistency of pairwise ranking methods. First, we define the weighted pairwise surrogate loss as

$$l_\Phi(f; x, r) = \sum_{i,j:\, r_i > r_j} D(r_i, r_j)\, \phi(f(x_i) - f(x_j)), \qquad (6)$$

where ϕ is a convex function. The surrogate losses used in many existing pairwise ranking methods can be regarded as special cases of this weighted pairwise surrogate loss, such as the hinge loss in RankSVM [14], the exponential loss in RankBoost [12], the cross-entropy loss in RankNet [3], and the preorder loss in [2]. For the weighted pairwise surrogate loss, we give a sufficient condition for statistical consistency in Theorem 5. In order to prove this theorem, we first prove Theorem 4.

Footnote 4: Transitivity means that if xi is ranked higher than xj and xj is ranked higher than xk, then xi must be ranked higher than xk.

Theorem 4. We assume the probability space is rank-differentiable with an equivalence class [r∗]. Suppose that ϕ(·) : R → R in the weighted pairwise surrogate loss is a non-increasing function such that ϕ(z) < ϕ(−z), ∀z > 0. ∀x ∈ X, let f ∈ F be a ranking function such that $R_\Phi(f|x) = \inf_{h \in F} R_\Phi(h|x)$; then for any object pair (xi, xj) with r∗i > r∗j, we have f(xi) ≥ f(xj). Moreover, if ϕ(·) is differentiable and ϕ′(0) < 0, we have f(xi) > f(xj).

Proof. (1) We assume that f(xi) < f(xj), and define f′ as the function such that f′(xi) = f(xj), f′(xj) = f(xi), and f′(xk) = f(xk), ∀k ≠ i, j.
We can then get the following equation:

$$
\begin{aligned}
R_\Phi(f'|x) - R_\Phi(f|x)
={}& \sum_{\substack{r, r'\\ r \in R(i,j)}} \sum_{k:\, r_j < r_i < r_k} [D(r_k, r_j) - D(r_k, r_i)]\,[\phi(f(x_k) - f(x_i)) - \phi(f(x_k) - f(x_j))]\,[P(r|x) - P(r'|x)] \\
&+ \sum_{\substack{r, r'\\ r \in R(i,j)}} \sum_{k:\, r_j < r_k < r_i} D(r_i, r_k)\,[\phi(f(x_j) - f(x_k)) - \phi(f(x_i) - f(x_k))]\,[P(r|x) - P(r'|x)] \\
&+ \sum_{\substack{r, r'\\ r \in R(i,j)}} \sum_{k:\, r_j < r_k < r_i} D(r_k, r_j)\,[\phi(f(x_k) - f(x_i)) - \phi(f(x_k) - f(x_j))]\,[P(r|x) - P(r'|x)] \\
&+ \sum_{\substack{r, r'\\ r \in R(i,j)}} \sum_{k:\, r_k < r_j < r_i} [D(r_i, r_k) - D(r_j, r_k)]\,[\phi(f(x_j) - f(x_k)) - \phi(f(x_i) - f(x_k))]\,[P(r|x) - P(r'|x)] \\
&+ [\phi(f(x_j) - f(x_i)) - \phi(f(x_i) - f(x_j))] \sum_{\substack{r, r'\\ r \in R(i,j)}} D(r_i, r_j)\,[P(r|x) - P(r'|x)].
\end{aligned}
$$

According to the conditions of RDPS, the requirements on the weight function D in Section 2, and the assumption that ϕ is a non-increasing function such that ϕ(z) < ϕ(−z), ∀z > 0, we can obtain

$$R_\Phi(f'|x) - R_\Phi(f|x) \le [\phi(f(x_j) - f(x_i)) - \phi(f(x_i) - f(x_j))] \sum_{\substack{r, r'\\ r \in R(i,j)}} D(r_i, r_j)\,[P(r|x) - P(r'|x)] < 0.$$

This is a contradiction with $R_\Phi(f|x) = \inf_{h \in F} R_\Phi(h|x)$. Therefore, we have proven that f(xi) ≥ f(xj).

(2) Now we assume that f(xi) = f(xj) = f0. From the assumption $R_\Phi(f|x) = \inf_{h \in F} R_\Phi(h|x)$, we can get

$$\frac{\partial R_\Phi(f|x)}{\partial f(x_i)}\bigg|_{f_0} = 0, \qquad \frac{\partial R_\Phi(f|x)}{\partial f(x_j)}\bigg|_{f_0} = 0.$$

Accordingly, we can obtain the two following equations:

$$\sum_{\substack{r, r'\\ r \in R(i,j)}} A_1 P(r|x) + A_2 P(r'|x) = 0, \qquad \sum_{\substack{r, r'\\ r \in R(i,j)}} B_1 P(r|x) + B_2 P(r'|x) = 0, \qquad (7)$$

where

$$
\begin{aligned}
A_1 = B_2 ={}& \sum_{k:\, r_j < r_i < r_k} D(r_k, r_i)\,[-\phi'(f(x_k) - f_0)] + \sum_{k:\, r_j < r_k < r_i} D(r_i, r_k)\,\phi'(f_0 - f(x_k)) \\
&+ \sum_{k:\, r_k < r_j < r_i} D(r_i, r_k)\,\phi'(f_0 - f(x_k)) + D(r_i, r_j)\,\phi'(0), \\
A_2 = B_1 ={}& \sum_{k:\, r_j < r_i < r_k} D(r_k, r_j)\,[-\phi'(f(x_k) - f_0)] + \sum_{k:\, r_j < r_k < r_i} D(r_k, r_j)\,[-\phi'(f(x_k) - f_0)] \\
&+ \sum_{k:\, r_k < r_j < r_i} D(r_j, r_k)\,\phi'(f_0 - f(x_k)) + D(r_i, r_j)\,[-\phi'(0)].
\end{aligned}
$$

If ϕ′(0) < 0, based on the requirements of RDPS and the weight function D, we can get

$$\sum_{\substack{r, r'\\ r \in R(i,j)}} (A_1 - B_1) P(r|x) + (A_2 - B_2) P(r'|x) = \sum_{\substack{r, r'\\ r \in R(i,j)}} (A_1 - A_2)\,[P(r|x) - P(r'|x)] \le 2\phi'(0) \sum_{\substack{r, r'\\ r \in R(i,j)}} D(r_i, r_j)\,[P(r|x) - P(r'|x)] < 0.$$

This is a contradiction with Eq. (7). Therefore, we have actually proven that f(xi) > f(xj).

Figure 1: Relationships among order-preserving, rank-differentiable and low-noise.

Theorem 5.
Let ϕ(·) be a non-negative, non-increasing and differentiable function such that ϕ′(0) < 0. Then the weighted pairwise surrogate loss is consistent with WPDL under the assumption of RDPS.

Proof. We assume that the probability space is rank-differentiable with an equivalence class [r∗]. Then, for any object pair (xi, xj) with r∗i > r∗j, we are going to prove that

$$R^*_{\Phi|x} = \inf_{h \in F} R_\Phi(h|x) < \inf\{R_\Phi(f|x) : f \in F,\ f(x_i) \le f(x_j)\}, \qquad (8)$$

because from Theorem 3 this implies that the rank-consistency condition in Eq. (5) holds. Suppose Eq. (8) is not true; then we can find a sequence of functions {fm} such that 0 = fm(xi) ≤ fm(xj) and lim_m RΦ(fm|x) = R∗_{Φ|x}. We can further select a subsequence such that for each pair (i, j), fm(xi) − fm(xj) converges (possibly to ±∞). This leads to a limit function f, with properly defined f(xi) − f(xj), even when either f(xi) or f(xj) is ±∞. This implies that RΦ(f|x) = R∗_{Φ|x} and 0 = f(xi) ≤ f(xj). However, this violates Theorem 4. Thus, Eq. (8) is true. Therefore, we have proven that the weighted pairwise surrogate loss is consistent with WPDL.

Many commonly used pairwise surrogate losses, such as the preorder loss in RankSVM [2], the exponential loss in RankBoost [12], and the logistic loss in RankNet [3], satisfy the conditions in Theorem 5; thus they are consistent with WPDL. In other words, we have shown that the statistical consistency of pairwise ranking methods is achieved under the assumption of RDPS.

4 Discussions

In Section 3, we have shown that statistical consistency of pairwise ranking methods is achieved under the assumption of RDPS. Considering the contradicting conclusion drawn in [11], a natural question is whether RDPS is stronger than the low-noise setting used in [11]. In this section we discuss this issue.
4.1 Relationships of RDPS with Previous Work

Here, we discuss the relationships between the rank-differentiable property and the assumptions used in some previous works (including the order-preserving property in [19] and the low-noise setting in [11]). According to our analysis, we find that the rank-differentiable property is not a strong assumption on the probability space. Actually, it is a weaker assumption than the order-preserving property and is very similar to the low-noise setting. A sketch map of the relationships between the three assumptions is presented in Figure 1, where the low-noise probability spaces stand for the set of spaces satisfying the low-noise setting. Detailed discussions are given as follows.

1. Rank-Differentiable vs. Order-Preserving

The rank-differentiable property is defined on the space of multi-level ratings while the order-preserving property is defined on the permutation space. To understand their relationship, we need to put them onto the same space. Actually, we can restrict the space of multi-level ratings to the permutation space by setting K = m − 1 and requiring the ratings of each two objects to be different. After doing so, it is not difficult to see that the rank-differentiable property is weaker than the order-preserving property, as shown in the following theorem.

Theorem 6. Let K = m − 1. For each permutation y ∈ Y, where y(i) stands for the position of xi, define the corresponding ratings ry = (ry1, · · · , rym) as ryi = m − y(i), i = 1, · · · , m. Assume that P(ry) = P(y), and P(r) = 0 if there does not exist a permutation y s.t. r = ry. If the probability space is order-preserving with respect to the m − 1 pairs (j1, j2), (j2, j3), · · · , (jm−1, jm), it is rank-differentiable with the equivalence class [r∗], where r∗ji > r∗ji+1, i = 1, · · · , m − 1, but the converse is not always true.

2. Rank-Differentiable vs. Low-Noise

The rank-differentiable property is defined on the space of multi-level ratings while the low-noise setting is defined on the space of DAGs. According to the correspondence between ratings and DAGs (as stated in Section 2), we can restrict the space of DAGs to the space of multi-level ratings. Consequently, we obtain the relationship between the rank-differentiable property and the low-noise setting as follows: (1) Mathematically, the inequalities in the low-noise setting can be viewed as combinations of the corresponding inequalities in the rank-differentiable property. They are similar to each other in form, and the rank-differentiable property is not stronger than the low-noise setting. (2) Intuitively, the rank-differentiable property induces 'separability on pairs' and 'transitivity over pairs' as described in Theorems 2 and 3, while the low-noise setting aims to explicitly express the transitivity over pairs, but fails to achieve it.

Let us use an example to illustrate the above points. Suppose there are three objects to be ranked in the setting of three-level ratings (K = 3). Furthermore, suppose that the ratings of every two objects are different and all the graphs are fully connected DAGs in the setting of [11]. We order the ratings and DAGs as:

r1 = (2, 1, 0), r2 = (1, 2, 0), r3 = (2, 0, 1), r4 = (0, 2, 1), r5 = (1, 0, 2), r6 = (0, 1, 2);
G1 = {(1→2), (2→3), (1→3)}, G2 = {(2→1), (1→3), (2→3)}, G3 = {(1→3), (3→2), (1→2)},
G4 = {(2→3), (3→1), (2→1)}, G5 = {(3→1), (1→2), (3→2)}, G6 = {(3→2), (2→1), (3→1)}.

Therefore ri and Gi are in one-to-one correspondence; we can set the probabilities as P(ri|x) = P(Gi|x) = Pi and define $a^{G_i}_{kl} = D(r_{ik}, r_{il})$, i = 1, · · · , 6; k, l = 1, 2, 3. Considering the conditions in the definition of RDPS, rank-differentiability with [r1] requires the following inequalities to hold, with at least one inequality in each of (9) and (10) holding strictly.
$$P_1 - P_2 \ge 0, \quad P_3 - P_4 \ge 0, \quad P_5 - P_6 \ge 0, \qquad (9)$$
$$P_4 - P_6 \ge 0, \quad P_2 - P_5 \ge 0, \quad P_1 - P_3 \ge 0. \qquad (10)$$

We assume there are edges 1 → 2 and 2 → 3 in the difference graph. Then the low-noise setting in Definition 8 of [11] requires that a13 − a31 ≥ a12 − a21 + a23 − a32, where

$$a_{12} - a_{21} = D(2, 1)(P_1 - P_2) + D(2, 0)(P_3 - P_4) + D(1, 0)(P_5 - P_6),$$
$$a_{23} - a_{32} = D(2, 1)(P_4 - P_6) + D(2, 0)(P_2 - P_5) + D(1, 0)(P_1 - P_3),$$
$$a_{13} - a_{31} = D(2, 1)(P_3 - P_5) + D(2, 0)(P_1 - P_6) + D(1, 0)(P_2 - P_4).$$

According to the above example: (1) a12 − a21 and a23 − a32 are exactly combinations of the terms in (9) and (10), respectively. Thus, if the probability space is rank-differentiable with [r1], we can only get a12 − a21 > 0 and a23 − a32 > 0, but not the inequalities in the low-noise setting. This indicates that our rank-differentiable property is not stronger than the low-noise setting. (2) With the assumption that aij − aji > 0 can guarantee the optimal ranking with which xi is ranked higher than xj, it seems that the low-noise setting intends to make the preferences 1 → 2 and 2 → 3 transitive to 1 → 3. However, this assumption is not always true. Instead, the rank-differentiable property naturally induces the 'transitivity over pairs' (see Theorems 2 and 3). In this sense, the rank-differentiable property is much more powerful than the low-noise setting, although not stronger.

4.2 Explanation on the Theoretical Contradiction

On one hand, different conclusions on the consistency of pairwise ranking methods have been obtained in our work and in [11]. On the other hand, we have shown that there exists a connection between the rank-differentiable property and the low-noise setting (see Figure 1). Therefore, one may get confused by the contradicting results and may wonder what will happen if a probability space satisfies both the rank-differentiable property and the low-noise setting. In this subsection, we discuss this issue.
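The three-object example above can be checked numerically. In this sketch the probabilities P1, ..., P6 and the weight D = |a − b| are illustrative choices of ours that satisfy (9) and (10); they are not values from the paper:

```python
# Probabilities for the ratings r1..r6 = (2,1,0), (1,2,0), (2,0,1),
# (0,2,1), (1,0,2), (0,1,2), chosen to satisfy inequalities (9) and (10).
P1, P2, P3, P4, P5, P6 = 0.30, 0.20, 0.20, 0.10, 0.15, 0.05
assert P1 >= P2 and P3 >= P4 and P5 >= P6   # (9)
assert P4 >= P6 and P2 >= P5 and P1 >= P3   # (10)

D = lambda a, b: abs(a - b)  # a valid weight function (illustrative)

# The low-noise quantities from the example above.
a12_21 = D(2, 1) * (P1 - P2) + D(2, 0) * (P3 - P4) + D(1, 0) * (P5 - P6)
a23_32 = D(2, 1) * (P4 - P6) + D(2, 0) * (P2 - P5) + D(1, 0) * (P1 - P3)
a13_31 = D(2, 1) * (P3 - P5) + D(2, 0) * (P1 - P6) + D(1, 0) * (P2 - P4)
```

As the text argues, rank-differentiability with [r1] forces a12 − a21 > 0 and a23 − a32 > 0 here, but it says nothing by itself about the extra low-noise inequality a13 − a31 ≥ (a12 − a21) + (a23 − a32).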
Please note that we adopt multi-level ratings as the labeling strategy (as stated clearly in Section 2) in our analysis. With this setting, the graph space G in [11] will not contain all the DAGs. For example, considering a three-object case, the graph G2 = {(1, 2, 3) : (2 → 3), (3 → 1)} in the proof of Theorem 11 of [11] (the main negative result on the consistency of pairwise surrogate losses) actually does not exist. That is because if 2 → 3 and 3 → 1 exist in a graph G, we get that r2 > r3 and r3 > r1, according to the correspondence between graphs and ratings stated in Section 2. Therefore, we immediately get r2 > r1. Once again, by the correspondence between graphs and ratings, the edge 2 → 1 should be contained in graph G, which contradicts G2. Thus, G2 does not exist in the setting of multi-level ratings. However, the proof in [11] does not take the constraint of multi-level ratings into consideration, and thus deduces contradictory results. From the above discussions, we can see that our theoretical results contradict those in [11] mainly because the two works consider different settings and assumptions. If a probability space satisfies both the rank-differentiable property and the low-noise setting, the pairwise ranking methods will be consistent with WPDL in the setting of multi-level ratings but inconsistent in the setting of DAGs. One may argue that the setting of multi-level ratings is not as general as the DAG setting; however, please note that multi-level ratings are the dominant setting in the literature of 'learning to rank' [13, 16, 15, 6] and have been widely used in many applications such as web search and document retrieval [17, 5]. Therefore, we think the setting of multi-level ratings is general enough, and our result has value for the mainstream research on learning to rank.
To sum up, based on all the discussions in this paper, we argue that it is not yet appropriate to draw any conclusion about the inconsistency of pairwise ranking methods, especially because it is hard to know what the probability space really is. In this sense, we think the pairwise ranking methods are still good choices for real ranking applications, due to their good empirical performance.
5 Conclusions
In this paper, we have discussed the statistical consistency of ranking methods. Specifically, we argue that the previous results on the inconsistency of commonly-used pairwise ranking methods are not conclusive, as they depend on the assumptions about the probability space. We then propose a new assumption, which we call a rank-differentiable probability space (RDPS), and prove that the pairwise ranking methods are consistent with the same true loss as in previous studies under this assumption. We show that RDPS is not a stronger assumption than those used in previous work, indicating that our finding is as reliable as previous ones.
Acknowledgments
This research work was funded by the National Natural Science Foundation of China under Grants No. 60933005, No. 61173008, No. 61003166, and No. 61203298, and by the 973 Program of China under Grant No. 2012CB316303.
References
[1] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[2] D. Buffoni, C. Calauzenes, P. Gallinari, and N. Usunier. Learning scoring functions with order-preserving losses and standardized supervision. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 825–832, 2011.
[3] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pages 89–96, 2005.
[4] O. Chapelle.
Training a support vector machine in the primal. Neural Computation, 19:1155–1178, 2007.
[5] O. Chapelle and Y. Chang. Yahoo! learning to rank challenge overview. Journal of Machine Learning Research - Proceedings Track, 14:1–24, 2011.
[6] O. Chapelle, Y. Chang, and T.-Y. Liu. Future directions in learning to rank. Journal of Machine Learning Research - Proceedings Track, 14:91–100, 2011.
[7] W. Chen, T.-Y. Liu, Y. Lan, Z. Ma, and H. Li. Ranking measures and loss functions in learning to rank. In 24th Annual Conference on Neural Information Processing Systems (NIPS 2009), pages 315–323, 2009.
[8] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and scoring using empirical risk minimization. In Proceedings of the 18th Annual Conference on Learning Theory (COLT 2005), volume 3559, pages 1–15, 2005.
[9] D. Cossock and T. Zhang. Subset ranking using regression. In Proceedings of the 19th Annual Conference on Learning Theory (COLT 2006), pages 605–619, 2006.
[10] O. Dekel, C. D. Manning, and Y. Singer. Log-linear models for label ranking. In 18th Annual Conference on Neural Information Processing Systems (NIPS 2003), 2003.
[11] J. C. Duchi, L. W. Mackey, and M. I. Jordan. On the consistency of ranking algorithms. In Proceedings of the 27th International Conference on Machine Learning (ICML 2010), pages 327–334, 2010.
[12] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[13] R. Herbrich, K. Obermayer, and T. Graepel. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132, 1999.
[14] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2002), pages 133–142, 2002.
[15] H. Li, T.-Y. Liu, and C. Zhai. Learning to rank for information retrieval (LR4IR 2008). SIGIR Forum, 42:76–79, 2008.
[16] T.-Y. Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 3:225–331, 2009.
[17] T.-Y. Liu, J. Xu, T. Qin, W.-Y. Xiong, and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. In SIGIR '07 Workshop, San Francisco, 2007. Morgan Kaufmann.
[18] P. D. Ravikumar, A. Tewari, and E. Yang. On NDCG consistency of listwise ranking methods. Journal of Machine Learning Research - Proceedings Track, 15:618–626, 2011.
[19] F. Xia, T.-Y. Liu, J. Wang, W. S. Zhang, and H. Li. Listwise approach to learning to rank - theory and algorithm. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), 2008.
[20] T. Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225–1251, 2004.
[21] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32:56–85, 2004.
| 2012 | 85 |
4,804 |
A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and Tighter Bound
Shusen Wang and Zhihua Zhang
College of Computer Science & Technology, Zhejiang University, Hangzhou, China 310027
{wss,zhzhang}@zju.edu.cn
Abstract
The CUR matrix decomposition is an important extension of the Nyström approximation to a general matrix. It approximates any data matrix in terms of a small number of its columns and rows. In this paper we propose a novel randomized CUR algorithm with an expected relative-error bound. Compared with the existing relative-error CUR algorithms, the proposed algorithm has a tighter theoretical bound and lower time complexity, and it avoids maintaining the whole data matrix in main memory. Finally, experiments on several real-world datasets demonstrate significant improvement over the existing relative-error algorithms.
1 Introduction
Large-scale matrices emerging every day from stocks, genomes, web documents, web images, and videos bring new challenges to modern data analysis. Most efforts have been focused on manipulating, understanding, and interpreting large-scale data matrices. In many cases, matrix factorization methods are employed to construct compressed and informative representations to facilitate computation and interpretation. A principled approach is the truncated singular value decomposition (SVD), which finds the best low-rank approximation of a data matrix. Applications of the SVD such as eigenfaces [20, 21] and latent semantic analysis [4] have proven very successful. However, the basis vectors resulting from the SVD have little concrete meaning, which makes it very difficult to understand and interpret the data in question. An example in [10, 19] illustrates this point well: the vector [(1/2) age − (1/√2) height + (1/2) income], the sum of the significant uncorrelated features from a dataset of people's features, is not particularly informative.
The authors of [17] have also claimed: "it would be interesting to try to find basis vectors for all experiment vectors, using actual experiment vectors and not artificial bases that offer little insight." Therefore, it is of great interest to represent a data matrix in terms of a small number of actual columns and/or actual rows of the matrix. The CUR matrix decomposition provides such techniques, and it has been shown to be very useful in high-dimensional data analysis [19]. Given a matrix A, the CUR technique selects a subset of columns of A to construct a matrix C and a subset of rows of A to construct a matrix R, and computes a matrix U such that Ã = CUR best approximates A. Typical CUR algorithms [7, 8, 10] work in a two-stage manner. Stage 1 is a standard column selection procedure, and Stage 2 does row selection from A and C simultaneously; thus Stage 2 is more complicated than Stage 1. The CUR matrix decomposition problem is widely studied in the literature [7, 8, 9, 10, 12, 13, 16, 18, 19, 22]. Perhaps the most widely known work on the CUR problem is [10], in which the authors devised a randomized CUR algorithm called the subspace sampling algorithm. In particular, the algorithm has a (1 + ϵ) relative-error ratio with high probability (w.h.p.). Unfortunately, all the existing CUR algorithms require a large number of columns and rows to be chosen. For example, for an m × n matrix A and a target rank k ≤ min{m, n}, the state-of-the-art CUR algorithm, the subspace sampling algorithm in [10], requires exactly O(k^4 ϵ^{-6}) rows, or O(k ϵ^{-4} log^2 k) rows in expectation, to achieve a (1 + ϵ) relative-error ratio w.h.p. Moreover, the computational cost of this algorithm is at least the cost of the truncated SVD of A, that is, O(min{mn^2, nm^2}) (see footnote 1). The algorithms are therefore impractical for large-scale matrices. In this paper we develop a CUR algorithm which beats the state-of-the-art algorithm in both theory and experiments.
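To get a feel for the gap quoted above, one can compare the two row counts numerically, ignoring the unknown constants hidden in the O(·) notation (treating them as 1 is an assumption of this sketch, as is dropping the (1 + o(1)) factor):

```python
import math

# Illustrative comparison of the expected row counts quoted in the text,
# with all hidden big-O constants assumed to be 1.

def subspace_sampling_rows(k, eps):
    # O(k * eps^-4 * log^2 k) rows in expectation (subspace sampling, [10])
    return k * eps ** -4 * math.log(k) ** 2

def fast_cur_rows(k, eps):
    # r = 2c/eps with c = 2k/eps for the algorithm proposed in this paper
    return 4 * k / eps ** 2

k, eps = 20, 0.5
print(subspace_sampling_rows(k, eps))  # roughly 2871.7
print(fast_cur_rows(k, eps))           # 320.0
```

Even at this moderate accuracy the ϵ^{-4} log^2 k dependence dominates, which is the practical motivation for the tighter bound developed below.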
In particular, we show in Theorem 5 a novel randomized CUR algorithm with lower time complexity and a tighter theoretical bound in comparison with the state-of-the-art CUR algorithm in [10]. The rest of this paper is organized as follows. Section 3 introduces several existing column selection algorithms and the state-of-the-art CUR algorithm. Section 4 describes and analyzes our novel CUR algorithm. Section 5 empirically compares our proposed algorithm with the state-of-the-art algorithm.
2 Notation
For a matrix A = [a_{ij}] ∈ R^{m×n}, let a_(i) be its i-th row and a_j be its j-th column. Let ∥A∥_1 = ∑_{i,j} |a_{ij}| be the ℓ1-norm, ∥A∥_F = (∑_{i,j} a_{ij}^2)^{1/2} be the Frobenius norm, and ∥A∥_2 be the spectral norm. Moreover, let I_m denote the m × m identity matrix, and 0_{mn} denote the m × n zero matrix. Let
A = U_A Σ_A V_A^T = ∑_{i=1}^{ρ} σ_{A,i} u_{A,i} v_{A,i}^T = U_{A,k} Σ_{A,k} V_{A,k}^T + U_{A,k⊥} Σ_{A,k⊥} V_{A,k⊥}^T
be the SVD of A, where ρ = rank(A) and U_{A,k}, Σ_{A,k}, and V_{A,k} correspond to the top k singular values. We denote A_k = U_{A,k} Σ_{A,k} V_{A,k}^T. Furthermore, let A† = U_{A,ρ} Σ_{A,ρ}^{-1} V_{A,ρ}^T be the Moore-Penrose inverse of A [1].
3 Related Work
Section 3.1 introduces several relative-error column selection algorithms related to this work. Section 3.2 describes the state-of-the-art CUR algorithm in [10]. Section 3.3 discusses the connection between the column selection problem and the CUR problem.
3.1 Relative-Error Column Selection Algorithms
Given a matrix A ∈ R^{m×n}, column selection is the problem of selecting c columns of A to construct C ∈ R^{m×c} minimizing ∥A − CC†A∥_F. Since there are (n choose c) possible ways of constructing C, selecting the best subset is a hard problem. In recent years, many polynomial-time approximate algorithms have been proposed, among which we are particularly interested in the algorithms with relative-error bounds; that is, with c ≥ k columns selected from A, there is a constant η such that ∥A − CC†A∥_F ≤ η ∥A − A_k∥_F. We call η the relative-error ratio. We now present some recent results related to this work.
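The column-selection objective above, ∥A − CC†A∥_F, can be evaluated without forming an explicit pseudo-inverse: CC† is the orthogonal projector onto span(C), which can be applied via an orthonormal basis. A minimal pure-Python sketch (the matrix, the column subsets, and all helper names are invented for illustration):

```python
import math
import random

# Evaluate ||A - C C^+ A||_F for C = a chosen subset of A's columns, by
# projecting every column of A onto span(C) via Gram-Schmidt.

def column(A, j):
    return [row[j] for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(cols, tol=1e-12):
    """Orthonormal basis for the span of the given vectors."""
    basis = []
    for v in cols:
        w = list(v)
        for q in basis:
            c = dot(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        if norm > tol:
            basis.append([wi / norm for wi in w])
    return basis

def projection_error(A, col_idx):
    """Frobenius norm of A - C C^+ A for the chosen column indices."""
    Q = gram_schmidt([column(A, j) for j in col_idx])
    err2 = 0.0
    for j in range(len(A[0])):
        a = column(A, j)
        proj = [0.0] * len(a)
        for q in Q:
            c = dot(q, a)
            proj = [p + c * qi for p, qi in zip(proj, q)]
        err2 += sum((ai - pi) ** 2 for ai, pi in zip(a, proj))
    return math.sqrt(err2)

random.seed(0)
A = [[random.gauss(0, 1) for _ in range(6)] for _ in range(5)]
print(projection_error(A, [0, 2]))          # error with 2 columns
print(projection_error(A, list(range(6))))  # ~0: all columns span col(A)
```

The error is monotone in the chosen subset (a larger span can only shrink the residual), which is why relative-error bounds focus on how few columns suffice.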
We first introduce a recently developed deterministic algorithm called the dual set sparsification, proposed in [2, 3]. We state its guarantee in Lemma 1. This algorithm is a building block of some more powerful algorithms (e.g., Lemma 2), and our novel CUR algorithm also relies on it. We attach the algorithm in Appendix A.
Lemma 1 (Column Selection via the Dual Set Sparsification Algorithm). Given a matrix A ∈ R^{m×n} of rank ρ and a target rank k (< ρ), there exists a deterministic algorithm to select c (> k) columns of A and form a matrix C ∈ R^{m×c} such that
∥A − CC†A∥_F ≤ √(1 + 1/(1 − √(k/c))^2) ∥A − A_k∥_F.
Moreover, the matrix C can be computed in T_{V_{A,k}} + O(mn + nck^2) time, where T_{V_{A,k}} is the time needed to compute the top k right singular vectors of A. [Footnote 1: Although some partial SVD algorithms, such as Krylov subspace methods, require only O(mnk) time, they are all numerically unstable. See [15] for more discussion.]
There are also a variety of randomized column selection algorithms achieving relative-error bounds in the literature: [3, 5, 6, 10, 14]. A randomized algorithm in [2] selects only c = (2k/ϵ)(1 + o(1)) columns to achieve the expected relative-error ratio (1 + ϵ). The algorithm is based on the approximate SVD via random projection [15], the dual set sparsification algorithm [2], and the adaptive sampling algorithm [6]. Here we present the main result of this algorithm in Lemma 2. Our proposed CUR algorithm is motivated by and relies on this algorithm.
Lemma 2 (Near-Optimal Column Selection Algorithm). Given a matrix A ∈ R^{m×n} of rank ρ, a target rank k (2 ≤ k < ρ), and 0 < ϵ < 1, there exists a randomized algorithm to select at most c = (2k/ϵ)(1 + o(1)) columns of A to form a matrix C ∈ R^{m×c} such that
(E∥A − CC†A∥_F)^2 ≤ E∥A − CC†A∥_F^2 ≤ (1 + ϵ) ∥A − A_k∥_F^2,
where the expectations are taken w.r.t. C. Furthermore, the matrix C can be computed in O((mnk + nk^3) ϵ^{-2/3}) time.
3.2 The Subspace Sampling CUR Algorithm
Drineas et al. [10] proposed a two-stage randomized CUR algorithm which has a relative-error bound w.h.p. Given a matrix A ∈ R^{m×n} and a target rank k, in the first stage the algorithm chooses exactly c = O(k^2 ϵ^{-2} log δ^{-1}) columns (or c = O(k ϵ^{-2} log k log δ^{-1}) in expectation) of A to construct C ∈ R^{m×c}; in the second stage it chooses exactly r = O(c^2 ϵ^{-2} log δ^{-1}) rows (or r = O(c ϵ^{-2} log c log δ^{-1}) in expectation) of A and C simultaneously to construct R and U. With probability at least 1 − δ, the relative-error ratio is 1 + ϵ. The computational cost is dominated by the truncated SVDs of A and C.
Though the algorithm is ϵ-optimal with high probability, it requires too many rows to be chosen: at least r = O(k ϵ^{-4} log^2 k) rows in expectation. In this paper we seek to devise an algorithm with milder requirements on the column and row numbers.
3.3 Connection between Column Selection and CUR Matrix Decomposition
The CUR problem has a close connection with the column selection problem. As aforementioned, the first stage of existing CUR algorithms is simply a column selection procedure. However, the second stage is more complicated. If the second stage is naïvely solved by a column selection algorithm on A^T, then the error ratio will be at least (2 + ϵ). For a relative-error CUR algorithm, the first stage seeks to bound the construction error ratio ∥A − CC†A∥_F / ∥A − A_k∥_F, while the second stage seeks to bound ∥A − CC†AR†R∥_F / ∥A − CC†A∥_F given C. Actually, the first stage is a special case of the second stage in which C = A_k. Given a matrix A, if an algorithm solving the second stage results in a bound ∥A − CC†AR†R∥_F / ∥A − CC†A∥_F ≤ η, then this algorithm also solves the column selection problem for A^T with an η relative-error ratio. Thus the second stage of CUR is a generalization of the column selection problem.
4 Main Results
In this section we introduce our proposed CUR algorithm. We call it the fast CUR algorithm because it has lower time complexity than the SVD. We describe it in Algorithm 1 and give a theoretical analysis in Theorem 5. Theorem 5 relies on Lemma 2 and Theorem 4, and Theorem 4 relies on Theorem 3. Theorem 3 is a generalization of [6, Theorem 2.1], and Theorem 4 is a generalization of [2, Theorem 5].
Algorithm 1: The Fast CUR Algorithm.
1: Input: a real matrix A ∈ R^{m×n}, target rank k, ϵ ∈ (0, 1], target column number c = (2k/ϵ)(1 + o(1)), target row number r = (2c/ϵ)(1 + o(1));
2: // Stage 1: select c columns of A to construct C ∈ R^{m×c}
3: Compute an approximate truncated SVD via random projection such that A_k ≈ Ũ_k Σ̃_k Ṽ_k;
4: Construct U1 ← columns of (A − Ũ_k Σ̃_k Ṽ_k); V1 ← columns of Ṽ_k^T;
5: Compute s1 ← Dual Set Spectral-Frobenius Sparsification Algorithm (U1, V1, c − 2k/ϵ);
6: Construct C1 ← A Diag(s1), and then delete the all-zero columns;
7: Residual matrix D ← A − C1 C1† A;
8: Compute sampling probabilities p_i = ∥d_i∥_2^2 / ∥D∥_F^2, i = 1, · · · , n;
9: Sample c2 = 2k/ϵ columns of A with probabilities {p_1, · · · , p_n} to construct C2;
10: // Stage 2: select r rows of A to construct R ∈ R^{r×n}
11: Construct U2 ← columns of (A − Ũ_k Σ̃_k Ṽ_k)^T; V2 ← columns of Ũ_k^T;
12: Compute s2 ← Dual Set Spectral-Frobenius Sparsification Algorithm (U2, V2, r − 2c/ϵ);
13: Construct R1 ← Diag(s2) A, and then delete the all-zero rows;
14: Residual matrix B ← A − A R1† R1; compute q_j = ∥b_(j)∥_2^2 / ∥B∥_F^2, j = 1, · · · , m;
15: Sample r2 = 2c/ϵ rows of A with probabilities {q_1, · · · , q_m} to construct R2;
16: return C = [C1, C2], R = [R1^T, R2^T]^T, and U = C† A R†.
4.1 Adaptive Sampling
The relative-error adaptive sampling algorithm is established in [6, Theorem 2.1]. The algorithm is based on the following idea: after selecting a proportion of columns from A to form C1 by an arbitrary algorithm, the algorithm randomly samples additional c2 columns according to the residual A − C1 C1† A. Boutsidis et al. [2] used the adaptive sampling algorithm to decrease the residual of the dual set sparsification algorithm and obtained a (1 + ϵ) relative-error bound. Here we prove a new bound for the adaptive sampling algorithm. Interestingly, this new bound is a generalization of the original one in [6, Theorem 2.1]. In other words, Theorem 2.1 of [6] is a direct corollary of the following theorem with C = A_k.
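The residual-based sampling step described above (steps 7–9 of Stage 1) can be sketched in a few lines: form the residual D = A − C1C1†A and sample further columns with probabilities proportional to the squared column norms of D. A pure-Python sketch; the matrix, the initial subset, and all helper names are invented for illustration:

```python
import math
import random

# Adaptive sampling probabilities p_i = ||d_i||_2^2 / ||D||_F^2, where
# D = A - C1 C1^+ A and C1 holds an already-chosen subset of A's columns.

def column(A, j):
    return [row[j] for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vecs, tol=1e-12):
    basis = []
    for v in vecs:
        w = list(v)
        for q in basis:
            c = dot(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        n = math.sqrt(dot(w, w))
        if n > tol:
            basis.append([wi / n for wi in w])
    return basis

def adaptive_probs(A, chosen):
    """Probability of each column, proportional to its squared residual norm."""
    Q = gram_schmidt([column(A, j) for j in chosen])  # basis of span(C1)
    norms = []
    for j in range(len(A[0])):
        a = column(A, j)
        proj = [0.0] * len(a)
        for q in Q:
            c = dot(q, a)
            proj = [p + c * qi for p, qi in zip(proj, q)]
        norms.append(sum((ai - pi) ** 2 for ai, pi in zip(a, proj)))
    total = sum(norms)
    return [x / total for x in norms]

random.seed(2)
A = [[random.gauss(0, 1) for _ in range(5)] for _ in range(4)]
p = adaptive_probs(A, [1, 4])
print([round(x, 3) for x in p])
```

Columns already captured by C1 get (numerically) zero probability, so the extra samples concentrate on the part of A that the current pick fails to explain; the row-sampling step in Stage 2 is the same idea applied to A^T.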
Theorem 3 (The Adaptive Sampling Algorithm). Given a matrix A ∈ R^{m×n} and a matrix C ∈ R^{m×c} such that rank(C) = rank(CC†A) = ρ (ρ ≤ c ≤ n), let R1 ∈ R^{r1×n} consist of r1 rows of A, and define the residual B = A − A R1† R1. Additionally, for i = 1, · · · , m, define p_i = ∥b_(i)∥_2^2 / ∥B∥_F^2. We further sample r2 rows i.i.d. from A, in each trial choosing the i-th row with probability p_i. Let R2 ∈ R^{r2×n} contain the r2 sampled rows and let R = [R1^T, R2^T]^T ∈ R^{(r1+r2)×n}. Then the following inequality holds:
E∥A − CC†AR†R∥_F^2 ≤ ∥A − CC†A∥_F^2 + (ρ/r2) ∥A − A R1† R1∥_F^2,
where the expectation is taken w.r.t. R2.
4.2 The Fast CUR Algorithm
Based on the dual set sparsification algorithm of Lemma 1 and the adaptive sampling algorithm of Theorem 3, we develop a randomized algorithm to solve the second stage of the CUR problem. We present the guarantees of the algorithm in Theorem 4. Theorem 5 of [2] is a special case of the following theorem in which C = A_k.
Theorem 4 (The Fast Row Selection Algorithm). Given a matrix A ∈ R^{m×n} and a matrix C ∈ R^{m×c} such that rank(C) = rank(CC†A) = ρ (ρ ≤ c ≤ n), and a target rank k (≤ ρ), the proposed randomized algorithm selects r = (2ρ/ϵ)(1 + o(1)) rows of A to construct R ∈ R^{r×n} such that
E∥A − CC†AR†R∥_F^2 ≤ ∥A − CC†A∥_F^2 + ϵ ∥A − A_k∥_F^2,
where the expectation is taken w.r.t. R. Furthermore, the matrix R can be computed in O((mnk + mk^3) ϵ^{-2/3}) time.
Based on Lemma 2 and Theorem 4, we now present the main theorem for the fast CUR algorithm.
Table 1: A summary of the datasets.
Dataset | Type | Size | Source
Redrock | natural image | 18000 × 4000 | http://www.agarwala.org/efficient gdc/
Arcene | biology | 10000 × 900 | http://archive.ics.uci.edu/ml/datasets/Arcene
Dexter | bag of words | 20000 × 2600 | http://archive.ics.uci.edu/ml/datasets/Dexter
Theorem 5 (The Fast CUR Algorithm).
Given a matrix A ∈ R^{m×n} and a positive integer k ≪ min{m, n}, the fast CUR algorithm (described in Algorithm 1) randomly selects c = (2k/ϵ)(1 + o(1)) columns of A to construct C ∈ R^{m×c} with the near-optimal column selection algorithm of Lemma 2, and then selects r = (2c/ϵ)(1 + o(1)) rows of A to construct R ∈ R^{r×n} with the fast row selection algorithm of Theorem 4. Then we have
E∥A − CUR∥_F = E∥A − C(C†AR†)R∥_F ≤ (1 + ϵ) ∥A − A_k∥_F.
Moreover, the algorithm runs in O(mnk ϵ^{-2/3} + (m + n)k^3 ϵ^{-2/3} + mk^2 ϵ^{-2} + nk^2 ϵ^{-4}) time.
Since k, c, r ≪ min{m, n} by assumption, the time complexity of the fast CUR algorithm is lower than that of the SVD of A. This is the main reason why we call it the fast CUR algorithm. Another advantage of this algorithm is that it avoids loading the whole m × n data matrix A into main memory. None of the three steps (the randomized SVD, the dual set sparsification algorithm, and the adaptive sampling algorithm) requires loading the whole of A into memory. The most memory-expensive operation throughout the fast CUR algorithm is computing the Moore-Penrose inverses of C and R, which requires maintaining an m × c matrix or an r × n matrix in memory. In comparison, the subspace sampling algorithm requires loading the whole matrix into memory to compute its truncated SVD.
5 Empirical Comparisons
In this section we provide empirical comparisons among the relative-error CUR algorithms on several datasets. We report the relative-error ratio and the running time of each algorithm on each dataset. The relative-error ratio is defined by
relative-error ratio = ∥A − CUR∥_F / ∥A − A_k∥_F,
where k is a specified target rank. We conduct experiments on three datasets: a natural image, biology data, and a bag of words. Table 1 briefly summarizes the datasets. Redrock is a large natural image. Arcene and Dexter are both from the UCI repository [11]. Arcene is a biology dataset with 900 instances and 10000 attributes.
Dexter is a bag-of-words dataset with a 20000-word vocabulary and 2600 documents. Each dataset is represented as a data matrix, upon which we apply the CUR algorithms. We implement all the algorithms in MATLAB 7.10.0 and conduct the experiments on a workstation with 12 Intel Xeon 3.47GHz CPUs, 12GB memory, and the Ubuntu 10.04 system. According to the analysis in [10] and in this paper, k, c, and r should be integers far less than m and n. For each dataset and each algorithm, we set k = 10, 20, or 50, and c = αk, r = αc, where α ranges over each set of experiments. We repeat each set of experiments 20 times and report the average and the standard deviation of the error ratios. The results are depicted in Figures 1, 2, and 3. They show that the fast CUR algorithm has a much lower relative-error ratio than the subspace sampling algorithm. The experimental results match our theoretical analysis in Section 4 well. As for the running time, the fast CUR algorithm is more efficient when c and r are small. When c and r become large, the fast CUR algorithm becomes less efficient. This is because its time complexity is linear in ϵ^{-4}, and large c and r imply small ϵ. However, the purpose of CUR is to select a small number of columns and rows from the data matrix, that is, c ≪ n and r ≪ m. So we are not interested in the cases where c and r are large compared with n and m, say k = 20 and α = 10.
[Figures 1–3 show the empirical results on the Redrock, Arcene, and Dexter data sets, respectively. Each figure plots, for k = 10, 20, 50 (panels (a)–(c), with c = αk and r = αc), the running time and the relative-error ratio (Frobenius norm) against α for Subspace Sampling (exact), Subspace Sampling (expected), and Fast CUR.]
Figure 1: Empirical results on the Redrock data set.
Figure 2: Empirical results on the Arcene data set.
Figure 3: Empirical results on the Dexter data set.
6 Conclusions
In this paper we have proposed a novel randomized algorithm for the CUR matrix decomposition problem. This algorithm is faster, more scalable, and more accurate than the state-of-the-art algorithm, i.e., the subspace sampling algorithm.
Our algorithm requires only c = 2kϵ^{-1}(1 + o(1)) columns and r = 2cϵ^{-1}(1 + o(1)) rows to achieve a (1 + ϵ) relative-error ratio. To achieve the same relative-error bound, the subspace sampling algorithm requires c = O(k ϵ^{-2} log k) columns and r = O(c ϵ^{-2} log c) rows selected from the original matrix. Our algorithm also beats the subspace sampling algorithm in time complexity: it costs O(mnk ϵ^{-2/3} + (m + n)k^3 ϵ^{-2/3} + mk^2 ϵ^{-2} + nk^2 ϵ^{-4}) time, which is lower than the O(min{mn^2, m^2n}) of the subspace sampling algorithm when k is small. Moreover, our algorithm enjoys the further advantage of avoiding loading the whole data matrix into main memory, which also makes it more scalable. Finally, the empirical comparisons have also demonstrated the effectiveness and efficiency of our algorithm.
A The Dual Set Sparsification Algorithm
For the sake of completeness, we attach the dual set sparsification algorithm here and describe some implementation details. The dual set sparsification algorithms are deterministic algorithms established in [2]. The fast CUR algorithm calls the dual set spectral-Frobenius sparsification algorithm [2, Lemma 13] in both stages. We show this algorithm in Algorithm 2 and its bounds in Lemma 6.
Lemma 6 (Dual Set Spectral-Frobenius Sparsification). Let U = {x1, · · · , xn} ⊂ R^l (l < n) contain the columns of an arbitrary matrix X ∈ R^{l×n}. Let V = {v1, · · · , vn} ⊂ R^k (k < n) be a decomposition of the identity, i.e., ∑_{i=1}^n v_i v_i^T = I_k. Given an integer r with k < r < n, Algorithm 2 deterministically computes a set of weights s_i ≥ 0 (i = 1, · · · , n), at most r of which are non-zero, such that
λ_k(∑_{i=1}^n s_i v_i v_i^T) ≥ (1 − √(k/r))^2 and tr(∑_{i=1}^n s_i x_i x_i^T) ≤ ∥X∥_F^2.
Algorithm 2: Deterministic Dual Set Spectral-Frobenius Sparsification Algorithm.
1: Input: U = {x_i}_{i=1}^n ⊂ R^l (l < n); V = {v_i}_{i=1}^n ⊂ R^k with ∑_{i=1}^n v_i v_i^T = I_k (k < n); an integer r with k < r < n;
2: Initialize: s_0 = 0_{n×1}, A_0 = 0_{k×k};
3: Compute ∥x_i∥_2^2 for i = 1, · · · , n, and then compute δ_U = (∑_{i=1}^n ∥x_i∥_2^2) / (1 − √(k/r));
4: for τ = 0 to r − 1 do
5: Compute the eigenvalue decomposition of A_τ;
6: Find an index j in {1, · · · , n} and compute a weight t > 0 such that
δ_U^{-1} ∥x_j∥_2^2 ≤ t^{-1} ≤ v_j^T (A_τ − (L_τ + 1) I_k)^{-2} v_j / (ϕ(L_τ + 1, A_τ) − ϕ(L_τ, A_τ)) − v_j^T (A_τ − (L_τ + 1) I_k)^{-1} v_j,
where ϕ(L, A) = ∑_{i=1}^k (λ_i(A) − L)^{-1} and L_τ = τ − √(rk);
7: Update the j-th component of s_τ and A_τ: s_{τ+1}[j] = s_τ[j] + t, A_{τ+1} = A_τ + t v_j v_j^T;
8: end for
9: return s = ((1 − √(k/r)) / r) s_r.
The weights s_i can be computed deterministically in O(rnk^2 + nl) time. Here we would like to mention the implementation of Algorithm 2, which is not described in detail in [2]. In each iteration the algorithm performs one eigenvalue decomposition, A_τ = W Λ W^T (A_τ is guaranteed to be positive semi-definite in each iteration). Since (A_τ − α I_k)^q = W Diag((λ_1 − α)^q, · · · , (λ_k − α)^q) W^T, we can efficiently compute (A_τ − (L_τ + 1) I_k)^q from the eigenvalue decomposition of A_τ. With the eigenvalues at hand, ϕ(L, A_τ) can also be computed directly.
Acknowledgments
This work has been supported in part by the Natural Science Foundations of China (No. 61070239), the Google visiting faculty program, and the Scholarship Award for Excellent Doctoral Students granted by the Ministry of Education.
References
[1] Adi Ben-Israel and Thomas N. E. Greville. Generalized Inverses: Theory and Applications. Second Edition. Springer, 2003.
[2] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal column-based matrix reconstruction. CoRR, abs/1103.0995, 2011.
[3] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal column-based matrix reconstruction. In Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS '11, pages 305–314, 2011.
[4] Scott Deerwester, Susan T.
Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
[5] Amit Deshpande and Luis Rademacher. Efficient volume sampling for row/column subset selection. In Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, FOCS '10, pages 329–338, 2010.
[6] Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. Theory of Computing, 2(2006):225–247, 2006.
[7] Petros Drineas. Pass-efficient algorithms for approximating large matrices. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 223–232, 2003.
[8] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition. SIAM Journal on Computing, 36(1):184–206, 2006.
[9] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153–2175, 2005.
[10] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, September 2008.
[11] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[12] S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. A theory of pseudoskeleton approximations. Linear Algebra and Its Applications, 261:1–21, 1997.
[13] S. A. Goreinov, N. L. Zamarashkin, and E. E. Tyrtyshnikov. Pseudo-skeleton approximations by matrices of maximal volume. Mathematical Notes, 62(4):619–623, 1997.
[14] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '12, pages 1207–1214. SIAM, 2012.
[15] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[16] John Hopcroft and Ravi Kannan. Computer Science Theory for the Information Age. 2012.
[17] Finny G. Kuruvilla, Peter J. Park, and Stuart L. Schreiber. Vector algebra in the analysis of genome-wide expression data. Genome Biology, 3:research0011–research0011.1, 2002.
[18] Lester Mackey, Ameet Talwalkar, and Michael I. Jordan. Divide-and-conquer matrix factorization. In Advances in Neural Information Processing Systems 24, 2011.
[19] Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
[20] L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human faces. Journal of the Optical Society of America A, 4(3):519–524, March 1987.
[21] Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
[22] Eugene E. Tyrtyshnikov. Incomplete cross approximation in the mosaic-skeleton method. Computing, 64:367–380, 2000.
|
2012
|
86
|
4,805
|
Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

Arthur Guez (aguez@gatsby.ucl.ac.uk), David Silver (d.silver@cs.ucl.ac.uk), Peter Dayan (dayan@gatsby.ucl.ac.uk)

Abstract

Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems — because it avoids expensive applications of Bayes' rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.

1 Introduction

A key objective in the theory of Markov Decision Processes (MDPs) is to maximize the expected sum of discounted rewards when the dynamics of the MDP are (perhaps partially) unknown. The discount factor pressures the agent to favor short-term rewards, but potentially costly exploration may identify better rewards in the long term. This conflict leads to the well-known exploration-exploitation trade-off. One way to solve this dilemma [3, 10] is to augment the regular state of the agent with the information it has acquired about the dynamics. One formulation of this idea is the augmented Bayes-Adaptive MDP (BAMDP) [18, 9], in which the extra information is the posterior belief distribution over the dynamics, given the data so far observed.
The agent starts in the belief state corresponding to its prior and, by executing the greedy policy in the BAMDP whilst updating its posterior, acts optimally (with respect to its beliefs) in the original MDP. In this framework, rich prior knowledge about statistics of the environment can be naturally incorporated into the planning process, potentially leading to more efficient exploration and exploitation of the uncertain world. Unfortunately, exact Bayesian reinforcement learning is computationally intractable. Various algorithms have been devised to approximate optimal learning, but often at rather large cost. Here, we present a tractable approach that exploits and extends recent advances in Monte-Carlo tree search (MCTS) [16, 20], while avoiding problems associated with applying MCTS directly to the BAMDP. At each iteration in our algorithm, a single MDP is sampled from the agent's current beliefs. This MDP is used to simulate a single episode whose outcome is used to update the value of each node of the search tree traversed during the simulation. By integrating over many simulations, and therefore many sample MDPs, the optimal value of each future sequence is obtained with respect to the agent's beliefs. We prove that this process converges to the Bayes-optimal policy, given infinite samples. To increase computational efficiency, we introduce a further innovation: a lazy sampling scheme that considerably reduces the cost of sampling. We applied our algorithm to a representative set of benchmark problems, comparing against competitive algorithms from the literature. It consistently and significantly outperformed existing Bayesian RL methods, and also recent non-Bayesian approaches, thus achieving state-of-the-art performance. Our algorithm is more efficient than previous sparse sampling methods for Bayes-adaptive planning [25, 6, 2], partly because it does not update the posterior belief state during the course of each simulation.
It thus avoids repeated applications of Bayes' rule, which is expensive for all but the simplest priors over the MDP. Consequently, our algorithm is particularly well suited to support planning in domains with richly structured prior knowledge — a critical requirement for applications of Bayesian reinforcement learning to large problems. We illustrate this benefit by showing that our algorithm can tackle a domain with an infinite number of states and a structured prior over the dynamics, a challenging — if not intractable — task for existing approaches.

2 Bayesian RL

We describe the generic Bayesian formulation of optimal decision-making in an unknown MDP, following [18] and [9]. An MDP is described as a 5-tuple M = ⟨S, A, P, R, γ⟩, where S is the set of states, A is the set of actions, P : S × A × S → R is the state transition probability kernel, R : S × A → R is a bounded reward function, and γ is the discount factor [23]. When all the components of the MDP tuple are known, standard MDP planning algorithms can be used to estimate the optimal value function and policy off-line. In general, the dynamics are unknown, and we assume that P is a latent variable distributed according to a distribution P(P). After observing a history of actions and states ht = s1a1s2a2 . . . at−1st from the MDP, the posterior belief on P is updated using Bayes' rule: P(P|ht) ∝ P(ht|P)P(P). The uncertainty about the dynamics of the model can be transformed into uncertainty about the current state inside an augmented state space S+ = S × H, where S is the state space in the original problem and H is the set of possible histories. The dynamics associated with this augmented state space are described by

P+(⟨s, h⟩, a, ⟨s′, h′⟩) = 1[h′ = has′] ∫ P(s, a, s′) P(P|h) dP,    R+(⟨s, h⟩, a) = R(s, a)    (1)

Together, the 5-tuple M+ = ⟨S+, A, P+, R+, γ⟩ forms the Bayes-Adaptive MDP (BAMDP) for the MDP problem M.
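For intuition about the Bayes-rule update P(P|ht) ∝ P(ht|P)P(P), the following minimal sketch (our illustration, not the paper's code) maintains an independent Dirichlet posterior over each transition row of a small discrete MDP; conjugacy reduces each belief update to a pseudo-count increment, and `sample_model` draws a full model P ∼ P(P|h), as root sampling later requires:

```python
import numpy as np

class DirichletBelief:
    """Independent Dirichlet posterior over each transition row P(s, a, .)."""

    def __init__(self, n_states, n_actions, alpha=1.0):
        # counts[s, a, s'] are Dirichlet pseudo-counts; alpha is the symmetric prior.
        self.counts = np.full((n_states, n_actions, n_states), alpha)

    def update(self, s, a, s_next):
        # Conjugate Bayes-rule update after observing the transition (s, a, s').
        self.counts[s, a, s_next] += 1.0

    def mean_transition(self, s, a):
        # Posterior mean of P(s, a, .).
        row = self.counts[s, a]
        return row / row.sum()

    def sample_model(self, rng):
        # Draw one complete transition model P ~ P(P|h).
        n_s, n_a, _ = self.counts.shape
        model = np.empty_like(self.counts)
        for s in range(n_s):
            for a in range(n_a):
                model[s, a] = rng.dirichlet(self.counts[s, a])
        return model
```

Richer, structured priors (such as the one in Section 5.2) do not admit such a cheap conjugate update, which is precisely what motivates the sampling-based approach of this paper.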
Since the dynamics of the BAMDP are known, it can in principle be solved to obtain the optimal value function associated with each action:

Q*(⟨st, ht⟩, a) = max_π E_π [ Σ_{t′=t}^{∞} γ^(t′−t) r_{t′} | at = a ]    (2)

from which the optimal action for each state can be readily derived.¹ Optimal actions in the BAMDP are executed greedily in the real MDP M and constitute the best course of action for a Bayesian agent with respect to its prior belief over P. It is obvious that the expected performance of the BAMDP policy in the MDP M is bounded above by that of the optimal policy obtained with a fully-observable model, with equality occurring, for example, in the degenerate case in which the prior only has support on the true model.

3 The BAMCP algorithm

3.1 Algorithm Description

The goal of a BAMDP planning method is to find, for each decision point ⟨s, h⟩ encountered, the action a that maximizes Equation 2. Our algorithm, Bayes-adaptive Monte-Carlo Planning (BAMCP), does this by performing a forward-search in the space of possible future histories of the BAMDP using a tailored Monte-Carlo tree search. We employ the UCT algorithm [16] to allocate search effort to promising branches of the state-action tree, and use sample-based rollouts to provide value estimates at each node. For clarity, let us denote by Bayes-Adaptive UCT (BA-UCT) the algorithm that applies vanilla UCT to the BAMDP (i.e., the particular MDP with dynamics described in Equation 1). Sample-based search in the BAMDP using BA-UCT requires the generation of samples from P+ at every single node. This operation requires integration over all possible transition models, or at least a sample of a transition model P — an expensive procedure for all but the simplest generative models P(P).
We avoid this cost by only sampling a single transition model Pi from the posterior at the root of the search tree at the start of each simulation i, and using Pi to generate all the necessary samples during this simulation. Sample-based tree search then acts as a filter, ensuring that the correct distribution of state successors is obtained at each of the tree nodes, as if it were sampled from P+. This root sampling method was originally introduced in the POMCP algorithm [20], developed to solve Partially Observable MDPs.

¹ The redundancy in the state-history tuple notation — st is the suffix of ht — is only present to ensure clarity of exposition.

3.2 BA-UCT with Root Sampling

The root node of the search tree at a decision point represents the current state of the BAMDP. The tree is composed of state nodes representing belief states ⟨s, h⟩ and action nodes representing the effect of particular actions from their parent state node. The visit counts — N(⟨s, h⟩) for state nodes and N(⟨s, h⟩, a) for action nodes — are initialized to 0 and updated throughout the search. A value Q(⟨s, h⟩, a), initialized to 0, is also maintained for each action node. Each simulation traverses the tree without backtracking by following the UCT policy at state nodes, defined by

argmax_a  Q(⟨s, h⟩, a) + c √( log N(⟨s, h⟩) / N(⟨s, h⟩, a) ),

where c is an exploration constant that needs to be set appropriately. Given an action, the transition distribution Pi corresponding to the current simulation i is used to sample the next state. That is, at action node (⟨s, h⟩, a), s′ is sampled from Pi(s, a, ·), and the new state node is set to ⟨s′, has′⟩. When a simulation reaches a leaf, the tree is expanded by attaching a new state node with its connected action nodes, and a rollout policy πro is used to control the MDP defined by the current Pi to some fixed depth (determined using the discount factor). The rollout provides an estimate of the value Q(⟨s, h⟩, a) from the leaf action node.
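The UCT selection rule described above can be sketched as follows; this is our minimal illustration, not the paper's implementation, and the `stats` dictionary (mapping each action to its (visit count, Q-value) pair) is an assumed representation:

```python
import math

def uct_select(stats, n_state, c):
    """Pick argmax_a Q(<s,h>,a) + c * sqrt(log N(<s,h>) / N(<s,h>,a)).

    stats: dict mapping action -> (visit_count, q_value); n_state is N(<s,h>).
    Unvisited actions score +inf, so every action is tried at least once.
    """
    def ucb(action):
        n, q = stats[action]
        if n == 0:
            return float("inf")
        return q + c * math.sqrt(math.log(n_state) / n)
    return max(stats, key=ucb)
```

Small c reduces the rule to greedy exploitation of current Q-values, while large c favors rarely visited actions.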
This estimate is then used to update the value of all action nodes traversed during the simulation: if R is the sampled discounted return obtained from a traversed action node (⟨s, h⟩, a) in a given simulation, then we update the value of the action node to Q(⟨s, h⟩, a) + (R − Q(⟨s, h⟩, a))/N(⟨s, h⟩, a) (i.e., the mean of the sampled returns obtained from that action node over the simulations). A detailed description of the BAMCP algorithm is provided in Algorithm 1. A diagram example of BAMCP simulations is presented in Figure S3. The tree policy treats the forward search as a meta-exploration problem, preferring to exploit regions of the tree that currently appear better than others while continuing to explore unknown or less known parts of the tree. This leads to good empirical results even for a small number of simulations, because effort is expended where search seems fruitful. Nevertheless, all parts of the tree are eventually visited infinitely often, and therefore the algorithm will eventually converge on the Bayes-optimal policy (see Section 3.5). Finally, note that the history of transitions h is generally not the most compact sufficient statistic of the belief in fully observable MDPs. Indeed, it can be replaced with unordered transition counts ψ, considerably reducing the number of states of the BAMDP and, potentially, the complexity of planning. Given an addressing scheme suitable to the resulting expanding lattice (rather than to a tree), BAMCP can search in this reduced space. We found this version of BAMCP to offer only a marginal improvement. This is a common finding for UCT, stemming from its tendency to concentrate search effort on one of several equivalent paths (up to transposition), implying a limited effect on performance of reducing the number of those paths.

3.3 Lazy Sampling

In previous work on sample-based tree search, indeed including POMCP [20], a complete sample state is drawn from the posterior at the root of the search tree.
However, this can be computationally very costly. Instead, we sample P lazily, creating only the particular transition probabilities that are required as the simulation traverses the tree, and also during the rollout. Consider P(s, a, ·) to be parametrized by a latent variable θs,a for each state-action pair. These may depend on each other, as well as on an additional set of latent variables φ. The posterior over P can be written as

P(Θ|h) = ∫ P(Θ|φ, h) P(φ|h) dφ,

where Θ = {θs,a | s ∈ S, a ∈ A}. Define Θt = {θs1,a1, . . . , θst,at} as the (random) set of θ parameters required during the course of a BAMCP simulation that starts at time 1 and ends at time t. Using the chain rule, we can rewrite

P(Θ|φ, h) = P(θs1,a1|φ, h) P(θs2,a2|Θ1, φ, h) · · · P(θsT,aT|ΘT−1, φ, h) P(Θ \ ΘT|ΘT, φ, h),

where T is the length of the simulation and Θ \ ΘT denotes the (random) set of parameters that are not required for a simulation. For each simulation i, we sample P(φ|ht) at the root and then lazily sample the θst,at parameters as required, conditioned on φ and all Θt−1 parameters sampled for the current simulation.
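In the special case of independent Dirichlet rows, lazy sampling amounts to drawing each θs,a from the posterior only when a simulation first needs it, and caching it for the remainder of that simulation. A hedged sketch (class and method names are ours, not the paper's):

```python
import numpy as np

class LazyModel:
    """Lazily sampled transition model for a single BAMCP simulation."""

    def __init__(self, counts, rng):
        self.counts = counts  # Dirichlet pseudo-counts, shape (S, A, S)
        self.rng = rng
        self.cache = {}       # (s, a) -> sampled row theta_{s,a}

    def step(self, s, a):
        # Sample theta_{s,a} from the posterior on first use only, then reuse it.
        if (s, a) not in self.cache:
            self.cache[(s, a)] = self.rng.dirichlet(self.counts[s, a])
        theta = self.cache[(s, a)]
        return int(self.rng.choice(len(theta), p=theta))
```

Only the (s, a) pairs actually visited during the simulation are ever sampled, which is the source of the savings in large MDPs where a simulation touches a small subset of parameters.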
This process is stopped at the end of the simulation, potentially before all θ parameters have been sampled. For example, if the transition parameters for different states and actions are independent, we can completely forgo sampling a complete P, and instead draw any necessary parameters individually for each state-action pair. This leads to substantial performance improvements, especially in large MDPs where a single simulation only requires a small subset of parameters (see for example the domain in Section 5.2).

Algorithm 1: BAMCP

procedure Search(⟨s, h⟩)
    repeat
        P ∼ P(P|h)
        Simulate(⟨s, h⟩, P, 0)
    until Timeout()
    return argmax_a Q(⟨s, h⟩, a)
end procedure

procedure Rollout(⟨s, h⟩, P, d)
    if γ^d Rmax < ϵ then return 0
    a ∼ πro(⟨s, h⟩, ·)
    s′ ∼ P(s, a, ·)
    r ← R(s, a)
    return r + γ Rollout(⟨s′, has′⟩, P, d+1)
end procedure

procedure Simulate(⟨s, h⟩, P, d)
    if γ^d Rmax < ϵ then return 0
    if N(⟨s, h⟩) = 0 then
        for all a ∈ A do
            N(⟨s, h⟩, a) ← 0, Q(⟨s, h⟩, a) ← 0
        end
        a ∼ πro(⟨s, h⟩, ·)
        s′ ∼ P(s, a, ·)
        r ← R(s, a)
        R ← r + γ Rollout(⟨s′, has′⟩, P, d)
        N(⟨s, h⟩) ← 1, N(⟨s, h⟩, a) ← 1
        Q(⟨s, h⟩, a) ← R
        return R
    end
    a ← argmax_b Q(⟨s, h⟩, b) + c √( log N(⟨s, h⟩) / N(⟨s, h⟩, b) )
    s′ ∼ P(s, a, ·)
    r ← R(s, a)
    R ← r + γ Simulate(⟨s′, has′⟩, P, d+1)
    N(⟨s, h⟩) ← N(⟨s, h⟩) + 1
    N(⟨s, h⟩, a) ← N(⟨s, h⟩, a) + 1
    Q(⟨s, h⟩, a) ← Q(⟨s, h⟩, a) + (R − Q(⟨s, h⟩, a)) / N(⟨s, h⟩, a)
    return R
end procedure

3.4 Rollout Policy Learning

The choice of rollout policy πro is important if simulations are few, especially if the domain does not display substantial locality or if rewards require a carefully selected sequence of actions to be obtained. Otherwise, a simple uniform random policy can be chosen to provide noisy estimates. In this work, we learn Qro, the optimal Q-value in the real MDP, in a model-free manner (e.g., using Q-learning) from samples (st, at, rt, st+1) obtained off-policy as a result of the interaction of the Bayesian agent with the environment.
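The off-policy, model-free learning of Qro can be sketched with a tabular Q-learning update; this is our illustration, and the step size and discount below are hypothetical choices, not values from the paper:

```python
from collections import defaultdict

# q maps (state, action) -> value, defaulting to 0 for unseen pairs.
q = defaultdict(float)

def q_learning_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    # One tabular Q-learning step on an off-policy sample (s_t, a_t, r_t, s_{t+1}).
    best_next = max(q[(s_next, b)] for b in actions)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
```

Each environment transition observed by the Bayesian agent feeds one such update, so the table is refined for free as planning and acting proceed.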
Acting greedily according to Qro translates to pure exploitation of gathered knowledge. A rollout policy in BAMCP following Qro could therefore over-exploit. Instead, similar to [13], we select an ϵ-greedy policy with respect to Qro as our rollout policy πro. This biases rollouts towards observed regions of high rewards. This method provides valuable direction for the rollout policy at negligible computational cost. More complex rollout policies can be considered, for example rollout policies that depend on the sampled model Pi. However, these usually incur computational overhead.

3.5 Theoretical properties

Define V(⟨s, h⟩) = max_{a∈A} Q(⟨s, h⟩, a) for all ⟨s, h⟩ ∈ S × H.

Theorem 1. For all ϵ > 0 (the numerical precision, see Algorithm 1) and a suitably chosen c (e.g., c > Rmax/(1−γ)), from state ⟨st, ht⟩, BAMCP constructs a value function at the root node that converges in probability to an ϵ′-optimal value function, V(⟨st, ht⟩) →p V*ϵ′(⟨st, ht⟩), where ϵ′ = ϵ/(1−γ). Moreover, for large enough N(⟨st, ht⟩), the bias of V(⟨st, ht⟩) decreases as O(log(N(⟨st, ht⟩))/N(⟨st, ht⟩)). (Proof available in supplementary material.)

By definition, Theorem 1 implies that BAMCP converges to the Bayes-optimal solution asymptotically. We confirmed this result empirically using a variety of Bandit problems, for which the Bayes-optimal solution can be computed efficiently using Gittins indices (see supplementary material).

4 Related Work

In Section 5, we compare BAMCP to a set of existing Bayesian RL algorithms. Given limited space, we do not provide a comprehensive list of planning algorithms for MDP exploration, but rather concentrate on related sample-based algorithms for Bayesian RL. Bayesian DP [22] maintains a posterior distribution over transition models. At each step, a single model is sampled, and the action that is optimal in that model is executed. The Best Of Sampled Set (BOSS) algorithm generalizes this idea [1].
BOSS samples a number of models from the posterior and combines them optimistically. This drives sufficient exploration to yield finite-sample performance guarantees. BOSS is quite sensitive to its parameter that governs the sampling criterion; unfortunately, this parameter is difficult to select. Castro and Precup proposed SBOSS, a variant which provides a more effective adaptive sampling criterion [5]. BOSS algorithms are generally quite robust, but suffer from over-exploration. Sparse sampling [15] is a sample-based tree search algorithm. The key idea is to sample successor nodes from each state, and apply a Bellman backup to update the value of the parent node from the values of the child nodes. Wang et al. applied sparse sampling to search over belief-state MDPs [25]. The tree is expanded non-uniformly according to the sampled trajectories. At each decision node, a promising action is selected using Thompson sampling — i.e., sampling an MDP from that belief-state, solving the MDP, and taking the optimal action. At each chance node, a successor belief-state is sampled from the transition dynamics of the belief-state MDP. Asmuth and Littman further extended this idea in their BFS3 algorithm [2], an adaptation of Forward Search Sparse Sampling [24] to belief-MDPs. Although they described their algorithm as Monte-Carlo tree search, it in fact uses a Bellman backup rather than Monte-Carlo evaluation. Each Bellman backup updates both lower and upper bounds on the value of each node. Like Wang et al., the tree is expanded non-uniformly according to the sampled trajectories, albeit using a different method for action selection. At each decision node, a promising action is selected by maximising the upper bound on value. At each chance node, observations are selected by maximising the uncertainty (upper minus lower bound). Bayesian Exploration Bonus (BEB) solves the posterior mean MDP, but with an additional reward bonus that depends on visitation counts [17].
Similarly, Sorg et al. propose an algorithm with a different form of exploration bonus [21]. These algorithms provide performance guarantees after a polynomial number of steps in the environment. However, behavior in the early steps of exploration is very sensitive to the precise exploration bonuses; and it turns out to be hard to translate sophisticated prior knowledge into the form of a bonus.

Table 1: Experiment results summary. For each algorithm, we report the mean sum of rewards and confidence interval for the best performing parameter within a reasonable planning time limit (0.25 s/step for Double-loop, 1 s/step for Grid5 and Grid10, 1.5 s/step for the Maze). For BAMCP, this simply corresponds to the number of simulations that achieve a planning time just under the imposed limit. *Results reported from [22] without timing information.

                        Double-loop    Grid5       Grid10      Dearden's Maze
BAMCP                   387.6 ± 1.5    72.9 ± 3    32.7 ± 3    965.2 ± 73
BFS3 [2]                382.2 ± 1.5    66 ± 5      10.4 ± 2    240.9 ± 46
SBOSS [5]               371.5 ± 3      59.3 ± 4    21.8 ± 2    671.3 ± 126
BEB [17]                386 ± 0        67.5 ± 3    10 ± 1      184.6 ± 35
Bayesian DP* [22]       377 ± 1        —           —           —
Bayes VPI+MIX* [8]      326 ± 31       —           —           817.6 ± 29
IEQL+* [19]             264 ± 1        —           —           269.4 ± 1
QL Boltzmann*           186 ± 1        —           —           195.2 ± 20

5 Experiments

We first present empirical results of BAMCP on a set of standard problems, with comparisons to other popular algorithms. Then we showcase BAMCP's advantages in a large-scale task: an infinite 2D grid with complex correlations between reward locations.

5.1 Standard Domains

Algorithms. The following algorithms were run: BAMCP — the algorithm presented in Section 3, implemented with lazy sampling. The algorithm was run for different numbers of simulations (10 to 10000) to span different planning times. In all experiments, we set πro to be an ϵ-greedy policy with ϵ = 0.5. The UCT exploration constant was left unchanged for all experiments (c = 3); we experimented with other values of c ∈ {0.5, 1, 5} with similar results.
SBOSS [5]: for each domain, we varied the number of samples K ∈ {2, 4, 8, 16, 32} and the resampling threshold parameter δ ∈ {3, 5, 7}. BEB [17]: for each domain, we varied the bonus parameter β ∈ {0.5, 1, 1.5, 2, 2.5, 3, 5, 10, 15, 20}. BFS3 [2]: for each domain, we varied the branching factor C ∈ {2, 5, 10, 15} and the number of simulations (10 to 2000). The depth of search was set to 15 in all domains except for the larger grid and maze domains, where it was set to 50. We also tuned the Vmax parameter for each domain — Vmin was always set to 0. In addition, we report results from [22] for several other prior algorithms.

Domains. For all domains, we fix γ = 0.95. The Double-loop domain is a 9-state deterministic MDP with 2 actions [8]; 1000 steps are executed in this domain. Grid5 is a 5 × 5 grid with no reward anywhere except for a reward state opposite to the reset state. Actions in the cardinal directions are executed with a small probability of failure; 1000 steps are executed in this domain. Grid10 is a 10 × 10 grid designed like Grid5. We collect 2000 steps in this domain. Dearden's Maze is a 264-state maze with 3 flags to collect [8]. A special reward state gives the number of flags collected since the last visit as reward; 20000 steps are executed in this domain.²

To quantify the performance of each algorithm, we measured the total undiscounted reward over many steps. We chose this measure of performance to enable fair comparisons to be drawn with prior work. In fact, we are optimising a different criterion — the discounted reward from the start state — and so we might expect this evaluation to be unfavourable to our algorithm. One major advantage of Bayesian RL is that one can specify priors about the dynamics. For the Double-loop domain, the Bayesian RL algorithms were run with a simple Dirichlet-Multinomial model with symmetric Dirichlet parameter α = 1/|S|. For the grids and the maze domain, the algorithms were run with a sparse Dirichlet-Multinomial model, as described in [11].
For both of these models, efficient collapsed sampling schemes are available; they are employed for the BA-UCT and BFS3 algorithms in our experiments to compress the posterior parameter sampling and the transition sampling into a single transition sampling step. This considerably reduces the cost of belief updates inside the search tree when using these simple probabilistic models. In general, efficient collapsed sampling schemes are not available (see for example the model in Section 5.2).

Results. A summary of the results is presented in Table 1. Figure 1 reports the planning time/performance trade-off for the different algorithms on the Grid5 and Maze domains. On all the domains tested, BAMCP performed best. Other algorithms came close on some tasks, but only when their parameters were tuned to that specific domain. This is particularly evident for BEB, which required a different value of exploration bonus to achieve maximum performance in each domain. BAMCP's performance is stable with respect to the choice of its exploration constant c, and it did not require tuning to obtain the results. BAMCP's performance scales well as a function of planning time, as is evident in Figure 1. In contrast, SBOSS follows the opposite trend: if more samples are employed to build the merged model, SBOSS actually becomes too optimistic and over-explores, degrading its performance. BEB cannot take advantage of prolonged planning time at all. BFS3 generally scales up with more planning time with an appropriate choice of parameters, but it is not obvious how to trade off the branching factor, depth, and number of simulations in each domain. BAMCP greatly benefited from our lazy
sampling scheme in the experiments, providing a 35× speed improvement over the naive approach in the maze domain, for example; this is illustrated in Figure 1(c).

² The result reported for Dearden's maze with the Bayesian DP algorithm in [22] is for a different version of the task, in which the maze layout is given to the agent.

[Figure 1: plots omitted; caption follows.] Figure 1: Performance of each algorithm on the Grid5 (a) and Maze domains (b-d) as a function of planning time. Each point corresponds to a single run of an algorithm with an associated setting of the parameters. Increasing brightness inside the points codes for an increasing value of a parameter (BAMCP and BFS3: number of simulations; BEB: bonus parameter β; SBOSS: number of samples K). A second dimension of variation is coded as the size of the points (BFS3: branching factor C; SBOSS: resampling parameter δ). The range of parameters is specified in Section 5.1. a. Performance of each algorithm on the Grid5 domain. b. Performance of each algorithm on the Maze domain. c. On the Maze domain, performance of vanilla BA-UCT with and without rollout policy learning (RL). d. On the Maze domain, performance of BAMCP with and without the lazy sampling (LS) and rollout policy learning (RL) presented in Sections 3.3 and 3.4. Root sampling (RS) is included.

Dearden's maze aptly illustrates a major drawback of forward-search sparse sampling algorithms such as BFS3.
Like many maze problems, all rewards are zero for at least k steps, where k is the solution length. Without prior knowledge of the optimal solution length, all upper bounds will be higher than the true optimal value until the tree has been fully expanded up to depth k — even if a simulation happens to solve the maze. In contrast, once BAMCP discovers a successful simulation, its Monte-Carlo evaluation will immediately bias the search tree towards the successful trajectory.

5.2 Infinite 2D grid task

We also applied BAMCP to a much larger problem. The generative model for this infinite-grid MDP is as follows: each column i has an associated latent parameter pi ∼ Beta(α1, β1) and each row j has an associated latent parameter qj ∼ Beta(α2, β2). The probability of grid cell ij having a reward of 1 is piqj; otherwise the reward is 0. The agent knows it is on a grid and is always free to move in any of the four cardinal directions. Rewards are consumed when visited; returning to the same location subsequently results in a reward of 0. As opposed to the independent Dirichlet priors employed in the standard domains, here the dynamics are tightly correlated across states (i.e., observing a state transition provides information about other state transitions). Posterior inference (of the dynamics P) in this model requires approximation because of the non-conjugate coupling of the variables; the inference is done via MCMC (details in the Supplementary material). The domain is illustrated in Figure S4.

[Figure 2: plots omitted; caption follows.] Figure 2: Performance of BAMCP as a function of planning time on the Infinite 2D grid task of Section 5.2, for γ = 0.97, where the grids are generated with Beta parameters α1 = 1, β1 = 2, α2 = 2, β2 = 1 (see supp. Figure S4 for a visualization). The performance during the first 200 steps in the environment is averaged over 50 sampled environments (5 runs for each sample) and is reported both in terms of undiscounted (left) and discounted (right) sums of rewards. BAMCP is run either with the correct generative model as prior or with an incorrect prior (parameters for rows and columns are swapped); it is clear that BAMCP can take advantage of correct prior information to gain more rewards. The performance of a uniform random policy is also reported.

Planning algorithms that attempt to solve an MDP based on sample(s) (or the mean) of the posterior (e.g., BOSS, BEB, Bayesian DP) cannot directly handle the large state space. Prior forward-search methods (e.g., BA-UCT, BFS3) can deal with the state space, but not the large belief space: at every node of the search tree they must solve an approximate inference problem to estimate the posterior beliefs. In contrast, BAMCP limits the posterior inference to the root of the search tree and is not directly affected by the size of the state space or belief space, which allows the algorithm to perform well even with a limited planning time. Note that lazy sampling is required in this setup since a full sample of the dynamics involves infinitely many parameters.

Figure 2 (and Figure S5) demonstrates the planning performance of BAMCP in this complex domain. Performance improves with additional planning time, and the quality of the prior clearly affects the agent's performance. Supplementary videos contrast the behavior of the agent for different prior parameters.

6 Future Work

The UCT algorithm is known to have several drawbacks. First, there are no finite-time regret bounds. It is possible to construct malicious environments, for example in which the optimal policy is hidden in a generally low-reward region of the tree, where UCT can be misled for long periods [7]. Second, the UCT algorithm treats every action node as a multi-armed bandit problem.
However, there is no actual benefit to accruing reward during planning, and so it is in theory more appropriate to use pure exploration bandits [4]. Nevertheless, the UCT algorithm has produced excellent empirical performance in many domains [12]. BAMCP is able to exploit prior knowledge about the dynamics in a principled manner. In principle, it is possible to encode many aspects of domain knowledge into the prior distribution. An important avenue for future work is to explore rich, structured priors about the dynamics of the MDP. If this prior knowledge matches the class of environments that the agent will encounter, then exploration could be significantly accelerated.

7 Conclusion

We proposed a sample-based algorithm for Bayesian RL called BAMCP that significantly surpassed the performance of existing algorithms on several standard tasks. We showed that BAMCP can tackle larger and more complex tasks generated from a structured prior, where existing approaches scale poorly. In addition, BAMCP provably converges to the Bayes-optimal solution. The main idea is to employ Monte-Carlo tree search to explore the augmented Bayes-adaptive search space efficiently. The naive implementation of that idea is the proposed BA-UCT algorithm, which cannot scale for most priors due to expensive belief updates inside the search tree. We introduced three modifications to obtain a computationally tractable sample-based algorithm: root sampling, which only requires beliefs to be sampled at the start of each simulation (as in [20]); a model-free RL algorithm that learns a rollout policy; and the use of a lazy sampling scheme to sample the posterior beliefs cheaply.

References

[1] J. Asmuth, L. Li, M.L. Littman, A. Nouri, and D. Wingate. A Bayesian sampling approach to exploration in reinforcement learning. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 19–26, 2009.
[2] J. Asmuth and M. Littman.
Approaching Bayes-optimality using Monte-Carlo tree search. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence, pages 19–26, 2011.
[3] R. Bellman and R. Kalaba. On adaptive control processes. Automatic Control, IRE Transactions on, 4(2):1–9, 1959.
[4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory, pages 23–37. Springer-Verlag, 2009.
[5] P. Castro and D. Precup. Smarter sampling in model-based Bayesian reinforcement learning. Machine Learning and Knowledge Discovery in Databases, pages 200–214, 2010.
[6] P.S. Castro. Bayesian exploration in Markov decision processes. PhD thesis, McGill University, 2007.
[7] P.A. Coquelin and R. Munos. Bandit algorithms for tree search. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 67–74, 2007.
[8] R. Dearden, N. Friedman, and S. Russell. Bayesian Q-learning. In Proceedings of the National Conference on Artificial Intelligence, pages 761–768, 1998.
[9] M.O.G. Duff. Optimal Learning: Computational Procedures for Bayes-Adaptive Markov Decision Processes. PhD thesis, University of Massachusetts Amherst, 2002.
[10] A.A. Feldbaum. Dual control theory. Automation and Remote Control, 21(9):874–1039, 1960.
[11] N. Friedman and Y. Singer. Efficient Bayesian parameter estimation in large discrete domains. Advances in Neural Information Processing Systems (NIPS), pages 417–423, 1999.
[12] S. Gelly, L. Kocsis, M. Schoenauer, M. Sebag, D. Silver, C. Szepesvári, and O. Teytaud. The grand challenge of computer Go: Monte Carlo tree search and extensions. Communications of the ACM, 55(3):106–113, 2012.
[13] S. Gelly and D. Silver. Combining online and offline knowledge in UCT. In Proceedings of the 24th International Conference on Machine Learning, pages 273–280, 2007.
[14] J.C. Gittins, R. Weber, and K.D. Glazebrook. Multi-armed bandit allocation indices.
Wiley Online Library, 1989.
[15] M. Kearns, Y. Mansour, and A.Y. Ng. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. In Proceedings of the 16th International Joint Conference on Artificial Intelligence, Volume 2, pages 1324–1331, 1999.
[16] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. Machine Learning: ECML 2006, pages 282–293, 2006.
[17] J.Z. Kolter and A.Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 513–520, 2009.
[18] J.J. Martin. Bayesian decision problems and Markov chains. Wiley, 1967.
[19] N. Meuleau and P. Bourgine. Exploration of multi-state environments: Local measures and back-propagation of uncertainty. Machine Learning, 35(2):117–154, 1999.
[20] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. Advances in Neural Information Processing Systems (NIPS), pages 2164–2172, 2010.
[21] J. Sorg, S. Singh, and R.L. Lewis. Variance-based rewards for approximate Bayesian reinforcement learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2010.
[22] M. Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943–950, 2000.
[23] C. Szepesvári. Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
[24] T.J. Walsh, S. Goschin, and M.L. Littman. Integrating sample-based planning and model-based reinforcement learning. In Proceedings of the 24th Conference on Artificial Intelligence (AAAI), 2010.
[25] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In Proceedings of the 22nd International Conference on Machine Learning, pages 956–963, 2005.
Dual-Space Analysis of the Sparse Linear Model

David Wipf and Yi Wu
Visual Computing Group, Microsoft Research Asia
davidwipf@gmail.com, jxwuyi@gmail.com

Abstract

Sparse linear (or generalized linear) models combine a standard likelihood function with a sparse prior on the unknown coefficients. These priors can conveniently be expressed as a maximization over zero-mean Gaussians with different variance hyperparameters. Standard MAP estimation (Type I) involves maximizing over both the hyperparameters and coefficients, while an empirical Bayesian alternative (Type II) first marginalizes the coefficients and then maximizes over the hyperparameters, leading to a tractable posterior approximation. The underlying cost functions can be related via a dual-space framework from [22], which allows both the Type I and Type II objectives to be expressed in either coefficient or hyperparameter space. This perspective is useful because some analyses or extensions are more conducive to development in one space or the other. Herein we consider the estimation of a trade-off parameter balancing sparsity and data fit. As this parameter is effectively a variance, natural estimators exist by assessing the problem in hyperparameter (variance) space, transitioning natural ideas from Type II to solve what is much less intuitive for Type I. In contrast, for analyses of update rules and sparsity properties of local and global solutions, as well as extensions to more general likelihood models, we can leverage coefficient-space techniques developed for Type I and apply them to Type II. For example, this allows us to prove that Type II-inspired techniques can be successful in recovering sparse coefficients when unfavorable restricted isometry properties (RIP) lead to failure of popular ℓ1 reconstructions. It also facilitates the analysis of Type II when non-Gaussian likelihood models lead to intractable integrations.
1 Introduction

We begin with the likelihood model

y = Φx + ε,  (1)

where Φ ∈ R^{n×m} is a dictionary of unit ℓ2-norm basis vectors, x ∈ R^m is a vector of unknown coefficients we would like to estimate, y ∈ R^n is the observed signal, and ε is noise distributed as N(ε; 0, λI) (later we consider more general likelihood models). In many practical situations where large numbers of features are present relative to the signal dimension, the problem of estimating x given y becomes ill-posed. A Bayesian framework is intuitively appealing for formulating these types of problems because prior assumptions must be incorporated, whether explicitly or implicitly, to regularize the solution space. Recently, there has been growing interest in models that employ sparse priors p(x) to encourage solutions x with mostly small or zero-valued coefficients and a few large or unrestricted values, i.e., we are assuming the generative x is a sparse vector. Such solutions can be favored by using

p(x) ∝ ∏_i exp[−(1/2) g(x_i)] = ∏_i exp[−(1/2) h(x_i²)],  (2)

with h concave and non-decreasing on [0, ∞) [15, 16]. Virtually all sparse priors of interest can be expressed in this manner, including the popular Laplacian, Jeffreys, Student's t, and generalized Gaussian distributions. Roughly speaking, the 'more concave' h, the more sparse we expect x to be. For example, with h(z) = z, we recover a Gaussian, which is not sparse at all, while h(z) = √z gives a Laplacian distribution, with characteristic heavy tails and a sharp peak at zero. All sparse priors of the form (2) can be conveniently framed in terms of a collection of non-negative latent variables or hyperparameters γ ≜ [γ_1, ..., γ_m]ᵀ for purposes of optimization, approximation, and/or inference. The hyperparameters dictate the structure of the prior via

p(x) = ∏_i p(x_i),  p(x_i) = max_{γ_i ≥ 0} N(x_i; 0, γ_i) φ(γ_i),  (3)

where φ(γ_i) is some non-negative function that is sometimes treated as a hyperprior, although it will not generally integrate to one.
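As a small numerical illustration (ours, not from the paper): for the Laplacian case h(z) = √z, the induced penalty is g(x) = |x|, and the simplest instance of a variational representation like (3) is the scalar identity min_{γ ≥ 0} (x²/γ + γ) = 2|x|, attained at γ = |x|. The sketch below checks this on a grid; the grid is a crude stand-in for the exact minimizer.

```python
import numpy as np

# Variational identity behind sparse priors of the form (2)-(3):
# for the Laplacian case h(z) = sqrt(z), one has
#   min_{gamma >= 0} (x^2 / gamma + gamma) = 2|x|, attained at gamma = |x|.
gammas = np.linspace(1e-4, 10.0, 200_000)

for x in [-3.0, -0.5, 0.1, 2.0]:
    val = np.min(x**2 / gammas + gammas)
    assert abs(val - 2 * abs(x)) < 1e-3, (x, val)
```

This is why maximizing over γ in (3) can reproduce a sharply peaked, heavy-tailed prior out of a family of smooth Gaussians.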
For the purpose of obtaining sparse point estimates of x, which will be our primary focus herein, models with latent variable sparse priors are frequently handled in one of two ways. First, the latent structure afforded by (3) offers a very convenient means of obtaining (possibly local) maximum a posteriori (MAP) estimates of x by iteratively solving

x^(I) = arg min_x −log p(y|x)p(x) = arg min_{x; γ⪰0} ∥y − Φx∥²₂ + λ ∑_i [ x_i²/γ_i + log γ_i + f(γ_i) ],  (4)

where f(γ_i) ≜ −2 log φ(γ_i) and x^(I) is commonly referred to as a Type I estimator. Examples include minimum ℓp-norm approaches [4, 11, 16], Jeffreys prior-based methods sometimes called FOCUSS [7, 6, 9], algorithms for computing the basis pursuit (BP) or Lasso solution [6, 16, 18], and iterative reweighted ℓ1 methods [3]. Secondly, instead of maximizing over both x and γ as in (4), Type II methods first integrate out (marginalize) the unknown x and then solve the empirical Bayesian problem [19]

γ^(II) = arg max_γ p(γ|y) = arg max_γ ∫ p(y|x) ∏_i N(x_i; 0, γ_i) φ(γ_i) dx
       = arg min_γ yᵀΣ_y⁻¹y + log|Σ_y| + ∑_{i=1}^m f(γ_i),  (5)

where Σ_y ≜ λI + ΦΓΦᵀ and Γ ≜ diag[γ]. Once γ^(II) is obtained, the conditional distribution p(x|y; γ^(II)) is Gaussian, and a point estimate for x naturally emerges as the posterior mean

x^(II) = E[x|y; γ^(II)] = Γ^(II)Φᵀ(λI + ΦΓ^(II)Φᵀ)⁻¹y.  (6)

Pertinent examples include sparse Bayesian learning and the relevance vector machine (RVM) [19], automatic relevance determination (ARD) [14], methods for learning overcomplete dictionaries [8], and large-scale experimental design [17]. While initially these two approaches may seem vastly different, both can be directly compared using a dual-space view [22] of the underlying cost functions. In brief, this involves expressing both the Type I and Type II objectives solely in terms of either x or γ, as reviewed in Section 2.
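As a quick sanity check (our illustration; dimensions, seed, and the fixed γ are arbitrary, not learned), the posterior mean (6) can be computed directly and compared against the equivalent regularized least-squares form implied by the x-space relationship: the two agree by the matrix inversion lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 10, 30, 0.1
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # unit l2-norm dictionary columns
gamma = rng.uniform(0.1, 1.0, m)          # fixed hyperparameters (not learned here)
y = rng.standard_normal(n)

# Posterior mean (6): x = Gamma Phi^T (lam I + Phi Gamma Phi^T)^{-1} y
Gamma = np.diag(gamma)
Sigma_y = lam * np.eye(n) + Phi @ Gamma @ Phi.T
x_hat = Gamma @ Phi.T @ np.linalg.solve(Sigma_y, y)

# Equivalent x-space form: argmin_x (1/lam)||y - Phi x||^2 + x^T Gamma^{-1} x
x_alt = np.linalg.solve(Phi.T @ Phi / lam + np.diag(1.0 / gamma), Phi.T @ y / lam)
assert np.allclose(x_hat, x_alt)
```

The first form inverts an n × n matrix, the second an m × m matrix, so which one is cheaper depends on whether the dictionary is overcomplete.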
The dual-space view is advantageous for several reasons, such as establishing connections between algorithms, developing efficient update rules, or handling more general (non-Gaussian) likelihood functions. In Section 3, we utilize γ-space cost functions to develop a principled method for choosing the trade-off parameter λ (which accompanies the Gaussian likelihood model and essentially balances sparsity and data fit) and demonstrate its effectiveness via simulations. Section 4 then derives a new Type II-inspired algorithm in x-space that can compute maximally sparse (minimal ℓ0 norm) solutions even with highly coherent dictionaries, proving a result for clustered dictionaries that previously has only been shown empirically [21]. Finally, Section 5 leverages duality to address Type II methods with generalized likelihood functions that previously were rendered untenable because of intractable integrals. In general, some tasks and analyses are easier to undertake in γ-space (Section 3), while others are more transparent in x-space (Sections 4 and 5). Here we consider both, with the goal of advancing the proper understanding and full utilization of the sparse linear model.

2 Dual-Space View of the Sparse Linear Model

Type I is based on a natural cost function in x-space, p(x|y), while Type II involves an analogous function in γ-space, p(γ|y). The dual-space view defines a corresponding γ-space cost function for Type I and an x-space cost function for Type II to complete the symmetry.

Type II in x-Space: Using the relationship

yᵀΣ_y⁻¹y = min_x (1/λ)∥y − Φx∥²₂ + xᵀΓ⁻¹x  (7)

as in [22], it can be shown that the Type II coefficients from (6) satisfy x^(II) = arg min_x L^(II)(x), where

L^(II)(x) ≜ ∥y − Φx∥²₂ + λ g^(II)(x),  (8)

and

g^(II)(x) ≜ min_{γ⪰0} ∑_i x_i²/γ_i + log|Σ_y| + ∑_i f(γ_i).  (9)

This reformulation of Type II in x-space is revealing for multiple reasons (Sections 4 and 5 will address additional reasons in detail).
For many applications of the sparse linear model, the primary goal is simply a point estimate that exhibits some degree of sparsity, meaning many elements of x̂ near zero and a few relatively large coefficients. This requires a penalty function g(x) that is concave and non-decreasing in x² ≜ [x₁², ..., x_m²]ᵀ. In the context of Type I, any prior p(x) expressible via (2) will satisfy this condition by definition; such priors are said to be strongly super-Gaussian and will always have positive kurtosis [15]. Regarding Type II, because the associated x-space penalty (9) is represented as a minimum of upper-bounding hyperplanes with respect to x² (and the slopes are all non-negative given γ ⪰ 0), it must therefore be concave and non-decreasing in x² [1]. For compression, interpretability, or other practical reasons, it is sometimes desirable to have exactly sparse point estimates, with many (or most) elements of x equal to exactly zero. This then necessitates a penalty function g(x) that is concave and non-decreasing in |x| ≜ [|x₁|, ..., |x_m|]ᵀ, a much stronger condition. In the case of Type I, if log γ + f(γ) is concave and non-decreasing in γ, then g(x) = ∑_i g(x_i) satisfies this condition. The Type II analog, which emerges by further inspection of (9), stipulates that if

log|Σ_y| + ∑_i f(γ_i) = log|λ⁻¹ΦᵀΦ + Γ⁻¹| + log|Γ| + ∑_i f(γ_i)  (10)

is a concave and non-decreasing function of γ, then g^(II)(x) will be a concave, non-decreasing function of |x|. For this purpose it is sufficient, but not necessary, that f be a concave and non-decreasing function. Note that this is a somewhat stronger criterion than for Type I, since the first term on the right-hand side of (10) (which is absent from Type I) is actually convex in γ. Regardless, it is now very transparent how Type II may promote sparsity akin to Type I. The dual-space view also leads to efficient, convergent algorithms such as iterative reweighted ℓ1 minimization and its variants, as discussed in [22].
However, building on these ideas, we can demonstrate here that it also elucidates the original, widely applied update procedures developed for implementing the relevance vector machine (RVM), a popular Type II method for regression and classification that assumes f(γ) = 0 [19]. In fact these updates, which were inspired by a fixed-point heuristic from [12], have been widely used for a number of Bayesian inference tasks without any formal analyses or justification.¹ The dual-space formulation can be leveraged to show that these updates are in fact executing a coordinate-wise, iterative min-max procedure in search of a saddle point. Specifically we have the following result (all proofs are in the supplementary material):

Theorem 1. The original RVM update rule from [19, Equation (16)] is equivalent to a closed-form, coordinate-wise optimization of

min_{x; γ⪰0} max_{z⪰0} [ ∥y − Φx∥²₂ + ∑_i ( x_i²/γ_i + z_i log γ_i ) − ϑ(z) ]  (11)

over x, γ, and z, where ϑ(z) is the convex conjugate function [1] of log|λI + Φ diag[exp(u)]Φᵀ| with respect to u.

¹Although a more recent, step-wise variant of the RVM has been shown to be substantially faster [20], the original version is still germane since it can easily be extended to handle more general structured sparsity problems. The step-wise method cannot without introducing additional approximations [10].

Type I in γ-Space: Similar methodology and the expansion of yᵀΣ_y⁻¹y can be used to express the Type I optimization problem in γ-space, which serves several useful purposes. Let γ^(I) ≜ arg min_{γ⪰0} L^(I)(γ), with

L^(I)(γ) ≜ yᵀΣ_y⁻¹y + log|Γ| + ∑_{i=1}^m f(γ_i).  (12)

Then the Type I coefficients obtained from (4) satisfy

x^(I) = Γ^(I)Φᵀ(λI + ΦΓ^(I)Φᵀ)⁻¹y.  (13)

Section 3 will use γ-space cost functions to derive well-motivated approaches for learning the trade-off parameter λ.

3 Choosing the Trade-off Parameter λ

The trade-off parameter is crucial for obtaining good estimates of x.
In general, if λ is too large, x̂ → 0; too small, and x̂ is overfitted to the noise. In practice, either expensive cross-validation or some heuristic procedure is often required. However, because λ can be interpreted as a variance, it is useful to address its estimation in γ-space, in which the existing unknowns (i.e., γ) are also variances.

Learning λ with Type I: Consider the Type I cost function L^(I)(γ). The data-dependent term can be shown to be a convex, non-increasing function of γ, which encourages each element to be large. The second term is a penalty factor that regulates the size of γ. It is here that a convenient regularizer for λ can be incorporated. This can be accomplished as follows. First we expand Σ_y via Σ_y = ∑_{i=1}^m γ_i φ_{·i}φ_{·i}ᵀ + ∑_{j=1}^n λ e_j e_jᵀ, where φ_{·i} denotes the i-th column of Φ and e_j is a column vector of zeros with a '1' in the j-th location. Thus we observe that λ is embedded in the data-dependent term in the exact same fashion as each γ_i. This motivates a penalty on λ with similar correspondence, leading to the objective

L^(I)(γ, λ) ≜ yᵀΣ_y⁻¹y + ∑_{i=1}^m [log γ_i + f(γ_i)] + ∑_{j=1}^n [log λ + f(λ)]
           = yᵀΣ_y⁻¹y + ∑_{i=1}^m [log γ_i + f(γ_i)] + n log λ + n f(λ).  (14)

While admittedly simple, this construction is appealing because, regardless of how each γ_i is penalized, λ is penalized in a proportional manner, so both γ and λ have a properly balanced chance of explaining the observed data. This is important because the optimal λ will be highly dependent on both the true noise level and, crucially, the particular sparse prior assumed for p(x) (as reflected by f). For analysis or implementational purposes, we may convert L^(I)(γ, λ) back to x-space, with the λ-dependency now removed. It can then be shown that solving (4), with λ fixed to the value that minimizes (14), is equivalent to solving

min_{x,u} ∑_i g(x_i) + n g( (1/√n)∥u∥₂ ), s.t. y = Φx + u.
(15)

If x* and u* minimize (15), then we can demonstrate using [15] that the corresponding λ estimate, which also minimizes (14), is given by λ* = ∂h(z)/∂z evaluated at z = (1/n)∥u*∥²₂. Note that if we were just performing maximum likelihood estimation of λ given x*, the optimal value would reduce to simply λ* = (1/n)∥u*∥²₂, with no influence from the prior on x. This is a fundamental weakness. Solving (15), or equivalently (14), can be accomplished using simple iterative reweighted least squares or, if g is concave in |x_i|, an iterative reweighted second-order-cone (SOC) minimization.

Learning λ with Type II: The same procedure can be adopted for Type II, yielding the cost function

L^(II)(γ, λ) = yᵀΣ_y⁻¹y + log|Σ_y| + ∑_i f(γ_i) + n f(λ),  (16)

where we note that, unlike in the Type I case above, the log-based term is already naturally balanced between λ and γ by virtue of the symmetric embedding in Σ_y. It is important to stress that this Type II prescription for learning λ is not the same as originally proposed in the literature for Type II models of this genre. In that context, φ(γ_i) is interpreted as a hyperprior on γ_i, and an equivalent distribution is assumed on the noise variance λ. Importantly, these assumptions leave out the factor of n in (16), and so an asymmetry is created.

Simulation Examples: Empirical tests help to illustrate the efficacy of this procedure. As in many applications of sparse reconstruction, here we are only concerned with accurately estimating x, whose nonzero entries may have physical significance (e.g., source localization [16], compressive sensing [2], etc.), as opposed to predicting new values of y. Therefore, automatically learning the value of λ is particularly relevant, since cross-validation is often not possible.² Simulations are helpful for evaluation purposes since we then have access to the true sparse generating vector.
Figure 1 compares the estimation performance obtained by minimizing (15) with two different selections for g: g(x) = ∥x∥_p^p = ∑_i |x_i|^p, with p = 0.01 and p = 1.0. Data generation proceeds as follows: We create a random 100 × 50 dictionary Φ with ℓ2-normalized, iid Gaussian columns. x is randomly generated with 10 unit Gaussian nonzero elements. We then compute y = Φx + ε, where ε is iid Gaussian noise producing an SNR of 0 dB. To determine what λ values lead to optimal performance, we solve (4) with the appropriate g over a range of fixed λ values (10⁻⁴ to 10¹) and then compute the error between x and x̂. The minimum of this curve reflects the best performance we can hope to achieve when learning λ blindly. In Figure 1 (Top) we plot these curves for both Type I methods averaged over 1000 independent trials. Next we solve (15), which produces an estimate of both x and λ. We mark with a '+' the learned λ versus the corresponding error of x̂. In both cases the learned λ's (averaged across trials) perform just as well as if we knew the optimal value a priori. Results using other noise levels, problem dimensions n and m, sparsity levels ∥x∥₀, and sparsity penalties g are similar. See the supplementary material for more examples. Figure 1 (Bottom) shows the average sparsity of the estimates x̂, as quantified by the ℓ0 norm ∥x̂∥₀, across λ values (∥x∥₀ returns a count of the number of nonzero elements in x). The '+' indicates the average sparsity of each x̂ for the learned λ as before. In general, the ℓ(0.01) penalty produces a much sparser estimate, very near the true value of ∥x∥₀ = 10 at the optimal λ. The ℓ1 penalty, which is substantially less concave/sparsity-inducing, still sets some elements to exactly zero, but also substantially shrinks nonzero coefficients in achieving a similar overall reconstruction error.
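The data-generation protocol described above can be sketched in a few lines (our reconstruction of the setup; the seed and the exact noise-scaling convention for 0 dB SNR are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 50, 10                       # 100 x 50 dictionary, 10 nonzeros

Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)          # l2-normalized iid Gaussian columns

x = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x[support] = rng.standard_normal(k)         # unit Gaussian nonzero elements

signal = Phi @ x
# Noise std giving 0 dB SNR: ||signal||^2 = n * sigma^2
sigma = np.linalg.norm(signal) / np.sqrt(n)
y = signal + sigma * rng.standard_normal(n)
```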
This highlights the importance of learning λ via a penalty that is properly matched to the prior on x: if we instead tried to force a particular sparsity value (in this case 10), then the ℓ1 solution would be very suboptimal. Finally, we note that maximum likelihood (ML) estimation of λ performs very poorly (not shown), except in the special case where the ML estimate is equivalent to solving (14), as occurs when f(γ) = 0 (see [6]). The proposed method can be viewed as adding a principled hyperprior on λ, properly matched to p(x), that compensates for this shortcoming of standard ML. Type II λ estimation has been explored elsewhere for the special case where f(γ) = 0 [19], which renders the factor of n in (16) irrelevant; however, for other selections we have found this factor to improve performance (not shown). For space considerations we have focused our attention here on Type I, which has frequently been noted for not lending itself well to λ estimation (or related parameters) [6, 13]. In fact, the symmetry afforded by the dual-space perspective reveals that Type I is just as natural a candidate for this task as Type II, and may be preferred in high-dimensional settings where computational resources are at a premium.

4 Maximally Sparse Estimation

With the advent of compressive sensing and other related applications, there has been growing interest in finding maximally sparse signal representations from redundant dictionaries (m ≫ n) [3, 5]. The canonical form of this problem involves solving

x₀ ≜ arg min_x ∥x∥₀, s.t. y = Φx.  (17)

²For example, in non-stationary environments, the value of both x and λ may be completely different for any new y, which then necessitates that we estimate both jointly.
Figure 1: Left: Normalized mean-squared error (MSE), given by ∥x − x̂∥²₂/∥x∥²₂ (where the average is across 1000 trials), plotted versus λ for two different Type I approaches. Each black '+' represents the estimated value of λ (averaged across trials) and the associated MSE produced with this estimate. In both cases the estimated value achieves the lowest possible MSE (it can actually be slightly lower than the curve because its value is allowed to fluctuate from trial to trial). Right: Solution sparsity ∥x̂∥₀ versus λ. Even though they both lead to similar MSE, the ℓ(0.01) penalty produces a much sparser estimate at the optimal λ value.

While (17) is NP-hard, whenever the dictionary Φ satisfies a restricted isometry property (RIP) [2] or a related structural assumption, meaning that each ∥x₀∥₀ columns of Φ are sufficiently close to orthonormal (i.e., mutually uncorrelated), then replacing ℓ0 with ℓ1 in (17) leads to a convex problem with an equivalent global solution. Unfortunately, however, in many situations (e.g., feature selection, source localization) these RIP equivalence conditions are grossly violated, implying that the ℓ1 solution may deviate substantially from x₀. An alternative is to instead replace (17) with minimization of (8) and then take the limit as λ → 0. (Note that the extension to the noisy case with λ > 0 is straightforward, but analysis is more difficult.) In this regime the optimization problem reduces to

x^(II) = lim_{λ→0} arg min_x g^(II)(x), s.t. y = Φx.  (18)

If log|Σ_y| + ∑_i f(γ_i) is concave, then (18) can be minimized using reweighted ℓ1 minimization. With initial weight vector w^(0) = 1, the (k+1)-th iteration involves computing

x^(k+1) ← arg min_{x: y=Φx} ∑_i w_i^(k) |x_i|,   w_i^(k+1) ← ∂g^(II)(x)/∂|x_i| evaluated at x = x^(k+1).
(19)

With f(γ) = 0, iterating (19) will provably lead to an estimate of x₀ that is as good as or better than the ℓ1 solution [21], in particular when Φ has highly correlated columns. Additionally, the assumption f(γ) = 0 leads to a closed-form expression for the weights w^(k+1). Let

η_i(x; α, q) ≜ [ φ_{·i}ᵀ( αI + Φ|X^(k+1)|²Φᵀ )⁻¹ φ_{·i} ]^q,  (20)

where |X^(k+1)| denotes a diagonal matrix with i-th diagonal entry given by |x_i^(k+1)|. Then w^(k+1) can be computed via w_i^(k+1) = η_i(x; 0, 1/2), ∀i. It remains unclear, however, in what circumstances this type of update can lead to guaranteed improvement, nor whether the functions η_i(x; 0, 1/2) are even the optimal choice. We will now demonstrate that for certain selections of α and q, reweighted ℓ1 using η_i(x; α, q) is guaranteed to recover x₀ exactly if Φ is drawn from what we call a clustered dictionary model.

Definition 1. Clustered Dictionary Model: Let Φ_uncorr^(d) denote any dictionary such that ℓ1 minimization succeeds in solving (17) for all ∥x₀∥₀ ≤ d. Let Φ_corr^(d,ε) denote any dictionary obtained by replacing each column of Φ_uncorr^(d) with a "cluster" of m_i basis vectors such that the angle between any two vectors within a cluster is less than some ε > 0. We also define the cluster support Ω₀ ⊂ {1, 2, ..., m} as the set of cluster indices whereby x₀ has at least one nonzero element. Finally, we assume that the resulting Φ_corr^(d,ε) is such that every n × n submatrix is full rank.

Theorem 2. For any sparse vector x₀ and any dictionary Φ_corr^(d,ε) obtained from the clustered dictionary model with ε sufficiently small, reweighted ℓ1 minimization using weights η_i(x; α, q) with some q ≥ 1 and α sufficiently small will recover x₀ exactly, provided that |Ω₀| ≤ d, ∑_{i∈Ω₀} m_i ≤ n, and within each cluster k ∈ Ω₀ the coefficients do not sum to zero.
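The iteration (19) with the Type II weights (20) can be prototyped in a few lines. The sketch below is our toy illustration, not the authors' implementation: each weighted ℓ1 step is solved as a linear program via SciPy, and a small α > 0 stands in for α = 0 so that the matrix inverse in (20) stays well-defined.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(Phi, y, w):
    # min_x sum_i w_i |x_i| s.t. y = Phi x, via the standard LP split x = xp - xn
    n, m = Phi.shape
    res = linprog(c=np.concatenate([w, w]),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=[(0, None)] * (2 * m), method="highs")
    return res.x[:m] - res.x[m:]

def reweighted_l1(Phi, y, iters=5, alpha=1e-6, q=0.5):
    n, m = Phi.shape
    x = weighted_l1(Phi, y, np.ones(m))        # first pass: plain basis pursuit
    for _ in range(iters):
        X2 = np.diag(x**2)                     # |X|^2 from (20)
        M = alpha * np.eye(n) + Phi @ X2 @ Phi.T
        # w_i = (phi_i^T M^{-1} phi_i)^q, cf. (20)
        w = np.einsum("ij,ji->i", Phi.T, np.linalg.solve(M, Phi)) ** q
        x = weighted_l1(Phi, y, w)
    return x

rng = np.random.default_rng(3)
n, m, k = 15, 30, 3
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
x_hat = reweighted_l1(Phi, Phi @ x0)
```

On an easy random Gaussian instance like this one, plain basis pursuit typically already recovers x₀, and the reweighting leaves that solution fixed; the point of Theorem 2 is that the reweighted scheme also survives clustered, highly coherent dictionaries where plain ℓ1 fails.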
Theorem 2 implies that even though ℓ1 may fail to find the maximally sparse x₀ because of severe RIP violations (high correlations between groups of dictionary columns, as dictated by ε, lead directly to a poor RIP), a Type II-inspired method can still be successful. Moreover, because whenever ℓ1 does succeed, Type II will always succeed as well (assuming a reweighted ℓ1 implementation), the converse (an RIP violation leading to Type II failure but not ℓ1 failure) can never happen. Recent work from [21] has argued that Type II may be useful for addressing the sparse recovery problem with correlated dictionaries, and empirical evidence is provided showing vastly superior performance on clustered dictionaries. However, we stress that no results proving global convergence to the correct, maximally sparse solution have been shown before in the case of structured dictionaries (except in special cases with strong, unverifiable constraints on coefficient magnitudes [21]). Moreover, the proposed weighting strategy η_i(x; α, q) accomplishes this without any particular tuning to the clustered dictionary model under consideration and thus likely holds in many other cases as well.

5 Generalized Likelihood Functions

Type I methods naturally accommodate alternative likelihood functions. We simply must replace the quadratic data fit term from (4) with some preferred function, and then coordinate-wise optimization may proceed provided we have an efficient means of computing a weighted ℓ2-norm penalized solution. In contrast, generalizing Type II is substantially more complicated because it is no longer possible to compute the marginalization (5) or the posterior distribution p(x|y; γ^(II)). Therefore, to obtain a tractable estimate x^(II), additional heuristics are required. For example, the RVM classifier from [19] employs a Laplace approximation for this purpose; however, it is not clear what cost function is being minimized, nor what rigorous properties the estimated solutions possess.
Fortunately, the dual x-space view provides a natural mechanism for generalizing the basic Type II methodology to address alternative likelihood functions in a more principled manner. In the case of classification problems, we might want to replace the Gaussian likelihood p(y|x) implied by (1) with a multivariate Bernoulli distribution p(y|x) ∝ exp[−ψ(y, x)], where ψ(y, x) is the function

ψ(y, x) ≜ −∑_j ( y_j log[σ_j(x)] + (1 − y_j) log[1 − σ_j(x)] ).  (21)

Here y_j ∈ {0, 1} and σ_j(x) ≜ 1/[1 + exp(−φ_{j·}ᵀx)], with φ_{j·} denoting the j-th row of Φ. This function may be naturally substituted into the x-space Type II cost function (8), giving us the candidate penalized logistic regression function

min_x ψ(y, x) + λ g^(II)(x).  (22)

Importantly, recasting Type II classification using x-space in this way, with its attendant well-specified cost function, facilitates more concrete analyses (see below) regarding properties of global and local minima that were previously rendered inaccessible because of intractable integrals and compensatory approximations. Moreover, we retain a tight connection with the original Type II marginalization process as follows. Consider the strict upper bound on the function ψ(y, x) (obtained by a Taylor series approximation and a Hessian bound) given by

ψ(y, x) ≤ π(y, x, v) ≜ ψ(y, v) + (v − x)ᵀΦᵀt + (1/8)(v − x)ᵀΦᵀΦ(v − x),  (23)

where t = [t₁, ..., t_n]ᵀ with t_j ≜ y_j − σ_j(v). This bound holds for all v, with equality when v = x. Using this result we obtain the lower bound on the marginal likelihood given by ∫ exp[−ψ(y, x)] p(x) dx ≥ ∫ exp[−π(y, x, v)] p(x) dx. The dual-space framework can then be used to derive the following result:

Theorem 3. Minimization of (22) with λ = 4 is equivalent to solving

max_{v; γ⪰0} ∫ exp[−π(y, x, v)] ∏_i N(x_i; 0, γ_i) φ(γ_i) dx  (24)

and then computing x^(II) by plugging the resulting γ into (6). Thus we may conclude that (22) provides a principled approximation to (5) when a Bernoulli likelihood function is used for classification purposes.
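The quadratic bound (23) is easy to verify numerically. The sketch below is our illustration with random data: it assumes the standard logistic convention σ_j(x) = 1/(1 + exp(−φ_{j·}ᵀx)) and ψ as the Bernoulli negative log-likelihood, and checks that π(y, x, v) upper-bounds ψ(y, x) everywhere and is tight at x = v.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 5
Phi = rng.standard_normal((n, m))
y = rng.integers(0, 2, n).astype(float)

def sigma(x):
    # sigma_j(x) = 1 / (1 + exp(-phi_j. x)), standard logistic convention
    return 1.0 / (1.0 + np.exp(-Phi @ x))

def psi(x):
    # Bernoulli negative log-likelihood, cf. (21)
    s = sigma(x)
    return -np.sum(y * np.log(s) + (1.0 - y) * np.log(1.0 - s))

def pi_bound(x, v):
    # Quadratic upper bound (23); Hessian of psi is bounded by (1/4) Phi^T Phi
    t = y - sigma(v)
    d = v - x
    return psi(v) + d @ Phi.T @ t + 0.125 * d @ (Phi.T @ Phi) @ d

for _ in range(200):
    x, v = rng.standard_normal(m), rng.standard_normal(m)
    assert psi(x) <= pi_bound(x, v) + 1e-9   # psi <= pi everywhere
v = rng.standard_normal(m)
assert abs(pi_bound(v, v) - psi(v)) < 1e-12  # tight at x = v
```

The 1/8 constant comes from σ(1 − σ) ≤ 1/4, so the Hessian of ψ never exceeds (1/4)ΦᵀΦ and half of that bound appears in the quadratic term.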
In empirical tests on benchmark data sets (see supplementary material) using f(γ) = 0, it performs nearly identically to the original RVM (which also implicitly assumes f(γ) = 0), but nonetheless provides a more solid theoretical justification for Type II classifiers because of the underlying similarities and identical generative model. But while the RVM and its attendant approximations are difficult to analyze, (22) is relatively transparent. Additionally, for other sparse priors, or equivalently other selections for f, we can still perform optimization and analyze cost functions without any conjugacy requirements on the implicit p(x).

Theorem 4. If log|Σ_y| + ∑_i f(γ_i) is a concave, non-decreasing function of γ (as will be the case if f is concave and non-decreasing), then every local optimum of (24) is achieved at a solution with at most n nonzero elements in γ, and therefore in x^(II). In contrast, if −log p(x) is convex, then (24) can be globally solved via a convex program.

Despite the practical success of the RVM and related Bayesian techniques, and empirical evidence of sparse solutions, there is currently no proof that the standard variants of these classification methods will always produce exactly sparse estimates. Thus Theorem 4 provides some analytical validation of these types of classifiers. Finally, if we take (22) as our starting point, we may naturally consider modifications tailored to specific sparse classification tasks (that may or may not retain an explicit connection with the original Type II probabilistic model). For example, suppose we would like to obtain a maximally sparse classifier, where regularization is provided by an ∥x∥₀ penalty. Direct optimization is combinatorial because of what we call the global zero attraction property: whenever any individual coefficient x_i goes to zero, we are necessarily at a local minimum with respect to this coefficient because of the infinite slope (discontinuity) of the ℓ0 norm at zero.
However, (22) can be modified to approximate the ℓ_0 norm without this property as follows.

Theorem 5. Consider the Type II-inspired minimization problem

x̂, γ̂ = arg min_{x; γ⪰0} ψ(y, x) + α_1 ∑_i x_i²/γ_i + log|α_2 I + ΦΓΦ^T|,   (25)

which is equivalent to (22) with f(γ) = 0 when α_1 = α_2 = λ. For some α_1 and α_2 sufficiently small (but not necessarily equal), the support³ of x̂ will match the support of arg min_x ψ(y, x) + λ∥x∥_0. Moreover, (25) does not satisfy the global zero attraction property.

Thus Type II affords the possibility of mimicking the ℓ_0 norm in the presence of generalized likelihoods, but with the advantageous potential for drastically fewer local minima. This is a direction for future research. Additionally, while here we have focused our attention on classification via logistic regression, these ideas can presumably be extended to other likelihood functions provided certain conditions are met. To the best of our knowledge, while already demonstrably successful in an empirical setting, Type II classifiers and other related Bayesian generalized likelihood models have never been analyzed in the context of sparse estimation as we have done in this section.

6 Conclusion

The dual-space view of sparse linear or generalized linear models naturally allows us to take x-space ideas originally developed for Type I and apply them to Type II, and conversely, to apply γ-space techniques from Type II to Type I. The resulting symmetry promotes a mutual understanding of both methodologies and helps ensure that they are not underutilized.

³Support refers to the index set of the nonzero elements.
Neuronal Spike Generation Mechanism as an Oversampling, Noise-shaping A-to-D Converter

Dmitri B. Chklovskii (Janelia Farm Research Campus, Howard Hughes Medical Institute, mitya@janelia.hhmi.org) and Daniel Soudry (Department of Electrical Engineering, Technion, daniel.soudry@gmail.com)

Abstract

We test the hypothesis that the neuronal spike generation mechanism is an analog-to-digital (AD) converter encoding rectified low-pass filtered summed synaptic currents into a spike train linearly decodable in postsynaptic neurons. Faithful encoding of an analog waveform by a binary signal requires that the spike generation mechanism have a sampling rate exceeding the Nyquist rate of the analog signal. Such oversampling is consistent with the experimental observation that the precision of the spike-generation mechanism is an order of magnitude greater than the cut-off frequency of low-pass filtering in dendrites. Additional improvement in the coding accuracy may be achieved by noise-shaping, a technique used in signal processing. If noise-shaping were used in neurons, it would reduce the coding error relative to a Poisson spike generator for frequencies below Nyquist by introducing correlations into spike times. By using experimental data from three different classes of neurons, we demonstrate that biological neurons utilize noise-shaping. Therefore, the spike-generation mechanism can be viewed as an oversampling and noise-shaping AD converter.

The nature of the neural spike code remains a central problem in neuroscience [1-3]. In particular, no consensus exists on whether information is encoded in firing rates [4, 5] or in individual spike timing [6, 7]. On the single-neuron level, evidence exists to support both points of view.
On the one hand, post-synaptic currents are low-pass-filtered by dendrites with a cut-off frequency of approximately 30 Hz [8], Figure 1B, providing ammunition for the firing-rate camp: if the signal reaching the soma is slowly varying, why would precise spike timing be necessary? On the other hand, the ability of the spike-generation mechanism to encode harmonics of the injected current up to about 300 Hz [9, 10], Figure 1B, points at its exquisite temporal precision [11]. Yet, in view of the slow variation of the somatic current, such precision may seem gratuitous and puzzling.

The timescale mismatch between the gradual variation of the somatic current and the high precision of spike generation has been addressed previously. Existing explanations often rely on the population nature of the neural code [10, 12]. Although this is a distinct possibility, the question remains whether invoking population coding is necessary. Other possible explanations for the timescale mismatch include the possibility that some synaptic currents (for example, GABAergic) may be generated by synapses proximal to the soma and therefore not subject to low-pass filtering, or that the high-frequency harmonics are so strong in the pre-synaptic spike that, despite attenuation, their trace is still present. Although in some cases these explanations could apply, for the majority of synaptic inputs to typical neurons there is a glaring mismatch.

The perceived mismatch between the time scales of somatic currents and the spike-generation mechanism can be resolved naturally if one views spike trains as digitally encoding analog somatic currents [13-15], Figure 1A. Although somatic currents vary slowly, the information that could be communicated by their analog amplitude far exceeds that of binary signals, such as all-or-none spikes, of the same sampling rate.
Therefore, faithful digital encoding requires the sampling rate of the digital signal to be much higher than the cut-off frequency of the analog signal, so-called oversampling. Although the spike generation mechanism operates in continuous time, the high temporal precision of the spike-generation mechanism may be viewed as a manifestation of oversampling, which is needed for the digital encoding of the analog signal. Therefore, the extra order of magnitude in temporal precision available to the spike-generation mechanism relative to the somatic current, Figure 1B, is necessary to faithfully encode the amplitude of the analog signal, thus potentially reconciling the firing rate and the spike timing points of view [13-15].

Figure 1. Hybrid digital-analog operation of neuronal circuits. A. Post-synaptic currents are low-pass filtered and summed in dendrites (black) to produce a somatic current (blue). This analog signal is converted by the spike generation mechanism into a sequence of all-or-none spikes (green), a digital signal. Spikes propagate along an axon and are chemically transduced across synapses (gray) into post-synaptic currents (black), whose amplitude reflects synaptic weights, thus converting the digital signal back to analog. B. Frequency response function for dendrites (blue, adapted from [8]) and for the spike generation mechanism (green, adapted from [9]). Note the one order of magnitude gap between the cut-off frequencies. C. Amplitude of the summed post-synaptic currents depends strongly on spike timing. If the blue spike arrives just 5 ms later, as shown in red, the EPSCs sum to a value that is already 20% less. Therefore, the extra precision of the digital signal may be used to communicate the amplitude of the analog signal.

In signal processing, efficient AD conversion combines the principle of oversampling with that of noise-shaping, which utilizes correlations in the digital signal to allow more accurate encoding of the analog amplitude.
This is exemplified by a family of AD converters called ΣΔ modulators [16], of which the basic one is analogous to an integrate-and-fire (IF) neuron [13-15]. The analogy between the basic ΣΔ modulator and the IF neuron led to the suggestion that neurons also use noise-shaping to encode the incoming analog current waveform in the digital spike train [13]. However, the hypothesis of noise-shaping AD conversion has never been tested experimentally in biological neurons. In this paper, by analyzing existing experimental datasets, we demonstrate that noise-shaping is present in three different classes of neurons from vertebrates and invertebrates. This lends support to the view that neurons act as oversampling and noise-shaping AD converters and accounts for the mismatch between the slowly varying somatic currents and precise spike timing. Moreover, we show that the degree of noise-shaping in biological neurons exceeds that used by basic ΣΔ modulators or IF neurons and propose viewing more complicated models in the noise-shaping framework.

This paper is organized as follows: We review the principles of oversampling and noise-shaping in Section 2. In Section 3, we present experimental evidence for noise-shaping AD conversion in neurons. In Section 4 we argue that rectification of somatic currents may improve energy efficiency and/or implement de-noising.

2. Oversampling and noise-shaping in AD converters

To understand how oversampling can lead to more accurate encoding of the analog signal amplitude in a digital form, we first consider a Poisson spike encoder, whose rate of spiking is modulated by the signal amplitude, Figure 2A. Such an AD converter samples an analog signal at discrete time points and generates a spike with a probability given by the (normalized) signal amplitude. Because of the binary nature of spike trains, the resulting spike train encodes the signal with a large error even when the sampling is done at the Nyquist rate, i.e. the lowest rate for alias-free sampling.
To reduce the encoding error a Poisson encoder can sample at frequencies, f_s, higher than the Nyquist frequency, f_N; hence the term oversampling, Figure 2B. When combined with decoding by low-pass filtering (down to Nyquist) on the receiving end, this leads to a reduction of the error, which can be estimated as follows. The number of samples over a Nyquist half-period (1/2f_N) is given by the oversampling ratio R = f_s/(2f_N). As the normalized signal amplitude, x, stays roughly constant over the Nyquist half-period, it can be encoded by spikes generated with a fixed probability, x. For a Poisson process the variance in the number of spikes is equal to the mean, ⟨n⟩ = xR. Therefore, the mean relative error of the signal decoded by averaging over the Nyquist half-period is

e = √(xR)/(xR) = 1/√(xR),   (1)

indicating that oversampling reduces the transmission error. However, the weak dependence of the error on the oversampling frequency indicates diminishing returns on the investment in oversampling and motivates one to search for other ways to lower the error.

Figure 2. Oversampling and noise-shaping in AD conversion. A. Analog somatic current (blue) and its digital code (green). The difference between the green and the blue curves is the encoding error. B. Digital output of an oversampling Poisson encoder over one Nyquist half-period. C. Error power spectrum of a Nyquist (dark green) and an oversampled (light green) Poisson encoder. Although the total error power is the same, the fraction surviving low-pass filtering during decoding (solid green) is smaller in the oversampled case. D. Basic ΣΔ modulator. E. Signal at the output of the integrator. F. Digital output of the ΣΔ modulator over one Nyquist period. G. Error power spectrum of the ΣΔ modulator (brown) is shifted to higher frequencies and low-pass filtered during decoding. The remaining error power (solid brown) is smaller than for the Poisson encoder.

To reduce the encoding error beyond the ½ power of the oversampling ratio, the principle of noise-shaping was put forward [17].
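The 1/√(xR) scaling of Eq. (1) can be checked with a small Monte-Carlo sketch; the parameter values below are arbitrary choices for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_relative_error(x, R, trials=200_000):
    """Relative decoding error of a Poisson encoder emitting spikes with
    probability x at each of R samples per Nyquist half-period."""
    counts = rng.poisson(lam=x * R, size=trials)   # spike counts per half-period
    estimates = counts / R                         # decoded amplitude
    return np.sqrt(np.mean((estimates - x) ** 2)) / x

x = 0.5
e16 = poisson_relative_error(x, 16)
e64 = poisson_relative_error(x, 64)

# Empirical errors match the 1/sqrt(xR) prediction of Eq. (1) ...
assert abs(e16 - 1 / np.sqrt(x * 16)) < 0.01
assert abs(e64 - 1 / np.sqrt(x * 64)) < 0.01
# ... so quadrupling the oversampling ratio only halves the error.
assert abs(e16 / e64 - 2.0) < 0.1
```

The weak square-root improvement visible here is exactly the diminishing return that motivates noise-shaping in the next paragraph.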
To illustrate noise-shaping, consider a basic AD converter called a ΣΔ modulator [18], Figure 2D. In the basic ΣΔ modulator, the previous quantized signal is fed back and subtracted from the incoming signal, and then the difference is integrated in time. Rather than quantizing the input signal, as would be done in the Poisson encoder, the ΣΔ modulator quantizes the integral of the difference between the incoming analog signal and the previous quantized signal, Figure 2F. One can see that, in the oversampling regime, the quantization error of the basic ΣΔ modulator is significantly less than that of the Poisson encoder. As the variance in the number of spikes over the Nyquist period is less than one, the mean relative error of the signal is at most 1/(xR), which is better than for the Poisson encoder.

To gain additional insight and understand the origin of the term noise-shaping, we repeat the above analysis in the Fourier domain. First, the Poisson encoder has a flat power spectrum up to the sampling frequency, Figure 2C. Oversampling preserves the total error power but extends the frequency range, resulting in lower error power below Nyquist. Second, a more detailed analysis of the basic ΣΔ modulator, where the dynamics is linearized by replacing the quantization device with a random noise injection [19], shows that the quantization noise is effectively differentiated. Taking the derivative in time is equivalent to multiplying the power spectrum of the quantization noise by frequency squared. Such reduction of noise power at low frequencies is an example of noise-shaping, Figure 2G. Under the additional assumption of white quantization noise, such analysis yields an error scaling as

e ∝ 1/R^(3/2),   (2)

which for R ≫ 1 is significantly better performance than for the Poisson encoder, Eq. (1). As mentioned previously, the basic ΣΔ modulator, Figure 2D, in the continuous-time regime is nothing other than an IF neuron [13, 20, 21].
In the IF neuron, quantization is implemented by the spike generation mechanism and the negative feedback corresponds to the after-spike reset. Note that resetting the integrator to zero is strictly equivalent to subtraction only for continuous-time operation. In discrete-time computer simulations, the integrator value may exceed the threshold, and, therefore, subtraction of the threshold value rather than a reset must be used. Next, motivated by the ΣΔ-IF analogy, we look for signs of noise-shaping AD conversion in real neurons.

3. Experimental evidence of noise-shaping AD conversion in real neurons

In order to determine whether noise-shaping AD conversion takes place in biological neurons, we analyzed three experimental datasets in which spike trains were generated by time-varying somatic currents: 1) rat somatosensory cortex L5 pyramidal neurons [9], 2) mouse olfactory mitral cells [22, 23], and 3) fruit fly olfactory receptor neurons [24]. In the first two datasets, the current was injected through an electrode in whole-cell patch clamp mode, while in the third, the recording was extracellular and the intrinsic somatic current could be measured because the glial compartment included only one active neuron.

Testing the noise-shaping AD conversion hypothesis is complicated by the fact that the encoded and decoded signals are hard to measure accurately. First, as the somatic current is rectified by the spike-generation mechanism, only its supra-threshold component can be encoded faithfully, making it hard to know exactly what is being encoded. Second, decoding in the dendrites is not accessible in these single-neuron recordings. In view of these difficulties, we start by simply computing the power spectrum of the reconstruction error obtained by subtracting a scaled and shifted, but otherwise unaltered, spike train from the somatic current.
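The discrete-time IF encoder with threshold subtraction described above is easy to simulate and compare against a Poisson encoder. The following sketch (input waveform, threshold, and filter width are all illustrative choices, not from the paper) shows the IF encoder achieving a much lower decoding error under identical low-pass decoding, as expected from noise-shaping.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 20_000
t = np.arange(T)
x = 0.5 + 0.3 * np.sin(2 * np.pi * t / 500)   # slow analog input, 0 < x < 1

def if_encode(x, threshold=1.0):
    # Discrete-time IF neuron: subtract the threshold after each spike
    # (not a hard reset), as discussed in the text.
    acc, spikes = 0.0, np.zeros_like(x)
    for i, xi in enumerate(x):
        acc += xi
        if acc >= threshold:
            spikes[i] = 1.0
            acc -= threshold
    return spikes

def poisson_encode(x):
    # Spike with probability given by the normalized amplitude.
    return (rng.random(x.shape) < x).astype(float)

def decode(signal, width=100):
    # Low-pass decoding by a moving-average (boxcar) filter.
    return np.convolve(signal, np.ones(width) / width, mode="same")

def mse(a, b, trim=200):   # trim filter edge effects
    return np.mean((a[trim:-trim] - b[trim:-trim]) ** 2)

target = decode(x)         # the signal seen through the same low-pass filter
err_if = mse(decode(if_encode(x)), target)
err_poisson = mse(decode(poisson_encode(x)), target)
assert err_if < err_poisson   # noise-shaped encoding beats Poisson encoding
```

Because the IF accumulator stays bounded between spikes, the spike count in any decoding window tracks the integral of the input to within one spike, whereas the Poisson count fluctuates with variance proportional to its mean.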
The scaling factor was determined by the total weight of the decoding linear filter, and the shift was optimized to maximize the information capacity, see below. At frequencies below 20 Hz the error contains significantly lower power than the input signal, Figure 3, indicating that the spike generation mechanism may be viewed as an AD converter. Furthermore, the error power spectrum of the biological neuron is below that of the Poisson encoder, thus indicating the presence of noise-shaping. For dataset 3 we also plot the error power spectrum of the IF neuron, the threshold of which is chosen to generate the same number of spikes as the biological neuron.

Figure 3. Evidence of noise-shaping. Power spectra (frequency in Hz vs. spectral power, a.u.) of the somatic current (blue), of the difference between the somatic current and the digital spike train of the biological neuron (black), of the Poisson encoder error (green), and of the IF neuron error (red). Left: dataset 1; right: dataset 3.

Although the simple analysis presented above indicates noise-shaping, subtracting the spike train from the input signal, Figure 3, does not accurately quantify the error when decoding involves additional filtering. An example of such additional encoding/decoding is predictive coding, which will be discussed below [25]. To take such a decoding filter into account, we computed a decoded waveform by convolving the spike train with the optimal linear filter, which predicts the somatic current from the spike train with the least mean squared error. Our linear decoding analysis lends additional support to the noise-shaping AD conversion hypothesis [13-15].
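Estimating this optimal linear filter reduces to a least-squares regression of the somatic current on lagged copies of the spike train. A minimal sketch on synthetic data follows; the exponential kernel, spike rate, and noise level are assumptions for illustration (not the paper's recordings), and only causal lags are used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)
T, L = 20_000, 40
spikes = (rng.random(T) < 0.1).astype(float)
h_true = np.exp(-np.arange(L) / 8.0)          # EPSC-like causal kernel (assumed)
current = np.convolve(spikes, h_true)[:T] + 0.01 * rng.standard_normal(T)

# Design matrix of lagged spikes: column k holds s[t - k].
X = np.column_stack(
    [np.concatenate([np.zeros(k), spikes[:T - k]]) for k in range(L)]
)
# Least-squares estimate of the decoding filter.
h_est, *_ = np.linalg.lstsq(X, current, rcond=None)

pred = X @ h_est
r2 = 1 - np.var(current - pred) / np.var(current)   # variance explained
assert np.allclose(h_est, h_true, atol=0.05)
assert r2 > 0.95
```

On real recordings the same regression would include acausal (post-spike) lags as well, which is how the predictive, post-spike weight of the filters in Figure 4B is revealed.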
First, the optimal linear filter shape is similar to unitary post-synaptic currents, Figure 4B, thus supporting the view that dendrites reconstruct the somatic current of the presynaptic neuron by low-pass filtering the spike train in accordance with the noise-shaping principle [13]. Second, we found that linear decoding using an optimal filter accounts for 60-80% of the somatic current variance. Naturally, such prediction works better for neurons in the suprathreshold regime, i.e. with high firing rates, an issue to which we return in Section 4. To avoid complications associated with rectification, for now we focused on neurons that were in the suprathreshold regime, by monitoring that the relationship between the predicted and actual current is close to linear.

Figure 4. Linear decoding of experimentally recorded spike trains. A. Waveform of the somatic current (blue), resulting spike train (black), and the linearly decoded waveform (red) from dataset 1. B. Top: Optimal linear filter for the trace in A, representative of the other datasets as well. Bottom: Typical EPSPs have a shape similar to the decoding filter (adapted from [26]). C-D. Power spectra (frequency in Hz vs. spectral power, a.u.) of the somatic current (blue), the decoding error of the biological neuron (black), the Poisson encoder (green), and the IF neuron (red) for dataset 1 (C) and dataset 3 (D).

Next, we analyzed the spectral distribution of the reconstruction error calculated by subtracting the decoded spike train, i.e. convolved with the computed optimal linear filter, from the somatic current. We found that at low frequencies the error power is significantly lower than in the input signal, Figure 4C,D. This observation confirms that signals below the dendritic cut-off frequency of 20-30 Hz can be efficiently communicated using spike trains. To quantify the effect of noise-shaping we computed the information capacity of different encoders:
C = ∑_f log₂[S(f)/N(f)],

where S(f) and N(f) are the power spectra of the somatic current and of the encoding error, correspondingly, and the sum is computed only over the frequencies for which S(f) > N(f). Because the plots in Figure 4C,D use a semi-logarithmic scale, the information capacity can be estimated from the area between the somatic current (blue) power spectrum and an error power spectrum. We find that the biological spike generation mechanism has a higher information capacity than the Poisson encoder and IF neurons. Therefore, neurons act as AD converters with stronger noise-shaping than IF neurons.

We now return to the predictive nature of the spike generation mechanism. Given the causal nature of the spike generation mechanism, it is surprising that the optimal filters for all three datasets carry most of their weight following a spike, Figure 4B. This indicates that the spike generation mechanism is capable of making predictions, which are possible in these experiments because the somatic currents are temporally correlated. We note that these observations make delay-free reconstruction of the signal possible, thus allowing fast operation of neural circuits [27]. The predictive nature of the encoder can be captured by a ΣΔ modulator embedded in a predictive coding feedback loop [28], Figure 5A. We verified by simulation that such a nested architecture generates a similar optimal linear filter with most of its weight in the time following a spike, Figure 5A right. Of course, such prediction is only possible for correlated inputs, implying that the shape of the optimal linear filter depends on the statistics of the inputs. The role of predictive coding is to reduce the dynamic range of the signal that enters the ΣΔ modulator, thus avoiding overloading.
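Assuming the capacity criterion takes the form C = ∑_f log₂[S(f)/N(f)], summed only over frequencies where S(f) > N(f), a toy sketch (invented spectra, not the experimental ones) illustrates why an error spectrum shaped toward high frequencies yields a higher capacity than a flat, Poisson-like error spectrum of similar total power.

```python
import numpy as np

def information_capacity(S, N, df=1.0):
    """Capacity estimate from signal (S) and error (N) power spectra,
    summed only over frequencies where S(f) > N(f); df is the bin width.
    A sketch of the criterion described in the text."""
    S, N = np.asarray(S, float), np.asarray(N, float)
    mask = S > N
    return df * np.sum(np.log2(S[mask] / N[mask]))

# Toy spectra over 100 frequency bins:
f = np.arange(1, 101)
S = 1e3 / f                                   # signal power concentrated at low f
N_shaped = 1e-2 * f**2                        # noise-shaped error: rises with f
N_flat = np.full_like(f, 25.0, dtype=float)   # flat (Poisson-like) error

# Pushing error power out of the signal band buys capacity.
assert information_capacity(S, N_shaped) > information_capacity(S, N_flat)
```

On a semi-logarithmic plot this sum is precisely the area between the signal and error spectra, matching the graphical estimate described above.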
A possible biological implementation for such integrating feedback could be Ca2+ concentration and Ca2+-dependent potassium channels [25, 29].

Figure 5. Enhanced ΣΔ modulators. A. ΣΔ modulator combined with a predictive coder. In such a device, the optimal decoding filter computed for correlated inputs has most of its weight following a spike, similar to the experimental measurements, Figure 4B. B. A second-order ΣΔ modulator possesses stronger noise-shaping properties. Because such a circuit contains an internal state variable, it generates a non-periodic spike train in response to a constant input. The bottom trace shows a typical result of a simulation. Black: spikes; blue: input current.

4. Possible reasons for current rectification: energy efficiency and de-noising

We have shown that at high firing rates biological neurons encode the somatic current into a linearly decodable spike train. However, at low firing rates linear decoding cannot faithfully reproduce the somatic current because of rectification in the spike generation mechanism. If the objective of spike generation is faithful AD conversion, why would such rectification exist? We see two potential reasons: energy efficiency and de-noising.

It is widely believed that minimizing metabolic costs is an important consideration in brain design and operation [30, 31]. Moreover, spikes are known to consume a significant fraction of the metabolic budget [30, 32], placing a premium on their total number. Thus, we can postulate that neuronal spike trains find a trade-off between the mean squared error of the decoded spike train relative to the input signal and the total number of spikes, as expressed by the following cost function over a time interval T:

E = ∑_{t=1..T} ( x_t − ∑_τ w_τ s_{t−τ} )² + λ ∑_{t=1..T} s_t,   (3)

where x is the analog input signal, s is the binary spike sequence composed of zeros and ones, w is the linear filter, and λ weights the metabolic cost per spike.
To demonstrate how solving Eq. (3) would lead to thresholding, let us consider a simplified version taken over a Nyquist period, during which the input signal stays constant:

E_N = (x − N)² + λN,   (4)

where N is the number of spikes in the period, and x and λ have been normalized by w. Minimizing such a cost function reduces to choosing the lowest-lying parabola for a given x, Figure 6A. Therefore, thresholding is a natural outcome of minimizing a cost function combining the decoding error and the energy cost, Eq. (3).

In addition to energy efficiency, there may be a computational reason for thresholding the somatic current in neurons. To illustrate this point, we note that the cost function in Eq. (3) for continuous variables, s_t, may be viewed as a non-negative version of the L1-norm regularized linear regression called LASSO [33], which is commonly used for de-noising of sparse and Laplacian signals [34]. Such a cost function can be minimized by iteratively applying gradient descent and shrinkage steps [35], which is equivalent to thresholding (one-sided in the case of non-negative variables), Figure 6B,C. Therefore, neurons may be encoding a de-noised input signal.

Figure 6. Possible reasons for rectification in neurons. A. Cost function combining the squared encoding error with the metabolic expense vs. the input signal for different values of the spike number N, Eq. (4). Note that the optimal number of spikes jumps from zero to one as a function of the input. B. Estimating the most probable "clean" signal value for a continuous non-negative Laplacian signal and Gaussian noise, Eq. (3) (while setting w = 1). The parabolas (red) illustrate the quadratic log-likelihood term in (3) for different values of the measurement, s, while the linear function (blue) reflects the linear log-prior term in (3). C. The minimum of the combined cost function in B is at zero if s is below a threshold set by the prior weight, and grows linearly with s above it.

5.
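The thresholding behavior implied by the normalized cost (4) can be made concrete in a few lines; λ here is an assumed name for the normalized energy weight, and the crossover point follows from comparing the N = 0 and N = 1 parabolas.

```python
import numpy as np

def optimal_spike_count(x, lam, n_max=10):
    """Pick the integer spike count N minimizing (x - N)**2 + lam * N,
    i.e. the lowest-lying parabola for a given normalized input x."""
    N = np.arange(n_max + 1)
    return int(N[np.argmin((x - N) ** 2 + lam * N)])

lam = 0.5
# C_0 = x^2 and C_1 = (x - 1)^2 + lam cross at x = (1 + lam) / 2 = 0.75:
assert optimal_spike_count(0.70, lam) == 0   # below threshold: stay silent
assert optimal_spike_count(0.80, lam) == 1   # above threshold: one spike
```

The jump of the minimizer from N = 0 to N = 1 as x crosses the threshold is the discrete analogue of the one-sided shrinkage step in the LASSO view above.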
Discussion

In this paper, we demonstrated that the neuronal spike-generation mechanism can be viewed as an oversampling and noise-shaping AD converter, which encodes a rectified low-pass filtered somatic current as a digital spike train. Rectification by the spike generation mechanism may subserve both energy efficiency and de-noising. As the degree of noise-shaping in biological neurons exceeds that in IF neurons, or basic ΣΔ modulators, we suggest that neurons should be modeled by more advanced ΣΔ modulators, e.g. Figure 5B. Interestingly, ΣΔ modulators can also be viewed as coders with error prediction feedback [19].

Many publications have studied various aspects of spike generation in neurons, yet we believe that the framework [13-15] we adopt is different, and we discuss its relationship to some of these studies. Our framework is different from previous proposals to cast neurons as predictors [36, 37] because a different quantity is being predicted. The possibility of perfect decoding from a spike train with infinite temporal precision has been proven in [38]. Here, we are concerned with the more practical issue of how the reconstruction error scales with the oversampling ratio. Also, we consider linear decoding, which sets our work apart from [39]. Finally, previous experiments addressing noise-shaping [40] studied the power spectrum of the spike train rather than that of the encoding error.

Our work is aimed at understanding biological and computational principles of spike generation and decoding and is not meant as a substitute for the existing phenomenological spike-generation models [41], which allow efficient fitting of parameters and prediction of spike trains [42]. Yet, the theoretical framework [13-15] we adopt may assist in building better models of spike generation for a given somatic current waveform. First, having interpreted spike generation as AD conversion, we can draw on the rich experience in signal processing to attack the problem.
Second, this framework suggests a natural metric to compare the performance of different spike generation models in the high firing rate regime: the mean squared error between the injected current waveform and the filtered version of the spike train produced by a model, provided the total number of spikes is the same as in the experimental data. The AD conversion framework adds justification to the previously proposed spike distance obtained by subtracting low-pass filtered spike trains [43].

As the framework [13-15] we adopt relies on viewing neuronal computation as an analog-digital hybrid, which requires AD and DA conversion at every step, one may wonder about the reason for such a hybrid scheme. Starting with the early days of computers, the analog mode has been known to be advantageous for computation. For example, performing addition of many variables in one step is possible in the analog mode simply by Kirchhoff's law, but would require hundreds of logical gates in the digital mode [44]. However, the analog mode is vulnerable to noise build-up over many stages of computation and is inferior for precisely communicating information over long distances under a limited energy budget [30, 31]. While early analog computers were displaced by their digital counterparts, evolution combined analog and digital modes into a computational hybrid [44], thus necessitating efficient AD and DA conversion, which was the focus of the present study.

We are grateful to L. Abbott, S. Druckmann, D. Golomb, T. Hu, J. Magee, N. Spruston, B. Theilman for helpful discussions and comments on the manuscript, and to X.-J. Wang, D. McCormick, K. Nagel, R. Wilson, K. Padmanabhan, N. Urban, S. Tripathy, H. Koendgen, and M. Giugliano for sharing their data. The work of D.S. was partially supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).

References

1. Ferster, D. and N. Spruston, Cracking the neural code. Science, 1995. 270: p. 756-7. 2.
Panzeri, S., et al., Sensory neural codes using multiplexed temporal scales. Trends Neurosci, 2010. 33(3): p. 111-20. 3. Stevens, C.F. and A. Zador, Neural coding: The enigma of the brain. Curr Biol, 1995. 5(12): p. 1370-1. 4. Shadlen, M.N. and W.T. Newsome, The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci, 1998. 18(10): p. 3870-96. 5. Shadlen, M.N. and W.T. Newsome, Noise, neural codes and cortical organization. Curr Opin Neurobiol, 1994. 4(4): p. 569-79. 6. Singer, W. and C.M. Gray, Visual feature integration and the temporal correlation hypothesis. Annu Rev Neurosci, 1995. 18: p. 555-86. 7. Meister, M., Multineuronal codes in retinal signaling. Proc Natl Acad Sci U S A, 1996. 93(2): p. 609-14. 8. Cook, E.P., et al., Dendrite-to-soma input/output function of continuous time-varying signals in hippocampal CA1 pyramidal neurons. J Neurophysiol, 2007. 98(5): p. 2943-55. 9. Kondgen, H., et al., The dynamical response properties of neocortical neurons to temporally modulated noisy inputs in vitro. Cereb Cortex, 2008. 18(9): p. 2086-97. 10. Tchumatchenko, T., et al., Ultrafast population encoding by cortical neurons. J Neurosci, 2011. 31(34): p. 12171-9. 11. Mainen, Z.F. and T.J. Sejnowski, Reliability of spike timing in neocortical neurons. Science, 1995. 268(5216): p. 1503-6. 12. Mar, D.J., et al., Noise shaping in populations of coupled model neurons. Proc Natl Acad Sci U S A, 1999. 96(18): p. 10450-5. 13. Shin, J., Adaptive noise shaping neural spike encoding and decoding. Neurocomputing, 2001. 38-40: p. 369-381. 14. Shin, J., The noise shaping neural coding hypothesis: a brief history and physiological implications. Neurocomputing, 2002. 44: p. 167-175. 15. Shin, J.H., Adaptation in spiking neurons based on the noise shaping neural coding hypothesis. Neural Networks, 2001. 14(6-7): p. 907-919. 16. Schreier, R. and G.C.
Temes, Understanding delta-sigma data converters2005, Piscataway, NJ: IEEE Press, Wiley. xii, 446 p. 17. Candy, J.C., A use of limit cycle oscillations to obtain robust analog-to-digital converters. IEEE Trans. Commun, 1974. COM-22: p. 298-305. 18. Inose, H., Y. Yasuda, and J. Murakami, A telemetring system code modulation - modulation. IRE Trans. Space Elect. Telemetry, 1962. SET-8: p. 204-209. 19. Spang, H.A. and P.M. Schultheiss, Reduction of quantizing noise by use of feedback. IRE TRans. Commun. Sys., 1962: p. 373-380. 20. Hovin, M., et al., Delta-Sigma modulation in single neurons, in IEEE International Symposium on Circuits and Systems2002. 21. Cheung, K.F. and P.Y.H. Tang, Sigma-Delta Modulation Neural Networks. Proc. IEEE Int Conf Neural Networkds, 1993: p. 489-493. 22. Padmanabhan, K. and N. Urban, Intrinsic biophysical diversity decorelates neuronal firing while increasing information content. Nat Neurosci, 2010. 13: p. 1276-82. 23. Urban, N. and S. Tripathy, Neuroscience: Circuits drive cell diversity. Nature, 2012. 488(7411): p. 289-90. 24. Nagel, K.I. and R.I. Wilson, personal communication. 25. Shin, J., C. Koch, and R. Douglas, Adaptive neural coding dependent on the timevarying statistics of the somatic input current. Neural Comp, 1999. 11: p. 1893-913. 26. Magee, J.C. and E.P. Cook, Somatic EPSP amplitude is independent of synapse location in hippocampal pyramidal neurons. Nat Neurosci, 2000. 3(9): p. 895-903. 27. Thorpe, S., D. Fize, and C. Marlot, Speed of processing in the human visual system. Nature, 1996. 381(6582): p. 520-2. 28. Tewksbury, S.K. and R.W. Hallock, Oversample, linear predictive and noiseshaping coders of order N>1. IEEE Trans Circuits & Sys, 1978. CAS25: p. 436-47. 29. Wang, X.J., et al., Adaptation and temporal decorrelation by single neurons in the primary visual cortex. J Neurophysiol, 2003. 89(6): p. 3279-93. 30. Attwell, D. and S.B. Laughlin, An energy budget for signaling in the grey matter of the brain. 
J Cereb Blood Flow Metab, 2001. 21(10): p. 1133-45. 31. Laughlin, S.B. and T.J. Sejnowski, Communication in neuronal networks. Science, 2003. 301(5641): p. 1870-4. 32. Lennie, P., The cost of cortical computation. Curr Biol, 2003. 13(6): p. 493-7. 33. Tibshirani, R., Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society Series B-Methodological, 1996. 58(1): p. 267-288. 34. Chen, S.S.B., D.L. Donoho, and M.A. Saunders, Atomic decomposition by basis pursuit. Siam Journal on Scientific Computing, 1998. 20(1): p. 33-61. 35. Elad, M., et al., Wide-angle view at iterated shrinkage algorithms. P SOc Photo-Opt Ins, 2007. 6701: p. 70102. 36. Deneve, S., Bayesian spiking neurons I: inference. Neural Comp, 2008. 20: p. 91. 37. Yu, A.J., Optimal Change-Detection and Spinking Neurons, in NIPS, B. Scholkopf, J. Platt, and T. Hofmann, Editors. 2006. 38. Lazar, A. and L. Toth, Perfect Recovery and Sensitivity Analysis of Time Encoded Bandlimited Signals. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS, 2004. 51(10). 39. Pfister, J.P., P. Dayan, and M. Lengyel, Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nat Neurosci, 2010. 13(10): p. 1271-5. 40. Chacron, M.J., et al., Experimental and theoretical demonstration of noise shaping by interspike interval correlations. Fluctuations and Noise in Biological, Biophysical, and Biomedical Systems III, 2005. 5841: p. 150-163. 41. Pillow, J., Likelihood-based approaches to modeling the neural code, in Bayesian Brain: Probabilistic Approaches to Neural Coding, K. Doya, et al., Editors. 2007, MIT Press. 42. Jolivet, R., et al., A benchmark test for a quantitative assessment of simple neuron models. J Neurosci Methods, 2008. 169(2): p. 417-24. 43. van Rossum, M.C., A novel spike distance. Neural Comput, 2001. 13(4): p. 751-63. 44. Sarpeshkar, R., Analog versus digital: extrapolating from electronics to neurobiology. Neural Computation, 1998. 10(7): p. 1601-38.
Super-Bit Locality-Sensitive Hashing

Jianqiu Ji*, Jianmin Li*, Shuicheng Yan†, Bo Zhang*, Qi Tian‡
*State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China jijq10@mails.tsinghua.edu.cn, {lijianmin, dcszb}@mail.tsinghua.edu.cn
†Department of Electrical and Computer Engineering, National University of Singapore, Singapore, 117576 eleyans@nus.edu.sg
‡Department of Computer Science, University of Texas at San Antonio, One UTSA Circle, University of Texas at San Antonio, San Antonio, TX 78249-1644 qitian@cs.utsa.edu

Abstract

Sign-random-projection locality-sensitive hashing (SRP-LSH) is a probabilistic dimension reduction method which provides an unbiased estimate of angular similarity, yet suffers from the large variance of its estimation. In this work, we propose Super-Bit locality-sensitive hashing (SBLSH). It is easy to implement: it orthogonalizes the random projection vectors in batches, and it is theoretically guaranteed that SBLSH also provides an unbiased estimate of angular similarity, yet with a smaller variance when the angle to estimate is within (0, π/2]. Extensive experiments on real data validate that, given the same length of binary code, SBLSH may achieve significant mean squared error reduction in estimating pairwise angular similarity. Moreover, SBLSH shows superiority over SRP-LSH in approximate nearest neighbor (ANN) retrieval experiments.

1 Introduction

Locality-sensitive hashing (LSH) methods aim to hash similar data samples to the same hash code with high probability [7, 9]. There exist various kinds of LSH for approximating different distances or similarities, e.g., bit-sampling LSH [9, 7] for Hamming distance and ℓ1-distance, and min-hash [2, 5] for the Jaccard coefficient. Among them are some binary LSH schemes, which generate binary codes.
Binary LSH approximates a certain distance or similarity between two data samples by computing the Hamming distance between the corresponding compact binary codes. Since computing Hamming distance involves mainly bitwise operations, it is much faster than directly computing other distances, e.g., Euclidean or cosine, which require many arithmetic operations. On the other hand, storage is substantially reduced due to the use of compact binary codes. In large-scale applications [22, 11, 5, 17], e.g., near-duplicate image detection, object and scene recognition, etc., we are often confronted with intensive computation of distances or similarities between samples, and binary LSH may act as a scalable solution.

1.1 Locality-Sensitive Hashing for Angular Similarity

For many data representations, the natural pairwise similarity is only related to the angle between the data, e.g., the normalized bag-of-words representation for documents, images, and videos, and normalized histogram-based local features like SIFT [20]. In these cases, angular similarity can serve as a similarity measure, defined as sim(a, b) = 1 − cos⁻¹(⟨a, b⟩ / (‖a‖‖b‖)) / π. Here ⟨a, b⟩ denotes the inner product of a and b, and ‖·‖ denotes the ℓ2-norm of a vector.

One popular LSH for approximating angular similarity is sign-random-projection LSH (SRP-LSH) [3], a binary LSH method which provides an unbiased estimate of angular similarity. Formally, in a d-dimensional data space, let v denote a random vector sampled from the normal distribution N(0, I_d), and x denote a data sample; then an SRP-LSH function is defined as h_v(x) = sgn(vᵀx), where the sign function is defined as sgn(z) = 1 if z ≥ 0, and 0 if z < 0. Given two data samples a, b, let θ_{a,b} = cos⁻¹(⟨a, b⟩ / (‖a‖‖b‖)); then it can be proven [8] that Pr(h_v(a) ≠ h_v(b)) = θ_{a,b}/π. This property explains the essence of locality sensitivity, and also reveals the relation between Hamming distance and angular similarity.
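As a quick numerical sketch of the definitions above (this is our own illustration, not code from the paper; NumPy is assumed, and the helper name srp_lsh is hypothetical), the SRP-LSH bits and the collision property Pr(h_v(a) ≠ h_v(b)) = θ_{a,b}/π can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)

def srp_lsh(x, V):
    """SRP-LSH code for a single vector x: bit i is sgn(v_i^T x),
    with the paper's convention sgn(z) = 1 for z >= 0, else 0."""
    return (V @ x >= 0).astype(np.uint8)

d, K = 64, 1000
V = rng.standard_normal((K, d))          # rows are v_i ~ N(0, I_d)
a, b = rng.standard_normal(d), rng.standard_normal(d)

theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
# Fraction of disagreeing bits estimates Pr[h_v(a) != h_v(b)] = theta / pi.
disagree_rate = np.mean(srp_lsh(a, V) != srp_lsh(b, V))
```

With K = 1000 random projections, the empirical disagreement rate should land close to θ_{a,b}/π.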
By independently sampling K d-dimensional vectors v_1, ..., v_K from the normal distribution N(0, I_d), we may define a function h(x) = (h_{v_1}(x), h_{v_2}(x), ..., h_{v_K}(x)), which consists of K SRP-LSH functions and thus produces K-bit codes. Then it is easy to prove that

E[d_Hamming(h(a), h(b))] = Kθ_{a,b}/π = Cθ_{a,b}.

That is, the expectation of the Hamming distance between the binary hash codes of two given data samples a and b is an unbiased estimate of their angle θ_{a,b}, up to a constant scale factor C = K/π. Thus SRP-LSH provides an unbiased estimate of angular similarity. Since d_Hamming(h(a), h(b)) follows a binomial distribution, i.e., d_Hamming(h(a), h(b)) ∼ B(K, θ_{a,b}/π), its variance is K(θ_{a,b}/π)(1 − θ_{a,b}/π). This implies that the variance of d_Hamming(h(a), h(b))/K satisfies

Var[d_Hamming(h(a), h(b))/K] = (θ_{a,b}/(Kπ))(1 − θ_{a,b}/π).

Though widely used, SRP-LSH suffers from the large variance of its estimation, which leads to large estimation error. Generally we need a substantially long code to accurately approximate the angular similarity [24, 12, 23]. Since any two of the random vectors may be close to linearly dependent, the resulting binary code may be less informative than it seems, and may even contain many redundant bits. An intuitive idea would be to orthogonalize the random vectors. However, once orthogonalized, the random vectors can no longer be viewed as independently sampled. Moreover, it remains unclear whether the resulting Hamming distance is still an unbiased estimate of the angle θ_{a,b} multiplied by a constant, and what its variance will be. Later we will give theoretically justified answers to these two questions. In the next section, based on the above intuitive idea, we propose the Super-Bit locality-sensitive hashing (SBLSH) method.
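A minimal simulation of the two statements above (our own sketch, assuming NumPy): repeating the K-bit angle estimate θ̂ = π · d_Hamming/K over many independent draws of the projection vectors, the sample mean should approach θ_{a,b} and the sample variance should approach π²(θ_{a,b}/π)(1 − θ_{a,b}/π)/K, i.e., π² times the variance of d_Hamming/K:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, trials = 32, 64, 2000

a, b = rng.standard_normal(d), rng.standard_normal(d)
theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

est = np.empty(trials)
for t in range(trials):
    V = rng.standard_normal((K, d))      # fresh set of K random projections
    ham = np.count_nonzero((V @ a >= 0) != (V @ b >= 0))
    est[t] = np.pi * ham / K             # divide Hamming distance by C = K / pi

p = theta / np.pi
pred_var = np.pi ** 2 * p * (1 - p) / K  # predicted variance of the angle estimate
# est.mean() should be close to theta (unbiasedness),
# est.var() should be close to pred_var (binomial variance).
```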
We provide theoretical guarantees that after orthogonalizing the random projection vectors in batches, we still get an unbiased estimate of angular similarity, yet with a smaller variance when θ_{a,b} ∈ (0, π/2], and thus the resulting binary code is more informative. Experiments on real data show the effectiveness of SBLSH, which with the same length of binary code may achieve as much as 30% mean squared error (MSE) reduction compared with SRP-LSH in estimating angular similarity. Moreover, SBLSH performs best among several widely used data-independent LSH methods in approximate nearest neighbor (ANN) retrieval experiments.

2 Super-Bit Locality-Sensitive Hashing

The proposed SBLSH is founded on SRP-LSH. When the code length K satisfies 1 < K ≤ d, where d is the dimension of the data space, we can orthogonalize N (1 ≤ N ≤ min(K, d) = K) of the random vectors sampled from the normal distribution N(0, I_d). The orthogonalization procedure is the Gram-Schmidt process, which projects the current vector orthogonally onto the orthogonal complement of the subspace spanned by the previous vectors. After orthogonalization, these N random vectors can no longer be viewed as independently sampled, so we group their resulting bits together as an N-Super-Bit. We call N the Super-Bit depth. However, when the code length K > d, it is impossible to orthogonalize all K vectors. Assume that K = N × L without loss of generality, with 1 ≤ N ≤ d; then we can perform the Gram-Schmidt process to orthogonalize them in L batches. Formally, K random vectors {v_1, v_2, ..., v_K} are independently sampled from the normal distribution N(0, I_d) and then divided into L batches of N vectors each. By performing the Gram-Schmidt process on each of these L batches of N vectors, we get K = N × L projection vectors {w_1, w_2, ..., w_K}. This results in K SBLSH functions (h_{w_1}, h_{w_2}, ..., h_{w_K}), where h_{w_i}(x) = sgn(w_iᵀx). These K functions produce L N-Super-Bits and altogether produce binary codes of length K.
Figure 1 shows an example of generating 12 SBLSH projection vectors. Algorithm 1 lists the algorithm for generating SBLSH projection vectors. Note that when the Super-Bit depth N = 1, SBLSH reduces to SRP-LSH; in other words, SRP-LSH is a special case of SBLSH. The algorithm can easily be extended to the case when the code length K is not a multiple of the Super-Bit depth N. In fact, one can even use variable Super-Bit depths N_i, as long as 1 ≤ N_i ≤ d. With the same code length, SBLSH has the same running time O(Kd) as SRP-LSH in on-line processing, i.e., generating binary codes when applied to data.

Figure 1: An illustration of 12 SBLSH projection vectors {w_i} generated by orthogonalizing the random projection vectors {v_i}, sampled from N(0, I), in 4 batches.

Algorithm 1 Generating Super-Bit Locality-Sensitive Hashing Projection Vectors
Input: Data space dimension d, Super-Bit depth 1 ≤ N ≤ d, number of Super-Bits L ≥ 1, resulting code length K = N × L.
Generate a random matrix H with each element sampled independently from the normal distribution N(0, 1), with each column normalized to unit length. Denote H = [v_1, v_2, ..., v_K].
for i = 0 to L − 1 do
  for j = 1 to N do
    w_{iN+j} = v_{iN+j}
    for k = 1 to j − 1 do
      w_{iN+j} = w_{iN+j} − w_{iN+k} w_{iN+k}ᵀ v_{iN+j}
    end for
    w_{iN+j} = w_{iN+j} / ‖w_{iN+j}‖
  end for
end for
Output: H̃ = [w_1, w_2, ..., w_K].

2.1 Unbiased Estimate

In this subsection we prove that SBLSH provides an unbiased estimate of θ_{a,b} for a, b ∈ R^d.

Lemma 1. ([8], Lemma 3.2) Let S^{d−1} denote the unit sphere in R^d. Given a random vector v uniformly sampled from S^{d−1}, we have Pr[h_v(a) ≠ h_v(b)] = θ_{a,b}/π.

Lemma 2. If v ∈ R^d follows an isotropic distribution, then v̄ = v/‖v‖ is uniformly distributed on S^{d−1}. This lemma can be proven from the definition of an isotropic distribution, and we omit the details here.

Lemma 3.
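Algorithm 1 translates almost line-for-line into NumPy. The sketch below is our own (we sample raw Gaussian vectors and skip the initial column normalization, which is harmless since Gram-Schmidt normalizes each w anyway); within every batch of N the resulting vectors are orthonormal:

```python
import numpy as np

def sblsh_projections(d, N, L, rng):
    """Algorithm 1: sample K = N*L Gaussian vectors and Gram-Schmidt-
    orthogonalize them in L batches of N (requires 1 <= N <= d)."""
    assert 1 <= N <= d
    V = rng.standard_normal((N * L, d))
    W = np.empty_like(V)
    for i in range(L):
        for j in range(N):
            w = V[i * N + j].copy()
            for k in range(j):           # subtract projections onto earlier w's
                w -= (W[i * N + k] @ V[i * N + j]) * W[i * N + k]
            W[i * N + j] = w / np.linalg.norm(w)
    return W

rng = np.random.default_rng(2)
W = sblsh_projections(d=16, N=8, L=3, rng=rng)
G = W[:8] @ W[:8].T                      # Gram matrix of the first batch
```

Hashing then proceeds exactly as in SRP-LSH, with W in place of the raw Gaussian matrix.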
Given k vectors v_1, ..., v_k ∈ R^d, sampled i.i.d. from the normal distribution N(0, I_d), which span a subspace S_k, let P_{S_k} denote the orthogonal projection onto S_k; then P_{S_k} is a random matrix uniformly distributed on the Grassmann manifold G_{k,d−k}. This lemma can be proven by applying Theorem 2.2.1(iii) and Theorem 2.2.2(iii) in [4].

Lemma 4. If P is a random matrix uniformly distributed on the Grassmann manifold G_{k,d−k}, 1 ≤ k ≤ d, and v ∼ N(0, I_d) is independent of P, then the random vector ṽ = Pv follows an isotropic distribution. This follows directly from the uniformity of P on the Grassmann manifold and the properties of the normal distribution N(0, I_d). We give a sketch of the proof below.

Proof. We can write P = UUᵀ, where the columns of U = [u_1, u_2, ..., u_k] constitute an orthonormal basis of a random k-dimensional subspace. Since the standard normal distribution is 2-stable [6], v̂ = Uᵀv = [v̂_1, v̂_2, ..., v̂_k]ᵀ is an N(0, I_k)-distributed vector, where each v̂_i ∼ N(0, 1), and it is easy to verify that v̂ is independent of U. Therefore ṽ = Pv = Uv̂ = Σ_{i=1}^{k} v̂_i u_i. Since u_1, ..., u_k can be any orthonormal basis of any k-dimensional subspace with equal probability density, and {v̂_1, v̂_2, ..., v̂_k} are i.i.d. N(0, 1) random variables, ṽ follows an isotropic distribution.

Theorem 1. Given N i.i.d. random vectors v_1, v_2, ..., v_N ∈ R^d sampled from the normal distribution N(0, I_d), where 1 ≤ N ≤ d, perform the Gram-Schmidt process on them to produce N orthogonalized vectors w_1, w_2, ..., w_N. Then for any two data vectors a, b ∈ R^d, defining N indicator random variables X_1, X_2, ..., X_N by X_i = 1 if h_{w_i}(a) ≠ h_{w_i}(b) and X_i = 0 if h_{w_i}(a) = h_{w_i}(b), we have E[X_i] = θ_{a,b}/π for any 1 ≤ i ≤ N.

Proof. Denote by S_{i−1} the subspace spanned by {w_1, ..., w_{i−1}}, and by P⊥_{S_{i−1}} the orthogonal projection onto its orthogonal complement. Then w_i = P⊥_{S_{i−1}} v_i. Denote w̄ = w_i/‖w_i‖. For any 1 ≤ i ≤ N, E[X_i] = Pr[X_i = 1] = Pr[h_{w_i}(a) ≠ h_{w_i}(b)] = Pr[h_{w̄}(a) ≠ h_{w̄}(b)].
For i = 1, by Lemma 2 and Lemma 1, we have Pr[X_1 = 1] = θ_{a,b}/π. For any 1 < i ≤ N, consider the distribution of w_i. By Lemma 3, P_{S_{i−1}} is a random matrix uniformly distributed on the Grassmann manifold G_{i−1,d−i+1}, thus P⊥_{S_{i−1}} = I − P_{S_{i−1}} is uniformly distributed on G_{d−i+1,i−1}. Since v_i ∼ N(0, I_d) is independent of v_1, v_2, ..., v_{i−1}, v_i is independent of P⊥_{S_{i−1}}. By Lemma 4, w_i = P⊥_{S_{i−1}} v_i follows an isotropic distribution. By Lemma 2, w̄ = w_i/‖w_i‖ is uniformly distributed on the unit sphere in R^d. By Lemma 1, Pr[h_{w̄}(a) ≠ h_{w̄}(b)] = θ_{a,b}/π.

Corollary 1. For any Super-Bit depth N, 1 ≤ N ≤ d, assuming that the code length K = N × L, the Hamming distance d_Hamming(h(a), h(b)) is an unbiased estimate of θ_{a,b}, for any two data vectors a, b ∈ R^d, up to a constant scale factor C = K/π.

Proof. Applying Theorem 1, we get E[d_Hamming(h(a), h(b))] = L × E[Σ_{i=1}^{N} X_i] = L × Σ_{i=1}^{N} E[X_i] = L × Σ_{i=1}^{N} θ_{a,b}/π = Kθ_{a,b}/π = Cθ_{a,b}.

2.2 Variance

In this subsection we prove that when the angle θ_{a,b} ∈ (0, π/2], the variance of SBLSH is strictly smaller than that of SRP-LSH.

Lemma 5. For the random variables {X_i} defined in Theorem 1, we have the following equality: Pr[X_i = 1 | X_j = 1] = Pr[X_i = 1 | X_1 = 1], 1 ≤ j < i ≤ N ≤ d.

Proof. Pr[X_i = 1 | X_j = 1] = Pr[h_{w_i}(a) ≠ h_{w_i}(b) | X_j = 1] = Pr[h_{v_i − Σ_{k=1}^{i−1} w_k w_kᵀ v_i}(a) ≠ h_{v_i − Σ_{k=1}^{i−1} w_k w_kᵀ v_i}(b) | h_{w_j}(a) ≠ h_{w_j}(b)]. Since {w_1, ..., w_{i−1}} is a uniformly random orthonormal basis of a random subspace uniformly distributed on the Grassmann manifold, by exchanging the indices j and 1 we have that this equals Pr[h_{v_i − Σ_{k=1}^{i−1} w_k w_kᵀ v_i}(a) ≠ h_{v_i − Σ_{k=1}^{i−1} w_k w_kᵀ v_i}(b) | h_{w_1}(a) ≠ h_{w_1}(b)] = Pr[X_i = 1 | X_1 = 1].

Lemma 6. For {X_i} defined in Theorem 1, we have Pr[X_i = 1 | X_j = 1] = Pr[X_2 = 1 | X_1 = 1], 1 ≤ j < i ≤ N ≤ d. Given θ_{a,b} ∈ (0, π/2], we have Pr[X_2 = 1 | X_1 = 1] < θ_{a,b}/π. The proof of this lemma is long, so we provide it in the Appendix (in the supplementary file).

Theorem 2.
Given two vectors a, b ∈ R^d and the random variables {X_i} defined as in Theorem 1, denote p_{2,1} = Pr[X_2 = 1 | X_1 = 1], and S_X = Σ_{i=1}^{N} X_i, which is the Hamming distance between the N-Super-Bits of a and b, for 1 < N ≤ d. Then

Var[S_X] = Nθ_{a,b}/π + N(N − 1) p_{2,1} θ_{a,b}/π − (Nθ_{a,b}/π)².

Proof. By Lemma 6, Pr[X_i = 1 | X_j = 1] = Pr[X_2 = 1 | X_1 = 1] = p_{2,1} when 1 ≤ j < i ≤ N. Therefore Pr[X_i = 1, X_j = 1] = Pr[X_i = 1 | X_j = 1] Pr[X_j = 1] = p_{2,1} θ_{a,b}/π, for any 1 ≤ j < i ≤ N. Therefore Var[S_X] = E[S_X²] − E[S_X]² = Σ_{i=1}^{N} E[X_i²] + 2 Σ_{j<i} E[X_i X_j] − N² E[X_1]² = Nθ_{a,b}/π + 2 Σ_{j<i} Pr[X_i = 1, X_j = 1] − (Nθ_{a,b}/π)² = Nθ_{a,b}/π + N(N − 1) p_{2,1} θ_{a,b}/π − (Nθ_{a,b}/π)².

Corollary 2. Denote by Var[SBLSH_{N,K}] the variance of the Hamming distance produced by SBLSH, where 1 ≤ N ≤ d is the Super-Bit depth and K = N × L is the code length. Then Var[SBLSH_{N,K}] = L × Var[SBLSH_{N,N}]. Furthermore, given θ_{a,b} ∈ (0, π/2], if K = N_1 × L_1 = N_2 × L_2 and 1 ≤ N_2 < N_1 ≤ d, then Var[SBLSH_{N_1,K}] < Var[SBLSH_{N_2,K}].

Proof. Since v_1, v_2, ..., v_K are independently sampled, and w_1, w_2, ..., w_K are produced by orthogonalizing every N vectors, the Hamming distances produced by different N-Super-Bits are independent; thus Var[SBLSH_{N,K}] = L × Var[SBLSH_{N,N}]. Therefore Var[SBLSH_{N_1,K}] = L_1 × (N_1 θ_{a,b}/π + N_1(N_1 − 1) p_{2,1} θ_{a,b}/π − (N_1 θ_{a,b}/π)²) = Kθ_{a,b}/π + K(N_1 − 1) p_{2,1} θ_{a,b}/π − K N_1 (θ_{a,b}/π)². By Lemma 6, when θ_{a,b} ∈ (0, π/2], for N_1 > N_2 > 1, 0 ≤ p_{2,1} < θ_{a,b}/π. Therefore Var[SBLSH_{N_1,K}] − Var[SBLSH_{N_2,K}] = (Kθ_{a,b}/π)(N_1 − N_2)(p_{2,1} − θ_{a,b}/π) < 0. For N_1 > N_2 = 1, Var[SBLSH_{N_1,K}] − Var[SBLSH_{N_2,K}] = (Kθ_{a,b}/π)(N_1 − 1)(p_{2,1} − θ_{a,b}/π) < 0.

Corollary 3. Denote by Var[SRPLSH_K] the variance of the Hamming distance produced by SRP-LSH, where K = N × L is the code length, L is a positive integer, and 1 < N ≤ d. If θ_{a,b} ∈ (0, π/2], then Var[SRPLSH_K] > Var[SBLSH_{N,K}].

Proof. By Corollary 2, Var[SRPLSH_K] = Var[SBLSH_{1,K}] > Var[SBLSH_{N,K}].

2.2.1 Numerical verification

Figure 2: The variances of SRP-LSH and SBLSH against the angle θ_{a,b} to estimate.
In this subsection we numerically verify the behavior of the variances of both SRP-LSH and SBLSH for different angles θ_{a,b} ∈ (0, π]. By Theorem 2, the variance of SBLSH is closely related to p_{2,1} defined in Theorem 2. We randomly generate 30 points in R^10, which involves 435 angles. For each angle, we numerically approximate p_{2,1} by sampling, with a sample size of 1000. We fix K = N = d, and plot the variances Var[SRPLSH_N] and Var[SBLSH_{N,N}] against various angles θ_{a,b}. Figure 2 shows that when θ_{a,b} ∈ (0, π/2], SBLSH has a much smaller variance than SRP-LSH, which verifies the correctness of Corollary 3 to some extent. Furthermore, Figure 2 shows that even when θ_{a,b} ∈ (π/2, π], SBLSH still has a smaller variance.

2.3 Discussion

From Corollary 1, SBLSH provides an unbiased estimate of angular similarity. From Corollary 3, when θ_{a,b} ∈ (0, π/2], with the same length of binary code, the variance of SBLSH is strictly smaller than that of SRP-LSH. In real applications, many vector representations are confined to the non-negative orthant, with all vector entries being non-negative, e.g., bag-of-words representations of documents and images, and histogram-based representations like the SIFT local descriptor [20]. Usually they are normalized to unit length, with only their orientations maintained. For this kind of data, the angle between any two different samples lies in (0, π/2], and thus SBLSH will provide more accurate estimation than SRP-LSH on such data. In fact, our later experiments show that even when θ_{a,b} is not constrained to (0, π/2], SBLSH still gives a much more accurate estimate of angular similarity.

3 Experimental Results

We conduct two sets of experiments, angular similarity estimation and approximate nearest neighbor (ANN) retrieval, to evaluate the effectiveness of the proposed SBLSH method. The first set of experiments directly measures the accuracy in estimating pairwise angular similarity.
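The numerical verification above can be reproduced in miniature (our own sketch, assuming NumPy; orthonormalization is done with np.linalg.qr rather than explicit Gram-Schmidt, and QR's possible sign flips of individual vectors do not change any Hamming distance, since v and −v define the same separating hyperplane): for a fixed pair with θ_{a,b} ∈ (0, π/2] and K = N = d, both estimators are unbiased but the SBLSH Hamming distance has the smaller spread:

```python
import numpy as np

rng = np.random.default_rng(3)
d = K = N = 10
trials = 3000

a = rng.standard_normal(d)
b = a + 0.5 * rng.standard_normal(d)     # a pair with a fairly small angle
theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hamming(P):
    """Hamming distance between the codes of a and b under projections P."""
    return np.count_nonzero((P @ a >= 0) != (P @ b >= 0))

srp = np.empty(trials)
sblsh = np.empty(trials)
for t in range(trials):
    V = rng.standard_normal((K, d))
    srp[t] = hamming(V)                  # SRP-LSH: raw Gaussian projections
    Q, _ = np.linalg.qr(V.T)             # SBLSH: orthonormalize all K = d vectors
    sblsh[t] = hamming(Q.T)
# Both means should be near K * theta / pi; var(sblsh) should be below var(srp).
```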
The second set of experiments then tests the performance of SBLSH in real retrieval applications.

3.1 Angular Similarity Estimation

In this experiment, we evaluate the accuracy of estimating pairwise angular similarity on several datasets. Specifically, we test the effect on estimation accuracy when the Super-Bit depth N varies and the code length K is fixed, and vice versa. For each preprocessed dataset D, we obtain D_LSH after performing SRP-LSH, and D_SBLSH after performing the proposed SBLSH. We compute the angle between each pair of samples in D and the corresponding Hamming distances in D_LSH and D_SBLSH. We then compute the mean squared error between the true angles and the angles approximated from D_LSH and D_SBLSH respectively. Note that after computing a Hamming distance, we divide the result by C = K/π to get the approximated angle.

3.1.1 Datasets and Preprocessing

We conduct the experiment on the following datasets: 1) the Photo Tourism patch dataset¹ [26], Notre Dame, which contains 104,106 patches, each represented by a 128-D SIFT descriptor (Photo Tourism SIFT); and 2) MIR-Flickr², which contains 25,000 images, each represented by a 3125-D bag-of-SIFT-feature histogram. For each dataset, we further conduct a simple preprocessing step as in [12], i.e., mean-centering each data sample, so as to obtain additional mean-centered versions of the above datasets, Photo Tourism SIFT (mean) and MIR-Flickr (mean). The experiments on these mean-centered datasets test the performance of SBLSH when the angles of data pairs are not constrained to (0, π/2].

3.1.2 The Effect of Super-Bit Depth N and Code Length K

Figure 3: The effect of Super-Bit depth N (1 < N ≤ min(d, K)) with fixed code length K (K = N × L), and the effect of code length K with fixed Super-Bit depth N, for SRP-LSH, SBLSH, Mean+SRP-LSH, and Mean+SBLSH.
¹http://phototour.cs.washington.edu/patches/default.htm
²http://users.ecs.soton.ac.uk/jsh2/mirflickr/

Table 1: ANN retrieval results, measured by the proportion of good neighbors within the query's Hamming ball of radius 3. Note that the code length K = 30.

Data       | E2LSH           | SRP-LSH         | SBLSH
Notre Dame | 0.4675 ± 0.0900 | 0.7500 ± 0.0525 | 0.7845 ± 0.0352
Half Dome  | 0.4503 ± 0.0712 | 0.7137 ± 0.0413 | 0.7535 ± 0.0276
Trevi      | 0.4661 ± 0.0849 | 0.7591 ± 0.0464 | 0.7891 ± 0.0329

In each dataset, for each (N, K) pair, i.e., Super-Bit depth N and code length K, we randomly sample 10,000 data points, which involve about 50,000,000 data pairs, and randomly generate SRP-LSH functions, together with SBLSH functions obtained by orthogonalizing the generated SRPs in batches. We repeat the test 10 times and compute the mean squared error (MSE) of the estimation. To test the effect of Super-Bit depth N, we fix K = 120 for Photo Tourism SIFT and K = 3000 for MIR-Flickr; to test the effect of code length K, we fix N = 120 for Photo Tourism SIFT and N = 3000 for MIR-Flickr. We repeat the experiment on the mean-centered versions of these datasets, and denote the methods by Mean+SRP-LSH and Mean+SBLSH respectively.

Figure 3 shows that with fixed code length K, as the Super-Bit depth N gets larger (1 < N ≤ min(d, K)), the MSE of SBLSH gets smaller, and the gap between SBLSH and SRP-LSH gets larger. In particular, when N = K, over 30% MSE reduction can be observed on all the datasets. This verifies Corollary 2: when applying SBLSH, the best strategy is to set the Super-Bit depth N as large as possible, i.e., min(d, K). An informal explanation of this interesting phenomenon is that as the degree of orthogonality of the random projections gets higher, the code becomes more and more informative, and thus provides a better estimate. On the other hand, it can be observed that the performance on the mean-centered datasets is similar to that on the original datasets.
This shows that even when the angle between each data pair is not constrained to (0, π/2], SBLSH still gives much more accurate estimation. Figure 3 also shows that with fixed Super-Bit depth N, SBLSH significantly outperforms SRP-LSH. As the code length K increases, the accuracies of SBLSH and SRP-LSH both increase. The performance on the mean-centered datasets is similar to that on the original datasets.

3.2 Approximate Nearest Neighbor Retrieval

In this subsection, we conduct an ANN retrieval experiment, which compares SBLSH with two other widely used data-independent binary LSH methods: SRP-LSH and E2LSH (we use the binary version in [25, 1]). We use the datasets Notre Dame, Half Dome and Trevi from the Photo Tourism patch dataset [26], which is also used in [12, 10, 13] for ANN retrieval. We use the 128-D SIFT representation and normalize the vectors to unit norm. For each dataset, we randomly pick 1,000 samples as queries, and use the remaining samples (around 100,000) as the corpus for the retrieval task. We define the good neighbors of a query q as the samples within the top 5% nearest neighbors (measured in Euclidean distance) to q. We adopt the evaluation criterion used in [12, 25], i.e., the proportion of good neighbors among returned samples that are within the query's Hamming ball of radius r. We set r = 3. Using code length K = 30, we repeat the experiment 10 times and take the mean of the results. For SBLSH, we fix the Super-Bit depth N = K = 30. Table 1 shows that SBLSH performs best among these data-independent hashing methods.

4 Relations to Other Hashing Methods

There exist various kinds of LSH methods, e.g., bit-sampling LSH [9, 7] for Hamming distance and ℓ1-distance, min-hash [2] for the Jaccard coefficient, and p-stable-distribution LSH [6] for ℓp-distance with p ∈ (0, 2]. These data-independent methods are simple, and thus easy to integrate as a module in more complicated algorithms involving pairwise distance or similarity computation, e.g.,
nearest neighbor search. New data-independent methods improving these original LSH methods have been proposed recently. [1] proposed a near-optimal LSH method for Euclidean distance. Li et al. [16] proposed b-bit minwise hashing, which improves the original min-hash in terms of compactness. [17] shows that b-bit minwise hashing can be integrated into linear learning algorithms for large-scale learning tasks. [14] reduces the variance of random projections by taking advantage of marginal norms, and compares the variance of SRP with that of regular random projections considering the margins. [15] proposed very sparse random projections for accelerating random projections and SRP. Prior to SBLSH, SRP-LSH [3] was the only hashing method proven to provide an unbiased estimate of angular similarity. The proposed SBLSH method is the first data-independent method that outperforms SRP-LSH in terms of accuracy in estimating angular similarity.

On the other hand, data-dependent hashing methods have been extensively studied. For example, spectral hashing [25] and anchor graph hashing [19] are data-dependent unsupervised methods. Kulis et al. [13] proposed kernelized locality-sensitive hashing (KLSH), which is based on SRP-LSH, to approximate the angular similarity in the very high or even infinite dimensional space induced by any given kernel, with access to data only via kernels. There is also a body of work devoted to semi-supervised or supervised hashing methods [10, 21, 23, 24, 18], which try to capture not only the geometry of the original data but also semantic relations.

5 Discussion

Instead of the Gram-Schmidt process, we can use other methods to orthogonalize the projection vectors, e.g., the Householder transformation, which is numerically more stable. The advantage of the Gram-Schmidt process is its simplicity in describing the algorithm. In this paper we did not test the method on data of very high dimension.
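As the discussion suggests, each batch can be orthonormalized with a Householder-based QR factorization instead of classical Gram-Schmidt; NumPy's np.linalg.qr uses Householder reflections internally, so a batched variant (our own sketch, not the paper's algorithm) is short:

```python
import numpy as np

def sblsh_projections_qr(d, N, L, rng):
    """Batched orthonormalization via Householder QR. QR may flip the sign
    of some vectors relative to Gram-Schmidt, but flipping w_i flips bit i
    for every input alike, so pairwise Hamming distances are unchanged."""
    W = np.empty((N * L, d))
    for i in range(L):
        batch = rng.standard_normal((d, N))  # one batch of N Gaussian columns
        Q, _ = np.linalg.qr(batch)           # columns of Q are orthonormal
        W[i * N:(i + 1) * N] = Q.T
    return W

rng = np.random.default_rng(4)
W = sblsh_projections_qr(d=32, N=8, L=2, rng=rng)
```

For large N this is both faster and numerically better conditioned than the triple loop of classical Gram-Schmidt.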
When the dimension is high and N is not small, the Gram-Schmidt process becomes computationally expensive. In fact, when the dimension of the data is very high, the random normal projection vectors {v_i}, i = 1, 2, ..., K, will tend to be orthogonal to each other, so it may not be necessary to orthogonalize the vectors deliberately. From Corollary 2 and the results in Section 3.1.2, we can see that the closer the Super-Bit depth N is to the data dimension d, the larger the variance reduction SBLSH achieves over SRP-LSH. A technical report³ (Li et al.) shows that b-bit minwise hashing almost always has a smaller variance than SRP in estimating the Jaccard coefficient on binary data. A comparison of SBLSH with b-bit minwise hashing for the Jaccard coefficient is left for future work.

6 Conclusion and Future Work

The proposed SBLSH is a data-independent hashing method which significantly outperforms SRP-LSH. We have theoretically proved that SBLSH provides an unbiased estimate of angular similarity, and has a smaller variance than SRP-LSH when the angle to estimate is in (0, π/2]. The algorithm is simple, easy to implement, and can be integrated as a basic module in more complicated algorithms. Experiments show that with the same length of binary code, SBLSH achieves over 30% mean squared error reduction over SRP-LSH in estimating angular similarity, when the Super-Bit depth N is close to the data dimension d. Moreover, SBLSH performs best among several widely used data-independent LSH methods in approximate nearest neighbor retrieval experiments. Theoretically exploring the variance of SBLSH when the angle is in (π/2, π] is left for future work.

Acknowledgments

This work was supported by the National Basic Research Program (973 Program) of China (Grant Nos.
91120011 and 61273023), Tsinghua University Initiative Scientific Research Program No. 20121088071, and the NExT Research Center funded under research grant WBS R-252-300001-490 by MDA, Singapore. Dr. Qi Tian was supported in part by ARO grant W911BF12-1-0057, NSF IIS 1052851, and Faculty Research Awards by Google, FXPAL, and NEC Laboratories of America.

³ www.stat.cornell.edu/~li/hashing/RP_minwise.pdf
Multiple Operator-valued Kernel Learning

Hachem Kadri, LIF - CNRS / INRIA Lille - Sequel Project, Université Aix-Marseille, Marseille, France, hachem.kadri@lif.univ-mrs.fr
Alain Rakotomamonjy, LITIS EA 4108, Université de Rouen, St Etienne du Rouvray, France, alain.rakotomamony@insa-rouen.fr
Francis Bach, INRIA - Sierra Project, Ecole Normale Supérieure, Paris, France, francis.bach@inria.fr
Philippe Preux, INRIA Lille - Sequel Project, LIFL - CNRS, Université de Lille, Villeneuve d'Ascq, France, philippe.preux@inria.fr

Abstract

Positive definite operator-valued kernels generalize the well-known notion of reproducing kernels, and are naturally adapted to multi-output learning situations. This paper addresses the problem of learning a finite linear combination of infinite-dimensional operator-valued kernels which are suitable for extending functional data analysis methods to nonlinear contexts. We study this problem in the case of kernel ridge regression for functional responses with an ℓ_r-norm constraint on the combination coefficients (r ≥ 1). The resulting optimization problem is more involved than those of multiple scalar-valued kernel learning, since operator-valued kernels pose more technical and theoretical issues. We propose a multiple operator-valued kernel learning algorithm based on solving a system of linear operator equations using a block coordinate-descent procedure. We experimentally validate our approach on a functional regression task in the context of finger movement prediction in brain-computer interfaces.

1 Introduction

During the past decades, a large number of algorithms have been proposed to deal with learning problems in the case of single-valued functions (e.g., binary-output functions for classification or real outputs for regression). Recently, there has been considerable interest in estimating vector-valued functions [21, 5, 7]. Much of this interest has arisen from the need to learn tasks where the target is a complex entity, not a scalar variable.
Typical learning situations include multi-task learning [11], functional regression [12], and structured output prediction [4]. In this paper, we are interested in the problem of functional regression with functional responses in the context of brain-computer interface (BCI) design. More precisely, we are interested in finger movement prediction from electrocorticographic signals [23]. Indeed, from a set of signals measuring brain surface electrical activity on d channels during a given period of time, we want to predict, for any instant of that period, whether a finger is moving or not and the amplitude of the finger flexion. Formally, the problem consists in learning a functional dependency between a set of d signals and a sequence of labels (a step function indicating whether a finger is moving or not), and between the same set of signals and a vector of real values (the amplitude function). While it is clear that this problem can be formalized as a functional regression problem, in our view it can also benefit from a multiple operator-valued kernel learning framework. Indeed, for these problems, one of the difficulties arises from the unknown latency between the signal related to the finger movement and the actual movement [23]. Hence, instead of fixing some value for this latency in advance in the regression model, our framework allows it to be learned from the data by means of several operator-valued kernels.

If we wish to address the functional regression problem in the principled framework of reproducing kernel Hilbert spaces (RKHS), we have to consider RKHSs whose elements are operators that map a function to another function space, the source and target function spaces being possibly different. Working in such RKHSs, we are able to draw on the important body of work on scalar-valued and vector-valued RKHSs [28, 21]. Such a functional RKHS framework and the associated operator-valued kernels have been introduced recently [12, 13].
A basic question with reproducing kernels is how to build these kernels and what the optimal kernel choice is for a given application. In order to overcome the need for choosing a kernel before the learning process, several works have tried to address the problem of learning the scalar-valued kernel jointly with the decision function [18, 29]. Since these seminal works, many efforts have been carried out to theoretically analyze the kernel learning framework [9, 3] or to provide efficient algorithms [24, 1, 15]. While many works have been devoted to multiple scalar-valued kernel learning, the kernel learning problem has barely been investigated for operator-valued kernels. One motivation of this work is to bridge the gap between multiple kernel learning (MKL) and operator-valued kernels by proposing a framework and an algorithm for learning a finite linear combination of operator-valued kernels. While each step of the scalar-valued MKL framework can be extended without major difficulties to operator-valued kernels, technical challenges arise at all stages because we deal with infinite-dimensional spaces. It should be pointed out that, in a recent work [10], the problem of learning the output kernel was formulated as an optimization problem over the cone of positive semidefinite matrices, and a block-coordinate descent method was proposed to solve it. However, that work did not focus on learning the input kernel. In contrast, our multiple operator-valued kernel learning formulation can be seen as a way of learning input and output kernels simultaneously, although we consider a linear combination of kernels that are fixed in advance.
In this paper, we make the following contributions: 1) we introduce a novel approach to infinite-dimensional multiple operator-valued kernel learning (MovKL) suitable for learning the functional dependencies and interactions between continuous data; 2) we extend the original formulation of ridge regression in dual variables to the functional data analysis domain, showing how to perform nonlinear functional regression with functional responses by constructing a linear regression operator in an operator-valued kernel feature space (Section 2); 3) we derive a dual form of the MovKL problem with functional ridge regression, and show that a solution of the related optimization problem exists (Section 2); 4) we propose a block-coordinate descent algorithm to solve the MovKL optimization problem, which involves solving a challenging linear system with a sum of block operator matrices (Section 3); 5) we provide an empirical evaluation of MovKL which demonstrates its effectiveness on a BCI dataset (Section 4).

2 Problem Setting

Before describing the multiple operator-valued kernel learning algorithm that we study and experiment with in this paper, we first review notions and properties of reproducing kernel Hilbert spaces with operator-valued kernels, show their connection to learning from multiple-response data (multiple outputs; see [21] for discrete data and [12] for continuous data), and describe the optimization problem for learning kernels with functional-response ridge regression.

2.1 Notations and Preliminaries

We start with some standard notations and definitions used throughout the paper. Given a Hilbert space H, ⟨·, ·⟩_H and ∥·∥_H refer to its inner product and norm, respectively. We denote by G_x and G_y the separable real Hilbert spaces of input and output functional data, respectively. In the functional data analysis domain, continuous data are generally assumed to belong to the space of square-integrable functions L².
In this work, we consider that G_x and G_y are Hilbert spaces L²(Ω), consisting of all equivalence classes of square-integrable functions on a finite set Ω, where Ω may differ between G_x and G_y. We denote by F(G_x, G_y) the vector space of functions f : G_x → G_y, and by L(G_y) the set of bounded linear operators from G_y to G_y.

We consider the problem of estimating a function f such that f(x_i) = y_i for observed functional data (x_i, y_i), i = 1, ..., n, in (G_x, G_y). Since G_x and G_y are spaces of functions, the problem can be thought of as an operator estimation problem, where the desired operator maps a Hilbert space of factors to a Hilbert space of targets. We can define the regularized operator estimate of f ∈ F as

    f_λ ≜ argmin_{f∈F} (1/n) Σ_{i=1}^{n} ∥y_i − f(x_i)∥²_{G_y} + λ∥f∥²_F.   (1)

In this work, we look for a solution to this minimization problem in a function-valued reproducing kernel Hilbert space F. More precisely, we mainly focus on the RKHS F whose elements are continuous linear operators on G_x with values in G_y. The continuity property is obtained by considering a special class of reproducing kernels called Mercer kernels [7, Proposition 2.2]. Note that, in this case, F is separable since G_x and G_y are separable [6, Corollary 5.2]. We now introduce (function) G_y-valued reproducing kernel Hilbert spaces and show the correspondence between such spaces and positive definite (operator) L(G_y)-valued kernels. These extend the traditional properties of scalar-valued kernels.

Definition 1 (function-valued RKHS) A Hilbert space F of functions from G_x to G_y is called a reproducing kernel Hilbert space if there is a positive definite L(G_y)-valued kernel K_F(w, z) on G_x × G_x such that:
i. the function z ↦ K_F(w, z)g belongs to F, ∀z ∈ G_x, w ∈ G_x, g ∈ G_y;
ii. ∀f ∈ F, w ∈ G_x, g ∈ G_y: ⟨f, K_F(w, ·)g⟩_F = ⟨f(w), g⟩_{G_y} (reproducing property).

Definition 2 (operator-valued kernel) An L(G_y)-valued kernel K_F(w, z) on G_x is a function K_F(·, ·) : G_x × G_x → L(G_y); furthermore:
i.
K_F is Hermitian if K_F(w, z) = K_F(z, w)*, where * denotes the adjoint operator;
ii. K_F is positive definite on G_x if it is Hermitian and, for every natural number r and all {(w_i, u_i), i = 1, ..., r} ∈ G_x × G_y, Σ_{i,j} ⟨K_F(w_i, w_j)u_j, u_i⟩_{G_y} ≥ 0.

Theorem 1 (bijection between function-valued RKHS and operator-valued kernel) An L(G_y)-valued kernel K_F(w, z) on G_x is the reproducing kernel of some Hilbert space F if and only if it is positive definite.

The proof of Theorem 1 can be found in [21]. For further reading on operator-valued kernels and their associated RKHSs, see, e.g., [5, 6, 7].

2.2 Functional Response Ridge Regression in Dual Variables

We can write the ridge regression with functional responses optimization problem (1) as follows:

    min_{f∈F} (1/2)∥f∥²_F + 1/(2nλ) Σ_{i=1}^{n} ∥ξ_i∥²_{G_y}  with  ξ_i = y_i − f(x_i).   (2)

We now introduce the Lagrange multipliers α_i, i = 1, ..., n, which are functional variables since the output space is the space of functions G_y. For the optimization problem (2), the Lagrange multipliers exist and the Lagrangian function is well defined. We apply the method of Lagrange multipliers on Banach spaces, a generalization of the classical (finite-dimensional) Lagrange multipliers method suitable for certain infinite-dimensional constrained optimization problems; for more details, see [16]. Let α = (α_i), i = 1, ..., n, in G_y^n be the vector of functions containing the Lagrange multipliers; the Lagrangian function is defined as

    L(f, α, ξ) = (1/2)∥f∥²_F + 1/(2nλ)∥ξ∥²_{G_y^n} + ⟨α, y − f(x) − ξ⟩_{G_y^n},   (3)

where α = (α_1, ..., α_n) ∈ G_y^n, y = (y_1, ..., y_n) ∈ G_y^n, f(x) = (f(x_1), ..., f(x_n)) ∈ G_y^n, ξ = (ξ_1, ..., ξ_n) ∈ G_y^n, and, for all a, b ∈ G_y^n, ⟨a, b⟩_{G_y^n} = Σ_{i=1}^{n} ⟨a_i, b_i⟩_{G_y}.

Differentiating (3) with respect to f ∈ F and setting the derivative to zero, we obtain

    f(·) = Σ_{i=1}^{n} K(x_i, ·)α_i,   (4)

where K : G_x × G_x → L(G_y) is the operator-valued kernel of F.
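The coefficients α_i of the representer form (4) come out of the dual problem derived next. As a purely illustrative NumPy sketch (all sizes and data are assumptions, and the nλ factor of the text is absorbed into lam here): once functions are discretized to finite vectors, the block operator kernel matrix becomes an ordinary SPD matrix, and the stationarity condition of the concave dual objective reduces the solve to a single linear system.

```python
import numpy as np

# Numerical sketch of the dual solve: with functions discretized to vectors,
# the block operator kernel matrix K is an ordinary SPD matrix, and the
# stationarity condition of the concave dual objective
#   D(a) = -(lam/2)||a||^2 - (1/2)<K a, a> + <a, y>
# gives (K + lam*I) a = y.  Sizes and data below are illustrative.
rng = np.random.default_rng(0)
m, lam = 12, 0.2
A = rng.standard_normal((m, m))
K = A @ A.T                      # SPD stand-in for the block kernel matrix
y = rng.standard_normal(m)

alpha = np.linalg.solve(K + lam * np.eye(m), y)

def dual(a):
    return -0.5 * lam * a @ a - 0.5 * a @ K @ a + a @ y

# Any perturbation of alpha strictly decreases the concave dual objective
worse = dual(alpha + 0.01 * rng.standard_normal(m))
print(dual(alpha) > worse)   # → True
```

With alpha in hand, predictions follow (4): f(x*) is a kernel-weighted combination Σ_i K(x_i, x*)α_i.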
Substituting this into (3) and minimizing with respect to ξ, we obtain the dual of the functional-response kernel ridge regression (KRR) problem:

    max_α −(nλ/2)∥α∥²_{G_y^n} − (1/2)⟨Kα, α⟩_{G_y^n} + ⟨α, y⟩_{G_y^n},   (5)

where K = [K(x_i, x_j)], i, j = 1, ..., n, is the block operator kernel matrix. The computational details regarding the dual formulation of functional KRR are derived in Appendix B of [14].

2.3 MovKL in Dual Variables

Let us now consider that the function f(·) is a sum of M functions {f_k(·)}, k = 1, ..., M, where each f_k belongs to a G_y-valued RKHS with kernel K_k(·, ·). Similarly to scalar-valued multiple kernel learning, we can cast the problem of learning these functions f_k as

    min_{d∈D} min_{f_k∈F_k} Σ_{k=1}^{M} ∥f_k∥²_{F_k}/(2d_k) + 1/(2nλ) Σ_{i=1}^{n} ∥ξ_i∥²_{G_y}  with  ξ_i = y_i − Σ_{k=1}^{M} f_k(x_i),   (6)

where d = [d_1, ..., d_M], D = {d : ∀k, d_k ≥ 0 and Σ_k d_k^r ≤ 1}, and 1 ≤ r ≤ ∞. Note that this problem can equivalently be rewritten as an unconstrained optimization problem. Before deriving the dual of this problem, it can be shown by means of the generalized Weierstrass theorem [17] that it admits a solution; we report the proof in Appendix A of [14]. Now, following the lines of [24], a dualization of this problem leads to the equivalent one

    min_{d∈D} max_{α∈G_y^n} −(nλ/2)∥α∥²_{G_y^n} − (1/2)⟨Kα, α⟩_{G_y^n} + ⟨α, y⟩_{G_y^n},   (7)

where K = Σ_{k=1}^{M} d_k K_k and K_k is the block operator kernel matrix associated with the operator-valued kernel K_k. The KKT conditions also state that, at optimality, f_k(·) = Σ_{i=1}^{n} d_k K_k(x_i, ·)α_i.

3 Solving the MovKL Problem

Having presented the framework, we now devise an algorithm for solving the MovKL problem.

3.1 Block-coordinate descent algorithm

Since the optimization problem (6) has the same structure as a multiple scalar-valued kernel learning problem, we can build our MovKL algorithm upon the MKL literature. Hence, we propose to borrow from [15] and consider a block-coordinate descent method.
The convergence of block coordinate descent algorithms of this kind, which are closely related to the Gauss-Seidel method, was studied in the work of [30] and others. The difference here is that we have operators and block operator matrices rather than matrices and block matrices, but this does not increase the complexity provided the inverses of the operators are computable (typically analytically or by spectral decomposition). Our algorithm iteratively solves the problem with respect to α with d fixed, and then with respect to d with α fixed (see Algorithm 1). After initializing {d_k} to non-zero values, this boils down to the following steps:

1. With {d_k} fixed, the resulting optimization problem with respect to α has the form

    (K + λI)α = y,   (8)

where K = Σ_{k=1}^{M} d_k K_k. While the form of the solution is rather simple, solving this linear system is still challenging in the operator setting, and we propose below an algorithm for its resolution.

2. With {f_k} fixed, according to problem (6), we can rewrite the problem as

    min_{d∈D} Σ_{k=1}^{M} ∥f_k∥²_{F_k} / d_k,   (9)

which has a closed-form solution, with optimality occurring at [20]:

    d_k = ∥f_k∥^{2/(r+1)} / ( Σ_k ∥f_k∥^{2r/(r+1)} )^{1/r}.   (10)

This algorithm is similar to those of [8] and [15], both being based on alternating optimization. The difference here is that we have to solve a linear system involving a block operator kernel matrix which is a combination of basic kernel matrices associated with M operator-valued kernels. This makes the system very challenging, and we present an algorithm for solving it in the next paragraph. We also report in Appendix C of [14] a convergence proof of a modified version of the MovKL algorithm that minimizes a perturbation of the objective function (6) with a small positive parameter, required to guarantee convergence [2].
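The closed-form weight update (10) is easy to check numerically. A minimal sketch (the ∥f_k∥ values below are illustrative placeholders, not RKHS norms computed from data):

```python
import numpy as np

# Sketch of the closed-form update (10) for the l_r-norm constraint:
#   d_k = ||f_k||^(2/(r+1)) / ( sum_k ||f_k||^(2r/(r+1)) )^(1/r).
# The `norms` values are placeholders standing in for ||f_k||_{F_k}.
def update_weights(norms, r):
    norms = np.asarray(norms, dtype=float)
    num = norms ** (2.0 / (r + 1))
    den = (norms ** (2.0 * r / (r + 1))).sum() ** (1.0 / r)
    return num / den

d = update_weights([1.0, 2.0, 0.5], r=2)
# The update is feasible for D: d_k >= 0, and the l_r constraint is
# saturated at the optimum, i.e. sum_k d_k^r = 1.
print(d, (d ** 2).sum())
```

Note that for r = 2 the update exactly saturates Σ_k d_k² = 1, consistent with the constraint set D of (6).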
3.2 Solving a linear system involving multiple operator-valued kernel matrices

One common way to construct operator-valued kernels is to build scalar-valued ones which are carried over to the vector-valued (resp. function-valued) setting by a positive definite matrix (resp. operator). In this setting, an operator-valued kernel has the form K(w, z) = G(w, z)T, where G is a scalar-valued kernel and T is a positive operator in L(G_y). In multi-task learning, T is a finite-dimensional matrix that is expected to share information between tasks [11, 5]. More recently, and for supervised functional output learning problems, T has been chosen to be a multiplication or an integral operator [12, 13]. This choice is motivated by the fact that functional linear models for functional responses [25] are based on these operators, and such kernels therefore provide an interesting way to extend these models to nonlinear contexts. In addition, some works on functional regression and structured-output learning consider operator-valued kernels constructed from the identity operator, as in [19] and [4]. In this work, we adopt a functional data analysis point of view and are therefore interested in a finite combination of operator-valued kernels constructed from identity, multiplication, and integral operators.

A problem encountered when working with operator-valued kernels in infinite-dimensional spaces is that of solving the system of linear operator equations (8). In the following, we show how to solve this problem for two cases of operator-valued kernel combinations.

Case 1: multiple scalar-valued kernels and one operator. This is the simpler case, where the combination of operator-valued kernels has the form

    K(w, z) = Σ_{k=1}^{M} d_k G_k(w, z) T,   (11)

where G_k is a scalar-valued kernel, T is a positive operator in L(G_y), and the d_k are the combination coefficients.
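Since all terms in (11) share the single operator T, the block kernel matrix of (8) factorizes as a Kronecker product G ⊗ T with G = Σ_k d_k G_k, and the system can be solved through the eigendecompositions of G and T without ever forming the full block matrix. A minimal finite-dimensional sketch (sizes, data, and the dense cross-check are illustrative assumptions):

```python
import numpy as np

# Sketch of the Case-1 solve: with K = G kron T, the system (K + lam*I)a = y
# is inverted through the eigendecompositions of G (n x n) and T (p x p)
# instead of forming the n*p x n*p block operator matrix.
rng = np.random.default_rng(0)
n, p, lam = 8, 6, 0.3

A = rng.standard_normal((n, n)); G = A @ A.T   # combined scalar kernel matrix
B = rng.standard_normal((p, p)); T = B @ B.T   # positive output operator
Y = rng.standard_normal((n, p))                # stacked (discretized) outputs

eg, U = np.linalg.eigh(G)
et, V = np.linalg.eigh(T)

# a = (U kron V) diag(1/(eg_i*et_j + lam)) (U kron V)^T vec(Y), computed
# matrix-wise via the row-major identity (G kron T) vec(Y) = vec(G Y T^T)
C = U.T @ Y @ V
C = C / (np.outer(eg, et) + lam)
alpha = U @ C @ V.T

# Cross-check against the direct dense solve
direct = np.linalg.solve(np.kron(G, T) + lam * np.eye(n * p), Y.ravel())
print(np.allclose(alpha.ravel(), direct))   # → True
```

The eigendecomposition route costs O(n³ + p³) plus matrix products, versus O((np)³) for the dense solve, which is why the analytic inversion matters here.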
In this setting, the block operator kernel matrix K can be expressed as a Kronecker product between the multiple scalar-valued kernel matrix G = Σ_{k=1}^{M} d_k G_k, where G_k = [G_k(x_i, x_j)], i, j = 1, ..., n, and the operator T. Thus we can compute an analytic solution of the system of equations (8) by inverting K + λI using the eigendecompositions of G and T, as in [13].

Case 2: multiple scalar-valued kernels and multiple operators. This is the general case, where multiple operator-valued kernels are combined as

    K(w, z) = Σ_{k=1}^{M} d_k G_k(w, z) T_k,   (12)

where G_k is a scalar-valued kernel, T_k is a positive operator in L(G_y), and the d_k are the combination coefficients.

Algorithm 1: ℓ_r-norm MovKL
    Input: K_k for k = 1, ..., M
    d_k^1 ← 1/M for k = 1, ..., M
    α ← 0
    for t = 1, 2, ... do
        α′ ← α
        K ← Σ_k d_k^t K_k
        α ← solution of (K + λI)α = y
        if ∥α − α′∥ < ϵ then break end if
        d_k^{t+1} ← ∥f_k∥^{2/(r+1)} / (Σ_k ∥f_k∥^{2r/(r+1)})^{1/r} for k = 1, ..., M
    end for

Algorithm 2: Gauss-Seidel Method
    choose an initial vector of functions α^(0)
    repeat
        for i = 1, 2, ..., n
            α_i^(t) ← solution of (13): [K(x_i, x_i) + λI]α_i^(t) = s_i
        end for
    until convergence

Inverting the associated block operator kernel matrix K is not feasible in this case, which is why we propose a Gauss-Seidel iterative procedure (see Algorithm 2) to solve the system of linear operator equations (8). Starting from an initial vector of functions α^(0), the idea is to iteratively compute, until a convergence condition is satisfied, the functions α_i according to

    [K(x_i, x_i) + λI]α_i^(t) = y_i − Σ_{j=1}^{i−1} K(x_i, x_j)α_j^(t) − Σ_{j=i+1}^{n} K(x_i, x_j)α_j^(t−1),   (13)

where t is the iteration index. This problem is still challenging because the kernel K(·, ·) still involves a positive combination of operator-valued kernels.
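When the diagonal blocks K(x_i, x_i) + λI can be inverted directly, the sweep (13) is straightforward to run. A finite-dimensional sketch (sizes, kernel values, the single-operator choice, and the dense cross-check are all illustrative assumptions, not the experimental setup):

```python
import numpy as np

# Finite-dimensional sketch of the Gauss-Seidel sweep (13): each operator
# block K(x_i, x_j) is realized as a p x p matrix, and only the diagonal
# blocks K(x_i, x_i) + lam*I are inverted.
rng = np.random.default_rng(1)
n, p, lam = 6, 4, 1.0

M1 = rng.standard_normal((n, n))
G = M1 @ M1.T                            # SPD scalar kernel matrix
T = np.eye(p)                            # output operator (identity here)
Kblock = lambda i, j: G[i, j] * T        # operator-valued kernel block

y = [rng.standard_normal(p) for _ in range(n)]
alpha = [np.zeros(p) for _ in range(n)]

for sweep in range(500):                 # iterate (13) to (near) convergence
    for i in range(n):
        # j < i uses this sweep's updates, j > i the previous sweep's values
        s = y[i] - sum(Kblock(i, j) @ alpha[j] for j in range(n) if j != i)
        alpha[i] = np.linalg.solve(Kblock(i, i) + lam * np.eye(p), s)

# Cross-check against a dense solve of the full block system (K + lam*I)a = y
K = np.block([[Kblock(i, j) for j in range(n)] for i in range(n)])
dense = np.linalg.solve(K + lam * np.eye(n * p), np.concatenate(y))
print(np.allclose(np.concatenate(alpha), dense, atol=1e-6))
```

Since K + λI is symmetric positive definite here, the Gauss-Seidel sweeps are guaranteed to converge; the text next addresses the harder situation where even the diagonal blocks, being combinations of several operator-valued kernels, cannot be inverted directly.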
Our algorithm is based on the idea that, instead of inverting the finite combination of operator-valued kernels [K(x_i, x_i) + λI], we can consider the following variational formulation of this system:

    min_{α_i^(t)} (1/2)⟨Σ_{k=1}^{M+1} K_k(x_i, x_i)α_i^(t), α_i^(t)⟩_{G_y} − ⟨s_i, α_i^(t)⟩_{G_y},

where s_i = y_i − Σ_{j=1}^{i−1} K(x_i, x_j)α_j^(t) − Σ_{j=i+1}^{n} K(x_i, x_j)α_j^(t−1), K_k = d_k G_k T_k for all k ∈ {1, ..., M}, and K_{M+1} = λI. Now, by means of a variable-splitting approach, we are able to decouple the roles of the different kernels. Indeed, the above problem is equivalent to the following one:

    min_{α_i^(t)} (1/2)⟨K̂(x_i, x_i)α_i^(t), α_i^(t)⟩_{G_y^M} − ⟨s_i, α_i^(t)⟩_{G_y^M}  with  α_{i,1}^(t) = α_{i,k}^(t) for k = 2, ..., M+1,

where K̂(x_i, x_i) is the (M+1) × (M+1) diagonal matrix of operators [K_k(x_i, x_i)], k = 1, ..., M+1, α_i^(t) is the vector (α_{i,1}^(t), ..., α_{i,M+1}^(t)), and s_i is the (M+1)-dimensional vector (s_i, 0, ..., 0). We now have to deal with a quadratic optimization problem with equality constraints. Writing down the Lagrangian of this optimization problem and deriving its first-order optimality conditions leads to the following set of linear equations:

    K_1(x_i, x_i)α_{i,1} − s_i + Σ_{k=1}^{M} γ_k = 0,
    K_k(x_i, x_i)α_{i,k} − γ_k = 0,
    α_{i,1} − α_{i,k} = 0,   (14)

where k = 2, ..., M+1 and the {γ_k} are the Lagrange multipliers related to the M equality constraints. In this set of equations, the operator-valued kernels have been decoupled; thus, if their inverses can be easily computed (which is the case in our experiments), one can solve problem (14) with respect to {α_{i,k}} and γ_k by means of another Gauss-Seidel algorithm after a simple reorganization of the linear system.
Figure 1: Example of a couple of input-output signals in our BCI task. (left) Amplitude modulation features extracted from ECoG signals over 5 pre-defined channels. (middle) Signal of labels denoting whether the finger is moving or not. (right) Real amplitude movement of the finger.

4 Experiments

In order to highlight the benefit of our multiple operator-valued kernel learning approach, we have conducted a series of experiments on a real dataset, involving functional output prediction in a brain-computer interface framework. The problem we address is a sub-problem related to finger movement decoding from electrocorticographic (ECoG) signals. We focus on estimating whether a finger is moving or not, and also on the direct estimation of the finger movement amplitude from the ECoG signals. The development of the full BCI application is beyond the scope of this paper; our objective here is to show that this problem of predicting finger movement can benefit from multiple kernel learning. To this aim, the fourth dataset from BCI Competition IV [22] was used. The subjects were 3 epileptic patients who had platinum electrode grids placed on the surface of their brains. The number of electrodes varies between 48 and 64 depending on the subject, and their positions on the cortex were unknown. ECoG signals of the subject were recorded at a 1 kHz sampling rate using BCI2000 [27]. A band-pass filter from 0.15 to 200 Hz was applied to the ECoG signals. The finger flexion of the subject was recorded at 25 Hz and up-sampled to 1 kHz, by means of a data glove which measures the finger movement amplitude. Due to the acquisition process, a delay appears between the finger movement and the measured ECoG signal [22]. One of our hopes is that this time lag can be properly learned by means of multiple operator-valued kernels.
Features from the ECoG signals are built by computing band-specific amplitude modulation features, defined as the sum of the squares of the band-specific filtered ECoG signal samples during a fixed time window. For our finger movement prediction task, we kept 5 manually selected channels and split the ECoG signals into portions of 200 samples. For each of these time segments, we have, at each time sample, the label of whether the finger is moving or not, as well as the real movement amplitudes. The dataset is composed of 487 couples of input-output signals, the output signals being either the binary movement labels or the real amplitude movement. An example of input-output signals is depicted in Figure 1. In a nutshell, the problem boils down to a functional regression task with functional responses.

To evaluate the performance of the multiple operator-valued kernel learning approach, we use both: (1) the percentage of labels correctly recognized (LCR), defined by (W_r/T_n) × 100%, where W_r is the number of well-recognized labels and T_n the total number of labels to be recognized; and (2) the residual sum of squares error (RSSE) as the evaluation criterion for curve prediction:

    RSSE = ∫ Σ_i {y_i(t) − ŷ_i(t)}² dt,   (15)

where ŷ_i(t) is the prediction of the function y_i(t) corresponding to the real finger movement or the finger movement state.

For the multiple operator-valued kernels of the form (12), we used a Gaussian kernel with 5 different bandwidths and polynomial kernels of degree 1 to 3, combined with three operators T: the identity, Ty(t) = y(t); the multiplication operator associated with the function e^{−t²}, defined by Ty(t) = e^{−t²}y(t); and the integral Hilbert-Schmidt operator with kernel e^{−|t−s|} proposed in [13], Ty(t) = ∫ e^{−|t−s|}y(s)ds. The inverses of these operators can be computed analytically.

Table 1: (Left) Results for the movement state prediction.
Residual Sum of Squares Error (RSSE) and percentage of Labels Correctly Recognized (LCR) for: (1) baseline KRR with the Gaussian kernel; (2) functional response KRR with the integral operator-valued kernel; (3) MovKL with ℓ_∞, ℓ_1, and ℓ_2-norm constraints. (Right) Residual Sum of Squares Error (RSSE) results for finger movement prediction.

Movement state prediction:
    Algorithm                     RSSE    LCR (%)
    KRR - scalar-valued           68.32   72.91
    KRR - functional response     49.40   80.20
    MovKL - ℓ_∞ norm              45.44   81.34
    MovKL - ℓ_1 norm              48.12   80.66
    MovKL - ℓ_2 norm              39.36   84.72

Finger movement prediction:
    Algorithm                     RSSE
    KRR - scalar-valued           88.21
    KRR - functional response     79.86
    MovKL - ℓ_∞ norm              76.52
    MovKL - ℓ_1 norm              78.24
    MovKL - ℓ_2 norm              75.15

While the inverses of the identity and multiplication operators are directly computable from the analytic expressions of the operators, the inverse of the integral operator is computed from its spectral decomposition, as in [13]. The number of eigenfunctions as well as the regularization parameter λ are fixed using "one-curve-leave-out cross-validation" [26], with the aim of minimizing the residual sum of squares error. Empirical results on the BCI dataset are summarized in Table 1. The dataset was randomly partitioned into 65% training and 35% test sets. We compare our approach, under ℓ_1 and ℓ_2-norm constraints on the combination coefficients, with: (1) the baseline scalar-valued kernel ridge regression algorithm, considering each output independently of the others; (2) functional response ridge regression using an integral operator-valued kernel [13]; (3) kernel ridge regression with an evenly-weighted sum of operator-valued kernels, which we denote by ℓ_∞-norm MovKL. As in the scalar case, using multiple operator-valued kernels leads to better results. By directly combining kernels constructed from identity, multiplication, and integral operators, we could reduce the residual sum of squares error and enhance the label classification accuracy.
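The two evaluation criteria reported in Table 1 can be sketched on sampled curves as follows; approximating the integral in (15) by a Riemann sum over the time grid is our assumption, as are the toy label arrays:

```python
import numpy as np

# Sketch of the two evaluation criteria: RSSE of (15), approximated on the
# sampling grid by a Riemann sum with step dt, and LCR = (W_r / T_n) * 100.
def rsse(Y_true, Y_pred, dt=1.0):
    # rows are curves y_i(t), columns are time samples
    return float(((Y_true - Y_pred) ** 2).sum() * dt)

def lcr(labels_true, labels_pred):
    return 100.0 * (labels_true == labels_pred).mean()

# Toy example: two label curves of 4 samples each, one sample misclassified
y    = np.array([[0, 0, 1, 1], [1, 1, 0, 0]], dtype=float)
yhat = np.array([[0, 0, 1, 0], [1, 1, 0, 0]], dtype=float)
print(rsse(y, yhat), lcr(y, yhat))   # → 1.0 87.5
```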
Best results are obtained using the MovKL algorithm with an ℓ_2-norm constraint on the combination coefficients. The RSSE and LCR of the baseline kernel ridge regression are significantly outperformed by the operator-valued kernel based functional response regression. These results confirm that, by taking into account the relationships between outputs, we can improve performance. This is due to the fact that an operator-valued kernel induces a similarity measure between two pairs of inputs/outputs.

5 Conclusion

In this paper we have presented a new method for learning simultaneously an operator and a finite linear combination of operator-valued kernels. We have extended the MKL framework to deal with functional response kernel ridge regression, and we have proposed a block coordinate descent algorithm to solve the resulting optimization problem. The method was applied to a BCI dataset to predict finger movement in a functional regression setting. Experimental results show that our algorithm achieves good performance, outperforming existing methods. For future work, it would be interesting to thoroughly compare the proposed MKL method for operator estimation with previous related methods for multi-class and multi-label MKL in the contexts of structured-output learning and collaborative filtering.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments. This research was funded by the Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council and FEDER (Contrat de Projets Etat Region CPER 2007-2013), ANR projects LAMPADA (ANR-09-EMER-007) and ASAP (ANR-09-EMER-001), and by the IST Program of the European Community under the PASCAL2 Network of Excellence (IST-216886). This publication only reflects the authors' views. Francis Bach was partially supported by the European Research Council (SIERRA Project).

References

[1] J. Aflalo, A. Ben-Tal, C. Bhattacharyya, J. Saketha Nath, and S. Raman. Variable sparsity kernel learning.
JMLR, 12:565–592, 2011.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[3] F. Bach. Consistency of the group Lasso and multiple kernel learning. JMLR, 9:1179–1225, 2008.
[4] C. Brouard, F. d'Alché-Buc, and M. Szafranski. Semi-supervised penalized output kernel regression for link prediction. In Proc. ICML, 2011.
[5] A. Caponnetto, C. A. Micchelli, M. Pontil, and Y. Ying. Universal multi-task kernels. JMLR, 68:1615–1646, 2008.
[6] C. Carmeli, E. De Vito, and A. Toigo. Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem. Analysis and Applications, 4:377–408, 2006.
[7] C. Carmeli, E. De Vito, and A. Toigo. Vector valued reproducing kernel Hilbert spaces and universality. Analysis and Applications, 8:19–61, 2010.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In Proc. UAI, 2009.
[9] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels. In ICML, 2010.
[10] F. Dinuzzo, C. S. Ong, P. Gehler, and G. Pillonetto. Learning output kernels with block coordinate descent. In Proc. ICML, 2011.
[11] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. JMLR, 6:615–637, 2005.
[12] H. Kadri, E. Duflos, P. Preux, S. Canu, and M. Davy. Nonlinear functional regression: a functional RKHS approach. In Proc. AISTATS, pages 111–125, 2010.
[13] H. Kadri, A. Rabaoui, P. Preux, E. Duflos, and A. Rakotomamonjy. Functional regularized least squares classification with operator-valued kernels. In Proc. ICML, 2011.
[14] H. Kadri, A. Rakotomamonjy, F. Bach, and P. Preux. Multiple operator-valued kernel learning. Technical Report 00677012, INRIA, 2012.
[15] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. ℓp-norm multiple kernel learning. JMLR, 12:953–997, 2011.
[16] S. Kurcyusz. On the existence and nonexistence of Lagrange multipliers in Banach spaces.
Journal of Optimization Theory and Applications, 20:81–110, 1976. [17] A. Kurdila and M. Zabarankin. Convex Functional Analysis. Birkhauser Verlag, 2005. [18] G. Lanckriet, N. Cristianini, L. El Ghaoui, P. Bartlett, and M. Jordan. Learning the kernel matrix with semi-definite programming. JMLR, 5:27–72, 2004. [19] H. Lian. Nonlinear functional models for functional responses in reproducing kernel Hilbert spaces. The Canadian Journal of Statistics, 35:597–606, 2007. [20] C. Micchelli and M. Pontil. Learning the kernel function via regularization. JMLR, 6:1099–1125, 2005. [21] C. A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Comput., 17:177–204, 2005. [22] K. J. Miller and G. Schalk. Prediction of finger flexion: 4th brain-computer interface data competition. BCI Competition IV, 2008. [23] T. Pistohl, T. Ball, A. Schulze-Bonhage, A. Aertsen, and C. Mehring. Prediction of arm movement trajectories from ECoG-recordings in humans. Journal of Neuroscience Methods, 167(1):105–114, 2008. [24] A. Rakotomamonjy, F. Bach, Y. Grandvalet, and S. Canu. SimpleMKL. JMLR, 9:2491–2521, 2008. [25] J. O. Ramsay and B. W. Silverman. Functional Data Analysis, 2nd ed. Springer Verlag, New York, 2005. [26] John A. Rice and B. W. Silverman. Estimating the mean and covariance structure nonparametrically when the data are curves. Journal of the Royal Statistical Society. Series B, 53(1):233–243, 1991. [27] G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw. BCI2000: a generalpurpose brain-computer interface system. Biomedical Engineering, IEEE Trans. on, 51:1034–1043, 2004. [28] B. Sch¨olkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2002. [29] S. Sonnenburg, G. R¨atsch, C. Sch¨afer, and B. Sch¨olkopf. Large scale multiple kernel learning. JMLR, 7:1531–1565, 2006. [30] P. Tseng. 
Convergence of block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl., 109:475–494, 2001. 9
No-Regret Algorithms for Unconstrained Online Convex Optimization

Matthew Streeter (Duolingo, Inc.∗, Pittsburgh, PA 15232, matt@duolingo.com) and H. Brendan McMahan (Google, Inc., Seattle, WA 98103, mcmahan@google.com)

Abstract

Some of the most compelling applications of online convex optimization, including online prediction and classification, are unconstrained: the natural feasible set is Rn. Existing algorithms fail to achieve sub-linear regret in this setting unless constraints on the comparator point ˚x are known in advance. We present algorithms that, without such prior knowledge, offer near-optimal regret bounds with respect to any choice of ˚x. In particular, regret with respect to ˚x = 0 is constant. We then prove lower bounds showing that our guarantees are near-optimal in this setting.

1 Introduction

Over the past several years, online convex optimization has emerged as a fundamental tool for solving problems in machine learning (see, e.g., [3, 12] for an introduction). The reduction from general online convex optimization to online linear optimization means that simple and efficient (in memory and time) algorithms can be used to tackle large-scale machine learning problems. The key theoretical technique behind essentially all the algorithms in this field is the use of a fixed or increasing strongly convex regularizer (for gradient descent algorithms, this is equivalent to a fixed or decreasing learning rate sequence). In this paper, we show that a fundamentally different type of algorithm can offer significant advantages over these approaches. Our algorithms adjust their learning rates based not just on the number of rounds, but also based on the sum of gradients seen so far. This allows us to start with small learning rates, but effectively increase the learning rate if the problem instance warrants it. This approach produces regret bounds of the form O(R√T log((1 + R)T)), where R = ∥˚x∥2 is the L2 norm of an arbitrary comparator.
Critically, our algorithms provide this guarantee simultaneously for all ˚x ∈ Rn, without any need to know R in advance. A consequence of this is that we can guarantee at most constant regret with respect to the origin, ˚x = 0. This technique can be applied to any online convex optimization problem where a fixed feasible set is not an essential component of the problem. We discuss two applications of particular interest below:

Online Prediction Perhaps the single most important application of online convex optimization is the following prediction setting: the world presents an attribute vector at ∈ Rn; the prediction algorithm produces a prediction σ(at · xt), where xt ∈ Rn represents the model parameters, and σ : R → Y maps the linear prediction into the appropriate label space. Then, the adversary reveals the label yt ∈ Y, and the prediction is penalized according to a loss function ℓ : Y × Y → R. For appropriately chosen σ and ℓ, this becomes a problem of online convex optimization against functions ft(x) = ℓ(σ(at · x), yt). In this formulation, there are no inherent restrictions on the model coefficients x ∈ Rn. The practitioner may have prior knowledge that "small" model vectors are more likely than large ones, but this is rarely best encoded as a feasible set F, which says: "all xt ∈ F are equally likely, and all other xt are ruled out." A more general strategy is to introduce a fixed convex regularizer: L1 and squared L2 penalties are common, but domain-specific choices are also possible. While algorithms of this form have proved very effective at solving these problems, theoretical guarantees usually require fixing a feasible set of radius R, or at least an intelligent guess of the norm of an optimal comparator ˚x. (∗This work was performed while the author was at Google.)
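As a concrete (hypothetical) instance of this reduction, take σ to be the sigmoid and ℓ the log loss; each round then yields a linear reward gradient gt = −∇ℓt(xt). The sketch below is our own illustration, not the paper's code, and the function names are ours:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logloss_gradient(a, x, y):
    """Reward gradient g_t = -grad l(sigmoid(a . x), y) w.r.t. x, for y in {0, 1}.

    The loss gradient of log loss is (p - y) * a; the reduction to online
    linear optimization flips the sign to treat it as a reward gradient.
    """
    p = sigmoid(sum(ai * xi for ai, xi in zip(a, x)))
    return [-(p - y) * ai for ai in a]

# One round: attribute vector a_t, current model x_t, revealed label y_t = 1.
g = logloss_gradient([1.0, -1.0], [0.0, 0.0], 1)
```

At x = 0 the prediction is p = 0.5, so the reward gradient pushes the model toward the observed label; no feasible set is needed anywhere in this loop.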
The Unconstrained Experts Problem and Portfolio Management In the classic problem of predicting with expert advice (e.g., [3]), there are n experts, and on each round t the player selects an expert (say i), and obtains reward gt,i from a bounded interval (say [−1, 1]). Typically, one uses an algorithm that proposes a probability distribution pt on experts, so the expected reward is pt · gt. Our algorithms apply to an unconstrained version of this problem: there are still n experts with payouts in [−1, 1], but rather than selecting an individual expert, the player can place a "bet" of xt,i on each expert i, and then receives reward Σ_{i} xt,i gt,i = xt · gt. The bets are unconstrained (betting a negative value corresponds to betting against the expert). In this setting, a natural goal is the following: place bets so as to achieve as much reward as possible, subject to the constraint that total losses are bounded by a constant (which can be set equal to some starting budget which is to be invested). Our algorithms can satisfy constraints of this form because regret with respect to ˚x = 0 (which equals total loss) is bounded by a constant. It is useful to contrast our results in this setting to previous applications of online convex optimization to portfolio management, for example [6] and [2]. By applying algorithms for exp-concave loss functions, they obtain log-wealth within O(log(T)) of the best constant rebalanced portfolio. However, this approach requires a "no-junk-bond" assumption: on each round, for each investment, you always retain at least an α > 0 fraction of your initial investment. While this may be realistic (though not guaranteed!) for blue-chip stocks, it certainly is not for bets on derivatives that can lose all their value unless a particular event occurs (e.g., a stock price crosses some threshold). Our model allows us to handle such investments: if we play xi > 0, an outcome of gi = −1 corresponds exactly to losing 100% of that investment.
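The betting protocol above is easy to state in code. The following sketch (our illustration, not the paper's) computes the cumulative reward Σt xt · gt and, for a fixed comparator ˚x, the regret g1:T · ˚x − Σt xt · gt:

```python
def total_reward(bets, payouts):
    """Cumulative reward sum_t x_t . g_t over T rounds."""
    return sum(sum(x * g for x, g in zip(xt, gt)) for xt, gt in zip(bets, payouts))

def regret(bets, payouts, comparator):
    """Regret(x*) = g_{1:T} . x* - sum_t x_t . g_t."""
    n = len(comparator)
    g_sum = [sum(gt[i] for gt in payouts) for i in range(n)]  # g_{1:T}
    return sum(g * x for g, x in zip(g_sum, comparator)) - total_reward(bets, payouts)

# Two experts, two rounds: bet nothing, then bet 1 unit on expert 0.
bets = [[0.0, 0.0], [1.0, 0.0]]
payouts = [[1.0, -1.0], [1.0, 1.0]]
```

Betting always-zero corresponds to the comparator ˚x = 0, for which regret equals total loss; this is exactly the origin-regret the paper bounds by a constant.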
Our results imply that if even one investment (out of exponentially many choices) has significant returns, we will increase our wealth exponentially.

Notation and Problem Statement For the algorithms considered in this paper, it will be more natural to consider reward-maximization rather than loss-minimization. Therefore, we consider online linear optimization where the goal is to maximize cumulative reward given adversarially selected linear reward functions ft(x) = gt · x. On each round t = 1 . . . T, the algorithm selects a point xt ∈ Rn, receives reward ft(xt) = gt · xt, and observes gt. For simplicity, we assume gt,i ∈ [−1, 1], that is, ∥gt∥∞ ≤ 1. If the real problem is against convex loss functions ℓt(x), they can be converted to our framework by taking gt = −∇ℓt(xt) (see pseudo-code for REWARD-DOUBLING), using the standard reduction from online convex optimization to online linear optimization [13]. We use the compressed summation notation g1:t = Σ_{s=1}^{t} gs for both vectors and scalars. We study the reward of our algorithms, and their regret against a fixed comparator ˚x:

Reward ≡ Σ_{t=1}^{T} gt · xt    and    Regret(˚x) ≡ g1:T · ˚x − Σ_{t=1}^{T} gt · xt.

Comparison of Regret Bounds The primary contribution of this paper is to establish matching upper and lower bounds for unconstrained online convex optimization problems, using algorithms that require no prior information about the comparator point ˚x. Specifically, we present an algorithm that, for any ˚x ∈ Rn, guarantees Regret(˚x) ≤ O(∥˚x∥2 √T log((1 + ∥˚x∥2)√T)). To obtain this guarantee, we show that it is sufficient (and necessary) that reward is Ω(exp(|g1:T|/√T)) (see Theorem 1). This shift of emphasis from regret-minimization to reward-maximization eliminates the quantification on ˚x, and may be useful in other contexts. Table 1 compares the bounds for REWARD-DOUBLING (this paper) to those of two previous algorithms: online gradient descent [13] and projected exponentiated gradient descent [8, 12].
For each algorithm, we consider a fixed choice of parameter settings and then look at how regret changes as we vary the comparator point ˚x. (Footnote: Our bounds are not directly comparable to the bounds cited above: an O(log(T)) regret bound on log-wealth implies wealth at least O(OPT/T), whereas we guarantee wealth like O(OPT' − √T). But more importantly, the comparison classes are different.)

Table 1: Worst-case regret bounds for various algorithms (up to constant factors). Exponentiated G.D. uses feasible set {x : ∥x∥1 ≤ R}, and REWARD-DOUBLING uses ϵi = ϵ/n in both cases.

Assuming ∥gt∥2 ≤ 1:
                               ˚x = 0 | ∥˚x∥2 ≤ R                    | arbitrary ˚x
Gradient Descent, η = R/√T:    R√T    | R√T                          | ∥˚x∥2 T
REWARD-DOUBLING:               ϵ      | R√T log(n(1+R)T/ϵ)          | ∥˚x∥2 √T log(n(1+∥˚x∥2)T/ϵ)

Assuming ∥gt∥∞ ≤ 1:
                               ˚x = 0       | ∥˚x∥1 ≤ R              | arbitrary ˚x
Exponentiated G.D.:            R√(T log n)  | R√(T log n)            | ∥˚x∥1 T
REWARD-DOUBLING:               ϵ            | R√T log(n(1+R)T/ϵ)    | ∥˚x∥1 √T log(n(1+∥˚x∥1)√T/ϵ)

Gradient descent is minimax-optimal [1] when the comparator point is contained in a hypersphere whose radius is known in advance (∥˚x∥2 ≤ R) and gradients are sparse (∥gt∥2 ≤ 1, top table). Exponentiated gradient descent excels when gradients are dense (∥gt∥∞ ≤ 1, bottom table) but the comparator point is sparse (∥˚x∥1 ≤ R for R known in advance). In both these cases, the bounds for REWARD-DOUBLING match those of the previous algorithms up to logarithmic factors, even when they are tuned optimally with knowledge of R. The advantage of REWARD-DOUBLING shows up when the guess of R used to tune the competing algorithms turns out to be wrong. When ˚x = 0, REWARD-DOUBLING offers constant regret compared to Ω(√T) for the other algorithms. When ˚x can be arbitrary, only REWARD-DOUBLING offers sub-linear regret (and in fact its regret bound is optimal, as shown in Theorem 8). In order to guarantee constant origin-regret, REWARD-DOUBLING frequently "jumps" back to playing the origin, which may be undesirable in some applications.
In Section 4 we introduce SMOOTH-REWARD-DOUBLING, which achieves similar guarantees without resetting to the origin.

Related Work Our work is related, at least in spirit, to the use of a momentum term in stochastic gradient descent for back propagation in neural networks [7, 11, 9]. These results are similar in motivation in that they effectively yield a larger learning rate when many recent gradients point in the same direction. In Follow-The-Regularized-Leader terms, the exponentiated gradient descent algorithm with unnormalized weights of Kivinen and Warmuth [8] plays xt+1 = argmin_{x ∈ Rn+} (g1:t · x + (1/η)(x log x − x)), which has closed-form solution xt+1 = exp(−η g1:t). Like our algorithm, this algorithm moves away from the origin exponentially fast, but unlike our algorithm it can incur arbitrarily large regret with respect to ˚x = 0. Theorem 9 shows that no algorithm of this form can provide bounds like the ones proved in this paper. Hazan and Kale [5] give regret bounds in terms of the variance of the gt. Letting G = |g1:T| and H = Σ_{t=1}^{T} gt², they prove regret bounds of the form O(√V) where V = H − G²/T. This result has some similarity to our work in that G/√T = √(H − V), and so if we hold H constant, then when V is low, the critical ratio G/√T that appears in our bounds is large. However, they consider the case of a known feasible set, and their algorithm (gradient descent with a constant learning rate) cannot obtain bounds of the form we prove.

2 Reward and Regret

In this section we present a general result that converts lower bounds on reward into upper bounds on regret, for one-dimensional online linear optimization. In the unconstrained setting, this result will be sufficient to provide guarantees for general n-dimensional online convex optimization.

Theorem 1. Consider an algorithm for one-dimensional online linear optimization that, when run on a sequence of gradients g1, g2, . . . , gT, with gt ∈ [−1, 1] for all t, guarantees

Reward ≥ κ exp(γ|g1:T|) − ϵ,    (1)

where γ, κ > 0 and ϵ ≥ 0 are constants. Then, against any comparator ˚x ∈ [−R, R], we have

Regret(˚x) ≤ (R/γ)(log(R/(κγ)) − 1) + ϵ,    (2)

letting 0 log 0 = 0 when R = 0. Further, any algorithm with the regret guarantee of Eq. (2) must guarantee the reward of Eq. (1).

We give a proof of this theorem in the appendix. The duality between reward and regret can also be seen as a consequence of the fact that exp(x) and y log y − y are convex conjugates. The γ term typically contains a dependence on T like 1/√T. This bound holds for all R, and so for some small R the log term becomes negative; however, for real algorithms the ϵ term will ensure the regret bound remains positive. The minus one can of course be dropped to simplify the bound further.

3 Gradient Descent with Increasing Learning Rates

In this section we show that allowing the learning rate of gradient descent to sometimes increase leads to novel theoretical guarantees. To build intuition, consider online linear optimization in one dimension, with gradients g1, g2, . . . , gT, all in [−1, 1]. In this setting, the reward of unconstrained gradient descent has a simple closed form:

Lemma 2. Consider unconstrained gradient descent in one dimension, with learning rate η. On round t, this algorithm plays the point xt = η g1:t−1. Letting G = |g1:T| and H = Σ_{t=1}^{T} gt², the cumulative reward of the algorithm is exactly Reward = (η/2)(G² − H).

We give a simple direct proof in Appendix A. Perhaps surprisingly, this result implies that the reward is totally independent of the order of the linear functions selected by the adversary. Examining the expression in Lemma 2, we see that the optimal choice of learning rate η depends fundamentally on two quantities: the absolute value of the sum of gradients (G), and the sum of the squared gradients (H). If G² > H, we would like to use as large a learning rate as possible in order to maximize reward.
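Lemma 2's closed form is easy to check numerically. The sketch below (our own illustration, not the paper's code) simulates unconstrained one-dimensional gradient descent, xt = η g1:t−1, and compares its cumulative reward to (η/2)(G² − H):

```python
import random

def gd_reward(gs, eta):
    """Unconstrained 1-d gradient descent: play x_t = eta * g_{1:t-1}.
    Returns the total reward sum_t x_t * g_t."""
    x, cum, total = 0.0, 0.0, 0.0
    for g in gs:
        total += x * g   # reward collected this round
        cum += g         # update g_{1:t}
        x = eta * cum    # next play
    return total

random.seed(0)
gs = [random.uniform(-1, 1) for _ in range(200)]
eta = 0.1
G = abs(sum(gs))
H = sum(g * g for g in gs)
closed_form = eta / 2 * (G * G - H)
```

The agreement is exact in real arithmetic, since Σt g1:t−1 gt = ((Σt gt)² − Σt gt²)/2; only floating-point rounding separates the two quantities here.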
In contrast, if G² < H, the algorithm will obtain negative reward, and the best it can do is to cut its losses by setting η as small as possible. One of the motivations for this work is the observation that the state-of-the-art online gradient descent algorithms adjust their learning rates based only on the observed value of H (or its upper bound T); for example [4, 10]. We would like to increase reward by also accounting for G. But unlike H, which is monotonically increasing with time, G can both increase and decrease. This makes simple guess-and-doubling tricks fail when applied to G, and necessitates a more careful approach.

3.1 Analysis in One Dimension

In this section we analyze algorithm REWARD-DOUBLING-1D (Algorithm 1), which consists of a series of epochs. We suppose for the moment that an upper bound H̄ on H = Σ_{t=1}^{T} gt² is known in advance. In the first epoch, we run gradient descent with a small initial learning rate η = η1. Whenever the total reward accumulated in the current epoch reaches η H̄, we double η and start a new epoch (returning to the origin and forgetting all previous gradients except the most recent one).

Lemma 3. Applied to a sequence of gradients g1, g2, . . . , gT, all in [−1, 1], where H = Σ_{t=1}^{T} gt² ≤ H̄, REWARD-DOUBLING-1D obtains reward satisfying

Reward = Σ_{t=1}^{T} xt gt ≥ (1/4) η1 H̄ exp(a|g1:T|/√H̄) − η1 H̄,    (3)

for a = log(2)/√3.

Algorithm 1 REWARD-DOUBLING-1D
  Parameters: initial learning rate η1, upper bound H̄ ≥ Σ_{t=1}^{T} gt².
  Initialize x1 ← 0, i ← 1, and Q1 ← 0.
  for t = 1, 2, . . . , T do
    Play xt, and receive reward xt gt.
    Qi ← Qi + xt gt.
    if Qi < ηi H̄ then
      xt+1 ← xt + ηi gt.
    else
      i ← i + 1; ηi ← 2ηi−1; Qi ← 0.
      xt+1 ← 0 + ηi gt.

Algorithm 2 REWARD-DOUBLING
  Parameters: maximum origin-regret ϵi for 1 ≤ i ≤ n.
  for i = 1, 2, . . . , n do
    Let Ai be a copy of algorithm REWARD-DOUBLING-1D-GUESS (see Theorem 4), with parameter ϵi.
  for t = 1, 2, . . . , T do
    Play xt, with xt,i selected by Ai.
    Receive gradient vector gt = −∇ft(xt).
    for i = 1, 2, . . . , n do
      Feed back gt,i to Ai.

Proof. Suppose round T occurs during the k'th epoch. Because epoch i can only come to an end if Qi ≥ ηi H̄, where ηi = 2^{i−1} η1, we have

Reward = Σ_{i=1}^{k} Qi ≥ (Σ_{i=1}^{k−1} 2^{i−1} η1 H̄) + Qk = (2^{k−1} − 1) η1 H̄ + Qk.    (4)

We now lower bound Qk. For i = 1, . . . , k let ti denote the round on which Qi is initialized to 0, with t1 ≡ 1, and define tk+1 ≡ T. By construction, Qi is the total reward of a gradient descent algorithm that is active on rounds ti through ti+1 inclusive, and that uses learning rate ηi (note that on round ti, this algorithm gets 0 reward and we initialize Qi to 0 on that round). Thus, by Lemma 2, we have that for any i,

Qi = (ηi/2)((g_{ti:ti+1})² − Σ_{s=ti}^{ti+1} gs²) ≥ −(ηi/2) H̄.

Applying this bound to epoch k, we have Qk ≥ −(1/2) ηk H̄ = −2^{k−2} η1 H̄. Substituting into (4) gives

Reward ≥ η1 H̄ (2^{k−1} − 1 − 2^{k−2}) = η1 H̄ (2^{k−2} − 1).    (5)

We now show that k ≥ |g1:T|/√(3H̄). At the end of round ti+1 − 1, we must have had Qi < ηi H̄ (otherwise epoch i + 1 would have begun earlier). Thus, again using Lemma 2, (ηi/2)((g_{ti:ti+1−1})² − H̄) ≤ ηi H̄, so |g_{ti:ti+1−1}| ≤ √(3H̄). Thus,

|g1:T| ≤ Σ_{i=1}^{k} |g_{ti:ti+1−1}| ≤ k √(3H̄).

Rearranging gives k ≥ |g1:T|/√(3H̄), and combining with Eq. (5) proves the lemma.

We can now apply Theorem 1 to the reward (given by Eq. (3)) of REWARD-DOUBLING-1D to show

Regret(˚x) ≤ bR√H̄ (log(4Rb/(η1√H̄)) − 1) + η1 H̄    (6)

for any ˚x ∈ [−R, R], where b = a⁻¹ = √3/log(2) < 2.5. When the feasible set is also fixed in advance, online gradient descent with a fixed learning rate obtains a regret bound of O(R√T). Suppose we use the estimate H̄ = T. By choosing η1 = 1/T, we guarantee constant regret against the origin, ˚x = 0 (equivalently, constant total loss). Further, for any feasible set of radius R, we still have worst-case regret of at most O(R√T log((1 + R)T)), which is only modestly worse than that of gradient descent with the optimal R known in advance.
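Algorithm 1 is short enough to sketch directly in code. The following is our own illustrative Python rendering, assuming (as in Lemma 3) a known upper bound H̄ on the sum of squared gradients; it is not the authors' implementation:

```python
def reward_doubling_1d(gs, eta1, h_bar):
    """Sketch of REWARD-DOUBLING-1D: double the learning rate and reset to the
    origin whenever the current epoch's reward Q reaches eta * h_bar.
    Assumes sum(g*g for g in gs) <= h_bar."""
    eta, q, x, total = eta1, 0.0, 0.0, 0.0
    for g in gs:
        r = x * g                 # reward this round
        total += r
        q += r
        if q < eta * h_bar:
            x = x + eta * g       # continue the current epoch
        else:
            eta, q = 2 * eta, 0.0  # start a new epoch with doubled rate ...
            x = eta * g            # ... restarting from the origin
    return total

steady = reward_doubling_1d([1.0] * 100, eta1=0.01, h_bar=100)
choppy = reward_doubling_1d([1.0, -1.0] * 50, eta1=0.01, h_bar=100)
```

On the steadily drifting sequence the doubling epochs let reward grow quickly, while on the alternating sequence the total loss stays small, consistent with the constant origin-regret guarantee.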
The need for an upper bound H̄ can be removed using a standard guess-and-doubling approach, at the cost of a constant factor increase in regret (see appendix for proof).

Theorem 4. Consider algorithm REWARD-DOUBLING-1D-GUESS, which behaves as follows. On each era i, the algorithm runs REWARD-DOUBLING-1D with an upper bound of H̄i = 2^{i−1}, and initial learning rate η1^i = ϵ 2^{−2i}. An era ends when H̄i is no longer an upper bound on the sum of squared gradients seen during that era. Letting c = √2/(√2 − 1), this algorithm has regret at most

Regret ≤ cR√(H + 1) (log((R/ϵ)(2H + 2)^{5/2}) − 1) + ϵ.

3.2 Extension to n dimensions

To extend our results to general online convex optimization, it is sufficient to run a separate copy of REWARD-DOUBLING-1D-GUESS for each coordinate, as is done in REWARD-DOUBLING (Algorithm 2). The key to the analysis of this algorithm is that overall regret is simply the sum of regret on n one-dimensional subproblems, which can be analyzed independently.

Theorem 5. Given a sequence of convex loss functions f1, f2, . . . , fT from Rn to R, REWARD-DOUBLING with ϵi = ϵ/n has regret bounded by

Regret(˚x) ≤ ϵ + c Σ_{i=1}^{n} |˚xi| √(Hi + 1) (log((n/ϵ)|˚xi|(2Hi + 2)^{5/2}) − 1)
          ≤ ϵ + c ∥˚x∥2 √(H + n) (log((n/ϵ)∥˚x∥2²(2H + 2)^{5/2}) − 1)

for c = √2/(√2 − 1), where Hi = Σ_{t=1}^{T} gt,i² and H = Σ_{t=1}^{T} ∥gt∥2².

Proof. Fix a comparator ˚x. For any coordinate i, define

Regreti = Σ_{t=1}^{T} ˚xi gt,i − Σ_{t=1}^{T} xt,i gt,i.

Observe that Σ_{i=1}^{n} Regreti = Σ_{t=1}^{T} ˚x · gt − Σ_{t=1}^{T} xt · gt = Regret(˚x). Furthermore, Regreti is simply the regret of REWARD-DOUBLING-1D-GUESS on the gradient sequence g1,i, g2,i, . . . , gT,i. Applying the bound of Theorem 4 to each Regreti term completes the proof of the first inequality. For the second inequality, let H⃗ be a vector whose ith component is √(Hi + 1), and let x⃗ ∈ Rn where x⃗i = |˚xi|. Using the Cauchy-Schwarz inequality, we have

Σ_{i=1}^{n} |˚xi| √(Hi + 1) = x⃗ · H⃗ ≤ ∥˚x∥2 ∥H⃗∥2 = ∥˚x∥2 √(H + n).
This, together with the fact that log(|˚xi|(2Hi + 2)^{5/2}) ≤ log(∥˚x∥2²(2H + 2)^{5/2}), suffices to prove the second inequality.

In some applications, n is not known in advance. In this case, we can set ϵi = ϵ/i² for the ith coordinate we encounter, and get the same bound up to constant factors.

4 An Epoch-Free Algorithm

In this section we analyze SMOOTH-REWARD-DOUBLING, a simple algorithm that achieves bounds comparable to those of Theorem 4, without guessing-and-doubling. We consider only the 1-d problem, as the technique of Theorem 5 can be applied to extend to n dimensions. Given a parameter η > 0, we achieve

Regret ≤ R√T (log(RT^{3/2}/η) − 1) + 1.76η,    (7)

for all T and R, which is better (by constant factors) than Theorem 4 when gt ∈ {−1, 1} (which implies T = H). The bound can be worse on problems where H < T. The idea of the algorithm is to maintain the invariant that our cumulative reward, as a function of g1:t and t, satisfies Reward ≥ N(g1:t, t), for some fixed function N. Because reward changes by gt xt on round t, it suffices to guarantee that for any g ∈ [−1, 1],

N(g1:t, t) + g xt+1 ≥ N(g1:t + g, t + 1)    (8)

where xt+1 is the point the algorithm plays on round t + 1, and we assume N(0, 1) = 0. This inequality is approximately satisfied (for small g) if we choose

xt+1 = ∂N(g1:t + g, t)/∂g ≈ (N(g1:t + g, t) − N(g1:t, t))/g ≈ (N(g1:t + g, t + 1) − N(g1:t, t))/g.

This suggests that if we want to maintain reward at least N(g1:t, t) = (1/t)(exp(|g1:t|/√t) − 1), we should set xt+1 ≈ sign(g1:t) t^{−3/2} exp(|g1:t|/√t). The following theorem (proved in the appendix) provides an inductive analysis of an algorithm of this form.

Theorem 6. Fix a sequence of reward functions ft(x) = gt x with gt ∈ [−1, 1], and let Gt = |g1:t|. We consider SMOOTH-REWARD-DOUBLING, which plays 0 on round 1 and whenever Gt = 0; otherwise, it plays

xt+1 = η sign(g1:t) B(Gt, t + 5)    (9)

with η > 0 a learning-rate parameter and

B(G, t) = (1/t^{3/2}) exp(G/√t).    (10)

Then, at the end of each round t, this algorithm has

Reward(t) ≥ η (1/(t + 5)) exp(Gt/√(t + 5)) − 1.76η.

Two main technical challenges arise in the proof: first, we prove a result like Eq. (8) for N(g1:t, t) = (1/t) exp(|g1:t|/√t). However, this lemma only holds for t ≥ 6 and when the sign of g1:t doesn't change. We account for this by showing that a small modification to N (costing only a constant over all rounds) suffices. By running this algorithm independently for each coordinate using an appropriate choice of η, one can obtain a guarantee similar to that of Theorem 5.

5 Lower Bounds

As with our previous results, it is sufficient to show a lower bound in one dimension, as it can then be replicated independently in each coordinate to obtain an n-dimensional bound. Note that our lower bound contains the factor log(|˚x|√T), which can be negative when ˚x is small relative to T, hence it is important to hold ˚x fixed and consider the behavior as T → ∞. Here we give only a proof sketch; see Appendix A for the full proof.

Theorem 7. Consider the problem of unconstrained online linear optimization in one dimension, and an online algorithm that guarantees origin-regret at most ϵ. Then, for any fixed comparator ˚x, and any integer T0, there exists a gradient sequence {gt} ∈ [−1, 1]^T of length T ≥ T0 for which the algorithm's regret satisfies

Regret(˚x) ≥ 0.336 |˚x| √(T log(|˚x|√T/ϵ)).

Proof. (Sketch) Assume without loss of generality that ˚x > 0. Let Q be the algorithm's reward when each gt is drawn independently uniformly from {−1, 1}. We have E[Q] = 0, and because the algorithm guarantees origin-regret at most ϵ, we have Q ≥ −ϵ with probability 1. Letting G = g1:T, it follows that for any threshold Z = Z(T),

0 = E[Q] = E[Q | G < Z] · Pr[G < Z] + E[Q | G ≥ Z] · Pr[G ≥ Z]
  ≥ −ϵ Pr[G < Z] + E[Q | G ≥ Z] · Pr[G ≥ Z]
  > −ϵ + E[Q | G ≥ Z] · Pr[G ≥ Z].

Equivalently, E[Q | G ≥ Z] < ϵ/Pr[G ≥ Z]. We choose Z(T) = √(kT), where k = ⌊log(R√T/ϵ)/log(p⁻¹)⌋.
Here R = |˚x| and p > 0 is a constant chosen using binomial distribution lower bounds so that Pr[G ≥ Z] ≥ p^k. This implies

E[Q | G ≥ Z] < ϵ p^{−k} = ϵ exp(k log p⁻¹) ≤ R√T.

This implies there exists a sequence with G ≥ Z and Q < R√T. On this sequence, regret is at least G˚x − Q ≥ R√(kT) − R√T = Ω(R√(kT)).

Theorem 8. Consider the problem of unconstrained online linear optimization in Rn, and consider an online algorithm that guarantees origin-regret at most ϵ. For any radius R, and any T0, there exists a gradient sequence {gt} ∈ ([−1, 1]^n)^T of length T ≥ T0, and a comparator ˚x with ∥˚x∥1 = R, for which the algorithm's regret satisfies

Regret(˚x) ≥ 0.336 Σ_{i=1}^{n} |˚xi| √(T log(|˚xi|√T/ϵ)).

Proof. For each coordinate i, Theorem 7 implies that there exists a T ≥ T0 and a sequence of gradients gt,i such that

Σ_{t=1}^{T} ˚xi gt,i − Σ_{t=1}^{T} xt,i gt,i ≥ 0.336 |˚xi| √(T log(|˚xi|√T/ϵ)).

(The proof of Theorem 7 makes it clear that we can use the same T for all i.) Summing this inequality across all n coordinates then gives the regret bound stated in the theorem.

The following theorem presents a stronger negative result for Follow-the-Regularized-Leader algorithms with a fixed regularizer: for any such algorithm that guarantees origin-regret at most ϵT after T rounds, worst-case regret with respect to any point outside [−ϵT, ϵT] grows linearly with T.

Theorem 9. Consider a Follow-The-Regularized-Leader algorithm that sets

xt = argmin_x (g1:t−1 x + ψT(x))

where ψT is a convex, non-negative function with ψT(0) = 0. Let ϵT be the maximum origin-regret incurred by the algorithm on a sequence of T gradients. Then, for any ˚x with |˚x| > ϵT, there exists a sequence of T gradients such that the algorithm's regret with respect to ˚x is at least ((T − 1)/2)(|˚x| − ϵT).

In fact, it is clear from the proof that the above result holds for any algorithm that selects xt+1 purely as a function of g1:t (in particular, with no dependence on t).
6 Future Work

This work leaves open many interesting questions. It should be possible to apply our techniques to problems that do have constrained feasible sets; for example, it is natural to consider the unconstrained experts problem on the positive orthant. While we believe this extension is straightforward, handling arbitrary non-axis-aligned constraints will be more difficult. Another possibility is to develop an algorithm with bounds in terms of H rather than T that doesn't use a guess-and-double approach.

References
[1] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In COLT, 2008.
[2] Amit Agarwal, Elad Hazan, Satyen Kale, and Robert E. Schapire. Algorithms for portfolio management based on the Newton method. In ICML, 2006.
[3] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006. ISBN 0521841089.
[4] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. In COLT, 2010.
[5] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In COLT, 2008.
[6] Elad Hazan and Satyen Kale. On stochastic and worst-case models for investing. In Advances in Neural Information Processing Systems 22, 2009.
[7] Robert A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks, 1987.
[8] Jyrki Kivinen and Manfred Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Journal of Information and Computation, 132, 1997.
[9] Todd K. Leen and Genevieve B. Orr. Optimal stochastic search and adaptive momentum. In NIPS, 1993.
[10] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In COLT, 2010.
[11] Barak Pearlmutter. Gradient descent: Second order momentum and saturating error. In NIPS, 1991.
[12] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
[13] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
Learning Partially Observable Models Using Temporally Abstract Decision Trees Erik Talvitie Department of Mathematics and Computer Science Franklin & Marshall College Lancaster, PA 17604 erik.talvitie@fandm.edu Abstract This paper introduces timeline trees, which are partial models of partially observable environments. Timeline trees are given some specific predictions to make and learn a decision tree over history. The main idea of timeline trees is to use temporally abstract features to identify and split on features of key events, spread arbitrarily far apart in the past (whereas previous decision-tree-based methods have been limited to a finite suffix of history). Experiments demonstrate that timeline trees can learn to make high quality predictions in complex, partially observable environments with high-dimensional observations (e.g. an arcade game). 1 Introduction Learning a model of a high-dimensional environment can pose a significant challenge, but the ability to make predictions about future events is key to good decision making. One common approach is to avoid learning a complete, monolithic model of the environment, and to instead focus on learning partial models that are only capable of making a restricted set of predictions (for instance, predictions about some particular aspect of the environment, or predictions about future rewards). Partial models can often be simpler to learn than a complete model. In some cases they can be combined to form complete, structured models, which can then be used for planning purposes (e.g. factored MDPs [1], collections of partial models [2]). In other cases, partial models can be directly useful for control (e.g. U-Tree [3], prediction profile models [4]). This paper introduces timeline trees which are partial models for partially observable environments. 
Timeline trees are focused on capturing a particular kind of partial observability; they assume that their predictions can be made by recalling a (finite) sequence of events in the past that may have occurred far apart from each other in time. While not all partially observable phenomena take this form, a good deal of everyday partial observability has this flavor. For instance, you may know that your keys are in the next room because you remember putting them there. Most of the experiences since that event are probably irrelevant to making predictions about the location of your keys. The main idea of timeline trees is to build a decision tree over history. As with similar approaches, the decision tree can split on features of observations in recent history. However, a timeline tree may also establish new timestamps in the past and is able to split on features of observations surrounding those events as well. For instance, there could be a timestamp representing the last time the agent saw its keys, and then features of the neighboring observations could identify the keys' location. In this way, timeline trees can make use of information arbitrarily spread out in history.

2 Partial Models

This paper will focus on discrete dynamical systems. Specifically, time proceeds in discrete steps t = 1, 2, 3, . . .. At every step t, the agent selects an action at from a finite set A and the environment (stochastically) emits an observation ot, taken from a finite set O. The history at time t is the sequence of actions and observations from the beginning of time, up through time t: ht ≝ a1 o1 a2 o2 . . . at ot. In the general partially observable case, the observation emitted at each step may depend upon the entire history (and the agent's action). So, an agent wishing to predict the next observation must model the conditional probability distribution Pr(Ot+1 | Ht, At+1).
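In the finite case, such a conditional distribution can be approximated from data by simple conditional counts. The toy sketch below (our illustration, not from the paper) estimates next-observation probabilities conditioned only on a length-k suffix of history; the class name and interface are ours:

```python
from collections import defaultdict

class SuffixModel:
    """Toy estimator of Pr(o_{t+1} | h_t, a_{t+1}) that conditions only on the
    last k (action, observation) pairs of history.  Illustrative only: a real
    model needs a richer notion of context than a fixed-length suffix."""

    def __init__(self, k):
        self.k = k
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, history, action, next_obs):
        # history is a list of (action, observation) pairs.
        ctx = (tuple(history[-self.k:]), action)
        self.counts[ctx][next_obs] += 1

    def predict(self, history, action, obs):
        ctx = (tuple(history[-self.k:]), action)
        c = self.counts[ctx]
        total = sum(c.values())
        return c[obs] / total if total else None

m = SuffixModel(k=1)
h = [('a', 'x')]
m.update(h, 'a', 'x')
m.update(h, 'a', 'x')
m.update(h, 'a', 'y')
```

Any dependency that reaches further back than the k-step suffix is invisible to this estimator, which is precisely the limitation of finite-memory approaches discussed below.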
If one is able to predict the next observation at any history and for any action (that is, if one has access to this conditional distribution), one can compute the probability of any future sequence of observations given any future sequence of actions and the history [5]. Such a model is called a complete model because in any situation, it is capable of making any prediction about the future. Examples in the partially observable setting include POMDPs [6, 7] and PSRs [5, 8]. A partial model is any model that does not represent this full conditional distribution. This paper will focus on partial models that make conditional predictions about abstract features of the next observation, though many of the ideas can be straightforwardly adapted to work with predictions of other forms. Formally, let ω and κ be many-to-one mappings over the set of observations O. The task of the partial model at time t will be to predict the value of ω(ot+1), conditioned on the value of κ(ot+1). So it represents the distribution Pr(ω(Ot+1) | Ht, At+1, κ(Ot+1)). For example, in the experiment in Section 5.3, observations are images and multiple partial models are learned, each predicting the color of a single pixel, conditioned on the colors of pixels above and to the left.

2.1 Related Work: Partial Models in Partially Observable Environments

McCallum’s U-Tree [3] learns a decision tree over history, where the leaves of the tree map to the expected discounted sum of rewards at the associated history (though the method could be adapted to make other predictions, as in [9]). McCallum used binary features of the form “Feature X takes value Y at time-step t − k,” where t is the current time-step. Thus the decision tree learns an abstraction both over observations (which could be high-dimensional in their own right) and over the sequence of observations (by using features from multiple time-steps).
However, because it can only consider a finite number of such features, U-Tree has a finite memory horizon; events that occur before some arbitrary cutoff in the past cannot be taken into account when making predictions. Timeline trees are an extension of U-Tree that allows the use of features of observations arbitrarily far back in the past, though they are not the first attempt to address this issue. Looping predictive suffix trees (LPSTs) [10] are prediction suffix trees [11] that allow nodes to loop back to their ancestors. Local agent state representations (LASR) [12] map histories to a real number, and then learn a direct mapping from that number to the target predictions. McCallum [13] and Mahmud [14] both developed incremental hill-climbing algorithms to learn finite state machines (FSMs), where each state is associated with predictions about future rewards, and the transitions depend on both the action taken and the observation received. Prediction profile models [4] are similar FSMs, but rather than hill-climbing, they are learned by pre-processing the data and then applying standard complete model learning methods (they were demonstrated using POMDPs and LPSTs). All of these approaches can, in principle, represent arbitrarily long-range dependencies in time. However, unlike U-Tree, they all treat observations as atomic, which limits their applicability to truly high-dimensional systems. Furthermore, despite their theoretical representational capacity, their learning algorithms have difficulty discovering long-range dependencies in practice. The learning algorithm for LPSTs first learns a full suffix tree, and then adds loops as appropriate. Thus, to capture very long-range dependencies, one must first build a very deep suffix tree.
McCallum reported that his FSM-learning method was often unable to detect long-range temporal dependencies (since this typically involves multiple elaborations of the FSM, none of which would individually seem valuable to the hill-climbing algorithm). Mahmud’s similar approach would likely suffer a similar limitation. The learning algorithms for LASR and prediction profile models both rely on estimating predictions at particular histories. Because estimates will only be accurate for histories that appear many times, these algorithms can only be effectively applied to data consisting of many short trajectories, which limits their ability to discover long-range dependencies in practice. It should be noted that prediction profile models have been combined with an additional preprocessing step that learns an abstraction before the prediction profile model is learned [15]. Because of this, and because their formulation most closely fits the setting of this paper, the experiments in Section 5 will directly compare against the performance of prediction profile models.

3 Timeline Trees

The goal of timeline trees is to combine the strengths of U-Tree with the ability to attend to events arbitrarily far apart in history (rather than being limited to a finite suffix). Unlike several of the above approaches, timeline trees are not arbitrarily recurrent (they do not contain loops except in a limited, implicit sense), which does restrict their representational capacity. However, in exchange they retain the straightforward decision tree training of U-Tree, which allows them to simultaneously learn an abstraction over both the history sequence and high-dimensional observations and which furthermore allows them to discover long-range temporal dependencies in practice (and not just in principle).

3.1 Timestamps

The decision tree built by U-Tree splits on features of observations at some temporal offset from the current timestep.
The key idea of timeline trees is to allow multiple timestamps in history and to allow splits on features of observations at temporal offsets relative to any of these timestamps. Timeline trees take a set F of binary features, where each feature f(ht, k) takes the history at time t, ht, and a timestep 0 < k ≤ t + 1¹ and returns 1 or 0. For example, if the observations are images, f could return 1 if a black pixel existed anywhere at step k − 1 but did not exist at step k. It is assumed that f(ht, k) makes use of no information after step k (though it may access timestep k or before). For a fixed vector τ of timestamps, the model is a standard decision tree, and only a small extension of U-Tree (which fixed the number of timestamps to 1: the current timestep). Each internal node in the tree is associated with a feature f, a timestamp index i, and a temporal offset δ (which may be negative) and has two children representing histories where the value of f(ht, τ[i] + δ) is 0 or 1, respectively. The leaves of the tree are associated with estimates of Pr(ω(Ot+1) | ht, at+1, κ(Ot+1)). To use the timeline tree to make a prediction, one simply follows a path from the root to a leaf in the tree, choosing the appropriate child at each node according to the feature value f(ht, τ[i] + δ). Timeline trees’ real strength lies in their ability to add new timestamps. They do this via a special type of feature. For every feature f ∈ F, there is an additional timestamp feature ξf. The feature ξf(ht, j, k) = 1 if there is some timestep m such that j < m < k where f(ht, m) = 1. More importantly, the greatest such m (that is, the time of the most recent occurrence of f), call it mf, is added as a timestamp to all nodes in the subtree where ξf = 1. When making a prediction for ht, one maintains a growing vector τ of timestamps (in order of least to most recent). Beginning at the root there is only one timestamp: τroot = ⟨t + 1⟩, where t is the current timestep.
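With the timestamp vector τ held fixed, prediction is ordinary decision-tree traversal. A minimal sketch, with illustrative names and a toy key-sighting feature (not the paper's implementation):

```python
class Internal:
    """Tests f(h_t, tau[i] + delta) and descends to children[0] or children[1]."""
    def __init__(self, f, i, delta, child0, child1):
        self.f, self.i, self.delta = f, i, delta
        self.children = (child0, child1)

class Leaf:
    """Holds the estimated Pr(omega(O_{t+1}) | ...) for histories mapping here."""
    def __init__(self, dist):
        self.dist = dist

def predict(node, h, tau):
    while isinstance(node, Internal):
        node = node.children[node.f(h, tau[node.i] + node.delta)]
    return node.dist

# Toy binary feature over a list of observations (1-indexed via k - 1):
def saw_key(h, k):
    return 1 if 1 <= k <= len(h) and "key" in h[k - 1] else 0

h = ["roomA", "roomA+key", "roomB"]
t = len(h)
tau = [-1, t + 1]   # tau[0] := -1 sentinel; tau[1] is the root timestamp t + 1
# The root asks: was the key visible at the current step (offset -1 from t + 1)?
tree = Internal(saw_key, 1, -1, Leaf({"key": 0.1}), Leaf({"key": 0.9}))
assert predict(tree, h, tau) == {"key": 0.1}   # no key visible at step 3
```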
As one travels from the root to a leaf, one may encounter a node associated with timestamp feature ξf. Such a node is also associated with an index i into the current timestamp vector. If ξf(ht, τ[i − 1], τ[i]) = 1, the path moves to the corresponding child and adds mf to τ (let τ[0] := −1). Nodes further down in the tree may refer to this new timestamp. As such, the tree is able to establish timestamps based on the occurrence of key events (the presence of some feature). Timestamp features are a form of temporal abstraction; they refer to an event in the past, but abstract away how long ago it was. They are limited, however. There are systems that would require an infinite timeline tree but that the approaches in Section 2.1 can capture easily (see Section 5.2). Nevertheless, they do capture a natural and intuitive form of partial observability, as can be seen by example.

3.2 An Example

As a simple illustrative example, consider an agent that must keep track of its key. The agent’s key can be in room A or B, or in the agent’s pocket (where it starts). The agent has three actions: move (which switches its location), stay (which does nothing), and pocket. The last action transfers the key between the agent’s pocket and the current room (in either direction) unless the key is in neither (in which case it does nothing). The agent can observe its location and whether the key is in the room. A diagram is shown in the left of Figure 1 (missing arrows are self-loops). On the right of Figure 1 an example timeline tree is shown that can predict whether the agent will see the key in the next timestep.

Figure 1: The Key World example.

¹ For simplicity’s sake, the notation f(ht, k) hides the fact that the features may also depend on at+1 and κ(ot+1). For this discussion, assume that k may equal t + 1 if the feature makes use of only these aspects of time t + 1. If k = t + 1 and the feature refers to other information, assume f(ht, k) = 0.
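The timestamp feature just described can be sketched as a backwards scan over history; everything below (the helper names, the toy Key World encoding) is illustrative rather than taken from the paper:

```python
def xi(f, h, j, k):
    """Timestamp feature xi_f(h, j, k): 1 iff f fired at some step j < m < k.
    Also returns m_f, the most recent such m, to be appended to tau."""
    for m in range(k - 1, j, -1):              # scan backwards: newest first
        if f(h, m):
            return 1, m
    return 0, None

def saw_key(h, m):
    return 1 <= m <= len(h) and "key" in h[m - 1]

h = ["roomA", "roomA+key", "roomB", "roomB"]   # key last visible at step 2
t = len(h)
tau = [-1, t + 1]                              # tau[0] := -1, tau[1] = t + 1
fired, m_f = xi(saw_key, h, tau[0], tau[1])
assert (fired, m_f) == (1, 2)
tau.insert(1, m_f)                             # tau stays ordered, oldest first
assert tau == [-1, 2, 5]
```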
At the root, there is only one timestamp: t + 1, where t is the current step. The root checks if the agent can currently see the key. If so, the agent will only see the key in the next step if it stays. Otherwise, the agent must remember where the key was last seen. The square-shaped node is meant to indicate a timestamp feature, which checks if the agent has ever seen the key before the only timestamp. If not, the key is in the agent’s pocket. If so, a new timestamp m is added that marks the last time the key was seen. If the agent put the key in its pocket after m, it must take the key out to see it. Otherwise, it must be in the other room.

4 Learning Timeline Trees

Timeline trees can be learned using standard decision tree induction algorithms (e.g. ID3 [16] and C4.5 [17]). The leaves of the tree contain the estimated predictions (counts of the occurrences of the various values of ω(o) associated with histories mapping to that leaf). The tree starts as just the root (not associated with a feature). Each phase of training expands a single leaf by associating it with a feature and adding the appropriate children under it. At each phase every candidate expansion (every leaf and every feature) is tried and the one that results in the highest information gain between the predictions of the original tree and the expanded tree is greedily selected. The main difference in timeline trees is that different features may be available in different leaf nodes (because different timestamps will be available). Specifically, for each leaf n, all features of the form f(·, τn[i] + k) are considered for all timestamp indices i ∈ {1, . . . , |τn|} and all integer offsets k in some finite range. Similarly, all timestamp features of the form ξf(·, τn[i − 1], τn[i]) are considered for all timestamp indices i. In the experiments below, candidate expansions also include all combinations of timestamp features and regular features (essentially two expansions at once).
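The greedy selection step can be sketched as a standard information-gain computation over the label counts reaching a leaf. This is a simplified illustration of the criterion, not the paper's implementation:

```python
import math
from collections import Counter

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values() if c)

def information_gain(labels, bits):
    """Entropy reduction from splitting a leaf's samples on a binary feature."""
    side = [Counter(), Counter()]
    for y, b in zip(labels, bits):
        side[b][y] += 1
    n = len(labels)
    child = sum(sum(s.values()) / n * entropy(s) for s in side)
    return entropy(Counter(labels)) - child

labels = ["key", "key", "no", "no"]
assert abs(information_gain(labels, [1, 1, 0, 0]) - 1.0) < 1e-9  # perfect split
assert information_gain(labels, [1, 0, 1, 0]) == 0.0             # useless split

# Compound (timestamp + regular) expansions have their gain penalised by beta:
beta = 0.5
penalised = beta * information_gain(labels, [1, 1, 0, 0])
```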
These compound features take the form of first splitting on a timestamp feature, and then splitting the resulting “1 child” with a regular feature. This allows the tree to notice that a timestamp is useful for the subsequent splits it allows, even if it is not inherently informative itself. For instance, in the Key World, knowing whether the agent has ever seen the key may not be very informative, but knowing that the pocket action was taken immediately after seeing the key is very informative. Note that compound features will tend to result in higher information gain than simple features. As a result, there will be a bias toward selecting compound features, which is not necessarily desirable. To combat this, the information gain of compound features was penalized by a factor of β. In the experiments below, β = 0.5. Also note that because the information gain measurement used to choose expansions is estimated from a finite number of samples, expanding the tree until information gain is zero for all candidates will typically result in overfitting. Thus, some form of early stopping is common. In this implementation expansions are only considered if they make a statistically significant change to the predictions (as measured by a likelihood ratio test). The statistical test requires a significance level, α, which controls the probability of detecting a spurious difference. Applying the test several times to the same data set compounds the danger of such an error, so α should be set quite low. In the experiments below, α = 10⁻¹⁰.

Figure 2: Experiment Domains: (a) Shooting Gallery, (b) Three Card Monte, (c) Snake.

5 Experiments

In this section, timeline trees will be evaluated in three problems to which prediction profile models have been previously applied. In each problem a set of features and a set of training trajectories are provided.
For various numbers of training trajectories, timeline trees are learned and their prediction accuracy is evaluated (as well as their usefulness for control). Results are averaged over 20 trials. Note that for prediction profile models a completely new model is learned for each batch of training trajectories. For timeline trees, the new data is simply added to the existing tree and new splits are made until the algorithm stops. This strategy is effective for timeline trees since the initial splits can often be made with relatively little data (this is not possible for prediction profile models). In addition to evaluating timeline trees, two variants will also be evaluated. One (labeled “Finite Suffix”) does not use any timestamp features at all. Thus, it is similar to U-Tree (splitting on features of a finite suffix of history). The other (labeled “No Timestamps”) includes timestamp features, but does not use them to create new timestamps. This variant is meant to evaluate whether any performance benefit is due to the form of the features or due to the addition of new timestamps.

5.1 Shooting Gallery

In this example, from Talvitie and Singh [4], the agent is in a shooting gallery (see Figure 2(a)). Its gun is aimed at a fixed position (marked by the “X”) and it must shoot a target that moves around the grid, bouncing off the edges and obstacles (an example trajectory is pictured). If the target is in the crosshairs in the step after the agent shoots, the agent gets a reward of 10. Otherwise it gets a reward of -5. Whenever the agent hits the target, the gallery resets (obstacles are placed randomly) and a special observation is emitted. The gallery may also reset on its own with a 0.01 chance. Clearly the agent must predict whether the target will be in the crosshairs in the next timestep, but the target’s movement is stochastic and partially observable.
At every step it either moves in its current direction with probability 0.7 or stays in place with probability 0.3. The agent must remember the direction of the target the last time it moved. This problem is also fairly high-dimensional. There are roughly 4,000,000 possible observations, and even more hidden states. Because of the large number of observations Talvitie and Singh [4] hand-crafted an observation abstraction and applied it to the training data before learning the prediction profile models. Their abstraction pays attention only to the position of the target and the configuration of the obstacles in its immediate neighborhood. By contrast, timeline trees learn an abstraction over both observations and the history sequence. Experimental Setup: The prediction profile models were trained on trajectories of length 4, generated by the uniform random policy. Though short trajectories are necessary for training prediction profile models, the timeline trees tended to overfit to the short trajectories. In short trajectories, a feature like “Has the target ever been in the crosshairs?” might seem spuriously meaningful. During testing, which takes place on one long trajectory, this feature would be much less informative. Therefore, the tree models were trained on fewer, longer trajectories (of length 40). To train the tree models, a binary feature was provided for each color (target, obstacle, background, or reset) for each pixel in the image. There was also a feature for each action. The maximum temporal offset from a timestamp was set to 2. The learned models are evaluated by using their predictions as features for a policy gradient algorithm, OLGARB [18]. Good predictions about the color under the cross-hairs should lead to a good policy. For the details of how the predictions are encoded for OLGARB, see [4]. To evaluate the learned models, OLGARB is run for 1,000,000 steps.
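The hidden quantity in this domain, the target's direction the last time it actually moved, is a most-recent-occurrence query of exactly the kind timestamp features support. A hypothetical sketch (names and the position encoding are illustrative):

```python
def last_move_direction(positions):
    """positions[k] is the target's (x, y) at step k (0-indexed here).
    Scan backwards for the most recent step at which the target moved."""
    for m in range(len(positions) - 1, 0, -1):
        if positions[m] != positions[m - 1]:       # the target moved at step m
            return (positions[m][0] - positions[m - 1][0],
                    positions[m][1] - positions[m - 1][1])
    return None                                     # target never moved

track = [(2, 2), (3, 2), (3, 2), (3, 2)]   # moved right once, then stalled
assert last_move_direction(track) == (1, 0)
```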
The average reward obtained and the root mean squared error (RMSE) of the probabilities provided by the model are reported (at each step, the model’s probability that the target will be in the crosshairs is compared to the true probability).

Figure 3: Shooting gallery results (control performance and prediction performance).

Results: Figure 3 shows the results. The line marked “Prediction Profile” shows the best results reported by Talvitie and Singh [4]; the other curves show the performance of timeline trees and the comparison variants. In the control performance graph, the dashed line marked “Optimal” shows the average performance of the optimal policy. The dashed line marked “True” shows the average performance of OLGARB when given the true predictions as features. This is the best performance a learned model could hope for. Both the timeline trees and the prediction profile models are able to learn to make good predictions, but timeline trees do so with less data. Remember that timeline trees are learning from raw images whereas the prediction profile models have been provided a hand-crafted abstraction. The tree models without timestamps are only able to make good predictions in histories where the target has recently moved, which limits their performance. The “No Timestamp” variant is outperformed by the “Finite Suffix” model, which indicates that, despite the longer training trajectories, it may still be overfitting.

5.2 Three Card Monte

The next example, also from Talvitie and Singh [4], is one for which the decision tree approach would not be appropriate.
While illustrating the limitations of timeline trees in comparison to more expressive methods, it also demonstrates that they can represent useful knowledge that the simpler tree-based methods cannot. The problem is based on the simple game “Three Card Monte”. There are three face down cards on the table, one of which is the ace. A dealer repeatedly chooses two cards and swaps their positions. Eventually the dealer asks the agent to flip over the ace. If the agent succeeds, it gets a reward of 1; if it fails it gets a reward of -1. For a detailed specification, see [4]. Note that to do well in this game, the agent only needs to make the prediction, “If I flip card 1, will it be the ace?” (and the corresponding predictions for the other 2 cards) at any history. It does not, for instance, need to predict which cards will be swapped in the next time step. A complete model would attempt to make this prediction, which would mean not only modeling the movement of the cards, but also the decision making process of the dealer! The dealer in these experiments chooses the swap it has chosen least frequently so far with probability 0.5. With probability 0.4 it chooses uniformly randomly between the other two swaps. With probability 0.1, it asks for a guess. Since modeling the dealer’s behavior requires counting the number of times each swap has been selected, a complete POMDP model of this system would require infinitely many states. Further note that the entire sequence of swap observations since the last time the ace’s position was observed is important for predicting the ace’s location. Since timeline trees’ primary strength is ignoring sections of history to focus on a few key events, they would not be expected to model this problem well. Prediction profile models, on the other hand, are able to track the ace’s location with a 3-state machine (pictured in Figure 2(b)).
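The 3-state tracker of Figure 2(b) amounts to pushing each observed swap through the ace's current position. A minimal sketch (card positions encoded as 0, 1, 2 for illustration):

```python
def track_ace(start, swaps):
    """Follow the ace through a sequence of observed swaps (a, b).
    The position is the only state needed -- a 3-state machine -- but it
    depends on every swap since the ace was last seen, which is why a
    finite timeline tree cannot represent this tracking."""
    pos = start
    for a, b in swaps:
        if pos == a:
            pos = b
        elif pos == b:
            pos = a
    return pos

assert track_ace(0, [(0, 1), (1, 2), (0, 2)]) == 0   # 0 -> 1 -> 2 -> 0
assert track_ace(1, [(0, 2)]) == 1                   # untouched by the swap
```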
Experimental Setup: Training and evaluation were the same as in the Shooting Gallery (above) except the prediction profile models were given length 10 trajectories and the tree-based models were given length 100 trajectories. The features provided to the trees were encodings of the atomic actions and observations. There was a binary feature indicating each observation, action, and each action-observation pair. The maximum time offset from a timestamp was 10 steps (both positive and negative). The specification of the prediction profile models implicitly encodes the fact that the agent’s action at+1 is important to the predictions (i.e. which card it flips). For fairness, the tree models were also seeded with these features (they were split on the agent’s action before training).

Figure 4: Results in Three Card Monte.

Results: Figure 4(a) presents the control performance results and Figure 4(b) shows the prediction error results. The prediction profile models are able to perfectly track the ace’s location after 100,000 training steps. As expected, the tree methods perform poorly (negative average reward indicates more wrong guesses than right), though timeline trees have marginally better control performance.
Part of the difficulty is that randomly generated training data is quite different from what the agent will encounter during testing (the random agent flips cards over frequently, while the learning agent eventually flips a card only when prompted to). Figure 4(c) shows the control performance of the tree models trained with expert-generated data instead (generated by the optimal policy). The dashed lines show the results for random training data for comparison. Expert-trained timeline trees are eventually good enough to allow the agent to achieve positive average reward, though they do require a great deal of data to do so (note the changes to the axes). Though the expert training improves the performance of the limited variants as well, neither achieves positive average reward. So, though their representational limitations do prevent all three tree-based methods from performing well in this problem, timeline trees’ ability to create new timestamps seems to allow them to make some meaningful (and useful) predictions that the others cannot.

5.3 Snake

The final example is an arcade game called “Snake” from Talvitie [15] (see Figure 2(c)). In this problem, multiple partial models will be learned and combined to form a complete, structured model which can be used for planning. The agent controls the head of a snake. The snake’s body trails behind, and the tail does exactly what the head did, at a delay. There are 10 food pellets on the screen and the goal is to eat them in a particular order (indicated by the shades of grey in Figure 2(c)). If the snake ever runs into the wrong pellet, its own body, or the edge of the screen, the game is over and the agent receives -0.01 reward. Whenever the snake eats a pellet, the agent gets 1 reward and the tail stays still for 5 timesteps, making the snake’s body longer. In addition, there is a 0.2 chance each step that the tail will not move, so the snake is always growing, imposing some time-pressure.
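The tail's delayed copying of the head can be sketched as a queue of pending moves; this simplification ignores the tail-stalling and growth dynamics described above and uses illustrative names throughout:

```python
from collections import deque

def step(head, tail, pending, move):
    """Advance the head by `move`; the tail replays the head's old moves at
    a delay, so predicting the tail requires remembering what the head did
    when it occupied the tail's current position."""
    head = (head[0] + move[0], head[1] + move[1])
    pending.append(move)            # the tail will repeat this move later...
    tail_move = pending.popleft()   # ...once the delay elapses
    tail = (tail[0] + tail_move[0], tail[1] + tail_move[1])
    return head, tail, pending

head, tail = (3, 0), (0, 0)
pending = deque([(1, 0), (1, 0), (1, 0)])   # moves the head already made
head, tail, pending = step(head, tail, pending, (0, 1))  # head turns down
assert head == (3, 1) and tail == (1, 0)    # tail still replaying old moves
```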
This version of Snake has two sources of partial observability. The tail shadows the head and, in addition, the pellet the snake must eat next is invisible. Initially, all 10 pellets are shown, but when the first pellet is eaten, the next one disappears, and only reappears if the snake’s head is adjacent to it. When that pellet is eaten, the next pellet disappears, and so on. To do well, the agent must remember the location of the next pellet. The observations are 20 × 20 images. There are over 10³⁰ distinct possible observations and even more hidden states. Experimental Setup: For every location (x, y) and every color c, a timeline tree model was used to predict whether the pixel at (x, y) would next be color c. These models jointly predict the entire next observation. The models were ordered, and each model could condition on outcomes predicted by the previous models (so κ gave the portion of the image predicted by models earlier in the order). In this case, the models were ordered by color, then by location (column-major order). The color order was head, tail, body, the 10 food pellet colors (in increasing order), and finally background. For each color, training data was pooled across all locations (rather than learn a separate model for each coordinate). However, features were provided that gave the position of the model (there was a binary feature for each column and each row), so the tree could choose to attend to position information if necessary. There was also a binary feature for each action and several pixel-based features. There was a feature for each color of each pixel in a 5 × 5 square around the pixel being predicted. There were also features for the same square of pixels indicating whether each pixel had just changed from one color to another color (for all pairs of colors). Finally, there were features indicating whether a particular color or particular color change existed anywhere in the image.
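The ordering scheme, in which each per-pixel model conditions (via κ) on the pixels already predicted by earlier models, can be sketched as follows; the toy "models" are hypothetical stand-ins for learned timeline trees:

```python
def predict_image(models, history):
    """Predict the next image pixel by pixel, in a fixed model order; each
    model's context is the part of the image already predicted."""
    predicted = {}
    for (x, y), model in models:
        predicted[(x, y)] = model(history, dict(predicted))
    return predicted

# Two toy "models": the first ignores its context, the second copies the
# freshly predicted color of its left neighbour.
models = [((0, 0), lambda h, ctx: "head"),
          ((1, 0), lambda h, ctx: ctx[(0, 0)])]
assert predict_image(models, history=[]) == {(0, 0): "head", (1, 0): "head"}
```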
Prediction profile models were applied to this problem by Talvitie [15]. Similarly, multiple prediction profile models were learned, each responsible for a particular prediction in particular situations (called “histories of interest”). For instance, one model type predicted whether a particular pixel would contain the head in the next timestep, but only when the head was in the immediate neighborhood of that pixel. Before training the prediction profile models, an abstraction learning process was applied to the data that mapped each action-observation sequence between histories of interest to a single abstract observation. Thus, even though the raw data might contain very long trajectories, the abstract data consisted of only short trajectories. The reader is referred to Talvitie [15] for a detailed description of this approach. The main thing to note is that the hand-crafted structure indicates to each model which key events it should attend to, and which stretches of history can be ignored. By contrast, timeline trees learn this information. As in [15], the training data was generated by running UCT [19] (a sample-based planning algorithm) on the true model (with a 0.25 chance of taking a random action). Each training trajectory is a full game (typically a few hundred steps long). The learned set of partial models was evaluated collectively. The joint model was given to UCT and its average planning performance over 100 test games was compared to that of UCT with a perfect model. The probability given by the model for the observation at each step was compared to the true probability and the RMSE has been reported.

Figure 5: Results in Snake (control performance and prediction performance).

Results: The results are shown in Figure 5. Despite the hand-crafted structure provided to the prediction profile models, timeline trees learn higher quality models with less data. In fact, UCT appears to perform better using the timeline tree model than the true model (marked “True”). This is due to a coincidental interaction between the model and UCT. The learned model mistakenly predicts that the snake may not die when it moves off the edge of the screen (a rare event in the training data). This emboldens UCT to consider staying near the edges, which can be necessary to escape a tight spot. In terms of control performance, the “No Timestamps” variant performs nearly identically to the full timeline tree. This is because they are equally good at tracking the invisible food pellet. The model checks if each pixel has ever contained a food pellet and if it has ever contained the head. If “yes” and “no,” respectively, then it must contain food. This can be expressed without creating new timestamps. However, the “No Timestamps” model cannot fully represent a model of the tail’s movement (which requires remembering what the head did when it was in the tail’s position). The timeline tree incurs substantially less prediction error, indicating that it is able to model the tail more accurately.

6 Conclusions

In these experiments, timeline trees learned to capture long-range dependencies in complex, partially observable systems with high-dimensional observations. The assumption that the predictions of interest depend on only a few key events in the past is limiting in the sense that there are simple partial models that timeline trees cannot easily capture (e.g.
Three Card Monte), but it does reflect a broad, natural class of partially observable phenomena (the examples here, for instance, were not designed with timeline trees in mind). In problems that do match timeline trees’ inductive biases, they have been shown to outperform the more expressive prediction profile models. There are many possible directions in which to consider extending timeline trees. More sophisticated decision tree induction methods could help with sample complexity and overfitting. Regression tree methods could extend timeline trees into environments with continuous dimensions. The timestamp features used here are only one of many possible types of temporally abstract features that could be devised. Of particular interest is whether the ideas here can be combined with approaches described in Section 2.1 in order to increase expressive power, while retaining the benefits of timeline trees.
| 2012 | 92 |
| 4,812 |
Emergence of Object-Selective Features in Unsupervised Feature Learning Adam Coates, Andrej Karpathy, Andrew Y. Ng Computer Science Department Stanford University Stanford, CA 94305 {acoates,karpathy,ang}@cs.stanford.edu Abstract Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images. Much progress has been made in this direction, but in most cases it is still standard to use a large amount of labeled data in order to construct detectors sensitive to object classes or other complex patterns in the data. In this paper, we aim to test the hypothesis that unsupervised feature learning methods, provided with only unlabeled data, can learn high-level, invariant features that are sensitive to commonly-occurring objects. Though a handful of prior results suggest that this is possible when each object class accounts for a large fraction of the data (as in many labeled datasets), it is unclear whether something similar can be accomplished when dealing with completely unlabeled data. A major obstacle to this test, however, is scale: we cannot expect to succeed with small datasets or with small numbers of learned features. Here, we propose a large-scale feature learning system that enables us to carry out this experiment, learning 150,000 features from tens of millions of unlabeled images. Based on two scalable clustering algorithms (K-means and agglomerative clustering), we find that our simple system can discover features sensitive to a commonly occurring object class (human faces) and can also combine these into detectors invariant to significant global distortions like large translations and scale. 1 Introduction Many algorithms are now available to learn hierarchical features from unlabeled image data. 
There is some evidence that these algorithms are able to learn useful high-level features without labels, yet in practice it is still common to train such features from labeled datasets (but ignoring the labels), and to ultimately use a supervised learning algorithm to learn to detect more complex patterns that the unsupervised learning algorithm is unable to find on its own. Thus, an interesting open question is whether unsupervised feature learning algorithms are able to construct features, without the benefit of supervision, that can identify high-level concepts like frequently-occurring object classes. It is already known that this can be achieved when the dataset is sufficiently restricted that object classes are clearly defined (typically closely cropped images) and occur very frequently [13, 21, 22]. In this work we aim to test whether unsupervised learning algorithms can achieve a similar result without any supervision at all. The setting we consider is a challenging one. We have harvested a dataset of 1.4 million image thumbnails from YouTube and extracted roughly 57 million 32-by-32 pixel patches at random locations and scales. These patches are very different from those found in labeled datasets like CIFAR10 [9]. The overwhelming majority of patches in our dataset appear to be random clutter. In the cases where such a patch contains an identifiable object, it may well be scaled, arbitrarily cropped, or uncentered. As a result, it is very unclear where an “object class” begins or ends in this type of patch dataset, and less clear that a completely unsupervised learning algorithm could manage to create “object-selective” features able to distinguish an object from the wide variety of clutter without some other type of supervision. In order to have some hope of success, we can identify several key properties that our learning algorithm should likely have.
First, since identifiable objects show up very rarely, it is clear that we are obliged to train from extremely large datasets. We have no way of controlling how often a particular object shows up and thus enough data must be used to ensure that an object class is seen many times—often enough that it cannot be disregarded as random clutter. Second, we are also likely to need a very large number of features. Training too few features will cause us to “under-fit” the distribution, forcing the learning algorithm to ignore rare events like objects. Finally, as is already common in feature learning work, we should aim to build features that incorporate invariance so that features respond not just to a specific pattern (e.g., an object at a single location and scale), but to a range of patterns that collectively belong to the same object class (e.g., the same object seen at many locations and scales). Unfortunately, these desiderata are difficult to achieve at once: current methods for building invariant hierarchies of features are difficult to scale up to train many thousands of features from our 57 million patch dataset on our cluster of 30 machines. In this paper, we will propose a highly scalable combination of clustering algorithms for learning selective and invariant features that are capable of tackling this size of problem. Surprisingly, we find that despite the simplicity of these algorithms we are nevertheless able to discover high-level features sensitive to the most commonly occurring object class present in our dataset: human faces. In fact, we find that these features are better face detectors than a linear filter trained from labeled data, achieving up to 86% AUC compared to 77% on labeled validation data. Thus, our results emphasize that not only can unsupervised learning algorithms discover object-selective features with no labeled data, but that such features can potentially perform better than basic supervised detectors due to their deep architecture. 
Though our approach is based on fast clustering algorithms (K-means and agglomerative clustering), its basic behavior is essentially similar to existing methods for building invariant feature hierarchies, suggesting that other popular feature learning methods currently available may also be able to achieve such results if run at large enough scale. Indeed, recent work with a more sophisticated (but vastly more expensive) feature-learning algorithm appears to achieve similar results [11] when presented with full-frame images. We will begin with a description of our algorithms for learning selective and invariant features, and explain their relationship to existing systems. We will then move on to presenting our experimental results. Related results and methods to our own will be reviewed briefly before concluding. 2 Algorithm Our system is built on two separate learning modules: (i) an algorithm to learn selective features (linear filters that respond to a specific input pattern), and (ii) an algorithm to combine the selective features into invariant features (that respond to a spectrum of gradually changing patterns). We will refer to these features as “simple cells” and “complex cells” respectively, in analogy to previous work and to biological cells with (very loosely) related response properties. Following other popular systems [14, 12, 6, 5] we will then use these two algorithms to build alternating layers of simple cell and complex cell features. 2.1 Learning Selective Features (Simple Cells) The first module in our learning system trains a bank of linear filters to represent our selective “simple cell” features. For this purpose we use the K-means-like method used by [2], which has previously been used for large-scale feature learning. The algorithm is given a set of input vectors x(i) ∈ℜn, i = 1, . . . , m. These vectors are preprocessed by removing the mean and normalizing each example, then performing PCA whitening. 
We then learn a dictionary D ∈ ℜ^(n×d) of linear filters as in [2] by alternating optimization over filters D and “cluster assignments” C: minimize over D and C the objective Σ_i ||DC(i) − x(i)||₂², subject to ||D(j)||₂ = 1, ∀j, and ||C(i)||₀ ≤ 1, ∀i. Here the constraint ||C(i)||₀ ≤ 1 means that the vectors C(i), i = 1, . . . , m are allowed to contain only a single non-zero entry, but the non-zero value is otherwise unconstrained. Given the linear filters D, we then define the responses of the learned simple cell features as s(i) = g(a(i)), where a(i) = D⊤x(i) and g(·) is a nonlinear activation function. In our experiments we will typically use g(a) = |a| for the first layer of simple cells, and g(a) = a for the second.¹ 2.2 Learning Invariant Features (Complex Cells) To construct invariant complex cell features, a common approach is to create “pooling units” that combine the responses of lower-level simple cells. In this work, we use max-pooling units [14, 13]. Specifically, given a vector of simple cell responses s(i), we will train complex cell features whose responses are given by: c(i)_j = max_{k∈G_j} s(i)_k, where G_j is a set that specifies which simple cells the j’th complex cell should pool over. Thus, the complex cell c_j is an invariant feature that responds significantly to any of the patterns represented by simple cells in its group. Each group G_j should specify a set of simple cells that are, in some sense, similar to one another. In convolutional neural networks [12], for instance, each group is hard-coded to include translated copies of the same filter, resulting in complex cell responses c_j that are invariant to small translations. Some algorithms [6, 3] fix the groups G_j ahead of time, then optimize the simple cell filters D so that the simple cells in each group share a particular form of statistical dependence.
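The alternating minimization above is essentially a "gain-shape" K-means. The following is a minimal numpy sketch under the stated constraints (unit-norm filters, at most one unconstrained nonzero per code vector); the function name is ours, and the PCA whitening step is omitted for brevity:

```python
import numpy as np

def gain_shape_kmeans(X, d, iters=10, seed=0):
    """Alternating minimization of ||DC - X||^2 with unit-norm filter
    columns and at most one (unconstrained) nonzero per code vector,
    matching the objective in the text. X: (n, m), one example per column."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = rng.standard_normal((n, d))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(iters):
        A = D.T @ X                        # (d, m) filter responses
        k = np.argmax(np.abs(A), axis=0)   # best-matching filter per example
        C = np.zeros((d, m))               # one-hot-sparse codes
        C[k, np.arange(m)] = A[k, np.arange(m)]
        D_new = X @ C.T                    # closed-form update for D ...
        norms = np.linalg.norm(D_new, axis=0)
        nz = norms > 1e-10                 # ... keeping unused filters as-is
        D[:, nz] = D_new[:, nz] / norms[nz]
    return D

# Toy usage: 64-dimensional "patches" preprocessed roughly as in the text
# (mean removal and per-example normalization; whitening omitted here).
X = np.random.default_rng(1).standard_normal((64, 1000))
X -= X.mean(axis=0)
X /= np.linalg.norm(X, axis=0)
D = gain_shape_kmeans(X, d=16)
```

Because each example's code vector has a single nonzero, the filter update decouples across columns: each filter is the (renormalized) sum of the examples assigned to it, weighted by their responses.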
In our system, we will use linear correlation of simple cell responses as our similarity metric, E[a_k a_l], and construct groups G_j that combine similar features according to this metric. Computing the similarity directly would normally require us to estimate the correlations from data, but since the inputs x(i) are whitened we can instead compute the similarity directly from the filter weights: E[a_k a_l] = E[D(k)⊤x(i)x(i)⊤D(l)] = D(k)⊤D(l). For convenience in the following, we will actually use the dissimilarity between features, defined as d(k, l) = ||D(k) − D(l)||₂ = √(2 − 2E[a_k a_l]). To construct the groups G, we will use a version of single-link agglomerative clustering to combine sets of features that have low dissimilarity according to d(k, l).² To construct a single group G₀ we begin by choosing a random simple cell filter, say D(k), as the first member. We then search for candidate cells to be added to the group by computing d(k, l) for each simple cell filter D(l) and add D(l) to the group if d(k, l) is less than some limit τ. The algorithm then continues to expand G₀ by adding any additional simple cells that are closer than τ to any one of the simple cells already in the group. This procedure continues until there are no more cells to be added, or until the diameter of the group (the dissimilarity between the two furthest cells in the group) reaches a limit ∆.³ This procedure can be executed, quite rapidly, in parallel for a large number of randomly chosen simple cells to act as the “seed” cell, thus allowing us to train many complex cells at once. Compared to the simple cell learning procedure, the computational cost is extremely small even for our rudimentary implementation. In practice, we often generate many groups (e.g., several thousand) and then keep only a random subset of the largest groups. This ensures that we do not end up with many groups that pool over very few simple cells (and hence yield complex cells c_j that are not especially invariant).
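Because the inputs are whitened, the whole group-growing procedure can be sketched from the filter weights alone. A minimal illustration (helper names are ours, not the paper's):

```python
import numpy as np

def grow_group(D, seed_idx, tau=0.3, delta=1.5):
    """Grow one pooling group by single-link agglomeration, as described
    above: repeatedly add any filter within tau of a current member, and
    stop when no candidates remain or the group diameter would exceed
    delta. D has unit-norm filter columns; for whitened inputs,
    ||D(k) - D(l)|| is exactly the dissimilarity d(k, l) from the text."""
    dist = np.linalg.norm(D[:, :, None] - D[:, None, :], axis=0)  # all pairs
    group, frontier = [seed_idx], [seed_idx]
    while frontier:
        added = []
        for k in frontier:
            for l in np.where(dist[k] < tau)[0]:
                # diameter check: the candidate must stay within delta
                # of every cell already in the group
                if l not in group and dist[l, group].max() < delta:
                    group.append(int(l))
                    added.append(int(l))
        frontier = added
    return sorted(group)

def complex_cell_response(s, group):
    """Max-pooling complex cell over the simple cells in one group."""
    return max(s[k] for k in group)

# Toy usage: filters 0 and 1 are nearly identical; filter 2 is orthogonal.
D = np.array([[1.0, np.cos(0.2), 0.0],
              [0.0, np.sin(0.2), 1.0]])
assert grow_group(D, 0) == [0, 1]
assert complex_cell_response([0.5, 0.9, 0.1], [0, 1]) == 0.9
```

Many such groups can be grown independently from random seeds, which is what makes the step cheap and easy to parallelize.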
2.3 Algorithm Behavior Though it seems plausible that pooling simple cells with similar-looking filters according to d(k, l) as above should give us some form of invariant feature, it may not yet be clear why this form of invariance is desirable. To explain, we will consider a simple “toy” data distribution where the behavior of these algorithms is more clear. Specifically, we will generate three heavy-tailed random variables X, Y, Z according to: σ1, σ2 ∼ L(0, λ); e1, e2, e3 ∼ N(0, 1); X = e1σ1, Y = e2σ1, Z = e3σ2. Here, σ1, σ2 are scale parameters sampled independently from a Laplace distribution, and e1, e2, e3 are sampled independently from a unit Gaussian. The result is that Z is independent of both X and Y, but X and Y are not independent due to their shared scale parameter σ1 [6]. An isocontour of the density of this distribution is shown in Figure 1a. Other popular algorithms [6, 5, 3] for learning complex-cell features are designed to identify X and Y as features to be pooled together due to the correlation in their energies (scales). [Footnote 1: This allows us to train roughly half as many simple cell features for the first layer. Footnote 2: Since the first layer uses g(a) = |a|, we actually use d(k, l) = min{||D(k) − D(l)||₂, ||D(k) + D(l)||₂} to account for −D(l) and +D(l) being essentially the same feature. Footnote 3: We use τ = 0.3 for the first layer of complex cells and τ = 1.0 for the second layer. These were chosen by examining the typical distance between a filter D(k) and its nearest neighbor. We use ∆ = 1.5 > √2 so that a complex cell group may include orthogonal filters but cannot grow without limit.]
One empirical motivation for this kind of invariance comes from natural images: if we have three simple-cell filter responses a1 = D(1)⊤x, a2 = D(2)⊤x, a3 = D(3)⊤x where D(1) and D(2) are Gabor filters in quadrature phase, but D(3) is a Gabor filter at a different orientation, then the responses a1, a2, a3 will tend to have a distribution very similar to the model of X, Y, Z above [7]. By pooling together the responses of a1 and a2, a complex cell is able to detect an edge of fixed orientation invariant to small translations. This model also makes sense for higher-level invariances where X and Y do not merely represent responses of linear filters on image patches but feature responses in a deep network. Indeed, the X–Y plane in Figure 1a is referred to as an “invariant subspace” [8]. Our combination of simple cell and complex cell learning algorithms above tends to learn this same type of invariance. After whitening and normalization, the data points X, Y, Z drawn from the distribution above will lie (roughly) on a sphere. The density of these data points is pictured in Figure 1b, where it can be seen that the highest density areas are in a “belt” in the X–Y plane and at the poles along the Z axis with a low-density region in between. Application of our K-means clustering method to this data results in centroids shown as ∗ marks in Figure 1b. From this picture it is clear what a subsequent application of our single-link clustering algorithm will do: it will try to string together the centroids around the “belt” that forms the invariant subspace and avoid connecting them to the (distant) centroids at the poles. Max-pooling over the responses of these filters will result in a complex cell that responds consistently to points in the X–Y plane, but not in the Z direction; that is, we end up with an invariant feature detector very similar to those constructed by existing methods.
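The shared-scale construction above is easy to verify numerically: the raw responses are uncorrelated in both pairs, but the energies of X and Y are correlated while those of X and Z are not, which is exactly the dependence that energy-based pooling methods exploit. A small sketch (the choice λ = 1 is arbitrary):

```python
import numpy as np

# Numerical check of the toy model above: X and Y share the scale sigma1,
# so their energies are correlated even though the raw responses are not;
# Z is independent of both.
rng = np.random.default_rng(0)
m = 200_000
sigma1 = rng.laplace(0.0, 1.0, m)        # shared scale for X and Y
sigma2 = rng.laplace(0.0, 1.0, m)        # independent scale for Z
e1, e2, e3 = rng.standard_normal((3, m))
X, Y, Z = e1 * sigma1, e2 * sigma1, e3 * sigma2

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Raw responses look uncorrelated in both pairs...
assert abs(corr(X, Y)) < 0.05 and abs(corr(X, Z)) < 0.05
# ...but energy correlation singles out the pair in the invariant subspace.
assert corr(np.abs(X), np.abs(Y)) > 0.3 > abs(corr(np.abs(X), np.abs(Z)))
```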
Figure 1c depicts this result, along with visualizations of the hypothetical Gabor filters D(1), D(2), D(3) described above that might correspond to the learned centroids. Figure 1: (a) An isocontour of a sparse probability distribution over variables X, Y, and Z. (See text for details.) (b) A visualization of the spherical density obtained from the distribution in (a) after normalization. Red areas are high density and dark blue areas are low density. Centroids learned by K-means from this data are shown on the surface of the sphere as ∗ marks. (c) A pooling unit identified by applying single-link clustering to the centroids (black links join pooled filters). (See text.) 2.4 Feature Hierarchy Now that we have defined our simple and complex cell learning algorithms, we can use them to train alternating layers of selective and invariant features. We will train 4 layers total, 2 of each type. The architecture we use is pictured in Figure 2a. Figure 2: (a) Cross-section of network architecture used for experiments. Full layer sizes are shown at right. (b) Randomly selected 128-by-96 images from our dataset. Our first layer of simple cell features is locally connected to 16 non-overlapping 8-by-8 pixel patches within the 32-by-32 pixel image. These features are trained by building a dataset of 8-by-8 patches and passing them to our simple cell learning procedure to train 6400 first-layer filters D ∈ ℜ^(64×6400). We apply our complex cell learning procedure to this bank of filters to find 128 pooling groups G1, G2, . . . , G128. Using these results, we can extract our simple cell and complex cell features from each 8-by-8 pixel subpatch of the 32-by-32 image. Specifically, the linear filters D are used to extract the first layer simple cell responses s(p)_i = g(D(i)⊤x(p)), where x(p), p = 1, . . . , 16 are the 16 subpatches of the 32-by-32 image. We then compute the complex cell feature responses c(p)_j = max_{k∈G_j} s(p)_k for each patch.
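The per-patch extraction just described can be sketched as follows (shapes follow the text, but the toy sizes, the function name, and the assumption that D, the pooling groups, and any whitening are given are ours):

```python
import numpy as np

def extract_layer1(image32, D, groups, g=np.abs):
    """First-layer feature extraction for one 32x32 image, as described
    above: simple cells s(p) = g(D^T x(p)) on each of the 16 non-overlapping
    8x8 subpatches, then one max-pooled complex cell per group.
    image32: (32, 32); D: (64, n_simple); groups: list of index lists."""
    responses = []
    for i in range(0, 32, 8):
        for j in range(0, 32, 8):
            x = image32[i:i + 8, j:j + 8].reshape(64)   # one subpatch
            s = g(D.T @ x)                              # simple cells
            responses.append([s[grp].max() for grp in groups])  # complex cells
    # 16 patches x len(groups) cells; with 128 groups this would be the
    # 128-by-4-by-4 = 2048-dimensional representation from the text.
    return np.concatenate(responses)

# Toy usage with random unit-norm filters and 4 small pooling groups.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 8))
D /= np.linalg.norm(D, axis=0)
groups = [[0, 1], [2, 3], [4, 5], [6, 7]]
c = extract_layer1(rng.standard_normal((32, 32)), D, groups)
assert c.shape == (16 * 4,)
```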
Once complete, we have an array of 128-by-4-by-4 = 2048 complex cell responses c representing each 32-by-32 image. These responses are then used to form a new dataset from which to learn a second layer of simple cells with K-means. In our experiments we train 150,000 second layer simple cells. We denote the second layer of learned filters as ¯D, and the second layer simple cell responses as ¯s = ¯D⊤c. Applying again our complex cell learning procedure to ¯D, we obtain pooling groups ¯G, and complex cells ¯c defined analogously. 3 Experiments As described above, we ran our algorithm on patches harvested from YouTube thumbnails downloaded from the web. Specifically, we downloaded the thumbnails for over 1.4 million YouTube videos⁴, some of which are shown in Figure 2b. These images were downsampled to 128-by-96 pixels and converted to grayscale. We cropped 57 million randomly selected 32-by-32 pixel patches from these images to form our unlabeled training set. No supervision was used—thus most patches contain partial views of objects or clutter at differing scales. We ran our algorithm on these images using a cluster of 30 machines over 3 days—virtually all of the time spent training the 150,000 second-layer features.⁵ We will now visualize these features and check whether any of them have learned to identify an object class. 3.1 Low-Level Simple and Complex Cell Visualizations We visualize the learned low-level filters D and pooling groups G to verify that they are, in fact, similar to those learned by other well-known algorithms. It is already known that our K-means-based algorithm learns simple-cell-like filters (e.g., edge-like features, as well as spots, curves) as shown in Figure 3a. To visualize the learned complex cells we inspect the simple cell filters that belong to each of the pooling groups. The filters for several pooling groups are shown in Figure 3b. As expected, the filters cover a spectrum of similar image structures.
Though many pairs of filters are extremely similar⁶, there are also other pairs that differ significantly yet are included in the group due to the single-link clustering method. Note that some of our groups are composed of similar edges at differing locations, and thus appear to have learned translation invariance as expected. 3.2 Higher-Level Simple and Complex Cells Finally, we inspect the learned higher layer simple cell and complex cell features, ¯s and ¯c, particularly to see whether any of them are selective for an object class. The most commonly occurring object in these video thumbnails is human faces (even though we estimate that much less than 0.1% of patches contain a well-framed face). Thus we search through our learned features for cells that are selective for human faces at varying locations and scales. To locate such features we use a dataset of labeled images: several hundred thousand non-face images as well as tens of thousands of known face images from the “Labeled Faces in the Wild” (LFW) dataset [4].⁷ To test whether any of the ¯s simple cell features are selective for faces, we use each feature by itself as a “detector” on the labeled dataset: we compute the area under the precision-recall curve (AUC) obtained when each feature’s response ¯s_i is used as a simple classifier. [Footnote 4: We cannot select videos at random, so we query videos under each YouTube category (“Pets & Animals”, “Science & Technology”, etc.) along with a date (e.g., “January 2001”). Footnote 5: Though this is a fairly long run, we note that 1 iteration of K-means is cheaper than a single batch gradient step for most other methods able to learn high-level invariant features. We expect that these experiments would be impossible to perform in a reasonable amount of time on our cluster with another algorithm. Footnote 6: Some filters have reversed polarity due to our use of absolute-value rectification during training of the first layer.]
Indeed, it turns out that there are a handful of high-level features that tend to be good detectors for faces. The precision-recall curves for the best 5 detectors are shown in Figure 3c (top curves); the best of these achieves 86% AUC. We visualize 16 of the simple cell features identified by this procedure⁸ in Figure 4a along with a sampling of the image patches that activate the first of these cells strongly. There it can be seen that these simple cells are selective for faces located at particular locations and scales. Within each group the faces differ slightly due to the learned invariance provided by the complex cells in the lower layer (and thus the mean of each group of images is blurry). Figure 3: (a) First layer simple cell filters learned by K-means. (b) Sets of simple cell filters belonging to three pooling groups learned by our complex cell training algorithm. (c) Precision-recall curves (precision vs. recall) showing selectivity for human faces of 5 low-level simple cells trained from a full 32-by-32 patch (red curves, bottom) versus 5 higher-level simple cells (green curves, top). Performance of the best linear filter found by SVM from labeled data is also shown (black dotted curve, middle). It may appear that this result could be obtained by applying our simple cell learning procedure directly to full 32-by-32 images without any attempts at incorporating local invariance. That is, rather than training D (the first-layer filters) from 8-by-8 patches, we could try to train D directly from the 32-by-32 images. This turns out not to be successful. The lower curves in Figure 3c are the precision-recall curves for the best 5 simple cells found in this way. Clearly the higher-level features are dramatically better detectors than simple cells built directly from pixels⁹ (only 64% AUC).
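The single-feature evaluation used here scores each labeled example by one cell's response and measures the area under the precision-recall curve. A minimal sketch using the standard average-precision estimate of that area (our implementation, not the paper's):

```python
import numpy as np

def pr_auc(scores, labels):
    """Average precision: area under the precision-recall curve obtained
    by thresholding a single feature's responses, as in the text.
    scores: (m,) cell responses; labels: (m,) 1 for face, 0 for non-face."""
    order = np.argsort(-scores)            # highest response first
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                 # true positives at each threshold
    precision = tp / np.arange(1, len(labels) + 1)
    # sum precision at each positive example (average-precision estimate)
    return float(precision[labels == 1].sum() / labels.sum())

# A feature whose response separates the classes perfectly scores 1.0.
assert pr_auc(np.array([0.9, 0.8, 0.2, 0.1]), np.array([1, 1, 0, 0])) == 1.0
```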
[Footnote 7: Our positive face samples include the entire set of labeled faces, plus randomly scaled and translated copies. Footnote 8: We visualize the higher-level features by averaging together the 100 unlabeled images from our YouTube dataset that elicit the strongest activation. Footnote 9: These simple cells were trained by applying K-means to normalized, whitened 32-by-32 pixel patches from a smaller unlabeled set known to have a higher concentration of faces. Due to this, a handful of centroids look roughly like face exemplars and act as simple “template matchers”. When trained on the full dataset (which contains far fewer faces), K-means learns only edge and arc features which perform much worse (about 45% AUC).] Table 1: Area under PR curve for different cells on our face detection validation set (only the SVM uses labeled data). Best 32-by-32 simple cell: 64%; best in ¯s: 86%; best in ¯c: 80%; supervised linear SVM: 77%. Figure 4: Visualizations. (a) A collection of patches from our unlabeled dataset that maximally activate one of the high-level simple cells from ¯s. (b) The mean of the top stimuli for a handful of face-selective cells in ¯s. (c) Visualization of the face-selective cells that belong to one of the complex cells in ¯c discovered by the single-link clustering algorithm applied to ¯D. (d) A collection of unlabeled patches that elicit a strong response from the complex cell visualized in (c) — virtually all are faces, at a variety of scales and positions. Compare to (a). As a second control experiment we train a linear SVM from half of the labeled data using only pixels as input (contrast-normalized and whitened). The PR curve for this linear classifier is shown in Figure 3c as a black dotted line. There we see that the supervised linear classifier is significantly better (77% AUC) than the 32-by-32 linear simple cells.
On the other hand, it does not perform as well as the higher level simple cells learned by our system even though it is likely the best possible linear detector. Finally, we inspect the higher-level complex cells learned by applying the same agglomerative clustering procedure to the higher-level simple cell filters. Due to the invariance introduced at the lower layers, two simple cells that detect faces at slightly different locations or scales will often have very similar filter weights and thus we expect our algorithm to find and combine these simple cells into higher-level invariant feature cells. To visualize our higher-level complex cell features ¯c, we can simply look at visualizations for all of the simple cells in each of the groups ¯G. These visualizations show us the set of patches that strongly activate each simple cell, and hence also activate the complex cell. The results of such a visualization for one group that was found to contain only face-selective cells are shown in Figure 4c. There it can be seen that this single “complex cell” selects for faces at multiple positions and scales. A sampling of image patches collected from the unlabeled data that strongly activate the corresponding complex cell are shown in Figure 4d. We see that the complex cell detects many faces but at a much wider variety of positions and scales compared to the simple cells, demonstrating that even “higher level” invariances are being captured, including scale invariance. Benchmarked on our labeled set, this complex cell achieves 80.0% AUC—somewhat worse than the very best simple cells, but still in the top 10 performing cells in the entire network. Interestingly, the qualitative results in Figure 4d are excellent, and we believe these images represent an even greater range of variations than those in the labeled set. Thus the 80% AUC number may somewhat under-rate the quality of these features.
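The top-stimuli visualization described here (averaging the unlabeled patches that most strongly activate a cell, which is why the group means look blurry) can be sketched as follows; the function name is ours:

```python
import numpy as np

def top_stimuli_mean(patches, responses, k=100):
    """Visualize a cell by averaging the k unlabeled patches that elicit
    its strongest activation, as described in the text.
    patches: (m, 32, 32); responses: (m,) the cell's response per patch."""
    top = np.argpartition(-responses, k)[:k]   # indices of k largest responses
    return patches[top].mean(axis=0)           # blurry "average stimulus"

# Toy usage with random patches and responses.
rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 32, 32))
responses = rng.standard_normal(500)
vis = top_stimuli_mean(patches, responses, k=100)
assert vis.shape == (32, 32)
```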
These results suggest that the basic notions of invariance and selectivity that underpin popular feature learning algorithms may be sufficient to discover the kinds of high-level features that we desire, possibly including whole object classes robust to local and global variations. Indeed, using simple implementations of selective and invariant features closely related to existing algorithms, we have found that it is possible to build features with high selectivity for a coherent, commonly occurring object class. Though human faces occur only very rarely in our very large dataset, it is clear that the complex cell visualized in Figure 4d is adept at spotting them amongst tens of millions of images. The enabler for these results is the scalability of the algorithms we have employed, suggesting that other systems can likely achieve similar results to the ones shown here if their computational limitations are overcome. 4 Related Work The method that we have proposed has close connections to a wide array of prior work. For instance, the basic notions of selectivity and invariance that drive our system can be identified in many other algorithms: Group sparse coding methods [3] and Topographic ICA [6, 7] build invariances by pooling simple cells that lie in an invariant subspace, identified by strong scale correlations between cell responses. The advantage of this criterion is that it can determine which features to pool together even when the simple cell filters are orthogonal (where they would be too far apart for our algorithm to recognize their relationship). Our results suggest that while this type of invariance is very useful, there exist simple ways of achieving a similar effect. Our approach is also connected with methods that attempt to model the geometric (e.g., manifold) structure of the input space.
For instance, Contractive Auto-Encoders [16, 15], Local Coordinate Coding [20], and Locality-constrained Linear Coding [19] learn sparse linear filters while attempting to model the manifold structure staked out by these filters (sometimes termed “anchor points”). One interpretation of our method, suggested by Figure 1b, is that with extremely overcomplete dictionaries it is possible to use trivial distance calculations to identify neighboring points on the manifold. This in turn allows us to construct features invariant to shifts along the manifold with little effort. [1] use similar intuitions to propose a clustering method similar to our approach. One of our key results, the unsupervised discovery of features selective for human faces is fairly unique (though seen recently in the extremely large system of [11]). Results of this kind have appeared previously in restricted settings. For instance, [13] trained Deep Belief Network models that decomposed object classes like faces and cars into parts using a probabilistic max-pooling to gain translation invariance. Similarly, [21] has shown results of a similar flavor on the Caltech recognition datasets. [22] showed that a probabilistic model (with some hand-coded geometric knowledge) can recover clusters containing 20 known object class silhouettes from outlines in the LabelMe dataset. Other authors have shown the ability to discover detailed manifold structure (e.g., as seen in the results of embedding algorithms [18, 17]) when trained in similarly restricted settings. The structure that these methods discover, however, is far more apparent when we are using labeled, tightly cropped images. Even if we do not use the labels themselves the labeled examples are, by construction, highly clustered: faces will be separated from other objects because there are no partial faces or random clutter. In our dataset, no supervision is used except to probe the representation post hoc. 
Finally, we note the recent, extensive findings of Le et al. [11]. In that work, an extremely large 9-layer neural network based on a TICA-like learning algorithm [10, 6] is also capable of identifying a wide variety of object classes (including cats and upper-bodies of people) seen in YouTube videos. Our results complement this work in several key ways. First, by training on smaller randomly cropped patches, we show that object-selectivity may still be obtained even when objects are almost never framed properly within the image, ruling out this bias as the source of object-selectivity. Second, we have shown that the key concepts (sparse selective filters and invariant-subspace pooling) used in their system can also be implemented in a different way using scalable clustering algorithms, allowing us to achieve results reminiscent of theirs using a vastly smaller amount of computing power. (We used 240 cores, while their large-scale system is composed of 16,000 cores.) In combination, these results point strongly to the conclusion that almost any highly scalable implementation of existing feature-learning concepts is enough to discover these sophisticated high-level representations. 5 Conclusions In this paper we have presented a feature learning system composed of two highly scalable but otherwise very simple learning algorithms: K-means clustering to find sparse linear filters (“simple cells”) and agglomerative clustering to stitch simple cells together into invariant features (“complex cells”). We showed that these two components are, in fact, capable of learning complicated high-level representations in large-scale experiments on unlabeled images pulled from YouTube. Specifically, we found that higher-level simple cells could learn to detect human faces without any supervision at all, and that our complex-cell learning procedure combined these into even higher-level invariances.
These results indicate that we are apparently equipped with many of the key principles needed to achieve such results and that a critical remaining puzzle is how to scale up our algorithms to the sizes needed to capture more object classes and even more sophisticated invariances. References [1] Y. Boureau, N. L. Roux, F. Bach, J. Ponce, and Y. LeCun. Ask the locals: multi-way local pooling for image recognition. In 13th International Conference on Computer Vision, pages 2651–2658, 2011. [2] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, pages 921–928, 2011. [3] P. Garrigues and B. Olshausen. Group sparse coding with a Laplacian scale mixture prior. In Advances in Neural Information Processing Systems 23, pages 676–684, 2010. [4] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 0749, University of Massachusetts, Amherst, October 2007. [5] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000. [6] A. Hyvärinen, P. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001. [7] A. Hyvärinen, J. Hurri, and P. Hoyer. Natural Image Statistics. Springer-Verlag, 2009. [8] T. Kohonen. Emergence of invariant-feature detectors in self-organization. In M. Palaniswami et al., editor, Computational Intelligence, A Dynamic System Perspective, pages 17–31. IEEE Press, New York, 1995. [9] A. Krizhevsky. Learning multiple layers of features from Tiny Images. Master’s thesis, Dept. of Comp. Sci., University of Toronto, 2009. [10] Q. Le, A. Karpenko, J. Ngiam, and A. Ng. ICA with reconstruction cost for efficient overcomplete feature learning.
In Advances in Neural Information Processing Systems, 2011. [11] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012. [12] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541–551, 1989. [13] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning, pages 609–616, 2009. [14] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2, 1999. [15] S. Rifai, Y. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In Advances in Neural Information Processing Systems, 2011. [16] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In International Conference on Machine Learning, 2011. [17] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000. [18] L. van der Maaten and G. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, November 2008. [19] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In Computer Vision and Pattern Recognition, pages 3360–3367, 2010. [20] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems 22, pages 2223–2231, 2009. [21] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In International Conference on Computer Vision, 2011. [22] L. Zhu, Y. Chen, A.
Torralba, W. Freeman, and A. Yuille. Part and Appearance Sharing: Recursive Compositional Models for Multi-View Multi-Object Detection. In Computer Vision and Pattern Recognition, 2010.
CPRL – An Extension of Compressive Sensing to the Phase Retrieval Problem Henrik Ohlsson Division of Automatic Control, Department of Electrical Engineering, Linköping University, Sweden. Department of Electrical Engineering and Computer Sciences University of California at Berkeley, CA, USA ohlsson@eecs.berkeley.edu Allen Y. Yang Department of Electrical Engineering and Computer Sciences University of California at Berkeley, CA, USA Roy Dong Department of Electrical Engineering and Computer Sciences University of California at Berkeley, CA, USA S. Shankar Sastry Department of Electrical Engineering and Computer Sciences University of California at Berkeley, CA, USA Abstract While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique – CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation. 1 Introduction In the area of X-ray imaging, phase retrieval (PR) refers to the problem of recovering a complex multivariate signal from the squared magnitude of its Fourier transform. Existing sensor devices for collecting X-ray images are sensitive only to signal intensities, not the phases. However, it is very important to be able to recover the missing phase information as it reveals finer structures of the subjects than using the intensities alone.
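Before formalizing the problem, a small numerical aside (a NumPy sketch, not from the paper) shows why intensity-only Fourier data is ambiguous: circularly shifting a signal changes it, yet leaves every squared Fourier magnitude untouched.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(16)

# Squared DFT magnitudes are blind to circular time shifts: the shifted
# signal differs from x, yet yields identical intensity measurements.
b = np.abs(np.fft.fft(x)) ** 2
b_shifted = np.abs(np.fft.fft(np.roll(x, 3))) ** 2

print(np.allclose(b, b_shifted))  # -> True
```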
The PR problem also has broader applications and has been studied extensively in biology, physics, chemistry, astronomy, and, more recently, the nanosciences [29, 20, 18, 24, 23]. Mathematically, PR can be formulated using a linear system y = Ax ∈ CN, where the matrix A may represent the Fourier transform or other more general linear transforms. If the complex measurements y are available and the matrix A is assumed given, it is well known that the least-squares (LS) solution recovers the model parameter x that minimizes the squared estimation error ∥y − Ax∥2^2. In PR, we assume that the phase of the coefficients of y is omitted and only the squared magnitude of the output is observed: bi = |yi|2 = |⟨x, ai⟩|2, i = 1, · · · , N, (1) where AH = [a1, · · · , aN] ∈ Cn×N, yT = [y1, · · · , yN] ∈ CN, and AH denotes the Hermitian transpose of A. Inspired by the emerging theory of compressive sensing [17, 8] and a lifting technique recently proposed for PR [13, 10], we study the PR problem under the more restricted assumption that the model parameter x is sparse and the number of observations N is too small for (1) to have a unique solution; in some cases there are even fewer measurements than the number of unknowns n. The problem is known as compressive phase retrieval (CPR) [25, 27, 28]. In many X-ray imaging applications, for instance, if the complex source signal is indeed sparse under a proper basis, CPR provides a viable solution to exactly recover the signal while collecting much fewer measurements than the traditional non-compressive solutions. Clearly, the PR problem and its CPR extension are much more challenging than the LS problem, as the phase of y is lost while only its squared magnitude is available. For starters, it is important to note that the setup naturally leads to ambiguous solutions regardless of whether the original linear model is overdetermined or not.
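The measurement model (1) is easy to simulate. The sketch below (NumPy; the random complex Gaussian A is an illustrative stand-in for a concrete sensing matrix such as the A = RF used later in the paper) generates the intensities b from a complex signal x:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 16

# Rows of A are the a_i^H from (1); both A and x are complex.
A = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

y = A @ x            # unobserved complex measurements y = Ax
b = np.abs(y) ** 2   # observed intensities b_i = |<x, a_i>|^2
```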
For example, if x0 ∈ Cn is a solution to y = Ax, then the product of x0 with any scalar c ∈ C, |c| = 1, leads to the same squared output b. As mentioned in [10], when the dictionary A represents the unitary discrete Fourier transform (DFT), the ambiguities may represent time-reversed or time-shifted solutions of the ground-truth signal. Hence, these global ambiguities are considered acceptable in PR applications. In this paper, when we talk about a unique solution to PR, we mean a representative of a family of solutions up to a global phase ambiguity. 1.1 Contributions The main contribution of the paper is a convex formulation of the CPR problem. Using the lifting technique, the NP-hard problem is relaxed to a semidefinite program (SDP). We will briefly summarize several theoretical bounds for guaranteed recovery of the complex input signal, which are presented in full detail in our technical report [26]. Building on the assurance of the guaranteed recovery, we will focus on the development of a novel scalable implementation of CPR based on the alternating direction method of multipliers (ADMM) approach. The ADMM implementation provides a means to apply CS ideas to PR applications, e.g., high-impact nanoscale X-ray imaging. In the experiment, we will present a comprehensive comparison of the new algorithm with the traditional interior-point method, other state-of-the-art sparse optimization techniques, and a greedy algorithm proposed in [26]. In the high-dimensional complex domain, the ADMM algorithm demonstrates superior performance on our simulated examples and real images. Finally, the paper also provides practical guidelines to practitioners at large working on other similar nonsmooth SDP applications. To aid peer evaluation, the source code of all the algorithms has been made available at: http://www.rt.isy.liu.se/~ohlsson/.
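The global phase ambiguity can be checked in a few lines (an illustrative NumPy sketch, not from the paper): multiplying the signal by any unit-modulus scalar leaves the intensities b unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 6)) + 1j * rng.standard_normal((12, 6))
x0 = rng.standard_normal(6) + 1j * rng.standard_normal(6)

b = np.abs(A @ x0) ** 2

# Any c with |c| = 1 gives the same squared output, so x0 is recoverable
# only up to a global phase.
c = np.exp(1j * 0.7)
b_rotated = np.abs(A @ (c * x0)) ** 2

print(np.allclose(b, b_rotated))  # -> True
```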
2 Compressive Phase Retrieval via Lifting (CPRL) Since (1) is nonlinear in the unknown x, N ≫ n measurements are in general needed for a unique solution. When the number of measurements N is smaller than necessary for such a unique solution, additional assumptions are needed as regularization to select one of the solutions. In classical CS, the ability to find the sparsest solution to a linear equation system enables reconstruction of signals from far fewer measurements than previously thought possible. Classical CS is, however, only applicable to systems with linear relations between measurements and unknowns. To extend classical CS to the nonlinear PR problem, we seek the sparsest solution satisfying (1): min_x ∥x∥0, subj. to b = |Ax|2 = {ai^H x x^H ai}1≤i≤N, (2) with the square acting element-wise and b = [b1, · · · , bN]T ∈ RN. As the counting norm ∥·∥0 is not a convex function, following the ℓ1-norm relaxation in CS, (2) can be relaxed as min_x ∥x∥1, subj. to b = |Ax|2 = {ai^H x x^H ai}1≤i≤N. (3) Note that (3) is still not a convex program, as its equality constraint is not a linear equation. In the literature, a lifting technique has been extensively used to reframe problems such as (3) to a standard form in SDP, such as in Sparse PCA [15]. More specifically, given the ground-truth signal x0 ∈ Cn, let X0 ≜ x0 x0^H ∈ Cn×n be an induced rank-1 semidefinite matrix. Then (3) can be reformulated into1 min_{X⪰0} ∥X∥1, subj. to rank(X) = 1, bi = ai^H X ai, i = 1, · · · , N. (4) This is of course still a nonconvex problem due to the rank constraint. The lifting approach addresses this issue by replacing rank(X) with Tr(X). For a positive-semidefinite matrix, Tr(X) is equal to the sum of the eigenvalues of X (or the ℓ1-norm on a vector containing all eigenvalues of X). This leads to the nonsmooth SDP min_{X⪰0} Tr(X) + λ∥X∥1, subj. to bi = Tr(ΦiX), i = 1, · · · , N, (5) where we further denote Φi ≜ ai ai^H ∈ Cn×n and λ ≥ 0 is a design parameter.
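Both facts used in the lifting step can be verified numerically: the quadratic measurement |ai^H x|2 is linear in X = x x^H, and for X ⪰ 0 the trace equals the ℓ1-norm of the spectrum (an illustrative NumPy sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

X = np.outer(x, x.conj())    # lifted rank-1 matrix X = x x^H
Phi = np.outer(a, a.conj())  # Phi = a a^H

# |a^H x|^2 = a^H X a = Tr(Phi X): quadratic in x, but linear in X.
lhs = np.abs(np.vdot(a, x)) ** 2   # vdot conjugates its first argument
rhs = np.trace(Phi @ X).real
assert np.isclose(lhs, rhs)

# For X >= 0, Tr(X) is the sum of the (nonnegative) eigenvalues,
# i.e. the l1-norm of the spectrum -- the convex surrogate for rank.
assert np.isclose(np.trace(X).real, np.linalg.eigvalsh(X).sum())
```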
Finally, the estimate of x can be found by computing the rank-1 decomposition of X via singular value decomposition. We refer to the approach as compressive phase retrieval via lifting (CPRL). Consider now the case that the measurements are contaminated by data noise. In a linear model, bounded random noise typically affects the output of the system as y = Ax + e, where e ∈ CN is a noise term with bounded ℓ2-norm: ∥e∥2 ≤ ϵ. However, in phase retrieval, we closely follow a more specialized noise model used in [13]: bi = |⟨x, ai⟩|2 + ei. (6) This nonstandard model avoids the need to calculate the squared magnitude output |y|2 with the added noise term. More importantly, in most practical phase retrieval applications, measurement noise is introduced when the squared magnitudes or intensities of the linear system are measured on the sensing device, but not on y itself. Accordingly, we denote a linear operator B of X as B : X ∈ Cn×n ↦ {Tr(ΦiX)}1≤i≤N ∈ RN, (7) which measures the noise-free squared output. Then the approximate CPR problem with the bounded ℓ2-norm error model can be solved by the following nonsmooth SDP program: min_{X⪰0} Tr(X) + λ∥X∥1, subj. to ∥B(X) − b∥2 ≤ ε. (8) Due to machine rounding error, a nonzero ε should in general always be assumed and used in the termination condition during the optimization. The estimate of x, just as in the noise-free case, can finally be found by computing the rank-1 decomposition of X via singular value decomposition. We refer to the method as approximate CPRL.
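The rank-1 decomposition step mentioned above can be sketched as follows (illustrative NumPy code; we use an eigendecomposition of the Hermitian X, which for a positive-semidefinite matrix coincides with the SVD): when X is exactly x x^H, the leading eigenpair recovers x up to the global phase ambiguity discussed earlier.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X = np.outer(x, x.conj())          # an ideal rank-1 CPRL solution

# Leading eigenpair of the Hermitian matrix X (eigh sorts ascending).
w, V = np.linalg.eigh(X)
x_hat = np.sqrt(w[-1]) * V[:, -1]  # candidate signal estimate

# x_hat matches x only after aligning the global phase.
phase = np.vdot(x_hat, x)
phase /= np.abs(phase)
print(np.allclose(x_hat * phase, x))  # -> True
```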
Definition 1 (RIP) A linear operator B(·) as defined in (7) is (ϵ, k)-RIP if |∥B(X)∥2^2 / ∥X∥2^2 − 1| < ϵ for all ∥X∥0 ≤ k and X ̸= 0. We can now state the following theorem: Theorem 2 (Recoverability/Uniqueness) Let B(·) be an (ϵ, 2∥X∗∥0)-RIP linear operator with ϵ < 1 and let ¯x be the sparsest solution to (1). If X∗ satisfies b = B(X∗), X∗ ⪰ 0, rank{X∗} = 1, then X∗ is unique and X∗ = ¯x¯xH. We can also give a bound on the sparsity of ¯x: Theorem 3 (Bound on ∥¯x¯xH∥0 from above) Let ¯x be the sparsest solution to (1) and let ˜X be the solution of CPRL (5). If ˜X has rank 1 then ∥˜X∥0 ≥ ∥¯x¯xH∥0. The following result now holds trivially: (Footnote 1: In this paper, ∥X∥1 for a matrix X denotes the entry-wise ℓ1-norm, and ∥X∥2 denotes the Frobenius norm.) Corollary 4 (Guaranteed recovery using RIP) Let ¯x be the sparsest solution to (1). The solution of CPRL ˜X is equal to ¯x¯xH if it has rank 1 and B(·) is (ϵ, 2∥˜X∥0)-RIP with ϵ < 1. If ¯x¯xH = ˜X cannot be guaranteed, the following bound becomes useful: Theorem 5 (Bound on ∥X∗ − ˜X∥1) Let ϵ < 1/(1 + √2) and assume B(·) to be an (ϵ, 2k)-RIP linear operator. Let X∗ be any matrix (sparse or dense) satisfying b = B(X∗), X∗ ⪰ 0, rank{X∗} = 1, let ˜X be the CPRL solution (5), and form Xs from X∗ by setting all but the k largest elements to zero. Then, (1 − (2√k/(1−ρ) + 1)(1/λ)) ∥˜X − X∗∥1 ≤ (2/((1−ρ)√k)) ∥X∗ − Xs∥1, (9) with ρ = √2 ϵ/(1 − ϵ). Given the RIP analysis, it may be the case that the linear operator B(·) does not satisfy the RIP property of Definition 1 well, as pointed out in [13]. In these cases, RIP-1 may be considered: Definition 6 (RIP-1) A linear operator B(·) is (ϵ, k)-RIP-1 if |∥B(X)∥1 / ∥X∥1 − 1| < ϵ for all matrices X ̸= 0 with ∥X∥0 ≤ k. Theorems 2–3 and Corollary 4 all hold with RIP replaced by RIP-1 and are not restated in detail here. Instead we summarize the most important property in the following theorem: Theorem 7 (Upper bound & recoverability through ℓ1) Let ¯x be the sparsest solution to (1).
The solution of CPRL (5), ˜X, is equal to ¯x¯xH if it has rank 1 and B(·) is (ϵ, 2∥˜X∥0)-RIP-1 with ϵ < 1. RIP-type arguments may be difficult to check for a given matrix and are more useful for claiming results for classes of matrices/linear operators. For instance, it has been shown that random Gaussian matrices satisfy the RIP with high probability. However, given a realization of a random Gaussian matrix, it is indeed difficult to check if it actually satisfies the RIP. Two alternative arguments are spark [14] and mutual coherence [17, 11]. The spark condition usually gives tighter bounds but is known to be difficult to compute as well. On the other hand, mutual coherence may give less tight bounds, but is more tractable. We will focus on mutual coherence, which is defined as: Definition 8 (Mutual coherence) For a matrix A, define the mutual coherence as µ(A) = max_{1≤i,j≤n, i̸=j} |ai^H aj| / (∥ai∥2 ∥aj∥2). By an abuse of notation, let B be the matrix satisfying b = BXs with Xs being the vectorized version of X. We are now ready to state the following theorem: Theorem 9 (Recovery using mutual coherence) Let ¯x be the sparsest solution to (1). The solution of CPRL (5), ˜X, is equal to ¯x¯xH if it has rank 1 and ∥˜X∥0 < 0.5(1 + 1/µ(B)). 4 Numerical Implementation via ADMM In addition to the above analysis of guaranteed recovery properties, a critical issue for practitioners is the availability of efficient numerical solvers. Several numerical solvers used in CS may be applied to solve nonsmooth SDPs, which include interior-point methods (e.g., used in CVX [19]), gradient projection methods [4], and augmented Lagrangian methods (ALM) [4]. However, interior-point methods are known to scale badly even to moderate-sized convex problems in general. Gradient projection methods also fail to meaningfully accelerate the CPRL implementation due to the complexity of the projection operator. Alternatively, nonsmooth SDPs can be solved by ALM.
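Definition 8 translates directly into code. The sketch below (illustrative; the helper name is ours, and the maximum is taken over column pairs) computes µ(A) and checks two limiting cases.

```python
import numpy as np

def mutual_coherence(A):
    """mu(A) = max_{i != j} |a_i^H a_j| / (||a_i||_2 ||a_j||_2), per Definition 8."""
    norms = np.linalg.norm(A, axis=0)
    G = np.abs(A.conj().T @ A) / np.outer(norms, norms)
    np.fill_diagonal(G, 0.0)   # exclude the i == j terms
    return G.max()

# Orthonormal columns have coherence 0; a duplicated column forces it to 1.
I = np.eye(4)
assert np.isclose(mutual_coherence(I), 0.0)
assert np.isclose(mutual_coherence(np.hstack([I, I[:, :1]])), 1.0)
```

By Theorem 9, a smaller µ(B) yields a larger guaranteed sparsity level 0.5(1 + 1/µ(B)).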
However, the augmented primal and dual objective functions are still complex SDPs, which are equally expensive to solve in each iteration. In summary, as we will demonstrate in Section 5, CPRL as a nonsmooth complex SDP is categorically more expensive to solve than the linear programs underlying CS, and the task exceeds the capability of many popular sparse optimization techniques. In this paper, we propose a novel solver for the nonsmooth SDP underlying CPRL via the alternating direction method of multipliers (ADMM; see for instance [6] and [5, Sec. 3.4]). The motivation to use ADMM is two-fold: (1) it scales well to large data sets, and (2) it is known for its fast convergence. There are also a number of strong convergence results [6] which further motivate the choice. To set the stage for ADMM, rewrite (5) as the equivalent SDP min_{X1,X2,Z} f1(X1) + f2(X2) + g(Z), subj. to X1 − Z = 0, X2 − Z = 0, (10) where f1(X) ≜ Tr(X) if bi = Tr(ΦiX), i = 1, . . . , N, and ∞ otherwise; f2(X) ≜ 0 if X ⪰ 0, and ∞ otherwise; g(Z) ≜ λ∥Z∥1. The update rules of ADMM now lead to the following: Xi^{l+1} = argmin_X fi(X) + Tr(Yi^l (X − Z^l)) + (ρ/2)∥X − Z^l∥2^2, Z^{l+1} = argmin_Z g(Z) + Σ_{i=1}^{2} [−Tr(Yi^l Z) + (ρ/2)∥Xi^{l+1} − Z∥2^2], Yi^{l+1} = Yi^l + ρ(Xi^{l+1} − Z^{l+1}), (11) where Xi, Yi, Z are constrained to stay in the domain of Hermitian matrices. Each of these steps admits a tractable calculation. However, the Xi, Yi, and Z variables are complex-valued, and, as most of the optimization literature deals with real-valued vectors and symmetric matrices, we will emphasize differences between the real and complex cases. After some simple manipulations, we have: X1^{l+1} = argmin_X ∥X − (Z^l − (I + Y1^l)/ρ)∥2, subj. to bi = Tr(ΦiX), i = 1, · · · , N. (12) Assuming that a feasible solution exists, and defining ΠA as the projection onto the convex set given by the linear constraints, the solution is: X1^{l+1} = ΠA(Z^l − (I + Y1^l)/ρ).
This optimization problem has a closed-form solution; converting the matrix optimization problem in (12) into an equivalent vector optimization problem yields a problem of the form min_x ∥x − z∥2, subj. to b = Ax. The answer is given by the pseudo-inverse of A, which can be precomputed. This complex-valued problem can be solved by converting the linear constraint on Hermitian matrices into an equivalent constraint on real-valued vectors. The conversion rests on the observation that for n × n Hermitian matrices A, B: ⟨A, B⟩ = Tr(AB) = Σ_{i=1}^{n} Σ_{j=1}^{n} Aij Bji = Σ_{i=1}^{n} Aii Bii + Σ_{i=1}^{n} Σ_{j=i+1}^{n} (Aij conj(Bij) + conj(Aij) Bij) = Σ_{i=1}^{n} Aii Bii + Σ_{i=1}^{n} Σ_{j=i+1}^{n} 2 real(Aij) real(Bij) + 2 imag(Aij) imag(Bij). So if we define Av as a length-n^2 vector whose elements are Aii for i = 1, · · · , n, √2 real(Aij) for i = 1, · · · , n, j = i + 1, · · · , n, and √2 imag(Aij) for i = 1, · · · , n, j = i + 1, · · · , n, and similarly define Bv, then we can see that ⟨A, B⟩ = ⟨Av, Bv⟩. This turns the constraint bi = Tr(ΦiX), i = 1, · · · , N, into one of the form b = [Φv_1 · · · Φv_N]^T Xv, where each Φv_i is in R^{n^2}. Thus, for this subproblem, the memory usage scales linearly with N, the number of measurements, and quadratically with n, the dimension of the data. Next, X2^{l+1} = argmin_{X⪰0} ∥X − (Z^l − Y2^l/ρ)∥2 = ΠPSD(Z^l − Y2^l/ρ), where ΠPSD denotes the projection onto the positive-semidefinite cone, which can easily be obtained via eigenvalue decomposition. This holds for real-valued and complex-valued Hermitian matrices. Finally, let X̄^{l+1} = (1/2) Σ_{i=1}^{2} Xi^{l+1} and similarly Ȳ^l. Then, the Z update rule can be written: Z^{l+1} = argmin_Z λ∥Z∥1 + ρ∥Z − (X̄^{l+1} + Ȳ^l/ρ)∥2^2 = soft(X̄^{l+1} + Ȳ^l/ρ, λ/(2ρ)). (13) We note that the soft operator in the complex domain must be coded with care. One does not simply check the sign of the difference, as in the real case, but rather the magnitude of the complex number: soft(x, q) = 0 if |x| ≤ q, and ((|x| − q)/|x|) x otherwise, (14) where q is a positive real number.
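The complex soft-thresholding operator (14) shrinks the magnitude and keeps the phase. A vectorized NumPy sketch (the scalar definition is the paper's; the vectorization is ours):

```python
import numpy as np

def soft(x, q):
    """Complex soft-thresholding (14): zero out entries with |x| <= q,
    otherwise shrink the magnitude by q while preserving the phase."""
    x = np.asarray(x, dtype=complex)
    mag = np.abs(x)
    keep = mag > q
    # Guard the division so zeroed entries never divide by zero.
    scale = np.where(keep, (mag - q) / np.where(keep, mag, 1.0), 0.0)
    return scale * x

z = np.array([3.0 * np.exp(1j * 0.5), 0.2 + 0.1j])
s = soft(z, 1.0)
assert np.isclose(np.abs(s[0]), 2.0)      # magnitude shrunk by q
assert np.isclose(np.angle(s[0]), 0.5)    # phase preserved
assert s[1] == 0                          # small entry zeroed out
```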
Setting l = 0, the Hermitian matrices Xi^l, Z^l, Yi^l can now be iteratively computed using the ADMM iterations (11). The stopping criterion of the algorithm is given by: ∥r^l∥2 ≤ n ϵabs + ϵrel max(∥X̄^l∥2, ∥Z^l∥2), ∥s^l∥2 ≤ n ϵabs + ϵrel ∥Ȳ^l∥2, (15) where ϵabs, ϵrel are algorithm parameters set to 10^−3, and r^l and s^l are the primal and dual residuals given by: r^l = (X1^l − Z^l, X2^l − Z^l), s^l = −ρ(Z^l − Z^{l−1}, Z^l − Z^{l−1}). We also update ρ according to the rule discussed in [6]: ρ^{l+1} = τincr ρ^l if ∥r^l∥2 > µ∥s^l∥2; ρ^l/τdecr if ∥s^l∥2 > µ∥r^l∥2; ρ^l otherwise, (16) where τincr, τdecr, and µ are algorithm parameters. Values commonly used are µ = 10 and τincr = τdecr = 2. 5 Experiment The experiments in this section are chosen to illustrate the computational performance and scalability of CPRL. As this is one of the first papers addressing the CPR problem, existing methods available for comparison are limited. For the CPR problem, to the authors’ best knowledge, the only methods developed are the greedy algorithms presented in [25, 27, 28], and GCPRL [26]. The method proposed in [25] handles CPR but is tailored only to random 2D Fourier samples from a 2D array, and it is extremely sensitive to initialization. In fact, it would fail to converge in our scenarios of interest. [27] formulates the CPR problem as a nonconvex optimization problem that can be addressed by solving a series of convex problems. [28] proposes to alternate between fitting the estimate to the measurements and thresholding. GCPRL, which stands for greedy CPRL, is a new greedy approximate algorithm tailored to the lifting technique in (5). The algorithm draws inspiration from the matching-pursuit algorithm [22, 1]. In each iteration, the algorithm adds the new nonzero component of x that decreases the CPRL objective function the most.
We have observed that if the number of nonzero elements in x is expected to be low, the algorithm can successfully recover the ground-truth sparse signal while consuming less time compared to interior-point methods for the original SDP.2 In general, greedy algorithms for solving CPR problems work well when a good guess for the true solution is available and are often computationally efficient, but they lack theoretical recovery guarantees. We also want to point out that CPRL becomes a special case in a more general framework that extends CS to nonlinear systems (see [1]). In general, nonlinear CS can be solved locally by greedy simplex pursuit algorithms. Its instantiation in PR is the GCPRL algorithm. However, the key benefit of developing the SDP solution for PR in this paper is that the global convergence can be guaranteed. In this section, we will compare implementations of CPRL using the interior-point method used by CVX [19] and ADMM with the design parameter choice recommended in [6] (τincr = τdecr = 2). λ = 10 will be used in all experiments. We will also compare the results to GCPRL and the PR algorithm PhaseLift [13]. The former is a greedy approximate solution, while the latter does not enforce sparsity and is obtained by setting λ = 0 in CPRL. In terms of the scale of the problem, the largest problem we have tested is on a 30 × 30 image that is 100-sparse in the Fourier domain, with 2400 measurements. Our experiment is conducted on an IBM x3558 M3 server with two Xeon X5690 processors, 6 cores each at 3.46GHz, 12MB L3 cache, and 96GB of RAM. The execution for recovering one instance takes approximately 36 hours to finish in the MATLAB environment, comprising several tens of thousands of iterations. The average memory usage is 3.5 GB. 5.1 A simple simulation In this example we consider a simple CPR problem to illustrate the differences between CPRL, GCPRL, and PhaseLift.
We also compare computational speed for solving the CPR problem and illustrate the theoretical bounds derived in Section 3. Let x ∈ C64 be a 2-sparse complex signal, A ≜ RF where F ∈ C64×64 is the Fourier transform matrix and R ∈ C32×64 a random projection matrix (generated by sampling a unit complex Gaussian), and let the measurements b satisfy the PR relation (1). The left plot of Figure 1 gives the recovered signal x using CPRL, GCPRL and PhaseLift. As seen, CPRL and GCPRL correctly identify the two nonzero elements in x, while PhaseLift fails to identify the true signal and gives a dense estimate. These results are rather typical (see the MCMC simulation in [26]). For very sparse examples, like this one, CPRL and GCPRL often both succeed in finding the ground truth (even though we have twice as many unknowns as measurements). PhaseLift, on the other hand, does not favor sparse solutions and would need considerably more measurements to recover the 2-sparse signal. The middle plot of Figure 1 shows the computational time needed to solve the nonsmooth SDP of CPRL using CVX, ADMM, and GCPRL. It shows that ADMM is the fastest and that GCPRL outperforms CVX. The right plot of Figure 1 shows the mutual coherence bound 0.5(1 + 1/µ(B)) for a number of different N and n, with A ≜ RF, F ∈ Cn×n the Fourier transform matrix, and R ∈ CN×n a random projection matrix. This is of interest since Theorem 9 states that when the CPRL solution ˜X satisfies ∥˜X∥0 < 0.5(1 + 1/µ(B)) and has rank 1, then ˜X = ¯x¯xH, where ¯x is the sparsest solution to (1). From the plot it can be concluded that if the CPRL solution ˜X has rank 1 and only a single nonzero component for a choice of 5 ≤ n, N ≤ 125, Theorem 9 guarantees that ˜X = ¯x¯xH. (Footnote 2: We have also tested an off-the-shelf toolbox that solves convex cone problems, called TFOCS [2]. Unfortunately, TFOCS cannot be applied directly to solving the nonsmooth SDP in CPRL.)
We also observe that Theorem 9 is conservative, since we previously saw that 2 nonzero components could be recovered correctly for n = 64 and N = 32. In fact, numerical simulation can be used to show that N = 30 suffices to recover the ground truth in 95 out of 100 runs [26]. Figure 1: Left: The magnitude of the estimated signal provided by CPRL, GCPRL and PhaseLift. Middle: The residual ∥¯x¯xH − ˜X∥2 plotted against time for ADMM (gray line), GCPRL (solid black line) and CVX (dashed black line). Right: A contour plot of the quantity 0.5(1 + 1/µ(B)); µ is taken as the average over 10 realizations of the data. 5.2 Compressive sampling and PR One of the motivations of the presented work and CPRL is that it enables compressive sensing for PR problems. To illustrate this, consider the 20 × 20 complex image in Figure 2, Left. To measure the image, we could measure each pixel one-by-one. This would require us to sample 400 times. What CS proposes is to measure linear combinations of samples rather than individual pixels. It has been shown that the original image can be recovered from far fewer samples than the total number of pixels in the image. The gain from using CS is hence that fewer samples are needed. However, traditional CS only discusses linear relations between measurements and unknowns. To extend CS to PR applications, consider again the complex image in Figure 2, Left, and assume that we can only measure intensities or intensities of linear combinations of pixels. Let R ∈ CN×400 capture how intensity measurements b are formed from linear combinations of pixels in the image, b = |Rz|2 (z is a vectorized version of the image).
An essential part of CS is also to find a dictionary (possibly overcomplete) in which the image can be represented using only a few basis images. For classical CS applications, dictionaries have been derived. For applying CS to PR applications, dictionaries are needed, and finding them is a topic for future research. We will use a 2D inverse Fourier transform dictionary in our example and arrange the basis vectors as columns in F ∈ C400×400. If we choose N = 400 and generate R by sampling from a unit Gaussian distribution and set A = RF, CPRL recovers the true image exactly. This is rather remarkable since the PR relation (1) is nonlinear in the unknown x and N ≫ n measurements are in general needed for a unique solution. If we instead sample the intensity of each pixel, one-by-one, neither CPRL nor PhaseLift recovers the true image. If we set A = R and do not care about finding a dictionary, we can use a classical PR algorithm to recover the true image. If PhaseLift is used, N = 1600 measurements are sufficient to recover the true image. The main reasons for the low number of samples needed in CPRL are that we managed to find a good dictionary (20 basis images were needed to recover the true image) and that CPRL is able to recover the sparsest solution. In fact, setting A = RF, PhaseLift still needs 1600 measurements to recover the true solution. 5.3 The Shepp-Logan phantom In this last example, we again consider the recovery of complex-valued images from random samples. The motivation is twofold: Firstly, it illustrates the scalability of the ADMM implementation. In fact, ADMM has to be used in this experiment, as CVX cannot handle the CPRL problem at this scale. Secondly, it illustrates that CPRL can provide approximate solutions that are visually close to the ground-truth images. Consider now the image in Figure 2, Middle Left. This 30 × 30 Shepp-Logan phantom has a 2D Fourier transform with 100 nonzero coefficients. We generate N linear combinations of pixels as in the previous example and square the measurements, and then apply
We generate N linear combinations of pixels as in the previous example, square the measurements, and then apply CPRL and PhaseLift with a 2D Fourier dictionary. The middle image in Figure 2 shows the result recovered by PhaseLift with N = 2400, the second image from the right shows the result recovered by CPRL with the same N = 2400, and the rightmost image shows the result recovered by CPRL with N = 1500. The number of measurements relative to the sparsity of x is too low for either CPRL or PhaseLift to recover z perfectly. However, CPRL provides a much better approximation and outperforms PhaseLift visually, even when it uses considerably fewer measurements.

Figure 2: Left: Absolute value of the 2D inverse Fourier transform of x, |Fx|, used in the experiment in Section 5.2. Middle Left: Ground truth for the experiment in Section 5.3. Middle: Recovered result using PhaseLift with N = 2400. Middle Right: CPRL with N = 2400. Right: CPRL with N = 1500.

6 Future Directions

The SDP underlying CPRL scales badly with the number of unknowns or basis vectors in the dictionary. Therefore, learning a suitable dictionary for a specific application becomes even more critical than in the traditional linear CS setting. We also want to point out that when classical CS was first studied, many of today's accelerated numerical algorithms were not available. We are excited about the new problem of improving the speed of SDP algorithms in sparse optimization, and hope our paper will foster the community's interest in addressing this challenge collaboratively. One interesting direction might be to use ADMM to solve the dual of (5); see for instance [30, 31]. Another possible direction is the outer approximation methods of [21].
7 Acknowledgements

Ohlsson is partially supported by the Swedish Foundation for Strategic Research in the center MOVIII, the Swedish Research Council in the Linnaeus center CADICS, the European Research Council under the advanced grant LEARN, contract 267381, a postdoctoral grant from the Sweden-America Foundation, donated by ASEA's Fellowship Fund, and a postdoctoral grant from the Swedish Research Council. Yang is supported by ARO 63092-MA-II. Dong is supported by the NSF Graduate Research Fellowship under grant DGE 1106400, and by the Team for Research in Ubiquitous Secure Technology (TRUST), which receives support from NSF (award number CCF-0424422). The authors also want to acknowledge useful input from Stephen Boyd and Yonina Eldar.

References

[1] A. Beck and Y. C. Eldar. Sparsity constrained nonlinear optimization: Optimality conditions and algorithms. Technical Report arXiv:1203.4580, 2012.
[2] S. Becker, E. Candès, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. Mathematical Programming Computation, 3(3), 2011.
[3] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss. Combining geometry and combinatorics: A unified approach to sparse signal recovery. In Communication, Control, and Computing, 2008 46th Annual Allerton Conference on, pages 798–805, September 2008.
[4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, 1997.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 2011.
[7] A. Bruckstein, D. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1):34–81, 2009.
[8] E. Candès. Compressive sampling.
In Proceedings of the International Congress of Mathematicians, volume 3, pages 1433–1452, Madrid, Spain, 2006.
[9] E. Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9–10):589–592, 2008.
[10] E. Candès, Y. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. Technical Report arXiv:1109.0573, Stanford University, September 2011.
[11] E. Candès, X. Li, Y. Ma, and J. Wright. Robust Principal Component Analysis? Journal of the ACM, 58(3), 2011.
[12] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52:489–509, February 2006.
[13] E. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Technical Report arXiv:1109.4499, Stanford University, September 2011.
[14] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[15] A. d'Aspremont, L. El Ghaoui, M. Jordan, and G. Lanckriet. A direct formulation for Sparse PCA using semidefinite programming. SIAM Review, 49(3):434–448, 2007.
[16] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, April 2006.
[17] D. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1-minimization. PNAS, 100(5):2197–2202, March 2003.
[18] J. Fienup. Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint. Journal of the Optical Society of America A, 4(1):118–123, 1987.
[19] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, August 2010.
[20] D. Kohler and L. Mandel.
Source reconstruction from the modulus of the correlation function: a practical approach to the phase problem of optical coherence theory. Journal of the Optical Society of America, 63(2):126–134, 1973.
[21] H. Konno, J. Gotoh, T. Uno, and A. Yuki. A cutting plane algorithm for semi-definite programming problems with applications to failure discriminant analysis. Journal of Computational and Applied Mathematics, 146(1):141–154, 2002.
[22] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, December 1993.
[23] S. Marchesini. Phase retrieval and saddle-point optimization. Journal of the Optical Society of America A, 24(10):3289–3296, 2007.
[24] R. Millane. Phase retrieval in crystallography and optics. Journal of the Optical Society of America A, 7:394–411, 1990.
[25] M. Moravec, J. Romberg, and R. Baraniuk. Compressive phase retrieval. In SPIE International Symposium on Optical Science and Technology, 2007.
[26] H. Ohlsson, A. Y. Yang, R. Dong, and S. Sastry. Compressive phase retrieval from squared output measurements via semidefinite programming. Technical Report arXiv:1111.6323, University of California, Berkeley, November 2011.
[27] Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev. Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing. Optics Express, 19(16):14807–14822, August 2011.
[28] A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E. B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y. C. Eldar, O. Cohen, and M. Segev. Sparsity-based single-shot subwavelength coherent diffractive imaging. Nature Materials, 11(5):455–459, May 2012.
[29] A. Walther. The question of phase retrieval in optics. Optica Acta, 10:41–49, 1963.
[30] Z. Wen, D. Goldfarb, and W. Yin. Alternating direction augmented Lagrangian methods for semidefinite programming.
Mathematical Programming Computation, 2:203–230, 2010.
[31] Z. Wen, C. Yang, X. Liu, and S. Marchesini. Alternating direction methods for classical and ptychographic phase retrieval. Inverse Problems, 28(11):115010, 2012.
Learning optimal spike-based representations

Ralph Bourdoukan∗, Group for Neural Theory, École Normale Supérieure, Paris, France. ralph.bourdoukan@ens.fr
David G.T. Barrett∗, Group for Neural Theory, École Normale Supérieure, Paris, France. david.barrett@ens.fr
Christian K. Machens, Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal. christian.machens@neuro.fchampalimaud.org
Sophie Denève, Group for Neural Theory, École Normale Supérieure, Paris, France. sophie.deneve@ens.fr

Abstract

How can neural networks learn to represent information optimally? We answer this question by deriving spiking dynamics and learning dynamics directly from a measure of network performance. We find that a network of integrate-and-fire neurons undergoing Hebbian plasticity can learn an optimal spike-based representation for a linear decoder. The learning rule acts to minimise the membrane potential magnitude, which can be interpreted as a representation error after learning. In this way, learning reduces the representation error and drives the network into a robust, balanced regime. The network becomes balanced because small representation errors correspond to small membrane potentials, which in turn results from a balance of excitation and inhibition. The representation is robust because neurons become self-correcting, only spiking if the representation error exceeds a threshold. Altogether, these results suggest that several observed features of cortical dynamics, such as excitatory-inhibitory balance, integrate-and-fire dynamics and Hebbian plasticity, are signatures of a robust, optimal spike-based code.

A central question in neuroscience is to understand how populations of neurons represent information and how they learn to do so. Usually, learning and information representation are treated as two different functions. From the outset, this separation seems like a good idea, as it reduces the problem into two smaller, more manageable chunks.
Our approach, however, is to study these together. This allows us to treat learning and information representation as two sides of a single mechanism, operating at two different timescales. Experimental work has given us several clues about the regime in which real networks operate in the brain. Some of the most prominent observations are: (a) high trial-to-trial variability—a neuron responds differently to repeated, identical inputs [1, 2]; (b) asynchronous firing at the network level—spike trains of different neurons are at most very weakly correlated [3, 4, 5]; (c) tight balance of excitation and inhibition—every excitatory input is met by an inhibitory input of equal or greater size [6, 7, 8]; and (d) spike-timing-dependent plasticity (STDP)—the strength of synapses changes as a function of presynaptic and postsynaptic spike times [9]. Previously, it has been shown that observations (a)–(c) can be understood as signatures of an optimal, spike-based code [10, 11]. The essential idea is to derive spiking dynamics from the assumption that neurons only fire if their spike improves information representation. Information in a network may originate from several possible sources: external sensory input, external neural network input, or alternatively, it may originate within the network itself as a memory, or as a computation. Whatever the source, this initial assumption leads directly to the conclusion that a network of integrate-and-fire neurons can optimally represent a signal while exhibiting properties (a)–(c). A major problem with this framework is that network connectivity must be completely specified a priori, and requires the tuning of N² parameters, where N is the number of neurons in the network. Although this is feasible mathematically, it is unclear how a real network could tune itself into this optimal regime. In this work, we solve this problem using a simple synaptic learning rule.

∗ Authors contributed equally.
The key insight is that the plasticity rule can be derived from the same basic principle as the spiking rule in the earlier work—namely, that any change should improve information representation. Surprisingly, this can be achieved with a local, Hebbian learning rule, where synaptic plasticity is proportional to the product of presynaptic firing rates with post-synaptic membrane potentials. Spiking and synaptic plasticity then work hand in hand towards the same goal: the spiking of a neuron decreases the representation error on a fast time scale, thereby giving rise to the actual population representation; synaptic plasticity decreases the representation error on a slower time scale, thereby improving or maintaining the population representation. For a large set of initial connectivities and spiking dynamics, neural networks are driven into a balanced regime, where excitation and inhibition cancel each other and where spike trains are asynchronous and irregular. Furthermore, the learning rule that we derive reproduces the main features of STDP (property (d) above). In this way, a network can learn to represent information optimally, with synaptic, neural and network dynamics consistent with those observed experimentally. 1 Derivation of the learning rule for a single neuron We begin by deriving a learning rule for a single neuron with an autapse (a self-connection) (Fig. 1A). Our approach is to derive synaptic dynamics for the autapse and spiking dynamics for the neuron such that the neuron learns to optimally represent a time-varying input signal. We will derive a learning rule for networks of neurons later, after we have developed the fundamental concepts for the single neuron case. Our first step is to derive optimal spiking dynamics for the neuron, so that we have a target for our learning rule. We do this by making two simple assumptions [11]. 
First, we assume that the neuron can provide an estimate or read-out x̂(t) of a time-dependent signal x(t) by filtering its spike train o(t) as follows:

˙x̂(t) = −x̂(t) + Γo(t),   (1)

where Γ is a fixed read-out weight, which we will refer to as the neuron's "output kernel", and the spike train can be written as o(t) = ∑ᵢ δ(t − tᵢ), where {tᵢ} are the spike times. Next, we assume that the neuron only produces a spike if that spike improves the read-out, where we measure the read-out performance through a simple squared-error loss function:

L(t) = (x(t) − x̂(t))².   (2)

With these two assumptions, we can now derive optimal spiking dynamics. First, we observe that if the neuron produces an additional spike at time t, the read-out increases by Γ, and the loss function becomes L(t|spike) = (x(t) − (x̂(t) + Γ))². This allows us to restate our spiking rule as follows: the neuron should only produce a spike if L(t|no spike) > L(t|spike), i.e., if (x(t) − x̂(t))² > (x(t) − (x̂(t) + Γ))². Expanding both sides of this inequality, defining V(t) ≡ Γ(x(t) − x̂(t)) and defining T ≡ Γ²/2, we find that the neuron should only spike if:

V(t) > T.   (3)

We interpret V(t) to be the membrane potential of the neuron, and we interpret T as the spike threshold. This interpretation allows us to understand the membrane potential functionally: the voltage is proportional to a prediction error—the difference between the read-out x̂(t) and the actual signal x(t). A spike is an error-reduction mechanism—the neuron only spikes if the error exceeds the spike threshold. This is a greedy minimisation, in that the neuron fires a spike whenever that action decreases L(t), without considering the future impact of that spike. Importantly, the neuron does not require direct access to the loss function L(t). To determine the membrane potential dynamics, we take the derivative of the voltage, which gives us ˙V = Γ(˙x − ˙x̂). (Here, and in the following, we will drop the time index for notational brevity.)
Now, using Eqn. (1) we obtain ˙V = Γ˙x − Γ(−x̂ + Γo) = −Γ(x − x̂) + Γ(˙x + x) − Γ²o, so that:

˙V = −V + Γc − Γ²o,   (4)

where c = ˙x + x is the neural input. This corresponds exactly to the dynamics of a leaky integrate-and-fire neuron with an inhibitory autapse¹ of strength Γ², and a feedforward connection strength Γ. The dynamics and connectivity guarantee that a neuron spikes at just the right times to optimise the loss function (Fig. 1B). In addition, the neuron is especially robust to noise of different forms, because of its error-correcting nature. If x is constant in time, the voltage rises up to the threshold T, at which point a spike is fired, adding a delta function to the spike train o at time t, thereby producing a read-out x̂ that is closer to x and causing an instantaneous drop in the voltage through the autapse, by an amount Γ² = 2T, effectively resetting the voltage to V = −T. We now have a target for learning—we know the connection strength that a neuron must have at the end of learning if it is to represent information optimally, for a linear read-out. We can use this target to derive synaptic dynamics that can learn an optimal representation from experience. Specifically, we consider an integrate-and-fire neuron with some arbitrary autapse strength ω. The dynamics of this neuron are given by

˙V = −V + Γc − ωo.   (5)

This neuron will not produce the correct spike train for representing x through a linear read-out (Eqn. (1)) unless ω = Γ². Our goal is to derive a dynamical equation for the synapse ω so that the spike train becomes optimal. We do this by quantifying the loss that we incur by using the suboptimal strength, and then deriving a learning rule that minimises this loss with respect to ω. The loss function underlying the spiking dynamics determined by Eqn. (5) can be found by reversing the previous membrane potential analysis.
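To make the greedy rule concrete, here is a minimal Euler simulation of Eqns (1)–(4) under illustrative assumptions (Γ = 1 and a constant signal x = 1); the parameter values are not taken from the paper's experiments.

```python
# Greedy spiking neuron (Eqns 1-4): V' = -V + Gamma*c - Gamma^2*o, with a
# spike whenever V > T = Gamma^2/2; read-out dynamics: x_hat' = -x_hat + Gamma*o.
Gamma = 1.0
T = Gamma**2 / 2
dt = 1e-3
steps = 20000

x = 1.0                  # constant target signal
c = x                    # c = x' + x, and x' = 0 here

V, x_hat = 0.0, 0.0
spikes = 0
err = []
for _ in range(steps):
    V += dt * (-V + Gamma * c)
    x_hat += dt * (-x_hat)
    if V > T:            # spike only if it reduces the loss (Eqn 3)
        V -= Gamma**2    # autaptic reset by Gamma^2 = 2T
        x_hat += Gamma   # each spike bumps the read-out by Gamma
        spikes += 1
    err.append(abs(x - x_hat))

# After a short transient, |x - x_hat| stays within about Gamma/2: the
# read-out decays towards 0 and each spike kicks it back up by Gamma.
late = err[steps // 2:]
```

This shows the error-correcting behaviour described above: the voltage is the prediction error, and spikes are emitted exactly when that error exceeds T.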
First, we integrate the differential equation for V, assuming that ω changes on time scales much slower than the membrane potential. We obtain the following (formal) solution:

V = Γx − ωō,   (6)

where ō is determined by ˙ō = −ō + o. The solution to this latter equation is ō = h ∗ o, a convolution of the spike train with the exponential kernel h(τ) = θ(τ) exp(−τ). As such, it is analogous to the instantaneous firing rate of the neuron. Now, using Eqn. (6), and rewriting the read-out as x̂ = Γō, we obtain the loss incurred by the sub-optimal neuron,

L = (x − x̂)² = (1/Γ²)[V² + 2(ω − Γ²)Vō + (ω − Γ²)²ō²].   (7)

We observe that the last two terms of Eqn. (7) vanish whenever ω = Γ², i.e., when the optimal reset has been found. We can therefore simplify the problem by defining an alternative loss function,

L_V = V²/2,   (8)

which has the same minimum as the original loss (V = 0, or x = x̂; compare Eqn. (2)), but yields a simpler learning algorithm. We can now calculate how changes to ω affect L_V:

∂L_V/∂ω = V ∂V/∂ω = −Vō − Vω ∂ō/∂ω.   (9)

We can ignore the last term in this equation (as we will show below). Finally, using simple gradient descent, we obtain a simple Hebbian-like synaptic plasticity rule:

τ ˙ω = −∂L_V/∂ω = Vō,   (10)

where τ is the learning time constant.

¹This contribution of the autapse can also be interpreted as the reset of an integrate-and-fire neuron. Later, when we generalise to networks of neurons, we shall employ this interpretation.

This synaptic learning rule is capable of learning the synaptic weight ω that minimises the difference between x and x̂ (Fig. 1B). During learning, the synaptic weight changes in proportion to the post-synaptic voltage V and the pre-synaptic firing rate ō (Fig. 1C). As such, this is a Hebbian learning rule. Of course, in this single-neuron case, the pre-synaptic neuron and post-synaptic neuron are the same neuron. The synaptic weight gradually approaches its optimal value Γ².
However, it never completely stabilises, because learning never stops as long as neurons are spiking. Instead, the synapse oscillates closely about the optimal value (Fig. 1D). This is also a "greedy" learning rule, similar to the spiking rule, in that it seeks to minimise the error at each instant in time, without regard for the future impact of those changes. To demonstrate that the last term in Eqn. (9) can be neglected, we note that the equations for V, ō, and ω define a system of coupled differential equations that can be solved analytically by integrating between spikes. This results in a simple recurrence relation for changes in ω from the ith to the (i+1)th spike,

ωᵢ₊₁ = ωᵢ + ωᵢ(ωᵢ − 2T) / (τ(T − Γc − ωᵢ)).   (11)

This iterative equation has a single stable fixed point at ω = 2T = Γ², proving that the neuron's autaptic weight, or reset, will approach the optimal solution.

2 Learning in a homogeneous network

We now generalise our learning-rule derivation to a network of N identical, homogeneously connected neurons. This generalisation is reasonably straightforward, because many characteristics of the single-neuron case are shared by a network of identical neurons. We will return to the more general case of heterogeneously connected neurons in the next section. We begin by deriving optimal spiking dynamics, as in the single-neuron case. This provides a target for learning, which we can then use to derive synaptic dynamics. As before, we want our network to produce spikes that optimally represent a variable x for a linear read-out. We assume that the read-out x̂ is provided by summing and filtering the spike trains of all the neurons in the network:

˙x̂ = −x̂ + Γo,   (12)

where the row vector Γ = (Γ, . . . , Γ) contains the read-out weights² of the neurons and the column vector o = (o₁, . . . , o_N) their spike trains. Here, we have used identical read-out weights for each neuron, because this indirectly leads to homogeneous connectivity, as we will demonstrate.
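The recurrence (11) can be iterated directly; a small sketch assuming a constant input c and illustrative values Γ = 1, τ = 20, so that the optimal reset is 2T = Γ² = 1.

```python
# Spike-to-spike recurrence for the autapse (Eqn 11), assuming a constant
# input c. Illustrative values: Gamma = 1, so T = 0.5 and 2T = Gamma^2 = 1.
Gamma, tau, c = 1.0, 20.0, 1.0
T = Gamma**2 / 2

omega = 0.3  # start from a sub-optimal autapse strength
for _ in range(5000):
    omega += omega * (omega - 2 * T) / (tau * (T - Gamma * c - omega))
# omega converges monotonically to the stable fixed point 2T = Gamma^2.
```

Under these assumptions the iteration climbs monotonically from 0.3 to the fixed point, matching the stability claim in the text; the full stochastic simulation instead fluctuates weakly around this value (Fig. 1D).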
Next, we assume that a neuron only spikes if that spike reduces a loss function. This spiking rule is similar to the single-neuron spiking rule, except that this time there is some ambiguity about which neuron should spike to represent a signal. Indeed, there are many different spike patterns that provide exactly the same estimate x̂. For example, one neuron could fire regularly at a high rate (exactly like our previous single-neuron example) while all others are silent. To avoid this firing-rate ambiguity, we use a modified loss function that selects, amongst all equivalent solutions, those with the smallest neural firing rates. We do this by adding a 'metabolic cost' term to our loss function, so that high firing rates are penalised:

L = (x − x̂)² + µ∥ō∥²,   (13)

where µ is a small positive constant that controls the cost–accuracy trade-off, akin to a regularisation parameter. Each neuron in the optimal network will seek to reduce this loss function by firing a spike. Specifically, the ith neuron will spike whenever L(no spike in i) > L(spike in i). This leads to the following spiking rule for the ith neuron:

Vᵢ > Tᵢ,   (14)

where Vᵢ ≡ Γ(x − x̂) − µōᵢ and Tᵢ ≡ Γ²/2 + µ/2. We can naturally interpret Vᵢ as the membrane potential of the ith neuron and Tᵢ as the spiking threshold of that neuron. As before, we can now derive membrane potential dynamics:

˙V = −V + Γᵀc − (ΓᵀΓ + µI)o,   (15)

where I is the identity matrix and ΓᵀΓ + µI is the network connectivity. We can interpret the self-connection terms {Γ² + µ} as voltage resets that decrease the voltage of any neuron that spikes. This optimal network is equivalent to a network of identical integrate-and-fire neurons with homogeneous inhibitory connectivity. The network has some interesting dynamical properties.

²The read-out weights must scale as Γ ∼ 1/N so that firing rates are not unrealistically small in large networks. We can see this by calculating the average firing rate ∑ᵢ₌₁ᴺ ōᵢ/N ≈ x/(ΓN) ∼ O(N/N) ∼ O(1).
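These dynamics can be simulated directly. A minimal Euler sketch of Eqns (12)–(15) under illustrative assumptions: N = 20, read-out weights Γ = 1/N, a constant one-dimensional signal, a small metabolic cost µ, and a tiny random initialisation to break ties between the otherwise identical neurons.

```python
import numpy as np

rng = np.random.default_rng(2)

# Optimal homogeneous network (Eqns 12-15): every neuron has read-out
# weight Gamma, recurrent inhibition Gamma^2 (plus mu on the diagonal),
# and threshold T = Gamma^2/2 + mu/2. All parameter values are illustrative.
N = 20
Gamma = 1.0 / N                 # read-out weights scale as 1/N (footnote 2)
mu = 1e-4                       # metabolic cost
T = Gamma**2 / 2 + mu / 2
dt = 5e-4
steps = 40000

x = 1.0                         # constant target signal, so c = x' + x = x
V = rng.uniform(-T, T, size=N)  # tiny random initial potentials break ties
o_bar = np.zeros(N)             # filtered spike trains
x_hat = 0.0
spikes = 0
err = []

for _ in range(steps):
    V += dt * (-V + Gamma * x)
    o_bar -= dt * o_bar
    x_hat -= dt * x_hat
    i = int(np.argmax(V))
    if V[i] > T:                # greedy rule: most depolarised neuron fires
        V -= Gamma**2           # every neuron is inhibited by Gamma^2 ...
        V[i] -= mu              # ... and the spiker resets by Gamma^2 + mu
        o_bar[i] += 1.0
        x_hat += Gamma          # each spike bumps the read-out by Gamma
        spikes += 1
    err.append(abs(x - x_hat))

late_err = max(err[steps // 2:])
```

Because every spike inhibits all neurons by Γ², the population shares the work: the decoding error stays small even though each individual neuron fires slowly.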
The voltages of all the neurons are largely synchronous, all increasing to the spiking threshold at about the same time³ (Fig. 1F). Nonetheless, neural spiking is asynchronous. The first neuron to spike will reset itself by Γ² + µ, and it will inhibit all the other neurons in the network by Γ². This mechanism prevents neurons from spiking synchronously. The population as a whole acts similarly to the single neuron in our previous example. Each neuron fires regularly, even if a different neuron fires in every integration cycle. The design of this optimal network requires the tuning of N(N − 1) synaptic parameters. How can an arbitrary network of integrate-and-fire neurons learn this optimum? As before, we address this question by using the optimal network as a target for learning. We start with an arbitrarily connected network of integrate-and-fire neurons:

˙V = −V + Γᵀc − Ωo,   (16)

where Ω is a matrix of connectivity weights, which includes the resets of the individual neurons. Assuming that learning occurs on a slow time scale, we can rewrite this equation as

V = Γᵀx − Ωō.   (17)

Now, repeating the arguments from the single-neuron derivation, we modify the loss function to obtain an online learning rule. Specifically, we set L_V = ∥V∥²/2, and calculate the gradient:

∂L_V/∂Ωᵢⱼ = ∑ₖ Vₖ ∂Vₖ/∂Ωᵢⱼ = −∑ₖ Vₖδₖᵢōⱼ − ∑ₖₗ VₖΩₖₗ ∂ōₗ/∂Ωᵢⱼ.   (18)

We can simplify this equation considerably by observing that the contribution of the second summation is largely averaged out under a wide variety of realistic conditions⁴. Therefore, it can be neglected, and we obtain the following local learning rule:

τ ˙Ωᵢⱼ = −∂L_V/∂Ωᵢⱼ = Vᵢōⱼ.   (19)

This is a Hebbian plasticity rule, whereby connectivity changes in proportion to the pre-synaptic firing rate ōⱼ and post-synaptic membrane potential Vᵢ. We assume that the neural thresholds are set to a constant T and that the neural resets are set to their optimal values −T.

³The first neuron to spike will be random if there is some membrane potential noise.

Figure 1: Learning in a single neuron and a homogeneous network. (A) A single neuron represents an input signal x by producing an output x̂. (B) During learning, the single-neuron output x̂ (solid red line, top panel) converges towards the input x (blue). Similarly, for a homogeneous network the output x̂ (dashed red line, top panel) converges towards x. Connectivity also converges towards optimal connectivity in both the single-neuron case (solid black line, middle panel) and the homogeneous-network case (dashed black line, middle panel), as quantified by D = maxᵢ,ⱼ((Ωᵢⱼ − Ωᵒᵖᵗᵢⱼ)²/(Ωᵒᵖᵗᵢⱼ)²) at each point in time. Consequently, the membrane potential reset (bottom panel) converges towards the optimal reset (green line, bottom panel). Spikes are indicated by blue vertical marks, and are produced when the membrane potential reaches threshold (bottom panel). Here, we have rescaled time, as indicated, for clarity. (C) Our learning rule dictates that the autapse ω in our single neuron (bottom panel) changes in proportion to the membrane potential (top panel) and the firing rate (middle panel). (D) At the end of learning, the reset ω fluctuates weakly about the optimal value. (E) For a homogeneous network, neurons spike regularly at the start of learning, as shown in this raster plot. Membrane potentials of different neurons are weakly correlated. (F) At the end of learning, spiking is very irregular and membrane potentials become more synchronous.
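Why the neglected term in Eqn. (18) is harmless can be checked in one step: for fixed x and ō, applying the update (19) rescales V by (1 − (dt/τ)∥ō∥²), so ∥V∥ shrinks whenever (dt/τ)∥ō∥² < 2. A small NumPy sketch on a hypothetical random instance (all sizes and values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# One gradient step of the Hebbian rule (Eqn 19) on a random instance of
# Eqn (17), V = Gamma^T x - Omega o_bar. All sizes and values are hypothetical.
N = 10
Gamma = rng.standard_normal(N) / N       # read-out kernels (scalar signal)
x = 1.3
Omega = 0.01 * rng.standard_normal((N, N))
o_bar = np.abs(rng.standard_normal(N))   # filtered spike trains (rates >= 0)
dt_over_tau = 0.01

V = Gamma * x - Omega @ o_bar
Omega_new = Omega + dt_over_tau * np.outer(V, o_bar)  # dOmega_ij ~ V_i o_bar_j
V_new = Gamma * x - Omega_new @ o_bar

# With o_bar held fixed, V_new = V * (1 - dt/tau * ||o_bar||^2): the update
# contracts the membrane potentials, i.e. it descends L_V = ||V||^2 / 2.
```

This is only the instantaneous effect of one update with ō frozen; in the full dynamics ō changes too, which is exactly the term the text argues averages out.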
In the previous section we demonstrated that these resets can be obtained by a Hebbian plasticity rule (Eqn. (10)). This learning rule minimises the difference between the read-out and the signal, by approaching the optimal recurrent connection strengths for the network (Fig. 1B). As in the single-neuron case, learning does not stop, so the connection strengths fluctuate close to their optimal values. During learning, network activity becomes progressively more asynchronous as it progresses towards optimal connectivity (Fig. 1E, F).

3 Learning in the general case

Now that we have developed the fundamental concepts underlying our learning rule, we can derive a learning rule for the more general case of a network of N arbitrarily connected leaky integrate-and-fire neurons. Our goal is to understand how such networks can learn to optimally represent a J-dimensional signal x = (x₁, . . . , x_J), using the read-out equation ˙x̂ = −x̂ + Γo. We consider a network with the following membrane potential dynamics:

˙V = −V + Γᵀc − Ωo,   (20)

where c is a J-dimensional input. We assume that this input is related to the signal according to c = ˙x + x. This assumption can be relaxed by treating the input as the control for an arbitrary linear dynamical system, in which case the signal represented by the network is the output of such a computation [11]. However, this further generalisation is beyond the scope of this work. As before, we need to identify the optimal recurrent connectivity so that we have a target for learning. Most generally, the optimal recurrent connectivity is Ωᵒᵖᵗ ≡ ΓᵀΓ + µI. The output kernels of the individual neurons, Γᵢ, are given by the rows of Γ, and their spiking thresholds by Tᵢ ≡ ∥Γᵢ∥²/2 + µ/2. With these connections and thresholds, we find that a network of integrate-and-fire neurons will produce spike trains in such a way that the loss function L = ∥x − x̂∥² + µ∥ō∥² is minimised, where the read-out is given by x̂ = Γō. We can show this by prescribing a greedy⁵ spike rule: a spike is fired by neuron i whenever L(no spike in i) > L(spike in i) [11]. The resulting spike-generation rule is

Vᵢ > Tᵢ,   (21)

where Vᵢ ≡ Γᵢᵀ(x − x̂) − µōᵢ is interpreted as the membrane potential.

⁴From the definition of the membrane potential we can see that Vₖ ∼ O(1/N), because Γ ∼ 1/N. Therefore, the size of the first term in Eqn. (18) is ∑ₖ Vₖδₖᵢōⱼ = Vᵢōⱼ ∼ O(1/N), and the second term can be ignored if ∑ₖₗ VₖΩₖₗ∂ōₗ/∂Ωᵢⱼ ≪ O(1/N). This happens if Ωₖₗ ≪ O(1/N²), as at the start of learning. It also happens towards the end of learning if the terms {Ωₖₗ∂ōₗ/∂Ωᵢⱼ} are weakly correlated with zero mean, or if the membrane potentials {Vᵢ} are weakly correlated with zero mean.

⁵Despite being greedy, this spiking rule can generate firing rates that are practically identical to the optimal solutions: we checked this numerically in a large ensemble of networks with randomly chosen kernels.
Figure 2: Learning in a heterogeneous network. (A) A network of neurons represents an input signal x by producing an output x̂. (B) During learning, the loss L decreases (top panel). The difference between the connection strengths and the optimal strengths also decreases (middle panel), as quantified by the mean difference (solid line), given by D = ∥Ω − Ωᵒᵖᵗ∥²/∥Ωᵒᵖᵗ∥², and the maximum difference (dashed line), given by maxᵢ,ⱼ((Ωᵢⱼ − Ωᵒᵖᵗᵢⱼ)²/(Ωᵒᵖᵗᵢⱼ)²). The mean population firing rate (solid line, bottom panel) also converges towards the optimal firing rate (dashed line, bottom panel). (C, E) Before learning, a raster plot of population spiking shows that neurons produce bursts of spikes (upper panel). The network output x̂ (red line, middle panel) fails to represent x (blue line, middle panel). The excitatory input (red, bottom left panel) and inhibitory input (green, bottom left panel) to a randomly selected neuron are not tightly balanced. Furthermore, a histogram of interspike intervals shows that spiking activity is not Poisson, as indicated by the red line that represents a best-fit exponential distribution. (D, F) At the end of learning, spiking activity is irregular and Poisson-like, excitatory and inhibitory input is tightly balanced, and x̂ matches x.

How can we learn this optimal connection matrix? As before, we can derive a learning rule by minimising the cost function L_V = ∥V∥²/2. This leads to a Hebbian learning rule with the same form as before:

τ ˙Ωᵢⱼ = Vᵢōⱼ.   (22)

Again, we assume that the neural resets are given by −Tᵢ. Furthermore, in order for this learning rule to work, we must assume that the network input explores all possible directions in the J-dimensional input space (since the kernels Γᵢ can point in any of these directions). The learning performance does not critically depend on how the input variable space is sampled, as long as the exploration is extensive. In our simulations, we randomly sample the input c from a Gaussian white-noise distribution at every time step for the entire duration of the learning. We find that this learning rule decreases the loss function L, thereby approaching optimal network connectivity and producing optimal firing rates for our linear decoder (Fig. 2B). In this example, we have chosen connectivity that is initially much too weak at the start of learning.
Consequently, the initial network behaviour is similar to a collection of unconnected single neurons that ignore each other. Spike trains are not Poisson-like, firing rates are excessively large, excitatory and inhibitory input is unbalanced and the decoded variable ˆx is highly unreliable (Fig. 2C, E). As a result of learning, the network becomes tightly balanced and the spike trains become asynchronous, irregular and Poisson-like with much lower rates (Fig. 2D, F). However, despite this apparent variability, the population representation is extremely precise, limited only by the metabolic cost and the discrete nature of a spike. This learnt representation is far more precise than a rate code with independent Poisson spike trains [11]. In particular, shuffling the spike trains in response to identical inputs drastically degrades this precision.

4 Conclusions and Discussion

In population coding, large trial-to-trial spike train variability is usually interpreted as noise [2]. We show here that a deterministic network of leaky integrate-and-fire neurons with a simple Hebbian plasticity rule can self-organise into a regime where information is represented far more precisely than in noisy rate codes, while appearing to have noisy Poisson-like spiking dynamics. Our learning rule (Eqn. (22)) has the basic properties of STDP. Specifically, a presynaptic spike occurring immediately before a postsynaptic spike will potentiate a synapse, because membrane potentials are positive immediately before a postsynaptic spike. Furthermore, a presynaptic spike occurring immediately after a postsynaptic spike will depress a synapse, because membrane potentials are always negative immediately after a postsynaptic spike. This is similar in spirit to the STDP rule proposed in [12], but different to classical STDP, which depends on postsynaptic spike times [9].
This learning rule can also be understood as a mechanism for generating a tight balance between excitatory and inhibitory input. We can see this by observing that membrane potentials after learning can be interpreted as representation errors (projected onto the read-out kernels). Therefore, learning acts to minimise the magnitude of membrane potentials. Excitatory and inhibitory input must be balanced if membrane potentials are small, so we can equate balance with optimal information representation. Previous work has shown that the balanced regime produces (quasi-)chaotic network dynamics, thereby accounting for much observed cortical spike train variability [13, 14, 4]. Moreover, the STDP rule has been known to produce a balanced regime [16, 17]. Additionally, recent theoretical studies have suggested that the balanced regime plays an integral role in network computation [15, 13]. In this work, we have connected these mechanisms and functions, to conclude that learning this balance is equivalent to the development of an optimal spike-based population code, and that this learning can be achieved using a simple Hebbian learning rule.

Acknowledgements

We are grateful for generous funding from the Emmy Noether grant of the Deutsche Forschungsgemeinschaft (CKM) and the Chaire d'excellence of the Agence Nationale de la Recherche (CKM, DB), as well as a James McDonnell Foundation Award (SD) and EU grants BACS FP6-IST-027140, BIND MECT-CT-20095-024831, and ERC FP7-PREDSPIKE (SD).

References

[1] Tolhurst D, Movshon J, Dean A (1982) The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res 23: 775–785.
[2] Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci 18(10): 3870–3896.
[3] Zohary E, Newsome WT (1994) Correlated neuronal discharge rate and its implication for psychophysical performance. Nature 370: 140–143.
[4] Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD (2010) The asynchronous state in cortical circuits. Science 327: 587–590.
[5] Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, Tolias AS (2010) Decorrelated neuronal firing in cortical microcircuits. Science 327: 584–587.
[6] Okun M, Lampl I (2008) Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat Neurosci 11: 535–537.
[7] Shu Y, Hasenstaub A, McCormick DA (2003) Turning on and off recurrent balanced cortical activity. Nature 423: 288–293.
[8] Gentet LJ, Avermann M, Matyas F, Staiger JF, Petersen CCH (2010) Membrane potential dynamics of GABAergic neurons in the barrel cortex of behaving mice. Neuron 65: 422–435.
[9] Caporale N, Dan Y (2008) Spike-timing-dependent plasticity: a Hebbian learning rule. Annu Rev Neurosci 31: 25–46.
[10] Boerlin M, Deneve S (2011) Spike-based population coding and working memory. PLoS Comput Biol 7: e1001080.
[11] Boerlin M, Machens CK, Deneve S (2012) Predictive coding of dynamic variables in balanced spiking networks. Under review.
[12] Clopath C, Büsing L, Vasilaki E, Gerstner W (2010) Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci 13(3): 344–352.
[13] van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Comput 10(6): 1321–1371.
[14] Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory neurons. J Comput Neurosci 8: 183–208.
[15] Vogels TP, Rajan K, Abbott LF (2005) Neural network dynamics. Annu Rev Neurosci 28: 357–376.
[16] Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W (2011) Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334(6062): 1569–1573.
[17] Song S, Miller KD, Abbott LF (2000) Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci 3(9): 919–926.
Collaborative Ranking With 17 Parameters

Maksims N. Volkovs, University of Toronto, mvolkovs@cs.toronto.edu
Richard S. Zemel, University of Toronto, zemel@cs.toronto.edu

Abstract

The primary application of collaborative filtering (CF) is to recommend a small set of items to a user, which entails ranking. Most approaches, however, formulate the CF problem as rating prediction, overlooking the ranking perspective. In this work we present a method for collaborative ranking that leverages the strengths of the two main CF approaches, neighborhood- and model-based. Our novel method is highly efficient, with only seventeen parameters to optimize and a single hyperparameter to tune, and beats the state-of-the-art collaborative ranking methods. We also show that parameters learned on datasets from one item domain yield excellent results on a dataset from a very different item domain, without any retraining.

1 Introduction

Collaborative Filtering (CF) is a method of making predictions about an individual's preferences based on the preference information from many users. The emerging popularity of web-based services such as Amazon, YouTube, and Netflix has led to significant developments in CF in recent years. Most applications use CF to recommend a small set of items to the user. For instance, Amazon presents a list of top-T products it predicts a user is most likely to buy next. Similarly, Netflix recommends top-T movies it predicts a user will like based on his/her rating and viewing history. However, while recommending a small ordered list of items is a ranking problem, ranking in CF has gained relatively little attention from the learning-to-rank community. One possible reason for this is the Netflix challenge [3], which was the primary venue for CF model development and evaluation in recent years.
The challenge was formulated as a rating prediction problem, and almost all of the proposed models were designed specifically for this task and were evaluated using the normalized squared error objective. Another potential reason is the absence of user-item features. The standard learning-to-rank problem in information retrieval (IR), which is well explored with many powerful approaches available, always includes item features, which are used to learn the models. These features incorporate a lot of external information and are highly engineered to accurately describe the query-document pairs. While a similar approach can be taken in CF settings, it is likely to be very time consuming to develop analogous features, and features developed for one item domain (books, movies, songs etc.) are unlikely to generalize well to another. Moreover, user features typically include personal information which cannot be publicly released, preventing open research in the area. An example of this is the second part of the Netflix challenge, which had to be shut down due to privacy concerns. The absence of user-item features makes it very challenging to apply the models from the learning-to-rank domain to this task. However, recent work [23, 15, 2] has shown that by optimizing a ranking objective given just the known ratings, a significantly higher ranking accuracy can be achieved compared to models that optimize rating prediction. Inspired by these results we propose a new ranking framework where we show how the observed ratings can be used to extract effective feature descriptors for every user-item pair. The features do not require any external information and make it possible to apply any learning-to-rank method to optimize the parameters of the ranking function for the target metric. Experiments on MovieLens and Yahoo! datasets show that our model outperforms existing rating and ranking approaches to CF.
Moreover, we show that a model learned with our approach on a dataset from one user/item domain can then be applied to a different domain without retraining and still achieve excellent performance.

2 Collaborative Ranking Framework

In a typical collaborative filtering (CF) problem we are given a set of N users U = {u1, ..., uN} and a set of M items V = {v1, ..., vM}. The users' ratings of the items can be represented by an N × M matrix R where R(un, vm) is the rating assigned by user un to item vm and R(un, vm) = 0 if vm is not rated by un. We use U(vm) to denote the set of all users that have rated vm and V(un) to denote the set of items that have been rated by un. We use vector notation: R(un, :) denotes the n'th row of R (1 × M vector), and R(:, vm) denotes the m'th column (N × 1 vector). As mentioned above, most research has concentrated on the rating prediction problem in CF, where the aim is to accurately predict the ratings for the unrated items for each user. However, most applications that use CF typically aim to recommend only a small ranked set of items to each user. Thus rather than concentrating on rating prediction we instead approach this problem from the ranking viewpoint and refer to it as Collaborative Ranking (CR). In CR the goal is to rank the unrated items in the order of relevance to the user. A ranking of the items V can be represented as a permutation π : {1, ..., M} → {1, ..., M} where π(m) = l denotes the rank of the item vm and m = π−1(l). A number of evaluation metrics have been proposed in IR to evaluate the performance of the ranking. Here we use the most commonly used metric, Normalized Discounted Cumulative Gain (NDCG) [12]. For a given user un and ranking π the NDCG is given by:

NDCG(un, π, R)@T = (1 / GT(un, R)) Σt=1..T (2^ˆR(un, vπ−1(t)) − 1) / log(t + 1)   (1)

where T is a truncation constant, vπ−1(t) is the item in position t in π and GT(un, R) is a normalizing term which ensures that NDCG ∈ [0, 1] for all rankings.
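As an illustration, Eq. (1) can be computed directly. The helper below is a sketch (its name and interface are ours, not the paper's): it takes the true ratings of the items in predicted rank order, and normalizes by the DCG of the ideal (rating-sorted) ordering so that a perfect ranking scores 1.

```python
import numpy as np

def ndcg_at_t(ranked_ratings, T):
    """NDCG@T as in Eq. (1): gain 2^r - 1, discount log(t + 1).

    `ranked_ratings` lists the true ratings of the items in predicted rank
    order; the normalizer G_T is the DCG of the ideal ordering.
    """
    def dcg(rs):
        rs = np.asarray(rs, dtype=float)[:T]
        t = np.arange(1, len(rs) + 1)
        # the log base cancels in the DCG/ideal-DCG ratio
        return float(np.sum((2.0 ** rs - 1.0) / np.log(t + 1.0)))

    ideal = dcg(sorted(ranked_ratings, reverse=True))
    return dcg(ranked_ratings) / ideal if ideal > 0 else 0.0
```

A perfect ordering of the ratings [5, 4, 3, 2, 1] gives NDCG@3 = 1, while the reversed ordering scores strictly lower, reflecting the emphasis on the top-T positions.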
T is typically set to a small value to emphasize that the user will only be shown the top-T ranked items and the items below the top-T are not evaluated.

3 Related Work

Related work in CF and CR can be divided into two categories: neighborhood-based approaches and model-based approaches. In this section we describe both types of models.

3.1 Neighborhood-Based Approaches

Neighborhood-based CF approaches estimate the unknown ratings for a target user based on the ratings from the set of neighborhood users that tend to rate similarly to the target user. Formally, given the target user un and item vm the neighborhood-based methods find a subset of K neighbor users who are most similar to un and have rated vm, i.e., are in the set U(vm) \ un. We use K(un, vm) ⊆ U(vm) \ un to denote the set of K neighboring users. A central component of these methods is the similarity function ψ used to compute the neighbors. Several such functions have been proposed, including the Cosine Similarity [4] and the Pearson Correlation [20, 10]:

ψcos(un, u′) = (R(un, :) · R(u′, :)T) / (∥R(un, :)∥ ∥R(u′, :)∥)

ψpears(un, u′) = ((R(un, :) − µ(un)) · (R(u′, :) − µ(u′))T) / (∥R(un, :) − µ(un)∥ ∥R(u′, :) − µ(u′)∥)

where µ(un) is the average rating for un. Once the K neighbors are found the rating is predicted by taking the weighted average of the neighbors' ratings. An analogous item-based approach [22] can be used when the number of items is smaller than the number of users. One problem with the neighborhood-based approaches is that the raw ratings often contain user bias. For instance, some users tend to give high ratings while others tend to give low ones. To correct for these biases various methods have been proposed to normalize or center the ratings [4, 20] before computing the predictions. Another major problem with the neighborhood-based approaches arises from the fact that the observed rating matrix R is typically highly sparse, making it very difficult to find similar neighbors reliably.
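For concreteness, the two similarity functions above can be sketched as follows. The treatment of unrated (zero) entries in the Pearson centring is an assumption on our part (mean over the user's observed ratings only), since the paper leaves that detail implicit.

```python
import numpy as np

def cosine_sim(r_u, r_v):
    """Cosine similarity between two users' rating rows (0 = unrated)."""
    denom = np.linalg.norm(r_u) * np.linalg.norm(r_v)
    return float(r_u @ r_v) / denom if denom > 0 else 0.0

def pearson_sim(r_u, r_v):
    """Pearson correlation, centring each row by that user's mean rating.

    The mean is taken over the user's observed (non-zero) ratings, one
    common convention; unrated entries stay at zero after centring.
    """
    def centred(r):
        mask = r > 0
        mu = r[mask].mean() if mask.any() else 0.0
        return np.where(mask, r - mu, 0.0)

    cu, cv = centred(r_u), centred(r_v)
    denom = np.linalg.norm(cu) * np.linalg.norm(cv)
    return float(cu @ cv) / denom if denom > 0 else 0.0
```

Both functions return 1 for identical rating rows and values near 0 for unrelated ones, so either can serve as ψ when selecting neighbors.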
To address this sparsity, most methods employ dimensionality reduction [9] and data smoothing [24] to fill in some of the unknown ratings, or to cluster users before computing user similarity. This however adds computational overhead and typically requires tuning additional parameters such as the number of clusters. A neighborhood-based approach to ranking has been proposed recently by Liu & Yang [15]. Instead of predicting ratings, this method uses the neighbors of un to fill in the missing entries in the M × M pairwise preference matrix Yn, where Yn(vm, vl) is the preference strength for vm over vl by un. Once the matrix is completed an approximate Markov chain algorithm is used to infer the ranking from the pairwise preferences. The main drawback of this approach is that the model is not optimized for the target evaluation metric, such as NDCG. The ranking is inferred directly from Yn and no additional parameters are learned. In general, to the best of our knowledge, no existing neighborhood-based CR method takes the target metric into account during optimization.

3.2 Model-Based Approaches

In contrast to the neighborhood-based approaches, the model-based approaches use the observed ratings to create a compact model of the data which is then used to predict the unobserved ratings. Methods in this category include latent models [11, 16, 21], clustering methods [24] and Bayesian networks [19]. Latent factorization models such as Probabilistic Matrix Factorization (PMF) [21] are the most popular model-based approaches. In PMF every user un and item vm are represented by latent vectors φ(un) and φ(vm) of length D. For a given user-item pair (un, vm) the dot product of the corresponding latent vectors gives the rating prediction: R(un, vm) ≈ φ(un) · φ(vm). The latent representations are learned by minimizing the squared error between the observed ratings and the predicted ones.
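A minimal PMF-style sketch follows, assuming plain SGD on the regularised squared error over observed entries only; the toy sizes, learning rate and L2 weight are our assumptions, not the paper's settings (which suggest D = 20 on Netflix).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D = 30, 40, 5                     # toy numbers of users, items, factors
R = rng.integers(1, 6, size=(N, M)).astype(float)
R[rng.random((N, M)) < 0.7] = 0.0       # leave ~30% of entries observed

phi_u = 0.1 * rng.standard_normal((N, D))   # user latent vectors phi(u_n)
phi_v = 0.1 * rng.standard_normal((M, D))   # item latent vectors phi(v_m)
lr, lam = 0.02, 0.05                        # assumed step size and L2 weight

users, items = np.nonzero(R)

def observed_mse():
    return float(np.mean([(R[n, m] - phi_u[n] @ phi_v[m]) ** 2
                          for n, m in zip(users, items)]))

mse_before = observed_mse()
# SGD: for each observed rating, step both latent vectors along the
# gradient of the squared error plus the L2 penalty
for _ in range(100):
    for n, m in zip(users, items):
        err = R[n, m] - phi_u[n] @ phi_v[m]
        phi_u[n] += lr * (err * phi_v[m] - lam * phi_u[n])
        phi_v[m] += lr * (err * phi_u[n] - lam * phi_v[m])
mse_after = observed_mse()
```

After training, predictions for any pair are just `phi_u[n] @ phi_v[m]`, which also illustrates the inference cost the text mentions: a new user's vector must itself be learned before any prediction is possible.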
Latent models have more expressive power and typically perform better than the neighborhood-based models when the number of observed ratings is small because they are able to learn preference correlations that extend beyond the simple neighborhood similarity. However, this comes at the cost of a large number of parameters and complex optimization. For example, with the suggested setting of D = 20 the PMF model on the full Netflix dataset has over 10 million parameters and is prone to overfitting. To prevent overfitting the weighted ℓ2 norms of the latent representations are minimized together with the squared error during the optimization phase, which introduces additional hyperparameters to tune. Another problem with the majority of the model-based approaches is that inference for a new user/item is typically expensive. For instance, in PMF the latent representation has to be learned before any predictions can be made for a new user/item, and if many new users/items are added the entire model has to be retrained. On the other hand, inference for a new user in neighborhood-based methods can be done efficiently by simply computing the K neighbors, which is a key advantage of these approaches. Several model-based approaches to CR have recently been proposed, notably CofiRank [23] and the PMF-based ranking model [2]. CofiRank learns latent representations that minimize a ranking-based loss instead of the squared error. The PMF-based approach uses the latent representations produced by PMF as user-item features and learns a ranking model on these features. The authors of that work also note that the PMF representations might not be optimal for ranking since they are learned using a squared error objective which is very different from most ranking metrics. To account for this they propose an extension where both user-item features and the weights of the ranking function are optimized during learning.
Both methods incorporate NDCG during the optimization phase, which is a significant advantage over most neighborhood-based approaches to CR. However, neither method addresses the optimization or inference problems mentioned above. In the following section we present our approach to CR which leverages the advantages of both neighborhood- and model-based methods.

3.3 Learning-to-Rank

Learning-to-rank has received a lot of attention in the machine learning community due to its importance in a wide variety of applications ranging from information retrieval to natural language processing to computer vision. In IR the learning-to-rank problem consists of a set of training queries where for each query we are given a set of retrieved documents and their relevance labels that indicate the degree of relevance to the query. The documents are represented as query dependent feature vectors and the goal is to learn a feature-based ranking function to rank the documents in the order of relevance to the query. Existing approaches to this problem can be partitioned into three categories: pointwise, pairwise, and listwise. Due to the lack of space we omit the description of the individual approaches here and instead refer the reader to [14] for an excellent overview.

Figure 1: An example rating matrix R and the resulting WIN, LOSS and TIE matrices for the user-item pair (u3, v4) with K = 3 (number of neighbors). (1) Top-3 closest neighbors {u1, u5, u6} are selected from U(v4) = {u1, u2, u5, u6} (all users who rated v4). Note that u2 is not selected because the ratings for u2 deviate significantly from those for u3. (2) The WIN, LOSS and TIE matrices are computed for each neighbor using Equation 2. Here g ≡ 1 is used to compute the matrices. For example, u5 gave a rating of 3 to v4, which ties it with v3 and beats v1. Normalizing by |V(u5)| − 1 = 2 gives WIN34(u5) = 0.5, LOSS34(u5) = 0 and TIE34(u5) = 0.5.
4 Our Approach

The main idea behind our approach is to transform the CR problem into a learning-to-rank one and then utilize one of the many developed ranking methods to learn the ranking function. CR can be placed into the learning-to-rank framework by noting that the users correspond to queries and items to documents. For each user the observed ratings indicate the relevance of the corresponding items to that user and can be used to train the ranking function. The key difference between this setup and the standard learning-to-rank one is the absence of user-item features. In this work we bridge this gap and develop a robust feature extraction approach which does not require any external user or item information and is based only on the available training ratings.

4.1 Feature Extraction

The PMF-based ranking approach [2] extracts user-item features by concatenating together the latent representations learned by the PMF model. The model thus requires the user-item representations to be learned before the items can be ranked and hence suffers from the main disadvantages of the model-based approaches: the large number of parameters, complex optimization, and expensive inference for new users and items. In this work we take a different approach which avoids these disadvantages. We propose to use the neighbor preferences to extract the features for a given user-item pair. Formally, given a user-item pair (un, vm) and a similarity function ψ, we use ψ to extract a subset of the K most similar users to un that rated vm, i.e., K(un, vm). This step is identical to the standard neighborhood-based model, and ψ can be any rating or preference based similarity function.
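This neighbor-selection step can be sketched as below; the function names and the inline cosine similarity are illustrative, not from the paper.

```python
import numpy as np

def cosine(r_u, r_v):
    """Cosine similarity between two rating rows (0 = unrated)."""
    d = np.linalg.norm(r_u) * np.linalg.norm(r_v)
    return float(r_u @ r_v) / d if d > 0 else 0.0

def neighbors(R, n, m, K, sim=cosine):
    """K(u_n, v_m): the K users most similar to u_n among U(v_m) \\ u_n.

    R is the rating matrix with 0 marking unrated entries.
    """
    candidates = [u for u in range(R.shape[0]) if u != n and R[u, m] > 0]
    candidates.sort(key=lambda u: -sim(R[n], R[u]))
    return candidates[:K]
```

When fewer than K candidates exist (an unpopular item), the returned list is simply shorter, matching the shrinking behaviour described below.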
Once K(un, vm) = {uk}k=1..K is found, instead of using only the ratings for vm, we use all of the observed ratings for each neighbor and summarize the net preference for vm into three K × 1 summary preference matrices WINnm, LOSSnm and TIEnm:

WINnm(k) = (1 / (|V(uk)| − 1)) Σ v′∈V(uk)\vm g(R(uk, vm), R(uk, v′)) I[R(uk, vm) > R(uk, v′)]

LOSSnm(k) = (1 / (|V(uk)| − 1)) Σ v′∈V(uk)\vm g(R(uk, vm), R(uk, v′)) I[R(uk, vm) < R(uk, v′)]

TIEnm(k) = (1 / (|V(uk)| − 1)) Σ v′∈V(uk)\vm I[R(uk, vm) = R(uk, v′)]   (2)

where I[x] is an indicator function evaluating to 1 if x is true and to 0 otherwise, and g : R2 → R is the pairwise preference function used to convert ratings to pairwise preferences. A simple choice for g is g ≡ 1, which ignores the rating magnitude and turns the matrices into normalized counts. However, recent work in preference aggregation [8, 13] has shown that additional gain can be achieved by taking the relative rating magnitude into account by using either the normalized rating or log rating difference. All three versions of g address the user bias problem mentioned above by using relative comparisons rather than the absolute rating magnitude. In this form WINnm(k) corresponds to the net positive preference for vm by neighbor uk. Similarly, LOSSnm(k) corresponds to the net negative preference and TIEnm(k) counts the number of ties. Together the three matrices thus describe the relative preferences for vm across all the neighbors of un. Normalization by |V(uk) \ vm| (the number of observed ratings for uk excluding vm) ensures that the entries are comparable across neighbors with different numbers of ratings. For unpopular items vm that do not have many ratings, with |U(vm)| < K, the number of neighbors will be less than K, i.e., |K(un, vm)| < K. When such an item is encountered we shrink the preference matrices to be the same size as |K(un, vm)|. Figure 1 shows an example rating matrix R together with the preference matrices computed for the user-item pair (u3, v4).
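Eq. (2) translates directly into code; the sketch below (our names, not the paper's) uses g ≡ 1 and, with a row like u5 in Figure 1, reproduces WIN = 0.5, LOSS = 0, TIE = 0.5.

```python
import numpy as np

def preference_matrices(R, neighbor_ids, m, g=lambda a, b: 1.0):
    """WIN/LOSS/TIE entries of Eq. (2) for item index m over a neighbor list.

    R is the rating matrix with 0 = unrated; g is the pairwise preference
    function (g == 1 gives normalized counts).
    """
    win, loss, tie = [], [], []
    for k in neighbor_ids:
        rated = np.nonzero(R[k])[0]
        others = rated[rated != m]               # V(u_k) \ v_m
        denom = max(len(others), 1)
        r_m = R[k, m]
        win.append(sum(g(r_m, R[k, v]) for v in others if r_m > R[k, v]) / denom)
        loss.append(sum(g(r_m, R[k, v]) for v in others if r_m < R[k, v]) / denom)
        tie.append(sum(1.0 for v in others if r_m == R[k, v]) / denom)
    return np.array(win), np.array(loss), np.array(tie)
```

Swapping in a magnitude-aware g, such as a normalized or log rating difference, requires no other change to this routine.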
Given the preference matrix WINnm we summarize it with a set of simple descriptive statistics:

γ(WINnm) = [ µ(WINnm), σ(WINnm), max(WINnm), min(WINnm), (1/K) Σk I[WINnm(k) ≠ 0] ]

where µ and σ are the mean and standard deviation functions respectively. The last statistic gives the fraction of neighbors (out of K) that express any positive preference towards vm, and together with σ summarizes the overall confidence of the preference. Extending this procedure to the other two preference matrices and concatenating the resulting statistics gives the feature vector for (un, vm):

γ(un, vm) = [γ(WINnm), γ(LOSSnm), γ(TIEnm)]   (3)

Intuitively the features describe the net preference for vm and its variability across the neighbors. Note that since γ is independent of K, N and M this representation will have the same length for every user-item pair. We have thus created a fixed length feature representation for every user-item pair, effectively transforming the CR problem into a standard learning-to-rank one. During training our aim is now to use the observed training ratings to learn a scoring function f : R|γ| → R which maximizes the target IR metric, such as NDCG, across all users. At test time, given a user u and items {v1, ..., vM}, we (1) extract features for each item vm using the neighbors of (u, vm); (2) apply the learned scoring function to get the score for every item; and (3) sort the scores to produce the ranking. This process is shown in Figure 2.

Figure 2: The flow diagram for WLT, our feature-based CR model.

It is important to note here that, first, a single scoring function is learned for all users and items so the number of parameters is independent of the number of users or items and only depends on the size of γ. This is a significant advantage over most model-based approaches where the number of parameters typically scales linearly with the number of users and/or items.
Second, given a new user u no optimization is necessary to produce a ranking of the items for u. Similarly to neighborhood-based methods, our approach only requires computing the neighbors to extract the features and applying the learned scoring function to get the ranking. This is also a significant advantage over most user-based approaches where it is typically necessary to learn a new model for every user not present in the training data before predictions can be made. Finally, unlike the existing neighborhood-based methods for CR, our approach allows us to optimize the parameters of the model for the target metric. Moreover, the extracted features incorporate preference confidence information such as the variance across the neighbors and the fraction of the neighbors that generated each preference type (positive, negative and tie). Taking this information into account allows us to adapt the parameters of the scoring function to sparse low-confidence settings and addresses the reliability problem of the neighborhood-based methods (see Section 3.1). Note that an analogous item-based approach can be taken here by similarly summarizing the preferences of un for items that are closest to vm; we leave this for future work. A modified version of this approach adapted to binary ratings recently placed second in the Million Song Dataset Challenge [18] run by Kaggle.

4.2 Learning the Scoring Function

Given the user-item features extracted based on the neighbors, our goal is to use the observed training ratings for each user to optimize the parameters of the scoring function for the target IR metric. A key difference between this feature-based CR approach and the typical learning-to-rank setup is the possibility of missing features. If a given training item vm is not rated by any other user except un the feature vector is set to zero (γ(un, vm) ≡ 0). One way to avoid missing features is to learn only with those items that have at least ϵ ratings in the training set.
However, in very sparse settings this would force us to discard some of the valuable training data. We take a different approach, modifying the conventional linear scoring function to include an additional bias term b0:

f(γ(un, vm), W) = w · γ(un, vm) + b + I[U(vm) \ un = ∅] b0   (4)

where W = {w, b, b0} is the set of free parameters to be learned. Here w has the same dimension as γ, and I is an indicator function. The bias term b0 provides a base score for vm if vm does not have enough ratings in the training data. Several possible extensions of this model are worth mentioning here. First, the scoring function can be made non-linear by adding additional hidden layer(s) as done in conventional multilayer neural networks. Second, user information can be incorporated into the model by learning user specific weights. To incorporate user information we can learn a separate set of weights wn for each user un or group of users. The weights provide user specific information and are then applied to rank the unrated items for the corresponding user(s). However, this extension makes the approach similar to the model-based approaches, with all the corresponding disadvantages mentioned above. Finally, additional user/item information, such as personal information for users and description/genre for items, can be incorporated by simply concatenating it with γ(un, vm) and expanding the dimensionality of W. Note that if these additional features can be extracted efficiently, incorporating them will not add significant overhead to either learning or inference and the model can still be applied to new users and items very efficiently. In the form given by Equation 4 our model has a total of |γ| + 2 parameters to be learned. We can use any of the developed learning-to-rank approaches to optimize W. In this work we chose to use the LambdaRank method, due to its excellent performance, having recently won the Yahoo! Learning-To-Rank Challenge [7].
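Putting Eqs. (3) and (4) together, the feature vector and scoring function can be sketched as follows (helper names are ours, not the paper's). Since γ concatenates five statistics for each of the three preference matrices, |γ| = 15, and with the two biases b and b0 this accounts for the 17 parameters of the title.

```python
import numpy as np

def stats(pref):
    """Five summary statistics of one preference matrix: mean, standard
    deviation, max, min, and the fraction of neighbors with a non-zero entry."""
    pref = np.asarray(pref, dtype=float)
    return np.array([pref.mean(), pref.std(), pref.max(), pref.min(),
                     np.count_nonzero(pref) / len(pref)])

def gamma(win, loss, tie):
    """gamma(u_n, v_m) of Eq. (3): concatenated WIN/LOSS/TIE statistics."""
    return np.concatenate([stats(win), stats(loss), stats(tie)])

def score(g, w, b, b0, has_other_raters):
    """Eq. (4): linear score, plus the base-score bias b0 when no other
    user rated the item (so the feature vector is all zero)."""
    return float(w @ g) + b + (0.0 if has_other_raters else b0)

def rank(features, rater_flags, w, b, b0):
    """Score every candidate item and return its indices, best first."""
    s = [score(g, w, b, b0, h) for g, h in zip(features, rater_flags)]
    return sorted(range(len(s)), key=lambda i: -s[i])
```

At test time this is the whole inference path: extract γ for each candidate item, score, and sort; no per-user optimization is involved.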
We omit the description of LambdaRank here due to the lack of space, and refer the reader to [6] and [5] for a detailed description.

5 Experiments

To validate the proposed approach we conducted extensive experiments on three publicly available datasets: two movie datasets, MovieLens-1 and MovieLens-2, and a musical artist dataset from Yahoo! [1]. All datasets were kept as is except Yahoo!, which we subsampled by first selecting the 10,000 most popular items and then selecting the 100,000 users with the most ratings. The subsampling was done to speed up the experiments as the original dataset has close to 2 million users and 100,000 items. In addition to subsampling we rescaled user ratings from 0-100 to the 1-5 interval to make the data consistent with the other two datasets. The rescaling was done by mapping 0-19 to 1, 20-39 to 2, etc. The user, item and rating statistics are summarized in Table 1. To investigate the effect that the number of ratings has on accuracy we follow the framework of [23, 2].

Table 1: Dataset statistics.
Dataset        Users     Items    Ratings
MovieLens-1    1,000     1,700    100,000
MovieLens-2    72,000    10,000   10,000,000
Yahoo!         100,000   10,000   45,729,723

For each dataset we randomly select 10, 20, 30, 40 ratings from each user for training, 10 for validation, and test on the remaining ratings. Users with fewer than 30, 40, 50, 60 ratings were removed to ensure that we could evaluate on at least 10 ratings for each user. Note that the number of test items varies significantly across users, with many users having more test ratings than training ones. This simulates the real-life CR scenario where the set of unrated items from which the recommendations are generated is typically much larger than the rated item set for each user. We trained our ranking model, referred to as WLT, using stochastic gradient descent with the learning rates 10−2, 10−3, 10−4 for MovieLens-1, MovieLens-2 and Yahoo! respectively.
We found that 1 to 2 iterations1 was sufficient to train the models. We also found that using smaller learning rates typically resulted in better generalization. We compare WLT with a well established user-based (UB) collaborative filtering model. We also compare with two collaborative ranking models: the PMF-based ranker [2] (PMF-R) and CofiRank [23] (CO). To make the comparison fair we used the same LambdaRank architecture to train both WLT and PMF-R. Note that both PMF-R and CofiRank report state-of-the-art CR results. To compute the PMF features we used extensive cross-validation to determine the L2 penalty weights and the latent dimension size D (5, 10, 10 for MovieLens-1, MovieLens-2, and Yahoo! datasets respectively). For CofiRank we used the settings suggested

1 Note that one iteration of stochastic gradient descent corresponds to |U| weight updates.

Table 2: Collaborative Ranking results. NDCG values at different truncation levels are shown within the main columns, which are split based on the number of training ratings. Each model's rounded number of parameters is shown in brackets, with K = thousand, M = million.
                   10                    20                    30                    40
              N@1   N@3   N@5       N@1   N@3   N@5       N@1   N@3   N@5       N@1   N@3   N@5
MovieLens-1:
UB           49.30 54.67 57.36     57.49 61.81 62.88     64.25 65.75 66.58     62.27 64.92 66.14
PMF-R(12K)   69.39 68.33 68.65     72.50 70.42 69.95     72.77 72.23 71.55     74.02 71.55 70.90
CO(240K)     67.28 66.23 66.59     71.82 70.80 70.30     71.60 71.15 70.58     71.43 71.64 71.43
WLT(17)      70.96 68.25 67.98     70.34 69.50 69.21     71.41 71.16 71.02     74.09 71.85 71.52
MovieLens-2:
UB           67.62 68.23 68.74     71.29 70.78 70.87     72.65 71.98 71.90     73.33 72.63 72.42
PMF-R(500K)  70.12 69.41 69.35     70.65 70.04 70.09     72.22 71.48 71.43     72.18 71.60 71.55
CO(5M)       70.14 68.40 68.46     68.80 68.51 68.76     64.60 65.62 66.38     62.82 63.49 64.25
WLT(17)      72.78 71.70 71.49     73.93 72.63 72.37     74.67 73.37 73.04     75.19 73.73 73.30
Yahoo!:
UB           57.20 55.29 54.31     64.29 61.48 60.16     66.82 63.83 62.42     68.97 65.89 64.50
PMF-R(1M)    52.86 51.98 51.53     63.93 62.42 61.65     66.82 65.41 64.61     69.46 68.05 67.21
CO(10M)      57.42 56.88 56.46     60.59 59.94 59.48     62.07 61.10 60.54     61.68 60.78 60.24
WLT(17)      58.76 55.20 53.53     66.06 62.77 61.21     69.74 66.58 65.02     71.50 68.52 67.00

in [23] and ran the code available on the author's home page. Similarly to [2], we found that the regression-based objective almost always gave the best results for CofiRank, consistently outperforming the NDCG and ordinal objectives. For WLT and UB models we use cosine similarity as the distance function to find the top-K neighbors. Note that using the same similarity function ensures that both models select the same neighbor sets and allows for a fair comparison. The number of neighbors K was cross-validated in the range [10, 100] on the small MovieLens-1 dataset and set to 200 on all other datasets, as we found the results to be insensitive to K above 100, which is consistent with the findings of [15]. In all experiments only ratings in the training set were used to select the neighbors and make predictions for the validation and test set items.

5.1 Results

The NDCG (N@T) results at truncations 1, 3 and 5 are shown in Table 2.
From the table we see that the WLT model performs comparably to the best baseline on MovieLens-1, outperforms all methods on MovieLens-2, and is also the best overall approach on Yahoo!. Across the datasets the gains are especially large at the lower truncations N@1 and N@3, which is important since those items are the ones most likely to be viewed by the user. Several patterns can also be seen from the table. First, when the number of users and ratings is small (MovieLens-1), the performance of the UB approach drops significantly. This is likely because neighbors cannot be found reliably in this setting, since users have little overlap in ratings. By taking into account confidence information, such as the number of available neighbors, WLT is able to significantly improve over UB while using the same set of neighbors; on MovieLens-1, WLT outperforms UB by as much as 20 NDCG points. Second, for larger datasets such as MovieLens-2 and Yahoo!, the model-based approaches have millions of parameters (shown in brackets in Table 2) to optimize and are highly prone to overfitting. Tuning the hyper-parameters for these models is difficult and computationally expensive in this setting, as it requires many cross-validation runs over large datasets. Our approach, on the other hand, achieves consistently better performance with only 17 parameters and a single hyper-parameter K, which is fixed to 200. Overall, the results demonstrate the robustness of the proposed features, which generalize well when both few and many users are available.

5.2 Transfer Learning Results

In addition to the small number of parameters, another advantage of our approach over most model-based methods is that inference for a new user only requires finding the K neighbors. Thus both users and items can be taken from a different set, unseen during training.
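This new-user inference step can be sketched in a few lines. The sketch below is a toy illustration only: the two per-item features (mean neighbor preference and neighbor count) and the linear weights `w` are simplified stand-ins for the paper's WLT features, not the actual feature set.

```python
import numpy as np

def score_items_for_new_user(R, new_user, w, K=2):
    """Score items for a user unseen at training time: find the top-K
    cosine neighbors among training users, build neighbor-based features
    per item, and apply a learned linear scoring function w."""
    sims = (R @ new_user) / (
        np.linalg.norm(R, axis=1) * np.linalg.norm(new_user) + 1e-12)
    nbrs = np.argsort(-sims)[:K]                 # top-K neighbors
    scores = []
    for j in range(R.shape[1]):
        rated = R[nbrs, j][R[nbrs, j] > 0]       # neighbors who rated item j
        mean_pref = rated.mean() if rated.size else 0.0
        scores.append(w[0] * mean_pref + w[1] * rated.size)
    return np.array(scores)

R = np.array([[5., 1., 0.],
              [5., 1., 4.],
              [1., 5., 2.]])                     # toy ratings (0 = unrated)
scores = score_items_for_new_user(R, np.array([5., 1., 0.]),
                                  w=np.array([1.0, 0.1]))
```

No retraining is needed for the new user: only neighbor search and feature extraction.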
This transfer learning task is much more difficult than the strong generalization task [17] commonly used to test CF methods on new users. In strong generalization the models are evaluated on users not present at training time while keeping the item set fixed, whereas here the item set also changes. Note that it is impossible to apply PMF-R, CO, and most other model-based methods to this setting without re-training the entire model. Our model, on the other hand, can be applied without re-training by simply extracting the features for every new user-item pair and applying the learned scoring function to rank the items. To test the generalization properties of the model we took the three learned WLT models (referred to as WLT-M1, WLT-M2, WLT-Y for MovieLens-1&2 and Yahoo! respectively) and applied each model to the datasets that it was not trained on.

Table 3: Transfer learning NDCG results. Original: WLT model trained on the respective dataset. WLT-M1 and WLT-M2 models are trained on MovieLens-1 and MovieLens-2 respectively; WLT-Y is trained on Yahoo!. WLT-M1, WLT-M2, and WLT-Y models are applied to the other datasets without retraining.

Training ratings:  10                    20                    30                    40
                   N@1   N@3   N@5       N@1   N@3   N@5       N@1   N@3   N@5       N@1   N@3   N@5
MovieLens-1:
  Original         70.96 68.25 67.98     70.34 69.50 69.21     71.41 71.16 71.02     74.09 71.85 71.52
  WLT-M2           63.15 62.46 62.75     69.66 68.61 68.47     71.02 70.99 70.88     73.28 71.70 71.46
  WLT-Y            44.12 47.06 48.75     61.73 62.60 63.57     67.33 66.99 67.99     71.11 69.22 68.95
MovieLens-2:
  Original         72.78 71.70 71.49     73.93 72.63 72.37     74.67 73.37 73.04     75.19 73.73 73.30
  WLT-M1           72.90 71.77 71.57     73.97 72.59 72.34     74.67 73.36 73.01     75.28 73.76 73.28
  WLT-Y            68.04 68.03 68.41     71.54 71.02 71.07     73.15 72.38 72.25     74.00 73.03 72.79
Yahoo!:
  Original         58.76 55.20 53.53     66.06 62.77 61.21     69.74 66.58 65.02     71.50 68.52 67.00
  WLT-M1           57.93 53.91 52.35     66.03 62.68 61.18     68.93 65.85 64.32     71.15 68.17 66.65
  WLT-M2           58.81 54.70 53.15     65.29 61.95 60.47     68.68 65.55 64.07     70.84 67.91 66.44
For instance, WLT-M1 was applied to MovieLens-2 and Yahoo!. Table 3 shows the transfer results for each of the datasets along with the original results for the WLT model trained on each dataset (referred to as Original). Note that none of the models were re-trained or tuned in any way.

Figure 3: Normalized WLT weights. White/black correspond to positive/negative weights; the weight magnitude is proportional to the square size.

From the table it is seen that our model generalizes very well to different domains. For instance, WLT-M1, trained on MovieLens-1, achieves state-of-the-art performance on MovieLens-2, outperforming all the baselines that were trained on MovieLens-2. Note that MovieLens-2 has over 5 times more items and 72 times more users than MovieLens-1, the majority of which the WLT-M1 model has not seen during training. Moreover, perhaps surprisingly, our model also generalizes well across item domains. The WLT-Y model trained on musical artist data achieves state-of-the-art performance on MovieLens-2 movie data, performing better than all the baselines when 20, 30, and 40 ratings are used for training. Moreover, both WLT-M1 and WLT-M2 achieve very competitive results on Yahoo!, outperforming most of the baselines. More insight into why the model generalizes well can be gained from Figure 3, which shows the normalized weights learned by the WLT models on each of the three datasets. The weights are partitioned into feature sets from each of the three preference matrices (see Equation 2). From the figure it can be seen that the learned weights share many similarities. The weights on the features from the WIN matrix are mostly positive while those on the features from the LOSS matrix are mostly negative. The mean preference and number-of-neighbors features have the highest absolute weights, which indicates that they are the most useful for predicting the item scores.
The similarity between the weight vectors suggests that the features convey very similar information and remain invariant across different user/item sets.

6 Conclusion

In this work we presented an effective approach to extracting user-item features based on neighbor preferences. The features allow us to apply any learning-to-rank approach to learn the ranking function. Experimental results show that state-of-the-art ranking results can be achieved using these features. Going forward, the strong transfer results call into question whether the complex machinery developed for CF is appropriate when the true goal is recommendation, as the information required for finding the best items to recommend can be obtained from basic neighborhood statistics. We are also currently investigating additional features, such as neighbors' rating overlap.

References

[1] The Yahoo! R1 dataset. http://webscope.sandbox.yahoo.com/catalog.php?datatype=r.
[2] S. Balakrishnan and S. Chopra. Collaborative ranking. In WSDM, 2012.
[3] J. Bennett and S. Lanning. The Netflix prize. www.cs.uic.edu/~liub/KDD-cup-2007/NetflixPrize-description.pdf.
[4] J. S. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In UAI, 1998.
[5] C. J. C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Technical Report MSR-TR-2010-82, 2010.
[6] C. J. C. Burges, R. Ragno, and Q. V. Le. Learning to rank with nonsmooth cost functions. In NIPS, 2007.
[7] O. Chapelle, Y. Chang, and T.-Y. Liu. The Yahoo! Learning To Rank Challenge. http://learningtorankchallenge.yahoo.com, 2010.
[8] D. F. Gleich and L.-H. Lim. Rank aggregation via nuclear norm minimization. In SIGKDD, 2011.
[9] K. Y. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2), 2001.
[10] J. Herlocker, J. A. Konstan, and J. Riedl.
An empirical analysis of design choices in neighborhood-based collaborative filtering algorithms. Information Retrieval, 5(4), 2002.
[11] T. Hofmann. Latent semantic models for collaborative filtering. ACM Trans. Inf. Syst., 22(1), 2004.
[12] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In SIGIR, 2000.
[13] X. Jiang, L.-H. Lim, Y. Yao, and Y. Ye. Statistical ranking and combinatorial Hodge theory. Mathematical Programming, 127, 2011.
[14] H. Li. Learning to Rank for Information Retrieval and Natural Language Processing. Morgan & Claypool, 2011.
[15] N. Liu and Q. Yang. EigenRank: A ranking-oriented approach to collaborative filtering. In SIGIR, 2008.
[16] B. Marlin. Modeling user rating profiles for collaborative filtering. In NIPS, 2003.
[17] B. Marlin. Collaborative filtering: A machine learning perspective. Master's thesis, University of Toronto, 2004.
[18] B. McFee, T. Bertin-Mahieux, D. Ellis, and G. R. G. Lanckriet. The Million Song Dataset Challenge. In WWW, http://www.kaggle.com/c/msdchallenge, 2012.
[19] D. M. Pennock, E. Horvitz, S. Lawrence, and C. L. Giles. Collaborative filtering by personality diagnosis: A hybrid memory and model-based approach. In UAI, 2000.
[20] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. GroupLens: An open architecture for collaborative filtering of netnews. In CSCW, 1994.
[21] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In NIPS, 2008.
[22] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In WWW, 2001.
[23] M. Weimer, A. Karatzoglou, Q. V. Le, and A. J. Smola. CofiRank - maximum margin matrix factorization for collaborative ranking. In NIPS, 2007.
[24] G.-R. Xue, C. Lin, Q. Yang, W. Xi, H.-J. Zeng, Y. Yu, and Z. Chen. Scalable collaborative filtering using cluster-based smoothing. In SIGIR, 2005.
Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

Ke Jiang, Brian Kulis
Department of CSE, The Ohio State University
{jiangk,kulis}@cse.ohio-state.edu

Michael I. Jordan
Departments of EECS and Statistics, University of California at Berkeley
jordan@cs.berkeley.edu

Abstract

Sampling and variational inference techniques are two standard methods for inference in probabilistic models, but for many problems, neither approach scales effectively to large-scale data. An alternative is to relax the probabilistic model into a non-probabilistic formulation which has a scalable associated algorithm. This can often be achieved by performing small-variance asymptotics, i.e., letting the variance of particular distributions in the model go to zero. For instance, in the context of clustering, such an approach yields connections between the k-means and EM algorithms. In this paper, we explore small-variance asymptotics for exponential family Dirichlet process (DP) and hierarchical Dirichlet process (HDP) mixture models. Utilizing connections between exponential family distributions and Bregman divergences, we derive novel clustering algorithms from the asymptotic limit of the DP and HDP mixtures that feature the scalability of existing hard clustering methods as well as the flexibility of Bayesian nonparametric models. We focus on special cases of our analysis for discrete-data problems, including topic modeling, and we demonstrate the utility of our results by applying variants of our algorithms to problems arising in vision and document analysis.

1 Introduction

An enduring challenge for machine learning is the development of algorithms that scale to truly large data sets. While probabilistic approaches, particularly Bayesian models, are flexible from a modeling perspective, a lack of scalable inference methods can limit their applicability on some data.
For example, in clustering, algorithms such as k-means are often preferred in large-scale settings over probabilistic approaches such as Gaussian mixtures or Dirichlet process (DP) mixtures, as the k-means algorithm is easy to implement and scales to large data sets. In some cases, links between probabilistic and non-probabilistic models can be made by applying asymptotics to the variance (or covariance) of distributions within the model. For instance, connections between probabilistic and standard PCA can be made by letting the covariance of the data likelihood in probabilistic PCA tend toward zero [1, 2]; similarly, the k-means algorithm may be obtained as a limit of the EM algorithm when the covariances of the Gaussians corresponding to each cluster go to zero. Besides providing a conceptual link between seemingly quite different approaches, small-variance asymptotics can yield useful alternatives to probabilistic models when the data size becomes large, as the non-probabilistic models often exhibit more favorable scaling properties. The use of such techniques to derive scalable algorithms from rich probabilistic models is still emerging, but provides a promising approach to developing scalable learning algorithms. This paper explores such small-variance asymptotics for clustering, focusing on the DP mixture. Existing work has considered asymptotics over the Gaussian DP mixture [3], leading to k-means-like algorithms that do not fix the number of clusters upfront. This approach, while an important first step, raises the question of whether we can perform similar asymptotics over distributions other than the Gaussian. We answer in the affirmative by showing how such asymptotics may be applied to exponential family distributions for DP mixtures; such analysis opens the door to a new class of scalable clustering algorithms and utilizes connections between Bregman divergences and exponential families.
We further extend our approach to hierarchical nonparametric models (specifically, the hierarchical Dirichlet process (HDP) [4]), and we view a major contribution of our analysis to be the development of a general hard clustering algorithm for grouped data. One of the primary advantages of generalizing beyond the Gaussian case is that it opens the door to novel scalable algorithms for discrete-data problems. For instance, the visual bag-of-words representation [5] has become standard for images in a variety of computer vision tasks, but many existing probabilistic models in vision cannot scale to the size of data sets now commonly available. Similarly, text document analysis models (e.g., LDA [6]) are almost exclusively discrete-data models. Our analysis covers such problems; for instance, a particular special case of our analysis is a hard version of HDP topic modeling. We demonstrate the utility of our methods by exploring applications in text and vision. Related Work: In the non-Bayesian setting, asymptotics for the expectation-maximization algorithm for exponential family distributions were studied in [7]. The authors showed a connection between EM and a general k-means-like algorithm, where the squared Euclidean distance is replaced by the Bregman divergence corresponding to the exponential family distribution of interest. Our results may be viewed as generalizing this approach to the Bayesian nonparametric setting. As discussed above, our results may also be viewed as generalizing the approach of [3], where the asymptotics were performed for the DP mixture with a Gaussian likelihood, leading to a k-means-like algorithm where the number of clusters is not fixed upfront. Note that our setting is considerably more involved than either of these previous works, particularly since we will require an appropriate technique for computing an asymptotic marginal likelihood.
Other connections between hard clustering and probabilistic models were explored in [8], which proposes a "Bayesian k-means" algorithm by performing a maximization-expectation algorithm.

2 Background

In this section, we briefly review exponential family distributions, Bregman divergences, and the Dirichlet process mixture model.

2.1 The Exponential Family

Consider the exponential family with natural parameter θ = {θj}_{j=1}^d ∈ R^d; then the exponential family probability density function can be written as [9]:

p(x | θ) = exp(⟨x, θ⟩ − ψ(θ) − h(x)),

where ψ(θ) = log ∫ exp(⟨x, θ⟩ − h(x)) dx is the log-partition function. Here we assume for simplicity that x is a minimal sufficient statistic for the natural parameter θ. ψ(θ) can be utilized to compute the mean and covariance of p(x | θ); in particular, the expected value is given by ∇ψ(θ), and the covariance is ∇²ψ(θ).

Conjugate Priors: In a Bayesian setting, we will require a prior distribution over the natural parameter θ. A convenient property of the exponential family is that a conjugate prior distribution for θ exists; in particular, given any specific distribution in the exponential family, the conjugate prior can be parametrized as [11]:

p(θ | τ, η) = exp(⟨θ, τ⟩ − ηψ(θ) − m(τ, η)).

Here, the ψ(·) function is the same as that of the likelihood function. Given a data point xi, the posterior distribution of θ has the same form as the prior, with τ → τ + xi and η → η + 1.

Relationship to Bregman Divergences: Let φ : S → R be a differentiable, strictly convex function defined on a convex set S ⊆ R^d. The Bregman divergence for any pair of points x, y ∈ S is defined as

Dφ(x, y) = φ(x) − φ(y) − ⟨x − y, ∇φ(y)⟩,

and can be viewed as a generalized distortion measure. An important result connecting Bregman divergences and exponential families was discussed in [7] (see also [10, 11]), where a bijection between the two was established.
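The definition of Dφ is easy to instantiate. As a minimal sketch of the bijection just mentioned, the code below evaluates the generic Bregman divergence for two standard choices of φ: the squared norm (which recovers the squared Euclidean distance of the Gaussian case) and the negative entropy (which recovers the generalized KL divergence of the multinomial case).

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <x - y, grad phi(y)>."""
    return phi(x) - phi(y) - np.dot(x - y, grad_phi(y))

# phi(x) = ||x||^2 yields the squared Euclidean distance
sq, sq_grad = lambda v: np.dot(v, v), lambda v: 2 * v

# phi(x) = sum_i x_i log x_i yields the generalized KL divergence
kl_phi = lambda v: np.sum(v * np.log(v))
kl_grad = lambda v: np.log(v) + 1

x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
d_euc = bregman(sq, sq_grad, x, y)    # equals ||x - y||^2
d_kl = bregman(kl_phi, kl_grad, x, y) # equals sum x log(x/y) - sum x + sum y
```

Strict convexity of φ guarantees Dφ(x, y) ≥ 0 with equality iff x = y.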
A key consequence of this result is that we can equivalently parameterize both p(x | θ) and p(θ | τ, η) in terms of the expectation µ:

p(x | θ) = p(x | µ) = exp(−Dφ(x, µ)) fφ(x),
p(θ | τ, η) = p(µ | τ, η) = exp(−η Dφ(τ/η, µ)) gφ(τ, η),

where φ(·) is the Legendre conjugate of ψ(·) (denoted φ = ψ*), fφ(x) = exp(φ(x) − h(x)), and µ is the expectation parameter, which satisfies µ = ∇ψ(θ) (and also µ = θ*). The Bregman divergence representation provides a natural way to parametrize exponential family distributions by their expectation parameters and, as we will see, we will find it convenient to work with this form.

2.2 Dirichlet Process Mixture Models

The Dirichlet process (DP) mixture model is a Bayesian nonparametric mixture model [12]; unlike most parametric mixture models (Bayesian or otherwise), the number of clusters in a DP mixture is not fixed upfront. Using the exponential family parameterized by the expectation µc, the likelihood for a data point can be expressed as the following infinite mixture:

p(x) = Σ_{c=1}^∞ πc p(x | µc) = Σ_{c=1}^∞ πc exp(−Dφ(x, µc)) fφ(x).

Even though there are conceptually an infinite number of clusters, the nonparametric prior over the mixing weights causes the weights πc to decay exponentially. Moreover, a simple collapsed Gibbs sampler can be employed for performing inference in this model [13]; this Gibbs sampler will form the basis of our asymptotic analysis. Given a data set {x1, ..., xn}, the state of the Markov chain is the set of cluster indicators {z1, ..., zn} as well as the cluster means of the currently occupied clusters (the mixing weights have been integrated out). The Gibbs updates for zi, (i = 1, . . .
, n), are given by the following conditional probabilities:

P(zi = c | z−i, xi, µ) = (n−i,c / (Z(n − 1 + α))) p(xi | µc),
P(zi = cnew | z−i, xi, µ) = (α / (Z(n − 1 + α))) ∫ p(xi | µ) dG0,

where Z is the normalizing constant, n−i,c is the number of data points (excluding xi) that are currently assigned to cluster c, G0 is a prior over µ, and α is the concentration parameter that determines how likely we are to start a new cluster. If we choose to start a new cluster during the Gibbs update, we sample its mean from the posterior distribution obtained from the prior distribution G0 and the single observation xi. After performing Gibbs moves on the cluster indicators, we update the cluster means µc by sampling from the posterior of µc given the data points assigned to cluster c.

3 Hard Clustering for Exponential Family DP Mixtures

Our goal is to analyze what happens as we perform small-variance asymptotics on the exponential family DP mixture when running the collapsed Gibbs sampler described earlier, and we begin by considering how to scale the covariance in an exponential family distribution. Given an exponential family distribution p(x | θ) with natural parameter θ and log-partition function ψ(θ), consider a scaled exponential family distribution whose natural parameter is ˜θ = βθ and whose log-partition function is ˜ψ(˜θ) = βψ(˜θ/β), where β > 0. The following result characterizes the relationship between the mean and covariance of the original and scaled exponential family distributions.

Lemma 3.1. Denote µ(θ) as the mean, and cov(θ) as the covariance, of p(x | θ) with log-partition ψ(θ). Given a scaled exponential family with ˜θ = βθ and ˜ψ(˜θ) = βψ(˜θ/β), the mean ˜µ(˜θ) of the scaled distribution is µ(θ) and the covariance ˜cov(˜θ) is cov(θ)/β.

This lemma follows directly from

˜µ(˜θ) = ∇˜θ ˜ψ(˜θ) = β ∇˜θ ψ(˜θ/β) = ∇θ ψ(˜θ/β) = ∇θ ψ(θ) = µ(θ),

and

˜cov(˜θ) = ∇²˜θ ˜ψ(˜θ) = β ∇˜θ(∇˜θ ψ(˜θ/β)) = (1/β) ∇²θ ψ(˜θ/β) = (1/β) ∇²θ ψ(θ) = cov(θ)/β.
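Lemma 3.1 is easy to verify numerically. The sketch below uses the Poisson family (whose log-partition is ψ(θ) = e^θ, so the mean and variance both equal e^θ) and central finite differences; under the scaling ˜θ = βθ, ˜ψ(˜θ) = βψ(˜θ/β), the mean stays fixed while the variance shrinks by a factor of β.

```python
import math

def d(f, t, h=1e-4):
    """Central finite-difference derivative of f at t."""
    return (f(t + h) - f(t - h)) / (2 * h)

theta, beta = 0.7, 50.0
psi = math.exp                                  # Poisson log-partition
psi_s = lambda t: beta * math.exp(t / beta)     # scaled log-partition

mean = d(psi, theta)                            # mean of the original family
mean_s = d(psi_s, beta * theta)                 # mean of the scaled family
var = d(lambda t: d(psi, t), theta)             # variance (2nd derivative)
var_s = d(lambda t: d(psi_s, t), beta * theta)
# mean_s matches mean, while var_s matches var / beta
```

The same check works for any smooth one-dimensional log-partition.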
It is perhaps intuitively simpler to observe what happens to the distribution using the Bregman divergence representation. Recall that the generating function φ for the Bregman divergence is given by the Legendre conjugate of ψ. Using standard properties of convex conjugates, we see that the conjugate of ˜ψ is simply ˜φ = βφ. The Bregman divergence representation for the scaled distribution is given by

p(x | ˜θ) = p(x | ˜µ) = exp(−D˜φ(x, ˜µ)) f˜φ(x) = exp(−β Dφ(x, µ)) fβφ(x),

where the last equality follows from Lemma 3.1 and the fact that, for a Bregman divergence, Dβφ(·, ·) = β Dφ(·, ·). Thus, as β increases under the above scaling, the mean is fixed while the distribution becomes increasingly concentrated around the mean. Next we consider the prior distribution under the scaled exponential family. When scaling by β, we also need to scale the hyper-parameters τ and η, namely τ → τ/β and η → η/β. This gives the following prior written using the Bregman divergence, where we now explicitly condition on β:

p(˜θ | τ, η, β) = exp(−(η/β) D˜φ((τ/β)/(η/β), µ)) g˜φ(τ/β, η/β) = exp(−η Dφ(τ/η, µ)) g˜φ(τ/β, η/β).

Finally, we compute the marginal likelihood for x by integrating out ˜θ, as it will be necessary for the Gibbs sampler. Standard algebraic manipulations yield the following:

p(x | τ, η, β) = ∫ p(x | ˜θ) p(˜θ | τ, η, β) d˜θ
= f˜φ(x) · g˜φ(τ/β, η/β) · A(˜φ,τ,η,β)(x) · ∫ exp(−(β + η) Dφ((βx + τ)/(β + η), ˜µ(˜θ))) d˜θ
= f˜φ(x) · g˜φ(τ/β, η/β) · A(˜φ,τ,η,β)(x) · β^d · ∫ exp(−(β + η) Dφ((βx + τ)/(β + η), µ(θ))) dθ.   (1)

Here, A(˜φ,τ,η,β)(x) = exp(−(βφ(x) + ηφ(τ/η) − (β + η)φ((βx + τ)/(β + η)))), which arises when combining the Bregman divergences from the likelihood and the prior. Now we make the following key insight, which will allow us to perform the necessary asymptotics. We can write the integral in the last line above (denoted I below) via Laplace's method.
Since Dφ((βx + τ)/(β + η), µ) has a local minimum (which is global in this case) at ˆθ = ˆµ* = ((βx + τ)/(β + η))*, we have:

I = exp(−(β + η) Dφ((βx + τ)/(β + η), ˆµ)) · (2π/(β + η))^{d/2} · |∂²Dφ((βx + τ)/(β + η), ˆµ)/∂θ∂θᵀ|^{−1/2} + O(1/β)
  = (2π/(β + η))^{d/2} · |∂²Dφ((βx + τ)/(β + η), ˆµ)/∂θ∂θᵀ|^{−1/2} + O(1/β),   (2)

where ∂²Dφ((βx + τ)/(β + η), ˆµ)/∂θ∂θᵀ = cov(ˆθ) is the covariance matrix of the likelihood function instantiated at ˆθ, which approaches cov(x*) as β goes to ∞. Note that the exponential term equals one since the divergence inside it is 0.

3.1 Asymptotic Behavior of the Gibbs Sampler

We now have the tools to consider the Gibbs sampler for the exponential family DP mixture as we let β → ∞. As we will see, we will obtain a general k-means-like hard clustering algorithm which utilizes the appropriate Bregman divergence in place of the squared Euclidean distance, and which can also vary the number of clusters. Recall the conditional probabilities for performing Gibbs moves on the cluster indicators zi, where we now consider the scaled distributions:

P(zi = c | z−i, xi, β, µ) = (n−i,c / Z) exp(−β Dφ(xi, µc)) f˜φ(xi),
P(zi = cnew | z−i, xi, β, µ) = (α / Z) p(xi | τ, η, β),

where Z is a normalization factor and the marginal probability p(xi | τ, η, β) is given by the derivations in (1) and (2). Now, we consider the asymptotic behavior of these probabilities as β → ∞. We
After canceling out the f ˜φ(xi) terms from all probabilities, we can then write the Gibbs probabilities as P(zi = c | z−i, xi, β, µ) = n−i,c · exp(−βDφ(xi, µc)) Cxi · exp(−βλ) + Pk j=1 n−i,j · exp(−βDφ(xi, µj)) P(zi = cnew | z−i, xi, β, µ) = Cxi · exp(−βλ) Cxi · exp(−βλ) + Pk j=1 n−i,j · exp(−βDφ(xi, µj)) , where Cxi approaches a positive, finite constant for a given xi as β →∞. Now, all of the above probabilities will become binary as β →∞. More specifically, all the k + 1 values will be increasingly dominated by the smallest value of {Dφ(xi, µ1), . . . , Dφ(xi, µk), λ}. As β →∞, only the smallest of these values will receive a non-zero probability. That is, the data point xi will be assigned to the nearest cluster with a divergence at most λ. If the closest mean has a divergence greater than λ, we start a new cluster containing only xi. Next, we show that sampling µc from the posterior distribution is achieved by simply computing the empirical mean of a cluster in the limit. During Gibbs sampling, once we have performed one complete set of Gibbs moves on the cluster assignments, we need to sample the µc conditioned on all assignments and observations. If we let nc be the number of points assigned to cluster c, then the posterior distribution (parameterized by the expectation parameter) for cluster c is p(µc | X, z, τ, η, β) ∝p(Xc | µc, β)×p(µc | τ, η, β) ∝exp −(βnc+η)Dφ Pnc i=1 βxc i + τ βnc + η , µ , where X is all the data, Xc = {xc 1, ..., xc nc} is the set of points currently assigned to cluster c, and z is the set of all current assignments. We can see that the mass of the posterior distribution becomes concentrated around the sample mean Pnc i=1 xi nc as β →∞. In other words, after we determine the assignments of data points to clusters, we update the means as the sample mean of the data points in each cluster. This is equivalent to the standard k-means cluster mean update step. 
3.2 Objective function and algorithm From the above asymptotic analysis of the Gibbs sampler, we observe a new algorithm which can be utilized for hard clustering. It is as simple as the popular k-means algorithm, but also provides the ability to adapt the number of clusters depending on the data as well as incorporate different distortion measures. The algorithm description is as follows: • Initialization: input data x1, . . . , xn, λ > 0, and µ1 = 1 n Pn i=1 xn • Assignment: for each data point xi, compute the Bregman divergence Dφ(xi, µc) to all existing clusters. If minc Dφ(xi, µc) ≤λ, then zi,c0 = 1 where c0 = argmincDφ(xi, µc); otherwise, start a new cluster and set zi,cnew = 1; • Mean Update: compute the cluster mean for each cluster, µj = 1 |lj| P x∈lj x, where lj is the set of points in the j-th cluster. We iterate between the assignment and mean update steps until local convergence. Note that the initialization used here—placing all data points into a single cluster—is not necessary, but is one natural way to initialize the algorithm. Also note that the algorithm depends heavily on the choice of λ; heuristics for selecting λ were briefly discussed for the Gaussian case in [3], and we will follow this approach (generalized in the obvious way to Bregman divergences) for our experiments. 5 We can easily show that the underlying objective function for our algorithm is quite similar to that in [3], replacing the squared Euclidean distance with an appropriate Bregman divergence. Recall that the squared Euclidean distance is the Bregman divergence corresponding to the Gaussian distribution. Thus, the objective function in [3] can be seen as a special case of our work. 
The objective function optimized by our derived algorithm is the following: min {lc}k c=1 k X c=1 X x∈lc Dφ(x, µc) + λk (3) where k is the total number of clusters, φ is the conjugate function of the log-partition function of the chosen exponential family distribution, and µc is the sample mean of cluster c. The penalty term λ controls the tradeoff between the likelihood and the model complexity, where a large λ favors small model complexity (i.e., fewer clusters) while a small λ favors more clusters. Given the above objective function, our algorithm can be shown to monotonically decrease the objective function value until convergence to some local minima. We omit the proof here as it is almost identical as the proof for Theorem 3.1 in [3]. 4 Extension to Hierarchies A key benefit of the Bayesian approach is its natural ability to form hierarchical models. In the context of clustering, a hierarchical mixture allows one to cluster multiple groups of data—each group is clustered into a set of local clusters, but these local clusters are shared among the groups (i.e., sets of local clusters across groups form global clusters, with a shared global mean). For Bayesian nonparametric mixture models, one way of achieving such hierarchies arises via the hierarchical Dirichlet Process (HDP) [4], which provides a nonparametric approach to allow sharing of clusters among a set of DP mixtures. In this section, we will briefly sketch out the extension of our analysis to the HDP mixture, which yields a natural extension of our methods to groups of data. Given space considerations, and the fact that the resulting algorithm turns out to reduce to Algorithm 2 from [3] with the squared Euclidean distance replaced by an appropriate Bregman divergence, we will omit the full specification of the algorithm here. However, despite the similarity to the existing Gaussian case, we do view the extension to hierarchies as a promising application of our analysis. 
In particular, our approach opens the door to hard hierarchical algorithms over discrete data, such as text, and we briefly discuss an application of our derived algorithm to topic modeling. We assume that there are J data sets (groups) which we index by j = 1, ..., J. Data point xij refers to data point i from set j. The HDP model can be viewed as clustering each data set into local clusters, but where each local cluster is associated to a global mean. Global means may be shared across data sets. When performing the asymptotics, we require variables for the global means (µ1, ..., µg), the associations of data points to local clusters, zij, and the associations of local clusters to global means, vjt, where t indexes the local clusters for a data set. A standard Gibbs sampler considers updates on all of these variables, and in the nonparametric setting does not fix the number of local or global clusters. The tools from the previous section may be nearly directly applied to the hierarchical case. As opposed to the flat model, the hard HDP requires two parameters: a value λtop which is utilized when starting a global (top-level) cluster, and a value λbottom which is utilized when starting a local cluster. The resulting hard clustering algorithm first performs local assignment moves on the zij, then updates the local cluster assignments, and finally updates all global means. The resulting objective function that is monotonically minimized by our algorithm is given as follows: min {lc}k c=1 k X c=1 X xij∈lc Dφ(xij, µc) + λbottomt + λtopk, (4) where k is the total number of global clusters and t is the total number of local clusters. The bottomlevel penalty term λbottom controls both the number of local and top-level clusters, where larger λbottom tends to give fewer local clusters and more top-level clusters. Meanwhile, the top-level penalty term λtop, as in the one-level case, controls the tradeoff between the likelihood and model complexity. 
6 Figure 1: (Left) Example images from the ImageNet data (Persian cat and elephant categories). Each image is represented via a discrete visual-bag-of-words histogram. Clustering via an asymptotic multinomial DP mixture considerably outperforms the asymptotic Gaussian DP mixture; see text for details. (Right) Elapsed time per iteration in seconds of our topic modeling algorithm when running on the NIPS data, as a function of the number of topics. 5 Experiments We conclude with a brief set of experiments highlighting applications of our analysis to discrete-data problems, namely image clustering and topic modeling. For all experiments, we randomly permute the data points at each iteration, as this tends to improve results (as discussed previously, unlike standard k-means, the order in which the data points are processed impacts the resulting clusters). Image Clustering. We first explore an application of our techniques to image clustering, focusing on the ImageNet data [14]. We utilize a subset of this data for quantitative experiments, sampling 100 images from 10 different categories of this data set (Persian cat, African elephant, fire engine, motor scooter, wheelchair, park bench, cello, French horn, television, and goblet), for a total of 1000 images. Each image is processed via standard visual-bag-of-words: SIFT is densely applied on top of image patches in image, and the resulting SIFT vectors are quantized into 1000 visual words. We use the resulting histograms as our discrete representation for an image, as is standard. Some example images from this data set are shown in Figure 1. We explore whether the discrete version of our hard clustering algorithm based on a multinomial DP mixture outperforms the Gaussian mixture version (i.e., DP-means); this will validate our generalization beyond the Gaussian setting. 
For both the Gaussian and multinomial cases, we utilize a farthest-first approach for selecting λ as well as for initializing the clusters (see [3] for a discussion of farthest-first for selecting λ). We compute the normalized mutual information (NMI) between the true clusters and the results of the two algorithms on this difficult data set. The Gaussian version performs poorly, achieving an NMI of .06 on this data, whereas the hard multinomial version achieves a score of .27. While the multinomial version is far from perfect, it performs significantly better than DP-means. Scalability to large data sets is clearly feasible, given that the method scales linearly in the number of data points. Note that comparisons between the Gibbs sampler and the corresponding hard clustering algorithm for the Gaussian case were considered in [3], where experiments on several data sets showed comparable clustering accuracy between the sampler and the hard clustering method. Furthermore, for a fully Bayesian model that places a prior on the concentration parameter, the sampler was shown to be considerably slower than the corresponding hard clustering method. Given the similarity of the samplers for the Gaussian and multinomial cases, we expect similar behavior with the multinomial Gibbs sampler.

Illustration: Scalable Hard Topic Models. We also highlight an application to topic modeling, by providing some qualitative results over two common document collections. Utilizing our general algorithm for a hard version of the multinomial HDP is straightforward: in order to apply the hard hierarchical algorithm to topic modeling, we simply utilize the discrete KL-divergence in the hard exponential family HDP, since topic modeling for text uses a multinomial distribution for the data likelihood. To test topic modeling using our asymptotic approach, we performed analyses using the NIPS 1-12^1 and the NYTimes [15] datasets.
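To make the role of the discrete KL-divergence concrete, here is a minimal sketch of a DP-means-style hard assignment step using KL as the divergence. The vocabulary, topic histograms, and threshold value are hypothetical, and the full algorithm also updates the means and handles the hierarchical (local/global) structure; this only illustrates the local reassignment rule.

```python
import numpy as np

def kl_divergence(x, mu, eps=1e-12):
    """Discrete KL divergence KL(x || mu) between two histograms,
    normalized to probability vectors; 0*log(0) is treated as 0."""
    p = x / x.sum()
    q = mu / mu.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

def hard_assign(x, topics, lam):
    """Hard assignment: pick the closest existing topic under KL,
    or return a new topic index when paying the penalty lam is cheaper."""
    dists = [kl_divergence(x, mu) for mu in topics]
    best = int(np.argmin(dists))
    return best if dists[best] <= lam else len(topics)  # len(...) = new topic

# Hypothetical 4-word vocabulary with two existing topics
topics = [np.array([10.0, 10.0, 1.0, 1.0]), np.array([1.0, 1.0, 10.0, 10.0])]
doc = np.array([6.0, 5.0, 1.0, 0.0])    # word-count histogram of one document
a = hard_assign(doc, topics, lam=2.0)   # close to the first topic
```

With a very small penalty, the same document instead triggers the creation of a new topic, mirroring how the penalty terms trade off fit against model complexity.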
For the NIPS dataset, we use the whole dataset, which contains 1740 total documents, 13649 vocabulary words, and 2,301,375 total words. For the NYTimes dataset, we randomly sampled 2971 documents with 10171 vocabulary words and 853,451 words in total; we also eliminated low-frequency words (those with fewer than ten occurrences).

^1 http://www.cs.nyu.edu/~roweis/data.html

Table 1: Sample topics inferred from the NIPS and NYTimes datasets by our hard multinomial HDP algorithm (top 15 terms per topic).

NIPS topics:
1. neurons, memory, patterns, activity, response, neuron, stimulus, firing, cortex, recurrent, pattern, spike, stimuli, delay, responses
2. neural, networks, state, weight, states, results, synaptic, threshold, large, time, systems, activation, small, work, weights
3. training, hidden, recognition, layer, performance, probability, parameter, error, speech, class, weights, trained, algorithm, approach, order
4. cells, visual, cell, orientation, cortical, connection, receptive, field, center, tuning, low, ocular, present, dominance, fields
5. energy, solution, methods, function, solutions, local, equations, minimum, hopfield, temperature, adaptation, term, optimization, computational, procedure
6. noise, classifier, classifiers, note, margin, noisy, regularization, generalization, hypothesis, multiclasses, prior, cases, boosting, fig, pattern

NYTimes topics:
1. team, game, season, play, games, point, player, coach, win, won, guy, played, playing, record, final
2. percent, campaign, money, fund, quarter, federal, public, pay, cost, according, income, half, term, program, increase
3. president, power, government, country, peace, trial, public, reform, patriot, economic, past, clear, interview, religious, early
4. family, father, room, line, shares, recount, told, mother, friend, speech, expression, won, offer, card, real
5. company, companies, stock, market, business, billion, firm, computer, analyst, industry, internet, chief, technology, customer, number
6. right, human, decision, need, leadership, foundation, number, question, country, strike, set, called, support, law, train

The prevailing metric for measuring the goodness of topic models is perplexity; however, perplexity is based on the predictive probability, which has no counterpart in the hard clustering case. Furthermore, ground truth for topic models is difficult to obtain. This makes quantitative comparisons difficult for topic modeling, so we focus on qualitative results. Some sample topics (with the corresponding top 15 terms) discovered by our approach from both the NIPS and NYTimes datasets are given in Table 1; the topics appear to be quite reasonable. We also highlight the scalability of our approach: the number of iterations needed for convergence on these data sets ranges from 13 to 25, and each iteration completes in under one minute (see the right side of Figure 1). In contrast, for sampling methods it is notoriously difficult to detect convergence, and a large number of iterations is generally required. Thus, we expect this approach to scale favorably to large data sets.

6 Conclusion

We considered a general small-variance asymptotic analysis for the exponential family DP and HDP mixture model. Crucially, this analysis allows us to move beyond the Gaussian distribution in such models, and opens the door to new clustering applications, such as those involving discrete data. Our analysis utilizes connections between Bregman divergences and exponential families, and results in a simple and scalable hard clustering algorithm which may be viewed as generalizing existing non-Bayesian Bregman clustering algorithms [7] as well as the DP-means algorithm [3]. Due to the prevalence of discrete data in modern computer vision and information retrieval, we hope our algorithms will find use for a variety of large-scale data analysis tasks.
We plan to continue to focus on the difficult problem of quantitative evaluations comparing probabilistic and non-probabilistic methods for clustering, particularly for topic models. We also plan to compare our algorithms with recent online inference schemes for topic modeling, particularly the online LDA [16] and online HDP [17] algorithms.

Acknowledgements. This work was supported by NSF award IIS-1217433 and by the ONR under grant number N00014-11-1-0688.

References

[1] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611–622, 1999.
[2] S. Roweis. EM algorithms for PCA and SPCA. In Advances in Neural Information Processing Systems, 1998.
[3] B. Kulis and M. I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[4] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[5] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[6] D. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[7] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, 2005.
[8] K. Kurihara and M. Welling. Bayesian k-means as a "Maximization-Expectation" algorithm. Neural Computation, 21(4):1145–1172, 2008.
[9] O. Barndorff-Nielsen. Information and Exponential Families in Statistical Theory. Wiley, 1978.
[10] J. Forster and M. K. Warmuth. Relative expected instantaneous loss bounds. In Proceedings of the 13th Conference on Computational Learning Theory, 2000.
[11] A. Agarwal and H. Daume. A geometric view of conjugate priors.
Machine Learning, 81(1):99–113, 2010.
[12] N. Hjort, C. Holmes, P. Mueller, and S. Walker. Bayesian Nonparametrics: Principles and Practice. Cambridge University Press, Cambridge, UK, 2010.
[13] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[15] A. Frank and A. Asuncion. UCI Machine Learning Repository, 2010.
[16] M. D. Hoffman, D. M. Blei, and F. Bach. Online learning for Latent Dirichlet Allocation. In Advances in Neural Information Processing Systems, 2010.
[17] C. Wang, J. Paisley, and D. M. Blei. Online variational inference for the hierarchical Dirichlet process. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011.
Deep Representations and Codes for Image Auto-Annotation

Ryan Kiros, Department of Computing Science, University of Alberta, Edmonton, AB, Canada, rkiros@ualberta.ca
Csaba Szepesvári, Department of Computing Science, University of Alberta, Edmonton, AB, Canada, szepesva@ualberta.ca

Abstract

The task of image auto-annotation, namely assigning a set of relevant tags to an image, is challenging due to the size and variability of tag vocabularies. Consequently, most existing algorithms focus on tag assignment and fix an often large number of hand-crafted features to describe image characteristics. In this paper we introduce a hierarchical model for learning representations of standard-sized color images from the pixel level, removing the need for engineered feature representations and subsequent feature selection for annotation. We benchmark our model on the STL-10 recognition dataset, achieving state-of-the-art performance. When our features are combined with TagProp (Guillaumin et al.), we compete with or outperform existing annotation approaches that use over a dozen distinct hand-crafted image descriptors. Furthermore, using 256-bit codes and Hamming distance for training TagProp, we exchange only a small reduction in performance for efficient storage and fast comparisons. Self-taught learning is used in all of our experiments, and deeper architectures always outperform shallow ones.

1 Introduction

The development of successful methods for training deep architectures has influenced the development of representation learning algorithms that operate either on top of SIFT descriptors [1, 2] or on raw pixel input [3, 4, 5] for feature extraction from full-sized images. Algorithms for pixel-based representation learning avoid the use of any hand-crafted features, removing the difficulty of deciding which features are better suited for the desired task.
Furthermore, self-taught learning [6] can be employed, taking advantage of feature learning from image databases independent of the target dataset. Image auto-annotation is a multi-label classification task of assigning a set of relevant, descriptive tags to an image, where tags often come from a vocabulary of hundreds to thousands of words. Figure 1 illustrates this task. Auto-annotation is a difficult problem due to the high variability of tags. Tags may describe objects, colors, scenes, local regions of the image (e.g. a building) or global characteristics (e.g. whether the image is outdoors). Consequently, many of the most successful annotation algorithms in the literature [7, 8, 9, 10, 11] have opted to focus on tag assignment and often fix a large number of hand-crafted features as input to their algorithms. The task of feature selection and applicability was studied by Zhang et al. [12], who utilized a group sparsity approach for dropping features. Furthermore, they observed that feature importance varied across datasets and that some features led to redundancy, such as RGB and HSV histograms. Our main contribution in this paper is to remove the need to compute over a dozen hand-crafted features for annotating images, and consequently to remove the need for feature selection. We introduce a deep learning algorithm for learning hierarchical representations of full-sized color images from the pixel level, which may be seen as a generalization of the approach by Coates et al. [13] to larger images and more layers. We first benchmark our algorithm on the STL-10 recognition dataset, achieving a classification accuracy of 62.1%. For annotation, we use the TagProp discriminative metric learning algorithm [9], which has enjoyed state-of-the-art performance on popular annotation benchmarks. We test performance on three datasets: Natural Scenes, IAPRTC-12 and ESP-Game.
When our features are combined with TagProp, we either compete with or outperform existing methods that use 15 distinct hand-crafted features and metrics. This gives the advantage of focusing new research on improving tag assignment algorithms without the need to decide which features are best suited for the task.

Figure 1: Sample annotation results on IAPRTC-12 (top) and ESP-Game (bottom) using TagProp when each image is represented by a 256-bit code. The first column of tags is the gold standard and the second column the predicted tags. Predicted tags in italics are those that are also gold standard.

More recently, auto-annotation algorithms have focused on scalability to large databases with hundreds of thousands to millions of images. Such approaches include that of Tsai et al. [10], who construct visual synsets of images, and Weston et al. [11], who used joint word-image embeddings. Our second contribution proposes representing an image with a 256-bit code for annotation. Torralba et al. [14] performed an extensive analysis of small codes for image retrieval, showing that even on databases with millions of images, linear search with Hamming distance can be performed efficiently. We utilize an autoencoder with a single hidden layer on top of our learned hierarchical representations to construct codes. Experimental results show only a small reduction in performance compared to the original learned features. In exchange, 256-bit codes are efficient to store and can be compared quickly with bitwise operations. To our knowledge, our approach is the first to learn binary codes from full-sized color images without the use of hand-crafted features. Existing approaches often compute an initial descriptor such as GIST to represent an image; this introduces too strong a bottleneck too early, whereas the bottleneck in our pipeline comes after multiple layers of representation learning.
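The fast comparison referred to above can be sketched as follows: pack each 256-bit code into 32 bytes and compare codes with XOR plus a population count. This is a hypothetical numpy sketch, not the implementation used in the paper.

```python
import numpy as np

def pack_code(bits):
    """Pack a 0/1 sequence of length 256 into 32 bytes."""
    return np.packbits(np.asarray(bits, dtype=np.uint8))

def hamming(code_a, code_b):
    """Hamming distance between two packed codes via XOR + popcount."""
    return int(np.unpackbits(code_a ^ code_b).sum())

# Hypothetical 256-bit codes
a = pack_code([1, 0] * 128)
b = pack_code([0, 0] * 128)
d = hamming(a, b)   # the two codes differ in 128 bit positions
```

Because each code is only 32 bytes and the comparison is a handful of word-level operations, linear scans over large databases remain cheap, which is the point made by Torralba et al. [14].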
2 Hierarchical representation learning

In this section we describe our approach for learning a deep feature representation from the pixel level of a color image. Our approach involves the stages of a typical pipeline: pre-processing and whitening, dictionary learning, convolutional extraction, and pooling. We define a module as a pass through each of the above operations. We first introduce our setup with high-level descriptions, followed by more detailed descriptions of each stage. Finally, we show how to stack multiple modules on top of each other. Given a set of images, the learning phase of the network is as follows:

1. Extract randomly selected patches from each image and apply pre-processing.
2. Construct a dictionary using K-SVD.
3. Convolve the dictionary with larger tiles extracted across the image with a pre-defined stride length. Re-assemble the outputs in a non-overlapping, spatially preserving grid.
4. Pool over the re-assembled features with a 2-layer pyramid.
5. Repeat the above operations for as many modules as desired.

For extracting features of a new image, we perform steps (3) and (4) for each module.

2.1 Patch extraction and pre-processing

Let {I^(1), ..., I^(m)} be a set of m input images. For simplicity of explanation, assume I^(i) ∈ R^{n_V × n_H × 3}, i = 1, ..., m, though it need not be the case that all images are of the same size. Given a receptive field of size r × c, we first extract n_p patches of size r × c × 3 across all images, followed by flattening each patch into a column vector. Let X = {x^(1), ..., x^(n_p)}, x^(i) ∈ R^n, i = 1, ..., n_p, n = 3rc, denote the extracted patches. We first perform mean centering and unit-variance scaling across features. This corresponds to local brightness and contrast normalization, respectively. Next we follow [13] by performing ZCA whitening, which results in patches having zero mean, \sum_{i=1}^{n_p} x^{(i)} = 0, and identity covariance, \frac{1}{n_p} \sum_{i=1}^{n_p} x^{(i)} (x^{(i)})^T = I.
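A minimal numpy sketch of this pre-processing on hypothetical random patches: the whitening matrix is built from an eigendecomposition of the patch covariance, and the final check confirms that the whitened patches have approximately identity covariance, as stated above.

```python
import numpy as np

def zca_whitening_matrix(X, eps=0.1):
    """ZCA whitening: W = V (Z + eps*I)^(-1/2) V^T, where C = V Z V^T is an
    eigendecomposition of the covariance of the mean-centered patches X
    (one patch per column). eps acts as a low-pass filter."""
    Xc = X - X.mean(axis=1, keepdims=True)   # subtract per-feature mean
    C = Xc @ Xc.T / Xc.shape[1]              # centered covariance matrix
    Z, V = np.linalg.eigh(C)                 # eigenvalues Z, eigenvectors V
    W = V @ np.diag(1.0 / np.sqrt(Z + eps)) @ V.T
    return W, Xc

# Hypothetical patches: 4 features, 1000 samples
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 1000))
W, Xc = zca_whitening_matrix(X, eps=1e-8)
S = W @ Xc                        # whitened patches
cov = S @ S.T / S.shape[1]        # should be approximately the identity
```

With a tiny eps the whitened covariance is essentially the identity; the larger eps used in practice trades exact whitening for robustness to small eigenvalues.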
A whitening matrix is computed as W = V (Z + \epsilon I)^{-1/2} V^T, where C = V Z V^T is an eigendecomposition of the centered covariance matrix C = C(X) produced by subtraction of the mean M = M(X). The parameter \epsilon is a small positive number having the effect of a low-pass filter.

2.2 Dictionary learning

Let S = {s^(1), ..., s^(n_p)} denote the whitened patches. We are now ready to construct a set of bases from S. We follow Bo et al. [5] and use K-SVD for learning a dictionary. K-SVD constructs a dictionary D ∈ R^{n × k} and a sparse representation \hat{S} ∈ R^{k × n_p} by solving the following optimization problem:

\min_{D, \hat{S}} \|S - D\hat{S}\|_F^2 \quad subject to \quad \|\hat{s}^{(i)}\|_0 \le q \; \forall i,   (1)

where k is the desired number of bases. Optimization is done using alternation. When D is fixed, the problem of obtaining \hat{S} can be decomposed into n_p subproblems of the form \|s^{(i)} - D\hat{s}^{(i)}\|^2 subject to \|\hat{s}^{(i)}\|_0 \le q, which can be solved approximately using batch orthogonal matching pursuit [15]. When \hat{S} is fixed, we update D by first expressing equation (1) in terms of a residual R^{(l)}:

\|S - D\hat{S}\|_F^2 = \|S - \sum_{j \ne l} d^{(j)} \hat{s}^{(j)T} - d^{(l)} \hat{s}^{(l)T}\|_F^2 = \|R^{(l)} - d^{(l)} \hat{s}^{(l)T}\|_F^2,   (2)

where l ∈ {1, ..., k}. A solution for d^{(l)}, the l-th column of D, can be obtained through an SVD of R^{(l)}. For space considerations, we refer the reader to Rubinstein et al. [15] for more details.^1

2.3 Convolutional feature extraction

Given an image I^(i), we first partition the image into a set of tiles T^(i) of size n_t × n_t with a pre-defined stride length s between each tile. Each patch in tile T^{(i)}_t is processed in the same way as before dictionary construction (mean centering, contrast normalization, whitening), for which the mean and whitening matrices M and W are used. Let T^{(i)}_{tj} denote the t-th tile and j-th channel with respect to image I^(i), and let D^{(l)}_j ∈ R^{r × c} denote the l-th basis for channel j of D.
The encoding f^{(i)}_{tl} for tile t and basis l is given by:

f^{(i)}_{tl} = \max\{ \tanh( \sum_{j=1}^{3} T^{(i)}_{tj} * D^{(l)}_j ), 0 \},   (3)

where * denotes convolution and the max and tanh operations are applied componentwise. Even though it is not the encoding associated with K-SVD, this type of 'surrogate coding' was studied by Coates et al. [13]. Let f^{(i)}_t denote the concatenated encodings over bases, which have a resulting dimension of (n_t - r + 1) × (n_t - c + 1) × k. These are then re-assembled into spatially preserving, non-overlapping regions. See Figure 2 for an illustration. We perform one additional localized contrast normalization over f^{(i)}_t of the form f^{(i)}_t ← (f^{(i)}_t - \mu(f^{(i)}_t)) / \max\{\mu(\sigma_t), \sigma^{(i)}_t\}. Similar types of normalization have been shown to be critical for performance by Ranzato et al. [16] and Bo et al. [5].

^1 We use Rubinstein's implementation available at http://www.cs.technion.ac.il/~ronrubin/software.html

Figure 2: Left: D is convolved with each tile (large green square) with receptive field (small blue square) over a given stride. The outputs are re-assembled in non-overlapping regions preserving spatial structure. Right: 2 × 2 and 1 × 1 regions are summed (pooled) along each cross section.

2.4 Pooling

The final step of our pipeline is to perform spatial pooling over the re-assembled regions of the encodings f^{(i)}_t. Consider the l-th cross section corresponding to the l-th dictionary element, l ∈ {1, ..., k}. We may then pool over each of the spatial regions of this cross section by summing over the activations of the corresponding spatial regions. This is done in the form of a 2-layer spatial pyramid, where the base of the pyramid consists of 4 blocks of 2 × 2 tiling and the top of the pyramid consists of a single block across the whole cross section. See Figure 2 for an illustration. Once pooling is performed, the re-assembled encodings result in a shape of size 1 × 1 × k and 2 × 2 × k from each layer of the pyramid.
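A toy numpy sketch of the encoding in (3) followed by the 2-layer pyramid pooling, for one hypothetical tile and one dictionary element; the real pipeline runs this over all tiles and all k bases, with the extra contrast normalization in between.

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D convolution (kernel flipped, as the * in Eq. 3)."""
    kf = k[::-1, ::-1]
    r, c = kf.shape
    out = np.empty((x.shape[0] - r + 1, x.shape[1] - c + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + r, j:j + c] * kf)
    return out

def encode_tile(tile, basis):
    """Rectified-tanh encoding of Eq. (3): sum the per-channel
    convolutions, then apply max(tanh(.), 0) componentwise."""
    acc = sum(conv2d_valid(tile[..., j], basis[..., j]) for j in range(3))
    return np.maximum(np.tanh(acc), 0.0)

def pyramid_pool(f):
    """2-layer pyramid: a 2x2 grid of block sums plus one global sum."""
    h, w = f.shape
    base = np.array([[f[:h // 2, :w // 2].sum(), f[:h // 2, w // 2:].sum()],
                     [f[h // 2:, :w // 2].sum(), f[h // 2:, w // 2:].sum()]])
    top = f.sum()
    return base, top

# Hypothetical 16x16 RGB tile and one 6x6x3 dictionary element
rng = np.random.default_rng(1)
tile = rng.normal(size=(16, 16, 3))
basis = rng.normal(size=(6, 6, 3))
f = encode_tile(tile, basis)    # shape (11, 11), i.e. (16-6+1) squared
base, top = pyramid_pool(f)     # the 2x2 base and 1x1 top of the pyramid
```

The base blocks partition the cross section, so their sums add up to the top-level sum, matching the 2 × 2 × k and 1 × 1 × k outputs described above.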
To obtain the final feature vector, each layer is flattened into a vector and the resulting vectors are concatenated into a single long feature vector of dimension 5k for each image I^(i). Prior to classification, these features are normalized to have zero mean and unit variance.

2.5 Training multiple modules

What we have described up until now is how to extract features using a single module corresponding to dictionary learning, extraction and pooling. We can now extend this framework into a deep network by stacking multiple modules. Once the first module has been trained, we can take the pooled features to be input to a second module. Freezing the learned dictionary from the first module, we can then apply all the same steps a second time to the pooled representations. This type of stacked training can be performed for as many modules as desired. To be more specific about the input to the second module, we use an additional spatial pooling operation on the re-assembled encodings of the first module, where we extract 256 blocks of 16 × 16 tiling, resulting in a representation of size 16 × 16 × k. It is these inputs which we then pass on to the second module. We choose 16 × 16 as a trade-off between aggregating too much information and increasing memory and time complexity. As an illustration, the same operations for the second module are used as in Figure 2, except the image is replaced with the 16 × 16 × k pooled features. In the next module, the number of channels is equal to the number of bases from the previous module.

3 Code construction and discriminative metric learning

In this section we first show how to learn binary codes from our learned features, followed by a review of the TagProp algorithm [9] used for annotation.
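As a preview of the coding layer described next, here is a minimal sketch of mapping a learned feature vector to a 256-bit code with a sigmoid layer and rounding. The weights below are random placeholders, whereas in the paper they are learned by backpropagation with fixed zero-mean Gaussian noise added to the coding layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_code(f, W, beta, noise=None):
    """Coding layer: b = round(sigmoid(W f + beta)). During training,
    fixed ('deterministic') zero-mean Gaussian noise is added to the
    pre-activation to push activations toward {0, 1}."""
    z = W @ f + beta
    if noise is not None:   # noise is fixed in advance per data point
        z = z + noise
    return np.round(sigmoid(z)).astype(np.uint8)

# Hypothetical sizes: 7680-dim learned features -> 256-bit code
rng = np.random.default_rng(2)
d_m, d_b = 7680, 256
W = rng.normal(scale=0.01, size=(d_b, d_m))   # placeholder, not trained
beta = np.zeros(d_b)
f = rng.normal(size=d_m)
b = binary_code(f, W, beta)
```

At test time the rounding makes each image a 256-entry 0/1 vector, which is then packed and compared with Hamming distance.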
Let f (i) ∈Rdm denote the learned representation for image I(i) of dimension dm using either a one or two module architecture. The code b(i) for f (i) is computed by b(i) = round(σ(f (i))) where σ(f (i)) = (1 + exp(Wf (i) + β))−1, W ∈Rdb×dm, β ∈Rdb and db is the number of bits (in our case, db = 256). Using a linear output layer, our objective is to minimize the mean squared error of reconstructions of the the inputs given by 1 m P i ( ˜Wσ(f (i))+ ˜β) −f (i)2, where ˜W ∈Rdm×db, ˜β ∈Rdm are the second layer weights and biases respectively. The objective is minimized using standard backpropagation. As is, the optimization does not take into consideration the rounding used in the coding layer and consequently the output is not adapted for this operation. We follow Salakhutdinov et al. [17] and use additive ‘deterministic’ Gaussian noise with zero mean in the coding layer that is fixed in advance for each datapoint when performing a bottom-up pass through the network. Using unit variance was sufficient to force almost all the activations near {0, 1}. We tried other approaches, including simple thresholding but found the Gaussian noise to be most successful without interfering with the optimization. Figure 3 shows the coding layer activation values after backpropagation when noise has been added. 3.2 The tag propagation (TagProp) algorithm Let V denote a fixed vocabulary of tags and I denote a list of input images. Our goal at test time, given a new input image i′, is to assign a set of tags v ∈V that are most relevant to the content of i′. TagProp operates on pairwise distances to learn a conditional distribution of words given images. More specifically, let yiw ∈{1, −1}, i ∈I, w ∈V be an indicator for whether tag w is present in image i. 
In TagProp, the probability that y_{iw} = 1 is given by \sigma(\alpha_w x_{iw} + \beta_w), with x_{iw} = \sum_j \pi_{ij} y_{jw}, where \sigma(z) = (1 + \exp(-z))^{-1} is the logistic function, (\alpha_w, \beta_w) are word-specific model parameters to be estimated, and \pi_{ij} are distance-based weights, also to be estimated. More specifically, \pi_{ij} is expressed as

\pi_{ij} = \frac{\exp(-d_h(i, j))}{\sum_{j'} \exp(-d_h(i, j'))}, \quad d_h(i, j) = h d_{ij}, \; h \ge 0,   (4)

where we call d_{ij} the base distance between images i and j. Let \theta = \{\alpha_w \,\forall w ∈ V, \beta_w \,\forall w ∈ V, h\} denote the list of model parameters. The model is trained to maximize the quasi-likelihood of the data, L = \sum_{i,w} c_{iw} \log p(y_{iw}), with c_{iw} = 1/n_+ if y_{iw} = 1 and 1/n_- otherwise, where n_+ is the total number of positive labels of w and likewise n_- for missing labels. This weighting allows us to take into account imbalances between label presence and absence. Combined with the logistic word models, it accounts for much higher recall of rare tags, which would normally be less likely to be recalled in a basic k-NN setup. Optimization of L is performed using a projected gradient method to enforce the non-negativity constraint on h. The choice of base distance used depends on the image representation. In the above description, the model was derived assuming only a single base distance is computed between images. This can be generalized to an arbitrary number of distances by letting h be a parameter vector and letting d_h(i, j) be a combination of distances weighted by h. Under this formulation, multiple descriptors of images can be computed and weighted. The best performance of TagProp [9] was indeed obtained using this multiple-metric formulation in combination with the logistic word models. In our case, Euclidean distance is used for the learned features and Hamming distance for binary codes. Furthermore, we only consider pairwise distances from the K nearest neighbors, where K is chosen through cross validation.
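The predictive computation just described can be sketched in a few lines. The neighbor distances, labels, and parameter values below are hypothetical; in the paper, (alpha_w, beta_w, h) are learned by maximizing the quasi-likelihood.

```python
import numpy as np

def tagprop_prob(base_dists, y_neighbors, alpha_w, beta_w, h):
    """TagProp word probability for one test image and one tag w:
    pi_j = softmax(-h * d_ij) over the K neighbors,
    x = sum_j pi_j * y_jw  (labels in {1, -1}),
    p(y = 1) = sigmoid(alpha_w * x + beta_w)."""
    logits = -h * np.asarray(base_dists, dtype=float)
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                      # distance-based weights pi_ij
    x = np.dot(pi, y_neighbors)         # weighted vote of neighbor labels
    return 1.0 / (1.0 + np.exp(-(alpha_w * x + beta_w)))

# Hypothetical neighbors: the two closest carry the tag (y = +1)
p = tagprop_prob(base_dists=[0.1, 0.2, 2.0],
                 y_neighbors=[1, 1, -1],
                 alpha_w=4.0, beta_w=0.0, h=1.0)
```

Because the nearby neighbors dominate the softmax weights and both carry the tag, the predicted probability is close to 1; the word-specific (alpha_w, beta_w) then calibrate this vote per tag.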
4 Experiments

We perform experimental evaluation of our methods on 4 datasets: one dataset, STL-10, for object recognition to benchmark our hierarchical model, and three datasets for annotation: Natural Scenes, IAPRTC-12 and ESP-Game.^2 For all our experiments, we use k_1 = 512 first-module bases, k_2 = 1024 second-module bases, receptive field sizes of 6 × 6 and 2 × 2, and tile sizes (n_t) of 16 × 16 and 6 × 6. The total number of features for the combined first- and second-module representation is thus 5(k_1 + k_2) = 7680. Images are resized such that the longest side is no larger than 300 pixels, with preserved aspect ratio. The first-module stride length is chosen based on the length of the longest side of the image: 4 if the side is less than 128 pixels, 6 if less than 214 pixels, and 8 otherwise. The second-module stride length is fixed at 2. For training the autoencoder, we use 10 epochs (passes over the training set) with minibatches of size no larger than 1000. Optimization is done using Polak-Ribière conjugate gradients with 3 line searches per minibatch.^3 We also incorporate self-taught learning [6] in our annotation experiments by utilizing the Mirflickr dataset for dictionary learning. Mirflickr is a collection of 25000 images taken from Flickr and deemed to have a high interestingness rating. We randomly sampled 10000 images from this dataset for training K-SVD on both modules. All reported results for Natural Scenes, IAPRTC-12 and ESP-Game use self-taught learning. Our code for feature learning will be made available online.

4.1 STL-10

Table 1: A selection of the best results obtained on the STL-10 dataset.
Method | Accuracy
Sparse filtering [18] | 53.5%
OMP, k = 1600 [13] | 54.9%
OMP, SC encoder, k = 1600 [13] | 59.0%
Receptive field learning, 3 modules [19] | 60.1%
Video unsup features [20] | 61.0%
Hierarchical matching pursuit [21] | 64.5%
1st Module | 56.4%
1st + 2nd Module | 62.1%

The STL-10 dataset is a collection of 96 × 96 images of 10 classes, with images partitioned into 10 folds of 1000 images each and a test set of size 8000. Alongside these labeled images is a set of 100000 unlabeled images that may or may not come from the same distribution as the training data. The evaluation procedure is to perform representation learning on the unlabeled data and apply the representations to the training set, averaging test errors across all folds. We randomly chose 10000 images from the unlabeled set for training and use a linear L2-SVM for classification with 5-fold cross validation for model selection. Table 1 shows our results on STL-10. Our 2-module architecture outperforms all existing approaches except for the recently proposed hierarchical matching pursuit (HMP). HMP uses joint layerwise pooling and separate training for RGB and grayscale dictionaries, approaches which may also be adapted to our method. Moreover, we hypothesize that further improvements can be made when the receptive field learning strategies of Coates et al. [19] and Jia et al. [22] are incorporated into a third module.

4.2 Natural scenes

The Natural Scenes dataset is a multi-label collection of 2000 images from 5 classes: desert, forest, mountain, ocean and sunset. We follow standard protocol and report the average results of 5 metrics using 10-fold cross validation: Hamming loss (HL), one error (OE), coverage (C), ranking loss (RL) and average precision (AP). For space considerations, these metrics are defined in the appendix. To perform model selection with TagProp, we perform 5-fold cross validation within each of the 10 folds to determine the value of K which minimizes Hamming loss.
^2 Tags for IAPRTC-12 and ESP-Game, as well as the features used by existing approaches, can be found at http://lear.inrialpes.fr/people/guillaumin/data.php
^3 Rasmussen's minimize routine is used.

Table 2: A selection of the best results obtained on the Natural Scenes dataset. Arrows indicate direction of improved performance.

Method | HL ↓ | OE ↓ | C ↓ | RL ↓ | AP ↑
ML-KNN [23] | 0.169 | 0.300 | 0.939 | 0.168 | 0.803
ML-I2C [24] | 0.159 | 0.311 | 0.883 | 0.156 | 0.804
InsDif [25] | 0.152 | 0.259 | 0.834 | 0.140 | 0.830
ML-LI2C [24] | 0.129 | 0.190 | 0.624 | 0.091 | 0.881
1st Module | 0.113 | 0.170 | 0.580 | 0.080 | 0.895
1st Module, 256-bit | 0.113 | 0.169 | 0.585 | 0.082 | 0.894
1st + 2nd Module | 0.100 | 0.140 | 0.554 | 0.074 | 0.910
1st + 2nd Module, 256-bit | 0.106 | 0.155 | 0.558 | 0.075 | 0.903

Table 3: A selection of the best results obtained on the IAPRTC-12 (left) and ESP-Game (right) datasets.

Method | IAPRTC-12: P / R / N+ | ESP-Game: P / R / N+
MBRM [26] | 0.24 / 0.23 / 223 | 0.18 / 0.19 / 209
LASSO [7] | 0.28 / 0.29 / 246 | 0.21 / 0.24 / 224
JEC [7] | 0.28 / 0.29 / 250 | 0.22 / 0.25 / 224
GS [12] | 0.32 / 0.29 / 252 | - / - / -
CCD [8] | 0.44 / 0.29 / 251 | 0.36 / 0.24 / 232
TagProp (σ SD) [9] | 0.41 / 0.30 / 259 | 0.39 / 0.24 / 232
TagProp (σ ML) [9] | 0.46 / 0.35 / 266 | 0.39 / 0.27 / 239
1st Module | 0.37 / 0.25 / 241 | 0.37 / 0.20 / 231
1st Module, 256-bit | 0.34 / 0.22 / 236 | 0.35 / 0.20 / 231
1st + 2nd Module | 0.42 / 0.29 / 252 | 0.38 / 0.22 / 228
1st + 2nd Module, 256-bit | 0.36 / 0.25 / 244 | 0.37 / 0.23 / 236

Table 2 shows the results of our method. In all five measures we obtain improvement over previous methods. Furthermore, using 256-bit codes offers near-equivalent performance. As in the case of STL-10, improvements are made over a single module.

4.3 IAPRTC-12 and ESP-Game

IAPRTC-12 is a collection of 20000 images with a vocabulary size of |V| = 291 and an average of 5.7 tags per image. ESP-Game is a collection of 60000 images with |V| = 268 and an average of 4.7 tags per image. Following Guillaumin et al. [9], we apply experiments to a pre-defined subset of 20000 images.
Using standard protocol, performance is evaluated using 3 measures: precision (P), recall (R) and the number of recalled tags (N+). N+ indicates the number of tags that were recalled at least once for annotation on the test set. Annotations are made by choosing the 5 most probable tags for each image, as is done with previous evaluations. As with the Natural Scenes dataset, we perform 5-fold cross validation to determine K for training TagProp. Table 3 shows our results, with IAPRTC-12 on the left and ESP-Game on the right. Our results give comparable performance to CCD and the single-distance (SD) variation of TagProp. Unfortunately, we are unable to match the recall values obtained with the multiple-metric (ML) variation of TagProp. Importantly, we outperform GS, who specifically studied the use of feature selection. Our 256-bit codes suffer a loss of performance on IAPRTC-12 but give near-equivalent results on ESP-Game. We note again that our features were learned on an entirely different dataset (Mirflickr) in order to show their generalization capabilities. Finally, we perform two qualitative experiments. Figure 4 shows sample unsupervised retrieval results using the learned 256-bit codes on IAPRTC-12 and ESP-Game, while Figure 5 illustrates sample annotation performance when training on one dataset and annotating the other. These results show that our codes are able to capture high-level semantic concepts that perform well for retrieval and transfer learning across datasets. We note, however, that annotating ESP-Game after training on IAPRTC-12 led to more false 'human' annotations (such as the bottom-right image in Figure 5). We hypothesize that this is due to a larger proportion of persons in the IAPRTC-12 training set.

Figure 4: Sample 256-bit unsupervised retrieval results on ESP-Game (top) and IAPRTC-12 (bottom). A query image from the test set is used to retrieve the four nearest neighbors from the training set.

Figure 5: Sample 256-bit annotation results when training on one dataset and annotating the other. Top: Training on ESP-Game, annotation on IAPRTC-12. Bottom: Training on IAPRTC-12, annotation on ESP-Game.

5 Conclusion

In this paper we introduced a hierarchical model for learning feature representations of standard-sized color images for the task of image annotation. Our results compare favorably to existing approaches that use over a dozen handcrafted image descriptors. Our primary goal for future work is to test the effectiveness of this approach on web-scale annotation systems with millions of images. The success of self-taught learning in this setting means only one dictionary per module ever needs to be learned. Furthermore, our features can be used in combination with any nearest-neighbor-based algorithm for annotation. It is our hope that the successful use of binary codes for annotation will allow further research to bridge the gap between the annotation algorithms used on small-scale problems and those required for web-scale tasks. We also intend to evaluate the effectiveness of semantic hashing on large databases when much smaller codes are used. Krizhevsky et al. [27] evaluated semantic hashing using very deep autoencoders on tiny (32 × 32) images. Future work also involves performing similar experiments on standard-sized RGB images.

Acknowledgments

The authors thank Axel Soto as well as the anonymous reviewers for helpful discussion and comments. This work was funded by NSERC and the Alberta Innovates Centre for Machine Learning.

References

[1] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, pages 1794–1801, 2009.
[2] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, pages 3360–3367, 2010.
[3] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, pages 1–8, 2009.
[4] K. Yu, Y. Lin, and J. Lafferty. Learning image representations from the pixel level via hierarchical sparse coding. In CVPR, pages 1713–1720, 2011.
[5] L. Bo, X. Ren, and D. Fox. Hierarchical matching pursuit for image classification: Architecture and fast algorithms. In NIPS, 2011.
[6] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. In ICML, pages 759–766, 2007.
[7] A. Makadia, V. Pavlovic, and S. Kumar. A new baseline for image annotation. In ECCV, volume 8, pages 316–329, 2008.
[8] H. Nakayama. Linear Distance Metric Learning for Large-scale Generic Image Recognition. PhD thesis, The University of Tokyo.
[9] M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid. TagProp: Discriminative metric learning in nearest neighbor models for image auto-annotation. In ICCV, pages 309–316, 2009.
[10] D. Tsai, Y. Jing, Y. Liu, H. A. Rowley, S. Ioffe, and J. M. Rehg. Large-scale image annotation using visual synset. In ICCV, pages 611–618, 2011.
[11] J. Weston, S. Bengio, and N. Usunier. Large scale image annotation: Learning to rank with joint word-image embeddings. Machine Learning, 81(1):21–35, 2010.
[12] S. Zhang, J. Huang, Y. Huang, Y. Yu, H. Li, and D. N. Metaxas. Automatic image annotation using group sparsity. In CVPR, pages 3312–3319, 2010.
[13] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In ICML, 2011.
[14] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In CVPR, pages 1–8, 2008.
[15] R. Rubinstein, M. Zibulevsky, and M. Elad. Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit. Technical report, 2008.
[16] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In ICCV, pages 2146–2153, 2009.
[17] G. Hinton and R. Salakhutdinov. Discovering binary codes for documents by learning deep generative models. Topics in Cognitive Science, 3(1):74–91, 2011.
[18] J. Ngiam, Z. Chen, S. Bhaskar, P. W. Koh, and A. Y. Ng. Sparse filtering. In NIPS, 2011.
[19] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In NIPS, 2011.
[20] W. Zou, A. Ng, and K. Yu. Unsupervised learning of visual invariance with temporal coherence. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[21] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. In ISER, June 2012.
[22] Y. Jia, C. Huang, and T. Darrell. Beyond spatial pyramids: Receptive field learning for pooled image features. In CVPR, 2012.
[23] M. L. Zhang and Z. H. Zhou. ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7):2038–2048, 2007.
[24] Z. Wang, Y. Hu, and L. T. Chia. Multi-label learning by image-to-class distance for scene classification and image annotation. In CIVR, pages 105–112, 2010.
[25] M. L. Zhang and Z. H. Zhou. Multi-label learning by instance differentiation. In AAAI, pages 669–674, 2007.
[26] S. L. Feng, R. Manmatha, and V. Lavrenko. Multiple Bernoulli relevance models for image and video annotation. In CVPR, pages 1002–1009, 2004.
[27] A. Krizhevsky and G. E. Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.
Patient Risk Stratification for Hospital-Associated C. diff as a Time-Series Classification Task

Jenna Wiens jwiens@mit.edu
John V. Guttag guttag@mit.edu
Eric Horvitz horvitz@microsoft.com

Abstract

A patient's risk for adverse events is affected by temporal processes including the nature and timing of diagnostic and therapeutic activities, and the overall evolution of the patient's pathophysiology over time. Yet many investigators ignore this temporal aspect when modeling patient outcomes, considering only the patient's current or aggregate state. In this paper, we represent patient risk as a time series. In doing so, patient risk stratification becomes a time-series classification task. The task differs from most applications of time-series analysis, like speech processing, since the time series itself must first be extracted. Thus, we begin by defining and extracting approximate risk processes, the evolving approximate daily risk of a patient. Once obtained, we use these signals to explore different approaches to time-series classification with the goal of identifying high-risk patterns. We apply the classification to the specific task of identifying patients at risk of testing positive for hospital-acquired Clostridium difficile. We achieve an area under the receiver operating characteristic curve of 0.79 on a held-out set of several hundred patients. Our two-stage approach to risk stratification outperforms classifiers that consider only a patient's current state (p < 0.05).

1 Introduction

Time-series data are available in many different fields, including medicine, finance, information retrieval and weather prediction. Much research has been devoted to the analysis and classification of such signals [1][2]. In recent years, researchers have had great success with identifying temporal patterns in such time series and with methods that forecast the value of variables.
In most applications there is an explicit time series, e.g., ECG signals, stock prices, audio recordings, or daily average temperatures. We consider a novel application of time-series analysis: patient risk. Patient risk has an inherent temporal aspect; it evolves over time as it is influenced by intrinsic and extrinsic factors. However, it has no easily measurable time series. We hypothesize that, if one could measure risk over time, one could learn patterns of risk that are more likely to lead to adverse outcomes. In this work, we frame the problem of identifying hospitalized patients at high risk of adverse outcomes as a time-series classification task. We propose and motivate the study of patient risk processes to model the evolution of risk over the course of a hospital admission. Specifically, we consider the problem of using time-series data to estimate the risk of an inpatient becoming colonized with Clostridium difficile (C. diff) during a hospital stay. (C. diff is a bacterial infection most often acquired in hospitals or nursing homes. It causes severe diarrhea and can lead to colitis and other serious complications.) Despite the fact that many of the risk factors are well known (e.g., exposure, age, underlying disease, use of antimicrobial agents, etc.) [3], C. diff continues to be a significant problem in many US hospitals. From 1996 to 2009, C. diff rates for hospitalized patients aged ≥65 years increased by 200% [4]. There are well-established clinical guidelines for predicting whether a test for C. diff is likely to be positive [5]. Such guidelines are based largely on the presence of symptoms associated with an existing C. diff infection, and thus are not useful for predicting whether a patient will become infected. In contrast, risk stratification models aim to identify patients at high risk of becoming infected.
The use of these models could lead to a better understanding of the risk factors involved and ultimately provide information about how to reduce the incidence of C. diff in hospitals. There are many different ways to define the problem of estimating risk. The precise definition has important ramifications for both the potential utility of the estimate and the difficulty of the problem. Reported results in the medical literature for the problem of risk stratification for C. diff vary greatly, with areas under the receiver operating characteristic curve (AUC) of 0.628-0.896 [6][7][8][9][10]. The variation in classification performance is based in part on differences in the task definition, in part on differences in the study populations, and in part on the evaluation method. The highest reported AUCs were from studies of small (e.g., 50 patients) populations, relatively easy tasks (e.g., inclusion of a large number of patients with predictably short stays, such as patients in labor), or both. Additionally, some of the reported results were not obtained from testing on held-out sets. We consider patients with at least a 7-day hospital admission who do not test positive for C. diff until day 7 or later. This group of patients is already at an elevated risk for acquiring C. diff because of the duration of the hospital stay. Focusing on this group makes the problem more relevant (and more difficult) than other related tasks. To the best of our knowledge, representing and studying the risk of acquiring C. diff (or any other infection) as a time series has not previously been explored. We propose a risk stratification method that aims to identify patterns of risk that are more likely to lead to adverse outcomes. In [11] we proposed a method for extracting patient risk processes. Once patient risk processes are extracted, the problem of risk stratification becomes that of time-series classification.
We explore a variety of different methods including classification using similarity metrics, feature extraction, and hidden Markov models. A direct comparison with the reported results in the literature for C. diff risk prediction is difficult because of the differences in the studies mentioned above. Thus, to measure the added value of considering the temporal dimension, we implemented the standard approach as represented in the related literature of classifying patients based on their current or average state and applied it to our data set. Our method leads to a significant improvement over this more traditional approach.

2 The Data

Our dataset comes from a large US hospital database. We extracted all stays of at least 7 days from all inpatient admissions that occurred over the course of a year. To ensure that we are in fact predicting the acquisition of C. diff during the current admission, we remove patients who tested positive for C. diff in the 60 days preceding or, if negative, following the current admission [3]. In addition, we remove patients who tested positive for C. diff before day 7 of the admission. Positive cases are those patients who test positive on or after 7 days in the hospital. Negative patients are all remaining patients. We define the start of the risk period of a patient as the time of admission and define the end of the risk period according to the following rule: if the patient tests positive, the first positive test marks the end of the risk period; otherwise the patient is considered at risk until discharge. The final population consisted of 9,751 hospital admissions and 8,166 unique patients. Within this population, 177 admissions had a positive test result for C. diff.

3 Methods

Patient risk is not a directly measurable time series. Thus, we propose a two-stage approach to risk stratification. We first extract approximate risk processes and then apply time-series classification techniques to those processes.
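The cohort rule above can be written down directly. The helper below is our own sketch of that rule (the function name and return shape are hypothetical, not the authors' code); days are counted from admission.

```python
def label_admission(length_of_stay, first_positive_day=None):
    """Apply the cohort rule: stays under 7 days and patients testing
    positive before day 7 are excluded (returns None); a first positive
    test on day >= 7 ends the risk period and marks the admission
    positive; otherwise risk runs until discharge (negative)."""
    if length_of_stay < 7:
        return None                      # excluded: stay shorter than 7 days
    if first_positive_day is not None:
        if first_positive_day < 7:
            return None                  # excluded: early positive test
        return {"risk_period_end": first_positive_day, "label": 1}
    return {"risk_period_end": length_of_stay, "label": 0}
```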
Both stages are described here; for more detail regarding the first stage we direct the reader to [11].

3.1 Extracting Patient Risk Processes

We extract approximate patient risk processes, i.e., a risk time series for each admission, by independently calculating the daily risk of a patient and then concatenating these predictions. We begin by extracting more than 10,000 variables for each day of each hospital admission. Almost all of the features pertain to categorical features that have been exploded into binary features; hence the high dimensionality. Approximately half of the features are based on data collected at the time of admission, e.g., patient history, admission reason, and patient demographics. These features remain constant throughout the stay. The remaining features are collected over the course of the admission and may change on a daily basis, e.g., lab results, room location, medications, and vital sign measurements. We employ a support vector machine (SVM) to produce daily risk scores. Each day of an admission is associated with its own feature vector. We refer to this feature vector of observations as the patient's current state. However, we do not have ground-truth labels for each day of a patient's admission. We only know whether or not a patient eventually tests positive for C. diff. Thus we label each day of an admission in which the patient eventually tests positive as positive, even though the patient may not have actually been at high risk on each of those days. In doing so, we hope to identify high-risk patients as early as possible. Since we do not expect a patient's risk to remain constant during an entire admission, there is noise in the training labels. For example, there may be some days that look almost identical in the feature space but have different labels. To handle this noise we use a soft-margin SVM that allows for misclassifications.
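The extraction step can be sketched as follows. Everything here is illustrative: the synthetic data stands in for the real daily feature matrix, and scikit-learn's `LinearSVC` stands in for the SVM package actually used; what matters is that the signed distance to the decision boundary, not the hard class label, becomes the daily risk score.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the daily feature matrix: one row per
# patient-day, noisy day-level labels inherited from the admission
# outcome, and an admission id per row.
X = rng.normal(size=(300, 20))
day_labels = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)
admission_id = np.repeat(np.arange(30), 10)      # 30 stays, 10 days each

# Soft-margin SVM; a finite C tolerates the label noise discussed above.
clf = LinearSVC(C=0.1, max_iter=10000).fit(X, day_labels)

# Daily risk = signed distance to the decision boundary; concatenating
# one stay's scores gives its approximate risk process.
daily_risk = clf.decision_function(X)
risk_processes = {a: daily_risk[admission_id == a]
                  for a in np.unique(admission_id)}
```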
As long as our assumption does not lead to more incorrect labels than correct labels, it is possible to learn a meaningful classifier, despite the approximate labels. We do not use the SVM as a classifier but instead consider the continuous prediction made by the SVM, i.e., the distance to the decision boundary. We take the concatenated continuous outputs of the SVM for a hospital admission as a representation of the approximate risk process. We give some examples of these approximate risk processes for both case and non-case patients in Figure 1.

Figure 1: Approximate daily risk represented as a time series results in a risk process for each patient (shown: one patient who is discharged and one who tests positive).

One could risk stratify patients based solely on their current state, i.e., use the daily risk value from the risk process to classify patients as either high risk or low risk on that day. This method, which ignores the temporal evolution of risk, achieves an AUC of 0.69 (95% CI 0.61-0.77). Intuitively, current risk should depend on previous risk. We tested this intuition by classifying patients based on the average of their risk process. This performed significantly better, achieving an AUC of 0.75 (95% CI 0.69-0.81). Still, averaging in this way ignores the possibility of leveraging richer temporal patterns, as discussed in the next section.

3.2 Classifying Patient Risk Processes

Given the risk processes of each patient, the risk stratification task becomes a time-series classification task. Time-series classification is a well-investigated area of research, with many proposed methods. For an in-depth review of sequence classification we refer the reader to [2]. Here, we explore three different approaches to the problem: classification based on feature vectors, similarity measures, and finally HMMs.
We first describe each method, and then present results on their performance in Section 4.

3.2.1 Classification using Feature Extraction

There are many different ways to extract features from time series. In the literature many have proposed time-frequency representations extracted using various Fourier or wavelet transforms [12]. Given the small number of samples composing our time-series data, we were wary of applying such techniques. Instead we chose an approach inspired by the combination of classifiers in the text domain using reliability indicators [13]. We define a feature vector based on different combinations of the predictions made in the first stage. We list the features in Table 1.

Table 1: Univariate summary statistics for observation vector x = [x_1, x_2, ..., x_n]

1.  Length of time series: n
2.  Average daily risk: (1/n) Σ_{i=1}^{n} x_i
3.  Linear weighted average daily risk: 2/(n(n+1)) Σ_{i=1}^{n} i x_i
4.  Quadratic weighted average daily risk: 6/(n(n+1)(2n+1)) Σ_{i=1}^{n} i^2 x_i
5.  Risk on most recently observed day: x_n
6.  Standard deviation of daily risk: σ
7.  Average absolute change in daily risk: (1/n) Σ_{i=1}^{n-1} |x_i − x_{i+1}|
8.  Average absolute change in 1st difference: (1/n) Σ_{i=1}^{n-2} |x'_i − x'_{i+1}|
9.  Fraction of the visit with positive risk score: (1/n) Σ_{i=1}^{n} 1_{x_i > 0}
10. Fraction of the visit with negative risk score: (1/n) Σ_{i=1}^{n} 1_{x_i < 0}
11. Sum of the risk over the most recent 3 days: Σ_{i=n−2}^{n} x_i
12. Longest positive run (normalized)
13. Longest negative run (normalized)
14. Maximum observation: max_i x_i
15. Location of maximum (normalized): (1/n) argmax_i x_i
16. Minimum observation: min_i x_i
17. Location of minimum (normalized): (1/n) argmin_i x_i

Features 2-4 are averages; Features 3 and 4 weight days closer to the time of classification more heavily. Features 6-10 are different measures of the amount of fluctuation in the time series. Features 5 and 11 capture information about the most recent states of the patient.
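The statistics of Table 1 are straightforward to compute from a risk process. The following is our own sketch (the paper's implementation is not published); the longest-run helper is one possible realization of Features 12-13.

```python
import numpy as np

def risk_process_features(x):
    """The 17 summary statistics of Table 1 for a risk process x."""
    x = np.asarray(x, float)
    n = len(x)
    i = np.arange(1, n + 1)
    d1 = np.diff(x)                                    # first differences x'_i
    longest_run = lambda mask: max(
        (len(r) for r in "".join("1" if m else "0" for m in mask).split("0")),
        default=0)
    return [
        n,                                                      # 1  length
        x.mean(),                                               # 2  average risk
        2.0 / (n * (n + 1)) * (i * x).sum(),                    # 3  linear-weighted
        6.0 / (n * (n + 1) * (2 * n + 1)) * (i**2 * x).sum(),   # 4  quadratic-weighted
        x[-1],                                                  # 5  most recent day
        x.std(),                                                # 6  std deviation
        np.abs(d1).sum() / n,                                   # 7  avg abs change
        np.abs(np.diff(d1)).sum() / n,                          # 8  change in 1st diff
        (x > 0).mean(),                                         # 9  fraction positive
        (x < 0).mean(),                                         # 10 fraction negative
        x[-3:].sum(),                                           # 11 last 3 days
        longest_run(x > 0) / n,                                 # 12 longest + run
        longest_run(x < 0) / n,                                 # 13 longest - run
        x.max(),                                                # 14 maximum
        (x.argmax() + 1) / n,                                   # 15 location of max
        x.min(),                                                # 16 minimum
        (x.argmin() + 1) / n,                                   # 17 location of min
    ]
```

Each admission, whatever its length, maps to this fixed-length vector, which is what allows a single second-stage classifier to compare stays of different durations.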
Features 12 and 13 identify runs in the data, i.e., periods of time where the patient is consistently at high or low risk. Finally, Features 14-17 summarize information regarding global maxima and minima in the approximate risk process. Given these feature definitions, we map each patient admission risk process to a fixed-length feature vector. These summarization variables allow one to compare time series of different lengths, while still capturing temporal information, e.g., when the maximum risk occurs relative to the time of prediction. Given this feature space, one can learn a classifier to identify high-risk patients. This approach is described in Figure 2.

Figure 2: A two-step approach to risk stratification where predefined features are extracted from the time-series data. (1) Given an m × n admission P, where m is the number of observations for each day and n is the number of days, predict the daily risk x_i based on the observations p_i, for i = 1...n (SVM1). (2) Concatenate the predictions and extract a feature vector x' based on the time series x. (3) Classify each admission based on x' (SVM2); predict whether or not P will test positive for C. diff.

3.2.2 Classification using Similarity Metrics

In the previous section, we learned a second classifier based on extracted features. In this section, we consider classifiers based on the raw data, i.e., the concatenated time series from Step 2 in Figure 2. SVMs classify examples based on a kernel or similarity measure. One of the most common non-linear kernels is the Gaussian radial basis function kernel: k(x_i, x_j) = exp(−γ ‖x_i − x_j‖^2). Its output is dependent on the Euclidean distance between examples x_i and x_j. This distance measure requires vectors of the same length. We consider two approaches to generating vectors of the same length: (1) linear interpolation and (2) truncation. In the first approach we linearly interpolate between points. In the second approach we consider only the most recent 5 days of data, x_{n−4}, x_{n−3}, ..., x_n. Euclidean distance is a one-to-one comparison. In contrast, the dynamic time warping (DTW) distance is a one-to-many comparison [14]. DTW computes the distance between two time series by finding the minimal-cost alignment. Here, the cost is the absolute distance between aligned points. We linearly interpolate all time series to have the same length, the length of the longest admission within the dataset (54). To ensure that the warping path does not contain lengthy vertical and horizontal segments, we constrain the warping window (how far the warping path can stray from the diagonal) using the Sakoe-Chiba band with a width of 10% of the length of the time series [15].
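The banded DTW distance just described can be sketched with the standard quadratic-time dynamic program; the function name and the exact band handling are our own, while the absolute-difference cost and the 10% Sakoe-Chiba band follow the description above.

```python
import numpy as np

def dtw_distance(a, b, band_frac=0.1):
    """Dynamic time warping with a Sakoe-Chiba band. Cost is the absolute
    difference between aligned points; the band limits how far the warping
    path may stray from the diagonal."""
    n = len(a)
    assert n == len(b), "series are linearly interpolated to a common length first"
    w = max(1, int(band_frac * n))                 # band half-width
    D = np.full((n + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(n, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, n]
```

Plugging this distance into the Gaussian kernel in place of the Euclidean distance yields the DTW-kernel SVM discussed next.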
We learn an SVM classifier based on this distance metric by replacing the Euclidean distance in the RBF kernel with the DTW distance, k(x_i, x_j) = exp(−γ DTW(x_i, x_j)), as in [16].

3.2.3 Classification using Hidden Markov Models

We can make observations about a patient on a daily basis, but we cannot directly measure whether or not a patient is at high risk. Hence, we used the phrase approximate risk process. By applying HMMs we assume there is a sequence of hidden states, x_1, x_2, ..., x_n, that governs the observations y_1, y_2, ..., y_n. Here, the observations are the predictions made by the SVM. We consider a two-state HMM where each state, s_1 and s_2, is associated with a mixture of Gaussian distributions over possible observations. At an intuitive level, one can think of these states as representing low and high risk. Using the data, we learn and apply HMMs in two different ways.

Classification via Likelihood. We hypothesize that there may exist patterns of risk over time that are more likely to lead to a positive test result. To test this hypothesis, we first consider the classic approach to classification using HMMs, described in Section VI-B of [17]. We learn two separate HMMs: one using only observation sequences from positive patients and another using only observation sequences from negative patients. We initialize the emission probabilities differently for each model based on the data, but initialize the transition probabilities uniformly. Given a test observation sequence, we apply both models and calculate the log-likelihood of the data under each model using the forward-backward algorithm. We classify patients continuously, based on the ratio of the log-likelihoods.

Classification via Posterior State Probabilities. As we saw in Figure 1, the SVM output for a patient may fluctuate greatly from day to day. While large fluctuations in risk are not impossible, they are not common.
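The likelihood-ratio classifier above rests on the forward algorithm. Here is a minimal log-space sketch; all parameters are set by hand purely for illustration, whereas the paper learns both HMMs from the positive and negative observation sequences.

```python
import numpy as np

def gaussian_logpdf(y, mu, var=1.0):
    """Log-density of y under N(mu, var); mu may be a vector of state means."""
    return -0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def hmm_loglik(y, pi, A, mus):
    """Log-likelihood of observations y under a Gaussian-emission HMM,
    via the log-space forward recursion."""
    logalpha = np.log(pi) + gaussian_logpdf(y[0], mus)
    for t in range(1, len(y)):
        logalpha = gaussian_logpdf(y[t], mus) + \
            np.logaddexp.reduce(logalpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(logalpha)

# Hand-set models standing in for the learned positive/negative HMMs:
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
pos_model = (pi, A, np.array([-1.0, 1.0]))   # a state near high risk
neg_model = (pi, A, np.array([-1.0, -0.5]))  # both states low

y = np.array([-0.8, 0.2, 0.9, 1.1])          # a toy risk process drifting upward
score = hmm_loglik(y, *pos_model) - hmm_loglik(y, *neg_model)
# A positive log-likelihood ratio argues for the positive-class model.
```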
Recall that in our initial calculation, while the variables from the time of admission are included in the prediction, the previous day's risk is not. The predictions produced by the SVM are independent. HMMs allow us to model the observations as a sequence and induce a temporal dependence in the model: the current state, x_t, depends on the previous state, x_{t−1}. We learn an HMM on a training set. We consider a two-state model in which we initialize the emission probabilities as p(y_t | x_t = s_1) = N(µ_{s_1}, 1) and p(y_t | x_t = s_2) = N(µ_{s_2}, 1) for all t, where µ_{s_1} = −1 and µ_{s_2} = 1. Based on this initialization, s_1 and s_2 correspond to "low-risk" and "high-risk" states, as mentioned above. A key decision was to use a left-to-right model where, once a patient reaches a "high-risk" state, they remain there. All remaining transition probabilities were initialized uniformly. Applied to a test example, we compute the posterior probabilities p(x_t | y_1, ..., y_n) for t = 1...n using the forward-backward algorithm. Because of the left-to-right assumption, if enough high-risk observations are made it will trigger a transition to the high-risk state. Figure 3 shows two examples of risk processes and their associated posterior state probabilities p(x_t = s_2 | y_1, ..., y_n) for t = 1...n. We classify each patient according to the probability of being in a high-risk state on the most recent day, i.e., p(x_n = s_2 | y_1, ..., y_n).

Figure 3: Given all of the observations y_1, ..., y_n (in blue), we compute the posterior probability of being in a high-risk state for each day (in red). (a) A patient who is discharged on day 40. (b) A patient who tests positive for C. diff on day 24.

4 Experiments & Results

This section describes a set of experiments used to compare several methods for predicting a patient's risk of acquiring C.
diff during the current hospital admission. We start by describing the experimental setup, which is maintained across all experiments, and later present the results. 4.1 Experimental Setup In order to reduce the possibility of confusing the risk of becoming colonized with C. diff with the existence of a current infection, for patients from the positive class we consider only data collected up to two days before a positive test result. This reduces the possibility of learning a classifier based on symptoms or treatment (a problem with some earlier studies). For patients who never test positive, researchers typically use the discharge day as the index event [3]. However, this can lead to deceptively good results because patients nearing discharge are typically healthier than patients not nearing discharge. To avoid this problem, we define the index event for negative examples as either the halfway point of their admission, or 5 days into the admission, whichever is greater. We consider a minimum of 5 days for a negative patient since 5 days is the minimum amount of data we have for any positive patient (e.g., a patient who tests positive on day 7). To handle class imbalance, we randomly subsample the negative class, selecting 10 negative examples for each positive example. When training the SVM we employ asymmetric cost parameters as in [18]. Additionally, we remove outliers, those patients with admissions longer than 60 days. Next, we randomly split the data into stratified training and test sets with a 70/30 split. The training set consisted of 1,251 admissions (127 positive), while the test set was composed of 532 admissions (50 positive). This split was maintained across all experiments. In all of the experiments, the training data was used for training purposes and validation of parameter selection, and the test set was used for evaluation purposes. For training and classification, we employed SVMlight [19] and Kevin Murphy’s HMM Toolbox [20]. 
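The subsampling and stratified split in the setup above can be sketched as follows. The cohort here is synthetic and the sizes are illustrative; only the 10:1 negative-to-positive subsampling and the stratified 70/30 split come from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for the cohort: a rare positive class.
y_all = np.zeros(5000, dtype=int)
y_all[rng.choice(5000, size=100, replace=False)] = 1
X_all = rng.normal(size=(5000, 8)) + y_all[:, None]

# Keep 10 negatives per positive, as in the paper's setup.
pos = np.flatnonzero(y_all == 1)
neg = rng.choice(np.flatnonzero(y_all == 0), size=10 * len(pos), replace=False)
keep = np.concatenate([pos, neg])
X, y = X_all[keep], y_all[keep]

# Stratified 70/30 split so both sets keep the 1:10 class ratio.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
```

Stratification matters here: with so few positives, an unstratified split could easily leave the test set with too few cases to evaluate.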
4.2 Results

Table 2 compares the performance of eight different classifiers applied to the held-out test data. The first classifier is our baseline approach, described in Section 3.1; it classifies patients based solely on their current state. The second classifier, RP+Average, is an initial improvement on this approach, and classifies patients based on the average value of their risk process. The remaining classifiers are all based on time-series classification methods. RP+SimilarityEuc.5days classifies patients using a non-linear SVM based on the Euclidean distance between the most recent 5 days. RP+SimilarityEuc.interp. uses the entire risk process by interpolating between points. These two methods, in addition to DTW, are described in Section 3.2.2. The difference between RP+HMMlikelihood and RP+HMMposterior is described in Section 3.2.3.

Table 2: Predicting a positive test result two days in advance using different classifiers. Current State represents the traditional approach to risk stratification, and is the only classifier that is not based on patient Risk Processes (RP).

Approach                  AUC    95% CI      F-Score  95% CI
Current State             0.69   0.61-0.77   0.28     0.19-0.38
RP+Average                0.75   0.69-0.81   0.32     0.21-0.41
RP+SimilarityEuc.5days    0.73   0.67-0.80   0.27     0.18-0.37
RP+HMMlikelihood          0.74   0.68-0.81   0.30     0.20-0.38
RP+SimilarityEuc.interp.  0.75   0.69-0.82   0.31     0.22-0.41
RP+SimilarityDTW          0.76   0.69-0.82   0.31     0.22-0.41
RP+HMMposterior           0.76   0.70-0.82   0.30     0.21-0.41
RP+Features               0.79   0.73-0.85   0.37     0.24-0.49

Figure 4: Results of predicting a patient's risk of testing positive for C. diff in the held-out test set using RP+Features (ROC curve; AUC = 0.79).

Figure 5: Feature weights from SVMs learned using different folds of the training set. The definition of the features is given in Table 1.
RP+Features classifies patients based on a linear combination of the average and other summary statistics (described in Section 3.2.1) of the risk process. For all of the performance measures we compute 95% pointwise confidence intervals by bootstrapping (sampling with replacement) the held-out test set. Figure 4 gives the ROC curve for the best method, RP+Features. The AUC is calculated by sweeping the decision threshold. RP+Features performed as well or better than the Current State and RP+Average approaches at every point along the curve, thereby dominating both traditional approaches. Compared to the other classifiers, the classifier based on RP+Features dominates on both AUC and F-Score. This classifier is based on a linear combination of statistics (listed in Table 1) computed from the patient risk processes. We learned the feature weights using the training data. To get a sense of the importance of each feature we used repeated sub-sampling validation on the training set. We randomly subsampled 70% of the training data 100 times and learned 100 different SVMs; this resulted in 100 different sets of feature weights. The results of this experiment are shown in Figure 5. The most important features are the length of the time series (Feature 1), the fraction of the time for which the patient is at positive risk (Feature 9), and the maximum risk attained (Feature 14). The only two features with significantly negative weights are Feature 10 and Feature 13: the overall fraction of time a patient has a negative risk, and the longest consecutive period of time that a patient has negative risk. It is difficult to interpret the performance of a classifier based on these results alone, especially since the classes are imbalanced. Figure 6 gives the confusion matrix for the mean performance of the best classifier, RP+Features.
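The bootstrapped confidence intervals used for the performance measures can be sketched as a percentile bootstrap over the held-out set; the function name, `n_boot`, and the handling of single-class resamples are our assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC: resample the held-out set with
    replacement and recompute the statistic on each resample."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue                      # need both classes for an AUC
        aucs.append(roc_auc_score(y_true[idx], scores[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, scores), (lo, hi)
```

The same resampling loop works for the F-Score by swapping in a thresholded `f1_score` in place of `roc_auc_score`.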
To further convey the ability of the classifier to risk stratify patients, we split the test patients into quintiles (as is often done in clinical studies) based on the continuous output of the classifier. Each quintile contains approximately 106 patients. For each quintile we calculated the probability of a positive test result, based on those patients who eventually test positive for C. diff. Figure 7 shows that the probability increases with each quintile. The difference between the 1st and 5th quintiles is striking; relative to the 1st quintile, patients in the 5th quintile are at more than a 25-fold greater risk.

Figure 6: Confusion matrix. Using the best approach, RP+Features, we achieve a sensitivity of 50% and a specificity of 85% on the held-out data.

                 Actual p   Actual n
Predicted p′     TP: 26     FP: 72
Predicted n′     FN: 24     TN: 410

Figure 7: Fraction of patients who test positive in each quintile. Test patients with RP+Features predictions in the 5th quintile are more than 25 times more likely to test positive for C. diff than those in the 1st quintile.

5 Discussion & Conclusion

To the best of our knowledge, we are the first to consider risk of acquiring an infection as a time series. We use a two-stage process, first extracting approximate risk processes and then using the risk process as an input to a classifier. We explore three different approaches to classification: similarity metrics, feature vectors, and hidden Markov models. The majority of the methods based on time-series classification performed as well if not better than the previous approach of classifying patients simply based on the average of their risk process. The differences were not statistically significant, perhaps because of the small number of positive examples in the held-out set. Still, we are encouraged by these results, which suggest that posing the risk stratification problem as a time-series classification task can provide more accurate models.
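The quintile analysis of Figure 7 amounts to a few lines once the classifier's continuous outputs and the outcome labels for the test set are available as arrays (the function name is ours):

```python
import numpy as np

def quintile_positive_rates(scores, y_true):
    """Split test patients into quintiles of the classifier's continuous
    output and report the fraction testing positive in each, from lowest
    to highest predicted risk."""
    scores, y_true = np.asarray(scores), np.asarray(y_true)
    order = np.argsort(scores)          # ascending risk
    folds = np.array_split(order, 5)    # five (near-)equal groups
    return [y_true[f].mean() for f in folds]
```

The ratio of the last entry to the first is the kind of relative-risk figure quoted above.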
There is large overlap in the confidence intervals for many of the results reported in Table 2, in part because of the paucity of positive examples. Still, based on the mean performance, all classifiers that incorporate patient risk processes outperform the Current State classifier, and the majority of those classifiers perform as well as or better than RP+Average. Only two classifiers did not perform better than the latter: RP+SimilarityEuc.5days and RP+HMMlikelihood. RP+SimilarityEuc.5days classifies patients based on a similarity metric using only the most recent 5 days of the patient risk processes. Its relatively poor performance suggests that a patient's risk may depend on the entire risk process. The reasons for the relatively poor performance of the RP+HMMlikelihood approach are less clear. Initially, we thought that perhaps two states were insufficient, but experiments with larger numbers of states led to overfitting on the training data. It may well be that the Markovian assumption is problematic in this context. We plan to investigate other graphical models, e.g., conditional random fields, going forward. The F-Scores reported in Table 2 are lower than those often seen in the machine-learning literature. However, when predicting outcomes in medicine, the problems are often so hard, the data so noisy, and the class imbalance so great that one cannot expect to achieve the kind of classification performance typically reported in the machine-learning literature. For this reason, the medical literature on risk stratification typically focuses on a combination of the AUC and the kind of odds ratios derivable from the data in Figure 7. As observed in the introduction, a direct comparison with the AUC achieved by others is not possible because of differences in the datasets, the inclusion criteria, and the details of the task. We have yet to thoroughly investigate the clinical ramifications of this work.
However, for the daunting task of risk stratifying patients already at an elevated risk for C. diff, an AUC of 0.79 and an odds ratio of >25 are quite good.
Inferring neural population dynamics from multiple partial recordings of the same neural circuit Srinivas C. Turaga∗1,2, Lars Buesing1, Adam M. Packer2, Henry Dalgleish2, Noah Pettit2, Michael Häusser2 and Jakob H. Macke3,4 1Gatsby Computational Neuroscience Unit, University College London 2Wolfson Institute for Biomedical Research, University College London 3Max-Planck Institute for Biological Cybernetics, Tübingen 4Bernstein Center for Computational Neuroscience, Tübingen Abstract Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for "stitching" together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population sizes for which population dynamics can be characterized, beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1 Introduction The computation performed by a neural circuit is a product of the properties of single neurons in the circuit and their connectivity. Simultaneous measurements of the collective dynamics of all neurons in a neural circuit will help us understand their function and test theories of neural computation. However, experimental limitations make it difficult to measure the joint activity of large populations of neurons.
Recent progress in 2-photon calcium imaging now allows for recording of the activity of hundreds of neurons nearly simultaneously [1, 2]. However, in neocortex, where circuits or subnetworks can span thousands of neurons, current imaging techniques are still inadequate. We present a computational method to more effectively leverage currently available experimental technology. To illustrate our method, consider the following example: A whisker barrel in the mouse somatosensory cortex consists of a few thousand neurons responding to stimuli from one whisker. Modern microscopes can only image a small fraction (a few hundred neurons) of this circuit. But since nearby neurons couple strongly to one another [3], by moving the microscope to nearby locations, one can expect to image neurons which are directly coupled to the first population of neurons. In this paper we address the following questions: Could we characterize the joint dynamics of the first and second populations of neurons, even though they were not imaged simultaneously? Can we estimate correlations in variability across the two populations? Surprisingly, the answer is yes. We propose a statistical tool for "stitching" together measurements from multiple partial observations of the same neural circuit. We show that we can predict the correlated dynamics of large populations of neurons even if many of the neurons have not been imaged simultaneously. In sensory cortical neurons, where large variability in the evoked response is observed [4, 5], our model can successfully predict the magnitude of (so-called) noise correlations between non-simultaneously recorded neurons. Our method can help us build data-driven models of large cortical circuits and help test theories of circuit function.

∗sturaga@gatsby.ucl.ac.uk

Figure 1: Inferring neuronal interactions from non-simultaneous measurements. a) If two subsets of a neural population can only be recorded from in two separate imaging sessions, can we infer the connectivity across the sub-populations (red connections)? b) We want to infer the functional connectivity matrix, and in particular those entries which correspond to pairs of neurons that were not simultaneously measured (red off-diagonal block). While the two sets of neurons are pictured as non-overlapping here, we will also be interested in the case of partially overlapping measurements.

Related recent research. Numerous studies have addressed the question of inferring functional connectivity from 2-photon imaging data [6, 7] or electrophysiological measurements [8, 9, 10, 11]. These approaches include detailed models of the relationship between fluorescence measurements, calcium transients and spiking activity [6] as well as model-free information-theoretic approaches [7]. However, these studies do not attempt to infer functional connections between non-simultaneously observed neurons. On the other hand, a few studies have presented statistical methods for dealing with sub-sampled observations of neural activity or connectivity, but these approaches are not applicable to our problem: A recent study [12] presented a method for predicting noise correlations between non-simultaneously recorded neurons, but this method requires the strong assumption that noise correlations are monotonically related to stimulus correlations. [13] presented an algorithm for latent GLMs, but this algorithm does not scale to the population sizes of interest here. [14] presented a method for inferring synaptic connections on dendritic trees from sub-sampled voltage observations. In this setting, one typically obtains a measurement from each location every few imaging frames, and it is therefore possible to interpolate these observations.
In contrast, in our application, imaging sessions are of much longer duration than the time-scale of neural dynamics. Finally, [15] presented a statistical framework for reconstructing anatomical connectivity by superimposing partial connectivity matrices derived from fluorescent markers.

2 Methods

Our goal is to estimate a joint model of the activity of a neural population which captures the correlation structure and stimulus selectivity of the population from partial observations of the population activity. We model the problem as fitting a latent dynamical system with missing observations. In principle, any latent dynamical system model [13] can be used; here we demonstrate our main point using the simple linear Gaussian dynamical system for its computational tractability.

2.1 A latent dynamical system model for combining multiple measurements of population activity

Linear dynamics. We denote by x^k the activity of N neurons in the population on recording session k, and model its dynamics as linear with Gaussian innovations in discrete time,

x^k_t = A x^k_{t−1} + B u^k_t + η_t, where η_t ∼ N(0, Q).  (1)

Here, the N × N coupling matrix A models correlations across neurons and time. An entry A_ij being non-zero implies that activity of neuron j at time t has a statistical influence on the activity of neuron i on the next time-step t+1, but does not necessarily imply a direct synaptic connection. For this reason, entries of A are usually referred to as the 'functional' (rather than anatomical) couplings or connectivity of the population. The entries of A also shape trial-to-trial variability which is correlated across neurons, i.e. noise correlations. Further, we include an external, observed stimulus u^k_t (of dimension N_u) as well as receptive fields B (of size N × N_u) which model the stimulus dependence of the population activity.
We model neural noise (which could include the effect of other influences not modeled explicitly) using zero-mean innovations η_t, which are Gaussian i.i.d. with covariance matrix Q, assuming the latter to be diagonal (see below for how our framework can also allow for correlated noise). The mean x_0 and covariance Q_0 of the initial state x^k_0 were chosen such that the system is stationary (apart from the stimulus contribution B u^k_t), i.e. x_0 = 0 and Q_0 satisfies the Lyapunov equation Q_0 = A Q_0 A^⊤ + Q. For the sake of simplicity, we work directly in the space of continuous-valued imaging measurements (rather than on the underlying spiking activity), i.e. x^k_t models the relative calcium fluorescence signal. While this model does not capture the nonlinear and non-Gaussian cascade of neural couplings, calcium dynamics, fluorescence measurements and imaging noise [16, 6], we will show that this model nevertheless is able to predict correlations across non-simultaneously observed pairs of neurons.

Incomplete observations. In each imaging session k we measure the activity of N_k neurons simultaneously, where N_k is smaller than the total number of neurons N. Since these measurements are noisy and incomplete observations of the full state vector, the true underlying activity of all neurons x^k_t is treated as a latent variable. The vector of the N_k measurements at time t in session k is denoted as y^k_t and is related to the underlying population activity by

y^k_t = C^k (x^k_t + d + ε_t), where ε_t ∼ N(0, R),  (2)

where the 'measurement matrix' C^k is of size N_k × N. Further assuming that the recording sites correspond to identified cells (which typically is the case for 2-photon calcium imaging), we can assume C^k to be known and of the following form: the element C^k_ij is 1 if neuron j of the population is being recorded from in session k (as the i-th recording site); the remaining elements of C^k are 0.
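A minimal simulation of the generative model in equations (1) and (2) can be sketched in a few lines of NumPy. All dimensions and parameter values below are illustrative, not taken from the paper; the stationary initial covariance is obtained by fixed-point iteration of the Lyapunov equation above:

```python
import numpy as np

rng = np.random.default_rng(2)
N, Nu, T = 8, 2, 200                     # neurons, stimulus dims, time steps

# Stable random dynamics: scale A so its spectral radius is below 1.
A = rng.normal(scale=0.3, size=(N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
B = rng.normal(scale=0.1, size=(N, Nu))
Q = np.diag(rng.uniform(0.05, 0.1, size=N))   # diagonal innovation noise

# Stationary initial covariance: iterate Q0 <- A Q0 A^T + Q to convergence.
Q0 = Q.copy()
for _ in range(500):
    Q0 = A @ Q0 @ A.T + Q

# Simulate the latent population activity, eq. (1).
u = rng.normal(size=(T, Nu))                  # external stimulus
x = np.empty((T, N))
x[0] = rng.multivariate_normal(np.zeros(N), Q0)
for t in range(1, T):
    x[t] = A @ x[t - 1] + B @ u[t] + rng.multivariate_normal(np.zeros(N), Q)

# Session k observes a subset of neurons: C^k picks out those rows, eq. (2).
observed = np.arange(N // 2)                  # first half imaged this session
C = np.eye(N)[observed]
d = rng.normal(scale=0.1, size=N)             # constant offsets
eps = rng.normal(scale=0.006, size=(T, N))    # measurement noise
y = (x + d + eps) @ C.T
```

Changing `observed` between sessions while reusing the same `x`-level parameters is exactly the missing-data structure that the stitching method exploits.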
The measurement noise is modeled as a Gaussian random variable ε_t with covariance R, and the parameter d captures a constant offset. One can also envisage using our model with dimensions of x^k_t which are never observed; such latent dimensions would then model correlated noise or the input from unobserved neurons into the population [17, 18].

Fitting the model. Our goal is to estimate the parameters (A, B, Q, R) of the latent linear dynamical system (LDS) model described by equations (1) and (2) from experimental data. One can learn these parameters using the standard expectation maximization (EM) algorithm that finds a local maximum of the log-likelihood of the observed data [19]. The E-step can be performed via Kalman smoothing (with a different C^k for each session). In the M-step, the updates for A, B and Q are as in standard linear dynamical systems, and the updates for d and R are element-wise given by

d_j = (1 / (T n_j)) Σ_{k,t} χ^k_j ⟨ y^k_{t,σ^k_j} − x^k_{t,j} ⟩,
R_jj = (1 / (T n_j)) Σ_{k,t} χ^k_j ⟨ ( y^k_{t,σ^k_j} − x^k_{t,j} − d_j )² ⟩,

where ⟨·⟩ denotes the expectation over the posterior distribution calculated in the E-step, and T is the number of time steps in each recording session (assumed to be the same for each session for the sake of simplicity). Furthermore, χ^k_j := Σ_i C^k_ij is 1 if neuron j was imaged in session k and 0 otherwise, n_j = Σ_k χ^k_j is the total number of sessions in which neuron j was imaged, and σ^k_j is the index of the recording site of neuron j during session k. To improve the computational efficiency of the fitting procedure as well as to avoid shallow local maxima, we used a variant of online EM with randomly selected mini-batches [20] followed by full-batch EM for fine-tuning.

2.2 Details of simulated and experimental data

Simulated data. We simulated a population of 60 neurons which were split into 3 pools ('cell types') of 20 neurons each, with both connection probability and strength being cell-type specific.
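The element-wise M-step updates for d and R above translate directly into array operations. The sketch below is a deliberate simplification: it treats the posterior over x as a point estimate `xhat` (the exact R_jj update also adds back the posterior variance of x), and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, K = 6, 100, 2

# chi[k, j] = 1 if neuron j was imaged in session k (sessions overlap here).
chi = np.zeros((K, N))
chi[0, :4] = 1
chi[1, 2:] = 1
n = chi.sum(axis=0)                      # n_j: sessions in which j was imaged

# Stand-ins for the E-step output: posterior mean xhat of the latent activity
# and measurements y (NaN where a neuron was not imaged in that session).
xhat = rng.normal(size=(K, T, N))
d_true = rng.normal(scale=0.5, size=N)
y = xhat + d_true + rng.normal(scale=0.05, size=(K, T, N))
y[np.broadcast_to(chi[:, None, :] == 0, y.shape)] = np.nan

mask = chi[:, None, :]                   # broadcastable indicator chi_j^k

# d_j = (1 / (T n_j)) * sum_{k,t} chi_j^k <y - x>
d = np.where(mask == 1, y - xhat, 0.0).sum(axis=(0, 1)) / (T * n)

# R_jj = (1 / (T n_j)) * sum_{k,t} chi_j^k <(y - x - d)^2>
# (posterior variance of x is omitted here for brevity)
R = np.where(mask == 1, (y - xhat - d) ** 2, 0.0).sum(axis=(0, 1)) / (T * n)
```

Dividing by T·n_j means each neuron's offset and noise variance are averaged only over the sessions in which that neuron was actually imaged.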
Within each pool, pairs were coupled with probability 50% and random weights; cell-types one and two had excitatory connections onto the other cells, and type three had weak but dense inhibitory couplings (see Figure 2a, top left). Coupling weights were truncated at ±0.2. The 4-dimensional external stimulus was delivered into the first pool. On average, 24% of the variance of each neuron was noise, 2% driven by the stimulus, 25% by self-couplings and a further 49% by network interactions. After shuffling the ordering of neurons (resulting in the connectivity matrix displayed in Fig. 2a, top middle), we simulated K = 10 trials of length T = 1000 samples from the population. We then pretended that the population was imaged in two sessions with non-overlapping subsets of 30 neurons each (Figure 2a, green outlined blocks) of K = 5 trials each, and that observation noise ε was uncorrelated and very small, std(ε_i) = 0.006.

Experimental data. We also applied the stitching method to two calcium imaging datasets recorded in the somatosensory cortex of awake or anesthetized mice. We imaged calcium signals in the superficial layers of mouse barrel cortex (S1) in-vivo using 2-photon laser scanning microscopy [1]. A genetically encoded calcium indicator (GCaMP6s) was virally expressed, driven pan-neuronally by the human-synapsin promoter, in the C2 whisker barrel, and the activity of about 100-200 neurons was imaged simultaneously in-vivo at about 3 Hz, compatible with the slow timescales of the calcium dynamics revealed by GCaMP6s. The anesthetized dataset was collected during an experiment in which the C2 whisker of an anesthetized mouse was repeatedly flicked randomly in one of three different directions (rostrally, caudally or ventrally). About 200 neurons were imaged for about 27 min at a depth of 240 µm in the C2 whisker barrel. The awake dataset was collected while an awake animal was performing a whisker flick detection task.
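The block-structured connectivity described above for the simulated data can be generated along these lines. This sketch reproduces only the qualitative structure from the text (three pools of 20 cells, 50% within-pool connection probability, excitatory pools one and two, weak dense inhibitory pool three, weights truncated at ±0.2); the specific weight scales and cross-pool densities are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
pool = 20
N = 3 * pool
A = np.zeros((N, N))

# Within each pool: 50% connection probability, random weights.
for p in range(3):
    rows = slice(p * pool, (p + 1) * pool)
    block = rng.normal(scale=0.1, size=(pool, pool))
    block *= rng.random((pool, pool)) < 0.5
    A[rows, rows] = block

# Columns are presynaptic sources (A_ij couples j -> i).
# Pools one and two excite the other cells; pool three is weakly but
# densely inhibitory (scales and densities chosen for illustration).
A[2 * pool:, :2 * pool] = 0.05 * (rng.random((pool, 2 * pool)) < 0.8)
A[:2 * pool, 2 * pool:] = -0.02 * (rng.random((2 * pool, pool)) < 0.9)

# Truncate coupling weights at +/- 0.2, as in the simulation.
A = np.clip(A, -0.2, 0.2)
```

Shuffling the neuron ordering afterwards (as in Figure 2a, top middle) is a single `rng.permutation` applied to both rows and columns.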
In this session, about 80 neurons were imaged for about 55 min at a depth of 190 µm, also in the C2 whisker barrel. Regions of interest (ROIs) corresponding to putative GCaMP-expressing somata (and in some instances isolated neuropil) were manually defined, and the time series corresponding to the calcium signal for each such ROI was extracted. The calcium time series were high-pass filtered with a time constant of 1 s.

2.3 Quantifying and comparing model performance

Fictional imaging scenario in experimental data. To evaluate how well stitching works on real data, we created a fictional imaging scenario. We pretended that the neurons, which were in reality simultaneously imaged, were not imaged in one session but instead were 'imaged' in two subsets in two different sessions. The subsets corresponding to different 'sessions' each contained c = 60% of the neurons, meaning that the subsets overlapped and had a few neurons in common. We also experimented with c = 50% as in our simulation above, but failed to get good performance without any overlapping neurons. We imagined that we spent the first 40% of the time 'imaging' subset 1 and the second 40% of the time 'imaging' subset 2. The final 20% of the data was withheld for use as the test set. We then used our stitching method to predict pairwise correlations from the fictional imaging sessions.

Upper and lower bounds on performance. We wanted to benchmark how well our method is doing both compared to the theoretical optimum and to a conventional approach. On synthetic data, we can use the ground-truth parameters as the optimal model. In lieu of ground truth on the real data, we fit a 'fully observed' model to the simultaneous imaging data of all neurons (which would of course be impossible in practice, but is possible in our fictional imaging scenario). We also analyzed the data using a conventional, 'naive' approach in which we separately fit dynamical system models to each of the two imaging sessions and then combined their parameters.
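The fictional imaging scenario reduces to simple index bookkeeping. In this sketch, the overlap fraction c = 0.6 and the 40/40/20 time split follow the text; the population size and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T, c = 100, 1000, 0.6          # neurons, time steps, subset fraction

# Two fictional 'sessions', each covering 60% of a random ordering of the
# neurons, so they overlap in 20% of the population.
neurons = rng.permutation(N)
k = int(c * N)
subset1 = np.sort(neurons[:k])
subset2 = np.sort(neurons[N - k:])
overlap = np.intersect1d(subset1, subset2)

# Time budget: first 40% 'imaging' subset 1, next 40% subset 2,
# final 20% withheld as the test set.
t1 = slice(0, int(0.4 * T))
t2 = slice(int(0.4 * T), int(0.8 * T))
t_test = slice(int(0.8 * T), T)
```

The stitched model then sees `subset1` during `t1` and `subset2` during `t2`, and is evaluated on correlations it predicts for pairs spanning the two subsets.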
We set coefficients of non-simultaneously recorded pairs to 0 and averaged coefficients for neurons which were part of both imaging sessions (in the c = 60% scenario). The "fully observed" and the "naive" models constitute an upper and a lower bound, respectively, on our performance. Certainly we cannot expect to do better at predicting correlations than if we had observed all neurons simultaneously.

3 Results

We tested our ability to stitch multiple observations into one coherent model which is capable of predicting statistics of the joint dynamics, such as correlations across non-simultaneously imaged neuron pairs.

Figure 2: Noise correlations and coupling parameters can be well recovered in a simulated dataset. a) A coupling matrix for 60 neurons arranged in 3 blocks was generated (true coupling matrix) and shuffled. We simulated the imaging of non-overlapping subsets of 30 neurons each in two sessions. Couplings were recovered using a "naive" strategy and using our proposed "stitching" method. b) Noise correlations estimated by our stitching method match true noise correlations well. c) Couplings between non-simultaneously imaged neuron pairs (red off-diagonal block) are estimated well by our method.

We first apply our method to a synthetic dataset to explain its properties, and then demonstrate that it works for real calcium imaging measurements from the mouse somatosensory cortex.

3.1 Inferring correlations and model parameters in a simulated population

It might seem counterintuitive that one can infer the cross-couplings, and hence noise correlations, between neurons observed in separate sessions.
An intuition for why this might work nevertheless can be gained by considering the artificial scenario of a network of linearly interacting neurons driven by Gaussian noise: Suppose that during the first recording session we image half of these neurons. We can fit a linear state-space model to the data in which the other, unobserved half of the population constitutes the latent space. Given enough data, the maximum likelihood estimate of the model parameters (which is consistent) lets us identify the true joint dynamics of the whole population up to an invertible linear transformation of the unobserved dimensions [21]. After the second imaging session, where we image the second (and previously unobserved) half of the population, we can identify this linear transformation, and thus identify all model parameters uniquely, in particular the cross-couplings. To demonstrate this intuition, we simulated such an artificial dataset (described in Section 2.2) and describe here the results of the stitching procedure.

Recovering the coupling matrix. Our stitching method was able to recover the true coupling matrix, including the off-diagonal blocks which correspond to pairs of neurons that were not imaged simultaneously (see red-outlined blocks in Figure 2a, bottom middle). As expected, recovery was better for couplings across observed pairs (correlation between true and estimated parameters 0.95, excluding self-couplings) than for non-simultaneously recorded pairs (Figure 2c; correlation 0.73). With the "naive" approach, couplings between non-simultaneously observed pairs cannot be recovered, and even for simultaneously observed pairs, the estimate of couplings is biased (correlation 0.75).

Recovering noise correlations. We also quantified the degree to which we are able to predict statistics of the joint dynamics of the whole network, in particular noise correlations across pairs of neurons that were never observed simultaneously.
We calculated noise correlations by computing correlations in variability of neural activity after subtracting contributions due to the stimulus. We found that the stitching method was able to accurately recover the noise correlations of non-simultaneously recorded pairs (correlation between predicted and true correlations was 0.92; Figure 2b). In fact, we generally found the prediction of correlations to be more accurate than prediction of the underlying coupling parameters. In contrast, a naive approach would not be able to estimate noise correlations between non-simultaneously observed pairs. (We note that, as the stimulus drive in this simulation was very weak, inferring noise correlations from stimulus correlations [12] would be impossible.)

Figure 3: Examples of correlation and coupling recovery in the anesthetized calcium imaging experiments. a) Coupling matrices fit to the calcium signal using all neurons (fully observed) or fit after "imaging" two overlapping subsets of 60% of the neurons each (stitched and naive). The naive approach is unable to estimate coupling terms for "non-simultaneously imaged" neurons, so these are set to zero. b) Scatter plot of coupling terms for "non-simultaneously imaged" neuron pairs estimated using the stitching method vs the fully observed estimates. c) Correlations predicted using the coupling matrices. d) Scatter plot of the correlations in c for "non-simultaneously imaged" neuron pairs estimated using the stitching and the naive approaches.

Predicting unobserved neural activity. Given activity measurements from a subset of neurons, our method can predict the activity of neurons in the unobserved subset. This prediction can be calculated by doing inference in the resulting LDS, i.e.
by calculating the posterior mean µ^k_{1:T} = E(x^k_{1:T} | y^k_{1:T}, u^k_{1:T}) and looking at those entries of µ^k_{1:T} which correspond to unobserved neurons. On our simulated data, we found that this prediction was strongly correlated with the underlying ground-truth activity (average correlation 0.70 ± 0.01 s.e.m. across neurons, using a separate test set which was not used for parameter fitting). The upper bound for this prediction metric can be obtained by using the ground-truth parameters to calculate the posterior mean. Use of this ground-truth model resulted in a performance of 0.82 ± 0.01. In contrast, the 'naive' approach can only utilize the stimulus, but not the activity of the observed population, for prediction and therefore only achieved a correlation of 0.23 ± 0.01.

3.2 Inferring correlations in mouse somatosensory cortex

Next, we applied our stitching method to two real datasets: anesthetized and awake (described in Section 2.2). We demonstrate that it can predict correlations between non-simultaneously accessed neuron pairs with accuracy approaching that of the upper bound (the "fully observed" model trained on all neurons), and substantially better than the lower-bound "naive" model.

Example results. Figure 3a displays coupling matrices of a population consisting of the 50 most correlated neurons in the anesthetized dataset (see Section 2.2 for details) estimated using all three methods. Our stitching method yielded a coupling matrix with structure similar to the fully observed model (Figure 3a, central panel), even in the off-diagonal blocks which correspond to non-simultaneously recorded pairs. In contrast, the naive method, by definition, is unable to infer couplings for non-simultaneously recorded pairs, and therefore over-estimates the magnitude of observed couplings (Figure 3a, right panel).
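The idea of predicting unobserved neurons via posterior inference can be illustrated with a plain Kalman filter. This is a simplification of the Kalman smoother used in the paper (filtered rather than smoothed posterior, no stimulus term, point-observed subset, synthetic parameters throughout):

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 6, 300

# Synthetic stable dynamics with cross-couplings, as in eq. (1).
A = rng.normal(scale=0.3, size=(N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
Q = 0.05 * np.eye(N)

# Simulate the full population, but observe only the first half.
x = np.zeros((T, N))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(N), Q)
obs = np.arange(N // 2)
C = np.eye(N)[obs]
Robs = 0.01 * np.eye(len(obs))
y = x[:, obs] + rng.multivariate_normal(np.zeros(len(obs)), Robs, size=T)

# Kalman filter: posterior mean mu over ALL N neurons, observed or not.
mu = np.zeros((T, N))
P = np.eye(N)
for t in range(1, T):
    mu_pred = A @ mu[t - 1]
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + Robs
    K = P_pred @ C.T @ np.linalg.inv(S)
    mu[t] = mu_pred + K @ (y[t] - C @ mu_pred)
    P = (np.eye(N) - K @ C) @ P_pred

# Correlation between the inferred and true activity of an UNOBSERVED neuron.
j = N - 1
r = np.corrcoef(mu[50:, j], x[50:, j])[0, 1]
```

Because the couplings in A mix observed and unobserved dimensions, the filter's cross-covariances propagate information from the measured half into the estimate of the unmeasured half.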
Even for non-simultaneously recorded pairs, the stitched model predicted couplings which were correlated with the fully observed predictions (Figure 3b, correlation 0.38).

Figure 4: Recovering correlations and coupling parameters in real calcium imaging experiments. 100 neurons were simultaneously imaged in an anesthetized mouse (top row) and an awake mouse (bottom row). Random populations of these neurons, ranging in size from 10 to 100, were chosen and split into two slightly overlapping subsets each containing 60% of the neurons. The activity of these subsets was imagined to be "imaged" in two separate "imaging" sessions (see Section 2.2). a) Pairwise correlations for "non-simultaneously imaged" neuron pairs estimated by the "naive" and our "stitched" strategies, compared to correlations predicted by a model fit to all neurons ("full obs"). b) Accuracy of predicting the activity of one subset of neurons, given the activities of the other subset of neurons. c) Comparison of estimated couplings for "non-simultaneously imaged" neuron pairs to those estimated using the "fully observed" model. Note that true coupling terms are unavailable here.

However, of greater interest is how well our model can recover pairwise correlations between non-simultaneously measured neuron pairs. We found that our stitching method, but not the naive method, was able to accurately reconstruct these correlations (Figure 3c). As expected, the naive method strongly under-estimated correlations in the non-simultaneously recorded blocks, as it can only model stimulus correlations but not noise correlations across neurons.
In contrast, our stitching method predicted correlations well, matching those of the fully observed model (correlation 0.84 for stitchLDS, 0.15 for naiveLDS; Figure 3d). (The naive approach also over-estimated correlations within each view. This is a consequence of biases resulting from averaging couplings across views for neurons in the overlap between the two fictional sessions.)

Summary results across multiple populations. Here, we investigate the robustness of our findings. We drew random neuronal populations of sizes ranging from 10 to 80 (for awake) or 100 (for anesthetized) from the full datasets. For each population, we fit three models (fully observed, stitched, naive) and compared their correlations, parameters and activity cross-prediction accuracy. We repeated this process 20 times for each population size and dataset (anesthetized/awake) to characterize the variability. We found that for both datasets, the correlations predicted by the stitching method for non-simultaneously recorded pairs were similar to the fully observed ones, and that this similarity is almost independent of population size (Figure 4a). In fact, for the awake data (in which the overall level of correlation was higher), the correlation matrices were extremely similar (lower panel). The stitching method also substantially outperformed the naive approach, for which the similarity was lower by a factor of about 2. We compared the accuracy of the models at predicting the neural activity of one subset of neurons given the stimulus and the activity of the other subset (Figure 4b). We find that our model makes significantly better predictions than the lower-bound naive model, whose performance comes from modeling the stimulus and the neurons in the overlap between both subsets. Indeed, for the more active and correlated awake dataset, predictions are nearly as good as those of the fully observed model.
We also found that prediction accuracy increased slightly with population size, perhaps since a larger population provides more neurons from which the activity of the other subset can be predicted. Apparently, this gain in accuracy from additional neurons outweighed any potential drop in performance resulting from increased potential for over-fitting on larger populations. While we have no access to the true cross-couplings for the real data, we can nonetheless compare the couplings from our stitched model to those estimated by the fully observed model. We find that the stitching model is indeed able to estimate couplings that correlate positively with the fully observed couplings, even for non-simultaneously imaged neuron pairs. Interestingly, this correlation drops with increasing population size, perhaps due to possible near degeneracy of parameters for large systems. 4 Discussion It has long been appreciated that a dynamical system can be reconstructed from observations of only a subset of its variables [22, 23, 21]. These theoretical results suggest that while only measuring the activity of one population of neurons, we can infer the activity of a second neural population that strongly interacts with the first, up to re-parametrization. Here, we go one step further. By later measuring the activity of the second population, we recover the true parametrization allowing us to predict aspects of the joint dynamics of the two populations, such as noise correlations. Our essential finding is that we can put these theoretical insights to work using a simple linear dynamical system model that “stitches” together data from non-simultaneously recorded but strongly interacting populations of neurons. We applied our method to analyze 2-photon population calcium imaging measurements from the superficial layers of the somatosensory cortex of both anesthetized and awake mice, and found that our method was able to successfully combine data not accessed simultaneously. 
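The data layout underlying the stitching problem can be sketched with a toy linear dynamical system: two "imaging sessions" each observe an overlapping subset of an interacting population, and entries never observed are marked missing. This is a minimal illustration of the setup only, not the paper's model or fitting procedure; the dynamics matrix and subset choices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 10, 200                    # toy population size, time bins per session
# Stable toy dynamics: 0.9 times an orthogonal matrix (spectral radius 0.9).
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]

def simulate(T):
    """Simulate x_{t+1} = A x_t + noise for the whole population."""
    x = np.zeros((T, n))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + rng.standard_normal(n)
    return x

sess1, sess2 = np.arange(0, 6), np.arange(4, 10)   # overlapping subsets
Y = np.full((2 * T, n), np.nan)                    # stacked "imaging" sessions
Y[:T, sess1] = simulate(T)[:, sess1]               # session 1 sees neurons 0..5
Y[T:, sess2] = simulate(T)[:, sess2]               # session 2 sees neurons 4..9
```

The NaN pattern makes the estimation problem explicit: cross-couplings between session-1-only and session-2-only neurons must be inferred without any jointly observed samples, via the shared dynamics and the overlap neurons.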
In particular, this approach allowed us to accurately predict correlations even for pairs of non-simultaneously recorded neurons. In this paper, we focused our demonstration on stitching together two populations of neurons. Our framework can be generalized to more than two populations, although it remains to be seen empirically how well larger numbers of populations can be combined. An experimental variable of interest is the degree of overlap (shared neurons) between different populations of neurons. We found that some overlap was critical for stitching to work, and that increasing overlap improved stitching performance. Given a fixed imaging time budget, determining a good trade-off between overlap and total coverage is an intriguing open problem in experimental design. We emphasise that our linear Gaussian dynamical system provides only a statistical description of the observed data. However, even this simple model makes accurate predictions of correlations between non-simultaneously observed neurons. Nevertheless, more realistic models [16, 6] can help improve the accuracy of these predictions and disentangle the contributions of spiking activity, calcium dynamics, fluorescence measurements and imaging noise to the observed statistics. Similarly, better priors on neural connectivity [24] might improve reconstruction performance. Indeed, we found in unreported simulations that using a sparsifying penalty on the connectivity matrix [6] slightly improves parameter estimates. We note that our model can easily be extended to model potential common input from neurons which are never observed [13] as a low-dimensional LDS [17, 18]. The simultaneous measurement of the activity of all neurons in a neural circuit will shed much light on the nature of neural computation. While there is much progress in developing faster imaging modalities, there are fundamental physical limits to the number of neurons which can be simultaneously imaged.
Our paper suggests a means for expanding our limited capabilities. With more powerful algorithmic tools, we can imagine mapping the population dynamics of all the neurons in an entire neural circuit, such as the zebrafish larval olfactory bulb or layers 2 & 3 of a whisker barrel, an ambitious goal which has until now been out of reach.

Acknowledgements

We thank Peter Dayan for valuable comments on our manuscript and members of the Gatsby Unit for discussions. We are grateful for support from the Gatsby Charitable Trust, Wellcome Trust, ERC, EMBO, People Programme (Marie Curie Actions) and German Federal Ministry of Education and Research (BMBF; FKZ: 01GQ1002, Bernstein Center Tübingen).

References

[1] J. N. D. Kerr and W. Denk, "Imaging in vivo: watching the brain in action," Nat Rev Neurosci, vol. 9, no. 3, pp. 195–205, 2008.
[2] C. Grienberger and A. Konnerth, "Imaging calcium in neurons," Neuron, vol. 73, no. 5, pp. 862–885, 2012.
[3] S. Lefort, C. Tomm, J.-C. Floyd Sarria, and C. C. H. Petersen, "The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex," Neuron, vol. 61, no. 2, pp. 301–316, 2009.
[4] D. J. Tolhurst, J. A. Movshon, and A. F. Dean, "The statistical reliability of signals in single neurons in cat and monkey visual cortex," Vision Research, vol. 23, no. 8, pp. 775–785, 1983.
[5] W. R. Softky and C. Koch, "The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs," The Journal of Neuroscience, vol. 13, no. 1, pp. 334–350, 1993.
[6] Y. Mishchenko, J. T. Vogelstein, and L. Paninski, "A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data," The Annals of Applied Statistics, vol. 5, no. 2B, pp. 1229–1261, 2011.
[7] O. Stetter, D. Battaglia, J. Soriano, and T. Geisel, "Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals," PLoS Comp Bio, vol. 8, no. 8, p. e1002653, 2012.
[8] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli, "Spatio-temporal correlations and visual signalling in a complete neuronal population," Nature, vol. 454, no. 7207, pp. 995–999, 2008.
[9] I. H. Stevenson, J. M. Rebesco, L. E. Miller, and K. P. Körding, "Inferring functional connections between neurons," Current Opinion in Neurobiology, vol. 18, no. 6, pp. 582–588, 2008.
[10] A. Singh and N. A. Lesica, "Incremental mutual information: A new method for characterizing the strength and dynamics of connections in neuronal circuits," PLoS Comp Bio, vol. 6, no. 12, p. e1001035, 2010.
[11] D. Song, H. Wang, C. Y. Tu, V. Z. Marmarelis, R. E. Hampson, S. A. Deadwyler, and T. W. Berger, "Identification of sparse neural functional connectivity using penalized likelihood estimation and basis functions," J Comp Neurosci, pp. 1–23, 2013.
[12] A. Wohrer, R. Romo, and C. Machens, "Linear readout from a neural population with partial correlation data," in Advances in Neural Information Processing Systems, vol. 22, Curran Associates, Inc., 2010.
[13] J. W. Pillow and P. Latham, "Neural characterization in partially observed populations of spiking neurons," Adv Neural Information Processing Systems, vol. 20, no. 3.5, 2008.
[14] A. Pakman, J. H. Huggins, and P. L., "Fast penalized state-space methods for inferring dendritic synaptic connectivity," Journal of Computational Neuroscience, 2013.
[15] Y. Mishchenko and L. Paninski, "A Bayesian compressed-sensing approach for reconstructing neural connectivity from subsampled anatomical data," J Comp Neurosci, vol. 33, no. 2, pp. 371–388, 2012.
[16] J. T. Vogelstein, B. O. Watson, A. M. Packer, R. Yuste, B. Jedynak, and L. Paninski, "Spike inference from calcium imaging using sequential Monte Carlo methods," Biophysical Journal, vol. 97, no. 2, pp. 636–655, 2009.
[17] M. Vidne, Y. Ahmadian, J. Shlens, J. Pillow, J. Kulkarni, A. Litke, E. Chichilnisky, E. Simoncelli, and L. Paninski, "Modeling the impact of common noise inputs on the network activity of retinal ganglion cells," J Comput Neurosci, 2011.
[18] J. H. Macke, L. Büsing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations," in Advances in Neural Information Processing Systems, vol. 24, Curran Associates, Inc., 2012.
[19] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J R Stat Soc Ser B, vol. 39, no. 1, pp. 1–38, 1977.
[20] P. Liang and D. Klein, "Online EM for unsupervised models," in NAACL '09: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, 2009.
[21] T. Katayama, Subspace Methods for System Identification. Springer Verlag, 2005.
[22] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," The Annals of Mathematical Statistics, vol. 37, no. 6, pp. 1554–1563, 1966.
[23] F. Takens, "Detecting strange attractors in turbulence," in Dynamical Systems and Turbulence (D. A. Rand and L. S. Young, eds.), vol. 898 of Lecture Notes in Mathematics, (Warwick), pp. 366–381, Springer-Verlag, Berlin, 1981.
[24] S. W. Linderman and R. P. Adams, "Inferring functional connectivity with priors on network topology," in Cosyne Abstracts, 2013.
A Kernel Test for Three-Variable Interactions

Dino Sejdinovic, Arthur Gretton
Gatsby Unit, CSML, UCL, UK
{dino.sejdinovic, arthur.gretton}@gmail.com

Wicher Bergsma
Department of Statistics, LSE, UK
w.p.bergsma@lse.ac.uk

Abstract

We introduce kernel nonparametric tests for Lancaster three-variable interaction and for total independence, using embeddings of signed measures into a reproducing kernel Hilbert space. The resulting test statistics are straightforward to compute, and are used in powerful interaction tests, which are consistent against all alternatives for a large family of reproducing kernels. We show the Lancaster test to be sensitive to cases where two independent causes individually have weak influence on a third dependent variable, but their combined effect has a strong influence. This makes the Lancaster test especially suited to finding structure in directed graphical models, where it outperforms competing nonparametric tests in detecting such V-structures.

1 Introduction

The problem of nonparametric testing of interaction between variables has been widely treated in the machine learning and statistics literature. Much of the work in this area focuses on measuring or testing pairwise interaction: for instance, the Hilbert-Schmidt Independence Criterion (HSIC) or Distance Covariance [1, 2, 3], kernel canonical correlation [4, 5, 6], and mutual information [7]. In cases where more than two variables interact, however, the questions we can ask about their interaction become significantly more involved. The simplest case we might consider is whether the variables are mutually independent, PX = ∏_{i=1}^{d} PXi, as considered in R^d by [8]. This is already a more general question than pairwise independence, since pairwise independence does not imply total (mutual) independence, while the implication holds in the other direction. For example, if X and Y are i.i.d. uniform on {−1, 1}, then (X, Y, XY) is a pairwise independent but mutually dependent triplet [9].
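The (X, Y, XY) triplet above is easy to check empirically. For ±1-valued variables, zero correlation is equivalent to independence, so the pairwise products diagnose pairwise independence while the triple product exposes the mutual dependence (the sample sizes and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.choice([-1, 1], n)
Y = rng.choice([-1, 1], n)
Z = X * Y            # fully determined by (X, Y), yet pairwise independent of each

# Each pairwise product averages to ~0: the pairs are uncorrelated, which for
# ±1-valued variables means pairwise independent.
pairwise = (np.mean(X * Y), np.mean(X * Z), np.mean(Y * Z))

# The triple product exposes the mutual dependence: XYZ = X²Y² = 1 identically.
triple = np.mean(X * Y * Z)   # exactly 1.0
```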
Tests of total and pairwise independence are insufficient, however, since they do not rule out all third order factorizations of the joint distribution. An important class of higher-order interactions occurs when the simultaneous effect of two variables on a third may not be additive. In particular, it may be possible that X ⊥⊥ Z and Y ⊥⊥ Z, whereas ¬((X, Y) ⊥⊥ Z) (for example, neither adding sugar to coffee nor stirring the coffee individually has an effect on its sweetness, but the joint presence of the two does). In addition, the study of three-variable interactions can elucidate certain switching mechanisms between positive and negative correlation of two genes' expressions, as controlled by a third gene [10]. The presence of such interactions is typically tested using some form of analysis of variance (ANOVA) model which includes additional interaction terms, such as products of individual variables. Since each such additional term requires a new hypothesis test, this increases the risk that some hypothesis test will produce a false positive by chance. Therefore, a test that is able to directly detect the presence of any kind of higher-order interaction would be of broad interest in statistical modeling. In the present work, we provide, to our knowledge, the first nonparametric test for three-variable interaction. This work generalizes the HSIC test of pairwise independence, and has as its test statistic the norm of an embedding of an appropriate signed measure into a reproducing kernel Hilbert space (RKHS). When the statistic is non-zero, all third order factorizations can be ruled out. Moreover, this test is applicable to cases where X, Y and Z are themselves multivariate objects, and may take values in non-Euclidean or structured domains. One important application of interaction measures is in learning structure for graphical models.
If the graphical model is assumed to be Gaussian, then second order interaction statistics may be used to construct an undirected graph [11, 12]. When the interactions are non-Gaussian, however, other approaches are brought to bear. An alternative approach to structure learning is to employ conditional independence tests. In the PC algorithm [13, 14, 15], a V-structure (a directed graphical model with two independent parents pointing to a single child) is detected when an independence test between the parent variables accepts the null hypothesis, while a test of dependence of the parents conditioned on the child rejects the null hypothesis. The PC algorithm gives a correct equivalence class of structures subject to the causal Markov and faithfulness assumptions, in the absence of hidden common causes. The original implementations of the PC algorithm rely on partial correlations for testing, and assume Gaussianity. A number of algorithms have since extended the basic PC algorithm to arbitrary probability distributions over multivariate random variables [16, 17, 18], by using nonparametric kernel independence tests [19] and conditional dependence tests [20, 18]. We observe that our Lancaster-interaction-based test provides a strong alternative to the conditional dependence testing approach, and is seen to outperform earlier approaches in detecting cases where independent parent variables weakly influence the child variable when considered individually, but have a strong combined influence. We begin our presentation in Section 2 with a definition of interaction measures, these being the signed measures we will embed in an RKHS. We cover this embedding procedure in Section 3. We then proceed in Section 4 to define pairwise and three-way interactions. We describe a statistic to test mutual independence for more than three variables, and provide a brief overview of the more complex high-order interactions that may be observed when four or more variables are considered.
Finally, we provide experimental benchmarks in Section 5.

2 Interaction Measure

An interaction measure [21, 22] associated to a multidimensional probability distribution P of a random vector (X1, . . . , XD) taking values in the product space X1 × · · · × XD is a signed measure ∆P that vanishes whenever P can be factorised in a non-trivial way as a product of its (possibly multivariate) marginal distributions. For the cases D = 2, 3, the correct interaction measure coincides with the notion introduced by Lancaster [21] as a formal product

∆LP = ∏_{i=1}^{D} (P*_{Xi} − PXi),  (1)

where, upon expansion, each product ∏_{j=1}^{D′} P*_{Xij} signifies the joint probability distribution P_{Xi1 · · · XiD′} of a subvector (Xi1, . . . , XiD′). We will term the signed measure in (1) the Lancaster interaction measure. In the case of a bivariate distribution, the Lancaster interaction measure is simply the difference between the joint probability distribution and the product of the marginal distributions (the only possible non-trivial factorization for D = 2), ∆LP = PXY − PX PY, while in the case D = 3, we obtain

∆LP = PXYZ − PXY PZ − PYZ PX − PXZ PY + 2 PX PY PZ.  (2)

It is readily checked that

(X, Y) ⊥⊥ Z ∨ (X, Z) ⊥⊥ Y ∨ (Y, Z) ⊥⊥ X ⇒ ∆LP = 0.  (3)

For D > 3, however, (1) does not capture all possible factorizations of the joint distribution: e.g., for D = 4, it need not vanish if (X1, X2) ⊥⊥ (X3, X4) but X1 and X2 are dependent and X3 and X4 are dependent. Streitberg [22] corrected this definition using a more complicated construction with the Möbius function on the lattice of partitions, which we describe in Section 4.3. (As the reader might imagine, the situation becomes more complex again when four or more variables interact simultaneously; we provide a brief technical overview in Section 4.3.) In this work, however, we will focus on the case of three variables and formulate interaction tests based on embedding of (2) into an RKHS.
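For discrete distributions, the Lancaster interaction measure (2) and the implication (3) can be verified directly from a joint pmf array. The sketch below builds ∆LP by broadcasting the marginals; the two example distributions (a factorizing one, and an XOR-style triplet in which Z is the parity of X and Y) are our own illustrations:

```python
import numpy as np

def lancaster(P):
    """Lancaster interaction measure (Eq. 2) for a discrete joint pmf P[x, y, z]."""
    Pxy, Pxz, Pyz = P.sum(2), P.sum(1), P.sum(0)
    Px, Py, Pz = P.sum((1, 2)), P.sum((0, 2)), P.sum((0, 1))
    return (P
            - Pxy[:, :, None] * Pz[None, None, :]
            - Px[:, None, None] * Pyz[None, :, :]
            - Py[None, :, None] * Pxz[:, None, :]
            + 2 * Px[:, None, None] * Py[None, :, None] * Pz[None, None, :])

# (X, Y) jointly dependent but independent of Z: Delta_L vanishes, per (3).
Pxy = np.array([[0.3, 0.2], [0.1, 0.4]])
Pz = np.array([0.6, 0.4])
P_factor = Pxy[:, :, None] * Pz[None, None, :]

# XOR triplet (Z = X xor Y, all marginals uniform): pairwise independent
# in every pair, yet Delta_L is nonzero (entries are P - 1/8 = ±1/8).
P_xor = np.zeros((2, 2, 2))
for x in range(2):
    for y in range(2):
        P_xor[x, y, x ^ y] = 0.25
```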
The implication (3) states that the presence of Lancaster interaction rules out the possibility of any factorization of the joint distribution, but the converse is not generally true; see Appendix C for details. In addition, it is important to note the distinction between the absence of Lancaster interaction and the total (mutual) independence of (X, Y, Z), i.e., PXYZ = PX PY PZ. While total independence implies the absence of Lancaster interaction, the signed measure ∆totP = PXYZ − PX PY PZ associated to the total (mutual) independence of (X, Y, Z) does not vanish if, e.g., (X, Y) ⊥⊥ Z but X and Y are dependent. In this contribution, we construct the nonparametric test for the hypothesis ∆LP = 0 (no Lancaster interaction), as well as the nonparametric test for the hypothesis ∆totP = 0 (total independence), based on the embeddings of the corresponding signed measures ∆LP and ∆totP into an RKHS. Both tests are particularly suited to cases where X, Y and Z take values in a high-dimensional space; moreover, they remain valid for a variety of non-Euclidean and structured domains, i.e., for all topological spaces where it is possible to construct a valid positive definite function; see [23] for details. In the case of total independence testing, our approach can be viewed as a generalization of the tests proposed in [24] based on empirical characteristic functions.

3 Kernel Embeddings

We review the embedding of signed measures into a reproducing kernel Hilbert space. The RKHS norms of such embeddings will then serve as our test statistics. Let Z be a topological space. According to the Moore-Aronszajn theorem [25, p. 19], for every symmetric, positive definite function (henceforth kernel) k : Z × Z → R, there is an associated reproducing kernel Hilbert space (RKHS) Hk of real-valued functions on Z with reproducing kernel k. The map ϕ : Z → Hk, ϕ : z ↦ k(·, z), is called the canonical feature map or the Aronszajn map of k.
Denote by M(Z) the Banach space of all finite signed Borel measures on Z. The notion of a feature map can then be extended to kernel embeddings of elements of M(Z) [25, Chapter 4].

Definition 1 (Kernel embedding). Let k be a kernel on Z, and ν ∈ M(Z). The kernel embedding of ν into the RKHS Hk is µk(ν) ∈ Hk such that ∫ f(z) dν(z) = ⟨f, µk(ν)⟩_Hk for all f ∈ Hk.

Alternatively, the kernel embedding can be defined by the Bochner integral µk(ν) = ∫ k(·, z) dν(z). If a measurable kernel k is a bounded function, it is straightforward to show using the Riesz representation theorem that µk(ν) exists for all ν ∈ M(Z). For many interesting bounded kernels k, including the Gaussian, Laplacian and inverse multiquadratics, the embedding µk : M(Z) → Hk is injective. Such kernels are said to be integrally strictly positive definite (ISPD) [26, p. 4]. A related but weaker notion is that of a characteristic kernel [20, 27], which requires the kernel embedding to be injective only on the set M¹₊(Z) of probability measures. In the case that k is ISPD, since Hk is a Hilbert space, we can introduce a notion of an inner product between two signed measures ν, ν′ ∈ M(Z),

⟨⟨ν, ν′⟩⟩_k := ⟨µk(ν), µk(ν′)⟩_Hk = ∫∫ k(z, z′) dν(z) dν′(z′).

Since µk is injective, this is a valid inner product and induces a norm on M(Z), for which ‖ν‖_k = ⟨⟨ν, ν⟩⟩_k^{1/2} = 0 if and only if ν = 0. This fact has been used extensively in the literature to formulate: (a) a nonparametric two-sample test based on estimation of the maximum mean discrepancy ‖P − Q‖_k, for samples {Xi}_{i=1}^{n} i.i.d. ∼ P and {Yi}_{i=1}^{m} i.i.d. ∼ Q [28], and (b) a nonparametric independence test based on estimation of ‖PXY − PX PY‖_{k⊗l}, for a joint sample {(Xi, Yi)}_{i=1}^{n} i.i.d. ∼ PXY [19] (the latter is also called the Hilbert-Schmidt independence criterion), with the kernel k ⊗ l on the product space defined as k(x, x′) l(y, y′).
When a bounded characteristic kernel is used, the above tests are consistent against all alternatives, and their alternative interpretation is as a generalization [29, 3] of energy distance [30, 31] and distance covariance [2, 32]. (Unbounded kernels can also be considered [3]; in this case, one can still study embeddings of the signed measures M_k^{1/2}(Z) ⊂ M(Z) that satisfy a finite moment condition, i.e., M_k^{1/2}(Z) = {ν ∈ M(Z) : ∫ k^{1/2}(z, z) d|ν|(z) < ∞}.)

Table 1: V-statistic estimates of ⟨⟨ν, ν′⟩⟩_{k⊗l} in the two-variable case

  ν \ ν′  | PXY              | PX PY
  PXY     | (1/n²)(K ∘ L)₊₊  | (1/n³)(KL)₊₊
  PX PY   |                  | (1/n⁴) K₊₊ L₊₊

In this article, we extend this approach to the three-variable case, and formulate tests for both the Lancaster interaction and total independence, using simple consistent estimators of ‖∆LP‖_{k⊗l⊗m} and ‖∆totP‖_{k⊗l⊗m} respectively, which we describe in the next section. Using the same arguments as in the tests of [28, 19], these tests are also consistent against all alternatives as long as ISPD kernels are used.

4 Interaction Tests

Notational remarks: Throughout the paper, ∘ denotes the Hadamard (entrywise) product. Let A be an n × n matrix, and K a symmetric n × n matrix. We fix the following notational conventions: 1 denotes an n × 1 column of ones; A₊j = Σ_{i=1}^{n} Aij denotes the sum of all elements of the j-th column of A; Ai₊ = Σ_{j=1}^{n} Aij denotes the sum of all elements of the i-th row of A; A₊₊ = Σ_{i=1}^{n} Σ_{j=1}^{n} Aij denotes the sum of all elements of A; K₊ = 11⊤K, i.e., [K₊]ij = K₊j = Kj₊, and [K₊⊤]ij = Ki₊ = K₊i.

4.1 Two-Variable (Independence) Test

We provide a short overview of the kernel independence test of [19], which we write as the RKHS norm of the embedding of a signed measure. While this material is not new (it appears in [28, Section 7.4]), it will help define how to proceed when a third variable is introduced, and the signed measures become more involved.
We begin by expanding the squared RKHS norm ‖PXY − PX PY‖²_{k⊗l} as inner products, and applying the reproducing property,

‖PXY − PX PY‖²_{k⊗l} = E_{XY} E_{X′Y′} k(X, X′) l(Y, Y′) + E_X E_{X′} k(X, X′) E_Y E_{Y′} l(Y, Y′) − 2 E_{X′Y′}[E_X k(X, X′) E_Y l(Y, Y′)],  (4)

where (X, Y) and (X′, Y′) are independent copies of random variables on X × Y with distribution PXY. Given a joint sample {(Xi, Yi)}_{i=1}^{n} i.i.d. ∼ PXY, an empirical estimator of ‖PXY − PX PY‖²_{k⊗l} is obtained by substituting corresponding empirical means into (4), which can be represented using Gram matrices K and L (Kij = k(Xi, Xj), Lij = l(Yi, Yj)):

Ê_{XY} Ê_{X′Y′} k(X, X′) l(Y, Y′) = (1/n²) Σ_{a=1}^{n} Σ_{b=1}^{n} Kab Lab = (1/n²)(K ∘ L)₊₊,
Ê_X Ê_{X′} k(X, X′) Ê_Y Ê_{Y′} l(Y, Y′) = (1/n⁴) Σ_{a=1}^{n} Σ_{b=1}^{n} Σ_{c=1}^{n} Σ_{d=1}^{n} Kab Lcd = (1/n⁴) K₊₊ L₊₊,
Ê_{X′Y′}[Ê_X k(X, X′) Ê_Y l(Y, Y′)] = (1/n³) Σ_{a=1}^{n} Σ_{b=1}^{n} Σ_{c=1}^{n} Kac Lbc = (1/n³)(KL)₊₊.

Since these are V-statistics [33, Ch. 5], there is a bias of O_P(n⁻¹); U-statistics may be used if an unbiased estimate is needed. Each of the terms above corresponds to an estimate of an inner product ⟨⟨ν, ν′⟩⟩_{k⊗l} for probability measures ν and ν′ taking values in {PXY, PX PY}, as summarized in Table 1. Even though the second and third terms involve triple and quadruple sums, each of the empirical means can be computed using sums of all terms of certain matrices, where the dominant computational cost is in computing the matrix product KL.

Table 2: V-statistic estimates of ⟨⟨ν, ν′⟩⟩_{k⊗l⊗m} in the three-variable case (upper triangle shown)

  ν \ ν′      | n PXYZ       | n² PXY PZ     | n² PXZ PY     | n² PYZ PX     | n³ PX PY PZ
  n PXYZ      | (K∘L∘M)₊₊    | ((K∘L)M)₊₊    | ((K∘M)L)₊₊    | ((M∘L)K)₊₊    | tr(K₊∘L₊∘M₊)
  n² PXY PZ   |              | (K∘L)₊₊ M₊₊   | (MKL)₊₊       | (KLM)₊₊       | (KL)₊₊ M₊₊
  n² PXZ PY   |              |               | (K∘M)₊₊ L₊₊   | (KML)₊₊       | (KM)₊₊ L₊₊
  n² PYZ PX   |              |               |               | (L∘M)₊₊ K₊₊   | (LM)₊₊ K₊₊
  n³ PX PY PZ |              |               |               |               | K₊₊ L₊₊ M₊₊

In fact, the overall estimator can be computed in an even simpler form (see Proposition 9 in Appendix F), as
‖P̂XY − P̂X P̂Y‖²_{k⊗l} = (1/n²)(K ∘ HLH)₊₊,

where H = I − (1/n) 11⊤ is the centering matrix. Note that by the idempotence of H, we also have (K ∘ HLH)₊₊ = (HKH ∘ HLH)₊₊. In the rest of the paper, for any Gram matrix K, we will denote its corresponding centered matrix HKH by K̃. When three variables are present, a two-variable test already allows us to determine whether, for instance, (X, Y) ⊥⊥ Z, i.e., whether PXYZ = PXY PZ. It is sufficient to treat (X, Y) as a single variable on the product space X × Y, with the product kernel k ⊗ l. Then, the Gram matrix associated to (X, Y) is simply K ∘ L, and the corresponding V-statistic is (1/n²)(K ∘ L ∘ M̃)₊₊. What is not obvious, however, is whether a V-statistic for the Lancaster interaction (which can be thought of as a surrogate for the composite hypothesis of various factorizations) can be obtained in a similar form. We will address this question in the next section.

4.2 Three-Variable Tests

As in the two-variable case, it suffices to derive V-statistics for inner products ⟨⟨ν, ν′⟩⟩_{k⊗l⊗m}, where ν and ν′ take values in all possible combinations of the joint distribution and the products of the marginals, i.e., PXYZ, PXY PZ, etc. Again, it is easy to see that these can be expressed as certain expectations of kernel functions, and thereby can be calculated by an appropriate manipulation of the three Gram matrices. We summarize the resulting expressions in Table 2; their derivation is a tedious but straightforward linear algebra exercise. For compactness, the appropriate normalizing terms are moved inside the measures considered. Based on the individual RKHS inner product estimators, we can now easily derive estimators for various signed measures arising as linear combinations of PXYZ, PXY PZ, and so on. The first such measure is an "incomplete" Lancaster interaction measure ∆(Z)P = PXYZ + PX PY PZ − PYZ PX − PXZ PY, which vanishes if (Y, Z) ⊥⊥ X or (X, Z) ⊥⊥ Y, but not necessarily if (X, Y) ⊥⊥ Z. We obtain the following result for the empirical measure P̂.
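Before moving to three variables, the two-variable identity above is easy to confirm numerically: the three-term expansion of (4) agrees with the compact centered form (1/n²)(K ∘ HLH)₊₊ up to floating-point error. The data and kernel bandwidth below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.standard_normal((n, 2))
Y = X[:, :1] + 0.5 * rng.standard_normal((n, 1))   # dependent on X

def gauss_gram(Z, sigma=1.0):
    """Gaussian kernel Gram matrix with a fixed (illustrative) bandwidth."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

K, L = gauss_gram(X), gauss_gram(Y)
H = np.eye(n) - np.ones((n, n)) / n       # centering matrix H = I - (1/n)11'

# Three-term V-statistic expansion of Eq. (4) ...
v_expanded = ((K * L).sum() / n**2
              - 2 * (K @ L).sum() / n**3
              + K.sum() * L.sum() / n**4)
# ... equals the compact centered form (1/n^2)(K ∘ HLH)_++.
v_compact = (K * (H @ L @ H)).sum() / n**2
```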
Proposition 2 (Incomplete Lancaster interaction).
‖∆(Z) P̂‖²_{k⊗l⊗m} = (1/n²)(K̃ ∘ L̃ ∘ M)₊₊.

Analogous expressions hold for ∆(X) P̂ and ∆(Y) P̂. Unlike in the two-variable case, where either matrix or both can be centered, centering of each matrix in the three-variable case has a different meaning. In particular, one requires centering of all three kernel matrices to perform a "complete" Lancaster interaction test, as given by the following proposition.

Proposition 3 (Lancaster interaction).
‖∆L P̂‖²_{k⊗l⊗m} = (1/n²)(K̃ ∘ L̃ ∘ M̃)₊₊.

The proofs of these propositions are given in Appendix A. We summarize the various hypotheses and the associated V-statistics in Appendix B. As we will demonstrate in the experiments in Section 5, while particularly useful for testing the factorization hypothesis, i.e., for (X, Y) ⊥⊥ Z ∨ (X, Z) ⊥⊥ Y ∨ (Y, Z) ⊥⊥ X, the statistic
‖∆L P̂‖²_{k⊗l⊗m} can also be used for powerful tests of the individual hypotheses (Y, Z) ⊥⊥ X, (X, Z) ⊥⊥ Y, or (X, Y) ⊥⊥ Z, or for total independence testing, i.e., PXYZ = PX PY PZ, as it vanishes in all of these cases. The null distribution under each of these hypotheses can be estimated using a standard permutation-based approach described in Appendix D. (A caveat noted earlier: in general, the approach of treating (X, Y) as a single variable would require some care since, e.g., X and Y could be measured on very different scales, and the choice of kernels k and l needs to take this into account.) Another way to obtain the Lancaster interaction statistic is as the RKHS norm of the joint "central moment" Σ_XYZ = E_XYZ[(k_X − µ_X) ⊗ (l_Y − µ_Y) ⊗ (m_Z − µ_Z)] of RKHS-valued random variables k_X, l_Y and m_Z (understood as an element of the tensor RKHS H_k ⊗ H_l ⊗ H_m). This is related to a classical characterization of the Lancaster interaction [21, Ch. XII]: there is no Lancaster interaction between X, Y and Z if and only if cov[f(X), g(Y), h(Z)] = 0 for all L² functions f, g and h. There is an analogous result in our case (proof given in Appendix A), which states:

Proposition 4. ‖∆LP‖_{k⊗l⊗m} = 0 if and only if cov[f(X), g(Y), h(Z)] = 0 for all f ∈ H_k, g ∈ H_l, h ∈ H_m.

Finally, we give an estimator of the RKHS norm of the total independence measure ∆totP.

Proposition 5 (Total independence). Let ∆tot P̂ = P̂XYZ − P̂X P̂Y P̂Z. Then:
‖∆tot P̂‖²_{k⊗l⊗m} = (1/n²)(K ∘ L ∘ M)₊₊ − (2/n⁴) tr(K₊ ∘ L₊ ∘ M₊) + (1/n⁶) K₊₊ L₊₊ M₊₊.

The proof follows simply from reading off the corresponding inner-product V-statistics from Table 2. While the test statistic for total independence has a somewhat more complicated form than that of the Lancaster interaction, it can also be computed in quadratic time.

4.3 Interaction for D > 3

Streitberg's correction of the interaction measure for D > 3 has the form

∆SP = Σ_π (−1)^{|π|−1} (|π| − 1)! Jπ P,  (5)

where the sum is taken over all partitions π of the set {1, 2, . . . , D}, |π| denotes the size of the partition (number of blocks), and Jπ : P ↦ Pπ is the partition operator on probability measures, which for a fixed partition π = π1|π2| · · · |πr maps the probability measure P to the product measure Pπ = ∏_{j=1}^{r} Pπj, where Pπj is the marginal distribution of the subvector (Xi : i ∈ πj). The coefficients correspond to the Möbius inversion on the partition lattice [34]. While the Lancaster interaction has an interpretation in terms of joint central moments, Streitberg's correction corresponds to joint cumulants [22, Section 4]. Therefore, a central moment expression like E_{X1...XD}[(k⁽¹⁾_{X1} − µ_{X1}) ⊗ · · · ⊗ (k⁽ᴰ⁾_{XD} − µ_{XD})] does not capture the correct notion of the interaction measure. Thus, while one can in principle construct RKHS embeddings of higher-order interaction measures, and compute RKHS norms using a calculus of V-statistics and Gram matrices analogous to that of Table 2, it does not seem possible to avoid summing over all partitions when computing the corresponding statistics, yielding a computationally prohibitive approach in general. This can be viewed by analogy with the scalar case, where it is well known that the second and third cumulants coincide with the second and third central moments, whereas the higher-order cumulants are neither moments nor central moments, but some other polynomials of the moments.
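Both statistics in Propositions 3 and 5 reduce to a few elementwise products and sums of Gram matrices, which makes the quadratic cost evident. A minimal numpy sketch, with arbitrary example data and a fixed illustrative bandwidth:

```python
import numpy as np

def center(K):
    """Centered Gram matrix K̃ = HKH, with H = I - (1/n)11'."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def lancaster_stat(K, L, M):
    """V-statistic (1/n²)(K̃ ∘ L̃ ∘ M̃)₊₊ of Proposition 3."""
    n = K.shape[0]
    return (center(K) * center(L) * center(M)).sum() / n**2

def total_indep_stat(K, L, M):
    """V-statistic for ||∆_tot P̂||² of Proposition 5."""
    n = K.shape[0]
    t1 = (K * L * M).sum() / n**2
    t2 = (K.sum(0) * L.sum(0) * M.sum(0)).sum()   # tr(K₊ ∘ L₊ ∘ M₊)
    t3 = K.sum() * L.sum() * M.sum()
    return t1 - 2 * t2 / n**4 + t3 / n**6

def gram(Z, s=1.0):
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

rng = np.random.default_rng(0)
X, Y, Z = rng.standard_normal((3, 40, 2))
K, L, M = gram(X), gram(Y), gram(Z)
lanc, tot = lancaster_stat(K, L, M), total_indep_stat(K, L, M)
```

Both quantities are squared RKHS norms, so they are nonnegative up to floating-point error; a constant (all-ones) Gram matrix is annihilated by centering, so it drives the Lancaster statistic to zero.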
4.4 Total independence for D > 3

In general, the test statistic for total independence in the D-variable case is
‖P̂_{X1:D} − ∏_{i=1}^{D} P̂_{Xi}‖²_{⊗_{i=1}^{D} k⁽ⁱ⁾} = (1/n²) Σ_{a=1}^{n} Σ_{b=1}^{n} ∏_{i=1}^{D} K⁽ⁱ⁾_ab − (2/n^{D+1}) Σ_{a=1}^{n} ∏_{i=1}^{D} Σ_{b=1}^{n} K⁽ⁱ⁾_ab + (1/n^{2D}) ∏_{i=1}^{D} Σ_{a=1}^{n} Σ_{b=1}^{n} K⁽ⁱ⁾_ab.

A similar statistic for total independence is discussed by [24], where testing of total independence based on empirical characteristic functions is considered. Our test has a direct interpretation in terms of characteristic functions as well, which is straightforward to see in the case of translation-invariant kernels on Euclidean spaces, using their Bochner representation, similarly as in [27, Corollary 4].

[Figure 1: Two-variable kernel independence tests and the test for (X, Y) ⊥⊥ Z using the Lancaster statistic. Both panels plot the null acceptance rate (Type II error) against dimension (1 to 19) for Dataset A (left) and Dataset B (right); curves: ∆L: (X, Y) ⊥⊥ Z; 2var: (X, Y) ⊥⊥ Z; 2var: X ⊥⊥ Z; 2var: X ⊥⊥ Y.]

[Figure 2: Total independence: ∆tot P̂ vs. ∆L P̂. Null acceptance rate (Type II error) against dimension for Dataset A and Dataset B.]

5 Experiments

We investigate the performance of various permutation-based tests that use the Lancaster statistic
$\|\Delta_L\widehat{P}\|^2_{k\otimes l\otimes m}$ and the total independence statistic $\|\Delta_{tot}\widehat{P}\|^2_{k\otimes l\otimes m}$ on two synthetic datasets where X, Y and Z are random vectors of increasing dimensionality:

Dataset A: Pairwise independent, mutually dependent data. Our first dataset is a triplet of random vectors (X, Y, Z) on $\mathbb{R}^p\times\mathbb{R}^p\times\mathbb{R}^p$, with $X, Y$ i.i.d. $\sim N(0, I_p)$, $W \sim \mathrm{Exp}(\tfrac{1}{\sqrt{2}})$, $Z_1 = \mathrm{sign}(X_1 Y_1)\,W$, and $Z_{2:p} \sim N(0, I_{p-1})$, i.e., the product $X_1 Y_1$ determines the sign of $Z_1$, while the remaining p − 1 dimensions are independent (and serve as noise in this example).⁴ In this case, (X, Y, Z) is clearly a pairwise independent but mutually dependent triplet. The mutual dependence becomes increasingly difficult to detect as the dimensionality p increases.

Dataset B: Joint dependence can be easier to detect. In this example, we consider a triplet of random vectors (X, Y, Z) on $\mathbb{R}^p\times\mathbb{R}^p\times\mathbb{R}^p$, with $X, Y$ i.i.d. $\sim N(0, I_p)$, $Z_{2:p} \sim N(0, I_{p-1})$, and

$Z_1 = \begin{cases} X_1^2 + \epsilon, & \text{w.p. } 1/3,\\ Y_1^2 + \epsilon, & \text{w.p. } 1/3,\\ X_1 Y_1 + \epsilon, & \text{w.p. } 1/3,\end{cases}$

where $\epsilon \sim N(0, 0.1^2)$. Thus, the dependence of Z on the pair (X, Y) is stronger than on X and Y individually.

⁴Note that there is no reason for X, Y and Z to have the same dimensionality p; this is done for simplicity of exposition.

[Figure 3: Factorization hypothesis: Lancaster statistic vs. a two-variable based test; test for X ⊥⊥ Y | Z from [18]. Panels: V-structure discovery on Datasets A and B; y-axis: null acceptance rate (Type II error); x-axis: dimension 1-19.]

In all cases, we use permutation tests as described in Appendix D. The test level is set to α = 0.05, the sample size to n = 500, and we use Gaussian kernels with the bandwidth set to the median interpoint distance.
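Dataset A can be generated in a few lines. This is a hedged sketch: we read Exp(1/√2) as an exponential with scale 1/√2, and the function name is ours:

```python
import numpy as np

def dataset_a(n, p, rng):
    """Pairwise independent, mutually dependent triplet (X, Y, Z) on R^p x R^p x R^p.
    Z_1 = sign(X_1 Y_1) W with W ~ Exp (scale 1/sqrt(2), an assumption on the
    parameterization); the remaining p-1 coordinates of Z are pure noise."""
    X = rng.standard_normal((n, p))
    Y = rng.standard_normal((n, p))
    W = rng.exponential(scale=1.0 / np.sqrt(2.0), size=n)
    Z = rng.standard_normal((n, p))          # columns 2..p stay N(0, I_{p-1})
    Z[:, 0] = np.sign(X[:, 0] * Y[:, 0]) * W
    return X, Y, Z
```

By construction sign(X₁Y₁Z₁) = +1 almost surely, which is exactly the three-way dependence that no pairwise test can see.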
In Figure 1, we plot the null hypothesis acceptance rates of the standard kernel two-variable tests for X ⊥⊥ Y (which is true for both datasets A and B, and accepted at the correct rate across all dimensions) and for X ⊥⊥ Z (which is true only for dataset A), as well as of the standard kernel two-variable test for (X, Y) ⊥⊥ Z, and the test for (X, Y) ⊥⊥ Z using the Lancaster statistic. As expected, in dataset B, we see that the dependence of Z on the pair (X, Y) is somewhat easier to detect than on X individually with two-variable tests. In both datasets, however, the Lancaster interaction appears significantly more sensitive in detecting this dependence as the dimensionality p increases. Figure 2 plots the Type II error of total independence tests with statistics
$\|\Delta_L\widehat{P}\|^2_{k\otimes l\otimes m}$ and $\|\Delta_{tot}\widehat{P}\|^2_{k\otimes l\otimes m}$. The Lancaster statistic outperforms the total independence statistic everywhere apart from Dataset B when the number of dimensions is small (between 1 and 5). Figure 3 plots the Type II error of the factorization test, i.e., the test for (X, Y) ⊥⊥ Z ∨ (X, Z) ⊥⊥ Y ∨ (Y, Z) ⊥⊥ X with the Lancaster statistic and the Holm-Bonferroni correction as described in Appendix D, as well as the two-variable based test (which performs three standard two-variable tests and applies the Holm-Bonferroni correction). We also plot the Type II error of the conditional independence test for X ⊥⊥ Y | Z from [18]. Under the assumption that X ⊥⊥ Y (correct on both datasets), the negation of each of these three hypotheses is equivalent to the presence of the V-structure X → Z ← Y, so rejection of the null can be viewed as a V-structure detection procedure. As dimensionality increases, the Lancaster statistic appears significantly more sensitive to the interactions present than the competing approaches, which is particularly pronounced in Dataset A.

6 Conclusions

We have constructed permutation-based nonparametric tests for three-variable interactions, including the Lancaster interaction and total independence. The tests can be used in datasets where only higher-order interactions persist, i.e., variables are pairwise independent, as well as in cases where joint dependence may be easier to detect than pairwise dependence, for instance when the effect of two variables on a third is not additive. The flexibility of the framework of RKHS embeddings of signed measures allows us to consider variables that are themselves multidimensional. While the total independence case readily generalizes to more than three variables, the combinatorial nature of joint cumulants implies that detecting interactions of higher order requires significantly more costly computation.

References

[1] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms.
In ALT, pages 63–78, 2005.
[2] G. Székely, M. Rizzo, and N. K. Bakirov. Measuring and testing dependence by correlation of distances. Ann. Stat., 35(6):2769–2794, 2007.
[3] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. Ann. Stat., 41(5):2263–2291, 2013.
[4] F. R. Bach and M. I. Jordan. Kernel independent component analysis. J. Mach. Learn. Res., 3:1–48, 2002.
[5] K. Fukumizu, F. Bach, and A. Gretton. Statistical consistency of kernel canonical correlation analysis. J. Mach. Learn. Res., 8:361–383, 2007.
[6] J. Dauxois and G. M. Nkiet. Nonlinear canonical analysis and independence tests. Ann. Stat., 26(4):1254–1278, 1998.
[7] D. Pál, B. Póczos, and Cs. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In NIPS 23, 2010.
[8] A. Kankainen. Consistent Testing of Total Independence Based on the Empirical Characteristic Function. PhD thesis, University of Jyväskylä, 1995.
[9] S. Bernstein. The Theory of Probabilities. Gastehizdat Publishing House, Moscow, 1946.
[10] M. Kayano, I. Takigawa, M. Shiga, K. Tsuda, and H. Mamitsuka. Efficiently finding genome-wide three-way gene interactions from transcript- and genotype-data. Bioinformatics, 25(21):2735–2743, 2009.
[11] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Ann. Stat., 34(3):1436–1462, 2006.
[12] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Stat., 4:935–980, 2011.
[13] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2001.
[14] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. 2nd edition, 2000.
[15] M. Kalisch and P. Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC algorithm. J. Mach. Learn. Res., 8:613–636, 2007.
[16] X. Sun, D. Janzing, B. Schölkopf, and K. Fukumizu. A kernel-based causal learning algorithm. In ICML, pages 855–862, 2007.
[17] R. Tillman, A. Gretton, and P. Spirtes. Nonlinear directed acyclic structure learning with weakly additive noise models. In NIPS 22, 2009.
[18] K. Zhang, J. Peters, D. Janzing, and B. Schölkopf. Kernel-based conditional independence test and application in causal discovery. In UAI, pages 804–813, 2011.
[19] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In NIPS 20, pages 585–592, Cambridge, MA, 2008. MIT Press.
[20] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In NIPS 20, pages 489–496, 2008.
[21] H. O. Lancaster. The Chi-Squared Distribution. Wiley, London, 1969.
[22] B. Streitberg. Lancaster interactions revisited. Ann. Stat., 18(4):1878–1885, 1990.
[23] K. Fukumizu, B. Sriperumbudur, A. Gretton, and B. Schölkopf. Characteristic kernels on groups and semigroups. In NIPS 21, pages 473–480, 2009.
[24] A. Kankainen. Consistent Testing of Total Independence Based on the Empirical Characteristic Function. PhD thesis, University of Jyväskylä, 1995.
[25] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[26] B. Sriperumbudur, K. Fukumizu, and G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12:2389–2410, 2011.
[27] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res., 11:1517–1561, 2010.
[28] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. J. Mach. Learn. Res., 13:723–773, 2012.
[29] D. Sejdinovic, A. Gretton, B. Sriperumbudur, and K. Fukumizu. Hypothesis testing using pairwise distances and associated kernels. In ICML, 2012.
[30] G. Székely and M. Rizzo.
Testing for equal distributions in high dimension. InterStat, (5), November 2004.
[31] L. Baringhaus and C. Franz. On a new multivariate two-sample test. J. Multivariate Anal., 88(1):190–206, 2004.
[32] G. Székely and M. Rizzo. Brownian distance covariance. Ann. Appl. Stat., 4(3):1233–1303, 2009.
[33] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[34] T. P. Speed. Cumulants and partition lattices. Austral. J. Statist., 25:378–388, 1983.
[35] S. Holm. A simple sequentially rejective multiple test procedure. Scand. J. Statist., 6(2):65–70, 1979.
[36] A. Gretton, K. Fukumizu, Z. Harchaoui, and B. Sriperumbudur. A fast, consistent kernel two-sample test. In NIPS 22, Red Hook, NY, 2009. Curran Associates Inc.
Online Learning in Markov Decision Processes with Adversarially Chosen Transition Probability Distributions

Yasin Abbasi-Yadkori (Queensland University of Technology, yasin.abbasiyadkori@qut.edu.au), Peter L. Bartlett (UC Berkeley and QUT, bartlett@eecs.berkeley.edu), Varun Kanade (UC Berkeley, vkanade@eecs.berkeley.edu), Yevgeny Seldin (Queensland University of Technology, yevgeny.seldin@gmail.com), Csaba Szepesvári (University of Alberta, szepesva@cs.ualberta.ca)

Abstract

We study the problem of online learning in Markov Decision Processes (MDPs) when both the transition distributions and loss functions are chosen by an adversary. We present an algorithm that, under a mixing assumption, achieves $O(\sqrt{T\log|\Pi|} + \log|\Pi|)$ regret with respect to a comparison set of policies Π. The regret is independent of the size of the state and action spaces. When expectations over sample paths can be computed efficiently and the comparison set Π has polynomial size, this algorithm is efficient. We also consider the episodic adversarial online shortest path problem. Here, in each episode an adversary may choose a weighted directed acyclic graph with an identified start and finish node. The goal of the learning algorithm is to choose a path that minimizes the loss while traversing from the start to finish node. At the end of each episode the loss function (given by weights on the edges) is revealed to the learning algorithm. The goal is to minimize regret with respect to a fixed policy for selecting paths. This problem is a special case of the online MDP problem. It was shown that for randomly chosen graphs and adversarial losses, the problem can be efficiently solved. We show that it can also be efficiently solved for adversarial graphs and randomly chosen losses.
When both graphs and losses are adversarially chosen, we show that designing efficient algorithms for the adversarial online shortest path problem (and hence for the adversarial MDP problem) is as hard as learning parity with noise, a notoriously difficult problem that has been used to design efficient cryptographic schemes. Finally, we present an efficient algorithm whose regret scales linearly with the number of distinct graphs.

1 Introduction

In many sequential decision problems, the transition dynamics can change with time. For example, in steering a vehicle, the state of the vehicle is determined by the actions taken by the driver, but also by external factors, such as terrain and weather conditions. As another example, the state of a robot that moves in a room is determined both by its actions and by how people in the room interact with it. The robot might not have influence over these external factors, or it might be very difficult to model them. Other examples occur in portfolio optimization, clinical trials, and two-player games such as poker. We consider the problem of online learning in Markov Decision Processes (MDPs) when the transition probability distributions and loss functions are chosen adversarially and are allowed to change with time. We study the following game between a learner and an adversary:

1. The (oblivious) adversary chooses a sequence of transition kernels $m_t$ and loss functions $\ell_t$.
2. At time t:
   (a) The learner observes the state $x_t$ in the state space X and chooses an action $a_t$ in the action space A.
   (b) The new state $x_{t+1} \in X$ is drawn at random according to the distribution $m_t(\cdot|x_t, a_t)$.
   (c) The learner observes the transition kernel $m_t$ and the loss function $\ell_t$, and suffers the loss $\ell_t(x_t, a_t)$.

To handle the case when the representation of $m_t$ or $\ell_t$ is very large, we assume that the learner has black-box access to $m_t$ and $\ell_t$. The above game is played for a total of T rounds, and the total loss suffered by the learner is $\sum_{t=1}^T \ell_t(x_t, a_t)$.
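The round structure above can be written as a small simulation loop. This is an illustrative sketch with interfaces of our own choosing, not part of the paper:

```python
import numpy as np

def play_online_mdp(learner, kernels, losses, x0, T, rng):
    """Run the full-information online MDP game: at round t the learner sees x_t
    and acts, suffers loss l_t(x_t, a_t), the state transitions according to
    m_t(. | x_t, a_t), and the whole pair (m_t, l_t) is then revealed."""
    x, total = x0, 0.0
    for t in range(T):
        a = learner.act(x)
        total += losses[t](x, a)
        x = kernels[t](x, a, rng)            # x_{t+1} ~ m_t(. | x_t, a_t)
        learner.observe(kernels[t], losses[t])
    return total
```

The oblivious adversary is modeled by fixing the whole sequence `kernels`, `losses` before play begins; the learner only influences which states it visits.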
In the absence of state variables, the MDP problem reduces to a full-information online learning problem (Cesa-Bianchi and Lugosi [1]). The difficulty with MDP problems is that, unlike full-information online learning problems, the choice of a policy at each round changes the future states and losses. A policy is a mapping $\pi : X \to \Delta_A$, where $\Delta_A$ denotes the set of distributions over A. To evaluate the learner's performance, we imagine a hypothetical game where at each round the action played is chosen according to a fixed policy π, and the transition kernels $m_t$ and loss functions $\ell_t$ are the same as those chosen by the oblivious adversary. Let $(x^\pi_t, a^\pi_t)$ denote a sequence of state and action pairs in this game. Then the loss of the policy π is $\sum_{t=1}^T \ell_t(x^\pi_t, a^\pi_t)$. Define a set Π of policies that will be used as a benchmark to evaluate the learner's performance. The regret of a learner A with respect to a policy π ∈ Π is defined as the random variable $R_T(A, \pi) = \sum_{t=1}^T \ell_t(x_t, a_t) - \sum_{t=1}^T \ell_t(x^\pi_t, a^\pi_t)$. The goal in adversarial online learning is to design learning algorithms for which the regret with respect to any policy grows sublinearly with T, the total number of rounds played. Algorithms with such a guarantee, somewhat unfortunately, are typically termed no-regret algorithms. We also study a special case of this problem: the episodic online adversarial shortest path problem. Here, in each episode the adversary chooses a layered directed acyclic graph with a unique start and finish node. The adversary also chooses a loss function, i.e., a weight for every edge in the graph. The goal of the learning algorithm is to choose a path from start to finish that minimizes the total loss. The loss along any path is simply the sum of the weights on the edges. At the end of the round the graph and the loss function are revealed to the learner.
The goal, as in the case of the online MDP problem, is to minimize regret with respect to a class of policies for choosing the path. Note that the online shortest path problem is a special case of the online MDP problem; the states are the nodes in the graph and the transition dynamics is specified by the edges.

1.1 Related Work

Burnetas and Katehakis [2], Jaksch et al. [3], and Bartlett and Tewari [4] propose efficient algorithms for finite MDP problems with stochastic transitions and loss functions. These results are extended to MDPs with large state and action spaces in [5, 6, 7]. Abbasi-Yadkori and Szepesvári [5] and Abbasi-Yadkori [6] derive algorithms with $O(\sqrt{T})$ regret for linearly parameterized MDP problems, while Ortner and Ryabko [7] derive $O(T^{(2d+1)/(2d+2)})$ regret bounds under a Lipschitz assumption, where d is the dimensionality of the state space. We note that these algorithms are computationally expensive. Even-Dar et al. [8] consider the problem of online learning in MDPs with fixed and known dynamics, but adversarially changing loss functions. They show that when the transition kernel satisfies a mixing condition (see Section 3), there is an algorithm with regret bound $O(\sqrt{T})$. Yu and Mannor [9, 10] study a harder setting, where the transition dynamics may also change adversarially over time. However, their regret bound scales with the amount of variation in the transition kernels and in the worst case may grow linearly with time. Recently, Neu et al. [11] give a no-regret algorithm for the episodic shortest path problem with adversarial losses but stochastic transition dynamics.

1.2 Our Contributions

First, we study a general MDP problem with large (possibly continuous) state and action spaces and adversarially changing dynamics and loss functions. We present an algorithm that guarantees $O(\sqrt{T})$ regret with respect to a suitably small (totally bounded) class of policies Π for this online MDP problem.
The regret grows with the metric entropy of Π, so that if the comparison class is the set of all policies (that is, the algorithm must compete with the optimal fixed policy), it scales polynomially with the size of the state and action spaces. The above algorithm is efficient as long as the comparison class has polynomial size and we can compute expectations over sample paths for each policy. This result has several advantages over the results of [5, 6, 7]. First, the transition distributions and loss functions are chosen adversarially. Second, by designing an appropriate small class of comparison policies, the algorithm is efficient, even in the face of very large state and action spaces. Next, we present efficient no-regret algorithms for the episodic online shortest path problem for two cases: when the graphs and loss functions (edge weights) are chosen adversarially and the set of graphs is small; and when the graphs are chosen adversarially, but the loss is stochastic. Finally, we show that for the general adversarial online shortest path problem, designing an efficient no-regret algorithm is at least as hard as learning parity with noise. Since the online shortest path problem is a special case of the online MDP problem, the hardness result is also applicable there.¹ The noisy parity problem is widely believed to be computationally intractable, and has been used to design cryptographic schemes.

Organization: In Section 3 we introduce an algorithm for MDP problems with adversarially chosen transition kernels and loss functions. Section 4 discusses how this algorithm can also be applied to the online episodic shortest path problem with adversarially varying graphs and loss functions, and also considers the case of stochastic loss functions. Finally, in Section 4.2, we show the reduction from the adversarial online episodic shortest path problem to learning parity with noise.

2 Notations

Let X ⊂ $\mathbb{R}^n$ be a state space and A ⊂ $\mathbb{R}^d$ be an action space.
Let $\Delta_S$ be the space of probability distributions over a set S. Define a policy π as a mapping $\pi : X \to \Delta_A$. We use $\pi(a|x)$ to denote the probability of choosing an action a in state x under policy π. A random action under policy π is denoted by π(x). A transition probability kernel (or transition kernel) m is a mapping $m : X \times A \to \Delta_X$. For finite X, let P(π, m) be the transition probability matrix of policy π under transition kernel m. A loss function is a bounded real-valued function over state and action spaces, $\ell : X \times A \to \mathbb{R}$. For a vector v, define $\|v\|_1 = \sum_i |v_i|$. For a real-valued function f defined over X × A, define $\|f\|_{\infty,1} = \max_{x\in X}\sum_{a\in A}|f(x, a)|$. The inner product between two vectors v and w is denoted by ⟨v, w⟩.

3 Online MDP Problems

In this section, we study a general MDP problem with large state and action spaces. The adversary can change the dynamics and the loss functions, but is restricted to choose dynamics that satisfy a mixing condition.

Assumption A1 (Uniform Mixing) There exists a constant τ > 0 such that for all distributions d and d′ over the state space, any deterministic policy π, and any transition kernel m ∈ M, $\|dP(\pi, m) - d'P(\pi, m)\|_1 \le e^{-1/\tau}\|d - d'\|_1$.

¹There was an error in the proof of a claimed hardness result for the online adversarial MDP problem [8]; this claim has since been retracted [12, 13].

For all policies π ∈ Π, set $w_{\pi,0} = 1$. Set $\eta = \min\{\sqrt{\log|\Pi|/T},\, 1/2\}$. Choose $\pi_1$ uniformly at random.
for t := 1, 2, . . . , T do
  Learner takes the action $a_t \sim \pi_t(\cdot|x_t)$ and adversary chooses $m_t$ and $\ell_t$.
  Learner suffers loss $\ell_t(x_t, a_t)$ and observes $m_t$ and $\ell_t$.
  Update state: $x_{t+1} \sim m_t(\cdot|x_t, a_t)$.
  For all policies π, $w_{\pi,t} = w_{\pi,t-1}(1-\eta)^{E[\ell_t(x^\pi_t, \pi)]}$. Let $W_t = \sum_{\pi\in\Pi} w_{\pi,t}$ and, for any π, $p_{\pi,t+1} = w_{\pi,t}/W_t$.
  With probability $\beta_t = w_{\pi_t,t}/w_{\pi_t,t-1}$ keep the previous policy, $\pi_{t+1} = \pi_t$; with probability $1-\beta_t$, choose $\pi_{t+1}$ according to the distribution $p_{\cdot,t+1}$.
end for
Figure 1: OMDP: The Online Algorithm for Markov Decision Processes

This assumption excludes deterministic MDPs, which can be more difficult to deal with. As discussed by Neu et al. [14], if Assumption A1 holds for deterministic policies, then it holds for all policies. We propose an exponentially weighted average algorithm for this problem. The algorithm, called OMDP and shown in Figure 1, maintains a distribution over the policy class, but changes its policy only with a small probability. The main results of this section are the following regret bounds for the OMDP algorithm. The proofs can be found in Appendix A.

Theorem 1. Let the loss functions selected by the adversary be bounded in [0, 1], and the transition kernels selected by the adversary satisfy Assumption A1. Then, for any policy π ∈ Π, $E[R_T(\mathrm{OMDP}, \pi)] \le (4 + 2\tau^2)\sqrt{T\log|\Pi|} + \log|\Pi|$.

Corollary 2. Let Π be an arbitrary policy space, N(ε) be the ε-covering number of the metric space $(\Pi, \|\cdot\|_{\infty,1})$, and C(ε) be an ε-cover. Assume that we run the OMDP algorithm on C(ε). Then, under the same assumptions as in Theorem 1, for any policy π ∈ Π, $E[R_T(\mathrm{OMDP}, \pi)] \le (4 + 2\tau^2)\sqrt{T\log N(\epsilon)} + \log N(\epsilon) + \tau T\epsilon$.

Remark 3. If we choose Π to be the space of deterministic policies and X and A are finite spaces, from Theorem 1 we obtain that $E[R_T(\mathrm{OMDP}, \pi)] \le (4 + 2\tau^2)\sqrt{T|X|\log|A|} + |X|\log|A|$. This result, however, is not sufficient to show that the average regret with respect to the optimal stationary policy converges to zero. This is because, unlike in the standard MDP framework, the optimal stationary policy is not necessarily deterministic. Corollary 2 extends the result of Theorem 1 to continuous policy spaces. In particular, if X and A are finite spaces and Π is the space of all policies, $N(\epsilon) \le (|A|/\epsilon)^{|A||X|}$, so the expected regret satisfies $E[R_T(\mathrm{OMDP}, \pi)] \le (4+2\tau^2)\sqrt{T|A||X|\log\frac{|A|}{\epsilon}} + |A||X|\log\frac{|A|}{\epsilon} + \tau T\epsilon$. With the choice $\epsilon = \frac{1}{T}$, we get $E[R_T(\mathrm{OMDP}, \pi)] = O(\tau^2\sqrt{T|A||X|\log(|A|T)})$.
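One round of the weight update and lazy policy switch in Figure 1 can be sketched as follows, with the expected per-policy losses supplied externally; the function name and interface are our own, not the paper's:

```python
import numpy as np

def omdp_step(weights, current, est_losses, eta, rng):
    """w_{pi,t} = w_{pi,t-1} (1 - eta)^{E[l_t(x_t^pi, pi)]}; keep the current
    policy with probability beta_t = w_{pi_t,t} / w_{pi_t,t-1}, otherwise
    resample from the normalized weights p_{.,t+1}."""
    old_w = weights[current]
    weights = weights * (1.0 - eta) ** np.asarray(est_losses)
    beta = weights[current] / old_w
    if rng.random() < beta:
        nxt = current                                    # lazy: policy unchanged
    else:
        nxt = int(rng.choice(len(weights), p=weights / weights.sum()))
    return weights, nxt
```

Because the losses lie in [0, 1], β_t ∈ [1 − η, 1], so the switching probability 1 − β_t ≤ η is small; this rare switching is what makes the B_T term in the analysis sublinear.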
3.1 Proof Sketch

The main idea behind the design and the analysis of our algorithm is the following regret decomposition:

$R_T(A, \pi) = \Big(\sum_{t=1}^T \ell_t(x^A_t, a_t) - \sum_{t=1}^T \ell_t(x^{\pi_t}_t, \pi_t)\Big) + \Big(\sum_{t=1}^T \ell_t(x^{\pi_t}_t, \pi_t) - \sum_{t=1}^T \ell_t(x^\pi_t, \pi)\Big), \quad (1)$

where A is an online learning algorithm that generates a policy $\pi_t$ at round t, $x^A_t$ is the state at round t if we have followed the policies generated by algorithm A, and $\ell(x, \pi) = \ell(x, \pi(x))$. Let

$B_T(A) = \sum_{t=1}^T \ell_t(x^A_t, a_t) - \sum_{t=1}^T \ell_t(x^{\pi_t}_t, \pi_t), \qquad C_T(A, \pi) = \sum_{t=1}^T \ell_t(x^{\pi_t}_t, \pi_t) - \sum_{t=1}^T \ell_t(x^\pi_t, \pi).$

Note that the choice of policies has no influence over future losses in $C_T(A, \pi)$. Thus, $C_T(A, \pi)$ can be bounded by a reduction to full-information online learning algorithms. Also, notice that the competitor policy π does not appear in $B_T(A)$; in fact, $B_T(A)$ depends only on the algorithm A. We will show that if the class of transition kernels satisfies Assumption A1 and algorithm A rarely changes its policies, then $B_T(A)$ can be bounded by a sublinear term. To be more precise, let $\alpha_t$ be the probability that algorithm A changes its policy at round t. We will require that there exists a constant D such that for any 1 ≤ t ≤ T, any sequence of models $m_1, \ldots, m_t$ and loss functions $\ell_1, \ldots, \ell_t$, $\alpha_t \le D/\sqrt{t}$. We would like to have a full-information online learning algorithm that rarely changes its policy. The first candidate that we consider is the well-known Exponentially Weighted Average (EWA) algorithm [15, 16]. In our MDP problem, the EWA algorithm chooses a policy π ∈ Π according to the distribution $q_t(\pi) \propto \exp\big(-\lambda\sum_{s=1}^{t-1} E[\ell_s(x^\pi_s, \pi)]\big)$ for some λ > 0. The policies that this EWA algorithm generates are most likely different in consecutive rounds, and thus the EWA algorithm might change its policy frequently. However, a variant of EWA, called Shrinking Dartboard (SD) (Geulen et al. [17]), rarely changes its policy (see Lemma 8 in Appendix A). The OMDP algorithm is based on the SD algorithm.
Note that the algorithm needs to know the number of rounds, T, in advance. Also note that we could use any rarely switching algorithm, such as the Follow the Lazy Leader algorithm of Kalai and Vempala [18], as the subroutine.

4 Adversarial Online Shortest Path Problem

We consider the following adversarial online shortest path problem with changing graphs. The problem is a repeated game played between a decision-maker and an (oblivious) adversary over T rounds. At each round t the adversary presents a directed acyclic graph $g_t$ on n nodes to the decision-maker, with L layers indexed by {1, . . . , L} and a special start and finish node. Each layer contains a fixed set of nodes and has connections only with the next layer.² The decision-maker must choose a path $p_t$ from the start to the finish node. Then, the adversary reveals weights across all the edges of the graph. The loss $\ell_t(g_t, p_t)$ of the decision-maker is the weight along the path that the decision-maker took on that round. Denote by [k] the set {1, 2, . . . , k}. A policy is a mapping π : [n] → [n]. Each policy may be interpreted as giving a start-to-finish path. Suppose that the start node is s ∈ [n]; then π(i) gives the subsequent node. The path is interpreted as follows: if at a node v the edge (v, π(v)) exists, then the next node is π(v); otherwise, the next node is an arbitrary (pre-determined) choice that is adjacent to v. We compete against the class of such policies for choosing the shortest path. Denote the class of such policies by Π. The regret of a decision-maker A with respect to a policy π ∈ Π is defined as $R_T(A, \pi) = \sum_{t=1}^T \ell_t(g_t, p_t) - \sum_{t=1}^T \ell_t(g_t, \pi(g_t))$, where $\pi(g_t)$ is the path obtained by following the policy π starting at the source node. Note that it is possible that there exists no policy that would result in an actual path that leads to the sink for some graph. In this case we say that the loss of the policy is infinite.
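The policy-to-path semantics just described (take the preferred edge if it exists, else a fixed fallback edge) can be made concrete. A hypothetical sketch, assuming the fallback edges always lead to the finish node:

```python
def follow_policy(edges, policy, fallback, start, finish):
    """Trace the start-to-finish path induced by pi: node -> preferred next node.
    `edges` is a set of (u, v) pairs; `fallback[v]` is the pre-determined
    adjacent node used when the preferred edge (v, policy[v]) is absent."""
    path, v = [start], start
    while v != finish:
        v = policy[v] if (v, policy[v]) in edges else fallback[v]
        path.append(v)
    return path
```

The loss of the policy on graph g with weights w would then be the sum of w over consecutive pairs of this path.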
Thus, there may be adversarially chosen sequences of graphs for which the regret of a decision-maker is not well-defined. This can be easily corrected by the adversary ensuring that the graph always has some fixed set of edges which result in a (possibly high loss) s → f path. In fact, we show that the adversary can choose a sequence of graphs and loss functions that make this problem at least as hard as learning noisy parities. Learning noisy parities is a notoriously hard problem in computational learning theory. The best known algorithm runs in time $2^{O(n/\log(n))}$ [20], and the presumed hardness of this and related problems has been used for designing cryptographic protocols [21]. Interestingly, for the hardness result to hold, it is essential that the adversary have the ability to control both the sequence of graphs and losses. The problem is well-understood when the graphs are generated randomly and the losses are adversarial. Jaksch et al. [3] and Bartlett and Tewari [4] propose efficient algorithms for problems with stochastic losses.³ Neu et al. [22] extend these results to problems with adversarial loss functions.

²As noted by Neu et al. [19], any directed acyclic graph can be transformed into a graph that satisfies our assumptions.
³These algorithms were originally proposed for continuing problems, but we can use them in shortest path problems with small modifications.

One can also ask what happens in the case when the graphs are chosen by the adversary, but the weight of each edge is drawn at random according to a fixed stationary distribution. In this setting, we show a reduction to bandit linear optimization. Thus, in fact, the algorithm does not need to see the weights of all edges at the end of the round, but only needs to know the loss it suffered. Finally, we consider the case when both graphs and losses are chosen adversarially.
Although the general problem is at least as hard as learning noisy parities, we give an efficient algorithm whose regret scales linearly with the number of different graphs. Thus, if the adversary is forced to choose graphs from some small set G, then we have an efficient algorithm for solving the problem. We note that, in fact, our algorithm does not need to see the graph $g_t$ at the beginning of the round, in which case an algorithm achieving $O(|G|\sqrt{T})$ regret may be trivially obtained.

4.1 Stochastic Loss Functions and Adversarial Graphs

Consider the case when the weight of each edge is chosen from a fixed distribution. Then it is easy to see that the expected loss of any path is a fixed linear function of the expected weights vector. The set of available paths depends on the graph and it may change from time to time. This is an instance of the stochastic linear bandit problem, for which efficient algorithms exist [23, 24, 25].

Theorem 4. Let us represent each path by a binary vector of length n(n − 1)/2, such that the i-th element is 1 only if the corresponding edge is present in the path. Assume that the learner suffers the loss c(p) for choosing path p, where $E[c(p)] = \langle \ell, p\rangle$ and the loss vector $\ell \in \mathbb{R}^{n(n-1)/2}$ is fixed. Let $P_t$ be the set of paths in a graph $g_t$. Consider the CONFIDENCEBALL1 algorithm of Dani et al. [24] applied to the shortest path problem with a changing action set $P_t$ and the loss function ℓ. Then the regret with respect to the best path in each round is $Cn^3\sqrt{T}$ for a problem-independent constant C.

Let $\hat\ell_t$ be the least squares estimate of ℓ at round t, $V_t = \sum_{s=1}^{t-1} p_s p_s^\top$ be the covariance matrix, and $P_t$ be the decision set at round t. The CONFIDENCEBALL1 algorithm constructs a high-probability norm-1 ball confidence set, $C_t = \big\{\ell : \|V_t^{1/2}(\ell - \hat\ell_t)\|_1 \le \beta_t\big\}$ for an appropriate $\beta_t$, and chooses an action $p_t$ according to $p_t = \operatorname{argmin}_{\ell\in C_t,\, p\in P_t}\langle \ell, p\rangle$. Dani et al. [24] prove that the regret of the CONFIDENCEBALL1 algorithm is bounded by $O(m^{3/2}\sqrt{T})$, where m is the dimensionality of the action set (in our case m = n(n − 1)/2). The above optimization can be solved efficiently, because only 2n corners of $C_t$ need to be evaluated. Note that the regret in Theorem 4 is with respect to the best path in each round, which is a stronger result than competing with a fixed policy.

4.2 Hardness Result

In this section, we show that in the setting when both the graphs and the losses are chosen by an adversary, the problem is at least as hard as the noisy parity problem. We consider the online agnostic parity learning problem. Recall that the class of parity functions over $\{0, 1\}^n$ is the following: for S ⊆ [n], $\mathrm{PAR}_S(x) = \oplus_{i\in S}\, x_i$, where ⊕ denotes modulo-2 addition. The class is PARITIES = $\{\mathrm{PAR}_S \mid S \subseteq [n]\}$. In the online setting, the learning algorithm is given $x_t \in \{0, 1\}^n$; the learning algorithm then picks $\hat y_t \in \{0, 1\}$, and then the true label $y_t$ is revealed. The learning algorithm suffers loss $I(\hat y_t \ne y_t)$. The regret of the learning algorithm with respect to PARITIES is defined as $\mathrm{Regret} = \sum_{t=1}^T I(\hat y_t \ne y_t) - \min_{\mathrm{PAR}_S\in\mathrm{PARITIES}} \sum_{t=1}^T I(\mathrm{PAR}_S(x_t) \ne y_t)$. The goal is to design a learning algorithm that runs in time polynomial in n, T and suffers regret $O(\mathrm{poly}(n)\,T^{1-\delta})$ for some constant δ > 0. It follows from prior work that online agnostic learning of parities is at least as hard as the offline version (see Littlestone [26], Kanade and Steinke [27]). As mentioned previously, the agnostic parity learning problem is notoriously difficult. Thus, it seems unlikely that a computationally efficient no-regret algorithm for this problem exists.

Theorem 5. Suppose there is a no-regret algorithm for the online adversarial shortest path problem that runs in time poly(n, T) and achieves regret $O(\mathrm{poly}(n)\,T^{1-\delta})$ for any constant δ > 0.
Then there is a polynomial-time algorithm for online agnostic parity learning that achieves regret O(poly(n) T^{1−δ}). By the online-to-batch reduction, this would imply a polynomial-time algorithm for agnostically learning parities.

[Figure 2(a): a layered graph on the nodes 1a, 2a, 2b, 3a, 3b, 4a, 4b, 5a, 5b, 6a, 6b and the sink ⊥; the two edges into ⊥ carry the weights y and 1 − y.]

Figure 2(b) (Improved Algorithm for the Online Shortest Path Problem):

    for t := 1, 2, … do
        Adversary chooses a graph g_t ∈ G
        for l = 1, …, L do
            Initialize an EWA expert algorithm E
            for s = 1, …, t − 1 do
                if g_s ∈ C(x_{t,l}) then
                    Feed expert E with the value function Q_s = Q^{π_s,g_s,c_s}
                end if
            end for
            Let π_t(·|x_{t,l}) be the distribution over actions of the expert E
            Take the action a_{t,l} ∼ π_t(·|x_{t,l}), suffer the loss c_t(n_{t,l}, a_{t,l}), and move to the node n_{t,l+1} = g_t(n_{t,l}, a_{t,l})
        end for
        Learner observes the graph g_t and the loss function c_t
        Compute the value function Q_t = Q^{π_t,g_t,c_t} for all nodes n′ ∈ [n]
    end for

Figure 2: (a) Encoding the example (1, 0, 1, 0, 1) ∈ {0,1}⁵ as a graph. (b) Improved Algorithm for the Online Shortest Path Problem.

Proof. We first show how to map a point (x, y) to a graph and a loss function. Let (x, y) ∈ {0,1}^n × {0,1}. We define a graph g(x) and a loss function ℓ_{x,y} associated with (x, y). Define a graph on 2n + 2 nodes, named 1a, 2a, 2b, 3a, 3b, …, na, nb, (n+1)a, (n+1)b, ⊥, in that order. Let E(x) denote the set of edges of g(x). The set E(x) contains the following edges: (i) if x_1 = 1, both (1a, 2a) and (1a, 2b) are in E(x); if x_1 = 0, only (1a, 2a) is present. (ii) For 1 < i ≤ n, if x_i = 1, the edges (ia, (i+1)a), (ia, (i+1)b), (ib, (i+1)a), (ib, (i+1)b) are all present; if x_i = 0, only the two edges (ia, (i+1)a) and (ib, (i+1)b) are present. (iii) The two edges ((n+1)a, ⊥) and ((n+1)b, ⊥) are always present. For the loss function, define the weights as follows: the weight of the edge ((n+1)a, ⊥) is y; the weight of the edge ((n+1)b, ⊥) is 1 − y. The weights of all the remaining edges are set to 0. Figure 2(a) shows the encoding of the example (1, 0, 1, 0, 1) ∈ {0,1}⁵.
Suppose an algorithm with the stated regret bound for the online shortest path problem exists; call it U. We will use this algorithm to solve the online parity learning problem. Let x_t be an example received; then pass the graph g(x_t) to the algorithm U. The start vertex is 1a and the finish vertex is ⊥. If the path p_t chosen by U reaches ⊥ using the edge ((n+1)a, ⊥), then set ŷ_t to be 0; otherwise, set ŷ_t = 1. Thus, in effect we are using algorithm U as a meta-algorithm for the online agnostic parity learning problem. First, it is easy to check that the loss suffered by the meta-algorithm on the parity problem is exactly the same as the loss of U on the online shortest path problem. This follows directly from the definition of the losses on the edges. Next, we claim that for any S ⊆ [n], there is a policy π_S that achieves the same loss (on the online shortest path problem) as the parity PAR_S does (on the parity learning problem). The policy is as follows: (i) from node ia, if i ∈ S and (ia, (i+1)b) ∈ E(g_t), go to (i+1)b; otherwise go to (i+1)a. (ii) From node ib, if i ∈ S and (ib, (i+1)a) ∈ E(g_t), go to (i+1)a; otherwise go to (i+1)b. (iii) Finally, from either (n+1)a or (n+1)b, just move to ⊥.

We can think of the path p_t as being in type-a nodes or type-b nodes. For each i ∈ S such that x_{t,i} = 1, the path p_t switches types. Thus, if PAR_S(x_t) = 1, p_t reaches ⊥ via the edge ((n+1)b, ⊥), and if PAR_S(x_t) = 0, p_t reaches ⊥ via the edge ((n+1)a, ⊥). Recall that the loss function is defined as follows: the weight of the edge ((n+1)a, ⊥) is y_t, the weight of the edge ((n+1)b, ⊥) is 1 − y_t, and all other edges have loss 0. Thus, the loss suffered by the policy π_S is 1 if PAR_S(x_t) ≠ y_t and 0 otherwise. This is exactly the loss of the parity function PAR_S on the agnostic parity learning problem. Thus, if the algorithm U has regret O(poly(n) T^{1−δ}), then the meta-algorithm for the online agnostic parity learning problem also has regret O(poly(n) T^{1−δ}).
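The reduction above is easy to simulate. The following sketch (node names and helper functions are ours, chosen to mirror the text) builds the edge set E(x) per rules (i)-(iii) and checks that the walk induced by the policy π_S ends at ⊥ on the 'a' side exactly when PAR_S(x) = 0:

```python
# Sketch of the reduction in Theorem 5: encode x in {0,1}^n as the layered
# graph g(x), then check that the policy pi_S reproduces PAR_S(x).
# Node names ('1a', '2b', ..., 'bot' for the sink) are illustrative choices.

def edges(x):
    """Edge set E(x) of g(x), following rules (i)-(iii) in the proof."""
    n = len(x)
    E = {('1a', '2a')}                   # (i): (1a, 2a) is always present
    if x[0] == 1:
        E.add(('1a', '2b'))              # (i): cross edge only if x_1 = 1
    for i in range(2, n + 1):            # (ii): layers 2..n
        E.add((f'{i}a', f'{i+1}a')); E.add((f'{i}b', f'{i+1}b'))
        if x[i - 1] == 1:
            E.add((f'{i}a', f'{i+1}b')); E.add((f'{i}b', f'{i+1}a'))
    E.add((f'{n+1}a', 'bot')); E.add((f'{n+1}b', 'bot'))   # (iii)
    return E

def follow_policy(S, x):
    """Walk g(x) with the policy pi_S from the proof; return the last edge used."""
    n, node, E = len(x), '1a', edges(x)
    for i in range(1, n + 1):
        typ = node[-1]                   # 'a' or 'b'
        other = 'b' if typ == 'a' else 'a'
        if i in S and (node, f'{i+1}{other}') in E:
            node = f'{i+1}{other}'       # switch type
        else:
            node = f'{i+1}{typ}'         # keep type
    return (node, 'bot')

def parity(S, x):
    return sum(x[i - 1] for i in S) % 2

x = (1, 0, 1, 0, 1)
# The walk switches type once per i in S with x_i = 1, so it ends on the
# 'b' side exactly when PAR_S(x) = 1.
print(follow_policy({1, 3}, x), parity({1, 3}, x))   # parity 0: ends on 'a'
print(follow_policy({1}, x), parity({1}, x))         # parity 1: ends on 'b'
```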
Remark 6. We observe that the online shortest path problem is a special case of online MDP learning. Thus, the above reduction also shows that, short of a major breakthrough, it is unlikely that there exists a computationally efficient algorithm for the fully adversarial online MDP problem.

4.3 Small Number of Graphs

In this section, we design an efficient algorithm and prove an O(|G|√T) regret bound, where G is the set of graphs played by the adversary up to round T. The computational complexity of the algorithm is O(L²t) at round t. The algorithm does not need to know the set G or |G|. This regret bound holds even if the graphs are revealed only at the end of the rounds. Notice that if the graphs are shown at the beginning of the rounds, obtaining regret bounds that scale like O(|G|√T) is trivial; the learner only needs to run |G| copies of the MDP-E algorithm of Even-Dar et al. [12], one for each graph.

Let n^π_{t,l} denote the node at layer l of round t if we run policy π. Let c_t(n′, a) be the loss incurred for taking action a in node n′ at round t.⁴ We construct a new graph, called G, as follows: graph G also has a layered structure with the same number of layers, L. At each layer, we have a number of states that represent all possible observations that we might have upon arriving at that layer. Thus, a state at layer l has the form x = (s, a_0, n_1, a_1, …, n_{l−1}, a_{l−1}, n_l), where n_i belongs to layer i and a_i ∈ A. Let X be the set of states in G and X_l the set of states in layer l of G. For (x, a) ∈ X × A, let c(x, a) = c(n(x), a), where n(x) is the last node observed in state x. Let g(n′, a) be the next node under graph g if we take action a in node n′. Let g(x, a) = g(n(x), a). Let c(x, π) = Σ_a π(a|x) c(x, a). For a graph g and a loss function c, define the value functions by

∀n′ ∈ [n], Q^{π,g,c}(n′, π′) = E_{a∼π′(n′)} [ c(n′, a) + Q^{π,g,c}(g(n′, a), π) ], and, for all x such that
g ∈ C(x), Q^{π,g,c}(x, π′) = Q^{π,g,c}(n(x), π′), with Q^{π,g,c}(f, a) = 0 for any π, g, c, a, where f is the finish node. Let Q_t = Q^{π_t,g_t,c_t} denote the value function associated with policy π_t at time t. For x = (s, a_0, n_1, a_1, …, n_{l−1}, a_{l−1}, n_l), define C(x) = {g ∈ G : n_1 = g(s, a_0), …, n_l = g(n_{l−1}, a_{l−1})}, the set of graphs that are consistent with the state x.

We can use the MDP-E algorithm to generate policies. The algorithm, however, is computationally expensive, as it updates a large set of experts at each round. Notice that the number of states at stage l, |X_l|, can be exponential in the number of graphs. We show a modification of the MDP-E algorithm that generates the same sequence of policies, with the advantage that the new algorithm is computationally efficient. The algorithm is shown in Figure 2(b). As the generated policies are always the same, the regret bound in the next theorem, which is proven for the MDP-E algorithm, also applies to the new algorithm. The proof can be found in Appendix B.

Theorem 7. For any policy π, E[R_T(MDP-E, π)] ≤ 2L√(8T log(2T)) + L min{|G|, max_l |X_l|} √(T log|A| / 2) + 2L.

The theorem gives a sublinear regret as long as |G| = o(√T). On the other hand, the hardness result in Theorem 5 applies when |G| = Θ(T). Characterizing regret vs. computational complexity tradeoffs when |G| is in between remains for future work.

⁴ Thus, ℓ_t(G_t, π(G_t)) = Σ_{l=1}^L c_t(n^π_{t,l}, π).

References

[1] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[2] Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222–255, 1997.
[3] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
[4] P. L. Bartlett and A. Tewari.
REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In UAI, 2009.
[5] Yasin Abbasi-Yadkori and Csaba Szepesvári. Regret bounds for the adaptive control of linear quadratic systems. In COLT, 2011.
[6] Yasin Abbasi-Yadkori. Online Learning for Linearly Parametrized Control Problems. PhD thesis, University of Alberta, 2012.
[7] Ronald Ortner and Daniil Ryabko. Online regret bounds for undiscounted continuous reinforcement learning. In NIPS, 2012.
[8] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Experts in a Markov decision process. In NIPS, 2004.
[9] Jia Yuan Yu and Shie Mannor. Arbitrarily modulated Markov decision processes. In IEEE Conference on Decision and Control, 2009.
[10] Jia Yuan Yu and Shie Mannor. Online learning in Markov decision processes with arbitrarily changing rewards and transitions. In GameNets, 2009.
[11] Gergely Neu, András György, and Csaba Szepesvári. The adversarial stochastic shortest path problem with unknown transition probabilities. In AISTATS, 2012.
[12] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Online Markov decision processes. Mathematics of Operations Research, 34(3):726–736, 2009.
[13] Eyal Even-Dar. Personal communication, 2013.
[14] Gergely Neu, András György, Csaba Szepesvári, and András Antos. Online Markov decision processes under bandit feedback. In NIPS, 2010.
[15] Vladimir Vovk. Aggregating strategies. In COLT, pages 372–383, 1990.
[16] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[17] Sascha Geulen, Berthold Vöcking, and Melanie Winkler. Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, 2010.
[18] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
[19] Gergely Neu, András György, and Csaba Szepesvári.
The online loop-free stochastic shortest path problem. In COLT, 2010.
[20] Adam Tauman Kalai, Yishay Mansour, and Elad Verbin. On agnostic boosting and parity learning. In STOC, pages 629–638, 2008.
[21] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. In STOC, pages 84–93, 2005.
[22] Gergely Neu, András György, and Csaba Szepesvári. The adversarial stochastic shortest path problem with unknown transition probabilities. In AISTATS, 2012.
[23] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 2002.
[24] V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In Rocco Servedio and Tong Zhang, editors, COLT, pages 355–366, 2008.
[25] Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In NIPS, 2011.
[26] Nick Littlestone. From on-line to batch learning. In COLT, pages 269–284, 1989.
[27] Varun Kanade and Thomas Steinke. Learning hurdles for sleeping experts. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12, pages 11–18, 2012.
Improved and Generalized Upper Bounds on the Complexity of Policy Iteration

Bruno Scherrer
Inria, Villers-lès-Nancy, F-54600, France
Université de Lorraine, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506, France
bruno.scherrer@inria.fr

Abstract

Given a Markov Decision Process (MDP) with n states and m actions per state, we study the number of iterations needed by Policy Iteration (PI) algorithms to converge to the optimal γ-discounted policy. We consider two variations of PI: Howard's PI, which changes the actions in all states with a positive advantage, and Simplex-PI, which only changes the action in the state with maximal advantage. We show that Howard's PI terminates after at most n(m−1) ⌈(1/(1−γ)) log(1/(1−γ))⌉ = O((nm/(1−γ)) log(1/(1−γ))) iterations, improving by a factor O(log n) a result of [3], while Simplex-PI terminates after at most n²(m−1) (1 + (2/(1−γ)) log(1/(1−γ))) = O((n²m/(1−γ)) log(1/(1−γ))) iterations, improving by a factor O(log n) a result of [11]. Under some structural assumptions on the MDP, we then consider bounds that are independent of the discount factor γ: given a measure τ_t of the maximal transient time and a measure τ_r of the maximal time to revisit states in recurrent classes under all policies, we show that Simplex-PI terminates after at most n²(m−1) (⌈τ_r log(nτ_r)⌉ + ⌈τ_r log(nτ_t)⌉) [(m−1)⌈nτ_t log(nτ_t)⌉ + ⌈nτ_t log(n²τ_t)⌉] = Õ(n³m²τ_tτ_r) iterations. This generalizes a recent result for deterministic MDPs by [8], in which τ_t ≤ n and τ_r ≤ n. We explain why similar results seem hard to derive for Howard's PI. Finally, under the additional (restrictive) assumption that the state space is partitioned into two sets, of states that are respectively transient and recurrent for all policies, we show that Howard's PI terminates after at most n(m−1) (⌈τ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉) = Õ(nm(τ_t + τ_r)) iterations, while Simplex-PI terminates after at most n(m−1) (⌈nτ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉) = Õ(n²m(τ_t + τ_r)) iterations.
1 Introduction

We consider a discrete-time dynamic system whose state transition depends on a control. We assume that there is a state space X of finite size n. At state i ∈ {1, …, n}, the control is chosen from a control space A of finite size¹ m. The control a ∈ A specifies the transition probability p_ij(a) = P(i_{t+1} = j | i_t = i, a_t = a) to the next state j. At each transition, the system is given a reward r(i, a, j), where r is the instantaneous reward function. In this context, we look for a stationary deterministic policy (a function π : X → A that maps states into controls²) that maximizes the expected discounted sum of rewards from any state i, called the value of policy π at state i:

v_π(i) := E[ Σ_{k=0}^∞ γ^k r(i_k, a_k, i_{k+1}) | i_0 = i, ∀k ≥ 0, a_k = π(i_k), i_{k+1} ∼ P(·|i_k, a_k) ],

where γ ∈ (0, 1) is a discount factor. The tuple ⟨X, A, p, r, γ⟩ is called a Markov Decision Process (MDP) [9, 1], and the associated problem is known as optimal control. The optimal value starting from state i is defined as v*(i) := max_π v_π(i). For any policy π, we write P_π for the n × n stochastic matrix whose elements are p_ij(π(i)), and r_π for the vector whose components are Σ_j p_ij(π(i)) r(i, π(i), j). The value functions v_π and v* can be seen as vectors on X. It is well known that v_π is the solution of the following Bellman equation: v_π = r_π + γP_π v_π; that is, v_π is a fixed point of the affine operator T_π : v ↦ r_π + γP_π v. It is also well known that v* satisfies the following Bellman equation: v* = max_π (r_π + γP_π v*) = max_π T_π v*, where the max operator is componentwise. In other words, v* is a fixed point of the nonlinear operator T : v ↦ max_π T_π v.

¹ In the works of [11, 8, 3] that we reference, the integer "m" denotes the total number of actions, that is, nm with our notation. When we restate their results, we do so with our own notation, replacing their "m" by "nm".
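The two Bellman equations above can be checked numerically. The following sketch iterates T_π and T to their fixed points on a toy 2-state, 2-action MDP; the MDP instance and the helper names are our own illustrative choices, not from the paper.

```python
# Numerical sketch of the Bellman equations: v_pi is the fixed point of the
# affine operator T_pi, and v* is the fixed point of the optimality operator T.
# The 2-state, 2-action MDP below is a made-up example.
gamma = 0.9
# P[a][i][j]: transition probability from i to j under action a
P = [[[0.9, 0.1], [0.2, 0.8]],        # action 0
     [[0.1, 0.9], [0.7, 0.3]]]        # action 1
# R[a][i]: expected immediate reward in state i under action a
R = [[1.0, 0.0], [0.0, 2.0]]

def T_pi(pi, v):
    """Affine operator T_pi: v -> r_pi + gamma * P_pi v."""
    return [R[pi[i]][i] + gamma * sum(P[pi[i]][i][j] * v[j] for j in range(2))
            for i in range(2)]

def T(v):
    """Optimality operator T: v -> max_pi T_pi v (componentwise max over actions)."""
    return [max(R[a][i] + gamma * sum(P[a][i][j] * v[j] for j in range(2))
                for a in range(2)) for i in range(2)]

v = [0.0, 0.0]
for _ in range(2000):                 # iterate T_pi for the policy pi = (0, 1)
    v = T_pi((0, 1), v)
assert all(abs(a - b) < 1e-9 for a, b in zip(v, T_pi((0, 1), v)))  # v_pi = T_pi v_pi

vstar = [0.0, 0.0]
for _ in range(2000):                 # value iteration to approximate v*
    vstar = T(vstar)
assert all(abs(a - b) < 1e-9 for a, b in zip(vstar, T(vstar)))     # v* = T v*
assert all(vs >= vp - 1e-9 for vs, vp in zip(vstar, v))            # v* >= v_pi
```

Both iterations converge geometrically at rate γ, which is the same contraction that drives the bounds discussed below.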
For any value vector v, we say that a policy π is greedy with respect to the value v if it satisfies π ∈ argmax_{π′} T_{π′} v, or equivalently T_π v = Tv. With some slight abuse of notation, we write G(v) for any policy that is greedy with respect to v. The notions of optimal value function and greedy policies are fundamental to optimal control because of the following property: any policy π* that is greedy with respect to the optimal value v* is an optimal policy, and its value v_{π*} is equal to v*.

Let π be some policy. We call the advantage with respect to π the following quantity: a_π = max_{π′} T_{π′} v_π − v_π = Tv_π − v_π. We call the set of switchable states of π the following set: S_π = {i : a_π(i) > 0}. Assume now that π is non-optimal (this implies that S_π is a non-empty set). For any non-empty subset Y of S_π, we denote by switch(π, Y) a policy satisfying:

∀i, switch(π, Y)(i) = G(v_π)(i) if i ∈ Y, and switch(π, Y)(i) = π(i) if i ∉ Y.

The following result is well known (see for instance [9]).

Lemma 1. Let π be some non-optimal policy. If π′ = switch(π, Y) for some non-empty subset Y of S_π, then v_{π′} ≥ v_π and there exists at least one state i such that v_{π′}(i) > v_π(i).

This lemma is the foundation of the well-known iterative procedure, called Policy Iteration (PI), which generates a sequence of policies (π_k) as follows:

π_{k+1} ← switch(π_k, Y_k) for some set Y_k such that ∅ ≠ Y_k ⊆ S_{π_k}.

The choice of the subsets Y_k leads to different variations of PI. In this paper we will focus on two specific variations:

² Restricting our attention to stationary deterministic policies is not a limitation. Indeed, for the optimality criterion defined above, it can be shown that there exists at least one stationary deterministic policy that is optimal [9].

• When for all iterations k, Y_k = S_{π_k}, that is, one switches the actions in all states with positive advantage with respect to π_k, the above algorithm is known as Howard's PI; it can then be seen that π_{k+1} ∈ G(v_{π_k}).
• When for all k, Y_k is a singleton containing a state i_k ∈ argmax_i a_{π_k}(i), that is, if we only switch one action, in the state with maximal advantage with respect to π_k, we will call it Simplex-PI³.

Since it generates a sequence of policies with increasing values, any variation of PI converges to the optimal policy in a number of iterations that is smaller than the total number of policies m^n. In practice, PI converges in very few iterations. On random MDP instances, convergence often occurs in time sub-linear in n. The aim of this paper is to discuss existing, and provide new, upper bounds on the number of iterations required by Howard's PI and Simplex-PI that are much sharper than m^n. In the next sections, we describe some known results—see [11] for a recent and comprehensive review—about the number of iterations required by Howard's PI and Simplex-PI, along with some of our original improvements and extensions.⁴

2 Bounds with respect to a Fixed Discount Factor γ < 1

A key observation for both algorithms, which will be central to the results we are about to discuss, is that the sequences they generate satisfy a contraction property⁵. For any vector u ∈ R^n, let ‖u‖_∞ = max_{1≤i≤n} |u(i)| be the max-norm of u. Let 1 be the vector all of whose components are equal to 1.

Lemma 2 (Proof in Section A). The sequence (‖v* − v_{π_k}‖_∞)_{k≥0} built by Howard's PI is contracting with coefficient γ.

Lemma 3 (Proof in Section B). The sequence (1^T (v* − v_{π_k}))_{k≥0} built by Simplex-PI is contracting with coefficient 1 − (1−γ)/n.

Though this observation is widely known for Howard's PI, it was to our knowledge never mentioned explicitly in the literature for Simplex-PI. These contraction properties have the following immediate consequence⁶.

Corollary 1. Let V_max = max_π ‖r_π‖_∞ / (1−γ) be an upper bound on ‖v_π‖_∞ for all policies π.
In order to get an ε-optimal policy, that is, a policy π_k satisfying ‖v* − v_{π_k}‖_∞ ≤ ε, Howard's PI requires at most ⌈ log(V_max/ε) / (1−γ) ⌉ iterations, while Simplex-PI requires at most ⌈ n log(nV_max/ε) / (1−γ) ⌉ iterations.

These bounds depend on the precision term ε, which means that Howard's PI and Simplex-PI are weakly polynomial for a fixed discount factor γ. An important breakthrough was recently achieved by [11], who proved that one can remove the dependency with respect to ε, and thus that Howard's PI and Simplex-PI are strongly polynomial for a fixed discount factor γ.

Theorem 1 ([11]). Simplex-PI and Howard's PI both terminate after at most n(m−1) ⌈ (n/(1−γ)) log(n²/(1−γ)) ⌉ iterations.

³ In this case, PI is equivalent to running the simplex algorithm with the highest-pivot rule on a linear program version of the MDP problem [11].
⁴ For clarity, all proofs are deferred to the Appendix. The first proofs, about bounds for the case γ < 1, are given in the Appendix of the paper. The other proofs, which are more involved, are provided in the Supplementary Material.
⁵ A sequence of non-negative numbers (x_k)_{k≥0} is contracting with coefficient α if and only if for all k ≥ 0, x_{k+1} ≤ α x_k.
⁶ For Howard's PI, we have ‖v* − v_{π_k}‖_∞ ≤ γ^k ‖v* − v_{π_0}‖_∞ ≤ γ^k V_max. Thus, a sufficient condition for ‖v* − v_{π_k}‖_∞ < ε is γ^k V_max < ε, which is implied by k ≥ log(V_max/ε)/(1−γ) > log(V_max/ε)/log(1/γ). For Simplex-PI, we have ‖v* − v_{π_k}‖_∞ ≤ ‖v* − v_{π_k}‖_1 ≤ (1 − (1−γ)/n)^k ‖v* − v_{π_0}‖_1 ≤ (1 − (1−γ)/n)^k n V_max, and the conclusion is similar to that for Howard's PI.

The proof is based on the fact that PI corresponds to the simplex algorithm in a linear programming formulation of the MDP problem. Using a more direct proof, [3] recently improved the result by a factor O(n) for Howard's PI.

Theorem 2 ([3]). Howard's PI terminates after at most (nm + 1) ⌈ (1/(1−γ)) log(n/(1−γ)) ⌉ iterations.

Our first two results, consequences of the contraction properties (Lemmas 2 and 3), are stated in the following theorems.
Theorem 3 (Proof in Section C). Howard's PI terminates after at most n(m−1) ⌈ (1/(1−γ)) log(1/(1−γ)) ⌉ iterations.

Theorem 4 (Proof in Section D). Simplex-PI terminates after at most n(m−1) ⌈ (n/(1−γ)) log(n/(1−γ)) ⌉ iterations.

Our result for Howard's PI is a factor O(log n) better than the previous best result of [3]. Our result for Simplex-PI is only very slightly better (by a factor 2) than that of [11], and uses a proof that is more direct. Using a more refined argument, we managed to also improve the bound for Simplex-PI by a factor O(log n).

Theorem 5 (Proof in Section E). Simplex-PI terminates after at most n²(m−1) (1 + (2/(1−γ)) log(1/(1−γ))) iterations.

Compared to Howard's PI, our bound for Simplex-PI is a factor O(n) larger. However, since one changes only one action per iteration, each iteration may have a complexity lower by a factor n: the update of the value can be done in time O(n²) through the Sherman-Morrison formula, while in general each iteration of Howard's PI, which amounts to computing the value of some policy that may be arbitrarily different from the previous policy, may require O(n³) time. Overall, both algorithms seem to have a similar complexity.

It is easy to see that the linear dependency of the bound for Howard's PI with respect to n is optimal. We conjecture that the linear dependency of both bounds with respect to m is also optimal. The dependency with respect to the term 1/(1−γ) may be improved, but removing it is impossible for Howard's PI and very unlikely for Simplex-PI. [2] describes an MDP for which Howard's PI requires an exponential (in n) number of iterations for γ = 1, and [5] argued that this also holds when γ is in the vicinity of 1. Though a similar result does not seem to exist for Simplex-PI in the literature, [7] consider four variations of PI that all switch one action per iteration, and show through specifically designed MDPs that they may require an exponential (in n) number of iterations when γ = 1.
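The two variations compared above can be sketched side by side. The minimal implementation below runs Howard's PI and Simplex-PI on a toy 3-state, 2-action MDP; the MDP instance, the helper names, and the use of fixed-point iteration for policy evaluation (rather than a linear solve) are our own illustrative choices.

```python
# Sketch of Howard's PI vs Simplex-PI on a made-up MDP. Howard's PI switches
# every state with positive advantage; Simplex-PI only the argmax state.
gamma = 0.9
n_states, n_actions = 3, 2
# P[i][a]: transition probabilities from state i under action a; R[i][a]: reward
P = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
     [[0.5, 0.5, 0.0], [0.0, 0.0, 1.0]],
     [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]]
R = [[0.0, 0.1], [0.0, 1.0], [2.0, 0.0]]

def q(i, a, v):
    return R[i][a] + gamma * sum(P[i][a][j] * v[j] for j in range(n_states))

def evaluate(pi):
    """Policy evaluation: iterate T_pi to (numerical) convergence."""
    v = [0.0] * n_states
    for _ in range(3000):
        v = [q(i, pi[i], v) for i in range(n_states)]
    return v

def policy_iteration(switch_all):
    pi, steps = [0] * n_states, 0
    while True:
        v = evaluate(pi)
        adv = [max(q(i, a, v) for a in range(n_actions)) - v[i]
               for i in range(n_states)]                 # advantage a_pi
        S = [i for i in range(n_states) if adv[i] > 1e-9]  # switchable states
        if not S:
            return pi, steps
        Y = S if switch_all else [max(S, key=lambda i: adv[i])]
        for i in Y:
            pi[i] = max(range(n_actions), key=lambda a: q(i, a, v))
        steps += 1

howard, k_h = policy_iteration(True)    # Howard's PI
simplex, k_s = policy_iteration(False)  # Simplex-PI
assert howard == simplex                # both reach the same optimal policy
print(howard, k_h, k_s)
```

On this instance Howard's PI needs a single switch round while Simplex-PI needs two, illustrating why its bound carries an extra factor of n even though each of its iterations is cheaper.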
3 Bounds for Simplex-PI that are Independent of γ

In this section, we describe some bounds that do not depend on γ but are based on structural assumptions on the MDP. On this topic, [8] recently showed the following result for deterministic MDPs.

Theorem 6 ([8]). If the MDP is deterministic, then Simplex-PI terminates after at most O(n⁵m² log² n) iterations.

Given a policy π of a deterministic MDP, states are either on cycles or on paths induced by π. The core of the proof relies on the following lemmas, which altogether show that cycles are created regularly, and that significant progress is made every time a new cycle appears; in other words, significant progress is made regularly.

Lemma 4. If the MDP is deterministic, after at most nm⌈2(n−1) log n⌉ iterations, either Simplex-PI finishes or a new cycle appears.

Lemma 5. If the MDP is deterministic, when Simplex-PI moves from π to π′ where π′ involves a new cycle, we have 1^T (v_{π*} − v_{π′}) ≤ (1 − 1/n) 1^T (v_{π*} − v_π).

Indeed, these observations suffice to prove⁷ that Simplex-PI terminates after O(n⁴m² log(n/(1−γ))) = Õ(n⁴m²) iterations. Removing completely the dependency with respect to the discount factor γ—the term in O(log(1/(1−γ)))—requires the careful extra work described in [8], which incurs an extra term of order O(n log n).

At a more technical level, the proof of [8] critically relies on some properties of the vector x_π = (I − γP_π^T)^{−1} 1, which provides a discounted measure of state visitations along the trajectories induced by a policy π started from a uniform distribution:

∀i ∈ X, x_π(i) = n Σ_{t=0}^∞ γ^t P(i_t = i | i_0 ∼ U, a_t = π(i_t)),

where U denotes the uniform distribution on the state space X. For any policy π and state i, we trivially have x_π(i) ∈ (1, n/(1−γ)). The proof exploits the fact that x_π(i) belongs to the set (1, n) when i is on a path of π, while x_π(i) belongs to the set (1/(1−γ), n/(1−γ)) when i is on a cycle of π.
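The path/cycle separation of x_π is easy to see numerically. The sketch below (our own toy example: a deterministic chain 0 → 1 → 2 with a self-loop at state 2) computes x_π = (I − γP_π^T)^{−1} 1 by fixed-point iteration and checks that path states stay below n while the cycle state exceeds 1/(1−γ):

```python
# Sketch: the discounted visitation vector x_pi = (I - gamma * P_pi^T)^(-1) 1
# for a small deterministic policy chain (illustrative instance):
# states 0 -> 1 -> 2, and 2 -> 2 (a self-loop cycle).
gamma = 0.9
n = 3
succ = [1, 2, 2]                      # deterministic successor under pi

def x_pi(succ, gamma, iters=3000):
    """Fixed point of x = 1 + gamma * P^T x, i.e. (I - gamma*P^T)^(-1) 1."""
    x = [0.0] * n
    for _ in range(iters):
        new = [1.0] * n
        for i, j in enumerate(succ):
            new[j] += gamma * x[i]    # (P^T x)[j] sums x over predecessors of j
        x = new
    return x

x = x_pi(succ, gamma)
# States 0, 1 are on a path: their visitation mass stays below n.
assert x[0] < n and x[1] < n
# State 2 is on a cycle: its mass is at least 1/(1 - gamma).
assert 1.0 / (1 - gamma) <= x[2] < n / (1 - gamma)
print([round(v, 3) for v in x])
```

Here x(0) = 1 (visited only as a start state), x(1) = 1 + γ, and x(2) = (1 + γx(1))/(1−γ), matching the (path, cycle) ranges used in the proof.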
As we are going to show, it is possible to extend the proof of [8] to stochastic MDPs. Given a policy π of a stochastic MDP, states are either in recurrent classes or in transient classes (these two categories respectively generalize those of cycles and paths). We will consider the following structural assumption.

Assumption 1. Let τ_t ≥ 1 and τ_r ≥ 1 be the smallest constants such that for all policies π and all states i,

(1 ≤) x_π(i) ≤ τ_t if i is transient for π, and
n/((1−γ)τ_r) ≤ x_π(i) (≤ n/(1−γ)) if i is recurrent for π.

The constant τ_t (resp. τ_r) can be seen as a measure of the time needed to leave transient states (resp. the time needed to revisit states in recurrent classes). In particular, when γ tends to 1, it can be seen that τ_t is an upper bound on the expected time L needed to "leave the set of transient states", since for any policy π,

lim_{γ→1} τ_t ≥ (1/n) lim_{γ→1} Σ_{i transient for π} x_π(i) = Σ_{t=0}^∞ P(i_t transient for π | i_0 ∼ U, a_t = π(i_t)) = E[L | i_0 ∼ U, a_t = π(i_t)].

Similarly, when γ is in the vicinity of 1, 1/τ_r is the minimal asymptotic frequency⁸ of visits to recurrent states given that one starts from a random uniform state, since for any policy π and recurrent state i:

lim_{γ→1} ((1−γ)/n) x_π(i) = lim_{γ→1} (1−γ) Σ_{t=0}^∞ γ^t P(i_t = i | i_0 ∼ U, a_t = π(i_t)) = lim_{T→∞} (1/T) Σ_{t=0}^{T−1} P(i_t = i | i_0 ∼ U, a_t = π(i_t)).

With Assumption 1 in hand, we can generalize Lemmas 4 and 5 as follows.

Lemma 6. If the MDP satisfies Assumption 1, after at most n[(m−1)⌈nτ_t log(nτ_t)⌉ + ⌈nτ_t log(n²τ_t)⌉] iterations, either Simplex-PI finishes or a new recurrent class appears.

⁷ This can be done by using arguments similar to those in the proof of Theorem 4 in Section D.
⁸ If the MDP is aperiodic and irreducible, and thus admits a stationary distribution ν_π for any policy π, one can see that 1/τ_r = min_{π, i recurrent for π} ν_π(i).

Lemma 7.
If the MDP satisfies Assumption 1, when Simplex-PI moves from π to π′ where π′ involves a new recurrent class, we have 1^T (v_{π*} − v_{π′}) ≤ (1 − 1/τ_r) 1^T (v_{π*} − v_π).

From these generalized observations, we can deduce the following original result.

Theorem 7 (Proof in Appendix F of the Supp. Material). If the MDP satisfies Assumption 1, then Simplex-PI terminates after at most n²(m−1) (⌈τ_r log(nτ_r)⌉ + ⌈τ_r log(nτ_t)⌉) [(m−1)⌈nτ_t log(nτ_t)⌉ + ⌈nτ_t log(n²τ_t)⌉] iterations.

Remark 1. This new result is a strict generalization of the result for deterministic MDPs. Indeed, in the deterministic case we have τ_t ≤ n and τ_r ≤ n, and it is easy to see that Lemmas 6, 7 and Theorem 7 respectively imply Lemmas 4, 5 and Theorem 6.

An immediate consequence of the above result is that Simplex-PI is strongly polynomial for sets of MDPs that are much larger than the deterministic MDPs mentioned in Theorem 6.

Corollary 2. For any family of MDPs indexed by n and m such that τ_t and τ_r are polynomial functions of n and m, Simplex-PI terminates after a number of steps that is polynomial in n and m.

4 Similar Results for Howard's PI?

One may then wonder whether similar results can be derived for Howard's PI. Unfortunately, and as quickly mentioned by [8], the line of analysis developed for Simplex-PI does not seem to adapt easily to Howard's PI, because simultaneously switching several actions can interfere in such a way that the policy improvement turns out to be small. We can be more precise about what actually breaks in the approach we have described so far. On the one hand, it is possible to write counterparts of Lemmas 4 and 6 for Howard's PI (see Appendix G of the Supp. Material).

Lemma 8. If the MDP is deterministic, after at most n iterations, either Howard's PI finishes or a new cycle appears.

Lemma 9. If the MDP satisfies Assumption 1, after at most nm⌈τ_t log nτ_t⌉ iterations, either Howard's PI finishes or a new recurrent class appears.
However, on the other hand, we did not manage to adapt Lemma 5 or Lemma 7. In fact, it is unlikely that a result similar to that of Lemma 5 will be shown to hold for Howard's PI. In a recent deterministic example due to [4], built to show that Howard's PI may require O(n²) iterations, new cycles are created every single iteration, but the sequence of values satisfies⁹, for all iterations k < n²/4 + n/4 and states i,

v*(i) − v_{π_{k+1}}(i) ≥ [1 − (2/n)^k] (v*(i) − v_{π_k}(i)).

Contrary to Lemma 5, as k grows, the amount of contraction gets (exponentially) smaller and smaller. With respect to Simplex-PI, this suggests that Howard's PI may suffer from subtle specific pathologies. In fact, the problem of determining the number of iterations required by Howard's PI has been challenging for almost 30 years. It was originally identified as an open problem by [10]. In the simplest—deterministic—case, the question is still open: the currently best known lower bound is the O(n²) bound by [4] we have just mentioned, while the best known upper bound is O(m^n / n) (valid for all MDPs) due to [6].

⁹ This MDP has an even number of states n = 2p. The goal is to minimize the long-term expected cost. The optimal value function satisfies v*(i) = −p^N for all i, with N = p² + p. The policies generated by Howard's PI have values v_{π_k}(i) ∈ (p^{N−k−1}, p^{N−k}). We deduce that for all iterations k and states i, (v*(i) − v_{π_{k+1}}(i)) / (v*(i) − v_{π_k}(i)) ≥ (1 + p^{−k−2})/(1 + p^{−k}) = 1 − (p^{−k} − p^{−k−2})/(1 + p^{−k}) ≥ 1 − p^{−k}(1 − p^{−2}) ≥ 1 − p^{−k}.

On the positive side, an adaptation of the line of proof we have considered so far can be carried out under the following assumption.

Assumption 2. The state space X can be partitioned into two sets T and R such that for all policies π, the states of T are transient and those of R are recurrent.

Indeed, under this assumption, we can prove for Howard's PI a variation of Lemma 7 introduced for Simplex-PI.

Lemma 10.
For an MDP satisfying Assumptions 1 and 2, suppose Howard's PI moves from π to π′ and that π′ involves a new recurrent class. Then 1^T (v_{π*} − v_{π′}) ≤ (1 − 1/τ_r) 1^T (v_{π*} − v_π).

And we can deduce the following original bound (which also applies to Simplex-PI).

Theorem 8 (Proof in Appendix H of the Supp. Material). If the MDP satisfies Assumptions 1 and 2, then Howard's PI terminates after at most n(m−1) (⌈τ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉) iterations, while Simplex-PI terminates after at most n(m−1) (⌈nτ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉) iterations.

It should however be noted that Assumption 2 is rather restrictive. It implies that the algorithms converge on the recurrent states independently of the transient states, and thus that the analysis can be decomposed in two phases: 1) the convergence on recurrent states, and then 2) the convergence on transient states (given that the recurrent states do not change anymore). The analysis of the first phase (convergence on recurrent states) is greatly facilitated by the fact that in this case a new recurrent class appears every single iteration (this is in contrast with Lemmas 4, 6, 8 and 9, which were designed to show under which conditions cycles and recurrent classes are created). Furthermore, the analysis of the second phase (convergence on transient states) is similar to that of the discounted case of Theorems 3 and 4. In other words, even if this last result sheds some light on the practical efficiency of Howard's PI and Simplex-PI, a general analysis of Howard's PI is still largely open, and constitutes our main future work.

A Contraction Property for Howard's PI (Proof of Lemma 2)

For any k, using the facts that {∀π, T_π v_π = v_π}, {T_{π*} v_{π_{k−1}} ≤ T_{π_k} v_{π_{k−1}}} and {Lemma 1, and P_{π_k} is positive definite}, we have

v_{π*} − v_{π_k} = T_{π*} v_{π*} − T_{π*} v_{π_{k−1}} + T_{π*} v_{π_{k−1}} − T_{π_k} v_{π_{k−1}} + T_{π_k} v_{π_{k−1}} − T_{π_k} v_{π_k}
≤ γP_{π*}(v_{π*} − v_{π_{k−1}}) + γP_{π_k}(v_{π_{k−1}} − v_{π_k})
≤ γP_{π*}(v_{π*} − v_{π_{k−1}}).
Since $v_{\pi_*} - v_{\pi_k}$ is nonnegative, we can take the max norm and get:
$$\| v_{\pi_*} - v_{\pi_k} \|_\infty \le \gamma \| v_{\pi_*} - v_{\pi_{k-1}} \|_\infty.$$

B Contraction property for Simplex-PI (Proof of Lemma 3)

By using the fact that $\{ v_\pi = T_\pi v_\pi \Rightarrow v_\pi = (I - \gamma P_\pi)^{-1} r_\pi \}$, we have, for all pairs of policies $\pi$ and $\pi'$,
$$v_{\pi'} - v_\pi = (I - \gamma P_{\pi'})^{-1} r_{\pi'} - v_\pi = (I - \gamma P_{\pi'})^{-1} (r_{\pi'} + \gamma P_{\pi'} v_\pi - v_\pi) = (I - \gamma P_{\pi'})^{-1} (T_{\pi'} v_\pi - v_\pi). \quad (1)$$
On the one hand, by using this lemma and the fact that $\{ T_{\pi_{k+1}} v_{\pi_k} - v_{\pi_k} \ge 0 \}$, we have for any $k$:
$$v_{\pi_{k+1}} - v_{\pi_k} = (I - \gamma P_{\pi_{k+1}})^{-1} (T_{\pi_{k+1}} v_{\pi_k} - v_{\pi_k}) \ge T_{\pi_{k+1}} v_{\pi_k} - v_{\pi_k},$$
which implies that
$$\mathbf{1}^T (v_{\pi_{k+1}} - v_{\pi_k}) \ge \mathbf{1}^T (T_{\pi_{k+1}} v_{\pi_k} - v_{\pi_k}). \quad (2)$$
On the other hand, using Equation (1) and the facts that $\{ \| (I - \gamma P_{\pi_*})^{-1} \|_\infty = \frac{1}{1-\gamma}$ and $(I - \gamma P_{\pi_*})^{-1}$ is nonnegative$\}$, $\{ \max_s T_{\pi_{k+1}} v_{\pi_k}(s) = \max_{s, \tilde\pi} T_{\tilde\pi} v_{\pi_k}(s) \}$ and $\{ \forall x \ge 0,\ \max_s x(s) \le \mathbf{1}^T x \}$, we have:
$$v_{\pi_*} - v_{\pi_k} = (I - \gamma P_{\pi_*})^{-1} (T_{\pi_*} v_{\pi_k} - v_{\pi_k}) \le \frac{1}{1-\gamma} \max_s \left[ T_{\pi_*} v_{\pi_k}(s) - v_{\pi_k}(s) \right] \le \frac{1}{1-\gamma} \max_s \left[ T_{\pi_{k+1}} v_{\pi_k}(s) - v_{\pi_k}(s) \right] \le \frac{1}{1-\gamma} \mathbf{1}^T (T_{\pi_{k+1}} v_{\pi_k} - v_{\pi_k}),$$
which implies (using $\{ \forall x,\ \mathbf{1}^T x \le n \| x \|_\infty \}$) that
$$\mathbf{1}^T (T_{\pi_{k+1}} v_{\pi_k} - v_{\pi_k}) \ge (1-\gamma) \| v_{\pi_*} - v_{\pi_k} \|_\infty \ge \frac{1-\gamma}{n} \mathbf{1}^T (v_{\pi_*} - v_{\pi_k}). \quad (3)$$
Combining Equations (2) and (3), we get:
$$\mathbf{1}^T (v_{\pi_*} - v_{\pi_{k+1}}) = \mathbf{1}^T (v_{\pi_*} - v_{\pi_k}) - \mathbf{1}^T (v_{\pi_{k+1}} - v_{\pi_k}) \le \mathbf{1}^T (v_{\pi_*} - v_{\pi_k}) - \frac{1-\gamma}{n} \mathbf{1}^T (v_{\pi_*} - v_{\pi_k}) = \left( 1 - \frac{1-\gamma}{n} \right) \mathbf{1}^T (v_{\pi_*} - v_{\pi_k}).$$

C A bound for Howard's PI when $\gamma < 1$ (Proof of Theorem 3)

For any $k$, by using Equation (1) and the fact that $\{ v_* - v_{\pi_k} \ge 0$ and $P_{\pi_k}$ is nonnegative$\}$, we have:
$$v_* - T_{\pi_k} v_* = (I - \gamma P_{\pi_k}) (v_* - v_{\pi_k}) \le v_* - v_{\pi_k}.$$
Since $v_* - T_{\pi_k} v_*$ is nonnegative, we can take the max norm and, using Lemma 2, Equation (1) and the fact that $\{ \| (I - \gamma P_{\pi_0})^{-1} \|_\infty = \frac{1}{1-\gamma} \}$, we get:
$$\| v_* - T_{\pi_k} v_* \|_\infty \le \| v_* - v_{\pi_k} \|_\infty \le \gamma^k \| v_{\pi_*} - v_{\pi_0} \|_\infty = \gamma^k \| (I - \gamma P_{\pi_0})^{-1} (v_* - T_{\pi_0} v_*) \|_\infty \le \frac{\gamma^k}{1-\gamma} \| v_* - T_{\pi_0} v_* \|_\infty. \quad (4)$$
By definition of the max norm, there exists a state $s_0$ such that $v_*(s_0) - [T_{\pi_0} v_*](s_0) = \| v_* - T_{\pi_0} v_* \|_\infty$. From Equation (4), we deduce that for all $k$,
$$v_*(s_0) - [T_{\pi_k} v_*](s_0) \le \| v_* - T_{\pi_k} v_* \|_\infty \le \frac{\gamma^k}{1-\gamma} \| v_* - T_{\pi_0} v_* \|_\infty = \frac{\gamma^k}{1-\gamma} \left( v_*(s_0) - [T_{\pi_0} v_*](s_0) \right).$$
As a consequence, the action $\pi_k(s_0)$ must be different from $\pi_0(s_0)$ when $\frac{\gamma^k}{1-\gamma} < 1$, that is, for all values of $k$ satisfying
$$k \ge k_* = \left\lceil \frac{\log \frac{1}{1-\gamma}}{1-\gamma} \right\rceil \ge \left\lceil \frac{\log \frac{1}{1-\gamma}}{\log \frac{1}{\gamma}} \right\rceil.$$
In other words, if some policy $\pi$ is not optimal, then one of its non-optimal actions will be eliminated for good after at most $k_*$ iterations. By repeating this argument, one can eliminate all non-optimal actions (there are at most $n(m-1)$ of them), and the result follows.

D A bound for Simplex-PI when $\gamma < 1$ (Proof of Theorem 4)

Using $\{ \forall x \ge 0,\ \| x \|_\infty \le \mathbf{1}^T x \}$, Lemma 3, $\{ \forall x,\ \mathbf{1}^T x \le n \| x \|_\infty \}$, Equation (1) and $\{ \| (I - \gamma P_{\pi_0})^{-1} \|_\infty = \frac{1}{1-\gamma} \}$, we have for all $k$:
$$\| v_{\pi_*} - T_{\pi_k} v_{\pi_*} \|_\infty \le \| v_{\pi_*} - v_{\pi_k} \|_\infty \le \mathbf{1}^T (v_{\pi_*} - v_{\pi_k}) \le \left( 1 - \frac{1-\gamma}{n} \right)^k \mathbf{1}^T (v_{\pi_*} - v_{\pi_0}) \le n \left( 1 - \frac{1-\gamma}{n} \right)^k \| v_{\pi_*} - v_{\pi_0} \|_\infty = n \left( 1 - \frac{1-\gamma}{n} \right)^k \| (I - \gamma P_{\pi_0})^{-1} (v_* - T_{\pi_0} v_*) \|_\infty \le \frac{n}{1-\gamma} \left( 1 - \frac{1-\gamma}{n} \right)^k \| v_{\pi_*} - T_{\pi_0} v_{\pi_*} \|_\infty.$$
Similarly to the proof for Howard's PI, we deduce that a non-optimal action is eliminated after at most
$$k_* = \left\lceil \frac{n}{1-\gamma} \log \frac{n}{1-\gamma} \right\rceil \ge \left\lceil \frac{\log \frac{n}{1-\gamma}}{-\log \left( 1 - \frac{1-\gamma}{n} \right)} \right\rceil$$
iterations, and the overall number of iterations is obtained by noting that there are at most $n(m-1)$ non-optimal actions to eliminate.

References

[1] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[2] J. Fearnley. Exponential lower bounds for policy iteration. In Proceedings of the 37th International Colloquium on Automata, Languages and Programming: Part II, ICALP'10, pages 551–562, Berlin, Heidelberg, 2010. Springer-Verlag.
[3] T.D. Hansen, P.B. Miltersen, and U. Zwick. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor. J. ACM, 60(1):1:1–1:16, February 2013.
[4] T.D. Hansen and U. Zwick. Lower bounds for Howard's algorithm for finding minimum mean-cost cycles. In ISAAC (1), pages 415–426, 2010.
[5] R. Hollanders, J.C. Delvenne, and R. Jungers. The complexity of policy iteration is exponential for discounted Markov decision processes.
In 51st IEEE Conference on Decision and Control (CDC'12), 2012.
[6] Y. Mansour and S.P. Singh. On the complexity of policy iteration. In UAI, pages 401–408, 1999.
[7] M. Melekopoglou and A. Condon. On the complexity of the policy improvement algorithm for Markov decision processes. INFORMS Journal on Computing, 6(2):188–192, 1994.
[8] I. Post and Y. Ye. The simplex method is strongly polynomial for deterministic Markov decision processes. Technical report, arXiv:1208.5083v2, 2012.
[9] M. Puterman. Markov Decision Processes. Wiley, New York, 1994.
[10] N. Schmitz. How good is Howard's policy improvement algorithm? Zeitschrift für Operations Research, 29(7):315–316, 1985.
[11] Y. Ye. The simplex and policy-iteration methods are strongly polynomial for the Markov decision problem with a fixed discount rate. Math. Oper. Res., 36(4):593–603, 2011.
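The γ-contraction of Lemma 2 is easy to check numerically. The sketch below runs Howard's PI on a small random MDP and verifies that the error $\|v_* - v_{\pi_k}\|_\infty$ shrinks by at least a factor γ per iteration; the sizes, seed and discount are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, gamma = 8, 4, 0.9                      # states, actions, discount (arbitrary)
P = rng.dirichlet(np.ones(n), size=(n, m))   # P[s, a] is a distribution over next states
r = rng.random((n, m))                       # rewards r(s, a)

def value(pi):
    """Solve v = r_pi + gamma * P_pi v for a deterministic policy pi."""
    Ppi = P[np.arange(n), pi]
    rpi = r[np.arange(n), pi]
    return np.linalg.solve(np.eye(n) - gamma * Ppi, rpi)

def howard_pi(pi, iters=100):
    """Howard's PI: switch to the greedy policy at every iteration."""
    trace = []
    for _ in range(iters):
        v = value(pi)
        trace.append(v)
        q = r + gamma * np.einsum('sat,t->sa', P, v)   # q(s, a)
        new = q.argmax(axis=1)
        if np.array_equal(new, pi):                    # greedy(v_pi) = pi => pi optimal
            break
        pi = new
    return pi, trace

pi_star, trace = howard_pi(np.zeros(n, dtype=int))
v_star = trace[-1]
# Lemma 2: ||v* - v_{pi_k}||_inf <= gamma * ||v* - v_{pi_{k-1}}||_inf
errs = [np.max(np.abs(v_star - v)) for v in trace]
```

The same harness can be pointed at Simplex-PI by switching only a single best improving action per iteration instead of taking the full argmax.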
2013
Approximate Inference in Continuous Determinantal Point Processes

Raja Hafiz Affandi¹, Emily B. Fox², and Ben Taskar²
¹University of Pennsylvania, rajara@wharton.upenn.edu
²University of Washington, {ebfox@stat,taskar@cs}.washington.edu

Abstract

Determinantal point processes (DPPs) are random point processes well-suited for modeling repulsion. In machine learning, the focus of DPP-based models has been on diverse subset selection from a discrete and finite base set. This discrete setting admits an efficient sampling algorithm based on the eigendecomposition of the defining kernel matrix. Recently, there has been growing interest in using DPPs defined on continuous spaces. While the discrete-DPP sampler extends formally to the continuous case, computationally, the steps required are not tractable in general. In this paper, we present two efficient DPP sampling schemes that apply to a wide range of kernel functions: one based on low-rank approximations via Nyström and random Fourier feature techniques, and another based on Gibbs sampling. We demonstrate the utility of continuous DPPs in repulsive mixture modeling and synthesizing human poses spanning activity spaces.

1 Introduction

Samples from a determinantal point process (DPP) [15] are sets of points that tend to be spread out. More specifically, given $\Omega \subseteq \mathbb{R}^d$ and a positive semidefinite kernel function $L : \Omega \times \Omega \to \mathbb{R}$, the probability density of a point configuration $A \subset \Omega$ under a DPP with kernel $L$ is given by
$$P_L(A) \propto \det(L_A), \quad (1)$$
where $L_A$ is the $|A| \times |A|$ matrix with entries $L(x, y)$ for each $x, y \in A$. The tendency for repulsion is captured by the determinant, since it depends on the volume spanned by the selected points in the associated Hilbert space of $L$. Intuitively, points similar according to $L$, or points that are nearly linearly dependent, are less likely to be selected.
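The repulsion encoded by the determinant in Eq. (1) can be seen directly on a toy configuration; a minimal sketch, with a Gaussian kernel whose bandwidth is an arbitrary choice:

```python
import numpy as np

def L(x, y, sigma=1.0):
    """Gaussian similarity kernel on R^1."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def dpp_weight(points):
    """Unnormalized DPP density det(L_A) of a point configuration A, cf. Eq. (1)."""
    A = np.asarray(points, dtype=float)
    LA = L(A[:, None], A[None, :])
    return np.linalg.det(LA)

clustered = [0.0, 0.1, 0.2]   # nearly linearly dependent rows -> tiny determinant
spread = [0.0, 2.0, 4.0]      # well-separated points -> determinant close to 1
assert dpp_weight(clustered) < dpp_weight(spread)
```

Since the kernel has unit diagonal, the determinant lies in (0, 1] and shrinks toward 0 as points coalesce, which is exactly the repulsive bias described above.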
Building on the foundational work in [5] for the case where Ω is discrete and finite, DPPs have been used in machine learning as a model for subset selection in which diverse sets are preferred [2, 3, 9, 12, 13]. These methods build on the tractability of sampling based on the algorithm of Hough et al. [10], which relies on the eigendecomposition of the kernel matrix to recursively sample points based on their projections onto the subspace spanned by the selected eigenvectors.

Repulsive point processes, like hard-core processes [7, 16], many based on thinned Poisson processes and Gibbs/Markov distributions, have a long history in the spatial statistics community, where considering continuous Ω is key. Many naturally occurring phenomena exhibit diversity: trees tend to grow in the least occupied space [17], ant hill locations are over-dispersed relative to uniform placement [4], and the spatial distribution of nerve fibers is indicative of neuropathy, with hard-core processes providing a critical tool [25]. Repulsive processes on continuous spaces have garnered interest in machine learning as well, especially relating to generative mixture modeling [18, 29]. The computationally attractive properties of DPPs make them appealing to consider in these applications.

On the surface, it seems that the eigendecomposition and projection algorithm of [10] for discrete DPPs would naturally extend to the continuous case. While this is true in a formal sense, as L becomes an operator instead of a matrix, the key steps, such as the eigendecomposition of the kernel and the projection of points on subspaces spanned by eigenfunctions, are computationally infeasible except in a few very limited cases where approximations can be made [14]. The absence of a tractable DPP sampling algorithm for general kernels in continuous spaces has hindered progress in developing DPP-based models for repulsion.
In this paper, we propose an efficient algorithm to sample from DPPs in continuous spaces using low-rank approximations of the kernel function. We investigate two such schemes: Nyström and random Fourier features. Our approach utilizes a dual representation of the DPP, a technique that has proven useful in the discrete Ω setting as well [11]. For k-DPPs, which only place positive probability on sets of cardinality k [13], we also devise a Gibbs sampler that iteratively samples points in the k-set conditioned on all k − 1 other points. The derivation relies on representing the conditional DPPs using the Schur complement of the kernel. Our methods allow us to handle a broad range of typical kernels and continuous subspaces, provided certain simple integrals of the kernel function can be computed efficiently. Decomposing our kernel into quality and similarity terms as in [13], this includes, but is not limited to, all cases where (i) the spectral density of the quality and (ii) the characteristic function of the similarity kernel can be computed efficiently. Our methods scale well with dimension; in particular, complexity grows linearly in d.

In Sec. 2, we review sampling algorithms for discrete DPPs and the challenges associated with sampling from continuous DPPs. We then propose continuous DPP sampling algorithms based on low-rank kernel approximations in Sec. 3 and Gibbs sampling in Sec. 4. An empirical analysis of the two schemes is provided in Sec. 5. Finally, we apply our methods to repulsive mixture modeling and human pose synthesis in Sec. 6 and 7.

2 Sampling from a DPP

When Ω is discrete with cardinality N, an efficient algorithm for sampling from a DPP is given in [10]. The algorithm, which is detailed in the supplement, uses an eigendecomposition of the kernel matrix $L = \sum_{n=1}^{N} \lambda_n v_n v_n^\top$ and recursively samples points $x_i$ as follows, resulting in a set $A \sim \mathrm{DPP}(L)$ with $A = \{x_i\}$:

Phase 1. Select eigenvector $v_n$ with probability $\frac{\lambda_n}{\lambda_n + 1}$.
Let $V$ be the selected eigenvectors ($k = |V|$).

Phase 2. For $i = 1, \ldots, k$, sample points $x_i \in \Omega$ sequentially with probability based on the projection of $x_i$ onto the subspace spanned by $V$. Once $x_i$ is sampled, update $V$ by excluding the subspace spanned by the projection of $x_i$ onto $V$.

When Ω is discrete, both steps are straightforward, since the first phase involves eigendecomposing a kernel matrix and the second phase involves sampling from discrete probability distributions based on inner products between points and eigenvectors. Extending this algorithm to a continuous space was considered by [14], but for a very limited set of kernels L and spaces Ω. For general L and Ω, we face difficulties in both phases. Extending Phase 1 to a continuous space requires knowledge of the eigendecomposition of the kernel function. When Ω is a compact rectangle in $\mathbb{R}^d$, [14] suggest approximating the eigendecomposition using an orthonormal Fourier basis. Even if we are able to obtain the eigendecomposition of the kernel function (either directly or via approximations as considered in [14] and Sec. 3), we still need to implement Phase 2 of the sampling algorithm. Whereas the discrete case only requires sampling from a discrete probability function, here we have to sample from a probability density. When Ω is compact, [14] suggest using a rejection sampler with a uniform proposal on Ω. The authors note that the acceptance rate of this rejection sampler decreases with the number of points sampled, making the method inefficient at sampling large sets from a DPP. In most other cases, implementing Phase 2 even via rejection sampling is infeasible, since the target density is in general non-standard with unknown normalization. Furthermore, a generic proposal distribution can yield extremely low acceptance rates. In summary, current algorithms can sample approximately from a continuous DPP only for translation-invariant kernels defined on a compact space. In Sec.
3, we propose a sampling algorithm that allows us to sample approximately from DPPs for a wide range of kernels L and spaces Ω.

3 Sampling from a low-rank continuous DPP

Again considering Ω discrete with cardinality N, the sampling algorithm of Sec. 2 has complexity dominated by the eigendecomposition, $O(N^3)$. If the kernel matrix L is low-rank, i.e. $L = B^\top B$, with $B$ a $D \times N$ matrix and $D \ll N$, [11] showed that the complexity of sampling can be reduced to $O(ND^2 + D^3)$. The basic idea is to exploit the fact that L and the dual kernel matrix $C = BB^\top$, which is $D \times D$, share the same nonzero eigenvalues, and that for each eigenvector $v_k$ of L, $Bv_k$ is the corresponding eigenvector of C. See the supplement for algorithmic details.

While the dependence on N in the dual is sharply reduced, in continuous spaces N is infinite. In order to extend the algorithm, we must find efficient ways to compute C for Phase 1 and manipulate eigenfunctions implicitly for the projections in Phase 2. Generically, consider sampling from a DPP on a continuous space Ω with kernel
$$L(x, y) = \sum_{n=1}^{\infty} \lambda_n \phi_n(x) \overline{\phi_n(y)},$$
where $\lambda_n$ and $\phi_n(x)$ are eigenvalues and eigenfunctions, and $\overline{\phi_n(y)}$ is the complex conjugate of $\phi_n(y)$. Assume that we can approximate L by a low-dimensional (generally complex-valued) mapping $B(x) : \Omega \to \mathbb{C}^D$:
$$\tilde L(x, y) = B(x)^* B(y), \quad \text{where } B(x) = [B_1(x), \ldots, B_D(x)]^\top. \quad (2)$$
Here, $A^*$ denotes the complex conjugate transpose of $A$. We consider two efficient low-rank approximation schemes in Sec. 3.1 and 3.2. Using such a low-rank representation, we propose an analog of the dual sampling algorithm for continuous spaces, described in Algorithm 1. A similar algorithm provides samples from a k-DPP, which only gives positive probability to sets of a fixed cardinality k [13]. The only change required is to the for-loop in Phase 1, to select exactly k eigenvectors using an efficient O(Dk) recursion. See the supplement for details.
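The identity underlying the dual representation ($L = B^\top B$ and $C = BB^\top$ share their nonzero eigenvalues, and $B$ maps eigenvectors of $L$ to eigenvectors of $C$) can be checked directly in the discrete case; a sketch with arbitrarily chosen sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 3, 50                      # low rank D << N
B = rng.standard_normal((D, N))
L = B.T @ B                       # N x N primal kernel matrix
C = B @ B.T                       # D x D dual kernel matrix

# the D nonzero eigenvalues of L coincide with the spectrum of C
eig_L = np.sort(np.linalg.eigvalsh(L))[-D:]
eig_C = np.sort(np.linalg.eigvalsh(C))
assert np.allclose(eig_L, eig_C)

# for an eigenvector v of L with nonzero eigenvalue, B v is an eigenvector of C
lam, V = np.linalg.eigh(L)
v = V[:, -1]                      # top eigenvector of L
w = B @ v
assert np.allclose(C @ w, lam[-1] * w)
```

The second check is just $C(Bv) = BB^\top Bv = B(Lv) = \lambda Bv$, which is what lets Phase 1 run entirely in the D-dimensional dual space.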
Algorithm 1: Dual sampler for a low-rank continuous DPP

Input: $\tilde L(x, y) = B(x)^* B(y)$, a rank-D DPP kernel

Phase 1:
  Compute $C = \int_\Omega B(x) B(x)^*\, dx$
  Compute the eigendecomposition $C = \sum_{k=1}^{D} \lambda_k v_k v_k^*$
  $J \leftarrow \emptyset$
  for $k = 1, \ldots, D$ do: $J \leftarrow J \cup \{k\}$ with probability $\frac{\lambda_k}{\lambda_k + 1}$
  $V \leftarrow \left\{ \frac{v_k}{\sqrt{v_k^* C v_k}} \right\}_{k \in J}$

Phase 2:
  $X \leftarrow \emptyset$
  while $|V| > 0$ do:
    Sample $\hat x$ from $f(x) = \frac{1}{|V|} \sum_{v \in V} |v^* B(x)|^2$
    $X \leftarrow X \cup \{\hat x\}$
    Let $v_0$ be a vector in $V$ such that $v_0^* B(\hat x) \ne 0$
    Update $V \leftarrow \left\{ v - \frac{v^* B(\hat x)}{v_0^* B(\hat x)} v_0 \;\middle|\; v \in V - \{v_0\} \right\}$
    Orthonormalize $V$ w.r.t. $\langle v_1, v_2 \rangle = v_1^* C v_2$

Output: X

In this dual view, we still have the same two-phase structure, and must address two key challenges:

Phase 1. Assuming a low-rank kernel function decomposition as in Eq. (2), we need to be able to compute the dual kernel matrix, given by an integral:
$$C = \int_\Omega B(x) B(x)^*\, dx. \quad (3)$$

Phase 2. In general, sampling directly from the density f(x) is difficult; instead, we can compute the cumulative distribution function (CDF) and sample x using the inverse CDF method [21]:
$$F(\hat x = (\hat x_1, \ldots, \hat x_d)) = \prod_{l=1}^{d} \int_{-\infty}^{\hat x_l} f(x)\, \mathbf{1}\{x_l \in \Omega\}\, dx_l. \quad (4)$$

Assuming (i) the kernel function $\tilde L$ is finite-rank and (ii) the terms C and f(x) are computable, Algorithm 1 provides exact samples from a DPP with kernel $\tilde L$. In what follows, approximations only arise from approximating general kernels L with low-rank kernels $\tilde L$. If given a finite-rank kernel L to begin with, the sampling procedure is exact. One could imagine approximating L as in Eq. (2) by simply truncating the eigendecomposition (either directly or using numerical approximations). However, this simple approximation for known decompositions does not necessarily yield a tractable sampler, because the products of eigenfunctions required in Eq. (3) might not be efficiently integrable. For our approximation algorithm to work, we need methods that not only approximate the kernel function well, but also enable us to solve Eq. (3) and (4) directly for many different kernel functions.
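The inverse-CDF step of Phase 2 (Eq. (4)) can be sketched generically in one dimension by numerical integration; the grid resolution and the toy density below are arbitrary choices for illustration, not the paper's $f(x)$:

```python
import numpy as np

def sample_inverse_cdf(f, lo, hi, rng, grid=10_000):
    """Sample from an unnormalized density f on [lo, hi] via the inverse CDF method."""
    x = np.linspace(lo, hi, grid)
    cdf = np.cumsum(f(x))          # crude Riemann-sum CDF on the grid
    cdf /= cdf[-1]                 # normalize so cdf[-1] = 1
    u = rng.random()               # u ~ Uniform[0, 1)
    return x[np.searchsorted(cdf, u)]

rng = np.random.default_rng(2)
# toy multimodal density standing in for |v* B(x)|^2
f = lambda x: np.exp(-x ** 2) * np.cos(3 * x) ** 2
xs = np.array([sample_inverse_cdf(f, -4, 4, rng) for _ in range(2000)])
```

In Sec. 3 the point is that, for the kernels considered, the CDF integrals are available analytically, so no grid is needed; the numerical version above is only a generic fallback.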
We consider two such approaches that enable an efficient sampler for a wide range of kernels: Nyström and random Fourier features.

3.1 Sampling from an RFF-approximated DPP

Random Fourier features (RFF) [19] is an approach for approximating shift-invariant kernels, $k(x, y) = k(x - y)$, using randomly selected frequencies. The frequencies are sampled independently from the Fourier transform of the kernel function, $\omega_j \sim \mathcal{F}(k(x - y))$, letting
$$\tilde k(x - y) = \frac{1}{D} \sum_{j=1}^{D} \exp\{ i \omega_j^\top (x - y) \}, \quad x, y \in \Omega. \quad (5)$$
To apply RFFs, we factor L into a quality function q and similarity kernel k (i.e., $q(x) = \sqrt{L(x, x)}$):
$$L(x, y) = q(x)\, k(x, y)\, q(y), \quad x, y \in \Omega, \text{ where } k(x, x) = 1. \quad (6)$$
The RFF approximation can be applied to cases where the similarity function has a known characteristic function, e.g., Gaussian, Laplacian and Cauchy. Using Eq. (5), we can approximate the similarity kernel function to obtain a low-rank kernel and dual matrix:
$$\tilde L_{RFF}(x, y) = \frac{1}{D} \sum_{j=1}^{D} q(x) \exp\{ i \omega_j^\top (x - y) \} q(y), \qquad C^{RFF}_{jk} = \frac{1}{D} \int_\Omega q^2(x) \exp\{ i (\omega_j - \omega_k)^\top x \}\, dx.$$
The CDF of the sampling distribution f(x) in Algorithm 1 is given by
$$F_{RFF}(\hat x) = \frac{1}{|V|} \sum_{v \in V} \sum_{j=1}^{D} \sum_{k=1}^{D} v_j v_k^* \prod_{l=1}^{d} \int_{-\infty}^{\hat x_l} q^2(x) \exp\{ i (\omega_j - \omega_k)^\top x \}\, \mathbf{1}\{x_l \in \Omega\}\, dx_l, \quad (7)$$
where $v_j$ denotes the jth element of vector v. Note that $C^{RFF}$ and $F_{RFF}$ can be computed for many different combinations of Ω and q(x). In fact, this method works for any combination of (i) a translation-invariant similarity kernel k with known characteristic function and (ii) a quality function q with known spectral density. The resulting kernel L need not be translation invariant. In the supplement, we illustrate this method by considering a common and important example where $\Omega = \mathbb{R}^d$, q(x) is Gaussian, and k(x, y) is any kernel with known Fourier transform.

3.2 Sampling from a Nyström-approximated DPP

Another approach to kernel approximation is the Nyström method [27]. In particular, given $z_1, \ldots$
$, z_D$ landmarks sampled from Ω, we can approximate the kernel function and dual matrix as
$$\tilde L_{Nys}(x, y) = \sum_{j=1}^{D} \sum_{k=1}^{D} W^2_{jk}\, L(x, z_j)\, L(z_k, y), \qquad C^{Nys}_{jk} = \sum_{n=1}^{D} \sum_{m=1}^{D} W_{jn} W_{mk} \int_\Omega L(z_n, x) L(x, z_m)\, dx,$$
where $W_{jk} = L(z_j, z_k)^{-1/2}$. Denoting $w_j(v) = \sum_{n=1}^{D} W_{jn} v_n$, the CDF of f(x) in Alg. 1 is:
$$F_{Nys}(\hat x) = \frac{1}{|V|} \sum_{v \in V} \sum_{j=1}^{D} \sum_{k=1}^{D} w_j(v) w_k(v) \prod_{l=1}^{d} \int_{-\infty}^{\hat x_l} L(x, z_j) L(z_k, x)\, \mathbf{1}\{x_l \in \Omega\}\, dx_l. \quad (8)$$
As with the RFF case, we consider a decomposition $L(x, y) = q(x) k(x, y) q(y)$. Here, there are no translation-invariance requirements, even for the similarity kernel k. In the supplement, we provide the important example where $\Omega = \mathbb{R}^d$ and both q(x) and k(x, y) are Gaussians, and also the case where k(x, y) is polynomial, a case that cannot be handled by RFF since it is not translation invariant.

4 Gibbs sampling

For k-DPPs, we can consider a Gibbs sampling scheme. In the supplement, we derive that the full conditional for the inclusion of point $x_k$ given the inclusion of the $k - 1$ other points is a 1-DPP with a modified kernel, which we know how to sample from. Let the kernel function be represented as before: $L(x, y) = q(x) k(x, y) q(y)$. Denoting $J^{\setminus k} = \{x_j\}_{j \ne k}$ and $M^{\setminus k} = L^{-1}_{J^{\setminus k}}$, the full conditional can be simplified using Schur's determinantal equality [22]:
$$p(x_k \mid \{x_j\}_{j \ne k}) \propto L(x_k, x_k) - \sum_{i, j \ne k} M^{\setminus k}_{ij} L(x_i, x_k) L(x_j, x_k). \quad (9)$$

Figure 1: Estimates of total variational distance for Nyström and RFF approximation methods to a DPP with Gaussian quality and similarity with covariances $\Gamma = \mathrm{diag}(\rho^2, \ldots, \rho^2)$ and $\Sigma = \mathrm{diag}(\sigma^2, \ldots, \sigma^2)$, respectively. (a)-(c) For dimensions d = 1, 5 and 10, each plot considers $\rho^2 = 1$ and varies $\sigma^2$. (d) Eigenvalues for the Gaussian kernels with $\sigma^2 = \rho^2 = 1$ and varying dimension d.
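In the discrete case, the construction above reduces to the standard Nyström approximation $\tilde L = L(\cdot, Z)\, L(Z, Z)^{-1}\, L(Z, \cdot)$ (the text's $W^2$ is exactly $L(Z, Z)^{-1}$). A sketch checking its accuracy for a smooth Gaussian kernel on an interval; the landmark count, bandwidth, and evaluation grid are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)

def L(X, Y, sigma=1.0):
    """Gaussian kernel matrix between two point sets (rows are points)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

X = rng.uniform(-1, 1, size=(200, 1))     # evaluation points
Z = np.linspace(-1, 1, 8)[:, None]        # landmarks z_1, ..., z_D

L_exact = L(X, X)
# Nystrom: L(x, Z) L(Z, Z)^{-1} L(Z, y), solved rather than explicitly inverted
L_nys = L(X, Z) @ np.linalg.solve(L(Z, Z), L(Z, X))
err = np.abs(L_exact - L_nys).max()
```

Because the Gaussian kernel's eigenvalues decay very fast on a small interval, a handful of well-spread landmarks already gives a tiny entrywise error, which is the "large eigengap" regime in which Nyström is reported to do well in the empirical comparison of Sec. 5.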
In general, sampling directly from this full conditional is difficult. However, for a wide range of kernel functions, including those which can be handled by the Nyström approximation in Sec. 3.2, the CDF can be computed analytically and $x_k$ can be sampled using the inverse CDF method:
$$F(\hat x_l \mid \{x_j\}_{j \ne k}) = \frac{\int_{-\infty}^{\hat x_l} \left[ L(x_l, x_l) - \sum_{i, j \ne k} M^{\setminus k}_{ij} L(x_i, x_l) L(x_j, x_l) \right] \mathbf{1}\{x_l \in \Omega\}\, dx_l}{\int_\Omega \left[ L(x, x) - \sum_{i, j \ne k} M^{\setminus k}_{ij} L(x_i, x) L(x_j, x) \right] dx}. \quad (10)$$
In the supplement, we illustrate this method by considering the case where $\Omega = \mathbb{R}^d$ and q(x) and k(x, y) are Gaussians. We use this same Schur complement scheme for sampling from the full conditionals in the mixture model application of Sec. 6.

A key advantage of this scheme for several types of kernels is that the complexity of sampling scales linearly with the number of dimensions d, making it suitable for handling high-dimensional spaces. As with any Gibbs sampling scheme, the mixing rate depends on the correlations between variables. In cases where the kernel introduces low repulsion, we expect the Gibbs sampler to mix well, while in a high-repulsion setting the sampler can mix slowly due to the strong dependencies between points and the fact that we are only doing one-point-at-a-time moves. We explore the dependence of convergence on repulsion strength in the supplementary materials. Regardless, this sampler provides a nice tool in the k-DPP setting. Asymptotically, theory suggests that we get exact (though correlated) samples from the k-DPP. To extend this approach to standard DPPs, we can first sample k (this assumes knowledge of the eigenvalues of L) and then apply the above method to get a sample. This is fairly inefficient if many samples are needed. A more involved but potentially efficient approach is to consider a birth-death sampling scheme where the size of the set can grow/shrink by 1 at every step.
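A discrete-Ω analogue of this Gibbs sweep makes the role of the Schur complement in Eq. (9) concrete; the grid, kernel bandwidth, sweep count and k below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
grid = np.linspace(-3, 3, 200)                 # discretization standing in for Omega
K = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * 0.5 ** 2))

def gibbs_kdpp(k, sweeps=50):
    """One-point-at-a-time Gibbs for a k-DPP on the grid."""
    idx = list(rng.choice(len(grid), size=k, replace=False))
    for _ in range(sweeps):
        for pos in range(k):
            rest = idx[:pos] + idx[pos + 1:]
            M = np.linalg.inv(K[np.ix_(rest, rest)])   # M^{\k} = L_{J\k}^{-1}
            B = K[rest, :]                             # L(x_i, x) for all candidates x
            # Schur complement: L(x,x) - sum_ij M_ij L(x_i,x) L(x_j,x), cf. Eq. (9)
            cond = K.diagonal() - np.einsum('ix,ij,jx->x', B, M, B)
            cond = np.clip(cond, 0, None)              # guard tiny negative round-off
            idx[pos] = rng.choice(len(grid), p=cond / cond.sum())
    return grid[idx]

sample = gibbs_kdpp(k=4)
```

Note that the conditional in Eq. (9) vanishes exactly at the already-included points, so the sweep never collapses two points onto each other; the repulsion is built into the Schur complement itself.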
5 Empirical analysis

To evaluate the performance of the RFF and Nyström approximations, we compute the total variational distance
$$\| P_L - P_{\tilde L} \|_1 = \frac{1}{2} \sum_X | P_L(X) - P_{\tilde L}(X) |,$$
where $P_L(X)$ denotes the probability of set X under a DPP with kernel L, as given by Eq. (1). We restrict our analysis to the case where the quality function and similarity kernel are Gaussians with isotropic covariances $\Gamma = \mathrm{diag}(\rho^2, \ldots, \rho^2)$ and $\Sigma = \mathrm{diag}(\sigma^2, \ldots, \sigma^2)$, respectively, enabling our analysis based on the easily computed eigenvalues [8]. We also focus on sampling from k-DPPs, for which the size of the set X is always k. Details are in the supplement.

Fig. 1 displays estimates of the total variational distance for the RFF and Nyström approximations when $\rho^2 = 1$, varying $\sigma^2$ (the repulsion strength) and the dimension d. Note that the RFF method performs slightly worse as $\sigma^2$ increases and is rather invariant to d, while the Nyström method performs much better for increasing $\sigma^2$ but worse for increasing d. While this phenomenon seems perplexing at first, a study of the eigenvalues of the Gaussian kernel across dimensions sheds light on the rationale (see Fig. 1). Note that for fixed $\sigma^2$ and $\rho^2$, the decay of eigenvalues is slower in higher dimensions. It has been previously demonstrated that the Nyström method performs favorably in kernel learning tasks compared to RFF in cases where there is a large eigengap in the kernel matrix [28]. The plot of the eigenvalues seems to indicate the same phenomenon here. Furthermore, this result is consistent with the comparison of RFF to Nyström in approximating DPPs in the discrete Ω case provided in [3]. This behavior can also be explained by looking at the theory behind these two approximations. For the RFF, while the kernel approximation is guaranteed to be an unbiased estimate of the true kernel element-wise, the variance is fairly high [19].
In our case, we note that the RFF estimates of minors are biased because of the non-linearity in matrix entries, overestimating the probabilities of point configurations that are more spread out, which leads to samples that are overly dispersed. For the Nyström method, on the other hand, the quality of the approximation depends on how well the landmarks cover Ω. In our experiments the landmarks are sampled i.i.d. from q(x). When either the similarity bandwidth $\sigma^2$ is small or the dimension d is high, the effective distance between points increases, thereby decreasing the accuracy of the approximation. Theoretical bounds for the Nyström DPP approximation in the case when Ω is finite are provided in [3]. We believe the same result holds for continuous Ω by extending the eigenvalues and spectral norm of the kernel matrix to operator eigenvalues and operator norms, respectively. In summary, for moderate values of $\sigma^2$ it is generally good to use the Nyström approximation in low-dimensional settings and RFF in high-dimensional settings.

6 Repulsive priors for mixture models

Mixture models are used in a wide range of applications, from clustering to density estimation. A common issue with such models, especially in density estimation tasks, is the introduction of redundant, overlapping components that increase the complexity and reduce the interpretability of the resulting model. This phenomenon is especially prominent when the number of samples is small. In a Bayesian setting, a common fix to this problem is to consider a sparse Dirichlet prior on the mixture weights, which penalizes the addition of non-zero-weight components. However, such approaches run the risk of inaccuracies in the parameter estimates [18]. Instead, [18] show that sampling the location parameters using repulsive priors leads to better-separated clusters while maintaining the accuracy of the density estimate.
They propose a class of repulsive priors that rely on explicitly defining a distance metric and the manner in which small distances are penalized. The resulting posterior computations can be fairly complex. The theoretical properties of DPPs make them an appealing choice as a repulsive prior. In fact, [29] considered using DPPs as repulsive priors in latent variable models. However, in the absence of a feasible continuous DPP sampling algorithm, their method was restricted to performing MAP inference. Here we propose a fully generative probabilistic mixture model using a DPP prior for the location parameters, with a K-component model using a K-DPP.

In the common case of mixtures of Gaussians (MoG), our posterior computations can be performed using Gibbs sampling with nearly the same simplicity as the standard case where the location parameters $\mu_k$ are assumed to be i.i.d. In particular, with the exception of updating the location parameters $\{\mu_1, \ldots, \mu_K\}$, our sampling steps are identical to standard MoG Gibbs updates in the uncollapsed setting. For the location parameters, instead of sampling each $\mu_k$ independently from its conditional posterior, our full conditional depends upon the other locations $\mu_{\setminus k}$ as well. Details are in the supplement, where we show that this full conditional has an interpretation as a single draw from a tilted 1-DPP. As such, we can employ the Gibbs sampling scheme of Sec. 4.

We assess the clustering and density estimation performance of the DPP-based model on both synthetic and real datasets. In each case, we run 10,000 Gibbs iterations, discard 5,000 as burn-in, and thin the chain by 10. Hyperparameter settings are in the supplement. We randomly permute the labels in each iteration to ensure balanced label switching. Draws are post-processed following the algorithm of [23] to address the label switching issue.
Synthetic data. To assess the role of the prior in a density estimation task, we generated a small sample of 100 observations from a mixture of two Gaussians. We consider two cases, the first with well-separated components and the second with poorly-separated components. We compare a mixture model with locations sampled i.i.d. (IID) to our DPP repulsive prior (DPP). In both cases, we set an upper bound of six mixture components. In Fig. 2, we see that both IID and DPP provide very similar density estimates. However, IID uses many large-mass components to describe the density. As a measure of the simplicity of the resulting density description, we compute the average entropy of the posterior mixture membership distribution, which is a reasonable metric given the similarity of the overall densities. Lower entropy indicates a more concise representation in an information-theoretic sense. We also assess the accuracy of the density estimate by computing both (i) the Hamming distance error relative to true cluster labels and (ii) held-out log-likelihood on 100 observations. The results are summarized in Table 1. We see that DPP results in (i) significantly lower entropy, (ii) lower overall clustering error, and (iii) statistically indistinguishable held-out log-likelihood. These results signify that we have a sparser representation with well-separated (interpretable) clusters while maintaining the accuracy of the density estimate.
Figure 2: For each synthetic and real dataset (Well-Sep, Poor-Sep, Galaxy, Enzyme, Acidity): (top) histogram of data overlaid with the actual Gaussian mixture generating the synthetic data, and posterior mean mixture model for (middle) IID and (bottom) DPP. Red dashed lines indicate the resulting density estimate.

Table 1: For IID and DPP on synthetic datasets: mean (stdev) of mixture membership entropy, cluster assignment error rate, and held-out log-likelihood of 100 observations under the posterior mean density estimate.

DATASET           | ENTROPY               | CLUSTERING ERROR      | HELD-OUT LOG-LIKE.
                  | IID        DPP        | IID        DPP        | IID        DPP
Well-separated    | 1.11 (0.3) 0.88 (0.2) | 0.19 (0.1) 0.19 (0.1) | -169 (6)   -171 (8)
Poorly-separated  | 1.46 (0.2) 0.92 (0.3) | 0.47 (0.1) 0.39 (0.1) | -211 (10)  -207 (9)

Real data. We also tested our DPP model on three real density estimation tasks considered in [20]: 82 measurements of velocity of galaxies diverging from our own (galaxy), acidity measurements of 155 lakes in Wisconsin (acidity), and the distribution of enzymatic activity in the blood of 245 individuals (enzyme). We once again judge the complexity of the density estimates using the posterior mixture membership entropy as a proxy. To assess the accuracy of the density estimates, we performed 5-fold cross-validation to estimate the predictive held-out log-likelihood.
As with the synthetic data, we find that DPP visually results in better-separated clusters (Fig. 2). The DPP entropy measure is also significantly lower for data that are not well separated (acidity and galaxy), while the differences in predictive log-likelihood estimates are not statistically significant (Table 2).

Finally, we consider a classification task based on the iris dataset: 150 observations from three iris species with four length measurements. For this dataset, there has been significant debate on the optimal number of clusters. While there are three species in the data, it is known that two have very low separation. Based on loss minimization, [24, 26] concluded that the optimal number of clusters was two. Table 2 compares the classification error using DPP and IID when we assume for evaluation that the real data has three or two classes (by collapsing the two low-separation classes), but consider a model with a maximum of six components. While both methods perform similarly for three classes, DPP has significantly lower classification error under the assumption of two classes, since DPP places large posterior mass on only two mixture components. This result hints at the possibility of using the DPP mixture model as a model selection method.

7 Generating diverse sample perturbations

We consider another possible application of continuous-space sampling. In many applications of inverse reinforcement learning or inverse optimal control, the learner is presented with control trajectories executed by an expert and tries to estimate a reward function that would approximately reproduce such policies [1]. In order to estimate the reward function, the learner needs to compare the rewards of a large set of trajectories (or all, if possible), which becomes intractable in high-dimensional spaces with complex non-linear dynamics.
A typical approximation is to use a set of perturbed expert trajectories as a comparison set, where a good set of trajectories should cover as large a part of the space as possible.

Table 2: For IID and DPP, mean (stdev) of (left) mixture membership entropy and held-out log-likelihood for three density estimation tasks and (right) classification error under 3 vs. 2 true classes for the iris data.

DATA      ENTROPY                    HELDOUT LL.
          IID         DPP            IID       DPP
Galaxy    0.89 (0.2)  0.74 (0.2)     -20 (2)   -21 (2)
Acidity   1.32 (0.1)  0.98 (0.1)     -49 (2)   -48 (3)
Enzyme    1.01 (0.1)  0.96 (0.1)     -55 (2)   -55 (3)

DATA          CLASS ERROR
              IID          DPP
Iris (3 cls)  0.43 (0.02)  0.43 (0.02)
Iris (2 cls)  0.23 (0.03)  0.15 (0.03)

[Figure 3: Left: Diverse set of human poses relative to an original pose by sampling from an RFF (top) and Nyström (bottom) approximations with kernel based on MoCap of the activity dance. Right: Fraction of data having a DPP/i.i.d. sample within an ϵ neighborhood.]

We propose using DPPs to sample a large-coverage set of trajectories, in particular focusing on a human motion application where we assume a set of motion capture (MoCap) training data taken from the CMU database [6]. Here, our dimension d is 62, corresponding to a set of joint angle measurements. For a given activity, such as dancing, we aim to select a reference pose and synthesize a set of diverse, perturbed poses. To achieve this, we build a kernel with Gaussian quality and similarity using covariances estimated from the training data associated with the activity. The Gaussian quality is centered about the selected reference pose and we synthesize new poses by sampling from our continuous DPP using the low-rank approximation scheme. In Fig. 3, we show an example of such DPP-synthesized poses.
For the activity dance, to quantitatively assess our performance in covering the activity space, we compute a coverage rate metric based on a random sample of 50 poses from a DPP. For each training MoCap frame, we compute whether the frame has a neighbor in the DPP sample within an ϵ neighborhood. We compare our coverage to that of i.i.d. sampling from a multivariate Gaussian chosen to have variance matching our DPP sample. Despite favoring the i.i.d. case by inflating the variance to match the diverse DPP sample, the DPP poses still provide better average coverage over 100 runs. See Fig. 3 (right) for an assessment of the coverage metric. A visualization of the samples is in the supplement. Note that the i.i.d. case requires on average ϵ = 253 to cover all data whereas the DPP only requires ϵ = 82. By ϵ = 40, we cover over 90% of the data on average. Capturing the rare poses is extremely challenging with i.i.d. sampling, but the diversity encouraged by the DPP overcomes this issue.

8 Conclusion

Motivated by the recent successes of DPP-based subset modeling in finite-set applications and the growing interest in repulsive processes on continuous spaces, we considered methods by which continuous-DPP sampling can be straightforwardly and efficiently approximated for a wide range of kernels. Our low-rank approach harnessed approximations provided by Nyström and random Fourier feature methods and then utilized a continuous dual DPP representation. The resulting approximate sampler garners the same efficiencies that led to the success of the DPP in the discrete case. One can use this method as a proposal distribution and correct for the approximations via Metropolis-Hastings, for example. For k-DPPs, we devised an exact Gibbs sampler that utilized the Schur complement representation.
Finally, we demonstrated that continuous-DPP sampling is useful both for repulsive mixture modeling (which utilizes the Gibbs sampling scheme) and for synthesizing diverse human poses (which we demonstrated with the low-rank approximation method). As we saw in the MoCap example, we can handle high-dimensional spaces, with our computations scaling just linearly with the dimension d. We believe this work opens up opportunities to use DPPs as parts of many models.

Acknowledgements: RHA and EBF were supported in part by AFOSR Grant FA9550-12-1-0453 and DARPA Grant FA9550-12-1-0406 negotiated by AFOSR. BT was partially supported by NSF CAREER Grant 1054215 and by STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.

References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. ICML, 2004.
[2] R. H. Affandi, A. Kulesza, and E. B. Fox. Markov determinantal point processes. In Proc. UAI, 2012.
[3] R. H. Affandi, A. Kulesza, E. B. Fox, and B. Taskar. Nyström approximation for large-scale determinantal processes. In Proc. AISTATS, 2013.
[4] R. A. Bernstein and M. Gobbel. Partitioning of space in communities of ants. Journal of Animal Ecology, 48(3):931–942, 1979.
[5] A. Borodin and E. M. Rains. Eynard-Mehta theorem, Schur process, and their Pfaffian analogs. Journal of Statistical Physics, 121(3):291–317, 2005.
[6] CMU. Carnegie Mellon University graphics lab motion capture database. http://mocap.cs.cmu.edu/, 2009.
[7] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer, 2003.
[8] G. E. Fasshauer and M. J. McCourt. Stable evaluation of Gaussian radial basis function interpolants. SIAM Journal on Scientific Computing, 34(2):737–762, 2012.
[9] J. Gillenwater, A. Kulesza, and B. Taskar. Discovering diverse and salient threads in document collections. In Proc. EMNLP, 2012.
[10] J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal processes and independence. Probability Surveys, 3:206–229, 2006.
[11] A. Kulesza and B. Taskar. Structured determinantal point processes. In Proc. NIPS, 2010.
[12] A. Kulesza and B. Taskar. k-DPPs: Fixed-size determinantal point processes. In Proc. ICML, 2011.
[13] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2–3), 2012.
[14] F. Lavancier, J. Møller, and E. Rubak. Statistical aspects of determinantal point processes. arXiv preprint arXiv:1205.4818, 2012.
[15] O. Macchi. The coincidence approach to stochastic point processes. Advances in Applied Probability, pages 83–122, 1975.
[16] B. Matérn. Spatial Variation. Springer-Verlag, 1986.
[17] T. Neeff, G. S. Biging, L. V. Dutra, C. C. Freitas, and J. R. Dos Santos. Markov point processes for modeling of spatial forest patterns in Amazonia derived from interferometric height. Remote Sensing of Environment, 97(4):484–494, 2005.
[18] F. Petralia, V. Rao, and D. Dunson. Repulsive mixtures. In Proc. NIPS, 2012.
[19] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Proc. NIPS, 2007.
[20] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components (with discussion). JRSS:B, 59(4):731–792, 1997.
[21] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2nd edition, 2004.
[22] J. Schur. Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind. Journal für die reine und angewandte Mathematik, 147:205–232, 1917.
[23] M. Stephens. Dealing with label switching in mixture models. JRSS:B, 62(4):795–809, 2000.
[24] C. A. Sugar and G. M. James. Finding the number of clusters in a dataset: An information-theoretic approach. JASA, 98(463):750–763, 2003.
[25] L. A. Waller, A. Särkkä, V. Olsbo, M. Myllymäki, I. G. Panoutsopoulou, W. R. Kennedy, and G. Wendelschafer-Crabb. Second-order spatial analysis of epidermal nerve fibers. Statistics in Medicine, 30(23):2827–2841, 2011.
[26] J. Wang. Consistent selection of the number of clusters via crossvalidation. Biometrika, 97(4):893–904, 2010.
[27] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Proc. NIPS, 2000.
[28] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Proc. NIPS, 2012.
[29] J. Zou and R. P. Adams. Priors for diversity in generative latent variable models. In Proc. NIPS, 2012.
Streaming Variational Bayes

Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson
University of California, Berkeley
{tab@stat, nickboyd@eecs, wibisono@eecs, ashia@stat}.berkeley.edu

Michael I. Jordan
University of California, Berkeley
jordan@cs.berkeley.edu

Abstract

We present SDA-Bayes, a framework for (S)treaming, (D)istributed, (A)synchronous computation of a Bayesian posterior. The framework makes streaming updates to the estimated posterior according to a user-specified approximation batch primitive. We demonstrate the usefulness of our framework, with variational Bayes (VB) as the primitive, by fitting the latent Dirichlet allocation model to two large-scale document collections. We demonstrate the advantages of our algorithm over stochastic variational inference (SVI) by comparing the two after a single pass through a known amount of data—a case where SVI may be applied—and in the streaming setting, where SVI does not apply.

1 Introduction

Large, streaming data sets are increasingly the norm in science and technology. Simple descriptive statistics can often be readily computed with a constant number of operations for each data point in the streaming setting, without the need to revisit past data or have advance knowledge of future data. But these time and memory restrictions are not generally available for the complex, hierarchical models that practitioners often have in mind when they collect large data sets. Significant progress on scalable learning procedures has been made in recent years [e.g., 1, 2]. But the underlying models remain simple, and the inferential framework is generally non-Bayesian. The advantages of the Bayesian paradigm (e.g., hierarchical modeling, coherent treatment of uncertainty) currently seem out of reach in the Big Data setting. An exception to this statement is provided by [3–5], who have shown that a class of approximation methods known as variational Bayes (VB) [6] can be usefully deployed for large-scale data sets.
They have applied their approach, referred to as stochastic variational inference (SVI), to the domain of topic modeling of document collections, an area with a major need for scalable inference algorithms. VB traditionally uses the variational lower bound on the marginal likelihood as an objective function, and the idea of SVI is to apply a variant of stochastic gradient descent to this objective. Notably, this objective is based on the conceptual existence of a full data set involving D data points (i.e., documents in the topic model setting), for a fixed value of D. Although the stochastic gradient is computed for a single, small subset of data points (documents) at a time, the posterior being targeted is a posterior for D data points. This value of D must be specified in advance and is used by the algorithm at each step. Posteriors for D′ data points, for D′ ≠ D, are not obtained as part of the analysis. We view this lack of a link between the number of documents that have been processed thus far and the posterior that is being targeted as undesirable in many settings involving streaming data. In this paper we aim at an approximate Bayesian inference algorithm that is scalable like SVI but is also truly a streaming procedure, in that it yields an approximate posterior for each processed collection of D′ data points—and not just a pre-specified "final" number of data points D. To that end, we return to the classical perspective of Bayesian updating, where the recursive application of Bayes theorem provides a sequence of posteriors, not a sequence of approximations to a fixed posterior. To this classical recursive perspective we bring the VB framework; our updates need not be exact Bayesian updates but rather may be approximations such as VB.
This approach is similar in spirit to assumed density filtering or expectation propagation [7–9], but each step of those methods involves a moment-matching step that can be computationally costly for models such as topic models. We are able to avoid the moment-matching step via the use of VB. We also note other related work in this general vein: MCMC approximations have been explored by [10], and VB or VB-like approximations have also been explored by [11, 12]. Although the empirical success of SVI is the main motivation for our work, we are also motivated by recent developments in computer architectures, which permit distributed and asynchronous computations in addition to streaming computations. As we will show, a streaming VB algorithm naturally lends itself to distributed and asynchronous implementations.

2 Streaming, distributed, asynchronous Bayesian updating

Streaming Bayesian updating. Consider data x1, x2, . . . generated iid according to a distribution p(x | Θ) given parameter(s) Θ. Assume that a prior p(Θ) has also been specified. Then Bayes theorem gives us the posterior distribution of Θ given a collection of S data points, C1 := (x1, . . . , xS): p(Θ | C1) = p(C1)^{−1} p(C1 | Θ) p(Θ), where p(C1 | Θ) = p(x1, . . . , xS | Θ) = ∏_{s=1}^{S} p(xs | Θ). Suppose we have seen and processed b−1 collections, sometimes called minibatches, of data. Given the posterior p(Θ | C1, . . . , Cb−1), we can calculate the posterior after the bth minibatch:

p(Θ | C1, . . . , Cb) ∝ p(Cb | Θ) p(Θ | C1, . . . , Cb−1). (1)

That is, we treat the posterior after b−1 minibatches as the new prior for the incoming data points. If we can save the posterior from b−1 minibatches and calculate the normalizing constant for the bth posterior, repeated application of Eq. (1) is streaming; it automatically gives us the new posterior without needing to revisit old data points. In complex models, it is often infeasible to calculate the posterior exactly, and an approximation must be used.
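The streaming recursion in Eq. (1) is easiest to see with a conjugate model, where the batch update is exact; a minimal sketch with a Beta-Bernoulli model (toy data; an exact conjugate update stands in for a general primitive):

```python
# Sketch of the streaming update in Eq. (1): the posterior after b-1
# minibatches becomes the prior for minibatch b. Here the update is an
# exact conjugate one (Beta prior, Bernoulli likelihood), so the recursion
# is exact; in the paper the update may be an approximation such as VB.
def beta_bernoulli_update(prior, minibatch):
    """Return the Beta posterior (alpha, beta) after observing 0/1 data."""
    alpha, beta = prior
    heads = sum(minibatch)
    return (alpha + heads, beta + len(minibatch) - heads)

posterior = (1.0, 1.0)  # uniform Beta(1, 1) prior
for minibatch in [[1, 1, 0], [1, 0, 0, 1], [1]]:  # streaming minibatches
    posterior = beta_bernoulli_update(posterior, minibatch)

assert posterior == (6.0, 4.0)  # 5 ones and 3 zeros on top of Beta(1, 1)
```

Old minibatches never need to be revisited: only the current (alpha, beta) pair is stored between arrivals.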
Suppose that, given a prior p(Θ) and data minibatch C, we have an approximation algorithm A that calculates an approximate posterior q: q(Θ) = A(C, p(Θ)). Then, setting q0(Θ) = p(Θ), one way to recursively calculate an approximation to the posterior is

p(Θ | C1, . . . , Cb) ≈ qb(Θ) = A(Cb, qb−1(Θ)). (2)

When A yields the posterior from Bayes theorem, this calculation is exact. This approach already differs from that of [3–5], which we will see (Sec. 3.2) directly approximates p(Θ | C1, . . . , CB) for fixed B without making intermediate approximations for b strictly between 1 and B.

Distributed Bayesian updating. The sequential updates in Eq. (2) handle streaming data in theory, but in practice, the A calculation might take longer than the time interval between minibatch arrivals or simply take longer than desired. Parallelizing computations increases algorithm throughput. And posterior calculations need not be sequential. Indeed, Bayes theorem yields

p(Θ | C1, . . . , CB) ∝ [∏_{b=1}^{B} p(Cb | Θ)] p(Θ) ∝ [∏_{b=1}^{B} p(Θ | Cb) p(Θ)^{−1}] p(Θ). (3)

That is, we can calculate the individual minibatch posteriors p(Θ | Cb), perhaps in parallel, and then combine them to find the full posterior p(Θ | C1, . . . , CB). Given an approximating algorithm A as above, the corresponding approximate update would be

p(Θ | C1, . . . , CB) ≈ q(Θ) ∝ [∏_{b=1}^{B} A(Cb, p(Θ)) p(Θ)^{−1}] p(Θ), (4)

for some approximating distribution q, provided the normalizing constant for the right-hand side of Eq. (4) can be computed. Variational inference methods are generally based on exponential family representations [6], and we will make that assumption here. In particular, we suppose p(Θ) ∝ exp{ξ0 · T(Θ)}; that is, p(Θ) is an exponential family distribution for Θ with sufficient statistic T(Θ) and natural parameter ξ0.
We suppose further that A always returns a distribution in the same exponential family; in particular, we suppose that there exists some parameter ξb such that

qb(Θ) ∝ exp{ξb · T(Θ)} for qb(Θ) = A(Cb, p(Θ)). (5)

When we make these two assumptions, the update in Eq. (4) becomes

p(Θ | C1, . . . , CB) ≈ q(Θ) ∝ exp{[ξ0 + ∑_{b=1}^{B} (ξb − ξ0)] · T(Θ)}, (6)

where the normalizing constant is readily obtained from the exponential family form. In what follows we use the shorthand ξ ← A(C, ξ0) to denote that A takes as input a minibatch C and a prior with exponential family parameter ξ0 and that it returns a distribution in the same exponential family with parameter ξ. So, to approximate p(Θ | C1, . . . , CB), we first calculate ξb via the approximation primitive A for each minibatch Cb; note that these calculations may be performed in parallel. Then we sum together the quantities ξb − ξ0 across b, along with the initial ξ0 from the prior, to find the final exponential family parameter to the full posterior approximation q. We previously saw that the general Bayes sequential update can be made streaming by iterating with the old posterior as the new prior (Eq. (2)). Similarly, here we see that the full posterior approximation q is in the same exponential family as the prior, so one may iterate these parallel computations to arrive at a parallelized algorithm for streaming posterior computation. We emphasize that while these updates are reminiscent of prior-posterior conjugacy, it is actually the approximate posteriors and single, original prior that we assume belong to the same exponential family. It is not necessary to assume any conjugacy in the generative model itself nor that any true intermediate or final posterior take any particular limited form.

Asynchronous Bayesian updating.
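The combination rule in Eq. (6) can be illustrated with the same kind of toy conjugate model: each minibatch posterior is computed against the original prior (possibly in parallel), and the changes in the natural parameter are summed. A minimal sketch, with an exact Beta-Bernoulli primitive standing in for a general A:

```python
# Sketch of Eq. (6): each minibatch posterior is computed against the
# *original* prior, and the full posterior's parameter is xi0 plus the sum
# of the per-minibatch changes (xi_b - xi0). Counts of a Beta distribution
# play the role of the exponential-family parameter here.
def A(minibatch, xi0):
    """Toy batch primitive: exact Beta-Bernoulli update of (alpha, beta)."""
    alpha0, beta0 = xi0
    heads = sum(minibatch)
    return (alpha0 + heads, beta0 + len(minibatch) - heads)

xi0 = (1.0, 1.0)
minibatches = [[1, 1, 0], [1, 0, 0, 1], [1]]
xis = [A(C, xi0) for C in minibatches]          # embarrassingly parallel
xi_post = tuple(x0 + sum(x[i] - x0 for x in xis)
                for i, x0 in enumerate(xi0))
assert xi_post == (6.0, 4.0)  # same counts as processing the stream sequentially
```

Because the primitive here is exact, the combined parameter coincides with the sequential streaming result; with an approximate A the two can differ.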
Performing B computations in parallel can in theory speed up algorithm running time by a factor of B, but in practice it is often the case that a single computation thread takes longer than the rest. Waiting for this thread to finish diminishes potential gains from distributing the computations. This problem can be ameliorated by making computations asynchronous. In this case, processors known as workers each solve a subproblem. When a worker finishes, it reports its solution to a single master processor. If the master gives the worker a new subproblem without waiting for the other workers to finish, it can decrease downtime in the system. Our asynchronous algorithm is in the spirit of Hogwild! [1]. To present the algorithm we first describe an asynchronous computation that we will not use in practice, but which will serve as a conceptual stepping stone. Note in particular that the following scheme makes the computations in Eq. (6) asynchronous. Have each worker continuously iterate between three steps: (1) collect a new minibatch C, (2) compute the local approximate posterior ξ ← A(C, ξ0), and (3) return ∆ξ := ξ − ξ0 to the master. The master, in turn, starts by assigning the posterior to equal the prior: ξ(post) ← ξ0. Each time the master receives a quantity ∆ξ from any worker, it updates the posterior synchronously: ξ(post) ← ξ(post) + ∆ξ. If A returns the exponential family parameter of the true posterior (rather than an approximation), then the posterior at the master is exact by Eq. (4). A preferred asynchronous computation works as follows. The master initializes its posterior estimate to the prior: ξ(post) ← ξ0. Each worker continuously iterates between four steps: (1) collect a new minibatch C, (2) copy the master posterior value locally, ξ(local) ← ξ(post), (3) compute the local approximate posterior ξ ← A(C, ξ(local)), and (4) return ∆ξ := ξ − ξ(local) to the master.
Each time the master receives a quantity ∆ξ from any worker, it updates the posterior synchronously: ξ(post) ← ξ(post) + ∆ξ. The key difference between the first and second frameworks proposed above is that, in the second, the latest posterior is used as a prior. This latter framework is more in line with the streaming update of Eq. (2) but introduces a new layer of approximation. Since ξ(post) might change at the master while the worker is computing ∆ξ, it is no longer the case that the posterior at the master is exact when A returns the exponential family parameter of the true posterior. Nonetheless we find that the latter framework performs better in practice, so we focus on it exclusively in what follows. We refer to our overall framework as SDA-Bayes, which stands for (S)treaming, (D)istributed, (A)synchronous Bayes. The framework is intended to be general enough to allow a variety of local approximations A. Indeed, SDA-Bayes works out of the box once an implementation of A—and a prior on the global parameter(s) Θ—is provided. In the current paper our preferred local approximation will be VB.

3 Case study: latent Dirichlet allocation

In what follows, we consider examples of the choices for the Θ prior and primitive A in the context of latent Dirichlet allocation (LDA) [13]. LDA models the content of D documents in a corpus. Themes potentially shared by multiple documents are described by topics. The unsupervised learning problem is to learn the topics as well as discover which topics occur in which documents. More formally, each topic (of K total topics) is a distribution over the V words in the vocabulary: βk = (βkv)_{v=1}^{V}. Each document is an admixture of topics. The words in document d are assumed to be exchangeable. Each word wdn belongs to a latent topic zdn chosen according to a document-specific distribution of topics θd = (θdk)_{k=1}^{K}.
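The preferred asynchronous scheme above can be sketched with ordinary threads; in this toy version the batch primitive is again an exact Beta-Bernoulli update (so the increments are additive and the final answer is order-independent), and a lock stands in for the master processor:

```python
# Toy sketch of the preferred asynchronous scheme: each worker copies the
# master's current parameter, runs the batch primitive against that copy,
# and sends back the *increment*; the master applies increments as they
# arrive. With an exact conjugate primitive the increments do not depend
# on the copied prior, so the result is deterministic here.
import threading

xi_post = [1.0, 1.0]          # master copy of (alpha, beta), Beta(1, 1) prior
lock = threading.Lock()

def worker(minibatch):
    with lock:
        xi_local = list(xi_post)        # step 2: copy master posterior
    heads = sum(minibatch)              # step 3: xi <- A(C, xi_local)
    xi = [xi_local[0] + heads, xi_local[1] + len(minibatch) - heads]
    delta = [xi[0] - xi_local[0], xi[1] - xi_local[1]]   # step 4: increment
    with lock:                          # master applies the increment
        xi_post[0] += delta[0]
        xi_post[1] += delta[1]

threads = [threading.Thread(target=worker, args=(mb,))
           for mb in [[1, 1, 0], [1, 0, 0, 1], [1]]]
for t in threads: t.start()
for t in threads: t.join()
assert xi_post == [6.0, 4.0]
```

With an approximate primitive such as VB, the increment does depend on the copied prior, which is where the extra layer of approximation described above enters.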
The full generative model, with Dirichlet priors for βk and θd conditioned on respective parameters ηk and α, appears in [13]. To see that this model fits our specification in Sec. 2, consider the set of global parameters Θ = β. Each document wd = (wdn)_{n=1}^{Nd} is distributed iid conditioned on the global topics. The full collection of data is a corpus C = w = (wd)_{d=1}^{D} of documents. The posterior for LDA, p(β, θ, z | C, η, α), is equal to the following expression up to proportionality:

∝ [∏_{k=1}^{K} Dirichlet(βk | ηk)] · [∏_{d=1}^{D} Dirichlet(θd | α)] · [∏_{d=1}^{D} ∏_{n=1}^{Nd} θ_{d,zdn} β_{zdn,wdn}]. (7)

The posterior for just the global parameters p(β | C, η, α) can be obtained from p(β, θ, z | C, η, α) by integrating out the local, document-specific parameters θ, z. As is common in complex models, the normalizing constant for Eq. (7) is intractable to compute, so the posterior must be approximated.

3.1 Posterior-approximation algorithms

To apply SDA-Bayes to LDA, we use the prior specified by the generative model. It remains to choose a posterior-approximation algorithm A. We consider two possibilities here: variational Bayes (VB) and expectation propagation (EP). Both primitives take Dirichlet distributions as priors for β and both return Dirichlet distributions for the approximate posterior of the topic parameters β; thus the prior and approximate posterior are in the same exponential family. Hence both VB and EP can be utilized as a choice for A in the SDA-Bayes framework.

Mean-field variational Bayes. We use the shorthand pD for Eq. (7), the posterior given D documents. We assume the approximating distribution, written qD for shorthand, takes the form

qD(β, θ, z | λ, γ, φ) = [∏_{k=1}^{K} qD(βk | λk)] · [∏_{d=1}^{D} qD(θd | γd)] · [∏_{d=1}^{D} ∏_{n=1}^{Nd} qD(zdn | φd,wdn)] (8)

for parameters (λkv), (γdk), (φdvk) with k ∈ {1, . . . , K}, v ∈ {1, . . . , V}, d ∈ {1, . . . , D}.
Moreover, we set qD(βk | λk) = DirichletV(βk | λk), qD(θd | γd) = DirichletK(θd | γd), and qD(zdn | φd,wdn) = CategoricalK(zdn | φd,wdn). The subscripts on Dirichlet and Categorical indicate the dimensions of the distributions (and of the parameters). The problem of VB is to find the best approximating qD, defined as the collection of variational parameters λ, γ, φ that minimize the KL divergence from the true posterior: KL(qD ∥ pD). Even finding the minimizing parameters is a difficult optimization problem. Typically the solution is approximated by coordinate descent in each parameter [6, 13] as in Alg. 1. The derivation of VB for LDA can be found in [4, 13] and Sup. Mat. A.1.

Algorithm 1: VB for LDA
  Input: Data (nd)_{d=1}^{D}; hyperparameters η, α
  Output: λ
  Initialize λ
  while (λ, γ, φ) not converged do
    for d = 1, . . . , D do
      (γd, φd) ← LocalVB(d, λ)
    ∀(k, v), λkv ← ηkv + ∑_{d=1}^{D} φdvk ndv

Subroutine LocalVB(d, λ)
  Output: (γd, φd)
  Initialize γd
  while (γd, φd) not converged do
    ∀(k, v), set φdvk ∝ exp(Eq[log θdk] + Eq[log βkv]) (normalized across k)
    ∀k, γdk ← αk + ∑_{v=1}^{V} φdvk ndv

Algorithm 2: SVI for LDA
  Input: Hyperparameters η, α, D, (ρt)_{t=1}^{T}
  Output: λ
  Initialize λ
  for t = 1, . . . , T do
    Collect new data minibatch C
    foreach document indexed d in C do
      (γd, φd) ← LocalVB(d, λ)
    ∀(k, v), ˜λkv ← ηkv + (D/|C|) ∑_{d in C} φdvk ndv
    ∀(k, v), λkv ← (1 − ρt) λkv + ρt ˜λkv

Algorithm 3: SSU for LDA
  Input: Hyperparameters η, α
  Output: A sequence λ(1), λ(2), . . .
  Initialize ∀(k, v), λ(0)_kv ← ηkv
  for b = 1, 2, . . . do
    Collect new data minibatch C
    foreach document indexed d in C do
      (γd, φd) ← LocalVB(d, λ)
    ∀(k, v), λ(b)_kv ← λ(b−1)_kv + ∑_{d in C} φdvk ndv

Figure 1: Algorithms for calculating λ, the parameters for the topic posteriors in LDA. VB iterates multiple times through the data, SVI makes a single pass, and SSU is streaming. Here, ndv represents the number of words v in document d.

Expectation propagation. An EP [7] algorithm for approximating the LDA posterior appears in Alg. 6 of Sup. Mat. B.
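The LocalVB subroutine can be sketched as follows; this is a toy NumPy version under assumed dimensions, with a hand-rolled digamma so that the snippet needs only NumPy (the expectations Eq[log θdk] and Eq[log βkv] are digamma differences because the variational factors are Dirichlet):

```python
# Minimal sketch (toy dimensions) of the LocalVB coordinate updates for one
# document: phi and gamma are updated while the topic parameters lambda are
# held fixed.
import numpy as np

def digamma(x):
    """Elementwise digamma via the recurrence psi(x) = psi(x+1) - 1/x,
    then an asymptotic expansion (adequate for this sketch)."""
    x = np.asarray(x, dtype=float).copy()
    res = np.zeros_like(x)
    while np.any(x < 6):
        mask = x < 6
        res[mask] -= 1.0 / x[mask]
        x[mask] += 1.0
    inv2 = 1.0 / (x * x)
    return res + np.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def local_vb(n_d, lam, alpha, iters=50):
    """n_d: length-V word counts for one document; lam: K x V Dirichlet params."""
    K, V = lam.shape
    gamma = np.ones(K)
    elog_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
    for _ in range(iters):
        elog_theta = digamma(gamma) - digamma(np.array([gamma.sum()]))
        # phi[v, k] proportional to exp(E[log theta_k] + E[log beta_kv]),
        # normalized across k for each word v
        log_phi = elog_theta[None, :] + elog_beta.T
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)
        gamma = alpha + phi.T @ n_d
    return gamma, phi

alpha = 0.5 * np.ones(2)                       # toy: K=2 topics, V=4 words
lam = np.array([[5.0, 5.0, 1.0, 1.0],
                [1.0, 1.0, 5.0, 5.0]])
n_d = np.array([3.0, 2.0, 0.0, 1.0])
gamma, phi = local_vb(n_d, lam, alpha)
# invariant: sum_k gamma_dk = sum_k alpha_k + (number of words in document)
assert abs(gamma.sum() - (alpha.sum() + n_d.sum())) < 1e-8
```

The fixed iteration count stands in for the convergence test in Fig. 1; a production implementation would also monitor the change in gamma.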
Alg. 6 differs from [14], which does not provide an approximate posterior for the topic parameters, and is instead our own derivation. Our version of EP, like VB, learns factorized Dirichlet distributions over topics.

3.2 Other single-pass algorithms for approximate LDA posteriors

The algorithms in Sec. 3.1 pass through the data multiple times and require storing the data set in memory—but are useful as primitives for SDA-Bayes in the context of the processing of minibatches of data. Next, we consider two algorithms that can pass through a data set just one time (single pass) and to which we compare in the evaluations (Sec. 4).

Stochastic variational inference. VB uses coordinate descent to find a value of qD, Eq. (8), that locally minimizes the KL divergence, KL(qD ∥ pD). Stochastic variational inference (SVI) [3, 4] is exactly the application of a particular version of stochastic gradient descent to the same optimization problem. While stochastic gradient descent can often be viewed as a streaming algorithm, the optimization problem itself here depends on D via pD, the posterior on D data points. We see that, as a result, D must be specified in advance, appears in each step of SVI (see Alg. 2), and is independent of the number of data points actually processed by the algorithm. Nonetheless, while one may choose to visit D′ ≠ D data points or revisit data points when using SVI to estimate pD [3, 4], SVI can be made single-pass by visiting each of D data points exactly once and then has constant memory requirements. We also note that two new parameters, τ0 > 0 and κ ∈ (0.5, 1], appear in SVI, beyond those in VB, to determine a learning rate ρt as a function of iteration t: ρt := (τ0 + t)^{−κ}.
An alternative single-pass (and indeed streaming) option would be to update the local parameters for each minibatch of documents as they arrive and then add the corresponding terms φdvkndv to the current estimate of λ for each document d in the minibatch. This essential idea has been proposed previously for models other than LDA by [11, 12] and forms the basis of what we call the sufficient statistics update algorithm (SSU): Alg. 3. This algorithm is equivalent to SDA-Bayes with A chosen to be a single iteration over the global variable λ of VB (i.e., updating λ exactly once instead of iterating until convergence). 5 Wikipedia Nature 32-SDA 1-SDA SVI SSU 32-SDA 1-SDA SVI SSU Log pred prob −7.31 −7.43 −7.32 −7.91 −7.11 −7.19 −7.08 −7.82 Time (hours) 2.09 43.93 7.87 8.28 0.55 10.02 1.22 1.27 Table 1: A comparison of (1) log predictive probability of held-out data and (2) running time of four algorithms: SDA-Bayes with 32 threads, SDA-Bayes with 1 thread, SVI, and SSU. 4 Evaluation We follow [4] (and further [15, 16]) in evaluating our algorithms by computing (approximate) predictive probability. Under this metric, a higher score is better, as a better model will assign a higher probability to the held-out words. We calculate predictive probability by first setting aside held-out testing documents C(test) from the full corpus and then further setting aside a subset of held-out testing words Wd,test in each testing document d. The remaining (training) documents C(train) are used to estimate the global parameter posterior q(β), and the remaining (training) words Wd,train within the dth testing document are used to estimate the document-specific parameter posterior q(θd).1 To calculate predictive probability, an approximation is necessary since we do not know the predictive distribution—just as we seek to learn the posterior distribution. 
Specifically, we calculate the normalized predictive distribution and report "log predictive probability" as

[∑_{d∈C(test)} log p(Wd,test | C(train), Wd,train)] / [∑_{d∈C(test)} |Wd,test|] = [∑_{d∈C(test)} ∑_{wtest∈Wd,test} log p(wtest | C(train), Wd,train)] / [∑_{d∈C(test)} |Wd,test|],

where we use the approximation

p(wtest | C(train), Wd,train) = ∫_β ∫_θd [∑_{k=1}^{K} θdk βk,wtest] p(θd | Wd,train, β) p(β | C(train)) dθd dβ
  ≈ ∫_β ∫_θd [∑_{k=1}^{K} θdk βk,wtest] q(θd) q(β) dθd dβ = ∑_{k=1}^{K} Eq[θdk] Eq[βk,wtest].

To facilitate comparison with SVI, we use the Wikipedia and Nature corpora of [3, 5] in our experiments. These two corpora represent a range of sizes (3,611,558 training documents for Wikipedia and 351,525 for Nature) as well as different types of topics. We expect words in Wikipedia to represent an extremely broad range of topics whereas we expect words in Nature to focus more on the sciences. We further use the vocabularies of [3, 5] and SVI code available online at [17]. We hold out 10,000 Wikipedia documents and 1,024 Nature documents (not included in the counts above) for testing. In the results presented in the main text, we follow [3, 4] in fitting an LDA model with K = 100 topics and hyperparameters chosen as: ∀k, αk = 1/K; ∀(k, v), ηkv = 0.01. For both Wikipedia and Nature, we set the parameters in SVI according to the optimal values of the parameters described in Table 1 of [3] (number of documents D correctly set in advance, step size parameters κ = 0.5 and τ0 = 64). Figs. 3(a) and 3(d) demonstrate that both SVI and SDA are sensitive to minibatch size when ηkv = 0.01, with generally superior performance at larger batch sizes. Interestingly, both SVI and SDA performance improve and are steady across batch size when ηkv = 1 (Figs. 3(a) and 3(d)). Nonetheless, we use ηkv = 0.01 in what follows in the interest of consistency with [3, 4]. Moreover, in the remaining experiments, we use a large minibatch size of 2^15 = 32,768.
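The closed-form approximation at the end of the display above reduces to a product of Dirichlet means; a minimal sketch with toy numbers:

```python
# Sketch (toy numbers) of p(w_test | ...) ≈ sum_k E_q[theta_dk] E_q[beta_k,w_test],
# where the Dirichlet means are E_q[theta_dk] = gamma_dk / sum_k' gamma_dk'
# and E_q[beta_kv] = lambda_kv / sum_v' lambda_kv'.
import numpy as np

def log_predictive(test_word_ids, gamma_d, lam):
    """Average log probability of held-out word ids for one test document."""
    e_theta = gamma_d / gamma_d.sum()                 # E_q[theta_d]
    e_beta = lam / lam.sum(axis=1, keepdims=True)     # E_q[beta], K x V
    probs = e_theta @ e_beta[:, test_word_ids]        # p(w) per held-out word
    return np.log(probs).sum() / len(test_word_ids)

gamma_d = np.array([2.0, 1.0])                        # toy: K=2 topics, V=3 words
lam = np.array([[4.0, 1.0, 1.0],
                [1.0, 1.0, 4.0]])
assert abs(log_predictive([0, 2], gamma_d, lam)
           - (np.log(0.5) + np.log(1/3)) / 2) < 1e-12
```

Normalizing by the number of held-out words, as in the display, makes scores comparable across documents and corpora of different sizes.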
This size is the largest before SVI performance degrades in the Nature data set (Fig. 3(d)). Performance and timing results are shown in Table 1. One would expect that with additional streaming capabilities, SDA-Bayes should show a performance loss relative to SVI. We see from Table 1 that such loss is small in the single-thread case, while SSU performs much worse. SVI is faster than single-thread SDA-Bayes in this single-pass setting.

(Footnote 1: In all cases, we estimate q(θd) for evaluative purposes using VB since direct EP estimation takes prohibitively long.)

[Figure 2: SDA-Bayes log predictive probability (two left plots) and run time (two right plots) as a function of number of threads, sync vs. async: (a) Wikipedia, (b) Nature, (c) Wikipedia, (d) Nature.]

Full SDA-Bayes improves run time with no performance cost. We handicap SDA-Bayes in the above comparisons by utilizing just a single thread. In Table 1, we also report performance of SDA-Bayes with 32 threads and the same minibatch size. In the synchronous case, we consider minibatch size to equal the total number of data points processed per round; therefore, the minibatch size equals the number of data points sent to each thread per round times the total number of threads. In the asynchronous case, we analogously report minibatch size as this product. Fig. 2 shows the performance of SDA-Bayes when we run with {1, 2, 4, 8, 16, 32} threads while keeping the minibatch size constant. The goal in such a distributed context is to improve run time while not hurting performance. Indeed, we see dramatic run time improvement as the number of threads grows and in fact some slight performance improvement as well.
We tried both a parallel version and a fully distributed, asynchronous version of the algorithm; Fig. 2 indicates that the speedup and performance improvements we see here come from parallelizing, which is theoretically justified by Eq. (3) when A is Bayes' rule. Our experiments indicate that our Hogwild!-style asynchrony does not hurt performance. In our experiments, the processing time at each thread seems to be approximately equal across threads and to dominate any communication time at the master, so synchronous and asynchronous performance and running times are essentially identical. In general, a practitioner might prefer asynchrony since it is more robust to node failures.

SVI is sensitive to the choice of total data size D. The evaluations above are for a single posterior over D data points. Of greater concern to us in this work is the evaluation of algorithms in the streaming setting. We have seen that SVI is designed to find the posterior for a particular, pre-chosen number of data points D. In practice, when we run SVI on the full data set but change the input value of D in the algorithm, we see degradations in performance. In particular, we try values of D equal to {0.01, 0.1, 1, 10, 100} times the true D in Fig. 3(b) for the Wikipedia data set and in Fig. 3(e) for the Nature data set. A practitioner in the streaming setting will typically not know D in advance, and multiple values of D may be of interest. Figs. 3(b) and 3(e) illustrate that an estimate may not be sufficient. Even in the case where D is known in advance, it is reasonable to imagine a new influx of further data; one might then need to run SVI again from the start (and, in so doing, revisit the first data set) to obtain the desired performance.

SVI is sensitive to learning step size. [3, 5] use cross-validation to tune the step-size parameters (τ0, κ), which set SVI's decaying step size ρ_t = (τ0 + t)^(−κ), in the stochastic gradient descent component of the SVI algorithm.
This cross-validation requires multiple runs over the data and thus is not suited to the streaming setting. Figs. 3(c) and 3(f) demonstrate that the parameter choice does indeed affect algorithm performance; in these figures, we keep D at the true training data size. [3] have observed that the optimal (τ0, κ) may interact with minibatch size, and we further observe that the optimal values may vary with D as well. We also note that recent work has suggested a way to update (τ0, κ) adaptively during an SVI run [18].

[Figure 3: Sensitivity of SVI and SDA-Bayes to some respective parameters. Panels: (a) sensitivity to minibatch size on Wikipedia; (b) SVI sensitivity to D on Wikipedia; (c) SVI sensitivity to step-size parameters on Wikipedia; (d) sensitivity to minibatch size on Nature; (e) SVI sensitivity to D on Nature; (f) SVI sensitivity to step-size parameters on Nature. Legends have the same top-to-bottom order as the rightmost curve points.]

EP is not suited to LDA. Earlier attempts to apply EP to the LDA model in the non-streaming setting have had mixed success, with [19] in particular finding that EP performance can be poor for LDA and, moreover, that EP requires "unrealistic intermediate storage requirements." We found this to also be true in the streaming setting. We were not able to obtain competitive results with EP: based on an 8-thread implementation of SDA-Bayes with an EP primitive (see footnote 2), after over 91 hours on Wikipedia (and 6.7×10^4 data points), log predictive probability had stabilized at around −7.95, and after over 97 hours on Nature (and 9.7×10^4 data points), log predictive probability had stabilized at around −8.02. Although SDA-Bayes with the EP primitive is not effective for LDA, it remains to be seen whether this combination may be useful in other domains where EP is known to be effective.

5 Discussion

We have introduced SDA-Bayes, a framework for streaming, distributed, asynchronous computation of an approximate Bayesian posterior. Our framework makes streaming updates to the estimated posterior according to a user-specified approximation primitive. We have demonstrated the usefulness of our framework, with variational Bayes as the primitive, by fitting the latent Dirichlet allocation topic model to the Wikipedia and Nature corpora. We have demonstrated the advantages of our algorithm over stochastic variational inference and the sufficient statistics update algorithm, particularly with respect to the key issue of obtaining approximations to posterior probabilities based on the number of documents seen thus far, not posterior probabilities for a fixed number of documents.

Acknowledgments

We thank M. Hoffman, C. Wang, and J. Paisley for discussions, code, and data and our reviewers for helpful comments. TB is supported by the Berkeley Fellowship, NB by a Hertz Foundation Fellowship, and ACW by the Chancellor's Fellowship at UC Berkeley.
This research is supported in part by NSF award CCF-1139158, DARPA Award FA8750-12-2-0331, AMPLab sponsor donations, and the ONR under grant number N00014-11-1-0688.

Footnote 2: We chose 8 threads since any fewer was too slow to get results and anything larger created too high of a memory demand on our system.

References
[1] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Neural Information Processing Systems, 2011.
[2] A. Kleiner, A. Talwalkar, P. Sarkar, and M. Jordan. The big data bootstrap. In International Conference on Machine Learning, 2012.
[3] M. Hoffman, D. M. Blei, and F. Bach. Online learning for latent Dirichlet allocation. In Neural Information Processing Systems, volume 23, pages 856–864, 2010.
[4] M. Hoffman, D. M. Blei, J. Paisley, and C. Wang. Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347, 2013.
[5] C. Wang, J. Paisley, and D. M. Blei. Online variational inference for the hierarchical Dirichlet process. In Artificial Intelligence and Statistics, 2011.
[6] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[7] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, pages 362–369. Morgan Kaufmann, 2001.
[8] T. P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[9] M. Opper. A Bayesian approach to on-line learning. In On-Line Learning in Neural Networks. Cambridge University Press, 1998.
[10] K. R. Canini, L. Shi, and T. L. Griffiths. Online inference of topics with latent Dirichlet allocation. In Artificial Intelligence and Statistics, volume 5, 2009.
[11] A. Honkela and H. Valpola. On-line variational Bayesian learning. In International Symposium on Independent Component Analysis and Blind Signal Separation, pages 803–808, 2003.
[12] J. Luts, T. Broderick, and M. P. Wand. Real-time semiparametric regression. Journal of Computational and Graphical Statistics, to appear. Preprint arXiv:1209.3550.
[13] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[14] T. Minka and J. Lafferty. Expectation-propagation for the generative aspect model. In Uncertainty in Artificial Intelligence, pages 352–359. Morgan Kaufmann, 2002.
[15] Y. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Neural Information Processing Systems, 2006.
[16] A. Asuncion, M. Welling, P. Smyth, and Y. Teh. On smoothing and inference for topic models. In Uncertainty in Artificial Intelligence, 2009.
[17] M. Hoffman. Online inference for LDA (Python code) at http://www.cs.princeton.edu/˜blei/downloads/onlineldavb.tar, 2010.
[18] R. Ranganath, C. Wang, D. M. Blei, and E. P. Xing. An adaptive learning rate for stochastic variational inference. In International Conference on Machine Learning, 2013.
[19] W. L. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In Uncertainty in Artificial Intelligence, 2004.
[20] M. Seeger. Expectation propagation for exponential families. Technical report, University of California at Berkeley, 2005.
One-shot learning by inverting a compositional causal process

Brenden M. Lake, Dept. of Brain and Cognitive Sciences, MIT, brenden@mit.edu
Ruslan Salakhutdinov, Dept. of Statistics and Computer Science, University of Toronto, rsalakhu@cs.toronto.edu
Joshua B. Tenenbaum, Dept. of Brain and Cognitive Sciences, MIT, jbt@mit.edu

Abstract

People can learn a new visual class from just one example, yet machine learning algorithms typically require hundreds or thousands of examples to tackle the same problems. Here we present a Hierarchical Bayesian model based on compositionality and causality that can learn a wide range of natural (although simple) visual concepts, generalizing in human-like ways from just one image. We evaluated performance on a challenging one-shot classification task, where our model achieved a human-level error rate while substantially outperforming two deep learning models. We also tested the model on another conceptual task, generating new examples, by using a "visual Turing test" to show that our model produces human-like performance.

1 Introduction

People can acquire a new concept from only the barest of experience – just one or a handful of examples in a high-dimensional space of raw perceptual input. Although machine learning has tackled some of the same classification and recognition problems that people solve so effortlessly, the standard algorithms require hundreds or thousands of examples to reach good performance. While the standard MNIST benchmark dataset for digit recognition has 6,000 training examples per class [19], people can classify new images of a foreign handwritten character from just one example (Figure 1b) [23, 16, 17].
Similarly, while classifiers are generally trained on hundreds of images per class, using benchmark datasets such as ImageNet [4] and CIFAR-10/100 [14], people can learn a new visual object from just one example (e.g., a "Segway" in Figure 1a).

[Figure 1: Can you learn a new concept from just one example? (a & b) Where are the other examples of the concept shown in red? Answers for b) are row 4 column 3 (left) and row 2 column 4 (right). c) The learned concepts also support many other abilities such as generating examples and parsing.]

[Figure 2: Four alphabets from Omniglot, each with five characters drawn by four different people.]

These new, larger datasets have developed along with larger and "deeper" model architectures, and while performance has steadily (and even spectacularly [15]) improved in this big-data setting, it is unknown how this progress translates to the "one-shot" setting that is a hallmark of human learning [3, 22, 28]. Additionally, while classification has received most of the attention in machine learning, people can generalize in a variety of other ways after learning a new concept. Equipped with the concept "Segway" or a new handwritten character (Figure 1c), people can produce new examples, parse an object into its critical parts, and fill in a missing part of an image. While this flexibility highlights the richness of people's concepts, suggesting they are much more than discriminative features or rules, there are reasons to suspect that such sophisticated concepts would be difficult if not impossible to learn from very sparse data. Theoretical analyses of learning express a tradeoff between the complexity of the representation (or the size of its hypothesis space) and the number of examples needed to reach some measure of "good generalization" (e.g., the bias/variance dilemma [8]).
Given that people seem to succeed at both sides of the tradeoff, a central challenge is to explain this remarkable ability: What types of representations can be learned from just one or a few examples, and how can these representations support such flexible generalizations?

To address these questions, our work here offers two contributions as initial steps. First, we introduce a new set of one-shot learning problems for which humans and machines can be compared side-by-side, and second, we introduce a new algorithm that does substantially better on these tasks than current algorithms. We selected simple visual concepts from the domain of handwritten characters, which offers a large number of novel, high-dimensional, and cognitively natural stimuli (Figure 2). These characters are significantly more complex than the simple artificial stimuli most often modeled in psychological studies of concept learning (e.g., [6, 13]), yet they remain simple enough to hope that a computational model could see all the structure that people do, unlike domains such as natural scenes. We used a dataset we collected called "Omniglot" that was designed for studying learning from a few examples [17, 26]. While similar in spirit to MNIST, rather than having 10 characters with 6,000 examples each, it has over 1,600 characters with 20 examples each, making it more like the "transpose" of MNIST. These characters were selected from 50 different alphabets on www.omniglot.com, which includes scripts from natural languages (e.g., Hebrew, Korean, Greek) and artificial scripts (e.g., Futurama and ULOG) invented for purposes like TV shows or video games. Since it was produced on Amazon's Mechanical Turk, each image is paired with a movie ([x, y, time] coordinates) showing how that drawing was produced.
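Given data organized this way (many character classes with roughly 20 drawings each), the within-alphabet one-shot trials used later in the paper can be assembled mechanically. The sketch below is a hypothetical illustration of that episode structure, not the authors' evaluation code; `make_one_shot_trial` and its toy alphabet are invented names:

```python
import random

def make_one_shot_trial(alphabet, n_way=20, rng=None):
    """alphabet: dict mapping character name -> list of example drawings.
    Returns one n-way, one-shot trial: a test drawing, a support set with
    exactly one training drawing per candidate character, and the answer."""
    rng = rng or random.Random(0)
    chars = rng.sample(sorted(alphabet), n_way)
    target = rng.choice(chars)
    # two distinct drawings of the target: one to test on, one to train on
    test_ex, train_target_ex = rng.sample(alphabet[target], 2)
    support = {c: (train_target_ex if c == target else rng.choice(alphabet[c]))
               for c in chars}
    return test_ex, support, target

# toy "alphabet": 20 characters with 20 synthetic drawings each
toy = {f"char{i}": [f"char{i}_draw{j}" for j in range(20)] for i in range(20)}
test_ex, support, target = make_one_shot_trial(toy)
```

The constraint that the test drawing and the support drawing of the target differ mirrors the paper's requirement that classification be genuinely "one shot."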
In addition to introducing new one-shot learning challenge problems, this paper also introduces Hierarchical Bayesian Program Learning (HBPL), a model that exploits the principles of compositionality and causality to learn a wide range of simple visual concepts from just a single example. We compared the model with people and other competitive computational models for character recognition, including Deep Boltzmann Machines [25] and their Hierarchical Deep extension for learning with very few examples [26]. We find that HBPL classifies new examples with near human-level accuracy, substantially beating the competing models. We also tested the model on generating new exemplars, another natural form of generalization, using a "visual Turing test" to evaluate performance. In this test, both people and the model performed the same task side by side, and then other human participants judged which result was from a person and which was from a machine.

2 Hierarchical Bayesian Program Learning

We introduce a new computational approach called Hierarchical Bayesian Program Learning (HBPL) that utilizes the principles of compositionality and causality to build a probabilistic generative model of handwritten characters. It is compositional because characters are represented as stochastic motor programs where primitive structure is shared and re-used across characters at multiple levels, including strokes and sub-strokes. Given the raw pixels, the model searches for a "structural description" to explain the image by freely combining these elementary parts and their spatial relations.

[Figure 3: An illustration of the HBPL model generating two character types (left and right), where the dotted line separates the type-level from the token-level variables. Legend: number of strokes κ, relations R, primitive id z (color-coded to highlight sharing), control points x (open circles), scale y, start locations L, trajectories T, transformation A, noise ϵ and σ_b, and image I.]

Unlike classic structural description models [27, 2], HBPL also reflects abstract causal structure about how characters are actually produced. This type of causal representation is psychologically plausible, and it has been previously theorized to explain both behavioral and neuro-imaging data regarding human character perception and learning (e.g., [7, 1, 21, 11, 12, 17]). As in most previous "analysis by synthesis" models of characters, strokes are not modeled at the level of muscle movements, so that they are abstract enough to be completed by a hand, a foot, or an airplane writing in the sky. But HBPL also learns a significantly more complex representation than earlier models, which used only one stroke (unless a second was added manually) [24, 10] or received on-line input data [9], sidestepping the challenging parsing problem needed to interpret complex characters.

The model distinguishes between character types (an 'A', 'B', etc.) and tokens (an 'A' drawn by a particular person), where types provide an abstract structural specification for generating different tokens. The joint distribution on types ψ, tokens θ^(m), and binary images I^(m) is given as follows,

    P(ψ, θ^(1), ..., θ^(M), I^(1), ..., I^(M)) = P(ψ) ∏_{m=1}^M P(I^(m) | θ^(m)) P(θ^(m) | ψ).
(1)

Pseudocode to generate from this distribution is shown in the Supporting Information (Section SI-1).

2.1 Generating a character type

A character type ψ = {κ, S, R} is defined by a set of κ strokes S = {S_1, ..., S_κ} and spatial relations R = {R_1, ..., R_κ} between strokes. The joint distribution can be written as

    P(ψ) = P(κ) ∏_{i=1}^κ P(S_i) P(R_i | S_1, ..., S_{i−1}).   (2)

The number of strokes is sampled from a multinomial P(κ) estimated from the empirical frequencies (Figure 4b), and the other conditional distributions are defined in the sections below. All hyperparameters, including the library of primitives (top of Figure 3), were learned from a large "background set" of character drawings as described in Sections 2.3 and SI-4.

Strokes. Each stroke is initiated by pressing the pen down and terminated by lifting the pen up. In between, a stroke is a motor routine composed of simple movements called sub-strokes S_i = {s_i1, ..., s_in_i} (colored curves in Figure 3), where sub-strokes are separated by brief pauses of the pen. Each sub-stroke s_ij is modeled as a uniform cubic b-spline, which can be decomposed into three variables s_ij = {z_ij, x_ij, y_ij} with joint distribution P(S_i) = P(z_i) ∏_{j=1}^{n_i} P(x_ij | z_ij) P(y_ij | z_ij). The discrete class z_ij ∈ N is an index into the library of primitive motor elements (top of Figure 3), and its distribution P(z_i) = P(z_i1) ∏_{j=2}^{n_i} P(z_ij | z_i(j−1)) is a first-order Markov process that adds sub-strokes at each step until a special "stop" state is sampled that ends the stroke. The five control points x_ij ∈ R^10 (small open circles in Figure 3) are sampled from a Gaussian P(x_ij | z_ij) = N(µ_z_ij, Σ_z_ij), but they live in an abstract space not yet embedded in the image frame. The type-level scale y_ij of this space, relative to the image frame, is sampled from P(y_ij | z_ij) = Gamma(α_z_ij, β_z_ij).

Relations. The spatial relation R_i specifies how the beginning of stroke S_i connects to the previous strokes {S_1, ..., S_{i−1}}.
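Before moving on to relations, the stroke-level sampling step just described (a first-order Markov chain over primitive indices z_ij that terminates when a special "stop" state is drawn) can be sketched as follows; the tiny transition table used in the usage note is a toy stand-in for the learned library of 1000 primitives:

```python
import random

STOP = "stop"  # special state that ends the stroke

def sample_substroke_ids(start_probs, trans_probs, rng=None):
    """Sample z_i = (z_i1, ..., z_in_i), the primitive indices of one stroke.

    start_probs: dict primitive_id -> P(z_i1)
    trans_probs: dict primitive_id -> dict of P(z_ij | z_i(j-1)),
                 where the next state may be the special STOP state.
    """
    rng = rng or random.Random(0)

    def draw(probs):
        r, acc = rng.random(), 0.0
        for state, p in probs.items():
            acc += p
            if r < acc:
                return state
        return state  # guard against floating-point rounding

    z = [draw(start_probs)]
    while True:
        nxt = draw(trans_probs[z[-1]])
        if nxt == STOP:
            return z
        z.append(nxt)
```

With a deterministic toy chain (primitive 0 always followed by primitive 1, which always stops), `sample_substroke_ids({0: 1.0}, {0: {1: 1.0}, 1: {STOP: 1.0}})` returns `[0, 1]`.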
The distribution P(R_i | S_1, ..., S_{i−1}) = P(R_i | z_1, ..., z_{i−1}), since it only depends on the number of sub-strokes in each stroke. Relations can come in four types with probabilities θ_R, and each type has different sub-variables and dimensionalities:

• Independent relations, R_i = {J_i, L_i}, where the position of stroke i does not depend on previous strokes. The variable J_i ∈ N is drawn from P(J_i), a multinomial over a 2D image grid that depends on index i (Figure 4c). Since the position L_i ∈ R^2 has to be real-valued, P(L_i | J_i) is then sampled uniformly at random from within the image cell J_i.
• Start or End relations, R_i = {u_i}, where stroke i starts at either the beginning or end of a previous stroke u_i, sampled uniformly at random from u_i ∈ {1, ..., i−1}.
• Along relations, R_i = {u_i, v_i, τ_i}, where stroke i begins along previous stroke u_i ∈ {1, ..., i−1} at sub-stroke v_i ∈ {1, ..., n_u_i} at type-level spline coordinate τ_i ∈ R, each sampled uniformly at random.

2.2 Generating a character token

The token-level variables, θ^(m) = {L^(m), x^(m), y^(m), R^(m), A^(m), σ_b^(m), ϵ^(m)}, are distributed as

    P(θ^(m) | ψ) = P(L^(m) | θ^(m) \ L^(m), ψ) ∏_i P(R_i^(m) | R_i) P(y_i^(m) | y_i) P(x_i^(m) | x_i) P(A^(m), σ_b^(m), ϵ^(m))   (3)

with details below. As before, Sections 2.3 and SI-4 describe how the hyperparameters were learned.

Pen trajectories. A stroke trajectory T_i^(m) (Figure 3) is a sequence of points in the image plane that represents the path of the pen. Each trajectory T_i^(m) = f(L_i^(m), x_i^(m), y_i^(m)) is a deterministic function of a starting location L_i^(m) ∈ R^2, token-level control points x_i^(m) ∈ R^10, and token-level scale y_i^(m) ∈ R. The control points and scale are noisy versions of their type-level counterparts, P(x_ij^(m) | x_ij) = N(x_ij, σ_x^2 I) and P(y_ij^(m) | y_ij) ∝ N(y_ij, σ_y^2), where the scale is truncated below 0.
To construct the trajectory T_i^(m) (see illustration in Figure 3), the spline defined by the scaled control points y_1^(m) x_1^(m) ∈ R^10 is evaluated to form a trajectory (footnote 1), which is shifted in the image plane to begin at L_i^(m). Next, the second spline y_2^(m) x_2^(m) is evaluated and placed to begin at the end of the previous sub-stroke's trajectory, and so on until all sub-strokes are placed.

Token-level relations must be exactly equal to their type-level counterparts, P(R_i^(m) | R_i) = δ(R_i^(m) − R_i), except for the "along" relation, which allows for token-level variability in the attachment along the spline using a truncated Gaussian P(τ_i^(m) | τ_i) ∝ N(τ_i, σ_τ^2). Given the pen trajectories of the previous strokes, the start position L_i^(m) is sampled from P(L_i^(m) | R_i^(m), T_1^(m), ..., T_{i−1}^(m)) = N(g(R_i^(m), T_1^(m), ..., T_{i−1}^(m)), Σ_L), where g(·) = L_i when R_i^(m) is independent (Section 2.1), g(·) = end(T_u_i^(m)) or g(·) = start(T_u_i^(m)) when R_i^(m) is start or end, and g(·) is the proper spline evaluation when R_i^(m) is along.

Footnote 1: The number of spline evaluations is computed to be approximately 2 points for every 3 pixels of distance along the spline (with a minimum of 10 evaluations).

[Figure 4: Learned hyperparameters. a) A subset of primitives, where the top row shows the most common ones. The first control point (circle) is filled. b & c) Empirical distributions, where the heatmap in c) shows how the starting point differs by stroke number.]

Image. An image transformation A^(m) ∈ R^4 is sampled from P(A^(m)) = N([1, 1, 0, 0], Σ_A), where the first two elements control a global re-scaling and the second two control a global translation of the center of mass of T^(m).
The transformed trajectories can then be rendered as a 105×105 grayscale image, using an ink model adapted from [10] (see Section SI-2). This grayscale image is then perturbed by two noise processes, which make the gradient more robust during optimization and encourage partial solutions during classification. These processes include convolution with a Gaussian filter with standard deviation σ_b^(m) and pixel flipping with probability ϵ^(m), where the amounts of noise σ_b^(m) and ϵ^(m) are drawn uniformly on a pre-specified range (Section SI-2). The grayscale pixels then parameterize 105×105 independent Bernoulli distributions, completing the full model of binary images P(I^(m) | θ^(m)) = P(I^(m) | T^(m), A^(m), σ_b^(m), ϵ^(m)).

2.3 Learning high-level knowledge of motor programs

The Omniglot dataset was randomly split into a 30-alphabet "background" set and a 20-alphabet "evaluation" set, constrained such that the background set included the six most common alphabets as determined by Google hits. Background images, paired with their motor data, were used to learn the hyperparameters of the HBPL model, including a set of 1000 primitive motor elements (Figure 4a) and position models for a drawing's first, second, and third stroke, etc. (Figure 4c). Wherever possible, cross-validation (within the background set) was used to decide issues of model complexity within the conditional probability distributions of HBPL. Details are provided in Section SI-4 for learning the models of primitives, positions, relations, token variability, and image transformations.

2.4 Inference

Posterior inference in this model is very challenging, since parsing an image I^(m) requires exploring a large combinatorial space of different numbers and types of strokes, relations, and sub-strokes.
We developed an algorithm for finding K high-probability parses, ψ^[1], θ^(m)[1], ..., ψ^[K], θ^(m)[K], which are the most promising candidates proposed by a fast, bottom-up image analysis, shown in Figure 5a and detailed in Section SI-5. These parses approximate the posterior with a discrete distribution,

    P(ψ, θ^(m) | I^(m)) ≈ Σ_{i=1}^K w_i δ(θ^(m) − θ^(m)[i]) δ(ψ − ψ^[i]),   (4)

where each weight w_i is proportional to the parse score, marginalizing over the shape variables x,

    w_i ∝ w̃_i = P(ψ^[i] \ x, θ^(m)[i], I^(m))   (5)

and constrained such that Σ_i w_i = 1. Rather than using just a point estimate for each parse, the approximation can be improved by incorporating some of the local variance around the parse. The token-level variables θ^(m), which closely track the image, allow for little variability, and it is inexpensive to draw conditional samples from the type-level P(ψ | θ^(m)[i], I^(m)) = P(ψ | θ^(m)[i]), as doing so does not require evaluating the likelihood of the image; therefore, just the local variance around the type-level is estimated with the token-level fixed. Metropolis–Hastings is run to produce N samples (Section SI-5.5) for each parse θ^(m)[i], denoted ψ^[i1], ..., ψ^[iN], where the improved approximation is

    P(ψ, θ^(m) | I^(m)) ≈ Q(ψ, θ^(m), I^(m)) = Σ_{i=1}^K w_i δ(θ^(m) − θ^(m)[i]) (1/N) Σ_{j=1}^N δ(ψ − ψ^[ij]).
(6)

[Figure 5: Parsing a raw image. a) The raw image (i) is processed by a thinning algorithm [18] (ii) and then analyzed as an undirected graph [20] (iii), where parses are guided random walks (Section SI-5). b) The five best parses found for that image (top row) are shown with their log w_j (Eq. 5), where numbers inside circles denote stroke order and starting position, and smaller open circles denote sub-stroke breaks. These five parses were re-fit to three different raw images of characters (left in image triplets), where the best parse (top right) and its associated image reconstruction (bottom right) are shown above its score (Eq. 9).]

Given an approximate posterior for a particular image, the model can evaluate the posterior predictive score of a new image by re-fitting the token-level variables (bottom of Figure 5b), as explained in Section 3.1 on inference for one-shot classification.

3 Results

3.1 One-shot classification

People, HBPL, and several alternative models were evaluated on a set of 10 challenging one-shot classification tasks. The tasks tested within-alphabet classification on 10 alphabets, with examples in Figure 2 and detailed in Section SI-6. Each trial (of 400 total) consists of a single test image of a new character compared to 20 new characters from the same alphabet, given just one image each produced by a typical drawer of that alphabet. Figure 1b shows two example trials.

People. Forty participants in the USA were tested on one-shot classification using Mechanical Turk. On each trial, as in Figure 1b, participants were shown an image of a new character and asked to click on another image that shows the same character. To ensure classification was indeed "one shot," participants completed just one randomly selected trial from each of the 10 within-alphabet classification tasks, so that characters never repeated across trials. There was also an instructions quiz, two practice trials with the Latin and Greek alphabets, and feedback after every trial.

Hierarchical Bayesian Program Learning. For a test image I^(T) and 20 training images I^(c) for c = 1, ..., 20, we use a Bayesian classification rule for which we compute an approximate solution

    argmax_c log P(I^(T) | I^(c)).
(7)

Intuitively, the approximation uses the HBPL search algorithm to get K = 5 parses of I^(c), runs K MCMC chains to estimate the local type-level variability around each parse, and then runs K gradient-based searches to re-optimize the token-level variables θ^(T) (all are continuous) to fit the test image I^(T). The approximation can be written as (see Section SI-7 for derivation)

    log P(I^(T) | I^(c)) ≈ log ∫ P(I^(T) | θ^(T)) P(θ^(T) | ψ) Q(θ^(c), ψ, I^(c)) dψ dθ^(c) dθ^(T)   (8)
                         ≈ log Σ_{i=1}^K w_i max_{θ^(T)} P(I^(T) | θ^(T)) (1/N) Σ_{j=1}^N P(θ^(T) | ψ^[ij]),   (9)

where Q(·, ·, ·) and w_i are from Eq. 6. Figure 5b shows examples of this classification score. While inference so far involves parses of I^(c) refit to I^(T), it also seems desirable to include parses of I^(T) refit to I^(c), namely P(I^(c) | I^(T)). We can re-write our classification rule (Eq. 7) to include just the reverse term (Eq. 10 center), and then to include both terms (Eq. 10 right), which is the rule we use,

    argmax_c log P(I^(T) | I^(c)) = argmax_c log [ P(I^(c) | I^(T)) / P(I^(c)) ] = argmax_c log [ (P(I^(c) | I^(T)) / P(I^(c))) P(I^(T) | I^(c)) ],   (10)

where P(I^(c)) ≈ Σ_i w̃_i from Eq. 5. These three rules are equivalent if inference is exact, but due to our approximation, the two-way rule performs better as judged by pilot results.

Affine model. The full HBPL model is compared to a transformation-based approach that models the variance in image tokens as just global scales, translations, and blur, which relates to congealing models [23]. This HBPL model "without strokes" still benefits from good bottom-up image analysis (Figure 5) and a learned transformation model. The Affine model is identical to HBPL during search, but during classification, only the warp A^(m), blur σ_b^(m), and noise ϵ^(m) are re-optimized to a new image (change the argument of "max" in Eq. 9 from θ^(T) to {A^(T), σ_b^(T), ϵ^(T)}).

Deep Boltzmann Machines (DBMs).
A Deep Boltzmann Machine with three hidden layers of 1000 hidden units each was generatively pre-trained on an enhanced background set using the approximate learning algorithm from [25]. To evaluate classification performance, the approximate posterior distribution over the DBM's top-level features was first inferred for each image in the evaluation set, followed by 1-nearest-neighbor classification in this feature space using cosine similarity. To speed up learning of the DBM and HD models, the original images were down-sampled, so that each image was represented by 28x28 pixels with greyscale values in [0,1]. To further reduce overfitting and learn more about the 2D image topology, which is built in to some deep models like convolutional networks [19], the set of background characters was artificially enhanced by generating slight image translations (+/- 3 pixels), rotations (+/- 5 degrees), and scales (0.9 to 1.1).

Hierarchical Deep Model (HD). A more elaborate Hierarchical Deep model is derived by composing hierarchical nonparametric Bayesian models with Deep Boltzmann Machines [26]. The HD model learns a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a Deep Boltzmann Machine, which allows one to represent both a layered hierarchy of increasingly abstract features and a tree-structured hierarchy of super-classes for sharing abstract knowledge among related classes. Given a new test image, the approximate posterior over class assignments can be quickly inferred, as detailed in [26].

Simple Strokes (SS). A much simpler variant of HBPL that infers rigid "stroke-like" parts [16].

Nearest neighbor (NN). Raw images are directly compared using cosine similarity and 1-NN.

Table 1: One-shot classifiers

Learner   Error rate
Humans    4.5%
HBPL      4.8%
Affine    18.2 (31.8%)
HD        34.8 (68.3%)
DBM       38 (72%)
SS        62.5%
NN        78.3%

Results. Performance is summarized in Table 1.
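Both the DBM evaluation and the raw-pixel NN baseline reduce to 1-nearest-neighbor classification with cosine similarity. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def one_nn_cosine(test_feat, train_feats, train_labels):
    """Return the label of the training vector with the highest cosine
    similarity to the test vector (1-NN in feature or pixel space)."""
    train = np.asarray(train_feats, dtype=float)
    t = np.asarray(test_feat, dtype=float)
    sims = (train @ t) / (np.linalg.norm(train, axis=1) * np.linalg.norm(t))
    return train_labels[int(np.argmax(sims))]
```

For the DBM, `train_feats` would hold the inferred top-level feature vectors; for the NN baseline, the raw image pixels.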
As predicted, people were skilled one-shot learners, with an average error rate of 4.5%. HBPL achieved a similar error rate of 4.8%, which was significantly better than the alternatives. The Affine model achieved an error rate of 18.2% with the classification rule in Eq. 10 left, while performance was 31.8% error with Eq. 10 right. The deep learning models performed at 34.8% and 38% error, although performance was much worse without pre-training (68.3% and 72%). The Simple Strokes and Nearest Neighbor models had the highest error rates.

3.2 One-shot generation of new examples

Not only can people classify new examples, they can also generate new examples, even from just one image. While all generative classifiers can produce examples, it can be difficult to synthesize a range of compelling new examples in their raw form, especially since many models generate only features of raw stimuli (e.g., [5]). While DBMs [25] can generate realistic digits after training on thousands of examples, how well do these and other models perform from just a single training image? We ran another Mechanical Turk task to produce nine new examples of 50 randomly selected handwritten character images from the evaluation set. Three of these images are shown in the leftmost column of Figure 6. After correctly answering comprehension questions, 18 participants in the USA were asked to "draw a new example" of 25 characters each, resulting in nine examples per character. To simulate drawings from nine different people, each of the models generated nine samples after seeing exactly the same images people did, as described in Section SI-8 and shown in Figure 6. Low-level image differences were minimized by re-rendering stroke trajectories in the same way for the models and people. Since the HD model does not always produce well-articulated strokes, it was not quantitatively analyzed, although there are clear qualitative differences between these and the human-produced images (Figure 6).
[Figure 6: Generating new examples from just a single "target" image (left). Each grid shows nine new examples synthesized by people and the three computational models (HBPL, Affine, HD).]

Visual Turing test. To compare the examples generated by people and the models, we ran a visual Turing test using 50 new participants in the USA on Mechanical Turk. Participants were told that they would see a target image and two grids of 9 images (Figure 6), where one grid was drawn by people with their computer mice and the other grid was drawn by a computer program that "simulates how people draw a new character." Which grid is which? There were two conditions, where the "computer program" was either HBPL or the Affine model. Participants were quizzed on their comprehension and then saw 50 trials. Accuracy was revealed after each block of 10 trials. Also, a button to review the instructions was always accessible. Four participants who reported technical difficulties were not analyzed.

Results. Participants who tried to label drawings from people vs. HBPL were only 56% correct, while those who tried to label people vs. the Affine model were 92% correct. A two-way analysis of variance showed a significant effect of condition (p < .001), but no significant effect of block and no interaction. While both group means were significantly better than chance, a subject analysis revealed that only 2 of 21 participants were better than chance for people vs. HBPL, while 24 of 25 were for people vs. Affine. Likewise, 8 of 50 items were above chance for people vs. HBPL, while 48 of 50 items were above chance for people vs. Affine. Since participants could easily detect the overly consistent Affine model, it seems the difficulty participants had in detecting HBPL's exemplars was not due to task confusion. Interestingly, participants did not significantly improve over the trials, even after seeing hundreds of images from the model.
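The per-subject above-chance analysis can be illustrated with a simple binomial test over 50 two-alternative trials; this is a sketch of the idea, not necessarily the exact test the study used:

```python
from scipy.stats import binomtest

def above_chance(correct, trials=50, alpha=0.05):
    """Is observed accuracy on two-alternative trials significantly
    better than the 50% guessing rate?"""
    return binomtest(correct, trials, p=0.5, alternative='greater').pvalue < alpha

# 28/50 correct (56%, as for typical people-vs-HBPL judges) is consistent
# with guessing, while 46/50 (92%, as for people-vs-Affine judges) is not.
```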
Our results suggest that HBPL can generate compelling new examples that fool a majority of participants.

4 Discussion

Hierarchical Bayesian Program Learning (HBPL), by exploiting compositionality and causality, departs from standard models that need far more data to learn new concepts. From just one example, HBPL can both classify and generate compelling new examples, fooling judges in a "visual Turing test" that other approaches could not pass. Beyond the differences in model architecture, HBPL was also trained on the causal dynamics behind images, although just the images were available at evaluation time. If one were to incorporate this compositional and causal structure into a deep learning model, it could lead to better performance on our tasks. Thus, we do not see our model as the final word on how humans learn concepts, but rather as a suggestion for the type of structure that best captures how people learn rich concepts from very sparse data. Future directions will extend this approach to other natural forms of generalization with characters, as well as to speech, gesture, and other domains where compositionality and causality are central.

Acknowledgments

We would like to thank MIT CoCoSci for helpful feedback. This work was supported by ARO MURI contract W911NF-08-1-0242 and an NSF Graduate Research Fellowship held by the first author.

References

[1] M. K. Babcock and J. Freyd. Perception of dynamic information in static handwritten forms. American Journal of Psychology, 101(1):111–130, 1988.
[2] I. Biederman. Recognition-by-components: a theory of human image understanding. Psychological Review, 94(2):115–47, 1987.
[3] S. Carey and E. Bartlett. Acquiring a single new word. Papers and Reports on Child Language Development, 15:17–29, 1978.
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[5] L. Fei-Fei, R.
Fergus, and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594–611, 2006.
[6] J. Feldman. The structure of perceptual categories. Journal of Mathematical Psychology, 41:145–170, 1997.
[7] J. Freyd. Representing the dynamics of a static form. Memory and Cognition, 11(4):342–346, 1983.
[8] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4:1–58, 1992.
[9] E. Gilet, J. Diard, and P. Bessière. Bayesian action-perception computational model: interaction of production and recognition of cursive letters. PLoS ONE, 6(6), 2011.
[10] G. E. Hinton and V. Nair. Inferring motor programs from images of handwritten digits. In Advances in Neural Information Processing Systems 19, 2006.
[11] K. H. James and I. Gauthier. Letter processing automatically recruits a sensory-motor brain network. Neuropsychologia, 44(14):2937–2949, 2006.
[12] K. H. James and I. Gauthier. When writing impairs reading: letter perception's susceptibility to motor interference. Journal of Experimental Psychology: General, 138(3):416–31, 2009.
[13] C. Kemp and A. Jern. Abstraction and relational learning. In Advances in Neural Information Processing Systems 22, 2009.
[14] A. Krizhevsky. Learning multiple layers of features from tiny images. PhD thesis, University of Toronto, 2009.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, 2012.
[16] B. M. Lake, R. Salakhutdinov, J. Gross, and J. B. Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011.
[17] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Concept learning as motor program induction: A large-scale empirical study. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012.
[18] L.
Lam, S.-W. Lee, and C. Y. Suen. Thinning methodologies: a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(9):869–885, 1992.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2323, 1998.
[20] K. Liu, Y. S. Huang, and C. Y. Suen. Identification of fork points on the skeletons of handwritten Chinese characters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10):1095–1100, 1999.
[21] M. Longcamp, J. L. Anton, M. Roth, and J. L. Velay. Visual presentation of single letters activates a premotor area involved in writing. Neuroimage, 19(4):1492–1500, 2003.
[22] E. M. Markman. Categorization and Naming in Children. MIT Press, Cambridge, MA, 1989.
[23] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transformations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000.
[24] M. Revow, C. K. I. Williams, and G. E. Hinton. Using generative models for handwritten digit recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):592–606, 1996.
[25] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In 12th International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[26] R. Salakhutdinov, J. B. Tenenbaum, and A. Torralba. Learning with hierarchical-deep models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1958–71, 2013.
[27] P. H. Winston. Learning structural descriptions from examples. In P. H. Winston, editor, The Psychology of Computer Vision. McGraw-Hill, New York, 1975.
[28] F. Xu and J. B. Tenenbaum. Word learning as Bayesian inference. Psychological Review, 114(2):245–272, 2007.
Large Scale Distributed Sparse Precision Estimation

Huahua Wang, Arindam Banerjee
Dept. of Computer Science & Engg, University of Minnesota, Twin Cities
{huwang,banerjee}@cs.umn.edu

Cho-Jui Hsieh, Pradeep Ravikumar, Inderjit S. Dhillon
Dept. of Computer Science, University of Texas, Austin
{cjhsieh,pradeepr,inderjit}@cs.utexas.edu

Abstract

We consider the problem of sparse precision matrix estimation in high dimensions using the CLIME estimator, which has several desirable theoretical properties. We present an inexact alternating direction method of multipliers (ADMM) algorithm for CLIME, and establish rates of convergence for both the objective and optimality conditions. Further, we develop a large scale distributed framework for the computations, which scales to millions of dimensions and trillions of parameters, using hundreds of cores. The proposed framework solves CLIME in column blocks and only involves elementwise operations and parallel matrix multiplications. We evaluate our algorithm on both shared-memory and distributed-memory architectures, which can use block cyclic distribution of data and parameters to achieve load balance and improve the efficiency in the use of memory hierarchies. Experimental results show that our algorithm is substantially more scalable than state-of-the-art methods and scales almost linearly with the number of cores.

1 Introduction

Consider a $p$-dimensional probability distribution with true covariance matrix $\Sigma_0 \in S^p_{++}$ and true precision (or inverse covariance) matrix $\Omega_0 = \Sigma_0^{-1} \in S^p_{++}$. Let $[R_1 \cdots R_n] \in \Re^{p \times n}$ be $n$ independent and identically distributed random samples drawn from this $p$-dimensional distribution. The centered, normalized sample matrix $A = [a_1 \cdots a_n] \in \Re^{p \times n}$ can be obtained as $a_i = \frac{1}{\sqrt{n}}(R_i - \bar R)$, where $\bar R = \frac{1}{n}\sum_i R_i$, so that the sample covariance matrix can be computed as $C = AA^T$.
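The centering and normalization above can be written out directly; a NumPy sketch (names ours) confirming that $AA^T$ equals the biased sample covariance:

```python
import numpy as np

def centered_normalized(R):
    """Given raw samples R of shape (p, n), return A with columns
    a_i = (R_i - Rbar) / sqrt(n), so that C = A @ A.T."""
    n = R.shape[1]
    return (R - R.mean(axis=1, keepdims=True)) / np.sqrt(n)

R = np.array([[1.0, 2.0, 3.0], [4.0, 6.0, 8.0]])  # p = 2 variables, n = 3 samples
A = centered_normalized(R)
C = A @ A.T
assert np.allclose(C, np.cov(R, bias=True))  # matches the biased sample covariance
```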
In recent years, considerable effort has been invested in obtaining an accurate estimate of the precision matrix $\hat\Omega$ based on the sample covariance matrix $C$ in the 'low sample, high dimensions' setting, i.e., $n \ll p$, especially when the true precision $\Omega_0$ is assumed to be sparse [28]. Suitable estimators and corresponding statistical convergence rates have been established for a variety of settings, including distributions with sub-Gaussian and polynomial tails [25, 3, 19]. Recent advances have also established parameter-free methods which achieve minimax rates of convergence [4, 19]. Spurred by these advances in the statistical theory of precision matrix estimation, there has been considerable recent work on developing computationally efficient optimization methods for solving the corresponding statistical estimation problems; see [1, 8, 14, 21, 13] and references therein. While these methods are able to efficiently solve problems with up to a few thousand variables, ultra-large-scale problems with millions of variables remain a challenge. Note further that in precision matrix estimation, the number of parameters scales quadratically with the number of variables, so that with a million dimensions $p = 10^6$, the total number of parameters to be estimated is a trillion, $p^2 = 10^{12}$. The focus of this paper is on designing an efficient distributed algorithm for precision matrix estimation under such ultra-large-scale settings. We focus on the CLIME statistical estimator [3], which solves the following linear program (LP):

$$\min \|\hat\Omega\|_1 \quad \text{s.t.} \quad \|C\hat\Omega - I\|_\infty \le \lambda, \qquad (1)$$

where $\lambda > 0$ is a tuning parameter. The CLIME estimator not only has strong statistical guarantees [3], but also comes with inherent computational advantages. First, the LP in (1) does not explicitly enforce positive definiteness of $\hat\Omega$, which can be a challenge to handle efficiently in high dimensions. Second, (1) can be decomposed into $p$ independent LPs, one for each column of $\hat\Omega$.
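For illustration, a single column of (1) is the LP $\min \|x\|_1$ s.t. $\|Cx - e\|_\infty \le \lambda$, which any off-the-shelf LP solver can handle via the standard split $x = x^+ - x^-$ with $x^+, x^- \ge 0$. This sketch uses SciPy and is not the paper's ADMM solver; names are ours:

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(C, e, lam):
    """Solve min ||x||_1  s.t.  ||C x - e||_inf <= lam  as an LP in the
    nonnegative variables (x+, x-), with x = x+ - x-."""
    p = C.shape[0]
    A_ub = np.block([[C, -C], [-C, C]])        # C(x+ - x-) - e <= lam
    b_ub = np.concatenate([lam + e, lam - e])  # e - C(x+ - x-) <= lam
    res = linprog(c=np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.x[:p] - res.x[p:]

# With C = I, e = (1, 0), lam = 0.5, the sparsest feasible x is (0.5, 0).
x = clime_column(np.eye(2), np.array([1.0, 0.0]), 0.5)
assert np.allclose(x, [0.5, 0.0], atol=1e-6)
```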
This separable structure has motivated solvers for (1) that solve the LP column-by-column using interior point methods [3, 28] or the alternating direction method of multipliers (ADMM) [18]. However, these solvers do not scale well to ultra-high-dimensional problems: they are not designed to run on hundreds to thousands of cores, and in particular they require the entire sample covariance matrix C to be loaded into the memory of a single machine, which is impractical even for moderately sized problems. In this paper, we present an efficient CLIME-ADMM variant along with a scalable distributed framework for the computations [2, 26]. The proposed CLIME-ADMM algorithm can scale up to millions of dimensions, and can use up to thousands of cores in a shared-memory or distributed-memory architecture. The scalability of our method relies on the following key innovations. First, we propose an inexact ADMM [27, 12] algorithm targeted to CLIME, where each step is either elementwise parallel or involves suitable matrix multiplications. We show that the rates of convergence of both the objective to the optimum and the residuals of constraint violation are O(1/T). Second, we solve (1) in column blocks of the precision matrix, rather than one column at a time. Since (1) already decomposes columnwise, solving multiple columns together in blocks might not seem worthwhile. However, as we show, our CLIME-ADMM working with column blocks uses matrix-matrix multiplications which, building on the existing literature [15, 5, 11] and the low-rank and sparse structure inherent in the precision matrix estimation problem, can be made substantially more efficient than repeated matrix-vector multiplications. Moreover, the matrix multiplication can be further simplified into block-by-block operations, which allows choosing optimal block sizes to minimize cache misses, leading to high scalability and performance [16, 5, 15].
Lastly, since the core computations can be parallelized, CLIME-ADMM scales almost linearly with the number of cores. We experiment with shared-memory and distributed-memory architectures to illustrate this point. Empirically, CLIME-ADMM is shown to be much faster than existing methods for precision estimation and scales well to high-dimensional problems; e.g., we estimate a precision matrix of one million dimensions and one trillion parameters in 11 hours by running the algorithm on 400 cores. Our framework can be positioned as part of the recent surge of effort in scaling up machine learning algorithms [29, 22, 6, 7, 20, 2, 23, 9] to "Big Data". Scaling up machine learning algorithms through parallelization and distribution has been heavily explored on various architectures, including shared-memory architectures [22], distributed-memory architectures [23, 6, 9], and GPUs [24]. Since MapReduce [7] is not efficient for optimization algorithms, [6] proposed a parameter server that can be used to parallelize gradient descent algorithms for unconstrained optimization problems. However, this framework is ill-suited for the constrained optimization problems we consider here, because gradient descent methods require a projection at each iteration that involves all variables and thus ruins the parallelism. In other recent related work based on ADMM, [23] introduces graph projection block splitting (GPBS) to split data into blocks so that examples and features can be distributed among multiple cores. Our framework uses a more general blocking scheme (block cyclic distribution), which provides more options in choosing the optimal block size to improve the efficiency in the use of memory hierarchies and minimize cache misses [16, 15, 5]. ADMM has also been used to solve constrained optimization in a distributed framework [9] for graphical model inference, but that work considers local constraints, in contrast to the global constraints in our framework.
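The payoff of column blocks is that $k$ matrix-vector products collapse into one matrix-matrix product, which optimized BLAS executes with far better cache reuse; the two are numerically identical. A quick NumPy check with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 300, 8
C = rng.standard_normal((p, p))
X = rng.standard_normal((p, k))          # one column block of k columns

block = C @ X                            # single matrix-matrix product
cols = np.stack([C @ X[:, j] for j in range(k)], axis=1)  # k column solves
assert np.allclose(block, cols)
```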
Notation: A matrix is denoted by a bold-face upper-case letter, e.g., A. An element of a matrix is denoted by an upper-case letter with row index i and column index j, e.g., $A_{ij}$ is the ij-th element of A. A block of a matrix is denoted by a bold-face lower-case letter indexed by ij, e.g., $\mathbf{a}_{ij}$. $\vec{A}_{ij}$ represents a collection of blocks of matrix A on the ij-th core (see block cyclic distribution in Section 4). $A'$ refers to the transpose of A. All matrix norms used are elementwise, e.g., $\|A\|_1 = \sum_{i=1}^{p}\sum_{j=1}^{n} |A_{ij}|$, $\|A\|_2^2 = \sum_{i=1}^{p}\sum_{j=1}^{n} A_{ij}^2$, $\|A\|_\infty = \max_{1 \le i \le p, 1 \le j \le n} |A_{ij}|$. The matrix inner product is defined elementwise, e.g., $\langle A, B \rangle = \sum_{i=1}^{p}\sum_{j=1}^{n} A_{ij} B_{ij}$. $X \in \Re^{p \times k}$ denotes $k$ ($1 \le k \le p$) columns of the precision matrix $\hat\Omega$, and $E \in \Re^{p \times k}$ denotes the same $k$ columns of the identity matrix $I \in \Re^{p \times p}$. Let $\lambda_{\max}(C)$ be the largest eigenvalue of the covariance matrix $C$.

Algorithm 1 Column Block ADMM for CLIME
1: Input: $C, \lambda, \rho, \eta$
2: Output: $X$
3: Initialization: $X_0, Z_0, Y_0, V_0, \hat V_0 = 0$
4: for $t = 0$ to $T - 1$ do
5:   X-update: $X_{t+1} = \mathrm{soft}(X_t - V_t, \frac{1}{\eta})$
6:   Mat-Mul: sparse: $U_{t+1} = C X_{t+1}$; low rank: $U_{t+1} = A(A' X_{t+1})$
7:   Z-update: $Z_{t+1} = \mathrm{box}(U_{t+1} + Y_t, E, \lambda)$
8:   Y-update: $Y_{t+1} = Y_t + U_{t+1} - Z_{t+1}$
9:   Mat-Mul: sparse: $\hat V_{t+1} = C Y_{t+1}$; low rank: $\hat V_{t+1} = A(A' Y_{t+1})$
10:  V-update: $V_{t+1} = \frac{\rho}{\eta}(2 \hat V_{t+1} - \hat V_t)$
11: end for

where soft and box are the elementwise operators

$$\mathrm{soft}(X, \gamma)_{ij} = \begin{cases} X_{ij} - \gamma, & \text{if } X_{ij} > \gamma, \\ X_{ij} + \gamma, & \text{if } X_{ij} < -\gamma, \\ 0, & \text{otherwise,} \end{cases} \qquad \mathrm{box}(X, E, \lambda)_{ij} = \begin{cases} E_{ij} + \lambda, & \text{if } X_{ij} - E_{ij} > \lambda, \\ X_{ij}, & \text{if } |X_{ij} - E_{ij}| \le \lambda, \\ E_{ij} - \lambda, & \text{if } X_{ij} - E_{ij} < -\lambda. \end{cases}$$

2 Column Block ADMM for CLIME

In this section, we propose an algorithm that estimates the precision matrix in column blocks instead of column-by-column. Assuming a column block contains $k$ ($1 \le k \le p$) columns, sparse precision matrix estimation amounts to solving $\lceil p/k \rceil$ independent linear programs. Denoting by $X \in \Re^{p \times k}$ $k$ columns of $\hat\Omega$, (1) can be written as

$$\min \|X\|_1 \quad \text{s.t.} \quad \|CX - E\|_\infty \le \lambda, \qquad (2)$$

which can be rewritten in the following equality-constrained form:

$$\min \|X\|_1 \quad \text{s.t.} \quad \|Z - E\|_\infty \le \lambda, \quad CX = Z.$$
(3)

Through the splitting variable $Z \in \Re^{p \times k}$, the infinity norm constraint becomes a box constraint and is separated from the $\ell_1$ norm objective. We use ADMM to solve (3). The augmented Lagrangian of (3) is

$$L_\rho = \|X\|_1 + \rho\langle Y, CX - Z\rangle + \frac{\rho}{2}\|CX - Z\|_2^2, \qquad (4)$$

where $Y \in \Re^{p \times k}$ is a scaled dual variable and $\rho > 0$. ADMM yields the following iterates [2]:

$$X_{t+1} = \arg\min_X \|X\|_1 + \frac{\rho}{2}\|CX - Z_t + Y_t\|_2^2, \qquad (5)$$
$$Z_{t+1} = \arg\min_{\|Z - E\|_\infty \le \lambda} \frac{\rho}{2}\|CX_{t+1} - Z + Y_t\|_2^2, \qquad (6)$$
$$Y_{t+1} = Y_t + CX_{t+1} - Z_{t+1}. \qquad (7)$$

As a Lasso problem, (5) could be solved using existing Lasso algorithms, but that would lead to a double-loop algorithm. (5) does not have a closed-form solution, since $C$ in the quadratic penalty term couples the entries of $X$. We decouple $X$ by linearizing the quadratic penalty term and adding a proximal term:

$$X_{t+1} = \arg\min_X \|X\|_1 + \eta\langle V_t, X\rangle + \frac{\eta}{2}\|X - X_t\|_2^2, \qquad (8)$$

where $V_t = \frac{\rho}{\eta} C(Y_t + CX_t - Z_t)$ and $\eta > 0$. (8) is usually called an inexact ADMM update. Using (7), $V_t = \frac{\rho}{\eta} C(2Y_t - Y_{t-1})$. Letting $\hat V_t = CY_t$, we have $V_t = \frac{\rho}{\eta}(2\hat V_t - \hat V_{t-1})$. (8) has the closed-form solution

$$X_{t+1} = \mathrm{soft}\Big(X_t - V_t, \frac{1}{\eta}\Big), \qquad (9)$$

where soft denotes soft-thresholding, defined in Step 5 of Algorithm 1. Let $U_{t+1} = CX_{t+1}$. (6) is a box-constrained quadratic program with the closed-form solution

$$Z_{t+1} = \mathrm{box}(U_{t+1} + Y_t, E, \lambda), \qquad (10)$$

where box denotes the projection onto the infinity norm constraint $\|Z - E\|_\infty \le \lambda$, defined in Step 7 of Algorithm 1. In particular, if $\|U_{t+1} + Y_t - E\|_\infty \le \lambda$, then $Z_{t+1} = U_{t+1} + Y_t$ and thus $Y_{t+1} = Y_t + U_{t+1} - Z_{t+1} = 0$. The ADMM algorithm for CLIME is summarized in Algorithm 1. In Algorithm 1, steps 5, 7, 8, and 10 amount to elementwise operations costing $O(pk)$ operations, while steps 6 and 9 involve matrix multiplication, which is the most computationally intensive part and costs $O(p^2 k)$ operations. The memory requirement includes $O(pn)$ for $A$ and $O(pk)$ for the other six variables.
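Algorithm 1 translates nearly line-for-line into NumPy. A minimal dense sketch for one column block (our naming; the distributed version would replace `C @ X` with `A @ (A.T @ X)` and parallel kernels):

```python
import numpy as np

def soft(X, gamma):
    """Elementwise soft-thresholding (Step 5 of Algorithm 1)."""
    return np.sign(X) * np.maximum(np.abs(X) - gamma, 0.0)

def box(X, E, lam):
    """Elementwise projection onto ||Z - E||_inf <= lam (Step 7)."""
    return np.clip(X, E - lam, E + lam)

def clime_admm(C, E, lam, rho=1.0, eta=None, T=500):
    """Inexact ADMM (Algorithm 1) for min ||X||_1 s.t. ||CX - E||_inf <= lam."""
    p, k = E.shape
    if eta is None:
        eta = rho * float(np.linalg.eigvalsh(C).max()) ** 2  # eta >= rho*lambda_max(C)^2
    X = np.zeros((p, k)); Y = np.zeros((p, k))
    V = np.zeros((p, k)); Vhat = np.zeros((p, k))
    for _ in range(T):
        X = soft(X - V, 1.0 / eta)                 # step 5
        U = C @ X                                  # step 6
        Z = box(U + Y, E, lam)                     # step 7
        Y = Y + U - Z                              # step 8
        Vhat, Vhat_old = C @ Y, Vhat               # step 9
        V = (rho / eta) * (2 * Vhat - Vhat_old)    # step 10
    return X

# Sanity check: for C = I, each column's solution is soft(e, lam).
X = clime_admm(np.eye(2), np.eye(2), lam=0.5, T=50)
assert np.allclose(X, 0.5 * np.eye(2), atol=1e-6)
```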
As the following results show, Algorithm 1 has an $O(1/T)$ convergence rate for both the objective function and the residuals of the optimality conditions. The proof technique is similar to [26]. [12] shows a result similar to Theorem 2 but uses a different proof technique. For proofs, please see Appendix A in the supplement.

Theorem 1 Let $\{X_t, Z_t, Y_t\}$ be generated by Algorithm 1 and $\bar X_T = \frac{1}{T}\sum_{t=1}^{T} X_t$. Assume $X_0 = Z_0 = Y_0 = 0$ and $\eta \ge \rho\lambda_{\max}^2(C)$. For any $CX = Z$, we have

$$\|\bar X_T\|_1 - \|X\|_1 \le \frac{\eta \|X\|_2^2}{2T}. \qquad (11)$$

Theorem 2 Let $\{X_t, Z_t, Y_t\}$ be generated by Algorithm 1 and $\{X^*, Z^*, Y^*\}$ be a KKT point for the Lagrangian of (3). Assume $X_0 = Z_0 = Y_0 = 0$ and $\eta \ge \rho\lambda_{\max}^2(C)$. We have

$$\|CX_T - Z_T\|_2^2 + \|Z_T - Z_{T-1}\|_2^2 + \|X_T - X_{T-1}\|^2_{\frac{\eta}{\rho}I - C^2} \le \frac{\|Y^*\|_2^2 + \frac{\eta}{\rho}\|X^*\|_2^2}{T}. \qquad (12)$$

3 Leveraging Sparse, Low-Rank Structure

In this section, we consider a few directions that further leverage the underlying structure of the problem, specifically sparse and low-rank structure.

3.1 Sparse Structure

As we detail here, there can be sparsity in the intermediate iterates, or in the sample covariance matrix itself (or a perturbed version thereof), which can be exploited to make our CLIME-ADMM variant more efficient.

Iterate Sparsity: As the iterations progress, the soft-thresholding operation will yield a sparse $X_{t+1}$, which can help speed up step 6, $U_{t+1} = CX_{t+1}$, via sparse matrix multiplication. Further, the box-thresholding operation will yield a sparse $Y_{t+1}$. In the ideal case, if $\|U_{t+1} + Y_t - E\|_\infty \le \lambda$ in step 7, then $Z_{t+1} = U_{t+1} + Y_t$ and thus $Y_{t+1} = Y_t + U_{t+1} - Z_{t+1} = 0$. More generally, $Y_{t+1}$ will become sparse as the iterations proceed, which can help speed up step 9, $\hat V_{t+1} = CY_{t+1}$.

Sample Covariance Sparsity: We show that one can "perturb" the sample covariance to obtain a sparse and coarsened matrix, solve CLIME with this perturbed matrix, and yet retain strong statistical guarantees.
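One such perturbation is hard thresholding of small entries; a sketch of the rule (the function name and the constant `c` are ours, for illustration):

```python
import numpy as np

def truncate_covariance(C, n, c=0.5):
    """Zero out entries with |C_ij| <= c * sqrt(log p / n); a perturbation of
    this size keeps the deviation bound that CLIME's guarantees require."""
    p = C.shape[0]
    tau = c * np.sqrt(np.log(p) / n)
    return np.where(np.abs(C) > tau, C, 0.0)

C = np.array([[1.0, 0.01], [0.01, 1.0]])
Cs = truncate_covariance(C, n=10)     # threshold ~0.13: off-diagonals vanish
assert Cs[0, 1] == 0.0 and Cs[0, 0] == 1.0
```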
The statistical guarantees for CLIME [3], including convergence in the spectral, matrix $L_1$, and Frobenius norms, only require of the sample covariance matrix $C$ a deviation bound of the form $\|C - \Sigma_0\|_\infty \le c\sqrt{\log p / n}$, for some constant $c$. Accordingly, if we perturb the matrix $C$ with a perturbation matrix $\Delta$ so that the perturbed matrix $(C + \Delta)$ continues to satisfy the deviation bound, the statistical guarantees for CLIME would hold even if we used the perturbed matrix $(C + \Delta)$. The following theorem (for details, please see Appendix B in the supplement) illustrates some perturbations $\Delta$ that satisfy this property:

Theorem 3 Let the original random variables $R_i$ be sub-Gaussian, with sample covariance $C$. Let $\Delta$ be a random perturbation matrix, where the $\Delta_{ij}$ are independent sub-exponential random variables. Then, for positive constants $c_1, c_2, c_3$,

$$P\left(\|C + \Delta - \Sigma_0\|_\infty \ge c_1 \sqrt{\frac{\log p}{n}}\right) \le c_2 p^{-c_3}.$$

As a special case, one can thus perturb elements $C_{ij}$ with suitable constants $\Delta_{ij}$ satisfying $|\Delta_{ij}| \le c\sqrt{\log p / n}$, so that the perturbed matrix is sparse, i.e., if $|C_{ij}| \le c\sqrt{\log p / n}$, then it can be safely truncated to 0. Thus, in practice, even if the sample covariance matrix is only close to a sparse matrix [21, 13], or close to being block diagonal [21, 13], the complexity of the matrix multiplications in steps 6 and 9 can be significantly reduced via the above perturbations.

3.2 Low Rank Structure

Although one can use sparse structure of the matrices participating in the multiplications to accelerate the algorithm, the implementation requires substantial work, since the dynamic sparsity of $X$ and $Y$ is unknown upfront and static sparsity of the sample covariance matrix may not exist. Since the method operates in a low-sample setting, we can alternatively use the low rank of the sample covariance matrix to reduce the complexity of matrix multiplication.
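Concretely, with $C = AA^T$, the product $CX$ can be formed as $A(A^T X)$ without ever materializing the $p \times p$ matrix; a NumPy check with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 500, 20, 10                    # low-sample setting: n << p
A = rng.standard_normal((p, n))
X = rng.standard_normal((p, k))

direct = (A @ A.T) @ X                   # O(p^2 k) after forming C explicitly
low_rank = A @ (A.T @ X)                 # O(n p k), never forms C
assert np.allclose(direct, low_rank)
```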
Since $C = AA^T$ and $p \gg n$, $CX = A(A^T X)$, and thus the computational complexity of matrix multiplication reduces from $O(p^2 k)$ to $O(npk)$, which can achieve significant speedup for small $n$. We use such low-rank multiplications for the experiments in Section 5.

4 Scalable Parallel Computation Framework

In this section, we elaborate on scalable frameworks for CLIME-ADMM in both shared-memory and distributed-memory architectures. In a shared-memory architecture (e.g., a single machine), the data $A$ is loaded into memory and shared by $q$ cores, as shown in Figure 1(a). Assume the $p \times p$ precision matrix $\hat\Omega$ is evenly divided into $l = p/k$ ($\ge q$) column blocks, e.g., $X_1, \dots, X_q, \dots, X_l$, so that each column block contains $k$ columns. The column blocks are assigned to the $q$ cores cyclically, which means the $j$-th column block is assigned to the mod$(j, q)$-th core. The $q$ cores can solve $q$ column blocks in parallel without communication or synchronization, which can be simply implemented via multithreading. Meanwhile, another $q$ column blocks are waiting in their respective queues. Figure 1(a) gives an example of how to solve 8 column blocks on 4 cores in a shared-memory environment. While the 4 cores are solving the first 4 column blocks, the next 4 column blocks are waiting in queues (red arrows). Although the shared-memory framework is free from communication and synchronization, the limited resources prevent it from scaling up to datasets with millions of dimensions, which cannot be loaded into the memory of a single machine or solved by tens of cores in a reasonable time. As more memory and computing power are needed for high dimensional datasets, we implement a framework for CLIME-ADMM in a distributed-memory architecture, which automatically distributes data among machines, parallelizes computation, and manages communication and synchronization among machines, as shown in Figure 1(b).
Assume $q$ processes are formed as an $r \times c$ process grid and the $p \times p$ precision matrix $\hat\Omega$ is evenly divided into $l = p/k$ ($\ge q$) column blocks, e.g., $X_j$, $1 \le j \le l$. We solve one column block $X_j$ at a time in the process grid. Assume the data matrix $A$ has been evenly distributed over the process grid and $\vec A_{ij}$ is the data on the $ij$-th core, i.e., $A$ is the collection of the $\vec A_{ij}$ under a mapping scheme, which we will discuss later. Figure 1(b) illustrates the $2 \times 2$ process grid computing the first column block $X_1$ while the second column block $X_2$ waits in queues (red lines), assuming $X_1, X_2$ are distributed over the process grid in the same way as $A$ and $\vec X^1_{ij}$ is the block of $X_1$ assigned to the $ij$-th core. A typical issue in parallel computation is load imbalance, which is mainly caused by computational disparity among cores and leads to unsatisfactory speedups. Since each step of CLIME-ADMM is a basic operation like matrix multiplication, the distribution of sub-matrices over processes has a major impact on load balance and scalability. The following discussion focuses on the matrix multiplication in step 6 of Algorithm 1; the other steps can easily be incorporated into the framework. The matrix multiplication $U = A(A'X_1)$ can be decomposed into two steps, i.e., $W = A'X_1$ and $U = AW$, where $A \in \Re^{p \times n}$, $X_1 \in \Re^{p \times k}$, $W \in \Re^{n \times k}$, and $U \in \Re^{p \times k}$. Dividing the matrices $A, X$ evenly into $r \times c$ large consecutive blocks as in [23] would lead to load imbalance. First, since the sparse structure of $X$ changes over time (Section 3.1), large consecutive blocks may assign dense blocks to some processes and sparse blocks to the others. Second, there will be no blocks on some processes after the multiplication using large blocks, since $W$ is a small matrix compared to $A, X$; e.g., $p$ could be millions while $n, k$ are hundreds. Third, large blocks may not fit in the cache, leading to cache misses.
Therefore, we use block cyclic data distribution, which uses small nonconsecutive blocks and thus can largely achieve load balance and scalability. A matrix is first divided into consecutive blocks of size $p_b \times n_b$. Then the blocks are distributed over the process grid cyclically.

[Figure 1: CLIME-ADMM on shared-memory and distributed-memory architectures. (a) Shared-Memory; (b) Distributed-Memory; (c) Block Cyclic.]

Figure 1(c) illustrates how to distribute a matrix to a $2 \times 2$ process grid. $A$ is divided into $3 \times 2$ consecutive blocks, where each block is of size $p_b \times n_b$. Blocks of the same color are assigned to the same process. Green blocks are assigned to the upper-left process, i.e., $\vec A_{11} = \{a_{11}, a_{13}, a_{31}, a_{33}, a_{51}, a_{53}\}$ in Figure 1(b). The distribution of $X_1$ can be done in a similar way, except the block size should be $p_b \times k_b$, where $p_b$ guarantees that the matrix multiplication $A'X_1$ works. In particular, we denote $p_b \times n_b \times k_b$ as the block size for matrix multiplication. To distribute the data in a block cyclic manner, we use a parallel I/O scheme, where processes can access the data in parallel and only read/write their assigned blocks.

5 Experimental Results

In this section, we present experimental results to compare CLIME-ADMM with existing algorithms and show its scalability. In all experiments, we use the low rank property of the sample covariance matrix and do not assume any other special structure. Our algorithm is implemented in a shared-memory architecture using OpenMP (http://openmp.org/wp/) and in a distributed-memory architecture using OpenMPI (http://www.open-mpi.org) and ScaLAPACK [15] (http://www.netlib.org/scalapack/).
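Under block cyclic distribution, ownership of a block is a simple modular map (the standard ScaLAPACK convention; 0-based block indices and the function name are ours):

```python
def block_cyclic_owner(i, j, r, c):
    """Process-grid coordinates that own block (i, j) of a matrix
    distributed block-cyclically over an r x c process grid."""
    return (i % r, j % c)

# On a 2 x 2 grid, blocks in even block-rows and even block-columns all land
# on process (0, 0), matching the green blocks of Figure 1(c).
assert block_cyclic_owner(0, 0, 2, 2) == (0, 0)
assert block_cyclic_owner(2, 0, 2, 2) == (0, 0)
assert block_cyclic_owner(1, 1, 2, 2) == (1, 1)
```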
5.1 Comparison with Existing Algorithms We compare CLIME-ADMM with three other methods for estimating the inverse covariance matrix: CLIME and Tiger in the package flare1, and divide and conquer QUIC (DC-QUIC) [13]. The comparisons are run on an Intel Xeon E5540 2.83GHz CPU with 32GB main memory. We test the efficiency of the above methods on both synthetic and real datasets. For synthetic datasets, we generate the underlying graphs with random nonzero patterns in the same way as in [14]. We control the sparsity of the underlying graph to be 0.05, and generate random graphs of various dimensions. Since each estimator has a different parameter controlling the sparsity, we set them individually to recover the graph with sparsity 0.05, and compare the time to reach the solution. The column block size k for CLIME-ADMM is 100. Figure 2(a) shows that CLIME-ADMM is the most scalable estimator for large graphs. We compare the precision and recall of the different methods in recovering the ground truth graph structure. We run each method using different parameters (which control the sparsity of the solution), and plot the precision and recall for each solution in Figure 2(b). As Tiger is free of parameter tuning and achieves the minimax optimal rate [19], it achieves the best performance in terms of recall. The other three methods have similar performance. CLIME can also be made free of parameter tuning and achieve the optimal minimax rate by solving an additional linear program similar to (1) [4]. We refer the readers to [3, 4, 19] for detailed comparisons between the two models CLIME and Tiger, which is not the focus of this paper. We further test the efficiency of the above algorithms on two real datasets, Leukemia and Climate (see Table 1). Leukemia is gene expression data provided by [10], and the pre-processing was done by [17]. The Climate dataset is the temperature data in year 2001 recorded by the NCEP/NCAR Reanalysis data2 and preprocessed by [13].
Since the ground truth for the real datasets is unknown, we test the time taken by each method to recover graphs with 0.1 and 0.01 sparsity. The results are presented in Table 1.
1The interior point method in [3] is written in R and is extremely slow. Therefore, we use flare, which is implemented in C with an R interface. http://cran.r-project.org/web/packages/flare/index.html
2www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html
Figure 2: Synthetic datasets. (a) Runtime; (b) Precision and recall.
Figure 3: Shared-Memory. (a) Speedup S^col_k; (b) Speedup S^core_q.
Figure 4: Distributed-Memory. (a) Speedup S^col_k; (b) Speedup S^core_q.
Although Tiger is faster than CLIME-ADMM on the small dimensional dataset Leukemia, it does not scale to the high dimensional dataset as well as CLIME-ADMM, mainly because ADMM is not competitive with other methods on small problems but has superior scalability on big datasets [2]. DC-QUIC runs faster than the other methods at small sparsity but slows down dramatically when the sparsity increases. DC-QUIC essentially works on a block-diagonal matrix obtained by thresholding the off-diagonal elements of the sample covariance matrix. A small sparsity generally leads to small diagonal blocks, which helps DC-QUIC make a giant leap forward in the computation. A block-diagonal structure in the sample covariance matrix can easily be incorporated into the matrix multiplication in CLIME-ADMM to achieve a sharp computational gain. On a single core, CLIME-ADMM is faster than flare ADMM. We also show the results of CLIME-ADMM on 8 cores, showing that CLIME-ADMM achieves a linear speedup (more results appear in Section 5.2). Note that Tiger can estimate the sparse precision matrix column-by-column in parallel, while CLIME-ADMM solves CLIME in column blocks in parallel. 5.2 Scalability of CLIME-ADMM We evaluate the scalability of CLIME-ADMM in a shared-memory and a distributed-memory architecture in terms of two kinds of speedups.
The first speedup is defined as the time on 1 core, T^core_1, over the time on q cores, T^core_q, i.e., S^core_q = T^core_1 / T^core_q. The second speedup comes from the use of column blocks. Assume the total time for solving CLIME column-by-column (k = 1) is T^col_1, which is taken as the baseline. The speedup of solving CLIME in column blocks of size k over single columns is defined as S^col_k = T^col_1 / T^col_k. The experiments are done on synthetic data generated in the same way as in Section 5.1. The number of samples is fixed at n = 200. Shared-memory We estimate a precision matrix with p = 10^4 dimensions on a server with 20 cores and 64GB memory. We use OpenMP to parallelize over column blocks. We run the algorithm on different numbers of cores q = 1, 5, 10, 20, and with different column block sizes k. The speedup S^col_k is plotted in Figure 3(a), which shows the results for three different numbers of cores. When k ≤ 20, the speedups keep increasing with the number of columns k in each block. For k ≥ 20, the speedups are maintained on 1 core and 5 cores, but decrease on 10 and 20 cores. The total number of columns in shared memory is k × q. For a fixed k, more columns are involved in the computation when more cores are used, leading to more memory consumption and competition for the shared cache. The speedup S^core_q is plotted in Figure 3(b), where T^core_1 is the time on a single core. The ideal linear speedups are achieved on 5 cores for all block sizes k. On 10 cores, while small and medium column block sizes maintain the ideal linear speedups, the large column block sizes fail to scale linearly. The failure to achieve a linear speedup propagates to small and medium column block sizes on 20 cores, although their speedups remain larger than those of the large column block sizes.
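The two speedup definitions are just ratios against the respective baselines; a tiny sketch with hypothetical timings (the numbers are made up purely to illustrate the definitions, not taken from the experiments):

```python
# Hypothetical timings (seconds) for illustration only.
T_core = {1: 1000.0, 5: 200.0, 10: 110.0, 20: 70.0}   # time with q cores
T_col = {1: 1000.0, 10: 150.0, 100: 40.0}             # time with block size k

S_core = {q: T_core[1] / t for q, t in T_core.items()}  # S^core_q
S_col = {k: T_col[1] / t for k, t in T_col.items()}     # S^col_k

assert S_core[5] == 5.0     # ideal linear speedup on 5 cores
assert S_col[100] == 25.0   # 100 columns at once vs. one column at a time
```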
As more and more column blocks participate in the computation, the speedups decrease, possibly because of the competition for resources (e.g., L2 cache) in the shared-memory environment.

Table 1: Comparison of runtime (sec) on real datasets.

| Dataset | sparsity | CLIME-ADMM (1 core) | CLIME-ADMM (8 cores) | DC-QUIC | Tiger | flare CLIME |
|---|---|---|---|---|---|---|
| Leukemia (1255 × 72) | 0.1 | 48.64 | 6.27 | 93.88 | 34.56 | 142.5 |
| Leukemia (1255 × 72) | 0.01 | 44.98 | 5.83 | 21.59 | 17.10 | 87.60 |
| Climate (10512 × 1464) | 0.1 | 4.76 hours | 0.6 hours | 10.51 hours | > 1 day | > 1 day |
| Climate (10512 × 1464) | 0.01 | 4.46 hours | 0.56 hours | 2.12 hours | > 1 day | > 1 day |

Table 2: Effect (runtime (sec)) of using different numbers of cores in a node with p = 10^6. Using one core per node is the most efficient as there is no resource sharing with other cores.

| node × core | k = 1 | k = 5 | k = 10 | k = 50 | k = 100 | k = 500 | k = 1000 |
|---|---|---|---|---|---|---|---|
| 100 × 1 | 0.56 | 1.26 | 2.59 | 6.98 | 13.97 | 62.35 | 136.96 |
| 25 × 4 | 1.02 | 2.40 | 3.42 | 8.25 | 16.44 | 84.08 | 180.89 |
| 200 × 1 | 0.37 | 0.68 | 1.12 | 3.48 | 6.76 | 33.95 | 70.59 |
| 50 × 4 | 0.74 | 1.44 | 2.33 | 4.49 | 8.33 | 48.20 | 103.87 |

Distributed-memory We estimate a precision matrix with one million dimensions (p = 10^6), which contains one trillion parameters (p^2 = 10^{12}). The experiments are run on a cluster with 400 computing nodes. We use 1 core per node to avoid the competition for resources that we observed in the shared-memory case. For q cores, we use the process grid (q/2) × 2 since p ≫ n. The block size pb × nb × kb for matrix multiplication is 10 × 10 × 1 for k ≤ 10 and 10 × 10 × 10 for k > 10. Since the column block CLIME problems are totally independent, we report the speedups for solving a single column block. The speedup S^col_k is plotted in Figure 4(a); the speedups are larger and more stable than in the shared-memory environment. The speedup keeps increasing with the column block size before settling at a certain value. For any column block size, the speedup also increases with the number of cores. The speedup S^core_q is plotted in Figure 4(b), where T^core_1 is the time on 50 cores.
A single column (k = 1) fails to achieve linear speedups when hundreds of cores are used. However, with a column block of size k > 1, the ideal linear speedups are achieved as the number of cores increases. Note that with distributed memory the larger column block sizes also scale linearly, unlike in the shared-memory setting, where the speedups were limited by resource sharing. As we have seen, the best k depends on the size of the process grid, the block size in matrix multiplication, the cache size, and probably the sparsity pattern of the matrices. In Table 2, we compare the performance of 1 core per node to that of 4 cores per node, which mixes the effects of the shared-memory and distributed-memory architectures. For small column block sizes (k = 1, 5), using multiple cores in a node is almost two times slower than using a single core in a node. For the other column block sizes, it is still 30% slower. Finally, we ran CLIME-ADMM on 400 cores with one core per node and column block size k = 500, and the entire computation took about 11 hours. 6 Conclusions In this paper, we presented a large scale distributed framework for the estimation of sparse precision matrices using CLIME. Our framework can scale to millions of dimensions and run on hundreds of machines. The framework is based on inexact ADMM, which decomposes the constrained optimization problem into elementary matrix multiplications and elementwise operations. Convergence rates for both the objective and the optimality conditions are established. The proposed framework solves CLIME in column blocks and uses block cyclic distribution to achieve load balance. We evaluate our algorithm on both shared-memory and distributed-memory architectures. Experimental results show that our algorithm is substantially more scalable than state-of-the-art methods and scales almost linearly with the number of cores.
The framework presented here can be useful for a variety of other large scale constrained optimization problems and will be explored in future work. Acknowledgment H. W. and A. B. acknowledge the support of NSF via IIS-0953274, IIS-1029711, IIS-0916750, IIS-0812183, and the technical support of the University of Minnesota Supercomputing Institute. H. W. acknowledges the support of a DDF (2013-2014) from the University of Minnesota. C.-J.H. and I.S.D. were supported by NSF grants CCF-1320746 and CCF-1117055. C.-J.H. also acknowledges the support of an IBM PhD fellowship. P.R. acknowledges the support of NSF via IIS-1149803, DMS-1264033 and ARO via W911NF-12-1-0390. References [1] O. Banerjee, L. E. Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR, 9:2261–2286, 2008. [2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 2011. [3] T. Cai, W. Liu, and X. Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106:594–607, 2011. [4] T. Cai, W. Liu, and H. Zhou. Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Preprint, 2012. [5] J. Choi. A new parallel matrix multiplication algorithm on distributed-memory concurrent computers. In High Performance Computing on the Information Superhighway, 1997. [6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In NIPS, 2012. [7] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. CACM, 2008. [8] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso.
Biostatistics, 9:432–441, 2008. [9] Q. Fu, H. Wang, and A. Banerjee. Bethe-ADMM for tree decomposition based parallel MAP inference. In UAI, 2013. [10] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, and C. D. Bloomfield. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, pages 531–537, 1999. [11] K. Goto and R. Van De Geijn. High-performance implementation of the level-3 BLAS. ACM Transactions on Mathematical Software, 35:1–14, 2008. [12] B. He and X. Yuan. On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers. Preprint, 2012. [13] C. Hsieh, I. Dhillon, P. Ravikumar, and A. Banerjee. A divide-and-conquer method for sparse inverse covariance estimation. In NIPS, 2012. [14] C. Hsieh, M. Sustik, I. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In NIPS, 2011. [15] L. Blackford, J. Choi, A. Cleary, J. Demmel, I. S. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley. ScaLAPACK Users' Guide. SIAM, 1997. [16] M. Lam, E. Rothberg, and M. Wolf. The cache performance and optimization of blocked algorithms. In Architectural Support for Programming Languages and Operating Systems, 1991. [17] L. Li and K.-C. Toh. An inexact interior point method for L1-regularized sparse covariance selection. Mathematical Programming Computation, 2:291–315, 2010. [18] X. Li, T. Zhao, X. Yuan, and H. Liu. An R package flare for high dimensional linear regression and precision matrix estimation. http://cran.r-project.org/web/packages/flare, 2013. [19] H. Liu and L. Wang. Tiger: A tuning-insensitive approach for optimally estimating Gaussian graphical models. Preprint, 2012. [20] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. Hellerstein. Distributed GraphLab: A framework for machine learning in the cloud.
In VLDB, 2012. [21] R. Mazumder and T. Hastie. Exact covariance thresholding into connected components for large-scale graphical lasso. JMLR, 13:723–736, 2012. [22] F. Niu, B. Recht, C. Re, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011. [23] N. Parikh and S. Boyd. Graph projection block splitting for distributed optimization. Preprint, 2012. [24] R. Raina, A. Madhavan, and A. Y. Ng. Large-scale deep unsupervised learning using graphics processors. In ICML, 2009. [25] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011. [26] H. Wang and A. Banerjee. Online alternating direction method. In ICML, 2012. [27] J. Yang and Y. Zhang. Alternating direction algorithms for ℓ1-problems in compressive sensing. ArXiv, 2009. [28] M. Yuan. Sparse inverse covariance matrix estimation via linear programming. JMLR, 11, 2010. [29] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In NIPS, 2010.
Online Variational Approximations to non-Exponential Family Change Point Models: With Application to Radar Tracking Ryan Turner Northrop Grumman Corp. ryan.turner@ngc.com Steven Bottone Northrop Grumman Corp. steven.bottone@ngc.com Clay Stanek Northrop Grumman Corp. clay.stanek@ngc.com Abstract The Bayesian online change point detection (BOCPD) algorithm provides an efficient way to do exact inference when the parameters of an underlying model may suddenly change over time. BOCPD requires computation of the underlying model’s posterior predictives, which can only be computed online in O(1) time and memory for exponential family models. We develop variational approximations to the posterior on change point times (formulated as run lengths) for efficient inference when the underlying model is not in the exponential family, and does not have tractable posterior predictive distributions. In doing so, we develop improvements to online variational inference. We apply our methodology to a tracking problem using radar data with a signal-to-noise feature that is Rice distributed. We also develop a variational method for inferring the parameters of the (non-exponential family) Rice distribution. Change point detection has been applied to many applications [5; 7]. In recent years there have been great improvements to the Bayesian approaches via the Bayesian online change point detection algorithm (BOCPD) [1; 23; 27]. Likewise, the radar tracking community has been improving in its use of feature-aided tracking [10]: methods that use auxiliary information from radar returns such as signal-to-noise ratio (SNR), which depend on radar cross sections (RCS) [21]. Older systems would often filter only noisy position (and perhaps Doppler) measurements while newer systems use more information to improve performance. We use BOCPD for modeling the RCS feature. 
Whereas BOCPD inference can be done exactly when finding change points in conjugate exponential family models, the physics of RCS measurements often causes them to be distributed in non-exponential family ways, often following a Rice distribution. To do inference efficiently we call upon variational Bayes (VB) to find approximate posterior (predictive) distributions. Furthermore, the nature of both BOCPD and tracking requires the use of online updating. We improve upon the existing and limited approaches to online VB [24; 13]. This paper produces contributions to, and builds upon background from, three independent areas: change point detection, variational Bayes, and radar tracking. Although the emphasis in machine learning is on filtering, a substantial part of tracking with radar data involves data association, illustrated in Figure 1. Observations of radar returns contain measurements from multiple objects (targets) in the sky. If we knew which radar return corresponded to which target we would be presented with N_T ∈ N_0 independent filtering problems; Kalman filters [14] (or their nonlinear extensions) are applied to "average out" the kinematic errors in the measurements (typically positions) using the measurements associated with each target. The data association problem is to determine which measurement goes to which track. In the classical setup, once a particular measurement is associated with a certain target, that measurement is plugged into the filter for that target as if we knew with certainty it was the correct assignment. The association algorithms, in effect, find the maximum a posteriori (MAP) estimate on the measurement-to-track association. However, approaches such as the joint probabilistic data association (JPDA) filter [2] and the probability hypothesis density (PHD) filter [16] have deviated from this. To find the MAP estimate, a log likelihood of the data under each possible assignment vector a must be computed.
These are then used to construct cost matrices that reduce the assignment problem to a particular kind of optimization problem (the details of which are beyond the scope of this paper). The motivation behind feature-aided tracking is that additional features increase the probability that the MAP measurement-to-track assignment is correct. Based on physical arguments the RCS feature (SNR) is often Rice distributed [21, Ch. 3]; although, in certain situations RCS is exponential or gamma distributed [26]. The parameters of the RCS distribution are determined by factors such as the shape of the aircraft facing the radar sensor. Given that different aircraft have different RCS characteristics, if one attempts to create a continuous track estimating the path of an aircraft, RCS features may help distinguish one aircraft from another if they cross paths or come near one another, for example. RCS also helps distinguish genuine aircraft returns from clutter: a flock of birds or random electrical noise, for example. However, the parameters of the RCS distributions may also change for the same aircraft due to a change in angle or ground conditions. These must be taken into account for accurate association. Providing good predictions in light of a possible sudden change in the parameters of a time series is “right up the alley” of BOCPD and change point methods. The original BOCPD papers [1; 11] studied sudden changes in the parameters of exponential family models for time series. In this paper, we expand the set of applications of BOCPD to radar SNR data which often has the same change point structure found in other applications, and requires online predictions. The BOCPD model is highly modular in that it looks for changes in the parameters of any underlying process model (UPM). 
The UPM merely needs to provide posterior predictive probabilities; the UPM can otherwise be a "black box." The BOCPD queries the UPM for a prediction of the next data point under each possible run length, the number of points since the last change point. If (and only if, by Hipp [12]) the UPM is exponential family (with a conjugate prior) the posterior is computed by accumulating the sufficient statistics since the last potential change point. This allows for O(1) UPM updates in both computation and memory as the run length increases. We motivate the use of VB for implementing UPMs when the data within a regime is believed to follow a distribution that is not exponential family. The methods presented in this paper can be used to find variational run length posteriors for general non-exponential family UPMs in addition to the Rice distribution. Additionally, the methods for improving online updating in VB (Section 2.2) are applicable in areas outside of change point detection.
Figure 1: Illustrative example of a tracking scenario: The black lines (−) show the true tracks while the red stars (∗) show the state estimates over time for track 2 and the blue stars for track 1. The 95% credible regions on the states are shown as blue ellipses. The current (+) and previous (×) measurements are connected to their associated tracks via red lines. The clutter measurements (birds in this case) are shown with black dots (·). The distributions on the SNR (RCS) for each track (blue and red) and the clutter (black) are shown on the right. [Legend: clutter (birds), track 1 (747), track 2 (EMB 110); right panel: likelihood vs. SNR (0–20).]
To our knowledge this paper is the first to demonstrate how to compute Bayesian posterior distributions on the parameters of a Rice distribution; the closest work would be Lauwers et al. [15], which computes a MAP estimate.
Other novel factors of this paper include: demonstrating the usefulness (and advantages over existing techniques) of change point detection for RCS estimation and tracking; and applying variational inference for UPMs where analytic posterior predictives are not possible. This paper provides four main technical contributions: 1) VB inference for inferring the parameters of a Rice distribution. 2) General improvements to online VB (which is then applied to updating the UPM in BOCPD). 3) Derive a VB approximation to the run length posterior when the UPM posterior predictive is intractable. 4) Handle censored measurements (particularly for a Rice distribution) in VB. This is key for processing missed detections in data association. 1 Background In this section we briefly review the three areas of background: BOCPD, VB, and tracking. 1.1 Bayesian Online Change Point Detection We briefly summarize the model setup and notation for the BOCPD algorithm; see [27, Ch. 5] for a detailed description. We assume we have a time series with n observations so far, y_1, . . . , y_n ∈ Y. In effect, BOCPD performs message passing to do online inference on the run length r_n ∈ 0:n−1, the number of observations since the last change point. Given an underlying predictive model (UPM) and a hazard function h, we can compute an exact posterior over the run length r_n. Conditional on a run length, the UPM produces a sequential prediction on the next data point using all the data since the last change point: p(y_n|y^{(r)}, Θ_m) where (r) := (n−r):(n−1). The UPM is a simpler model where the parameters θ change at every change point and are modeled as being sampled from a prior with hyper-parameters Θ_m. The canonical example of a UPM would be a Gaussian whose mean and variance change at every change point. The online updates are summarized as:

\mathrm{msg}_n := p(r_n, y_{1:n}) = \sum_{r_{n-1}} \underbrace{P(r_n \mid r_{n-1})}_{\text{hazard}} \, \underbrace{p(y_n \mid r_{n-1}, y^{(r)})}_{\text{UPM}} \, \underbrace{p(r_{n-1}, y_{1:n-1})}_{\mathrm{msg}_{n-1}} .
(1)
Unless r_n = 0, the sum in (1) contains only one term since the only possibility is that r_{n-1} = r_n − 1. The indexing convention is such that if r_n = 0 then y_{n+1} is the first observation sampled from the new parameters θ. The marginal posterior predictive on the next data point is easily calculated as:

p(y_{n+1} \mid y_{1:n}) = \sum_{r_n} p(y_{n+1} \mid y^{(r)}) \, P(r_n \mid y_{1:n}) . \quad (2)

Thus, the predictions from BOCPD fully integrate out any uncertainty in θ. The message updates (1) perform exact inference under a model where the number of change points is not known a priori. BOCPD RCS Model We show the Rice UPM as an example as it is required for our application. The data within a regime are assumed to be iid Rice observations, with a normal-gamma prior:

y_n \sim \mathrm{Rice}(\nu, \sigma) , \quad \nu \sim \mathcal{N}(\mu_0, \sigma^2/\lambda_0) , \quad \sigma^{-2} =: \tau \sim \mathrm{Gamma}(\alpha_0, \beta_0) \quad (3)
\implies p(y_n \mid \nu, \sigma) = y_n \tau \exp(-\tau(y_n^2 + \nu^2)/2) \, I_0(y_n \nu \tau) \, \mathbb{I}\{y_n \ge 0\} \quad (4)

where I_0(·) is a modified Bessel function of order zero, which is what excludes the Rice distribution from the exponential family. Although the normal-gamma is not conjugate to a Rice it will enable us to use the VB-EM algorithm. The UPM parameters are the Rice shape1 ν ∈ R and scale σ ∈ R+, θ := {ν, σ}, and the hyper-parameters are the normal-gamma parameters Θ_m := {µ_0, λ_0, α_0, β_0}. Every change point results in a new value for ν and σ being sampled. A posterior on θ is maintained for each run length, i.e. every possible starting point for the current regime, and is updated at each new data point. Therefore, BOCPD maintains n distinct posteriors on θ, and although this can be reduced with pruning, it necessitates posterior updates on θ that are computationally efficient. Note that the run length updates in (1) require the UPM to provide predictive log likelihoods at all sample sizes r_n (including zero). Therefore, UPM implementations using approximations such as plug-in MLE predictions will not work very well. The MLE may not even be defined for run lengths smaller than the number of UPM parameters |θ|.
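The recursion (1) is compact enough to sketch directly; here with a constant hazard and a toy run-length-independent Gaussian standing in for the UPM posterior predictive (a placeholder assumption purely for illustration, not the authors' Rice UPM):

```python
import numpy as np
from scipy.stats import norm

def bocpd_step(msg, y, upm_predict, hazard=0.01):
    """One update of the run-length message: msg[r] = p(r_n, y_{1:n})."""
    pred = np.array([upm_predict(r, y) for r in range(len(msg))])
    growth = msg * pred * (1.0 - hazard)       # r_n = r_{n-1} + 1
    cp = np.sum(msg * pred * hazard)           # r_n = 0: a change point
    return np.append(cp, growth)

# Toy UPM: fixed standard-normal predictive, ignoring the run length.
msg = np.array([1.0])                          # prior mass before any data
for y in [0.1, -0.3, 2.5]:
    msg = bocpd_step(msg, y, lambda r, y: norm.pdf(y))
run_length_posterior = msg / msg.sum()         # P(r_n | y_{1:n})
assert len(run_length_posterior) == 4
```

A real UPM would condition the predictive on the data since the hypothesized change point, which is exactly where the variational machinery below comes in.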
For a Rice UPM, the efficient O(1) updating available in exponential family models by using a conjugate prior and accumulating sufficient statistics is not possible. This motivates the use of VB methods for approximating the UPM predictions. 1.2 Variational Bayes We follow the framework of VB: when computation of the exact posterior distribution p(θ|y_{1:n}) is intractable, it is often possible to create a variational approximation q(θ) that is locally optimal in terms of the Kullback-Leibler (KL) divergence KL(q∥p) while constraining q to be in a certain family of distributions Q. In general this is done by optimizing a lower bound L(q) on the evidence log p(y_{1:n}), using either gradient based methods or standard fixed point equations.
1 The shape ν is usually assumed to be positive (∈ R+); however, there is nothing wrong with using a negative ν as Rice(x|ν, σ) = Rice(x|−ν, σ). It also allows for use of a normal-gamma prior.
The VB-EM Algorithm In many cases, such as the Rice UPM, the derivation of the VB fixed point equations can be simplified by applying the VB-EM algorithm [3]. VB-EM is applicable to models that are conjugate-exponential (CE) after being augmented with latent variables x_{1:n}. A model is CE if: 1) the complete data likelihood p(x_{1:n}, y_{1:n}|θ) is an exponential family distribution; and 2) the prior p(θ) is a conjugate prior for the complete data likelihood p(x_{1:n}, y_{1:n}|θ). We only have to constrain the posterior q(θ, x_{1:n}) = q(θ)q(x_{1:n}) to factorize between the latent variables and the parameters; we do not constrain the posterior to be of any particular parametric form. Requiring the complete likelihood to be CE is a much weaker condition than requiring the marginal on the observed data p(y_{1:n}|θ) to be CE. Consider a mixture of Gaussians: the model becomes CE when augmented with latent variables (class labels). This is also the case for the Rice distribution (Section 2.1).
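The Rice augmentation referred to here is the classical construction: the length of a 2D Gaussian vector with mean (ν, 0) and isotropic variance σ² is Rice distributed. A quick Monte Carlo check against scipy's parameterization (scipy uses shape b = ν/σ and scale σ; that mapping is an assumption of this sketch):

```python
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(0)
nu, sigma = 2.0, 1.0
x = rng.normal(nu, sigma, size=100_000)
yp = rng.normal(0.0, sigma, size=100_000)
R = np.hypot(x, yp)                    # R = sqrt(x^2 + y'^2) ~ Rice(nu, sigma)

assert abs(R.mean() - rice.mean(b=nu / sigma, scale=sigma)) < 0.02
```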
Like the ordinary EM algorithm [9], the VB-EM algorithm alternates between two steps: 1) Find the posterior of the latent variables, treating the expected natural parameters η̄ := E_{q(θ)}[η] as correct: q(x_i) ← p(x_i|y_i, η = η̄). 2) Find the posterior of the parameters using the expected sufficient statistics S̄ := E_{q(x_{1:n})}[S(x_{1:n}, y_{1:n})] as if they were the sufficient statistics for the complete data set: q(θ) ← p(θ|S(x_{1:n}, y_{1:n}) = S̄). The posterior will be of the same exponential family as the prior. 1.3 Tracking In this section we review data association, which along with filtering constitutes tracking. In data association we estimate the association vectors a which map measurements to tracks. At each time step, n ∈ N_1, we observe N_Z(n) ∈ N_0 measurements, Z_n = {z_{i,n}}_{i=1}^{N_Z(n)}, which include returns from both real targets and clutter (spurious measurements). Here, z_{i,n} ∈ Z is a vector of kinematic measurements (positions in R³, or R⁴ with a Doppler), augmented with an RCS component R ∈ R+ for the measured SNR, at time t_n ∈ R. The assignment vector at time t_n is such that a_n(i) = j if measurement i is associated with track j > 0; a_n(i) = 0 if measurement i is clutter. The inverse mapping a_n^{−1} maps tracks to measurements: meaning a_n^{−1}(a_n(i)) = i if a_n(i) ≠ 0; and a_n^{−1}(i) = 0 ⇔ a_n(j) ≠ i for all j. For example, if N_T = 4 and a = [2 0 0 1 4] then N_Z = 5, N_c = 2, and a^{−1} = [4 1 0 5]. Each track is associated with at most one measurement, and vice-versa. In ND data association we jointly find the MAP estimate of the association vectors over a sliding window of the last N − 1 time steps. We assume we have N_T(n) ∈ N_0 total tracks as a known parameter: N_T(n) is adjusted over time using various algorithms (see [2, Ch. 3]). In the generative process each track places a probability distribution on the next N − 1 measurements, with both kinematic and RCS components. However, if the random RCS R for a measurement is below R_0 then it will not be observed.
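The inverse-mapping convention pins down in a few lines; the worked example from the text (N_T = 4, a = [2 0 0 1 4] giving a⁻¹ = [4 1 0 5]) makes a handy check (the helper name is ours, for illustration):

```python
def inverse_assignment(a, n_tracks):
    """a maps (1-indexed) measurements to tracks, 0 meaning clutter;
    returns a^{-1}: track -> measurement index, 0 meaning missed detection."""
    a_inv = [0] * n_tracks
    for meas, track in enumerate(a, start=1):
        if track != 0:
            a_inv[track - 1] = meas
    return a_inv

# N_T = 4, a = [2 0 0 1 4]  =>  a^{-1} = [4 1 0 5]
assert inverse_assignment([2, 0, 0, 1, 4], 4) == [4, 1, 0, 5]
```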
There are N_c(n) ∈ N_0 clutter measurements from a Poisson process with λ := E[N_c(n)] (often with uniform intensity). The ordering of measurements in Z_n is assumed to be uniformly random. For 3D data association the model joint p(Z_{n−1:n}, a_{n−1}, a_n|Z_{1:n−2}) is:

\prod_{i=1}^{N_T} p_i(z_{a_n^{-1}(i),n}, z_{a_{n-1}^{-1}(i),n-1}) \times \prod_{i=n-1}^{n} \frac{\lambda^{N_c(i)} \exp(-\lambda)}{|Z_i|!} \prod_{j=1}^{|Z_i|} p_0(z_{j,i})^{\mathbb{I}\{a_i(j)=0\}} , \quad (5)

where p_i is the probability of the measurement sequence under track i; p_0 is the clutter distribution. The probability p_i is the product of the RCS component predictions (BOCPD) and the kinematic components (filter); informally, p_i(z) = p_i(positions) × p_i(RCS). If there is a missed detection, i.e. a_n^{−1}(i) = 0, we then use p_i(z_{a_n^{−1}(i),n}) = P(R < R_0) under the RCS model for track i with no contribution from the positional (kinematic) component. Just as BOCPD allows any black box probabilistic predictor to be used as a UPM, any black box model of measurement sequences can be used in (5). The estimation of association vectors for the 3D case becomes an optimization problem of the form:

(\hat{a}_{n-1}, \hat{a}_n) = \operatorname*{argmax}_{(a_{n-1}, a_n)} \log P(a_{n-1}, a_n \mid Z_{1:n}) = \operatorname*{argmax}_{(a_{n-1}, a_n)} \log p(Z_{n-1:n}, a_{n-1}, a_n \mid Z_{1:n-2}) , \quad (6)

which is effectively optimizing (5) with respect to the assignment vectors. The optimization given in (6) can be cast as a multidimensional assignment (MDA) problem [2], which can be solved efficiently in the 2D case. Higher dimensional assignment problems, however, are NP-hard; approximate, yet typically very accurate, solvers must be used for real-time operation, which is usually required for tracking systems [20]. If a radar scan occurs at each time step and a target is not detected, we assume the SNR has not exceeded the threshold, implying 0 ≤ R < R_0. This is a (left) censored measurement and is treated differently than a missing data point. Censoring is accounted for in Section 2.3.
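The efficiently solvable 2D case can be illustrated with an off-the-shelf exact solver (scipy's `linear_sum_assignment`); the cost entries below are hypothetical negative log likelihoods, purely for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = tracks, columns = measurements; entry (i, j) is the negative log
# likelihood -log p_i(z_j) of assigning measurement j to track i
# (hypothetical numbers).
cost = np.array([[0.5, 4.0, 3.0],
                 [3.5, 0.7, 2.9],
                 [4.1, 3.8, 0.2]])
rows, cols = linear_sum_assignment(cost)   # exact MAP assignment in 2D
assert cols.tolist() == [0, 1, 2]          # each track takes its best measurement
```

Higher dimensional versions of this problem (sliding windows over several scans) are the NP-hard MDA instances mentioned above, where approximate solvers take over.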
2 Online Variational UPMs We cover the four technical challenges for implementing non-exponential family UPMs in an efficient and online manner. We drop the index of the data point i when it is clear from context. 2.1 Variational Posterior for a Rice Distribution The Rice distribution has the property that

x \sim \mathcal{N}(\nu, \sigma^2) , \quad y' \sim \mathcal{N}(0, \sigma^2) \implies R = \sqrt{x^2 + y'^2} \sim \mathrm{Rice}(\nu, \sigma) . \quad (7)

For simplicity we perform inference using R², as opposed to R, and transform accordingly:

x \sim \mathcal{N}(\nu, \sigma^2) , \quad R^2 - x^2 \sim \mathrm{Gamma}(\tfrac{1}{2}, \tfrac{\tau}{2}) , \quad \tau := 1/\sigma^2 \in \mathbb{R}_+
\implies p(R^2, x) = p(R^2 \mid x) p(x) = \mathrm{Gamma}(R^2 - x^2 \mid \tfrac{1}{2}, \tfrac{\tau}{2}) \, \mathcal{N}(x \mid \nu, \sigma^2) . \quad (8)

The complete likelihood (8) is the product of two exponential family models and is exponential family itself, parameterized with base measure h and partition factor g:

\eta = [\nu\tau, -\tau/2]^\top , \quad S = [x, R^2]^\top , \quad h(R^2, x) = (2\pi \sqrt{R^2 - x^2})^{-1} , \quad g(\nu, \tau) = \tau \exp(-\nu^2\tau/2) .

By inspection we see that the natural parameters η and sufficient statistics S are the same as for a Gaussian with unknown mean and variance. Therefore, we apply the normal-gamma prior on (ν, τ) as it is the conjugate prior for the complete data likelihood. This allows us to apply the VB-EM algorithm. We use y_i := R_i² as the VB observation, not R_i as in (3). In (5), z_{·,·}(end) is the RCS R. VB M-Step We derive the posterior updates to the parameters given expected sufficient statistics:

\bar{x} := \sum_{i=1}^{n} \mathrm{E}[x_i]/n , \quad \mu_n = \frac{\lambda_0 \mu_0 + \sum_i \mathrm{E}[x_i]}{\lambda_0 + n} , \quad \lambda_n = \lambda_0 + n , \quad \alpha_n = \alpha_0 + n , \quad (9)

\beta_n = \beta_0 + \frac{1}{2}\sum_{i=1}^{n} (\mathrm{E}[x_i] - \bar{x})^2 + \frac{1}{2}\frac{n\lambda_0}{\lambda_0 + n}(\bar{x} - \mu_0)^2 + \frac{1}{2}\sum_{i=1}^{n} (R_i^2 - \mathrm{E}[x_i]^2) . \quad (10)

This is the same as an observation from a Gaussian and a gamma that share an (inverse) scale τ. VB E-Step We then must find both expected sufficient statistics S̄. The expectation E[R_i²|R_i²] = R_i² is trivial, leaving E[x_i|R_i²]. Recall that the joint on (x, y′) is a bivariate normal; if we constrain the radius to R, the angle ω will be distributed by a von Mises (VM) distribution.
Therefore,

$$ \omega := \arccos(x/R) \sim \mathrm{VM}(0, \kappa)\,, \quad \kappa = R\,\mathbb{E}[\nu\tau] \;\Longrightarrow\; \mathbb{E}[x] = R\,\mathbb{E}[\cos\omega] = R\,I_1(\kappa)/I_0(\kappa)\,, \qquad (11) $$

where computing κ constitutes the VB E-step and we have used the trigonometric moment of ω [18]. This completes the computations required to do the VB updates on the Rice posterior.

Variational Lower Bound For completeness, and to assess convergence, we derive the VB lower bound L(q). Using the standard formula [4] for L(q) = E_q[log p(y_{1:n}, x_{1:n}, θ)] + H[q] we get:

$$ \sum_{i=1}^n \Big( \mathbb{E}[\log \tau/2] - \tfrac{1}{2}\mathbb{E}[\tau]R_i^2 + \big(\mathbb{E}[\nu\tau] - \kappa_i/R_i\big)\mathbb{E}[x_i] - \tfrac{1}{2}\mathbb{E}[\nu^2\tau] + \log I_0(\kappa_i) \Big) - \mathrm{KL}(q\|p)\,, \qquad (12) $$

where p in the KL is the prior on (ν, τ), which is easy to compute as q and p are both normal-gamma. Equivalently, (12) can be optimized directly instead of using the VB-EM updates.

2.2 Online Variational Inference

In Section 2.1 we derived an efficient way to compute the variational posterior for a Rice distribution for a fixed data set. However, as is apparent from (1), we need online predictions from the UPM; we must be able to update the posterior one data point at a time. When the UPM is exponential family and we can compute the posterior exactly, we merely use the posterior from the previous step as the prior. However, since we are only computing a variational approximation to the posterior, using the previous posterior as the prior does not give exactly the same answer as re-computing the posterior from batch. This gives two obvious options: 1) recompute the posterior from batch at every update at O(n) cost, or 2) use the previous posterior as the prior at O(1) cost and reduced accuracy.

The difference between the options is encapsulated by looking at the expected sufficient statistics: S̄ = Σ_{i=1}^n E_{q(x_i|y_{1:n})}[S(x_i, y_i)]. Naive online updating uses old expected sufficient statistics, whose posterior effectively uses S̄ = Σ_{i=1}^n E_{q(x_i|y_{1:i})}[S(x_i, y_i)]. We get the best of both worlds if we adjust those estimates over time.
We in fact can do this if we project the expected sufficient statistics into a "feature space" in terms of the expected natural parameters. For some function f,

$$ q(x_i) = p(x_i \mid y_i, \eta = \bar{\eta}) \;\Longrightarrow\; \mathbb{E}_{q(x_i|y_{1:n})}[S(x_i, y_i)] = f(y_i, \bar{\eta})\,. \qquad (13) $$

If f is piecewise continuous then we can represent it with an inner product [8, Sec. 2.1.6]:

$$ f(y_i, \bar{\eta}) = \phi(\bar{\eta})^\top \psi(y_i) \;\Longrightarrow\; \bar{S} = \sum_{i=1}^n \phi(\bar{\eta})^\top \psi(y_i) = \phi(\bar{\eta})^\top \sum_{i=1}^n \psi(y_i)\,, \qquad (14) $$

where an infinite-dimensional φ and ψ may be required for exact representation, but can be approximated by a finite inner product. In the Rice distribution case we use (11):

$$ f(y_i, \bar{\eta}) = \mathbb{E}[x_i] = R_i\,I'\big(R_i\,\mathbb{E}[\nu\tau]\big) = R_i\,I'\big((R_i/\mu_0)\,\mu_0\mathbb{E}[\nu\tau]\big)\,, \quad I'(\cdot) := I_1(\cdot)/I_0(\cdot)\,, \qquad (15) $$

where recall that y_i = R²_i and η̄₁ = E[ντ]. We can easily represent f with an inner product if we can represent I' as an inner product: I'(uv) = φ(u)^⊤ψ(v). We use unitless φ_i(u) = I'(c_i u), with c_{1:G} a log-linear grid from 10⁻² to 10³ and G = 50. We use a lookup table for ψ(v) that was trained to match I' using non-negative least squares, which left us with a sparse lookup table. Online updating for VB posteriors was also developed in [24; 13]. These methods introduced forgetting factors to discount the contributions of old data points that might be detrimental to accuracy. Since the VB predictions are "embedded" in a change point method, they are automatically phased out if the posterior predictions become inaccurate, making the forgetting factors unnecessary.

2.3 Censored Data

As mentioned in Section 1.3, we must handle censored RCS observations during a missed detection. In the VB-EM framework we merely have to compute the expected sufficient statistics given the censored measurement, E[S | R < R₀]. The expected sufficient statistic from (11) is now:

$$ \mathbb{E}[x \mid R < R_0] = \frac{\int_0^{R_0} \mathbb{E}[x \mid R]\,p(R)\,dR}{\mathrm{RiceCDF}(R_0 \mid \nu, \tau)} = \nu\,\frac{1 - Q_2\!\big(\tfrac{\nu}{\sigma}, \tfrac{R_0}{\sigma}\big)}{1 - Q_1\!\big(\tfrac{\nu}{\sigma}, \tfrac{R_0}{\sigma}\big)}\,, $$

where Q_M is the Marcum Q function [17] of order M.
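The censored expectation above is easy to check numerically: the Marcum Q function is the survival function of a noncentral chi-squared variable, and the latent-variable view in (7) gives a direct Monte Carlo estimate. A sketch with arbitrary test values (ν, σ, R0 are not from the paper):

```python
import numpy as np
from scipy.stats import ncx2

def marcum_q(M, a, b):
    # Q_M(a, b) is the survival function at b^2 of a noncentral chi-squared
    # variable with 2M degrees of freedom and noncentrality a^2.
    return ncx2.sf(b ** 2, df=2 * M, nc=a ** 2)

nu, sigma, R0 = 2.0, 1.0, 2.5            # arbitrary test values
a, b = nu / sigma, R0 / sigma
E_cens = nu * (1 - marcum_q(2, a, b)) / (1 - marcum_q(1, a, b))

# Monte Carlo check via the latent-variable view (7): draw (x, y'),
# keep only draws whose amplitude fell below the detection threshold R0.
rng = np.random.default_rng(1)
x = rng.normal(nu, sigma, 500_000)
y = rng.normal(0.0, sigma, 500_000)
keep = np.hypot(x, y) < R0
mc = x[keep].mean()
```

Conditioning on a small amplitude pulls the in-phase component toward zero, so E_cens lies below ν.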
Similar updates for E[S | R < R₀] are possible for exponential or gamma UPMs, but are not shown as they are relatively easy to derive.

2.4 Variational Run Length Posteriors: Predictive Log Likelihoods

Both updating the BOCPD run length posterior (1) and finding the marginal predictive log likelihood of the next point (2) require calculating the UPM's posterior predictive log likelihood log p(y_{n+1} | r_n, y^{(r)}). The marginal posterior predictive from (2) is used in data association (6) and in benchmarking BOCPD against other methods. However, the exact posterior predictive distribution obtained by integrating the Rice likelihood against the VB posterior is difficult to compute. We can break the BOCPD update (1) into a time update and a measurement update. The measurement update corresponds to a Bayesian model comparison (BMC) calculation with prior p(r_n | y_{1:n}):

$$ p(r_n \mid y_{1:n+1}) \propto p(y_{n+1} \mid r_n, y^{(r)})\,p(r_n \mid y_{1:n})\,. \qquad (16) $$

Using the BMC results in Bishop [4, Sec. 10.1.4] we find a variational posterior on the run length by using the variational lower bound for each run length, L_i(q) ≤ log p(y_{n+1} | r_n = i, y^{(r)}), calculated using (12), as a proxy for the exact UPM posterior predictive in (16). This gives the exact VB posterior if the approximating family Q is of the form:

$$ q(r_n, \theta, x) = q_{\mathrm{UPM}}(\theta, x \mid r_n)\,q(r_n) \;\Longrightarrow\; q(r_n = i) = \exp(L_i(q))\,p(r_n = i \mid y_{1:n}) / \exp(L(q))\,, \qquad (17) $$

where q_UPM contains whatever constraints we used to compute L_i(q). The normalizer on q(r_n) serves as a joint VB lower bound: L(q) = log Σ_i exp(L_i(q)) p(r_n = i | y_{1:n}) ≤ log p(y_{n+1} | y_{1:n}). Note that the conditional factorization is different from the typical independence constraint on q. Furthermore, we derive the estimation of the assignment vectors a in (6) as a VB routine. We use a similar conditional constraint on the latent BOCPD variables given the assignment, and constrain the assignment posterior to be a point mass.
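The normalized run-length posterior in (17) is a softmax of the per-run-length lower bounds plus log-priors. A small sketch with hypothetical numbers standing in for L_i(q) and p(r_n | y_{1:n}):

```python
import numpy as np
from scipy.special import logsumexp

# Hypothetical per-run-length variational lower bounds L_i(q) and a
# BOCPD prior p(r_n = i | y_{1:n}) over run lengths i = 0..4.
L = np.array([-3.1, -1.2, -2.4, -5.0, -0.9])
prior = np.array([0.05, 0.30, 0.25, 0.10, 0.30])

# (17): q(r_n = i) = exp(L_i) p(r_n = i | y_{1:n}) / exp(L(q)), where the
# normalizer L(q) = log sum_i exp(L_i) p(i) is itself a joint VB lower bound.
log_joint = L + np.log(prior)
L_q = logsumexp(log_joint)          # joint bound <= log p(y_{n+1} | y_{1:n})
q = np.exp(log_joint - L_q)         # normalized variational run-length posterior
```

Working in log space with `logsumexp` avoids underflow when many run lengths carry very small mass.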
In the 2D assignment case, for example,

$$ q(a_n, X_{1:N_T}) = q(X_{1:N_T} \mid a_n)\,q(a_n) = q(X_{1:N_T} \mid a_n)\,\mathbb{I}\{a_n = \hat{a}_n\}\,, \qquad (18) $$

Figure 2: Left: KL from naive updating (△), Sato's method [24] (□), and improved online VB (◦) to the batch VB posterior vs. sample size n, using a standard normal-gamma prior. Each curve represents a true ν in the generating Rice distribution: ν = 3.16 (red), ν = 10.0 (green), ν = 31.6 (blue); τ = 1. Middle: The RMSE (dB scale) of the estimate of the mean RCS E[R_n] for an exponential RCS model. The curves are BOCPD (blue), IMM (black), identity (magenta), α-filter (green), and median filter (red). Right: Same as the middle but for the Rice RCS case. The dashed lines are 95% confidence intervals.

where each track's X_i represents all the latent variables used to compute the variational lower bound on log p(z_{j,n} | a_n(j) = i). In the BOCPD case, X_i := {r_n, x, θ}. The resulting VB fixed-point equations find the posterior on the latent variables X_i by taking â_n as the true assignment and solving the VB problem of (17); the assignment â_n is found by using (6) and taking the joint BOCPD lower bound L(q) as a proxy for the BOCPD predictive log likelihood component of log p_i in (5).

3 Results

3.1 Improved Online Solution

We first demonstrate the accuracy of the online VB approximation (Section 2.2) on a Rice estimation example; here, we only test the VB posterior, as no change point detection is applied. Figure 2(a) compares naive online updating, Sato's method [24], and our improved online updating in KL(online∥batch) of the posteriors for three different true parameters ν as sample size n increases.
The performance curves are the KL divergence between these online approximations to the posterior and the batch VB solution (i.e. restarting VB from "scratch" at every new data point) vs. sample size. The error for our method stays around a modest 10⁻² nats, while naive updating incurs large errors of 1 to 50 nats [19, Ch. 4]. Sato's method tends to settle in at around 1 nat of approximation error. The recommended annealing schedule, i.e. forgetting factors, in [24] performed worse than naive updating. We did a grid search over annealing exponents and show the results for the best-performing schedule of n^{−0.52}. By contrast, our method does not require tuning an annealing schedule.

3.2 RCS Estimation Benchmarking

We now compare BOCPD with other methods for RCS estimation. We use the same experimental example as Slocumb and Klusman III [25], which uses an augmented interacting multiple model (IMM) based method for estimating the RCS; we also compare against the same α-filter and median filter used in [25]. As a reference point, we also consider the "identity filter," which is merely an unbiased filter that uses only y_n to estimate the mean RCS E[R_n] at time step n. We extend this example to look at Rice RCS in addition to the exponential RCS case. The bias correction constants in the IMM were adjusted for the Rice distribution case as per [25, Sec. 3.4]. The results on the exponential distributions used in [25] and on the Rice distribution case are shown in Figures 2(b) and 2(c). The IMM used in [25] was hard-coded to expect jumps in the SNR of multiples of ±10 dB, which is exactly what is presented in the example (a sequence of 20, 10, 30, and 10 dB). In [25] the authors mention that the IMM reaches an RMSE "floor" at 2 dB, yet BOCPD continues to drop as low as 0.56 dB. The RMSE from BOCPD does not spike nearly as high as that of the other methods upon a change in E[R_n]. The α-filter and median filter appear worse than both the IMM and BOCPD.
The RMSE and confidence intervals are calculated from 5000 runs of the experiment.

Figure 3: Left: Average relative improvements (%) for SIAP metrics: position accuracy (red △), velocity accuracy (green □), and spurious tracks (blue ◦) across difficulty levels. Right: LHR: true trajectories shown as black lines (−), estimates using a BOCPD RCS model for association shown as blue stars (∗), and the standard tracker as red circles (◦). The standard tracker has spurious tracks over east London and near Ipswich. Background map data: Google Earth (TerraMetrics, Data SIO, NOAA, U.S. Navy, NGA, GEBCO, Europa Technologies).

3.3 Flightradar24 Tracking Problem

Finally, we used real flight trajectories from flightradar24 and plugged them into our 3D tracking algorithm. We compare tracking performance between using our BOCPD model and the relatively standard constant-probability-of-detection (no RCS) setup [2, Sec. 3.5]. We use the single integrated air picture (SIAP) metrics [6] to demonstrate the improved performance of the tracking. The SIAP metrics are a standard set of metrics used to compare tracking systems. We broke the data into 30 regions during a one-hour period (in Sept. 2012), sampled every 5 s, each within a 200 km by 200 km area centered on one of the world's 30 busiest airports [22]. Commercial airport traffic is typically very orderly and does not allow aircraft to fly close to one another or cross paths. Feature-aided tracking is most necessary in scenarios with a more chaotic air situation. Therefore, we took random subsets of 10 flight paths and randomly shifted their start times to create scenarios of greater interest.
The resulting SIAP metric improvements are shown in Figure 3(a), where we examine performance by a difficulty metric: the number of times in a scenario any two aircraft come within ∼400 m of each other. The biggest improvements are seen for difficulties above three, where positional accuracy increases by 30%. Significant improvements are also seen for velocity accuracy (11%) and the frequency of spurious tracks (6%). Significant performance gains are seen at all difficulty levels considered. The larger improvements at level three over level five are possibly due to some level-five scenarios that are not resolvable simply through more sophisticated models. We demonstrate how our RCS methods prevent the creation of spurious tracks around London Heathrow in Figure 3(b).

4 Conclusions

We have demonstrated that it is possible to use sophisticated and recent developments in machine learning, such as BOCPD, together with the modern inference method of VB, to produce demonstrable improvements in the much more mature field of radar tracking. We first closed a "hole" in the literature in Section 2.1 by deriving variational inference on the parameters of a Rice distribution, with its inherent applicability to radar tracking. In Sections 2.2 and 2.4 we showed that it is possible to use these variational UPMs for non-exponential family models in BOCPD without sacrificing its modular or online nature. The improvements in online VB are extendable to UPMs besides the Rice distribution, and more generally beyond change point detection. We can use the variational lower bound from the UPM to obtain a principled variational approximation to the run length posterior. Furthermore, we cast the estimation of the assignment vectors themselves as a VB problem, which is in large contrast to the tracking literature. More algorithms from the tracking literature can possibly be cast in various machine learning frameworks, such as VB, and improved upon from there.

References

[1] Adams, R. P.
and MacKay, D. J. (2007). Bayesian online changepoint detection. Technical report, University of Cambridge, Cambridge, UK.
[2] Bar-Shalom, Y., Willett, P., and Tian, X. (2011). Tracking and Data Fusion: A Handbook of Algorithms. YBS Publishing.
[3] Beal, M. and Ghahramani, Z. (2003). The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. In Bayesian Statistics, volume 7, pages 453–464.
[4] Bishop, C. M. (2007). Pattern Recognition and Machine Learning. Springer.
[5] Braun, J. V., Braun, R., and Müller, H.-G. (2000). Multiple changepoint fitting via quasilikelihood, with application to DNA sequence segmentation. Biometrika, 87(2):301–314.
[6] Byrd, E. (2003). Single integrated air picture (SIAP) attributes version 2.0. Technical Report 2003-029, DTIC.
[7] Chen, J. and Gupta, A. (1997). Testing and locating variance changepoints with application to stock prices. Journal of the American Statistical Association, 92(438):739–747.
[8] Courant, R. and Hilbert, D. (1953). Methods of Mathematical Physics. Interscience.
[9] Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38.
[10] Ehrman, L. M. and Blair, W. D. (2006). Comparison of methods for using target amplitude to improve measurement-to-track association in multi-target tracking. In Information Fusion, 2006 9th International Conference on, pages 1–8. IEEE.
[11] Fearnhead, P. and Liu, Z. (2007). Online inference for multiple changepoint problems. Journal of the Royal Statistical Society, Series B, 69(4):589–605.
[12] Hipp, C. (1974). Sufficient statistics and exponential families. The Annals of Statistics, 2(6):1283–1292.
[13] Honkela, A. and Valpola, H. (2003). On-line variational Bayesian learning. In 4th International Symposium on Independent Component Analysis and Blind Signal Separation, pages 803–808.
[14] Kalman, R. E. (1960).
A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82(Series D):35–45.
[15] Lauwers, L., Barbé, K., Van Moer, W., and Pintelon, R. (2009). Estimating the parameters of a Rice distribution: A Bayesian approach. In Instrumentation and Measurement Technology Conference, 2009. I2MTC'09. IEEE, pages 114–117. IEEE.
[16] Mahler, R. (2003). Multi-target Bayes filtering via first-order multi-target moments. IEEE Trans. AES, 39(4):1152–1178.
[17] Marcum, J. (1950). Table of Q functions. U.S. Air Force RAND Research Memorandum M-339, Rand Corporation, Santa Monica, CA.
[18] Mardia, K. V. and Jupp, P. E. (2000). Directional Statistics. John Wiley & Sons, New York.
[19] Murray, I. (2007). Advances in Markov chain Monte Carlo methods. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, London, UK.
[20] Poore, A. P., Rijavec, N., Barker, T. N., and Munger, M. L. (1993). Data association problems posed as multidimensional assignment problems: algorithm development. In Optical Engineering and Photonics in Aerospace Sensing, pages 172–182. International Society for Optics and Photonics.
[21] Richards, M. A., Scheer, J., and Holm, W. A., editors (2010). Principles of Modern Radar: Basic Principles. SciTech Pub.
[22] Rogers, S. (2012). The world's top 100 airports: listed, ranked and mapped. The Guardian.
[23] Saatçi, Y., Turner, R., and Rasmussen, C. E. (2010). Gaussian process change point models. In 27th International Conference on Machine Learning, pages 927–934, Haifa, Israel. Omnipress.
[24] Sato, M.-A. (2001). Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681.
[25] Slocumb, B. J. and Klusman III, M. E. (2005). A multiple model SNR/RCS likelihood ratio score for radar-based feature-aided tracking. In Optics & Photonics 2005, pages 59131N–59131N. International Society for Optics and Photonics.
[26] Swerling, P. (1954).
Probability of detection for fluctuating targets. Technical Report RM-1217, Rand Corporation.
[27] Turner, R. (2011). Gaussian Processes for State Space Models and Change Point Detection. PhD thesis, University of Cambridge, Cambridge, UK.
RNADE: The real-valued neural autoregressive density-estimator

Benigno Uria and Iain Murray
School of Informatics
University of Edinburgh
{b.uria,i.murray}@ed.ac.uk

Hugo Larochelle
Département d'informatique
Université de Sherbrooke
hugo.larochelle@usherbrooke.ca

Abstract

We introduce RNADE, a new model for joint density estimation of real-valued vectors. Our model calculates the density of a datapoint as the product of one-dimensional conditionals modeled using mixture density networks with shared parameters. RNADE learns a distributed representation of the data, while having a tractable expression for the calculation of densities. A tractable likelihood allows direct comparison with other methods and training by standard gradient-based optimizers. We compare the performance of RNADE on several datasets of heterogeneous and perceptual data, finding it outperforms mixture models in all but one case.

1 Introduction

Probabilistic approaches to machine learning involve modeling the probability distributions over large collections of variables. The number of parameters required to describe a general discrete distribution grows exponentially in its dimensionality, so some structure or regularity must be imposed, often through graphical models [e.g. 1]. Graphical models are also used to describe probability densities over collections of real-valued variables. Often parts of a task-specific probabilistic model are hard to specify, and are learned from data using generic models. For example, the natural probabilistic approach to image restoration tasks (such as denoising, deblurring, inpainting) requires a multivariate distribution over uncorrupted patches of pixels. It has long been appreciated that large classes of densities can be estimated consistently by kernel density estimation [2], and a large mixture of Gaussians can closely represent any density.
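As a concrete (toy) illustration of mixture-of-Gaussians density estimation, a sketch using scikit-learn on synthetic bimodal data; all numbers here are arbitrary and not from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy bimodal 2-D data; with enough components a MoG can approximate
# essentially any smooth density.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-2.0, 0.5, size=(500, 2)),
                  rng.normal(+2.0, 0.5, size=(500, 2))])

gmm = GaussianMixture(n_components=2, covariance_type='full',
                      random_state=0).fit(data)
avg_ll = gmm.score(data)   # average log-density per datapoint, in nats

# A single Gaussian misses the bimodal structure and scores worse.
base_ll = GaussianMixture(n_components=1, random_state=0).fit(data).score(data)
```

Average log-likelihood per datapoint is also the evaluation metric used throughout the experiments below.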
In practice, a parametric mixture of Gaussians seems to fit the distribution over patches of pixels and obtains state-of-the-art restorations [3]. It may not be possible to fit small image patches significantly better, but alternative models could further test this claim. Moreover, competitive alternatives to mixture models might improve performance in other applications that have insufficient training data to fit mixture models well. Restricted Boltzmann Machines (RBMs), which are undirected graphical models, fit samples of binary vectors from a range of sources better than mixture models [4, 5]. One explanation is that RBMs form a distributed representation: many hidden units are active when explaining an observation, which is a better match to most real data than a single mixture component. Another explanation is that RBMs are mixture models, but the number of components is exponential in the number of hidden units. Parameter tying among components allows these more flexible models to generalize better from small numbers of examples. There are two practical difficulties with RBMs: the likelihood of the model must be approximated, and samples can only be drawn from the model approximately by Gibbs sampling. The Neural Autoregressive Distribution Estimator (NADE) overcomes these difficulties [5]. NADE is a directed graphical model, or feed-forward neural network, initially derived as an approximation to an RBM, but then fitted as a model in its own right. In this work we introduce the Real-valued Autoregressive Density Estimator (RNADE), an extension of NADE. An autoregressive model expresses the density of a vector as an ordered product of one-dimensional distributions, each conditioned on the values of previous dimensions in the (perhaps arbitrary) ordering. We use the parameter sharing previously introduced by NADE, combined with mixture density networks [6], an existing flexible approach to modeling real-valued distributions with neural networks.
By construction, the density of a test point under RNADE is cheap to compute, unlike RBM-based models. The neural network structure provides a flexible way to alter the mean and variance of a mixture component depending on context, potentially modeling non-linear or heteroscedastic data with fewer components than unconstrained mixture models.

2 Background: Autoregressive models

Both NADE [5] and our RNADE model are based on the chain rule (or product rule), which factorizes any distribution over a vector of variables into a product of terms:

$$ p(x) = \prod_{d=1}^{D} p(x_d \mid x_{<d})\,, $$

where x_{<d} denotes all attributes preceding x_d in a fixed, arbitrary ordering of the attributes. This factorization corresponds to a Bayesian network where every variable is a parent of all variables after it. As this model assumes no conditional independences, it says nothing about the distribution in itself. However, the (perhaps arbitrary) ordering we choose will matter if the form of the conditionals is constrained. If we assume tractable parametric forms for each of the conditional distributions, then the joint distribution can be computed for any vector, and the parameters of the model can be locally fitted to a penalized maximum likelihood objective using any gradient-based optimizer. For binary data, each conditional distribution can be modeled with logistic regression, which is called a fully visible sigmoid belief network (FVSBN) [7]. Neural networks can also be used for each binary prediction task [8]. The neural autoregressive distribution estimator (NADE) also uses neural networks for each conditional, but with parameter sharing inspired by a mean-field approximation to Restricted Boltzmann Machines [5].
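The chain-rule factorization above is exact for any joint distribution; for a 2-D Gaussian the conditionals are available in closed form, so the identity can be checked directly. A sketch (numbers arbitrary):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Chain rule on a correlated 2-D Gaussian: p(x1, x2) = p(x1) p(x2 | x1).
mu = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])

x = np.array([0.3, -0.5])
joint = multivariate_normal(mu, cov).logpdf(x)

# p(x1): the marginal; p(x2 | x1): the standard Gaussian conditioning formulas.
lp1 = norm(mu[0], np.sqrt(cov[0, 0])).logpdf(x[0])
cond_mu = mu[1] + cov[1, 0] / cov[0, 0] * (x[0] - mu[0])
cond_var = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]
lp2 = norm(cond_mu, np.sqrt(cond_var)).logpdf(x[1])
```

The two log-densities agree to numerical precision; NADE and RNADE constrain the *form* of each conditional, which is where the ordering starts to matter.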
In detail, each conditional is given by a feed-forward neural network with one hidden layer, h_d ∈ R^H:

$$ p(x_d = 1 \mid x_{<d}) = \mathrm{sigm}\big(v_d^\top h_d + b_d\big)\,, \quad \text{where} \quad h_d = \mathrm{sigm}\big(W_{\cdot,<d}\, x_{<d} + c\big)\,, \qquad (1) $$

where v_d ∈ R^H, b_d ∈ R, c ∈ R^H, and W ∈ R^{H×(D−1)} are neural network parameters, and sigm represents the logistic sigmoid function 1/(1 + e^{−x}). The weights between the inputs and the hidden units for each neural network are tied: W_{·,<d} is the first d−1 columns of a shared weight matrix W. This parameter sharing reduces the total number of parameters from quadratic in the number of input dimensions to linear, lessening the need for regularisation. Computing the probability of a datapoint can also be done in time linear in the dimensionality, O(DH), by sharing the computation when calculating the hidden activation of each neural network (a_d = W_{·,<d} x_{<d} + c):

$$ a_1 = c\,, \qquad a_{d+1} = a_d + x_d W_{\cdot,d}\,. \qquad (2) $$

When approximating Restricted Boltzmann Machines, the output weights {v_d} in (1) were originally tied to the input weights W. Untying these weights gave better statistical performance on a range of tasks, with negligible extra computational cost [5]. NADE has recently been extended to count data [9]. The possibility of extending generic neural autoregressive models to continuous data has been mentioned [8, 10], but has not been previously explored to our knowledge. An autoregressive mixture of experts with scale mixture model experts has been developed as part of a sophisticated multi-resolution model specifically for natural images [11]. In more general work, Gaussian processes have been used to model the conditional distributions of a fully visible Bayesian network [12]. However, these 'Gaussian process networks' cannot deal with multimodal conditional distributions or with large datasets (currently ≳10⁴ points would require further approximation). In the next section we propose a more flexible and scalable approach.
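The recurrence (2) computes all D pre-activations in O(DH) rather than the O(D²H) of naive recomputation. A sketch verifying the two agree (random parameters, not a trained NADE):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 5
W = rng.normal(size=(H, D))                    # shared input-to-hidden weights
c = rng.normal(size=H)
x = rng.integers(0, 2, size=D).astype(float)   # one binary datapoint

# O(DH) incremental pre-activations of (2): a_1 = c, a_{d+1} = a_d + x_d W[:, d].
a = np.zeros((D, H))
a[0] = c
for d in range(1, D):
    a[d] = a[d - 1] + x[d - 1] * W[:, d - 1]

# Naive O(D^2 H) recomputation: a_d = W[:, :d-1] @ x[:d-1] + c for each d.
naive = np.stack([W[:, :d] @ x[:d] + c for d in range(D)])
```

The shared running sum is what makes evaluating all D conditionals of a datapoint as cheap as a single forward pass.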
3 Real-valued neural autoregressive density estimators

The original derivation of NADE suggests deriving a real-valued version from a mean-field approximation to the conditionals of a Gaussian-RBM. However, we discarded this approach because the limitations of the Gaussian-RBM are well documented [13, 14]: its isotropic conditional noise model does not give competitive density estimates. Approximating a more capable RBM model, such as the mean-covariance RBM [15] or the spike-and-slab RBM [16], might be a fruitful future direction.

The main characteristic of NADE is the tying of its input-to-hidden weights. The output layer was 'untied' from the approximation to the RBM to give the model greater flexibility. Taking this idea further, we add more parameters to NADE to represent each one-dimensional conditional distribution with a mixture of Gaussians instead of a Bernoulli distribution. That is, the outputs are mixture density networks [6] with a shared hidden layer, using the same parameter tying as NADE. Thus, our Real-valued Neural Autoregressive Density-Estimator or RNADE model represents the probability density of a vector as:

$$ p(x) = \prod_{d=1}^{D} p(x_d \mid x_{<d}) \quad \text{with} \quad p(x_d \mid x_{<d}) = p_{\mathcal{M}}(x_d \mid \theta_d)\,, \qquad (3) $$

where p_M is a mixture of Gaussians with parameters θ_d. The mixture model parameters are calculated using a neural network with all of the preceding dimensions, x_{<d}, as inputs. We now give the details.

RNADE computes the same hidden unit activations, a_d, as before using (2). As discussed by Bengio [10], as an RNADE (or a NADE) with sigmoidal units progresses across the input dimensions d ∈ {1 . . . D}, its hidden units will tend to become more and more saturated, due to their input being a weighted sum of an increasing number of inputs. Bengio proposed alleviating this effect by rescaling the hidden units' activation by a free factor ρ_d at each step, making the hidden unit values

$$ h_d = \mathrm{sigm}(\rho_d a_d)\,. \qquad (4) $$

Learning these extra rescaling parameters worked slightly better, and all of our experiments use them. Previous work on neural networks with real-valued outputs has found that rectified linear units can work better than sigmoidal non-linearities [17]. The hidden values for rectified linear units are:

$$ h_d = \begin{cases} \rho_d a_d & \text{if } \rho_d a_d > 0\,, \\ 0 & \text{otherwise.} \end{cases} \qquad (5) $$

In preliminary experiments we found that these hidden units worked better than sigmoidal units in RNADE, and used them throughout (except for an example result with sigmoidal units in Table 2). Finally, the mixture of Gaussians parameters for the d-th conditional, θ_d = {α_d, µ_d, σ_d}, are set by:

$$ \text{K mixing fractions,} \quad \alpha_d = \mathrm{softmax}\big(V_d^{\alpha\top} h_d + b_d^{\alpha}\big)\,, \qquad (6) $$
$$ \text{K component means,} \quad \mu_d = V_d^{\mu\top} h_d + b_d^{\mu}\,, \qquad (7) $$
$$ \text{K component standard deviations,} \quad \sigma_d = \exp\big(V_d^{\sigma\top} h_d + b_d^{\sigma}\big)\,, \qquad (8) $$

where the free parameters V_d^α, V_d^µ, V_d^σ are H×K matrices, and b_d^α, b_d^µ, b_d^σ are vectors of size K. The softmax [18] ensures the mixing fractions are positive and sum to one; the exponential ensures the standard deviations are positive.

Fitting an RNADE can be done using gradient ascent on the model's likelihood given a training set of examples. We used minibatch stochastic gradient ascent in all our experiments. In those RNADE models with MoG conditionals, we multiplied the gradient of each component mean by its standard deviation (for a Gaussian, Newton's method multiplies the gradient by its variance, but empirically multiplying by the standard deviation worked better). This gradient scaling makes tight components move more slowly than broad ones, a heuristic that we found allows the use of higher learning rates.

Variants: Using a mixture of Gaussians to represent the conditional distributions in RNADE is an arbitrary parametric choice. Given several components, the mixture model can represent a rich set of skewed and multimodal distributions with different tail behaviors. However, other choices could be appropriate in particular circumstances.
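Equations (2) and (5)–(8) combine into a short log-density routine. A sketch of an (untrained) RNADE-MoG with random parameters; all shapes and scales here are illustrative only:

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp, softmax

rng = np.random.default_rng(0)
D, H, K = 5, 16, 3                       # dims, hidden units, mixture components
W = 0.1 * rng.normal(size=(H, D))        # tied input-to-hidden weights, as in (2)
c = 0.1 * rng.normal(size=H)
rho = np.ones(D)                         # per-step rescaling factors rho_d
Va, Vm, Vs = (0.1 * rng.normal(size=(D, H, K)) for _ in range(3))
ba, bm, bs = (0.1 * rng.normal(size=(D, K)) for _ in range(3))

def rnade_logpdf(x):
    """log p(x) = sum_d log p_M(x_d | theta_d(x_{<d})), RLU hidden units (5)."""
    a = c.copy()                                          # a_1 = c, from (2)
    lp = 0.0
    for d in range(D):
        h = np.maximum(rho[d] * a, 0.0)                   # (5): rectified linear
        alpha = softmax(h @ Va[d] + ba[d])                # (6): mixing fractions
        mu = h @ Vm[d] + bm[d]                            # (7): component means
        sigma = np.exp(h @ Vs[d] + bs[d])                 # (8): std deviations
        lp += logsumexp(np.log(alpha) + norm.logpdf(x[d], mu, sigma))
        a = a + x[d] * W[:, d]                            # (2): shared recurrence
    return lp

x = rng.normal(size=D)
lp = rnade_logpdf(x)
```

Each conditional is a properly normalized mixture, so the product over d is a valid joint density by construction; training would ascend the sum of such log-densities over a dataset.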
For example, work on natural images often uses scale mixtures, where components share a common mean. Conditional distributions of perceptual data are often assumed to be Laplacian [e.g. 19]. We call our main variant with mixtures of Gaussians RNADE-MoG, but also experiment with mixtures of Laplacian outputs, RNADE-MoL.

Table 1: Average test-set log-likelihood per datapoint for 4 different models on five UCI datasets. Performances not in bold can be shown to be significantly worse than at least one of the results in bold as per a paired t-test on the ten mean-likelihoods, with significance level 0.05.

Dataset          dim   size   Gaussian   MFA      FVBN     RNADE-MoG   RNADE-MoL
Red wine          11   1599   −13.18     −10.19   −11.03   −9.36       −9.46
White wine        11   4898   −13.20     −10.73   −10.52   −10.23      −10.38
Parkinsons        15   5875   −10.85     −1.99    −0.71    −0.90       −2.63
Ionosphere        32    351   −41.24     −17.55   −26.55   −2.50       −5.87
Boston housing    10    506   −11.37     −4.54    −3.41    −0.64       −4.04

4 Experiments

We compared RNADE to mixtures of Gaussians (MoG) and factor analyzers (MFA), which are surprisingly strong baselines in some tasks [20, 21]. Given the known poor performance of discrete mixtures [4, 5], we limited our experiments to modeling continuous attributes. However, it would be easy to include both discrete and continuous variables in a NADE-like architecture.

4.1 Low-dimensional data

We first considered five UCI datasets [22], previously used to study the performance of other density estimators [23, 20]. These datasets have relatively low dimensionality, with between 10 and 32 attributes, but have hard thresholds and non-linear dependencies that may make it difficult to fit mixtures of Gaussians or factor analyzers. Following Tang et al. [20], we eliminated discrete-valued attributes and an attribute from every pair with a Pearson correlation coefficient greater than 0.98. Each dimension of the data was normalized by subtracting its training-subset sample mean and dividing by its standard deviation.
All results are reported on the normalized data. As baselines we fitted full-covariance Gaussians and mixtures of factor analysers. To measure the performance of the different models, we calculated their log-likelihood on held-out test data. Because these datasets are small, we used 10 folds, with 90% of the data for training and 10% for testing. We chose the hyperparameter values for each model by per-fold cross-validation, using a ninth of the training data as validation data. Once the hyperparameter values had been chosen, we trained each model using all the training data (including the validation data) and measured its performance on the 10% of held-out test data. In order to avoid overfitting, we stopped training after reaching a training likelihood higher than the one obtained on the best validation-wise iteration of the corresponding validation run. Early stopping is crucial to avoid overfitting the RNADE models. It also improves the results of the MFAs, but to a lesser degree. The MFA models were trained using the EM algorithm [24, 25]; the number of components and factors were crossvalidated. The number of factors was chosen from the even numbers 2 . . . D, where selecting D gives a mixture of Gaussians. The number of components was chosen among all even numbers from 2 . . . 50 (crossvalidation always selected fewer than 50 components). RNADE-MoG and RNADE-MoL models were fitted using minibatch stochastic gradient descent, using minibatches of size 100, for 500 epochs, each epoch comprising 10 minibatches. For each experiment, the number of hidden units (50), the non-linear activation function of the hidden units (RLU), and the form of the conditionals were fixed.
Three hyperparameters were cross-validated using grid search: the number of components of each one-dimensional conditional was chosen from the set {2, 5, 10, 20}; the weight decay (used only to regularize the input-to-hidden weights) from the set {2.0, 1.0, 0.1, 0.01, 0.001, 0}; and the learning rate from the set {0.1, 0.05, 0.025, 0.0125}. Learning rates were decreased linearly to reach 0 after the last epoch.

We also trained fully-visible Bayesian networks (FVBN), an autoregressive model where each one-dimensional conditional is modelled by a separate mixture density network using no parameter tying. The same cross-validation procedure and hyperparameters as for RNADE training were used. The best validation-wise MDN for each one-dimensional conditional was chosen.

Figure 1: Top: 15 8x8 patches from the BSDS test set. Center: 15 samples from Zoran and Weiss's MoG model with 200 components. Bottom: 15 samples from an RNADE with 512 hidden units and 10 output components per dimension. All data and samples were drawn randomly.

The results are shown in Table 1. Autoregressive methods obtained statistical performances superior to mixture models on all datasets. An RNADE with mixture of Gaussian conditionals was among the statistically significant group of best models on all datasets. Unfortunately, we could not reproduce the data folds used by previous work; however, our improvements are larger than those demonstrated by a deep mixture of factor analyzers over standard MFA [20].

4.2 Natural image patches

We also measured the ability of RNADE to model small patches of natural images. Following the recent work of Zoran and Weiss [3], we use 8-by-8-pixel patches of monochrome natural images, obtained from the BSDS300 dataset [26] (Figure 1 gives examples). Pixels in this dataset can take a finite number of brightness values ranging from 0 to 255.
Modeling discretized data using a real-valued distribution can lead to arbitrarily high density values, by locating a narrow high-density spike on each of the possible discrete values. In order to avoid this 'cheating' solution, we added noise uniformly distributed between 0 and 1 to the value of each pixel. We then divided by 256, making each pixel take a value in the range [0, 1].

In previous experiments, Zoran and Weiss [3] subtracted the mean pixel value from each patch, reducing the dimensionality of the data by one: the value of any pixel could be perfectly predicted as minus the sum of all other pixel values. However, the original study still used a mixture of full-covariance 64-dimensional Gaussians. Such a model could obtain arbitrarily high model likelihoods, so unfortunately the likelihoods reported in previous work on this dataset [3, 20] are difficult to interpret. In a preliminary experiment using RNADE, we observed that if we model the 64-dimensional data, the 64th pixel is always predicted by a very thin spike centered at its true value. The ability of RNADE to capture this spurious dependency is reassuring, but we wouldn't want our results to be dominated by it. Recent work by Zoran and Weiss [21] projects the data onto the leading 63 eigenvectors of each component when measuring the model likelihood [27]. For comparison amongst a range of methods, we advocate simply discarding the 64th (bottom-right) pixel.

We trained our model using patches drawn randomly from 180 images in the training subset of BSDS300. A validation dataset containing 1,000 random patches from the remaining 20 images in the training subset was used for early stopping when training RNADE. We measured the performance of each model by calculating its log-likelihood on one million patches drawn randomly from the test subset, which is composed of 100 images not present in the training subset.
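The preprocessing above (add uniform [0, 1) noise to each integer pixel, rescale by 256, and drop the bottom-right pixel) is straightforward to sketch; the stand-in patch and seeded generator are illustrative assumptions.

```python
import random

def dequantize_patch(patch, rng):
    """Add uniform [0, 1) noise to integer pixel values in 0..255,
    rescale to [0, 1), and discard the last (bottom-right) pixel."""
    noisy = [(p + rng.random()) / 256.0 for p in patch]
    return noisy[:-1]

rng = random.Random(0)            # seeded for reproducibility
patch = list(range(64))           # stand-in 8x8 patch, brightness values 0..63
x = dequantize_patch(patch, rng)
```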
Given the larger scale of this dataset, hyperparameters of the RNADE and MoG models were chosen manually using the performance of preliminary runs on the validation data, rather than by an extensive search.

The RNADE model had 512 rectified-linear hidden units and a mixture of 20 one-dimensional Gaussian components per output. Training was done by minibatch gradient descent, with 25 datapoints per minibatch, for a total of 200 epochs, each comprising 1,000 minibatches. The learning rate was scheduled to start at 0.001 and linearly decreased to reach 0 after the last epoch. Gradient momentum with momentum factor 0.9 was used, but initiated at the beginning of the second epoch. A weight-decay rate of 0.001 was applied to the input-to-hidden weight matrix only. Again, we found that multiplying the gradient of the mean output parameters by the standard deviation improves results. RNADE training was early stopped but didn't show signs of overfitting. We produced a further run with 1024 hidden units for 400 epochs, with still no signs of overfitting; even larger models might perform better.

The MoG model was trained using minibatch EM, for 1,000 iterations. At each iteration 20,000 randomly sampled datapoints were used in an EM update. A step was taken from the previous mixture model towards the parameters resulting from the M-step: θt = (1 − η)θt−1 + η θEM, where the step size η was scheduled to start at 0.1 and linearly decreased to reach 0 after the last update. The training of the MoG was also early stopped and also showed no signs of overfitting.

Table 2: Average per-example log-likelihood of several mixture of Gaussians and RNADE models, with mixture of Gaussian (MoG) or mixture of Laplace (MoL) conditionals, on 8-by-8 patches of natural images. These results are measured in nats and were calculated using one million patches. Standard errors due to the finite test sample size are lower than 0.1 in every case. K gives the number of one-dimensional components for each conditional in RNADE, and the number of full-covariance components for MoG.

Model                                     Training LogL  Test LogL
MoG K=200 (Z&W)                           161.9          152.8
MoG K=100                                 152.8          144.7
MoG K=200                                 159.3          150.4
MoG K=300                                 159.3          150.4
RNADE-MoG K=5                             158.0          149.1
RNADE-MoG K=10                            160.0          151.0
RNADE-MoG K=20                            158.6          149.7
RNADE-MoL K=5                             150.2          141.5
RNADE-MoL K=10                            149.7          141.1
RNADE-MoL K=20                            150.1          141.5
RNADE-MoG K=10 (sigmoid hidden units)     155.1          146.4
RNADE-MoG K=10 (1024 units, 400 epochs)   161.1          152.1

The results are shown in Table 2. We compare RNADE with a mixture of Gaussians model trained on 63 pixels, and with a MoG trained by Zoran and Weiss (downloaded from Daniel Zoran's website) from which we removed the 64th row and column of each covariance matrix. The best RNADE test log-likelihood is, on average, 0.7 nats per patch lower than Zoran and Weiss's MoG, which had a different training procedure than our mixture of Gaussians.

Figure 1 shows a few examples from the test set, and samples from the MoG and RNADE models. Some of the samples from RNADE are unnaturally noisy, with pixel values outside the legal range (see the fourth sample from the right in Figure 1). If we constrain the pixel values to the unit range, by rejection sampling or otherwise, these artifacts go away. Limiting the output range of the model would also improve test-likelihood scores slightly, but not by much: log-likelihood does not strongly penalize models for putting a small fraction of probability mass on 'junk' images.

All of the results in this section were obtained by fitting the pixels in a raster-scan order. Perhaps surprisingly, but consistent with previous results on NADE [5] and by Frey [28], randomizing the order of the pixels made little difference to these results.
The difference in performance was comparable to the differences between multiple runs with the same pixel ordering.

4.3 Speech acoustics

We also measured the ability of RNADE to model small patches of speech spectrograms, extracted from the TIMIT dataset [29]. The patches contained 11 frames of 20 filter-banks plus energy, totaling 231 dimensions per datapoint. This filter-bank encoding is common in speech recognition, and better for visualization than the more frequently used MFCC features. A good generative model of speech could be used, for example, in denoising or speech-detection tasks.

We fitted the models using the standard TIMIT training subset, and compared RNADE with a MoG by measuring their log-likelihood on the complete TIMIT core-test dataset.

Table 3: Log-likelihood of several MoG and RNADE models on the core-test set of TIMIT, measured in nats. Standard errors due to the finite test sample size are lower than 0.3 nats in every case. RNADE obtained a higher (better) log-likelihood.

Model            Training LogL  Test LogL
MoG N=50         111.6          110.4
MoG N=100        113.4          112.0
MoG N=200        113.9          112.5
MoG N=300        114.1          112.5
RNADE-MoG K=10   125.9          123.9
RNADE-MoG K=20   126.7          124.5
RNADE-MoL K=10   120.3          118.0
RNADE-MoL K=20   122.2          119.8

Figure 2: Top: 15 datapoints from the TIMIT core-test set. Center: 15 samples from a MoG model with 200 components. Bottom: 15 samples from an RNADE with 1024 hidden units and 20 output components per dimension. On each plot, time is shown on the horizontal axis; the bottom row displays the energy feature, while the others display the filter-bank features (in ascending frequency order from the bottom). All data and samples were drawn randomly.

The RNADE model had 1024 rectified-linear hidden units and a mixture of 20 one-dimensional Gaussian components per output.
Given the larger scale of this dataset, hyperparameter choices were again made manually using validation data, and the same minibatch training procedures for RNADE and MoG were used as for natural image patches.

The results are shown in Table 3. RNADE obtained, on average, 10 nats more per test example than a mixture of Gaussians. Figure 2 shows a few examples from the test set, and samples from the MoG and RNADE models. In contrast with the log-likelihood measure, there are no marked differences between the samples from each model. Both sets of samples look like blurred spectrograms, but RNADE seems to capture sharper formant structures (peaks of energy at the lower frequency bands characteristic of vowel sounds).

5 Discussion

Mixture Density Networks (MDNs) [6] are a flexible conditional model of probability densities that can capture skewed, heavy-tailed, and multi-modal distributions. In principle, MDNs can be applied to multi-dimensional data. However, the number of parameters that the network has to output grows quadratically with the number of targets, unless the targets are assumed independent. RNADE exploits an autoregressive framework to apply practical, one-dimensional MDNs to unsupervised density estimation.

To specify an RNADE we needed to set the parametric form for the output distribution of each MDN. A sufficiently large mixture of Gaussians can closely represent any density, but it is hard to learn the conditional densities found in some problems with this representation.
The marginal for the brightness of a pixel in natural image patches is heavy tailed, closer to a Laplace distribution than Gaussian.

Figure 3: Comparison of Mixture of Gaussian (MoG) and Mixture of Laplace (MoL) conditionals. (a) Example test patch. (b) Density of p(x1) under RNADE-MoG (dashed red) and RNADE-MoL (solid blue), both with K=10. RNADE-MoL closely matches a histogram of brightness values from patches in the test set (green). The vertical line indicates the value in (a). (c) Log-density of the distributions in (b). (d) Log-density of MoG and MoL conditionals of pixel 19 in (a). (e) Log-density of MoG and MoL conditionals of pixel 37 in (a). (f) Difference in predictive log-density between MoG and MoL conditionals for each pixel, averaged over 10,000 test patches.

Therefore, RNADE-MoG must fit predictions of the first pixel, p(x1), with several Gaussians of different widths that coincidentally have zero mean. This solution can be difficult to fit, and RNADE with a mixture of Laplace outputs predicted the first pixel of image patches better than with a mixture of Gaussians (Figure 3b and c). However, later pixels were predicted better with Gaussian outputs (Figure 3f); the mixture of Laplace model is not suitable for predicting with large contexts. For image patches, a scale mixture can work well [11], and could be explored within our framework. However, for general applications, scale mixtures within RNADE would be too restrictive (e.g., p(x1) would be zero-mean and unimodal). More flexible one-dimensional forms may aid RNADE to generalize better for different context sizes and across a range of applications.
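The two conditional forms compared in Figure 3 are both one-dimensional mixtures. This sketch (not the paper's code) evaluates their log-densities with a numerically stable log-sum-exp and illustrates the heavier Laplace tail at a matched variance (a Laplace with scale b has variance 2b²).

```python
import math

def log_mog(x, weights, means, sds):
    """Log-density of a 1-D mixture of Gaussians."""
    terms = [
        math.log(w) - math.log(s * math.sqrt(2 * math.pi)) - 0.5 * ((x - m) / s) ** 2
        for w, m, s in zip(weights, means, sds)
    ]
    mx = max(terms)  # log-sum-exp for numerical stability
    return mx + math.log(sum(math.exp(t - mx) for t in terms))

def log_mol(x, weights, means, bs):
    """Log-density of a 1-D mixture of Laplace distributions."""
    terms = [
        math.log(w) - math.log(2 * b) - abs(x - m) / b
        for w, m, b in zip(weights, means, bs)
    ]
    mx = max(terms)
    return mx + math.log(sum(math.exp(t - mx) for t in terms))

# One zero-mean component each, with matched unit variance.
g = log_mog(3.0, [1.0], [0.0], [1.0])
l = log_mol(3.0, [1.0], [0.0], [1.0 / math.sqrt(2)])
```

Three standard deviations out, the Laplace component assigns noticeably more log-density than the Gaussian, which is the effect behind the Figure 3c comparison.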
One of the main drawbacks of RNADE, and of neural networks in general, is the need to decide the values of several training hyperparameters. The gradient-descent learning rate can be adjusted automatically using, for example, the techniques developed by Schaul et al. [30]. Also, methods for choosing hyperparameters more efficiently than grid search have recently been developed [31, 32]. These, and several other recent improvements in the neural-network field, like dropout [33], should be directly applicable to RNADE, and possibly obtain even better performance than shown in this work. RNADE makes it relatively straightforward to translate advances in the neural-network field into better density estimators, or at least into new estimators with different inductive biases.

In summary, we have presented RNADE, a novel 'black-box' density estimator. Both likelihood computation time and the number of parameters scale linearly with the dataset dimensionality. Generalization across a range of tasks, representing arbitrary feature vectors, image patches, and auditory spectrograms, is excellent. Performance on image patches was close to a recently reported state-of-the-art mixture model [3], and RNADE outperformed mixture models on all other datasets considered.

Acknowledgments

We thank John Bridle, Steve Renals, Amos Storkey, and Daniel Zoran for useful interactions.

References

[1] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
[2] T. Cacoullos. Estimation of a multivariate density. Annals of the Institute of Statistical Mathematics, 18(1):179–189, 1966.
[3] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In International Conference on Computer Vision, pages 479–486. IEEE, 2011.
[4] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872–879. Omnipress, 2008.
[5] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. Journal of Machine Learning Research W&CP, 15:29–37, 2011.
[6] C. M. Bishop. Mixture density networks. Technical Report NCRG 4288, Neural Computing Research Group, Aston University, Birmingham, 1994.
[7] B. J. Frey, G. E. Hinton, and P. Dayan. Does the wake-sleep algorithm produce good density estimators? In Advances in Neural Information Processing Systems 8, pages 661–670. MIT Press, 1996.
[8] Y. Bengio and S. Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. Advances in Neural Information Processing Systems, 12:400–406, 2000.
[9] H. Larochelle and S. Lauly. A neural autoregressive topic model. In Advances in Neural Information Processing Systems 25, 2012.
[10] Y. Bengio. Discussion of the neural autoregressive distribution estimator. Journal of Machine Learning Research W&CP, 15:38–39, 2011.
[11] L. Theis, R. Hosseini, and M. Bethge. Mixtures of conditional Gaussian scale mixtures applied to multiscale image representations. PLoS ONE, 7(7), 2012. doi: 10.1371/journal.pone.0039857.
[12] N. Friedman and I. Nachman. Gaussian process networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 211–219. Morgan Kaufmann Publishers Inc., 2000.
[13] I. Murray and R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems 21, pages 1137–1144, 2009.
[14] L. Theis, S. Gerwinn, F. Sinz, and M. Bethge. In all likelihood, deep belief is not enough. Journal of Machine Learning Research, 12:3071–3096, 2011.
[15] M. A. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In Computer Vision and Pattern Recognition, pages 2551–2558. IEEE, 2010.
[16] A. Courville, J. Bergstra, and Y. Bengio. A spike and slab restricted Boltzmann machine. Journal of Machine Learning Research W&CP, 15, 2011.
[17] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, pages 807–814. Omnipress, 2010.
[18] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neuro-computing: algorithms, architectures and applications, pages 227–236. Springer-Verlag, 1989.
[19] T. Robinson. SHORTEN: simple lossless and near-lossless waveform compression. Technical Report CUED/F-INFENG/TR.156, Engineering Department, Cambridge University, 1994.
[20] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep mixtures of factor analysers. In Proceedings of the 29th International Conference on Machine Learning, pages 505–512. Omnipress, 2012.
[21] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. Advances in Neural Information Processing Systems, 25:1745–1753, 2012.
[22] K. Bache and M. Lichman. UCI machine learning repository, 2013. http://archive.ics.uci.edu/ml.
[23] R. Silva, C. Blundell, and Y. W. Teh. Mixed cumulative distribution networks. Journal of Machine Learning Research W&CP, 15:670–678, 2011.
[24] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996.
[25] J. Verbeek. Mixture of factor analyzers Matlab implementation, 2005. http://lear.inrialpes.fr/~verbeek/code/.
[26] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In International Conference on Computer Vision, volume 2, pages 416–423. IEEE, July 2001.
[27] D. Zoran. Personal communication, 2013.
[28] B. Frey. Graphical models for machine learning and digital communication. MIT Press, 1998.
[29] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, N. L. Dahlgren, and V. Zue. TIMIT acoustic-phonetic continuous speech corpus. Linguistic Data Consortium, 10(5):0, 1993.
[30] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[31] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281–305, 2012.
[32] J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25, pages 2960–2968, 2012.
[33] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Estimating the Unseen: Improved Estimators for Entropy and other Properties

Gregory Valiant∗ (Stanford University, Stanford, CA 94305, valiant@stanford.edu)
Paul Valiant† (Brown University, Providence, RI 02912, pvaliant@gmail.com)

Abstract

Recently, Valiant and Valiant [1, 2] showed that a class of distributional properties, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear-sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most n distinct elements, these properties can be estimated accurately using a sample of size O(n/log n). We propose a novel modification of this approach and show: 1) theoretically, this estimator is optimal (to constant factors, over worst-case instances), and 2) in practice, it performs exceptionally well for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. Perhaps unsurprisingly, the key step in our approach is to first use the sample to characterize the "unseen" portion of the distribution. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: we seek to estimate the shape of the unobserved portion of the distribution. This approach is robust, general, and theoretically principled; we expect that it may be fruitfully used as a component within larger machine learning and data analysis systems.

1 Introduction

What can one infer about an unknown distribution based on a random sample? If the distribution in question is relatively "simple" in comparison to the sample size—for example if our sample consists of 1000 independent draws from a distribution supported on 100 domain elements—then the empirical distribution given by the sample will likely be an accurate representation of the true distribution.
If, on the other hand, we are given a relatively small sample in relation to the size and complexity of the distribution—for example a sample of size 100 drawn from a distribution supported on 1000 domain elements—then the empirical distribution may be a poor approximation of the true distribution. In this case, can one still extract accurate estimates of various properties of the true distribution?

Many real-world machine learning and data analysis tasks face this challenge; indeed there are many large datasets where the data only represent a tiny fraction of an underlying distribution we hope to understand. This challenge of inferring properties of a distribution given a "too small" sample is encountered in a variety of settings, including text data (typically, no matter how large the corpus, around 30% of the observed vocabulary only occurs once), customer data (many customers or website users are only seen a small number of times), the analysis of neural spike trains [15], and the study of genetic mutations across a population¹.

∗http://theory.stanford.edu/~valiant/ A portion of this work was done while at Microsoft Research.
†http://cs.brown.edu/people/pvaliant/

Additionally, many database management tasks employ sampling techniques to optimize query execution; improved estimators would allow for either smaller sample sizes or increased accuracy, leading to improved efficiency of the database system (see, e.g. [6, 7]).

We introduce a general and robust approach for using a sample to characterize the "unseen" portion of the distribution. Without any a priori assumptions about the distribution, one cannot know what the unseen domain elements are. Nevertheless, one can still hope to estimate the "shape" or histogram of the unseen portion of the distribution—essentially, we estimate how many unseen domain elements occur in various probability ranges.
Given such a reconstruction, one can then use it to estimate any property of the distribution which only depends on the shape/histogram; such properties are termed symmetric and include entropy and support size. In light of the long history of work on estimating entropy by the neuroscience, statistics, computer science, and information theory communities, it is compelling that our approach (which is agnostic to the property in question) outperforms these entropy-specific estimators.

Additionally, we extend this intuition to develop estimators for properties of pairs of distributions, the most important of which are the distance metrics. We demonstrate that our approach can accurately estimate the total variational distance (also known as statistical distance or ℓ1 distance) between distributions using small samples. To illustrate the challenge of estimating variational distance (between distributions over discrete domains) given small samples, consider drawing two samples, each consisting of 1000 draws from a uniform distribution over 10,000 distinct elements. Each sample can contain at most 10% of the domain elements, and their intersection will likely contain only 1% of the domain elements; yet from this, one would like to conclude that these two samples must have been drawn from nearly identical distributions.

1.1 Previous work: estimating distributions, and estimating properties

There is a long line of work on inferring information about the unseen portion of a distribution, beginning with independent contributions from both R.A. Fisher and Alan Turing during the 1940s. Fisher was presented with data on butterflies collected over a 2-year expedition in Malaysia, and sought to estimate the number of new species that would be discovered if a second 2-year expedition were conducted [8]. (His answer was "≈75.") At nearly the same time, as part of the British WWII effort to understand the statistics of the German Enigma ciphers, Turing and I.J.
Good were working on the related problem of estimating the total probability mass accounted for by the unseen portion of a distribution [9]. This resulted in the Good-Turing frequency estimation scheme, which continues to be employed, analyzed, and extended by our community (see, e.g. [10, 11]).

More recently, in a similar spirit to this work, Orlitsky et al. posed the following natural question: given a sample, what distribution maximizes the likelihood of seeing the observed species frequencies, that is, the number of species observed once, twice, etc.? [12, 13] (What Orlitsky et al. term the pattern of a sample, we call the fingerprint, as in Definition 1.) Orlitsky et al. show that such likelihood-maximizing distributions can be found in some specific settings, though the problem of finding or approximating such distributions for typical patterns/fingerprints may be difficult. Recently, Acharya et al. showed that this maximum likelihood approach can be used to yield a near-optimal algorithm for deciding whether two samples originated from identical distributions, versus distributions that have large distance [14].

In contrast to this approach of trying to estimate the "shape/histogram" of a distribution, there has been nearly a century of work proposing and analyzing estimators for particular properties of distributions. In Section 3 we describe several standard, and some recent, estimators for entropy, though we refer the reader to [15] for a thorough treatment. There is also a large literature on estimating support size (also known as the "species problem", and the related "distinct elements" problem), and we refer the reader to [16] and to [17] for several hundred references.
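The Good-Turing scheme discussed above estimates the total unseen probability mass from the fraction of the sample made up of singletons (species observed exactly once); a minimal sketch, with an illustrative animal sample:

```python
from collections import Counter

def good_turing_unseen_mass(sample):
    """Good-Turing estimate of the probability mass of unseen elements:
    the number of species observed exactly once, divided by the sample size."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

# A sample of 10 animals with two singleton species (cat, dog).
X = ["mouse", "mouse", "bird", "cat", "mouse", "bird", "bird", "mouse", "dog", "mouse"]
mass = good_turing_unseen_mass(X)
```

Note that this estimates only the *total* mass of the unseen portion, not its shape, which is exactly the gap the present paper's approach addresses.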
Over the past 15 years, the theoretical computer science community has spent significant effort developing estimators and establishing worst-case information-theoretic lower bounds on the sample size required for various distribution estimation tasks, including entropy and support size (e.g. [18, 19, 20, 21]).

¹Three recent studies (appearing in Science last year) found that very rare genetic mutations are especially abundant in humans, and observed that better statistical tools are needed to characterize this "rare events" regime, so as to resolve fundamental problems about our evolutionary process and selective pressures [3, 4, 5].

The algorithm we present here is based on the intuition of the estimator described in our theoretical work [1]. That estimator is not practically viable, and additionally requires as input an accurate upper bound on the support size of the distribution in question. Both the algorithm proposed in this current work and that of [1] employ linear programming, though these programs differ significantly (to the extent that the linear program of [1] does not even have an objective function and simply defines a feasible region). Our proof of the theoretical guarantees in this work leverages some of the machinery of [1] (in particular, the "Chebyshev bump construction") and achieves the same theoretical worst-case optimality guarantees. See Appendix A for further theoretical and practical comparisons with the estimator of [1].

1.2 Definitions and examples

We begin by defining the fingerprint of a sample, which essentially removes all the label information from the sample. For the remainder of this paper, we will work with the fingerprint of a sample, rather than with the sample itself.

Definition 1. Given a sample X = (x1, . . . , xk), the associated fingerprint, F = (F1, F2, . . .), is the "histogram of the histogram" of the sample.
Formally, F is the vector whose ith component, Fi, is the number of elements in the domain that occur exactly i times in sample X. For estimating entropy, or any other property whose value is invariant to relabeling the distribution support, the fingerprint of a sample contains all the relevant information (see [21] for a formal proof of this fact). We note that in some of the literature, the fingerprint is alternately termed the pattern, histogram, histogram of the histogram, or collision statistics of the sample.

In analogy with the fingerprint of a sample, we define the histogram of a distribution, a representation in which the labels of the domain have been removed.

Definition 2. The histogram of a distribution D is a mapping hD : (0, 1] → N ∪ {0}, where hD(x) is equal to the number of domain elements that each occur in distribution D with probability x. Formally, hD(x) = |{α : D(α) = x}|, where D(α) is the probability mass that distribution D assigns to domain element α. We will also allow for "generalized histograms" in which hD does not necessarily take integral values.

Since h(x) denotes the number of elements that have probability x, we have ∑_{x : h(x) ≠ 0} x · h(x) = 1, as the total probability mass of a distribution is 1. Any symmetric property is a function of only the histogram of the distribution:

• The Shannon entropy H(D) of a distribution D is defined to be
  H(D) := − ∑_{α ∈ sup(D)} D(α) log2 D(α) = − ∑_{x : hD(x) ≠ 0} hD(x) · x · log2 x.

• The support size is the number of domain elements that occur with positive probability:
  |sup(D)| := |{α : D(α) > 0}| = ∑_{x : hD(x) ≠ 0} hD(x).

We provide an example to illustrate the above definitions:

Example 3. Consider a sequence of animals, obtained as a sample from the distribution of animals on a certain island, X = (mouse, mouse, bird, cat, mouse, bird, bird, mouse, dog, mouse).
We have F = (2, 0, 1, 0, 1), indicating that two species occurred exactly once (cat and dog), one species occurred exactly three times (bird), and one species occurred exactly five times (mouse).

Consider the following distribution of animals: Pr(mouse) = 1/2, Pr(bird) = 1/4, Pr(cat) = Pr(dog) = Pr(bear) = Pr(wolf) = 1/16. The associated histogram of this distribution is h : (0, 1] → Z defined by h(1/16) = 4, h(1/4) = 1, h(1/2) = 1, and for all x ∉ {1/16, 1/4, 1/2}, h(x) = 0.

As we will see in Example 5 below, the fingerprint of a sample is intimately related to the Binomial distribution; the theoretical analysis will be greatly simplified by reasoning about the related Poisson distribution, which we now define:

Definition 4. We denote the Poisson distribution of expectation λ as Poi(λ), and write poi(λ, j) := e^{−λ} λ^j / j! to denote the probability that a random variable with distribution Poi(λ) takes value j.

Example 5. Let D be the uniform distribution with support size 1000. Then hD(1/1000) = 1000, and for all x ≠ 1/1000, hD(x) = 0. Let X be a sample consisting of 500 independent draws from D. Each element of the domain, in expectation, will occur 1/2 times in X, and thus the number of occurrences of each domain element in the sample X will be roughly distributed as Poi(1/2). (The exact distribution will be Binomial(500, 1/1000), though the Poisson distribution is an accurate approximation.) By linearity of expectation, the expected fingerprint satisfies E[Fi] ≈ 1000 · poi(1/2, i). Thus we expect to see roughly 303 elements once, 76 elements twice, 13 elements three times, etc., and in expectation 607 domain elements will not be seen at all.
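Definitions 1 and 2 and the Poisson approximation of Example 5 are easy to check directly; this sketch computes a fingerprint, the entropy of a histogram, and the expected fingerprint entries quoted above.

```python
import math
from collections import Counter

def fingerprint(sample):
    """F[i-1] = number of distinct elements occurring exactly i times."""
    counts = Counter(sample).values()
    F = [0] * max(counts)
    for c in counts:
        F[c - 1] += 1
    return F

def entropy(h):
    """Shannon entropy from a histogram: -sum over x with h(x) != 0 of h(x)*x*log2(x)."""
    return -sum(hx * x * math.log2(x) for x, hx in h.items() if hx != 0)

def poi(lam, j):
    """Poisson pmf: poi(lam, j) = e^{-lam} lam^j / j!."""
    return math.exp(-lam) * lam ** j / math.factorial(j)

# Example 3's sample and distribution.
X = ["mouse", "mouse", "bird", "cat", "mouse", "bird", "bird", "mouse", "dog", "mouse"]
F = fingerprint(X)
h = {1 / 2: 1, 1 / 4: 1, 1 / 16: 4}
H = entropy(h)

# Example 5: expected fingerprint of 500 draws from Unif(1000).
expected = [1000 * poi(0.5, i) for i in range(4)]  # i = 0 (unseen), 1, 2, 3
```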
2 Estimating the unseen

Given the fingerprint F of a sample of size k, drawn from a distribution with histogram h, our high-level approach is to find a histogram h′ that has the property that if one were to take k independent draws from a distribution with histogram h′, the fingerprint of the resulting sample would be similar to the observed fingerprint F. The hope is then that h and h′ will be similar and, in particular, have similar entropies, support sizes, etc. As an illustration of this approach, suppose we are given a sample of size k = 500, with fingerprint F = (301, 78, 13, 1, 0, 0, . . .); recalling Example 5, we recognize that F is very similar to the expected fingerprint that we would obtain if the sample had been drawn from the uniform distribution over support 1000. Although the sample only contains 393 unique domain elements, we might be justified in concluding that the entropy of the true distribution from which the sample was drawn is close to H(Unif(1000)) = log2(1000).

In general, how does one obtain a "plausible" histogram from a fingerprint in a principled fashion? We must start by understanding how to obtain a plausible fingerprint from a histogram. Given a distribution D, and some domain element α occurring with probability x = D(α), the probability that it will be drawn exactly i times in k independent draws from D is Pr[Binomial(k, x) = i] ≈ poi(kx, i). By linearity of expectation, the expected ith fingerprint entry will roughly satisfy

E[Fi] ≈ Σ_{x : hD(x) ≠ 0} hD(x) · poi(kx, i).    (1)

This mapping between histograms and expected fingerprints is linear in the histogram, with coefficients given by the Poisson probabilities. Additionally, it is not hard to show that Var[Fi] ≤ E[Fi], and thus the fingerprint is tightly concentrated about its expected value. This motivates a "first moment" approach: we will, roughly, invert the linear map from histograms to expected fingerprint entries, to yield a map from observed fingerprints to plausible histograms.
There is one additional component of our approach. For many fingerprints, there will be a large space of equally plausible histograms. To illustrate, suppose we obtain fingerprint F = (10, 0, 0, 0, . . .), and consider the two histograms given by the uniform distributions with respective support sizes 10,000 and 100,000. Given either distribution, the probability of obtaining the observed fingerprint from a set of 10 samples is > .99, yet these distributions are quite different and have very different entropy values and support sizes. Both are very plausible; which distribution should we return? To resolve this issue in a principled fashion, we strengthen our initial goal of "returning a histogram that could have plausibly generated the observed fingerprint": we instead return the simplest histogram that could have plausibly generated the observed fingerprint. Recall the example above, where we observed only 10 distinct elements, but to explain the data we could either infer an additional 9,990 unseen elements, or an additional 99,990. In this sense, inferring "only" 9,990 additional unseen elements is the simplest explanation that fits the data, in the spirit of Occam's razor.2

2.1 The algorithm

We pose this problem of finding the simplest plausible histogram as a pair of linear programs. The first linear program will return a histogram h′ that minimizes the distance between its expected fingerprint and the observed fingerprint, where we penalize the discrepancy between Fi and E[F^h′_i] in proportion to 1/√(1 + Fi), the inverse of our estimate of the standard deviation of Fi, since Poisson distributions have variance equal to their expectation. The constraint that h′ corresponds to a histogram simply means that the total probability mass is 1, and all probability values are nonnegative.

2The practical performance seems virtually unchanged if one returns the "plausible" histogram of minimal entropy, instead of minimal support size (see Appendix B).
The second linear program will then find a histogram h′′ of minimal support size, subject to the constraint that the distance between its expected fingerprint and the observed fingerprint is not much worse than that of the histogram found by the first linear program. To make the linear programs finite, we consider a fine mesh of values x1, . . . , xℓ ∈ (0, 1] that between them discretely approximate the potential support of the histogram. The variables of the linear program, h′1, . . . , h′ℓ, will correspond to the histogram values at these mesh points, with variable h′i representing the number of domain elements that occur with probability xi, namely h′(xi). A minor complicating issue is that this approach is designed for the challenging "rare events" regime, where there are many domain elements each seen only a handful of times. By contrast, if there is a domain element that occurs very frequently, say with probability 1/2, then the number of times it occurs will be concentrated about its expectation of k/2 (and the trivial empirical estimate will be accurate), though fingerprint Fk/2 will not be concentrated about its expectation, as it will take an integer value of either 0, 1 or 2. Hence we will split the fingerprint into "easy" and "hard" portions, and use the empirical estimator for the easy portion and our linear programming approach for the hard portion. The full algorithm is below (see our websites or Appendix D for Matlab code).

Algorithm 1. ESTIMATE UNSEEN
Input: Fingerprint F = F1, F2, . . . , Fm, derived from a sample of size k; vector x = x1, . . . , xℓ with 0 < xi ≤ 1; and error parameter α > 0.
Output: List of pairs (y1, h′y1), (y2, h′y2), . . . , with yi ∈ (0, 1] and h′yi ≥ 0.
• Initialize the output list of pairs to be empty, and initialize a vector F′ to be equal to F.
• For i = 1 to k:
  – If Σ_{j ∈ {i−√i, . . . , i+√i}} Fj ≤ 2√i [i.e. if the fingerprint is "sparse" at index i],
    set F′i = 0, and append the pair (i/k, Fi) to the output list.
• Let vopt be the objective function value returned by running Linear Program 1 on input F′, x.
• Let h′′ be the histogram returned by running Linear Program 2 on input F′, x, vopt, α.
• For all i s.t. h′′i > 0, append the pair (xi, h′′i) to the output list.

Linear Program 1. FIND PLAUSIBLE HISTOGRAM
Input: Fingerprint F = F1, F2, . . . , Fm, derived from a sample of size k, and vector x = x1, . . . , xℓ consisting of a fine mesh of points in the interval (0, 1].
Output: Vector h′ = h′1, . . . , h′ℓ, and objective value vopt ∈ R.
Let h′1, . . . , h′ℓ and vopt be, respectively, the solution assignment and the corresponding objective function value of the solution of the following linear program, with variables h′1, . . . , h′ℓ:
Minimize: Σ_{i=1}^{m} (1/√(1 + Fi)) · |Fi − Σ_{j=1}^{ℓ} h′j · poi(k·xj, i)|
Subject to: Σ_{j=1}^{ℓ} xj·h′j = Σ_i i·Fi/k, and ∀j, h′j ≥ 0.

Linear Program 2. FIND SIMPLEST PLAUSIBLE HISTOGRAM
Input: Fingerprint F = F1, F2, . . . , Fm, derived from a sample of size k; vector x = x1, . . . , xℓ consisting of a fine mesh of points in the interval (0, 1]; optimal objective function value vopt from Linear Program 1; and error parameter α > 0.
Output: Vector h′′ = h′′1, . . . , h′′ℓ.
Let h′′1, . . . , h′′ℓ be the solution assignment of the following linear program, with variables h′′1, . . . , h′′ℓ:
Minimize: Σ_{j=1}^{ℓ} h′′j
Subject to: Σ_{i=1}^{m} (1/√(1 + Fi)) · |Fi − Σ_{j=1}^{ℓ} h′′j · poi(k·xj, i)| ≤ vopt + α, Σ_{j=1}^{ℓ} xj·h′′j = Σ_i i·Fi/k, and ∀j, h′′j ≥ 0.

Theorem 1. There exists a constant C0 > 0 and an assignment of the parameter α := α(k) of Algorithm 1 such that for any c > 0, for sufficiently large n, given a sample of size k = c·n/log n consisting of independent draws from a distribution D over a domain of size at most n, with probability at least 1 − e^(−n^Ω(1)) over the randomness in the selection of the sample, Algorithm 1 (see Footnote 3), when run with a sufficiently fine mesh x1, . . . , xℓ, returns a histogram h′′ such that |H(D) − H(h′′)| ≤ C0·√c.
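As a concrete illustration of what Linear Program 1 measures, the sketch below (our own illustration, not the paper's Matlab implementation; the solver itself is omitted, and the function names are ours) evaluates LP1's weighted objective for two candidate histograms against the fingerprint F = (301, 78, 13, 1) from the example earlier in this section. The uniform distribution over 1,000 elements scores far better than a uniform distribution over 400.

```python
# Sketch: evaluating Linear Program 1's objective for a candidate histogram.
# (A real solver minimizes this over h; here we only score fixed candidates.)
from math import exp, factorial

def poi(lam, j):
    return exp(-lam) * lam**j / factorial(j)

def lp1_objective(F, xs, h, k):
    """Weighted distance between observed fingerprint F and the expected
    fingerprint of the histogram {xs[j] -> h[j]} under k draws."""
    total = 0.0
    for i, Fi in enumerate(F, start=1):
        expected_Fi = sum(hj * poi(k * xj, i) for xj, hj in zip(xs, h))
        total += abs(Fi - expected_Fi) / (1 + Fi) ** 0.5
    return total

k = 500
F = [301, 78, 13, 1]                  # observed fingerprint from the text
uniform_1000 = ([1 / 1000], [1000])   # 1000 elements, each w.p. 1/1000
uniform_400 = ([1 / 400], [400])      # a much less plausible explanation
good = lp1_objective(F, *uniform_1000, k)
bad = lp1_objective(F, *uniform_400, k)
print(good < bad)                     # True: Unif(1000) explains F far better
```

The objective here is small exactly when the candidate histogram's expected fingerprint, computed via equation (1), is close to the observed one in every coordinate.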
3For simplicity, we prove this statement for Algorithm 1 with the second bullet step of the algorithm modified as follows: there is an explicit cutoff N such that the linear programming approach is applied to fingerprint entries Fi for i ≤ N, and the empirical estimate is applied to fingerprint entries Fi for i > N.

The above theorem characterizes the worst-case performance guarantees of the above algorithm in terms of entropy estimation. The proof of Theorem 1 is rather technical, and we provide the complete proof, together with a high-level overview of the key components, in Appendix C. In fact, we prove a stronger theorem, guaranteeing that the histogram returned by Algorithm 1 is close (in a specific metric) to the histogram of the true distribution; this stronger theorem then implies that Algorithm 1 can accurately estimate any statistical property that is sufficiently Lipschitz continuous with respect to the specific metric on histograms. The information theoretic lower bounds of [1] show that there is some constant C1 such that for sufficiently large k, no algorithm can estimate the entropy of (worst-case) distributions of support size n to within ±0.1 with any probability of success greater than 0.6 when given a sample of size at most k = C1·n/log n. Together with Theorem 1, this establishes the worst-case optimality of Algorithm 1 (to constant factors).

3 Empirical results

In this section we demonstrate that Algorithm 1 performs well in practice. We begin by briefly discussing the five entropy estimators to which we compare our estimator in Figure 1. The first three are standard and are, perhaps, the most commonly used estimators [15]. We then describe two recently proposed estimators that have been shown to perform well [22].

The "naive" estimator: the entropy of the empirical distribution; namely, given a fingerprint F derived from a set of k samples, Hnaive(F) := −Σ_i Fi · (i/k) · log2(i/k).
The Miller–Madow corrected estimator [23]: the naive estimator Hnaive corrected to try to account for the second derivative of the logarithm function, namely HMM(F) := Hnaive(F) + (Σ_i Fi − 1)/(2k), though we note that the numerator of the correction term is sometimes replaced by various related quantities; see [24].

The jackknifed naive estimator [25, 26]: HJK(F) := k·Hnaive(F) − ((k−1)/k)·Σ_{j=1}^{k} Hnaive(F^(−j)), where F^(−j) is the fingerprint given by removing the contribution of the jth sample.

The coverage adjusted estimator (CAE) [27]: Chao and Shen proposed the CAE, which is specifically designed to apply to settings in which there is a significant component of the distribution that is unseen, and it was shown to perform well in practice in [22].4 Given a fingerprint F derived from a set of k samples, let Ps := 1 − F1/k be the Good–Turing estimate of the probability mass of the "seen" portion of the distribution [9]. The CAE adjusts the empirical probabilities according to Ps, then applies the Horvitz–Thompson estimator for population totals [28] to take into account the probability that the elements were seen. This yields:
HCAE(F) := −Σ_i Fi · [(i/k)·Ps · log2((i/k)·Ps)] / [1 − (1 − (i/k)·Ps)^k].

The Best Upper Bound estimator [15]: the final estimator to which we compare ours is the Best Upper Bound (BUB) estimator of Paninski. This estimator is obtained by searching for a minimax linear estimator with respect to a certain error metric. The linear estimators of [2] can be viewed as a variant of this estimator with provable performance bounds.5 The BUB estimator requires, as input, an upper bound on the support size of the distribution from which the samples are drawn; if the bound provided is inaccurate, the performance degrades considerably, as was also remarked in [22]. In our experiments, we used Paninski's implementation of the BUB estimator (publicly available on his website), with default parameters.
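The three standard estimators above are simple functions of the fingerprint alone. The sketch below is our own pure-Python illustration (hypothetical helper names); the jackknife exploits the fact that removing one sample of an element seen i times simply shifts one unit of Fi to Fi−1.

```python
# Sketch of the naive, Miller-Madow, and jackknifed entropy estimators,
# all computed directly from a fingerprint F (F[i-1] = F_i). Names are ours.
from math import log2

def h_naive(F):
    k = sum(i * Fi for i, Fi in enumerate(F, 1))
    return -sum(Fi * (i / k) * log2(i / k) for i, Fi in enumerate(F, 1) if Fi)

def h_miller_madow(F):
    k = sum(i * Fi for i, Fi in enumerate(F, 1))
    return h_naive(F) + (sum(F) - 1) / (2 * k)

def h_jackknife(F):
    k = sum(i * Fi for i, Fi in enumerate(F, 1))
    total_loo = 0.0
    for i, Fi in enumerate(F, 1):
        if Fi == 0:
            continue
        # Removing one sample of an element seen i times: F_i -= 1, F_{i-1} += 1.
        Fm = list(F)
        Fm[i - 1] -= 1
        if i > 1:
            Fm[i - 2] += 1
        total_loo += i * Fi * h_naive(Fm)  # i*Fi of the k removals yield this F^{-j}
    return k * h_naive(F) - ((k - 1) / k) * total_loo

F = [2, 0, 1, 0, 1]    # the animal sample from Example 3 (k = 10)
print(h_naive(F))       # ~1.685 bits: entropy of the empirical distribution
print(h_miller_madow(F))
```

On this fingerprint the Miller–Madow correction adds (4 − 1)/(2·10) = 0.15 bits to the naive estimate.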
For the distributions with finite support, we gave the true support size as input, and thus we are arguably comparing our estimator to the best-case performance of the BUB estimator. See Figure 1 for the comparison of Algorithm 1 with these estimators.

4One curious weakness of the CAE is that its performance is exceptionally poor on some simple large instances. Given a sample of size k from a uniform distribution over k elements, it is not hard to show that the bias of the CAE is Ω(log k). This error is not even bounded! For comparison, even the naive estimator has error bounded by a constant in the limit as k → ∞ in this setting. This bias of the CAE is easily observed in our experiments as the "hump" in the top row of Figure 1.
5We also implemented the linear estimators of [2], though found that the BUB estimator performed better.

[Figure 1 appears here: plots of RMSE versus sample size for the estimators Naive, Miller–Madow, Jackknifed, CAE, BUB, and Unseen, on the distribution classes Unif[n], MixUnif[n], Zipf[n], Zipf2[n], Geom[n], and MixGeomZipf[n], for n = 1,000, 10,000, and 100,000.]

Figure 1:
Plots depicting the square root of the mean squared error (RMSE) of each entropy estimator over 500 trials, plotted as a function of the sample size; note the logarithmic scaling of the x-axis. The samples are drawn from six classes of distributions: the uniform distribution Unif[n], which assigns probability pi = 1/n for i = 1, 2, . . . , n; an even mixture of Unif[n/5] and Unif[4n/5], which assigns probability pi = 5/(2n) for i = 1, . . . , n/5 and probability pi = 5/(8n) for i = n/5 + 1, . . . , n; the Zipf distribution Zipf[n], which assigns probability pi = (1/i)/Σ_{j=1}^{n}(1/j) for i = 1, 2, . . . , n and is commonly used to model naturally occurring "power law" distributions, particularly in natural language processing; a modified Zipf distribution with power-law exponent 0.6, Zipf2[n], which assigns probability pi = (1/i^0.6)/Σ_{j=1}^{n}(1/j^0.6) for i = 1, 2, . . . , n; the geometric distribution Geom[n], which has infinite support and assigns probability pi = (1/n)(1 − 1/n)^i for i = 1, 2, . . .; and lastly an even mixture of Geom[n/2] and Zipf[n/2]. For each distribution, we considered three settings of the parameter n: n = 1,000 (left column), n = 10,000 (center column), and n = 100,000 (right column). In each plot, the sample size ranges over the interval [n^0.6, n^1.25]. All experiments were run in Matlab. The error parameter α in Algorithm 1 was set to 0.5 for all trials, and the vector x = x1, x2, . . . used as the support of the returned histogram was chosen to be a coarse geometric mesh, with x1 = 1/k^2 and xi = 1.1·x_{i−1}. The experimental results are essentially unchanged if the parameter α is varied within the range [0.25, 1], or if x1 is decreased, or if the mesh is made finer (see Appendix B). Appendix D contains our Matlab implementation of Algorithm 1 (also available from our websites).
The unseen estimator performs far better than the three standard estimators, dominates the CAE estimator for larger sample sizes and on samples from the Zipf distributions, and also dominates the BUB estimator, even for the uniform and Zipf distributions for which the BUB estimator received the true support sizes as input.

[Figure 2 appears here: three plots of estimated ℓ1 distance versus sample size, comparing the Naive and Unseen estimators, for d = 0, d = 0.5, and d = 1.]

Figure 2: Plots depicting the estimated total variation distance (ℓ1 distance) between two uniform distributions on n = 10,000 points, in three cases: the two distributions are identical (left plot, d = 0), the supports overlap on half their domain elements (center plot, d = 0.5), and the distributions have disjoint supports (right plot, d = 1). The estimate of the distance is plotted along with error bars at plus and minus one standard deviation; our results are compared with those for the naive estimator (the distance between the empirical distributions). The unseen estimator can be seen to reliably distinguish between the d = 0, d = 1/2, and d = 1 cases even for samples as small as several hundred.

3.1 Estimating ℓ1 distance and the number of words in Hamlet

The other two properties that we consider do not have such widely accepted estimators as entropy, and thus our evaluation of the unseen estimator will be more qualitative. We include these two examples here because they are of a substantially different flavor from entropy estimation, and they highlight the flexibility of our approach. Figure 2 shows the results of estimating the total variation distance (ℓ1 distance).
Because total variation distance is a property of two distributions instead of one, fingerprints and histograms are two-dimensional objects in this setting (see Section 4.6 of [29]), and Algorithm 1 and the linear programs are extended accordingly, replacing single indices by pairs of indices, and Poisson coefficients by corresponding products of Poisson coefficients. Finally, in contrast to the synthetic tests above, we also evaluated our estimator on a real-data problem which may be seen as emblematic of the challenges in a wide gamut of natural language processing problems: given a (contiguous) fragment of Shakespeare's Hamlet, estimate the number of distinct words in the whole play. We use this example to showcase the flexibility of our linear programming approach: our estimator can be customized to particular domains in powerful and principled ways by adding or modifying the constraints of the linear program. To estimate the histogram of word frequencies in Hamlet, we note that the play is of length ≈ 25,000 words, and thus the minimum probability with which any word can occur is 1/25,000. Thus, in contrast to our previous approach of using Linear Program 2 to bound the support of the returned histogram, we instead simply modify the input vector x of Linear Program 1 to contain only probability values ≥ 1/25,000, and forgo running Linear Program 2. The results are plotted in Figure 3. The estimates converge towards the true value of 4,268 distinct words extremely rapidly and are slightly negatively biased, perhaps reflecting the fact that words appearing close together are correlated. In contrast to Hamlet's charge that "there are more things in heaven and earth . . . than are dreamt of in your philosophy," we can say that there are almost exactly as many things in Hamlet as can be dreamt of from 10% of Hamlet.
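The mesh manipulations described above, the coarse geometric mesh from the Figure 1 caption (x1 = 1/k^2, xi = 1.1·x_{i−1}) and the Hamlet-specific restriction to probabilities ≥ 1/25,000, amount to a few lines of code. The sketch below is our own illustration with a hypothetical function name:

```python
# Sketch: the geometric probability mesh used as LP input, with an optional
# lower cutoff as in the Hamlet experiment. Parameter values are from the text.
def geometric_mesh(k, ratio=1.1, lower=None):
    """Mesh x1 = 1/k^2, x_{i+1} = ratio * x_i, kept within (lower, 1]."""
    x, xs = 1.0 / k**2, []
    while x <= 1.0:
        if lower is None or x >= lower:
            xs.append(x)
        x *= ratio
    return xs

full = geometric_mesh(25000)                       # mesh from 1/k^2 up to 1
hamlet = geometric_mesh(25000, lower=1.0 / 25000)  # no word rarer than 1/25000
print(len(hamlet) < len(full))   # True: the cutoff discards the low end
print(hamlet[0] >= 1.0 / 25000)  # True
```

Restricting the mesh this way builds the support bound directly into Linear Program 1, which is why Linear Program 2 can be skipped in the Hamlet experiment.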
[Figure 3 appears here: estimates (Naive, CAE, Unseen) of the number of distinct words, plotted against passage lengths from 0 to 25,000 words.]

Figure 3: Estimates of the total number of distinct word forms in Shakespeare's Hamlet (excluding stage directions and proper nouns) as a function of the length of the passage from which the estimate is inferred. The true value, 4,268, is shown as the horizontal line.

References
[1] G. Valiant and P. Valiant. Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Symposium on Theory of Computing (STOC), 2011.
[2] G. Valiant and P. Valiant. The power of linear estimators. In IEEE Symposium on Foundations of Computer Science (FOCS), 2011.
[3] M. R. Nelson et al. An abundance of rare functional variants in 202 drug target genes sequenced in 14,002 people. Science, 337(6090):100–104, 2012.
[4] J. A. Tennessen et al. Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science, 337(6090):64–69, 2012.
[5] A. Keinan and A. G. Clark. Recent explosive human population growth has resulted in an excess of rare genetic variants. Science, 336(6082):740–743, 2012.
[6] F. Olken and D. Rotem. Random sampling from database files: a survey. In Proceedings of the Fifth International Workshop on Statistical and Scientific Data Management, 1990.
[7] P. J. Haas, J. F. Naughton, S. Seshadri, and A. N. Swami. Selectivity and cost estimation for joins based on random sampling. Journal of Computer and System Sciences, 52(3):550–569, 1996.
[8] R. A. Fisher, A. Corbet, and C. B. Williams. The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of the British Ecological Society, 12(1):42–58, 1943.
[9] I. J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40(16):237–264, 1953.
[10] D. A. McAllester and R. E. Schapire.
On the convergence rate of Good–Turing estimators. In Conference on Learning Theory (COLT), 2000.
[11] A. Orlitsky, N. P. Santhanam, and J. Zhang. Always Good Turing: Asymptotically optimal probability estimation. Science, 302(5644):427–431, October 2003.
[12] A. Orlitsky, N. Santhanam, K. Viswanathan, and J. Zhang. On modeling profiles instead of values. In Uncertainty in Artificial Intelligence, 2004.
[13] J. Acharya, A. Orlitsky, and S. Pan. The maximum likelihood probability of unique-singleton, ternary, and length-7 patterns. In IEEE Symp. on Information Theory, 2009.
[14] J. Acharya, H. Das, A. Orlitsky, and S. Pan. Competitive closeness testing. In COLT, 2011.
[15] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15(6):1191–1253, 2003.
[16] J. Bunge and M. Fitzpatrick. Estimating the number of species: A review. Journal of the American Statistical Association, 88(421):364–373, 1993.
[17] J. Bunge. Bibliography of references on the problem of estimating support size, available at http://www.stat.cornell.edu/~bunge/bibliography.html.
[18] Z. Bar-Yossef, R. Kumar, and D. Sivakumar. Sampling algorithms: lower bounds and applications. In STOC, 2001.
[19] T. Batu. Testing Properties of Distributions. Ph.D. thesis, Cornell, 2001.
[20] M. Charikar, S. Chaudhuri, R. Motwani, and V. R. Narasayya. Towards estimation error guarantees for distinct values. In SODA, 2000.
[21] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White. Testing that distributions are close. In IEEE Symposium on Foundations of Computer Science (FOCS), 2000.
[22] V. Q. Vu, B. Yu, and R. E. Kass. Coverage-adjusted entropy estimation. Statistics in Medicine, 26(21):4039–4060, 2007.
[23] G. Miller. Note on the bias of information estimates. In Information Theory in Psychology II-B, ed. H. Quastler (Glencoe, IL: Free Press), pp. 95–100, 1955.
[24] S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures.
Network: Computation in Neural Systems, 7:87–107, 1996.
[25] S. Zahl. Jackknifing an index of diversity. Ecology, 58:907–913, 1977.
[26] B. Efron and C. Stein. The jackknife estimate of variance. Annals of Statistics, 9:586–596, 1981.
[27] A. Chao and T. J. Shen. Nonparametric estimation of Shannon's index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10:429–443, 2003.
[28] D. G. Horvitz and D. J. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47(260):663–685, 1952.
[29] P. Valiant. Testing Symmetric Properties of Distributions. SIAM J. Comput., 40(6):1927–1968, 2011.
Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture

Trevor Campbell, MIT, Cambridge, MA 02139, tdjc@mit.edu
Miao Liu, Duke University, Durham, NC 27708, miao.liu@duke.edu
Brian Kulis, Ohio State University, Columbus, OH 43210, kulis@cse.ohio-state.edu
Jonathan P. How, MIT, Cambridge, MA 02139, jhow@mit.edu
Lawrence Carin, Duke University, Durham, NC 27708, lcarin@duke.edu

Abstract
This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters. The algorithm is derived via a low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM, and provides a hard clustering with convergence guarantees similar to those of the k-means algorithm. Empirical results from a synthetic test with moving Gaussian clusters and a test with real ADS-B aircraft trajectory data demonstrate that the algorithm requires orders of magnitude less computational time than contemporary probabilistic and hard clustering algorithms, while providing higher accuracy on the examined datasets.

1 Introduction
The Dirichlet process mixture model (DPMM) is a powerful tool for clustering data that enables the inference of an unbounded number of mixture components, and has been widely studied in the machine learning and statistics communities [1–4]. Despite its flexibility, it assumes the observations are exchangeable, and therefore that the data points have no inherent ordering that influences their labeling. This assumption is invalid for modeling temporally/spatially evolving phenomena, in which the order of the data points plays a principal role in creating meaningful clusters. The dependent Dirichlet process (DDP), originally formulated by MacEachern [5], provides a prior over such evolving mixture models, and is a promising tool for incrementally monitoring the dynamic evolution of the cluster structure within a dataset.
More recently, a construction of the DDP built upon completely random measures [6] led to the development of the dependent Dirichlet process mixture model (DDPMM) and a corresponding approximate posterior inference Gibbs sampling algorithm. This model generalizes the DPMM by including birth, death and transition processes for the clusters in the model. The DDPMM is a Bayesian nonparametric (BNP) model, part of an ever-growing class of probabilistic models for which inference captures uncertainty in both the number of parameters and their values. While these models are powerful in their capability to capture complex structures in data without requiring explicit model selection, they suffer some practical shortcomings. Inference techniques for BNPs typically fall into two classes: sampling methods (e.g., Gibbs sampling [2] or particle learning [4]) and optimization methods (e.g., variational inference [3] or stochastic variational inference [7]). Current methods based on sampling do not scale well with the size of the dataset [8]. Most optimization methods require analytic derivatives and the selection of an upper bound on the number of clusters a priori, where the computational complexity increases with that upper bound [3, 7]. State-of-the-art techniques in both classes are not ideal for use in contexts where performing inference quickly and reliably on large volumes of streaming data is crucial for timely decision-making, such as autonomous robotic systems [9–11]. On the other hand, many classical clustering methods [12–14] scale well with the size of the dataset and are easy to implement, and advances have recently been made to capture the flexibility of Bayesian nonparametrics in such approaches [15]. However, as yet there is no classical algorithm that captures dynamic cluster structure with the same representational power as the DDP mixture model.
This paper discusses the Dynamic Means algorithm, a novel hard clustering algorithm for spatio-temporal data derived from the low-variance asymptotic limit of the Gibbs sampling algorithm for the dependent Dirichlet process Gaussian mixture model. This algorithm captures the scalability and ease of implementation of classical clustering methods, along with the representational power of the DDP prior, and is guaranteed to converge to a local minimum of a k-means-like cost function. The algorithm is significantly more computationally tractable than Gibbs sampling, particle learning, and variational inference for the DDP mixture model in practice, while providing equivalent or better clustering accuracy on the examples presented. The performance and characteristics of the algorithm are demonstrated in a test on synthetic data, with a comparison to those of Gibbs sampling, particle learning and variational inference. Finally, the applicability of the algorithm to real data is presented through an example of clustering a spatio-temporal dataset of aircraft trajectories recorded across the United States.

2 Background
The Dirichlet process (DP) is a prior over mixture models, where the number of mixture components is not known a priori [16]. In general, we denote D ∼ DP(µ), where αµ ∈ R+ and µ : Ω → R+ with ∫_Ω dµ = αµ are the concentration parameter and base measure of the DP, respectively. If D ∼ DP, then D = {(θk, πk)}_{k=0}^{∞} ⊂ Ω × R+, where θk ∈ Ω and πk ∈ R+ [17]. The reader is directed to [1] for a more thorough coverage of Dirichlet processes. The dependent Dirichlet process (DDP) [5], an extension to the DP, is a prior over evolving mixture models. Given a Poisson process construction [6], the DDP essentially forms a Markov chain of DPs (D1, D2, . . .), where the transitions are governed by a set of three stochastic operations: points θk may be added, removed, and may move during each step of the Markov chain. Thus, they become parameterized by time, denoted by θkt.
In slightly more detail, if Dt is the DP at time step t, then the following procedure defines the generative model of Dt conditioned on Dt−1 ∼ DP(µt−1):
1. Subsampling: Define a function q : Ω → [0, 1]. Then for each point (θ, π) ∈ Dt−1, sample a Bernoulli distribution bθ ∼ Be(q(θ)). Set D′t to be the collection of points (θ, π) such that bθ = 1, and renormalize the weights. Then D′t ∼ DP(qµt−1), where (qµ)(A) = ∫_A q(θ) µ(dθ).
2. Transition: Define a distribution T : Ω × Ω → R+. For each point (θ, π) ∈ D′t, sample θ′ ∼ T(θ′|θ), and set D′′t to be the collection of points (θ′, π). Then D′′t ∼ DP(Tqµt−1), where (Tµ)(A) = ∫_A ∫_Ω T(θ′|θ) µ(dθ) dθ′.
3. Superposition: Sample F ∼ DP(ν), and sample (cD, cF) ∼ Dir(Tqµt−1(Ω), ν(Ω)). Then set Dt to be the union of (θ, cDπ) for all (θ, π) ∈ D′′t and (θ, cFπ) for all (θ, π) ∈ F.
Thus, Dt is a random convex combination of D′′t and F, where Dt ∼ DP(Tqµt−1 + ν).
If the DDP is used as a prior over a mixture model, these three operations allow new mixture components to arise over time, and old mixture components to exhibit dynamics and perhaps disappear over time. As this is covered thoroughly in [6], the mathematics of the underlying Poisson point process construction are not discussed in more depth in this work. However, an important result of using such a construction is the development of an explicit posterior for Dt given observations of the points θkt at time step t. For each point k that was observed in Dτ for some τ : 1 ≤ τ ≤ t, define: nkt ∈ N as the number of observations of point k in time step t; ckt ∈ N as the number of past observations of point k prior to time step t, i.e. ckt = Σ_{τ=1}^{t−1} nkτ; qkt ∈ (0, 1) as the subsampling weight on point k at time step t; and ∆tk as the number of time steps that have elapsed since point k was last observed. Further, let νt be the measure for unobserved points at time step t. Then,

Dt | Dt−1 ∼ DP( νt + Σ_{k : nkt = 0} qkt·ckt·T(· | θk(t−∆tk)) + Σ_{k : nkt > 0} (ckt + nkt)·δθkt )
(1)

where ckt = 0 for any point k that was first observed during time step t. This posterior leads directly to the development of a Gibbs sampling algorithm for the DDP, whose low-variance asymptotics are discussed further below.

3 Asymptotic Analysis of the DDP Mixture
The dependent Dirichlet process Gaussian mixture model (DDP-GMM) serves as the foundation upon which the present work is built. The generative model of a DDP-GMM at time step t is

{θkt, πkt}_{k=1}^{∞} ∼ DP(µt)
{zit}_{i=1}^{Nt} ∼ Categorical({πkt}_{k=1}^{∞})
{yit}_{i=1}^{Nt} ∼ N(θ_{zit,t}, σI)    (2)

where θkt is the mean of cluster k, πkt is the categorical weight for class k, yit is a d-dimensional observation vector, zit is a cluster label for observation i, and µt is the base measure from equation (1). Throughout the rest of this paper, the subscript kt refers to quantities related to cluster k at time step t, and the subscript it refers to quantities related to observation i at time step t.
The Gibbs sampling algorithm for the DDP-GMM iterates between sampling labels zit for datapoints yit given the set of parameters {θkt}, and sampling parameters θkt given each group of data {yit : zit = k}. Assuming the transition model T is Gaussian and the subsampling function q is constant, the functions and distributions used in the Gibbs sampling algorithm are: the prior over cluster parameters, θ ∼ N(φ, ρI); the likelihood of an observation given its cluster parameter, yit ∼ N(θkt, σI); the distribution over the transitioned cluster parameter given its last known location after ∆tk time steps, θkt ∼ N(θk(t−∆tk), ξ∆tk·I); and the subsampling function q(θ) = q ∈ (0, 1). Given these functions and distributions, the low-variance asymptotic limits (i.e. σ → 0) of these two steps are discussed in the following sections.

3.1 Setting Labels Given Parameters
In the label sampling step, a datapoint yit can either create a new cluster, join a current cluster, or revive an old, transitioned cluster.
Using the distributions defined previously, the label assignment probabilities are

$$
p(z_{it} = k \mid \ldots) \propto
\begin{cases}
\alpha_t\,(2\pi(\sigma+\rho))^{-d/2} \exp\!\left(-\frac{\|y_{it}-\phi\|^2}{2(\sigma+\rho)}\right) & k = K+1 \\[4pt]
(c_{kt}+n_{kt})\,(2\pi\sigma)^{-d/2} \exp\!\left(-\frac{\|y_{it}-\theta_{kt}\|^2}{2\sigma}\right) & n_{kt} > 0 \\[4pt]
q_{kt} c_{kt}\,(2\pi(\sigma+\xi\Delta t_k))^{-d/2} \exp\!\left(-\frac{\|y_{it}-\theta_{k(t-\Delta t_k)}\|^2}{2(\sigma+\xi\Delta t_k)}\right) & n_{kt} = 0
\end{cases}
\qquad (3)
$$

where $q_{kt} = q^{\Delta t_k}$ due to the fact that $q(\theta)$ is constant over $\Omega$, and $\alpha_t = \alpha_\nu \frac{1-q^t}{1-q}$, where $\alpha_\nu$ is the concentration parameter for the innovation process $F_t$. The low-variance asymptotic limit of this label assignment step yields meaningful assignments as long as $\alpha_\nu$, $\xi$, and $q$ vary appropriately with $\sigma$; thus, setting $\alpha_\nu$, $\xi$, and $q$ as follows (where $\lambda$, $\tau$, and $Q$ are positive constants):

$$
\alpha_\nu = (1+\rho/\sigma)^{d/2} \exp\!\left(-\frac{\lambda}{2\sigma}\right), \qquad \xi = \tau\sigma, \qquad q = \exp\!\left(-\frac{Q}{2\sigma}\right) \qquad (4)
$$

yields the following assignments in the limit as $\sigma \to 0$:

$$
z_{it} = \arg\min_k J_k, \qquad
J_k =
\begin{cases}
\|y_{it}-\theta_{kt}\|^2 & \theta_k \text{ instantiated} \\[2pt]
Q\Delta t_k + \dfrac{\|y_{it}-\theta_{k(t-\Delta t_k)}\|^2}{\tau\Delta t_k + 1} & \theta_k \text{ old, uninstantiated} \\[2pt]
\lambda & \theta_k \text{ new}
\end{cases}
\qquad (5)
$$

In this assignment step, $Q\Delta t_k$ acts as a cost penalty for reviving old clusters that increases with the time since the cluster was last seen, $\tau\Delta t_k$ acts as a cost reduction to account for the possible motion of clusters since they were last instantiated, and $\lambda$ acts as a cost penalty for introducing a new cluster.

3.2 Setting Parameters Given Labels

In the parameter sampling step, the parameters are sampled using the distribution

$$
p(\theta_{kt} \mid \{y_{it} : z_{it} = k\}) \propto p(\{y_{it} : z_{it} = k\} \mid \theta_{kt})\, p(\theta_{kt}) \qquad (6)
$$

There are two cases to consider when setting a parameter $\theta_{kt}$. Either $\Delta t_k = 0$ and the cluster is new in the current time step, or $\Delta t_k > 0$ and the cluster was previously created, disappeared for some amount of time, and then was revived in the current time step.

New Cluster: Suppose cluster $k$ is being newly created. In this case, $\theta_{kt} \sim \mathcal{N}(\phi, \rho)$.
Using the fact that a normal prior is conjugate to a normal likelihood, the closed-form posterior for $\theta_{kt}$ is

$$
\theta_{kt} \mid \{y_{it} : z_{it}=k\} \sim \mathcal{N}(\theta_{\mathrm{post}}, \sigma_{\mathrm{post}}), \qquad
\theta_{\mathrm{post}} = \sigma_{\mathrm{post}}\left(\frac{\phi}{\rho} + \frac{\sum_{i=1}^{n_{kt}} y_{it}}{\sigma}\right), \qquad
\sigma_{\mathrm{post}} = \left(\frac{1}{\rho} + \frac{n_{kt}}{\sigma}\right)^{-1} \qquad (7)
$$

Then letting $\sigma \to 0$,

$$
\theta_{kt} = \frac{\sum_{i=1}^{n_{kt}} y_{it}}{n_{kt}} \overset{\mathrm{def}}{=} m_{kt} \qquad (8)
$$

where $m_{kt}$ is the mean of the observations in the current timestep.

Revived Cluster: Suppose there are $\Delta t_k$ time steps where cluster $k$ was not observed, but there are now $n_{kt}$ data points with mean $m_{kt}$ assigned to it in this time step. In this case,

$$
p(\theta_{kt}) = \int_\theta T(\theta_{kt} \mid \theta)\, p(\theta)\, d\theta, \qquad \theta \sim \mathcal{N}(\theta', \sigma'). \qquad (9)
$$

Again using the conjugacy of normal likelihoods and priors,

$$
\theta_{kt} \mid \{y_{it} : z_{it}=k\} \sim \mathcal{N}(\theta_{\mathrm{post}}, \sigma_{\mathrm{post}}), \qquad
\theta_{\mathrm{post}} = \sigma_{\mathrm{post}}\left(\frac{\theta'}{\xi\Delta t_k + \sigma'} + \frac{\sum_{i=1}^{n_{kt}} y_{it}}{\sigma}\right), \qquad
\sigma_{\mathrm{post}} = \left(\frac{1}{\xi\Delta t_k + \sigma'} + \frac{n_{kt}}{\sigma}\right)^{-1} \qquad (10)
$$

Similarly to the label assignment step, let $\xi = \tau\sigma$. Then as long as $\sigma' = \sigma/w$ with $w > 0$ (which holds if equation (10) is used to recursively keep track of the parameter posterior), taking the asymptotic limit as $\sigma \to 0$ yields:

$$
\theta_{kt} = \frac{\theta'(w^{-1} + \Delta t_k \tau)^{-1} + n_{kt} m_{kt}}{(w^{-1} + \Delta t_k \tau)^{-1} + n_{kt}} \qquad (11)
$$

that is to say, the revived $\theta_{kt}$ is a weighted average of estimates using current-timestep and previous-timestep data. $\tau$ controls how much the current data is favored: as $\tau$ increases, the weight on the current data increases, which is explained by the fact that our uncertainty in where the old $\theta'$ transitioned to increases with $\tau$. It is also noted that if $\tau = 0$, this reduces to a simple weighted average using the amount of data collected as weights.

Combined Update: Combining the updates for new cluster parameters and old transitioned cluster parameters yields a recursive update scheme:

$$
\theta_{k0} = m_{k0}, \quad w_{k0} = n_{k0}; \qquad
\gamma_{kt} = \left((w_{k(t-\Delta t_k)})^{-1} + \Delta t_k \tau\right)^{-1}, \quad
\theta_{kt} = \frac{\theta_{k(t-\Delta t_k)}\, \gamma_{kt} + n_{kt} m_{kt}}{\gamma_{kt} + n_{kt}}, \quad
w_{kt} = \gamma_{kt} + n_{kt} \qquad (12)
$$

where time step 0 here corresponds to when the cluster is first created.
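The combined update (12) is compact enough to state directly in code. The sketch below is an illustrative Python rendering (the function name and argument order are our own), using `None` weights to mark a cluster that is being created in the current step:

```python
def update_param(theta_old, w_old, dt, m, n, tau):
    """Combined parameter update, eq. (12) sketch.
    theta_old, w_old: last known mean and weight (None, None for a new cluster)
    dt: time steps since the cluster was last observed (0 if active last step)
    m, n: mean and count of observations assigned in the current step
    tau: controls how quickly trust in old estimates decays."""
    if w_old is None:
        # brand-new cluster, eq. (8): the posterior mean is the batch mean
        return m, float(n)
    gamma = 1.0 / (1.0 / w_old + dt * tau)          # gamma_kt in eq. (12)
    theta = (gamma * theta_old + n * m) / (gamma + n)
    return theta, gamma + n                          # new (theta_kt, w_kt)
```

For example, `update_param(0.0, 1.0, 1, 2.0, 1, 1.0)` blends the old estimate 0.0 (downweighted by one unobserved step) with the new batch mean 2.0.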
An interesting interpretation of this update is that it behaves like a standard Kalman filter, in which $w_{kt}^{-1}$ serves as the current estimate variance, $\tau$ serves as the process noise variance, and $n_{kt}$ serves as the inverse of the measurement variance.

Algorithm 1 Dynamic Means
  Input: $\{Y_t\}_{t=1}^{t_f}$, $Q$, $\lambda$, $\tau$
  $C_1 \leftarrow \emptyset$
  for $t = 1 \to t_f$ do
    $(K_t, Z_t, L_t) \leftarrow \mathrm{CLUSTER}(Y_t, C_t, Q, \lambda, \tau)$
    $C_{t+1} \leftarrow \mathrm{UPDATEC}(Z_t, K_t, C_t)$
  end for
  return $\{K_t, Z_t, L_t\}_{t=1}^{t_f}$

Algorithm 2 CLUSTER
  Input: $Y_t$, $C_t$, $Q$, $\lambda$, $\tau$
  $K_t \leftarrow \emptyset$, $Z_t \leftarrow \emptyset$, $L_0 \leftarrow \infty$
  for $n = 1 \to \infty$ do
    $(Z_t, K_t) \leftarrow \mathrm{ASSIGNLABELS}(Y_t, Z_t, K_t, C_t)$
    $(K_t, L_n) \leftarrow \mathrm{ASSIGNPARAMS}(Y_t, Z_t, C_t)$
    if $L_n = L_{n-1}$ then
      return $(K_t, Z_t, L_n)$
    end if
  end for

4 The Dynamic Means Algorithm

In this section, some further notation is required for brevity:

$$
Y_t = \{y_{it}\}_{i=1}^{N_t}, \quad Z_t = \{z_{it}\}_{i=1}^{N_t}, \quad
K_t = \{(\theta_{kt}, w_{kt}) : n_{kt} > 0\}, \quad
C_t = \{(\Delta t_k,\ \theta_{k(t-\Delta t_k)},\ w_{k(t-\Delta t_k)})\} \qquad (13)
$$

where $Y_t$ and $Z_t$ are the sets of observations and labels at time step $t$, $K_t$ is the set of currently active clusters (some are new with $\Delta t_k = 0$, and some are revived with $\Delta t_k > 0$), and $C_t$ is the set of old cluster information.

4.1 Algorithm Description

As shown in the previous section, the low-variance asymptotic limit of the DDP Gibbs sampling algorithm is a deterministic observation label update (5) followed by a deterministic, weighted least-squares parameter update (12). Inspired by the original k-means algorithm, applying these two updates iteratively yields an algorithm which clusters a set of observations at a single time step given cluster means and weights from past time steps (Algorithm 2). Applying Algorithm 2 to a sequence of batches of data yields a clustering procedure that is able to track a set of dynamically evolving clusters (Algorithm 1), and allows new clusters to emerge and old clusters to be forgotten. While this is the primary application of Algorithm 1, the sequence of batches need not be a temporal sequence.
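To make the per-batch CLUSTER loop concrete, here is a self-contained Python sketch combining the label rule (5), the parameter update (12), and a cost in the spirit of (15). Names and the data layout are our own, and the bookkeeping for revived clusters is simplified relative to the full algorithm:

```python
import numpy as np

def cluster_batch(Y, old, lam, Q, tau, max_iters=100):
    """One time step of Dynamic Means (sketch of Algorithm 2).
    Y: (N, d) array of observations. old: dict k -> (theta, w, dt), the
    carried-over cluster info (dt = steps since last observed).
    Returns (labels, active) where active maps k -> (theta, w)."""
    labels = np.full(len(Y), -1)
    active = {}
    next_id = max(old, default=-1) + 1
    prev_cost = np.inf
    for _ in range(max_iters):
        # --- label pass, eq. (5) ---
        for i, y in enumerate(Y):
            best_k, best_J = None, lam          # cost of opening a new cluster
            for k, (th, w) in list(active.items()):
                J = np.sum((y - th) ** 2)
                if J < best_J:
                    best_k, best_J = k, J
            for k, (th, w, dt) in old.items():
                if k in active:
                    continue
                J = Q * dt + np.sum((y - th) ** 2) / (tau * dt + 1)
                if J < best_J:
                    best_k, best_J = k, J
            if best_k is None:                  # open a new cluster
                best_k, next_id = next_id, next_id + 1
                active[best_k] = (y.astype(float), 1.0)
            labels[i] = best_k
        # --- parameter pass, eq. (12), with cost accumulation ---
        cost, new_active = 0.0, {}
        for k in set(labels.tolist()):
            pts = Y[labels == k]
            m, n = pts.mean(axis=0), len(pts)
            if k in old:
                th0, w0, dt = old[k]
                gamma = 1.0 / (1.0 / w0 + dt * tau)
                th = (gamma * th0 + n * m) / (gamma + n)
                cost += Q * dt + gamma * np.sum((th - th0) ** 2)
            else:
                th, gamma = m, 0.0
                cost += lam
            cost += np.sum((pts - th) ** 2)
            new_active[k] = (th, gamma + n)
        active = new_active
        if cost >= prev_cost:                   # cost stopped decreasing
            break
        prev_cost = cost
    return labels, active
```

On two well-separated groups of points with no carried-over clusters, the sketch opens exactly two clusters and converges in two passes.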
For example, Algorithm 1 may be used as an any-time clustering algorithm for large datasets, where the sequence of batches is generated by selecting random subsets of the full dataset. The ASSIGNPARAMS function is exactly the update from equation (12) applied to each $k \in K_t$. Similarly, the ASSIGNLABELS function applies the update from equation (5) to each observation; however, in the case that a new cluster is created or an old one is revived by an observation, ASSIGNLABELS also creates a parameter for that cluster based on the parameter update equation (12) with that single observation. Note that the performance of the algorithm depends on the order in which ASSIGNLABELS assigns labels; multiple random restarts of the algorithm with different assignment orders may be used to mitigate this dependence. The UPDATEC function is run after clustering observations from each time step, and constructs $C_{t+1}$ by setting $\Delta t_k = 1$ for any new or revived cluster, and by incrementing $\Delta t_k$ for any old cluster that was not revived:

$$
C_{t+1} = \{(\Delta t_k + 1,\ \theta_{k(t-\Delta t_k)},\ w_{k(t-\Delta t_k)}) : k \in C_t,\ k \notin K_t\} \cup \{(1, \theta_{kt}, w_{kt}) : k \in K_t\} \qquad (14)
$$

An important question is whether this algorithm is guaranteed to converge while clustering data in each time step. Indeed it is: Theorem 1 shows that a particular cost function $L_t$ monotonically decreases under the label and parameter updates (5) and (12) at each time step. Since $L_t \ge 0$ and it is monotonically decreased by Algorithm 2, the algorithm converges. Note that Dynamic Means is only guaranteed to converge to a local optimum, similarly to the k-means [12] and DP-Means [15] algorithms.

Theorem 1.
Each iteration in Algorithm 2 monotonically decreases the cost function $L_t$, where

$$
L_t = \sum_{k \in K_t} \Bigg[ \underbrace{\lambda\,[\Delta t_k = 0]}_{\text{new cost}} + \underbrace{Q\Delta t_k}_{\text{revival cost}} + \underbrace{\gamma_{kt}\|\theta_{kt} - \theta_{k(t-\Delta t_k)}\|_2^2}_{\text{weighted-prior sum-squares cost}} + \sum_{\substack{y_{it} \in Y_t \\ z_{it} = k}} \|y_{it} - \theta_{kt}\|_2^2 \Bigg] \qquad (15)
$$

The cost function is comprised of a number of components for each currently active cluster $k \in K_t$: a penalty for new clusters based on $\lambda$, a penalty for old clusters based on $Q$ and $\Delta t_k$, and finally a prior-weighted sum of squared distance cost for all the observations in cluster $k$. It is noted that for new clusters, $\theta_{kt} = \theta_{k(t-\Delta t_k)}$ since $\Delta t_k = 0$, so the least-squares cost is unweighted. The ASSIGNPARAMS function calculates this cost function in each iteration of Algorithm 2, and the algorithm terminates once the cost function does not decrease during an iteration.

4.2 Reparameterizing the Algorithm

In order to use the Dynamic Means algorithm, there are three free parameters to select: $\lambda$, $Q$, and $\tau$. While $\lambda$ represents how far an observation can be from a cluster before it is placed in a new cluster, and thus can be tuned intuitively, $Q$ and $\tau$ are not so straightforward. The parameter $Q$ represents a conceptual added distance from any data point to a cluster for every time step that the cluster is not observed. The parameter $\tau$ represents a conceptual reduction of distance from any data point to a cluster for every time step that the cluster is not observed. How these two quantities affect the algorithm, and how they interact with the setting of $\lambda$, is hard to judge. Instead of picking $Q$ and $\tau$ directly, the algorithm may be reparameterized by picking $N_Q, k_\tau \in \mathbb{R}_+$, $N_Q > 1$, $k_\tau \ge 1$, and, given a choice of $\lambda$, setting

$$
Q = \frac{\lambda}{N_Q}, \qquad \tau = \frac{N_Q(k_\tau - 1) + 1}{N_Q - 1}.
$$
(16) If $Q$ and $\tau$ are set in this manner, $N_Q$ represents the number (possibly fractional) of time steps a cluster can be unobserved before the label update (5) will never revive that cluster, and $k_\tau \lambda$ represents the maximum squared distance away from a cluster center such that after a single time step, the label update (5) will revive that cluster. As $N_Q$ and $k_\tau$ are specified in terms of concrete algorithmic behavior, they are intuitively easier to set than $Q$ and $\tau$.

5 Related Work

Prior k-means clustering algorithms that determine the number of clusters present in the data have primarily involved a method for iteratively modifying $k$ using various statistical criteria [13, 14, 18]. In contrast, this work derives this capability from a Bayesian nonparametric model, similarly to the DP-Means algorithm [15]. In this sense, the relationship between the Dynamic Means algorithm and the dependent Dirichlet process [6] is exactly that between the DP-Means algorithm and the Dirichlet process [16], where the Dynamic Means algorithm may be seen as an extension to DP-Means that handles sequential data with time-varying cluster parameters. MONIC [19] and MC3 [20] have the capability to monitor time-varying clusters; however, these methods require datapoints to be identifiable across timesteps, and determine cluster similarity across timesteps via the commonalities between label assignments. The Dynamic Means algorithm does not require such information, and tracks clusters essentially based on similarity of the parameters across timesteps. Evolutionary clustering [21, 22], similar to Dynamic Means, minimizes an objective consisting of a cost for clustering the present data set and a cost related to the comparison between the current clustering and past clusterings.
The present work can be seen as a theoretically-founded extension of this class of algorithm that provides methods for automatic and adaptive prior weight selection, for forming correspondences between old and current clusters, and for deciding when to introduce new clusters. Finally, some sequential Monte Carlo methods (e.g. particle learning [23] or multi-target tracking [24, 25]) can be adapted for use in the present context, but suffer the drawbacks typical of particle filtering methods.

6 Applications

6.1 Synthetic Gaussian Motion Data

In this experiment, moving Gaussian clusters on $[0, 1] \times [0, 1]$ were generated synthetically over a period of 100 time steps. In each step, there was some number of clusters, each having 15 data points. The data points were sampled from a symmetric Gaussian distribution with a standard deviation of 0.05. Between time steps, the cluster centers moved randomly, with displacements sampled from the same distribution. At each time step, each cluster had a 0.05 probability of being destroyed.

Figure 1: (1a-1c): Accuracy contours and CPU time histogram for the Dynamic Means algorithm. (1d-1e): Comparison with Gibbs sampling, variational inference, and particle learning. Shaded region indicates the 1σ interval; in (1e), only the upper half is shown.
(1f): Comparison of accuracy when enforcing (Gibbs, DynMeans) and not enforcing (Gibbs NC, DynMeans NC) correct cluster tracking.

This data was clustered with Dynamic Means (with 3 random assignment-ordering restarts), DDP-GMM Gibbs sampling [6], variational inference [3], and particle learning [4] on a computer with an Intel i7 processor and 16GB of memory. First, the number of clusters was fixed to 5, and the parameter space of each algorithm was searched for the best possible cluster label accuracy (taking into account correct cluster tracking across time steps). The results of this parameter sweep for the Dynamic Means algorithm, with 50 trials at each parameter setting, are shown in Figures 1a-1c. Figures 1a and 1b show how the average clustering accuracy varies with the parameters after fixing either $k_\tau$ or $N_Q$ to their values at the maximum-accuracy parameter setting over the full space. The Dynamic Means algorithm had a similar robustness with respect to variations in its parameters as the comparison algorithms. The histogram in Figure 1c demonstrates that the clustering speed is robust to the setting of parameters. The speed of Dynamic Means, coupled with the smoothness of its performance with respect to its parameters, makes it well suited for automatic tuning [26]. Using the best parameter setting for each algorithm, the data as described above were clustered in 50 trials with a varying number of clusters present in the data. For the Dynamic Means algorithm, parameter values $\lambda = 0.04$, $N_Q = 6.8$, and $k_\tau = 1.01$ were used, and the algorithm was again given 3 attempts with random labeling assignment orders, where the lowest-cost solution of the 3 was picked to proceed to the next time step. For the other algorithms, the parameter values $\alpha = 1$ and $q = 0.05$ were used, with a Gaussian transition distribution variance of 0.05.
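The mapping (16) from the intuitive knobs back to $(Q, \tau)$ at the values used in this sweep can be computed directly; the function name below is our own:

```python
def reparameterize(lam, N_Q, k_tau):
    """Map (lambda, N_Q, k_tau) to (Q, tau) via eq. (16)."""
    assert N_Q > 1 and k_tau >= 1
    Q = lam / N_Q
    tau = (N_Q * (k_tau - 1) + 1) / (N_Q - 1)
    return Q, tau

# values reported for the synthetic experiment
Q, tau = reparameterize(0.04, 6.8, 1.01)
```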
The number of samples for the Gibbs sampling algorithm was 5000 with one recorded for every 5 samples, the number of particles for the particle learning algorithm was 100, and the variational inference algorithm was run to a tolerance of $10^{-20}$ with the maximum number of iterations set to 5000. In Figures 1d and 1e, the labeling accuracy and clustering time (respectively) for the algorithms are shown. The sampling algorithms were handicapped to generate Figure 1d: the best posterior sample in terms of labeling accuracy was selected at each time step, which required knowledge of the true labeling. Further, the accuracy computation included enforcing consistency across timesteps, to allow tracking of individual cluster trajectories. If this is not enforced (i.e. accuracy considers each time step independently), the other algorithms provide accuracies more comparable to those of the Dynamic Means algorithm. This effect is demonstrated in Figure 1f, which shows the time/accuracy tradeoff for Gibbs sampling (varying the number of samples) and Dynamic Means (varying the number of restarts). These examples illustrate that Dynamic Means outperforms standard inference algorithms in both label accuracy and computation time for cluster tracking problems.

Figure 2: Results of the GP aircraft trajectory clustering. Left: a map (labeled with major US city airports) showing the overall aircraft flows for 12 trajectories, with colors and 1σ confidence ellipses corresponding to takeoff region (multiple clusters per takeoff region), colored dots indicating mean takeoff position for each cluster, and lines indicating the mean trajectory for each cluster. Right: a track of plane counts for the 12 clusters during the week, with color intensity proportional to the number of takeoffs at each time.
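The synthetic setup described above can be sketched in a few lines. Note that the birth process (one new uniform center replacing each destroyed cluster) is an assumption of this sketch; the text does not specify it:

```python
import numpy as np

def make_moving_clusters(T=100, n_init=5, pts=15, sd=0.05, p_death=0.05, seed=0):
    """Moving Gaussian clusters on the unit square: centers take N(0, sd^2)
    random-walk steps between time steps and die with probability p_death.
    Returns a list of T batches; each batch is a list of (pts, 2) arrays."""
    rng = np.random.default_rng(seed)
    centers = rng.random((n_init, 2))
    batches = []
    for _ in range(T):
        # each live cluster emits `pts` points around its center
        batches.append([c + rng.normal(0.0, sd, (pts, 2)) for c in centers])
        centers = centers + rng.normal(0.0, sd, centers.shape)  # random walk
        alive = rng.random(len(centers)) >= p_death             # cluster deaths
        births = rng.random((int((~alive).sum()), 2))           # replacement births
        centers = np.vstack([centers[alive], births]) if len(births) else centers[alive]
    return batches
```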
6.2 Aircraft Trajectory Clustering

In this experiment, the Dynamic Means algorithm was used to find the typical spatial and temporal patterns in the motions of commercial aircraft. Automatic dependent surveillance-broadcast (ADS-B) data, including plane identification, timestamp, latitude, longitude, heading, and speed, was collected from all transmitting planes across the United States during the week from 2013-3-22 1:30:00 to 2013-3-28 12:00:00 UTC. Individual ADS-B messages were then connected together based on their plane identification and timestamp to form trajectories, and erroneous trajectories were filtered based on reasonable spatial/temporal bounds, yielding 17,895 unique trajectories. Then, for each trajectory, a Gaussian process was trained using the latitude and longitude of each ADS-B point along the trajectory as the inputs and the North and East components of plane velocity at those points as the outputs. Next, the mean latitudinal and longitudinal velocities from the Gaussian process were queried for each point on a regular lattice across the USA (10 latitudes and 20 longitudes), and used to create a 400-dimensional feature vector for each trajectory. Of the resulting 17,895 feature vectors, 600 were hand-labeled (each label including a confidence weight in $[0, 1]$). The feature vectors were clustered using the DP-Means algorithm on the entire dataset in a single batch, and using Dynamic Means / DDP-GMM Gibbs sampling (with 50 samples) with half-hour takeoff window batches.

Table 1: Mean computational time & accuracy on hand-labeled aircraft trajectory data

  Alg.    % Acc.   Time (s)
  DynM    55.9     2.7 × 10^2
  DPM     55.6     3.1 × 10^3
  Gibbs   36.9     1.4 × 10^4

The results of this exercise are provided in Figure 2 and Table 1. Figure 2 shows the spatial and temporal properties of the 12 most popular clusters discovered by Dynamic Means, demonstrating that the algorithm successfully identified major flows of commercial aircraft across the US.
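The featurization step can be sketched with a minimal GP-mean computation. The RBF kernel and the fixed hyperparameters `ls` and `noise` are assumptions of this sketch (the text does not state the kernel used); only the posterior mean on the lattice is needed for the feature vector:

```python
import numpy as np

def trajectory_feature(points, vel, grid, ls=1.0, noise=1e-2):
    """Fit a GP mean to (lat, lon) -> (vN, vE) observations along one
    trajectory and query it on a fixed lattice, yielding a 2*G feature."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)
    K = rbf(points, points) + noise * np.eye(len(points))
    mean_on_grid = rbf(grid, points) @ np.linalg.solve(K, vel)  # (G, 2)
    return mean_on_grid.ravel()

# a 10 x 20 lattice gives the 400-dimensional feature described above
grid = np.stack(np.meshgrid(np.linspace(0, 1, 10),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)
```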
Table 1 corroborates these qualitative results with a quantitative comparison of the computation time and accuracy for the three algorithms tested over 20 trials. The confidence-weighted accuracy was computed by taking the ratio between the sum of the weights for correctly labeled points and the sum of all weights. The DDP-GMM Gibbs sampling algorithm was handicapped as described in the synthetic experiment section. Of the three algorithms, Dynamic Means provided the highest labeling accuracy, while requiring orders of magnitude less computation time than both DP-Means and DDP-GMM Gibbs sampling.

7 Conclusion

This work developed a clustering algorithm for batch-sequential data containing temporally evolving clusters, derived from a low-variance asymptotic analysis of the Gibbs sampling algorithm for the dependent Dirichlet process mixture model. Synthetic and real data experiments demonstrated that the algorithm requires orders of magnitude less computational time than contemporary probabilistic and hard clustering algorithms, while providing higher accuracy on the examined datasets. The speed of inference coupled with the convergence guarantees provided yield an algorithm which is suitable for use in time-critical applications, such as online model-based autonomous planning systems.

Acknowledgments

This work was supported by NSF award IIS-1217433 and ONR MURI grant N000141110688.

References

[1] Yee Whye Teh. Dirichlet processes. In Encyclopedia of Machine Learning. Springer, New York, 2010.
[2] Radford M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
[3] David M. Blei and Michael I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121-144, 2006.
[4] Carlos M. Carvalho, Hedibert F. Lopes, Nicholas G. Polson, and Matt A. Taddy. Particle learning for general mixtures. Bayesian Analysis, 5(4):709-740, 2010.
[5] Steven N. MacEachern.
Dependent nonparametric processes. In Proceedings of the Bayesian Statistical Science Section. American Statistical Association, 1999.
[6] Dahua Lin, Eric Grimson, and John Fisher. Construction of dependent Dirichlet processes based on Poisson processes. In Neural Information Processing Systems, 2010.
[7] Matt Hoffman, David Blei, Chong Wang, and John Paisley. Stochastic variational inference. arXiv ePrint 1206.7051, 2012.
[8] Finale Doshi-Velez and Zoubin Ghahramani. Accelerated sampling for the Indian buffet process. In Proceedings of the International Conference on Machine Learning, 2009.
[9] Felix Endres, Christian Plagemann, Cyrill Stachniss, and Wolfram Burgard. Unsupervised discovery of object classes from range data using latent Dirichlet allocation. In Robotics: Science and Systems, 2005.
[10] Matthias Luber, Kai Arras, Christian Plagemann, and Wolfram Burgard. Classifying dynamic objects: An unsupervised learning approach. In Robotics: Science and Systems, 2004.
[11] Zhikun Wang, Marc Deisenroth, Heni Ben Amor, David Vogt, Bernhard Schölkopf, and Jan Peters. Probabilistic modeling of human movements for intention inference. In Robotics: Science and Systems, 2008.
[12] Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
[13] Dan Pelleg and Andrew Moore. X-means: Extending k-means with efficient estimation of the number of clusters. In Proceedings of the 17th International Conference on Machine Learning, 2000.
[14] Robert Tibshirani, Guenther Walther, and Trevor Hastie. Estimating the number of clusters in a data set via the gap statistic. Journal of the Royal Statistical Society B, 63(2):411-423, 2001.
[15] Brian Kulis and Michael I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In Proceedings of the 29th International Conference on Machine Learning (ICML), Edinburgh, Scotland, 2012.
[16] Thomas S. Ferguson. A Bayesian analysis of some nonparametric problems.
The Annals of Statistics, 1(2):209-230, 1973.
[17] Jayaram Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[18] Tsunenori Ishioka. Extended k-means with an efficient estimation of the number of clusters. In Proceedings of the 2nd International Conference on Intelligent Data Engineering and Automated Learning, pages 17-22, 2000.
[19] Myra Spiliopoulou, Irene Ntoutsi, Yannis Theodoridis, and Rene Schult. MONIC: Modeling and monitoring cluster transitions. In Proceedings of the 12th International Conference on Knowledge Discovery and Data Mining, pages 706-711, 2006.
[20] Panos Kalnis, Nikos Mamoulis, and Spiridon Bakiras. On discovering moving clusters in spatio-temporal data. In Proceedings of the 9th International Symposium on Spatial and Temporal Databases, pages 364-381. Springer, 2005.
[21] Deepayan Chakrabarti, Ravi Kumar, and Andrew Tomkins. Evolutionary clustering. In Proceedings of the SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.
[22] Kevin Xu, Mark Kliger, and Alfred Hero III. Adaptive evolutionary clustering. Data Mining and Knowledge Discovery, pages 1-33, 2012.
[23] Carlos M. Carvalho, Michael S. Johannes, Hedibert F. Lopes, and Nicholas G. Polson. Particle learning and smoothing. Statistical Science, 25(1):88-106, 2010.
[24] Carine Hue, Jean-Pierre Le Cadre, and Patrick Pérez. Tracking multiple objects with particle filtering. IEEE Transactions on Aerospace and Electronic Systems, 38(3):791-812, 2002.
[25] Jaco Vermaak, Arnaud Doucet, and Patrick Pérez. Maintaining multi-modality through mixture tracking. In Proceedings of the 9th IEEE International Conference on Computer Vision, 2003.
[26] Jasper Snoek, Hugo Larochelle, and Ryan Adams. Practical Bayesian optimization of machine learning algorithms. In Neural Information Processing Systems, 2012.
Accelerated Mini-Batch Stochastic Dual Coordinate Ascent

Shai Shalev-Shwartz, School of Computer Science and Engineering, Hebrew University, Jerusalem, Israel
Tong Zhang, Department of Statistics, Rutgers University, NJ, USA

Abstract

Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results to both the vanilla stochastic dual coordinate ascent and to the accelerated deterministic gradient descent method of Nesterov [2007].

1 Introduction

We consider the following generic optimization problem. Let $\phi_1, \ldots, \phi_n$ be a sequence of vector convex functions from $\mathbb{R}^d$ to $\mathbb{R}$, and let $g : \mathbb{R}^d \to \mathbb{R}$ be a strongly convex regularization function. Our goal is to solve $\min_{x \in \mathbb{R}^d} P(x)$ where

$$
P(x) = \frac{1}{n}\sum_{i=1}^{n} \phi_i(x) + g(x). \qquad (1)
$$

For example, given a sequence of $n$ training examples $(v_1, y_1), \ldots, (v_n, y_n)$, where $v_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$, ridge regression is obtained by setting $g(x) = \frac{\lambda}{2}\|x\|^2$ and $\phi_i(x) = (x^\top v_i - y_i)^2$. Regularized logistic regression is obtained by setting $\phi_i(x) = \log(1 + \exp(-y_i x^\top v_i))$. The dual problem of (1) is defined as follows: for each $i$, let $\phi_i^* : \mathbb{R}^d \to \mathbb{R}$ be the convex conjugate of $\phi_i$, namely, $\phi_i^*(u) = \max_{z \in \mathbb{R}^d}(z^\top u - \phi_i(z))$. Similarly, let $g^*$ be the convex conjugate of $g$. The dual problem is:

$$
\max_{\alpha \in \mathbb{R}^{d \times n}} D(\alpha), \qquad D(\alpha) = \frac{1}{n}\sum_{i=1}^{n} -\phi_i^*(-\alpha_i) - g^*\!\left(\frac{1}{n}\sum_{i=1}^{n} \alpha_i\right), \qquad (2)
$$

where for each $i$, $\alpha_i$ is the $i$'th column of the matrix $\alpha$. The dual objective has a different dual vector associated with each primal function.
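The conjugate pairs used in the dual are easy to check numerically. For the ridge loss reduced to a scalar argument, $\phi(a) = (a - y)^2$, the conjugate is $\phi^*(u) = uy + u^2/4$; the sketch below (an illustration, not part of the paper's algorithm) verifies the supremum definition by brute force:

```python
import numpy as np

def conjugate_closed_form(u, y):
    """phi*(u) for phi(a) = (a - y)^2: maximizing u*a - (a - y)^2 over a
    gives a = y + u/2 and the value u*y + u**2/4."""
    return u * y + u ** 2 / 4.0

def conjugate_numeric(u, y, lo=-50.0, hi=50.0, n=200001):
    """Brute-force the supremum in the definition of the conjugate."""
    a = np.linspace(lo, hi, n)
    return float(np.max(u * a - (a - y) ** 2))
```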
Dual Coordinate Ascent (DCA) methods solve the dual problem iteratively: at each iteration of DCA, the dual objective is optimized with respect to a single dual vector, while the rest of the dual vectors are kept intact. Recently, Shalev-Shwartz and Zhang [2013a] analyzed a stochastic version of dual coordinate ascent, abbreviated SDCA, in which at each round we choose which dual vector to optimize uniformly at random (see also Richtárik and Takáč [2012a]). In particular, let $x^*$ be the optimum of (1). We say that a solution $x$ is $\epsilon$-accurate if $P(x) - P(x^*) \le \epsilon$. Shalev-Shwartz and Zhang [2013a] have derived the following convergence guarantee for SDCA: if $g(x) = \frac{\lambda}{2}\|x\|_2^2$ and each $\phi_i$ is $(1/\gamma)$-smooth, then for every $\epsilon > 0$, if we run SDCA for at least

$$
\left(n + \frac{1}{\lambda\gamma}\right) \log\!\left(\left(n + \frac{1}{\lambda\gamma}\right) \cdot \frac{1}{\epsilon}\right)
$$
The use of mini-batches is common with SGD optimization, and it is beneficial when the processing time of a mini-batch of size $m$ is much smaller than $m$ times the processing time of one example (mini-batch of size 1). For example, in the practical training of neural networks with SGD, one is always advised to use mini-batches because it is more efficient to perform matrix-matrix multiplications over a mini-batch than an equivalent amount of matrix-vector multiplication operations (each over a single training example). This is especially noticeable when a GPU is used: in some cases the processing time of a mini-batch of size 100 may be the same as that of a mini-batch of size 10. Another typical use of mini-batches is for parallel computing, which was studied by various authors for stochastic gradient descent (e.g., Dekel et al. [2012]). This is also the application scenario we have in mind, and it will be discussed in greater detail in Section 3. Recently, Takáč et al. [2013] studied mini-batch variants of SDCA in the context of the Support Vector Machine (SVM) problem. They have shown that the naive mini-batching method, in which $m$ dual variables are optimized in parallel, might actually increase the number of iterations required. They then describe several "safe" mini-batching schemes, and based on the analysis of Shalev-Shwartz and Zhang [2013a], have shown several speed-up results. However, their results are for the non-smooth case and hence they do not obtain a linear convergence rate. In addition, the speed-up they obtain requires some spectral properties of the training examples. We take a different approach and employ Nesterov's acceleration method, which has previously been applied to mini-batch SGD optimization. This paper shows how to achieve acceleration for SDCA in the mini-batch setting. The pseudo-code of our Accelerated Mini-Batch SDCA, abbreviated ASDCA, is presented below.
Procedure Accelerated Mini-Batch SDCA
  Parameters: scalars $\lambda$, $\gamma$, and $\theta \in [0, 1]$; mini-batch size $m$
  Initialize: $\alpha_1^{(0)} = \cdots = \alpha_n^{(0)} = \bar\alpha^{(0)} = 0$, $x^{(0)} = 0$
  Iterate: for $t = 1, 2, \ldots$
    $u^{(t-1)} = (1-\theta)\,x^{(t-1)} + \theta\,\nabla g^*(\bar\alpha^{(t-1)})$
    Randomly pick a subset $I \subset \{1, \ldots, n\}$ of size $m$ and update the dual variables in $I$:
      $\alpha_i^{(t)} = (1-\theta)\,\alpha_i^{(t-1)} - \theta\,\nabla\phi_i(u^{(t-1)})$ for $i \in I$
      $\alpha_j^{(t)} = \alpha_j^{(t-1)}$ for $j \notin I$
    $\bar\alpha^{(t)} = \bar\alpha^{(t-1)} + n^{-1}\sum_{i \in I}(\alpha_i^{(t)} - \alpha_i^{(t-1)})$
    $x^{(t)} = (1-\theta)\,x^{(t-1)} + \theta\,\nabla g^*(\bar\alpha^{(t)})$
  end

In the next section we present our main result: an analysis of the number of iterations required by ASDCA. We focus on the case of Euclidean regularization, namely, $g(x) = \frac{\lambda}{2}\|x\|^2$. Analyzing more general strongly convex regularization functions is left for future work. In Section 3 we discuss parallel implementations of ASDCA and compare it to parallel implementations of AGD and SDCA. In particular, we explain in which regimes ASDCA can be better than both AGD and SDCA. In Section 4 we present some experimental results, demonstrating how ASDCA interpolates between AGD and SDCA. The proof of our main theorem is deferred to a long version of this paper (Shalev-Shwartz and Zhang [2013b]). We conclude with a discussion of our work in light of related works in Section 5.

[Footnote 1] An exception is the recent analysis given in Le Roux et al. [2012] for a variant of SGD.

2 Main Results

Our main result is a bound on the number of iterations required by ASDCA to find an $\epsilon$-accurate solution. In our analysis, we only consider the squared Euclidean norm regularization, $g(x) = \frac{\lambda}{2}\|x\|^2$, where $\|\cdot\|$ is the Euclidean norm and $\lambda > 0$ is a regularization parameter. The analysis for general $\lambda$-strongly convex regularizers is left for future work. For the squared Euclidean norm we have $g^*(\alpha) = \frac{1}{2\lambda}\|\alpha\|^2$ and $\nabla g^*(\alpha) = \frac{\alpha}{\lambda}$. We further assume that each $\phi_i$ is $(1/\gamma)$-smooth with respect to $\|\cdot\|$, namely,

$$
\forall x, z: \quad \phi_i(x) \le \phi_i(z) + \nabla\phi_i(z)^\top(x - z) + \frac{1}{2\gamma}\|x - z\|^2.
$$

For example, if $\phi_i(x) = (x^\top v_i - y_i)^2$, then it is $\|v_i\|^2$-smooth.
The smoothness of $\phi_i$ also implies that $\phi_i^*(\alpha)$ is $\gamma$-strongly convex:

$$
\forall \theta \in [0, 1]: \quad \phi_i^*((1-\theta)\alpha + \theta\beta) \le (1-\theta)\phi_i^*(\alpha) + \theta\phi_i^*(\beta) - \frac{\theta(1-\theta)\gamma}{2}\|\alpha - \beta\|^2.
$$

We have the following result for our method.

Theorem 1. Assume that $g(x) = \frac{\lambda}{2}\|x\|_2^2$ and for each $i$, $\phi_i$ is $(1/\gamma)$-smooth w.r.t. the Euclidean norm. Suppose that the ASDCA algorithm is run with parameters $\lambda, \gamma, m, \theta$, where

$$
\theta \le \frac{1}{4}\min\left\{1,\ \sqrt{\frac{\gamma\lambda n}{m}},\ \gamma\lambda n,\ \frac{(\gamma\lambda n)^{2/3}}{m^{1/3}}\right\}. \qquad (3)
$$

Define the dual sub-optimality by $\Delta D(\alpha) = D(\alpha^*) - D(\alpha)$, where $\alpha^*$ is the optimal dual solution, and the primal sub-optimality by $\Delta P(x) = P(x) - D(\alpha^*)$. Then,

$$
m\,\mathbb{E}[\Delta P(x^{(t)})] + n\,\mathbb{E}[\Delta D(\alpha^{(t)})] \le (1 - \theta m/n)^t \left[m\,\Delta P(x^{(0)}) + n\,\Delta D(\alpha^{(0)})\right].
$$

It follows that after performing

$$
t \ge \frac{n/m}{\theta}\,\log\!\left(\frac{m\,\Delta P(x^{(0)}) + n\,\Delta D(\alpha^{(0)})}{m\epsilon}\right)
$$

iterations, we have that $\mathbb{E}[P(x^{(t)}) - D(\alpha^{(t)})] \le \epsilon$.

Let us now discuss the bound, assuming $\theta$ is taken to be the right-hand side of (3). The dominating factor of the bound on $t$ becomes

$$
\frac{n}{m\theta} = \frac{n}{m} \cdot \max\left\{1,\ \sqrt{\frac{m}{\gamma\lambda n}},\ \frac{1}{\gamma\lambda n},\ \frac{m^{1/3}}{(\gamma\lambda n)^{2/3}}\right\} \qquad (4)
$$

$$
= \max\left\{\frac{n}{m},\ \sqrt{\frac{n/m}{\gamma\lambda}},\ \frac{1/m}{\gamma\lambda},\ \frac{n^{1/3}}{(\gamma\lambda m)^{2/3}}\right\}. \qquad (5)
$$

Table 1 summarizes several interesting cases, and compares the iteration bound of ASDCA to the iteration bound of the vanilla SDCA algorithm (as analyzed in Shalev-Shwartz and Zhang [2013a]) and the Accelerated Gradient Descent (AGD) algorithm of Nesterov [2007]. In the tables, we ignore constants and logarithmic factors.

Table 1: Comparison of iteration complexity

  Algorithm | γλn = Θ(1) | γλn = Θ(1/m) | γλn = Θ(m)
  SDCA      | n          | nm           | n
  ASDCA     | n/√m       | n            | n/m
  AGD       | √n         | √(nm)        | √(n/m)

Table 2: Comparison of number of examples processed

  Algorithm | γλn = Θ(1) | γλn = Θ(1/m) | γλn = Θ(m)
  SDCA      | n          | nm           | n
  ASDCA     | n√m        | nm           | n
  AGD       | n√n        | n√(nm)       | n√(n/m)

As can be seen in the table, the ASDCA algorithm interpolates between SDCA and AGD. In particular, ASDCA has the same bound as SDCA when $m = 1$ and the same bound as AGD when $m = n$. Recall that the cost of each iteration of AGD scales with $n$ while the cost of each iteration of SDCA does not scale with $n$. The cost of each iteration of ASDCA scales with $m$.
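The ASDCA pseudocode above, with $\theta$ set from (3), can be sketched directly for the ridge case. The function name, the random seed handling, and the deterministic choice of $\theta$ at its upper bound are our own; this is an illustration, not a tuned implementation:

```python
import numpy as np

def asdca(V, y, lam, gamma, m, T, seed=0):
    """ASDCA sketch for ridge regression: phi_i(x) = (x^T v_i - y_i)^2
    (so grad phi_i(u) = 2 (u^T v_i - y_i) v_i) and g(x) = (lam/2)||x||^2
    (so grad g*(a) = a / lam). gamma is the assumed inverse smoothness
    of the losses; theta is taken at the bound in eq. (3)."""
    rng = np.random.default_rng(seed)
    n, d = V.shape
    gln = gamma * lam * n
    theta = 0.25 * min(1.0, np.sqrt(gln / m), gln,
                       gln ** (2.0 / 3.0) / m ** (1.0 / 3.0))
    alpha = np.zeros((n, d))    # one dual vector per example
    abar, x = np.zeros(d), np.zeros(d)
    for _ in range(T):
        u = (1.0 - theta) * x + theta * abar / lam
        I = rng.choice(n, size=m, replace=False)
        for i in I:
            grad = 2.0 * (u @ V[i] - y[i]) * V[i]
            new_ai = (1.0 - theta) * alpha[i] - theta * grad
            abar += (new_ai - alpha[i]) / n
            alpha[i] = new_ai
        x = (1.0 - theta) * x + theta * abar / lam
    return x
```

On a tiny well-conditioned problem the primal objective drops well below its value at $x = 0$, consistent with the linear rate in Theorem 1.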
To compensate for the different cost per iteration of the different algorithms, we may also compare the complexity in terms of the number of examples processed (see Table 2). This is also what we will study in our empirical experiments. It should be mentioned that this comparison is meaningful in a single-processor environment, but not in a parallel computing environment where multiple examples can be processed simultaneously in a mini-batch. In the next section we discuss under what conditions the overall runtime of ASDCA is better than both AGD and SDCA.

3 Parallel Implementation

In recent years, there has been a lot of interest in implementing optimization algorithms on parallel computing architectures (see Section 5). We now discuss how to implement AGD, SDCA, and ASDCA on a computing machine with s parallel computing nodes. In the calculations below, we use the following facts:

• If each node holds a d-dimensional vector, we can compute the sum of these vectors in time O(d log(s)) by applying a "tree-structure" summation (see, for example, the All-Reduce architecture in Agarwal et al. [2011]).

• A node can broadcast a message with c bits to all other nodes in time O(c log²(s)). To see this, order the nodes on the corners of the log₂(s)-dimensional hypercube. Then, at each iteration, each node sends the message to its log(s) neighbors (namely, the nodes whose code words are at a Hamming distance of 1 from the node). The message between the furthest-apart nodes will arrive after log(s) iterations. Overall, we perform log(s) iterations, and each iteration requires transmitting c log(s) bits.

• All nodes can broadcast a message with c bits to all other nodes in time O(cs log²(s)). To see this, simply apply the broadcasting of the different nodes mentioned above in parallel. The number of iterations will still be the same, but now, at each iteration, each node should transmit cs bits to its log(s) neighbors. Therefore, it takes O(cs log²(s)) time.
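The first fact is easy to simulate: pair the nodes up and add in rounds, so that s vectors are summed in ceil(log2 s) rounds. A toy sketch (ours, sequential rather than truly parallel):

```python
import numpy as np

def tree_sum(node_vectors):
    """Sum s per-node d-dimensional vectors by pairing nodes in rounds,
    as in a tree-structured All-Reduce; returns (total, number_of_rounds)."""
    vecs = [np.asarray(v, dtype=float) for v in node_vectors]
    rounds = 0
    while len(vecs) > 1:
        # each round halves the number of live partial sums
        vecs = [vecs[i] + vecs[i + 1] if i + 1 < len(vecs) else vecs[i]
                for i in range(0, len(vecs), 2)]
        rounds += 1
    return vecs[0], rounds
```

Each round does O(d) work per pair, and there are ceil(log2 s) rounds, matching the O(d log(s)) claim.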
For concreteness of the discussion, we consider problems in which φi(x) takes the form ℓ(x⊤vi, yi), where yi is a scalar and vi ∈ Rd. This is the case in supervised learning of linear predictors (e.g., logistic regression or ridge regression). We further assume that the average number of non-zero elements of vi is ¯d. In very large-scale problems, a single machine cannot hold all of the data in its memory. However, we assume that a single node can hold a fraction 1/s of the data in its memory.

Let us now discuss parallel implementations of the different algorithms, starting with deterministic gradient algorithms (such as AGD). The bottleneck operation of deterministic gradient algorithms is the calculation of the gradient. In the notation above, this amounts to performing on the order of n¯d operations. If the data is distributed over s computing nodes, where each node holds n/s examples, we can calculate the gradient in time O(n¯d/s + d log(s)) as follows. First, each node calculates the gradient over its own n/s examples (which takes time O(n¯d/s)). Then, the s resulting vectors in Rd are summed up in time O(d log(s)).

Next, let us consider the SDCA algorithm. On a single computing node, it was observed that SDCA is much more efficient than deterministic gradient descent methods, since each iteration of SDCA costs only Θ(¯d) while each iteration of AGD costs Θ(n¯d). When we have s nodes, dividing the examples among the s computing nodes does not yield any speed-up for the SDCA algorithm. However, we can divide the features among the s nodes (that is, each node will hold d/s of the features for all of the examples). This enables the computation of x⊤vi in (expected) time O(¯d/s + s log²(s)). Indeed, node t calculates Σ_{j∈Jt} xj vi,j, where Jt is the set of features stored in node t (namely, |Jt| = d/s). Then, each node broadcasts the resulting scalar to all the other nodes.
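The feature-partitioned computation of x⊤vi is easy to mimic in a few lines; a sketch (ours), where each of s "nodes" holds one block Jt of coordinates and contributes a single partial scalar:

```python
import numpy as np

def partitioned_dot(x, v, s):
    """Compute x^T v as a sum of s per-node partial sums sum_{j in J_t} x_j v_j,
    with the d features split into blocks J_1, ..., J_s of size ~ d/s."""
    blocks = np.array_split(np.arange(len(x)), s)    # feature sets J_t
    partials = [float(x[J] @ v[J]) for J in blocks]  # local work per node
    return sum(partials)  # in the distributed setting: combined via broadcast
```

The local work per node is O(¯d/s), and combining the s scalars is what incurs the O(s log²(s)) broadcast term.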
Note that we will obtain a speed-up over the naive implementation only if s log²(s) ≪ ¯d. For the ASDCA algorithm, each iteration involves the computation of the gradient over m examples. We can choose to implement it by dividing the examples among the s nodes (as we did for AGD) or by dividing the features among the s nodes (as we did for SDCA). In the first case, the cost of each iteration is O(m¯d/s + d log(s)), while in the latter case it is O(m¯d/s + ms log²(s)). We choose between these two implementations based on the relation between d, m, and s. The runtime and communication time of each iteration are summarized in the table below.

Algorithm   partition type   runtime   communication time
SDCA        features         ¯d/s      s log²(s)
ASDCA       features         ¯dm/s     ms log²(s)
ASDCA       examples         ¯dm/s     d log(s)
AGD         examples         ¯dn/s     d log(s)

We again see that ASDCA nicely interpolates between SDCA and AGD. In practice, there is usually a non-negligible cost of opening communication channels between nodes. In that case, it will be better to apply ASDCA with a value of m that reflects an adequate tradeoff between the runtime of each node and the communication time. With the appropriate value of m (which depends on constants like the cost of opening communication channels and sending packets of bits between nodes), ASDCA may outperform both SDCA and AGD.

4 Experimental Results

In this section we demonstrate how ASDCA interpolates between SDCA and AGD. All of our experiments are performed for the task of binary classification with a smooth variant of the hinge loss (see Shalev-Shwartz and Zhang [2013a]). Specifically, let (v1, y1), . . . , (vn, yn) be a set of labeled examples, where for every i, vi ∈ Rd and yi ∈ {±1}. Define φi(x) to be

φi(x) = 0                      if yi x⊤vi > 1,
        1/2 − yi x⊤vi          if yi x⊤vi < 0,
        (1/2)(1 − yi x⊤vi)²    otherwise.

We also set the regularization function to be g(x) = (λ/2)∥x∥² where λ = 1/n.
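Written as a function of the margin z = yi x⊤vi, the smoothed hinge above is a sketch of a few lines (the three pieces agree at the breakpoints z = 0 and z = 1, so either convention there gives the same value):

```python
def smoothed_hinge(z):
    """Smoothed hinge as a function of the margin z = y * (x^T v):
    0 for z > 1, 1/2 - z for z < 0, and (1 - z)^2 / 2 in between."""
    if z >= 1.0:
        return 0.0
    if z <= 0.0:
        return 0.5 - z
    return 0.5 * (1.0 - z) ** 2
```

Unlike the plain hinge, this loss is differentiable everywhere (1-smooth in z), which is what places the experiments in the smooth setting of Theorem 1.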
This is the default value for the regularization parameter taken in several optimization packages.

Figure 1: Performance of AGD, SDCA, and ASDCA with different values of the mini-batch size m. In all panels, the x-axis is the number of processed examples. The three columns correspond to the three datasets (astro-ph, CCAT, cov1). Top row: primal sub-optimality. Middle row: average value of the smoothed hinge loss over a test set. Bottom row: average value of the 0-1 loss over a test set. (Plot panels omitted.)

Following Shalev-Shwartz and Zhang [2013a], the experiments were performed on three large datasets with very different feature counts and sparsity. The astro-ph dataset classifies abstracts of papers from the physics ArXiv according to whether they belong in the astro-physics section;
CCAT is a classification task taken from the Reuters RCV1 collection; and cov1 is class 1 of the covertype dataset of Blackard, Jock & Dean. The following table provides details of the dataset characteristics.

Dataset    Training Size   Testing Size   Features   Sparsity
astro-ph   29882           32487          99757      0.08%
CCAT       781265          23149          47236      0.16%
cov1       522911          58101          54         22.22%

We ran ASDCA with values of m from the set {10⁻⁴n, 10⁻³n, 10⁻²n}. We also ran the SDCA algorithm and the AGD algorithm. In Figure 1 we depict the primal sub-optimality of the different algorithms as a function of the number of examples processed. Note that each iteration of SDCA processes a single example, each iteration of ASDCA processes m examples, and each iteration of AGD processes n examples. As can be seen from the graphs, ASDCA indeed interpolates between SDCA and AGD. It is clear from the graphs that SDCA is much better than AGD when we have a single computing node. ASDCA's performance is quite similar to SDCA's when m is not very large. As discussed in Section 3, when we have parallel computing nodes and there is a non-negligible cost of opening communication channels between nodes, running ASDCA with an appropriate value of m (which depends on constants like the cost of opening communication channels) may yield the best performance.

5 Discussion and Related Work

We have introduced an accelerated version of stochastic dual coordinate ascent with mini-batches. We have shown, both theoretically and empirically, that the resulting algorithm interpolates between the vanilla stochastic coordinate descent algorithm and the accelerated gradient descent algorithm. Using mini-batches in stochastic learning has received a lot of attention in recent years. For example, Shalev-Shwartz et al. [2007] reported experiments showing that applying small mini-batches in Stochastic Gradient Descent (SGD) decreases the required number of iterations. Dekel et al.
[2012] and Agarwal and Duchi [2012] gave an analysis of SGD with mini-batches for smooth loss functions. Cotter et al. [2011] studied SGD and accelerated versions of SGD with mini-batches, and Takáč et al. [2013] studied SDCA with mini-batches for SVMs. Duchi et al. [2010] studied dual averaging in distributed networks as a function of spectral properties of the underlying graph. However, all of these methods have a polynomial dependence on 1/ϵ, while we consider the strongly convex and smooth case in which a log(1/ϵ) rate is achievable.² Parallel coordinate descent has also been recently studied in Fercoq and Richtárik [2013], Richtárik and Takáč [2013]. It is interesting to note that most³ of these papers focus on mini-batches as the method of choice for distributing SGD or SDCA, while ignoring the option to divide the data by features instead of by examples. A possible reason is the cost of opening communication sockets, as discussed in Section 3.

There are various practical considerations that one should take into account when designing a practical system for distributed optimization. We refer the reader, for example, to Dekel [2010], Low et al. [2010, 2012], Agarwal et al. [2011], Niu et al. [2011]. The more general problem of distributed PAC learning has been studied recently in Daume III et al. [2012], Balcan et al. [2012]. See also Long and Servedio [2011]. In particular, they obtain algorithms with O(log(1/ϵ)) communication complexity. However, these works consider efficient algorithms only in the realizable case.

Acknowledgements: Shai Shalev-Shwartz is supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). Tong Zhang is supported by the following grants: NSF IIS-1016061, NSF DMS-1007527, and NSF IIS-1250985.

References

Alekh Agarwal and John C Duchi. Distributed delayed stochastic optimization. In Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, pages 5451–5452. IEEE, 2012.
Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. A reliable effective terascale linear learning system. arXiv preprint arXiv:1110.4198, 2011.

Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. arXiv preprint arXiv:1204.3514, 2012.

Joseph K Bradley, Aapo Kyrola, Danny Bickson, and Carlos Guestrin. Parallel coordinate descent for l1-regularized loss minimization. In ICML, 2011.

Andrew Cotter, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Better mini-batch algorithms via accelerated gradient methods. arXiv preprint arXiv:1106.4574, 2011.

Hal Daume III, Jeff M Phillips, Avishek Saha, and Suresh Venkatasubramanian. Protocols for learning classifiers on distributed data. arXiv preprint arXiv:1202.6078, 2012.

Ofer Dekel. Distribution-calibrated hierarchical classification. In NIPS, 2010.

Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. The Journal of Machine Learning Research, 13:165–202, 2012.

John Duchi, Alekh Agarwal, and Martin J Wainwright. Distributed dual averaging in networks. Advances in Neural Information Processing Systems, 23, 2010.

Olivier Fercoq and Peter Richtárik. Smooth minimization of nonsmooth functions with parallel coordinate descent methods. arXiv preprint arXiv:1309.5885, 2013.

Nicolas Le Roux, Mark Schmidt, and Francis Bach. A stochastic gradient method with an exponential convergence rate for strongly-convex optimization with finite training sets. arXiv preprint arXiv:1202.6258, 2012.

Phil Long and Rocco Servedio. Algorithms and hardness results for parallel large margin learning. In NIPS, 2011.

Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M Hellerstein. Graphlab: A new framework for parallel machine learning. arXiv preprint arXiv:1006.4990, 2010.

Yucheng Low, Danny Bickson, Joseph Gonzalez, Carlos Guestrin, Aapo Kyrola, and Joseph M Hellerstein. Distributed graphlab: A framework for machine learning and data mining in the cloud. Proceedings of the VLDB Endowment, 5(8):716–727, 2012.

Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.

Yurii Nesterov. Gradient methods for minimizing composite objective function, 2007.

Feng Niu, Benjamin Recht, Christopher Ré, and Stephen J Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. arXiv preprint arXiv:1106.5730, 2011.

Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, pages 1–38, 2012a.

Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization. arXiv preprint arXiv:1212.0873, 2012b.

Peter Richtárik and Martin Takáč. Distributed coordinate descent method for learning with big data. arXiv preprint arXiv:1310.2059, 2013.

Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, Feb 2013a.

Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent. arXiv, 2013b.

Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In ICML, pages 807–814, 2007.

Martin Takáč, Avleen Bijral, Peter Richtárik, and Nathan Srebro. Mini-batch primal and dual methods for SVMs. arXiv, 2013.

²It should be noted that one can use our results for Lipschitz functions as well by smoothing the loss function (see Nesterov [2005]). By doing so, we can interpolate between the 1/ϵ² rate of the non-accelerated method and the 1/ϵ rate of accelerated gradient.
³There are a few exceptions in the context of stochastic coordinate descent in the primal. See for example Bradley et al. [2011], Richtárik and Takáč [2012b].
Parametric Task Learning Ichiro Takeuchi Nagoya Institute of Technology Nagoya, 466-8555, Japan takeuchi.ichiro@nitech.ac.jp Tatsuya Hongo Nagoya Institute of Technology Nagoya, 466-8555, Japan hongo.mllab.nit@gmail.com Masashi Sugiyama Tokyo Institute of Technology Tokyo, 152-8552, Japan sugi@cs.titech.ac.jp Shinichi Nakajima Nikon Corporation Tokyo, 140-8601, Japan nakajima.s@nikon.co.jp Abstract We introduce an extended formulation of multi-task learning (MTL) called parametric task learning (PTL) that can systematically handle infinitely many tasks parameterized by a continuous parameter. Our key finding is that, for a certain class of PTL problems, the path of the optimal task-wise solutions can be represented as piecewise-linear functions of the continuous task parameter. Based on this fact, we employ a parametric programming technique to obtain the common shared representation across all the continuously parameterized tasks. We show that our PTL formulation is useful in various scenarios such as learning under non-stationarity, cost-sensitive learning, and quantile regression. We demonstrate the advantage of our approach in these scenarios. 1 Introduction Multi-task learning (MTL) has been studied for learning multiple related tasks simultaneously. A key assumption behind MTL is that there exists a common shared representation across the tasks. Many MTL algorithms attempt to find such a common representation and at the same time to learn multiple tasks under that shared representation. For example, we can enforce all the tasks to share a common feature subspace or a common set of variables by using an algorithm introduced in [1, 2] that alternately optimizes the shared representation and the task-wise solutions. 
Although the standard MTL formulation can handle only a finite number of tasks, it is sometimes more natural to consider infinitely many tasks parameterized by a continuous parameter, e.g., in learning under non-stationarity [3], where learning problems change over continuous time; cost-sensitive learning [4], where loss functions are asymmetric with a continuous cost balance; and quantile regression [5], where the quantile is a continuous variable between zero and one. In order to handle these infinitely many parameterized tasks, we propose in this paper an extended formulation of MTL called parametric-task learning (PTL). The key contribution of this paper is to show that, for a certain class of PTL problems, the optimal common representation shared across infinitely many parameterized tasks can be obtained. Specifically, we develop an alternating minimization algorithm à la [1, 2] for finding the entire continuum of solutions and the common feature subspace (or the common set of variables) among infinitely many parameterized tasks. Our algorithm exploits the fact that, for those classes of PTL problems, the path of task-wise solutions is piecewise-linear in the task parameter. We use the parametric programming technique [6, 7, 8, 9] for computing those piecewise-linear solutions.

Notations: Let us denote by R, R+, and R++ the sets of real, nonnegative, and positive numbers, respectively, while we define Nn := {1, . . . , n} for every natural number n. We denote by Sd++ the set of d × d positive definite matrices, and let I(·) be the indicator function.

2 Review of Multi-Task Learning (MTL)

In this section, we review an MTL method developed in [1, 2]. Let {(xi, yi)}i∈Nn be the set of n training instances, where xi ∈ X ⊆ Rd is the input and yi ∈ Y is the output. We define wi(t) ∈ [0, 1], t ∈ NT, as the weight of the ith instance for the tth task, where T is the number of tasks. We consider an affine model ft(x) = βt,0 + βt⊤x for each task, where βt,0 ∈ R and βt ∈ Rd.
For notational simplicity, we define the augmented vectors ˜β := (β0, β1, . . . , βd)⊤ ∈ Rd+1 and ˜x := (1, x1, . . . , xd)⊤ ∈ Rd+1, and write the affine model as ft(x) = ˜βt⊤˜x. The multi-task feature learning method discussed in [1] is formulated as

min over {˜βt}t∈NT and D ∈ Sd++ with tr(D) ≤ 1 of
Σ_{t∈NT} Σ_{i∈Nn} wi(t) ℓt(r(yi, ˜βt⊤˜xi)) + (γ/T) Σ_{t∈NT} βt⊤ D⁻¹ βt,   (1)

where tr(D) is the trace of D, ℓt : R → R+ is the loss function for the tth task incurred on the residual r(yi, ˜βt⊤˜xi),¹ and γ > 0 is the regularization parameter.² It was shown in [1] that problem (1) is equivalent to

min over {˜βt}t∈NT of Σ_{t∈NT} Σ_{i∈Nn} wi(t) ℓt(r(yi, ˜βt⊤˜xi)) + (γ/T) ||B||²tr,

where B is the d × T matrix whose tth column is the vector βt, and ||B||tr := tr((BB⊤)^{1/2}) is the trace norm of B. As shown in [10], the trace norm is the convex upper envelope of the rank of B, and (1) can be interpreted as the problem of finding a common feature subspace across T tasks. This problem is often referred to as multi-task feature learning. If the matrix D is restricted to be diagonal, the formulation (1) reduces to multi-task variable selection [11, 12].

In order to solve problem (1), the alternating minimization algorithm was suggested in [1] (see Algorithm 1). This algorithm alternately optimizes the task-wise solutions {˜βt}t∈NT and the common representation matrix D. It is worth noting that, when D is fixed, each ˜βt can be independently optimized (Step 1). On the other hand, when {˜βt}t∈NT are fixed, the optimization of the matrix D can be reduced to a minimization over the d eigenvalues λ1, . . . , λd of the matrix C := BB⊤, and the optimal D can be analytically computed (Step 2).

3 Parametric-Task Learning (PTL)

We consider the case where we have infinitely many tasks parameterized by a single continuous parameter. Let θ ∈ [θL, θU] be a continuous task parameter. Instead of the set of weights wi(t), t ∈ NT, we consider a weight function wi : [θL, θU] → [0, 1] for each instance i ∈ Nn.
In PTL, we learn a parameter vector ˜βθ ∈ Rd+1 as a continuous function of the task parameter θ:

min over {˜βθ}θ∈[θL,θU] and D ∈ Sd++ with tr(D) ≤ 1 of
∫_{θL}^{θU} Σ_{i∈Nn} wi(θ) ℓθ(r(yi, ˜βθ⊤˜xi)) dθ + γ ∫_{θL}^{θU} βθ⊤ D⁻¹ βθ dθ,   (2)

where the loss function ℓθ may itself depend on θ. As we will explain in the next section, the above PTL formulation is useful in various important machine learning scenarios, including learning under non-stationarity, cost-sensitive learning, and quantile regression.

Algorithm 1 ALTERNATING MINIMIZATION ALGORITHM FOR MTL [1]
1: Input: Data {(xi, yi)}i∈Nn and weights {wi(t)}i∈Nn,t∈NT;
2: Initialize: D ← Id/d (Id is the d × d identity matrix)
3: while convergence condition is not true do
4:   Step 1: For t = 1, . . . , T do
       ˜βt ← arg min_{˜β} Σ_{i∈Nn} wi(t) ℓt(r(yi, ˜β⊤˜xi)) + (γ/T) β⊤ D⁻¹ β
5:   Step 2: D ← C^{1/2}/tr(C^{1/2}) = arg min_{D∈Sd++, tr(D)≤1} Σ_{t∈NT} βt⊤ D⁻¹ βt,
     where C := BB⊤, whose (j, k)th element is defined as Cj,k := Σ_{t∈NT} βtj βtk.
6: end while
7: Output: {˜βt}t∈NT and D;

¹For example, r(yi, ˜βt⊤˜xi) = (yi − ˜βt⊤˜xi)² for regression problems with yi ∈ R, while r(yi, ˜βt⊤˜xi) = 1 − yi ˜βt⊤˜xi for binary classification problems with yi ∈ {−1, 1}.
²In [1], wi(t) takes either 1 or 0. It takes 1 only if the ith instance is used in the tth task. We slightly generalize the setup so that each instance can be used in multiple tasks with different weights.
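Step 2 of Algorithm 1 admits the closed form D = C^{1/2}/tr(C^{1/2}) with C = BB⊤; a minimal NumPy sketch (ours) via the eigendecomposition of C:

```python
import numpy as np

def update_D(B):
    """Closed-form Step 2 of Algorithm 1: D = C^{1/2} / tr(C^{1/2}),
    where C = B B^T and column t of the d x T matrix B is beta_t."""
    C = B @ B.T
    w, U = np.linalg.eigh(C)  # C is symmetric PSD
    Chalf = (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T  # matrix square root
    return Chalf / np.trace(Chalf)
```

By construction tr(D) = 1, and among all feasible D this choice minimizes Σ_t βt⊤D⁻¹βt = tr(D⁻¹C), attaining the value tr(C^{1/2})².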
By exploiting this piecewise-linearity, we can efficiently handle infinitely many parameterized tasks, and the optimal solutions of those class of PTL problems can be exactly computed. In the following theorem, we prove that the task-wise solutions ˜βθ is piecewise-linear in θ if the weight functions and the loss function satisfy certain conditions. Theorem 1 For any d × d positive-definite matrix D ∈Sd ++, the optimal solution path of ˜βθ ←arg min ˜β ∑ i∈Nn wi(θ)ℓθ(r(yi, ˜β⊤˜xi)) + γβ⊤D−1β (3) for θ ∈[θL, θU] is written as a piecewise-linear function of θ if the residual r(y, ˜β⊤˜x) can be written as an affine function of ˜β, and the weight functions wi : [θL, θU] →[0, 1], i ∈Nn and the loss function ℓ: R →R+ satisfy either of the following conditions (a) or (b): (a) All the weight functions are piecewise-linear functions, and the loss function is a convex piecewise-linear function which does not depend on θ; (b) All the weight functions are piecewise-constant functions, and the loss function is a convex piecewise-linear function which depends on θ in the following form: ℓθ(r) = ∑ h∈NH max{(ah + bhr)(ch + dhθ), 0}, (4) where H is a positive integer, and ah, bh, ch, dh ∈R are constants such that ch + dhθ ≥0 for all θ ∈[θL, θU]. In the proof in Appendix A, we show that, if the weight functions and the loss function satisfy the conditions (a) or (b), the problem (3) is reformulated as a parametric quadratic program (parametric QP), where the parameter θ only appears in the linear term of the objective function. As shown, for example, in [9], the optimal solution path of this class of parametric QP has a piecewise-linear form. If ˜βθ is piecewise-linear in θ, we can exactly compute the entire solution path by using parametric programming. 
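As a check of condition (b), the pinball loss of quantile regression (introduced in Section 4) fits the form (4) with H = 2: taking (a1, b1, c1, d1) = (0, 1, 0, 1) and (a2, b2, c2, d2) = (0, −1, 1, −1) gives ρθ(r) = max{rθ, 0} + max{−r(1 − θ), 0}, and ch + dhθ ≥ 0 on θ ∈ [0, 1] for both terms. A quick numerical sketch (ours):

```python
def pinball(r, tau):
    """rho_tau(r) = (1 - tau)|r| for r <= 0 and tau*|r| for r > 0."""
    return (1.0 - tau) * (-r) if r <= 0 else tau * r

def pinball_via_form4(r, theta):
    """The same loss written in the form (4): sum over h of
    max{(a_h + b_h r)(c_h + d_h theta), 0}, with H = 2 terms."""
    terms = [(0.0, 1.0, 0.0, 1.0), (0.0, -1.0, 1.0, -1.0)]
    return sum(max((a + b * r) * (c + d * theta), 0.0) for a, b, c, d in terms)
```

For r > 0 only the first term is active (giving θr), and for r < 0 only the second (giving (1 − θ)|r|), so the two expressions agree everywhere.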
In the machine learning literature, parametric programming is often used in the context of regularization path-following [13, 14, 15].³ We start from the solution at θ = θL and follow the path of the optimal solutions while θ is continuously increased. This is conducted efficiently by exploiting the piecewise-linearity. Our proposed algorithm for solving the PTL problem (2) is described in Algorithm 2, which is essentially a continuous version of the MTL algorithm shown in Algorithm 1. Note that, by exploiting the piecewise-linearity of βθ, we can compute the integral at Step 2 (Eq. (5)) in Algorithm 2.

Algorithm 2 ALTERNATING MINIMIZATION ALGORITHM FOR PTL
1: Input: Data {(xi, yi)}i∈Nn and weight functions wi : [θL, θU] → [0, 1] for all i ∈ Nn;
2: Initialize: D ← Id/d (Id is the d × d identity matrix)
3: while convergence condition is not true do
4:   Step 1: For all the continuum of θ ∈ [θL, θU] do
       ˜βθ ← arg min_{˜β} Σ_{i∈Nn} wi(θ) ℓθ(r(yi, ˜β⊤˜xi)) + γ β⊤ D⁻¹ β
     by using parametric programming;
5:   Step 2: D ← C^{1/2}/tr(C^{1/2}) = arg min_{D∈Sd++, tr(D)≤1} ∫_{θL}^{θU} βθ⊤ D⁻¹ βθ dθ,   (5)
     where the (j, k)th element of C ∈ Rd×d is defined as Cj,k := ∫_{θL}^{θU} βθ,j βθ,k dθ;
6: end while
7: Output: {˜βθ} for θ ∈ [θL, θU] and D;

Algorithm 2 can be changed to parametric-task variable selection if Step 2 is replaced with D ← diag(λ1, . . . , λd), where

λj = sqrt(∫_{θL}^{θU} β²θ,j dθ) / Σ_{j′∈Nd} sqrt(∫_{θL}^{θU} β²θ,j′ dθ)  for all j ∈ Nd,

which can also be computed efficiently by exploiting the piecewise-linearity of βθ.

³In regularization path-following, one computes the optimal solution path w.r.t. the regularization parameter, whereas we compute the optimal solution path w.r.t. the task parameter θ.

4 Examples of PTL Problems

In this section, we present three examples where our PTL formulation (2) is useful.

Binary Classification Under Non-Stationarity

Suppose that we observe n training instances sequentially, and denote them as {(xi, yi, τi)}i∈Nn, where xi ∈ Rd, yi ∈ {−1, 1}, and τi is the time when the ith instance is observed. Without loss of generality, we assume that τ1 < . . . < τn.
Under non-stationarity, if we are requested to learn a classifier to predict the output for a test input x observed at time τ, the training instances observed around time τ should have more influence on the classifier than others. Let wi(τ) denote the weight of the ith instance when training a classifier for a test point at time τ. We can, for example, use the following triangular weight function (see Figure 1):

wi(τ) = 1 + s⁻¹(τi − τ)  if τ − s ≤ τi < τ,
        1 − s⁻¹(τi − τ)  if τ ≤ τi < τ + s,
        0                 otherwise,   (6)

where s > 0 determines the width of the triangular time windows. The problem of training a classifier for time τ is then formulated as

min_{˜β} Σ_{i∈Nn} wi(τ) max(0, 1 − yi ˜β⊤˜xi) + γ||β||²,

where we used the hinge loss.

Figure 1: Examples of weight functions {wi(τ)}i∈Nn in non-stationary time-series learning. Given training instances (xi, yi) at times τi for i = 1, . . . , n under a non-stationary condition, it is reasonable to use the weights {wi(τ)}i∈Nn shown here when we learn a classifier to predict the output of a test input at time τ.

If we have the belief that a set of classifiers for different times should have some common structure, we can apply our PTL approach to this problem. If we consider a time interval τ ∈ [τL, τU], the parametric-task feature learning problem is formulated as

min over {˜βτ}τ∈[τL,τU] and D ∈ Sd++ with tr(D) ≤ 1 of
∫_{τL}^{τU} Σ_{i∈Nn} wi(τ) max(0, 1 − yi ˜βτ⊤˜xi) dτ + γ ∫_{τL}^{τU} βτ⊤ D⁻¹ βτ dτ.   (7)

Note that problem (7) satisfies condition (a) in Theorem 1.

Joint Cost-Sensitive Learning

Next, let us consider cost-sensitive binary classification. When the costs of false positives and false negatives are unequal, or when the numbers of positive and negative training instances are highly imbalanced, it is effective to use the cost-sensitive learning approach [16].
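The triangular window (6) above is piecewise-linear in τ, as condition (a) of Theorem 1 requires; a direct sketch (ours):

```python
def triangular_weight(tau_i, tau, s):
    """w_i(tau) from (6): rises linearly on [tau - s, tau), falls on
    [tau, tau + s), is 0 outside, and peaks at 1 when tau_i = tau."""
    if tau - s <= tau_i < tau:
        return 1.0 + (tau_i - tau) / s
    if tau <= tau_i < tau + s:
        return 1.0 - (tau_i - tau) / s
    return 0.0
```

Instances observed exactly at the query time τ get full weight, and the influence decays linearly to zero over a window of half-width s.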
Suppose that we are given a set of training instances {(xi, yi)}i∈Nn with xi ∈Rd and yi ∈{−1, 1}. If we know that the ratio of the false positive and false negative costs is approximately θ : (1 −θ), it is reasonable to solve the following cost-sensitive SVM [17]: min ˜β ∑ i∈Nn wi(θ) max(0, 1 −yi ˜β⊤˜xi) + γ||β||2 2, where the weight wi(θ) is defined as wi(θ) = { θ if yi = −1, 1 −θ if yi = +1. When the exact false positive and false negative costs in the test scenario are unknown [4], it is often desirable to train several cost-sensitive SVMs with different values of θ. If we have the belief that a set of classifiers for different cost ratios should have some common structure, we can apply our PTL approach to this problem. If we consider an interval θ ∈[θL, θU], 0 < θL < θU < 1, the parametric-task feature learning problem is formulated as min { ˜βθ}θ∈[θL,θU] D∈Sd ++,tr(D)≤1 ∫θU θL ∑ i∈Nn wi(θ) max(0, 1 −yi ˜β⊤ θ ˜xi) dθ + γ ∫θU θL β⊤ θ D−1βθ dθ. (8) The problem (8) also satisfies the condition (a) in Theorem 1. Figure 2 shows an example of joint cost-sensitive learning applied to a toy 2D binary classification problem. Joint Quantile Regression Given a set of training instances {(xi, yi)}i∈Nn with xi ∈Rd and yi ∈R drawn from a joint distribution P(X, Y ), quantile regression [19] is used to estimate the conditional τ th quantile F −1 Y |X=x(τ) as a function of x, where τ ∈(0, 1) and FY |X=x is the cumulative distribution function of the conditional distribution P(Y |X = x). Jointly estimating multiple conditional quantile functions is often useful for exploring the stochastic relationship between X and Y (see Section 5 for an example of joint quantile regression problems). Linear quantile regression along with L2 regularization [20] at order τ ∈(0, 1) is formulated as min ˜β ∑ i∈Nn ρτ(yi −˜β⊤˜xi) + γ||β||2 2, ρτ(r) := { (1 −τ)|r| if r ≤0, τ|r| if r > 0. 
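A quick sanity check (ours) that minimizing the pinball loss ρτ above over a constant predictor recovers an empirical τ-quantile:

```python
def pinball_risk(q, ys, tau):
    """Sum over the sample of rho_tau(y - q), with
    rho_tau(r) = (1 - tau)|r| for r <= 0 and tau*|r| for r > 0."""
    total = 0.0
    for y in ys:
        r = y - q
        total += tau * r if r > 0 else (1.0 - tau) * (-r)
    return total

ys = list(range(1, 11))  # toy sample 1, 2, ..., 10
tau = 0.3
best = min(ys, key=lambda q: pinball_risk(q, ys, tau))
```

For this sample both q = 3 and q = 4 attain the minimal risk 10.5, consistent with the 0.3-quantile lying between the 3rd and 4th order statistics.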
5 -4 -2 0 2 4 -4 -2 0 2 4 6 x2 x1 -4 -2 0 2 4 -4 -2 0 2 4 6 x2 x1 (a) Independent cost-sensitive learning (b) Joint cost-sensitive learning Figure 2: An example of joint cost-sensitive learning on 2D toy dataset (2D input x is expanded to n-dimension by radial basis functions centered on each xi). In each plot, the decision boundaries of five cost-sensitive SVMs (θ = 0.1, 0.25, 0.5, 0.75, 0.9) are shown. (a) Left plot is the results obtained by independently training each cost-sensitive SVMs. (b) Right plot is the results obtained by jointly training infinitely many cost-sensitive SVMs for all the continuum of θ ∈[0.05, 0.95] using the methodology we present in this paper (both are trained with the same regularization parameter γ). When independently trained, the inter-relationship among different cost-sensitive SVMs looks inconsistent (c.f., [18]). If we have the belief that a family of quantile regressions at various τ ∈(0, 1) have some common structure, we can apply our PTL framework to joint estimation of the family of quantile regressions This PTL problem satisfies the condition (b) in Theorem 1, and is written as min {βτ }τ∈(0,1) D∈Sd ++,tr(D)≤1 ∫1 0 ∑ i∈Nn ρτ(yi −β⊤ τ xi)dτ + γ ∫1 0 β⊤ τ D−1βτdτ, where we do not need any weighting and omit wi(τ) = 1 for all i ∈Nn and τ ∈[0, 1]. 5 Numerical Illustrations In this section, we illustrate various aspects of PTL with the three examples discussed in the previous section. Artificial Example for Learning under Non-stationarity We first consider a simple artificial problem with non-stationarity, where the data generating mechanism gradually changes. We assume that our data generating mechanism produces the training set {(xi, yi, τi)}i∈Nn with n = 100 as follows. For each τi ∈{0, 1 2π n , 2 2π n , . . . , (n −1) 2π n }, the output yi is first determined as yi = 1 if i is odd, while yi = −1 if i is even. Then, xi ∈Rd is generated as xi1 ∼N(yi cos τi, 12), xi2 ∼N(yi sin τi, 12), xij ∼N(0, 12), ∀j ∈{3, . . . 
, d}, (9)

where N(µ, σ²) is the normal distribution with mean µ and variance σ². Namely, only the first two dimensions of x differ between the two classes, and the remaining d − 2 dimensions are pure noise. In addition, according to the value of τi, the means of the class-wise distributions in the first two dimensions gradually change. The data distributions of the first two dimensions for τ = 0, 0.5π, π, 1.5π are illustrated in Figure 3. Here, we applied our PT feature learning approach with triangular time windows in (6) with s = 0.25π. Figure 4 shows the mis-classification rate of PT feature learning (PTFL) and ordinary independent learning (IND) on a similarly generated test sample of size 1000. When the input dimension d = 2, there is no advantage in learning common features, since both input dimensions are important for classification. On the other hand, as d increases, PT feature learning becomes more and more advantageous. Especially when the regularization parameter γ is large, the independent learning approach deteriorates completely as d increases, while PTFL works reasonably well in all the setups.

Figure 3: The first 2 input dimensions of the artificial example at τ = 0, 0.5π, π, 1.5π. The class-wise distributions in these two dimensions gradually change with τ ∈ [0, 2π].

Figure 4: Experimental results on the artificial example under non-stationarity. Mis-classification rates on a test sample of size 1000 for the setups d ∈ {2, 5, 10, 20, 50, 100} and γ ∈ {0.1, 1, 10} are shown. The red symbols indicate the results of our PT feature learning (PTFL), whereas the blue symbols indicate ordinary independent learning (IND).
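The generator in Eq. (9) is easy to reproduce; the following is a small pure-Python sketch (the function name and seeding are our own, not from the paper):

```python
import math
import random

def generate_nonstationary_sample(n=100, d=5, seed=0):
    """Draw {(x_i, y_i, tau_i)} as in Eq. (9): labels alternate with i, and the
    class means in the first two input dimensions rotate with tau_i = (i-1)*2*pi/n;
    the remaining d - 2 dimensions are pure N(0, 1) noise."""
    rng = random.Random(seed)
    sample = []
    for i in range(1, n + 1):
        tau = (i - 1) * 2.0 * math.pi / n
        y = 1 if i % 2 == 1 else -1
        x = [rng.gauss(y * math.cos(tau), 1.0),
             rng.gauss(y * math.sin(tau), 1.0)]
        x += [rng.gauss(0.0, 1.0) for _ in range(d - 2)]
        sample.append((x, y, tau))
    return sample
```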
Plotted in Figure 4 are the average (and standard deviation) over 100 replications with different random seeds. All the differences except d = 2 are statistically significant (p < 0.01).

Joint Cost-Sensitive SVM Learning on Benchmark Datasets Here, we report the experimental results on joint cost-sensitive SVM learning discussed in Section 4. Although our main contribution is not just claiming favorable generalization properties of parametric-task learning solutions, we compared, as an illustration, the generalization performances of PT feature learning (PTFL) and PT variable selection (PTVS) with the ordinary independent learning approach (IND). In PTFL and PTVS, we learned common feature subspaces and common sets of variables shared across the continuum of cost-sensitive SVMs for θ ∈ [0.05, 0.95] on 10 benchmark datasets (see Table 1). In each dataset, we divided the entire sample into training, validation, and test sets of almost equal size. The average test errors (and the standard deviations) over 10 different data splits are reported in Table 1. The total test error for cost-sensitive SVMs with θ = 0.1, 0.2, …, 0.9 is defined as

∑_{θ∈{0.1,…,0.9}} ( θ ∑_{i: yi=−1} I(fθ(xi) > 0) + (1 − θ) ∑_{i: yi=1} I(fθ(xi) ≤ 0) ),

where fθ is the trained SVM with cost ratio θ. Model selection was conducted using the same criterion on the validation sets. We see that, in most cases, PTFL or PTVS had better generalization performance than IND.

Joint Quantile Regression Finally, we applied PT feature learning to joint quantile regression problems. Here, we took a slightly different approach from what was described in the previous section. Given a training set {(xi, yi)}i∈Nn, we first estimated the conditional mean function E[Y|X = x] by least-squares regression, and computed the residuals ri := yi − Ê[Y|X = xi], where Ê is the estimated conditional mean function.
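To make the residual-based construction concrete, here is a hedged sketch (ours, not the paper's code): the pinball loss ρτ from Section 4, a brute-force check that minimizing it over a constant predictor recovers an empirical τ-quantile, and the homoscedastic shortcut of shifting residual quantiles by the estimated mean.

```python
def pinball(r, tau):
    """Check loss rho_tau: (1 - tau)*|r| for r <= 0 and tau*|r| for r > 0."""
    return tau * r if r > 0 else (tau - 1.0) * r

def empirical_quantile(ys, tau):
    """A constant predictor minimizing the total pinball loss is a tau-quantile."""
    return min(ys, key=lambda q: sum(pinball(y - q, tau) for y in ys))

def shifted_quantile_curves(ys, taus):
    """Homoscedastic construction: conditional tau-quantile = estimated mean
    plus the tau-quantile of the residuals (here the mean model is a constant)."""
    mean_hat = sum(ys) / len(ys)
    residuals = [y - mean_hat for y in ys]
    return {tau: mean_hat + empirical_quantile(residuals, tau) for tau in taus}
```

Because the shifted curves are parallel, they can never cross; the PTL approach described next interpolates between this extreme and fully independent fits.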
Then, we applied PT feature learning to {(xi, ri)}i∈Nn, and estimated the conditional τth quantile function as F̂⁻¹_{Y|X=x}(τ) := Ê[Y|X = x] + f̂res(x|τ), where f̂res(·|τ) is the estimated τth quantile regression fitted to the residuals. When multiple quantile regressions with different τs are independently learned, we often encounter a notorious problem known as quantile crossing (see Section 2.5 in [5]). For example, in Figure 5(a), some of the estimated conditional quantile functions cross each other (which never happens among the true conditional quantile functions). One possible approach to mitigating this problem is to assume a model of the heteroscedastic structure. In the simplest case, if we assume that the data is homoscedastic (i.e., the conditional distribution P(Y|x) does not depend on x except through its location), quantile regressions at different τs can be obtained by simply shifting a single quantile regression function vertically (see Figure 5(f)). Our PT feature learning approach, when applied to the joint quantile regression problem, allows us to interpolate between these two extreme cases. Figure 5 shows a joint QR example on the bone mineral density (BMD) data [21]. We applied our approach after expanding the univariate input x to a d = 5 dimensional vector using evenly allocated RBFs. When (a) γ → 0, our approach is identical to independently estimating each quantile regression, while it coincides with the homoscedastic case when (f) γ → ∞. In our experience, the best solution is usually found somewhere between these two extremes: in this example, (d) γ = 5 was chosen as the best model by 10-fold cross-validation.

Table 1: Average (and standard deviation) of test errors obtained by joint cost-sensitive SVMs on benchmark datasets. n is the sample size, d is the input dimension, IND indicates the results when each cost-sensitive SVM was trained independently, while PTFL and PTVS indicate the results from PT feature learning and PT variable selection, respectively. The bold numbers in the table indicate the best performance among the three methods.

Data Name | n | d | IND | PTFL | PTVS
Parkinson | 195 | 20 | 32.30 (10.60) | 30.21 (9.09) | 30.25 (8.53)
Breast Cancer Diagnostic | 569 | 30 | 20.36 (7.77) | 18.49 (6.15) | 19.46 (5.89)
Breast Cancer Prognostic | 194 | 33 | 48.97 (12.92) | 49.28 (9.83) | 48.68 (5.89)
Australian | 690 | 14 | 117.97 (22.97) | 106.25 (12.66) | 111.22 (15.95)
Diabetes | 768 | 8 | 185.90 (21.13) | 179.89 (16.31) | 175.95 (16.26)
Fourclass | 862 | 2 | 181.69 (22.13) | 179.30 (14.25) | 178.67 (19.24)
German | 1000 | 24 | 242.21 (18.35) | 219.66 (16.22) | 237.20 (15.78)
Splice | 1000 | 60 | 179.80 (24.22) | 151.69 (18.02) | 183.54 (21.27)
SVM Guide | 300 | 10 | 175.70 (15.55) | 170.16 (9.99) | 179.76 (14.76)
DVowel | 528 | 10 | 175.16 (13.78) | 175.74 (9.37) | 175.50 (7.38)
(a) γ → 0 (b) γ = 0.1 (c) γ = 1 (d) γ = 5 (e) γ = 10 (f) γ → ∞
Figure 5: Joint quantile regression examples on the BMD data [21] for six different γs.

6 Conclusions

In this paper, we introduced the parametric-task learning (PTL) approach, which can systematically handle infinitely many tasks parameterized by a continuous parameter. We illustrated the usefulness of this approach by providing three examples that can be naturally formulated as PTL. We believe that many other practical problems fall into this PTL framework.

Acknowledgments

The authors thank the reviewers for fruitful comments. IT, MS, and SN gratefully acknowledge support from MEXT Kakenhi 23700165, the JST CREST Program, and MEXT Kakenhi 23120004, respectively.

References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In Advances in Neural Information Processing Systems, volume 19, pages 41–48, 2007.
[2] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning.
In Advances in Neural Information Processing Systems, volume 20, pages 25–32, 2008.
[3] L. Cao and F. Tay. Support vector machine with adaptive parameters in financial time series forecasting. IEEE Transactions on Neural Networks, 14(6):1506–1518, 2003.
[4] F. R. Bach, D. Heckerman, and E. Horvitz. Considering cost asymmetry in learning classifiers. Journal of Machine Learning Research, 7:1713–1741, 2006.
[5] R. Koenker. Quantile Regression. Cambridge University Press, 2005.
[6] K. Ritter. On parametric linear and quadratic programming problems. Mathematical Programming: Proceedings of the International Congress on Mathematical Programming, pages 307–335, 1984.
[7] E. L. Allgower and K. Georg. Continuation and path following. Acta Numerica, 2:1–63, 1993.
[8] T. Gal. Postoptimal Analysis, Parametric Programming, and Related Topics. Walter de Gruyter, 1995.
[9] M. J. Best. An algorithm for the solution of the parametric quadratic programming problem. Applied Mathematics and Parallel Computing, pages 57–76, 1996.
[10] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the American Control Conference, volume 6, pages 4734–4739, 2001.
[11] B. A. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 47:349–363, 2005.
[12] G. Obozinski, B. Taskar, and M. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231–252, 2010.
[13] M. R. Osborne, B. Presnell, and B. A. Turlach. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis, 20:389–404, 2000.
[14] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[15] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5:1391–1415, 2004.
[16] Y.
Lin, Y. Lee, and G. Wahba. Support vector machines for classification in nonstandard situations. Machine Learning, 46:191–202, 2002.
[17] M. A. Davenport, R. G. Baraniuk, and C. D. Scott. Tuning support vector machines for minimax and Neyman-Pearson classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
[18] G. Lee and C. Scott. Nested support vector machines. IEEE Transactions on Signal Processing, 58(3):1648–1660, 2010.
[19] R. Koenker. Quantile Regression. Cambridge University Press, 2005.
[20] I. Takeuchi, Q. V. Le, T. Sears, and A. J. Smola. Nonparametric quantile estimation. Journal of Machine Learning Research, 7:1231–1264, 2006.
[21] L. K. Bachrach, T. Hastie, M. C. Wang, B. Narasimhan, and R. Marcus. Bone mineral acquisition in healthy Asian, Hispanic, Black, and Caucasian youth: a longitudinal study. The Journal of Clinical Endocrinology and Metabolism, 84:4702–4712, 1999.
[22] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
Generalized Denoising Auto-Encoders as Generative Models

Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent
Département d'informatique et recherche opérationnelle, Université de Montréal

Abstract

Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or when using other forms of corruption process and reconstruction errors. Another issue is that the mathematical justification is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).

1 Introduction

Auto-encoders learn an encoder function from input to representation and a decoder function back from representation to input space, such that the reconstruction (composition of encoder and decoder) is good for training examples. Regularized auto-encoders also involve some form of regularization that prevents the auto-encoder from simply learning the identity function, so that reconstruction error will be low at training examples (and hopefully at test examples) but high in general.
Different variants of auto-encoders and sparse coding have been, along with RBMs, among the most successful building blocks in recent research in deep learning (Bengio et al., 2013b). Whereas the usefulness of auto-encoder variants as feature learners for supervised learning can directly be assessed by performing supervised learning experiments with unsupervised pre-training, what has remained until recently rather unclear is the interpretation of these algorithms in the context of pure unsupervised learning, as devices to capture the salient structure of the input data distribution. Whereas the answer is clear for RBMs, it is less obvious for regularized auto-encoders. Do they completely characterize the input distribution or only some aspect of it? For example, clustering algorithms such as k-means only capture the modes of the distribution, while manifold learning algorithms characterize the low-dimensional regions where the density concentrates. Some of the first ideas about the probabilistic interpretation of auto-encoders were proposed by Ranzato et al. (2008): they were viewed as approximating an energy function through the reconstruction error, i.e., being trained to have low reconstruction error at the training examples and high reconstruction error elsewhere (through the regularizer, e.g., sparsity or otherwise, which prevents the auto-encoder from learning the identity function). An important breakthrough then came, yielding a first formal probabilistic interpretation of regularized auto-encoders as models of the input distribution, with the work of Vincent (2011). 
This work showed that some denoising auto-encoders (DAEs) correspond to a Gaussian RBM and that minimizing the denoising reconstruction error (as a squared error) estimates the energy function through a regularized form of score matching, with the regularization disappearing as the amount of corruption noise goes to 0, and then converging to the same solution as score matching (Hyvärinen, 2005). This connection and its generalization to other energy functions, giving rise to the general denoising score matching training criterion, is discussed in several other papers (Kingma and LeCun, 2010; Swersky et al., 2011; Alain and Bengio, 2013). Another breakthrough has been the development of an empirically successful sampling algorithm for contractive auto-encoders (Rifai et al., 2012), which basically involves composing encoding, decoding, and noise addition steps. This algorithm is motivated by the observation that the Jacobian matrix (of derivatives) of the encoding function provides an estimator of a local Gaussian approximation of the density, i.e., the leading singular vectors of that matrix span the tangent plane of the manifold near which the data density concentrates. However, a formal justification for this algorithm remains an open problem. The last step in this development (Alain and Bengio, 2013) generalized the result from Vincent (2011) by showing that when a DAE (or a contractive auto-encoder with the contraction on the whole encode/decode reconstruction function) is trained with small Gaussian corruption and squared error loss, it estimates the score (derivative of the log-density) of the underlying data-generating distribution, which is proportional to the difference between reconstruction and input.
This result does not depend on the parametrization of the auto-encoder, but suffers from the following limitations: it applies to one kind of corruption (Gaussian), only to continuous-valued inputs, only for one kind of loss (squared error), and it becomes valid only in the limit of small noise (even though in practice, best results are obtained with large noise levels, comparable to the range of the input). What we propose here is a different probabilistic interpretation of DAEs, which is valid for any data type, any corruption process (so long as it has broad enough support), and any reconstruction loss (so long as we can view it as a log-likelihood). The basic idea is that if we corrupt observed random variable X into ˜X using conditional distribution C( ˜X|X), we are really training the DAE to estimate the reverse conditional P(X| ˜X). Combining this estimator with the known C( ˜X|X), we show that we can recover a consistent estimator of P(X) through a Markov chain that alternates between sampling from P(X| ˜X) and sampling from C( ˜X|X), i.e., encode/decode, sample from the reconstruction distribution model P(X| ˜X), apply the stochastic corruption procedure C( ˜X|X), and iterate. This theoretical result is validated through experiments on artificial data in a non-parametric setting and experiments on real data in a parametric setting (with neural net DAEs). We find that we can improve the sampling behavior by using the model itself to define the corruption process, yielding a training procedure that has some surface similarity to the contrastive divergence algorithm (Hinton, 1999; Hinton et al., 2006). Algorithm 1 THE GENERALIZED DENOISING AUTO-ENCODER TRAINING ALGORITHM requires a training set or training distribution D of examples X, a given corruption process C( ˜X|X) from which one can sample, and with which one trains a conditional distribution Pθ(X| ˜X) from which one can sample. 
repeat
• sample training example X ∼ D
• sample corrupted input ˜X ∼ C( ˜X|X)
• use (X, ˜X) as an additional training example towards minimizing the expected value of −log Pθ(X| ˜X), e.g., by a gradient step with respect to θ
until convergence of training (e.g., as measured by early stopping on out-of-sample negative log-likelihood)

2 Generalizing Denoising Auto-Encoders

2.1 Definition and Training

Let P(X) be the data-generating distribution over the observed random variable X. Let C be a given corruption process that stochastically maps an X to a ˜X through the conditional distribution C( ˜X|X). The training data for the generalized denoising auto-encoder is a set of pairs (X, ˜X) with X ∼ P(X) and ˜X ∼ C( ˜X|X). The DAE is trained to predict X given ˜X through a learned conditional distribution Pθ(X| ˜X), by choosing this conditional distribution within some family of distributions indexed by θ, not necessarily a neural net. The training procedure for the DAE can generally be formulated as learning to predict X given ˜X by possibly regularized maximum likelihood, i.e., the generalization performance that this training criterion attempts to minimize is

L(θ) = −E[log Pθ(X| ˜X)] (1)

where the expectation is taken over the joint data-generating distribution

P(X, ˜X) = P(X)C( ˜X|X). (2)

2.2 Sampling

We define the following pseudo-Gibbs Markov chain associated with Pθ:

Xt ∼ Pθ(X| ˜Xt−1), ˜Xt ∼ C( ˜X|Xt) (3)

which can be initialized from an arbitrary choice X0. This is the process by which we are going to generate samples Xt according to the model implicitly learned by choosing θ. We define T(Xt|Xt−1) to be the transition operator that defines a conditional distribution for Xt given Xt−1, independently of t, so that the sequence of Xt's forms a homogeneous Markov chain. If the asymptotic marginal distribution of the Xt's exists, we call this distribution π(X), and we show below that it consistently estimates P(X).
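As a toy illustration of Algorithm 1 and the chain in Eq. (3) (our sketch, not the authors' code): for 1-D data, let C( ˜X|X) add Gaussian noise and let Pθ(X| ˜X) = N(a ˜X + b, 1), so that −log Pθ is squared error up to a constant; we fit (a, b) by SGD and then run the pseudo-Gibbs chain. Because the reconstruction variance is crudely fixed at 1, the chain mixes but only roughly matches the data distribution.

```python
import random

def train_generalized_dae(data, steps=5000, lr=0.01, noise=0.5, seed=0):
    """Minimal 1-D sketch of Algorithm 1: each iteration samples X ~ D,
    corrupts it with Gaussian noise (C), and takes one SGD step on the
    squared reconstruction error, i.e., -log N(X; a*X~ + b, 1) up to a constant."""
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    for _ in range(steps):
        x = rng.choice(data)                 # sample training example X ~ D
        x_tilde = x + rng.gauss(0.0, noise)  # sample corrupted input X~ ~ C(X~|X)
        err = (a * x_tilde + b) - x          # gradient of 0.5*(mean - X)^2
        a -= lr * err * x_tilde
        b -= lr * err
    return a, b

def pseudo_gibbs_chain(a, b, noise=0.5, steps=20000, x0=0.0, seed=1):
    """Sketch of Eq. (3): alternate X_t ~ P_theta(X|X~_{t-1}) and
    X~_t ~ C(X~|X_t); the first half of the chain is discarded as burn-in."""
    rng = random.Random(seed)
    x_tilde, samples = x0, []
    for t in range(steps):
        x = rng.gauss(a * x_tilde + b, 1.0)  # reconstruction step
        x_tilde = rng.gauss(x, noise)        # corruption step
        if t >= steps // 2:
            samples.append(x)
    return samples
```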
Note that the above chain is not a proper Gibbs chain in general because there is no guarantee that Pθ(X| ˜Xt−1) and C( ˜X|Xt) are consistent with a unique joint distribution. In that respect, the situation is similar to the sampling procedure for dependency networks (Heckerman et al., 2000), in that the pairs (Xt, ˜Xt−1) are not guaranteed to have the same asymptotic distribution as the pairs (Xt, ˜Xt) as t → ∞. As a follow-up to the results in the next section, it is shown in Bengio et al. (2013a) that dependency networks can be cast into the same framework (which is that of Generative Stochastic Networks), and that if the Markov chain is ergodic, then its stationary distribution will define a joint distribution between the random variables (here that would be X and ˜X), even if the conditionals are not consistent with it.

2.3 Consistency

Normally we only have access to a finite number n of training examples, but as n → ∞, the empirical training distribution approaches the data-generating distribution. To compensate for the finite training set, we generally introduce a (possibly data-dependent) regularizer Ω, and the actual training criterion is a sum over n training examples (X, ˜X),

Ln(θ) = (1/n) ∑_{X∼P(X), ˜X∼C( ˜X|X)} [ λn Ω(θ, X, ˜X) − log Pθ(X| ˜X) ] (4)

where we allow the regularization coefficient λn to be chosen according to the number of training examples n, with λn → 0 as n → ∞. With λn → 0 we get that Ln → L (i.e., it converges to the generalization error, Eq. 1), so consistent estimators of P(X| ˜X) stay consistent. We define θn to be the minimizer of Ln(θ) when given n training examples. We define Tn to be the transition operator Tn(Xt|Xt−1) = ∫ Pθn(Xt| ˜X)C( ˜X|Xt−1) d ˜X associated with θn (the parameter obtained by minimizing the training criterion with n examples), and define πn to be the asymptotic distribution of the Markov chain generated by Tn (if it exists). We also define T to be the operator of the Markov chain associated with the learned model as n → ∞.

Theorem 1.
If Pθn(X| ˜X) is a consistent estimator of the true conditional distribution P(X| ˜X) and Tn defines an ergodic Markov chain, then as the number of examples n → ∞, the asymptotic distribution πn(X) of the generated samples converges to the data-generating distribution P(X).

Proof. If Tn is ergodic, then the Markov chain converges to a πn. Based on our definition of the "true" joint (Eq. 2), one obtains a conditional P(X| ˜X) ∝ P(X)C( ˜X|X). This conditional, along with P( ˜X|X) = C( ˜X|X), can be used to define a proper Gibbs chain where one alternately samples from P( ˜X|X) and from P(X| ˜X). Let T be the corresponding "true" transition operator, which maps the t-th sample X to the (t + 1)-th in that chain. That is, T(Xt|Xt−1) = ∫ P(Xt| ˜X)C( ˜X|Xt−1) d ˜X. T produces P(X) as its asymptotic marginal distribution over X (as we consider more samples from the chain) simply because P(X) is the marginal distribution of the joint P(X)C( ˜X|X) to which the chain converges. By hypothesis we have that Pθn(X| ˜X) → P(X| ˜X) as n → ∞. Note that Tn is defined exactly as T but with P(Xt| ˜X) replaced by Pθn(Xt| ˜X). Hence Tn → T as n → ∞. Now let us convert the convergence of Tn to T into the convergence of πn(X) to P(X). We will exploit the fact that for the 2-norm, a matrix M and unit vector v satisfy ‖Mv‖₂ ≤ sup_{‖x‖₂=1} ‖Mx‖₂ = ‖M‖₂. Consider M = T − Tn and v the principal eigenvector of T, which, by the Perron-Frobenius theorem, corresponds to the asymptotic distribution P(X). Since Tn → T, ‖T − Tn‖₂ → 0. Hence ‖(T − Tn)v‖₂ ≤ ‖T − Tn‖₂ → 0, which implies that Tn v → T v = v, where the last equality comes from the Perron-Frobenius theorem (the leading eigenvalue is 1). Since Tn v → v, v becomes the leading eigenvector of Tn, i.e., the asymptotic distribution of the Markov chain, πn(X), converges to the true data-generating distribution, P(X), as n → ∞.
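The continuity argument at the heart of this proof (a small perturbation of T moves its leading eigenvector, and hence the stationary distribution, only slightly) can be checked numerically on a finite-state chain. This is an illustrative sketch of ours, not part of the paper:

```python
def stationary(T, steps=500):
    """Power-iterate a row-stochastic matrix (given as a list of rows) to its
    stationary distribution, i.e., the Perron-Frobenius leading left eigenvector,
    whose eigenvalue is 1."""
    n = len(T)
    v = [1.0 / n] * n
    for _ in range(n and steps):
        v = [sum(v[i] * T[i][j] for i in range(n)) for j in range(n)]
    return v
```

For T = [[0.7, 0.3], [0.2, 0.8]] the stationary distribution is (0.4, 0.6); perturbing one row slightly moves it only slightly, mirroring ‖T − Tn‖₂ → 0 implying πn → π.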
Hence the asymptotic sampling distribution associated with the Markov chain defined by Tn (i.e., the model) implicitly defines the distribution πn(X) learned by the DAE over the observed variable X. Furthermore, that estimator of P(X) is consistent so long as our (regularized) maximum likelihood estimator of the conditional Pθ(X| ˜X) is also consistent. We now provide sufficient conditions for the ergodicity of the chain operator (i.e., to apply Theorem 1).

Corollary 1. If Pθ(X| ˜X) is a consistent estimator of the true conditional distribution P(X| ˜X), and both the data-generating distribution and the denoising model are contained in and non-zero in a finite-volume region V (i.e., ∀ ˜X, ∀X ∉ V, P(X) = 0 and Pθ(X| ˜X) = 0), and ∀ ˜X, ∀X ∈ V, P(X) > 0, Pθ(X| ˜X) > 0, C( ˜X|X) > 0, and these statements remain true in the limit of n → ∞, then the asymptotic distribution πn(X) of the generated samples converges to the data-generating distribution P(X).

Proof. To obtain the existence of a stationary distribution, it is sufficient to have irreducibility (every value reachable from every other value), aperiodicity (no cycle such that only paths through the cycle allow a return to some value), and recurrence (probability 1 of returning eventually). These conditions can be generalized to the continuous case, where we obtain ergodic Harris chains rather than ergodic Markov chains. If Pθ(X| ˜X) > 0 and C( ˜X|X) > 0 (for X ∈ V), then Tn(Xt|Xt−1) > 0 as well, because

Tn(Xt|Xt−1) = ∫ Pθ(Xt| ˜X)C( ˜X|Xt−1) d ˜X.

This positivity of the transition operator guarantees that one can jump from any point in V to any other point in one step, thus yielding irreducibility and aperiodicity. To obtain recurrence (preventing the chain from diverging to infinity), we rely on the assumption that the domain V is bounded. Note that although Tn(Xt|Xt−1) > 0 could be true for any finite n, we need this condition to hold for n → ∞ as well, to obtain the consistency result of Theorem 1.
By assuming this positivity (as in a Boltzmann distribution) holds for the data-generating distribution, we make sure that πn does not converge to a distribution which puts zeros anywhere in V. Having satisfied all the conditions for the existence of a stationary distribution for Tn as n → ∞, we can apply Theorem 1 and obtain its conclusion.

Note how these conditions take care of the various troubling cases one could think of. First, we avoid the case where there is no corruption (which would yield a wrong estimation, with the DAE simply learning a Dirac delta at its input). Second, we avoid the case where the chain wanders to infinity by assuming a finite volume where the model and data live, a real concern in the continuous case. If it became a real issue, we could perform rejection sampling to make sure that P(X| ˜X) produces X ∈ V.

2.4 Locality of the Corruption and Energy Function

If we believe that P(X| ˜X) is well estimated for all (X, ˜X) pairs, i.e., that it is approximately consistent with C( ˜X|X), then we get as many estimators of the energy function as we want, by picking a particular value of ˜X. Let us define the notation P(·) to denote the probability of the joint, marginals, or conditionals over the pairs (Xt, ˜Xt−1) that are produced by the model's Markov chain T as t → ∞. So P(X) = π(X) is the asymptotic distribution of the Markov chain T, and P( ˜X) the marginal over the ˜X's in that chain. The above assumption means that P( ˜Xt−1|Xt) ≈ C( ˜Xt−1|Xt) (which is not guaranteed in general, but only asymptotically as P approaches the true P). Then, by Bayes' rule,

P(X) = P(X| ˜X)P( ˜X) / P( ˜X|X) ≈ P(X| ˜X)P( ˜X) / C( ˜X|X) ∝ P(X| ˜X) / C( ˜X|X),

so that we can get an estimated energy function from any given choice of ˜X through

energy(X) ≈ −log P(X| ˜X) + log C( ˜X|X),

where one should note that the intractable partition function depends on the chosen value of ˜X. How much can we trust that estimator and how should ˜X be chosen?
First note that P(X| ˜X) has only been trained for pairs (X, ˜X) for which ˜X is relatively close to X (assuming that the corruption indeed generally changes X into some neighborhood of it). Hence, although in theory (with an infinite amount of data and capacity) the above estimator should be good, in practice it might be poor when X is far from ˜X. So if we pick a particular ˜X, the estimated energy might be good for X in the neighborhood of ˜X but poor elsewhere. What we could do, though, is use a different approximate energy function in different regions of the input space. Hence the above estimator gives us a way to compare the probabilities of nearby points X1 and X2 (through their difference in energy), picking for example the midpoint ˜X = (X1 + X2)/2. One could also imagine that if X1 and XN are far apart, we could chart a path between X1 and XN with intermediate points Xk, use an estimator of the relative energies between neighbors Xk, Xk+1, add them up, and obtain an estimator of the relative energy between X1 and XN.

Figure 1: Although P(X) may be complex and multi-modal, P(X| ˜X) is often simple and approximately unimodal (e.g., multivariate Gaussian, pink oval) for most values of ˜X when C( ˜X|X) is a local corruption. P(X) can be seen as an infinite mixture of these local distributions (weighted by P( ˜X)).

This brings up an interesting point. If we could always obtain a good estimator P(X| ˜X) for any ˜X, we could just train the model with C( ˜X|X) = C( ˜X), i.e., with an unconditional noise process that ignores X. In that case, the estimator P(X| ˜X) would directly equal P(X), since ˜X and X are actually sampled independently in its "denoising" training data. We would have gained nothing over directly training any probabilistic model of the observed X's.
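The quality of the energy estimator above can be checked in a fully tractable case (a sketch of ours, not from the paper): for X ∼ N(0, σ²) and Gaussian corruption of variance ν², the exact posterior P(X| ˜X) is Gaussian, and −log P(X| ˜X) + log C( ˜X|X) recovers the true energy x²/(2σ²) up to an X-independent constant, for any fixed ˜X.

```python
import math

def estimated_energy(x, x_tilde, sigma2, nu2):
    """For P(X) = N(0, sigma2) and C(X~|X) = N(X~; X, nu2), the exact posterior is
    P(X|X~) = N(a*X~, s2) with a = sigma2/(sigma2 + nu2) and
    s2 = sigma2*nu2/(sigma2 + nu2). Returns -log P(x|x_tilde) + log C(x_tilde|x),
    which equals x^2/(2*sigma2) plus a constant depending on x_tilde but not on x."""
    a = sigma2 / (sigma2 + nu2)
    s2 = sigma2 * nu2 / (sigma2 + nu2)
    neg_log_p = (x - a * x_tilde) ** 2 / (2 * s2) + 0.5 * math.log(2 * math.pi * s2)
    log_c = -(x_tilde - x) ** 2 / (2 * nu2) - 0.5 * math.log(2 * math.pi * nu2)
    return neg_log_p + log_c
```

Here the x² coefficient of the sum is 1/(2s²) − 1/(2ν²) = 1/(2σ²) and the cross term in x·x̃ cancels exactly, which is why the energy difference between two points does not depend on the chosen ˜X.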
The gain we expect from using the denoising framework is that if ˜X is a local perturbation of X, then the true P(X| ˜X) can be well approximated by a much simpler distribution than P(X). See Figure 1 for a visual explanation: in the limit of very small perturbations, one could even assume that P(X| ˜X) can be well approximated by a simple unimodal distribution such as the Gaussian (for continuous data) or factorized binomial (for discrete binary data) commonly used in DAEs as the reconstruction probability function (conditioned on ˜X). This idea is already behind the non-local manifold Parzen windows (Bengio et al., 2006a) and non-local manifold tangent learning (Bengio et al., 2006b) algorithms: the local density around a point ˜X can be approximated by a multivariate Gaussian whose covariance matrix has leading eigenvectors that span the local tangent of the manifold near which the data concentrate (if they do). The idea of a locally Gaussian approximation of a density with a manifold structure is also exploited in the more recent work on the contractive auto-encoder (Rifai et al., 2011) and associated sampling procedures (Rifai et al., 2012). Finally, strong theoretical evidence in favor of this idea comes from the result of Alain and Bengio (2013): when the amount of corruption noise converges to 0 and the input variables have a smooth continuous density, then a unimodal Gaussian reconstruction density suffices to fully capture the joint distribution. Hence, although P(X| ˜X) encapsulates all information about P(X) (assuming C is given), it will generally have far fewer non-negligible modes, making it easier to approximate. This can be seen analytically by considering the case where P(X) is a mixture of many Gaussians and the corruption is a local Gaussian: P(X| ˜X) remains a Gaussian mixture, but one for which most of the modes have become negligible (Alain and Bengio, 2013). We return to this in Section 3, suggesting that in order to avoid spurious modes, it is better to have non-infinitesimal corruption, allowing faster mixing and a successful burn-in not pulled by spurious modes far from the data.

Figure 2: Walkback samples get attracted by spurious modes and contribute to removing them. Segment of data manifold in violet and example walkback path in red dotted line, starting on the manifold and going towards a spurious attractor. The vector field represents expected moves of the chain, for a unimodal P(X| ˜X), with arrows from ˜X to X.

3 Reducing the Spurious Modes with Walkback Training

Sampling in high-dimensional spaces (as in the experiments below) using a simple local corruption process (such as Gaussian or salt-and-pepper noise) suggests that if the corruption is too local, the DAE's behavior far from the training examples can create spurious modes in the regions insufficiently visited during training. More training iterations or increasing the amount of corruption noise helps to substantially alleviate that problem, but we discovered an even bigger boost by training the DAE Markov chain to walk back towards the training examples (see Figure 2). We exploit knowledge of the currently learned model P(X| ˜X) to define the corruption, so as to pick values of ˜X that would be obtained by following the generative chain: wherever the model would go if we sampled using the generative Markov chain starting at a training example X, we consider ˜X to be a kind of "negative example" from which the auto-encoder should move away (and towards X). The spirit of this procedure is thus very similar to the CD-k (Contrastive Divergence with k MCMC steps) procedure proposed to train RBMs (Hinton, 1999; Hinton et al., 2006). More precisely, the modified corruption process ˜C we propose is the following, based on the original corruption process C.
We use it in a version of the training algorithm called walkback, where we replace the corruption process C of Algorithm 1 by the walkback process ˜C of Algorithm 2. This also provides extra training examples (taking advantage of the ˜X samples generated along the walk away from X). It is called walkback because it forces the DAE to learn to walk back from the random walk it generates, towards the X’s in the training set.

Algorithm 2: THE WALKBACK ALGORITHM is based on the walkback corruption process ˜C( ˜X|X), defined below in terms of a generic original corruption process C( ˜X|X) and the current model’s reconstruction conditional distribution P(X| ˜X). For each training example X, it provides a sequence of additional training examples (X, ˜X∗) for the DAE. It has a hyper-parameter that is a geometric distribution parameter 0 < p < 1 controlling the length of these walks away from X, with p = 0.5 by default. Training by Algorithm 1 is the same, but using all ˜X∗ in the returned list L to form the pairs (X, ˜X∗) as training examples instead of just (X, ˜X).

1: X∗ ← X, L ← [ ]
2: Sample ˜X∗ ∼ C( ˜X|X∗)
3: Sample u ∼ Uniform(0, 1)
4: if u > p then
5:   Append ˜X∗ to L and return L
6: If during training, append ˜X∗ to L, so (X, ˜X∗) will be an additional training example.
7: Sample X∗ ∼ P(X| ˜X∗)
8: goto 2

Proposition 1. Let P(X) be the implicitly defined asymptotic distribution of the Markov chain alternating sampling from P(X| ˜X) and C( ˜X|X), where C is the original local corruption process. Under the assumptions of Corollary 1, minimizing the training criterion of the walkback training algorithm for generalized DAEs (combining Algorithms 1 and 2) produces a P(X) that is a consistent estimator of the data-generating distribution P(X).

Proof. Consider that during training, we produce a sequence of estimators Pk(X| ˜X), where Pk corresponds to the k-th training iteration (modifying the parameters after each iteration).
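As a sketch, the walkback corruption process can be written in a few lines of Python; here `corrupt` and `reconstruct` are stand-ins for C( ˜X|X∗) and the current model's P(X| ˜X∗), and the toy binary instantiation at the end is our own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def walkback_corrupt(x, corrupt, reconstruct, p=0.5, rng=rng):
    """Walkback corruption process C~(X~|X): alternate the original
    corruption C and the model's reconstruction P(X|X~), stopping after a
    geometric number of steps, and collect every corrupted X~*."""
    L = []
    x_star = np.array(x, copy=True)
    while True:
        x_tilde = corrupt(x_star)        # step 2: X~* ~ C(. | X*)
        L.append(x_tilde)                # steps 5-6: append X~* to L
        if rng.random() > p:             # steps 3-5: return with prob 1 - p
            return L
        x_star = reconstruct(x_tilde)    # step 7: X* ~ P(X | X~*)

# Toy instantiation on binary vectors: salt-and-pepper corruption and a
# thresholding "model" standing in for the learned P(X|X~).
def corrupt(x):
    flip = rng.random(x.shape) < 0.3
    return np.where(flip, rng.integers(0, 2, x.shape), x)

def reconstruct(x_tilde):
    return (x_tilde > 0.5).astype(int)

walk = walkback_corrupt(np.ones(8, dtype=int), corrupt, reconstruct)
```

Each element of the returned list is paired with the clean X as an extra training example, exactly as the algorithm's prose describes.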
With the walkback algorithm, Pk−1 is used to obtain the corrupted samples ˜X from which the next model Pk is produced. If training converges, Pk ≈ Pk+1 = P, and we can then consider the whole corruption process ˜C fixed. By Corollary 1, the Markov chain obtained by alternating samples from P(X| ˜X) and samples from ˜C( ˜X|X) converges to an asymptotic distribution P(X) which estimates the underlying data-generating distribution P(X). The walkback corruption ˜C( ˜X|X) corresponds to a few steps alternating sampling from C( ˜X|X) (the fixed local corruption) and sampling from P(X| ˜X). Hence the overall sequence when using ˜C can be seen as a Markov chain obtained by alternately sampling from C( ˜X|X) and from P(X| ˜X), just as it was when using merely C. Hence, once the model is trained with walkback, one can sample from it using the corruption C( ˜X|X). A consequence is that the walkback training algorithm estimates the same distribution as the original denoising algorithm, but may do so more efficiently (as we observe in the experiments), by exploring the space of corruptions in a way that spends more time where it most helps the model.

4 Experimental Validation

Non-parametric case. The mathematical results presented here apply to any denoising training criterion where the reconstruction loss can be interpreted as a negative log-likelihood. This remains true whether or not the denoising machine P(X| ˜X) is parametrized as the composition of an encoder and decoder. This is also true of the asymptotic estimation results of Alain and Bengio (2013). We experimentally validate the above theorems in a case where the asymptotic limit (of enough data and enough capacity) can be reached, i.e., in a low-dimensional non-parametric setting. Fig. 3 shows the distribution recovered by the Markov chain for discrete data with only 10 different values. The conditional P(X| ˜X) was estimated by multinomial models and maximum likelihood (counting) from 5000 training examples.
5000 samples were generated from the chain to estimate the asymptotic distribution πn(X). For continuous data, Figure 3 also shows the result of 5000 generated samples and 500 original training examples with X ∈ R10, with scatter plots of pairs of dimensions. The estimator is also non-parametric (a Parzen density estimator of P(X| ˜X)).

Figure 3: Top left: histogram of a data-generating distribution (true, blue), the empirical distribution (red), and the estimated distribution using a denoising maximum likelihood estimator. Other figures: pairs of variables (out of 10) showing the training samples and the model-generated samples.

MNIST digits. We trained a DAE on the binarized MNIST data (thresholding at 0.5). A Theano1 (Bergstra et al., 2010) implementation is available2. The 784-2000-784 auto-encoder is trained for 200 epochs on the 50000 training examples with salt-and-pepper noise (probability 0.5 of corrupting each bit, setting it to 1 or 0 with probability 0.5). It has 2000 tanh hidden units and is trained by minimizing cross-entropy loss, i.e., maximum likelihood on a factorized Bernoulli reconstruction distribution. With walkback training, a chain of 5 steps was used to generate 5 corrupted examples for each training example. Figure 4 shows samples generated with and without walkback. The quality of the samples was also estimated quantitatively by measuring the log-likelihood of the test set under a non-parametric density estimator $\hat{P}(x) = \mathrm{mean}_{\tilde{X}}\, P(x \mid \tilde{X})$ constructed from 10000 consecutively generated samples ( ˜X from the Markov chain). It can be shown (Bengio and Yao, 2013) that $E[\hat{P}(x)]$ over the samples is a lower bound (i.e., a conservative estimate) of the true (implicit) model density P(x). The test set log-likelihood bound was not used to select among model architectures, but visual inspection of generated samples did guide the preliminary search reported here.
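The discrete non-parametric experiment can be reproduced in miniature: estimate P(X| ˜X) by counting over corrupted pairs, then run the chain alternating corruption and reconstruction and compare its empirical distribution to the truth. The sizes below (10 values, 5000 training pairs) follow the text, but the specific local corruption process is an illustrative choice of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10
true_p = rng.dirichlet(np.ones(K))       # data-generating distribution P(X)

def corrupt(x):
    """Local corruption: step to a neighboring value (mod K) half the time."""
    if rng.random() < 0.5:
        return (x + rng.choice([-1, 1])) % K
    return x

# Estimate P(X | X~) by maximum likelihood (counting) over (X, X~) pairs.
counts = np.zeros((K, K))                # counts[x_tilde, x]
for _ in range(5000):
    x = rng.choice(K, p=true_p)
    counts[corrupt(x), x] += 1
cond = (counts + 1e-9) / (counts + 1e-9).sum(axis=1, keepdims=True)

# Run the Markov chain alternating C(X~|X) and the learned P(X|X~).
x, samples = 0, []
for _ in range(20000):
    x = rng.choice(K, p=cond[corrupt(x)])
    samples.append(x)
est_p = np.bincount(samples, minlength=K) / len(samples)
tv = 0.5 * np.abs(est_p - true_p).sum()  # small: chain recovers P(X)
```

With enough data and chain steps, the total variation distance between the chain's empirical distribution and the true one becomes small, which is the content of the consistency theorems being validated.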
Optimization hyper-parameters (learning rate, momentum, and learning rate reduction schedule) were selected based on the training objective. We compare against a state-of-the-art RBM (Cho et al., 2013) with an AIS log-likelihood estimate of -64.1 (AIS estimates tend to be optimistic). We also drew samples from the RBM and applied the same estimator (using the mean of the RBM’s P(x|h) with h sampled from the Gibbs chain), and obtained a non-parametric log-likelihood bound of -233, skipping 100 MCMC steps between samples (otherwise numbers are very poor for the RBM, which does not mix at all). The DAE log-likelihood bound with and without walkback is respectively -116 and -142, confirming the visual inspection suggesting that the walkback algorithm produces fewer spurious samples. However, the RBM samples can be improved by a spatial blur. By tuning the amount of blur (the spread of the Gaussian convolution), we obtained a bound of -112 for the RBM. Blurring did not help the auto-encoder.

Figure 4: Successive samples generated by the Markov chains associated with the trained DAEs according to the plain sampling scheme (left) and the walkback sampling scheme (right). There are fewer “spurious” samples with the walkback algorithm.

5 Conclusion and Future Work

We have proven that training a model to denoise is a way to implicitly estimate the underlying data-generating process, and that a simple Markov chain that alternates sampling from the denoising model and from the corruption process converges to that estimator. This provides a means for generating data from any DAE (if the corruption is not degenerate, more precisely, if the above chain converges). We have validated those results empirically, both in a non-parametric setting and with real data. This study has also suggested a variant of the training procedure, walkback training, which seems to converge faster to the same target distribution.
One of the insights arising from the theoretical results presented here is that in order to reach the asymptotic limit of fully capturing the data distribution P(X), it may be necessary for the model’s P(X| ˜X) to have the ability to represent multi-modal distributions over X (given ˜X).

Acknowledgments

The authors would like to acknowledge input from A. Courville, I. Goodfellow, R. Memisevic, K. Cho, as well as funding from NSERC, CIFAR (YB is a CIFAR Fellow), and Canada Research Chairs.

1 http://deeplearning.net/software/theano/
2 git@github.com:yaoli/GSN.git

References

Alain, G. and Bengio, Y. (2013). What regularized auto-encoders learn from the data generating distribution. In International Conference on Learning Representations (ICLR’2013).
Bengio, Y. and Yao, L. (2013). Bounding the test log-likelihood of generative models. Technical report, U. Montreal, arXiv.
Bengio, Y., Larochelle, H., and Vincent, P. (2006a). Non-local manifold Parzen windows. In NIPS’05, pages 115–122. MIT Press.
Bengio, Y., Monperrus, M., and Larochelle, H. (2006b). Nonlocal estimation of manifold structure. Neural Computation, 18(10).
Bengio, Y., Thibodeau-Laufer, E., and Yosinski, J. (2013a). Deep generative stochastic networks trainable by backprop. Technical Report arXiv:1306.1091, Universite de Montreal.
Bengio, Y., Courville, A., and Vincent, P. (2013b). Unsupervised feature learning and deep learning: A review and new perspectives. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI).
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy).
Cho, K., Raiko, T., and Ilin, A. (2013). Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 25(3), 805–831.
Heckerman, D., Chickering, D. M., Meek, C., Rounthwaite, R., and Kadie, C. (2000). Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1, 49–75.
Hinton, G. E. (1999). Products of experts. In ICANN’1999.
Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
Hyvärinen, A. (2005). Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6, 695–709.
Kingma, D. and LeCun, Y. (2010). Regularized estimation of image statistics by score matching. In Advances in Neural Information Processing Systems 23, pages 1126–1134.
Ranzato, M., Boureau, Y.-L., and LeCun, Y. (2008). Sparse feature learning for deep belief networks. In NIPS’07, pages 1185–1192, Cambridge, MA. MIT Press.
Rifai, S., Vincent, P., Muller, X., Glorot, X., and Bengio, Y. (2011). Contractive auto-encoders: Explicit invariance during feature extraction. In ICML’2011.
Rifai, S., Bengio, Y., Dauphin, Y., and Vincent, P. (2012). A generative process for sampling contractive auto-encoders. In ICML’2012.
Swersky, K., Ranzato, M., Buchman, D., Marlin, B., and de Freitas, N. (2011). On autoencoders and score matching for energy based models. In ICML’2011. ACM.
Vincent, P. (2011). A connection between score matching and denoising autoencoders. Neural Computation, 23(7).
Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation

John C. Duchi1, Michael I. Jordan1,2, Martin J. Wainwright1,2
1Department of Electrical Engineering and Computer Science, 2Department of Statistics
University of California, Berkeley
{jduchi,jordan,wainwrig}@eecs.berkeley.edu

Abstract

We provide a detailed study of the estimation of probability distributions—discrete and continuous—in a stringent setting in which data is kept private even from the statistician. We give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental trade-offs between privacy and convergence rate, as well as providing tools to allow movement along the privacy–statistical efficiency continuum. One of the consequences of our results is that Warner’s classical work on randomized response is an optimal way to perform survey sampling while maintaining privacy of the respondents.

1 Introduction

The original motivation for providing privacy in statistical problems, first discussed by Warner [23], was that “for reasons of modesty, fear of being thought bigoted, or merely a reluctance to confide secrets to strangers,” respondents to surveys might prefer to be able to answer certain questions non-truthfully, or at least without the interviewer knowing their true response. With this motivation, Warner considered the problem of estimating the fractions of the population belonging to certain strata, which can be viewed as probability estimation within a multinomial model. In this paper, we revisit Warner’s probability estimation problem, doing so within a theoretical framework that allows us to characterize optimal estimation under constraints on privacy. We also apply our theoretical tools to a further probability estimation problem—that of nonparametric density estimation.
In the large body of research on privacy and statistical inference [e.g., 23, 14, 10, 15], a major focus has been on the problem of reducing disclosure risk: the probability that a member of a dataset can be identified given released statistics of the dataset. The literature has stopped short, however, of providing a formal treatment of disclosure risk that would permit decision-theoretic tools to be used in characterizing trade-offs between the utility of achieving privacy and the utility associated with an inferential goal. Recently, a formal treatment of disclosure risk known as “differential privacy” has been proposed and studied in the cryptography, database and theoretical computer science literatures [11, 1]. Differential privacy has strong semantic privacy guarantees that make it a good candidate for declaring a statistical procedure or data collection mechanism private, and it has been the focus of a growing body of recent work [13, 16, 24, 21, 6, 18, 8, 5, 9]. In this paper, we bring together the formal treatment of disclosure risk provided by differential privacy with the tools of minimax decision theory to provide a theoretical treatment of probability estimation under privacy constraints. Just as in classical minimax theory, we are able to provide lower bounds on the convergence rates of any estimator, in our case under a restriction to estimators that guarantee privacy. We complement these results with matching upper bounds that are achievable using computationally efficient algorithms. We thus bring classical notions of privacy, as introduced by Warner [23], into contact with differential privacy and statistical decision theory, obtaining quantitative trade-offs between privacy and statistical efficiency.

1.1 Setting and contributions

Let us develop some basic formalism before describing our main results. We study procedures that receive private views Z1, . . . , Zn ∈ Z of an original set of observations, X1, . . .
, Xn ∈ X, where X is the (known) sample space. In our setting, Zi is drawn conditional on Xi via the channel distribution Qi(Zi | Xi = x); typically we omit the dependence of Qi on i. We focus in this paper on the non-interactive setting (in information-theoretic terms, on memoryless channels), where Qi is chosen prior to seeing the data; see Duchi et al. [9] for more discussion. We assume each of these private views Zi is α-differentially private for the original data Xi. To give a precise definition for this type of privacy, known as “local privacy,” let σ(Z) be the σ-field on Z over which the channel Q is defined. Then Q provides α-local differential privacy if

$\sup\left\{ \frac{Q(S \mid X_i = x)}{Q(S \mid X_i = x')} \;:\; S \in \sigma(\mathcal{Z}),\ x, x' \in \mathcal{X} \right\} \le \exp(\alpha). \qquad (1)$

This formulation of local privacy was first proposed by Evfimievski et al. [13]. The likelihood ratio bound (1) is attractive for many reasons. It means that any individual providing data guarantees his or her own privacy—no further processing or mistakes by a collection agency can compromise one’s data—and the individual has plausible deniability about taking a value x, since any outcome z is nearly as likely to have come from some other initial value x′. The likelihood ratio also controls the error rate in tests for the presence of points x in the data [24]. In the current paper, we study minimax convergence rates when the data provided satisfies the local privacy guarantee (1). Our two main results quantify the penalty that must be paid when local privacy at a level α is provided in multinomial estimation and density estimation problems. At a high level, our first result implies that for estimation of a d-dimensional multinomial probability mass function, the effective sample size of any statistical estimation procedure decreases from n to nα²/d whenever α is a sufficiently small constant.
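Definition (1) can be checked numerically for a concrete channel. The sketch below uses binary randomized response (truthful report with probability e^α/(1 + e^α)), an illustrative choice of ours, and verifies that the worst-case likelihood ratio equals exp(α):

```python
import math
import itertools

def rr_channel(alpha):
    """Binary randomized response: report the truth w.p. e^a / (1 + e^a)."""
    keep = math.exp(alpha) / (1 + math.exp(alpha))
    # Q[z][x] = probability of reporting z given true value x
    return [[keep, 1 - keep], [1 - keep, keep]]

def max_ratio(Q):
    """sup over outputs z and input pairs (x, x') of Q(z|x) / Q(z|x')."""
    r = 0.0
    for z in range(2):
        for x, xp in itertools.product(range(2), repeat=2):
            r = max(r, Q[z][x] / Q[z][xp])
    return r

alpha = 0.5
assert max_ratio(rr_channel(alpha)) <= math.exp(alpha) + 1e-12
```

For singleton events the supremum in (1) reduces to exactly this ratio of point masses, and for this channel the bound is attained with equality.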
A consequence of our results is that Warner’s randomized response procedure [23] enjoys optimal sample complexity; it is interesting to note that even with the recent focus on privacy and statistical inference, the optimal privacy-preserving strategy for problems such as survey collection has been known for almost 50 years. Our second main result, on density estimation, exhibits an interesting departure from standard minimax estimation results. If the density being estimated has β continuous derivatives, then classical results on density estimation [e.g., 26, 25, 22] show that the minimax integrated squared error scales (in the sample size n) as n^{−2β/(2β+1)}. In the locally private case, we show that there is a difference in the polynomial rate of convergence: we obtain a scaling of (α²n)^{−2β/(2β+2)}. We give efficiently implementable algorithms that attain sharp upper bounds as companions to our lower bounds, which in some cases exhibit the necessity of non-trivial sampling strategies to guarantee privacy.

Notation: Given distributions P and Q defined on a space X, each absolutely continuous with respect to a measure µ (with densities p and q), the KL divergence between P and Q is

$D_{\mathrm{kl}}(P\|Q) := \int_{\mathcal{X}} dP \log\frac{dP}{dQ} = \int_{\mathcal{X}} p \log\frac{p}{q}\, d\mu.$

Letting σ(X) denote an appropriate σ-field on X, the total variation distance between P and Q is

$\|P - Q\|_{\mathrm{TV}} := \sup_{S \in \sigma(\mathcal{X})} |P(S) - Q(S)| = \frac{1}{2}\int_{\mathcal{X}} |p(x) - q(x)|\, d\mu(x).$

Let X be distributed according to P and Y | X be distributed according to Q(· | X), and let M = ∫ Q(· | x) dP(x) denote the marginal of Y. The mutual information between X and Y is

$I(X; Y) := E_P\big[D_{\mathrm{kl}}(Q(\cdot \mid X)\,\|\,M(\cdot))\big] = \int D_{\mathrm{kl}}(Q(\cdot \mid X = x)\,\|\,M(\cdot))\, dP(x).$

A random variable Y has the Laplace(α) distribution if its density is p_Y(y) = (α/2) exp(−α|y|). We write a_n ≲ b_n to denote a_n = O(b_n), and a_n ≍ b_n to denote a_n = O(b_n) and b_n = O(a_n). For a convex set C ⊂ R^d, we let Π_C denote the orthogonal projection operator onto C.
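For discrete distributions the integrals above reduce to sums; a minimal sketch of the two divergences (the Pinsker-inequality check at the end is a sanity test of ours, not from the text):

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence D_kl(P||Q) = sum_x p(x) log(p(x)/q(x))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # 0 * log 0 is taken to be 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def tv(p, q):
    """Discrete total variation ||P - Q||_TV = (1/2) sum_x |p(x) - q(x)|."""
    return 0.5 * float(np.abs(np.asarray(p, float) - np.asarray(q, float)).sum())

p, q = [0.5, 0.5], [0.75, 0.25]
# Sanity check via Pinsker's inequality: ||P - Q||_TV^2 <= D_kl(P||Q) / 2.
assert tv(p, q) ** 2 <= kl(p, q) / 2
```

These two quantities are exactly the ones controlled by the information bounds used in the lower-bound arguments later in the paper.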
2 Background and Problem Formulation

In this section, we provide the necessary background on the minimax framework used throughout the paper, more details of which can be found in standard sources [e.g., 17, 25, 26, 22]. We also reference our paper [9] on statistical inference under differential privacy constraints; we restate two theorems from that paper to keep our presentation self-contained.

2.1 Minimax framework

Let P denote a class of distributions on the sample space X, and let θ : P → Θ denote a function defined on P. The range Θ depends on the underlying statistical model; for example, for density estimation, Θ may consist of the set of probability densities defined on [0, 1]. We let ρ denote the semi-metric on the space Θ that we use to measure the error of an estimator for θ, and let Φ : R₊ → R₊ be a non-decreasing function with Φ(0) = 0 (for example, Φ(t) = t²). Recalling that Z is the domain of the private variables Zi, let θ̂ : Z^n → Θ denote an arbitrary estimator for θ. Let Q_α denote the set of conditional (or channel) distributions guaranteeing α-local privacy (1). Looking uniformly over all channels Q ∈ Q_α, we define the central object of interest for this paper, the α-private minimax rate for the family θ(P),

$M_n(\theta(\mathcal{P}), \Phi \circ \rho, \alpha) := \inf_{\hat\theta,\, Q \in \mathcal{Q}_\alpha}\ \sup_{P \in \mathcal{P}} E_{P,Q}\Big[\Phi\big(\rho(\hat\theta(Z_1, \ldots, Z_n), \theta(P))\big)\Big], \qquad (2)$

associated with estimating θ based on (Z1, . . . , Zn). We remark here (see also the discussion in [9]) that the private minimax risk (2) is different from previous work on optimality in differential privacy (e.g., [2, 16, 8]): prior work focuses on accurate estimation of a sample quantity θ(x_{1:n}) based on the sample x_{1:n}, while we provide lower bounds on the error of the population estimator θ(P). Lower bounds on population estimation imply those on sample estimation, so our lower bounds are stronger than most of those in prior work.
A standard route for lower bounding the minimax risk (2) is to reduce the estimation problem to the testing problem of identifying a point θ ∈ Θ from a collection of well-separated points [26, 25]. Given an index set V, the indexed family of distributions {P_ν, ν ∈ V} ⊂ P is a 2δ-packing of Θ if ρ(θ(P_ν), θ(P_ν′)) ≥ 2δ for all ν ≠ ν′ in V. The setup is that of a standard hypothesis testing problem: nature chooses V ∈ V uniformly at random, then data (X1, . . . , Xn) are drawn i.i.d. from P_ν^n, conditioning on V = ν. The problem is to identify the member ν of the packing set V. In this work we have the additional complication that all the statistician observes are the private samples Z1, . . . , Zn. To that end, if we let Q^n(· | x_{1:n}) denote the conditional distribution of Z1, . . . , Zn given that X1 = x1, . . . , Xn = xn, we define the marginal channel M_ν^n via the expression

$M_\nu^n(A) := \int Q^n(A \mid x_1, \ldots, x_n)\, dP_\nu(x_1, \ldots, x_n) \quad \text{for } A \in \sigma(\mathcal{Z}^n). \qquad (3)$

Letting ψ : Z^n → V denote an arbitrary testing procedure, we have the following minimax bound, whose two parts are known as Le Cam’s two-point method [26, 22] and Fano’s inequality [25, 7, 22].

Lemma 1 (Minimax risk bound). For the previously described estimation and testing problems,

$M_n(\theta(\mathcal{P}), \Phi \circ \rho, Q) \ge \Phi(\delta)\, \inf_\psi P(\psi(Z_1, \ldots, Z_n) \ne V), \qquad (4)$

where the infimum is taken over all testing procedures. For a binary test specified by V = {ν, ν′},

$\inf_\psi P(\psi(Z_1, \ldots, Z_n) \ne V) = \frac{1}{2} - \frac{1}{2}\,\big\|M_\nu^n - M_{\nu'}^n\big\|_{\mathrm{TV}}, \qquad (5a)$

and more generally,

$\inf_\psi P(\psi(Z_1, \ldots, Z_n) \ne V) \ge 1 - \frac{I(Z_1, \ldots, Z_n; V) + \log 2}{\log |\mathcal{V}|}. \qquad (5b)$

2.2 Information bounds

The main step in proving minimax lower bounds is to control the divergences involved in the lower bounds (5a) and (5b). We review two results from our work [9] that obtain such bounds as a function of the amount of privacy provided. The second of the results provides a variational upper bound on the mutual information I(Z1, . . . , Zn; V), in that we optimize jointly over subsets S ⊂ X.
To state the proposition, we require a bit of notation: for each i ∈ {1, . . . , n}, let P_{ν,i} be the distribution of Xi conditional on the random packing element V = ν, and let M_ν^n be the marginal distribution (3) induced by passing Xi through Q. Define the mixture distribution $\bar{P}_i = \frac{1}{|\mathcal{V}|}\sum_{\nu \in \mathcal{V}} P_{\nu,i}$. We can then state a proposition summarizing the results we require from Duchi et al. [9]:

Proposition 1 (Information bounds). For any ν, ν′ ∈ V and α ≥ 0,

$D_{\mathrm{kl}}\big(M_\nu^n \,\|\, M_{\nu'}^n\big) \le 4(e^\alpha - 1)^2 \sum_{i=1}^n \big\|P_{\nu,i} - P_{\nu',i}\big\|_{\mathrm{TV}}^2. \qquad (6)$

Additionally, for V chosen uniformly at random from V, we have the variational bound

$I(Z_1, \ldots, Z_n; V) \le \frac{e^\alpha (e^\alpha - e^{-\alpha})^2}{|\mathcal{V}|} \sum_{i=1}^n \sup_{S \in \sigma(\mathcal{X})} \sum_{\nu \in \mathcal{V}} \big(P_{\nu,i}(S) - \bar{P}_i(S)\big)^2. \qquad (7)$

By combining Proposition 1 with Lemma 1, it is possible to derive sharp lower bounds on arbitrary estimation procedures under α-local privacy. In the remainder of the paper, we demonstrate this combination for probability estimation problems; we provide proofs of all results in [9].

3 Multinomial Estimation under Local Privacy

In this section we return to the classical problem of avoiding answer bias in surveys, the original motivation for studying local privacy [23].

3.1 Minimax rates of convergence for multinomial estimation

Let $\Delta_d := \{\theta \in \mathbb{R}^d \mid \theta \ge 0,\ \sum_{j=1}^d \theta_j = 1\}$ denote the probability simplex in R^d. The multinomial estimation problem is defined as follows. Given a vector θ ∈ Δ_d, samples X are drawn i.i.d. from a multinomial with parameters θ, where P_θ(X = j) = θ_j for j ∈ {1, . . . , d}, and the goal is to estimate θ. In one of the earliest evaluations of privacy, Warner [23] studied the Bernoulli variant of this problem and proposed randomized response: for a given survey question, respondents provide a truthful answer with probability p > 1/2 and lie with probability 1 − p. In our setting, we assume the statistician sees α-locally private (1) random variables Zi for the corresponding samples Xi from the multinomial.
In this case, we have the following result, which characterizes the minimax rate of estimation of a multinomial in both mean-squared error E[‖θ̂ − θ‖₂²] and absolute error E[‖θ̂ − θ‖₁]; the latter may be more relevant for probability estimation problems.

Theorem 1. There exist universal constants 0 < c_ℓ ≤ c_u < 5 such that for all α ∈ [0, 1], the minimax rate for multinomial estimation satisfies the bounds

$c_\ell \min\Big\{1,\ \frac{1}{\sqrt{n\alpha^2}},\ \frac{d}{n\alpha^2}\Big\} \le M_n\big(\Delta_d, \|\cdot\|_2^2, \alpha\big) \le c_u \min\Big\{1,\ \frac{d}{n\alpha^2}\Big\}, \qquad (8)$

and

$c_\ell \min\Big\{1,\ \frac{d}{\sqrt{n\alpha^2}}\Big\} \le M_n\big(\Delta_d, \|\cdot\|_1, \alpha\big) \le c_u \min\Big\{1,\ \frac{d}{\sqrt{n\alpha^2}}\Big\}. \qquad (9)$

Theorem 1 shows that providing local privacy can sometimes be quite detrimental to the quality of statistical estimators. Indeed, let us compare this rate to the classical rate with no privacy. Estimating θ via proportions (i.e., maximum likelihood), we have

$E\big[\|\hat\theta - \theta\|_2^2\big] = \sum_{j=1}^d E\big[(\hat\theta_j - \theta_j)^2\big] = \frac{1}{n}\sum_{j=1}^d \theta_j(1 - \theta_j) \le \frac{1}{n}\Big(1 - \frac{1}{d}\Big) < \frac{1}{n}.$

By inequality (8), for suitably large sample sizes n, providing differential privacy at a level α causes a reduction in the effective sample size of n ↦ nα²/d.

3.2 Optimal mechanisms: attainability for multinomial estimation

An interesting consequence of the lower bound (8) is the following fact, which we now demonstrate: Warner’s classical randomized response mechanism [23] (with minor modification) achieves the optimal convergence rate. There are also other relatively simple estimation strategies that achieve the convergence rate d/nα²; the perturbation approach of Dwork et al. [11], where Laplace(α) noise is added to each coordinate of a multinomial sample, is one such strategy. Nonetheless, the ease of use and explainability of randomized response, coupled with our optimality results, provide support for randomized response as a preferred method for private estimation of population probabilities. We now prove that randomized response attains the optimal rate of convergence. There is a bijection between multinomial samples x ∈ {1, . . .
, d} and the d standard basis vectors e1, . . . , ed ∈ R^d, so we abuse notation and represent samples x in either form when designing estimation strategies. In randomized response, we construct the private vector Z ∈ {0, 1}^d from a multinomial observation x ∈ {e1, . . . , ed} by sampling the d coordinates independently via the procedure

$[Z]_j = \begin{cases} x_j & \text{with probability } \dfrac{e^{\alpha/2}}{1 + e^{\alpha/2}}, \\[4pt] 1 - x_j & \text{with probability } \dfrac{1}{1 + e^{\alpha/2}}. \end{cases} \qquad (10)$

We claim that this channel (10) is α-differentially private: indeed, note that for any x, x′ ∈ Δ_d and any vector z ∈ {0, 1}^d we have

$\frac{Q(Z = z \mid x)}{Q(Z = z \mid x')} = \exp\Big(\frac{\alpha}{2}\big(\|z - x'\|_1 - \|z - x\|_1\big)\Big) \in \big[\exp(-\alpha), \exp(\alpha)\big],$

where we used the triangle inequality to assert that | ‖z − x‖₁ − ‖z − x′‖₁ | ≤ ‖x − x′‖₁ ≤ 2. We can compute the expected value and variance of the random variables Z; indeed, by the definition (10),

$E[Z \mid x] = \frac{e^{\alpha/2}}{1 + e^{\alpha/2}}\, x + \frac{1}{1 + e^{\alpha/2}}\,(\mathbf{1} - x) = \frac{e^{\alpha/2} - 1}{e^{\alpha/2} + 1}\, x + \frac{1}{1 + e^{\alpha/2}}\,\mathbf{1}.$

Since the Z are Bernoulli, we obtain the variance bound E[‖Z − E[Z]‖₂²] < d/4 + 1 < d. Recalling the definition of the projection Π_{Δ_d} onto the simplex, we arrive at the natural estimator

$\hat\theta_{\mathrm{part}} := \frac{e^{\alpha/2} + 1}{e^{\alpha/2} - 1}\left(\frac{1}{n}\sum_{i=1}^n Z_i - \frac{1}{1 + e^{\alpha/2}}\,\mathbf{1}\right) \quad \text{and} \quad \hat\theta := \Pi_{\Delta_d}\big(\hat\theta_{\mathrm{part}}\big). \qquad (11)$

The projection of θ̂_part onto the probability simplex can be done in time linear in the dimension d of the problem [3], so the estimator (11) is efficiently computable. Since projections only decrease distance, vectors in the simplex are at most distance √2 apart, and E_θ[θ̂_part] = θ, we find

$E\big[\|\hat\theta - \theta\|_2^2\big] \le \min\Big\{2,\ E\big[\|\hat\theta_{\mathrm{part}} - \theta\|_2^2\big]\Big\} \le \min\left\{2,\ \frac{d}{n}\Big(\frac{e^{\alpha/2} + 1}{e^{\alpha/2} - 1}\Big)^2\right\} \lesssim \min\Big\{1,\ \frac{d}{n\alpha^2}\Big\}.$

A similar argument shows that randomized response is minimax optimal for the ℓ1-loss as well.

4 Density Estimation under Local Privacy

In this section, we turn to a nonparametric statistical problem in which the effects of local differential privacy turn out to be somewhat more severe.
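A sketch of the channel (10) and estimator (11) in Python; the simplex projection uses the standard sorting-based algorithm the text cites as [3], and the dimension, sample size, and α below are arbitrary illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def project_simplex(v):
    """Euclidean projection onto the probability simplex via sorting."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def randomized_response(X, alpha, rng=rng):
    """Channel (10): keep each bit w.p. e^{a/2}/(1+e^{a/2}), flip otherwise."""
    keep = np.exp(alpha / 2) / (1 + np.exp(alpha / 2))
    flip = rng.random(X.shape) >= keep
    return np.where(flip, 1 - X, X)

def estimate(Z, alpha):
    """Debiased estimator (11), projected back onto the simplex."""
    e = np.exp(alpha / 2)
    theta_part = (Z.mean(axis=0) - 1 / (1 + e)) * (e + 1) / (e - 1)
    return project_simplex(theta_part)

d, n, alpha = 5, 20000, 1.0
theta = np.ones(d) / d                         # true multinomial parameters
X = np.eye(d)[rng.choice(d, size=n, p=theta)]  # one-hot samples
Z = randomized_response(X, alpha)
theta_hat = estimate(Z, alpha)                 # close to theta in l1 norm
```

The debiasing step inverts the affine map E[Z | x] computed above, so the estimator is unbiased before projection, and the projection can only reduce its error.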
We show that for the problem of density estimation, instead of just a multiplicative loss in the effective sample size as in the previous section, imposing local differential privacy leads to a different convergence rate. In more detail, we consider estimation of probability densities f : R → R₊ with ∫ f(x) dx = 1 and f ≥ 0, defined on the real line, focusing on a standard family of densities of varying smoothness [e.g., 22]. Throughout this section, we let β ∈ N denote a fixed positive integer. Roughly, we consider densities that have bounded βth derivative, and we study density estimation using the squared L²-norm ‖f‖₂² := ∫ f²(x) dx as our metric; in formal terms, we impose these constraints in terms of Sobolev classes (e.g., [22, 12]). Let the countable collection of functions {φ_j}_{j=1}^∞ be an orthonormal basis for L²([0, 1]). Then any function f ∈ L²([0, 1]) can be expanded as a sum $\sum_{j=1}^\infty \theta_j \varphi_j$ in terms of the basis coefficients θ_j := ∫ f(x)φ_j(x) dx, where {θ_j}_{j=1}^∞ ∈ ℓ²(N). The Sobolev space F_β[C] is obtained by enforcing a particular decay rate on the coefficients θ:

Definition 1 (Elliptical Sobolev space). For a given orthonormal basis {φ_j} of L²([0, 1]), smoothness parameter β > 1/2 and radius C, the function class F_β[C] is given by

$\mathcal{F}_\beta[C] := \Big\{ f \in L^2([0, 1]) \;\Big|\; f = \sum_{j=1}^\infty \theta_j \varphi_j \ \text{such that}\ \sum_{j=1}^\infty j^{2\beta} \theta_j^2 \le C^2 \Big\}.$

If we choose the trigonometric basis as our orthonormal basis, then membership in the class F_β[C] corresponds to certain smoothness constraints on the derivatives of f. More precisely, for j ∈ N, consider the orthonormal basis for L²([0, 1]) of trigonometric functions:

$\varphi_0(t) = 1, \quad \varphi_{2j}(t) = \sqrt{2}\cos(2\pi j t), \quad \varphi_{2j+1}(t) = \sqrt{2}\sin(2\pi j t). \qquad (12)$

Now consider a β-times almost everywhere differentiable function f for which |f^{(β)}(x)| ≤ C for almost every x ∈ [0, 1], satisfying f^{(k)}(0) = f^{(k)}(1) for k ≤ β − 1. Uniformly for such f, there is a universal constant c such that f ∈ F_β[cC] [22, Lemma A.3].
Thus, Definition 1 (essentially) captures densities that have Lipschitz-continuous (β − 1)th derivative. In the sequel, we write F_β when the bound C in F_β[C] is O(1). It is well known [26, 25, 22] that the minimax risk for non-private estimation of densities in the class F_β scales as

M_n(F_β, ∥·∥_2^2, ∞) ≍ n^{−2β/(2β+1)}.   (13)

Our main result is to demonstrate that the classical rate (13) is no longer attainable when we require α-local differential privacy. In Sections 4.2 and 4.3, we show how to achieve the (new) optimal rate using histogram and orthogonal series estimators.

4.1 Lower bounds on density estimation

We begin by giving our main lower bound on the minimax rate of estimation of densities when the data are kept differentially private, providing the proof in the longer paper [9].

Theorem 2. Consider the class of densities F_β defined using the trigonometric basis (12). For some α ∈ [0, 1], suppose the Z_i are α-locally private (1) for the samples X_i ∈ [0, 1]. There exists a constant c_β > 0, dependent only on β, such that

M_n(F_β, ∥·∥_2^2, α) ≥ c_β (nα^2)^{−2β/(2β+2)}.   (14)

In comparison with the classical minimax rate (13), the lower bound (14) involves a different polynomial exponent: privacy reduces the exponent from 2β/(2β+1) to 2β/(2β+2). For example, for Lipschitz densities we have β = 1, and the rate degrades from n^{−2/3} to n^{−1/2}. Interestingly, no estimator based on Laplace (or exponential) perturbation of the samples X_i themselves can attain the rate of convergence (14). In their study of the deconvolution problem, Carroll and Hall [4] show that if samples X_i are perturbed by additive noise W, where the characteristic function φ_W of the additive noise has tails behaving as |φ_W(t)| = O(|t|^{−a}) for some a > 0, then no estimator can deconvolve the samples X + W and attain a rate of convergence better than n^{−2β/(2β+2a+1)}.
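The exponent gap between (13) and (14) can be made concrete with two one-line functions (the names are ours):

```python
def classical_rate(n, beta):
    """Non-private minimax rate (13): n^{-2 beta / (2 beta + 1)}."""
    return n ** (-2 * beta / (2 * beta + 1))

def private_rate(n, alpha, beta):
    """Locally private minimax rate (14): (n alpha^2)^{-2 beta / (2 beta + 2)}."""
    return (n * alpha ** 2) ** (-2 * beta / (2 * beta + 2))
```

For β = 1, α = 1, and n = 10^6, the classical rate is 10^{−4} while the private rate is 10^{−3}: at this sample size, local privacy costs an order of magnitude in risk, not merely a constant factor.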
Since the Laplace distribution’s characteristic function has tails decaying as t^{−2}, no estimator based on perturbing the samples directly can attain a rate of convergence better than n^{−2β/(2β+5)}. If the lower bound (14) is attainable, we must then study privacy mechanisms that are not simply based on direct perturbation of the samples {X_i}_{i=1}^n.

4.2 Achievability by histogram estimators

We now turn to the mean-squared errors achieved by specific practical schemes, beginning with the special case of Lipschitz density functions (β = 1), for which it suffices to consider a private version of a classical histogram estimate. For a fixed positive integer k ∈ N, let {X_j}_{j=1}^k denote the partition of X = [0, 1] into the intervals X_j = [(j − 1)/k, j/k) for j = 1, 2, …, k − 1, and X_k = [(k − 1)/k, 1]. Any histogram estimate of the density based on these k bins can be specified by a vector θ ∈ k∆_k, where we recall that ∆_k ⊂ R_+^k is the probability simplex. Any such vector defines a density estimate via the sum f_θ := Σ_{j=1}^k θ_j 1_{X_j}, where 1_E denotes the characteristic (indicator) function of the set E. Let us now describe a mechanism that guarantees α-local differential privacy. Given a data set {X_1, …, X_n} of samples from the distribution f, consider the vectors

Z_i := e_k(X_i) + W_i, for i = 1, 2, …, n,   (15)

where e_k(X_i) ∈ ∆_k is a k-vector with the jth entry equal to one if X_i ∈ X_j, and zeroes in all other entries, and W_i is a random vector with i.i.d. Laplace(α/2) entries. The variables {Z_i}_{i=1}^n so defined are α-locally differentially private for {X_i}_{i=1}^n. Using these private variables, we then form the density estimate f̂ := f_{θ̂} = Σ_{j=1}^k θ̂_j 1_{X_j} based on

θ̂ := Π_k( (k/n) Σ_{i=1}^n Z_i ),   (16)

where Π_k denotes the Euclidean projection operator onto the set k∆_k. By construction, we have f̂ ≥ 0 and ∫_0^1 f̂(x)dx = 1, so f̂ is a valid density estimate.

Proposition 2. Consider the estimate f̂ based on k = (nα^2)^{1/4} bins in the histogram.
For any 1-Lipschitz density f : [0, 1] → R_+, we have

E_f[ ∥f̂ − f∥_2^2 ] ≤ 5(α^2 n)^{−1/2} + √α n^{−3/4}.   (17)

For any fixed α > 0, the first term in the bound (17) dominates, and the O((α^2 n)^{−1/2}) rate matches the minimax lower bound (14) in the case β = 1: the privatized histogram estimator is minimax-optimal for Lipschitz densities. This result provides the private analog of the classical result that histogram estimators are minimax-optimal (in the non-private setting) for Lipschitz densities.

4.3 Achievability by orthogonal projection estimators

For higher degrees of smoothness (β > 1), histogram estimators no longer achieve optimal rates in the classical setting [20]. Accordingly, we turn to estimators based on orthogonal series and show that, even under local privacy, they achieve the lower bound (14) for all orders of smoothness β ≥ 1. Recall the elliptical Sobolev space (Definition 1), in which a function f is represented as f = Σ_{j=1}^∞ θ_j ϕ_j, where θ_j = ∫ f(x)ϕ_j(x)dx. This representation underlies the classical method of orthonormal series estimation: given a data set {X_1, X_2, …, X_n} drawn i.i.d. according to a density f ∈ L^2([0, 1]), we first compute the empirical basis coefficients

θ̂_j = (1/n) Σ_{i=1}^n ϕ_j(X_i) and then set f̂ = Σ_{j=1}^k θ̂_j ϕ_j,   (18)

where the value k ∈ N is chosen either a priori, based on known properties of the estimation problem, or adaptively, for example using cross-validation [12, 22]. In the setting of local privacy, we consider a mechanism that, instead of releasing the vector of coefficients (ϕ_1(X_i), …, ϕ_k(X_i)) for each data point, employs a random vector Z_i = (Z_{i,1}, …, Z_{i,k}) with the property that E[Z_{i,j} | X_i] = ϕ_j(X_i) for each j = 1, 2, …, k. We assume the basis functions are uniformly bounded; i.e., there exists a constant B_0 = sup_j sup_x |ϕ_j(x)| < ∞.
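Stepping back to Section 4.2 for a moment, the private histogram mechanism (15)–(16) is only a few lines of code. The sketch below is ours, not the authors' implementation; the bin count follows Proposition 2, and the projection onto k∆_k is reduced to a standard sort-based simplex projection.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1 - css) / idx > 0)[0][-1]
    tau = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + tau, 0)

def private_histogram(samples, alpha, rng):
    """Private histogram estimate following (15)-(16): perturb each one-hot
    bin vector with Laplace(alpha/2) noise (scale 2/alpha), average,
    and project onto k * simplex.  Bin count k from Proposition 2."""
    n = len(samples)
    k = max(1, int(round((n * alpha ** 2) ** 0.25)))
    bins = np.minimum((samples * k).astype(int), k - 1)  # bin index of each X_i
    E = np.eye(k)[bins]                                  # one-hot e_k(X_i)
    Z = E + rng.laplace(scale=2 / alpha, size=E.shape)   # (15)
    # Pi_k(k * mean(Z)) = k * Pi_simplex(mean(Z)):       # (16)
    return k * project_simplex(Z.mean(axis=0))           # density value per bin
```

On uniform samples (the density f ≡ 1, which is 0-Lipschitz), the returned bin heights hover around 1 and integrate to exactly 1 by construction.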
For a fixed number B strictly larger than B_0 (to be specified momentarily), consider the following scheme:

Sampling strategy. Given a vector τ ∈ [−B_0, B_0]^k, construct τ̃ ∈ {−B_0, B_0}^k with coordinates τ̃_j sampled independently from {−B_0, B_0} with probabilities 1/2 − τ_j/(2B_0) and 1/2 + τ_j/(2B_0), respectively. Sample T from a Bernoulli(e^α/(e^α + 1)) distribution. Then choose Z ∈ {−B, B}^k via

Z ∼ Uniform on {z ∈ {−B, B}^k : ⟨z, τ̃⟩ > 0} if T = 1;  Uniform on {z ∈ {−B, B}^k : ⟨z, τ̃⟩ ≤ 0} if T = 0.   (19)

By inspection, Z is α-differentially private for any initial vector in the box [−B_0, B_0]^k, and moreover, the samples (19) are efficiently computable (for example by rejection sampling). Starting from the vector τ ∈ R^k with τ_j = ϕ_j(X_i), the above sampling strategy gives

E[[Z]_j | X = x] = c_k (B/(B_0 √k)) ( e^α/(e^α + 1) − 1/(e^α + 1) ) ϕ_j(x) = c_k (B/(B_0 √k)) ((e^α − 1)/(e^α + 1)) ϕ_j(x),   (20)

for a constant c_k that may depend on k but is O(1) and bounded away from 0. Consequently, to attain the unbiasedness condition E[[Z_i]_j | X_i] = ϕ_j(X_i), it suffices to take B = O(B_0 √k / α). The full sampling and inferential scheme is as follows: (i) given a data point X_i, construct the vector τ = [ϕ_j(X_i)]_{j=1}^k; (ii) sample Z_i according to the strategy (19) using τ and the bound B = B_0 √k (e^α + 1)/(c_k(e^α − 1)), where the constant c_k is as in the expression (20). Using the estimator

f̂ := (1/n) Σ_{i=1}^n Σ_{j=1}^k Z_{i,j} ϕ_j,   (21)

we obtain the following proposition.

Proposition 3. Let {ϕ_j} be a B_0-bounded orthonormal basis for L^2([0, 1]). There exists a constant c (depending only on C and B_0) such that the estimator (21) with k = (nα^2)^{1/(2β+2)} satisfies

sup_{f ∈ F_β[C]} E_f[ ∥f − f̂∥_2^2 ] ≤ c (nα^2)^{−2β/(2β+2)}.

Propositions 2 and 3 make clear that the minimax lower bound (14) is sharp, as claimed. Before concluding our exposition, we make a few remarks on other potential density estimators. Our orthogonal-series estimator (21) (and sampling scheme (20)), while similar in spirit to that proposed by Wasserman and Zhou [24, Sec.
6], is different in that it is locally private and requires a different noise strategy to obtain both α-local privacy and optimal convergence rate. Lei [19] considers private M-estimators based on first performing a histogram density estimate, then using this to construct a second estimator; his estimator is not locally private, and the resulting M-estimators have sub-optimal convergence rates. Finally, we remark that density estimators that are based on orthogonal series and Laplace perturbation are sub-optimal: they can achieve (at best) rates of (nα2)− 2β 2β+3 , which is polynomially worse than the sharp result provided by Proposition 3. It appears that appropriately chosen noise mechanisms are crucial for obtaining optimal results. 5 Discussion We have linked minimax analysis from statistical decision theory with differential privacy, bringing some of their respective foundational principles into close contact. In this paper particularly, we showed how to apply our divergence bounds to obtain sharp bounds on the convergence rate for certain nonparametric problems in addition to standard finite-dimensional settings. By providing sharp convergence rates for many standard statistical inference procedures under local differential privacy, we have developed and explored some tools that may be used to better understand privacy-preserving statistical inference and estimation procedures. We have identified a fundamental continuum along which privacy may be traded for utility in the form of accurate statistical estimates, providing a way to adjust statistical procedures to meet the privacy or utility needs of the statistician and the population being sampled. Formally identifying this trade-off in other statistical problems should allow us to better understand the costs and benefits of privacy; we believe we have laid some of the groundwork to do so. Acknowledgments JCD was supported by a Facebook Graduate Fellowship and an NDSEG fellowship. 
Our work was supported in part by the U.S. Army Research Laboratory, U.S. Army Research Office under grant number W911NF-11-1-0391, and Office of Naval Research MURI grant N00014-11-1-0688. 8 References [1] B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: A holistic solution to contingency table release. In Proceedings of the 26th ACM Symposium on Principles of Database Systems, 2007. [2] A. Beimel, K. Nissim, and E. Omri. Distributed private data analysis: Simultaneously solving how and what. In Advances in Cryptology, volume 5157 of Lecture Notes in Computer Science, pages 451–468. Springer, 2008. [3] P. Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3): 163–166, 1984. [4] R. Carroll and P. Hall. Optimal rates of convergence for deconvolving a density. Journal of the American Statistical Association, 83(404):1184–1186, 1988. [5] K. Chaudhuri and D. Hsu. Convergence rates for differentially private statistical estimation. In Proceedings of the 29th International Conference on Machine Learning, 2012. [6] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, 2011. [7] T. M. Cover and J. A. Thomas. Elements of Information Theory, Second Edition. Wiley, 2006. [8] A. De. Lower bounds in differential privacy. In Proceedings of the Ninth Theory of Cryptography Conference, 2012. URL http://arxiv.org/abs/1107.2183. [9] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. arXiv:1302.3203 [math.ST], 2013. URL http://arxiv.org/abs/1302.3203. [10] G. T. Duncan and D. Lambert. Disclosure-limited data dissemination. Journal of the American Statistical Association, 81(393):10–18, 1986. [11] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. 
In Proceedings of the 3rd Theory of Cryptography Conference, pages 265–284, 2006. [12] S. Efromovich. Nonparametric Curve Estimation: Methods, Theory, and Applications. Springer-Verlag, 1999. [13] A. V. Evfimievski, J. Gehrke, and R. Srikant. Limiting privacy breaches in privacy preserving data mining. In Proceedings of the Twenty-Second Symposium on Principles of Database Systems, pages 211–222, 2003. [14] I. P. Fellegi. On the question of statistical confidentiality. Journal of the American Statistical Association, 67(337):7–18, 1972. [15] S. E. Fienberg, U. E. Makov, and R. J. Steele. Disclosure limitation using perturbation and related methods for categorical data. Journal of Official Statistics, 14(4):485–502, 1998. [16] M. Hardt and K. Talwar. On the geometry of differential privacy. In Proceedings of the FourtySecond Annual ACM Symposium on the Theory of Computing, pages 705–714, 2010. URL http://arxiv.org/abs/0907.3754. [17] I. A. Ibragimov and R. Z. Has’minskii. Statistical Estimation: Asymptotic Theory. Springer-Verlag, 1981. [18] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011. [19] J. Lei. Differentially private M-estimators. In Advances in Neural Information Processing Systems 25, 2011. [20] D. Scott. On optimal and data-based histograms. Biometrika, 66(3):605–610, 1979. [21] A. Smith. Privacy-preserving statistical estimation with optimal convergence rates. In Proceedings of the Fourty-Third Annual ACM Symposium on the Theory of Computing, 2011. [22] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009. [23] S. Warner. Randomized response: a survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63–69, 1965. [24] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375–389, 2010. [25] Y. 
Yang and A. Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, 27(5):1564–1599, 1999. [26] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423–435. Springer-Verlag, 1997. 9
Reward Mapping for Transfer in Long-Lived Agents Xiaoxiao Guo Computer Science and Eng. University of Michigan guoxiao@umich.edu Satinder Singh Computer Science and Eng. University of Michigan baveja@umich.edu Richard Lewis Department of Psychology University of Michigan rickl@umich.edu Abstract We consider how to transfer knowledge from previous tasks (MDPs) to a current task in long-lived and bounded agents that must solve a sequence of tasks over a finite lifetime. A novel aspect of our transfer approach is that we reuse reward functions. While this may seem counterintuitive, we build on the insight of recent work on the optimal rewards problem that guiding an agent’s behavior with reward functions other than the task-specifying reward function can help overcome computational bounds of the agent. Specifically, we use good guidance reward functions learned on previous tasks in the sequence to incrementally train a reward mapping function that maps task-specifying reward functions into good initial guidance reward functions for subsequent tasks. We demonstrate that our approach can substantially improve the agent’s performance relative to other approaches, including an approach that transfers policies. 1 Introduction We consider agents that live for a long time in a sequential decision-making environment. While many different interpretations are possible for the notion of long-lived, here we consider agents that have to solve a sequence of tasks over a continuous lifetime. Thus, our problem is closely related to that of transfer learning in sequential decision-making, which can be thought of as a problem faced by agents that have to solve a set of tasks. Transfer learning [18] has explored the reuse across tasks of many different components of a reinforcement learning (RL) architecture, including value functions [16, 5, 8], policies [9, 20], and models of the environment [1, 17]. 
Other transfer approaches have considered parameter transfer [19], selective reuse of sample trajectories from previous tasks [7], as well as reuse of learned abstract representations such as options [12, 6]. A novel aspect of our transfer approach in long-lived agents is that we will reuse reward functions. At first blush, it may seem odd to consider using a reward function different from the one specifying the current task in the sequence (indeed, in most RL research rewards are considered an immutable part of the task description). But there is now considerable work on designing good reward functions, including reward-shaping [10], inverse RL [11], optimal rewards [13] and preference-elicitation [3]. In this work, we specifically build on the insight of the optimal rewards problem (ORP; described in more detail in the next section) that guiding an agent’s behavior with reward functions other than the task-specifying reward function can help overcome computational bounds in the agent architecture. We base our work on an algorithm from Sorg et.al. [14] that learns good guidance reward functions incrementally in a single-task setting. Our main contribution in this paper is a new approach to transfer in long-lived agents in which we use good guidance reward functions learned on previous tasks in the sequence to incrementally train a reward mapping function that maps task-specifying reward functions into good initial guidance reward functions for subsequent tasks. We demonstrate that our approach can substantially improve a long-lived agent’s performance relative to other approaches, first on an illustrative grid world domain, and second on a networking domain from prior work [9] on the reuse of policies for transfer. 1 In the grid world domain only the task-specifying reward function changes with tasks, while in the networking domain both the reward function and the state transition function change with tasks. 
2 Background: Optimal Rewards for Bounded Agents in Single Tasks We consider sequential decision-making environments formulated as controlled Markov processes (CMPs); these are defined via a state space S, an action space A, and a transition function T that determines a distribution over next states given a current state and action. A task in such a CMP is defined via a reward function R that maps state-action pairs to scalar values. The objective of the agent in a task is to execute the optimal policy, i.e., to choose actions in such a way as to optimize utility defined as the expected value of cumulative reward over some lifetime. A CMP and reward function together define a Markov decision process or MDP; hence tasks in this paper are MDPs. There are many approaches to planning an optimal policy in MDPs. Here we will use UCT [4] which incrementally plans the action to take in the current state. It simulates a number of trajectories from the current state up to some maximum depth, choosing actions at each point based on the sum of an estimated action-value that encourages exploitation and a reward bonus that encourages exploration. It has theoretical guarantees of convergence and works well in practice on a variety of large-scale planning problems. We use UCT in this paper because it is one of the state of the art algorithms in RL planning and because there exists a good optimal reward finding algorithm for it [14]. Optimal Rewards Problem (ORP). In almost all of RL research, the reward function is considered part of the task specification and thus unchangeable. The optimal reward framework of Singh et al. [13] stems from the observation that a reward function plays two roles simultaneously in RL problems. The first role is that of evaluation in that the task-specifying reward function is used by the agent designer to evaluate the actual behavior of the agent. 
The second is that of guidance, in that the reward function is also used by the RL algorithm implemented by the agent to determine its behavior (e.g., via Q-learning [21] or UCT planning [4]). The optimal rewards problem separates these two roles into two separate reward functions: the task-specifying objective reward function used to evaluate performance, and an internal reward function used to guide agent behavior. Given a CMP M, an objective reward function R^o, an agent A parameterized by an internal reward function, and a space of possible internal reward functions R, an optimal internal reward function R^{i*} is defined as follows (throughout, superscript o will denote objective evaluation quantities and superscript i will denote internal quantities):

R^{i*} = argmax_{R^i ∈ R} E_{h∼⟨A(R^i),M⟩} { U^o(h) },

where A(R^i) is the agent with internal reward function R^i, h ∼ ⟨A(R^i), M⟩ is a random history (trajectory of alternating states and actions) obtained by the interaction of agent A(R^i) with CMP M, and U^o(h) is the objective utility (as specified by R^o) to the agent designer of interaction history h. The optimal internal reward function will depend on the agent A's architecture and its limitations, and this distinguishes ORP from other reward-design approaches such as inverse RL. When would the optimal internal reward function be different from the objective reward function? If an agent is unbounded in its capabilities with respect to the CMP, then the objective reward function is always an optimal internal reward function. More crucially though, in the realistic setting of bounded agents, optimal internal reward functions may be quite different from objective reward functions.
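To see why a bounded agent can prefer R^i ≠ R^o, consider a deliberately tiny construction of ours (not an example from the paper): a depth-1 greedy planner in a five-state corridor whose only objective reward is at the far end. A brute-force search over a scalar "progress bonus" plays the role of the argmax defining R^{i*}.

```python
# Corridor of 5 states; objective reward 1 only on entering the rightmost state.
GOAL = 4

def step(s, a):  # a = -1 (left) or +1 (right)
    s2 = min(max(s + a, 0), GOAL)
    return s2, 1.0 if (s2 == GOAL and s != GOAL) else 0.0

def run_bounded_agent(theta, horizon=10):
    """Depth-1 greedy planner: picks the action with the best immediate
    *internal* reward  R_i(s, a) = R_o(s, a) + theta * [a == right],
    and returns the *objective* return U_o of the resulting trajectory."""
    s, objective_return = 0, 0.0
    for _ in range(horizon):
        def internal(a):
            s2, r_o = step(s, a)
            return r_o + (theta if a == 1 else 0.0)
        a = max((-1, 1), key=internal)  # on ties, max keeps the first: 'left'
        s, r_o = step(s, a)
        objective_return += r_o
    return objective_return

# ORP by brute force: argmax over a small grid of internal-reward parameters.
best_theta = max([0.0, 0.25, 0.5, 1.0], key=run_bounded_agent)
```

With θ = 0 (internal reward equal to the objective reward), the myopic agent sees zero immediate reward everywhere near the start and never moves, earning nothing; any positive progress bonus carries it to the goal. So for this bounded agent R^{i*} ≠ R^o, which is exactly the situation the ORP formalizes.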
Singh et al. [13] and Sorg et al. [14] provide many examples and some theory of when a good choice of internal reward can mitigate agent bounds, including bounds corresponding to a limited lifetime to learn [13], limited memory [14], and limited resources for planning (the specific bound of interest in this paper).

PGRD: Solving the ORP on-line while planning. Computing R^{i*} can be computationally nontrivial. We will use Sorg et al.'s [14, 15] policy gradient reward design (PGRD) method, which is based on the insight that any planning algorithm can be viewed as procedurally translating the internal reward function R^i into behavior; that is, R^i constitutes indirect parameters of the agent's policy. PGRD cheaply computes the gradient of the objective utility with respect to the R^i parameters through UCT planning. Specifically, it takes a simulation model of the CMP and an objective reward function, and uses UCT to simultaneously plan actions with respect to the current internal reward function as well as to update the internal reward function in the direction of the gradient of the objective utility, for use in the next planning step.
[Figure 1 panels: (a) Conventional Agent, (b) Non-transfer ORP Agent, (c) Reward Mapping Transfer ORP Agent, (d) Sequential Transfer ORP Agent.]

Figure 1: The four agent types compared in this paper. In each figure, time flows from left to right. The sequence of objective reward parameters and task durations for n tasks is shown in the environment portion of each figure. In figures (b–d) the agent portion of the figure is further split into a critic-agent and an actor-agent; figure (a) does not have this split because it is the conventional agent. The critic-agent translates the objective reward parameters θ^o into the internal reward parameters θ^i. The actor-agent is a UCT agent in all our implementations. The critic-agent component varies across the figures and is crucial to understanding the differences among the agents (see text for detailed descriptions).

3 Four Agent Architectures for the Long-Lived Agent Problem

Long-Lived Agent's Objective Utility. We will consider the case where objective rewards are linear functions of objective reward features.
Formally, the jth task is defined by objective reward function R^o_j(s, a) = θ^o_j · ψ^o(s, a), where θ^o_j is the parameter vector for the jth task, ψ^o are the task-independent objective reward features of state and action, and '·' denotes the inner product. Note that the features are constant across tasks while the parameters vary. The jth task lasts for t_j time steps. Given some agent A, the expected objective utility achieved for a particular task sequence {θ^o_j, t_j}_{j=1}^K is

E_{h∼⟨A,M⟩} [ Σ_{j=1}^K U^{θ^o_j}(h_j) ],

where for ease of exposition we denote the history during task j simply as h_j. In general, there may be a distribution over task sequences, and the expected objective utility would then be a further expectation over such a distribution. In some transfer or other long-lived agent research, the emphasis is on learning, in that the agent is assumed to lack complete knowledge of the CMP and the task specifications. Our emphasis here is on planning, in that the agent is assumed to know the CMP perfectly as well as the task specifications as they change. If the agent were unbounded in planning capacity, there would be nothing interesting left to consider, because the agent could simply find the optimal policy for each new task and execute it. What makes our problem interesting, therefore, is that our UCT-based planning agent is computationally limited: the depth and number of trajectories feasible are small enough (relative to the size of the CMP) that it cannot find near-optimal actions. This sets up the potential for both the use of the ORP and of transfer across tasks. Note that basic UCT does use a reward function but does not use an initial value function or policy, and hence changing the reward function is a natural and consequential way to influence UCT. While non-trivial modifications of UCT could allow the use of value functions and/or policies, we do not consider them here.
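A minimal sketch of this linear parameterization (the feature names are our own, chosen to anticipate the eat-bonus/broken-cost tasks of Section 4.1): every task shares the same feature map ψ^o, and only the weight vector θ^o_j changes.

```python
import numpy as np

def objective_reward(theta, psi_sa):
    """Linear objective reward R_j(s, a) = theta_j . psi(s, a)."""
    return float(np.dot(theta, psi_sa))

# Hypothetical task-independent features for one (s, a) pair:
# [ate_food, shelter_broken] indicators.
psi_sa = np.array([1.0, 0.0])

task_explore = np.array([1.0, 0.0])   # rewards eating food
task_repair = np.array([0.0, -1.0])   # penalizes a broken shelter
```

The same state-action pair thus earns reward 1 under the exploration task and reward 0 under the repair task; differing θ^o_j alone is what makes the tasks distinct.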
In addition, in our setting a model of the CMP is available to the agent and so there is no scope for transfer by reuse of model knowledge. Thus, our reuse of reward functions may well be the most consequential option available in UCT. Next we discuss four different agent architectures represented graphically in Figure 1, starting with a conventional agent that ignores both the potential of transfer and that of ORP, followed by three different agents that do not to varying degrees. Conventional Agent. Figure 1(a) shows the baseline conventional UCT-based agent that ignores the possibility of transfer and treats each task separately. It also ignores ORP and treats each task’s objective reward as the internal reward for UCT planning during that task. The remaining three agents will all consider the ORP, and share the following details: The space of internal reward functions R is the space of all linear functions of internal reward features ψi(s, a), i.e., R(s, a) = {θ · ψi(s, a)}θ∈Θ, where Θ is the space of possible parameters θ (in this paper all finite vectors). Note that the internal reward features ψi and the objective reward features ψo do not have to be identical. Non-Transfer ORP Agent. Figure 1(b) shows the non-transfer agent that ignores the possibility of transfer but exploits ORP. It initializes the internal reward function to the objective reward function of each new task as it starts and then uses PGRD to adapt the internal reward function while acting in that task. Nothing is transferred across task boundaries. This agent was designed to help separate the contributions of ORP and transfer to performance gains. Reward-Mapping-Transfer ORP Agent. Figure 1(c) shows the reward-mapping agent that incorporates our main new idea. It exploits both transfer and ORP via incrementally learning a reward mapping function. A reward mapping function f maps objective reward function parameters to internal reward function parameters: ∀j, θi j = f(θo j). 
The reward mapping function is used to initialize the internal reward function at the beginning of each new task. PGRD is used to continually adapt the initialized internal reward function throughout each task. The reward mapping function is incrementally trained as follows: when task j ends, the objective reward function parameters θo j and the adapted internal reward function parameters ˆθi j are used as an input-output pair to update the reward mapping function. In our work, we use nonparametric kernel-regression to learn the reward mapping function. Pseudocode for a general reward mapping agent is presented in Algorithm 1. Sequential-Transfer ORP Agent. Figure 1(d) shows the sequential-transfer agent. It also exploits both transfer and ORP. However, it does not use a reward mapping function but instead continually updates the internal reward function across task boundaries using PGRD. The internal reward function at the end of a task becomes the initial internal reward function at the start of the next task achieving a simple form of sequential transfer. 4 Empirical Evaluation The four agent architectures are compared to demonstrate that the reward mapping approach can substantially improve the bounded agent’s performance, first on an illustrative grid world domain, and second on a networking routing domain from prior work [9] on the transfer of policies. 4.1 Food-and-Shelter Domain The purpose of the experiments in this domain are (1) to systematically explore the relative benefits of the use of ORP, and of transfer (with and without the use of the reward-mapping function), each in isolation and together, (2) to explore the sensitivity and dependence of these relative benefits on parameters of the long-lived setting such as mean duration of tasks, and (3) to visualize what is learned by the reward mapping function. 
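The reward mapping f can be realized with any regressor; below is a minimal Nadaraya-Watson kernel-regression sketch of ours (Gaussian kernel; the bandwidth and the untrained-fallback behavior are our assumptions, as they are not specified here).

```python
import numpy as np

class RewardMapping:
    """Maps objective reward parameters theta_o to initial internal reward
    parameters theta_i, trained on (theta_o, adapted theta_i) pairs collected
    at task boundaries, via Nadaraya-Watson kernel regression."""

    def __init__(self, bandwidth=0.2):
        self.h = bandwidth
        self.inputs, self.outputs = [], []

    def update(self, theta_o, theta_i_hat):
        """Called when a task ends, with the adapted internal parameters."""
        self.inputs.append(np.asarray(theta_o, dtype=float))
        self.outputs.append(np.asarray(theta_i_hat, dtype=float))

    def __call__(self, theta_o):
        """Called when a task starts; returns None if untrained, signalling
        the caller to fall back on its default initialization."""
        if not self.inputs:
            return None
        X, Y = np.array(self.inputs), np.array(self.outputs)
        w = np.exp(-np.sum((X - np.asarray(theta_o)) ** 2, axis=1)
                   / (2 * self.h ** 2))
        return w @ Y / w.sum()
```

Trained on pairs generated by a known map, a query near the training distribution is recovered to within the kernel's smoothing error; in the agent, `update` is invoked at each task boundary and `__call__` at each task start, exactly the two hooks the reward-mapping agent needs.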
Algorithm 1 General pseudocode for Reward Mapping Agent (Figure 1(c))

1: Input: {θ^o_j, t_j}_{j=1}^k, where j is the task indicator, t_j is the task duration, and θ^o_j are the objective reward function parameters specifying task j.
2: for t = 1, 2, 3, ... do
3:   if a new task j starts then
4:     obtain current objective reward parameters θ^o_j
5:     compute θ^i_j = f(θ^o_j)
6:     initialize the internal reward function using θ^i_j
7:   end if
8:   a_t := planning(s_t; θ^i_j) (select action using UCT guided by reward function θ^i_j)
9:   (s_{t+1}, r_{t+1}) := takeAction(s_t, a_t)
10:  θ^i := updateInternalRewardFunction(θ^i, s_t, a_t, s_{t+1}, r_{t+1}) (via PGRD)
11:  if current task ends then
12:    obtain current internal reward parameters as θ̂^i_j
13:    update reward mapping function f using training pair (θ^o_j, θ̂^i_j)
14:  end if
15: end for

[Figure 2 panels: (a) Food-and-Shelter Domain, (b) Network Routing Domain.]

Figure 2: Domains used in empirical evaluation; the network routing domain comes from [9].

The environment is a simple 3 by 3 maze with three left-to-right corridors. Thick black lines indicate impassable walls. The position of the shelter and the possible positions of food are shown in Figure 2.

Dynamics. The shelter breaks down with probability 0.1 at each time step. Once the shelter is broken, it remains broken until repaired by the agent. Food appears at the rightmost column of one of the three corridors and can be eaten by the agent when the agent is at the same location as the food. When food is eaten, new food reappears in a different corridor. The agent can move in the four cardinal directions, and every movement action has probability 0.1 of resulting in movement in a random direction; if the direction is blocked by a wall or the boundary, the action results in no movement. The agent eats food and repairs the shelter automatically whenever collocated with food and shelter respectively.
The discount factor is γ = 0.95.

State. A state is a tuple (l, f, h), where l is the location of the agent, f is the location of the food, and h indicates whether the shelter is broken.

Objective Reward Function. At each time step, the agent receives a positive reward of e (the eat-bonus) for eating food and a negative reward of b (the broken-cost) if the shelter is broken. Thus, the objective reward function's parameters are θ^o_j = (e_j, b_j), where e_j ∈ [0, 1] and b_j ∈ [−1, 0]. Different tasks will require the agent to behave in different ways. For example, if (e_j, b_j) = (1, 0), the agent should explore the maze to eat more food. If (e_j, b_j) = (0, −1), the agent should remain at the shelter's location in order to repair the shelter as it breaks.

Figure 3: (Left) Performance of four agents in the food-and-shelter domain at three different mean task durations. (Middle and Right) Comparing performance while accounting for the computational overhead of learning and using the reward mapping function. See text for details.

Space of Internal Reward Functions. The internal reward function is R^i_j(s) = R^o_j(s) + θ^i_j ψ^i(s), where R^o_j(s) is the objective reward function, ψ^i(s) = 1 − 1/n_l(s) is the inverse recency feature, and n_l(s) is the number of time steps since the agent's last visit to the location in state s. Since there is exactly one internal reward parameter, θ^i_j is a scalar.
A positive θ^i_j encourages the agent to visit locations not visited recently, and a negative θ^i_j encourages the agent to revisit recently visited locations.

Results: Performance advantage of reward mapping. 100 sequences of 200 tasks were generated, with Poisson-distributed task durations and with objective reward function parameters sampled uniformly from their ranges. The agents used UCT with depth 2 and 500 trajectories; the conventional agent is thereby bounded, as evidenced by its poor performance (see Figure 3).

Figure 4 (heat maps over the objective reward parameters, broken cost and eat bonus; numeric cell values omitted): Reward mapping function visualization. Top: Optimal mapping ("Optimal Internal Reward for UCT"). Bottom: Mapping found by the Reward
Mapping agent after 50 tasks.

The left panel in Figure 3 shows the average objective reward per time step (with standard error bars). There are three sets of four bars, where each bar within a set is for a different agent architecture (see legend) and each set is for a different mean task duration (50, 200, and 500, from left to right). For each task duration the reward mapping agent does best and the conventional agent does worst. These results demonstrate that transfer helps performance, and that transfer via the new reward mapping approach can substantially improve a bounded long-lived agent's performance relative to transfer via the competing method of sequential transfer. As task durations get longer, the ratio of the reward-mapping agent's performance to the non-transfer agent's performance gets smaller, though it remains greater than 1 (as can be seen by visually taking the ratio of the corresponding bars). This is expected: the longer the task duration, the more time PGRD has to adapt to the task, and thus the less the better initialization provided by the reward mapping function matters. In addition, the sequential transfer agent does better than the non-transfer agent for the shortest task duration of 50, while the situation reverses for the longest task duration of 500. This is intuitive for the following reason. Recall that initializing the internal reward function from the final internal reward function of the previous task can hurt performance in the sequential transfer setting if the current task requires quite different behavior from the previous one, but can help if two successive tasks are similar. Correcting a poorly initialized internal reward function can cost a large number of steps. These effects are exacerbated by longer task durations because the agent then has longer to adapt its internal reward function to each task. In general, as task duration increases, the non-transfer agent improves but the sequential transfer agent worsens.
Results: Performance comparison accounting for computational overhead. The above results ignore the computational overhead incurred by learning and using the reward mapping function. The two rightmost plots of Figure 3 show the average objective reward per time step as a function of milliseconds per decision for the four agent architectures, over a range of UCT depth parameters {1, ..., 6} and trajectory counts {200, 300, ..., 600}. The plots show that over the entire range of time-per-decision, the best performing agents are reward-mapping agents; in other words, it is not better to spend the overhead time of the reward mapping on additional UCT search. This can be seen by observing that the highest dot at any vertical column on the x-axis belongs to the reward mapping agent. Thus, the overhead of the reward mapping function is insignificant relative to the computational cost of UCT (the latter cost is all the conventional agent incurs).

Results: Reward mapping visualization. Using a fixed set of tasks (as described above) with mean duration 500, we estimated the optimal internal reward parameter (the coefficient of the inverse-recency feature) for UCT by a brute-force grid search. The optimal internal reward parameter is visualized as a function of the two parameters of the objective reward function (broken cost and eat bonus) in Figure 4, top. Negative coefficients (light squares) for the inverse-recency feature discourage exploration, while positive coefficients (dark squares) encourage exploration. As would be expected, the top right corner (high penalty for a broken shelter and low reward for eating) discourages exploration, while the bottom left corner (high reward for eating and low cost for a broken shelter) encourages exploration. Figure 4, bottom, visualizes the learned reward mapping function after training on 50 tasks.
There is a clearly similar pattern to the optimal mapping in the upper graph, though the learned mapping has not captured the finer details.

4.2 Network Routing Domain

The purposes of the following experiments are to (1) compare the performance of our agents to a competing policy transfer method [9] from a closely related setting on a networking application domain defined by the competing method; (2) demonstrate that our reward mapping and other agents can be extended to a multi-agent setting as required by this domain; and (3) demonstrate that the reward-mapping approach can be extended to handle task changes that involve changes to the transition function as well as to the objective reward.

The network routing domain [9] (see Figure 2(b)) is defined by the following components. (1) A set of routers, or nodes. Every router has a queue to store packets; in our experiments, all queues are of size three. (2) A set of links, each between two routers. All links are bidirectional and full-duplex, and every link has a weight (uniformly sampled from {1, 2, 3}) indicating the cost of transmitting a packet. (3) A set of active packets. Every packet is a tuple (source, destination, alive-time), where source is the node that generated the packet, destination is the node the packet is sent to, and alive-time is the time the packet has existed in the network. When a packet is delivered to its destination node, its alive-time is the end-to-end delay. (4) A set of packet generators. Every node has a packet generator that specifies a stochastic method of generating packets. (5) A set of power consumption functions. Every node's power consumption at time t is the number of packets in its queue multiplied by a scalar parameter sampled uniformly from the range [0, 0.5].

Actions, dynamics, and states. Every node makes its routing decisions separately and has its own action space (actions determine which neighbor the first packet in the queue is sent to).
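A minimal sketch of the packet and router structures described in components (1)-(5); the class and attribute names are illustrative, not from the original implementation.

```python
import random
from collections import deque

class Packet:
    """A packet is (source, destination, alive-time); its alive-time at
    delivery is the end-to-end delay."""
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination
        self.alive_time = 0

class Router:
    """A node with a bounded FIFO queue (size three in the experiments).
    Enqueueing onto a full queue drops the packet (congestion / loss)."""
    def __init__(self, name, queue_size=3):
        self.name = name
        self.queue = deque()
        self.queue_size = queue_size
        # Per-node power coefficient, sampled uniformly from [0, 0.5].
        self.power_coeff = random.uniform(0.0, 0.5)

    def enqueue(self, packet):
        if len(self.queue) >= self.queue_size:
            return False  # queue full: packet lost
        self.queue.append(packet)
        return True

    def power(self):
        """Power consumption this step: queue length times the coefficient."""
        return len(self.queue) * self.power_coeff
```

The three objective reward features (inverse delay, loss count, power sum) can all be read off these structures at each time step.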
If multiple packets reach the same node simultaneously, they are inserted into the queue in random order. Packets that arrive after the queue is full cause network congestion and result in packet loss. The global state at time t consists of the contents of all queues at all nodes at time t.

Transition function. In a departure from the original definition of the routing domain, we parameterize the transition function to allow a comparison of agents' performance when transition functions change. Originally, the state transition function in the routing problem was determined by the fixed network topology and by the parameters of the packet generators, which determined among other things the destinations of packets. In our modification, the nodes in the network are partitioned into three groups (G1, G2, and G3), and the probabilities that the destination of a packet belongs to each group (p_G1, p_G2, and p_G3) are parameters we manipulate to change the state transition function.

Objective reward function. The objective reward function is a linear combination of three objective reward features: delay, measured as the sum of the inverse end-to-end delays of all packets received at all nodes at time t; loss, measured as the number of lost packets at time t; and power, measured as the sum of the power consumption of all nodes at time t. The weights of these three features are the parameters of the objective reward function. The weight for the delay feature lies in (0, 1), while the weights for loss and power lie in (−0.2, 0); different choices of these weights correspond to different objective reward functions.

Internal reward function. The internal reward function for the agent at node k is R^i_{j,k}(s, a) = R^o_j(s, a) + θ^i_{j,k} · ψ^i_k(s, a), where R^o_j(s, a) is the objective reward function and ψ^i_k(s, a) is a binary feature vector with one binary feature for each (packet destination, action) pair.
It sets the bit corresponding to the destination of the first packet in node k's queue at state s and the action a to 1; all other bits are set to 0. The internal reward features are capable of representing arbitrary policies (we therefore also implemented classical policy gradient with these features using OLPOMDP [2], but found it to be far slower than PGRD with UCT and hence do not present those results here).

Extension of the Reward Mapping agent to handle transition function changes. The parameters describing the transition function are concatenated with the parameters defining the objective reward function and used as input to the reward mapping function (whose output remains the initial internal reward function).

Handling multi-agency. Every node's agent observes the full state of the environment. All agents make decisions independently at each time step. Nodes do not know other nodes' policies, but can observe how the other nodes have acted in the past and use the empirical counts of past actions to sample other nodes' actions during UCT planning.

[Figure 5: Performance on the network routing domain. (Left) Tasks differ in objective reward functions (R) only. (Middle) Tasks differ in transition functions (T) only. (Right) Tasks differ in both objective reward and transition functions (R and T). See text for details.]

Competing policy transfer method. The competing policy transfer agent from [9] reuses policy knowledge across tasks based on a model-based average-reward RL algorithm. The method keeps a library of policies derived from previous tasks; for each new task it chooses an appropriate policy from the library and then improves that initial policy with experience. The policy selection criterion was designed for the case in which only the linear reward parameters change.
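Returning to the internal reward features defined above, the one-hot (packet destination, action) encoding might be sketched as follows; the function name and argument layout are assumptions for illustration.

```python
def destination_action_features(destinations, actions, first_packet_dest, action):
    """One-hot feature vector psi_k(s, a) over (packet destination, action)
    pairs: the bit for (destination of the first packet in node k's queue,
    chosen action) is 1; all other bits are 0.  Illustrative encoding of the
    feature set described in the text."""
    phi = [0] * (len(destinations) * len(actions))
    d = destinations.index(first_packet_dest)
    a = actions.index(action)
    phi[d * len(actions) + a] = 1  # set the single active bit
    return phi
```

With one free weight per bit, a linear internal reward over these features can express an arbitrary preference over actions for each destination, which is why the feature set can represent arbitrary routing policies.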
However, in our experiments tasks could differ in three ways: (1) only reward functions change, (2) only transition functions change, and (3) both reward and transition functions change. Their policy selection criterion is applied to cases (1) and (3). For case (2), when only transition functions change, their method is modified to select the library policy whose transition function parameters are closest to the new transition function parameters.

Results: Performance advantage of the Reward Mapping agent. Three sets of 100 task sequences were generated: one in which the tasks differed in objective reward function only, another in which they differed in state transition function only, and a third in which they differed in both. Figure 5 compares the average objective reward per time step for all four agents defined above, as well as the competing policy transfer agent, on the three sets. In all cases, the reward-mapping agent works best and the conventional agent worst. The competing policy transfer agent is second best when only the reward function changes, precisely the setting for which it was designed.

5 Conclusion and Discussion

Reward functions are a particularly consequential locus for knowledge transfer: reward functions specify what the agent is to do but not how, and can thus transfer across changes in the environment dynamics (transition function), unlike previously explored loci for knowledge transfer such as value functions, policies, or models. Building on work on the optimal reward problem in single-task settings, our main algorithmic contribution for the long-lived agent setting is to take good guidance reward functions found for previous objective rewards and learn a mapping that effectively initializes the guidance reward function for subsequent tasks.
We demonstrated that our reward mapping approach can outperform alternative approaches; current and future work focuses on a greater theoretical understanding of the general conditions under which this is true.

Acknowledgments. This work was supported by NSF grant IIS-1148668. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the sponsors.

References

[1] Christopher G. Atkeson and Juan Carlos Santamaria. A comparison of direct and model-based reinforcement learning. In International Conference on Robotics and Automation, pages 3557–3564, 1997.
[2] Peter L. Bartlett and Jonathan Baxter. Stochastic optimization of controlled partially observable Markov decision processes. In Proceedings of the 39th IEEE Conference on Decision and Control, volume 1, pages 124–129, 2000.
[3] Urszula Chajewska, Daphne Koller, and Ronald Parr. Making rational decisions using adaptive utility elicitation. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, pages 363–369, 2000.
[4] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Machine Learning: ECML, pages 282–293. Springer, 2006.
[5] George Konidaris and Andrew Barto. Autonomous shaping: Knowledge transfer in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 489–496, 2006.
[6] George Konidaris and Andrew G. Barto. Building portable options: Skill transfer in reinforcement learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, volume 2, pages 895–900, 2007.
[7] Alessandro Lazaric, Marcello Restelli, and Andrea Bonarini. Transfer of samples in batch reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning, pages 544–551, 2008.
[8] Yaxin Liu and Peter Stone. Value-function-based transfer for reinforcement learning using structure mapping.
In Proceedings of the Twenty-First National Conference on Artificial Intelligence, volume 21(1), page 415, 2006.
[9] Sriraam Natarajan and Prasad Tadepalli. Dynamic preferences in multi-criteria reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[10] Andrew Y. Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 278–287, 1999.
[11] Andrew Y. Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 663–670, 2000.
[12] Theodore J. Perkins and Doina Precup. Using options for knowledge transfer in reinforcement learning. Technical report, University of Massachusetts, Amherst, MA, USA, 1999.
[13] Satinder Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70–82, 2010.
[14] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Reward design via online gradient ascent. Advances in Neural Information Processing Systems, 23, 2010.
[15] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Optimal rewards versus leaf-evaluation heuristics in planning agents. In Proceedings of the Twenty-Fifth Conference on Artificial Intelligence, 2011.
[16] Fumihide Tanaka and Masayuki Yamamura. Multitask reinforcement learning on the distribution of MDPs. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, volume 3, pages 1108–1113, 2003.
[17] Matthew E. Taylor, Nicholas K. Jong, and Peter Stone. Transferring instances for model-based reinforcement learning. In Machine Learning and Knowledge Discovery in Databases, pages 488–505, 2008.
[18] Matthew E. Taylor and Peter Stone.
Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009.
[19] Matthew E. Taylor, Shimon Whiteson, and Peter Stone. Transfer via inter-task mappings in policy search reinforcement learning. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, page 37, 2007.
[20] Lisa Torrey and Jude Shavlik. Policy transfer via Markov logic networks. In Inductive Logic Programming, pages 234–248. Springer, 2010.
[21] Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
2013
Distributed Exploration in Multi-Armed Bandits

Eshcar Hillel (Yahoo Labs, Haifa) eshcar@yahoo-inc.com
Zohar Karnin (Yahoo Labs, Haifa) zkarnin@yahoo-inc.com
Tomer Koren* (Technion — Israel Institute of Technology) tomerk@technion.ac.il
Ronny Lempel (Yahoo Labs, Haifa) rlempel@yahoo-inc.com
Oren Somekh (Yahoo Labs, Haifa) orens@yahoo-inc.com

Abstract

We study exploration in Multi-Armed Bandits in a setting where k players collaborate in order to identify an ε-optimal arm. Our motivation comes from the recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players and the amount of communication between them. In particular, our main result shows that by allowing the k players to communicate only once, they are able to learn √k times faster than a single player. That is, distributing learning to k players gives rise to a factor-√k parallel speedup. We complement this result with a lower bound showing that this is in general the best possible. At the other extreme, we present an algorithm that achieves the ideal factor-k speedup in learning performance, with communication only logarithmic in 1/ε.

1 Introduction

Over the past years, multi-armed bandit (MAB) algorithms have been employed in an increasing number of large-scale applications. MAB algorithms rank results of search engines [23, 24], choose between stories or ads to showcase on web sites [2, 8], accelerate model selection and stochastic optimization tasks [21, 22], and more. In many of these applications, the workload is simply too high to be handled by a single processor. In the web context, for example, the sheer volume of user requests and the high rate at which they arrive require websites to use many front-end machines that run in multiple data centers.
In the case of model selection tasks, a single evaluation of a certain model or configuration might require considerable computation time, so distributing the exploration process across several nodes may result in a significant gain in performance. In this paper, we study such large-scale MAB problems in a distributed environment where learning is performed by several independent nodes that may take actions and observe rewards in parallel. Following recent MAB literature [14, 3, 15, 18], we focus on the problem of identifying a "good" bandit arm with high confidence. In this problem, we may repeatedly choose one arm (corresponding to an action) and observe a reward drawn from a probability distribution associated with that arm. Our goal is to find an arm with an (almost) optimal expected reward, with as few arm pulls as possible (that is, to minimize the simple regret [7]). Our objective is thus explorative in nature, and in particular we do not mind the incurred costs or the involved regret. This is indeed the natural goal in many applications, such as the model selection problems mentioned above. [Footnote *: Most of this work was done while the author (Tomer Koren) was at Yahoo Labs, Haifa.] In our setup, a distributed strategy is evaluated by the number of arm pulls per node required for the task, which correlates with the parallel speedup obtained by distributing the learning process.

We abstract a distributed MAB system as follows. In our model, there are k players that correspond to k independent machines in a cluster. The players are presented with a set of arms, with the common goal of identifying a good arm. Each player receives a stream of queries, upon each of which it chooses an arm to pull. This stream is usually regulated by some load balancer ensuring that the load is divided roughly evenly across players. To collaborate, the players may communicate with each other.
We assume that the bandwidth of the underlying network is limited, so players cannot simply share every piece of information. Also, communicating over the network might incur substantial latencies, so players should refrain from doing so as much as possible. When measuring the communication of a multi-player protocol, we count the number of communication rounds it requires, where in a round of communication each player broadcasts a single message (of arbitrary size) to all other players. Round-based models are natural in distributed learning scenarios, where frameworks such as MapReduce [11] are ubiquitous.

What is the tradeoff between the learning performance of the players and the communication between them? At one extreme, if all players broadcast every arm reward to each other as it is observed, they can simply simulate the decisions of a serial, optimal algorithm; the communication load of this strategy is, of course, prohibitive. At the other extreme, if the players never communicate, each suffers the learning curve of a single player, forgoing any speedup the distributed system may provide. Our goal in this work is to better understand this tradeoff between inter-player communication and learning performance. Given the high cost of communication, perhaps the simplest and most important question that arises is how well the players can learn while keeping communication to the very minimum. More specifically, is there a non-trivial strategy by which the players can identify a "good" arm while communicating only once, at the end of the process? As we discuss later on, this is a non-trivial question. On the positive side, we present a k-player algorithm that attains an asymptotic parallel speedup of a factor of √k compared to the conventional, serial setting. In fact, our approach demonstrates how to convert virtually any serial exploration strategy into a distributed algorithm enjoying such a speedup.
Ideally, one could hope for a factor-k speedup in learning performance; however, we show a lower bound on the required number of pulls in this case, implying that our √k speedup is essentially optimal. At the other end of the tradeoff, we investigate how much communication is necessary to obtain the ideal factor-k parallel speedup. We present a k-player strategy achieving such a speedup with communication only logarithmic in 1/ε. As a corollary, we derive an algorithm that exhibits an explicit tradeoff between the number of arm pulls and the amount of inter-player communication.

1.1 Related Work

Recently there has been increasing interest in distributed and collaborative learning problems. In the MAB literature, several recent works consider multi-player MAB scenarios in which players actually compete with each other, either for arm-pull resources [15] or for the rewards received [19]. In contrast, we study a collaborative multi-player problem and investigate how sharing observations helps the players achieve their common goal. The related work of Kanade et al. [17] in the context of non-stochastic (i.e., adversarial) experts also deals with a collaborative problem in a similar distributed setup, and examines the tradeoff between communication and cumulative regret. Another line of recent work has focused on distributed stochastic optimization [13, 1, 12] and distributed PAC models [6, 10, 9], investigating the involved communication tradeoffs. The techniques developed there, however, are inherently "batch" learning methods and thus not directly applicable to our MAB problem, which is online in nature. Questions involving network topology [13, 12] and delays [1] are relevant to our setup as well; however, our present work focuses on establishing the first non-trivial guarantees in a distributed collaborative MAB setting.
2 Problem Setup and Statement of Results

In our model of the Distributed Multi-Armed Bandit problem, there are k ≥ 1 individual players. The players are given n arms, enumerated by [n] := {1, 2, ..., n}. Each arm i ∈ [n] is associated with a reward, which is a [0, 1]-valued random variable with expectation p_i. For convenience, we assume that the arms are ordered by their expected rewards, that is, p_1 ≥ p_2 ≥ ... ≥ p_n. At every time step t = 1, 2, ..., T, each player pulls one arm of his choice and observes an independent sample of its reward. Each player may choose any of the arms, regardless of the other players and their actions. At the end of the game, each player must commit to a single arm. In a communication round, which may take place at any predefined time step, each player may broadcast a message to all other players. While we do not restrict the size of each message, in a reasonable implementation a message should not be larger than Õ(n) bits.

In the best-arm identification version of the problem, the goal of a multi-player algorithm, given some target confidence level δ > 0, is that with probability at least 1 − δ all players correctly identify the best arm (i.e., the arm having the maximal expected reward). For simplicity, we assume in this setting that the best arm is unique. Similarly, in the (ε, δ)-PAC variant the goal is that each player finds an ε-optimal (or "ε-best") arm, that is, an arm i with p_i ≥ p_1 − ε, with high probability. In this paper we focus on the more general (ε, δ)-PAC setup, which also includes best-arm identification for ε = 0.

We use the notation ∆_i := p_1 − p_i to denote the suboptimality gap of arm i, and occasionally use ∆* := ∆_2 for the minimal gap. In the best-arm version of the problem, where we assume that the best arm is unique, we have ∆_i > 0 for all i > 1. When dealing with the (ε, δ)-PAC setup, we also consider the truncated gaps ∆^ε_i := max{∆_i, ε}.
In the context of MAB problems, we are interested in deriving distribution-dependent bounds, namely, bounds that are stated as a function of ε, δ and also the distribution-specific values ∆ := (∆_2, ..., ∆_n). The Õ notation in our bounds hides polylogarithmic factors in n, k, ε, δ, and also in ∆_2, ..., ∆_n. In the case of serial exploration algorithms (i.e., when there is only one player), the lower bounds of Mannor and Tsitsiklis [20] and Audibert et al. [3] show that in general Ω̃(H_ε) pulls are necessary for identifying an ε-best arm, where

    H_ε := Σ_{i=2}^{n} 1 / (∆^ε_i)^2 .    (1)

Intuitively, the hardness of the task is therefore captured by the quantity H_ε, which is roughly the number of arm pulls needed to find an ε-best arm with reasonable probability; see also [3] for a discussion. Our goal in this work is therefore to establish bounds in the distributed model that are expressed as a function of H_ε, in the same vein as the bounds known in the classic MAB setup.

2.1 Baseline approaches

We now discuss several baseline approaches to the problem, starting with our main focus, the single-round setting. The first obvious approach, already mentioned earlier, is the no-communication strategy: let each player explore the arms in isolation from the other players, following an independent instance of some serial strategy; at the end of the executions, all players hold an ε-best arm. Clearly, this approach performs poorly in terms of learning performance, needing Ω̃(H_ε) pulls per player in the worst case and providing no parallel speedup. Another straightforward approach is to employ a majority vote among the players: let each player independently identify an arm, and choose the arm having most of the votes (alternatively, at least half of the votes). However, this approach does not lead to any improvement in performance: for this vote to work, each player has to solve the problem correctly with reasonable probability, which already requires Ω̃(H_ε) pulls of each.
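Equation (1) is straightforward to compute from the gap vector; a minimal sketch, assuming arms are passed as expected rewards sorted in decreasing order:

```python
def hardness(p, eps):
    """H_eps = sum_{i>=2} 1 / max(Delta_i, eps)^2, where Delta_i = p[0] - p[i]
    and p lists the arms' expected rewards in decreasing order.  For eps = 0
    this is the best-arm identification hardness (best arm assumed unique)."""
    return sum(1.0 / max(p[0] - pi, eps) ** 2 for pi in p[1:])
```

Note that truncating each gap at ε caps the contribution of near-optimal arms at 1/ε², reflecting that any ε-best arm is an acceptable answer in the PAC setting.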
Even if we somehow split the arms between players and let each player explore a share of them, a majority vote would still fail, since the players receiving the "good" arms might have to pull arms Ω̃(H_ε) times: a small MAB instance might be as hard as the full-sized problem (in terms of the complexity measure H_ε). When considering algorithms employing multiple communication rounds, we use an ideal simulated serial algorithm (i.e., a full-communication approach) as our baseline. This approach is of course prohibited in our context, but is able to achieve the optimal parallel speedup, linear in the number of players k.

2.2 Our results

We now discuss our approach and give an overview of our algorithmic results. These are summarized in Table 1 below, which compares the different algorithms in terms of parallel speedup and communication. Our approach for the one-round case is based on the idea of a majority vote. For the best-arm identification task, our observation is that by letting each player explore a smaller set of n/√k arms chosen at random and choose one of them as "best", about √k of the players will come up with the global best arm. This (partial) consensus on a single arm is a key aspect of our approach, since it allows the players to identify the correct best arm among the votes of all k players after sharing information only once. Our approach leads to a factor-√k parallel speedup which, as we demonstrate in our lower bound, is the optimal factor in this setting. Although our goal here is pure exploration, in our algorithms each player follows an explore-exploit strategy. The idea is that a player should sample his recommended arm as much as his budget permits, even if it was easy to identify in his small-sized problem. This way we can guarantee that the top arms are sampled to sufficient precision by the time each of the players has to choose a single best arm. The algorithm for the (ε, δ)-PAC setup is similar, but its analysis is more challenging.
As mentioned above, an agreement on a single arm is essential for a vote to work. Here, however, there might be several ε-best arms, so arriving at a consensus on a single one is more difficult. Nonetheless, by examining two different regimes, namely when there are "many" ε-best arms and when there are "few" of them, our analysis shows that a vote can still work and achieve the √k multiplicative speedup.

In the case of multiple communication rounds, we present a distributed elimination-based algorithm that discards arms right after each communication round. Between rounds, we share the work load uniformly between players. We show that the number of such rounds can be reduced to as low as O(log(1/ε)) by eliminating all 2^{-r}-suboptimal arms in the r-th round. A similar idea was employed in [4] for improving the regret bound of UCB with respect to the parameters ∆_i. We also use this technique to develop an algorithm that performs only R communication rounds, for any given parameter R ≥ 1, and achieves a slightly worse multiplicative ε^{2/R}·k speedup.

Table 1: Summary of baseline approaches and our results. The speedup results are asymptotic (logarithmic factors are omitted).

    SETTING      ALGORITHM            SPEED-UP       COMMUNICATION
    ONE-ROUND    No-Communication     1              none
                 Majority Vote        1              1 round
                 Algorithms 1, 2      √k             1 round
    MULTI-ROUND  Serial (simulated)   k              every time step
                 Algorithm 3          k              O(log(1/ε)) rounds
                 Algorithm 3'         ε^{2/R} · k    R rounds

3 One Communication Round

This section considers the most basic variant of the multi-player MAB problem, where each player is allowed only a single transmission, upon finishing her queries. For clarity of exposition, we first consider the best-arm identification setting in Section 3.1; Section 3.2 deals with the (ε, δ)-PAC setup. We demonstrate the tightness of our result in Section 3.3 with a lower bound on the required budget of arm pulls in this setting.
Our algorithms in this section assume the availability of a serial algorithm A(A, ε) that, given a set of arms A and target accuracy ε, identifies an ε-best arm in A with probability at least 2/3 using no more than

c_A · Σ_{i∈A} (1/(∆_i^ε)²) log(|A|/∆_i^ε)    (2)

arm pulls, for some constant c_A > 1. For example, the Successive Elimination algorithm [14] and the Exp-Gap Elimination algorithm [18] provide a guarantee of this form. Essentially, any exploration strategy whose guarantee is expressed as a function of H_ε can be used as the procedure A, with technical modifications in our analysis.

3.1 Best-arm Identification Algorithm

We now describe our one-round best-arm identification algorithm. For simplicity, we present a version matching δ = 1/3, meaning that the algorithm produces the correct arm with probability at least 2/3; we later explain how to extend it to deal with arbitrary values of δ. Our algorithm is akin to a majority vote among the multiple players, in which each player pulls arms in two stages. In the first EXPLORE stage, each player independently solves a "smaller" MAB instance on a random subset of the arms using the exploration strategy A. In the second EXPLOIT stage, each player exploits the arm identified as "best" in the first stage, and communicates that arm and its observed average reward. See Algorithm 1 below for a precise description. An appealing feature of our algorithm is that it requires each player to transmit a single message of constant size (up to logarithmic factors).
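As an illustration of the kind of serial procedure A assumed here, the following is a minimal sketch in the spirit of Successive Elimination [14]; the `pull` interface and the exact confidence-radius constants are illustrative assumptions, not the algorithm of [14] verbatim:

```python
import math

def successive_elimination(pull, arms, eps=0.0, delta=1/3):
    """Sketch of a Successive Elimination style routine A(A, eps).

    pull(i) returns a stochastic reward in [0, 1] for arm i.  Arms whose
    empirical mean falls two confidence radii below the empirical leader
    are discarded; we stop once one arm remains or the radius is < eps/2.
    """
    active = list(arms)
    n = len(active)
    means = {i: 0.0 for i in active}
    t = 0
    while len(active) > 1:
        t += 1
        for i in active:
            means[i] += (pull(i) - means[i]) / t  # running average
        # confidence radius via Hoeffding plus a union bound (assumed form)
        rad = math.sqrt(math.log(4 * n * t * t / delta) / (2 * t))
        if rad <= eps / 2:  # every surviving arm is eps-best
            break
        best = max(means[i] for i in active)
        active = [i for i in active if means[i] >= best - 2 * rad]
    return max(active, key=lambda i: means[i])
```

Any routine with a guarantee of the form (2) can be slotted in for A; the elimination structure above is just the simplest such instance.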
Algorithm 1 ONE-ROUND BEST-ARM
input: time horizon T
output: an arm
1: for player j = 1 to k do
2:   choose a subset A_j of 6n/√k arms uniformly at random
3:   EXPLORE: execute i_j ← A(A_j, 0) using at most T/2 pulls (halting the algorithm early if necessary); if the algorithm fails to identify any arm or does not terminate gracefully, let i_j be an arbitrary arm
4:   EXPLOIT: pull arm i_j for T/2 times and let q̂_j be its average reward
5:   communicate the numbers i_j, q̂_j
6: end for
7: let k_i be the number of players j with i_j = i, and define A = {i : k_i > √k}
8: let p̂_i = (1/k_i) Σ_{j : i_j = i} q̂_j for all i
9: return argmax_{i∈A} p̂_i; if the set A is empty, output an arbitrary arm

In Theorem 3.1 we prove that Algorithm 1 indeed achieves the promised upper bound.

Theorem 3.1. Algorithm 1 identifies the best arm correctly with probability at least 2/3 using no more than

O( (1/√k) · Σ_{i=2}^n (1/∆_i²) log(n/∆_i) )

arm pulls per player, provided that 6 ≤ √k ≤ n. The algorithm uses a single communication round, in which each player communicates Õ(1) bits.

By repeating the algorithm O(log(1/δ)) times and taking the majority vote of the independent runs, we can amplify the success probability to 1 − δ for any given δ > 0. Note that we can still do that with one communication round (at the end of all executions), but each player now has to communicate O(log(1/δ)) values¹.

Theorem 3.2. There exists a k-player algorithm that, given Õ( (1/√k) · Σ_{i=2}^n 1/∆_i² ) arm pulls, identifies the best arm correctly with probability at least 1 − δ. The algorithm uses a single communication round, in which each player communicates O(log(1/δ)) numerical values.

We now prove Theorem 3.1. We show that a budget T of samples (arm pulls) per player, where

T ≥ (24 c_A / √k) · Σ_{i=2}^n (1/∆_i²) ln(n/∆_i) ,    (3)

suffices for the players to jointly identify the best arm i⋆ with the desired probability. Clearly, this would imply the bound stated in Theorem 3.1.
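The two-stage structure of Algorithm 1 can be sketched as follows, with the k players simulated serially; the `pull` and `explore` interfaces and the arbitrary-arm fallback are simplifying assumptions for illustration:

```python
import random
from collections import defaultdict

def one_round_best_arm(pull, n, k, T, explore):
    """Sketch of Algorithm 1 (ONE-ROUND BEST-ARM), players simulated serially.

    pull(i) samples arm i; explore(subset, budget) stands in for the serial
    routine A and returns a candidate best arm using at most `budget` pulls.
    """
    votes = defaultdict(list)
    m = min(n, max(1, int(6 * n / k ** 0.5)))  # subset size ~ 6n/sqrt(k)
    for _ in range(k):
        subset = random.sample(range(n), m)
        i_j = explore(subset, T // 2)                           # EXPLORE stage
        q_j = sum(pull(i_j) for _ in range(T // 2)) / (T // 2)  # EXPLOIT stage
        votes[i_j].append(q_j)
    # keep only arms chosen by more than sqrt(k) players, average their votes
    popular = {i: sum(qs) / len(qs) for i, qs in votes.items()
               if len(qs) > k ** 0.5}
    return max(popular, key=popular.get) if popular else 0  # arbitrary fallback
```

Note that the final vote only trusts arms with more than √k supporters, which is exactly what makes the average of their EXPLOIT-stage estimates precise enough.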
We note that we did not try to optimize the constants in the above expression. We begin by analyzing the EXPLORE phase of the algorithm. Our first lemma shows that each player chooses the global best arm and identifies it as the local best arm with sufficiently large probability.

Lemma 3.3. When (3) holds, each player identifies the (global) best arm correctly after the EXPLORE phase with probability at least 2/√k.

We next address the EXPLOIT phase. The next simple lemma shows that the popular arms (i.e., those selected by many players) are estimated to a sufficient precision.

Lemma 3.4. Provided that (3) holds, we have |p̂_i − p_i| ≤ ∆⋆/2 for all arms i ∈ A with probability at least 5/6.

Due to lack of space, the proofs of the above lemmas are omitted and can be found in [16]. We can now prove Theorem 3.1.

Proof (of Theorem 3.1). Let us first show that with probability at least 5/6, the best arm i⋆ is contained in the set A. To this end, notice that k_{i⋆} is the sum of k i.i.d. Bernoulli random variables {I_j}_j, where I_j is the indicator of whether player j chooses arm i⋆ after the EXPLORE phase. By Lemma 3.3 we have E[I_j] ≥ 2/√k for all j, hence by Hoeffding's inequality,

Pr[k_{i⋆} ≤ √k] ≤ Pr[k_{i⋆} − E[k_{i⋆}] ≤ −√k] ≤ exp(−2(√k)²/k) = e^{−2} ≤ 1/6 ,

which implies that i⋆ ∈ A with probability at least 5/6. Next, note that with probability at least 5/6 the arm i ∈ A having the highest empirical reward p̂_i is the one with the highest expected reward p_i. Indeed, this follows directly from Lemma 3.4, which shows that with probability at least 5/6, for all arms i ∈ A the estimate p̂_i is within ∆⋆/2 of the true bias p_i. Hence, via a union bound we conclude that with probability at least 2/3, the best arm is in A and has the highest empirical reward.

¹ In fact, by letting each player pick a slightly larger subset of O(√(log(1/δ)) · n/√k) arms, we can amplify the success probability to 1 − δ without needing to communicate more than 2 values per player. However, this approach only works when k = Ω(log(1/δ)).
In other words, with probability at least 2/3 the algorithm outputs the best arm i⋆.

3.2 (ε, δ)-PAC Algorithm

We now present an algorithm whose purpose is to recover an ε-optimal arm. Here, there might be more than one ε-best arm, so each "successful" player might come up with a different ε-best arm. Nevertheless, our analysis below shows that with high probability, a subset of the players can still agree on a single ε-best arm, which makes it possible to identify it among the votes of all players. Our algorithm is described in Algorithm 2, and the following theorem states its guarantees.

Theorem 3.5. Algorithm 2 identifies a 2ε-best arm with probability at least 2/3 using no more than

O( (1/√k) · Σ_{i=2}^n (1/(∆_i^ε)²) log(n/∆_i^ε) )

arm pulls per player, provided that 24 ≤ √k ≤ n. The algorithm uses a single communication round, in which each player communicates Õ(1) bits.

Before proving the theorem, we first state several key lemmas. In the following, let n_ε and n_{2ε} denote the number of ε-best and 2ε-best arms, respectively. Our analysis considers two different regimes, n_{2ε} ≤ (1/50)√k and n_{2ε} > (1/50)√k, and shows that in either case,

T ≥ (400 c_A / √k) · Σ_{i=2}^n (1/(∆_i^ε)²) ln(24n/∆_i^ε)    (4)

suffices for identifying a 2ε-best arm with the desired probability. Clearly, this implies the bound stated in Theorem 3.5.

The first lemma shows that at least one of the players is able to find an ε-best arm. As we later show, this is sufficient for the success of the algorithm in case there are many 2ε-best arms.

Lemma 3.6. When (4) holds, at least one player successfully identifies an ε-best arm in the EXPLORE phase, with probability at least 5/6.

The next lemma is more refined and states that in case there are few 2ε-best arms, the probability of each player to successfully identify an ε-best arm grows linearly with n_ε.

Lemma 3.7. Assume that n_{2ε} ≤ (1/50)√k. When (4) holds, each player identifies an ε-best arm in the EXPLORE phase with probability at least 2n_ε/√k.
Algorithm 2 ONE-ROUND ε-ARM
input: time horizon T, accuracy ε
output: an arm
1: for player j = 1 to k do
2:   choose a subset A_j of 12n/√k arms uniformly at random
3:   EXPLORE: execute i_j ← A(A_j, ε) using at most T/2 pulls (halting the algorithm early if necessary); if the algorithm fails to identify any arm or does not terminate gracefully, let i_j be an arbitrary arm
4:   EXPLOIT: pull arm i_j for T/2 times, and let q̂_j be the average reward
5:   communicate the numbers i_j, q̂_j
6: end for
7: let k_i be the number of players j with i_j = i
8: let t_i = k_i T/2 and p̂_i = (1/k_i) Σ_{j : i_j = i} q̂_j for all i
9: define A = {i ∈ [n] : t_i ≥ (1/ε²) ln(12n)}
10: return argmax_{i∈A} p̂_i; if the set A is empty, output an arbitrary arm

The last lemma we need analyzes the accuracy of the estimated rewards of arms in the set A.

Lemma 3.8. With probability at least 5/6, we have |p̂_i − p_i| ≤ ε/2 for all arms i ∈ A.

For the proofs of the above lemmas, refer to [16]. We now turn to prove Theorem 3.5.

Proof. We shall prove that with probability 5/6 the set A contains at least one ε-best arm. This would complete the proof, since Lemma 3.8 assures that with probability 5/6 the estimates p̂_i of all arms i ∈ A are at most ε/2 away from the true reward p_i, which in turn implies (via a union bound) that with probability 2/3 the arm i ∈ A having the maximal empirical reward p̂_i must be a 2ε-best arm.

First, consider the case n_{2ε} > (1/50)√k. Lemma 3.6 shows that with probability 5/6 there exists a player j that identifies an ε-best arm i_j. Since ∆_i ≤ 2ε for at least n_{2ε} arms, we have

t_{i_j} ≥ T/2 ≥ (400/(2√k)) · ((n_{2ε} − 1)/(2ε)²) · ln(24n/(2ε)) ≥ (1/ε²) ln(12n) ,

that is, i_j ∈ A. Next, consider the case n_{2ε} ≤ (1/50)√k. Let N denote the number of players that identified some ε-best arm. The random variable N is a sum of Bernoulli random variables {I_j}_j, where I_j indicates whether player j identified some ε-best arm.
By Lemma 3.7, E[I_j] ≥ 2n_ε/√k, and thus by Hoeffding's inequality,

Pr[N < n_ε√k] = Pr[N − E[N] ≤ −n_ε√k] ≤ exp(−2n_ε²) ≤ 1/6 .

That is, with probability 5/6, at least n_ε√k players found an ε-best arm. A pigeon-hole argument now shows that in this case there exists an ε-best arm i⋆ selected by at least √k players. Hence, with probability 5/6 the number of samples of this arm collected in the EXPLOIT phase is at least t_{i⋆} ≥ √k·T/2 > (1/ε²) ln(12n), which means that i⋆ ∈ A.

3.3 Lower Bound

The following theorem suggests that in general, for identifying the best arm, k players achieve a multiplicative speed-up of at most Õ(√k) when allowing one transmission per player (at the end of the game). Clearly, this also implies that a similar lower bound holds in the PAC setup, and proves that our algorithmic results for the one-round case are essentially tight.

Theorem 3.9. For any k-player strategy that uses a single round of communication, there exist rewards p_1, …, p_n ∈ [0, 1] and an integer T such that

• each individual player must use at least T/√k arm pulls for them to collectively identify the best arm with probability at least 2/3;
• there exists a single-player algorithm that needs at most Õ(T) pulls for identifying the best arm with probability at least 2/3.

The proof of the theorem is omitted due to space constraints and can be found in [16].

4 Multiple Communication Rounds

In this section we establish an explicit tradeoff between the performance of a multi-player algorithm and the number of communication rounds it uses, in terms of the accuracy ε. Our observation is that by allowing O(log(1/ε)) rounds of communication, it is possible to achieve the optimal speed-up of factor k. That is, we do not gain any improvement in learning performance by allowing more than O(log(1/ε)) rounds.
Algorithm 3 MULTI-ROUND ε-ARM
input: (ε, δ)
output: an arm
1: initialize S_0 ← [n], r ← 0, t_0 ← 0
2: repeat
3:   set r ← r + 1
4:   let ε_r ← 2^{−r}, t_r ← (2/(kε_r²)) ln(4nr²/δ)
5:   for player j = 1 to k do
6:     sample each arm i ∈ S_{r−1} for t_r − t_{r−1} times
7:     let p̂^r_{j,i} be the average reward of arm i (in all rounds so far of player j)
8:     communicate the numbers p̂^r_{j,1}, …, p̂^r_{j,n}
9:   end for
10:  let p̂^r_i = (1/k) Σ_{j=1}^k p̂^r_{j,i} for all i ∈ S_{r−1}, and let p̂^r_⋆ = max_{i∈S_{r−1}} p̂^r_i
11:  set S_r ← S_{r−1} \ {i ∈ S_{r−1} : p̂^r_i < p̂^r_⋆ − ε_r}
12: until ε_r ≤ ε/2 or |S_r| = 1
13: return an arm from S_r

Our algorithm is given in Algorithm 3. The idea is to eliminate in each round r (i.e., right after the r'th communication round) all 2^{−r}-suboptimal arms. We accomplish this by letting each player sample uniformly all remaining arms and communicate the results to the other players. Then, players are able to eliminate suboptimal arms with high confidence. If each such round is successful, after log₂(1/ε) rounds only ε-best arms survive. Theorem 4.1 below bounds the number of arm pulls used by this algorithm (a proof can be found in [16]).

Theorem 4.1. With probability at least 1 − δ, Algorithm 3

• identifies the optimal arm using O( (1/k) · Σ_{i=2}^n (1/(∆_i^ε)²) log(n/δ) log(1/∆_i^ε) ) arm pulls per player;
• terminates after at most 1 + ⌈log₂(1/ε)⌉ rounds of communication (or after 1 + ⌈log₂(1/∆⋆)⌉ rounds for ε = 0).

By properly tuning the elimination thresholds ε_r of Algorithm 3 in accordance with the target accuracy ε, we can establish an explicit trade-off between the number of communication rounds and the number of arm pulls each player needs. In particular, we can design a multi-player algorithm that terminates after at most R communication rounds, for any given parameter R > 0. This, however, comes at the cost of a compromise in learning performance, as quantified in the following corollary.

Corollary 4.2. Given a parameter R > 0, set ε_r ← ε^{r/R} for all r ≥ 1 in Algorithm 3.
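A minimal sketch of the elimination loop of Algorithm 3, with all k players' pulls pooled serially; the `pull` interface and the ceiling applied to t_r are illustrative assumptions:

```python
import math

def multi_round_eps_arm(pull, n, k, eps, delta):
    """Sketch of Algorithm 3 (MULTI-ROUND eps-ARM), players pooled serially.

    Round r: pooled sampling brings each surviving arm up to k*t_r pulls with
    t_r = (2 / (k * eps_r^2)) * ln(4 n r^2 / delta) and eps_r = 2^{-r}; arms
    whose pooled mean trails the leader by eps_r are eliminated.
    """
    S = list(range(n))
    sums = {i: 0.0 for i in S}
    counts = {i: 0 for i in S}
    r, t_prev = 0, 0
    while True:
        r += 1
        eps_r = 2.0 ** -r
        t_r = math.ceil((2 / (k * eps_r ** 2)) * math.log(4 * n * r * r / delta))
        for i in S:
            for _ in range(k * (t_r - t_prev)):  # all k players' new pulls
                sums[i] += pull(i)
                counts[i] += 1
        t_prev = t_r
        means = {i: sums[i] / counts[i] for i in S}
        best = max(means.values())
        S = [i for i in S if means[i] >= best - eps_r]  # elimination step
        if eps_r <= eps / 2 or len(S) == 1:
            return S[0]
```

Because ε_r halves each round, the sample counts roughly quadruple per round, so the total work is dominated by the last round, matching the per-player bound of Theorem 4.1 up to logarithmic factors.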
With probability at least 1 − δ, the modified algorithm

• identifies an ε-best arm using Õ( (ε^{−2/R}/k) · Σ_{i=2}^n 1/(∆_i^ε)² ) arm pulls per player;
• terminates after at most R rounds of communication.

5 Conclusions and Further Research

We have considered a collaborative MAB exploration problem, in which several independent players explore a set of arms with a common goal, and obtained the first non-trivial results in such a setting. Our main results apply to the particularly interesting regime where each of the players is allowed a single transmission; this setting fits naturally into common distributed frameworks such as MapReduce. An interesting open question in this context is whether one can obtain a strictly better speed-up result (one which, in particular, is independent of ε) by allowing more than a single round. Even when allowing merely two communication rounds, it is unclear whether the √k speed-up can be improved. Intuitively, the difficulty here is that in the second phase of a reasonable strategy each player should focus on the arms that excelled in the first phase; this makes the sub-problems faced in the second phase as hard as the entire MAB instance, in terms of the quantity H_ε. Nevertheless, we expect our one-round approach to serve as a building block in the design of future distributed exploration algorithms that are applicable in more complex communication models.

An additional interesting problem for future research is how to translate our results to the regret minimization setting. In particular, it would be nice to see a conversion of algorithms like UCB [5] to a distributed setting. In this respect, perhaps a more natural distributed model is one resembling that of Kanade et al. [17], who established a regret vs. communication trade-off in the non-stochastic setting.

References

[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In NIPS, pages 873–881, 2011.
[2] D. Agarwal, B.-C. Chen, P. Elango, N.
Motgi, S.-T. Park, R. Ramakrishnan, S. Roy, and J. Zachariah. Online models for content optimization. In NIPS, pages 17–24, December 2008.
[3] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In COLT, pages 41–53, 2010.
[4] P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
[5] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[6] M. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. arXiv preprint arXiv:1204.3514, 2012.
[7] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Algorithmic Learning Theory, pages 23–37. Springer, 2009.
[8] D. Chakrabarti, R. Kumar, F. Radlinski, and E. Upfal. Mortal multi-armed bandits. In NIPS, pages 273–280, 2008.
[9] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Efficient protocols for distributed classification and optimization. In ALT, 2012.
[10] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Protocols for learning classifiers on distributed data. In AISTATS, 2012.
[11] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Commun. ACM, 51(1):107–113, Jan. 2008.
[12] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13:165–202, 2012.
[13] J. Duchi, A. Agarwal, and M. J. Wainwright. Distributed dual averaging in networks. In NIPS, 2010.
[14] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. The Journal of Machine Learning Research, 7:1079–1105, 2006.
[15] V. Gabillon, M. Ghavamzadeh, A. Lazaric, and S. Bubeck. Multi-bandit best arm identification.
NIPS, 2011.
[16] E. Hillel, Z. Karnin, T. Koren, R. Lempel, and O. Somekh. Distributed exploration in multi-armed bandits. arXiv preprint arXiv:1311.0800, 2013.
[17] V. Kanade, Z. Liu, and B. Radunovic. Distributed non-stochastic experts. In Advances in Neural Information Processing Systems 25, pages 260–268, 2012.
[18] Z. Karnin, T. Koren, and O. Somekh. Almost optimal exploration in multi-armed bandits. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[19] K. Liu and Q. Zhao. Distributed learning in multi-armed bandit with multiple players. IEEE Transactions on Signal Processing, 58(11):5667–5681, Nov. 2010.
[20] S. Mannor and J. Tsitsiklis. The sample complexity of exploration in the multi-armed bandit problem. The Journal of Machine Learning Research, 5:623–648, 2004.
[21] O. Maron and A. W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In NIPS, 1994.
[22] V. Mnih, C. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In ICML, pages 672–679. ACM, 2008.
[23] F. Radlinski, M. Kurup, and T. Joachims. How does clickthrough data reflect retrieval quality? In CIKM, pages 43–52, October 2008.
[24] Y. Yue and T. Joachims. Interactively optimizing information retrieval systems as a dueling bandits problem. In ICML, page 151, June 2009.
It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals

Barbara Rakitsch
Machine Learning and Computational Biology Research Group, Max Planck Institutes, Tübingen, Germany
rakitsch@tuebingen.mpg.de

Christoph Lippert
Microsoft Research, Los Angeles, USA
lippert@microsoft.com

Karsten Borgwardt¹,²
Machine Learning and Computational Biology Research Group, Max Planck Institutes, Tübingen, Germany
karsten.borgwardt@tuebingen.mpg.de

Oliver Stegle²
European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
oliver.stegle@ebi.ac.uk

Abstract

Multi-task prediction methods are widely used to couple regressors or classification models by sharing information across related tasks. We propose a multi-task Gaussian process approach for modeling both the relatedness between regressors and the task correlations in the residuals, in order to more accurately identify true sharing between regressors. The resulting Gaussian model has a covariance term in the form of a sum of Kronecker products, for which efficient parameter inference and out-of-sample prediction are feasible. On both synthetic examples and applications to phenotype prediction in genetics, we find substantial benefits of modeling structured noise compared to established alternatives.

1 Introduction

Multi-task Gaussian process (GP) models are widely used to couple related tasks or functions for joint regression. This coupling is achieved by designing a structured covariance function, yielding a prior on vector-valued functions. An important class of structured covariance functions can be derived from a product of a kernel function c relating the tasks (task covariance) and a kernel function r relating the samples (sample covariance),

cov(f_{n,t}, f_{n',t'}) = c(t, t') · r(n, n') ,    (1)

where the first factor is the task covariance, the second is the sample covariance, and f_{n,t} are latent function values that induce the outputs y_{n,t} by adding some Gaussian noise.
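For small problems, the product covariance of Eqn. (1) can be assembled explicitly; the helper `product_cov` and the toy kernels below are illustrative assumptions, not code from the paper:

```python
import numpy as np

def product_cov(c, r, tasks, samples):
    """Covariance over latent values f_{n,t} from a product of a task
    kernel c and a sample kernel r, as in Eqn. (1); with fully observed
    outputs this equals the Kronecker product of C and R."""
    C = np.array([[c(t, u) for u in tasks] for t in tasks])
    R = np.array([[r(a, b) for b in samples] for a in samples])
    return np.kron(C, R)
```

Each entry of the result is c(t, t')·r(n, n'), so the task kernel uniformly rescales the sample-kernel block for every pair of tasks.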
If the outputs y_{n,t} are fully observed, with one training example per sample and task, the resulting covariance matrix between the latent factors can be written as a Kronecker product between the sample covariance matrix and the task covariance matrix (e.g., [1]). More complex multi-task covariance structures can be derived from generalizations of this product structure, for example via convolution of multiple features, e.g., [2]. In [3], a parameterized covariance over the tasks is used, assuming that task-relevant features are observed. The authors of [4] couple the latent features over the tasks, exploiting a dependency in neural population activity over time.

¹ Also at Zentrum für Bioinformatik, Eberhard Karls Universität Tübingen, Tübingen, Germany
² Both authors contributed equally to this work.

Work proposing this type of multi-task GP regression builds on Bonilla and Williams [1], who have emphasized that the power of Kronecker covariance models for GPs (Eqn. (1)) is linked to non-zero observation noise. In fact, in the limit of noise-free training observations, the coupling of tasks is lost in the predictive model, which reduces to ordinary GP regressors for each individual task. Most multi-task GP models build on a simple independent noise model, an assumption that is mainly rooted in computational convenience. For example, [5] show that this assumption renders the evaluation of the model likelihood and parameter gradients tractable, avoiding the explicit evaluation of the Kronecker covariance. In this paper, we account for residual noise structure by modeling the signal and the noise covariance matrix as two separate Kronecker products. The structured noise covariance is independent of the inputs but instead captures residual correlation between tasks due to latent causes; moreover, the model is simple and extends the widely used product covariance structure.
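A toy construction of such a sum-of-Kronecker-products covariance, together with a draw from the implied model, might look as follows; the specific low-rank-plus-diagonal task matrices and dimensions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 3  # samples, tasks

S = rng.standard_normal((N, 4))
R = S @ S.T                                    # linear sample covariance
x_c = rng.standard_normal(T)
x_s = rng.standard_normal(T)
C = np.outer(x_c, x_c) + 0.5 * np.eye(T)       # signal task covariance
Sigma = np.outer(x_s, x_s) + 0.5 * np.eye(T)   # noise task covariance

# covariance over vec(Y): signal term kron(C, R) plus noise term kron(Sigma, I)
K = np.kron(C, R) + np.kron(Sigma, np.eye(N))
y = rng.multivariate_normal(np.zeros(N * T), K)  # one draw of vec(Y)
Y = y.reshape(T, N).T                          # back to an N x T output matrix
```

The noise term correlates residuals across tasks for the same sample while leaving different samples independent, which is exactly the structure the paper argues is induced by unobserved causes.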
Conceptually related noise models have been proposed in animal breeding [6, 7]. In geostatistics [8], linear coregionalization models have been introduced to allow for more complicated covariance structures: the signal covariance matrix is modeled as a sum of Kronecker products and the noise covariance as a single Kronecker product. In machine learning, the Gaussian process regression network [9] considers an adaptive mixture of GPs to model related tasks. The mixing coefficients depend on the input signal and control the signal and noise correlation simultaneously.

The remainder of this paper is structured as follows. First, we show that unobserved regressors or causal processes inevitably lead to correlated residuals, motivating the need to account for structured noise (Section 2). This extension of the multi-task GP model allows for more accurate estimation of the task-task relationships, thereby improving the performance of out-of-sample predictions. At the same time, we show how an efficient inference scheme can be derived for this class of models. The proposed implementation provides closed-form marginal likelihoods and parameter gradients for matrix-variate normal models with a covariance structure represented by the sum of two Kronecker products. These operations can be implemented at marginal extra computational cost compared to models that ignore residual task correlations (Section 3). In contrast to existing work extending Gaussian process multi-task models by defining more complex covariance structures [2, 9, 8], our model utilizes the gradient of the marginal likelihood for parameter estimation and does not require expectation maximization, variational approximation or MCMC sampling. We apply the resulting model in simulated and real settings, showing that correlated residuals are a concern in important applications (Section 4).
2 Multi-task Gaussian processes with structured noise

Let Y ∈ R^{N×T} denote the N × T output training matrix for N samples and T tasks. The column of this matrix corresponding to a particular task t is denoted y_t, and vecY = (y_1^⊤, …, y_T^⊤)^⊤ denotes the vector obtained by vertical concatenation of all columns of Y. We indicate the dimensions of a matrix as capital subscripts when needed for clarity. A more detailed derivation of all equations can be found in the Supplementary Material.

Multivariate linear model equivalence. The multi-task Gaussian process regression model with structured noise can be derived from the perspective of a linear multivariate generative model. For a particular task t, the outputs are determined by a linear function of the training inputs across F features S = {s_1, …, s_F},

y_t = Σ_{f=1}^F s_f w_{f,t} + ψ_t .    (2)

Multi-task sharing is achieved by specifying a multivariate normal prior across tasks, both for the regression weights w_{f,t} and the noise variances ψ_t:

p(W^⊤) = Π_{f=1}^F N(w_f | 0, C_{TT}) ,    p(Ψ^⊤) = Π_{n=1}^N N(ψ_n | 0, Σ_{TT}) .

Marginalizing out the weights W and the residuals Ψ results in a matrix-variate normal model with a sum-of-Kronecker-products covariance structure,

p(vecY | C, R, Σ) = N(vecY_{NT} | 0, C_{TT} ⊗ R_{NN} + Σ_{TT} ⊗ I_{NN}) ,    (3)

where the first Kronecker product is the signal covariance, the second is the noise covariance, and R_{NN} = SS^⊤ is the sample covariance matrix that results from the marginalization over the weights W in Eqn. (2). In the following, we will refer to a Gaussian process model with this type of sum-of-Kronecker-products covariance structure as GP-kronsum¹. As common to any kernel method, the linear covariance R can be replaced with any positive semi-definite covariance function.

Predictive distribution. In a GP-kronsum model, predictions for unseen test instances can be carried out using the standard Gaussian process framework [10]:

p(vecY* | R*, Y) = N(vecY* | vecM*, V*) .    (4)

Here, M* denotes the mean prediction and V* is the predictive covariance.
Analytical expressions for both can be obtained by considering the joint distribution of observed and unobserved outputs and completing the square, yielding:

vecM* = (C_{TT} ⊗ R*_{N*N}) (C_{TT} ⊗ R_{NN} + Σ_{TT} ⊗ I_{NN})^{−1} vecY_{NT} ,
V* = (C_{TT} ⊗ R*_{N*N*}) − (C_{TT} ⊗ R*_{N*N}) (C_{TT} ⊗ R_{NN} + Σ_{TT} ⊗ I_{NN})^{−1} (C_{TT} ⊗ R*_{NN*}) ,

where R*_{N*N} is the covariance matrix between the test and training instances, and R*_{N*N*} is the covariance matrix between the test samples.

Design of multi-task covariance function. In practice, neither the form of C nor the form of Σ is known a priori, and hence both need to be inferred from data by fitting a set of corresponding covariance parameters θ_C and θ_Σ. If the number of tasks T is large, learning a free-form covariance matrix is prone to overfitting, as the number of free parameters grows quadratically with T. In the experiments, we consider a rank-k approximation of the form Σ_{k=1}^K x_k x_k^⊤ + σ²I for the task matrices.

Task cancellation when the task covariance matrices are equal. A notable form of the predictive distribution (4) arises for the special case C = Σ, that is, when the task covariance matrices of signal and noise are identical. Similar to previous results for noise-free observations [1], maximizing the marginal likelihood p(vecY | C, R, Σ) with respect to the parameters θ_R becomes independent of C, and the predictions are decoupled across tasks, i.e., the benefits from joint modeling are lost:

vecM* = vec[ R*_{N*N} (R_{NN} + I_{NN})^{−1} Y_{NT} ] .    (5)

In this case, the predictions depend on the sample covariance, but not on the task covariance. Thus, the GP-kronsum model is most useful when the task covariances on observed features and on noise reflect two independent sharing structures.

3 Efficient Inference

In general, efficient inference can be carried out for Gaussian models with a covariance that is a sum of two arbitrary Kronecker products,

p(vecY | C, R, Σ) = N(vecY | 0, C_{TT} ⊗ R_{NN} + Σ_{TT} ⊗ Ω_{NN})
(6)

The key idea is to first consider a suitable data transformation that diagonalizes all covariance matrices, and second to exploit Kronecker identities wherever possible. Let Σ = U_Σ S_Σ U_Σ^⊤ be the eigenvalue decomposition of Σ, and analogously for Ω. Borrowing ideas from [11], we can first bring the covariance matrix into a more amenable form by factoring out the structured noise:

K = C ⊗ R + Σ ⊗ Ω = (U_Σ S_Σ^{1/2} ⊗ U_Ω S_Ω^{1/2}) (C̃ ⊗ R̃ + I ⊗ I) (S_Σ^{1/2} U_Σ^⊤ ⊗ S_Ω^{1/2} U_Ω^⊤) ,    (7)

where C̃ = S_Σ^{−1/2} U_Σ^⊤ C U_Σ S_Σ^{−1/2} and R̃ = S_Ω^{−1/2} U_Ω^⊤ R U_Ω S_Ω^{−1/2}. In the following, we write K̃ = C̃ ⊗ R̃ + I ⊗ I for this transformed covariance.

Efficient log likelihood evaluation. The log model likelihood (Eqn. (6)) can be expressed in terms of the transformed covariance K̃:

L = −(NT/2) ln(2π) − (1/2) ln|K| − (1/2) vecY^⊤ K^{−1} vecY
  = −(NT/2) ln(2π) − (1/2) ln|K̃| − (1/2) ln|S_Σ ⊗ S_Ω| − (1/2) vecỸ^⊤ K̃^{−1} vecỸ ,    (8)

where vecỸ = (S_Σ^{−1/2} U_Σ^⊤ ⊗ S_Ω^{−1/2} U_Ω^⊤) vecY = vec(S_Ω^{−1/2} U_Ω^⊤ Y U_Σ S_Σ^{−1/2}) is the projected output. Except for the additional term ln|S_Σ ⊗ S_Ω|, resulting from the transformation, the log likelihood has exactly the same form as for multi-task GP regression with iid noise [1, 5]. Using an analogous derivation, we can now evaluate the log likelihood efficiently:

L = −(NT/2) ln(2π) − (1/2) ln|S_C̃ ⊗ S_R̃ + I ⊗ I| − (N/2) ln|S_Σ| − (T/2) ln|S_Ω|
  − (1/2) vec(U_R̃^⊤ Ỹ U_C̃)^⊤ (S_C̃ ⊗ S_R̃ + I ⊗ I)^{−1} vec(U_R̃^⊤ Ỹ U_C̃) ,    (9)

where we have defined the eigenvalue decomposition of C̃ as U_C̃ S_C̃ U_C̃^⊤, and similarly for R̃.

Efficient gradient evaluation. The derivative of the log marginal likelihood with respect to a covariance parameter θ_R can be expressed as:

∂L/∂θ_R = −(1/2) ∂/∂θ_R ln|K̃| − (1/2) vecỸ^⊤ (∂K̃^{−1}/∂θ_R) vecỸ
  = −(1/2) diag((S_C̃ ⊗ S_R̃ + I ⊗ I)^{−1})^⊤ diag(S_C̃ ⊗ U_R̃^⊤ (∂R̃/∂θ_R) U_R̃)
  + (1/2) vec(Ŷ)^⊤ vec(U_R̃^⊤ (∂R̃/∂θ_R) U_R̃ Ŷ S_C̃) ,    (10)

where vec(Ŷ) = (S_C̃ ⊗ S_R̃ + I ⊗ I)^{−1} vec(U_R̃^⊤ Ỹ U_C̃).

¹ The covariance is defined as the sum of two Kronecker products and not as the classical Kronecker sum C ⊕ R = C ⊗ I + I ⊗ R.
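The diagonalization of Eqns. (7)-(9) can be checked numerically against the naive evaluation; this is a sketch under the column-stacking convention for vec(Y), using the identity that a Kronecker product times vec(X) equals vec of the corresponding two-sided matrix product:

```python
import numpy as np

def kronsum_loglik_naive(Y, C, R, Sigma, Omega):
    """O(N^3 T^3) evaluation of the likelihood in Eqn. (6)."""
    N, T = Y.shape
    K = np.kron(C, R) + np.kron(Sigma, Omega)
    y = Y.T.reshape(-1)  # vec(Y), column stacking
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (N * T * np.log(2 * np.pi) + logdet
                   + y @ np.linalg.solve(K, y))

def kronsum_loglik_fast(Y, C, R, Sigma, Omega):
    """O(N^3 + T^3) evaluation via the two-stage diagonalization."""
    N, T = Y.shape
    Ss, Us = np.linalg.eigh(Sigma)
    So, Uo = np.linalg.eigh(Omega)
    # whiten by the noise term: the C~, R~, Y~ of Eqns. (7)-(8)
    Ct = (Us.T @ C @ Us) / np.sqrt(Ss)[:, None] / np.sqrt(Ss)[None, :]
    Rt = (Uo.T @ R @ Uo) / np.sqrt(So)[:, None] / np.sqrt(So)[None, :]
    Yt = (Uo.T @ Y @ Us) / np.sqrt(So)[:, None] / np.sqrt(Ss)[None, :]
    Sc, Uc = np.linalg.eigh(Ct)
    Sr, Ur = np.linalg.eigh(Rt)
    D = np.outer(Sr, Sc) + 1.0  # eigenvalues of the transformed covariance
    Yh = Ur.T @ Yt @ Uc
    return -0.5 * (N * T * np.log(2 * np.pi) + np.log(D).sum()
                   + T * np.log(So).sum() + N * np.log(Ss).sum()
                   + (Yh ** 2 / D).sum())
```

The fast path never forms an NT × NT matrix: everything reduces to eigendecompositions of T × T and N × N matrices plus elementwise operations on an N × T array.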
Analogous gradients can be derived for the task covariance parameters θ_C and θ_Σ. The proposed speed-ups also apply to the special case where Σ is modeled as diagonal, as in [1], and to optimizing the parameters of a kernel function. Since a sum of Kronecker products generally cannot be written as a single Kronecker product, the speed-ups do not generalize to larger sums of Kronecker products.

Efficient prediction. Similarly, the mean predictor (Eqn. (4)) can be efficiently evaluated as

vecM* = vec[ R* U_Ω S_Ω^{−1/2} U_R̃ Ŷ U_C̃^⊤ S_Σ^{−1/2} U_Σ^⊤ C^⊤ ] .    (11)

Gradient-based parameter inference. The closed-form expressions for the marginal likelihood (Eqn. (9)) and its gradients with respect to the covariance parameters (Eqn. (10)) allow for gradient-based parameter inference. In the experiments, we employ a variant of L-BFGS-B [12].

Computational cost. While the naive approach has a runtime of O(N³·T³) and a memory requirement of O(N²·T²), as it explicitly computes and inverts the Kronecker products, our reformulation reduces the runtime to O(N³ + T³) and the memory requirement to O(N² + T²), making it applicable to large numbers of samples and tasks. The empirical runtime savings over the naive approach are explored in Section 4.1.

Figure 1: Runtime comparison on synthetic data. (a) Efficient implementation; (b) naive implementation. We compare our efficient GP-kronsum implementation versus its naive counterpart. Shown is the runtime in seconds on a logarithmic scale as a function of the sample size and the number of tasks. The optimization was stopped prematurely if it did not complete after 10⁴ seconds.

4 Experiments

We investigated the performance of the proposed GP-kronsum model on both simulated datasets and response prediction problems in statistical genetics.
To investigate the benefits of structured residual covariances, we compared the GP-kronsum model to a Gaussian process with iid noise (GP-kronprod) [5], to independent modeling of tasks using a standard Gaussian process (GP-single), and to joint modeling of all tasks using a standard Gaussian process on a pooled dataset that naively merges the data from all tasks (GP-pool). The predictive performance of the individual models was assessed through 10-fold cross-validation. For each fold, model parameters were fit on the training data only. To avoid local optima during training, parameter fitting was carried out using five random restarts of the parameters on 90% of the training instances. The remaining 10% of the training instances were used for out-of-sample selection, using the maximum log likelihood as the criterion. Unless stated otherwise, in the multi-task models the relationship between tasks was parameterized as xx^⊤ + σ²I, the sum of a rank-1 matrix and a constant diagonal component. Both parameters, x and σ², were learnt by optimizing the marginal likelihood. Finally, we measured the predictive performance of the different methods via the squared Pearson correlation coefficient r² between the true and the predicted output, averaged over tasks. The squared correlation coefficient is commonly used in statistical genetics to evaluate the performance of different predictors [13].

4.1 Simulations

First, we considered simulated experiments to explore the runtime behavior and to find out whether there are settings in which GP-kronsum performs better than existing methods.

Runtime evaluation. As a first experiment, we examined the runtime behavior of our method as a function of the number of samples and the number of tasks. Both parameters were varied in the range {16, 32, 64, 128, 256}. The simulated dataset was drawn from the GP-kronsum model (Eqn. (3)) using a linear kernel for the sample covariance matrix R and rank-1 matrices for the task covariances C and Σ.
The runtime of this model was assessed for a single likelihood optimization on a single core of an AMD Opteron 6378 processor (2.4 GHz, 2048 KB cache, 512 GB memory) and compared to a naive implementation. The optimization was stopped prematurely if it did not converge within 10^4 seconds. In the experiments, we used a standard linear kernel on the sample features as the sample covariance while learning the task covariances. This modeling choice results in a steeper runtime increase with the number of tasks, due to the increasing number of model parameters to be estimated. Figure 1 demonstrates the significant speed-up: while our algorithm handles 256 samples/256 tasks with ease, the naive implementation failed to process more than 32 samples/32 tasks.

Unobserved causal processes induce structured noise. A common source of structured residuals are unobserved causal processes that are not captured by the inputs. To explore this setting, we generated simulated outputs from a sum of two different processes. For one of the processes, we assumed that the causal features X_obs were observed, whereas for the second process the causal features X_hidden were hidden and independent of the observed measurements. Both processes were simulated to have a linear effect on the output. The effect of the observed features was divided into an independent, task-specific effect and a common effect which, up to rescaling by r_common, is shared over all tasks:

Y_common = X_obs W_common,  W_common = r_common ⊗ w_common,  r_common ∼ N(0, I),  w_common ∼ N(0, I).

The trade-off parameter µ_common determines the extent of relatedness between tasks: Y_obs = µ_common Y_common + (1 − µ_common) Y_ind. The effect of the hidden features was simulated analogously.
A second trade-off parameter µ_hidden was introduced, controlling the ratio between the observed and the hidden effect:

Y = µ_signal [(1 − µ_hidden) Y_obs + µ_hidden Y_hidden] + (1 − µ_signal) Y_noise,

where Y_noise is Gaussian observation noise and µ_signal is a third trade-off parameter defining the ratio between signal and noise. To investigate the impact of the different trade-off parameters, we considered a series of datasets, varying one parameter while keeping the others fixed. We varied µ_signal in the range {0.1, 0.3, 0.5, 0.7, 0.9, 1.0}, µ_common in {0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0}, and µ_hidden in {0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0}, with default values marked in bold. Note that the best possible explained variance for the default setting is 45%, as the causal signal is split equally between the observed and the hidden process. For all simulation experiments, we created datasets with 200 samples and 10 tasks. The number of observed features was set to 200, as was the number of hidden features. For each simulation setting, we created 30 datasets. First, we considered the impact of variation in the signal strength µ_signal (Figure 2a), where the overall signal was divided equally between the observed and the hidden signal. Both GP-single and GP-kronsum performed better as the overall signal strength increased. The performance of GP-kronsum was superior, as the model can exploit the relatedness between the different tasks. Second, we explored the ability of the different methods to cope with an underlying hidden process (Figure 2b). In the absence of a hidden process (µ_hidden = 0), GP-kronprod and GP-kronsum performed very similarly, as both methods leverage the shared signal of the observed process, thereby outperforming the single-task GPs. However, as the magnitude of the hidden signal increases, GP-kronprod falsely attributes the task correlation entirely to the covariance term representing the observed process, which leads to a loss of predictive power.
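The generative mixing of observed, hidden, and noise components described above can be sketched as follows, using the stated default settings (µ_signal = 0.5, µ_hidden = 0.3); the helper name is ours, for illustration only:

```python
import numpy as np

def simulate_outputs(Y_obs, Y_hidden, Y_noise, mu_signal, mu_hidden):
    """Mix components as in the simulation setup:
    Y = mu_signal * [(1 - mu_hidden) * Y_obs + mu_hidden * Y_hidden]
        + (1 - mu_signal) * Y_noise
    """
    return (mu_signal * ((1 - mu_hidden) * Y_obs + mu_hidden * Y_hidden)
            + (1 - mu_signal) * Y_noise)

rng = np.random.default_rng(1)
N, T = 200, 10  # samples and tasks, matching the simulation setting
Y_obs, Y_hidden, Y_noise = (rng.standard_normal((N, T)) for _ in range(3))
Y = simulate_outputs(Y_obs, Y_hidden, Y_noise, mu_signal=0.5, mu_hidden=0.3)
assert Y.shape == (N, T)
```

(Here the component matrices are placeholders; in the actual experiments Y_obs and Y_hidden are themselves linear functions of observed and hidden features.)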
Last, we examined the ability of the different methods to exploit the relatedness between the tasks (Figure 2c). Since GP-single assumes independent tasks, the model performed very similarly across the full range of common signal. GP-kronprod suffered from the limitations described above, because the correlation between tasks in the hidden process increases synchronously with the correlation in the observed process as µ_common increases. In contrast, GP-kronsum could take advantage of the component shared between the tasks, as knowledge is transferred between them. GP-pool was consistently outperformed by all competitors, as two of its main assumptions are heavily violated: samples of different tasks do not share the same signal, and the residuals are neither independent of each other nor do they have the same noise level. In summary, the proposed model is robust across a range of settings and clearly outperforms its competitors when the tasks are related to each other and not all causal processes are observed.

4.2 Applications to phenotype prediction

As a real-world application, we considered phenotype prediction in statistical genetics. The aim of these experiments was to demonstrate the relevance of unobserved causes in real-world prediction problems, which hence warrant greater attention.

Gene expression prediction in yeast. We considered gene expression levels from a yeast genetics study [14]. The dataset comprised gene expression levels of 5,493 genes and 2,956 SNPs (features), measured for 109 yeast crosses. Expression levels for each cross were measured in two conditions (glucose and ethanol as carbon source), yielding a total of 218 samples. In this experiment, we treated the condition information as a hidden factor instead of regressing it out, analogous to the hidden process in the simulation experiments. The goal of this experiment was to investigate how the alternative methods deal with and correct for this hidden covariate.
We normalized all features and all tasks to zero mean and unit variance. Subsequently, we filtered out all genes that were not consistently expressed in at least 90% of the samples (z-score cutoff 1.5). We also discarded genes with low signal (< 10% of the variance) or that were close to noise-free (> 90% of the variance), reducing the number of genes to 123, which we considered as tasks in our experiment. The signal strength was estimated by a univariate GP model. We used a linear kernel calculated on the SNP features as the sample covariance.

Figure 2: Evaluation of alternative methods for different simulation settings. From left to right: (a) evaluation for varying signal strength; (b) evaluation for variable impact of the hidden signal; (c) evaluation for different strengths of relatedness between the tasks. In each simulation setting, all other parameters were kept constant at the default values, marked with the yellow star symbol.

Figure 3: Fitted task covariance matrices for gene expression levels in yeast. From left to right: (a) empirical covariance matrix of the gene expression levels; (b) signal covariance matrix learnt by GP-kronsum; (c) noise covariance matrix learnt by GP-kronsum. The ordering of the tasks was determined using hierarchical clustering on the empirical covariance matrix.

Figure 3 shows the empirical covariance and the task covariances learnt by GP-kronsum. Both learnt covariances are highly structured, demonstrating that the assumption of iid noise in the GP-kronprod model is violated in this dataset. While the signal task covariance matrix reflects genetic signals that are shared between the gene expression levels, the noise covariance matrix mainly captures the mean shift between the two conditions in which the gene expression levels were measured (Figure 4). To investigate the robustness of the reconstructed latent factor, we repeated the training 10 times.
The mean of the latent factors and its standard error was 0.2103 ± 0.0088 (averaged over factors and over the 10 best runs selected by out-of-sample likelihood), demonstrating the robustness of the inference. When considering the alternative methods for out-of-sample prediction, the proposed Kronecker sum model (r²(GP-kronsum) = 0.3322 ± 0.0014) performed significantly better than previous approaches (r²(GP-pool) = 0.0673 ± 0.0004, r²(GP-single) = 0.2594 ± 0.0011, r²(GP-kronprod) = 0.1820 ± 0.0020). The results are averages over 10 runs, and ± denotes the corresponding standard errors.

Multi-phenotype prediction in Arabidopsis thaliana. As a second dataset, we considered a genome-wide association study in Arabidopsis thaliana [15] to assess the prediction of developmental phenotypes from genomic data. This dataset consisted of 147 samples and 216,130 single nucleotide polymorphisms (SNPs, here used as features). As different tasks, we considered the phenotypes flowering period duration, life cycle period, maturation period, and reproduction period. To avoid outliers and issues due to non-Gaussianity, we preprocessed the phenotypic data by first converting it to ranks and then squashing the ranks through the inverse cumulative Gaussian distribution.

Figure 4: Correlation between the mean difference of the two conditions and the latent factors on the yeast dataset. Shown is the strength of the latent factor of the signal (left) and the noise (right) task covariance matrix as a function of the mean difference between the two environmental conditions (glucose vs. ethanol). Each dot corresponds to one gene expression level.

The SNPs in Arabidopsis thaliana are binary, and we discarded features with a frequency of less than 10% in all samples, resulting in 176,436 SNPs. Subsequently, we normalized the features to zero mean and unit variance. Again, we used a linear kernel on the SNPs as the sample covariance.
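The rank-based preprocessing described above (convert values to ranks, then push the ranks through the inverse Gaussian CDF) can be sketched with the Python standard library; this is a generic rank-inverse-normal transform, not the authors' exact code:

```python
import numpy as np
from statistics import NormalDist

def rank_gauss(y):
    """Map values to ranks, then through the inverse Gaussian CDF."""
    y = np.asarray(y, dtype=float)
    ranks = y.argsort().argsort() + 1      # ranks 1..n (ties broken arbitrarily)
    quantiles = ranks / (len(y) + 1.0)     # strictly inside (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(q) for q in quantiles])

z = rank_gauss([3.0, -1.0, 100.0, 0.5])
# The transform is monotone: the ordering of the inputs is preserved,
# while outliers (100.0) are pulled in toward the bulk of the distribution.
assert z.argsort().tolist() == [1, 3, 0, 2]
```

Dividing by n + 1 keeps the quantiles away from 0 and 1, where the inverse CDF diverges.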
Since the causal processes in Arabidopsis thaliana are complex, we allowed the rank of the signal and noise matrices to vary between 1 and 3. The appropriate rank complexity was selected on the 10% hold-out data of the training fold: we considered the average squared correlation coefficient on the hold-out fraction of the training data to select the model for prediction on the test dataset. Notably, for GP-kronprod the selected task complexity was rank(C) = 3, whereas GP-kronsum selected a simpler structure for the signal task covariance (rank(C) = 1) and a more complex noise covariance (rank(Σ) = 2). The cross-validation prediction performance of each model is shown in Table 1. For reproduction period, GP-single is outperformed by all other methods. For life cycle period, the noise estimates of the univariate GP model were close to zero, and hence all methods except GP-pool performed equally well, since the measurements of the other phenotypes do not provide additional information. For maturation period, GP-kronsum and GP-kronprod showed improved performance compared to GP-single and GP-pool. For flowering period duration, GP-kronsum outperformed its competitors.

Method        Flowering period duration  Life cycle period  Maturation period  Reproduction period
GP-pool       0.0502 ± 0.0025            0.1038 ± 0.0034    0.0460 ± 0.0024    0.0478 ± 0.0013
GP-single     0.0385 ± 0.0017            0.3500 ± 0.0069    0.1612 ± 0.0027    0.0272 ± 0.0024
GP-kronprod   0.0846 ± 0.0021            0.3417 ± 0.0062    0.1878 ± 0.0042    0.0492 ± 0.0032
GP-kronsum    0.1127 ± 0.0049            0.3485 ± 0.0068    0.1918 ± 0.0041    0.0501 ± 0.0033

Table 1: Predictive performance of the different methods on the Arabidopsis thaliana dataset. Shown is the squared correlation coefficient and its standard error (measured by repeating 10-fold cross-validation 10 times).
5 Discussion and conclusions

Multi-task Gaussian process models are a widely used tool in many application domains, ranging from the prediction of user preferences in collaborative filtering to the prediction of phenotypes in computational biology. Many of these prediction tasks are complex, and important causal features may remain unobserved or unmodeled. Nevertheless, most approaches in common use assume that the observation noise is independent between tasks. We here propose the GP-kronsum model, which efficiently models data whose noise is dependent between tasks, building on a sum-of-Kronecker-products covariance. In applications to statistical genetics, we have demonstrated (1) the advantages of the dependent noise model over an independent noise model, and (2) the feasibility of applying the model to larger datasets thanks to the efficient learning algorithm.

Acknowledgements. We thank Francesco Paolo Casale for helpful discussions. OS was supported by a Marie Curie FP7 fellowship. KB was supported by the Alfried Krupp Prize for Young University Teachers of the Alfried Krupp von Bohlen und Halbach-Stiftung.

References

[1] Edwin V. Bonilla, Kian Ming Adam Chai, and Christopher K. I. Williams. Multi-task Gaussian process prediction. In NIPS, 2007.
[2] Mauricio A. Álvarez and Neil D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In NIPS, pages 57–64, 2008.
[3] Edwin V. Bonilla, Felix V. Agakov, and Christopher K. I. Williams. Kernel multi-task learning using task-specific features. In AISTATS, 2007.
[4] Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, and Maneesh Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. In NIPS, pages 1881–1888, 2008.
[5] Oliver Stegle, Christoph Lippert, Joris M. Mooij, Neil D. Lawrence, and Karsten M. Borgwardt. Efficient inference in matrix-variate Gaussian models with iid observation noise.
In NIPS, pages 630–638, 2011.
[6] Karin Meyer. Estimating variances and covariances for multivariate animal models by restricted maximum likelihood. Genetics Selection Evolution, 23(1):67–83, 1991.
[7] V. Ducrocq and H. Chapuis. Generalizing the use of the canonical transformation for the solution of multivariate mixed model equations. Genetics Selection Evolution, 29(2):205–224, 1997.
[8] Hao Zhang. Maximum-likelihood estimation for multivariate spatial linear coregionalization models. Environmetrics, 18(2):125–139, 2007.
[9] Andrew Gordon Wilson, David A. Knowles, and Zoubin Ghahramani. Gaussian process regression networks. In ICML, 2012.
[10] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[11] Alfredo A. Kalaitzis and Neil D. Lawrence. Residual components analysis. In ICML, 2012.
[12] Ciyou Zhu, Richard H. Byrd, Peihuang Lu, and Jorge Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Softw., 23(4):550–560, December 1997.
[13] Ulrike Ober, Julien F. Ayroles, Eric A. Stone, Stephen Richards, et al. Using whole-genome sequence data to predict quantitative trait phenotypes in Drosophila melanogaster. PLoS Genetics, 8(5):e1002685, May 2012.
[14] Erin N. Smith and Leonid Kruglyak. Gene–environment interaction in yeast gene expression. PLoS Biology, 6(4):e83, 2008.
[15] S. Atwell, Y. S. Huang, B. J. Vilhjalmsson, Willems, et al. Genome-wide association study of 107 phenotypes in Arabidopsis thaliana inbred lines. Nature, 465(7298):627–631, June 2010.
Projecting Ising Model Parameters for Fast Mixing

Justin Domke, NICTA, The Australian National University, justin.domke@nicta.com.au
Xianghang Liu, NICTA, The University of New South Wales, xianghang.liu@nicta.com.au

Abstract

Inference in general Ising models is difficult, due to high treewidth making tree-based algorithms intractable. Moreover, when interactions are strong, Gibbs sampling may take exponential time to converge to the stationary distribution. We present an algorithm to project Ising model parameters onto a parameter set that is guaranteed to be fast mixing, under several divergences. We find that Gibbs sampling using the projected parameters is more accurate than with the original parameters when interaction strengths are strong and when limited time is available for sampling.

1 Introduction

High-treewidth graphical models typically yield distributions where exact inference is intractable. To cope with this, one often makes an approximation based on a tractable model. For example, given some intractable distribution q, mean-field inference [14] attempts to minimize KL(p||q) over p ∈ TRACT, where TRACT is the set of fully-factorized distributions. Similarly, structured mean-field minimizes the KL-divergence, but allows TRACT to be the set of distributions that obey some tree [16] or non-overlapping clustered [20] structure. In different ways, loopy belief propagation [21] and tree-reweighted belief propagation [19] also make use of tree-based approximations, while Globerson and Jaakkola [6] provide an approximate inference method based on exact inference in planar graphs with zero field. In this paper, we explore an alternative notion of a "tractable" model: "fast mixing" models, i.e. distributions that, while they may be high-treewidth, have parameter-space conditions guaranteeing that Gibbs sampling will quickly converge to the stationary distribution.
While the precise form of the parameter-space conditions is slightly technical (Sections 2–3), informally, it is simply that the interaction strengths between neighboring variables are not too strong. In the context of the Ising model, we attempt to use these models in the most basic way possible: by taking an arbitrary (slow-mixing) set of parameters and projecting onto the fast-mixing set, using four different divergences. First, we show how to project in the Euclidean norm, by iteratively thresholding a singular value decomposition (Theorem 7). Second, we experiment with projecting using the "zero-avoiding" divergence KL(q||p). Since this requires taking (intractable) expectations with respect to q, it is of only theoretical interest. Third, we suggest a novel "piecewise" approximation of the KL-divergence, where one drops edges from both q and p until a low-treewidth graph remains on which the exact KL-divergence can be calculated. Experimentally, this does not perform as well as the true KL-divergence, but is easy to evaluate. Fourth, we consider the "zero-forcing" divergence KL(p||q). Since this requires expectations with respect to p, which is constrained to be fast-mixing, it can be approximated by Gibbs sampling, and the divergence can be minimized through stochastic approximation. This can be seen as a generalization of mean-field where the set of approximating distributions is expanded from fully-factorized to fast-mixing.

2 Background

The literature on mixing times in Markov chains is extensive, including a recent textbook [10]. The presentation in the rest of this section is based on that of Dyer et al. [4]. Given a distribution p(x), one will often wish to draw samples from it. While in certain cases (e.g. the Normal distribution) one can obtain exact samples, for Markov random fields (MRFs) one must generally resort to iterative Markov chain Monte Carlo (MCMC) methods that obtain a sample only asymptotically.
In this paper, we consider the classic Gibbs sampling method [5], where one starts with some configuration x, and repeatedly picks a node i and samples x_i from p(x_i|x_{−i}). Under mild conditions, this can be shown to sample from a distribution that converges to p as t → ∞. It is common to use more sophisticated methods such as block Gibbs sampling, the Swendsen–Wang algorithm [18], or tree sampling [7]. In principle, each algorithm could have unique parameter-space conditions under which it is fast mixing. Here, we focus on the univariate case for simplicity, and because fast mixing of univariate Gibbs is sufficient for fast mixing of some other methods [13].

Definition 1. Given two finite distributions p and q, the total variation distance || · ||_TV is

||p(X) − q(X)||_TV = (1/2) Σ_x |p(X = x) − q(X = x)|.

We need a property of a distribution that can guarantee fast mixing. The dependency R_ij of x_i on x_j is defined by considering two configurations x and x′ that agree on all components except j, and measuring how much the conditional distribution of x_i can vary between them.

Definition 2. Given a distribution p, the dependency matrix R is defined by

R_ij = max_{x, x′ : x_{−j} = x′_{−j}} ||p(X_i|x_{−i}) − p(X_i|x′_{−i})||_TV.

Given some threshold ϵ, the mixing time is the number of iterations needed to guarantee that the total variation distance of the Gibbs chain to the stationary distribution is less than ϵ.

Definition 3. Suppose that {X^t} denotes the sequence of random variables corresponding to running Gibbs sampling on some distribution p. The mixing time τ(ϵ) is the minimum time t such that the total variation distance between X^t and the stationary distribution is at most ϵ. That is,

τ(ϵ) = min{t : d(t) < ϵ},  where  d(t) = max_x ||P(X^t|X^0 = x) − p(X)||_TV.

Unfortunately, the mixing time can be extremely long, which makes the use of Gibbs sampling delicate in practice.
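Definition 1 translates directly into code; a minimal sketch over finite distributions represented as probability vectors:

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance: 0.5 * sum_x |p(x) - q(x)|."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * float(np.abs(p - q).sum())

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
assert abs(tv_distance(p, q) - 0.4) < 1e-12
assert tv_distance(p, p) == 0.0
```

The same function, applied to conditional distributions p(X_i | x_{−i}) over pairs of configurations differing only at node j, yields the entries of the dependency matrix R.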
For example, for the two-dimensional Ising model with zero field and uniform interactions, it is known that the mixing time is polynomial (in the size of the grid) when the interaction strengths are below a threshold β_c, and exponential for stronger interactions [11]. For more general distributions, such tight bounds are not generally known, but one can still derive sufficient conditions for fast mixing. The main result we will use is the following [8].

Theorem 4. Consider the dependency matrix R corresponding to some distribution p(X_1, ..., X_n). For Gibbs sampling with random updates, if ||R||_2 < 1, the mixing time is bounded by

τ(ϵ) ≤ n / (1 − ||R||_2) · ln(n/ϵ).

Roughly speaking, if the spectral norm (maximum singular value) of R is less than one, rapid mixing will occur. A similar result holds in the case of systematic-scan updates [4, 8]. Some of the classic ways of establishing fast mixing can be seen as special cases of this. For example, the Dobrushin criterion is ||R||_1 < 1, which can be easier to verify in many cases, since ||R||_1 = max_j Σ_i |R_ij| does not require the computation of singular values. However, for symmetric matrices it can be shown that ||R||_2 ≤ ||R||_1, meaning the above result is tighter.

3 Mixing Time Bounds

For variables x_i ∈ {−1, +1}, an Ising model is of the form

p(x) = exp( Σ_{i,j} β_ij x_i x_j + Σ_i α_i x_i − A(β, α) ),

where β_ij is the interaction strength between variables i and j, α_i is the "field" for variable i, and A ensures normalization. This can be seen as a member of the exponential family p(x) = exp(θ · f(x) − A(θ)), where f(x) = {x_i x_j ∀(i, j)} ∪ {x_i ∀i} and θ contains both β and α.

Lemma 5. For an Ising model, the dependency matrix is bounded by

R_ij ≤ tanh |β_ij| ≤ |β_ij|.

Hayes [8] proves this for the case of constant β and zero field, but simple modifications to the proof give this result.
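Combining Theorem 4 with Lemma 5 gives a simple sufficient check: if the spectral norm of tanh |β| is below one, the stated mixing-time bound applies. A sketch (illustrative, not the authors' code):

```python
import numpy as np

def mixing_time_bound(beta, eps=1e-2):
    """Return the bound tau(eps) <= n / (1 - ||R||_2) * ln(n / eps) using
    R = tanh(|beta|) (Lemma 5), or None if ||R||_2 >= 1 (no guarantee)."""
    R = np.tanh(np.abs(beta))
    norm = np.linalg.norm(R, 2)   # spectral norm = maximum singular value
    if norm >= 1.0:
        return None
    n = beta.shape[0]
    return n / (1.0 - norm) * np.log(n / eps)

# Weakly coupled 3-node chain: fast mixing is guaranteed.
beta = 0.2 * (np.eye(3, k=1) + np.eye(3, k=-1))
assert mixing_time_bound(beta) is not None

# Strongly coupled chain: the sufficient condition fails.
strong = 2.0 * (np.eye(3, k=1) + np.eye(3, k=-1))
assert mixing_time_bound(strong) is None
```

Note that a failed check does not prove slow mixing; it only means this sufficient condition gives no guarantee.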
Thus, to summarize, an Ising model can be guaranteed to be fast mixing if the spectral norm of the absolute values of the interaction terms is less than one.

4 Projection

In this section, we imagine that we have some set of parameters θ, not necessarily fast mixing, and would like to obtain another set of parameters ψ that are as close as possible to θ but guaranteed to be fast mixing. This section derives a projection in the Euclidean norm, while Section 5 will build on this to consider other divergence measures. We will use the following standard result, which states that, given a matrix A, the closest matrix with a bounded spectral norm can be obtained by thresholding the singular values.

Theorem 6. If A has a singular value decomposition A = USV^⊤, and ||·||_F denotes the Frobenius norm, then

B = arg min_{B : ||B||_2 ≤ c} ||A − B||_F

can be obtained as B = US′V^⊤, where S′_ii = min(S_ii, c). We denote this projection by B = Π_c[A].

This is close to providing an algorithm for obtaining the closest set of Ising model parameters that obey a given spectral norm constraint. However, there are two issues. First, in general, even if A is sparse, the projected matrix B will be dense, meaning that projecting will destroy a sparse graph structure. Second, this result constrains the spectral norm of B itself, rather than that of R = |B|, which is what needs to be controlled. The theorem below provides a dual method that fixes these issues. Here, we take some matrix Z that encodes the graph structure, with Z_ij = 0 if (i, j) is an edge and Z_ij = 1 otherwise. Then, enforcing that B obeys the graph structure is equivalent to enforcing that Z_ij B_ij = 0 for all (i, j). Thus, finding the closest set of parameters B is equivalent to solving

min_{B,D} ||A − B||_F  subject to  ||D||_2 ≤ c,  Z_ij D_ij = 0,  D = |B|.   (1)

We find it convenient to solve this minimization by performing some manipulations and deriving a dual. The proof of this theorem is provided in the appendix.
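Theorem 6's projection Π_c is a few lines of numpy; a sketch (assuming the singular values are thresholded at c, consistent with the constraint ||B||_2 ≤ c):

```python
import numpy as np

def project_spectral_norm(A, c):
    """Pi_c[A]: closest matrix in Frobenius norm with spectral norm <= c,
    obtained by thresholding the singular values (Theorem 6)."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.minimum(S, c)) @ Vt

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
B = project_spectral_norm(A, c=0.9)
assert np.linalg.norm(B, 2) <= 0.9 + 1e-10   # constraint holds after projection
assert np.allclose(project_spectral_norm(B, 0.9), B)  # feasible points are fixed
```

As the text notes, this basic version densifies sparse A and constrains ||B||_2 rather than || |B| ||_2; the dual method of Theorem 7 addresses both issues.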
To accomplish the maximization of g over M and Λ, we use L-BFGS-B [1], with bound constraints used to enforce M ≥ 0. The following theorem uses the "triple dot product" notation A · B · C = Σ_ij A_ij B_ij C_ij.

Theorem 7. Define R = |A|. The minimization in Eq. 1 is equivalent to the problem max_{M ≥ 0, Λ} g(Λ, M), where the objective and gradient of g are, for D(Λ, M) = Π_c[R + M − Λ ⊙ Z],

g(Λ, M) = (1/2) ||D(Λ, M) − R||²_F + Λ · Z · D(Λ, M),   (2)
dg/dΛ = Z ⊙ D(Λ, M),   (3)
dg/dM = D(Λ, M).   (4)

5 Divergences

Again, we would like to find a parameter vector ψ that is close to a given vector θ but guaranteed to be fast mixing, now with several notions of "closeness" that vary in terms of accuracy and computational convenience. Formally, if Ψ is the set of parameters that we can guarantee to be fast mixing, and D(θ, ψ) is a divergence between θ and ψ, then we would like to solve

arg min_{ψ ∈ Ψ} D(θ, ψ).   (5)

As we will see, in selecting D there appears to be something of a trade-off between the quality of the approximation and the ease of computing the projection in Eq. 5. In this section, we work with the generic exponential family representation p(x; θ) = exp(θ · f(x) − A(θ)). We use µ to denote the mean value of f. By a standard result, this is equal to the gradient of A, i.e.

µ(θ) = Σ_x p(x; θ) f(x) = ∇A(θ).

5.1 Euclidean Distance

The simplest divergence is simply the l2 distance between the parameter vectors, D(θ, ψ) = ||θ − ψ||_2. For the Ising model, Theorem 7 provides a method to compute the projection arg min_{ψ ∈ Ψ} ||θ − ψ||_2. While simple, this has no obvious probabilistic interpretation, and other divergences perform better in the experiments below. However, it also forms the basis of our projected gradient descent strategy for computing the projection in Eq. 5 under more general divergences D. Specifically, we do this by iterating

1. ψ′ ← ψ − λ (d/dψ) D(θ, ψ)
2. ψ ← arg min_{ψ ∈ Ψ} ||ψ′ − ψ||_2

for some step-size λ.
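The two-step iteration above is ordinary projected gradient descent. A toy sketch, using the spectral-norm ball as a stand-in for Ψ and the Euclidean divergence (for which the fixed point is just the direct projection); not the paper's implementation:

```python
import numpy as np

def proj_spectral(A, c):
    # Euclidean projection onto {B : ||B||_2 <= c} via singular value thresholding.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.minimum(S, c)) @ Vt

def projected_gradient_descent(theta, grad_fn, c, lam=0.1, iters=200):
    """Iterate: psi' <- psi - lam * dD/dpsi ; psi <- project(psi')."""
    psi = proj_spectral(theta, c)          # feasible starting point
    for _ in range(iters):
        psi = proj_spectral(psi - lam * grad_fn(psi), c)
    return psi

rng = np.random.default_rng(4)
theta = rng.standard_normal((5, 5))
# Toy divergence D = 0.5 * ||theta - psi||_F^2, with gradient psi - theta.
psi = projected_gradient_descent(theta, lambda p: p - theta, c=1.0)
assert np.linalg.norm(psi, 2) <= 1.0 + 1e-8
# For the Euclidean divergence, PGD recovers the direct projection.
assert np.allclose(psi, proj_spectral(theta, 1.0), atol=1e-6)
```

For the other divergences in Section 5, only `grad_fn` changes (possibly to a sampling-based estimate), while the projection step stays the same.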
In some cases, dD/dψ can be calculated exactly, and this is simply projected gradient descent. In other cases, one needs to estimate dD/dψ by sampling from ψ. As discussed below, we do this by maintaining a "pool" of samples: in each iteration, a few Markov chain steps are applied with the current parameters, and the gradient is then estimated using them. Since the gradients estimated at each time-step are dependent, this can be seen as an instance of Ergodic Mirror Descent [3]. This guarantees convergence if the number of Markov chain steps and the step-size λ are both functions of the total number of optimization iterations.

5.2 KL-Divergence

Perhaps the most natural divergence to use would be the "inclusive" KL-divergence

D(θ, ψ) = KL(θ||ψ) = Σ_x p(x; θ) log [ p(x; θ) / p(x; ψ) ].   (6)

This has the "zero-avoiding" property [12] that ψ will tend to assign some probability to all configurations that θ assigns nonzero probability to. It is easy to show that the derivative is

dD(θ, ψ)/dψ = µ(ψ) − µ(θ),   (7)

where µ(θ) = E_θ[f(X)]. Unfortunately, this requires inference with respect to both parameter vectors θ and ψ. Since ψ will be enforced to be fast-mixing during optimization, one could approximate µ(ψ) by sampling. However, θ is presumed to be slow-mixing, making µ(θ) difficult to compute. Thus, this divergence is only practical on low-treewidth "toy" graphs.

5.3 Piecewise KL-Divergences

Inspired by the piecewise likelihood [17] and likelihood approximations based on mixtures of trees [15], we seek tractable approximations of the KL-divergence based on tractable subgraphs. Our motivation is the following: if θ and ψ define the same distribution, then if a certain set of edges are removed from both, they should continue to define the same distribution¹. Thus, given some graph T, we define the "projection" θ(T) onto the graph by setting all edge parameters to zero if they are not part of T.
Then, given a set of graphs T, the piecewise KL-divergence is

D(θ, ψ) = max_T KL(θ(T)||ψ(T)).

Computing the derivative of this divergence is not hard: one simply computes the KL-divergence for each graph, and uses the gradient as in Eq. 7 for the maximizing graph. There is some flexibility in selecting the graphs T. In the simplest case, one could select a set of trees (ensuring that each edge is covered by some tree), which makes it easy to compute the KL-divergence on each tree using the sum-product algorithm. We will also experiment with selecting low-treewidth graphs, where exact inference can take place using the junction tree algorithm.

5.4 Reversed KL-Divergence

We also consider the "zero-forcing" KL-divergence

D(θ, ψ) = KL(ψ||θ) = Σ_x p(x; ψ) log [ p(x; ψ) / p(x; θ) ].

Theorem 8. The divergence D(θ, ψ) = KL(ψ||θ) has the gradient

(d/dψ) D(θ, ψ) = Σ_x p(x; ψ) [(ψ − θ) · f(x)] (f(x) − µ(ψ)).

Arguably, using this divergence is inferior to the "zero-avoiding" KL-divergence. For example, since the parameters ψ may fail to put significant probability at configurations where θ does, using importance sampling to reweight samples from ψ to estimate expectations with respect to θ could have high variance. Further, this divergence can be non-convex with respect to ψ. Nevertheless, it often works well in practice. Minimizing this divergence under the constraint that the dependency matrix R corresponding to ψ has a limited spectral norm is closely related to naive mean-field, which can be seen as a degenerate case in which one constrains R to have zero norm. This divergence is easier to work with than the "zero-avoiding" KL-divergence in Eq. 6, since it involves taking expectations with respect to ψ rather than θ: since ψ is enforced to be fast-mixing, these expectations can be approximated by sampling. Specifically, suppose that one has generated a set of samples x¹, ..., x^K using the current parameters ψ.
Then, one can first approximate the marginals by µ̂ = (1/K) Σ_{k=1}^{K} f(x^k), and then approximate the gradient by

ĝ = (1/K) Σ_{k=1}^{K} [(ψ − θ) · f(x^k)] (f(x^k) − µ̂).   (8)

It is a standard result that if two estimators are unbiased and independent, their product is also unbiased. Thus, if one used separate sets of perfect samples to estimate µ̂ and ĝ, then ĝ would be an unbiased estimator of dD/dψ. In practice, of course, we generate the samples by Gibbs sampling, so they are not quite perfect. We find in practice that using the same set of samples twice makes little difference, and do so in the experiments.

¹Technically, here, we assume that the exponential family is minimal. However, in the case of an overcomplete exponential family, enforcing this will simply ensure that θ and ψ use the same reparameterization.

Figure 1: The mean error of estimated univariate marginals on 8×8 grids (top row) and low-density random graphs (bottom row), with mixed and attractive interactions, as a function of interaction strength. The panels compare LBP, TRW, mean-field, Gibbs sampling with the original parameters, and 30k iterations of Gibbs sampling after projection under the Euclidean distance, piecewise KL(θ||ψ) (treewidth 1 and 2), KL(ψ||θ), and KL(θ||ψ) divergences.
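A toy end-to-end sketch of the estimator in Eq. 8: Gibbs sampling from a small Ising model parameterized by ψ, followed by the sample-based gradient estimate. The variable names, the sufficient-statistics layout, and the choice θ = 0 are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# Symmetric couplings with zero diagonal (psi plays the fast-mixing role).
W = 0.3 * np.triu(rng.standard_normal((n, n)), 1)
W = W + W.T
b = 0.1 * rng.standard_normal(n)

def gibbs_sample(W, b, steps=2000):
    """Single-site Gibbs sampling for an Ising model with x_i in {-1, +1}."""
    x = rng.choice([-1.0, 1.0], size=n)
    samples = []
    for _ in range(steps):
        i = rng.integers(n)
        field = W[i] @ x + b[i]              # W[i, i] = 0, so x_i drops out
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1.0 if rng.random() < p_plus else -1.0
        samples.append(x.copy())
    return np.array(samples)

def suff_stats(x, W):
    """f(x): upper-triangular pairwise products followed by node values."""
    iu = np.triu_indices_from(W, 1)
    return np.concatenate([np.outer(x, x)[iu], x])

X = gibbs_sample(W, b)[-500:]                # keep late samples from the chain
F = np.array([suff_stats(x, W) for x in X])
mu_hat = F.mean(axis=0)                      # sample estimate of mu(psi)
theta = np.zeros(F.shape[1])                 # toy choice of target parameters
psi = np.concatenate([W[np.triu_indices_from(W, 1)], b])
# Eq. (8): g_hat = mean_k [ (psi - theta) . f(x_k) ] * (f(x_k) - mu_hat)
g_hat = np.mean((F @ (psi - theta))[:, None] * (F - mu_hat), axis=0)
assert g_hat.shape == psi.shape
```

As noted in the text, reusing the same samples for µ̂ and ĝ introduces a slight dependence between the two estimates, but in practice it makes little difference.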
To approximately match the computational effort of projection (Table 1), sampling on the original parameters with 250k iterations is also included as a lower curve. (Full results in appendix.)

6 Experiments

Our experimental evaluation follows that of Hazan and Shashua [9] in evaluating the accuracy of the methods using the Ising model in various configurations. In the experiments, we approximate randomly generated Ising models with rapid-mixing distributions using the projection algorithms described previously. Then, the marginals of the rapid-mixing approximate distributions are compared against those of the target distributions by running a Gibbs chain on each. As the accuracy measure, we calculate the mean absolute distance between the estimated marginals and the true marginals, the latter computed via the exact junction-tree algorithm. We evaluate projecting under the Euclidean distance (Section 5.1), the piecewise divergence (Section 5.3), and the zero-forcing KL-divergence KL(ψ∥θ) (Section 5.4). On small graphs, it is possible to minimize the zero-avoiding KL-divergence KL(θ∥ψ) by computing marginals using the junction-tree algorithm. However, since minimizing this KL-divergence leads to exact marginal estimates, it doesn't provide a useful measure of marginal accuracy. Our methods are compared with four other inference algorithms, namely loopy belief propagation (LBP), tree-reweighted belief propagation (TRW), naive mean-field (MF), and Gibbs sampling on the original parameters. LBP, MF, and TRW are among the most widely applied variational methods for approximate inference. The MF algorithm uses a fully factorized distribution as the tractable family and can be viewed as an extreme case of minimizing the zero-forcing KL-divergence KL(ψ∥θ) under the constraint of zero spectral norm; the tractable family that it uses guarantees "instant" mixing but is much more restrictive. Theoretically, Gibbs sampling on the original parameters will produce highly accurate marginals if run long enough.
However, this can take exponentially long, and convergence is generally hard to diagnose [2]. In contrast, Gibbs sampling on the rapid-mixing approximation is guaranteed to converge rapidly but will yield less accurate marginals asymptotically. Thus, we also include time-accuracy comparisons between these two strategies in the experiments.

                           Grid, Strength 1.5          Grid, Strength 3            Random Graph, Strength 3
                           Gibbs Steps  SVDs           Gibbs Steps  SVDs           Gibbs Steps  SVDs
  30,000 Gibbs steps       30k / 0.17s  —              30k / 0.17s  —              30k / 0.04s  —
  250,000 Gibbs steps      250k / 1.4s  —              250k / 1.4s  —              250k / 0.33s —
  Euclidean Projection     —            22 / 0.04s     —            78 / 0.15s     —            17 / .0002s
  Piecewise-1 Projection   —            322 / 0.61s    —            547 / 1.0s     —            408 / 0.047s
  KL Projection            30k / 0.17s  265 / 0.55s    30k / 0.17s  471 / 0.94s    30k / 0.04s  300 / 0.037s

Table 1: Running times on various attractive graphs, showing the number of Gibbs passes and singular value decompositions, as well as the amount of computation time. The random graph is based on an edge density of 0.7. Mean-field, loopy BP, and TRW take less than 0.01s.

6.1 Configurations

Two types of graph topologies are used: two-dimensional 8 × 8 grids and random graphs with 10 nodes. Each edge is independently present with probability pe ∈ {0.3, 0.5, 0.7}. Node parameters θi are drawn uniformly from unif(−dn, dn), and we fix the field strength to dn = 1.0. Edge parameters θij are drawn uniformly from unif(−de, de) or unif(0, de) to obtain mixed or attractive interactions, respectively. We generate graphs with interaction strengths de = 0, 0.5, ..., 4. All results are averaged over 50 random trials. To calculate piecewise divergences, it remains to specify the set of subgraphs T; these can be any tractable subgraphs of the original distribution. For the grids, one straightforward choice is to use the horizontal and vertical chains as subgraphs. We also test with chains of treewidth 2.
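The instance generation just described takes only a few lines. A sketch of our reading of the setup (names are illustrative, not the authors' code):

```python
import numpy as np

def random_ising(n_nodes, p_edge, d_n, d_e, attractive, rng):
    """Draw node fields and edge strengths for one random Ising instance."""
    theta_node = rng.uniform(-d_n, d_n, size=n_nodes)   # theta_i ~ unif(-dn, dn)
    lo = 0.0 if attractive else -d_e                    # unif(0, de) vs unif(-de, de)
    theta_edge = {}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p_edge:                   # edge present w.p. p_edge
                theta_edge[(i, j)] = rng.uniform(lo, d_e)
    return theta_node, theta_edge
```

For the grid topologies, the same edge-weight draw is simply applied to every grid edge with p_edge = 1.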
For random graphs, we use as the set of subgraphs a set of random spanning trees that together cover every edge of the original graph. A stochastic gradient descent algorithm is applied to minimize the zero-forcing KL-divergence KL(ψ∥θ). In this algorithm, a "pool" of samples is repeatedly used to estimate gradients as in Eq. 8. After each parameter update, each sample is updated by a single Gibbs step, consisting of one pass over all variables. The performance of this algorithm can be affected by several parameters, including the gradient step size, the size of the sample pool, the number of Gibbs updates, and the total number of iterations. (This algorithm can be seen as an instance of Ergodic Mirror Descent [3].) Without intensive tuning of these parameters, we choose a constant step size of 0.1, a sample pool size of 500, and 60 total iterations, which performed reasonably well in practice. For each original or approximate distribution, a single Gibbs chain is run on the final parameters, and marginals are estimated from the samples drawn. Each Gibbs iteration is one systematic-scan pass over the variables in a fixed order. Note that this does not take into account the computational effort deployed during projection, which ranges from 30,000 total Gibbs iterations with repeated Euclidean projection (KL(ψ∥θ)) to none at all (original parameters). In our experience, more aggressive parameters can make this procedure more accurate than Gibbs sampling in a comparison of total computational effort, but such scheduling also tends to reduce the accuracy of the final parameters, making results more difficult to interpret. In Section 3.2, we show that for Ising models, a sufficient condition for rapid mixing is that the spectral norm of the pairwise weight matrix be less than 1.0.
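The Gibbs update used throughout (one systematic-scan pass over all variables) is, for an Ising model on {−1, +1}ⁿ, a few lines; a sketch with our own naming:

```python
import numpy as np

def gibbs_pass(x, W, b, rng):
    """One systematic-scan Gibbs pass for p(x) ∝ exp(x'Wx/2 + b'x), x in {-1,+1}^n.

    W : (n, n) symmetric interaction matrix with zero diagonal; b : (n,) fields.
    x is updated in place and returned.
    """
    for i in range(len(x)):
        field = W[i] @ x + b[i]
        # p(x_i = +1 | rest) = sigmoid(2 * field)
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1 if rng.random() < p_plus else -1
    return x
```

The sample-pool procedure above then alternates one call of this pass per pooled sample with one gradient step of Eq. 8.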
However, we find in practice that using a spectral norm bound of 2.5 instead of 1.0 can still preserve the rapid-mixing property while giving a better approximation to the original distribution. (See Section 7 for a discussion.)

7 Discussion

Inference in high-treewidth graphical models is intractable, which has motivated several classes of approximations based on tractable families. In this paper, we have proposed a new notion of "tractability", insisting not that a graph has a fast algorithm for exact inference, but only that it obeys parameter-space conditions ensuring that Gibbs sampling will converge rapidly to the stationary distribution. For the case of Ising models, we use a simple condition that can guarantee rapid mixing, namely that the spectral norm of the matrix of interaction strengths is less than one.

Figure 2: Example plots of the accuracy of obtained marginals vs. the number of samples. Top: grid graphs. Bottom: low-density random graphs. (Full results in appendix.)
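For a uniform interaction strength β on a grid, the matrix of interaction strengths is β times the grid adjacency matrix, so the spectral-norm condition reduces to β < 1/λmax(A). A quick numerical check (on the infinite grid λmax → 4):

```python
import numpy as np

def grid_adjacency(n):
    """Adjacency matrix of an n-by-n grid graph."""
    A = np.zeros((n * n, n * n))
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n:                       # right neighbor
                A[i, i + 1] = A[i + 1, i] = 1.0
            if r + 1 < n:                       # bottom neighbor
                A[i, i + n] = A[i + n, i] = 1.0
    return A

A = grid_adjacency(8)
lam_max = np.linalg.eigvalsh(A)[-1]   # eigvalsh returns ascending eigenvalues
beta_crit = 1.0 / lam_max             # largest beta with ||beta * A||_2 < 1
```

For the 8x8 grid, λmax = 4 cos(π/9) ≈ 3.76, so the condition holds up to β ≈ 0.266, slightly above the infinite-grid figure of 0.25.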
Given an intractable set of parameters, we consider using this approximate family by "projecting" the intractable distribution onto it under several divergences. First, we consider the Euclidean distance between parameters and derive a dual algorithm to solve the projection, based on iterative thresholding of the singular value decomposition. Next, we extend this to more probabilistic divergences. Firstly, we consider a novel "piecewise" divergence, based on computing the exact KL-divergence on several low-treewidth subgraphs. Secondly, we consider projecting under the KL-divergence. This requires a stochastic approximation approach in which one repeatedly generates samples from the model and projects in the Euclidean norm after taking a gradient step. We compare experimentally to Gibbs sampling on the original parameters, along with several standard variational methods. The proposed methods are more accurate than variational approximations. Given enough time, Gibbs sampling using the original parameters will always be more accurate, but with finite time, projecting onto the fast-mixing set generally gives better results. Future work might extend this approach to general Markov random fields. This entails two technical challenges: first, one must find a bound on the dependency matrix for general MRFs, and secondly, an algorithm is needed to project onto the fast-mixing set defined by this bound. Fast-mixing distributions might also be used for learning. E.g., if one is doing maximum likelihood learning using MCMC to estimate the likelihood gradient, it would be natural to constrain the parameters to a fast-mixing set. One weakness of the proposed approach is the apparent looseness of the spectral norm bound. For the two-dimensional Ising model with no univariate terms and a constant interaction strength β, there is a well-known threshold βc = (1/2) ln(1 + √2) ≈ .4407, obtained using more advanced techniques than the spectral norm [11].
Roughly, for β < βc, mixing is known to occur quickly (polynomial in the grid size), while for β > βc, the mixing time is exponential. On the other hand, the spectral norm bound already equals one at β = .25, meaning the bound is conservative in this case by a factor of βc/.25 ≈ 1.76. A tighter bound on when rapid mixing occurs would be more informative.

References

[1] Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput., 16(5):1190–1208, 1995.
[2] Mary Kathryn Cowles and Bradley P. Carlin. Markov chain Monte Carlo convergence diagnostics: a comparative review. Journal of the American Statistical Association, 91:883–904, 1996.
[3] John C. Duchi, Alekh Agarwal, Mikael Johansson, and Michael I. Jordan. Ergodic mirror descent. SIAM Journal on Optimization, 22(4):1549–1578, 2012.
[4] Martin E. Dyer, Leslie Ann Goldberg, and Mark Jerrum. Matrix norms and rapid mixing for spin systems. Ann. Appl. Probab., 19:71–107, 2009.
[5] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell., 6(6):721–741, 1984.
[6] Amir Globerson and Tommi Jaakkola. Approximate inference using planar graph decomposition. In NIPS, pages 473–480, 2006.
[7] Firas Hamze and Nando de Freitas. From fields to trees. In UAI, 2004.
[8] Thomas P. Hayes. A simple condition implying rapid mixing of single-site dynamics on spin systems. In FOCS, pages 39–46, 2006.
[9] Tamir Hazan and Amnon Shashua. Convergent message-passing algorithms for inference over general graphs with convex free energies. In UAI, pages 264–273, 2008.
[10] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2006.
[11] Eyal Lubetzky and Allan Sly. Critical Ising on the square lattice mixes in polynomial time. Commun. Math. Phys., 313(3):815–836, 2012.
[12] Thomas Minka.
Divergence measures and message passing. Technical report, 2005.
[13] Yuval Peres and Peter Winkler. Can extra updates delay mixing? arXiv:1112.0603, 2011.
[14] C. Peterson and J. R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995–1019, 1987.
[15] Patrick Pletscher, Cheng S. Ong, and Joachim M. Buhmann. Spanning tree approximations for conditional random fields. In AISTATS, 2009.
[16] Lawrence K. Saul and Michael I. Jordan. Exploiting tractable substructures in intractable networks. In NIPS, pages 486–492, 1995.
[17] Charles Sutton and Andrew McCallum. Piecewise training for structured prediction. Machine Learning, 77:165–194, 2009.
[18] Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58:86–88, 1987.
[19] Martin Wainwright, Tommi Jaakkola, and Alan Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[20] Eric P. Xing, Michael I. Jordan, and Stuart Russell. A generalized mean field algorithm for variational inference in exponential families. In UAI, 2003.
[21] Jonathan Yedidia, William Freeman, and Yair Weiss. Constructing free energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51:2282–2312, 2005.
Low-rank matrix reconstruction and clustering via approximate message passing

Ryosuke Matsushita, NTT DATA Mathematical Systems Inc., 1F Shinanomachi Rengakan, 35, Shinanomachi, Shinjuku-ku, Tokyo, 160-0016, Japan. matsur8@gmail.com
Toshiyuki Tanaka, Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida Hon-machi, Sakyo-ku, Kyoto-shi, 606-8501, Japan. tt@i.kyoto-u.ac.jp

Abstract

We study the problem of reconstructing low-rank matrices from their noisy observations. We formulate the problem in the Bayesian framework, which allows us to exploit structural properties of matrices in addition to low-rankedness, such as sparsity. We propose an efficient approximate message passing algorithm, derived from the belief propagation algorithm, to perform the Bayesian inference for matrix reconstruction. We have also successfully applied the proposed algorithm to a clustering problem, by reformulating it as a low-rank matrix reconstruction problem with an additional structural property. Numerical experiments show that the proposed algorithm outperforms Lloyd's K-means algorithm.

1 Introduction

Low-rankedness of matrices has frequently been exploited when one reconstructs a matrix from its noisy observations. In such problems, there are often demands to incorporate additional structural properties of matrices besides low-rankedness. In this paper, we consider the case where the matrix A0 ∈ R^{m×N} to be reconstructed factors as A0 = U0 V0^⊤, with U0 ∈ R^{m×r}, V0 ∈ R^{N×r} (r ≪ m, N), and where one knows structural properties of the factors U0 and V0 a priori. Sparseness and non-negativity of the factors are popular examples of such structural properties [1, 2]. Since the properties of the factors to be exploited vary according to the problem, it is desirable that a reconstruction method have enough flexibility to incorporate a wide variety of properties.
The Bayesian approach achieves such flexibility by allowing us to select prior distributions of U0 and V0 reflecting a priori knowledge of the structural properties. The Bayesian approach, however, often involves computationally expensive operations such as high-dimensional integrations, thereby requiring approximate inference methods in practical implementations. Monte Carlo sampling methods and variational Bayes methods have been proposed for low-rank matrix reconstruction to meet this requirement [3–5]. We present in this paper an approximate message passing (AMP) based algorithm for Bayesian low-rank matrix reconstruction. Developed in the context of compressed sensing, the AMP algorithm reconstructs sparse vectors from their linear measurements at low computational cost and achieves a certain theoretical limit [6]. AMP algorithms can also be used to approximate Bayesian inference with a large class of prior distributions of signal vectors and noise distributions [7]. These successes of AMP algorithms motivate the use of the same idea for low-rank matrix reconstruction. The IterFac algorithm for the rank-one case [8] has been derived as an AMP algorithm. An AMP algorithm for the general-rank case is proposed in [9], which, however, can only treat estimation of posterior means. We extend their algorithm so that one can deal with other estimations, such as maximum a posteriori (MAP) estimation. This is the first contribution of this paper.

As the second contribution, we apply the derived AMP algorithm to K-means type clustering to obtain a novel efficient clustering algorithm. It is based on the observation that our formulation of the low-rank matrix reconstruction problem includes the clustering problem as a special case.
Although the idea of applying low-rank matrix reconstruction to clustering is not new [10, 11], our proposed algorithm is, to our knowledge, the first that directly deals with the constraint that each datum be assigned to exactly one cluster within the framework of low-rank matrix reconstruction. We present results of numerical experiments, which show that the proposed algorithm outperforms Lloyd's K-means algorithm [12] when data are high-dimensional. Recently, AMP algorithms for dictionary learning and blind calibration [13] and for matrix reconstruction with a generalized observation model [14] were proposed. Although our work has some similarities to these studies, it differs in that we fix the rank r rather than the ratio r/m when taking the limit m, N → ∞ in the derivation of the algorithm. Another difference is that our formulation, explained in the next section, does not assume statistical independence among the components of each row of U0 and V0. A detailed comparison among these algorithms remains to be made.

2 Problem setting

2.1 Low-rank matrix reconstruction

We consider the following problem setting. A matrix A0 ∈ R^{m×N} to be estimated is defined by two matrices U0 := (u_{0,1}, ..., u_{0,m})^⊤ ∈ R^{m×r} and V0 := (v_{0,1}, ..., v_{0,N})^⊤ ∈ R^{N×r} as A0 := U0 V0^⊤, where u_{0,i}, v_{0,j} ∈ R^r. We consider the case where r ≪ m, N. Observations of A0 are corrupted by additive noise W ∈ R^{m×N}, whose components W_{i,j} are i.i.d. Gaussian random variables following N(0, mτ). Here τ > 0 is a noise variance parameter, and N(a, σ²) denotes the Gaussian distribution with mean a and variance σ². The factor m in the noise variance is introduced to allow a proper scaling in the limit where m and N go to infinity at the same rate, which is employed in deriving the algorithm. An observed matrix A ∈ R^{m×N} is given by A := A0 + W. Reconstructing A0 and (U0, V0) from A is the problem considered in this paper.
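The observation model is straightforward to simulate; a sketch with illustrative dimensions (note the m·τ noise scaling):

```python
import numpy as np

def generate_observation(m, N, r, tau, rng):
    """Draw A = U0 V0^T + W with i.i.d. noise W_ij ~ N(0, m * tau)."""
    U0 = rng.standard_normal((m, r))
    V0 = rng.standard_normal((N, r))
    W = rng.normal(0.0, np.sqrt(m * tau), size=(m, N))
    return U0 @ V0.T + W, U0, V0
```

Here the factors are drawn from standard Gaussians purely for illustration; the point of the formulation is that U0 and V0 may follow arbitrary structured priors.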
We take the Bayesian approach to address this problem, in which one requires prior distributions of the variables to be estimated, as well as conditional distributions relating observations to those variables. These distributions need not be the true ones: in some cases the true ones are not available, so that one has to assume them arbitrarily, and in some other cases one expects computational advantages from assuming them in some specific manner. In this paper, we suppose that one uses the true conditional distribution

p(A|U0, V0) = (2πmτ)^{−mN/2} exp( −(1/(2mτ)) ∥A − U0 V0^⊤∥_F² ),   (1)

where ∥·∥_F denotes the Frobenius norm. Meanwhile, we suppose that the assumed prior distributions of U0 and V0, denoted by p̂_U and p̂_V, respectively, may differ from the true distributions p_U and p_V. We restrict p̂_U and p̂_V to distributions of the form p̂_U(U0) = ∏_i p̂_u(u_{0,i}) and p̂_V(V0) = ∏_j p̂_v(v_{0,j}), respectively, which allows us to construct computationally efficient algorithms. When U ∼ p̂_U(U) and V ∼ p̂_V(V), the posterior distribution of (U, V) given A is

p̂(U, V|A) ∝ exp( −(1/(2mτ)) ∥A − UV^⊤∥_F² ) p̂_U(U) p̂_V(V).   (2)

Prior probability density functions (p.d.f.s) p̂_u and p̂_v can be improper, that is, they can integrate to infinity, as long as the posterior p.d.f. (2) is proper. We also consider cases where the assumed rank r̂ may differ from the true rank r. We thus suppose that the estimates U and V are of size m × r̂ and N × r̂, respectively. We consider two problems appearing in the Bayesian approach. The first, which we call the marginalization problem, is to calculate the marginal posterior distributions given A,

p̂_{i,j}(u_i, v_j|A) := ∫ p̂(U, V|A) ∏_{k≠i} du_k ∏_{l≠j} dv_l.   (3)

These are used to calculate the posterior mean E[UV^⊤|A] and the marginal MAP estimates u_i^MMAP := arg max_u ∫ p̂_{i,j}(u, v|A) dv and v_j^MMAP := arg max_v ∫ p̂_{i,j}(u, v|A) du.
Because calculation of p̂_{i,j}(u_i, v_j|A) typically involves high-dimensional integrations requiring high computational cost, approximation methods are needed. The second problem, which we call the MAP problem, is to calculate the MAP estimate arg max_{U,V} p̂(U, V|A). It is formulated as the following optimization problem:

min_{U,V} C^MAP(U, V),   (4)

where C^MAP(U, V) is the negative logarithm of (2):

C^MAP(U, V) := (1/(2mτ)) ∥A − UV^⊤∥_F² − Σ_{i=1}^m log p̂_u(u_i) − Σ_{j=1}^N log p̂_v(v_j).   (5)

Because ∥A − UV^⊤∥_F² is a non-convex function of (U, V), it is generally hard to find the global optimum of (4), and therefore approximation methods are needed in this problem as well.

2.2 Clustering as low-rank matrix reconstruction

A clustering problem can be formulated as a problem of low-rank matrix reconstruction [11]. Suppose that v_{0,j} ∈ {e_1, ..., e_r}, j = 1, ..., N, where e_l ∈ {0, 1}^r is the vector whose lth component is 1 and whose other components are 0. When V0 and U0 are fixed, a_j follows one of the r Gaussian distributions N(ũ_{0,l}, mτI), l = 1, ..., r, where ũ_{0,l} is the lth column of U0. We regard each Gaussian distribution as defining a cluster, ũ_{0,l} being the center of cluster l and v_{0,j} representing the cluster assignment of the datum a_j. One can then perform clustering on the dataset {a_1, ..., a_N} by reconstructing U0 and V0 from A = (a_1, ..., a_N) under the structural constraint that every row of V0 belong to {e_1, ..., e_r̂}, where r̂ is an assumed number of clusters. Let us consider maximum likelihood estimation arg max_{U,V} p(A|U, V), or equivalently, MAP estimation with the (improper) uniform prior distributions p̂_u(u) = 1 and p̂_v(v) = r̂^{−1} Σ_{l=1}^{r̂} δ(v − e_l). The corresponding MAP problem is

min_{U ∈ R^{m×r̂}, V ∈ {0,1}^{N×r̂}} ∥A − UV^⊤∥_F²  subject to  v_j ∈ {e_1, ..., e_r̂}.
(6)

When V satisfies the constraints, the objective function ∥A − UV^⊤∥_F² = Σ_{j=1}^N Σ_{l=1}^{r̂} ∥a_j − ũ_l∥₂² I(v_j = e_l) is the sum of squared distances, each between a datum and the center of the cluster to which the datum is assigned. The optimization problem (6), its objective function, and clustering based on it are called in this paper the K-means problem, the K-means loss function, and K-means clustering, respectively. One can also use marginal MAP estimation for clustering. If U0 and V0 follow p̂_U and p̂_V, respectively, the marginal MAP estimation is optimal in the sense that it maximizes the expected accuracy with respect to p̂(V0|A). Here, accuracy is defined as the fraction of correctly assigned data among all data. We call clustering via approximate marginal MAP estimation maximum accuracy clustering, even when incorrect prior distributions are used.

3 Previous work

Existing methods for approximately solving the marginalization problem and the MAP problem are divided into stochastic methods, such as Markov chain Monte Carlo methods, and deterministic ones. A popular deterministic method is to use the variational Bayesian formalism. The variational Bayes matrix factorization [4, 5] approximates the posterior distribution p(U, V|A) as the product of two functions p_U^VB(U) and p_V^VB(V), which are determined so that the Kullback-Leibler (KL) divergence from p_U^VB(U) p_V^VB(V) to p(U, V|A) is minimized. Global minimization of the KL divergence is difficult except in some special cases [15], so an iterative method to obtain a local minimum is usually adopted. Applying the variational Bayes matrix factorization to the MAP problem yields the iterated conditional modes (ICM) algorithm, which alternates minimization of C^MAP(U, V) over U for fixed V with minimization over V for fixed U. The representative algorithm for approximately solving the K-means problem is Lloyd's K-means algorithm [12].
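Lloyd's algorithm (stated as Algorithm 1 below) and the matrix form of the K-means loss in (6) are each a few lines of code. The sketch below (helper names are ours; for brevity it assumes no cluster ever empties) also lets one check numerically that ∥A − UV^⊤∥_F² with one-hot rows of V equals the usual sum of squared distances:

```python
import numpy as np

def kmeans_loss(A, centers, labels):
    """K-means loss of Eq. 6, computed as ||A - U V^T||_F^2 with one-hot V."""
    N = A.shape[1]
    V = np.zeros((N, centers.shape[1]))
    V[np.arange(N), labels] = 1.0        # row j of V is e_{labels[j]}
    return np.linalg.norm(A - centers @ V.T, 'fro') ** 2

def lloyd_kmeans(A, labels, n_clusters, n_iter=100):
    """Lloyd's algorithm on the columns of A, from initial assignments."""
    for _ in range(n_iter):
        # (7a): each center is the mean of its assigned data
        centers = np.stack([A[:, labels == l].mean(axis=1)
                            for l in range(n_clusters)], axis=1)
        # (7b): reassign each datum to its nearest center
        d2 = ((A[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)
        new_labels = d2.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return centers, labels
```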
Lloyd's K-means algorithm is regarded as the ICM algorithm: it alternately minimizes the K-means loss function over U for fixed V and over V for fixed U.

Algorithm 1 (Lloyd's K-means algorithm).

n_l^t = Σ_{j=1}^N I(v_j^t = e_l),   ũ_l^t = (1/n_l^t) Σ_{j=1}^N a_j I(v_j^t = e_l),   (7a)

l_j^{t+1} = arg min_{l ∈ {1,...,r̂}} ∥a_j − ũ_l^t∥₂²,   v_j^{t+1} = e_{l_j^{t+1}}.   (7b)

Throughout this paper, we represent an algorithm by a set of equations as above. This representation means that the algorithm begins with a set of initial values and repeats the updates of the variables using the equations presented until some stopping criterion is satisfied. Lloyd's K-means algorithm begins with a set of initial assignments V^0 ∈ {e_1, ..., e_r̂}^N. This algorithm easily gets stuck in local minima, and its performance depends heavily on its initial values. Methods of initialization for reaching better local minima have been proposed [16]. Maximum accuracy clustering can be solved approximately using the variational Bayes matrix factorization, since it gives an approximation to the marginal posterior distribution of v_j given A.

4 Proposed algorithm

4.1 Approximate message passing algorithm for low-rank matrix reconstruction

We first discuss the general idea of the AMP algorithm and its advantages compared with the variational Bayes matrix factorization. The AMP algorithm is derived by approximating the belief propagation message passing algorithm in a way thought to be asymptotically exact for large-scale problems with appropriate randomness. Fixed points of the belief propagation message passing algorithm correspond to local minima of the KL divergence between a kind of trial function and the posterior distribution [17]. Therefore, the belief propagation message passing algorithm can be regarded as an iterative algorithm based on an approximation of the posterior distribution, called the Bethe approximation.
The Bethe approximation can reflect the dependence of random variables (the dependence between U and V in p̂(U, V|A) in our problem) to some extent. Therefore, one can intuitively expect the performance of the AMP algorithm to be better than that of the variational Bayes matrix factorization, which treats U and V as if they were independent in p̂(U, V|A). An important property of the AMP algorithm, aside from its efficiency and effectiveness, is that its performance on large-scale problems can be predicted accurately using a set of equations called the state evolution [6]. Analysis with the state evolution also shows that the required number of iterations is O(1) even when the problem size is large. Although we could present the state evolution for the algorithm proposed in this paper and prove its validity as in [8, 18], we do not discuss the state evolution here due to the limited space available. We introduce a one-parameter extension of the posterior distribution p̂(U, V|A) to treat the marginalization problem and the MAP problem in a unified manner. It is defined as follows:

p̂(U, V|A; β) ∝ exp( −(β/(2mτ)) ∥A − UV^⊤∥_F² ) ( p̂_U(U) p̂_V(V) )^β,   (8)

which is proportional to p̂(U, V|A)^β, where β > 0 is the parameter. When β = 1, p̂(U, V|A; β) reduces to p̂(U, V|A). In the limit β → ∞, the distribution p̂(U, V|A; β) concentrates on the maxima of p̂(U, V|A). An algorithm for the marginalization problem on p̂(U, V|A; β) is particularized to the algorithms for the marginalization problem and for the MAP problem on the original posterior distribution p̂(U, V|A) by letting β = 1 and β → ∞, respectively. The AMP algorithm for the marginalization problem on p̂(U, V|A; β) is derived in a way similar to that described in [9], as detailed in the Supplementary Material. In the derived algorithm, the values of the variables B_u^t = (b_{u,1}^t, ..., b_{u,m}^t)^⊤ ∈ R^{m×r̂}, B_v^t = (b_{v,1}^t, ..., b_{v,N}^t)^⊤ ∈ R^{N×r̂}, Λ_u^t ∈ R^{r̂×r̂}, Λ_v^t ∈ R^{r̂×r̂}, U^t = (u_1^t, . . .
, u_m^t)^⊤ ∈ R^{m×r̂}, V^t = (v_1^t, ..., v_N^t)^⊤ ∈ R^{N×r̂}, S_1^t, ..., S_m^t ∈ R^{r̂×r̂}, and T_1^t, ..., T_N^t ∈ R^{r̂×r̂} are calculated iteratively, where the superscript t ∈ N ∪ {0} represents the iteration number. Variables with a negative iteration number are defined as 0. The algorithm is as follows:

Algorithm 2.

B_u^t = (1/(mτ)) A V^t − (1/(mτ)) U^{t−1} Σ_{j=1}^N T_j^t,   Λ_u^t = (1/(mτ)) (V^t)^⊤V^t + (1/(βmτ)) Σ_{j=1}^N T_j^t − (1/(mτ)) Σ_{j=1}^N T_j^t,   (9a)

u_i^t = f(b_{u,i}^t, Λ_u^t; p̂_u),   S_i^t = G(b_{u,i}^t, Λ_u^t; p̂_u),   (9b)

B_v^t = (1/(mτ)) A^⊤U^t − (1/(mτ)) V^t Σ_{i=1}^m S_i^t,   Λ_v^t = (1/(mτ)) (U^t)^⊤U^t + (1/(βmτ)) Σ_{i=1}^m S_i^t − (1/(mτ)) Σ_{i=1}^m S_i^t,   (9c)

v_j^{t+1} = f(b_{v,j}^t, Λ_v^t; p̂_v),   T_j^{t+1} = G(b_{v,j}^t, Λ_v^t; p̂_v).   (9d)

Algorithm 2 is almost symmetric in U and V. Equations (9a)–(9b) and (9c)–(9d) update the quantities related to the estimates of U0 and V0, respectively. The algorithm requires an initial value V^0 and begins with T_j^0 = O. The functions f(·, ·; p̂): R^{r̂} × R^{r̂×r̂} → R^{r̂} and G(·, ·; p̂): R^{r̂} × R^{r̂×r̂} → R^{r̂×r̂}, which have a p.d.f. p̂: R^{r̂} → R as a parameter, are defined by

f(b, Λ; p̂) := ∫ u q̂(u; b, Λ, p̂) du,   G(b, Λ; p̂) := ∂f(b, Λ; p̂)/∂b,   (10)

where q̂(u; b, Λ, p̂) is the normalized p.d.f. of u defined by

q̂(u; b, Λ, p̂) ∝ exp( −β ( (1/2) u^⊤Λu − b^⊤u − log p̂(u) ) ).   (11)

One can see that f(b, Λ; p̂) is the mean of the distribution q̂(u; b, Λ, p̂) and that G(b, Λ; p̂) is its covariance matrix scaled by β. The function f(b, Λ; p̂) need not be differentiable everywhere; Algorithm 2 works if f(b, Λ; p̂) is differentiable at those b for which one needs to calculate G(b, Λ; p̂) in running the algorithm. We assume in the rest of this section the convergence of Algorithm 2, although convergence is not guaranteed in general. Let B_u^∞, B_v^∞, Λ_u^∞, Λ_v^∞, S_i^∞, T_j^∞, U^∞, and V^∞ be the converged values of the respective variables. First, consider running Algorithm 2 with β = 1. The marginal posterior distribution is then approximated as

p̂_{i,j}(u_i, v_j|A) ≈ q̂(u_i; b_{u,i}^∞, Λ_u^∞, p̂_u) q̂(v_j; b_{v,j}^∞, Λ_v^∞, p̂_v).
(12)

Since u_i^∞ and v_j^∞ are the means of q̂(u; b_{u,i}^∞, Λ_u^∞, p̂_u) and q̂(v; b_{v,j}^∞, Λ_v^∞, p̂_v), respectively, the posterior mean E[UV^⊤|A] = ∫ UV^⊤ p̂(U, V|A) dU dV is approximated as

E[UV^⊤|A] ≈ U^∞(V^∞)^⊤.   (13)

The marginal MAP estimates u_i^MMAP and v_j^MMAP are approximated as

u_i^MMAP ≈ arg max_u q̂(u; b_{u,i}^∞, Λ_u^∞, p̂_u),   v_j^MMAP ≈ arg max_v q̂(v; b_{v,j}^∞, Λ_v^∞, p̂_v).   (14)

Taking the limit β → ∞ in Algorithm 2 yields an algorithm for the MAP problem (4). In this case, the functions f and G are replaced with

f_∞(b, Λ; p̂) := arg min_u [ (1/2) u^⊤Λu − b^⊤u − log p̂(u) ],   G_∞(b, Λ; p̂) := ∂f_∞(b, Λ; p̂)/∂b.   (15)

One may calculate G_∞(b, Λ; p̂) from the Hessian of log p̂(u) at u = f_∞(b, Λ; p̂), denoted by H, via the identity G_∞(b, Λ; p̂) = (Λ − H)^{−1}. This identity follows from the implicit function theorem under some additional assumptions and helps in cases where the explicit form of f_∞(b, Λ; p̂) is not available. The MAP estimate is approximated by (U^∞, V^∞).

4.2 Properties of the algorithm

Algorithm 2 has several desirable properties. First, it has a low computational cost. The computational cost per iteration is O(mN), which is linear in the number of components of the matrix A. The functions f(·, ·; p̂) and G(·, ·; p̂) are evaluated O(N + m) times per iteration, with a constant factor depending on p̂ and β. Evaluation of f for β < ∞ generally involves an r̂-dimensional numerical integration, although this is not needed when an analytic expression of the integral is available or when the variables take only discrete values. Evaluation of f_∞ involves minimization over an r̂-dimensional vector. When −log p̂ is a convex function and Λ is positive semidefinite, this minimization problem is convex and can be solved at relatively low cost. Second, Algorithm 2 has a form similar to that of an algorithm based on the variational Bayesian matrix factorization.
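As a concrete instance of Eq. 15 and the Hessian identity G_∞ = (Λ − H)^{−1}: for a Gaussian prior p̂(u) ∝ exp(−∥u∥²/(2σ²)), the inner minimization is a ridge-regularized quadratic with a closed form. A sketch under that assumed prior (not the general algorithm):

```python
import numpy as np

def f_inf_gaussian(b, Lam, sigma2):
    """f_inf(b, Lam; p) for the prior p(u) ∝ exp(-||u||^2 / (2 sigma2)).

    Minimizes (1/2) u'Lam u - b'u + ||u||^2/(2 sigma2),
    whose unique minimizer is u* = (Lam + I/sigma2)^{-1} b.
    """
    M = Lam + np.eye(len(b)) / sigma2
    return np.linalg.solve(M, b)

def G_inf_gaussian(Lam, sigma2):
    """G_inf = (Lam - H)^{-1}, with Hessian H = -I/sigma2 of log p(u)."""
    return np.linalg.inv(Lam + np.eye(Lam.shape[0]) / sigma2)
```

The stationarity condition Λu − b + u/σ² = 0 of the minimand confirms both formulas, and G_∞ b reproduces f_∞(b).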
In fact, if the last terms on the right-hand sides of the four equations in (9a) and (9c) are removed, the resulting algorithm is the same as an algorithm based on the variational Bayesian matrix factorization proposed in [4] and, in particular, the same as the ICM algorithm when β →∞. (Note, however, that [4] only treats the case where the priors ˆpu and ˆpv are multivariate Gaussian distributions.) Note that additional computational cost for these extra terms is O(m + N), which is insignificant compared with the cost of the whole algorithm, which is O(mN). Third, when one deals with the MAP problem, the value of CMAP(U, V ) may increase in iterations of Algorithm 2. The following proposition, however, guarantees optimality of the output of Algorithm 2 in a certain sense, if it has converged. Proposition 1. Let (U ∞, V ∞, S∞ 1 , . . . , S∞ m , T ∞ 1 , . . . , T ∞ N ) be a fixed point of the AMP algorithm for the MAP problem and suppose that ∑m i=1 S∞ i and ∑N j=1 T ∞ j are positive semidefinite. Then U ∞is a global minimum of CMAP(U, V ∞) and V ∞is a global minimum of CMAP(U ∞, V ). The proof is in the Supplementary Material. The key to the proof is the following reformulation: U t = arg min U [ CMAP(U, V t) −tr ( (U −U t−1) ( 1 2mτ N ∑ j=1 T t j ) (U −U t−1)⊤)] (16) If ∑N j=1 T t j is positive semidefinite, the second term of the minimand is the negative squared pseudometric between U and U t−1, which is interpreted as a penalty on nearness to the temporal estimate. Positive semidefiniteness of ∑m i=1 St i and ∑N j=1 T t j holds in almost all cases. In fact, we only have to assume limβ→∞G(b, Λ; ˆp) = G∞(b, Λ; ˆp), since G(b, Λ; ˆp) is a scaled covariance matrix of ˆq(u; b, Λ, ˆp), which is positive semidefinite. It follows from Proposition 1 that any fixed point of the AMP algorithm is also a fixed point of the ICM algorithm. 
It has two implications: (i) Execution of the ICM algorithm initialized with the converged values of the AMP algorithm does not improve CMAP(U^t, V^t). (ii) The AMP algorithm has no more fixed points than the ICM algorithm. The second implication may help the AMP algorithm avoid getting stuck in bad local minima.

4.3 Clustering via AMP algorithm

One can use the AMP algorithm for the MAP problem to perform the K-means clustering by letting p̂_u(u) = 1 and p̂_v(v) = r̂^{-1} Σ_{l=1}^{r̂} δ(v − e_l). Noting that f_∞(b, Λ; p̂_v) is piecewise constant with respect to b and hence G_∞(b, Λ; p̂_v) is O almost everywhere, we obtain the following algorithm:

Algorithm 3 (AMP algorithm for the K-means clustering).

B_u^t = (1/(mτ)) A V^t,  Λ_u^t = (1/(mτ)) (V^t)^⊤ V^t,  U^t = B_u^t (Λ_u^t)^{-1},  S^t = (Λ_u^t)^{-1},  (17a)
B_v^t = (1/(mτ)) A^⊤ U^t − (1/τ) V^t S^t,  Λ_v^t = (1/(mτ)) (U^t)^⊤ U^t − (1/τ) S^t,  (17b)
v_j^{t+1} = argmin_{v ∈ {e_1, …, e_r̂}} [ (1/2) v^⊤ Λ_v^t v − v^⊤ b_{v,j}^t ].  (17c)

It is initialized with an assignment V^0 ∈ {e_1, …, e_r̂}^N. Algorithm 3 can be rewritten as follows:

n_l^t = Σ_{j=1}^N I(v_j^t = e_l),  ũ_l^t = (1/n_l^t) Σ_{j=1}^N a_j I(v_j^t = e_l),  (18a)
l_j^{t+1} = argmin_{l ∈ {1, …, r̂}} [ (1/(mτ)) ∥a_j − ũ_l^t∥_2^2 + (2m/n_l^t) I(v_j^t = e_l) − m/n_l^t ],  v_j^{t+1} = e_{l_j^{t+1}}.  (18b)

The parameter τ appearing in the algorithm does not exist in the K-means clustering problem. In fact, τ appears because m^{-2} Σ_{i=1}^m A_{ij}^2 S_i^t was estimated by τ m^{-1} Σ_{i=1}^m S_i^t in deriving Algorithm 2, which can be justified for large-sized problems. In practice, we propose using m^{-2} N^{-1} ∥A − U^t (V^t)^⊤∥_F^2 as a temporary estimate of τ at the t-th iteration. While the AMP algorithm for the K-means clustering updates the value of U in the same way as Lloyd's K-means algorithm, it performs assignments of data to clusters in a different way. In the AMP algorithm, in addition to distances from data to cluster centers, the current assignment is taken into account in two ways: (i) A datum is less likely to be assigned to the cluster that it is assigned to at present.
(ii) Data are more likely to be assigned to a cluster whose size at present is smaller. The former can intuitively be understood by observing that if vt j = el, one should take account of the fact that the cluster center ˜ut l is biased toward aj. The term 2m(nt l)−1I(vt j = el) in (18b) corrects this bias, which, as it should be, is inversely proportional to the cluster size. The AMP algorithm for maximum accuracy clustering is obtained by letting β = 1 and ˆpv(v) be a discrete distribution on {e1, . . . , eˆr}. After the algorithm converges, arg maxv ˆq(v; v∞ j , Λ∞ v , ˆpv) gives the final cluster assignment of the jth datum and U ∞gives the estimate of the cluster centers. 5 Numerical experiments We conducted numerical experiments on both artificial and real data sets to evaluate performance of the proposed algorithms for clustering. In the experiment on artificial data sets, we set m = 800 and N = 1600 and let ˆr = r. Cluster centers ˜u0,l, l = 1, . . . , r, were generated according to the multivariate Gaussian distribution N(0, I). Cluster assignments v0,j, j = 1, . . . , N, were generated according to the uniform distribution on {e1, . . . , er}. For fixed τ = 0.1 and r, we generated 500 problem instances and solved them with five algorithms: Lloyd’s K-means algorithm (K-means), the AMP algorithm for the K-means clustering (AMP-KM), the variational Bayes matrix factorization [4] for maximum accuracy clustering (VBMF-MA), the AMP algorithm for maximum accuracy clustering (AMP-MA), and the K-means++ [16]. The K-means++ updates the variables in the same way as Lloyd’s K-means algorithm with an initial value chosen in a sophisticated manner. For the other algorithms, initial values v0 j , j = 1, . . . , N, were randomly generated from the same distribution as v0,j. We used the true prior distributions of U and V for maximum accuracy clustering. We ran Lloyd’s K-means algorithm and the K-means++ until no change was observed. 
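In its rewritten form, one iteration of Algorithm 3 is a Lloyd-style center update (18a) followed by the penalized assignment (18b). The following NumPy sketch illustrates one such iteration, with τ estimated as proposed in Subsection 4.3; the function and variable names are ours, not from the paper:

```python
import numpy as np

def amp_kmeans_step(A, labels, r):
    """One iteration of Algorithm 3 in its rewritten form (18a)-(18b).

    A: (m, N) data matrix with one datum a_j per column.
    labels: length-N array of current cluster indices v_j^t.
    Returns the updated centers (m, r) and labels.
    """
    m, N = A.shape
    counts = np.maximum(np.bincount(labels, minlength=r), 1)       # cluster sizes n_l^t
    centers = np.zeros((m, r))
    for l in range(r):                                             # (18a): cluster means
        if (labels == l).any():
            centers[:, l] = A[:, labels == l].mean(axis=1)
    tau = ((A - centers[:, labels]) ** 2).sum() / (m * m * N)      # Subsection 4.3 estimate
    d2 = ((A[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)  # (N, r) squared distances
    cost = d2 / (m * tau) - m / counts[None, :]                    # small-cluster bonus
    cost[np.arange(N), labels] += 2 * m / counts[labels]           # self-assignment penalty
    return centers, cost.argmin(axis=1)                            # (18b)
```

Compared with Lloyd's algorithm, only the assignment cost changes; the update of the centers is identical.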
We ran the AMP algorithm for the K-means clustering until either V t = V t−1 or V t = V t−2 is satisfied. This is because we observed oscillations of assignments of a small number of data. For the other two algorithms, we terminated the iteration when ∥U t −U t−1∥2 F < 10−15∥U t−1∥2 F and ∥V t − V t−1∥2 F < 10−15∥V t−1∥2 F were met or the number of iterations exceeded 3000. We then evaluated the following performance measures for the obtained solution (U ∗, V ∗): • Normalized K-means loss ∥A−U ∗(V ∗)⊤∥2 F /(∑N j=1 ∥aj−¯a∥2 2), where ¯a := 1 N ∑N j=1 aj. • Accuracy maxP N −1 ∑N j=1 I(Pv∗ j = v0,j), where the maximization is taken over all r-by-r permutation matrices. We used the Hungarian algorithm [19] to solve this maximization problem efficiently. • Number of iterations needed to converge. We calculated the averages and the standard deviations of these performance measures over 500 instances. We conducted the above experiments for various values of r. Figure 1 shows the results. The AMP algorithm for the K-means clustering achieves the smallest Kmeans loss among the five algorithms, while the Lloyd’s K-means algorithm and K-means++ show large K-means losses for r ≥5. We emphasize that all the three algorithms are aimed to minimize the same K-means loss and the differences lie in the algorithms for minimization. The AMP algorithm for maximum accuracy clustering achieves the highest accuracy among the five algorithms. It also shows fast convergence. In particular, the convergence speed of the AMP algorithm for maximum accuracy clustering is comparable to that of the AMP algorithm for the K-means clustering when the two algorithms show similar accuracy (r < 9). This is in contrast to the common observation that the variational Bayes method often shows slower convergence than the ICM algorithm. 
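The accuracy measure above maximizes over all r-by-r permutation matrices, which is exactly an assignment problem. A sketch of this evaluation using SciPy's implementation of the Hungarian method (the function name is ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(pred, true, r):
    """max_P N^{-1} sum_j I(P v_j^* = v_{0,j}) over r-by-r permutation matrices P."""
    confusion = np.zeros((r, r), dtype=int)
    for k, l in zip(pred, true):      # confusion[k, l]: predicted label k, true label l
        confusion[k, l] += 1
    # Hungarian algorithm: maximize matched counts = minimize their negation
    rows, cols = linear_sum_assignment(-confusion)
    return confusion[rows, cols].sum() / len(true)
```

With predicted and true labels that agree up to a relabeling, this returns 1.0 regardless of which permutation the clustering happened to produce.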
[Figure 1: (a)–(c) Performance for different r: (a) Normalized K-means loss. (b) Accuracy. (c) Number of iterations needed to converge. (d) Dynamics for r = 5. Average accuracy at each iteration is shown. Error bars represent standard deviations. Curves compare K-means, AMP-KM, VBMF-MA, AMP-MA, and K-means++.]

[Figure 2: Performance measures in real-data experiments with K-means++ and AMP-KM. (a) Normalized K-means loss. (b) Accuracy. The results for the 50 trials are shown in the descending order of performance for AMP-KM. The worst two results for AMP-KM are out of the range.]

In the experiment on real data, we used the ORL Database of Faces [20], which contains 400 images of human faces: ten different images of each of 40 distinct subjects. Each image consists of 112 × 92 = 10304 pixels whose values range from 0 to 255. We divided the N = 400 images into r̂ = 40 clusters with the K-means++ and the AMP algorithm for the K-means clustering. We adopted the initialization method of the K-means++ for the AMP algorithm as well, because random initialization often yielded empty clusters, with almost all data assigned to a single cluster. The parameter τ was estimated in the way proposed in Subsection 4.3. We ran 50 trials with different initial values, and Figure 2 summarizes the results.
The AMP algorithm for the K-means clustering outperformed the standard K-means++ algorithm in 48 out of the 50 trials in terms of the K-means loss and in 47 trials in terms of the accuracy. The AMP algorithm yielded just one cluster with all data assigned to it in two trials. The attained minimum value of the K-means loss is 0.412 with the K-means++ and 0.400 with the AMP algorithm. The accuracies at these trials are 0.635 with the K-means++ and 0.690 with the AMP algorithm. The average number of iterations was 6.6 with the K-means++ and 8.8 with the AMP algorithm. These results demonstrate the efficiency of the proposed algorithm on real data.

References

[1] P. Paatero, "Least squares formulation of robust non-negative factor analysis," Chemometrics and Intelligent Laboratory Systems, vol. 37, no. 1, pp. 23–35, May 1997.
[2] P. O. Hoyer, "Non-negative matrix factorization with sparseness constraints," The Journal of Machine Learning Research, vol. 5, pp. 1457–1469, Dec. 2004.
[3] R. Salakhutdinov and A. Mnih, "Bayesian probabilistic matrix factorization using Markov chain Monte Carlo," in Proceedings of the 25th International Conference on Machine Learning, New York, NY, Jul. 5–Aug. 9, 2008, pp. 880–887.
[4] Y. J. Lim and Y. W. Teh, "Variational Bayesian approach to movie rating prediction," in Proceedings of KDD Cup and Workshop, San Jose, CA, Aug. 12, 2007.
[5] T. Raiko, A. Ilin, and J. Karhunen, "Principal component analysis for large scale problems with lots of missing values," in Machine Learning: ECML 2007, ser. Lecture Notes in Computer Science, J. N. Kok, J. Koronacki, R. L. de Mantaras, S. Matwin, D. Mladenič, and A. Skowron, Eds. Springer Berlin Heidelberg, 2007, vol. 4701, pp. 691–698.
[6] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences USA, vol. 106, no. 45, pp. 18914–18919, Nov. 2009.
[7] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proceedings of 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, Jul. 31–Aug. 5, 2011, pp. 2168–2172.
[8] S. Rangan and A. K. Fletcher, "Iterative estimation of constrained rank-one matrices in noise," in Proceedings of 2012 IEEE International Symposium on Information Theory, Cambridge, MA, Jul. 1–6, 2012, pp. 1246–1250.
[9] R. Matsushita and T. Tanaka, "Approximate message passing algorithm for low-rank matrix reconstruction," in Proceedings of the 35th Symposium on Information Theory and its Applications, Oita, Japan, Dec. 11–14, 2012, pp. 314–319.
[10] W. Xu, X. Liu, and Y. Gong, "Document clustering based on non-negative matrix factorization," in Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, Jul. 28–Aug. 1, 2003, pp. 267–273.
[11] C. Ding, T. Li, and M. Jordan, "Convex and semi-nonnegative matrix factorizations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 45–55, Jan. 2010.
[12] S. P. Lloyd, "Least squares quantization in PCM," IEEE Transactions on Information Theory, vol. IT-28, no. 2, pp. 129–137, Mar. 1982.
[13] F. Krzakala, M. Mézard, and L. Zdeborová, "Phase diagram and approximate message passing for blind calibration and dictionary learning," preprint, Jan. 2013, arXiv:1301.5898v1 [cs.IT].
[14] J. T. Parker, P. Schniter, and V. Cevher, "Bilinear generalized approximate message passing," preprint, Oct. 2013, arXiv:1310.2632v1 [cs.IT].
[15] S. Nakajima and M. Sugiyama, "Theoretical analysis of Bayesian matrix factorization," Journal of Machine Learning Research, vol. 12, pp. 2583–2648, Sep. 2011.
[16] D. Arthur and S. Vassilvitskii, "k-means++: the advantages of careful seeding," in SODA '07: Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, Louisiana, Jan. 7–9, 2007, pp. 1027–1035.
[17] J. S. Yedidia, W. T. Freeman, and Y. Weiss, "Constructing free-energy approximations and generalized belief propagation algorithms," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2282–2312, Jul. 2005.
[18] M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 764–785, Feb. 2011.
[19] H. W. Kuhn, "The Hungarian method for the assignment problem," Naval Research Logistics Quarterly, vol. 2, no. 1–2, pp. 83–97, Mar. 1955.
[20] F. S. Samaria and A. C. Harter, "Parameterisation of a stochastic model for human face identification," in Proceedings of 2nd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, Dec. 1994, pp. 138–142. [Online]. Available: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
Inverse Density as an Inverse Problem: the Fredholm Equation Approach

Qichao Que, Mikhail Belkin
Department of Computer Science and Engineering, The Ohio State University
{que,mbelkin}@cse.ohio-state.edu

Abstract

We address the problem of estimating the ratio q/p, where p is a density function and q is another density or, more generally, an arbitrary function. Knowing or approximating this ratio is needed in various problems of inference and integration, often referred to as importance sampling in statistical inference. It is also closely related to the problem of covariate shift in transfer learning. Our approach is based on reformulating the problem of estimating the ratio as an inverse problem in terms of an integral operator corresponding to a kernel, known as the Fredholm problem of the first kind. This formulation, combined with the techniques of regularization, leads to a principled framework for constructing algorithms and for analyzing them theoretically. The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized Estimator) is flexible, simple, and easy to implement. We provide detailed theoretical analysis, including concentration bounds and convergence rates for the Gaussian kernel for densities defined on R^d and on smooth d-dimensional sub-manifolds of Euclidean space. Model selection for unsupervised or semi-supervised inference is generally a difficult problem. It turns out that in the density ratio estimation setting, when samples from both distributions are available, simple, completely unsupervised model selection methods are available. We call this mechanism CD-CV, for Cross-Density Cross-Validation. We show encouraging experimental results, including applications to classification within the covariate shift framework.
1 Introduction

In this paper we address the problem of estimating the ratio of two functions, q(x)/p(x), where p is given by a sample and q(x) is either a known function or another probability density function given by a sample. This estimation problem arises naturally when one attempts to integrate a function with respect to one density, given its values on a sample obtained from another distribution. Recently there has been a significant amount of work on estimating the density ratio (also known as the importance function) from sampled data, e.g., [6, 10, 9, 22, 2]. Many of these papers consider this problem in the context of the covariate shift assumption [19] or the so-called selection bias [27]. The approach taken in our paper is based on reformulating the density ratio estimation as an integral equation, known as the Fredholm equation of the first kind, and solving it using the tools of regularization and Reproducing Kernel Hilbert Spaces. This allows us to develop simple and flexible algorithms for density ratio estimation within the popular kernel learning framework. The connection to the classical operator theory setting makes it easier to apply the standard tools of spectral and Fourier analysis to obtain theoretical results. We start with the following simple equality underlying the importance sampling method:

E_q[h(x)] = ∫ h(x) q(x) dx = ∫ h(x) (q(x)/p(x)) p(x) dx = E_p[h(x) q(x)/p(x)].  (1)

By replacing the function h(x) with a kernel k(x, y), we obtain

K_p (q/p)(x) := ∫ k(x, y) (q(y)/p(y)) p(y) dy = ∫ k(x, y) q(y) dy =: K_q 1(x).  (2)

Thinking of the function q(x)/p(x) as an unknown quantity and assuming that the right-hand side is known, this becomes a Fredholm integral equation. Note that the right-hand side can be estimated given a sample from q, while the operator on the left can be estimated using a sample from p. To push this idea further, suppose k_t(x, y) is a "local" kernel, e.g., the Gaussian k_t(x, y) = (2πt)^{-d/2} e^{-∥x−y∥²/(2t)}, such that ∫_{R^d} k_t(x, y) dx = 1.
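Equality (1) is easy to verify numerically: reweighting a sample from p by the ratio q/p recovers expectations under q. A small sketch in NumPy (the particular densities, integrand, and sample size are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# sample from p = N(0, 1); target density q = N(1, 0.5); integrand h(x) = x^2
x = rng.normal(0.0, 1.0, 200_000)
w = normal_pdf(x, 1.0, 0.5) / normal_pdf(x, 0.0, 1.0)  # importance weights q(x)/p(x)
estimate = np.mean(x ** 2 * w)  # approximates E_q[x^2] = 1^2 + 0.5^2 = 1.25
```

Note that the weights themselves average to E_p[q/p] = 1, a useful diagnostic in practice.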
When we use δ-kernels, like Gaussian, and f satisfies some smoothness conditions, we have R Rd kt(x, y)f(x)dx = f(y) + O(t) (see [24], Ch. 1). Thus we get another (approximate) integral equality: Kt,p q p(y) := Z Rd kt(x, y)q(x) p(x)p(x)dx ≈q(y). (3) It becomes an integral equation for q(x) p(x), assuming that q is known or can be approximated. We address these inverse problems by formulating them within the classical framework of TiknonovPhilips regularization with the penalty term corresponding to the norm of the function in the Reproducing Kernel Hilbert Space H with kernel kH used in many machine learning algorithms. [Type I]: q p ≈arg min f∈H ∥Kpf −Kq1(x)∥2 L2,p+λ∥f∥2 H [II]: q p ≈arg min f∈H ∥Kt,pf −q∥2 L2,p+λ∥f∥2 H Importantly, given a sample x1, . . . , xn from p, the integral operator Kpf applied to a function f can be approximated by the corresponding discrete sum Kpf(x) ≈ 1 n P i f(xi)K(xi, x), while L2,p norm is approximated by an average: ∥f∥2 L2,p ≈1 n P i f(xi)2. Of course, the same holds for a sample from q. We see that the Type I formulation is useful when q is a density and samples from both p and q are available, while the Type II is useful, when the values of q (which does not have to be a density function at all1) are known at the data points sampled from p. Since all of these involve only function evaluations at the sample points, an application of the usual representer theorem for Reproducing Kernel Hilbert Spaces, leads to simple, explicit and easily implementable algorithms, representing the solution of the optimization problem as linear combinations of the kernels over the points of the sample P i αikH(xi, x) (see Section 2). We call the resulting algorithms FIRE for Fredholm Inverse Regularized Estimator. Remark: Other norms and loss functions. Norms and loss functions other that L2,p can also be used in our setting as long as they can be approximated from a sample using function evaluations. 1. 
Perhaps, the most interesting is L2,q norm available in the Type I setting, when a sample from the probability distribution q is available. In fact, given a sample from both p and q we can use the combined empirical norm γ∥· ∥L2,p + (1 −γ)∥· ∥L2,q. Optimization using those norms leads to some interesting kernel algorithms described in Section 2. We note that the solution is still a linear combination of kernel functions centered on the sample from p and can still be written explicitly. 2. In Type I formulation, if the kernels k(x, y) and kH(x, y) coincide, it is possible to use the RKHS norm ∥· ∥H instead of L2,p. This formulation (see Section 2) also yields an explicit formula and is related to the Kernel Mean Matching [9] , although with a different optimization procedure. Since we are dealing with a classical inverse problem for integral operators, our formulation allows for theoretical analysis using the methods of spectral theory. In Section 3 we present concentration and error bounds as well as convergence rates for our algorithms when data are sampled from a distribution defined in Rd, a domain in Rd with boundary or a compact d-dimensional sub-manifold of a Euclidean space RN for the case of the Gaussian kernel. In Section 4 we introduce a unsupervised method, referred as CD-CV (for cross-density crossvalidation) for model selection and discuss the experimental results on several data sets comparing our method FIRE with the available alternatives, Kernel Mean Matching (KMM) [9] and LSIF [10] as well as the base-line thresholded inverse kernel density estimator2 (TIKDE) and importance sampling (when available). 1This could be useful in sampling procedures, when the normalizing coefficients are hard to estimate. 2The standard kernel density estimator for q divided by a thresholded kernel density estimator for p. 2 We summarize the contributions of the paper as follows: 1. 
We provide a formulation of estimating the density ratio (importance function) as a classical inverse problem, known as the Fredholm equation, establishing a connections to the methods of classical analysis. The underlying idea is to “linearize” the properties of the density by studying an associated integral operator. 2. To solve the resulting inverse problems we apply regularization with an RKHS norm penalty. This provides a flexible and principled framework, with a variety of different norms and regularization techniques available. It separates the underlying inverse problem from the necessary regularization and leads to a family of very simple and direct algorithms within the kernel learning framework in machine learning. 3. Using the techniques of spectral analysis and concentration, we provide a detailed theoretical analysis for the case of the Gaussian kernel, for Euclidean case as well as for distributions supported on a sub-manifold. We prove error bounds and as well as the convergence rates. 4. We also propose a completely unsupervised technique, CD-CV, for cross-validating the parameters of our algorithm and demonstrate its usefulness, thus addressing in our setting one of the most thorny issues in unsupervised/semi-supervised learning. We evaluate and compare our methods on several different data sets and in various settings and demonstrate strong performance and better computational efficiency compared to the alternatives. Related work. Recently the problem of density ratio estimation has received significant attention due in part to the increased interest in transfer learning [15] and, in particular to the form of transfer learning known as covariate shift [19]. To give a brief summary, given the feature space X and the label space Y , two probability distributions p and q on X × Y satisfy the covariate assumption if for all x, y, p(y|x) = q(y|x). 
It is easy to see that training a classifier to minimize the error for q, given a sample from p requires estimating the ratio of the marginal distributions qX(x) pX(x). The work on covariate shift, density ratio estimation and related settings includes [27, 2, 6, 10, 22, 9, 23, 14, 7]. The algorithm most closely related to ours is Kernel Mean Matching [9]. It is based on the equation: Eq(Φ(x)) = Ep( q pΦ(x)), where Φ is the feature map corresponding to an RKHS H. It is rewritten as an optimization problem q(x) p(x) ≈arg minβ∈L2,β(x)>0,Ep(β)=1 ∥Eq(Φ(x)) −Ep(β(x)Φ(x))∥H. The quantity on the right can be estimated given a sample from p and a sample from q and the minimization becomes a quadratic optimization problem over the values of β at the points sampled from p. Writing down the feature map explicitly, i.e., recalling that Φ(x) = KH(x, ·), we see that the equality Eq(Φ(x)) = Ep( q pΦ(x)) is equivalent to the integral equation Eq. 2 considered as an identity in the Hilbert space H. Thus the problem of KMM can be viewed within our setting Type I (see the Remark 2 in the introduction), with a RKHS norm but a different optimization algorithm. However, while the KMM optimization problem uses the RKHS norm, the weight function β itself is not in the RKHS. Thus, unlike most other algorithms in the RKHS framework (in particular, FIRE), the empirical optimization problem does not have a natural out-of-sample extension. Also, since there is no regularizing term, the problem is less stable (see Section 4 for some experimental comparisons) and the theoretical analysis is harder (however, see [6] and the recent paper [26] for some nice theoretical analysis of KMM in certain settings). Another related recent algorithm is Least Squares Importance Sampling (LSIF) [10], which attempts to estimate the density ratio by choosing a parametric linear family of functions and choosing a function from this family to minimize the L2,p distance to the density ratio. 
A similar setting with the Kullback-Leibler distance (KLIEP) was proposed in [23]. This has an advantage of a natural out-of-sample extension property. We note that our method for unsupervised parameter selection in Section 4 is related to their ideas. However, in our case the set of test functions does not need to form a good basis since no approximation is required. We note that our methods are closely related to a large body of work on kernel methods in machine learning and statistical estimation (e.g., [21, 17, 16]). Many of these algorithms can be interpreted as inverse problems, e.g., [3, 20] in the Tikhonov regularization or other regularization frameworks. In particular, we note interesting methods for density estimation proposed in [12] and estimating the support of density through spectral regularization in [4], as well as robust density estimation using RKHS formulations [11] and conditional density [8]. We also note the connections of the methods in this paper to properties of density-dependent operators in classification and clustering [25, 18, 1]. Among those works that provide theoretical analysis of algorithms for estimating density ratios, 3 [14] establishes minimax rates for likelihood ratio estimation. Another recent theoretical analysis of KMM in [26] contains bounds for the output of the corresponding integral operators. 2 Settings and Algorithms Settings and objects. We start by introducing objects and function spaces important for our development. As usual, the norm in space of square-integrable functions with respect to a measure ρ, is defined as follows: L2,ρ = f : R Ω|f(x)|2dρ < ∞ . This is a Hilbert space with the inner product defined in the usual way by ⟨f, g⟩2,ρ = R Ωf(x)g(x)dρ. Given a kernel k(x, y) we define the operator Kρ: Kρf(y) := R Ωk(x, y)f(x)dρ(x). We will use the notation Kt,ρ to explicitly refer to the parameter of the kernel function kt(x, y), when it is a δ-family. 
If the function k(x, y) is symmetric and positive definite, then there is a corresponding Reproducing Kernel Hilbert space (RKHS) H. We recall the key property of the kernel kH: for any f ∈H, ⟨f, kH(x, ·)⟩H = f(x). The Representer Theorem allows us to write solutions to various optimization problems over H in terms of linear combinations of kernels supported on sample points (see [21] for an in-depth discussion or the RKHS theory and the issues related to learning). Given a sample x1, . . . , xn from p, one can approximate the L2,p norm of a sufficiently smooth functionf by ∥f∥2 2,p ≈1 n P i |f(xi)|2, and similarly, the integral operator Kpf(x) ≈ 1 n P i k(xi, x)f(xi). These approximate equalities can be made precise by using appropriate concentration inequalities. The FIRE Algorithms. As discussed in the introduction, the starting point for our development is the two integral equalities, [I]: Kp q p(·) = Z k(·, y)q(y) p(y)dp(y) = Kq1(·) [II]:Kt,p q p(·) = Z kt(·, y)q(y) p(y)dp(y) = q(·) + o(1) (4) Notice that in the Type I setting, the kernel does not have to be in a δ-family. For example, a linear kernel is admissible. Type II setting comes from the fact Kt,qf(x) ≈f(x)p(x) + O(t) for a “δfunction-like” kernel and we keep t in the notation in that case. Assuming that either Kq1 or q are (approximately) known (Type I and II settings, respectively) equalities in Eqs. 4 become integral equations for p q , known as Fredholm equations of the first kind. To estimate p q , we need to obtain an approximation to the solution which (a) can be obtained computationally from sampled data, (b) is stable with respect to sampling and other perturbation of the input function, (c) can be analyzed using the standard machinery of functional analysis. To provide a framework for solving these inverse problems, we apply the classical techniques of regularization combined with the RKHS norm popular in machine learning. 
In particular a simple formulation of Type I using Tikhonov regularization, ([5], Ch. 5), with the L2,p norm is as follows: [Type I]: f I λ = arg min f∈H ∥Kpf −Kq1∥2 2,p + λ∥f∥2 H (5) Here H is an appropriate Reproducing Kernel Hilbert Space. Similarly Type II can be solved by [Type II]: f II λ = arg min f∈H ∥Kt,pf −q∥2 2,p + λ∥f∥2 H (6) We will now discuss the empirical versions of these equations and the resulting algorithms. Type I setting. Algorithm for L2,p norm. Given an iid sample from p, zp = {xi}n i=1 and an iid sample from q, zq = {x′ j}m j=1 (z for the combined sample), we can approximate the integral operators Kp and Kq by Kzpf(x) = 1 n P xi∈zp k(xi, x)f(xi) and Kzqf(x) = 1 m P x′ i∈zq k(x′ i, x)f(x′ i). Thus the empirical version of Eq. 5 becomes f I λ,z = arg min f∈H 1 n X xi∈zp ((Kzpf)(xi) −(Kzq1)(xi))2 + λ∥f∥2 H (7) The first term of the optimization problem involves only evaluations of the function f at the points of the sample. From Representer Theorem and matrix manipulation, we obtain the following: f I λ,z(x) = X xi∈zp kH(xi, x)vi and v = K2 p,pKH + nλI −1 Kp,pKp,q1zq. (8) where the kernel matrices are defined as follows: (Kp,p)ij = 1 nk(xi, xj), (KH)ij = kH(xi, xj) for xi, xj ∈zp and Kp,q is defined as (Kp,q)ij = 1 mk(xi, x′ j) for xi ∈zp and x′ j ∈zq. 4 If KH and Kp,p are the same kernel we simply have: v = 1 n K3 p,p + λI −1 Kp,pKp,q1zq. Algorithms for γL2,p +(1−γ)L2,q norm. Depending on the setting, we may want to minimize the error of the estimate over the probability distribution p, q or over some linear combination of these. A significant potential benefit of using a linear combination is that both samples can be used at the same time in the loss function. First we state the continuous version of the problem: f * λ = arg min f∈H γ∥Kpf −Kq1∥2 2,p + (1 −γ)∥Kpf −Kq1∥2 2,q + λ∥f∥2 H (9) Given a sample from p, zp = {x1, x2, . . . , xn} and a sample from q, zq = {x′ 1, x′ 2, . . . , x′ m} we obtain an empirical version of the Eq. 
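The closed-form solution (8) is direct to implement. A NumPy sketch for one-dimensional data, using a Gaussian kernel for both k and k_H; the bandwidth, regularization value, and function names here are our illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def gaussian_kernel(x, y, t):
    """k_t(x_i, y_j) for one-dimensional samples x and y."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def fire_type1(zp, zq, lam, t):
    """Type I FIRE (Eq. 8): estimate of q/p evaluated at the points of zp."""
    n, m = len(zp), len(zq)
    Kpp = gaussian_kernel(zp, zp, t) / n  # (K_{p,p})_{ij} = k(x_i, x_j)/n
    Kpq = gaussian_kernel(zp, zq, t) / m  # (K_{p,q})_{ij} = k(x_i, x'_j)/m
    KH = gaussian_kernel(zp, zp, t)       # (K_H)_{ij} = k_H(x_i, x_j)
    v = np.linalg.solve(Kpp @ Kpp @ KH + n * lam * np.eye(n),
                        Kpp @ Kpq @ np.ones(m))
    return KH @ v                         # f(x) = sum_i k_H(x_i, x) v_i at x in zp
```

When p = q the true ratio is identically 1, which gives a quick sanity check on bandwidth and regularization choices.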
9: f ∗ λ,z(x) = arg min f∈H γ n X xi∈zp Kzpf(xi) −Kzq1(xp i ) 2 + 1 −γ m X x′ i∈zq (Kzpf)(x′ i) −(Kzq1)(x′ i) 2 + λ∥f∥2 H From the Representer Theorem f ∗ λ,z(x) = P xi∈zp vikH(xi, x) v = (K + nλI)−1 K11zq K = γ n(Kp,p)2 + 1 −γ m KT q,pKq,p KH and K1 = γ nKp,pKp,q + 1 −γ m KT q,pKq,q where (Kp,p)ij = 1 nk(xi, xj), (KH)ij = kH(xi, xj) for xi, xj ∈zp, and (Kp,q)ij = 1 mk(xi, x′ j) and (Kq,p)ji = 1 nk(x′ j, xi) for xi ∈zp,x′ j ∈zq. Despite the loss function combining both samples, the solution is still a summation of kernels over the points in the sample from p. Algorithms for the RKHS norm. In addition to using the RKHS norm for regularization norm, we can also use it as a loss function: f * λ = arg minf∈H ∥Kpf −Kq1∥2 H′ + λ∥f∥2 H Here the Hilbert space H′ must correspond to the kernel k and can potentially be different from the space H used for regularization. Note that this formulation is only applicable in the Type I setting since it requires the function q to belong to the RKHS H′. Given two samples zp, zq, it is easy to write down the empirical version of this problem, leading to the following formula: f ∗ λ,z(x) = X xi∈zp vikH(xi, x) v = (Kp,pKH + nλI)−1 Kp,q1zq. (10) The result is somewhat similar to our Type I formulation with the L2,p norm. We note the connection between this formulation of using the RKHS norm as a loss function and the KMM algorithm [9]. When the kernels K and KH are the same, Eq. 10 can be viewed as a regularized version of KMM (with a different optimization procedure). Type II setting. In Type II setting we assume that we have a sample z = {xi}n i=1 drawn from p and that we know the function values q(xi) at the points of the sample. 
Replacing the norm and the integral operator with their empirical versions, we obtain the following optimization problem:

f^{II}_{λ,z} = argmin_{f∈H} (1/n) Σ_{x_i∈z} ((K_{t,z_p}f)(x_i) − q(x_i))² + λ∥f∥²_H    (11)

As before, using the Representer Theorem we obtain an analytical formula for the solution:

f^{II}_{λ,z}(x) = Σ_{x_i∈z} k_H(x_i, x) v_i,   where   v = (K² K_H + nλI)^{−1} K q,

where the kernel matrices are defined by K_{ij} = (1/n) k_t(x_i, x_j), (K_H)_{ij} = k_H(x_i, x_j), and q_i = q(x_i).

Comparison of the Type I and Type II settings. 1. In the Type II setting q does not have to be a density function (i.e., non-negative and integrating to one). 2. Eq. 7 of the Type I setting cannot be easily solved in the absence of a sample z_q from q, since estimating K_q requires either sampling from q (if it is a density) or estimating the integral in some other way, which may be difficult in high dimension but perhaps of interest in certain low-dimensional application domains. 3. There are a number of problems (e.g., many problems involving MCMC) where q(x) is known explicitly (possibly up to a multiplicative constant), while sampling from q is expensive or even impossible computationally [13]. 4. Unlike Eq. 5, Eq. 6 has an error term depending on the kernel. For example, in the important case of the Gaussian kernel, the error is of order O(t), where t is the variance of the Gaussian. 5. Several norms are available in the Type I setting, but only the L_{2,p} norm is available for Type II.

3 Theoretical analysis: bounds and convergence rates for Gaussian kernels

In this section, we state our main results on bounds and convergence rates for our algorithm based on Tikhonov regularization with a Gaussian kernel. We consider both the Type I and Type II settings for the Euclidean and manifold cases, and make a remark on Euclidean domains with boundary. To simplify the theoretical development, the integral operator and the RKHS H will correspond to the same Gaussian kernel k_t(x, y). The proofs can be found in the supplemental material.
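As a concrete illustration, the two closed-form solutions above (Eq. 8 in its simplified form with k_H = k, and Eq. 11) can be sketched in NumPy. The Gaussian kernel convention and the function names below are our own illustrative choices, not the authors' reference implementation:

```python
import numpy as np

def gaussian_kernel(X, Y, t):
    """k_t(x, y) = exp(-||x - y||^2 / (2t)); the bandwidth convention is an assumption."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * t))

def fire_type1(Xp, Xq, t, lam):
    """Type I estimate of q/p with the L2,p loss and k_H = k:
    v = (1/n) (K_pp^3 + lam I)^{-1} K_pp K_pq 1   (simplified form of Eq. 8)."""
    n, m = len(Xp), len(Xq)
    Kpp = gaussian_kernel(Xp, Xp, t) / n      # (K_pp)_ij = k(x_i, x_j) / n
    Kpq = gaussian_kernel(Xp, Xq, t) / m      # (K_pq)_ij = k(x_i, x'_j) / m
    A = np.linalg.matrix_power(Kpp, 3) + lam * np.eye(n)
    v = np.linalg.solve(A, Kpp @ Kpq @ np.ones(m)) / n
    return lambda x: gaussian_kernel(np.atleast_2d(x), Xp, t) @ v

def fire_type2(Xp, qvals, t, lam):
    """Type II estimate (Eq. 11) when q is known at the sample points:
    v = (K^2 K_H + n lam I)^{-1} K q, with K_ij = k_t(x_i, x_j)/n and k_H = k_t."""
    n = len(Xp)
    K = gaussian_kernel(Xp, Xp, t) / n
    KH = n * K                                 # k_H = k_t  =>  K_H = n K
    v = np.linalg.solve(K @ K @ KH + n * lam * np.eye(n), K @ qvals)
    return lambda x: gaussian_kernel(np.atleast_2d(x), Xp, t) @ v
```

Both functions return a callable x ↦ Σ_i k_H(x_i, x) v_i; Type I needs only the two samples, while Type II trades the sample from q for known values q(x_i), matching the comparison above.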
Assumptions: The set Ω, on which the density function p is defined, can be one of the following: (1) the whole of R^d; (2) a compact smooth d-dimensional Riemannian submanifold M of R^n. We also require p(x) < Γ, q(x) < Γ for all x ∈ Ω, and that q/p and q/p² lie in the Sobolev space W²₂(Ω).

Theorem 1. (Type I setting.) Let p and q be two density functions on Ω. Given n points z_p = {x_1, x_2, ..., x_n} sampled i.i.d. from p, m points z_q = {x'_1, x'_2, ..., x'_m} sampled i.i.d. from q, and t small enough, the solution of the optimization problem in (7) satisfies, with confidence at least 1 − 2e^{−τ}:

(1) If the domain Ω is R^d, then for some constants C_1, C_2, C_3 independent of t and λ,

∥f^I_{λ,z} − q/p∥_{2,p} ≤ C_1 t + C_2 λ^{1/2} + (C_3 √τ)/(λ t^{d/2}) · (1/√m + 1/(λ^{1/6} √n)).    (12)

(2) If the domain Ω is a compact d-dimensional submanifold without boundary, then for any 0 < ε < 1 and some constants C_1, C_2, C_3 independent of t and λ,

∥f^I_{λ,z} − q/p∥_{2,p} ≤ C_1 t^{1−ε} + C_2 λ^{1/2} + (C_3 √τ)/(λ t^{d/2}) · (1/√m + 1/(λ^{1/6} √n)).    (13)

Corollary 2. (Type I setting.) Assuming m > λ^{1/3} n, with confidence at least 1 − 2e^{−τ}, when (1) Ω = R^d, or (2) Ω is a d-dimensional submanifold of a Euclidean space, we have

(1) ∥f^I_{λ,z} − q/p∥²_{2,p} = O(√τ · n^{−1/(3.5+d/2)});

(2) ∥f^I_{λ,z} − q/p∥²_{2,p} = O(√τ · n^{−1/(3.5(1−ε)+d/2)}) for all ε ∈ (0, 1).

Theorem 3. (Type II setting.) Let p be a density function on Ω and q be a function satisfying the assumptions. Given n points z = {x_1, x_2, ..., x_n} sampled i.i.d. from p, and for sufficiently small t, the solution of the optimization problem in (11) satisfies, with confidence at least 1 − 2e^{−τ}:

(1) If the domain Ω is R^d,
∥f^{II}_{λ,z} − q/p∥_{2,p} ≤ C_1 t + C_2 λ^{1/2} + C_3 λ^{−1/3} ∥K_{t,q}1 − q∥_{2,p} + (C_4 √τ)/(λ^{3/2} t^{d/2} √n),    (14)

where C_1, C_2, C_3, C_4 are constants independent of t and λ. Moreover, ∥K_{t,q}1 − q∥_{2,p} = O(t).

(2) If Ω is a d-dimensional submanifold of a Euclidean space, then for any 0 < ε < 1,

∥f^{II}_{λ,z} − q/p∥_{2,p} ≤ C_1 t^{1−ε} + C_2 λ^{1/2} + C_3 λ^{−1/3} ∥K_{t,q}1 − q∥_{2,p} + (C_4 √τ)/(λ^{3/2} t^{d/2} √n),    (15)

where C_1, C_2, C_3, C_4 are independent of t and λ. Moreover, ∥K_{t,q}1 − q∥_{2,p} = O(t^{1−η}) for all η > 0.

Corollary 4. (Type II setting.) With confidence at least 1 − 2e^{−τ}, when (1) Ω = R^d, or (2) Ω is a d-dimensional submanifold of a Euclidean space, we have

(1) ∥f^{II}_{λ,z} − q/p∥²_{2,p} = O(√τ · n^{−1/(4+(5/6)d)});

(2) ∥f^{II}_{λ,z} − q/p∥²_{2,p} = O(√τ · n^{−(1−η)/(4−4η+(5/6)d)}) for all η ∈ (0, 1).

4 Model Selection and Experiments

We describe an unsupervised technique for parameter selection, Cross-Density Cross-Validation (CD-CV), based on a performance measure unique to our setting, and then proceed to evaluate our method.

The setting. In our experiments, we have X_p = {x^p_1, ..., x^p_n} and X_q = {x^q_1, ..., x^q_m}. The goal is to estimate q/p, assuming that X_p and X_q are i.i.d. sampled from p and q respectively. Note that learning q/p is unsupervised, and our algorithms typically have two parameters: the kernel width t and the regularization parameter λ.

Performance measures and CD-CV model selection. We describe a set of performance measures used for parameter selection. For a given function u, we have the following importance-sampling equality (Eq. 1): E_p[u(x)] = E_q[u(x) p(x)/q(x)]. If f is an approximation of the true ratio q/p, and X_p, X_q are samples from p and q respectively, we have the following approximation to the previous equation: (1/n) Σ_{i=1}^n u(x^p_i) f(x^p_i) ≈ (1/m) Σ_{j=1}^m u(x^q_j). So after obtaining an estimate f of the ratio, we can validate it using the following performance measure:

J_CD(f; X_p, X_q, U) = (1/F) Σ_{l=1}^F ( Σ_{i=1}^n u_l(x^p_i) f(x^p_i) − Σ_{j=1}^m u_l(x^q_j) )²,    (16)

where U = {u_1, ..., u_F} is a collection of test functions. Using this performance measure allows various cross-validation procedures to be used for parameter selection. We note that this way of measuring error is related to the LSIF [10] and KLIEP [23] algorithms; however, there a similar measure is used to construct an approximation to the ratio q/p using the functions u_1, ..., u_F as a basis. In our setting, we can use test functions (e.g., linear functions) that are poorly suited as a basis for approximating the density ratio. We will use the following two families of test functions for parameter selection: (1) sets of random linear functions u_i(x) = β^T x, where β ∼ N(0, I_d); (2) sets of random half-space indicator functions u_i(x) = 1_{β^T x > 0}.
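The CD-CV criterion of Eq. 16 with random linear test functions can be sketched as follows. We normalize the two sums by 1/n and 1/m, matching the importance-sampling approximation above, and the function name is our own:

```python
import numpy as np

def j_cd(f, Xp, Xq, n_funcs=50, seed=0):
    """CD-CV score (Eq. 16) with random linear test functions u(x) = b^T x.

    f maps an array of points to estimated ratio values q/p at those points.
    The sums are normalized by the sample sizes, as in the approximation
    (1/n) sum_i u(x_i^p) f(x_i^p) ~ (1/m) sum_j u(x_j^q).
    """
    rng = np.random.default_rng(seed)
    fp = f(Xp)                                # estimated ratio on the p-sample
    err = 0.0
    for _ in range(n_funcs):
        b = rng.normal(size=Xp.shape[1])      # u(x) = b^T x with b ~ N(0, I_d)
        err += (np.mean((Xp @ b) * fp) - np.mean(Xq @ b)) ** 2
    return err / n_funcs
```

A smaller score means the reweighted p-sample matches the q-sample on this family of test functions; in cross-validation, the (t, λ) pair with the smallest held-out score is selected.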
Procedures for parameter selection. The performance measure is optimized using five-fold cross-validation, splitting the data into X_p^train and X_q^train for training and X_p^cv and X_q^cv for validation. The range we use for the kernel width t is (t_0, 2t_0, ..., 2⁹t_0), where t_0 is the average distance to the 10 nearest neighbors. The range for the regularization parameter λ is (10⁻⁵, 10⁻⁶, ..., 10⁻¹⁰).

Data sets and resampling. We use two datasets, CPUsmall and Kin8nm, for regression, and the USPS handwritten digits for classification. We draw the first 500 or 1000 points from the original data set as X_p. To obtain X_q, the following two ways of resampling, using either the features or the label information, are used (along the lines of those in [6]). Given a set of data with labels {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} and denoting by P_i the probability of the i'th instance being chosen, we resample as follows: (1) Resampling using features (labels y_i are not used): P_i = e^{(a⟨x_i, e_1⟩ − b)/σ_v} / (1 + e^{(a⟨x_i, e_1⟩ − b)/σ_v}), where a, b are the resampling parameters, e_1 is the first principal component, and σ_v is the standard deviation of the projections onto e_1. This resampling method will be denoted by PCA(a, b). (2) Resampling using labels: P_i = 1 if y_i ∈ L_q and 0 otherwise, where y_i ∈ L = {1, 2, ..., k} and L_q is a subset of the whole label set L. It only applies to binary problems obtained by aggregating different classes in a multi-class setting.

Testing the FIRE algorithm. In the first experiment, we test our method for selecting parameters by focusing on the error J_CD(f; X_p, X_q, U) in Eq. 16 for different function classes U. Parameters are chosen using a family of functions U_1, while the performance of the parameters is measured using an independent function family U_2. This measure is important because in practice the functions we are interested in may not be the ones chosen for validation. We use the USPS data set for this experiment.
As a basis for comparison we use TIKDE (Thresholded Inverse Kernel Density Estimator). TIKDE estimates p̂ and q̂ using Kernel Density Estimation (KDE), sets p̂(x) = α for any x satisfying p̂(x) < α, and then outputs q̂/p̂. We note that the chosen threshold α is key to reasonable performance. One issue with this heuristic is that the uniform thresholding can lead to underestimates in regions with a high density ratio. We also compare our methods to LSIF [10]. In these experiments we do not compare with KMM, as an out-of-sample extension would be necessary for a fair comparison.

Table 1 shows the average errors of the various methods, as defined in Eq. 16, on a held-out set X_err over 5 trials. Columns correspond to the validation function family f_cv and the number N of random functions used for validation; row blocks correspond to the error-measuring function family f_err. The error-measuring function families U_2 are as follows: (1) Linear (L.): random linear functions f(x) = β^T x, where β ∼ N(0, I_d); (2) Half-space (H.S.): sets of random half-space indicator functions; (3) Kernel (K.): random linear combinations of kernel functions centered at the training data, f(x) = γ^T K, where γ ∼ N(0, I_d) and K_ij = k(x_i, x_j) for x_i from the training set; (4) Kernel indicator (K.I.): functions f(x) = 1_{g(x)>0}, where g is as in (3).

Table 1: USPS data set with resampling using PCA(5, σ_v), |X_p| = 500, |X_q| = 1371. Around 400 points in X_p and 700 in X_q are used in 5-fold CV; the rest are held out for computing the error.

               Linear        Half-Spaces
  N            50     200    50     200
  L.   TIKDE   10.9   10.9   10.9   10.9
       LSIF    14.1   14.1   26.8   28.2
       FIREp    3.6    3.7    5.5    6.3
       FIREp,q  4.7    4.7    7.4    6.8
       FIREq    5.9    6.2    9.3    9.3
  H.S. TIKDE    2.6    2.6    2.6    2.6
       LSIF     3.9    3.9    3.7    3.9
       FIREp    1.0    0.9    1.0    1.2
       FIREp,q  0.9    1.0    1.4    1.1
       FIREq    1.2    1.4    1.6    1.6
  K.   TIKDE    4.7    4.7    4.7    4.7
       LSIF    16.1   16.1   15.6   13.8
       FIREp    1.2    1.1    2.8    3.6
       FIREp,q  2.1    2.0    4.2    2.6
       FIREq    5.2    4.3    6.1    6.1
  K.I. TIKDE    4.2    4.2    4.2    4.2
       LSIF     4.4    4.4    5.3    4.4
       FIREp    0.9    0.7    1.2    1.1
       FIREp,q  0.6    0.6    1.9    1.1
       FIREq    1.2    0.9    2.2    2.2

Supervised learning: regression and classification. We compare our FIRE algorithm with several other methods on regression and classification tasks. We consider the situation where part of the data set X_p is labeled and all of X_q is unlabeled. We use weighted ordinary least squares for regression and a weighted linear SVM for classification.

Regression. The square loss is used for regression, and performance is measured using the normalized square loss, Σ_{i=1}^n (ŷ_i − y_i)² / Var(ŷ − y). X_q is resampled using the PCA resampler described before. L denotes the Linear and HS the Half-Space function family used for parameter selection.

Table 2: Mean normalized square loss on CPUsmall and Kin8nm, |X_p| = 1000, |X_q| = 2000. Left: CPUsmall, resampled by PCA(5, σ_v); right: Kin8nm, resampled by PCA(1, σ_v). (OLS uses no importance weights, so a single value is reported per number of labeled points.)

                  CPUsmall                       Kin8nm
  No. of Labeled  100        200       500       100       200       500
  Weights         L    HS    L    HS   L    HS   L    HS   L    HS   L    HS
  OLS             .74        .50       .83       .59       .55       0.54
  TIKDE           .38  .36   .30  .29  .28  .28  .57  .57  .55  .55  .53  .53
  KMM             1.86 1.86  1.9  1.9  2.5  2.5  .58  .58  .55  .55  .52  .52
  LSIF            .39  .39   .31  .31  .33  .33  .57  .56  .54  .54  .52  .52
  FIREp           .33  .33   .29  .29  .27  .27  .57  .56  .55  .54  .52  .52
  FIREp,q         .33  .33   .29  .29  .27  .27  .56  .56  .55  .54  .52  .52
  FIREq           .32  .33   .28  .29  .27  .27  .56  .56  .55  .54  .52  .52

Classification. Weighted linear SVM; we report the percentage of incorrectly labeled test-set instances.

Table 3: Average error on USPS with the +1 class = {0–4}, the −1 class = {5–9}, |X_p| = 1000 and |X_q| = 2000. The left half of the table uses resampling PCA(5, σ_v); the right half uses resampling based on label information, with L = {{0–4}, {5–9}} and L′ = {0, 1, 5, 6}. No.
of Labeled 100 200 500 100 200 500 Weights L HS L HS L HS L HS L HS L HS SVM 10.2 8.1 5.7 18.6 16.4 12.9 TIKDE 9.4 9.4 7.2 7.2 4.9 4.9 18.5 18.5 16.4 16.4 12.4 12.4 KMM 8.1 8.1 5.9 5.9 4.7 4.7 17.5 17.5 13.5 13.5 10.3 10.3 LSIF 9.5 10.2 7.3 8.1 5.0 5.7 18.5 18.5 16.2 16.3 12.2 12.2 FIREp 8.9 6.8 5.3 5.0 4.1 4.1 17.9 18.4 16.1 16.1 11.5 12.0 FIREp,q 7.0 7.0 5.1 5.1 4.1 4.1 18.0 18.5 16.1 16.2 11.6 12.0 FIREq 5.5 7.3 4.8 5.4 4.1 4.4 18.3 18.4 16.0 16.2 11.8 12.0 Acknowledgements. The work was partially supported by NSF Grants IIS 0643916, IIS 1117707. 8 References [1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399–2434, 2006. [2] S. Bickel, M. Br¨uckner, and T. Scheffer. Discriminative learning for differing training and test distributions. In ICML, 2007. [3] E. De Vito, L. Rosasco, A. Caponnetto, U. De Giovannini, and F. Odone. Learning from examples as an inverse problem. JMLR, 6:883, 2006. [4] E. De Vito, L. Rosasco, and A. Toigo. Spectral regularization for support estimation. In NIPS, pages 487–495, 2010. [5] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of inverse problems. Springer, 1996. [6] A. Gretton, A. Smola, J. Huang, M. Schmittfull, K. Borgwardt, and B. Sch¨olkopf. Covariate shift by kernel mean matching. Dataset shift in machine learning, pages 131–160, 2009. [7] S. Gr¨unew¨alder, A. Gretton, and J. Shawe-Taylor. Smooth operators. In ICML, 2013. [8] S. Gr¨unew¨alder, G. Lever, L. Baldassarre, S. Patterson, A. Gretton, and M. Pontil. Conditional mean embeddings as regressors. In ICML, 2012. [9] J. Huang, A. Gretton, K. M. Borgwardt, B. Sch¨olkopf, and A. Smola. Correcting sample selection bias by unlabeled data. In NIPS, pages 601–608, 2006. [10] T. Kanamori, S. Hido, and M. Sugiyama. A least-squares approach to direct importance estimation. JMLR, 10:1391–1445, 2009. [11] J. S. Kim and C. Scott. Robust kernel density estimation. 
In ICASSP, pages 3381–3384, 2008. [12] S. Mukherjee and V. Vapnik. Support vector method for multivariate density estimation. CBCL Technical Report 170, Center for Biological and Computational Learning, Department of Brain and Cognitive Sciences, MIT, 1999. [13] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001. [14] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. NIPS, 20:1089–1096, 2008. [15] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010. [16] B. Schölkopf and A. J. Smola. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT Press, 2001. [17] J. Shawe-Taylor and N. Cristianini. Kernel methods for pattern analysis. Cambridge University Press, 2004. [18] T. Shi, M. Belkin, and B. Yu. Data spectroscopy: Eigenspaces of convolution operators and clustering. The Annals of Statistics, 37(6B):3960–3984, 2009. [19] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000. [20] A. J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation, and operator inversion. Algorithmica, 22(1):211–231, 1998. [21] I. Steinwart and A. Christmann. Support vector machines. Springer, 2008. [22] M. Sugiyama, M. Krauledat, and K. Müller. Covariate shift adaptation by importance weighted cross validation. JMLR, 8:985–1005, 2007. [23] M. Sugiyama, S. Nakajima, H. Kashima, P. von Bünau, and M. Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. NIPS, 20:1433–1440, 2008. [24] A. Tsybakov. Introduction to nonparametric estimation. Springer, 2009. [25] C. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In ICML, 2000. [26] Y. Yu and C. Szepesvári. Analysis of kernel mean matching under covariate shift. In ICML, 2012. [27] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In ICML, 2004.
Modeling Overlapping Communities with Node Popularities Prem Gopalan1, Chong Wang2, and David M. Blei1 1Department of Computer Science, Princeton University, {pgopalan,blei}@cs.princeton.edu 2Machine Learning Department, Carnegie Mellon University, {chongw}@cs.cmu.edu Abstract We develop a probabilistic approach for accurate network modeling using node popularities within the framework of the mixed-membership stochastic blockmodel (MMSB). Our model integrates two basic properties of nodes in social networks: homophily and preferential connection to popular nodes. We develop a scalable algorithm for posterior inference, based on a novel nonconjugate variant of stochastic variational inference. We evaluate the link prediction accuracy of our algorithm on nine real-world networks with up to 60,000 nodes, and on simulated networks with degree distributions that follow a power law. We demonstrate that the AMP predicts significantly better than the MMSB. 1 Introduction Social network analysis is vital to understanding and predicting interactions between network entities [6, 19, 21]. Examples of such networks include online social networks, collaboration networks and hyperlinked blogs. A central problem in social network analysis is to identify hidden community structures and node properties that can best explain the network data and predict connections [19]. Two node properties underlie the most successful models that explain how network connections are generated. The first property is popularity. This is the basis for preferential attachment [12], according to which nodes preferentially connect to popular nodes. The resulting degree distributions from this process are known to satisfy empirically observed properties such as power laws [24]. The second property that underlies many network models is homophily or similarity, according to which nodes with similar observed or unobserved attributes are more likely to connect to each other. 
To best explain social network data, a probabilistic model must capture these competing node properties. Recent theoretical work [24] has argued that optimizing the trade-off between popularity and similarity best explains the evolution of many real networks. It is intuitive that combining both notions of attractiveness, i.e., popularity and similarity, is essential to explaining how networks are generated. For example, on the Internet a user's web page may link to another user's due to a common interest in skydiving. The same user's page may also link to popular web pages such as Google.com.

In this paper, we develop a probabilistic model of networks that captures both popularity and homophily. To capture homophily, our model is built on the mixed-membership stochastic blockmodel (MMSB) [2], a community detection model that allows nodes to belong to multiple communities. (For example, a member of a large social network might belong to overlapping communities of neighbors, co-workers, and school friends.) The MMSB provides better fits to real network data than single-community models [23, 27], but cannot account for node popularities. Specifically, we extend the assortative MMSB [9] to incorporate per-community node popularity. We develop a scalable algorithm for posterior inference, based on a novel nonconjugate variant of stochastic variational inference [11]. We demonstrate that our model predicts significantly better than the stochastic variational inference algorithm for the MMSB [9] on nine large real-world networks. Further, using simulated networks, we show that node popularities are essential for predictive accuracy in the presence of power-law distributed node degrees.

Figure 1: We visualize the discovered community structure and node popularities in a giant component of the netscience collaboration network [22] (Left). Each link denotes a collaboration between two authors, colored by the posterior estimate of its community assignment. Each author node is sized by its estimated posterior popularity and colored by its dominant research community. The network is visualized using the Fruchterman-Reingold algorithm [7]. Following [14], we show an example where incorporating node popularities helps in accurately identifying communities (Right). The division of the political blog network [1] discovered by the AMP corresponds closely to the liberal and conservative blogs identified in [1]; the MMSB has difficulty delineating these groups.

Related work. There have been several research efforts to incorporate popularity into network models. Karrer et al. [14] proposed the degree-corrected blockmodel, which extends the classic stochastic blockmodels [23] to incorporate node popularities. Krivitsky et al. [16] proposed the latent cluster random effects model, which extends the latent space model [10] to include node popularities. Both models capture node similarity and popularity, but assume that unobserved similarity arises from each node participating in a single community. Finally, the Poisson community model [4] is a probabilistic model of overlapping communities that implicitly captures degree-corrected mixed-memberships. However, the standard EM inference under this model drives many of the per-node community parameters to zero, which makes it ineffective for prediction or for model metrics based on prediction (e.g., to select the number of communities).

2 Modeling node popularity and similarity

The assortative mixed-membership stochastic blockmodel (MMSB) [9] treats the links or non-links y_ab of a network as arising from interactions between nodes a and b. Each node a is associated with community memberships π_a, a distribution over communities.
The probability that two nodes are linked is governed by the similarity of their community memberships and the strength of their shared communities. Given the communities of a pair of nodes, the link indicators y_ab are independent. We draw y_ab repeatedly by choosing a community assignment (z_{a→b}, z_{a←b}) for the pair of nodes (a, b), and drawing a binary value from a community distribution. Specifically, the conditional probability of a link in the MMSB is p(y_ab = 1 | z_{a→b,i}, z_{a←b,j}, β) = Σ_{i=1}^K Σ_{j=1}^K z_{a→b,i} z_{a←b,j} β_{ij}, where β is the blockmodel matrix of community strength parameters to be estimated. In the assortative MMSB [9], the non-diagonal entries of the blockmodel matrix are set close to 0. This captures node similarity in community memberships: if two nodes are linked, it is likely that their latent community indicators were the same.

In the proposed model, the assortative MMSB with node popularities, or AMP, we introduce latent variables θ_a to capture the popularity of each node a, i.e., its propensity to attract links independently of its community memberships. We capture the effect of node popularity and community similarity on link generation using a logit model:

logit(p(y_ab = 1 | z_{a→b}, z_{a←b}, θ, β)) ≡ θ_a + θ_b + Σ_{k=1}^K δ^k_{ab} β_k,    (1)

where we define the indicators δ^k_{ab} = z_{a→b,k} z_{a←b,k}. The indicator δ^k_{ab} is one if both nodes assume the same community k. Eq. 1 is a log-linear model [20]. In log-linear models, the random component, i.e., the expected probability of a link, has a multiplicative dependency on the systematic components, i.e., the covariates. This model is also similar in spirit to the random effects model [10]: the node-specific effect θ_a captures the popularity of individual nodes, while the Σ_{k=1}^K δ^k_{ab} β_k term captures the interactions through latent communities. Notice that we can easily extend the predictor in Eq. 1 to include observed node covariates, if any. We now define a hierarchical generative process for the observed link or non-link under the AMP: 1.
Draw K community strengths β_k ∼ N(µ_0, σ²_0). 2. For each node a: (a) Draw community memberships π_a ∼ Dirichlet(α). (b) Draw popularity θ_a ∼ N(0, σ²_1). 3. For each pair of nodes a and b: (a) Draw interaction indicator z_{a→b} ∼ π_a. (b) Draw interaction indicator z_{a←b} ∼ π_b. (c) Draw the link indicator y_ab | z_{a→b}, z_{a←b}, θ, β ∼ logit⁻¹(z_{a→b}, z_{a←b}, θ, β).

Under the AMP, the similarities between the nodes' community memberships and their respective popularities compete to explain the observations. We can make the AMP simpler by replacing the vector of K latent community strengths β with a single community strength β. In §4, we demonstrate that this simpler model gives good predictive performance on small networks. We analyze data with the AMP via the posterior distribution over the latent variables p(π_{1:N}, θ_{1:N}, z, β_{1:K} | y, α, µ_0, σ²_0, σ²_1), where θ_{1:N} represents the node popularities and the posterior over π_{1:N} represents the community memberships of the nodes. With an estimate of this latent structure, we can characterize the network in many useful ways. Figure 1 gives an example. This is a subgraph of the netscience collaboration network [22] with N = 1460 nodes. We analyzed this network with K = 100 communities, using the algorithm from §3. This results in posterior estimates of the community memberships and popularities for each node, and posterior estimates of the community assignments for each link. With these estimates, we visualized the discovered community structure and the popular authors. In general, with an estimate of this latent structure, we can study individual links, characterizing the extent to which they occur due to similarity between the nodes and the extent to which they are an artifact of the nodes' popularity.

3 A stochastic gradient algorithm for nonconjugate variational inference

Our goal is to compute the posterior distribution p(π_{1:N}, θ_{1:N}, z, β_{1:K} | y, α, µ_0, σ²_0, σ²_1). Exact inference is intractable; we use variational inference [13].
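For intuition, the generative process above can be simulated directly. This is an illustrative sketch (the function name and the hyperparameter defaults are our own), not the authors' code:

```python
import numpy as np

def sample_amp(N, K, alpha=0.05, mu0=0.0, sigma0=1.0, sigma1=1.0, seed=0):
    """Simulate the AMP generative process (steps 1-3) for an undirected graph."""
    rng = np.random.default_rng(seed)
    beta = rng.normal(mu0, sigma0, size=K)            # community strengths beta_k
    pi = rng.dirichlet(alpha * np.ones(K), size=N)    # memberships pi_a
    theta = rng.normal(0.0, sigma1, size=N)           # popularities theta_a
    y = np.zeros((N, N), dtype=int)
    for a in range(N):
        for b in range(a + 1, N):
            za = rng.choice(K, p=pi[a])               # z_{a->b} ~ pi_a
            zb = rng.choice(K, p=pi[b])               # z_{a<-b} ~ pi_b
            x = theta[a] + theta[b] + (beta[za] if za == zb else 0.0)
            y[a, b] = y[b, a] = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))
    return y, pi, theta, beta
```

Nodes with large θ_a attract links regardless of community overlap, while the β term only fires when the pair's sampled communities agree, which is exactly the competition between popularity and homophily described above.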
Traditionally, variational inference is a coordinate ascent algorithm. However, the AMP presents two challenges. First, in variational inference the coordinate updates are available in closed form only when all the nodes in the graphical model satisfy conditional conjugacy. The AMP is not conditionally conjugate: the Gaussian priors on the popularity θ and the community strengths β are not conjugate to the conditional likelihood of the data. Second, coordinate ascent algorithms iterate over all O(N²) node pairs, making inference intractable for large networks. We address these challenges by deriving a stochastic gradient algorithm that optimizes a tractable lower bound of the variational objective [11]. Our algorithm avoids the O(N²) computational cost per iteration by subsampling a "mini-batch" of random nodes and a subset of their interactions in each iteration [9].

3.1 The variational objective

In variational inference, we define a family of distributions over the hidden variables, q(β, θ, π, z), and find the member of that family that is closest to the true posterior. We use the mean-field family, with the following variational distributions:

q(z_{a→b} = i, z_{a←b} = j) = φ^{ij}_{ab};   q(π_n) = Dirichlet(π_n; γ_n);   q(β_k) = N(β_k; µ_k, σ²_β);   q(θ_n) = N(θ_n; λ_n, σ²_θ).    (2)

The posterior over the joint distribution of link community assignments per node pair (a, b) is parameterized by the per-interaction memberships φ_ab, the community memberships by γ, the community strength distributions by µ, and the popularity distributions by λ. Minimizing the KL divergence between q and the true posterior is equivalent to optimizing an evidence lower bound (ELBO) L, a bound on the log likelihood of the observations. We obtain this bound by applying Jensen's inequality [13] to the data likelihood.
The ELBO is

L = Σ_n E_q[log p(π_n | α)] − Σ_n E_q[log q(π_n | γ_n)]
  + Σ_n E_q[log p(θ_n | σ²_1)] − Σ_n E_q[log q(θ_n | λ_n, σ²_θ)]
  + Σ_k E_q[log p(β_k | µ_0, σ²_0)] − Σ_k E_q[log q(β_k | µ_k, σ²_β)]
  + Σ_{a,b} ( E_q[log p(z_{a→b} | π_a)] + E_q[log p(z_{a←b} | π_b)] − E_q[log q(z_{a→b}, z_{a←b} | φ_ab)] )
  + Σ_{a,b} E_q[log p(y_ab | z_{a→b}, z_{a←b}, θ, β)].    (3)

Notice that the first three lines in Eq. 3 contain summations over communities and nodes; we call these the global terms. They relate to the global parameters, (γ, λ, µ). The remaining lines contain summations over all node pairs; we call these the local terms. They relate to the local parameters, the φ_ab. The distinction between global and local parameters is important: the updates to the global parameters depend on all (or many) of the local parameters, while the updates to the local parameters for a pair of nodes depend only on the relevant global and local parameters in that context. Estimating the global variational parameters is a challenging computational problem. Coordinate ascent inference must consider each pair of nodes at each iteration, but even a single pass through the O(N²) node pairs can be prohibitive. Previous work [9] has taken advantage of the conditional conjugacy of the MMSB to develop fast stochastic variational inference algorithms. Unlike the MMSB, the AMP is not conditionally conjugate. Nevertheless, by carefully manipulating the variational objective, we can develop a scalable stochastic variational inference algorithm for the AMP.

3.2 Lower bounding the variational objective

To optimize the ELBO with respect to the local and global parameters we need its derivatives. The data likelihood terms in the ELBO can be written as

E_q[log p(y_ab | z_{a→b}, z_{a←b}, θ, β)] = y_ab E_q[x_ab] − E_q[log(1 + exp(x_ab))],    (4)

where we define x_ab ≡ θ_a + θ_b + Σ_{k=1}^K β_k δ^k_{ab}. The terms in Eq. 4 cannot be expanded analytically.
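The intractable term E_q[log(1 + exp(x_ab))] can be upper bounded by log(1 + E_q[exp(x_ab)]) via Jensen's inequality applied to the concave function u ↦ log(1 + u), with E_q[exp(·)] available in closed form because x_ab is Gaussian under q. A quick Monte Carlo sanity check of that bound for a single Gaussian variable (the values of m and s are arbitrary, our own illustration):

```python
import numpy as np

# Check E[log(1 + e^x)] <= log(1 + E[e^x]) for x ~ N(m, s^2),
# where E[e^x] = exp(m + s^2/2) (mean of a log-normal distribution).
rng = np.random.default_rng(0)
m, s = -1.0, 0.7
x = rng.normal(m, s, size=200_000)
mc_estimate = np.log1p(np.exp(x)).mean()        # Monte Carlo E[log(1 + e^x)]
jensen_bound = np.log1p(np.exp(m + s ** 2 / 2)) # closed-form upper bound
assert mc_estimate <= jensen_bound
```

The same log-normal moment is what yields the exp(λ + σ²/2) factors in the bound derived next.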
To address this issue, we further lower bound −E_q[log(1 + exp(x_ab))] using Jensen's inequality [13]:

−E_q[log(1 + exp(x_ab))] ≥ −log(E_q[1 + exp(x_ab)])
  = −log(1 + E_q[exp(θ_a + θ_b + Σ_{k=1}^K β_k δ^k_{ab})])
  = −log(1 + exp(λ_a + σ²_θ/2) exp(λ_b + σ²_θ/2) s_ab),    (5)

where we define s_ab ≡ Σ_{k=1}^K φ^{kk}_{ab} exp{µ_k + σ²_β/2} + (1 − Σ_{k=1}^K φ^{kk}_{ab}). In simplifying Eq. 5, we have used that q(θ_n) is Gaussian: using the mean of a log-normal distribution, we have E_q[exp(θ_n)] = exp(λ_n + σ²_θ/2). A similar substitution applies to the terms involving β_k in Eq. 5. We substitute Eq. 5 into Eq. 3 to obtain a tractable lower bound L′ of the ELBO L in Eq. 3. This allows us to develop a coordinate ascent algorithm that iteratively updates the local and global parameters to optimize this lower bound on the ELBO.

Algorithm 1: The stochastic AMP algorithm
1: Initialize variational parameters. See §3.5.
2: while the convergence criterion is not met do
3:   Sample a mini-batch S of nodes. Let P be the set of node pairs in S.
4:   local step
5:   Optimize φ_ab ∀(a, b) ∈ P using Eq. 11 and Eq. 12.
6:   global step
7:   Update memberships γ_a, for each node a ∈ S, using the stochastic natural gradients in Eq. 6.
8:   Update popularities λ_a, for each node a ∈ S, using the stochastic gradients in Eq. 7.
9:   Update community strengths µ using the stochastic gradients in Eq. 9.
10:  Set ρ_a(t) = (τ_0 + t_a)^{−κ}; t_a ← t_a + 1, for each node a ∈ S.
11:  Set ρ′(t) = (τ_0 + t)^{−κ}; t ← t + 1.
12: end while

(Footnote 1: Following [15], we use a structured mean-field assumption.)

3.3 The global step

We optimize the ELBO with respect to the global variational parameters using stochastic gradient ascent. Stochastic gradient algorithms follow noisy estimates of the gradient with a decreasing step-size. If the expectation of the noisy gradient equals the gradient and the step-size decreases according to a certain schedule, then the algorithm converges to a local optimum [26]. Subsampling the data to form noisy gradients scales inference, as we avoid the expensive all-pairs sums in Eq.
3. The global step updates the global community memberships γ, the global popularity parameters λ, and the global community strength parameters µ with a stochastic gradient of the lower bound L′ on the ELBO. In [9], the authors update the community memberships of all nodes after each iteration by obtaining the natural gradients of the ELBO² with respect to the vector γ of dimension N × K. We use natural gradients for the memberships too, but use distinct stochastic optimizations for the membership and popularity parameters of each node, and maintain a separate learning rate for each node. This restricts the per-iteration updates to nodes in the current mini-batch. Since the variational objective is a sum of terms, we can cheaply compute a stochastic gradient by first subsampling a subset of terms and then forming an appropriately scaled gradient. We use a variant of the random node sampling method proposed in [9]. At each iteration we sample a node uniformly at random from the N nodes in the network. (In practice we sample a "mini-batch" S of nodes per update to reduce noise [11, 9].) While a naive method would include all interactions of a sampled node as the observed pairs, we can leverage network sparsity for efficiency; in many real networks, only a small fraction of the node pairs are linked. Therefore, for each sampled node, we include as observations all of its links and a small uniform sample of m_0 non-links. Let ∂γ^t_a be the natural gradient of L′ with respect to γ_a, and ∂λ^t_a and ∂µ^t_k be the gradients of L′ with respect to λ_a and µ_k, respectively. Following [2, 9], we have

∂γ^t_{a,k} = −γ^{t−1}_{a,k} + α_k + Σ_{(a,b)∈links(a)} φ^{kk}_{ab}(t) + Σ_{(a,b)∈nonlinks(a)} φ^{kk}_{ab}(t),    (6)

where links(a) and nonlinks(a) correspond to the sets of links and non-links of a in the training set. Notice that an unbiased estimate of the summation over non-links in Eq. 6 can be obtained from a subsample of the node's non-links.
Therefore, the gradient of L′ with respect to the membership parameter γa, computed using all of the node’s links and a subsample of its non-links, is a noisy but unbiased estimate of the natural gradient in Eq. 6.

²The natural gradient [3] points in the direction of steepest ascent in the Riemannian space. The local distance in the Riemannian space is defined by the KL divergence, a better measure of dissimilarity between probability distributions than Euclidean distance [11].

The gradient of the approximate ELBO with respect to the popularity parameter λa is

$$\partial\lambda_a^{t} = -\frac{\lambda_a^{t-1}}{\sigma_1^2} + \sum_{(a,b)\in \mathrm{links}(a)\,\cup\,\mathrm{nonlinks}(a)} (y_{ab} - r_{ab}s_{ab}), \quad (7)$$

where we define r_ab as

$$r_{ab} \equiv \frac{\exp\{\lambda_a+\sigma_\theta^2/2\}\exp\{\lambda_b+\sigma_\theta^2/2\}}{1+\exp\{\lambda_a+\sigma_\theta^2/2\}\exp\{\lambda_b+\sigma_\theta^2/2\}\,s_{ab}}. \quad (8)$$

Finally, the stochastic gradient of L′ with respect to the global community strength parameter µk is

$$\partial\mu_k^{t} = \frac{\mu_0-\mu_k^{t-1}}{\sigma_0^2} + \frac{N}{2|S|}\sum_{(a,b)\in \mathrm{links}(S)\,\cup\,\mathrm{nonlinks}(S)} \phi_{ab}^{kk}\big(y_{ab} - r_{ab}\exp\{\mu_k+\sigma_\beta^2/2\}\big). \quad (9)$$

As with the community membership gradients, notice that an unbiased estimate of the summation term over non-links in Eq. 7 and Eq. 9 can be obtained from a subsample of the node’s non-links. To obtain an unbiased estimate of the true gradient with respect to µk, the summation over a node’s links and non-links must be scaled by the inverse probability of subsampling that node in Eq. 9. Since each pair is shared between two nodes, and we use a mini-batch with |S| nodes, the summations over the node pairs are scaled by N/(2|S|) in Eq. 9. We can interpret the gradients in Eq. 7 and Eq. 9 by studying the terms involving r_ab. In Eq. 7, (y_ab − r_ab s_ab) is the residual for the pair (a, b), while in Eq. 9, (y_ab − r_ab exp{µk + σβ²/2}) is the residual for the pair (a, b) conditional on the latent community assignment of both nodes a and b being set to k. Further, notice that the updates for the global parameters of nodes a and b, and the updates for µ, depend only on the diagonal entries of the indicator variational matrix φab.
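The quantities in Eqs. 7-8 are easy to compute directly. Below is a hedged sketch (ours, not the paper's code) of r_ab and the popularity gradient; the default variances mirror the settings reported later in §4 (σθ = 0.1, so σθ² = 0.01, and σ₁² = 10).

```python
import math

def r_ab(lam_a, lam_b, s_ab, sigma2_theta=0.01):
    """Eq. 8: r_ab from the two popularities and the pair quantity s_ab."""
    w = math.exp(lam_a + sigma2_theta / 2) * math.exp(lam_b + sigma2_theta / 2)
    return w / (1.0 + w * s_ab)

def grad_lambda_a(lam_a, pairs, sigma2_1=10.0, sigma2_theta=0.01):
    """Eq. 7: gradient for node a's popularity.

    pairs: (y_ab, lam_b, s_ab) over a's links and a subsample of its non-links.
    Each term is the residual y_ab - r_ab * s_ab.
    """
    g = -lam_a / sigma2_1                            # Gaussian prior term
    for y_ab, lam_b, s_ab in pairs:
        g += y_ab - r_ab(lam_a, lam_b, s_ab, sigma2_theta) * s_ab
    return g
```

Note that r_ab * s_ab is the model's link probability for the pair, so each summand in Eq. 7 is a standard residual, as the interpretation paragraph above observes.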
We can similarly obtain stochastic gradients for the variational variances σβ and σθ; however, in our experiments we found that fixing them already gives good results (see §4). The global step follows the noisy gradient with an appropriate step-size:

$$\gamma_a \leftarrow \gamma_a + \rho_a(t)\,\partial\gamma_a^{t}; \qquad \lambda_a \leftarrow \lambda_a + \rho_a(t)\,\partial\lambda_a^{t}; \qquad \mu \leftarrow \mu + \rho'(t)\,\partial\mu^{t}. \quad (10)$$

We maintain a separate learning rate ρa for each node a, and only update γ and λ for the nodes in the mini-batch in each iteration. There is a global learning rate ρ′ for the community strength parameters µ, which are updated in every iteration. For each of these learning rates ρ, convergence to a local optimum requires $\sum_t \rho(t)^2 < \infty$ and $\sum_t \rho(t) = \infty$ [26]. We set ρ(t) ≜ (τ0 + t)^{−κ}, where κ ∈ (0.5, 1] is the learning-rate exponent and τ0 ≥ 0 downweights early iterations.

3.4 The local step

We now derive the updates for the local parameters. The local step optimizes the per-interaction memberships φ with respect to a subsample of the network. For each node pair there is a per-interaction variational parameter φab of dimension K × K, representing the posterior approximation of which pair of communities is active in determining the link or non-link. The coordinate ascent updates for φab are

$$\phi_{ab}^{kk} \propto \exp\big\{\mathbb{E}_q[\log \pi_{a,k}] + \mathbb{E}_q[\log \pi_{b,k}] + y_{ab}\mu_k - r_{ab}(\exp\{\mu_k+\sigma_\beta^2/2\}-1)\big\} \quad (11)$$

$$\phi_{ab}^{ij} \propto \exp\big\{\mathbb{E}_q[\log \pi_{a,i}] + \mathbb{E}_q[\log \pi_{b,j}]\big\}, \quad i \neq j, \quad (12)$$

where r_ab is defined in Eq. 8. We present the full stochastic variational inference procedure in Algorithm 1.

3.5 Initialization and convergence

We initialize the community memberships γ using approximate posterior memberships from the variational inference algorithm for the MMSB [9]. We initialize the popularities λ to the logarithm of the normalized node degrees plus a small random offset, and initialize the strengths µ to zero. We measure convergence by computing the link prediction accuracy on a validation set containing 1% of the network’s links and an equal number of non-links.
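The local-step updates in Eqs. 11-12 amount to filling a K × K matrix of log-potentials and normalizing it. A minimal sketch (not the authors' code; the expected log-memberships Elog_pi are taken as inputs, and σβ = 0.5 so σβ² = 0.25 as in §4):

```python
import math

def local_step_phi(Elog_pi_a, Elog_pi_b, y_ab, r_ab, mu, sigma2_beta=0.25):
    """Coordinate update for phi_ab. Elog_pi_a[k] stands for E_q[log pi_{a,k}]."""
    K = len(mu)
    # Eq. 12: off-diagonal entries use only the membership terms.
    log_phi = [[Elog_pi_a[i] + Elog_pi_b[j] for j in range(K)] for i in range(K)]
    # Eq. 11: diagonal entries additionally get the likelihood term.
    for k in range(K):
        log_phi[k][k] += y_ab * mu[k] - r_ab * (math.exp(mu[k] + sigma2_beta / 2) - 1.0)
    # Normalize over the whole K x K matrix via log-sum-exp for stability.
    m = max(max(row) for row in log_phi)
    phi = [[math.exp(v - m) for v in row] for row in log_phi]
    z = sum(sum(row) for row in phi)
    return [[v / z for v in row] for row in phi]

phi = local_step_phi([-1.0, -2.0], [-1.5, -0.7], y_ab=1, r_ab=0.5, mu=[0.4, 0.1])
```

The returned matrix sums to one; its diagonal carries the community-assignment mass that the global gradients in Eqs. 6 and 9 consume.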
The algorithm stops either when the change in log-likelihood on this validation set is less than 0.0001%, or if the log-likelihood decreases for consecutive iterations.

Figure 2: Network data sets. N is the number of nodes, d is the percent of node pairs that are links and P is the mean perplexity over the links and nonlinks in the held-out test set.

DATA SET          N      d(%)   P_AMP          P_MMSB         TYPE       SOURCE
US AIR            712    1.7    2.75 ± 0.04    3.41 ± 0.15    TRANSPORT  [25]
POLITICAL BLOGS   1224   1.9    2.97 ± 0.03    3.12 ± 0.01    HYPERLINK  [1]
NETSCIENCE        1450   0.2    2.73 ± 0.11    3.02 ± 0.19    COLLAB.    [22]
RELATIVITY        4158   0.1    3.69 ± 0.18    6.53 ± 0.37    COLLAB.    [18]
HEP-TH            8638   0.05   12.35 ± 0.17   23.06 ± 0.87   COLLAB.    [18]
HEP-PH            11204  0.16   2.75 ± 0.06    3.310 ± 0.15   COLLAB.    [18]
ASTRO-PH          17903  0.11   5.04 ± 0.02    5.28 ± 0.07    COLLAB.    [18]
COND-MAT          36458  0.02   10.82 ± 0.09   13.52 ± 0.21   COLLAB.    [22]
BRIGHTKITE        56739  0.01   10.98 ± 0.39   41.11 ± 0.89   SOCIAL     [18]

[Figure 3 panels: mean precision and mean recall against the number of recommendations (10 to 100) for AMP and MMSB on the relativity, astro, hepph, hepth, cond-mat and brightkite networks.]

Figure 3: The AMP model outperforms the MMSB model of [9] in predictive accuracy on real networks. Both models were fit using stochastic variational inference [11]. For the data sets shown, the number of communities K was set to 100 and hyperparameters were set to the same values across data sets. The perplexity results are based on five replications. A single replication is shown for the mean precision and mean recall.
4 Empirical study

We use the predictive approach to evaluating model fitness [8], comparing the predictive accuracy of AMP (Algorithm 1) to the stochastic variational inference algorithm for the MMSB with link sampling [9]. On all data sets, we found that AMP gave better fits to real-world networks. Our networks range in size from 712 nodes to 56,739 nodes. Some networks are sparse, having as little as 0.01% of all pairs as links, while others have up to 2% of all pairs as links. Our data sets contain four types of networks: hyperlink, transportation, collaboration and social networks. We implemented Algorithm 1 in 4,800 lines of C++ code.³

Metrics. We used perplexity, mean precision and mean recall in our experiments to evaluate the predictive accuracy of the algorithms. We computed the link prediction accuracy using a test set of node pairs that are not observed during training. The test set consists of 10% of randomly selected links and non-links from each data set. During training, these test-set observations are treated as zeros. We approximate the predictive distribution of a held-out node pair y_ab under the AMP using posterior estimates θ̂, β̂ and π̂ as

$$p(y_{ab}\mid y) \approx \sum_{z_{a\rightarrow b}} \sum_{z_{a\leftarrow b}} p(y_{ab}\mid z_{a\rightarrow b}, z_{a\leftarrow b}, \hat\theta, \hat\beta)\, p(z_{a\rightarrow b}\mid \hat\pi_a)\, p(z_{a\leftarrow b}\mid \hat\pi_b). \quad (13)$$

³Our software is available at https://github.com/premgopalan/sviamp.

[Figure 4 panels: mean precision against the ratio of maximum to average degree, for µ = 0, 0.2 and 0.4, comparing AMP and MMSB.]

Figure 4: The AMP predicts significantly better than the MMSB [9] on 12 LFR benchmark networks [17]. Each plot shows 4 networks with increasing right-skewness in the degree distribution. µ is the fraction of noisy links between dissimilar nodes, i.e., nodes that share no communities. The precision is computed at 50 recommendations for each node, and is averaged over all nodes in the network.
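The held-out prediction in Eq. 13 marginalizes the per-pair community assignments over the estimated memberships. A hedged sketch of that computation (our illustration, not the released sviamp code), using the model's logistic link likelihood from §3, where the logit is θa + θb plus βk when both nodes draw community k:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_link(pi_a, pi_b, theta_a, theta_b, beta):
    """Eq. 13: average the per-assignment link probability over pi_a x pi_b."""
    K = len(beta)
    p = 0.0
    for i in range(K):
        for j in range(K):
            logit = theta_a + theta_b + (beta[i] if i == j else 0.0)
            p += pi_a[i] * pi_b[j] * sigmoid(logit)
    return p

p = predict_link([0.7, 0.3], [0.6, 0.4], theta_a=-1.0, theta_b=-0.5,
                 beta=[2.0, 1.0])
```

Ranking all candidate pairs of a node by this probability gives the lists used for precision-at-m and recall-at-m below.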
Perplexity is the exponential of the negative average predictive log likelihood of the held-out node pairs. For mean precision and recall, we generate the top m pairs for each node, ranked by the probability of a link between them. The ranked list of pairs for each node includes nodes in the test set, as well as nodes in the training set that were non-links. We compute precision-at-m, which measures the fraction of the top m recommendations present in the test set; and we compute recall-at-m, which captures the fraction of nodes in the test set present in the top m recommendations. We vary m from 10 to 100 and then obtain the mean precision and recall across all nodes.⁴

Hyperparameters and constants. For the stochastic AMP algorithm, we set the mini-batch size |S| = N/100, where N is the number of nodes in the network, and we set the non-link sample size m0 = 100. We set the number of communities K = 2 for the political blogs network and K = 20 for US air; for all other networks, K was set to 100. We set the hyperparameters σ0² = 1.0, σ1² = 10.0 and µ0 = 0, fixed the variational variances at σθ = 0.1 and σβ = 0.5, and set the learning parameters τ0 = 65536 and κ = 0.5. We set the Dirichlet hyperparameter α = 1/K for both the AMP and the MMSB.

Results on real networks. Figure 2 compares the AMP and the MMSB stochastic algorithms on a number of real data sets. The AMP definitively outperforms the MMSB in predictive performance. All hyperparameter settings were held fixed across data sets. The first four networks are small in size, and were fit using the AMP model with a single community strength parameter; all other networks were fit with the AMP model with K community strength parameters. As N increases, the gap between the mean precision and mean recall performance of these algorithms appears to increase. Without node popularities, the MMSB depends entirely on node memberships and community strengths to predict links.
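The ranking metrics described above are simple to state in code. A small self-contained sketch (our illustration of the standard definitions, not the paper's evaluation harness):

```python
def precision_recall_at_m(ranked, test_links, m):
    """precision-at-m: fraction of the top m recommendations that are test links.
    recall-at-m: fraction of the node's test links recovered in the top m."""
    top = ranked[:m]
    hits = sum(1 for pair in top if pair in test_links)
    precision = hits / m
    recall = hits / len(test_links) if test_links else 0.0
    return precision, recall

# Candidates for one node, ranked by predicted link probability (hypothetical).
ranked = ["b", "c", "d", "e"]
p_at_2, r_at_2 = precision_recall_at_m(ranked, {"c", "e"}, m=2)
```

Averaging these per-node values over all nodes yields the mean precision and mean recall curves of Figure 3.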
Since K is held fixed, communities are likely to contain more nodes as N increases, making it increasingly difficult for the MMSB to predict links. For the small US air, political blogs and netscience data sets, we obtained similar performance for the replication shown in Figure 2. For the AMP, the mean precision at 10 for US air, political blogs and netscience was 0.087, 0.07 and 0.092, respectively; for the MMSB the corresponding values were 0.007, 0.0 and 0.063.

Results on synthetic networks. We generated 12 LFR benchmark networks [17], each with 1000 nodes. Roughly 50% of the nodes were assigned to 4 overlapping communities, and the other 50% were assigned to single communities. We set a community size range of [200, 500] and a mean node degree of 10, with the power-law exponent set to 2.0. Figure 4 shows that the MMSB performs poorly as the skewness is increased, while the AMP performs significantly better in the presence of both noisy links and right-skewness, both characteristics of real networks. The skewness in degree distributions causes the community strength parameters of the MMSB to overestimate or underestimate the linking patterns within communities. The per-node popularities in the AMP can capture the heterogeneity in node degrees while learning the corrected community strengths.

Acknowledgments

David M. Blei is supported by ONR N00014-11-1-0651, NSF CAREER IIS-0745520, and the Alfred P. Sloan foundation. Chong Wang is supported by NSF DBI-0546594 and NIH 1R01GM093156.

⁴Precision and recall are better metrics than ROC AUC on highly skewed data sets [5].

References
[1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 U.S. election: divided they blog. In LinkKDD ’05, pages 36–43, New York, NY, USA, 2005. ACM.
[2] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing. Mixed membership stochastic blockmodels. J. Mach. Learn. Res., 9:1981–2014, June 2008.
[3] S. Amari.
Differential geometry of curved exponential families: curvatures and information loss. The Annals of Statistics, 10(2):357–385, June 1982.
[4] B. Ball, B. Karrer, and M. E. J. Newman. Efficient and principled method for detecting communities in networks. Physical Review E, 84(3):036103, Sept. 2011.
[5] J. Davis and M. Goadrich. The relationship between precision-recall and ROC curves. In ICML ’06, pages 233–240, New York, NY, USA, 2006. ACM.
[6] S. Fortunato. Community detection in graphs. Physics Reports, 486(3–5):75–174, Feb. 2010.
[7] T. M. J. Fruchterman and E. M. Reingold. Graph drawing by force-directed placement. Softw. Pract. Exper., 21(11):1129–1164, Nov. 1991.
[8] S. Geisser and W. Eddy. A predictive approach to model selection. Journal of the American Statistical Association, 74:153–160, 1979.
[9] P. K. Gopalan and D. M. Blei. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36):14534–14539, 2013.
[10] P. Hoff, A. Raftery, and M. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[11] M. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 2013.
[12] H. Jeong, Z. Néda, and A. L. Barabási. Measuring preferential attachment in evolving networks. EPL (Europhysics Letters), 61(4):567, 2003.
[13] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Mach. Learn., 37(2):183–233, Nov. 1999.
[14] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Phys. Rev. E, 83:016107, Jan. 2011.
[15] D. I. Kim, P. Gopalan, D. M. Blei, and E. B. Sudderth. Efficient online inference for Bayesian nonparametric relational models. In Neural Information Processing Systems, 2013.
[16] P. N.
Krivitsky, M. S. Handcock, A. E. Raftery, and P. D. Hoff. Representing degree distributions, clustering, and homophily in social networks with latent cluster random effects models. Social Networks, 31(3):204–213, July 2009.
[17] A. Lancichinetti and S. Fortunato. Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities. Physical Review E, 80(1):016118, July 2009.
[18] J. Leskovec, K. J. Lang, A. Dasgupta, and M. W. Mahoney. Community structure in large networks: natural cluster sizes and the absence of large well-defined clusters. In Internet Mathematics, 2008.
[19] D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In CIKM ’03, pages 556–559, New York, NY, USA, 2003. ACM.
[20] P. McCullagh and J. A. Nelder. Generalized Linear Models, Second Edition. Chapman and Hall/CRC, Aug. 1989.
[21] M. E. J. Newman. Assortative mixing in networks. Physical Review Letters, 89(20):208701, Oct. 2002.
[22] M. E. J. Newman. Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3):036104, 2006.
[23] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, Sept. 2001.
[24] F. Papadopoulos, M. Kitsak, M. Á. Serrano, M. Boguñá, and D. Krioukov. Popularity versus similarity in growing networks. Nature, 489(7417):537–540, Sept. 2012.
[25] RITA. U.S. Air Carrier Traffic Statistics, Bureau of Transportation Statistics, 2010.
[26] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, Sept. 1951.
[27] Y. J. Wang and G. Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397):8–19, 1987.
A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks

Junming Yin, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, junmingy@cs.cmu.edu
Qirong Ho, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, qho@cs.cmu.edu
Eric P. Xing, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, epxing@cs.cmu.edu

Abstract

We propose a scalable approach for making inference about latent spaces of large networks. With a succinct representation of networks as a bag of triangular motifs, a parsimonious statistical model, and an efficient stochastic variational inference algorithm, we are able to analyze real networks with over a million vertices and hundreds of latent roles on a single machine in a matter of hours, a setting that is out of reach for many existing methods. When compared to the state-of-the-art probabilistic approaches, our method is several orders of magnitude faster, with competitive or improved accuracy for latent space recovery and link prediction.

1 Introduction

In the context of network analysis, a latent space refers to a space of unobserved latent representations of individual entities (i.e., topics, roles, or simply embeddings, depending on how users would interpret them) that govern the potential patterns of network relations. The problem of latent space inference amounts to learning the bases of such a space and reducing the high-dimensional network data to this lower-dimensional space, in which each entity has a position vector. Depending on model semantics, the position vectors can be used for diverse tasks such as community detection [1, 5], user personalization [4, 13], link prediction [14] and exploratory analysis [9, 19, 8]. However, scalability is a key challenge for many existing probabilistic methods, as even recent state-of-the-art methods [5, 8] still require days to process modest networks of around 100,000 nodes.
To perform latent space analysis on at least million-node (if not larger) real social networks with many distinct latent roles [24], one must design inferential mechanisms that scale in both the number of vertices N and the number of latent roles K. In this paper, we argue that the following three principles are crucial for successful large-scale inference: (1) a succinct but informative representation of networks; (2) parsimonious statistical modeling; (3) scalable and parallel inference algorithms. Existing approaches [1, 5, 7, 8, 14] are limited in that they consider only one or two of the above principles, and therefore cannot simultaneously achieve scalability and sufficient accuracy. For example, the mixed-membership stochastic blockmodel (MMSB) [1] is a probabilistic latent space model for the edge representation of networks. Its batch variational inference algorithm has O(N²K²) time complexity and hence cannot be scaled to large networks. The a-MMSB [5] improves upon the MMSB by applying principles (2) and (3): it reduces the dimension of the parameter space from O(K²) to O(K), and applies a stochastic variational algorithm for fast inference. Fundamentally, however, the a-MMSB still depends on the O(N²) adjacency matrix representation of networks, just like the MMSB. The a-MMSB inference algorithm mitigates this issue by downsampling zero elements in the matrix, but is still not fast enough to handle networks with N ≥ 100,000. But looking beyond the edge-based relations and features, other higher-order structural statistics (such as the counts of triangles and k-stars) are also widely used to represent the probability distribution over the space of networks, and are viewed as crucial elements in building a good-fitting exponential random graph model (ERGM) [11].
These higher-order relations have motivated the development of the triangular representation of networks [8], in which each network is represented succinctly as a bag of triangular motifs, with size typically much smaller than Θ(N²). This succinct representation has proven effective in extracting informative mixed-membership roles from networks with high fidelity, thus achieving principle (1). However, the corresponding statistical model, called the mixed-membership triangular model (MMTM), only scales well in the size of the network, not in the number of latent roles (i.e., the dimension of the latent space). To be precise, if there are K distinct latent roles, its tensor of triangle-generating parameters is of size O(K³), and its blocked Gibbs sampler requires O(K³) time per iteration. Our own experiments show that the MMTM Gibbs algorithm is unusable for K > 10. We now present a scalable approach to both latent space modeling and inference algorithm design that encompasses all three aforementioned principles for large networks. Specifically, we build our approach on the bag-of-triangles representation of networks [8] and apply principles (2) and (3), yielding a fast inference procedure with time complexity O(NK). In Section 3, we propose the parsimonious triangular model (PTM), in which the dimension of the triangle-generating parameters grows only linearly in K. This dramatic reduction is achieved principally by sharing parameters among certain groups of latent roles. Then, in Section 4, we develop a fast stochastic natural gradient ascent algorithm for performing variational inference, where an unbiased estimate of the natural gradient is obtained by subsampling a “mini-batch” of triangular motifs. Instead of adopting a fully factorized, naive mean-field approximation, which we find performs poorly in practice, we pursue a structured mean-field approach that captures higher-order dependencies between latent variables.
These new developments combine to yield an efficient inference algorithm that usually converges after 2 passes over each triangular motif (or up to 4-5 passes at worst), and achieves competitive or improved accuracy for latent space recovery and link prediction on synthetic and real networks. Finally, in Section 5, we demonstrate that our algorithm converges and infers a 100-role latent space on a 1M-node YouTube social network in just 4 hours, using a single machine with 8 threads.

2 Triangular Representation of Networks

We take a scalable approach to network modeling by representing each network succinctly as a bag of triangular motifs [8]. Each triangular motif is a connected subgraph over a vertex triple containing 2 or 3 edges (called an open triangle and a closed triangle, respectively). Empty and single-edge triples are ignored. Although this triangular format does not preserve all the network information found in an edge representation, these three-node connected subgraphs are able to capture a number of informative structural features of the network. For example, in social network theory, the notion of triadic closure [21, 6] is commonly measured by the relative number of closed triangles compared to the total number of connected triples, known as the global clustering coefficient or transitivity [17]. The same quantity is treated as a general network statistic in the exponential random graph model (ERGM) literature [16]. Furthermore, the most significant and recurrent structural patterns in many complex networks, so-called “network motifs”, turn out to be connected three-node subgraphs [15]. Most importantly of all, triangular modeling requires much less computational cost compared to edge-based models, with little or no degradation of performance for latent space recovery [8].
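The clustering-coefficient statistic mentioned above falls straight out of a triple census. A small self-contained sketch (our illustration, not the authors' code), computing the fraction of connected triples that are closed; note the common wedge-based definition of transitivity instead counts each closed triple three times, once per wedge:

```python
from itertools import combinations

def closed_triple_fraction(nodes, edges):
    """Fraction of vertex triples with >= 2 edges that form a closed triangle."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    closed = connected = 0
    for a, b, c in combinations(nodes, 3):
        n_edges = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if n_edges >= 2:          # open or closed triangle; skip empty/single-edge
            connected += 1
            closed += (n_edges == 3)
    return closed / connected if connected else 0.0

# One triangle with a pendant edge: triple (1,2,3) is closed,
# triples (1,3,4) and (2,3,4) are open.
t = closed_triple_fraction([1, 2, 3, 4], [(1, 2), (1, 3), (2, 3), (3, 4)])
```

This brute-force census is cubic in N; the point of the bag-of-triangles representation is precisely that only the O(ND²) non-trivial triples need to be stored.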
In networks with N vertices and low maximum vertex degree D, the number of triangular motifs, Θ(ND²), is normally much smaller than Θ(N²), allowing us to construct more efficient inference algorithms that scale to larger networks. For high-maximum-degree networks, the triangular motifs can be subsampled in a node-centric fashion as a local data reduction step. For each vertex i with degree higher than a user-chosen threshold δ, uniformly sample $\binom{\delta}{2}$ triangles from the set composed of (a) its adjacent closed triangles, and (b) its adjacent open triangles that are centered on i. Vertices with degree ≤ δ keep all triangles from their set. It has been shown that this δ-subsampling procedure approximately preserves the distribution over open and closed triangles, and allows for much faster inference algorithms (linear growth in N) at a small cost in accuracy [8]. In what follows, we assume that a preprocessing step has been performed, namely extracting and δ-subsampling triangular motifs (which can be done in O(1) time per sample, and requires < 1% of the actual inference time), to yield a bag-of-triangles representation of the input network. For each triplet of vertices i, j, k ∈ {1, ..., N}, i < j < k, let Eijk denote the observed type of triangular motif formed among these three vertices: Eijk = 1, 2 and 3 represent an open triangle with i, j and k in the center, respectively, and Eijk = 4 if a closed triangle is formed. Because empty and single-edge triples are discarded, the set of triples with triangular motifs formed, I = {(i, j, k) : i < j < k, Eijk = 1, 2, 3 or 4}, is of size O(Nδ²) after δ-subsampling [8].

3 Parsimonious Triangular Model

Given the input network, now represented as a bag of triangular motifs, our goal is to make inference about the latent position vector θi of each vertex i ∈ {1, ..., N}.
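The E_ijk encoding defined above is mechanical to extract. A minimal sketch (our illustration, not the authors' preprocessing code), without δ-subsampling:

```python
from itertools import combinations

def triangular_motifs(nodes, edges):
    """Map each triple (i, j, k), i < j < k, with >= 2 edges to its motif type:
    E = 1, 2, 3 for an open triangle centered at i, j, k; E = 4 if closed."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    motifs = {}
    for i, j, k in combinations(sorted(nodes), 3):
        present = [j in adj[i], k in adj[i], k in adj[j]]  # edges (i,j), (i,k), (j,k)
        if sum(present) == 3:
            motifs[(i, j, k)] = 4
        elif sum(present) == 2:
            # The center is the vertex incident to both present edges,
            # i.e. the vertex not touching the one missing edge.
            if not present[2]:
                motifs[(i, j, k)] = 1     # (j,k) missing: center i
            elif not present[1]:
                motifs[(i, j, k)] = 2     # (i,k) missing: center j
            else:
                motifs[(i, j, k)] = 3     # (i,j) missing: center k
    return motifs

m = triangular_motifs([1, 2, 3, 4], [(1, 2), (1, 3), (2, 3), (3, 4)])
```

On this toy graph the triangle (1, 2, 3) is closed, and both (1, 3, 4) and (2, 3, 4) are open triangles centered at vertex 3 (the middle position, j).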
We take a mixed-membership approach: each vertex i can take a mixture distribution over K latent roles, governed by a mixed-membership vector θi ∈ Δ^{K−1} restricted to the (K−1)-simplex. Such vectors can be used for performing community detection and link prediction, as demonstrated in Section 5. Following a design principle similar to the Mixed-Membership Triangular Model (MMTM) [8], our Parsimonious Triangular Model (PTM) is essentially a latent-space model that defines the generative process for a bag of triangular motifs. However, compared to the MMTM, the major advantage of the PTM lies in its more compact and lower-dimensional nature, which allows for more efficient inference algorithms (see the Global Update step in Section 4). The dimension of the triangle-generating parameters in the PTM is just O(K), rather than O(K³) as in the MMTM (see below for further discussion). To form a triangular motif Eijk for each triplet of vertices (i, j, k), a triplet of role indices si,jk, sj,ik, sk,ij ∈ {1, ..., K} is first chosen based on the mixed-membership vectors θi, θj, θk. These indices designate the roles taken by each vertex participating in this triangular motif.

Table 1: Equivalence classes and conditional probabilities (Discrete) of Eijk given (si,jk, sj,ik, sk,ij) (see text for details).

(si,jk, sj,ik, sk,ij)               Equivalence classes   P(Eijk = 1, 2, 3, 4)
x = si,jk = sj,ik = sk,ij           {1,2,3}, {4}          Bxxx,1/3, Bxxx,1/3, Bxxx,1/3, Bxxx,2
x = si,jk = sj,ik ≠ sk,ij           {1,2}, {3}, {4}       Bxx,1/2, Bxx,1/2, Bxx,2, Bxx,3
x = si,jk = sk,ij ≠ sj,ik           {1,3}, {2}, {4}       Bxx,1/2, Bxx,2, Bxx,1/2, Bxx,3
x = sj,ik = sk,ij ≠ si,jk           {2,3}, {1}, {4}       Bxx,2, Bxx,1/2, Bxx,1/2, Bxx,3
si,jk, sj,ik, sk,ij all distinct    {1,2,3}, {4}          B0,1/3, B0,1/3, B0,1/3, B0,2
There are O(K³) distinct configurations of such a latent role triplet, and the MMTM uses a tensor of triangle-generating parameters of the same size to define the probability of Eijk, with one entry Bxyz for each possible configuration (x, y, z). In the PTM, we reduce the number of such parameters by partitioning the O(K³) configuration space into several groups, and then sharing parameters within the same group. The partitioning is based on the number of distinct states in the configuration of the role triplet: 1) if the three role indices are all in the same state x, the triangle-generating probability is determined by Bxxx; 2) if only two role indices exhibit the same state x (called the majority role), the probability of triangles is governed by Bxx, which is shared across different minority roles; 3) if the three role indices are all distinct, the probability of triangular motifs depends on B0, a single parameter independent of the role configuration. This sharing yields just O(K) parameters B0, Bxx, Bxxx, x ∈ {1, ..., K}, allowing the PTM to scale to far more latent roles than the MMTM. A similar idea was proposed in the a-MMSB [5], which uses one parameter ϵ to determine inter-role link probabilities, rather than O(K²) parameters for all pairs of distinct roles as in the original MMSB [1]. Once the role triplet (si,jk, sj,ik, sk,ij) is chosen, some of the triangular motifs can become indistinguishable. To illustrate, in the case x = si,jk = sj,ik ≠ sk,ij, one cannot distinguish the open triangle with i in the center (Eijk = 1) from that with j in the center (Eijk = 2), because both are open triangles centered at a vertex with majority role x, and are thus structurally equivalent under the given role configuration. Formally, this configuration induces a set of triangle equivalence classes {{1, 2}, {3}, {4}} over all possible triangular motifs {1, 2, 3, 4}.
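The parameter-sharing scheme just described is a simple lookup from a role triplet to one of O(K) parameter groups. A small sketch (our illustration of the grouping rule, not the authors' code):

```python
def param_group(s_i, s_j, s_k):
    """Map a role triplet to its shared PTM parameter group:
    Bxxx (all same role), Bxx (majority role x), or B0 (all distinct)."""
    distinct = len({s_i, s_j, s_k})
    if distinct == 1:
        return ("Bxxx", s_i)          # one entry per role: O(K) parameters
    if distinct == 2:
        counts = {}
        for s in (s_i, s_j, s_k):
            counts[s] = counts.get(s, 0) + 1
        majority = max(counts, key=counts.get)
        return ("Bxx", majority)      # shared across all minority roles
    return ("B0",)                    # a single parameter for all such triplets

g1 = param_group(2, 2, 2)
g2 = param_group(1, 3, 3)
g3 = param_group(1, 2, 3)
```

Contrast this with the MMTM, which would index a distinct tensor entry B[x][y][z] for each of the K³ configurations.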
We treat the triangular motifs within the same equivalence class as stochastically equivalent; that is, the conditional probabilities of the events Eijk = 1 and Eijk = 2 are the same if x = si,jk = sj,ik ≠ sk,ij. All possible cases are enumerated as follows (see also Table 1):

1. If all three vertices have the same role x, all three open triangles are equivalent and the induced set of equivalence classes is {{1, 2, 3}, {4}}. The probability of Eijk is determined by Bxxx ∈ Δ¹, where Bxxx,1 represents the total probability of sampling an open triangle from {1, 2, 3} and Bxxx,2 represents the closed-triangle probability. Thus, the probability of a particular open triangle is Bxxx,1/3.

2. If only two vertices have the same role x (the majority role), the probability of Eijk is governed by Bxx ∈ Δ². Here, Bxx,1 and Bxx,2 represent the open-triangle probabilities (for open triangles centered at a vertex in the majority and minority role, respectively), and Bxx,3 represents the closed-triangle probability. There are two possible open triangles with a majority-role vertex at the center, and hence each has probability Bxx,1/2.

3. If all three vertices have distinct roles, the probability of Eijk depends on B0 ∈ Δ¹, where B0,1 represents the total probability of sampling an open triangle from {1, 2, 3} (regardless of the center vertex’s role) and B0,2 represents the closed-triangle probability.

To summarize, the PTM assumes the following generative process for a bag of triangular motifs:

• Choose B0 ∈ Δ¹, Bxx ∈ Δ² and Bxxx ∈ Δ¹ for each role x ∈ {1, ..., K} according to symmetric Dirichlet distributions Dirichlet(λ).
• For each vertex i ∈ {1, ..., N}, draw a mixed-membership vector θi ∼ Dirichlet(α).
• For each triplet of vertices (i, j, k), i < j < k,
  − Draw role indices si,jk ∼ Discrete(θi), sj,ik ∼ Discrete(θj), sk,ij ∼ Discrete(θk).
  − Choose a triangular motif Eijk ∈ {1, 2, 3, 4} based on B0, Bxx, Bxxx and the configuration of (si,jk, sj,ik, sk,ij) (see Table 1 for the conditional probabilities).

It is worth pointing out that, like the MMTM, our PTM is not a generative model of networks per se, because (a) empty and single-edge motifs are not modeled, and (b) one can generate a set of triangles that does not correspond to any network, since the generative process does not force overlapping triangles to have consistent edge values. However, given a bag of triangular motifs E extracted from a network, the above procedure defines a valid probabilistic model p(E | α, λ), and we can legitimately use it for performing posterior inference p(s, θ, B | E, α, λ). We stress that our goal is latent space inference, not network simulation.

4 Scalable Stochastic Variational Inference

In this section, we present a stochastic variational inference algorithm [10] for performing approximate inference under our model. Although it is also feasible to develop such an algorithm for the MMTM [8], its O(NK³) computational complexity precludes application to large numbers of latent roles. However, due to the parsimonious O(K) parameterization of the PTM, our efficient algorithm has only O(NK) complexity. We adopt a structured mean-field approximation method, in which the true posterior of the latent variables p(s, θ, B | E, α, λ) is approximated by a partially factorized distribution q(s, θ, B),

$$q(s,\theta,B) = \prod_{(i,j,k)\in I} q(s_{i,jk}, s_{j,ik}, s_{k,ij} \mid \phi_{ijk}) \prod_{i=1}^{N} q(\theta_i \mid \gamma_i) \prod_{x=1}^{K} q(B_{xxx} \mid \eta_{xxx}) \prod_{x=1}^{K} q(B_{xx} \mid \eta_{xx})\; q(B_0 \mid \eta_0),$$

where I = {(i, j, k) : i < j < k, Eijk = 1, 2, 3 or 4} and |I| = O(Nδ²). The strong dependencies among the per-triangle latent roles (si,jk, sj,ik, sk,ij) suggest that we should model them as a group, rather than as completely independent as in a naive mean-field approximation.¹
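The reason the per-triangle roles are kept as a group can be seen in a tiny example of our own (not from the paper): a correlated joint over two role variables cannot be written as a product q1(x) q2(y), which is exactly the rank-1 form a naive mean-field factorization imposes.

```python
# Roles that strongly prefer to agree: a correlated 2 x 2 joint q(x, y).
q_joint = [[0.45, 0.05],
           [0.05, 0.45]]

# A 2 x 2 table factorizes as an outer product q1(x) * q2(y) (rank 1)
# exactly when its determinant vanishes; here it does not.
det = q_joint[0][0] * q_joint[1][1] - q_joint[0][1] * q_joint[1][0]

# The product of the marginals (both uniform here) spreads mass evenly and
# loses the agreement structure entirely.
product_of_marginals = [[0.25, 0.25], [0.25, 0.25]]
```

The structured posterior in the factorization above keeps the full K × K × K table φijk per triangle, at local cost only, so no such information is lost.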
Thus, the variational posterior of (si,jk, sj,ik, sk,ij) is the discrete distribution
q(si,jk = x, sj,ik = y, sk,ij = z) ≐ qijk(x, y, z) = φ^{xyz}_{ijk}, x, y, z = 1, . . . , K. (1)
The posterior q(θi) is a Dirichlet(γi); and the posteriors of Bxxx, Bxx, B0 are parameterized as: q(Bxxx) = Dirichlet(ηxxx), q(Bxx) = Dirichlet(ηxx), and q(B0) = Dirichlet(η0). The mean field approximation aims to minimize the KL divergence KL(q ∥ p) between the approximating distribution q and the true posterior p; it is equivalent to maximizing a lower bound L(φ, η, γ) of the log marginal likelihood of the triangular motifs (based on Jensen's inequality) with respect to the variational parameters {φ, η, γ} [22].
log p(E | α, λ) ≥ Eq[log p(E, s, θ, B | α, λ)] − Eq[log q(s, θ, B)] ≐ L(φ, η, γ). (2)
To simplify the notation, we decompose the variational objective L(φ, η, γ) into a global term and a summation of local terms, one term for each triangular motif (see Appendix for details).
L(φ, η, γ) = g(η, γ) + Σ_{(i,j,k)∈I} ℓ(φijk, η, γ). (3)
The global term g(η, γ) depends only on the global variational parameters η, which govern the posterior of the triangle-generating probabilities B, as well as the per-node mixed-membership parameters γ. Each local term ℓ(φijk, η, γ) depends on per-triangle parameters φijk as well as the global parameters. Define L(η, γ) ≐ max_φ L(φ, η, γ), which is the variational objective achieved by fixing the global parameters η, γ and optimizing the local parameters φ. By equation (3),
L(η, γ) = g(η, γ) + Σ_{(i,j,k)∈I} max_{φijk} ℓ(φijk, η, γ). (4)
Stochastic variational inference is a stochastic gradient ascent algorithm [3] that maximizes L(η, γ), based on noisy estimates of its gradient with respect to η and γ. Whereas computing the true gradient ∇L(η, γ) involves a costly summation over all triangular motifs as in (4), an unbiased noisy approximation of the gradient can be obtained much more cheaply by summing over a small subsample of triangles.
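The unbiasedness of such a subsampled estimate is easy to verify numerically; a toy sketch with randomly generated stand-in local terms (not the actual ELBO terms of our model):

```python
import random

# Unbiasedness of the mini-batch objective: if L = g + sum of m local
# terms, then L_S = g + (m/|S|) * (sum over a uniform mini-batch S)
# has expectation exactly L.  Checked here by Monte Carlo.
random.seed(42)
m = 200
g = 1.5                                    # stand-in for the global term
local = [random.gauss(0, 1) for _ in range(m)]
L_full = g + sum(local)

batch, trials = 20, 20000
acc = 0.0
for _ in range(trials):
    S = random.sample(range(m), batch)     # uniform mini-batch
    acc += g + (m / batch) * sum(local[i] for i in S)
L_est = acc / trials
print(abs(L_est - L_full))                 # small sampling error only
```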
With this unbiased estimate of the gradient and a suitable adaptive step size, the algorithm is guaranteed to converge to a stationary point of the variational objective L(η, γ) [18].
1 We tested a naive mean-field approximation, and it performed very poorly. This is because the tensor of role probabilities q(x, y, z) is often of high rank, whereas naive mean-field is a rank-1 approximation.
Algorithm 1 Stochastic Variational Inference
1: t = 0. Initialize the global parameters η and γ.
2: Repeat the following steps until convergence.
   (1) Sample a mini-batch of triangles S.
   (2) Optimize the local parameters qijk(x, y, z) for all sampled triangles in parallel by (6).
   (3) Accumulate sufficient statistics for the natural gradients of η, γ (and then discard qijk(x, y, z)).
   (4) Optimize the global parameters η and γ by the stochastic natural gradient ascent rule (7).
   (5) ρt ← τ0(τ1 + t)^{−κ}, t ← t + 1.
In our setting, the most natural way to obtain an unbiased gradient of L(η, γ) is to sample a "minibatch" of triangular motifs at each iteration, and then average the gradient of local terms in (4) only for these sampled triangles. Formally, let m be the total number of triangles and define
LS(η, γ) = g(η, γ) + (m/|S|) Σ_{(i,j,k)∈S} max_{φijk} ℓ(φijk, η, γ), (5)
where S is a mini-batch of triangles sampled uniformly at random. It is easy to verify that ES[LS(η, γ)] = L(η, γ), hence ∇LS(η, γ) is unbiased: ES[∇LS(η, γ)] = ∇L(η, γ).
Exact Local Update. To obtain the gradient ∇LS(η, γ), one needs to compute the optimal local variational parameters φijk (keeping η and γ fixed) for each sampled triangle (i, j, k) in the minibatch S; these optimal φijk's are then used in equation (5) to compute ∇LS(η, γ). Taking partial derivatives of (3) with respect to each local term φ^{xyz}_{ijk} and setting them to zero, we get for distinct x, y, z ∈ {1, . . . , K},
φ^{xyz}_{ijk} ∝ exp{ Eq[log B0,2] I[Eijk = 4] + Eq[log(B0,1/3)] I[Eijk ≠ 4] + Eq[log θi,x + log θj,x + log θk,x] }.
(6)
See Appendix for the update equations of φ^{xxx}_{ijk} and φ^{xxy}_{ijk} (x ≠ y).
O(K) Approximation to Local Update. For each sampled triangle (i, j, k), the exact local update requires O(K^3) work to solve for all φ^{xyz}_{ijk}, making it unscalable. To enable a faster local update, we replace qijk(x, y, z | φijk) in (1) with a simpler "mixture-of-deltas" variational distribution,
qijk(x, y, z | δijk) = Σ_a δ^{aaa}_{ijk} I[x = y = z = a] + Σ_{(a,b,c)∈A} δ^{abc}_{ijk} I[x = a, y = b, z = c],
where A is a randomly chosen set of triples (a, b, c) with size O(K), and Σ_a δ^{aaa}_{ijk} + Σ_{(a,b,c)∈A} δ^{abc}_{ijk} = 1. In other words, we assume the probability mass of the variational posterior q(si,jk, sj,ik, sk,ij) falls entirely on the K "diagonal" role combinations (a, a, a) as well as O(K) randomly chosen "off-diagonals" (a, b, c). Conveniently, the δ update equations are identical to their φ counterparts as in (6), except that we normalize over the δ's instead. In our implementation, we generate A by picking 3K combinations of the form (a, a, b), (a, b, a) or (b, a, a), and another 3K combinations of the form (a, b, c), thus mirroring the parameter structure of B. Furthermore, we re-pick A every time we perform the local update on some triangle (i, j, k), thus avoiding any bias due to a single choice of A. We find that this approximation works as well as the full parameterization in (1), yet requires only O(K) work per sampled triangle. Note that any choice of A yields a valid lower bound to the true log-likelihood; this follows from standard variational inference theory.
Global Update. We appeal to stochastic natural gradient ascent [2, 20, 10] to optimize the global parameters η and γ, as it greatly simplifies the update rules while maintaining the same asymptotic convergence properties as classical stochastic gradient. The natural gradient ∇̃LS(η, γ) is obtained by premultiplying the ordinary gradient ∇LS(η, γ) with the inverse of the Fisher information of the variational posterior q.
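The random support set A for the mixture-of-deltas update can be drawn as in the following sketch. The helper name and de-duplication strategy are our own illustration, and we read the three two-equal patterns as (a, a, b), (a, b, a), (b, a, a):

```python
import random

def draw_support(K, rng=random):
    """Draw the random support set A for the mixture-of-deltas update:
    the K diagonal combinations (a, a, a), plus 3K combinations with
    exactly two equal roles and 3K with all roles distinct.  Assumes
    K >= 4 so that enough distinct triples exist."""
    two_equal = set()
    while len(two_equal) < 3 * K:
        a, b = rng.sample(range(K), 2)
        two_equal.add(rng.choice([(a, a, b), (a, b, a), (b, a, a)]))
    distinct = set()
    while len(distinct) < 3 * K:
        distinct.add(tuple(rng.sample(range(K), 3)))
    diag = [(a, a, a) for a in range(K)]   # always included
    return diag + sorted(two_equal) + sorted(distinct)
```

Re-drawing A on every local update, as the text describes, then just means calling `draw_support` once per sampled triangle.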
See Appendix for the exact forms of the natural gradients with respect to η and γ. To update the parameters η and γ, we apply the stochastic natural gradient ascent rule
ηt+1 = ηt + ρt ∇̃η LS(ηt, γt), γt+1 = γt + ρt ∇̃γ LS(ηt, γt), (7)
where the step size is given by ρt = τ0(τ1 + t)^{−κ}. To ensure convergence, τ0, τ1, κ are set such that Σt ρt^2 < ∞ and Σt ρt = ∞ (Section 5 has our experimental values). The global update only costs O(NK) time per iteration due to the parsimonious O(K) parameterization of our PTM. Our full inferential procedure is summarized in Algorithm 1. Within a mini-batch S, steps 2-3 can be trivially parallelized across triangles. Furthermore, the local parameters qijk(x, y, z) can be discarded between iterations, since all natural gradient sufficient statistics can be accumulated during the local update. This saves up to tens of gigabytes of memory on million-node networks.
5 Experiments
We demonstrate that our stochastic variational algorithm achieves latent space recovery accuracy comparable to or better than prior work, but in only a fraction of the time. In addition, we perform heldout link prediction and likelihood lower bound (i.e. perplexity) experiments on several large real networks, showing that our approach is orders of magnitude more scalable than previous work.
5.1 Generating Synthetic Data
We use two latent space models as the simulator for our experiments — the MMSB model [1] (which the MMSB batch variational algorithm solves for), and a model that produces power-law networks from a latent space (see Appendix for details). Briefly, the MMSB model produces networks with "blocks" of nodes characterized by high edge probabilities, whereas the Power-Law model produces "communities" centered around a high-degree hub node. We show that our algorithm rapidly and accurately recovers latent space roles based on these two notions of node-relatedness.
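As a generic illustration of the Robbins-Monro step-size schedule ρt = τ0(τ1 + t)^{−κ} used in the update rule (7), the following toy stochastic gradient ascent (on a simple concave objective, not our variational bound) converges to its optimum under that schedule:

```python
import random

# Stochastic gradient ascent on -(w - 3)^2 / 2 with noisy gradients
# (3 - w) + noise, using rho_t = tau0 * (tau1 + t)^(-kappa).  For
# kappa in (0.5, 1], the schedule satisfies sum rho_t = infinity and
# sum rho_t^2 < infinity, so the iterates converge to w* = 3.
random.seed(1)
tau0, tau1, kappa = 1.0, 10.0, 0.6
w = 0.0
for t in range(20000):
    rho = tau0 * (tau1 + t) ** (-kappa)
    noisy_grad = (3.0 - w) + random.gauss(0, 1)
    w += rho * noisy_grad
print(w)  # close to the optimum w* = 3
```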
For both models, we synthesized ground truth role vectors θi's to generate networks of varying difficulty. We generated networks with N ∈ {500, 1000, 2000, 5000, 10000} nodes, with the number of roles growing as K = N/100, to simulate the fact that large networks can have more roles [24]. We generated "easy" networks where each θi contains 1 to 2 nonzero roles, and "hard" networks with 1 to 4 roles per θi. A full technical description of our networks can be found in the Appendix.
5.2 Latent Space Recovery on Synthetic Data
Task and Evaluation. Given one of the synthetic networks, the task is to recover estimates ˆθi's of the original latent space vectors θi's used to generate the network. Because we are comparing different algorithms (with varying model assumptions) on different networks (generated under their own assumptions), we standardize our evaluation by thresholding all outputs ˆθi's at 1/8 = 0.125 (because there are no more than 4 roles per θi), and use Normalized Mutual Information (NMI) [12, 23], a commonly-used measure of overlapping cluster accuracy, to compare the ˆθi's with the true θi's (thresholded similarly). In other words, we want to recover the set of non-zero roles.
Competing Algorithms and Initialization. We tested the following algorithms:
• Our PTM stochastic variational algorithm. We used δ = 50 subsampling2 (i.e. (50 choose 2) = 1225 triangles per node), hyperparameters α = λ = 0.1, and a 10% minibatch size with step-size τ0(τ1 + t)^κ, where τ0 = 100, τ1 = 10000, κ = −0.5, and t is the iteration number. Our algorithm has a runtime complexity of O(Nδ^2K). Since our algorithm can be run in parallel, we conduct all experiments using 4 threads — compared to single-threaded execution, we observe this reduces runtime to about 40%.
• MMTM collapsed blocked Gibbs sampler, according to [8]. We also used δ = 50 subsampling. The algorithm has O(Nδ^2K^3) time complexity, and is single-threaded.
• PTM collapsed blocked Gibbs sampler.
Like the above MMTM Gibbs, but using our PTM model. Because of block sampling, complexity is still O(Nδ^2K^3). Single-threaded.
• MMSB batch variational [1]. This algorithm has O(N^2K^2) time complexity, and is single-threaded.
All these algorithms are locally-optimal search procedures, and thus sensitive to initial values. In particular, if nodes from two different roles are initialized to have the same role, the output is likely to merge all nodes in both roles into a single role. To ensure a meaningful comparison, we therefore provide the same fixed initialization to all algorithms — for every role x, we provide 2 example nodes i, and initialize the remaining nodes to have random roles. In other words, we seed 2% of the nodes with one of their true roles, and let the algorithms proceed from there3.
Recovery Accuracy. Results of our method, MMSB Variational, MMTM Gibbs and PTM Gibbs are in Figure 1. Our method exhibits high accuracy (i.e. NMI close to 1) across almost all networks, validating its ability to recover latent roles under a range of network sizes N and roles K. In contrast, as N (and thus K) increases, MMSB Variational exhibits degraded performance despite having converged, while MMTM/PTM Gibbs converge to and become stuck in local minima
2 We chose δ = 50 because almost all our synthetic networks have median degree ≤ 50. Choosing δ above the median degree ensures that more than 50% of the nodes will receive all their assigned triangles.
3 In general, one might not have any ground truth roles or labels to seed the algorithm with. For such cases, our algorithm can be initialized as follows: rank all nodes according to the number of 3-triangles they touch, and then seed the top K nodes with different roles x. The intuition is that "good" roles may be defined as having a high ratio of 3-triangles to 2-triangles among participating nodes.
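The label-free seeding heuristic in footnote 3 (rank nodes by the number of 3-triangles they touch, seed the top K) can be sketched as follows; the adjacency structure is given as Python sets and the helper name is ours:

```python
from itertools import combinations

def seed_nodes(adj, K):
    """Rank nodes by the number of closed (3-)triangles they touch and
    return the top-K as role seeds.  adj maps each node to its set of
    neighbors in an undirected graph."""
    counts = {v: 0 for v in adj}
    for v in adj:
        for u, w in combinations(adj[v], 2):
            if w in adj[u]:          # the u-w edge closes the triangle
                counts[v] += 1
    return sorted(adj, key=lambda v: -counts[v])[:K]
```

For example, on a triangle {0, 1, 2} with a pendant node 3 attached to node 0, the top-3 seeds are the triangle's nodes.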
[Figure 1: eight panels plotting NMI and runtime against network size N (in 1,000s of nodes) for MMSB easy/hard and Power-Law easy/hard networks, comparing Our method, MMTM Gibbs, PTM Gibbs and MMSB Variational; plus convergence panels (NMI vs. data passes, N = 1,000) comparing 10% mini-batches with full batch variational.]
Figure 1: Synthetic Experiments. Left/Center: Latent space recovery accuracy (measured using Normalized Mutual Information) and runtime per data pass for our method and baselines. With the MMTM/PTM Gibbs and MMSB Variational algorithms, the larger networks did not complete within 12 hours. The runtime plots for MMSB easy and Power-Law easy experiments are very similar to the hard experiments, so we omit them. Right: Convergence of our stochastic variational algorithm (with 10% minibatches) versus a batch variational version of our algorithm. On N = 1,000 networks, our minibatch algorithm converges within 1-2 data passes.
Table 2 (Link Prediction on Synthetic and Real Networks):

Network Type          Synthetic         Dictionary      Biological  arXiv Collaboration  Internet   Social
Name                  MMSB   Power-law  Roget   Odlis   Yeast       GrQc     AstroPh     Stanford   Youtube
Nodes N               2.0K   2.0K       1.0K    2.9K    2.4K        5.2K     18.7K       282K       1.1M
Edges                 40K    40K        3.6K    16K     6.6K        14K      200K        2.0M       3.0M
Our Method AUC        0.93   0.97       0.65    0.81    0.75        0.82     0.86        0.94       0.71
MMSB Variational AUC  0.91   0.94       0.72    0.88    0.81        0.77     —           —          —

Table 2: Link Prediction Experiments, measured using AUC. Our method performs similarly to MMSB Variational on synthetic data. MMSB performs better on smaller, non-social networks, while we perform better on larger, social networks (or MMSB fails to complete due to lack of scalability). Roget, Odlis and Yeast networks are from Pajek datasets (http://vlado.fmf.uni-lj.si/pub/networks/data/); the rest are from the Stanford Large Network Dataset Collection (http://snap.stanford.edu/data/).
(even after many iterations and trials), without reaching a good solution4. We believe our method maintains high accuracy due to its parsimonious O(K) parameter structure — compared to MMSB Variational's O(K^2) block matrix and MMTM Gibbs's O(K^3) tensor of triangle parameters. Having fewer parameters may lead to better parameter estimates, and better task performance.
Runtime. On the larger networks, MMSB Variational and MMTM/PTM Gibbs did not even finish execution due to their high runtime complexity. This can be seen in the runtime graphs, which plot the time taken per data pass5: at N = 5,000, all 3 baselines require orders of magnitude more time than our method does at N = 10,000. Recall that K = O(N), and that our method has time complexity O(Nδ^2K), while MMSB Variational has O(N^2K^2), and MMTM/PTM Gibbs has O(Nδ^2K^3) — hence, our method runs in O(N^2) on these synthetic networks, while the others run in O(N^4). This highlights the need for network methods that are linear in N and K.
Convergence of stochastic vs. batch algorithms.
We also demonstrate that our stochastic variational algorithm with 10% mini-batches converges much faster to the correct solution than a nonstochastic, full-batch implementation. The convergence graphs in Figure 1 plot NMI as a function of data passes, and show that our method converges to the (almost) correct solution in 1-2 data passes. In contrast, the batch algorithm takes 10 or more data passes to converge.
5.3 Heldout Link Prediction on Real and Synthetic Networks
We compare MMSB Variational and our method on a link prediction task, in which 10% of the edges are randomly removed (set to zero) from the network, and, given this modified network, the task is to rank these heldout edges against an equal number of randomly chosen non-edges. For MMSB, we simply ranked according to the link probability under the MMSB model. For our
4 With more generous initializations (20 out of 100 ground truth nodes per role), MMTM/PTM Gibbs converge correctly. In practice however, this is an unrealistic amount of prior knowledge to expect. We believe that more sophisticated MCMC schemes may fix this convergence issue with MMTM/PTM models.
5 One data pass is defined as performing variational inference on m triangles, where m is equal to the total number of triangles. This takes the same amount of time for both the stochastic and batch algorithms.

Table 3 (Real Networks — Statistics, Experimental Settings and Runtime):

Name                   Nodes  Edges  δ   2,3-Tris (for δ)  Frac. 3-Tris  Roles K  Threads  Runtime (10 data passes)
Brightkite             58K    214K   50  3.5M              0.11          64       4        34 min
Brightkite             58K    214K   50  3.5M              0.11          300      4        2.6 h
Slashdot Feb 2009      82K    504K   50  9.0M              0.030         100      4        2.4 h
Slashdot Feb 2009      82K    504K   50  9.0M              0.030         300      4        6.7 h
Stanford Web           282K   2.0M   20  11.4M             0.57          5        4        10 min
Stanford Web           282K   2.0M   50  25.0M             0.42          100      4        6.3 h
Berkeley-Stanford Web  685K   6.6M   30  57.6M             0.55          100      8        15.2 h
Youtube                1.1M   3.0M   50  36.0M             0.053         100      8        9.1 h

Table 3: Real Network Experiments.
All networks were taken from the Stanford Large Network Dataset Collection; directed networks were converted to undirected networks via symmetrization. Some networks were run with more than one choice of settings. Runtime is the time taken for 10 data passes (which was more than sufficient for convergence on all networks, see Figure 2).
[Figure 2: eight panels (Brightkite K=64 and K=300, Slashdot K=100 and K=300, Stanford K=5 and K=100, Berk-Stan K=100, Youtube K=100), each plotting the training and heldout variational lower bounds against wall-clock hours.]
Figure 2: Real Network Experiments. Training and heldout variational lower bound (equivalent to perplexity) convergence plots for all experiments in Table 3. Each plot shows both lower bounds over 10 data passes (i.e. 100 iterations with 10% minibatches). In all cases, we observe convergence between 2-5 data passes, and the shape of the heldout curve closely mirrors the training curve (i.e. no overfitting).
method, we ranked possible links i-j by the probability that the triangle (i, j, k) will include edge i-j, marginalizing over all choices of the third node k and over all possible role choices for nodes i, j, k. Table 2 displays results for a variety of networks, and our triangle-based method does better on larger social networks than the edge-based MMSB. This matches what has been observed in the network literature [24], and further validates our triangle modeling assumptions.
5.4 Real World Networks — Convergence on Heldout Data
Finally, we demonstrate that our approach is capable of scaling to large real-world networks, achieving convergence in a fraction of the time reported by recent work on scalable network modeling. Table 3 lists the networks that we tested on, ranging in size from N = 58K to N = 1.1M. With a few exceptions, the experiments were conducted with δ = 50 and 4 computational threads. In particular, for every network, we picked δ to be larger than the average degree, thus minimizing the amount of triangle data lost to subsampling. Figure 2 plots the training and heldout variational lower bound for several experiments, and shows that our algorithm always converges in 2-5 data passes. We wish to highlight two experiments, namely the Brightkite network for K = 64, and the Stanford network for K = 5 (the first and fifth rows respectively in Table 3). Gopalan et al. [5] reported convergence on Brightkite in 8 days using their scalable a-MMSB algorithm with 4 threads, while Ho et al. [8] converged on Stanford in 18.5 hours using the MMTM Gibbs algorithm on 1 thread. In both settings, our algorithm is orders of magnitude faster — using 4 threads, it converged on Brightkite and Stanford in just 12 and 4 minutes respectively, as seen in Figure 2. In summary, we have constructed a latent space network model with O(NK) parameters and devised a stochastic variational algorithm for O(NK) inference.
Our implementation allows network analysis with millions of nodes N and hundreds of roles K in hours on a single multi-core machine, with competitive or improved accuracy for latent space recovery and link prediction. These results are orders of magnitude faster than recent work on scalable latent space network modeling [5, 8].
Acknowledgments
This work was supported by AFOSR FA9550010247, NIH 1R01GM093156 and DARPA FA87501220324 to Eric P. Xing. Qirong Ho is supported by an A-STAR, Singapore fellowship. Junming Yin is supported by a Ray and Stephanie Lane Research Fellowship.
References
[1] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981–2014, 2008.
[2] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[3] L. Bottou. Stochastic learning. Advanced Lectures on Machine Learning, pages 146–168, 2004.
[4] M. Carman, F. Crestani, M. Harvey, and M. Baillie. Towards query log based personalization using topic models. In Proceedings of the 19th ACM international conference on Information and knowledge management (CIKM '10), pages 1849–1852, 2010.
[5] P. Gopalan, D. Mimno, S. Gerrish, M. Freedman, and D. Blei. Scalable inference of overlapping communities. In Advances in Neural Information Processing Systems 25, pages 2258–2266. 2012.
[6] M. Granovetter. The strength of weak ties. American Journal of Sociology, 78(6):1360–1380, 1973.
[7] Q. Ho, A. Parikh, and E. Xing. A multiscale community blockmodel for network exploration. Journal of the American Statistical Association, 107(499), 2012.
[8] Q. Ho, J. Yin, and E. Xing. On triangular versus edge representations — towards scalable modeling of networks. In Advances in Neural Information Processing Systems 25, pages 2141–2149. 2012.
[9] P. Hoff, A. Raftery, and M. Handcock. Latent space approaches to social network analysis.
Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[10] M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347, 2013.
[11] D. Hunter, S. Goodreau, and M. Handcock. Goodness of fit of social network models. Journal of the American Statistical Association, 103(481):248–258, 2008.
[12] A. Lancichinetti, S. Fortunato, and J. Kertész. Detecting the overlapping and hierarchical community structure in complex networks. New Journal of Physics, 11(3):033015+, 2009.
[13] Y. Low, D. Agarwal, and A. Smola. Multiple domain user personalization. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '11), pages 123–131, 2011.
[14] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems 22, pages 1276–1284. 2009.
[15] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon. Network motifs: Simple building blocks of complex networks. Science, 298(5594):824–827, 2002.
[16] M. Morris, M. Handcock, and D. Hunter. Specification of exponential-family random graph models: Terms and computational aspects. Journal of Statistical Software, 24(4), 2008.
[17] M. Newman, S. Strogatz, and D. Watts. Random graphs with arbitrary degree distributions and their applications. Physical Review E, 64(2), 2001.
[18] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[19] P. Sarkar and A. Moore. Dynamic social network analysis using latent space models. ACM SIGKDD Explorations Newsletter, 7(2):31–40, 2005.
[20] M. Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
[21] G. Simmel and K. Wolff. The Sociology of Georg Simmel. Free Press, 1950.
[22] M. Wainwright and M. Jordan.
Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[23] J. Xie, S. Kelley, and B. Szymanski. Overlapping community detection in networks: the state of the art and comparative study. ACM Computing Surveys, 45(4), 2013.
[24] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics. ACM, 2012.
Reflection methods for user-friendly submodular optimization
Stefanie Jegelka, UC Berkeley, Berkeley, CA, USA
Francis Bach, INRIA - ENS, Paris, France
Suvrit Sra, MPI for Intelligent Systems, Tübingen, Germany
Abstract
Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate, nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best approximation problem that is solved through a sequence of reflections, and its solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks.
1 Introduction
Submodularity is a rich combinatorial concept that expresses widely occurring phenomena such as diminishing marginal costs and preferences for grouping. A set function F : 2^V → R on a set V is submodular if for all subsets S, T ⊆ V, we have F(S ∪ T) + F(S ∩ T) ≤ F(S) + F(T). Submodular functions underlie the goals of numerous problems in machine learning, computer vision and signal processing [1]. Several problems in these areas can be phrased as submodular optimization tasks: notable examples include graph cut-based image segmentation [7], sensor placement [30], or document summarization [31].
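For small ground sets, the defining inequality can be checked by brute force; a minimal sketch (the cut-function example here is our own illustration, not one of the paper's experiments):

```python
from itertools import combinations

def is_submodular(F, V):
    """Brute-force check of F(S|T) + F(S&T) <= F(S) + F(T) over all
    pairs of subsets; only feasible for small ground sets."""
    subsets = [frozenset(c) for r in range(len(V) + 1)
               for c in combinations(V, r)]
    return all(F(S | T) + F(S & T) <= F(S) + F(T)
               for S in subsets for T in subsets)

# The cut function of a small undirected graph is submodular,
# while the (strictly supermodular) function |S|^2 is not.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
print(is_submodular(cut, range(4)),
      is_submodular(lambda S: len(S) ** 2, range(4)))  # True False
```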
A longer list of examples may be found in [1]. The theoretical complexity of submodular optimization is well-understood: unconstrained minimization of submodular set functions is polynomial-time [19] while submodular maximization is NP-hard. Algorithmically, however, the picture is different. Generic submodular maximization admits efficient algorithms that can attain approximate optima with global guarantees; these algorithms are typically based on local search techniques [16, 35]. In contrast, although polynomial-time solvable, submodular function minimization (SFM) which seeks to solve
min_{S⊆V} F(S), (1)
poses substantial algorithmic difficulties. This is partly due to the fact that one is commonly interested in an exact solution (or an arbitrarily close approximation thereof), and "polynomial-time" is not necessarily equivalent to "practically fast". Submodular minimization algorithms may be obtained from two main perspectives: combinatorial and continuous. Combinatorial algorithms for SFM typically use close connections to matroid and maximum flow methods; the currently theoretically fastest combinatorial algorithm for SFM scales as O(n^6 + n^5 τ), where τ is the time to evaluate the function oracle [37] (for an overview of other algorithms, see e.g., [33]). These combinatorial algorithms are typically nontrivial to implement. Continuous methods offer an alternative by instead minimizing a convex extension. This idea exploits the fundamental connection between a submodular function F and its Lovász extension f [32], which is continuous and convex. The SFM problem (1) is then equivalent to
min_{x∈[0,1]^n} f(x). (2)
The Lovász extension f is nonsmooth, so we might have to resort to subgradient methods. While a fundamental result of Edmonds [15] demonstrates that a subgradient of f can be computed in O(n log n) time, subgradient methods can be sensitive to choices of the step size, and can be slow.
They theoretically converge at a rate of O(1/√t) (after t iterations). The "smoothing technique" of [36] does not in general apply here because computing a smoothed gradient is equivalent to solving the submodular minimization problem. We discuss this issue further in Section 2. An alternative to minimizing the Lovász extension directly on [0, 1]^n is to consider a slightly modified convex problem. Specifically, the exact solution of the discrete problem min_{S⊆V} F(S) and of its nonsmooth convex relaxation min_{x∈[0,1]^n} f(x) may be found as a level set S0 = {k | x*_k ⩾ 0} of the unique point x* that minimizes the strongly convex function [1, 10]:
f(x) + (1/2)∥x∥^2. (3)
We will refer to the minimization of (3) as the proximal problem due to its close similarity to proximity operators used in convex optimization [12]. When F is a cut function, (3) becomes a total variation problem (see, e.g., [9] and references therein) that also occurs in other regularization problems [1]. Two noteworthy points about (3) are: (i) addition of the strongly convex component (1/2)∥x∥^2; (ii) the ensuing removal of the box-constraints x ∈ [0, 1]^n. These changes allow us to consider a convex dual which is amenable to smooth optimization techniques. Typical approaches to generic SFM include Frank-Wolfe methods [17] that have cheap iterations and O(1/t) convergence, but can be quite slow in practice (Section 5); or the minimum-norm-point/Fujishige-Wolfe algorithm [20] that has expensive iterations but finite convergence. Other recent methods are approximate [24]. In contrast to several iterative methods based on convex relaxations, we seek to obtain exact discrete solutions. To the best of our knowledge, all generic algorithms that use only submodularity are several orders of magnitude slower than specialized algorithms when they exist (e.g., for graph cuts). However, the submodular function is not always generic and given via a black-box, but has known structure.
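For intuition on how the proximal problem (3) yields exact discrete solutions, consider the modular special case F(S) = Σ_{i∈S} c_i: there f(x) = c⊤x, the proximal solution is x* = −c in closed form, and thresholding x* at zero recovers an exact minimizer of F. A toy sketch (our own illustration, not the algorithm of this paper):

```python
from itertools import combinations

# Modular case of (3): F(S) = sum_{i in S} c_i, so f(x) = <c, x> and
# argmin_x <c, x> + (1/2)||x||^2 is x* = -c.  Thresholding at zero gives
# S* = {i : x*_i >= 0} = {i : c_i <= 0}, which minimizes F exactly;
# verified here by brute force over all 2^n subsets.
c = [2.0, -1.0, 3.0, -0.5, 0.0]
x_star = [-ci for ci in c]
S_star = {i for i, xi in enumerate(x_star) if xi >= 0}

n = len(c)
best = min(sum(c[i] for i in comb)
           for r in range(n + 1) for comb in combinations(range(n), r))
print(sum(c[i] for i in S_star) == best)  # True
```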
Following [28, 29, 38, 41], we make the assumption that F(S) = Σ_{i=1}^{r} Fi(S) is a sum of sufficiently "simple" functions (see Sec. 3). This structure allows the use of (parallelizable) dual decomposition techniques for the problem in Eq. (2), with [11, 38] or without [29] Nesterov's smoothing technique, or with direct smoothing [41] techniques. But existing approaches typically have two drawbacks: (1) they use smoothing or step-size parameters whose selection may be critical and quite tedious; and (2) they still exhibit slow convergence (see Section 5). These drawbacks arise from working with formulation (2). Our main insight is that, despite seeming counter-intuitive, the proximal problem (3) offers a much more user-friendly tool for solving (1) than its natural convex counterpart (2), both in implementation and running time. We approach problem (3) via its dual. This allows decomposition techniques which combine well with orthogonal projection and reflection methods that (a) exhibit faster convergence, (b) are easily parallelizable, (c) require no extra hyperparameters, and (d) are extremely easy to implement. The main three algorithms that we consider are: (i) dual block-coordinate descent (equivalently, primal-dual proximal-Dykstra), which was already shown to be extremely efficient for total variation problems [2] that are special cases of Problem (3); (ii) Douglas-Rachford splitting using the careful variant of [4], which for our formulation (Section 4.2) requires no hyper-parameters; and (iii) accelerated projected gradient [5]. We will see that these alternative algorithms can offer speedups beyond known efficiencies. Our observations have two implications: first, from the viewpoint of solving Problem (3), they offer speedups for often-occurring denoising and reconstruction problems that employ total variation. Second, our experiments suggest that projection and reflection methods can work very well for solving the combinatorial problem (1).
In summary, we make the following contributions: (1) In Section 3, we cast the problem of minimizing decomposable submodular functions as an orthogonal projection problem and show how existing optimization techniques may be brought to bear on this problem to obtain fast, easy-to-code and easily parallelizable algorithms. In addition, we show examples of classes of functions amenable to our approach. In particular, for simple functions, i.e., those for which minimizing F(S) − a(S) is easy for all vectors1 a ∈ R^n, the problem in Eq. (3) may be solved in O(log(1/ε)) calls to such minimization routines, to reach a precision ε (Sections 2 and 3). (2) In Section 5, we demonstrate the empirical gains of using accelerated proximal methods, Douglas-Rachford and block coordinate descent methods over existing approaches: fewer hyperparameters and faster convergence. 2 Review of relevant results from submodular analysis The relevant concepts we review here are the Lovász extension, base polytopes of submodular functions, and relationships between proximal and discrete problems. For more details, see [1, 19]. Lovász extension and convexity. The power set 2^V may be naturally identified with the vertices of the hypercube, i.e., {0, 1}^n. The Lovász extension f of any set function is defined by linear interpolation, so that for any S ⊂ V, F(S) = f(1_S). It may be computed in closed form once the components of x are sorted: if x_σ(1) ⩾ · · · ⩾ x_σ(n), then f(x) = Σ^n_{k=1} x_σ(k) [F({σ(1), . . . , σ(k)}) − F({σ(1), . . . , σ(k−1)})] [32]. For the graph cut function, f is the total variation. In this paper, we are going to use two important results: (a) if the set function F is submodular, then its Lovász extension f is convex, and (b) minimizing the set function F is equivalent to minimizing f(x) with respect to x ∈ [0, 1]^n. Given x ∈ [0, 1]^n, all of its level sets may be considered and the function may be evaluated (at most n times) to obtain a set S.
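The sort-and-telescope formula for the Lovász extension is short enough to sketch in code. The toy cut function and the helper names (`cut_F`, `lovasz`) are our own illustration, not an implementation from the paper:

```python
# Lovász extension via the closed-form sorting rule: sort x decreasingly,
# then telescope the marginal gains of F along the resulting chain of sets.
# Toy F: the cut of a single edge {0, 1} with weight 2 (our own choice).

def cut_F(S):
    # Counts the edge iff exactly one endpoint lies inside S.
    return 2 * ((0 in S) != (1 in S))

def lovasz(F, x):
    order = sorted(range(len(x)), key=lambda i: -x[i])
    f, prev, S = 0.0, F(set()), set()
    for i in order:
        S.add(i)
        cur = F(S)
        f += x[i] * (cur - prev)  # x_sigma(k) * [F(S_k) - F(S_{k-1})]
        prev = cur
    return f

print(lovasz(cut_F, [1.0, 0.0]))    # 2.0, equals F({0}) on a hypercube vertex
print(lovasz(cut_F, [0.75, 0.25]))  # 1.0, the total variation 2*|0.75 - 0.25|
```

On hypercube vertices the extension agrees with F, and for the edge cut it reproduces the total variation mentioned in the text.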
Moreover, for a submodular function, the Lovász extension happens to be the support function of the base polytope B(F) defined as B(F) = {y ∈ R^n | ∀S ⊂ V, y(S) ⩽ F(S) and y(V) = F(V)}, that is, f(x) = max_{y∈B(F)} y⊤x [15]. A maximizer of y⊤x (and hence the value of f(x)) may be computed by the “greedy algorithm”, which first sorts the components of x in decreasing order x_σ(1) ⩾ · · · ⩾ x_σ(n), and then computes y_σ(k) = F({σ(1), . . . , σ(k)}) − F({σ(1), . . . , σ(k−1)}). In other words, a linear function can be maximized over B(F) in time O(n log n + nτ) (note that the term nτ may be improved in many special cases). This is crucial for exploiting convex duality. Dual of discrete problem. We may derive a dual problem to the discrete problem in Eq. (1) and the convex nonsmooth problem in Eq. (2) as follows: min_{S⊆V} F(S) = min_{x∈[0,1]^n} f(x) = min_{x∈[0,1]^n} max_{y∈B(F)} y⊤x = max_{y∈B(F)} min_{x∈[0,1]^n} y⊤x = max_{y∈B(F)} (y)−(V), (4) where (y)− = min{y, 0} is applied elementwise. This allows us to obtain dual certificates of optimality from any y ∈ B(F) and x ∈ [0, 1]^n. Proximal problem. The optimization problem (3), i.e., min_{x∈R^n} f(x) + (1/2)∥x∥², has intricate relations to the SFM problem [10]. Given the unique optimal solution x∗ of (3), the maximal (resp. minimal) optimizer of the SFM problem is the set S∗ of nonnegative (resp. positive) elements of x∗. More precisely, solving (3) is equivalent to minimizing F(S) + µ|S| for all µ ∈ R. A solution S∗_µ ⊆ V is obtained from a solution x∗ as S∗_µ = {i | x∗_i ⩾ µ}. Conversely, x∗ may be obtained from all S∗_µ as x∗_k = sup{µ ∈ R | k ∈ S∗_µ} for all k ∈ V. Moreover, if x is an ε-optimal solution of Eq. (3), then we may construct √(εn)-optimal solutions for all S∗_µ [1; Prop. 10.5]. In practice, the duality gap of the discrete problem is usually much lower than that of the proximal version of the same problem, as we will see in Section 5. Note that the problem in Eq. (3) provides much more information than Eq.
(2), as all µ-parameterized discrete problems are solved. The dual problem of Problem (3) reads as follows: min_{x∈R^n} f(x) + (1/2)∥x∥²₂ = min_{x∈R^n} max_{y∈B(F)} y⊤x + (1/2)∥x∥²₂ = max_{y∈B(F)} min_{x∈R^n} y⊤x + (1/2)∥x∥²₂ = max_{y∈B(F)} −(1/2)∥y∥²₂, where primal and dual variables are linked as x = −y. Observe that this dual problem is equivalent to finding the orthogonal projection of 0 onto B(F). (Footnote 1: Every vector a ∈ R^n may be viewed as a modular (linear) set function: a(S) ≜ Σ_{i∈S} a(i).) Divide-and-conquer strategies for the proximal problems. Given a solution x∗ of the proximal problem, we have seen how to get S∗_µ for any µ by simply thresholding x∗ at µ. Conversely, one can recover x∗ exactly from at most n well-chosen values of µ. A known divide-and-conquer strategy [19, 21] hinges upon the fact that for any µ, one can easily see which components of x∗ are greater or smaller than µ by computing S∗_µ. The resulting algorithm makes O(n) calls to the submodular function oracle. In [25], we extend an alternative approach by Tarjan et al. [42] from cuts to general submodular functions and obtain a solution to (3) up to precision ε in O(min{n, log(1/ε)}) iterations. This result is particularly useful if our function F is a sum of functions for each of which the SFM problem is easy by itself. Beyond squared ℓ2-norms, our algorithm equally applies to computing all minimizers of f(x) + Σ^n_{j=1} hj(xj) for arbitrary smooth strictly convex functions hj, j = 1, . . . , n.
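For n = 2 the orthogonal projection of 0 onto B(F) has a closed form (the base polytope is a line segment), which makes the proximal route to SFM easy to sketch. The toy submodular values below are our own choice, not data from the paper:

```python
# Project 0 onto B(F), negate to get x* (the text's link x = -y), threshold
# at 0 to get the SFM minimizer, and cross-check against brute force.
F = {(): 0.0, (0,): 2.0, (1,): -1.0, (0, 1): -0.5}  # submodular: 2 + (-1) >= 0 + (-0.5)

# For n = 2, B(F) = { (y1, y2) : y1 + y2 = F(V), y1 <= F({0}), y2 <= F({1}) }.
c = F[(0, 1)]
s = min(max(c / 2.0, c - F[(1,)]), F[(0,)])  # clip the unconstrained projection c/2
y = (s, c - s)                               # projection of 0 onto B(F)
x = (-y[0], -y[1])                           # minimizer of f(x) + (1/2)||x||^2

S_star = tuple(k for k in (0, 1) if x[k] >= 0)  # level set at 0
print(S_star, F[S_star])  # (1,) -1.0
print(min(F, key=F.get))  # brute-force minimizer: (1,)
```

The thresholded level set of x∗ recovers exactly the discrete minimizer, as the proximal relation above promises.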
For such functions, Problems (1) and (3) become min_{S⊆V} Σ^r_{j=1} Fj(S) = min_{x∈[0,1]^n} Σ^r_{j=1} fj(x) and min_{x∈R^n} Σ^r_{j=1} fj(x) + (1/2)∥x∥²₂. (5) The key to the algorithms presented here is to be able to minimize (1/2)∥x − z∥²₂ + fj(x), or equivalently, to orthogonally project z onto B(Fj): min (1/2)∥y − z∥²₂ subject to y ∈ B(Fj). We next sketch some examples of functions F and their decompositions into simple functions Fj. As shown at the end of Section 2, projecting onto B(Fj) is easy as soon as the corresponding submodular minimization problems are easy. Here we outline some cases for which specialized fast algorithms are known. Graph cuts. A widely used class of submodular functions is that of graph cuts. Graphs may be decomposed into substructures such as trees, simple paths or single edges. Message passing algorithms apply to trees, while the proximal problem for paths is very efficiently solved by [2]. For single edges, it is solvable in closed form. Tree decompositions are common in graphical models, whereas path decompositions are frequently used for TV problems [2]. Concave functions. Another important class of submodular functions is that of concave functions of cardinality, i.e., Fj(S) = h(|S|) for a concave function h. Problem (3) for such functions may be solved in O(n log n) time (see [18] and our appendix in [25]). Functions of this class have been used in [24, 27, 41]. Such functions also include covering functions [41]. Hierarchical functions. Here, the ground set corresponds to the leaves of a rooted, undirected tree. Each node has a weight, and the cost of a set of nodes S ⊆ V is the sum of the weights of all nodes in the smallest subtree (including the root) that spans S. For this class of functions too, the proximal problem can be solved in O(n log n) time [22, 23, 26]. Small support. Any general, potentially slower algorithm such as the minimum-norm-point algorithm can be applied if the support of each Fj is only a small subset of the ground set.
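To make the single-edge case concrete: for the cut function of one edge of weight w, the base polytope is the segment {y : y1 + y2 = 0, |y1| ≤ w}, so projecting onto it is a one-liner. The helper name `project_edge` is ours, for illustration only:

```python
# Closed-form projection onto the base polytope of a single-edge cut
# function of weight w: project onto the line y1 + y2 = 0, then clip.

def project_edge(z, w):
    t = (z[0] - z[1]) / 2.0   # unconstrained projection onto y1 + y2 = 0
    t = max(-w, min(w, t))    # clip to the segment |y1| <= w
    return (t, -t)

print(project_edge((3.0, 1.0), 2.0))   # (1.0, -1.0), interior of the segment
print(project_edge((5.0, -5.0), 2.0))  # (2.0, -2.0), clipped at an endpoint
```

Decomposing a grid graph into its individual edges (or paths of edges) reduces each projection step to many independent calls of this form, which is what makes the decomposition parallelizable.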
3.1 Dual decomposition of the nonsmooth problem We first review existing dual decomposition techniques for the nonsmooth problem (1). We always assume that F = Σ^r_{j=1} Fj, and define Hr := Π^r_{j=1} R^n ≃ R^{n×r}. We follow [29] to derive a dual formulation (see appendix in [25]): Lemma 1. The dual of Problem (1) may be written in terms of variables λ1, . . . , λr ∈ R^n as max Σ^r_{j=1} gj(λj) s.t. λ ∈ {(λ1, . . . , λr) ∈ Hr | Σ^r_{j=1} λj = 0}, (6) where gj(λj) = min_{S⊂V} Fj(S) − λj(S) is a nonsmooth concave function. The dual is the maximization of a nonsmooth concave function over a convex set, onto which it is easy to project: the projection of a vector y has j-th block equal to yj − (1/r) Σ^r_{k=1} yk. Moreover, in our setup, the functions gj and their subgradients may be computed efficiently through SFM. We consider several existing alternatives for the minimization of f(x) over x ∈ [0, 1]^n, most of which use Lemma 1. Computing subgradients for any fj means calling the greedy algorithm, which runs in time O(n log n). All of the following algorithms require the tuning of an appropriate step size. Primal subgradient descent (primal-sgd): Agnostic to any decomposition properties, we may apply a standard simple subgradient method to f. A subgradient of f may be obtained from the subgradients of the components fj. This algorithm converges at rate O(1/√t). Dual subgradient descent (dual-sgd) [29]: Applying a subgradient method to the nonsmooth dual in Lemma 1 leads to a convergence rate of O(1/√t). Computing a subgradient requires minimizing the submodular functions Fj individually. In simulations, following [29], we consider a step-size rule similar to Polyak’s rule (dual-sgd-P) [6], as well as a decaying step size (dual-sgd-F), and use discrete optimization for all Fj. Primal smoothing (primal-smooth) [41]: The nonsmooth primal may be smoothed in several ways by smoothing the fj individually; one example is f̃^ε_j(xj) = max_{yj∈B(Fj)} yj⊤xj − (ε/2)∥yj∥².
This leads to a function that is (1/ε)-smooth. Computing f̃^ε_j means solving the proximal problem for Fj. The convergence rate is O(1/t), but, apart from the step size, which may be set relatively easily, the smoothing constant ε needs to be defined. Dual smoothing (dual-smooth): Instead of the primal, the dual (6) may be smoothed, e.g., by entropy [8, 38] applied to each gj as g̃^ε_j(λj) = min_{x∈[0,1]^n} fj(x) − λj⊤x + εh(x), where h(x) is a negative entropy. Again, the convergence rate is O(1/t), but there are two free parameters (in particular the smoothing constant ε, which is hard to tune). This method too requires solving proximal problems for all Fj in each iteration. Dual smoothing with entropy also admits coordinate descent methods [34] that exploit the decomposition, but we do not compare to those here. 3.2 Dual decomposition methods for proximal problems We may also consider Eq. (3) and first derive a dual problem using the same technique as in Section 3.1. Lemma 2 (proved in the appendix in [25]) formally presents our dual formulation as a best approximation problem. The primal variable can be recovered as x = −Σ_j yj. Lemma 2. The dual of Eq. (3) may be written as the best approximation problem min_{λ,y} ∥y − λ∥²₂ s.t. λ ∈ {(λ1, . . . , λr) ∈ Hr | Σ^r_{j=1} λj = 0}, y ∈ Π^r_{j=1} B(Fj). (7) We can actually eliminate the λj and obtain the simpler-looking dual problem max_y −(1/2)∥Σ^r_{j=1} yj∥²₂ s.t. yj ∈ B(Fj), j ∈ {1, . . . , r}. (8) Such a dual was also used in [40]. In Section 5, we will see the effect of solving one of these duals or the other. For the simpler dual (8), the case r = 2 is of special interest; it reads max_{y1∈B(F1), y2∈B(F2)} −(1/2)∥y1 + y2∥²₂ ⇐⇒ min_{y1∈B(F1), −y2∈−B(F2)} ∥y1 − (−y2)∥₂. (9) We write problem (9) in this suggestive form to highlight its key geometric structure: like (7), it is a best approximation problem, i.e., the problem of finding the closest point between the polytopes B(F1) and −B(F2). Notice, however, that (7) is very different from (9): the former operates in a product space while the latter does not, a difference that can have an impact in practice (see Section 5). We are now ready to present algorithms that exploit our dual formulations. 4 Algorithms We describe a few competing methods for solving our smooth dual formulations. We describe the details for the special 2-block case (9); the same arguments apply to the block dual from Lemma 2. 4.1 Block coordinate descent or proximal-Dykstra Perhaps the simplest approach to solving (9) (viewed as a minimization problem) is to use a block coordinate descent (BCD) procedure, which in this case performs the alternating projections: y_1^{k+1} ← argmin_{y1∈B(F1)} ∥y1 − (−y_2^k)∥²₂; y_2^{k+1} ← argmin_{y2∈B(F2)} ∥y2 − (−y_1^{k+1})∥²₂. (10) The iterations for solving (8) are analogous. This BCD method (applied to (9)) is equivalent to applying the so-called proximal-Dykstra method [12] to the primal problem. This may be seen by comparing the iterates. Notice that the BCD iteration (10) is nothing but alternating projections onto the convex polyhedra B(F1) and B(F2). There exists a large body of literature studying the method of alternating projections; we refer the interested reader to the monograph [13] for further details.
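The alternating-projections iteration (10) can be sketched with two lines in R^2 standing in for the base polytopes; in a real run, `project_line` would be replaced by projections onto B(F1) and −B(F2). The setup below is our own toy illustration:

```python
# BCD / alternating projections: repeatedly project onto one convex set,
# then the other; for intersecting sets the iterates converge to a common point.

def project_line(p, a, b):
    # Orthogonal projection of p onto the line { y : a[0]*y[0] + a[1]*y[1] = b }.
    t = (a[0] * p[0] + a[1] * p[1] - b) / (a[0] ** 2 + a[1] ** 2)
    return (p[0] - t * a[0], p[1] - t * a[1])

A = ((1.0, 1.0), 2.0)  # line y1 + y2 = 2, stand-in for B(F1)
B = ((0.0, 1.0), 0.0)  # line y2 = 0, stand-in for -B(F2)
y = (5.0, 5.0)
for _ in range(50):
    y = project_line(y, *A)
    y = project_line(y, *B)
print(y)  # approaches the intersection point (2.0, 0.0)
```

Here the angle between the two sets is comfortable; as the text warns next, a small angle between the sets can make this scheme arbitrarily slow.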
However, despite its attractive simplicity, it is known that BCD (in its alternating projections form) can converge arbitrarily slowly [4], depending on the relative orientation of the convex sets onto which one projects. Thus, we turn to a potentially more effective method. 4.2 Douglas-Rachford splitting The Douglas-Rachford (DR) splitting method [14] includes algorithms like ADMM as a special case [12]. It avoids the slowdowns alluded to above by replacing alternating projections with alternating “reflections”. Formally, DR applies to convex problems of the form [3, 12] min_x φ1(x) + φ2(x), (11) subject to the qualification ri(dom φ1) ∩ ri(dom φ2) ̸= ∅. To solve (11), DR starts with some z^0, and performs the three-step iteration (for k ≥ 0): 1. x^k = prox_{φ2}(z^k); 2. v^k = prox_{φ1}(2x^k − z^k); 3. z^{k+1} = z^k + γk(v^k − z^k), (12) where γk ∈ [0, 2] is a sequence of scalars that satisfy Σ_k γk(2 − γk) = ∞. The sequence {x^k} produced by iteration (12) can be shown to converge to a solution of (11) [3; Thm. 25.6]. Introducing the reflection operator Rφ := 2 prox_φ − I, and setting γk = 1, the DR iteration (12) may be written in a more symmetric form as x^k = prox_{φ2}(z^k), z^{k+1} = (1/2)[R_{φ1}R_{φ2} + I]z^k, k ≥ 0. (13) Applying DR to the duals (7) or (9) requires first putting them in the form (11), either by introducing extra variables or by going back to the primal, which is unnecessary. This is where the special structure of our dual problem proves crucial, a recognition that is subtle yet remarkably important. Instead of applying DR to (9), consider the closely related problem min_y δ1(y) + δ−2(y), (14) where δ1, δ−2 are indicator functions for B(F1) and −B(F2), respectively. Applying DR directly to (14) does not work because usually ri(dom δ1) ∩ ri(dom δ−2) = ∅. Indeed, applying DR to (14) generates iterates that diverge to infinity [4; Thm. 3.13(ii)]. Fortunately, even though the DR iterates for (14) may diverge, Bauschke et al.
[4] show how to extract convergent sequences from these iterates, which actually solve the corresponding best approximation problem; for us this is nothing but the dual (9) that we wanted to solve in the first place. Theorem 3, which is a simplified version of [4; Thm. 3.13], formalizes the above discussion. Theorem 3. [4] Let A and B be nonempty polyhedral convex sets. Let ΠA (ΠB) denote orthogonal projection onto A (B), and let RA := 2ΠA − I (similarly RB) be the corresponding reflection operator. Let {z^k} be the sequence generated by the DR method (13) applied to (14). If A ∩ B ̸= ∅, then {z^k}_{k≥0} converges weakly to a fixed point of the operator T := (1/2)[RARB + I]; otherwise ∥z^k∥2 → ∞. The sequences {x^k} and {ΠAΠB z^k} are bounded; the weak cluster points of either of the two sequences {(ΠARB z^k, x^k)}_{k≥0} and {(ΠA x^k, x^k)}_{k≥0} (15) are solutions of the best approximation problem min_{a,b} ∥a − b∥ such that a ∈ A and b ∈ B. The key consequence of Theorem 3 is that we can apply DR with impunity to (14), and extract from its iterates the optimal solution to problem (9) (from which recovering the primal is trivial). The most important feature of solving the dual (9) in this way is that absolutely no step-size tuning is required, making the method very practical and user-friendly. Figure 1: Segmentation results for the slowest and fastest projection method, with smooth (νs) and discrete (νd) duality gaps: pBCD iter 1 (νs = 3.4·10^6, νd = 4.6·10^3), pBCD iter 7 (νs = 4.4·10^5, νd = 5.5·10^2), DR iter 1 (νs = 4.17·10^5, νd = 6.6·10^3), DR iter 4 (νs = 8.05·10^4, νd = 5.9·10^−1). Note how the background noise disappears only for small duality gaps. 5 Experiments We empirically compare the proposed projection methods2 to the (smoothed) subgradient methods discussed in Section 3.1. For solving the proximal problem, we apply block coordinate descent (BCD) and Douglas-Rachford (DR) to Problem (8) if applicable, and also to (7) (BCD-para, DR-para).
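The symmetric DR iteration (13) together with Theorem 3's prescription (iterate z ← (1/2)[R_A R_B + I]z, then read off projected iterates) can be sketched on two toy lines in R^2; this is an illustration under our own setup, not the paper's implementation:

```python
# DR with reflections R = 2*Proj - I; when A and B intersect, the projected
# iterate Pi_B(z^k) converges to a point of A ∩ B.

def project_line(p, a, b):
    t = (a[0] * p[0] + a[1] * p[1] - b) / (a[0] ** 2 + a[1] ** 2)
    return (p[0] - t * a[0], p[1] - t * a[1])

def reflect(p, a, b):
    q = project_line(p, a, b)
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

A = ((1.0, 1.0), 2.0)  # line y1 + y2 = 2
B = ((0.0, 1.0), 0.0)  # line y2 = 0; here A ∩ B = {(2, 0)}
z = (5.0, 5.0)
for _ in range(200):
    r = reflect(reflect(z, *B), *A)             # R_A R_B z
    z = ((r[0] + z[0]) / 2, (r[1] + z[1]) / 2)  # z <- (R_A R_B + I) z / 2
x = project_line(z, *B)                         # read off the projected iterate
print(x)  # close to (2.0, 0.0)
```

Note that no step size appears anywhere in the loop, which is exactly the practical appeal emphasized above.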
In addition, we use acceleration to solve (8) or (9) [5]. The main iteration cost of all methods except for the primal subgradient method is the orthogonal projection onto polytopes B(Fj). The primal subgradient method uses the greedy algorithm in each iteration, which runs in O(n log n). However, as we will see, its convergence is so slow as to counteract any benefit that may arise from not using projections. We do not include Frank-Wolfe methods here, since FW is equivalent to a subgradient descent on the primal and converges correspondingly slowly. As benchmark problems, we use (i) graph cut problems for segmentation, or MAP inference in a 4-neighborhood grid-structured MRF, and (ii) concave functions similar to [41], but together with graph cut functions. The functions in (i) decompose as sums over vertical and horizontal paths. All horizontal paths are independent and can be solved together in parallel, and similarly all vertical paths. The functions in (ii) are constructed by extracting regions Rj via superpixels and, for each Rj, defining the function Fj(S) = |S||Rj \ S|. We use 200 and 500 regions. The problems have size 640 × 427. Hence, for (i) we have r = 640 + 427 (but solve it as r = 2) and for (ii) r = 640 + 427 + 500 (solved as r = 3). More details and experimental results may be found in [25]. Two functions (r = 2). Figure 2 shows the duality gaps for the discrete and smooth (where applicable) problems for two instances of segmentation problems. The algorithms working with the proximal problems are much faster than the ones directly solving the nonsmooth problem. In particular, DR converges extremely fast, even faster than BCD, which is known to be a state-of-the-art algorithm for this problem [2]. This, in itself, is a new insight for solving TV. If we aim for parallel methods, then again DR outperforms BCD. Figure 3 (right) shows the speedup gained from parallel processing. Using 8 cores, we obtain a 5-fold speed-up.
We also see that the discrete gap shrinks faster than the smooth gap, i.e., finding the optimal discrete solution does not require solving the smooth problem to extremely high accuracy. Figure 1 illustrates example results for different gaps. More functions (r > 2). Figure 3 shows example results for four problems of sums of concave and cut functions. Here, we can only run DR-para. Overall, BCD, DR-para and the accelerated gradient method perform very well. In summary, our experiments suggest that projection methods can be extremely useful for solving the combinatorial submodular minimization problem. Of the tested methods, DR, cyclic BCD and accelerated gradient perform very well. For parallelism, applying DR on (9) converges much faster than BCD on the same problem. Moreover, in terms of running times, running the DR method with a mixed Matlab/C implementation until convergence on a single core is only 3-8 times slower than the optimized efficient C code of [7], and only 2-4 times slower on 2 cores. These numbers should be read while considering that, unlike [7], the projection methods naturally lead to parallel implementations, and are able to integrate a large variety of functions. 6 Conclusion We have presented a novel approach to submodular function minimization based on the equivalence with a best approximation problem.
The use of reflection methods avoids any hyperparameters and reduces the number of iterations significantly, suggesting the suitability of reflection methods for combinatorial problems. (Footnote 2: Code and data corresponding to this paper are available at https://sites.google.com/site/mloptstat/drsubmod.) Figure 2: Comparison of convergence behaviors. Left: discrete duality gaps for various optimization schemes for the nonsmooth problem, from 1 to 1000 iterations. Middle: discrete duality gaps for various optimization schemes for the smooth problem, from 1 to 100 iterations. Right: corresponding continuous duality gaps. From top to bottom: two different images. Figure 3: Left two plots: convergence behavior for graph cut plus concave functions. Right: speedup due to parallel processing.
Given the natural parallelization abilities of our approach, it would be interesting to perform detailed empirical comparisons with existing parallel implementations of graph cuts (e.g., [39]). Moreover, a generalization beyond submodular functions of the relationships between combinatorial optimization problems and convex problems would enable the application of our framework to other common situations such as multiple labels (see, e.g., [29]). Acknowledgments. This research was in part funded by the Office of Naval Research under contract/grant number N00014-11-1-0688, by NSF CISE Expeditions award CCF-1139158, by DARPA XData Award FA875012-2-0331, and the European Research Council (SIERRA project), as well as gifts from Amazon Web Services, Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Intel, Microsoft, NetApp, Oracle, Samsung, Splunk, VMware and Yahoo!. References [1] F. Bach. Learning with submodular functions: A convex optimization perspective. arXiv preprint arXiv:1111.6453v2, 2013. [2] A. Barbero and S. Sra. Fast Newton-type methods for total variation regularization. In ICML, 2011. [3] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011. [4] H. H. Bauschke, P. L. Combettes, and D. R. Luke. Finding best approximation pairs relative to two closed convex sets in Hilbert spaces. J. Approx. Theory, 127(2):178–192, 2004. [5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009. [6] D. P. Bertsekas. Nonlinear programming. Athena Scientific, 1999. [7] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE TPAMI, 23(11):1222–1239, 2001. [8] B. Savchynskyy, S. Schmidt, J. H. Kappes, and C. Schnörr. Efficient MRF energy minimization via adaptive diminishing smoothing. In UAI, 2012. [9] A. Chambolle.
An algorithm for total variation minimization and applications. J. Math. Imaging and Vision, 20(1):89–97, 2004. [10] A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. Int. Journal of Comp. Vision, 84(3):288–307, 2009. [11] F. Chudak and K. Nagano. Efficient solutions to relaxations of combinatorial problems with submodular penalties via the Lovász extension and non-smooth convex optimization. In SODA, 2007. [12] P. L. Combettes and J.-C. Pesquet. Proximal Splitting Methods in Signal Processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185–212. Springer, 2011. [13] F. R. Deutsch. Best Approximation in Inner Product Spaces. Springer Verlag, first edition, 2001. [14] J. Douglas and H. H. Rachford. On the numerical solution of the heat conduction problem in 2 and 3 space variables. Tran. Amer. Math. Soc., 82:421–439, 1956. [15] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In Combinatorial Optimization - Eureka, You Shrink!, pages 11–26. Springer, 2003. [16] U. Feige, V. S. Mirrokni, and J. Vondrak. Maximizing non-monotone submodular functions. SIAM J. Comp., 40(4):1133–1153, 2011. [17] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:95–110, 1956. [18] S. Fujishige. Lexicographically optimal base of a polymatroid with respect to a weight vector. Mathematics of Operations Research, pages 186–196, 1980. [19] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005. [20] S. Fujishige and S. Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7:3–17, 2011. [21] H. Groenevelt. Two algorithms for maximizing a separable concave function over a polymatroid feasible region. European Journal of Operational Research, 54(2):227–236, 1991. [22] D. S. Hochbaum and S.-P. Hong.
About strongly polynomial time algorithms for quadratic optimization over submodular constraints. Math. Prog., pages 269–309, 1995. [23] S. Iwata and N. Zuiki. A network flow approach to cost allocation for rooted trees. Networks, 44:297–301, 2004. [24] S. Jegelka, H. Lin, and J. Bilmes. On fast approximate submodular minimization. In NIPS, 2011. [25] S. Jegelka, F. Bach, and S. Sra. Reflection methods for user-friendly submodular optimization (extended version). arXiv, 2013. [26] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. Journal of Machine Learning Research, pages 2297–2334, 2011. [27] P. Kohli, L. Ladický, and P. Torr. Robust higher order potentials for enforcing label consistency. Int. Journal of Comp. Vision, 82, 2009. [28] V. Kolmogorov. Minimizing a sum of submodular functions. Disc. Appl. Math., 160(15), 2012. [29] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. IEEE TPAMI, 33(3):531–552, 2011. [30] A. Krause and C. Guestrin. Submodularity and its applications in optimized information gathering. ACM Transactions on Intelligent Systems and Technology, 2(4), 2011. [31] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In NAACL/HLT, 2011. [32] L. Lovász. Submodular functions and convexity. Mathematical programming: the state of the art, Bonn, pages 235–257, 1982. [33] S. T. McCormick. Submodular function minimization. Discrete Optimization, 12:321–391, 2005. [34] O. Meshi, T. Jaakkola, and A. Globerson. Convergence rate analysis of MAP coordinate minimization algorithms. In NIPS, 2012. [35] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions–I. Math. Prog., 14(1):265–294, 1978. [36] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Prog., 103(1):127–152, 2005. [37] J. B. Orlin.
A faster strongly polynomial time algorithm for submodular function minimization. Math. Prog., 118(2):237–251, 2009. [38] B. Savchynskyy, S. Schmidt, J. Kappes, and C. Schnörr. A study of Nesterov’s scheme for Lagrangian decomposition and MAP labeling. In CVPR, 2011. [39] A. Shekhovtsov and V. Hlaváč. A distributed mincut/maxflow algorithm combining path augmentation and push-relabel. In Energy Minimization Methods in Computer Vision and Pattern Recognition, 2011. [40] P. Stobbe. Convex Analysis for Minimizing and Learning Submodular Set Functions. PhD thesis, California Institute of Technology, 2013. [41] P. Stobbe and A. Krause. Efficient minimization of decomposable submodular functions. In NIPS, 2010. [42] R. Tarjan, J. Ward, B. Zhang, Y. Zhou, and J. Mao. Balancing applied to maximum network flow problems. In European Symp. on Algorithms (ESA), pages 612–623, 2006.
Compressive Feature Learning Hristo S. Paskov Department of Computer Science Stanford University hpaskov@cs.stanford.edu Robert West Department of Computer Science Stanford University west@cs.stanford.edu John C. Mitchell Department of Computer Science Stanford University mitchell@cs.stanford.edu Trevor J. Hastie Department of Statistics Stanford University hastie@stanford.edu Abstract This paper addresses the problem of unsupervised feature learning for text data. Our method is grounded in the principle of minimum description length and uses a dictionary-based compression scheme to extract a succinct feature set. Specifically, our method finds a set of word k-grams that minimizes the cost of reconstructing the text losslessly. We formulate document compression as a binary optimization task and show how to solve it approximately via a sequence of reweighted linear programs that are efficient to solve and parallelizable. As our method is unsupervised, features may be extracted once and subsequently used in a variety of tasks. We demonstrate the performance of these features over a range of scenarios including unsupervised exploratory analysis and supervised text categorization. Our compressed feature space is two orders of magnitude smaller than the full k-gram space and matches the text categorization accuracy achieved in the full feature space. This dimensionality reduction not only results in faster training times, but it can also help elucidate structure in unsupervised learning tasks and reduce the amount of training data necessary for supervised learning. 1 Introduction Machine learning algorithms rely critically on the features used to represent data; the feature set provides the primary interface through which an algorithm can reason about the data at hand. A typical pitfall for many learning problems is that there are too many potential features to choose from. 
Intelligent subselection is essential in these scenarios because it can discard noise from irrelevant features, thereby requiring fewer training examples and preventing overfitting. Computationally, a smaller feature set is almost always advantageous as it requires less time and space to train the algorithm and make inferences [10, 9]. Various heuristics have been proposed for feature selection, one class of which works by evaluating each feature separately with respect to its discriminative power. Some examples are document frequency, chi-square value, information gain, and mutual information [26, 9]. More sophisticated methods attempt to achieve feature sparsity by optimizing objective functions containing an L1 regularization penalty [25, 27]. Unsupervised feature selection methods [19, 18, 29, 13] are particularly attractive. First, they do not require labeled examples, which are often expensive to obtain (e.g., when humans have to provide them) or might not be available in advance (e.g., in text classification, the topic to be retrieved might be defined only at some later point). Second, they can be run a single time in an offline preprocessing step, producing a reduced feature space that allows for subsequent rapid experimentation. Finally, a good data representation obtained in an unsupervised way captures inherent structure and can be used in a variety of machine learning tasks such as clustering, classification, or ranking. In this work we present a novel unsupervised method for feature selection for text data based on ideas from data compression and formulated as an optimization problem. As the universe of potential features, we consider the set of all word k-grams.1 The basic intuition is that substrings appearing frequently in a corpus represent a recurring theme in some of the documents, and hence pertain to class representation. However, it is not immediately clear how to implement this intuition. For instance, consider a corpus of NIPS papers.
The bigram ‘supervised learning’ will appear often, but so will the constituent unigrams ‘supervised’ and ‘learning’. So shall we use the bigram, the two separate unigrams, or a combination, as features? Our solution invokes the principle of minimum description length (MDL) [23]: First, we compress the corpus using a dictionary-based lossless compression method. Then, the substrings that are used to reconstruct each document serve as the feature set. We formulate the compression task as a numerical optimization problem. The problem is non-convex, but we develop an efficient approximate algorithm that is linear in the number of words in the corpus and highly parallelizable. In the example, the bigram ‘supervised learning’ would appear often enough to be added to the dictionary; ‘supervised’ and ‘learning’ would also be chosen as features if they appear separately in combinations other than ‘supervised learning’ (because the compression paradigm we choose is lossless). We apply our method to two datasets and compare it to a canonical bag-of-k-grams representation. Our method reduces the feature set size by two orders of magnitude without incurring a loss of performance on several text categorization tasks. Moreover, it expedites training times and requires significantly less labeled training data on some text categorization tasks. 2 Compression and Machine Learning Our work draws on a deep connection between data compression and machine learning, exemplified early on by the celebrated MDL principle [23]. More recently, researchers have experimented with off-the-shelf compression algorithms as machine learning subroutines. Instances are Frank et al.’s [7] compression-based approach to text categorization, as well as compression-based distance measures, where the basic intuition is that, if two texts x and y are very similar, then the compressed version of their concatenation xy should not be much longer than the compressed version of either x or y separately. 
Such approaches have been shown to work well on a variety of tasks such as language clustering [1], authorship attribution [1], time-series clustering [6, 11], anomaly detection [11], and spam filtering [3]. Distance-based approaches are akin to kernel methods, and thus suffer from the problem that constructing the full kernel matrix for large datasets might be infeasible. Furthermore, Frank et al. [7] deplore that “it is hard to see how efficient feature selection could be incorporated” into the compression algorithm. But Sculley and Brodley [24] show that many compression-based distance measures can be interpreted as operating in an implicit high-dimensional feature space, spanned by the dictionary elements found during compression. We build on this observation to address Frank et al.’s above-cited concern about the impossibility of feature selection for compression-based methods. Instead of using an off-the-shelf compression algorithm as a black-box kernel operating in an implicit high-dimensional feature space, we develop an optimization-based compression scheme whose explicit job it is to perform feature selection. It is illuminating to discuss a related approach suggested (as future work) by Sculley and Brodley [24], namely “to store substrings found by Lempel–Ziv schemes as explicit features”. This simplistic approach suffers from a serious flaw that our method overcomes. Imagine we want to extract features from an entire corpus. We would proceed by concatenating all documents in the corpus into a single large document D, which we would compress using a Lempel–Ziv algorithm. The problem is that the extracted substrings are dependent on the order in which we concatenate the documents to form the input D. For the sake of concreteness, consider LZ77 [28], a prominent member of the Lempel–Ziv family (but the argument applies equally to most standard compression algorithms). 
Starting from the current cursor position, LZ77 scans D from left to right, consuming characters until it has found the longest prefix matching a previously seen substring. It then outputs a pointer to that previous instance—we interpret this substring as a feature—and continues with the remaining input string (if no prefix matches, the single next character is output).

1In the remainder of this paper, the term ‘k-grams’ includes sequences of up to (rather than exactly) k words.

Figure 1: Toy example of our optimization problem for text compression. Three different solutions shown for representing the 8-word document D = manamana in terms of dictionary and pointers. Dictionary cost: number of characters in dictionary. Pointer cost: λ × number of pointers. Costs given as dictionary cost + pointer cost. Left: minimum dictionary cost (λ = 0; cost 3 + (0 × 8) = 3). Center: minimum combined cost, balancing dictionary and pointer costs (λ = 1; cost 4 + (1 × 2) = 6). Right: minimum pointer cost (λ = 8; cost 8 + (8 × 1) = 16).

This approach produces different feature sets depending on the order in which documents are concatenated. Even in small instances such as the 3-document collection {D1 = abcd, D2 = ceab, D3 = bce}, the order (D1, D2, D3) yields the feature set {ab, bc}, whereas (D2, D3, D1) results in {ce, ab} (plus, trivially, the set of all single characters). As we will demonstrate in our experiments section, this instability has a real impact on performance and is therefore undesirable. Our approach, like LZ77, seeks common substrings. However, our formulation is not affected by the concatenation order of corpus documents and does not suffer from LZ77’s instability issues.

3 Compressive Feature Learning

The MDL principle implies that a good feature representation for a document D = x1x2...xn of n words minimizes some description length of D. 
Our dictionary-based compression scheme accomplishes this by representing D as a dictionary—a subset of D’s substrings—and a sequence of pointers indicating where copies of each dictionary element should be placed in order to fully reconstruct the document. The compressed representation is chosen so as to minimize the cost of storing each dictionary element in plaintext as well as all necessary pointers. This scheme achieves a shorter description length whenever it can reuse dictionary elements at different locations in D. For a concrete example, see Fig. 1, which shows three ways of representing a document D in terms of a dictionary and pointers. These representations are obtained by using the same pointer storage cost λ for each pointer and varying λ. The two extreme solutions focus on minimizing either the dictionary cost (λ = 0) or the pointer cost (λ = 8) solely, while the middle solution (λ = 1) trades off between minimizing a combination of the two. We are particularly interested in this tradeoff: when all pointers have the same cost, the dictionary and pointer costs pull the solution in opposite directions. Varying λ allows us to ‘interpolate’ between the two extremes of minimum dictionary cost and minimum pointer cost. In other words, λ can be interpreted as tracing out a regularization path that allows a more flexible representation of D. To formalize our compression criterion, let S = {x_i ... x_{i+t−1} | 1 ≤ t ≤ k, 1 ≤ i ≤ n − t + 1} be the set of all unique k-grams in D, and P = {(s, l) | s = x_l ... x_{l+|s|−1}} be the set of all m = |P| (potential) pointers. Without loss of generality, we assume that P is an ordered set, i.e., each i ∈ {1, ..., m} corresponds to a unique p_i ∈ P, and we define J(s) ⊂ {1, ..., m} to be the set of indices of all pointers which share the same string s. Given a binary vector w ∈ {0,1}^m, w reconstructs word x_j if for some w_i = 1 the corresponding pointer p_i = (s, l) satisfies l ≤ j < l + |s|. 
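As a sanity check on the three costs in Fig. 1, they can be reproduced with a few lines of code (a toy sketch: the candidate dictionaries are hard-coded rather than optimized over all substrings of D):

```python
def storage_cost(dictionary, n_pointers, lam):
    # dictionary cost: total characters stored in plaintext;
    # pointer cost: lam per pointer placed in the reconstruction
    return sum(len(s) for s in dictionary) + lam * n_pointers

# Three ways to reconstruct D = "manamana" (8 characters), as in Fig. 1:
candidates = [
    ({"m", "a", "n"}, 8),   # single characters, one pointer per character
    ({"mana"}, 2),          # two copies of "mana"
    ({"manamana"}, 1),      # the whole document stored verbatim
]

for lam in (0, 1, 8):
    best = min(storage_cost(d, p, lam) for d, p in candidates)
    print(lam, best)        # 0 3, then 1 6, then 8 16
```

Each λ picks out a different minimizer, matching the left, center, and right panels of the figure.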
This notation uses w_i to indicate whether pointer p_i should be used to reconstruct (part of) D by pasting a copy of string s into location l. Finally, w reconstructs D if every x_j is reconstructed by w. Compressing D can be cast as a binary linear minimization problem over w; this bit vector tells us which pointers to use in the compressed representation of D, and it implicitly defines the dictionary (a subset of S). In order to ensure that w reconstructs D, we require that Xw ≥ 1. Here X ∈ {0,1}^{n×m} indicates which words each w_i = 1 can reconstruct: the i-th column of X is zero everywhere except for a contiguous sequence of ones corresponding to the words which w_i = 1 reconstructs. Next, we assume the pointer storage cost of setting w_i = 1 is given by d_i ≥ 0 and that the cost of storing any s ∈ S is c(s). Note that s must be stored in the dictionary if ‖w_{J(s)}‖_∞ = 1, i.e., some pointer using s is used in the compression of D. Putting everything together, our lossless compression criterion is

minimize_w   w^T d + Σ_{s∈S} c(s) ‖w_{J(s)}‖_∞
subject to   Xw ≥ 1,  w ∈ {0,1}^m.     (1)

Finally, multiple documents can be compressed jointly by concatenating them in any order into a large document and disallowing any pointers that span document boundaries. Since this objective is invariant to the document concatenation order, it does not suffer from the same problems as LZ77 (cf. Section 2).

4 Optimization Algorithm

The binary constraint makes the problem in (1) non-convex. We solve it approximately via a series of related convex problems P^(1), P^(2), ... that converge to a good optimum. Each P^(i) relaxes the binary constraint to only require 0 ≤ w ≤ 1 and solves the weighted optimization problem

minimize_w   w^T d̃^(i) + Σ_{s∈S} c(s) ‖D^(i)_{J(s)J(s)} w_{J(s)}‖_∞
subject to   Xw ≥ 1,  0 ≤ w ≤ 1.     (2)

Here, D^(i) is an m × m diagonal matrix of positive weights and d̃^(i) = D^(i) d for brevity. We use an iterative reweighting scheme with D^(1) = I and D^(i+1)_{jj} = max{1, (w^(i)_j + ϵ)^{−1}}, where w^(i) is the solution to P^(i). 
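The reweighting update has a one-line NumPy translation (a sketch; ϵ is a small constant of our choosing):

```python
import numpy as np

def reweight(w, eps=1e-3):
    # coordinates of w near 0 get a large penalty weight on the next
    # pass, pushing them toward exactly 0; coordinates near 1 keep weight 1
    return np.maximum(1.0, 1.0 / (w + eps))

# e.g. with eps = 0.1: weights become 10, 5/3, and 1 respectively
reweight(np.array([0.0, 0.5, 1.0]), eps=0.1)
```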
This scheme is inspired by the iterative reweighting method of Candès et al. [5] for solving problems involving L0 regularization. At a high level, reweighting can be motivated by noting that (2) recovers the correct binary solution if ϵ is sufficiently small and we use as weights a nearly binary solution to (1). Since we do not know the correct weights, we estimate them from our best guess to the solution of (1). In turn, D^(i+1) punishes coefficients that were small in w^(i) and, taken together with the constraint Xw ≥ 1, pushes the solution to be binary.

ADMM Solution We demonstrate an efficient and parallel algorithm to solve (2) based on the Alternating Directions Method of Multipliers (ADMM) [2]. Problem (2) is a linear program solvable by a general-purpose method in O(m^3) time. However, if all potential dictionary elements are no longer than k words in length, we can use problem structure to achieve a run time of O(k^2 n) per step of ADMM, i.e., linear in the document length. This is helpful because k is relatively small in most scenarios: long k-grams tend to appear only once and are not helpful for compression. Moreover, they are rarely used in NLP applications since the relevant signal is captured by smaller fragments. ADMM is an optimization framework that operates by splitting a problem into two subproblems that are individually easier to solve. It alternates solving the subproblems until they both agree on the solution, at which point the full optimization problem has been solved. More formally, the optimum of a convex function h(w) = f(w) + g(w) can be found by minimizing f(w) + g(z) subject to the constraint that w = z. ADMM accomplishes this by operating on the augmented Lagrangian

L_ρ(w, z, y) = f(w) + g(z) + y^T (w − z) + (ρ/2) ‖w − z‖_2^2.     (3)

It minimizes L_ρ with respect to w and z while maximizing with respect to the dual variable y ∈ R^m in order to enforce the condition w = z. 
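The alternating pattern is easiest to see on a toy instance. Below, ADMM projects a point onto the nonnegative orthant by taking f(w) = ½‖w − a‖² and g(z) = I₊(z); this is an illustrative example chosen because both subproblems have closed forms, not the f and g used for criterion (2):

```python
import numpy as np

def admm_project_nonneg(a, rho=1.0, iters=200):
    w = z = y = np.zeros_like(a)
    for _ in range(iters):
        w = (a + rho * z - y) / (1.0 + rho)  # argmin_w of f(w) + y^T w + (rho/2)||w - z||^2
        z = np.maximum(0.0, w + y / rho)     # argmin_z of g(z) - y^T z + (rho/2)||w - z||^2
        y = y + rho * (w - z)                # dual ascent enforcing w = z
    return z

print(admm_project_nonneg(np.array([-1.0, 2.0])))  # [0. 2.]
```

The iterates converge to max(a, 0), the projection of a onto the feasible set, mirroring how the w- and z-subproblems of (2) are alternated until they agree.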
This minimization is accomplished by, at each step, solving for w, then z, then updating y according to [2]. These steps are repeated until convergence. Dropping the D^(i) superscripts for legibility, we can exploit problem structure by splitting (2) into

f(w) = w^T d̃ + Σ_{s∈S} c(s) ‖D_{J(s)J(s)} w_{J(s)}‖_∞ + I_+(w),   g(z) = I_+(Xz − 1)     (4)

where I_+(·) is 0 if its argument is non-negative and ∞ otherwise. We eliminated the w ≤ 1 constraint because it is unnecessary—any optimal solution will automatically satisfy it.

Minimizing w The dual of this problem is a quadratic knapsack problem solvable in linear expected time [4]; we provide a similar algorithm that solves the primal formulation. We solve for each w_{J(s)} separately since the optimization is separable in each block of variables. It can be shown [21] that w_{J(s)} = 0 if ‖D^{−1}_{J(s)J(s)} q_{J(s)}‖_1 ≤ c(s), where q_{J(s)} = max(ρ z_{J(s)} − d̃_{J(s)} − y_{J(s)}, 0) and the max operation is applied elementwise. Otherwise, w_{J(s)} is non-zero and the L∞ norm only affects the maximal coordinates of D_{J(s)J(s)} w_{J(s)}. For simplicity of exposition, we assume that the coefficients of w_{J(s)} are sorted in decreasing order according to D_{J(s)J(s)} q_{J(s)}, i.e., [D_{J(s)J(s)} q_{J(s)}]_j ≥ [D_{J(s)J(s)} q_{J(s)}]_{j+1}. This is always possible by permuting coordinates. We show in [21] that, if D_{J(s)J(s)} w_{J(s)} has r maximal coordinates, then

w_{J(s)_j} = D^{−1}_{J(s)_j J(s)_j} · min{ D_{J(s)_j J(s)_j} q_{J(s)_j},  (Σ_{v=1}^{r} D^{−1}_{J(s)_v J(s)_v} q_{J(s)_v} − c(s)) / (Σ_{v=1}^{r} D^{−2}_{J(s)_v J(s)_v}) }.     (5)

We can find r by searching for the smallest value of r for which exactly r coefficients in D_{J(s)J(s)} w_{J(s)} are maximal when determined by the formula above. As discussed in [21], an algorithm similar to the linear-time median-finding algorithm can be used to determine w_{J(s)} in linear expected time.

Minimizing z Solving for z is tantamount to projecting a weighted combination of w and y onto the polyhedron given by Xz ≥ 1 and is best solved by taking the dual. 
It can be shown [21] that the dual optimization problem is

minimize_α   (1/2) α^T H α − α^T (ρ1 − X(y + ρw))   subject to   α ≥ 0     (6)

where α ∈ R^n_+ is a dual variable enforcing Xz ≥ 1 and H = X X^T. Strong duality obtains, and z can be recovered via z = ρ^{−1}(y + ρw + X^T α). The matrix H has special structure when S is a set of k-grams no longer than k words. In this case, [21] shows that H is a (k − 1)-banded positive definite matrix, so we can find its Cholesky decomposition in O(k^2 n). We then use an active-set Newton method [12] to solve (6) quickly in approximately 5 Cholesky decompositions. A second important property of H is that, if N documents n_1, ..., n_N words long are compressed jointly and no k-gram spans two documents, then H is block-diagonal with block i an n_i × n_i (k − 1)-banded matrix. This allows us to solve (6) separately for each document. Since the majority of the time is spent solving for z, this property allows us to parallelize the algorithm and speed it up considerably.

5 Experiments

20 Newsgroups Dataset The majority of our experiments are performed on the 20 Newsgroups dataset [15, 22], a collection of about 19K messages approximately evenly split among 20 different newsgroups. Since each newsgroup discusses a different topic, some more closely related than others, we investigate our compressed features’ ability to elucidate class structure in supervised and unsupervised learning scenarios. We use the “by-date” 60%/40% training/testing split described in [22] for all classification tasks. This split makes our results comparable to the existing literature and makes the task more difficult by removing correlations from messages that are responses to one another.

Feature Extraction and Training We compute a bag-of-k-grams representation from a compressed document by counting the number of pointers that use each substring in the compressed version of the document. This method retrieves the canonical bag-of-k-grams representation when all pointers are used, i.e., w = 1. 
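Counting pointer usage is straightforward once the compressed representation is available. A sketch with a hypothetical pointer list (each entry pairs a substring with its paste location, mirroring p_i = (s, l)):

```python
from collections import Counter

def bag_of_kgrams(pointers, w):
    # count, for each dictionary substring, how many *used* pointers
    # (w_i = 1) paste a copy of it somewhere in the document
    return Counter(s for (s, _loc), w_i in zip(pointers, w) if w_i)

pointers = [("the cat", 0), ("the cat", 9), ("sat", 18)]
print(bag_of_kgrams(pointers, [1, 1, 0]))  # Counter({'the cat': 2})
print(bag_of_kgrams(pointers, [1, 1, 1]))  # canonical counts: all pointers used
```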
Our compression criterion therefore leads to a less redundant representation. Note that we extract features for a document corpus by compressing all of its documents jointly and then splitting into testing and training sets. Since this process involves no label information, it ensures that our estimate of testing error is unbiased. All experiments were limited to using 5-grams as features, i.e., k = 5 for our compression algorithm. Each substring’s dictionary cost was its word length and the pointer cost was uniformly set to 0 ≤ λ ≤ 5. We found that an overly large λ hurts accuracy more than an overly small value since the former produces long, infrequent substrings, while the latter tends to a unigram representation. It is also worthwhile to note that the storage cost (i.e., the value of the objective function) of the binary solution was never more than 1.006 times the storage cost of the relaxed solution, indicating that we consistently found a good local optimum. Finally, all classification tasks use an Elastic-Net–regularized logistic regression classifier implemented by glmnet [8]. Since this regularizer is a mix of L1 and L2 penalties, it is useful for feature selection but can also be used as a simple L2 ridge penalty. Before training, we normalize each document by its L1 norm and then normalize features by their standard deviation. We use this scheme so as to prevent overly long documents from dominating the feature normalization.

Figure 2: Misclassification error and standard error bars when classifying alt.atheism (A) vs. comp.graphics (G) from 20 Newsgroups. The four leftmost results are on features from running LZ77 on documents ordered by class (AG, GA), randomly (Rand), or by alternating classes (Alt); the rightmost is on our compressed features. 
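The two-step normalization described above might look like this in NumPy (a sketch; it assumes a dense count matrix with no all-zero rows or constant columns):

```python
import numpy as np

def normalize(X):
    # 1) scale each document (row) to unit L1 norm, so overly long
    #    documents do not dominate; 2) scale each feature (column)
    #    by its standard deviation across documents
    X = X / np.abs(X).sum(axis=1, keepdims=True)
    return X / X.std(axis=0, keepdims=True)

X = np.array([[1.0, 1.0], [3.0, 1.0]])
print(normalize(X).std(axis=0))  # every feature now has unit std: [1. 1.]
```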
LZ77 Comparison Our first experiment demonstrates LZ77’s sensitivity to document ordering on a simple binary classification task of predicting whether a document is from the alt.atheism (A) or comp.graphics (G) newsgroup. Features were computed by concatenating documents in different orders: (1) by class, i.e., all documents in A before those in G, or G before A; (2) randomly; (3) by alternating the class every other document. Fig. 2 shows the testing error compared to features computed from our criterion. Error bars were estimated by bootstrapping the testing set 100 times, and all regularization parameters were chosen to minimize testing error while λ was fixed at 0.03. As predicted in Section 2, document ordering has a marked impact on performance, with the by-class and random orders performing significantly worse than the alternating ordering. Moreover, order invariance and the ability to tune the pointer cost lets our criterion select a better set of 5-grams.

PCA Next, we investigate our features in a typical exploratory analysis scenario: a researcher looking for interesting structure by plotting all pairs of the top 10 principal components of the data. In particular, we verify PCA’s ability to recover binary class structure for the A and G newsgroups, as well as multiclass structure for the A, comp.sys.ibm.pc.hardware (PC), rec.motorcycles (M), sci.space (S), and talk.politics.mideast (PM) newsgroups. Fig. 3 plots the pair of principal components that best exemplifies class structure using (1) compressed features and (2) all 5-grams. For the sake of fairness, the components were picked by training a logistic regression on every pair of the top 10 principal components and selecting the pair with the lowest training error. In both the binary and multiclass scenarios, PCA is inundated by millions of features when using all 5-grams and cannot display good class structure. 
In contrast, compression reduces the feature set to tens of thousands (by two orders of magnitude) and clearly shows class structure. The star pattern of the five classes stands out even when class labels are hidden.

Figure 3: PCA plots for 20 Newsgroups. Left: alt.atheism (blue), comp.graphics (red). Right: alt.atheism (blue), comp.sys.ibm.pc.hardware (green), rec.motorcycles (red), sci.space (cyan), talk.politics.mideast (magenta). Top: compressed features (our method). Bottom: all 5-grams.

Table 1: Classification accuracy on the 20 Newsgroups and IMDb datasets

Method                      20 Newsgroups   IMDb
Discriminative RBM [16]     76.2            —
Bag-of-Words SVM [14, 20]   80.8            88.2
Naïve Bayes [17]            81.8            —
Word Vectors [20]           —               88.9
All 5-grams                 82.8            90.6
Compressed (our method)     83.0            90.4

Classification Tasks Table 1 compares the performance of compressed features with all 5-grams on two tasks: (1) categorizing posts from the 20 Newsgroups corpus into one of 20 classes; (2) categorizing movie reviews collected from IMDb [20] into one of two classes (there are 25,000 training and 25,000 testing examples evenly split between the classes). For completeness, we include comparisons with previous work for 20 Newsgroups [16, 14, 17] and IMDb [20]. All regularization parameters, including λ, were chosen through 10-fold cross validation on the training set. We also did not L1-normalize documents in the binary task because it was found to be counterproductive on the training set. Our classification performance is state of the art in both tasks, with the compressed and all-5-gram features tied in performance. Since both datasets feature copious amounts of labeled data, we expect the 5-gram features to do well because of the power of the Elastic-Net regularizer. What is remarkable is that the compression retains useful features without using any label information. There are tens of millions of 5-grams, but compression reduces them to hundreds of thousands (by two orders of magnitude). 
This has a particularly noticeable impact on training time for the 20 Newsgroups dataset. Cross-validation takes 1 hour with compressed features and 8–16 hours for all 5-grams on our reference computer, depending on the sparsity of the resulting classifier.

Training-Set Size Our final experiment explores the impact of training-set size on binary-classification accuracy for the A vs. G and rec.sport.baseball (B) vs. rec.sport.hockey (H) newsgroups. Fig. 4 plots testing error as the amount of training data varies, comparing compressed features to full 5-grams; we explore the latter with and without feature selection enabled (i.e., Elastic Net vs. L2 regularizer).

Figure 4: Classification accuracy as the training set size varies for two classification tasks from 20 Newsgroups: (a) alt.atheism (A) vs. comp.graphics (G); (b) rec.sport.baseball (B) vs. rec.sport.hockey (H). To demonstrate the effects of feature selection, L2 indicates L2-regularization while EN indicates elastic-net regularization.

We resampled the training set 100 times for each training-set size and report the average accuracy. All regularization parameters were chosen to minimize the testing error (so as to eliminate effects from imperfect tuning) and λ = 0.03 in both tasks. For the A–G task, the compressed features require substantially less data than the full 5-grams to come close to their best testing error. The B–H task is harder and all three classifiers benefit from more training data, although the gap between compressed features and all 5-grams is widest when less than half of the training data is available. 
In all cases, the compressed features outperform the full 5-grams, indicating that the latter may benefit from even more training data. In future work it will be interesting to investigate the efficacy of compressed features on more intelligent sampling schemes such as active learning.

6 Discussion

We develop a feature selection method for text based on lossless data compression. It is unsupervised and can thus be run as a task-independent, one-off preprocessing step on a corpus. Our method achieves state-of-the-art classification accuracy on two benchmark datasets despite selecting features without any knowledge of the class labels. In experiments comparing it to a full 5-gram model, our method reduces the feature-set size by two orders of magnitude and requires only a fraction of the time to train a classifier. It selects a compact feature set that can require significantly less training data and reveals unsupervised problem structure (e.g., when using PCA). Our compression scheme is more robust and less arbitrary compared to a setup which uses off-the-shelf compression algorithms to extract features from a document corpus. At the same time, our method has increased flexibility since the target k-gram length is a tunable parameter. Importantly, the algorithm we present is based on iterative reweighting and ADMM and is fast enough—linear in the input size when k is fixed, and highly parallelizable—to allow for computing a regularization path of features by varying the pointer cost. Thus, we may adapt the compression to the data at hand and select features that better elucidate its structure. Finally, even though we focus on text data in this paper, our method is applicable to any sequential data where the sequence elements are drawn from a finite set (such as the universe of words in the case of text data). In future work we plan to compress click stream data from users browsing the Web. 
We also plan to experiment with approximate text representations obtained by making our criterion lossy.

Acknowledgments

We would like to thank Andrej Krevl, Jure Leskovec, and Julian McAuley for their thoughtful discussions and help with our paper.

References

[1] D. Benedetto, E. Caglioti, and V. Loreto. Language trees and zipping. PRL, 88(4):048702, 2002.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] A. Bratko, B. Filipič, G. V. Cormack, T. R. Lynam, and B. Zupan. Spam filtering using statistical data compression models. JMLR, 7:2673–2698, 2006.
[4] P. Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163–166, 1984.
[5] E. Candès, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. J Fourier Analysis and Applications, 14(5-6):877–905, 2008.
[6] R. Cilibrasi and P. M. Vitányi. Clustering by compression. TIT, 51(4):1523–1545, 2005.
[7] E. Frank, C. Chui, and I. Witten. Text categorization using compression models. Technical Report 00/02, University of Waikato, Department of Computer Science, 2000.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. J Stat Softw, 33(1):1–22, 2010.
[9] E. Gabrilovich and S. Markovitch. Text categorization with many redundant features: Using aggressive feature selection to make SVMs competitive with C4.5. In ICML, 2004.
[10] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 3:1157–1182, 2003.
[11] E. Keogh, S. Lonardi, and C. A. Ratanamahatana. Towards parameter-free data mining. In KDD, 2004.
[12] D. Kim, S. Sra, and I. S. Dhillon. Tackling box-constrained optimization via a new projected quasi-Newton approach. SIAM Journal on Scientific Computing, 32(6):3548–3563, 2010.
[13] V. Kuleshov. Fast algorithms for sparse principal component analysis based on Rayleigh quotient iteration. In ICML, 2013.
[14] M. Lan, C. Tan, and H. Low. Proposing a new term weighting scheme for text categorization. In AAAI, 2006.
[15] K. Lang. Newsweeder: Learning to filter netnews. In ICML, 1995.
[16] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In ICML, 2008.
[17] B. Li and C. Vogel. Improving multiclass text classification with error-correcting output coding and sub-class partitions. In Can Conf Adv Art Int, 2010.
[18] H. Liu and L. Yu. Toward integrating feature selection algorithms for classification and clustering. TKDE, 17(4):491–502, 2005.
[19] T. Liu, S. Liu, Z. Chen, and W. Ma. An evaluation on feature selection for text clustering. In ICML, 2003.
[20] A. Maas, R. Daly, P. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In ACL, 2011.
[21] H. S. Paskov, R. West, J. C. Mitchell, and T. J. Hastie. Supplementary material for Compressive Feature Learning, 2013.
[22] J. Rennie. 20 Newsgroups dataset, 2008. http://qwone.com/~jason/20Newsgroups (accessed May 31, 2013).
[23] J. Rissanen. Modeling by shortest data description. Automatica, 14(5):465–471, 1978.
[24] D. Sculley and C. E. Brodley. Compression and machine learning: A new perspective on feature space vectors. In DCC, 2006.
[25] R. Tibshirani. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B, 58(1):267–288, 1996.
[26] Y. Yang and J. Pedersen. A comparative study on feature selection in text categorization. In ICML, 1997.
[27] J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani. 1-norm support vector machines. In NIPS, 2004.
[28] J. Ziv and A. Lempel. A universal algorithm for sequential data compression. TIT, 23(3):337–343, 1977.
[29] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. JCGS, 15(2):265–286, 2006.
Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions Eftychios A. Pnevmatikakis and Liam Paninski Department of Statistics, Center for Theoretical Neuroscience Grossman Center for the Statistics of Mind, Columbia University, New York, NY {eftychios, liam}@stat.columbia.edu Abstract We propose a compressed sensing (CS) calcium imaging framework for monitoring large neuronal populations, where we image randomized projections of the spatial calcium concentration at each timestep, instead of measuring the concentration at individual locations. We develop scalable nonnegative deconvolution methods for extracting the neuronal spike time series from such observations. We also address the problem of demixing the spatial locations of the neurons using rank-penalized matrix factorization methods. By exploiting the sparsity of neural spiking we demonstrate that the number of measurements needed per timestep is significantly smaller than the total number of neurons, a result that can potentially enable imaging of larger populations at considerably faster rates compared to traditional raster-scanning techniques. Unlike traditional CS setups, our problem involves a block-diagonal sensing matrix and a non-orthogonal sparse basis that spans multiple timesteps. We provide tight approximations to the number of measurements needed for perfect deconvolution for certain classes of spiking processes, and show that this number undergoes a “phase transition,” which we characterize using modern tools relating conic geometry to compressed sensing. 1 Introduction Calcium imaging methods have revolutionized data acquisition in experimental neuroscience; we can now record from large neural populations to study the structure and function of neural circuits (see e.g. Ahrens et al. (2013)), or from multiple locations on a dendritic tree to examine the detailed computations performed at a subcellular level (see e.g. Branco et al. (2010)). 
Traditional calcium imaging techniques involve a raster-scanning protocol where at each cycle/timestep the microscope scans the image in a voxel-by-voxel fashion, or some other predetermined pattern, e.g. through random access multiphoton (RAMP) microscopy (Reddy et al., 2008), and thus the number of measurements per timestep is equal to the number of voxels of interest. Although this protocol produces “eye-interpretable” measurements, it introduces a tradeoff between the size of the imaged field and the imaging frame rate; very large neural populations can be imaged only with a relatively low temporal resolution. This unfavorable situation can potentially be overcome by noticing that many acquired measurements are redundant; voxels can be “void” in the sense that no neurons are located there, and active voxels at nearby locations or timesteps will be highly correlated. Moreover, neural activity is typically sparse; most neurons do not spike at every timestep. During recent years, imaging practitioners have developed specialized techniques to leverage this redundancy. For example, Nikolenko et al. (2008) describe a microscope that uses a spatial light modulator and allows for the simultaneous imaging of different (predefined) image regions. More broadly, the advent of compressed sensing (CS) has found many applications in imaging such as MRI (Lustig et al., 2007), hyperspectral imaging (Gehm et al., 2007), sub-diffraction microscopy (Rust et al., 2006) and ghost imaging (Katz et al., 2009), with available hardware implementations (see e.g. Duarte et al. (2008)). Recently, Studer et al. (2012) presented a fluorescence microscope based on the CS framework, where each measurement is obtained by projection of the whole image on a random pattern. This framework can lead to significant undersampling ratios for biological fluorescence imaging. In this paper we propose the application of the imaging framework of Studer et al. 
(2012) to the case of neural population calcium imaging to address the problem of imaging large neural populations with high temporal resolution. The basic idea is to not measure the calcium at each location individually, but rather to take a smaller number of “mixed” measurements (based on randomized projections of the data). Then we use convex optimization methods that exploit the sparse structure in the data in order to simultaneously demix the information from the randomized projection observations and deconvolve the effect of the slow calcium indicator to recover the spikes. Our results indicate that the number of required randomized measurements scales merely with the number of expected spikes rather than the ambient dimension of the signal (number of voxels/neurons), allowing for the fast monitoring of large neural populations. We also address the problem of estimating the (potentially overlapping) spatial locations of the imaged neurons and demixing these locations using methods for nuclear norm minimization and nonnegative matrix factorization. Our methods scale linearly with the experiment length and are largely parallelizable, ensuring computational tractability. Our results indicate that calcium imaging can be potentially scaled up to considerably larger neuron populations and higher imaging rates by moving to compressive signal acquisition. In the traditional static compressive imaging paradigm the sensing matrix is dense; every observation comes from the projection of all the image voxels to a random vector/matrix. Moreover, the underlying image can be either directly sparse (most of the voxels are zero) or sparse in some orthogonal basis (e.g. Fourier, or wavelet). 
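The acquisition scheme just described is easy to sketch numerically. Below is a toy NumPy illustration of replacing d raw voxel measurements with n ≪ d randomized binary projections; all sizes and the sparsity level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4096, 64                      # voxels vs. measurements per timestep, n << d

# a mostly-empty fluorescence field with a few active voxels
f_t = np.zeros(d)
active = rng.choice(d, size=10, replace=False)
f_t[active] = rng.uniform(1.0, 2.0, size=10)

# random binary sensing pattern: each measurement mixes all voxels at once
B_t = rng.integers(0, 2, size=(n, d)).astype(float)
y_t = B_t @ f_t                      # n "mixed" measurements instead of d raw ones

print(y_t.shape)                     # (64,)
```

Recovering f_t (or the underlying spikes) from such compressed measurements y_t is the deconvolution problem treated in the following sections.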
In our case the sensing matrix has a block-diagonal form (we can only observe the activity at one specific time in each measurement) and the sparse basis (which corresponds to the inverse of the matrix implementing the convolution of the spikes from the calcium indicator) is non-orthogonal and spans multiple timelags. We analyze the effect of these distinctive features in Sec. 3 in a noiseless setting. We show that as the number of measurements increases, the probability of successful recovery undergoes a phase transition, and study the resulting phase transition curve (PTC), i.e., the number of measurements per timestep required for accurate deconvolution as a function of the number of spikes. Our analysis uses recent results that connect CS with conic geometry through the “statistical dimension” (SD) of descent cones (Amelunxen et al., 2013). We demonstrate that in many cases of interest, the SD provides a very good estimate of the PTC.

2 Model description and approximate maximum-a-posteriori inference

See e.g. Vogelstein et al. (2010) for background on statistical models for calcium imaging data. Here we assume that an image or light field (either two- or three-dimensional) is observed at every timestep, for a total duration of T timesteps. Each observed field contains a total of d voxels and can be vectorized into a single column vector; thus all the activity can be described by a d × T matrix F. Now assume that the field contains a total of N neurons, where N is in general unknown. Each spike causes a rapid increase in the calcium concentration, which then decays with a time constant that depends on the chemical properties of the calcium indicator.
For each neuron i we assume that the “calcium activity” c_i can be described as a stable autoregressive AR(1) process [1] that filters the neuron’s spikes s_i(t) according to the fast-rise, slow-decay behavior described before:

c_i(t) = γ c_i(t−1) + s_i(t),   (1)

where γ is the discrete time constant, satisfying 0 < γ < 1, which relates to the continuous time constant τ of the calcium indicator as γ = exp(−Δt/τ) ≈ 1 − Δt/τ, with Δt the length of each timestep. In general we assume that each s_i(t) is binary, owing to the small length of the timestep in the proposed compressive imaging setting, and we use an i.i.d. prior for each neuron, p(s_i(t) = 1) = π_i [2]. Moreover, let a_i ∈ R^d_+ denote the (nonnegative) location vector for neuron i, and b ∈ R^d_+ the (nonnegative) vector of baseline concentrations for all the voxels. The spatial calcium concentration profile at time t can be described as

f(t) = Σ_{i=1}^N a_i c_i(t) + b.   (2)

[1] Generalization to general AR(p) processes is straightforward, but we keep p = 1 for simplicity.
[2] This choice is merely for simplicity; more general prior distributions can be incorporated in our framework.

In conventional raster-scanning experiments, at each timestep we observe a noisy version of the d-dimensional image f(t). Since d is typically large, the acquisition of this vector can take a significant amount of time, leading to a lengthy timestep Δt and low temporal resolution. Instead, we propose to observe the projections of f(t) onto a random matrix B_t ∈ R^{n×d} (e.g. each entry of B_t could be chosen as 0 or 1 with probability 0.5):

y(t) = B_t f(t) + ε_t,  ε_t ∼ N(0, Σ_t),   (3)

where ε_t denotes measurement noise (Gaussian, with diagonal covariance Σ_t, for simplicity). If n = dim(y(t)) satisfies n ≪ d, then y(t) represents a compression of f(t) that can potentially be obtained more quickly than the full f(t).
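Equations (1)–(2) are straightforward to simulate. The following sketch (with invented sizes, rates and time constants) draws Bernoulli spikes, filters them through the AR(1) dynamics, and projects the traces into voxel space:

```python
import numpy as np

def simulate_calcium(S, gamma):
    """Filter spikes S (N x T) through the AR(1) dynamics of eq. (1)."""
    N, T = S.shape
    C = np.zeros((N, T))
    C[:, 0] = S[:, 0]
    for t in range(1, T):
        C[:, t] = gamma * C[:, t - 1] + S[:, t]
    return C

rng = np.random.default_rng(1)
N, T, gamma = 5, 200, 0.95
S = (rng.random((N, T)) < 0.04).astype(float)   # i.i.d. Bernoulli spikes, pi_i = 0.04
C = simulate_calcium(S, gamma)

# eq. (2): nonnegative spatial footprints a_i and a baseline b give the voxel signal
d = 32
A = np.abs(rng.standard_normal((d, N)))         # columns play the role of the a_i
b = 0.1 * np.ones(d)
F = A @ C + b[:, None]                          # d x T spatiotemporal concentration
print(F.shape)                                  # (32, 200)
```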
Now if we can use statistical methods to recover f(t) (or equivalently the location a_i and spikes s_i of each neuron) from the compressed measurements y(t), the total imaging throughput will be increased by a factor proportional to the undersampling ratio d/n. Our assumption here is that the random projection matrices B_t can be constructed quickly. Recent technological innovations have made this possible: digital micromirror devices perform spatial light modulation and can generate different excitation patterns at high frequency (on the order of kHz). The total fluorescence can then be detected with a single photomultiplier tube. For more details we refer to Duarte et al. (2008); Nikolenko et al. (2008); Studer et al. (2012). We discuss the statistical recovery problem next. For future reference, note that eqs. (1)-(3) can be written in matrix form as (vec(·) denotes the vectorizing operator)

S = C G^T,
F = A C + b 1_T^T,
vec(Y) = B vec(F) + ε,   (4)

where G is the T × T lower bidiagonal matrix with ones on the diagonal and −γ on the first subdiagonal, and B = blkdiag{B_1, ..., B_T}.

2.1 Approximate MAP inference with an interior point method

For now we assume that A is known. In general, MAP inference of S is difficult due to the discrete nature of S. Following Vogelstein et al. (2010) we relax S to take continuous values in the interval [0, 1] (remember that we assume binary spikes), and appropriately modify the prior for s_i(t) to p(s_i(t)) ∝ exp(−λ_i s_i(t)) 1(0 ≤ s_i(t) ≤ 1), where λ_i is chosen such that the relaxed prior has the same mean π_i.
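The matrix form (4) can be checked numerically. The sketch below (invented sizes) builds the bidiagonal G, verifies that noiseless traces return the spikes via the differencing S = C G^T (with binary spikes, thresholding at 0.5 then gives exact spike times), and confirms the Kronecker identity vec(C G^T) = (G ⊗ I_N) vec(C) used later, along with the block-diagonal structure of B:

```python
import numpy as np
from scipy.linalg import block_diag

def make_G(T, gamma):
    """Bidiagonal matrix from eq. (4): ones on the diagonal, -gamma below it."""
    return np.eye(T) - gamma * np.eye(T, k=-1)

rng = np.random.default_rng(2)
N, T, gamma = 4, 50, 0.9
G = make_G(T, gamma)

# forward: C solves C G^T = S;  inverse: differencing recovers the spikes
S = (rng.random((N, T)) < 0.05).astype(float)
C = np.linalg.solve(G, S.T).T
assert np.allclose(C @ G.T, S)
assert np.array_equal((C @ G.T > 0.5).astype(float), S)  # binary spikes survive thresholding

# vec identity (vec stacks columns, i.e. Fortran order in NumPy)
lhs = (C @ G.T).flatten(order="F")
rhs = np.kron(G, np.eye(N)) @ C.flatten(order="F")
assert np.allclose(lhs, rhs)

# block-diagonal sensing operator B = blkdiag{B_1, ..., B_T}
n = 2
B = block_diag(*[rng.integers(0, 2, size=(n, N)).astype(float) for _ in range(T)])
assert B.shape == (n * T, N * T)
```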
To exploit the banded structure of G, we seek the MAP estimate of C (instead of S) by solving the following convex quadratic program (we let ȳ(t) = y(t) − B_t b):

minimize_C  Σ_{t=1}^T (1/2) (ȳ(t) − B_t A c(t))^T Σ_t^{−1} (ȳ(t) − B_t A c(t)) − log p(C)
subject to  0 ≤ C G^T ≤ 1,  c(1) ≥ 0.   (P-QP)

Using the prior on S and the relation S = C G^T, the log-prior of C can be written as log p(C) ∝ −λ^T C G^T 1_T. We can solve (P-QP) efficiently using an interior point method with a log-barrier (Vogelstein et al., 2010). The contribution of the likelihood term to the Hessian is a block-diagonal matrix, whereas the barrier term contributes a block-tridiagonal matrix where each non-zero block is diagonal. As a result, the Newton search direction −H^{−1}∇ can be computed efficiently in O(TN^3) time using a block version of standard forward-backward methods for tridiagonal systems of linear equations. We note that if N is large this can be inefficient; in this case we can use an augmented Lagrangian method (Boyd et al., 2011) to derive a fully parallelizable first-order method with O(TN) complexity per iteration. We refer to the supplementary material for additional details. As a first example we consider a simple setup where all the parameters are assumed to be known. We consider N = 50 neurons observed over T = 1000 timesteps. We assume that A and b are known, with A = I_N (corresponding to non-overlapping point neurons, with one neuron in each voxel) and b = 0, respectively. This case of known point neurons can be thought of as the compressive analog of RAMP microscopy, where the neuron locations are predetermined and then imaged in a serial manner. (We treat the case of unknown and possibly overlapping neuron locations in Section 2.2.) Each neuron was assumed to fire in an i.i.d. fashion with probability p = 0.04 per timestep. Each measurement was obtained by projecting the spatial fluorescence vector at time t, f(t), onto a random matrix B_t. Each row of B_t is taken as an i.i.d.
normalized vector 2β/√N, where β has i.i.d. entries following a fair Bernoulli distribution. For each set of measurements we assume that Σ_t = σ² I_n, and the signal-to-noise ratio (SNR) in dB is defined as SNR = 10 log10(Var[β^T f(t)]/(Nσ²)); a quick calculation reveals that SNR = 10 log10(p(1 − p)/((1 − γ²)σ²)).

[Figure 1: Performance of the proposed algorithm under different noise levels. A: True traces. B: Estimated traces with n = 5 (10× undersampling), SNR = 20 dB. C: Estimated traces with n = 20 (2.5× undersampling), SNR = 20 dB. D: True and estimated spikes from the traces shown in panels B and C for a randomly selected neuron. E: Relative error between true and estimated traces for different numbers of measurements per timestep under different noise levels. The error decreases with the number of observations and the reconstruction is stable with respect to noise.]

Fig. 1 examines the solution of (P-QP) as the number of measurements per timestep n varies from 1 to N, for 8 different SNR values (0, 5, ..., 30 dB) plus the noiseless case (SNR = ∞). Fig. 1A shows the noiseless traces for all the neurons, and panels B and C show the reconstructed traces for SNR = 20 dB with n = 5 and n = 20, respectively. Fig. 1D shows the estimated spikes for these cases for a randomly picked neuron. For a very small number of measurements (n = 5, i.e., 10× undersampling) the inferred calcium traces (Fig. 1B) already closely resemble the true traces.
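The O(TN³) Newton step mentioned above relies on a block version of the Thomas algorithm for block-tridiagonal systems. The paper's actual Hessian is not reproduced here; the following is a generic sketch of such a solver, run on an invented diagonally dominant test system and checked against a dense solve:

```python
import numpy as np

def solve_block_tridiag(D, L, U, b):
    """Solve H x = b for block-tridiagonal H with diagonal blocks D[t],
    sub-diagonal blocks L[t] (block row t+1, col t) and super-diagonal
    blocks U[t] (block row t, col t+1): block forward elimination followed
    by back substitution, O(T n^3) for T blocks of size n."""
    T = len(D)
    Dp, bp = [D[0]], [b[0]]
    for t in range(1, T):
        W = L[t - 1] @ np.linalg.inv(Dp[t - 1])
        Dp.append(D[t] - W @ U[t - 1])
        bp.append(b[t] - W @ bp[t - 1])
    x = [None] * T
    x[-1] = np.linalg.solve(Dp[-1], bp[-1])
    for t in range(T - 2, -1, -1):
        x[t] = np.linalg.solve(Dp[t], bp[t] - U[t] @ x[t + 1])
    return np.concatenate(x)

rng = np.random.default_rng(3)
T, nblk = 30, 4
D = [5 * np.eye(nblk) + 0.1 * rng.standard_normal((nblk, nblk)) for _ in range(T)]
U = [np.diag(0.1 * rng.standard_normal(nblk)) for _ in range(T - 1)]  # diagonal
L = [u.T for u in U]                                                  # off-blocks
b = [rng.standard_normal(nblk) for _ in range(T)]
x = solve_block_tridiag(D, L, U, b)

# verify against assembling H densely
H = np.zeros((T * nblk, T * nblk))
for t in range(T):
    H[t*nblk:(t+1)*nblk, t*nblk:(t+1)*nblk] = D[t]
    if t + 1 < T:
        H[t*nblk:(t+1)*nblk, (t+1)*nblk:(t+2)*nblk] = U[t]
        H[(t+1)*nblk:(t+2)*nblk, t*nblk:(t+1)*nblk] = L[t]
assert np.allclose(H @ x, np.concatenate(b))
```

The diagonal off-blocks mirror the structure of the barrier term described above; the solver itself is standard and not specific to this model.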
However, the inferred MAP values of the spikes (computed by S = C G^T, essentially a differencing operation here) lie in the interior of [0, 1], and the results are not directly interpretable at a high temporal resolution. As n increases (n = 20, red) the estimated spikes lie very close to {0, 1}, and a simple thresholding procedure can recover the true spike times. In Fig. 1E the relative error between the estimated and true traces (∥C − Ĉ∥_F /∥C∥_F, with ∥·∥_F denoting the Frobenius norm) is plotted. In general the error decreases with the number of observations, and the reconstruction is robust to noise. Finally, by observing the noiseless case (dashed curve) we see that for n ≥ 13 the error becomes practically zero, indicating fully compressed acquisition of the calcium traces with a roughly 4× undersampling factor. We will see below that this undersampling factor is inversely proportional to the firing rate: we can recover highly sparse spike signals S using very few measurements n.

2.2 Estimation of the spatial matrix A

The above algorithm assumes that the underlying neurons have known locations, i.e., that the matrix A is known. In some cases A can be estimated a priori by running a conventional raster-scanning experiment at high spatial resolution and locating the active voxels. However, this approach is expensive and can still be challenging due to noise and possible spatial overlap between different neurons. To estimate A within the compressive framework, we note that the baseline-subtracted spatiotemporal calcium matrix F (see eqs. (2) and (4)) can be written as F̄ = F − b 1_T^T = AC; thus rank(F̄) ≤ N, where N is the number of underlying neurons, with typically N ≪ d.
Since N is also in general unknown, we estimate F̄ by solving a nuclear norm penalized problem (Recht et al., 2010):

minimize_{F̄}  Σ_{t=1}^T (1/2) (ȳ(t) − B_t f̄(t))^T Σ_t^{−1} (ȳ(t) − B_t f̄(t)) − log p(F̄) + λ_NN ∥F̄∥_*
subject to  F̄ G^T ≥ 0,  f̄(1) ≥ 0,   (P-NN)

where ∥·∥_* denotes the nuclear norm (NN) of a matrix (i.e., the sum of its singular values), which is a convex approximation to the nonconvex rank function (Fazel, 2002). The prior of F̄ can be chosen in a similar fashion as log p(C), i.e., log p(F̄) ∝ −λ_F^T F̄ G^T 1_T, where λ_F ∈ R^d. Although more complex than (P-QP), (P-NN) is again convex and can be solved efficiently using e.g. the ADMM method of Boyd et al. (2011). From the solution of (P-NN) we can estimate N by appropriately thresholding the singular values of the estimated F̄ [3]. Having N, we can then use appropriately constrained nonnegative matrix factorization (NMF) methods to alternately estimate A and C. Note that during this NMF step the baseline vector b can also be estimated jointly with A. Since NMF methods are nonconvex, and thus prone to local optima, informative initialization is important. We can use the solution of (P-NN) to initialize the spatial component A using clustering methods, similar to methods typically used in neuronal extracellular spike sorting (Lewicki, 1998). Details are given in the supplement (along with some discussion of the estimation of the other parameters in this problem); we refer to Pnevmatikakis et al. (2013) for full details.
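The rank reasoning behind (P-NN) can be illustrated on a noiseless toy example: F̄ = AC has at most N nonzero singular values, and soft-thresholding of singular values (shown below) is the proximal operator of the nuclear norm penalty. This is an illustrative sketch with invented sizes, not the constrained solver used in the paper:

```python
import numpy as np

def svd_soft_threshold(M, lam):
    """Prox of lam * nuclear norm: shrink singular values toward zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(5)
d, T, N = 128, 300, 8
A = np.abs(rng.standard_normal((d, N)))    # spatial footprints
C = np.abs(rng.standard_normal((N, T)))    # calcium traces
F_bar = A @ C                              # rank-N matrix with N << d

s = np.linalg.svd(F_bar, compute_uv=False)
N_hat = int(np.sum(s > 1e-8 * s[0]))       # singular values above a relative tolerance
print(N_hat)                               # 8

# shrinkage below the smallest signal singular value keeps the rank intact
F_shrunk = svd_soft_threshold(F_bar, lam=0.5 * s[N - 1])
```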
[Figure 2: Estimating locations and calcium concentration from compressive calcium imaging measurements. A: True spatiotemporal concentration. B: Estimate obtained by solving (P-NN). C: Estimate obtained using NMF methods. D: Logarithmic plot of the first singular values of the solution of (P-NN). E: Estimation of the baseline vector. F: True spatial locations. G: Estimated spatial locations. The NN-penalized method estimates the number of neurons and the NMF algorithm recovers the spatial and temporal components with high accuracy.]

In Fig. 2 we present an application of this method to an example with N = 8 spatially overlapping neurons. For simplicity we consider neurons in a one-dimensional field with a total of d = 128 voxels and spatial positions shown in Fig. 2F. At each timestep we obtain just n = 5 noisy measurements using random projections on binary masks. From the solution to the NN-penalized problem (P-NN) (Fig. 2B) we threshold the singular values (Fig. 2D) and estimate the number of underlying neurons (note the logarithmic gap between the 8th and 9th largest singular values that enables this separation). We then use the NMF approach to obtain final estimates of the spatial locations (Fig. 2G), the baseline vector (Fig. 2E), and the full spatiotemporal concentration (Fig. 2C). The estimates match the true values well. Note that n < N ≪ d, showing that compressive imaging with significant undersampling factors is possible even when, as in the classical raster-scanning protocol, the spatial locations are unknown.
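The NMF refinement step can be sketched with classical multiplicative updates (Lee–Seung style), whose Frobenius reconstruction error is nonincreasing across iterations. This is a generic, unconstrained NMF on synthetic data, not the constrained and informatively initialized NMF used in the paper:

```python
import numpy as np

def nmf_multiplicative(V, k, iters=200, seed=0, eps=1e-12):
    """Factor V ~= W H with W, H >= 0 via Lee-Seung multiplicative updates;
    the Frobenius error is nonincreasing over iterations."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    errs = []
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        errs.append(np.linalg.norm(V - W @ H))
    return W, H, errs

rng = np.random.default_rng(6)
d, T, N = 60, 150, 5
V = np.abs(rng.standard_normal((d, N))) @ np.abs(rng.standard_normal((N, T)))

W, H, errs = nmf_multiplicative(V, k=N)
print(errs[-1] / np.linalg.norm(V))   # relative reconstruction error after fitting
```

In practice the factors would be initialized from the (P-NN) solution via clustering, as described above, rather than at random.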
3 Estimation of the phase transition curve in the noiseless case

The results presented above indicate that reconstruction of the spikes is possible even with significant undersampling. In this section we study this problem from a compressed sensing (CS) perspective in the idealized case where the measurements are noiseless. For simplicity, we also assume that A = I (similar to a RAMP setup). Unlike the traditional CS setup, where a sparse signal (in some basis) is sensed with a dense, fully supported random matrix, in our case the sensing matrix B has a block-diagonal form. A standard justification of CS approaches proceeds by establishing that the sensing matrix satisfies the “restricted isometry property” (RIP) for certain classes of sparse signals with high probability (w.h.p.); this property in turn guarantees the correct recovery of the parameters of interest (Candes and Tao, 2005). Yap et al. (2011) showed that for signals that are sparse in some orthogonal basis, the RIP holds w.h.p. for random block-diagonal matrices, with a sufficient number of measurements that scales with the squared coherence between the sparse basis and the elementary (identity) basis. For non-orthogonal bases, the RIP has only been established for fully dense sensing matrices (Candes et al., 2011). For signals with sparse variations, Ba et al. (2012) established perfect and stable recovery conditions under the assumption that the sensing matrix at each timestep satisfies certain RIPs and that the sparsity level at each timestep has known upper bounds. While the RIP is a valuable tool for the study of convex relaxation approaches to compressed sensing problems, its estimates are usually only up to a constant and can be relatively loose (Blanchard et al., 2011).

[3] To reduce potential shrinkage but promote low-rank solutions, this “global” NN penalty can be replaced by a series of “local” NN penalties on spatially overlapping patches.
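To make the non-orthogonality concrete: for a single neuron, c = G⁻¹s, so the sparse-basis vectors are the columns of G⁻¹, i.e. shifted decaying exponentials γ^(t−k). A quick check (invented sizes) shows that their normalized Gram matrix is far from the identity for γ near 1 and approaches it as γ → 0, where G → I:

```python
import numpy as np

def max_offdiag_coherence(T, gamma):
    """Largest normalized inner product between distinct columns of G^{-1}."""
    G = np.eye(T) - gamma * np.eye(T, k=-1)
    Psi = np.linalg.inv(G)                 # columns: decaying-exponential responses
    Psi /= np.linalg.norm(Psi, axis=0)     # normalize the columns
    gram = Psi.T @ Psi
    np.fill_diagonal(gram, 0.0)
    return float(np.abs(gram).max())

print(max_offdiag_coherence(50, 0.90))   # high coherence: far from orthogonal
print(max_offdiag_coherence(50, 0.01))   # nearly orthogonal: G close to I
```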
An alternative viewpoint is offered by conic geometric arguments (Chandrasekaran et al., 2012; Amelunxen et al., 2013) that examine how many measurements are required so that the convex relaxed program has a unique solution which coincides with the true sparse solution. We use this approach to study the theoretical properties of our proposed compressed calcium imaging framework in an idealized noiseless setting. When noise is absent, the quadratic program (P-QP) for the approximate MAP estimate converges to a linear program [4]:

minimize_C  f(C),  subject to  B vec(C) = vec(Y),   (P-LP)

with

f(C) = (v ⊗ 1_N)^T vec(C)  if (G ⊗ I_d) vec(C) ≥ 0,  and  f(C) = ∞ otherwise,

where v = G^T 1_T. Here ⊗ denotes the Kronecker product, and we used the identity vec(C G^T) = (G ⊗ I_d) vec(C). To examine the properties of (P-LP) we follow the approach of Amelunxen et al. (2013): for a fully dense i.i.d. Gaussian (or random rotation) sensing matrix B, the linear program (P-LP) will succeed w.h.p. in reconstructing the true solution C_0 if the total number of measurements nT satisfies

nT ≥ δ(D(f, C_0)) + O(√(TN)).   (5)

D(f, C_0) is the descent cone of f at C_0, induced by the set of non-increasing directions from C_0, i.e.,

D(f, C_0) = ∪_{τ≥0} { y ∈ R^{N×T} : f(C_0 + τy) ≤ f(C_0) },   (6)

and δ(C) is the “statistical dimension” (SD) of a convex cone C ⊆ R^m, defined as the expected squared length of a standard normal Gaussian vector projected onto the cone: δ(C) = E_g ∥Π_C(g)∥², with g ∼ N(0, I_m). Eq. (5), and the analysis of Amelunxen et al. (2013), state that as TN → ∞ the probability that (P-LP) succeeds in finding the true solution undergoes a phase transition, and that the phase transition curve (PTC), i.e., the number of measurements required for perfect reconstruction normalized by the ambient dimension NT (Donoho and Tanner, 2009), coincides with the normalized SD. In our case B is a block-diagonal matrix (not a fully dense Gaussian matrix), and the SD only provides an estimate of the PTC.
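In the noiseless setting, (P-LP) can be handed to an off-the-shelf LP solver. The sketch below (small invented sizes; SciPy's linprog, assuming SciPy is available) builds the objective (v ⊗ 1_N)^T vec(C), the block-diagonal equality constraints, and the cone constraint (G ⊗ I_N) vec(C) ≥ 0, and checks that the solver returns a feasible point whose objective is no worse than that of the ground truth:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
N, T, n, gamma = 8, 15, 4, 0.9

G = np.eye(T) - gamma * np.eye(T, k=-1)
S = np.zeros((N, T))
S[np.arange(N), rng.integers(0, T, N)] = 1.0        # one spike per neuron
C_true = np.linalg.solve(G, S.T).T                  # C G^T = S

Bts = [rng.integers(0, 2, (n, N)).astype(float) for t in range(T)]
y = np.concatenate([Bts[t] @ C_true[:, t] for t in range(T)])

cost = np.kron(G.T @ np.ones(T), np.ones(N))        # v kron 1_N, with v = G^T 1_T
A_eq = np.zeros((n * T, N * T))                     # blkdiag{B_1, ..., B_T}
for t in range(T):
    A_eq[t*n:(t+1)*n, t*N:(t+1)*N] = Bts[t]
A_ub = -np.kron(G, np.eye(N))                       # -(G kron I_N) x <= 0: spikes >= 0
res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(N * T),
              A_eq=A_eq, b_eq=y, bounds=(None, None))

C_hat = res.x.reshape((N, T), order="F")            # undo column-stacking
print("solved:", res.status == 0,
      "recovered:", np.allclose(C_hat, C_true, atol=1e-5))
```

Whether recovery is exact depends on where (k/T, n/N) falls relative to the phase transition studied below; feasibility and optimality of the returned point hold regardless.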
However, as we show below, this estimate is tight in most cases of interest.

3.1 Computing the statistical dimension

Using a result from Amelunxen et al. (2013), the statistical dimension can also be expressed as the expected squared distance of a standard normal vector from the cone induced by the subdifferential (Rockafellar, 1970) ∂f of f at the true solution C_0:

δ(D(f, C_0)) = E_g inf_{τ>0} min_{u ∈ τ∂f(C_0)} ∥g − u∥²,  with g ∼ N(0, I_NT).   (7)

Although in general (7) cannot be solved in closed form, it can easily be estimated numerically; in the supplementary material we show that the subdifferential ∂f(C_0) takes the form of a convex polytope, i.e., an intersection of linear half-spaces. As a result, the distance of any vector g from ∂f(C_0) can be found by solving a simple quadratic program, and the statistical dimension can be estimated with a simple Monte Carlo simulation (details are presented in the supplement). The characterization in (7) also explains the effect of the sparsity pattern on the SD. In the case where the sparse basis is the identity, the cone induced by the subdifferential can be decomposed as the union of the respective subdifferential cones induced by each coordinate. It follows that the SD is invariant to coordinate permutations and depends only on the sparsity level, i.e., the number of nonzero elements. However, this result is in general not true for a nonorthogonal sparse basis, indicating that the precise location of the spikes (the sparsity pattern), and not just their number, has an effect on the SD. In our case the calcium signal is sparse in the non-orthogonal basis described by the matrix G from (4).

[4] To illustrate the generality of our approach we allow for arbitrary nonnegative spike values in this analysis, but we also discuss the binary case that is of direct interest to our compressive calcium framework.
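For intuition about this Monte Carlo estimator, consider a cone whose statistical dimension is known in closed form: the nonnegative orthant R^m_+ has δ = m/2, since projecting a standard normal vector onto it simply zeroes the negative coordinates. For the descent cone of f in (P-LP) the projection step is replaced by the quadratic program described above; the orthant below is only a sanity check:

```python
import numpy as np

def sd_nonneg_orthant_mc(m, n_samples, seed=8):
    """Monte Carlo estimate of delta(R^m_+) = E ||Pi_C(g)||^2, g ~ N(0, I_m)."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_samples, m))
    proj = np.maximum(g, 0.0)              # Euclidean projection onto the orthant
    return float(np.mean(np.sum(proj**2, axis=1)))

m = 100
print(sd_nonneg_orthant_mc(m, 5000))       # close to m / 2 = 50
```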
3.2 Relation with the phase transition curve

In this section we examine the relation of the SD with the PTC for our compressive calcium imaging problem. Let S denote the set of spikes, Ω = supp(S), and C the induced calcium traces, C = S G^{−T}. As we argued, the statistical dimension of the descent cone D(f, C) depends both on the cardinality |Ω| of the spike set (sparsity level) and on the location of the spikes (sparsity pattern). To examine these effects we define the normalized expected statistical dimension (NESD) with respect to a distribution χ (e.g. Bernoulli) from which the spikes S are drawn:

δ̃(k/NT, χ) = E_{Ω∼χ}[δ(D(f, C))/NT],  with supp(S) = Ω and E_{Ω∼χ}|Ω| = k.

In Fig. 3 we examine the relation of the NESD with the phase transition curve of the noiseless problem (P-LP). We consider a setup with N = 40 point neurons (A = I_d, d = N) observed over T = 50 timesteps, and chose the discrete time constant γ = 0.99. The spike times of each neuron came from the same distribution, and we considered two different distributions: (i) Bernoulli spiking, i.e., each neuron fires i.i.d. spikes with probability k/T, and (ii) desynchronized periodic spiking, where each neuron fires spikes deterministically with discrete frequency k/T timesteps⁻¹ and a random phase. We considered two forms of spikes: (i) nonnegative values (s_i(t) ≥ 0), and (ii) binary values (s_i(t) ∈ {0, 1}), as well as two forms of sensing matrices: (i) time-varying matrices B_t, and (ii) constant, fully supported matrices B_1 = ... = B_T. The entries of each B_t are again drawn from an i.i.d. fair Bernoulli distribution. For each of these 8 conditions we varied the expected number of spikes per neuron k from 1 to T and the number of observed measurements n from 1 to N. Fig. 3 shows the empirical probability that the program (P-LP) succeeds in reconstructing the true solution, averaged over 100 repetitions.
Success is declared when the reconstructed spike signal Ŝ satisfies [5] ∥Ŝ − S∥_F /∥S∥_F < 10⁻³. We also plot the empirical PTC (dashed purple line), i.e., the empirical 50% success-probability line, and the NESD (solid blue line), approximated with a Monte Carlo simulation using 200 samples, for each of the four distinct cases (note that the SD does not depend on the structure of the sensing matrix B). In all cases, our problem undergoes a sharp phase transition as the number of measurements per timestep varies: in the white regions of Fig. 3, S is recovered essentially perfectly, with a transition to a high probability of at least some errors in the black regions. Note that the phase transitions are defined as functions of the sparsity index k/T; the signal sparsity sets the compressibility of the data. In addition, in the case of time-varying B_t, the NESD provides a surprisingly good estimate of the PTC, especially in the binary case or when the spiking signal is actually sparse (k/T < 0.5), a result that justifies our overall approach. Although using time-varying sensing matrices B_t leads to better results, compression is also possible with a constant B. This is an important result for implementation purposes, where changing the sensing matrix might be a costly or slow operation. On a more technical note, we also observe the following interesting properties:

• Periodic spiking requires more measurements for accurate deconvolution, a property again predicted by the SD. This comes from the fact that the sparse basis is not orthogonal, and shows that for a fixed sparsity level k/T the sparsity pattern also affects the number of required measurements. This difference depends on the time constant γ. As γ → 0, G → I; the problem becomes equivalent to a standard nonnegative CS problem, where the spike pattern is irrelevant.
• In the Bernoulli spiking nonnegative case, the SD is numerically very close to the PTC of the standard nonnegative CS problem (not shown here), adding to the growing body of evidence for universal behavior of convex relaxation approaches to CS (Donoho and Tanner, 2009).

• In the binary case the results exhibit a symmetry around the axis k/T = 0.5. In fact this symmetry becomes exact as γ → 1. In the supplement we prove that this result is predicted by the SD.

[5] When calculating this error we excluded the last 10 timesteps. As every spike is filtered by the AR process, it has an effect over multiple timelags in the future, and an optimal encoder has to sense it over multiple timelags. This number depends only on γ and not on the length T, so this behavior becomes negligible as T → ∞.

[Figure 3: Relation of the statistical dimension with the phase transition curve for two different spiking distributions (Bernoulli, periodic), two different spike values (nonnegative, binary), and two classes of sensing matrices (time-varying, constant). For each panel: x-axis, normalized sparsity k/T; y-axis, undersampling index n/N. Each panel shows the empirical success probability for each pair (k/T, n/N), the empirical 50%-success line (dashed purple) and the SD (solid blue). When B is time-varying the SD provides a good estimate of the empirical PTC.]

As mentioned above, our analysis is only approximate since B is block-diagonal and not fully dense.
However, this approximation is tight in the time-varying case. Still, it is possible to construct adversarial counterexamples where the SD approach fails to provide a good estimate of the PTC. For example, if all neurons fire in a completely synchronized manner, then the required number of measurements grows at a rate that is not predicted by (5). We present such an example in the supplement and note that more research is needed to understand such extreme cases.

4 Conclusion

We proposed a framework for compressive calcium imaging. Using convex relaxation tools from compressed sensing and low rank matrix factorization, we developed an efficient method for extracting neurons’ spatial locations and the temporal locations of their spikes from a limited number of measurements, enabling the imaging of large neural populations at potentially much higher imaging rates than currently available. We also studied a noiseless version of our problem from a compressed sensing point of view using newly introduced tools involving the statistical dimension of convex cones. Our analysis can in certain cases capture the number of measurements needed for perfect deconvolution, and helps explain the effects of different spike patterns on reconstruction performance. Our approach suggests potential improvements over the standard raster-scanning protocol (unknown locations) as well as the more efficient RAMP protocol (known locations). However, our analysis is idealized and neglects several issues that can arise in practice. The results of Fig. 1 suggest a tradeoff between effective compression and SNR level. In the compressive framework the cycle length can be relaxed more easily due to the parallel nature of the imaging (each location is targeted during the whole “cycle”). The summed activity is then collected by the photomultiplier tube, which introduces the noise.
While the nature of this addition has to be examined in practice, we expect that the observed SNR will allow for significant compression. Another important issue is motion correction for brain movement, especially under in vivo conditions. While new approaches have to be derived for this problem, the approach of Cotton et al. (2013) could be adaptable to our setting. We hope that our work will inspire experimentalists to leverage the proposed advanced signal processing methods to develop more efficient imaging protocols.

Acknowledgements

LP is supported by an NSF CAREER award. This work is also supported by ARO MURI W911NF-12-1-0594.

References

Ahrens, M. B., M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller (2013). Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods 10(5), 413–420.
Amelunxen, D., M. Lotz, M. B. McCoy, and J. A. Tropp (2013). Living on the edge: A geometric theory of phase transitions in convex optimization. arXiv preprint arXiv:1303.6672.
Ba, D., B. Babadi, P. Purdon, and E. Brown (2012). Exact and stable recovery of sequences of signals with sparse increments via differential l1-minimization. In Advances in Neural Information Processing Systems 25, pp. 2636–2644.
Blanchard, J. D., C. Cartis, and J. Tanner (2011). Compressed sensing: How sharp is the restricted isometry property? SIAM Review 53(1), 105–125.
Boyd, S., N. Parikh, E. Chu, B. Peleato, and J. Eckstein (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3(1), 1–122.
Branco, T., B. A. Clark, and M. Häusser (2010). Dendritic discrimination of temporal input sequences in cortical neurons. Science 329, 1671–1675.
Candes, E. J., Y. C. Eldar, D. Needell, and P. Randall (2011). Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis 31(1), 59–73.
Candes, E. J. and T. Tao (2005).
Decoding by linear programming. IEEE Transactions on Information Theory 51(12), 4203–4215.
Chandrasekaran, V., B. Recht, P. A. Parrilo, and A. S. Willsky (2012). The convex geometry of linear inverse problems. Foundations of Computational Mathematics 12(6), 805–849.
Cotton, R. J., E. Froudarakis, P. Storer, P. Saggau, and A. S. Tolias (2013). Three-dimensional mapping of microcircuit correlation structure. Frontiers in Neural Circuits 7.
Donoho, D. and J. Tanner (2009). Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367(1906), 4273–4293.
Duarte, M. F., M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk (2008). Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine 25(2), 83–91.
Fazel, M. (2002). Matrix rank minimization with applications. Ph.D. thesis, Stanford University.
Gehm, M., R. John, D. Brady, R. Willett, and T. Schulz (2007). Single-shot compressive spectral imaging with a dual-disperser architecture. Optics Express 15(21), 14013–14027.
Katz, O., Y. Bromberg, and Y. Silberberg (2009). Compressive ghost imaging. Applied Physics Letters 95(13).
Lewicki, M. (1998). A review of methods for spike sorting: the detection and classification of neural action potentials. Network: Computation in Neural Systems 9, R53–R78.
Lustig, M., D. Donoho, and J. M. Pauly (2007). Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58(6), 1182–1195.
Nikolenko, V., B. Watson, R. Araya, A. Woodruff, D. Peterka, and R. Yuste (2008). SLM microscopy: Scanless two-photon imaging and photostimulation using spatial light modulators. Frontiers in Neural Circuits 2, 5.
Pnevmatikakis, E., T. Machado, L. Grosenick, B. Poole, J. Vogelstein, and L. Paninski (2013).
Rank-penalized nonnegative spatiotemporal deconvolution and demixing of calcium imaging data. In Computational and Systems Neuroscience Meeting COSYNE. (journal paper in preparation for PLoS Computational Biology). Recht, B., M. Fazel, and P. Parrilo (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM review 52(3), 471–501. Reddy, G., K. Kelleher, R. Fink, and P. Saggau (2008). Three-dimensional random access multiphoton microscopy for functional imaging of neuronal activity. Nature Neuroscience 11(6), 713–720. Rockafellar, R. (1970). Convex Analysis. Princeton University Press. Rust, M. J., M. Bates, and X. Zhuang (2006). Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nature methods 3(10), 793–796. Studer, V., J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan (2012). Compressive fluorescence microscopy for biological and hyperspectral imaging. Proceedings of the National Academy of Sciences 109(26), E1679–E1687. Vogelstein, J., A. Packer, T. Machado, T. Sippy, B. Babadi, R. Yuste, and L. Paninski (2010). Fast non-negative deconvolution for spike train inference from population calcium imaging. Journal of Neurophysiology 104(6), 3691–3704. Yap, H. L., A. Eftekhari, M. B. Wakin, and C. J. Rozell (2011). The restricted isometry property for block diagonal matrices. In Information Sciences and Systems (CISS), 2011 45th Annual Conference on, pp. 1–6. 9
Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Adrien Todeschini INRIA - IMB - Univ. Bordeaux 33405 Talence, France Adrien.Todeschini@inria.fr François Caron Univ. Oxford, Dept. of Statistics Oxford, OX1 3TG, UK Caron@stats.ox.ac.uk Marie Chavent Univ. Bordeaux - IMB - INRIA 33000 Bordeaux, France Marie.Chavent@u-bordeaux2.fr Abstract We propose a novel class of algorithms for low rank matrix completion. Our approach builds on novel penalty functions on the singular values of the low rank matrix. By exploiting a mixture model representation of this penalty, we show that a suitably chosen set of latent variables enables us to derive an Expectation-Maximization algorithm to obtain a Maximum A Posteriori estimate of the completed low rank matrix. The resulting algorithm is an iterative soft-thresholding algorithm which iteratively adapts the shrinkage coefficients associated with the singular values. The algorithm is simple to implement and can scale to large matrices. We provide numerical comparisons between our approach and recent alternatives, showing the benefits of the proposed approach for low rank matrix completion. 1 Introduction Matrix completion has attracted a lot of attention over the past few years. The objective is to “complete” a matrix of potentially large dimension based on a small (and potentially noisy) subset of its entries [1, 2, 3]. One popular application is to build automatic recommender systems, where the rows correspond to users, the columns to items, and entries may be ratings or binary (like/dislike). The objective is then to predict user preferences from a subset of the entries. In many cases, it is reasonable to assume that the unknown m × n matrix Z can be approximated by a matrix of low rank, Z ≈ AB^⊤, where A and B are respectively of size m × k and n × k, with k ≪ min(m, n).
In the recommender system application, the low rank assumption is sensible as it is commonly believed that only a few factors contribute to a user's preferences. The low rank structure thus implies some sort of collaboration between the different users/items [4]. We typically observe a noisy version X_{ij} of some entries (i, j) ∈ Ω, where Ω ⊂ {1, . . . , m} × {1, . . . , n}. For (i, j) ∈ Ω,

X_{ij} = Z_{ij} + \varepsilon_{ij}, \quad \varepsilon_{ij} \overset{iid}{\sim} \mathcal{N}(0, \sigma^2)   (1)

where σ² > 0 and N(µ, σ²) is the normal distribution of mean µ and variance σ². Low rank matrix completion can be addressed by solving the following optimization problem

\min_Z \; \frac{1}{2\sigma^2} \sum_{(i,j)\in\Omega} (X_{ij} - Z_{ij})^2 + \lambda\,\mathrm{rank}(Z)   (2)

where λ > 0 is some regularization parameter. For general subsets Ω, the optimization problem (2) is computationally hard, and many authors have advocated the use of a convex relaxation of (2) [5, 6, 4], yielding the following convex optimization problem

\min_Z \; \frac{1}{2\sigma^2} \sum_{(i,j)\in\Omega} (X_{ij} - Z_{ij})^2 + \lambda \|Z\|_*   (3)

where ∥Z∥_* is the nuclear norm of Z, i.e. the sum of the singular values of Z. [4] proposed an iterative algorithm, called Soft-Impute, for solving the nuclear norm regularized minimization (3). In this paper, we show that the solution to the objective function (3) can be interpreted as a Maximum A Posteriori (MAP) estimate when assuming that the singular values of Z are independently and identically drawn (iid) from an exponential distribution with rate λ. Using this Bayesian interpretation, we propose alternative concave penalties to the nuclear norm, obtained by considering that the singular values are iid from a mixture of exponential distributions. We show that this class of penalties bridges the gap between the nuclear norm and the rank penalty, and that a simple Expectation-Maximization (EM) algorithm can be derived to obtain MAP estimates. The resulting algorithm iteratively adapts the shrinkage coefficients associated with the singular values.
It can be seen as the equivalent for matrices of reweighted ℓ1 algorithms [6] for multivariate linear regression. Interestingly, we show that the Soft-Impute algorithm of [4] is obtained as a particular case. We also discuss the extension of our algorithms to binary matrices, building on the same set of ideas, in the supplementary material. Finally, we provide some empirical evidence of the interest of the proposed approach on simulated and real data. 2 Complete matrix X Consider first that we observe the complete matrix X of size m × n. Let r = min(m, n). We consider the following convex optimization problem

\min_Z \; \frac{1}{2\sigma^2} \|X - Z\|_F^2 + \lambda \|Z\|_*   (4)

where ∥·∥_F is the Frobenius norm. The solution to Eq. (4) in the complete case is a soft-thresholded singular value decomposition (SVD) of X [7, 4], i.e. \hat{Z} = S_{\lambda\sigma^2}(X), where S_\lambda(X) = \tilde{U}\tilde{D}_\lambda \tilde{V}^\top with \tilde{D}_\lambda = \mathrm{diag}((\tilde{d}_1 - \lambda)_+, \ldots, (\tilde{d}_r - \lambda)_+) and t_+ = \max(t, 0). Here X = \tilde{U}\tilde{D}\tilde{V}^\top is the singular value decomposition of X, with \tilde{D} = \mathrm{diag}(\tilde{d}_1, \ldots, \tilde{d}_r). The solution \hat{Z} to the optimization problem (4) can be interpreted as the Maximum A Posteriori estimate under the likelihood (1) and prior

p(Z) \propto \exp(-\lambda \|Z\|_*)

Assuming Z = UDV^\top, with D = \mathrm{diag}(d_1, d_2, \ldots, d_r), this can be further decomposed as p(Z) = p(U)p(V)p(D), where we assume a uniform Haar prior distribution on the unitary matrices U and V, and exponential priors on the singular values d_i, hence

p(d_1, \ldots, d_r) = \prod_{i=1}^r \mathrm{Exp}(d_i; \lambda)   (5)

where Exp(x; λ) = λ exp(−λx) is the probability density function (pdf) of the exponential distribution of parameter λ evaluated at x. The exponential distribution has a mode at 0, hence favoring sparse solutions. We propose here alternative penalty/prior distributions that bridge the gap between the rank and the nuclear norm penalties. Our penalties are based on hierarchical Bayes constructions, and the related optimization problems to obtain MAP estimates can be solved by using an EM algorithm.
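As a concrete illustration, the soft-thresholded SVD operator S_λ(·) described above can be written in a few lines of NumPy (a minimal sketch, not the authors' code; the function name is ours):

```python
import numpy as np

def soft_thresholded_svd(X, lam):
    """S_lambda(X): shrink every singular value of X by lam, flooring at zero."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    d_shrunk = np.maximum(d - lam, 0.0)  # (d_i - lambda)_+
    return U @ np.diag(d_shrunk) @ Vt
```

Applied with `lam = λσ²`, this gives the MAP estimate Ẑ for the complete-matrix problem (4).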
2.1 Hierarchical adaptive spectral penalty We consider the following hierarchical prior for the low rank matrix Z. We still assume that Z = UDV^\top, where the unitary matrices U and V are assigned uniform priors and D = \mathrm{diag}(d_1, \ldots, d_r). We now assume that each singular value d_i has its own regularization parameter γ_i.

Figure 1: Marginal distribution p(d_i) with a = b = β for different values of the parameter β. The distribution becomes more concentrated around zero, with heavier tails, as β decreases. The case β → ∞ corresponds to an exponential distribution with unit rate.

Figure 2: Thresholding rules on the singular values \tilde{d}_i of X for the soft thresholding rule (λ = 1), and for the hierarchical adaptive soft thresholding algorithm with a = b = β.

p(d_1, \ldots, d_r | \gamma_1, \ldots, \gamma_r) = \prod_{i=1}^r p(d_i | \gamma_i) = \prod_{i=1}^r \mathrm{Exp}(d_i; \gamma_i)

We further assume that the regularization parameters are themselves iid from a gamma distribution

p(\gamma_1, \ldots, \gamma_r) = \prod_{i=1}^r p(\gamma_i) = \prod_{i=1}^r \mathrm{Gamma}(\gamma_i; a, b)

where Gamma(γ_i; a, b) is the pdf of the gamma distribution with parameters a > 0 and b > 0 evaluated at γ_i. The marginal distribution over d_i is thus a continuous mixture of exponential distributions

p(d_i) = \int_0^\infty \mathrm{Exp}(d_i; \gamma_i)\,\mathrm{Gamma}(\gamma_i; a, b)\,d\gamma_i = \frac{a b^a}{(d_i + b)^{a+1}}   (6)

It is a Pareto distribution, which has heavier tails than the exponential distribution. Figure 1 shows the marginal distribution p(d_i) for a = b = β. The lower β, the heavier the tails of the distribution. When β → ∞, one recovers the exponential distribution with unit rate parameter. Let

\mathrm{pen}(Z) = -\log p(Z) = -\sum_{i=1}^r \log p(d_i) = C_1 + \sum_{i=1}^r (a + 1)\log(b + d_i)   (7)

be the penalty induced by the prior p(Z). We call the penalty (7) the Hierarchical Adaptive Spectral Penalty (HASP). Figure 3 (top) shows the balls of constant penalty for a symmetric 2 × 2 matrix, for the HASP, nuclear norm and rank penalties.
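The exponential-mixture identity in Eq. (6) is easy to check numerically. The sketch below (our own illustration, under the parameterizations Exp(d; γ) = γe^{−γd} and Gamma(γ; a, b) with rate b used in the paper) compares the closed-form Pareto density with a brute-force integration over γ:

```python
import math
import numpy as np

def pareto_marginal(d, a, b):
    """Closed form of Eq. (6): a * b^a / (d + b)^(a+1)."""
    return a * b**a / (d + b) ** (a + 1)

def mixture_numeric(d, a, b, gmax=80.0, n=400_000):
    """Trapezoid-rule integration of Exp(d; g) * Gamma(g; a, b) over g."""
    g = np.linspace(1e-9, gmax, n)
    expo = g * np.exp(-g * d)                                   # Exp(d; g) pdf
    gam = b**a / math.gamma(a) * g ** (a - 1) * np.exp(-b * g)  # Gamma(g; a, b) pdf
    f = expo * gam
    h = g[1] - g[0]
    return float(h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]))
```

For a = b = 2 and d = 1, both routes give 8/27 ≈ 0.296, confirming the Pareto marginal.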
When the matrix is assumed to be diagonal, one recovers respectively the lasso, hierarchical adaptive lasso (HAL) [6, 8] and ℓ0 penalties, as shown in Figure 3 (bottom). The penalty (7) admits as a special case the nuclear norm penalty λ||Z||_* when a = λb and b → ∞. Another closely related penalty is the log-det heuristic [5, 9], defined for a square matrix Z by log det(Z + δI), where δ is some small regularization constant. Both penalties agree on square matrices when a = b = 0 and δ = 0. 2.2 EM algorithm for MAP estimation Using the exponential mixture representation (6), we now show how to derive an EM algorithm [10] to obtain a MAP estimate

\hat{Z} = \arg\max_Z \left[\log p(X|Z) + \log p(Z)\right]

i.e. to minimize

L(Z) = \frac{1}{2\sigma^2}\|X - Z\|_F^2 + \sum_{i=1}^r (a + 1)\log(b + d_i)   (8)

Figure 3: Top: Manifold of constant penalty, for a symmetric 2 × 2 matrix Z = [x, y; y, z], for (a) the nuclear norm, (b–c) the hierarchical adaptive spectral penalty with a = b = β, for (b) β = 1 and (c) β = 0.1, and (d) the rank penalty. Bottom: contour of constant penalty for a diagonal matrix [x, 0; 0, z], where one recovers the classical (e) lasso, (f–g) hierarchical adaptive lasso and (h) ℓ0 penalties.

We use the parameters γ = (γ_1, \ldots, γ_r) as latent variables in the EM algorithm. The E step is obtained by

Q(Z, Z^*) = \mathbb{E}\left[\log p(X, Z, \gamma) \mid Z^*, X\right] = C_2 - \frac{1}{2\sigma^2}\|X - Z\|_F^2 - \sum_{i=1}^r \mathbb{E}[\gamma_i | d_i^*]\, d_i

Hence at each iteration of the EM algorithm, the M step consists in solving the optimization problem

\min_Z \; \frac{1}{2\sigma^2}\|X - Z\|_F^2 + \sum_{i=1}^r \omega_i d_i   (9)

where \omega_i = \mathbb{E}[\gamma_i | d_i^*] = \frac{\partial}{\partial d_i^*}\left[-\log p(d_i^*)\right] = \frac{a+1}{b + d_i^*}. Problem (9) is an adaptive nuclear norm regularized optimization problem, with weights ω_i. Without loss of generality, assume that d_1^* ≥ d_2^* ≥ \ldots ≥ d_r^*. This implies that

0 ≤ \omega_1 ≤ \omega_2 ≤ \ldots ≤ \omega_r   (10)

The above weights therefore penalize higher singular values less heavily, hence reducing bias.
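To make the E/M updates concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) of one EM iteration for the complete-matrix case: the weights ω_i = (a+1)/(b+d_i*) are computed from the singular values of the current iterate, and the weighted problem (9) is solved by shrinking each singular value of X by its own weight:

```python
import numpy as np

def em_step_complete(X, d_star, a, b, sigma2=1.0):
    """One EM iteration for minimizing Eq. (8), X fully observed.

    d_star: singular values of the current iterate Z*, sorted descending
    (so the computed weights satisfy the ordering constraint (10)).
    Returns the updated estimate Z and its shrunken singular values.
    """
    w = (a + 1.0) / (b + d_star)             # E-step weights, nondecreasing
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    d_new = np.maximum(d - sigma2 * w, 0.0)  # weighted soft threshold (M step)
    return (U * d_new) @ Vt, d_new
```

With `d_star = 0` this reduces to a plain soft-thresholded SVD with constant threshold σ²(a+1)/b, matching the initialization discussed below.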
As shown by [11, 12], a globally optimal solution to Eq. (9) under the order constraint (10) is given by a weighted soft-thresholded SVD

\hat{Z} = S_{\sigma^2\omega}(X)   (11)

where S_\omega(X) = \tilde{U}\tilde{D}_\omega \tilde{V}^\top with \tilde{D}_\omega = \mathrm{diag}((\tilde{d}_1 - \omega_1)_+, \ldots, (\tilde{d}_r - \omega_r)_+). Here X = \tilde{U}\tilde{D}\tilde{V}^\top is the SVD of X, with \tilde{D} = \mathrm{diag}(\tilde{d}_1, \ldots, \tilde{d}_r) and \tilde{d}_1 ≥ \tilde{d}_2 ≥ \ldots ≥ \tilde{d}_r. Algorithm 1 summarizes the Hierarchical Adaptive Soft Thresholded (HAST) procedure, which converges to a local minimum of the objective (8). This algorithm admits the soft-thresholded SVD operator as a special case when a = bλ and b = β → ∞. Figure 2 shows the thresholding rule applied to the singular values of X for the HAST algorithm (a = b = β, with β = 2 and β = 0.1) and for the soft-thresholded SVD with λ = 1. The bias term, which is equal to λ for the nuclear norm, goes to zero as \tilde{d}_i goes to infinity. Setting of the hyperparameters and initialization of the EM algorithm In the experiments, we have set b = β and a = λβ, where λ and β are tuning parameters that can be chosen by cross-validation. As λ is the mean value of the regularization parameter γ_i, we initialize the algorithm with the soft-thresholded SVD with parameter σ²λ. It is possible to estimate the hyperparameter σ within the EM algorithm, as described in the supplementary material. In our experiments, we have found the results not very sensitive to the setting of σ, and set it to 1.

Algorithm 1 Hierarchical Adaptive Soft Thresholded (HAST) algorithm for low rank estimation of complete matrices
Initialize Z^{(0)}. At iteration t ≥ 1:
• For i = 1, \ldots, r, compute the weights \omega_i^{(t)} = \frac{a+1}{b + d_i^{(t-1)}}
• Set Z^{(t)} = S_{\sigma^2\omega^{(t)}}(X)
• If \frac{L(Z^{(t-1)}) - L(Z^{(t)})}{L(Z^{(t-1)})} < \varepsilon, then return \hat{Z} = Z^{(t)}

3 Matrix completion We now show how the EM algorithm derived in the previous section can be adapted to the case where only a subset of the entries is observed. It relies on imputing missing values, similarly to the EM algorithm for SVD with missing data; see e.g. [10, 13]. Consider that only a subset Ω ⊂ {1, . . . , m} × {1, . .
. , n} of the entries of the matrix X is observed. Similarly to [7], we introduce the operator P_Ω(X) and its complement P_Ω^⊥(X):

P_\Omega(X)(i, j) = X_{ij} if (i, j) ∈ Ω, and 0 otherwise; \quad P_\Omega^\perp(X)(i, j) = 0 if (i, j) ∈ Ω, and X_{ij} otherwise.

Assuming the same prior (6), the MAP estimate is obtained by minimizing

L(Z) = \frac{1}{2\sigma^2}\|P_\Omega(X) - P_\Omega(Z)\|_F^2 + (a + 1)\sum_{i=1}^r \log(b + d_i)   (12)

We now derive the EM algorithm, using γ and P_Ω^⊥(X) as latent variables. The E step is given by (details in the supplementary material)

Q(Z, Z^*) = \mathbb{E}\left[\log p(P_\Omega(X), P_\Omega^\perp(X), Z, \gamma) \mid Z^*, P_\Omega(X)\right] = C_4 - \frac{1}{2\sigma^2}\left\|P_\Omega(X) + P_\Omega^\perp(Z^*) - Z\right\|_F^2 - \sum_{i=1}^r \mathbb{E}[\gamma_i | d_i^*]\, d_i

Hence at each iteration of the algorithm, one needs to minimize

\frac{1}{2\sigma^2}\|X^* - Z\|_F^2 + \sum_{i=1}^r \omega_i d_i   (13)

where \omega_i = \mathbb{E}[\gamma_i | d_i^*] and X^* = P_\Omega(X) + P_\Omega^\perp(Z^*) is the observed matrix, completed with entries from Z^*. We now have a complete matrix problem. As mentioned in the previous section, the minimum of (13) is obtained with a weighted soft-thresholded SVD. Algorithm 2 provides the resulting iterative procedure for matrix completion with the hierarchical adaptive spectral penalty.

Algorithm 2 Hierarchical Adaptive Soft Impute (HASI) algorithm for matrix completion
Initialize Z^{(0)}. At iteration t ≥ 1:
• For i = 1, \ldots, r, compute the weights \omega_i^{(t)} = \frac{a+1}{b + d_i^{(t-1)}}
• Set Z^{(t)} = S_{\sigma^2\omega^{(t)}}\left(P_\Omega(X) + P_\Omega^\perp(Z^{(t-1)})\right)
• If \frac{L(Z^{(t-1)}) - L(Z^{(t)})}{L(Z^{(t-1)})} < \varepsilon, then return \hat{Z} = Z^{(t)}

Related algorithms Algorithm 2 admits the Soft-Impute algorithm of [4] as a special case when a = λb and b = β → ∞. In this case, one obtains ω_i^{(t)} = λ for all i at each iteration. On the contrary, when β < ∞, our algorithm adaptively updates the weights so as to penalize higher singular values less heavily. Some authors have proposed related one-step adaptive spectral penalty algorithms [14, 11, 12]. However, in these procedures the weights have to be chosen by some external procedure, whereas in our case they are iteratively adapted. Initialization The objective function (12) is in general not convex, and different initializations may lead to different modes. As in the complete case, we suggest setting a = λb and b = β, and initializing the algorithm with the Soft-Impute algorithm with regularization parameter σ²λ. Scaling Similarly to the Soft-Impute algorithm, the computationally demanding part of Algorithm 2 is S_{\sigma^2\omega^{(t)}}\left(P_\Omega(X) + P_\Omega^\perp(Z^{(t-1)})\right), which requires computing a low rank truncated SVD. For large matrices, one can resort to the PROPACK algorithm [15, 16], as described in [4].
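Algorithm 2 can be sketched in a few lines of NumPy (an illustrative version with a fixed iteration count and full SVDs rather than PROPACK; not the authors' code):

```python
import numpy as np

def hasi(X, mask, a, b, sigma2=1.0, n_iter=50):
    """Hierarchical Adaptive Soft-Impute sketch.

    X: observed matrix (entries outside `mask` are ignored).
    mask: boolean array, True where entry (i, j) is observed.
    """
    Z = np.zeros_like(X)
    d = np.zeros(min(X.shape))
    for _ in range(n_iter):
        X_star = np.where(mask, X, Z)          # P_Omega(X) + P_Omega^perp(Z)
        U, s, Vt = np.linalg.svd(X_star, full_matrices=False)
        w = (a + 1.0) / (b + d)                # adaptive weights from previous d
        d = np.maximum(s - sigma2 * w, 0.0)    # weighted soft threshold
        Z = (U * d) @ Vt
    return Z
```

Taking a = λb with very large b recovers plain Soft-Impute with constant threshold σ²λ, as noted in the "Related algorithms" paragraph above.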
This sophisticated linear algebra algorithm can efficiently compute the truncated SVD of the “sparse + low rank” matrix

P_\Omega(X) + P_\Omega^\perp(Z^{(t-1)}) = \underbrace{P_\Omega(X) - P_\Omega(Z^{(t-1)})}_{\text{sparse}} + \underbrace{Z^{(t-1)}}_{\text{low rank}}

and can thus handle large matrices, as shown in [4]. 4 Experiments 4.1 Simulated data We first evaluate the performance of the proposed approach on simulated data. Our simulation setting is similar to that of [4]. We generate Gaussian matrices A and B, respectively of size m × q and n × q with q ≤ r, so that the matrix Z = AB^\top is of low rank q. Gaussian noise of variance σ² is then added to the entries of Z to obtain the matrix X. The signal-to-noise ratio is defined as SNR = \sqrt{\mathrm{var}(Z)/\sigma^2}. We set m = n = 100 and σ = 1. We run all the algorithms with a precision ϵ = 10⁻⁹ and a maximum number of t_max = 200 iterations (initialization included for HASI). We compute err, the relative error between the estimated matrix \hat{Z} and the true matrix Z in the complete case, and err_{Ω⊥} in the incomplete case, where

err = \frac{\|\hat{Z} - Z\|_F^2}{\|Z\|_F^2} \quad \text{and} \quad err_{\Omega^\perp} = \frac{\|P_\Omega^\perp(\hat{Z}) - P_\Omega^\perp(Z)\|_F^2}{\|P_\Omega^\perp(Z)\|_F^2}

For the HASP penalty, we set a = λβ and b = β. We compute the solutions over a grid of 50 values of the regularization parameter λ linearly spaced from λ₀ to 0, where λ₀ = ||P_Ω(X)||₂ is the largest singular value of the input matrix X, padded with zeros. This is done for three different values β = 1, 10, 100. We use the same grid to obtain the regularization path for the other algorithms. Complete case We first consider that the observed matrix is complete, with SNR = 1 and q = 10. The HAST algorithm (Algorithm 1) is compared to the soft-thresholded (ST) and hard-thresholded (HT) SVD. Results are reported in Figure 4(a). The HASP penalty provides a bridge/tradeoff between the nuclear norm and the rank penalty. For example, the curve for β = 10 shows a minimum at the true rank q = 10, as HT does, but with a lower error when the rank is overestimated.
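The simulation setting of Section 4.1 is straightforward to reproduce (a sketch; the function name and the fixed seed are ours):

```python
import numpy as np

def simulate(m=100, n=100, q=10, sigma=1.0, miss_frac=0.5, seed=0):
    """Low-rank-plus-noise data: Z = A B^T with Gaussian factors,
    X = Z + noise, and entries removed uniformly at random."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, q))
    B = rng.standard_normal((n, q))
    Z = A @ B.T                                   # true low rank matrix
    X = Z + sigma * rng.standard_normal((m, n))   # noisy observation
    mask = rng.random((m, n)) >= miss_frac        # True where observed
    return Z, X, mask
```

Each entry of Z is a sum of q products of independent standard normals, so var(Z) ≈ q; with σ = 1 and q = 10, this matches the SNR definition above.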
Figure 4: Test error w.r.t. the rank, obtained by varying the value of the regularization parameter λ. Results on simulated data are given for (a) a complete matrix with SNR = 1 and rank 10, (b) 50% missing entries with SNR = 1 and rank 5, and (c) 80% missing entries with SNR = 10 and rank 5.

Incomplete case Then we consider the matrix completion problem, and remove uniformly a given percentage of the entries in X. We compare the HASI algorithm to the Soft-Impute, Soft-Impute+ and Hard-Impute algorithms of [4] and to the MMMF algorithm of [17]. Results, averaged over 50 replications, are reported in Figures 4(b–c) for a true rank q = 5, with (b) 50% of missing data and SNR = 1, and (c) 80% of missing data and SNR = 10. Similar behavior is observed, with the HASI algorithm attaining a minimum at the true rank q = 5. We then conduct the same experiments, but remove 20% of the observed entries as a validation set to estimate the regularization parameters (λ, β) for HASI, and λ for the other methods. We estimate Z on the whole observed matrix, and use the unobserved entries as a test set. Results on the test error and estimated ranks over 50 replications are reported in Figure 5. For 50% missing data, HASI is shown to outperform the other methods. For 80% missing data, HASI and Hard-Impute provide the best performance. In both cases, HASI is able to recover the true rank of the matrix very accurately.

Figure 5: Boxplots of the test error and ranks obtained over 50 replications on simulated data: (a) test error and (b) rank for SNR = 1 with 50% missing; (c) test error and (d) rank for SNR = 10 with 80% missing.

Table 1: Results on the Jester and MovieLens datasets

Dataset    | Jester 1    | Jester 2    | Jester 3    | MovieLens 100k | MovieLens 1M
Size       | 24983 × 100 | 23500 × 100 | 24938 × 100 | 943 × 1682     | 6040 × 3952
Missing    | 27.5%       | 27.3%       | 75.3%       | 93.7%          | 95.8%
Method     | NMAE  Rank  | NMAE  Rank  | NMAE  Rank  | NMAE  Rank     | NMAE  Rank
MMMF       | 0.161  95   | 0.162  96   | 0.183  58   | 0.195  50      | 0.169  30
Soft Imp   | 0.161  100  | 0.162  100  | 0.184  78   | 0.197  156     | 0.176  30
Soft Imp+  | 0.169  14   | 0.171  11   | 0.184  33   | 0.197  108     | 0.189  30
Hard Imp   | 0.158  7    | 0.159  6    | 0.181  4    | 0.190  7       | 0.175  8
HASI       | 0.153  100  | 0.153  100  | 0.174  30   | 0.187  35      | 0.172  27

4.2 Collaborative filtering examples We now compare the different methods on several benchmark datasets. We first consider the Jester datasets [18]. The three datasets¹ contain one hundred jokes, with user ratings between -10 and +10. We randomly select two ratings per user as a test set, and two other ratings per user as a validation set to select the parameters λ and β. The results are computed over four values β = 1000, 100, 10, 1. We compare the results of the different methods with the Normalized Mean Absolute Error (NMAE)

NMAE = \frac{1}{|\Omega_{test}|} \sum_{(i,j)\in\Omega_{test}} \frac{|X_{ij} - \hat{Z}_{ij}|}{\max(X) - \min(X)}

where Ω_test is the test set. The mean numbers of iterations for the Soft-Impute, Hard-Impute and HASI (initialization included) algorithms are respectively 9, 76 and 76. Computations for the HASI algorithm take approximately 5 hours on a standard computer.
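The NMAE criterion is straightforward to compute (a sketch; the function name is ours):

```python
import numpy as np

def nmae(X, Z_hat, test_mask):
    """Normalized Mean Absolute Error over the held-out test entries."""
    mae = np.abs(X - Z_hat)[test_mask].mean()
    return mae / (X.max() - X.min())
```

For Jester-style ratings in [-10, 10], the normalizing range max(X) − min(X) is 20, so an NMAE of 0.15 corresponds to an average absolute error of 3 rating points.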
The results, averaged over 10 replications (with almost no variability observed), are presented in Table 1. The HASI algorithm provides very good performance on the different Jester datasets, with lower NMAE than the other methods. Figure 6 shows the NMAE as a function of the rank. Low values of β exhibit a bimodal behavior, with two modes at low rank and full rank. The high value β = 1000 is unimodal and outperforms Soft-Impute at any particular rank. ¹Jester datasets can be downloaded from http://goldberg.berkeley.edu/jester-data/

Figure 6: NMAE on the test set of the (a) Jester 1 and (b) Jester 3 datasets w.r.t. the rank, obtained by varying the value of the regularization parameter λ. The curves obtained on the Jester 2 dataset are hardly distinguishable from (a) and hence are not displayed due to space limitations.

Second, we conducted the same comparison on two MovieLens datasets², which contain ratings of movies by users. We randomly select 20% of the entries as a test set, and the remaining entries are split between a training set (80%) and a validation set (20%). For all the methods, we stop the regularization path as soon as the estimated rank exceeds r_max = 100. This is a practical consideration: given that the computations for high ranks demand more time and memory, we are interested in restricting ourselves to low rank solutions. Table 1 presents the results, averaged over 5 replications. For the MovieLens 100k dataset, HASI provides better NMAE than the other methods with a low rank solution.
For the larger MovieLens 1M dataset, the precision, maximum number of iterations and maximum rank are decreased to ϵ = 10⁻⁶, t_max = 100 and r_max = 30. On this dataset, MMMF provides the best NMAE at maximum rank. HASI provides the second best performance, with a slightly lower rank. 5 Conclusion The proposed class of methods has been shown to provide good results compared to several alternative low rank matrix completion methods. It provides a bridge between nuclear norm and rank regularization algorithms. Although the related optimization problem is not convex, experiments show that initializing the algorithm with the Soft-Impute algorithm of [4] provides very satisfactory results. In this paper, we have focused on a gamma mixture of exponentials, as it leads to a simple and interpretable expression for the weights. It is however possible to generalize the results presented here by using a three-parameter generalized inverse Gaussian prior distribution (see e.g. [19]) for the regularization parameters γ_i, thus offering an additional degree of freedom. Derivations of the weights are provided in the supplementary material. Additionally, it is possible to derive an EM algorithm for low rank matrix completion for binary matrices. Details are also provided in the supplementary material. While we focus on point estimation in this paper, it would be of interest to investigate a fully Bayesian approach and derive a Gibbs sampler or variational algorithm to approximate the posterior distribution, and compare to other fully Bayesian approaches to matrix completion [20, 21]. Acknowledgments F.C. acknowledges the support of the European Commission under the Marie Curie Intra-European Fellowship Programme. The contents reflect only the authors' views and not the views of the European Commission. ²MovieLens datasets can be downloaded from http://www.grouplens.org/node/73. References [1] N. Srebro, J.D.M. Rennie, and T. Jaakkola. Maximum-Margin Matrix Factorization.
In Advances in Neural Information Processing Systems, volume 17, pages 1329–1336. MIT Press, 2005. [2] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009. [3] E.J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010. [4] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research, 11:2287–2322, 2010. [5] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002. [6] E.J. Candès, M.B. Wakin, and S.P. Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, 14(5):877–905, 2008. [7] J.F. Cai, E.J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010. [8] Anthony Lee, Francois Caron, Arnaud Doucet, and Chris Holmes. A hierarchical Bayesian framework for constructing sparsity-inducing priors. arXiv preprint arXiv:1009.1914, 2010. [9] M. Fazel, H. Hindi, and S.P. Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In American Control Conference, 2003. Proceedings of the 2003, volume 3, pages 2156–2162. IEEE, 2003. [10] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, pages 1–38, 1977. [11] S. Gaïffas and G. Lecué. Weighted algorithms for compressed sensing and matrix completion. arXiv preprint arXiv:1107.1638, 2011. [12] Kun Chen, Hongbo Dong, and Kung-Sik Chan. Reduced rank regression via adaptive nuclear norm penalization. Biometrika, 100(4):901–920, 2013. [13] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In NIPS, volume 20, page 720, 2003. [14] F. Bach. Consistency of trace norm minimization.
The Journal of Machine Learning Research, 9:1019–1048, 2008. [15] R. M. Larsen. Lanczos bidiagonalization with partial reorthogonalization. Technical report, DAIMI PB-357, 1998. [16] R. M. Larsen. PROPACK: software for large and sparse SVD calculations. Available online, URL http://sun.stanford.edu/rmunk/PROPACK, 2004. [17] J. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, pages 713–719. ACM, 2005. [18] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2):133–151, 2001. [19] Z. Zhang, S. Wang, D. Liu, and M. I. Jordan. EP-GIG priors and applications in Bayesian sparse learning. The Journal of Machine Learning Research, 98888:2031–2061, 2012. [20] M. Seeger and G. Bouchard. Fast variational Bayesian inference for non-conjugate matrix factorization models. In Proc. of AISTATS, 2012. [21] S. Nakajima, M. Sugiyama, S. D. Babacan, and R. Tomioka. Global analytic solution of fully-observed variational Bayesian matrix factorization. Journal of Machine Learning Research, 14:1–37, 2013.
Global Solver and Its Efficient Approximation for Variational Bayesian Low-rank Subspace Clustering Shinichi Nakajima Nikon Corporation Tokyo, 140-8601 Japan nakajima.s@nikon.co.jp Akiko Takeda The University of Tokyo Tokyo, 113-8685 Japan takeda@mist.i.u-tokyo.ac.jp S. Derin Babacan Google Inc. Mountain View, CA 94043 USA dbabacan@gmail.com Masashi Sugiyama Tokyo Institute of Technology Tokyo 152-8552, Japan sugi@cs.titech.ac.jp Ichiro Takeuchi Nagoya Institute of Technology Aichi, 466-8555, Japan takeuchi.ichiro@nitech.ac.jp Abstract When a probabilistic model and its prior are given, Bayesian learning offers inference with automatic parameter tuning. However, Bayesian learning is often obstructed by computational difficulty: the rigorous Bayesian learning is intractable in many models, and its variational Bayesian (VB) approximation is prone to suffer from local minima. In this paper, we overcome this difficulty for low-rank subspace clustering (LRSC) by providing an exact global solver and its efficient approximation. LRSC extracts a low-dimensional structure of data by embedding samples into the union of low-dimensional subspaces, and its variational Bayesian variant has shown good performance. We first prove a key property that the VBLRSC model is highly redundant. Thanks to this property, the optimization problem of VB-LRSC can be separated into small subproblems, each of which has only a small number of unknown variables. Our exact global solver relies on another key property that the stationary condition of each subproblem consists of a set of polynomial equations, which is solvable with the homotopy method. For further computational efficiency, we also propose an efficient approximate variant, of which the stationary condition can be written as a polynomial equation with a single variable. Experimental results show the usefulness of our approach. 
1 Introduction Principal component analysis (PCA) is a widely used classical technique for dimensionality reduction. It amounts to globally embedding the data points into a low-dimensional subspace. As more flexible models, sparse subspace clustering (SSC) [7, 20] and low-rank subspace clustering (LRSC) [8, 13, 15, 14] were proposed. By inducing sparsity and low-rankness, respectively, SSC and LRSC locally embed the data into a union of subspaces. This paper discusses a probabilistic model for LRSC. Just as classical PCA requires users to pre-determine the dimensionality of the subspace, LRSC requires manual parameter tuning for adjusting the low-rankness of the solution. On the other hand, Bayesian formulations enable us to estimate all unknown parameters without manual parameter tuning [5, 4, 17]. However, in many problems, the rigorous application of Bayesian inference is computationally intractable. To overcome this difficulty, the variational Bayesian (VB) approximation was proposed [1]. A Bayesian formulation and its variational inference have been proposed for LRSC [2]. There, to avoid computing the inverse of a prohibitively large matrix, the posterior is approximated with the matrix-variate Gaussian (MVG) [11]. Typically, the VB solution is computed by the iterated conditional modes (ICM) algorithm [3, 5], which is derived through the standard procedure for the VB approximation. Since the objective function for the VB approximation is generally non-convex, the ICM algorithm is prone to suffer from local minima. So far, the global solution for the VB approximation has not been attainable except for PCA (or fully-observed matrix factorization), for which the global VB solution has been analytically obtained [17]. This paper makes LRSC another exception with the proposed global VB solvers.
Two common factors make the global VB solution attainable in PCA and LRSC: first, a large portion of the degrees of freedom that the VB approximation learns are irrelevant, and the optimization problem can be decomposed into subproblems, each of which has only a small number of unknown variables; second, the stationary condition of each subproblem is written as a polynomial system (a set of polynomial equations). Based on these facts, we propose an exact global VB solver (EGVBS) and an approximate global VB solver (AGVBS). EGVBS finds all stationary points by solving the polynomial system with the homotopy method [12, 10], and outputs the one giving the lowest free energy. Although EGVBS solves subproblems with much less complexity than the original VB problem, it is still not efficient enough for handling large-scale data. For further computational efficiency, we propose AGVBS, of which the stationary condition is written as a polynomial equation with a single variable. Our experiments on artificial and benchmark datasets show that AGVBS provides a more accurate solution than the MVG approximation [2] with much less computation time.

2 Background

In this section, we introduce the low-rank subspace clustering and its variational Bayesian formulation.

2.1 Subspace Clustering Methods

Let $Y = (y_1, \ldots, y_M) \in \mathbb{R}^{L \times M}$ be $L$-dimensional observed samples of size $M$. We generally denote a column vector of a matrix by a bold-faced small letter. We assume that each $y_m$ is approximately expressed as a linear combination of $M'$ words in a dictionary $D = (d_1, \ldots, d_{M'})$, i.e., $Y = DX + E$, where $X \in \mathbb{R}^{M' \times M}$ is a matrix of unknown coefficients, and $E \in \mathbb{R}^{L \times M}$ is noise. In subspace clustering, the observed matrix $Y$ itself is often used as the dictionary $D$. The convex formulation of the sparse subspace clustering (SSC) [7, 20] is given by
$$\min_X \|Y - YX\|_{\mathrm{Fro}}^2 + \lambda \|X\|_1, \quad \text{s.t.} \quad \mathrm{diag}(X) = 0, \qquad (1)$$
where $X \in \mathbb{R}^{M \times M}$ is the parameter to be estimated, and $\lambda > 0$ is a regularization coefficient to be manually tuned.
$\|\cdot\|_{\mathrm{Fro}}$ and $\|\cdot\|_1$ are the Frobenius norm and the (element-wise) $\ell_1$-norm of a matrix, respectively. The first term in Eq.(1) requires that each data point $y_m$ can be expressed as a linear combination of a small set of other data points $\{d_{m'}\}$ for $m' \neq m$. This smallness of the set is enforced by the second ($\ell_1$-regularization) term, and leads to the low-dimensionality of each obtained subspace. After the minimizer $\hat X$ is obtained, $\mathrm{abs}(\hat X) + \mathrm{abs}(\hat X^\top)$, where $\mathrm{abs}(\cdot)$ takes the absolute value element-wise, is regarded as an affinity matrix, and a spectral clustering algorithm, such as normalized cuts [19], is applied to obtain clusters. In the low-rank subspace clustering (LRSC) or low-rank representation [8, 13, 15, 14], low-dimensional subspaces are sought by enforcing the low-rankness of $X$:
$$\min_X \|Y - YX\|_{\mathrm{Fro}}^2 + \lambda \|X\|_{\mathrm{tr}}. \qquad (2)$$
Thanks to its simplicity, the global solution of Eq.(2) has been obtained analytically [8].

2.2 Variational Bayesian Low-rank Subspace Clustering

We formulate the probabilistic model of LRSC so that the maximum a posteriori (MAP) estimator coincides with the solution of the problem (2) under a certain hyperparameter setting:
$$p(Y|A', B') \propto \exp\left(-\frac{1}{2\sigma^2}\|Y - DB'A'^\top\|_{\mathrm{Fro}}^2\right), \qquad (3)$$
$$p(A') \propto \exp\left(-\tfrac{1}{2}\mathrm{tr}(A' C_A^{-1} A'^\top)\right), \quad p(B') \propto \exp\left(-\tfrac{1}{2}\mathrm{tr}(B' C_B^{-1} B'^\top)\right). \qquad (4)$$
Here, we factorized $X$ as $X = B'A'^\top$, as in [2], to induce low-rankness through the model-induced regularization mechanism [17]. In this formulation, $A' \in \mathbb{R}^{M \times H}$ and $B' \in \mathbb{R}^{M \times H}$ for $H \leq \min(L, M)$ are the parameters to be estimated. We assume that the hyperparameters $C_A = \mathrm{diag}(c_{a_1}^2, \ldots, c_{a_H}^2)$ and $C_B = \mathrm{diag}(c_{b_1}^2, \ldots, c_{b_H}^2)$ are diagonal and positive definite. The dictionary $D$ is treated as a constant, and set to $D = Y$ once $Y$ is observed.¹ The Bayes posterior is written as
$$p(A', B'|Y) = \frac{p(Y|A', B')\, p(A')\, p(B')}{p(Y)}, \qquad (5)$$
where $p(Y) = \langle p(Y|A', B')\rangle_{p(A')p(B')}$ is the marginal likelihood. Here, $\langle\cdot\rangle_p$ denotes the expectation over the distribution $p$.
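The last step of the pipeline just described (symmetrize the estimated coefficients into an affinity matrix, then cluster spectrally) can be sketched in a few lines; the Fiedler-vector bipartition below is a minimal numpy stand-in for normalized cuts [19], and all function names are our own, not from the paper:

```python
import numpy as np

def affinity_from_coefficients(X_hat):
    """abs(X) + abs(X)^T, the affinity matrix used in the SSC/LRSC pipeline."""
    return np.abs(X_hat) + np.abs(X_hat).T

def spectral_bipartition(W):
    """Split samples into two clusters from an affinity matrix W.

    A minimal stand-in for normalized cuts: threshold the Fiedler vector
    (eigenvector of the 2nd smallest eigenvalue of the normalized Laplacian)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    fiedler = eigvecs[:, 1]
    return (fiedler > 0).astype(int)

# Toy check: a block-diagonal coefficient matrix (two weakly linked blocks)
# should be split back into the two blocks.
X_hat = np.full((6, 6), 0.01)
X_hat[:3, :3] = 0.9
X_hat[3:, 3:] = 0.9
labels = spectral_bipartition(affinity_from_coefficients(X_hat))
```

The weak off-block weight (0.01) keeps the graph connected, which makes the Fiedler vector well defined.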
Since the Bayes posterior (5) is computationally intractable, we adopt the variational Bayesian (VB) approximation [1, 5]. Let $r(A', B')$, or $r$ for short, be a trial distribution. The following functional with respect to $r$ is called the free energy:
$$F(r) = \left\langle \log \frac{r(A', B')}{p(Y|A', B')\, p(A')\, p(B')} \right\rangle_{r(A', B')} = \left\langle \log \frac{r(A', B')}{p(A', B'|Y)} \right\rangle_{r(A', B')} - \log p(Y). \qquad (6)$$
In the last expression of Eq.(6), the first term is the Kullback-Leibler (KL) distance from the trial distribution to the Bayes posterior, and the second term is a constant. Therefore, minimizing the free energy (6) amounts to finding the distribution closest to the Bayes posterior in the sense of the KL distance. In the VB approximation, the free energy (6) is minimized over some restricted function space.

2.2.1 Standard VB (SVB) Iteration

The standard procedure for the VB approximation imposes the following constraint on the posterior: $r(A', B') = r(A')\, r(B')$. By using the variational method, we can show that the VB posterior is Gaussian, and has the following form:
$$r(A') \propto \exp\left(-\frac{\mathrm{tr}\left((A' - \hat A')\,\Sigma_{A'}^{-1}\,(A' - \hat A')^\top\right)}{2}\right), \quad r(B') \propto \exp\left(-\frac{(\breve b' - \hat{\breve b}')^\top \breve\Sigma_{B'}^{-1} (\breve b' - \hat{\breve b}')}{2}\right), \qquad (7)$$
where $\breve b' = \mathrm{vec}(B') \in \mathbb{R}^{MH}$. The means and the covariances satisfy the following equations:
$$\hat A' = \frac{1}{\sigma^2} Y^\top Y \hat B' \Sigma_{A'}, \qquad \Sigma_{A'} = \sigma^2 \left(\left\langle B'^\top Y^\top Y B' \right\rangle_{r(B')} + \sigma^2 C_A^{-1}\right)^{-1}, \qquad (8)$$
$$\hat{\breve b}' = \frac{\breve\Sigma_{B'}}{\sigma^2} \mathrm{vec}\left(Y^\top Y \hat A'\right), \qquad \breve\Sigma_{B'} = \sigma^2 \left(\left(\hat A'^\top \hat A' + M \Sigma_{A'}\right) \otimes Y^\top Y + \sigma^2 (C_B^{-1} \otimes I_M)\right)^{-1}, \qquad (9)$$
where $\otimes$ denotes the Kronecker product of matrices, and $I_M$ is the $M$-dimensional identity matrix.

¹ Our formulation is slightly different from the one proposed in [2], where a clean version of $Y$ is introduced as an additional parameter to cope with outliers. Since we focus on the LRSC model without outliers in this paper, we simplified the model. In our formulation, the clean dictionary corresponds to $Y B A^\top (B A^\top)^\dagger$, where $\dagger$ denotes the pseudo-inverse of a matrix.
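For toy problem sizes, the SVB updates (8)–(9) can be run directly, including the $MH \times MH$ covariance of $\mathrm{vec}(B')$ that makes the iteration intractable at scale. The sketch below is our numpy transcription (initialization and variable names are our own assumptions); the $\langle B'^\top Y^\top Y B'\rangle$ term is expanded with the standard Gaussian second-moment identity:

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, H, sigma2 = 5, 4, 2, 0.1

Y = rng.standard_normal((L, M))
G = Y.T @ Y                      # Y^T Y, reused throughout
C_A_inv = np.eye(H)              # C_A^{-1}, C_B^{-1}: identity for this toy run
C_B_inv = np.eye(H)

# Initialization (our choice, not specified in the text)
A_hat = rng.standard_normal((M, H))
B_hat = rng.standard_normal((M, H))
Sigma_B = np.eye(M * H) * 0.01   # full MH x MH covariance of vec(B')

def expect_BtGB(B_hat, Sigma_B, G):
    """<B'^T G B'> under a Gaussian posterior on vec(B') (column-stacked)."""
    M, H = B_hat.shape
    E = B_hat.T @ G @ B_hat
    for h in range(H):
        for k in range(H):
            # covariance block between column h and column k of B'
            blk = Sigma_B[h*M:(h+1)*M, k*M:(k+1)*M]
            E[h, k] += np.sum(G * blk)
    return E

# Eq.(8): update of the posterior over A'
Sigma_A = sigma2 * np.linalg.inv(expect_BtGB(B_hat, Sigma_B, G) + sigma2 * C_A_inv)
A_hat = (1.0 / sigma2) * G @ B_hat @ Sigma_A

# Eq.(9): update of the posterior over vec(B')
C = A_hat.T @ A_hat + M * Sigma_A
Sigma_B = sigma2 * np.linalg.inv(np.kron(C, G) + sigma2 * np.kron(C_B_inv, np.eye(M)))
b_vec = (Sigma_B / sigma2) @ (G @ A_hat).T.reshape(-1)  # vec() column-stacks
B_hat = b_vec.reshape(H, M).T
```

The Kronecker ordering follows the identity $\mathrm{vec}(GXC^\top) = (C \otimes G)\,\mathrm{vec}(X)$ with column-stacked `vec`, matching Eq.(9).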
For empirical VB learning, where the hyperparameters are also estimated from observation, the following are obtained from the derivatives of the free energy (6):
$$c_{a_h}^2 = \|\hat a'_h\|^2/M + \sigma_{a'_h}^2, \qquad c_{b_h}^2 = \Big(\|\hat b'_h\|^2 + \sum_{m=1}^M \sigma_{B'_{m,h}}^2\Big)/M, \qquad (10)$$
$$\sigma^2 = \frac{\mathrm{tr}\left(Y^\top Y \left(I_M - 2\hat B'\hat A'^\top + \left\langle B'(\hat A'^\top \hat A' + M\Sigma_{A'})B'^\top\right\rangle_{r(B')}\right)\right)}{LM}, \qquad (11)$$
where $(\sigma_{a'_1}^2, \ldots, \sigma_{a'_H}^2)$ and $((\sigma_{B'_{1,1}}^2, \ldots, \sigma_{B'_{M,1}}^2), \ldots, (\sigma_{B'_{1,H}}^2, \ldots, \sigma_{B'_{M,H}}^2))$ are the diagonal entries of $\Sigma_{A'}$ and $\breve\Sigma_{B'}$, respectively. Eqs.(8)–(11) form an ICM algorithm, which we call the standard VB (SVB) iteration.

2.2.2 Matrix-Variate Gaussian Approximate (MVGA) Iteration

The SVB iteration cannot be applied to a large-scale problem, because Eq.(9) requires the inversion of a huge $MH \times MH$ matrix. This difficulty can be avoided by restricting $r(B')$ to be a matrix-variate Gaussian (MVG) [11], i.e.,
$$r(B') \propto \exp\left(-\tfrac{1}{2}\mathrm{tr}\left(\Theta_{B'}^{-1}(B' - \hat B')\,\Sigma_{B'}^{-1}\,(B' - \hat B')^\top\right)\right). \qquad (12)$$
Under this additional constraint, a computationally tractable gradient-based algorithm can be derived [2], which we call the MVG approximate (MVGA) iteration.

3 Global Variational Bayesian Solvers

In this section, we first show that a large portion of the degrees of freedom in the expression (7) are irrelevant, which significantly reduces the complexity of the optimization problem without the MVG approximation. Then, we propose an exact global VB solver and its approximation.

3.1 Irrelevant Degrees of Freedom of VB-LRSC

Consider the following transforms:
$$A = \Omega_Y^{\mathrm{right}\top} A', \quad B = \Omega_Y^{\mathrm{right}\top} B', \quad \text{where} \quad Y = \Omega_Y^{\mathrm{left}} \Gamma_Y \Omega_Y^{\mathrm{right}\top} \qquad (13)$$
is the singular value decomposition (SVD) of $Y$. Then, we obtain the following theorem:

Theorem 1 The global minimum of the VB free energy (6) is achieved with a solution such that $\hat A$, $\hat B$, $\Sigma_A$, $\breve\Sigma_B$ are diagonal.

(Sketch of proof) After the transform (13), we can regard the observed matrix as a diagonal matrix, i.e., $Y \to \Gamma_Y$.
Since we assume Gaussian priors with no correlation, the solution $\hat B \hat A^\top$ is naturally expected to be diagonal. To prove this intuition, we apply an approach similar to [17], where the diagonalities of the VB posterior covariances were proved in fully-observed matrix factorization by investigating perturbations around any solution. We first show that $\hat A'^\top \hat A' + M\Sigma_{A'}$ is diagonal, with which Eq.(9) implies the diagonality of $\breve\Sigma_B$. Other diagonalities can be shown similarly. □

Theorem 1 not only reduces the complexity of the optimization problem greatly, but also makes the problem separable, as shown in the following.

3.2 Exact Global VB Solver (EGVBS)

Thanks to Theorem 1, the free energy minimization problem can be decomposed as follows:

Lemma 1 Let $J\,(\leq \min(L, M))$ be the rank of $Y$, $\gamma_m$ be the $m$-th largest singular value of $Y$, and $(\hat a_1, \ldots, \hat a_H)$, $(\sigma_{a_1}^2, \ldots, \sigma_{a_H}^2)$, $(\hat b_1, \ldots, \hat b_H)$, $((\sigma_{B_{1,1}}^2, \ldots, \sigma_{B_{M,1}}^2), \ldots, (\sigma_{B_{1,H}}^2, \ldots, \sigma_{B_{M,H}}^2))$ be the diagonal entries of $\hat A$, $\Sigma_A$, $\hat B$, $\breve\Sigma_B$, respectively. Then, the free energy (6) is written as
$$F = \frac{1}{2}\left(LM \log(2\pi\sigma^2) + \sum_{h=1}^J \frac{\gamma_h^2}{\sigma^2} + \sum_{h=1}^H 2F_h\right), \quad \text{where} \qquad (14)$$

Algorithm 1 Exact Global VB Solver (EGVBS) for LRSC.
1: Calculate the SVD of $Y = \Omega_Y^{\mathrm{left}} \Gamma_Y \Omega_Y^{\mathrm{right}\top}$.
2: for $h = 1$ to $H$ do
3:   Find all the solutions of the polynomial system (16)–(18) by the homotopy method.
4:   Discard prohibitive solutions with complex numbers or with negative variances.
5:   Select the stationary point giving the smallest $F_h$ (defined by Eq.(15)).
6:   The global solution for $h$ is the selected stationary point if it satisfies $F_h < 0$, otherwise the null solution (19).
7: end for
8: Calculate $\hat X = \Omega_Y^{\mathrm{right}} \hat B \hat A^\top \Omega_Y^{\mathrm{right}\top}$.
9: Apply spectral clustering with the affinity matrix equal to $\mathrm{abs}(\hat X) + \mathrm{abs}(\hat X^\top)$.
$$2F_h = M \log \frac{c_{a_h}^2}{\sigma_{a_h}^2} + \sum_{m=1}^J \log \frac{c_{b_h}^2}{\sigma_{B_{m,h}}^2} - (M + J) + \frac{\hat a_h^2 + M\sigma_{a_h}^2}{c_{a_h}^2} + \frac{\hat b_h^2 + \sum_{m=1}^J \sigma_{B_{m,h}}^2}{c_{b_h}^2} + \frac{1}{\sigma^2}\left\{\gamma_h^2\left(-2\hat a_h \hat b_h + \hat b_h^2(\hat a_h^2 + M\sigma_{a_h}^2)\right) + \sum_{m=1}^J \gamma_m^2 \sigma_{B_{m,h}}^2\,(\hat a_h^2 + M\sigma_{a_h}^2)\right\}, \qquad (15)$$
and its stationary condition is given as follows: for each $h = 1, \ldots, H$,
$$\hat a_h = \frac{\gamma_h^2}{\sigma^2}\, \hat b_h\, \sigma_{a_h}^2, \qquad \sigma_{a_h}^2 = \sigma^2\left(\gamma_h^2 \hat b_h^2 + \sum_{m=1}^J \gamma_m^2 \sigma_{B_{m,h}}^2 + \frac{\sigma^2}{c_{a_h}^2}\right)^{-1}, \qquad (16)$$
$$\hat b_h = \frac{\gamma_h^2}{\sigma^2}\, \hat a_h\, \sigma_{B_{h,h}}^2, \qquad \sigma_{B_{m,h}}^2 = \begin{cases}\sigma^2\left(\gamma_m^2(\hat a_h^2 + M\sigma_{a_h}^2) + \frac{\sigma^2}{c_{b_h}^2}\right)^{-1} & (m \leq J),\\[4pt] c_{b_h}^2 & (m > J),\end{cases} \qquad (17)$$
$$c_{a_h}^2 = \hat a_h^2/M + \sigma_{a_h}^2, \qquad c_{b_h}^2 = \Big(\hat b_h^2 + \sum_{m=1}^J \sigma_{B_{m,h}}^2\Big)/J. \qquad (18)$$
If no stationary point gives $F_h < 0$, the global solution is given by
$$\hat a_h = \hat b_h = 0, \qquad \sigma_{a_h}^2,\ \sigma_{B_{m,h}}^2,\ c_{a_h}^2,\ c_{b_h}^2 \to 0 \quad \text{for } m = 1, \ldots, M. \qquad (19)$$
Taking into account the trivial relations $c_{b_h}^2 = \sigma_{B_{m,h}}^2$ for $m > J$, Eqs.(16)–(18) for each $h$ can be seen as a polynomial system with $5 + J$ unknown variables, i.e., $(\hat a_h, \sigma_{a_h}^2, c_{a_h}^2, \hat b_h, \{\sigma_{B_{m,h}}^2\}_{m=1}^J, c_{b_h}^2)$. Thus, Lemma 1 has decomposed the original problem (8)–(10) with $O(M^2 H^2)$ unknown variables into $H$ subproblems with $O(J)$ variables each. Given the noise variance $\sigma^2$, our exact global VB solver (EGVBS) finds all stationary points that satisfy the polynomial system (16)–(18) by using the homotopy method [12, 10].² After that, it discards the prohibitive solutions with complex numbers or with negative variances, and then selects the one giving the smallest $F_h$, defined by Eq.(15). The global solution is the selected stationary point if it satisfies $F_h < 0$, or the null solution (19) otherwise. Algorithm 1 summarizes the procedure of EGVBS. If $\sigma^2$ is unknown, we conduct a naive 1-D search by iteratively applying EGVBS, as for VB matrix factorization [17].

3.3 Approximate Global VB Solver (AGVBS)

Although Lemma 1 significantly reduces the complexity of the optimization problem, EGVBS is not applicable to large-scale data, since the homotopy method is not guaranteed to find all the solutions in time polynomial in $J$ when the polynomial system involves $O(J)$ unknown variables.
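The per-component free energy used for candidate selection can be evaluated directly once candidate stationary points are available. The function below is our transcription of Eq.(15); the exact grouping of terms is an assumption recovered from the extracted text, so treat it as a sketch rather than the authors' implementation:

```python
import numpy as np

def two_F_h(h, a_hat, s2_a, c2_a, b_hat, s2_B, c2_b, gammas, sigma2, M):
    """Evaluate 2*F_h (our reading of Eq.(15)) for component h.

    h      : 0-based index into gammas (so gammas[h] is gamma_h)
    s2_B   : length-J array of variances sigma^2_{B_{m,h}}
    other scalar arguments follow the notation of Lemma 1."""
    J = len(gammas)
    g2 = np.asarray(gammas, dtype=float) ** 2
    quad = a_hat**2 + M * s2_a          # (a_h^2 + M sigma^2_{a_h}), reused
    return (M * np.log(c2_a / s2_a) + np.sum(np.log(c2_b / s2_B)) - (M + J)
            + quad / c2_a + (b_hat**2 + np.sum(s2_B)) / c2_b
            + (g2[h] * (-2 * a_hat * b_hat + b_hat**2 * quad)
               + np.sum(g2 * s2_B) * quad) / sigma2)

# Candidates with F_h < 0 would be kept; this just exercises the evaluation.
val = two_F_h(0, a_hat=0.5, s2_a=0.1, c2_a=0.2, b_hat=0.3,
              s2_B=np.full(4, 0.05), c2_b=0.1,
              gammas=[3.0, 2.0, 1.5, 1.0], sigma2=0.5, M=10)
```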
For large-scale data, we propose a scalable approximation by introducing an additional constraint that $\gamma_m^2 \sigma_{B_{m,h}}^2$ is constant over $m = 1, \ldots, J$, i.e.,
$$\gamma_m^2 \sigma_{B_{m,h}}^2 = \sigma_{b_h}^2 \quad \text{for all } m \leq J. \qquad (20)$$

² The homotopy method is a reliable and efficient numerical method for solving a polynomial system [6, 9]. It provides all the isolated solutions to a system of $n$ polynomials $f(x) \equiv (f_1(x), \ldots, f_n(x)) = 0$ by defining a smooth set of homotopy systems with a parameter $t \in [0, 1]$: $g(x, t) \equiv (g_1(x, t), g_2(x, t), \ldots, g_n(x, t)) = 0$, such that one can continuously trace the solution path from the easiest ($t = 0$) to the target ($t = 1$). We use HOM4PS-2.0 [12], one of the most successful polynomial system solvers.

Under this constraint, we obtain the following theorem (the proof is omitted):

Theorem 2 Under the constraint (20), any stationary point of the free energy (15) for each $h$ satisfies the following polynomial equation with a single variable $\hat b_{\gamma_h}$:
$$\xi_6 \hat b_{\gamma_h}^6 + \xi_5 \hat b_{\gamma_h}^5 + \xi_4 \hat b_{\gamma_h}^4 + \xi_3 \hat b_{\gamma_h}^3 + \xi_2 \hat b_{\gamma_h}^2 + \xi_1 \hat b_{\gamma_h} + \xi_0 = 0, \qquad (21)$$
where
$$\xi_6 = \frac{\phi_h^2}{\gamma_h^2}, \qquad \xi_5 = -\frac{2\phi_h^2 M\sigma^2}{\gamma_h^3} + \frac{2\phi_h}{\gamma_h}, \qquad \xi_4 = \frac{\phi_h^2 M^2\sigma^4}{\gamma_h^4} - \frac{2\phi_h(2M - J)\sigma^2}{\gamma_h^2} + 1 + \frac{\phi_h^2(M\sigma^2 - \gamma_h^2)}{\gamma_h^2}, \qquad (22)$$
$$\xi_3 = \frac{2\phi_h M(M - J)\sigma^4}{\gamma_h^3} - \frac{2(M - J)\sigma^2}{\gamma_h} + \frac{\phi_h((M + J)\sigma^2 - \gamma_h^2)}{\gamma_h} - \frac{\phi_h^2 M\sigma^2(M\sigma^2 - \gamma_h^2)}{\gamma_h^3} + \frac{\phi_h(M\sigma^2 - \gamma_h^2)}{\gamma_h}, \qquad (23)$$
$$\xi_2 = \frac{(M - J)^2\sigma^4}{\gamma_h^2} - \frac{\phi_h M\sigma^2((M + J)\sigma^2 - \gamma_h^2)}{\gamma_h^2} + \left((M + J)\sigma^2 - \gamma_h^2\right) - \frac{\phi_h(M - J)\sigma^2(M\sigma^2 - \gamma_h^2)}{\gamma_h^2}, \qquad (24)$$
$$\xi_1 = -\frac{(M - J)\sigma^2((M + J)\sigma^2 - \gamma_h^2)}{\gamma_h} + \frac{\phi_h MJ\sigma^4}{\gamma_h}, \qquad \xi_0 = MJ\sigma^4. \qquad (25)$$
Here, $\bar\gamma = \big(\sum_{m=1}^J \gamma_m^{-2}/J\big)^{-1}$ and $\phi_h = \big(1 - \frac{\gamma_h^2}{\bar\gamma^2}\big)$. For each real solution $\hat b_{\gamma_h}$ such that
$$\hat\gamma_h = \hat b_{\gamma_h} + \gamma_h - \frac{M\sigma^2}{\gamma_h}, \qquad \hat\kappa = \gamma_h^2 - (M + J)\sigma^2 - \left(M\sigma^2 - \gamma_h^2\right)\frac{\phi_h \hat b_{\gamma_h}}{\gamma_h}, \qquad (26)$$
$$\hat\tau = \frac{1}{2MJ}\left(\hat\kappa + \sqrt{\hat\kappa^2 - 4MJ\sigma^4\left(1 + \frac{\phi_h \hat b_{\gamma_h}}{\gamma_h}\right)}\right), \qquad \hat\delta_h = \frac{\sigma^2}{\sqrt{\hat\tau}}\left(\gamma_h - \frac{M\sigma^2}{\gamma_h} - \hat\gamma_h\right)^{-1}, \qquad (27)$$
are real and positive, the corresponding stationary point candidate is given by
$$\left(\hat a_h, \sigma_{a_h}^2, c_{a_h}^2, \hat b_h, \sigma_{b_h}^2, c_{b_h}^2\right) = \left(\sqrt{\hat\gamma_h \hat\delta_h},\ \frac{\sigma^2 \hat\delta_h}{\gamma_h},\ \sqrt{\hat\tau},\ \frac{\sqrt{\hat\gamma_h/\hat\delta_h}}{\gamma_h},\ \frac{\sigma^2}{\gamma_h \hat\delta_h} - \frac{\phi_h \sigma^2}{\sqrt{\hat\tau}},\ \frac{\sqrt{\hat\tau}}{\gamma_h^2}\right). \qquad (28)$$
Given the noise variance $\sigma^2$, obtaining the coefficients (22)–(25) is straightforward.
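The core of AGVBS is then plain univariate root finding for Eq.(21). A sketch using numpy's `roots` (the counterpart of the MATLAB `roots` function mentioned in the text), with the coefficients ξ₀..ξ₆ assumed precomputed from Eqs.(22)–(25):

```python
import numpy as np

def real_roots_of_agvbs_polynomial(xi):
    """Return the real solutions of Eq.(21): xi_6 b^6 + ... + xi_1 b + xi_0 = 0.

    `xi` is (xi_0, ..., xi_6), as computed from Eqs.(22)-(25) for a given h.
    Complex roots are discarded, since AGVBS keeps only real, feasible
    stationary-point candidates."""
    coeffs = list(reversed(xi))          # np.roots expects highest degree first
    r = np.roots(coeffs)
    return sorted(r[np.abs(r.imag) < 1e-9].real)

# Toy check with known roots: (b-1)(b-2)(b^2+1)(b^2+4) has real roots 1 and 2.
p = np.poly([1, 2, 1j, -1j, 2j, -2j]).real   # degree-6 coefficients, highest first
roots = real_roots_of_agvbs_polynomial(list(reversed(p)))
```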
Our approximate global VB solver (AGVBS) solves the sixth-order polynomial equation (21), e.g., by the 'roots' function in MATLAB®, and obtains all candidate stationary points by using Eqs.(26)–(28). Then, it selects the one giving the smallest $F_h$; the global solution is the selected stationary point if it satisfies $F_h < 0$, and the null solution (19) otherwise. Note that, although a solution of Eq.(21) is not necessarily a stationary point, the selection based on the free energy discards all non-stationary points and local maxima. As in EGVBS, a naive 1-D search is conducted for estimating $\sigma^2$. In Section 4, we show that AGVBS is in practice a good alternative to the MVGA iteration in terms of accuracy and computation time.

4 Experiments

In this section, we experimentally evaluate the proposed EGVBS and AGVBS. We assume that the hyperparameters $(C_A, C_B)$ and the noise variance $\sigma^2$ are unknown and estimated from observations. We use the full-rank model (i.e., $H = \min(L, M)$), and expect VB-LRSC to automatically find the true rank without any parameter tuning. We first conducted an experiment with a small artificial dataset ('artificial small'), on which the exact algorithms, i.e., the SVB iteration (Section 2.2.1) and EGVBS (Section 3.2), are computationally tractable. Through this experiment, we can measure the accuracy of the efficient approximate variants, i.e., the MVGA iteration (Section 2.2.2) and AGVBS (Section 3.3). We randomly created $M = 75$ samples in $L = 10$ dimensional space. We assumed $K = 2$ clusters: $M^{(1)*} = 50$ samples lie in an $H^{(1)*} = 3$ dimensional subspace, and the other $M^{(2)*} = 25$ samples lie in an $H^{(2)*} = 1$ dimensional subspace. For each cluster $k$, we independently drew $M^{(k)*}$ samples from $\mathcal{N}_{H^{(k)*}}(0, 10 I_{H^{(k)*}})$, where $\mathcal{N}_d(\mu, \Sigma)$ denotes the $d$-dimensional Gaussian, and projected them into the observed $L$-dimensional space by $R^{(k)} \in \mathbb{R}^{L \times H^{(k)*}}$, each entry of which follows $\mathcal{N}_1(0, 1)$. Thus, we obtained a noiseless matrix $Y^{(k)*} \in \mathbb{R}^{L \times M^{(k)*}}$ for the $k$-th cluster.
Concatenating all clusters, $Y^* = (Y^{(1)*}, \ldots, Y^{(K)*})$, and adding random noise subject to $\mathcal{N}_1(0, 1)$ to each entry gave an artificial observed matrix $Y \in \mathbb{R}^{L \times M}$, where $M = \sum_{k=1}^K M^{(k)*} = 75$. The true rank of $Y^*$ is given by $H^* = \min(\sum_{k=1}^K H^{(k)*}, L, M) = 4$. Note that $H^*$ is different from the rank $J$ of the observed matrix $Y$, which is almost surely equal to $\min(L, M) = 10$ under the Gaussian noise.

[Figure 1: Results on the 'artificial small' dataset (L = 10, M = 75, H* = 4); panels (a) free energy, (b) computation time, and (c) estimated rank over iterations. The clustering errors were 1.3% for EGVBS, AGVBS, and the SVB iteration, and 2.4% for the MVGA iteration.]

[Figure 2: Results on the 'artificial large' dataset (L = 50, M = 225, H* = 5); panels (a) free energy, (b) computation time, and (c) estimated rank over iterations. The clustering errors were 4.0% for AGVBS and 11.2% for the MVGA iteration.]

[Figure 3: Results on the '1R2RC' sequence (L = 59, M = 459) of the Hopkins 155 motion database; panels (a) free energy, (b) computation time, and (c) estimated rank over iterations. The clustering errors are shown in Figure 4.]

Figure 1 shows the free energy, the computation time, and the estimated rank of $\hat X = \hat B' \hat A'^\top$ over iterations.
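The generation protocol of the 'artificial small' dataset can be sketched as follows (function name, seeding, and return values are our own choices):

```python
import numpy as np

def make_artificial_small(rng=np.random.default_rng(0)):
    """Generate the 'artificial small' dataset described in the text:
    L = 10, K = 2 clusters of sizes (50, 25) lying in subspaces of
    dimensions (3, 1), plus unit-variance Gaussian observation noise."""
    L = 10
    sizes, dims = [50, 25], [3, 1]
    blocks, labels = [], []
    for k, (Mk, Hk) in enumerate(zip(sizes, dims)):
        coeffs = rng.normal(0.0, np.sqrt(10.0), size=(Hk, Mk))  # N(0, 10 I)
        R = rng.standard_normal((L, Hk))                        # random projection
        blocks.append(R @ coeffs)
        labels += [k] * Mk
    Y_clean = np.concatenate(blocks, axis=1)                    # the noiseless Y*
    Y = Y_clean + rng.standard_normal(Y_clean.shape)            # N(0, 1) noise
    return Y, Y_clean, np.array(labels)

Y, Y_clean, labels = make_artificial_small()
```

As the text notes, the clean matrix has rank H* = 3 + 1 = 4, while the noisy observation is almost surely full rank (10).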
For the iterative methods, we show the results of 10 trials starting from different random initializations. We can see that AGVBS gives almost the same free energy as the exact methods (EGVBS and the SVB iteration). The exact methods require a large computation cost: EGVBS took 621 sec to obtain the global solution, and the SVB iteration took ∼100 sec to achieve almost the same free energy. The approximate methods are much faster: AGVBS took less than 1 sec, and the MVGA iteration took ∼10 sec. Since the MVGA iteration had not converged after 250 iterations, we continued it until 2500 iterations, and found that the MVGA iteration sometimes converges to a local solution with significantly higher free energy than the other methods. EGVBS, AGVBS, and the SVB iteration successfully found the true rank $H^* = 4$, while the MVGA iteration sometimes failed. This difference is also reflected in the clustering error, i.e., the misclassification rate with all possible cluster correspondences taken into account, after spectral clustering [19] is performed: 1.3% for EGVBS, AGVBS, and the SVB iteration, and 2.4% for the MVGA iteration. Next, we conducted the same experiment with a larger artificial dataset ('artificial large') ($L = 50$, $K = 4$, $(M^{(1)*}, \ldots, M^{(K)*}) = (100, 50, 50, 25)$, $(H^{(1)*}, \ldots, H^{(K)*}) = (2, 1, 1, 1)$), on which EGVBS and the SVB iteration are computationally intractable. Figure 2 shows the results with AGVBS and the MVGA iteration. The advantage in computation time is clear: AGVBS took ∼0.1 sec, while the MVGA iteration took more than 100 sec. The clustering errors were 4.0% for AGVBS and 11.2% for the MVGA iteration. Finally, we applied AGVBS and the MVGA iteration to the Hopkins 155 motion database [21]. In this dataset, each sample corresponds to the trajectory of a point in a video, and clustering the trajectories amounts to finding a set of rigid bodies.
Figure 3 shows the results on the '1R2RC' (L = 59, M = 459) sequence.³

[Figure 4: Clustering errors of MAP (with optimized lambda), AGVBS, and the MVGA iteration on the first 20 sequences (1R2RC through 1R2TCRT_g13) of the Hopkins 155 dataset.]

We see that AGVBS gave a lower free energy with much less computation time than the MVGA iteration. Figure 4 shows the clustering errors over the first 20 sequences. We find that AGVBS generally outperforms the MVGA iteration. Figure 4 also shows the results of MAP estimation (2) with the tuning parameter λ optimized over the 20 sequences (we performed MAP with different values of λ, and selected the one giving the lowest average clustering error). We see that AGVBS performs comparably to MAP with optimized λ, which implies that VB estimates the hyperparameters and the noise variance reasonably well.

5 Conclusion

In this paper, we proposed a global variational Bayesian (VB) solver for low-rank subspace clustering (LRSC), and its approximate variant. The key property that enabled us to obtain a global solver is that we can significantly reduce the degrees of freedom of the VB-LRSC model, so that the optimization problem becomes separable. Our exact global VB solver (EGVBS) provides the global solution of a non-convex minimization problem by using the homotopy method, which solves the stationary condition written as a polynomial system. Our approximate global VB solver (AGVBS), in turn, finds the roots of a polynomial equation with a single unknown variable, and provides the global solution of an approximate problem. We experimentally showed the advantages of AGVBS over the previous scalable method, the matrix-variate Gaussian approximate (MVGA) iteration, in terms of accuracy and computational efficiency.
In AGVBS, the SVD dominates the computation time. Accordingly, applying additional tricks to the SVD calculation, e.g., parallel computation or approximation based on random projection, would be a vital option for further computational efficiency. LRSC can be equipped with an outlier term, which enhances robustness [7, 8, 2]. With the outlier term, a much better clustering error on the Hopkins 155 dataset was reported [2]. Our future work is to extend our approach to such robust variants. Theorem 2 enables us to construct the mean update (MU) algorithm [16], which finds the global solution with respect to a large number of unknown variables in each step. We expect that the MU algorithm tends to converge to a better solution than the standard VB iteration, as in robust PCA and its extensions. EGVBS and AGVBS cannot be applied to applications where $Y$ has missing entries. Also in such cases, Theorem 2 might be used to derive a better algorithm, as the VB global solution of fully-observed matrix factorization (MF) was used as a subroutine for partially-observed MF [18]. In many probabilistic models, Bayesian learning is intractable, and its VB approximation has to rely on a local search algorithm. Exceptions are fully-observed MF, for which an analytic form of the global solution has been derived [17], and LRSC, for which this paper provided global VB solvers. As in EGVBS, the homotopy method can solve a stationary condition whenever it can be written as a polynomial system. We expect that such a tool will extend the attainability of global solutions of the non-convex problems which machine learners often face.

Acknowledgments

The authors thank the reviewers for helpful comments. SN, MS, and IT thank the support from MEXT Kakenhi 23120004, the FIRST program, and MEXT KAKENHI 23700165, respectively.

³ Peaks in the free energy curves are due to pruning, which is necessary for the gradient-based MVGA iteration.
The free energy can jump just after pruning, but immediately gets lower than the value before pruning.

References

[1] H. Attias. Inferring parameters and structure of latent variable models by variational Bayes. In Proc. of UAI, pages 21–30, 1999.
[2] S. D. Babacan, S. Nakajima, and M. N. Do. Probabilistic low-rank subspace clustering. In Advances in Neural Information Processing Systems 25, pages 2753–2761, 2012.
[3] J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society B, 48:259–302, 1986.
[4] C. M. Bishop. Variational principal components. In Proc. of International Conference on Artificial Neural Networks, volume 1, pages 509–514, 1999.
[5] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, NY, USA, 2006.
[6] F. J. Drexler. A homotopy method for the calculation of all zeros of zero-dimensional polynomial ideals. In H. J. Wacker, editor, Continuation Methods, pages 69–93, New York, 1978. Academic Press.
[7] E. Elhamifar and R. Vidal. Sparse subspace clustering. In Proc. of CVPR, pages 2790–2797, 2009.
[8] P. Favaro, R. Vidal, and A. Ravichandran. A closed form solution to robust subspace estimation and clustering. In Proc. of CVPR, pages 1801–1807, 2011.
[9] C. B. Garcia and W. I. Zangwill. Determining all solutions to certain systems of nonlinear equations. Mathematics of Operations Research, 4:1–14, 1979.
[10] T. Gunji, S. Kim, M. Kojima, A. Takeda, K. Fujisawa, and T. Mizutani. PHoM: a polyhedral homotopy continuation method. Computing, 73:57–77, 2004.
[11] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman and Hall/CRC, 1999.
[12] T. L. Lee, T. Y. Li, and C. H. Tsai. HOM4PS-2.0: a software package for solving polynomial systems by the polyhedral homotopy continuation method. Computing, 83:109–133, 2008.
[13] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In Proc. of ICML, pages 663–670, 2010.
[14] G. Liu, H. Xu, and S. Yan.
Exact subspace segmentation and outlier detection by low-rank representation. In Proc. of AISTATS, 2012.
[15] G. Liu and S. Yan. Latent low-rank representation for subspace segmentation and feature extraction. In Proc. of ICCV, 2011.
[16] S. Nakajima, M. Sugiyama, and S. D. Babacan. Variational Bayesian sparse additive matrix factorization. Machine Learning, 92:319–347, 2013.
[17] S. Nakajima, M. Sugiyama, S. D. Babacan, and R. Tomioka. Global analytic solution of fully-observed variational Bayesian matrix factorization. Journal of Machine Learning Research, 14:1–37, 2013.
[18] M. Seeger and G. Bouchard. Fast variational Bayesian inference for non-conjugate matrix factorization models. In Proc. of International Conference on Artificial Intelligence and Statistics, La Palma, Spain, 2012.
[19] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Machine Intell., 22(8):888–905, 2000.
[20] M. Soltanolkotabi and E. J. Candès. A geometric analysis of subspace clustering with outliers. CoRR, 2011.
[21] R. Tron and R. Vidal. A benchmark for the comparison of 3-D motion segmentation algorithms. In Proc. of CVPR, 2007.
Reservoir Boosting: Between Online and Offline Ensemble Learning

Leonidas Lefakis, Idiap Research Institute, Martigny, Switzerland, leonidas.lefakis@idiap.ch
François Fleuret, Idiap Research Institute, Martigny, Switzerland, francois.fleuret@idiap.ch

Abstract

We propose to train an ensemble with the help of a reservoir in which the learning algorithm can store a limited number of samples. This novel approach lies in the area between offline and online ensemble approaches, and can be seen either as a restriction of the former or an enhancement of the latter. We identify some basic strategies that can be used to populate this reservoir and present our main contribution, dubbed Greedy Edge Expectation Maximization (GEEM), which maintains the reservoir content in the case of Boosting by viewing the samples through their projections into the weak classifier response space. We propose an efficient algorithmic implementation which makes it tractable in practice, and demonstrate its efficiency experimentally on several computer-vision data-sets, on which it outperforms both online and offline methods in a memory-constrained setting.

1 Introduction

Learning a boosted classifier from a set of samples $S = \{X, Y\}^N \in \mathbb{R}^D \times \{-1, 1\}$ is usually addressed in the context of two main frameworks. In offline Boosting settings [10], it is assumed that the learner has full access to the entire dataset $S$ at any given time. At each iteration $t$, the learning algorithm calculates a weight $w_i$ for each sample $i$ – the derivative of the loss with respect to the classifier response on that sample – and feeds these weights together with the entire dataset to a weak learning algorithm, which learns a predictor $h_t$. The coefficient $a_t$ of the chosen weak learner $h_t$ is then calculated based on its weighted error. There are many variations of this basic model, too many to mention here, but a common aspect is that they do not explicitly address the issue of limited resources.
It is assumed that the dataset can be efficiently processed in its entirety at each iteration. In practice, however, memory and computational limitations may make such learning approaches prohibitive, or at least inefficient. A common approach used in practice to deal with such limitations is to sub-sample the data-set using strategies based on the sample weights $W$ [9, 13]. Though these methods address the limits of the weak learning algorithm's resources, they nonetheless assume a) access to the entire data-set at all times, and b) the ability to calculate the weights $W$ of the $N$ samples and to sub-sample $K$ of these, all in an efficient manner. The issues with such an approach can be seen in tasks such as computer vision, where samples must not only be loaded sequentially into memory if they do not all fit, which may itself be computationally prohibitive, but, once loaded, must also be pre-processed, for example by extracting descriptors, making the calculation of the weights themselves a computationally expensive process. For large datasets, in order to address such issues, the framework of online learning is frequently employed. Online Boosting algorithms [15] typically assume access solely to a Filter() function, by which they mine samples from the data-set, typically one at a time. Due to their online nature,
We propose here a middle ground between these two extremes in which the boosted classifier can store some of the already processed samples in a reservoir, possibly keeping them through multiple rounds of training. As in online learning we assume access only to a Filter() through which we can sample Qt samples at each Boosting iteration. This setting is related to the framework proposed in [2] for dealing with large data-sets, the method proposed there however uses the filter to obtain a sample and stochastically accepts or rejects the sample based on its weight. The drawback of this approach is a) that after each iteration all old samples are discarded, and b) the algorithm must process an increasing number of samples at each iteration as the weights become increasingly smaller. We propose to acquire a fixed number of samples at each iteration and to add these to a persistent reservoir, discarding only a subset. The only other work we know which trains a Boosting classifier in a similar manner is [12], where the authors are solely concerned with learning in the presence of concept drift and do not propose a strategy for filling this reservoir. Rather they use a simple sliding window approach and concentrate on the removal and adding of weak learners to tackle this drift. A related concept to the work presented here is that of learning on a budget [6], where, as in the online learning setting, samples are presented one at a time to the learner, a perceptron, which builds a classification model by retaining an active subset of these samples. The main concern in this context is the complexity of the model itself and its effect via the Gramm matrix computation on both training and test time. Subsequent works on budget perceptrons has led to tighter budgets [16] (at higher computational costs), while [3] proved that such approaches are mistake-bound. 
Similar work on Support Vector Machines [1] proposed LaSVM, an SVM solver which was shown to converge to the SVM QP solution by adopting a scheme composed of two alternating steps, which consider respectively the expansion and contraction of the support vector set via the SMO algorithm. SVM budgeted learning was also considered in [8] via an L1-SVM formulation which allowed users to explicitly set a budget parameter B, and subsequently minimized the loss on the B worst-classified examples. As noted, these approaches are concerned with the complexity of the classification model, that is, the budget refers to the number of samples which have non-zero coefficients in the dual representation of the classifier. In this respect our work is only loosely related to what is often referred to as budget learning, in that we solve a qualitatively different task, namely addressing the complexity of parsing and processing the data during training.

Table 1: Notation
  R_t        the contents of the reservoir at iteration t
  |R_t|      the size of the reservoir
  Q_t        the fresh batch of samples at iteration t
  Σ_AA       the covariance matrix of the edges h ◦ y
  µ_A        the expectation of the edges of samples in set A
  y_A        the vector of labels {−1, 1}^|A| of samples in A
  w^t        the vector of Boosting weights at iteration t
  F_t(x)     the constructed strong classifier at iteration t
  Filter()   a filter returning samples from S
  h_t        the weak learner chosen at iteration t
  H          the family of weak learners
  ◦          component-wise (Hadamard) product
  T          number of weak learners in the strong classifier

Table 2: Boosting with a Reservoir
  Construct R_0 and Q_0 with r and q calls to Filter().
  for t = 1, . . . , T do
    Discard q samples from R_{t−1} ∪ Q_{t−1} to obtain R_t
    Select h_t using the samples in R_t
    Compute a_t using R_t
    Construct Q_t with q calls to Filter()
  end for
  Return F_T = Σ_{t=1}^T a_t h_t

2 Reservoir of samples

In this section we present in more detail the framework of learning a boosted classifier with the help of a reservoir.
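To make the procedure of Table 2 concrete, here is a minimal, self-contained Python sketch of the reservoir loop on a toy one-dimensional stream. The filter, the LogitBoost-style weights, the stump selection, and the uniform-discard strategy are all illustrative placeholders rather than the paper's implementation; in particular, the GEEM strategy developed below would replace `rand_discard`.

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_fn(k):
    # Toy stream: 1-D samples with label = sign of the feature.
    x = rng.normal(size=k)
    return [(xi, 1.0 if xi >= 0 else -1.0) for xi in x]

def weights(samples, ensemble):
    # LogitBoost-style weights under the current strong classifier F.
    F = lambda x: sum(a * h(x) for a, h in ensemble)
    return np.array([1.0 / (1.0 + np.exp(y * F(x))) for x, y in samples])

def select_stump(samples, w):
    # Pick the threshold stump maximizing the weighted edge <h ◦ y, w>.
    xs = np.array([x for x, _ in samples])
    ys = np.array([y for _, y in samples])
    best, best_h = -np.inf, None
    for theta in xs:
        for alpha in (-1.0, 1.0):
            h = lambda x, t=theta, a=alpha: np.where(a * (x - t) >= 0, 1.0, -1.0)
            edge = np.dot(h(xs) * ys, w)
            if edge > best:
                best, best_h = edge, h
    return 0.5, best_h          # fixed coefficient a_t, for simplicity

def rand_discard(pool, w, r):
    # Placeholder reservoir strategy: keep r of the r+q samples uniformly.
    keep = rng.choice(len(pool), size=r, replace=False)
    return [pool[i] for i in keep]

def reservoir_boost(r=20, q=20, T=10):
    R, Q, ensemble = filter_fn(r), filter_fn(q), []   # R_0, Q_0
    for t in range(T):
        pool = R + Q                                  # R_{t-1} ∪ Q_{t-1}
        R = rand_discard(pool, weights(pool, ensemble), r)
        a_t, h_t = select_stump(R, weights(R, ensemble))
        ensemble.append((a_t, h_t))
        Q = filter_fn(q)                              # Q_t
    return lambda x: np.sign(sum(a * h(x) for a, h in ensemble))
```

The returned F_T can then be evaluated on fresh samples drawn from the same filter.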
As mentioned, the batch version of Boosting consists of iteratively selecting a weak learner h_t at each iteration t, based on the loss reduction it induces on the full training set S. In the reservoir setting, weak learners are selected solely from the information provided by the samples contained in the reservoir R_t. Let N be the number of training samples, and S = {1, . . . , N} the set of their indexes. We consider here one iteration of a Boosting procedure, where each sample is weighted according to its contribution to the overall loss. Let y ∈ {−1, 1}^N be the sample labels, and H ⊂ {−1, 1}^N the set of weak learners, each identified with its vector of responses over the samples. Let ω ∈ R_+^N be the sample weights at that Boosting iteration. For any subset of sample indexes B ⊂ {1, . . . , N}, let y_B ∈ {−1, 1}^|B| be the "extracted" vector. We define ω_B similarly, and for any weak learner h ∈ H let h_B ∈ {−1, 1}^|B| stand for the vector of the |B| responses over the samples in B. At each iteration t, the learning algorithm is presented with a batch of fresh samples Q_t ⊂ S, |Q_t| = q, and must choose r samples from the full set of samples R_t ∪ Q_t at its disposal, in order to build R_{t+1} with |R_{t+1}| = r, which it subsequently uses for training. Using the samples from R_t, the learner chooses a weak learner h_t ∈ H to maximize ⟨h^t_{R_t} ◦ y_{R_t}, w^t_{R_t}⟩, where ◦ stands for the Hadamard component-wise vector product. Maximizing this latter quantity corresponds to minimizing the weighted error estimated on the samples currently in R_t. The weight a_t of the selected weak learner can also be estimated with R_t. The learner then receives a fresh batch of samples Q_{t+1} and the process continues iteratively. See the algorithm in Table 2. In the following we address the issue of which strategy to employ to discard the q samples at each time step t. To our knowledge, no previous work has been published in this or a similar framework.
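The equivalence between maximizing the weighted edge and minimizing the weighted error follows from h, y ∈ {−1, 1}: the edge decomposes as ⟨h ◦ y, w⟩ = Σᵢ wᵢ − 2 Σ_{hᵢ ≠ yᵢ} wᵢ. A quick numerical check of this identity (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
y = rng.choice([-1, 1], size=n)          # labels
h = rng.choice([-1, 1], size=n)          # a weak learner's responses
w = rng.random(n)                        # Boosting weights

edge = np.dot(h * y, w)
weighted_error = np.dot(w, h != y)       # weight mass of misclassified samples

# <h ◦ y, w> = sum(w) - 2 * weighted_error, so maximizing the edge over H
# is the same as minimizing the weighted error.
assert np.isclose(edge, w.sum() - 2 * weighted_error)
```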
3 Reservoir Strategies

In the following we present a number of strategies for populating the reservoir, i.e. for choosing which q samples from R_t ∪ Q_t to discard. We begin by identifying three basic and rather straightforward approaches.

Max Weights (Max) At each iteration t the weight vector w^t_{R_t ∪ Q_t} is computed for the r + q samples, and the r samples with the largest weights are kept.

Weighted Sampling (WSam) As above, w^t_{R_t ∪ Q_t} is computed, then normalized to 1, and used as a distribution from which to sample r samples to keep, without replacement.

Random Sampling (Rand) The reservoir is constructed by sampling uniformly r samples from the r + q available, without replacement.

These will serve mainly as benchmark baselines against which we will compare our proposed method, presented below, which is more sophisticated and, as we show empirically, more efficient. These baselines are presented to highlight that a more sophisticated reservoir strategy is needed to ensure competitive performance, rather than to serve as examples of state-of-the-art baselines. Our objective is to populate the reservoir with samples that allow for an optimal selection of weak learners, as close as possible to the choice we would make if we could keep all samples. The issue at hand is similar to that of feature selection: the selected samples should be jointly informative for choosing good weak learners. This forces us to find a proper balance between the individual importance of the kept samples (i.e. choosing those with large weights) and maximizing the heterogeneity of the weak learners' responses on them.

3.1 Greedy Edge Expectation Maximization

In the reservoir setting, given a set of samples A from which we must discard samples and retain only a subset B, what we would like is to retain a training set that is as representative as possible of the entire set A.
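The three baseline strategies can be sketched in a few lines of Python, here as functions returning the indexes of the r samples to keep (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def max_weights(w, r):
    # Max: keep the r samples with the largest Boosting weights.
    return np.argsort(w)[-r:]

def weighted_sampling(w, r):
    # WSam: sample r indexes without replacement, proportionally to w.
    return rng.choice(len(w), size=r, replace=False, p=w / w.sum())

def random_sampling(w, r):
    # Rand: uniform sampling without replacement, ignoring the weights.
    return rng.choice(len(w), size=r, replace=False)

w = rng.random(10)
for strategy in (max_weights, weighted_sampling, random_sampling):
    kept = strategy(w, 6)
    assert len(set(kept)) == 6        # r distinct samples survive
```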
Ideally, we would like B to be such that if we pick the optimal weak learner according to the samples it contains,

h* = argmax_{h ∈ H} ⟨h_B ◦ y_B, w_B⟩    (1)

it maximizes the same quantity estimated on all the samples in A, i.e. we want ⟨h*_A ◦ y_A, w_A⟩ to be large. There may be many weak learners in H that have exactly the same responses as h* on the samples in B, and since we consider a situation where we will not have access to the samples from A \ B anymore, we model the choice among these weak learners as a random choice. In that case, a good h* is one maximizing

E_{H ∼ U(H)} ( ⟨H_A ◦ y_A, ω_A⟩ | H_B = h*_B ),    (2)

that is, the average of the scores on the full set A of the weak learners which coincide with h* on the retained set B. We propose to model the distribution U(H) with a normal law. If H is picked uniformly in H, under a reasonable assumption of symmetry, we propose

H ◦ y ∼ N(µ, Σ)    (3)

where µ is the vector of dimension N of the expectations of the weak learner edges, and Σ is a covariance matrix of size N × N. Under this model, if B̄ = A \ B, and with Σ_{A,B} denoting an extracted sub-matrix, we have

E_{H ∼ U(H)} ( ⟨H_A ◦ y_A, ω_A⟩ | H_B = h*_B )    (4)
  = E_{H ◦ y ∼ N(µ,Σ)} ( ⟨H_A ◦ y_A, ω_A⟩ | H_B = h*_B )    (5)
  = ⟨h*_B ◦ y_B, ω_B⟩ + E_{H ◦ y ∼ N(µ,Σ)} ( ⟨H_B̄ ◦ y_B̄, ω_B̄⟩ | H_B = h*_B )    (6)
  = ⟨h*_B ◦ y_B, w_B⟩ + ⟨µ_B̄ + Σ_{B̄B} Σ^{-1}_{BB} (h*_B ◦ y_B − µ_B), w_B̄⟩    (7)

Though the modeling of the discrete variables H ◦ y by a continuous distribution may seem awkward, we point out two important aspects. Firstly, the parametric modeling allows for an analytical expression for the calculation of (2). Given that we seek to maximize this value over the possible subsets B of A, an analytic approach is necessary for the algorithm to retain tractability. Secondly, for a given vector of edges h*_B ◦ y_B on B, the vector µ_B̄ + Σ_{B̄B} Σ^{-1}_{BB} (h*_B ◦ y_B − µ_B) is not only the conditional expectation of h*_B̄ ◦ y_B̄, but also its optimal linear predictor in a least squares error sense.
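The "optimal linear predictor" claim can be checked numerically. The sketch below builds a synthetic covariance for the edges (with µ = 0), forms the conditional-mean matrix Σ_{B̄B} Σ_BB⁻¹ appearing in (7), and compares it to a least squares regression of the B̄ edges on the B edges over Gaussian samples; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, nB = 6, 3                         # |A| = 6 samples, keep |B| = 3
L = rng.normal(size=(n, n))
Sigma = L @ L.T + n * np.eye(n)      # a valid covariance for the edges H ◦ y
B, Bbar = np.arange(nB), np.arange(nB, n)

# Conditional expectation of the edges on A \ B given those on B (µ = 0):
#   E[H_B̄ ◦ y_B̄ | H_B ◦ y_B = v] = Σ_B̄B Σ_BB⁻¹ v
predictor = Sigma[np.ix_(Bbar, B)] @ np.linalg.inv(Sigma[np.ix_(B, B)])

# Least squares fit of the B̄ edges on the B edges recovers the same matrix.
X = rng.multivariate_normal(np.zeros(n), Sigma, size=200000)
ls, *_ = np.linalg.lstsq(X[:, B], X[:, Bbar], rcond=None)
assert np.allclose(ls.T, predictor, atol=0.05)

# The score (7) for a candidate edge vector v on B, with weights w:
w = rng.random(n)
v = np.ones(nB)                      # e.g. h* classifies B perfectly
score = w[B] @ v + w[Bbar] @ (predictor @ v)
```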
We note that choosing B based on (7) requires estimates of three quantities: the expected weak learner edges µ_A, the covariance matrix Σ_AA, and the weak learner h* trained on B. Given these quantities, we must also develop a tractable optimization scheme to find the B maximizing it.

3.2 Computing Σ and µ

As mentioned, the proposed method requires in particular an estimate of the vector of expected edges µ_A of the samples in A, as well as the corresponding covariance matrix Σ_AA. In practice, the estimation of the above depends on the nature of the weak learner family H. In the case of classification stumps, which we use in the experiments below, both these values can be calculated at small computational cost. A classification stump is a simple classifier h_{θ,α,d} which, for a given threshold θ ∈ R, polarity α ∈ {−1, 1}, and feature index d ∈ {1, . . . , D}, has the following form:

∀x ∈ R^D,  h_{θ,α,d}(x) = 1 if α x_d ≥ α θ, and −1 otherwise,    (8)

where x_d refers to the value of the dth component of x. In practice, when choosing the optimal stump for a given set of samples A, a learner would sort all the samples according to each of the D dimensions, and for each dimension d it would consider stumps with thresholds θ between two consecutive samples in that sorted list. For this family of stumps H, and given that we consider both polarities, E_h(h_A ◦ y_A) = 0. The covariance of the edges of two samples can also be calculated efficiently, with O(|A|^2 D) complexity. For two given samples i, j we have

∀h ∈ H,  y_i h_i y_j h_j ∈ {−1, 1}.    (9)

Having sorted the samples along a specific dimension d, we have that for α = 1, y_i h_i y_j h_j ≠ y_i y_j for those weak learners which disagree on these samples, i.e. those with min(x^d_i, x^d_j) < θ < max(x^d_i, x^d_j).
If I^d_i, I^d_j are the indexes of samples i and j in the sorted list, then there are |I^d_j − I^d_i| such disagreeing weak learners for α = 1 (plus the same quantity for α = −1). Given that for each dimension d there correspond 2(|A| − 1) weak learners in H, we reach the following update rule:

∀d, ∀{i, j}:  Σ_AA(i, j) += y_i y_j ( 2(|A| − 1) − 4 |I^d_j − I^d_i| )    (10)

where Σ_AA(i, j) refers to the (i, j) element of Σ. As can be seen, this leads to a cost of O(|A|^2 D). Given that commonly D ≫ |A|, this cost should not be much higher than O(D |A| log |A|), the cost of sorting along the D dimensions.

3.3 Choice of h*

As stated, the estimation of h* for a given B must be computationally efficient. We could further commit to the Gaussian assumption by defining p(h* = h), ∀h ∈ H, i.e. the probability that a weak learner h will be the chosen one given that it will be trained on B, and integrating over H; this, however, though consistent with the Gaussian assumption, is computationally prohibitive. Rather, we present here two cheap alternatives, both of which perform well in practice. The first and simplest strategy is to use ∀B, h* ◦ y_B = (1, . . . , 1), which is equivalent to assuming that the training process will result in a weak learner which performs perfectly on the training data B. This is exactly what the process will strive to achieve, however unlikely it may be to succeed. The second is to generate a number |H_Lattice| of weak learner edges by sampling on the {−1, 1}^|B| lattice using the Gaussian H ◦ y ∼ N(µ_B, Σ_BB) restricted to this lattice, and to keep the optimal h* = argmax_{h ∈ H_Lattice} ⟨h_B ◦ y_B, w_B⟩. We can further simplify this process by considering the whole set A and the lattice {−1, 1}^|A|, and simply extracting the values h*_B for the different subsets B. Though much more complex, this approach can be implemented extremely efficiently; experiments showed, however, that the simple rule ∀B, h* ◦ y_B = (1, . . . , 1) works just as well in practice and is considerably cheaper.
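Returning to the computation of Section 3.2, the closed-form accumulation of Σ_AA in (10) can be checked against brute-force enumeration of every stump. A small sketch under illustrative sizes (and assuming no ties along any dimension):

```python
import numpy as np

rng = np.random.default_rng(4)
n, D = 8, 3
X = rng.normal(size=(n, D))
y = rng.choice([-1, 1], size=n)

# Closed-form accumulation of Σ_AA per eq. (10), O(|A|^2 D):
Sigma = np.zeros((n, n))
for d in range(D):
    rank = np.argsort(np.argsort(X[:, d]))        # index I^d in the sorted list
    diff = np.abs(rank[:, None] - rank[None, :])  # |I^d_j - I^d_i|
    Sigma += np.outer(y, y) * (2 * (n - 1) - 4 * diff)

# Brute force: enumerate every stump h_{θ,α,d} and sum y_i h_i y_j h_j.
brute = np.zeros((n, n))
for d in range(D):
    xs = np.sort(X[:, d])
    for theta in (xs[:-1] + xs[1:]) / 2:          # thresholds between neighbours
        for alpha in (-1, 1):
            h = np.where(alpha * X[:, d] >= alpha * theta, 1, -1)
            brute += np.outer(y * h, y * h)

assert np.array_equal(Sigma, brute)
```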
In the following experiments we present results solely for this first rule.

3.4 Greedy Calculation of argmax_B

Despite the analytical formulation offered by our Gaussian assumption, an exact maximization over all possible subsets remains computationally intractable. For this reason we propose a greedy approach to building the reservoir population which is computationally bounded. We initialize the set B = A, i.e. initially we assume we are keeping all the samples, and calculate Σ^{-1}_{BB}. The greedy process then iteratively goes through the |B| samples in B and finds the sample j such that for B′ = B \ {j} the value

⟨Σ_{B̄′B′} Σ^{-1}_{B′B′} (h*_{B′} ◦ y_{B′}), w_{B̄′}⟩ + ⟨h*_{B′} ◦ y_{B′}, w_{B′}⟩    (11)

is maximized, where, in this context, h* refers to the weak learner chosen by training on B′. This process is repeated q times, i.e. until |B̄| = q, discarding one sample at each iteration. In the experiments presented here, we stop the greedy subset selection after these q steps. In practice, however, the subset selection could continue by choosing pairs k, j to swap between the two sets. In our experiments we did not notice any gain from further optimization of the subset B.

3.5 Evaluation of E(⟨h*_A, w_A⟩ | B)

Each step in the above greedy process requires going through all the samples j in the current B and calculating E(⟨h*_A, w_A⟩ | B′) for B′ = B \ {j}. In order for our method to be computationally tractable we must be able to compute this value with a limited computational cost. The naive approach of calculating the value from scratch for each j would cost O(|B′|^3 + |B̄′||B|). The main computational cost here is the first factor, incurred in calculating the inverse of the covariance matrix Σ_{B′B′}, which results from the matrix Σ_BB by removing a single row and column. It is thus important to be able to perform this calculation at a low computational cost.
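As an illustration, here is a minimal numerical sketch of the greedy backward elimination, using the simple rule h* ◦ y_B = (1, ..., 1) and µ = 0, together with a check of the row/column-deletion inverse update of Section 3.5.1 below that makes each step cheap. The per-candidate solve shown here is the naive O(|B|^3) variant; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, q = 8, 3
L = rng.normal(size=(n, n))
Sigma = L @ L.T + n * np.eye(n)          # covariance of the edges on A
w = rng.random(n)                        # Boosting weights on A

def score(B):
    # Value (11) with the simple rule h* ◦ y_B = (1, ..., 1) and µ = 0.
    Bbar = np.setdiff1d(np.arange(n), B)
    v = np.ones(len(B))
    pred = Sigma[np.ix_(Bbar, B)] @ np.linalg.solve(Sigma[np.ix_(B, B)], v)
    return w[B] @ v + w[Bbar] @ pred

B = list(range(n))                       # start from B = A
for _ in range(q):                       # greedily discard q samples
    worst = max(B, key=lambda j: score([i for i in B if i != j]))
    B.remove(worst)

# Row/column-deletion inverse update in the spirit of eq. (12): replacing
# row and column j of M by the basis vector e_j gives
#   M_{e_j}^{-1} = M^{-1} - (1 / M^{-1}_{jj}) M^{-1}_{*j} M^{-1}_{j*} + e_j^T e_j
M, Minv, j = Sigma, np.linalg.inv(Sigma), 2
upd = Minv - np.outer(Minv[:, j], Minv[j, :]) / Minv[j, j]
upd[j, j] = 1.0                          # the + e_j^T e_j term
keep = [i for i in range(n) if i != j]
assert np.allclose(upd[np.ix_(keep, keep)],
                   np.linalg.inv(M[np.ix_(keep, keep)]))
```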
3.5.1 Updating Σ^{-1}_{B′B′}

For a given matrix M and its inverse M^{-1}, we would like to efficiently calculate the inverse of M_{−j}, which results from M by the deletion of row and column j. It can be shown that the inverse of the matrix M_{e_j}, which results from M by the substitution of row and column j by the basis vector e_j, is given by the following formula:

M^{-1}_{e_j} = M^{-1} − (1 / M^{-1}_{jj}) M^{-1}_{*j} M^{-1}_{j*} + e_j^T e_j    (12)

where M_{*j} stands for the vector of elements of the jth column of matrix M and M_{j*} stands for the vector of elements of its jth row. We omit the proof (a relatively straightforward manipulation of the Sherman-Morrison formula) due to space constraints. The inverse M^{-1}_{−j} can be recovered by simply removing the jth row and column of M^{-1}_{e_j}. Based on this, we can compute Σ^{-1}_{B′B′} in O(|B|^2). We further exploit the fact that the matrices Σ_{B̄′B′} and Σ^{-1}_{B′B′} enter into the calculations through the products Σ^{-1}_{B′B′} h*_{B′} and w^T_{B̄′} Σ_{B̄′B′}. Thus, by pre-calculating the products Σ^{-1}_{BB} h*_B and w^T_{B̄} Σ_{B̄B} once at the beginning of each greedy optimization step, we incur a cost of O(|B|) for each sample j and an O(|B|^2) cost overall.

3.6 Weights w̃_B

GEEM provides a method for selecting which samples to keep and which to discard. However, in doing so it creates a biased sample B of the set A, and consequently the weights w_B are not representative of the weight distribution w_A. It is thus necessary to alter the weights w_B to obtain a new weight vector w̃_B which takes this bias into account.
Based on the assumptions (3) and (7), and the fact that µ_A = 0, we set

w̃_B = w_B + w^T_{B̄} Σ_{B̄B} Σ^{-1}_{BB}    (13)

The resulting weight vector w̃_B, used to pick the weak learner h*, correctly reflects the entire set A = R_t ∪ Q_t (under the Gaussian assumption).

3.7 Overall Complexity

The proposed method GEEM comprises, at each Boosting iteration, three main steps: (1) the calculation of Σ_AA, (2) the optimization of B, and (3) the training of the weak learner h_t. The third step is common to all the reservoir strategies presented here. In the case of classification stumps, by presorting the samples along each dimension and exploiting the structure of the hypothesis space H, we incur a cost of O(D|B| log |B|), where D is the dimensionality of the input space. The first step, as mentioned, incurs a cost of O(|A|^2 D) if we go through all dimensions of the data. However, the minimum objective of acquiring an invertible matrix Σ_AA can be met by looking at only |A| dimensions, incurring a cost of O(|A|^3). Finally, the second step, as analyzed in the previous section, incurs a cost of O(q|A|^2). Thus the overall complexity of the proposed method is O(|A|^3 + D|A| log |A|), which in practice should not be significantly larger than O(D|B| log |B|), the cost of the remaining reservoir strategies. We note that this analysis ignores the cost of processing the incoming samples Q_t, which is also common to all strategies; depending on the task, this cost may handily dominate all others.

4 Experiments

In order to experimentally validate both the framework of reservoir boosting and the proposed method GEEM, we conducted experiments on four popular computer vision datasets. In all our experiments we use LogitBoost for training. It attempts to minimize the logistic loss, which is less aggressive than the exponential loss. Original experiments with the exponential loss in the reservoir setting showed it to be unstable and to lead to degraded performance for all the reservoir strategies presented here.
In [14] the authors performed an extensive comparison in an online setting and also found LogitBoost to yield the best results. We set the number of weak learners T in the boosted classifier to T = 250, common to all methods. In the case of the online boosting algorithms this translates to fixing the number of weak learners. Finally, for the methods that use a reservoir – that is, GEEM and the baselines outlined in Section 3 – we set r = q. Thus at every iteration, the reservoir is populated with |R_t| = r samples and the algorithm receives a further |Q_t| = r samples from the filter. The reservoir strategy is then used to discard r of these samples to build R_{t+1}.

4.1 Data-sets

We used four standard datasets:

CIFAR-10 is a recognition dataset consisting of 32 × 32 images of 10 distinct classes depicting vehicles and animals. The training data consists of 5000 images of each class. We pre-process the data as in [5] using code provided by the authors.

MNIST is a well-known optical digit recognition dataset comprising 60000 images of size 28 × 28 of digits from 0–9. We do not pre-process the data in any way, using the raw pixels as features.

INRIA is a pedestrian detection dataset. The training set consists of 12180 images of size 64 × 128 of both pedestrians and background images, from which we extract HoG features [7].

STL-10 is an image recognition dataset consisting of images of size 96 × 96 belonging to 10 classes, each represented by 500 images in the training set. We pre-process the data as for CIFAR.

4.2 Baselines

The baselines for the reservoir strategy have already been outlined in Section 3, and we also benchmarked three online Boosting algorithms: Oza [15], Chen [4], and Bisc [11]. The first two algorithms treat weak learners as a black box but predefine their number.
We initialize the weak learners of these approaches by running LogitBoost offline using a subset of the training set, as we found that randomly sampling the weak learners led to very poor performance; thus, though they are online algorithms, in the experiments presented here they are afforded an offline initialization step. Note that these approaches are not mutually exclusive with the proposed method, as the weak learners picked by GEEM can be combined with an online boosting algorithm optimizing their coefficients. For the final method [11], we set the number of selectors to K = 250, resulting in the same number of weak learners as for the other methods. We also conducted experiments with [14], which is closely related to [11]; however, as it performed consistently worse than [11], we do not show those results here. Finally, we compared our method against two sub-sampling methods that have access to the full dataset and subsample r samples using a weighted sampling routine. At each iteration, these methods compute the boosting weights of all the samples in the dataset and use weighted sampling to obtain a subset R_t. The first method is a simple weighted sampling method (WSS), while the second is MadaBoost (Mada), which combines weighted sampling with weight adjustment for the sub-sampled samples. We furthermore show a comparison with a fixed reservoir baseline (Fix); this baseline subsamples the dataset once prior to learning and then trains the ensemble using offline AdaBoost, so the contents of the reservoir in this case do not change from iteration to iteration.

5 Results and Discussion

Tables 3, 4, and 5 list respectively the performance of the reservoir baselines, the online Boosting techniques, and the sub-sampling methods. Each table also presents the performance of our GEEM approach in the same settings.
Table 3: Test Accuracy on the four datasets for the different reservoir strategies

Dataset | Max r=100    | Max r=250    | Rand r=100   | Rand r=250   | WSam r=100   | WSam r=250   | GEEM r=100   | GEEM r=250
CIFAR   | 29.59 (0.59) | 29.16 (0.71) | 46.02 (0.35) | 45.88 (0.24) | 48.92 (0.34) | 50.09 (0.24) | 50.96 (0.36) | 54.87 (0.28)
STL     | 30.20 (0.75) | 30.72 (0.82) | 39.25 (0.32) | 39.40 (0.25) | 41.60 (0.39) | 42.93 (0.30) | 42.40 (0.65) | 45.70 (0.38)
INRIA   | 95.57 (0.49) | 96.31 (0.37) | 91.54 (0.49) | 91.72 (0.35) | 94.29 (0.23) | 94.63 (0.30) | 97.21 (0.21) | 97.52 (0.13)
MNIST   | 66.74 (1.45) | 68.25 (0.81) | 79.97 (0.24) | 79.59 (0.22) | 83.96 (0.29) | 84.07 (0.23) | 84.66 (0.30) | 84.33 (0.33)

Table 4: Comparison of GEEM with online boosting algorithms

Dataset | Chen         | Bisc         | Oza          | GEEM (r=250)
CIFAR   | 39.40 (1.91) | 45.03 (0.93) | 49.16 (0.40) | 54.87 (0.28)
STL     | 33.09 (1.49) | 36.35 (0.49) | 39.98 (0.56) | 45.70 (0.38)
INRIA   | 94.23 (0.97) | 95.65 (0.38) | 95.50 (0.49) | 97.53 (0.13)
MNIST   | 80.99 (1.11) | 85.25 (0.82) | 84.85 (0.54) | 84.33 (0.33)

Table 5: Comparison of GEEM with subsampling algorithms

Dataset | WSS r=100    | WSS r=250    | Mada r=100   | Mada r=250   | Fix r=1,000  | Fix r=2,500  | GEEM r=100   | GEEM r=250
CIFAR   | 50.38 (0.38) | 51.66 (0.30) | 48.87 (0.26) | 49.44 (0.33) | 48.41 (0.88) | 52.40 (0.77) | 50.96 (0.36) | 54.87 (0.28)
STL     | 42.54 (0.35) | 44.07 (0.31) | 41.36 (0.32) | 42.34 (0.24) | 42.04 (0.19) | 46.07 (0.41) | 42.40 (0.65) | 45.70 (0.38)
INRIA   | 94.24 (0.30) | 94.65 (0.16) | 94.26 (0.27) | 94.65 (0.10) | 92.46 (0.67) | 93.82 (0.74) | 97.21 (0.21) | 97.53 (0.13)
MNIST   | 84.21 (0.27) | 84.51 (0.16) | 79.00 (0.33) | 78.99 (0.31) | 85.37 (0.33) | 88.02 (0.15) | 84.66 (0.30) | 84.33 (0.33)

As can be seen, GEEM outperforms the other reservoir strategies on three of the four datasets and performs on par with the best on the fourth (MNIST). It also outperforms the online Boosting techniques on three datasets and performs on par with the best baselines on MNIST. Finally, GEEM performs better than all the sub-sampling algorithms. Note that the Fix baseline had to be provided with ten times the number of samples to reach a similar level of performance.
These results demonstrate that both the reservoir framework we propose for Boosting, and the specific GEEM algorithm, provide performance greater than or on par with existing state-of-the-art methods. When compared with the other reservoir strategies, GEEM suffers from a larger complexity, which translates to a longer training time. For the INRIA dataset and r = 100, GEEM requires circa 70 seconds for training as opposed to 50 for the WSam strategy, while for r = 250 GEEM takes approximately 320 seconds to train compared to 70 for WSam. We note, however, that even when equating training time, which translates to using r = 100 for GEEM and r = 250 for WSam, GEEM still outperforms the simpler reservoir strategies. The timing results on the other three datasets were similar in this respect. Many points can still be improved. In our ongoing research we are investigating different approaches to modeling the process of evaluating h*; of particular importance, of course, is that it be both reasonable and fast to compute. One approach is to consider the maximum a posteriori value of h* by drawing on elements of extreme value theory. We further plan to adapt this framework, and the proposed method, to a series of other settings. It could be applied in the context of parallel processing, where a dataset can be split among CPUs, each training a classifier on a different portion of the data. Finally, we are also investigating the method's suitability for active learning tasks and dataset creation. We note that the proposed method GEEM is not given information concerning the labels of the samples, but simply the expectation and covariance matrix of the edges.

Acknowledgments

This work was supported by the European Community's Seventh Framework Programme FP7 Challenge 2 - Cognitive Systems, Interaction, Robotics - under grant agreement No 247022 - MASH.

References

[1] Antoine Bordes, Seyda Ertekin, Jason Weston, and Léon Bottou. Fast kernel classifiers with online and active learning. J.
Mach. Learn. Res., 6:1579–1619, December 2005.

[2] Joseph K. Bradley and Robert E. Schapire. FilterBoost: Regression and classification on large datasets. In NIPS, 2007.

[3] Nicolò Cesa-Bianchi and Claudio Gentile. Tracking the best hyperplane with a simple budget perceptron. In Proc. of the Nineteenth Annual Conference on Computational Learning Theory, pages 483–498. Springer-Verlag, 2006.

[4] Shang-Tse Chen, Hsuan-Tien Lin, and Chi-Jen Lu. An online boosting algorithm with theoretical justifications. In ICML, pages 1007–1014, New York, NY, USA, July 2012. Omnipress.

[5] Adam Coates and Andrew Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 921–928, New York, NY, USA, June 2011. ACM.

[6] Koby Crammer, Jaz S. Kandola, and Yoram Singer. Online classification on a budget. In NIPS. MIT Press, 2003.

[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR 2005, volume 1, pages 886–893, 2005.

[8] Ofer Dekel and Yoram Singer. Support vector machines on a budget. In NIPS, pages 345–352, 2006.

[9] Carlos Domingo and Osamu Watanabe. MadaBoost: A modification of AdaBoost. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, COLT '00, pages 180–189, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.

[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119–139, August 1997.

[11] Helmut Grabner and Horst Bischof. On-line boosting and vision. In CVPR (1), pages 260–267, 2006.
[12] Mihajlo Grbovic and Slobodan Vucetic. Tracking concept change with incremental boosting by minimization of the evolving exponential loss. In ECML PKDD, pages 516–532, Berlin, Heidelberg, 2011. Springer-Verlag.

[13] Zdenek Kalal, Jiri Matas, and Krystian Mikolajczyk. Weighted sampling for large-scale boosting. In BMVC, 2008.

[14] C. Leistner, A. Saffari, P. M. Roth, and H. Bischof. On robustness of on-line boosting - a competitive study. In ICCV Workshops, pages 1362–1369, 2009.

[15] Nikunj C. Oza and Stuart Russell. Online bagging and boosting. In Artificial Intelligence and Statistics 2001, pages 105–112. Morgan Kaufmann, 2001.

[16] Jason Weston, Antoine Bordes, and Léon Bottou. Online (and offline) on an even tighter budget. In Artificial Intelligence and Statistics 2005, 2005.
Faster Ridge Regression via the Subsampled Randomized Hadamard Transform

Yichao Lu1 Paramveer S. Dhillon2 Dean Foster1 Lyle Ungar2
1Statistics (Wharton School), 2Computer & Information Science
University of Pennsylvania, Philadelphia, PA, U.S.A
{dhillon|ungar}@cis.upenn.edu, foster@wharton.upenn.edu, yichaolu@sas.upenn.edu

Abstract

We propose a fast algorithm for ridge regression when the number of features is much larger than the number of observations (p ≫ n). The standard way to solve ridge regression in this setting works in the dual space and gives a running time of O(n^2 p). Our algorithm, Subsampled Randomized Hadamard Transform-Dual Ridge Regression (SRHT-DRR), runs in time O(np log(n)) and works by preconditioning the design matrix with a Randomized Walsh-Hadamard Transform and a subsequent subsampling of features. We provide risk bounds for our SRHT-DRR algorithm in the fixed design setting and show experimental results on synthetic and real datasets.

1 Introduction

Ridge Regression, which penalizes the ℓ2 norm of the weight vector and shrinks it towards zero, is the most widely used penalized regression method. It is of particular interest in the p > n case (p is the number of features and n is the number of observations), as standard ordinary least squares regression (OLS) breaks down in this setting. This setting is even more relevant in today's age of 'Big Data', where it is common to have p ≫ n. Thus, efficient algorithms for solving ridge regression are highly desirable. The current method of choice for efficiently solving RR is [19], which works in the dual space and has a running time of O(n^2 p), which can be slow for huge p. As the runtime suggests, the bottleneck is the computation of XX⊤, where X is the design matrix. An obvious way to speed up the algorithm is to subsample the columns of X.
For example, suppose X has rank k. If we randomly subsample p_subs of the p features (k < p_subs ≪ p), then the matrix multiplication can be performed in O(n^2 p_subs) time, which is very fast! However, this speed-up comes with a big caveat. If all the signal in the problem were carried in just one of the p features, and we missed this feature while sampling, we would miss all the signal. A parallel and recently popular line of research for solving large scale regression involves using some kind of random projection, for instance, transforming the data with a randomized Hadamard transform [1] or Fourier transform, then uniformly sampling observations from the resulting transformed matrix and estimating OLS on this smaller data set. The intuition behind this approach is that these frequency domain transformations uniformize the data and smear the signal across all the observations, so that there are no longer any high leverage points whose omission could unduly influence the parameter estimates. Hence, uniform sampling in this transformed space suffices. This approach can also be viewed as preconditioning the design matrix with a carefully constructed data-independent random matrix. This transformation followed by subsampling has been used in a variety of variations, including the Subsampled Randomized Hadamard Transform (SRHT) [4, 6] and the Subsampled Randomized Fourier Transform (SRFT) [22, 17]. In this paper, we build on the above line of research and provide a fast algorithm for ridge regression (RR) which applies a Randomized Hadamard transform to the columns of the X matrix and then samples p_subs = O(n) columns. This allows the bottleneck matrix multiplication in the dual RR to be computed in O(np log(n)) time, so we call our algorithm Subsampled Randomized Hadamard Transform-Dual Ridge Regression (SRHT-DRR).
In addition to being computationally efficient, we also prove that in the fixed design setting SRHT-DRR only increases the risk by a factor of (1 + C √(k/p_subs)) (where k is the rank of the data matrix) w.r.t. the true RR solution.

1.1 Related Work

Using randomized algorithms to handle large matrices is an active area of research, and they have been used in a variety of setups. Most of these algorithms involve a step that randomly projects the original large matrix down to lower dimensions [9, 16, 8]. [14] uses a matrix of i.i.d. Gaussian elements to construct a preconditioner for least squares, which makes the problem well conditioned. However, computing such a random projection is still expensive, as it requires multiplying a huge data matrix by another random dense matrix. [18] introduced the idea of using structured random projections to make matrix multiplication substantially faster. Recently, several randomized algorithms have been developed for kernel approximation. [3] provided a fast method for low rank kernel approximation by randomly selecting q samples to construct a rank q approximation of the original kernel matrix. Their approximation can reduce the cost to O(nq^2). [15] introduced a random sampling scheme to approximate symmetric kernels, and [12] accelerates [15] by applying the Walsh-Hadamard transform. Although our paper and these papers can all be understood from a kernel approximation point of view, we are working in the p ≫ n ≫ 1 case while they focus on large n. Also, it is worth distinguishing our setup from standard kernel learning. Kernel methods enable learning models to take into account a much richer feature space than the original space and at the same time to compute the inner products in these high dimensional spaces efficiently. In our p ≫ n ≫ 1 setup, we already have a rich enough feature space, and it suffices to consider the linear kernel XX⊤ (see Footnote 1).
Therefore, in this paper we propose a randomized scheme to reduce the dimension of X and accelerate the computation of XX⊤.

2 Faster Ridge Regression via SRHT

In this section we first review the traditional dual solution of RR and its computational cost. Then we introduce our algorithm SRHT-DRR for faster estimation of RR.

2.1 Ridge Regression

Let X be the n × p design matrix containing n i.i.d. samples from the p-dimensional independent variable (a.k.a. "covariates" or "predictors") X such that p ≫ n. Y is the real-valued n × 1 response vector which contains the n corresponding values of the dependent variable Y. ϵ is the n × 1 homoskedastic noise vector with common variance σ². Let $\hat{\beta}_\lambda$ be the solution of the RR problem, i.e.

$$\hat{\beta}_\lambda = \arg\min_{\beta \in \mathbb{R}^{p \times 1}} \frac{1}{n}\|Y - X\beta\|^2 + \lambda\|\beta\|^2 \qquad (1)$$

The solution to Equation (1) is $\hat{\beta}_\lambda = (X^\top X + n\lambda I_p)^{-1}X^\top Y$. The step that dominates the computational cost is the matrix inversion, which takes O(p³) FLOPS and is extremely slow when p ≫ n ≫ 1. A straightforward improvement is to solve Equation (1) in the dual space. By the change of variables β = X⊤α, where α ∈ ℝ^{n×1}, and further letting K = XX⊤, the optimization problem becomes

$$\hat{\alpha}_\lambda = \arg\min_{\alpha \in \mathbb{R}^{n \times 1}} \frac{1}{n}\|Y - K\alpha\|^2 + \lambda\,\alpha^\top K\alpha \qquad (2)$$

and the solution is $\hat{\alpha}_\lambda = (K + n\lambda I_n)^{-1}Y$, which directly gives $\hat{\beta}_\lambda = X^\top\hat{\alpha}_\lambda$. Please see [19] for a detailed derivation of this dual solution. In the p ≫ n case, the step that dominates the computational cost of the dual solution is computing the linear kernel matrix K = XX⊤, which takes O(n²p) FLOPS. This is regarded as the computational cost of the true RR solution in our setup. Since our algorithm SRHT-DRR uses the Subsampled Randomized Hadamard Transform (SRHT), some introduction to SRHT is warranted.
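The primal/dual equivalence above is easy to check numerically. A minimal sketch in plain NumPy (toy dimensions of our choosing, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 20, 500, 0.1          # p >> n regime
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)

# Primal solution: (X'X + n*lam*I_p)^{-1} X'Y  -- inverts a p x p matrix
beta_primal = np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)

# Dual solution: alpha = (K + n*lam*I_n)^{-1} Y with K = XX' -- inverts n x n
K = X @ X.T
alpha = np.linalg.solve(K + n * lam * np.eye(n), Y)
beta_dual = X.T @ alpha

assert np.allclose(beta_primal, beta_dual)
```

The two solves agree because $(X^\top X + cI_p)^{-1}X^\top = X^\top(XX^\top + cI_n)^{-1}$, and the dual version only factors an n × n matrix.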
2.2 Definition and Properties of SRHT

Following [20], for p = 2^q where q is any positive integer, a SRHT can be defined as a p_subs × p (p > p_subs) matrix of the form

$$\Theta = \sqrt{\frac{p}{p_{subs}}}\, RHD$$

where

• R is a random p_subs × p matrix whose rows are p_subs uniform samples (without replacement) from the standard basis of ℝ^p.
• H ∈ ℝ^{p×p} is a normalized Walsh-Hadamard matrix. The Walsh-Hadamard matrix of size p × p is defined recursively:

$$H_p = \begin{bmatrix} H_{p/2} & H_{p/2} \\ H_{p/2} & -H_{p/2} \end{bmatrix} \quad \text{with} \quad H_2 = \begin{bmatrix} +1 & +1 \\ +1 & -1 \end{bmatrix}.$$

$H = \frac{1}{\sqrt{p}}H_p$ is a rescaled version of $H_p$.
• D is a p × p diagonal matrix whose diagonal elements are i.i.d. Rademacher random variables.

Two key features make SRHT a good candidate for accelerating RR when p ≫ n. Firstly, due to the recursive structure of the H matrix, it takes only O(p log(p_subs)) FLOPS to compute Θv, where v is a generic p × 1 dense vector, while for an arbitrary unstructured p_subs × p dense matrix A, computing Av costs O(p_subs p) FLOPS. Secondly, after projecting any matrix W ∈ ℝ^{p×k} with orthonormal columns down to low dimensions with SRHT, the columns of ΘW ∈ ℝ^{p_subs×k} are still approximately orthonormal. The following lemma characterizes this property:

Lemma 1. Let W be a p × k (p > k) matrix with W⊤W = I_k. Let Θ be a p_subs × p SRHT matrix where p > p_subs > k. Then with probability at least $1 - (\delta + \frac{p}{e^k})$,

$$\|(\Theta W)^\top \Theta W - I_k\|_2 \;\le\; \sqrt{\frac{c\,\log\!\left(\frac{2k}{\delta}\right)k}{p_{subs}}} \qquad (3)$$

The bound is in terms of the spectral norm of the matrix. The proof of this lemma is in the Appendix. The tools for the random matrix theory part of the proof come from [20] and [21]. [10] also provided similar results.

2.3 The Algorithm

Our fast algorithm, SRHT-DRR, is described below:

SRHT-DRR
Input: Dataset X ∈ ℝ^{n×p}, response Y ∈ ℝ^{n×1}, and subsampling size p_subs.
Output: The weight parameter β ∈ ℝ^{p_subs×1}.
• Compute the SRHT of the data: X_H = XΘ⊤.
• Compute K_H = X_H X_H⊤.
• Compute α_{H,λ} = (K_H + nλI_n)^{-1}Y, which is the solution of Equation (2) obtained by replacing K with K_H.
• Compute β_{H,λ} = X_H⊤ α_{H,λ}.

Since SRHT is only defined for p = 2^q with q an integer, if the dimension p is not a power of 2 we can append a block of zero columns to the feature matrix X to make the dimension a power of 2.

Remark 1. Consider the computational cost of SRHT-DRR. Computing X_H takes O(np log(p_subs)) FLOPS [2, 6]. Once we have X_H, computing α_{H,λ} costs O(n²p_subs) FLOPS, the dominating step being the computation of K_H = X_H X_H⊤. So the total computational cost for computing α_{H,λ} is O(np log(p_subs) + n²p_subs), compared with O(n²p) for the true RR. We discuss how large p_subs should be after stating the main theorem.

3 Theory

In this section we bound the risk of SRHT-DRR and compare it with the risk of the true dual ridge estimator in the fixed design setting. As earlier, let X be an arbitrary n × p design matrix such that p ≫ n, and let Y = Xβ + ϵ, where ϵ is the n × 1 homoskedastic noise vector with mean 0 and common variance σ². [5] and [3] did similar analyses of the risk of RR under similar fixed design setups.

We first provide a corollary to Lemma 1 which will be helpful in the subsequent theory.

Corollary 1. Let k be the rank of X. With probability at least $1 - (\delta + \frac{p}{e^k})$,

$$(1 - \Delta)K \preceq K_H \preceq (1 + \Delta)K \qquad (4)$$

where $\Delta = C\sqrt{\frac{k \log(2k/\delta)}{p_{subs}}}$ (for p.s.d. matrices, G ⪰ L means G − L is p.s.d.).

Proof. Let X = UDV⊤ be the SVD of X, where U ∈ ℝ^{n×k} and V ∈ ℝ^{p×k} have orthonormal columns and D ∈ ℝ^{k×k} is diagonal. Then $K_H = UD(V^\top\Theta^\top\Theta V)DU^\top$. Lemma 1 directly implies $(1-\Delta)I_k \preceq V^\top\Theta^\top\Theta V \preceq (1+\Delta)I_k$ with probability at least $1 - (\delta + \frac{p}{e^k})$. Left-multiplying by UD and right-multiplying by DU⊤ completes the proof.

3.1 Risk Function for Ridge Regression

Let Z = E_ϵ(Y) = Xβ. The risk of any prediction $\hat{Y} \in \mathbb{R}^{n\times 1}$ is $\frac{1}{n}E_\epsilon\|\hat{Y} - Z\|^2$. For any n × n symmetric positive definite matrix M, define the risk function

$$R(M) = \frac{\sigma^2}{n}\mathrm{Tr}\left[M^2(M + n\lambda I_n)^{-2}\right] + n\lambda^2 Z^\top(M + n\lambda I_n)^{-2}Z \qquad (5)$$

Lemma 2.
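The algorithm above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code: the fast Walsh-Hadamard transform below is a simple O(p log p) iteration (it does not exploit the O(np log(p_subs)) trick from [2, 6]) and it assumes p is a power of 2.

```python
import numpy as np

def fwht(A):
    """Apply the normalized Walsh-Hadamard transform to each row of A.
    A has p = 2^q columns; returns A @ H with H = H_p / sqrt(p)."""
    A = A.astype(float).copy()
    p = A.shape[1]
    h = 1
    while h < p:
        for i in range(0, p, 2 * h):          # butterfly over blocks of size 2h
            x = A[:, i:i + h].copy()
            y = A[:, i + h:i + 2 * h].copy()
            A[:, i:i + h] = x + y
            A[:, i + h:i + 2 * h] = x - y
        h *= 2
    return A / np.sqrt(p)                     # normalize so H is orthonormal

def srht_drr(X, Y, p_subs, lam, rng):
    """SRHT-DRR sketch: X_H = X Theta^T, then dual ridge on K_H = X_H X_H^T."""
    n, p = X.shape
    D = rng.choice([-1.0, 1.0], size=p)       # Rademacher signs (the D matrix)
    XH = fwht(X * D)                          # X D H (H is symmetric)
    cols = rng.choice(p, size=p_subs, replace=False)   # the R matrix
    XH = XH[:, cols] * np.sqrt(p / p_subs)    # subsample + rescale
    KH = XH @ XH.T
    alpha = np.linalg.solve(KH + n * lam * np.eye(n), Y)
    return XH, alpha                          # beta_H = XH.T @ alpha

# Sanity check: with p_subs = p, Theta is orthogonal up to a column
# permutation, so K_H equals K = X X^T exactly.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((5, 16)), rng.standard_normal(5)
XH, alpha = srht_drr(X, Y, 16, 0.1, rng)
assert np.allclose(XH @ XH.T, X @ X.T)
```

With p_subs < p the equality becomes approximate, with the distortion controlled by Corollary 1.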
Under the fixed design setting, the risk of the true RR solution is R(K) and the risk of SRHT-DRR is R(K_H).

Proof. The risk of the SRHT-DRR estimator is

$$\begin{aligned}
\frac{1}{n}E_\epsilon\|K_H\alpha_{H,\lambda} - Z\|^2
&= \frac{1}{n}E_\epsilon\|K_H(K_H + n\lambda I_n)^{-1}Y - Z\|^2 \\
&= \frac{1}{n}E_\epsilon\|K_H(K_H + n\lambda I_n)^{-1}Y - E_\epsilon(K_H(K_H + n\lambda I_n)^{-1}Y)\|^2 + \frac{1}{n}\|E_\epsilon(K_H(K_H + n\lambda I_n)^{-1}Y) - Z\|^2 \\
&= \frac{1}{n}E_\epsilon\|K_H(K_H + n\lambda I_n)^{-1}\epsilon\|^2 + \frac{1}{n}\|K_H(K_H + n\lambda I_n)^{-1}Z - Z\|^2 \\
&= \frac{1}{n}\mathrm{Tr}\left[K_H^2(K_H + n\lambda I_n)^{-2}E_\epsilon(\epsilon\epsilon^\top)\right] + \frac{1}{n}Z^\top\left(I_n - K_H(K_H + n\lambda I_n)^{-1}\right)^2 Z \\
&= \frac{\sigma^2}{n}\mathrm{Tr}\left[K_H^2(K_H + n\lambda I_n)^{-2}\right] + n\lambda^2 Z^\top(K_H + n\lambda I_n)^{-2}Z \qquad (6)
\end{aligned}$$

Note that the expectation here is only over the random noise ϵ and is conditional on the randomized Hadamard transform. The calculation is the same for the ordinary estimator. In the risk function, the first term is the variance and the second term is the bias.

3.2 Risk Inflation Bound

The following theorem bounds the risk inflation of SRHT-DRR relative to the true RR solution.

Theorem 1. Let k be the rank of the X matrix. With probability at least $1 - (\delta + \frac{p}{e^k})$,

$$R(K_H) \le (1 - \Delta)^{-2}R(K) \qquad (7)$$

where $\Delta = C\sqrt{\frac{k \log(2k/\delta)}{p_{subs}}}$.

Proof. For any p.s.d. matrix M ∈ ℝ^{n×n}, define

$$B(M) = n\lambda^2 Z^\top(M + n\lambda I_n)^{-2}Z, \qquad V(M) = \frac{\sigma^2}{n}\mathrm{Tr}\left[M^2(M + n\lambda I_n)^{-2}\right],$$

so that R(M) = V(M) + B(M). By [3], B(M) is non-increasing in M and V(M) is non-decreasing in M. When Equation (4) holds,

$$R(K_H) = V(K_H) + B(K_H) \le V((1+\Delta)K) + B((1-\Delta)K) \le (1+\Delta)^2 V(K) + (1-\Delta)^{-2}B(K) \le (1-\Delta)^{-2}(V(K) + B(K)) = (1-\Delta)^{-2}R(K).$$

Remark 2. Theorem 1 gives an idea of how large p_subs should be. If ∆ (which controls the risk inflation ratio) is held fixed, we get $p_{subs} = C\,\frac{k\log(2k/\delta)}{\Delta^2} = O(k)$. If we further assume that X is full rank, i.e. k = n, then it suffices to choose p_subs = O(n). Combining this with Remark 1, we see that the cost of computing X_H is O(np log(n)). Hence, in the ideal setup where p is huge, so that the dominating step of SRHT-DRR is computing X_H, the computational cost of SRHT-DRR is O(np log(n)) FLOPS.
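The closed-form risk in Equation (5) (and hence Lemma 2) is easy to sanity-check by Monte Carlo. A small sketch on a toy fixed design of our own choosing (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam, sigma = 8, 32, 0.5, 1.0
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
Z = X @ beta                                    # noiseless response E[Y]
K = X @ X.T
Minv = np.linalg.inv(K + n * lam * np.eye(n))   # (K + n*lam*I)^{-1}
Minv2 = Minv @ Minv

# Closed-form risk R(K): variance term + bias term, as in Equation (5)
risk = (sigma**2 / n) * np.trace(K @ K @ Minv2) + n * lam**2 * (Z @ Minv2 @ Z)

# Monte Carlo estimate of (1/n) E || K alpha_hat - Z ||^2
trials, err = 20000, 0.0
for _ in range(trials):
    Y = Z + sigma * rng.standard_normal(n)
    alpha = Minv @ Y                            # dual RR solution
    err += np.sum((K @ alpha - Z) ** 2) / n
mc_risk = err / trials
assert abs(mc_risk - risk) / risk < 0.05        # agree to within a few percent
```

The variance term comes from the noise ϵ and the bias term from the shrinkage $I_n - K(K + n\lambda I_n)^{-1} = n\lambda (K + n\lambda I_n)^{-1}$, exactly as in the derivation of Equation (6).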
3.3 Comparison with PCA

Another way to handle high-dimensional features is to use PCA and run the regression only on the top few principal components (a procedure called PCR), as illustrated by [13] and many other papers. RR falls in the family of "shrinkage" estimators, as it shrinks the weight parameter towards zero; PCA, on the other hand, is a "keep-or-kill" estimator, as it kills components with smaller eigenvalues. Recently, [5] showed that the risks of PCR and RR are related, and that the risk of PCR is bounded by four times the risk of RR. However, we believe that PCR and RR are parallel approaches, and either can be better than the other depending on the structure of the problem, so it is hard to compare SRHT-DRR with PCR theoretically. Moreover, PCA in our p ≫ n ≫ 1 setup is itself a non-trivial problem, both statistically and computationally. Firstly, in the p ≫ n case we do not have enough samples to estimate the huge p × p covariance matrix, so the eigenvectors of the sample covariance matrix obtained by PCA may be very different from the truth. (See [11] for a theoretical study of the consistency of the principal directions in the high p, low n case.) Secondly, PCA requires computing an SVD of the X matrix, which is extremely slow when p ≫ n ≫ 1. An alternative is to use a randomized algorithm such as [16] or [9] to compute the PCA. Again, whether randomized PCA is better than our SRHT-DRR algorithm depends on the problem. With that in mind, we compare SRHT-DRR against standard as well as randomized PCA in our experiments section; we find that SRHT-DRR beats both of them in speed as well as accuracy.

4 Experiments

In this section we present experimental results on synthetic as well as real-world data highlighting the merits of SRHT-DRR, namely lower computational cost compared to the true Ridge Regression (RR) solution without any significant loss of accuracy. We also compare our approach against "standard" PCA as well as randomized PCA [16].
In all our experiments, we choose the regularization constant λ via cross-validation on the training set. As far as the PCA algorithms are concerned, we implemented standard PCA using the built-in SVD function in MATLAB, and for randomized PCA we used the block power iteration approach proposed by [16]. We always achieved convergence within three power iterations of randomized PCA.

4.1 Measures of Performance

Since we know the true β which generated the synthetic data, we report MSE/Risk for the fixed design setting (they are equivalent for squared loss) as the measure of accuracy. It is computed as ∥Ŷ − Xβ∥², where Ŷ is the prediction corresponding to each method being compared. For real-world data we report the classification error on the test set.

In order to compare the computational cost of SRHT-DRR with true RR, we need to estimate the number of FLOPS used by each. As reported by other papers, e.g. [4, 6], the theoretical cost of applying the randomized Hadamard transform is O(np log(p_subs)). However, the MATLAB implementation we used took about np log(p) FLOPS to compute X_H. So, for SRHT-DRR, the total computational cost is np log(p) FLOPS for getting X_H plus a further 2n²p_subs FLOPS to compute K_H. As mentioned earlier, the true dual RR solution takes ≈ 2n²p FLOPS. In our experiments we therefore report the relative computational cost, computed as the ratio of the two:

$$\text{Relative Computational Cost} = \frac{np\log(p) + 2n^2 p_{subs}}{2n^2 p}$$

4.2 Synthetic Data

We generated synthetic data with p = 8192 and varied the number of observations n = 20, 100, 200. We generated an n × n matrix R ∼ MVN(0, I), where MVN(µ, Σ) is the multivariate normal distribution with mean vector µ and variance-covariance matrix Σ, and βj ∼ N(0, 1) ∀j = 1, . . . , p. The final X matrix was generated by rotating R with a randomly generated n × p rotation matrix. Finally, we generated the Ys as Y = Xβ + ϵ, where ϵi ∼ N(0, 1) ∀i = 1, . . . , n.
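The synthetic-data recipe above can be sketched as follows. This is our own reading of the description (the random "rotation" is taken to be an n × p matrix with orthonormal rows obtained by QR):

```python
import numpy as np

def make_synthetic(n, p, rng):
    """Generate (X, Y, beta) following the paper's synthetic-data description."""
    R = rng.standard_normal((n, n))                    # R ~ MVN(0, I)
    # Random n x p "rotation": orthonormal rows from a QR factorization
    Q, _ = np.linalg.qr(rng.standard_normal((p, n)))   # p x n, orthonormal cols
    X = R @ Q.T                                        # rotate R into R^p
    beta = rng.standard_normal(p)                      # beta_j ~ N(0, 1)
    Y = X @ beta + rng.standard_normal(n)              # eps_i ~ N(0, 1)
    return X, Y, beta

X, Y, beta = make_synthetic(20, 256, np.random.default_rng(0))
assert X.shape == (20, 256) and Y.shape == (20,)
```

Rotating with an orthonormal-row matrix spreads the n-dimensional signal across all p features while keeping the rank of X equal to n, which is exactly the regime the theory covers (k = n).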
Figure 1: Left to right: n = 20, 100, 200. The boxplots show the median error rates for SRHT-DRR for different p_subs. The solid red line is the median error rate for the true RR using all the features. The green line is the median error rate for PCR when the PCA is computed by SVD in MATLAB. The black dashed line is the median error rate for PCR when the PCA is computed by randomized PCA.

For PCA and randomized PCA, we tried keeping r PCs in the range 10 to n and chose the value of r which gave the minimum error on the training set. We tried 10 different values of p_subs, from n + 10 to 2000. All results were averaged over 50 random trials. The results are shown in Figure 1. Two things are worth noticing. Firstly, in all cases, SRHT-DRR gets very close in accuracy to the true RR with only ≈ 30% of its computational cost. SRHT-DRR also costs far fewer FLOPS than randomized PCA in our experiments. Secondly, as mentioned earlier, RR and PCA are parallel approaches, and either might be better than the other depending on the structure of the problem. As can be seen, for our data the RR approaches are always better than the PCA-based approaches. We hypothesize that PCA might perform better relative to RR for larger n.

4.3 Real-world Data

We took the UCI ARCENE dataset, which has 200 samples with 10000 features, as our real-world dataset.
ARCENE is a binary classification dataset consisting of 88 cancer individuals and 112 healthy individuals (see [7] for more details about this dataset). We split the dataset into 100 training and 100 testing samples and repeated this procedure 50 times (so n = 100, p = 10000 for this dataset). For PCA and randomized PCA, we tried keeping r = 10, 20, 30, 40, 50, 60, 70, 80, 90 PCs and chose the value of r which gave the minimum error on the training set (r = 30). As earlier, we tried 10 different values of p_subs: 150, 250, 400, 600, 800, 1000, 1200, 1600, 2000, 2500. Standard PCA is known to be slow for datasets of this size, so the comparison with it is for accuracy only. Randomized PCA is fast but less accurate than standard ("true") PCA; its computational cost for r = 30 can be approximated as about 240np (see [9] for details), which in this case is roughly the same as computing XX⊤ (≈ 2n²p). The results are shown in Figure 2. As can be seen, SRHT-DRR comes very close in accuracy to the true RR solution with just ≈ 30% of its computational cost. SRHT-DRR beats PCA and randomized PCA even more comprehensively, achieving the same or better accuracy at just ≈ 18% of their computational cost.

5 Conclusion

In this paper we proposed a fast algorithm, SRHT-DRR, for ridge regression in the p ≫ n ≫ 1 setting. SRHT-DRR preconditions the design matrix with a randomized Walsh-Hadamard transform and then subsamples features. In addition to being significantly faster than the true dual ridge regression solution, SRHT-DRR only inflates the risk w.r.t. the true solution by a small amount. Experiments on both synthetic and real data show that SRHT-DRR gives significant speed-ups with only a small loss of accuracy. We believe similar techniques can be developed for other statistical methods such as logistic regression.
Figure 2: The boxplots show the median error rates for SRHT-DRR for different p_subs. The solid red line is the median error rate for the true RR using all the features. The green line is the median error rate for PCR with the top 30 PCs when the PCA is computed by SVD in MATLAB. The black dashed line is the median error rate for PCR with the top 30 PCs computed by randomized PCA.

References

[1] Nir Ailon and Bernard Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In STOC, pages 557–563, 2006.
[2] Nir Ailon and Edo Liberty. Fast dimension reduction using Rademacher series on dual BCH codes. Technical report, 2007.
[3] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. CoRR, abs/1208.2015, 2012.
[4] Christos Boutsidis and Alex Gittens. Improved matrix algorithms via the subsampled randomized Hadamard transform. CoRR, abs/1204.0062, 2012.
[5] Paramveer S. Dhillon, Dean P. Foster, Sham M. Kakade, and Lyle H. Ungar. A risk comparison of ordinary least squares vs ridge regression. Journal of Machine Learning Research, 14:1505–1511, 2013.
[6] Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamás Sarlós. Faster least squares approximation. CoRR, abs/0710.1435, 2007.
[7] Isabelle Guyon. Design of experiments for the NIPS 2003 variable selection benchmark. 2003.
[8] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217–288, May 2011.
[9] Nathan Halko, Per-Gunnar Martinsson, Yoel Shkolnisky, and Mark Tygert. An algorithm for the principal component analysis of large data sets. SIAM J. Scientific Computing, 33(5):2580–2594, 2011.
[10] Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Analysis of a randomized approximation scheme for matrix multiplication. CoRR, abs/1211.5414, 2012.
[11] S. Jung and J. S. Marron. PCA consistency in high dimension, low sample size context. Annals of Statistics, 37:4104–4130, 2009.
[12] Quoc Le, Tamas Sarlos, and Alex Smola. Fastfood - approximating kernel expansions in loglinear time. ICML, 2013.
[13] W. F. Massy. Principal components regression in exploratory statistical research. Journal of the American Statistical Association, 60:234–256, 1965.
[14] Xiangrui Meng, Michael A. Saunders, and Michael W. Mahoney. LSRN: A parallel iterative solver for strongly over- or under-determined systems. CoRR, abs/1109.5981, 2011.
[15] Ali Rahimi and Ben Recht. Random features for large-scale kernel machines. In Neural Information Processing Systems, 2007.
[16] Vladimir Rokhlin, Arthur Szlam, and Mark Tygert. A randomized algorithm for principal component analysis. SIAM J. Matrix Analysis Applications, 31(3):1100–1124, 2009.
[17] Vladimir Rokhlin and Mark Tygert. A fast randomized algorithm for overdetermined linear least-squares regression. Proceedings of the National Academy of Sciences, 105(36):13212–13217, September 2008.
[18] Tamas Sarlos. Improved approximation algorithms for large matrices via random projections. In Proc. 47th Annu. IEEE Sympos. Found. Comput. Sci., pages 143–152. IEEE Computer Society, 2006.
[19] G. Saunders, A. Gammerman, and V. Vovk. Ridge regression learning algorithm in dual variables. In Proc. 15th International Conf. on Machine Learning, pages 515–521. Morgan Kaufmann, San Francisco, CA, 1998.
[20] Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. CoRR, abs/1011.1595, 2010.
[21] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
[22] Mark Tygert. A fast algorithm for computing minimal-norm solutions to underdetermined systems of linear equations. CoRR, abs/0905.4745, 2009.
Convex Relaxations for Permutation Problems

Fajwel Fogel, C.M.A.P., École Polytechnique, Palaiseau, France. fogel@cmap.polytechnique.fr
Rodolphe Jenatton, CRITEO, Paris & C.M.A.P., École Polytechnique, Palaiseau, France. jenatton@cmap.polytechnique.fr
Francis Bach, INRIA, SIERRA Project-Team & D.I., École Normale Supérieure, Paris, France. francis.bach@ens.fr
Alexandre d'Aspremont, CNRS & D.I., UMR 8548, École Normale Supérieure, Paris, France. aspremon@ens.fr

Abstract

Seriation seeks to reconstruct a linear order between variables using unsorted similarity information. It has direct applications in archeology and shotgun gene sequencing, for example. We prove the equivalence between the seriation problem and the combinatorial 2-SUM problem (a quadratic minimization problem over permutations) over a class of similarity matrices. The seriation problem can be solved exactly by a spectral algorithm in the noiseless case, and we produce a convex relaxation for the 2-SUM problem to improve the robustness of solutions in a noisy setting. This relaxation also allows us to impose additional structural constraints on the solution, in order to solve semi-supervised seriation problems. We present numerical experiments on archeological data, Markov chains and gene sequences.

1 Introduction

We focus on optimization problems written over the set of permutations. While the relaxation techniques discussed in what follows apply to a much more general setting, most of the paper is centered on the seriation problem: we are given a similarity matrix between a set of n variables and assume that the variables can be ordered along a chain, where the similarity between variables decreases with their distance within this chain. The seriation problem seeks to reconstruct this linear ordering based on unsorted, possibly noisy, similarity information. This problem has its roots in archeology [1]. It also has direct applications in e.g.
envelope reduction algorithms for sparse linear algebra [2], in identifying interval graphs for scheduling [3], or in shotgun DNA sequencing, where a single strand of genetic material is reconstructed from many cloned shorter reads, i.e. small, fully sequenced sections of DNA [4, 5]. With shotgun gene sequencing applications in mind, many references focused on the Consecutive Ones Problem (C1P), which seeks to permute the rows of a binary matrix so that all the ones in each column are contiguous. In particular, [3] studied further connections to interval graphs, and [6] crucially showed that a solution to C1P can be obtained by solving the seriation problem on the squared data matrix. We refer the reader to [7, 8, 9] for a much more complete survey of applications. On the algorithmic front, the seriation problem was shown to be NP-Complete by [10]. Archeological examples are usually small scale, and earlier references such as [1] used greedy techniques to reorder matrices. Similar techniques were, and still are, used to reorder genetic data sets. More general ordering problems were studied extensively in operations research, mostly in connection with the Quadratic Assignment Problem (QAP), for which several convex relaxations were studied in e.g. [11, 12]. Since a matrix is a permutation matrix if and only if it is both orthogonal and
doubly stochastic, much work has also focused on producing semidefinite relaxations to orthogonality constraints [13, 14]. These programs are convex, hence tractable, but the relaxations are usually very large and scale poorly. More recently however, [15] produced a spectral algorithm that exactly solves the seriation problem in a noiseless setting, with results very similar to those obtained on the interlacing of eigenvectors for Sturm-Liouville operators. They show that for similarity matrices computed from serial variables (for which a total order exists), the ordering of the second eigenvector of the Laplacian (a.k.a. the Fiedler vector) matches that of the variables. Here, we show that the solution of the seriation problem explicitly minimizes a quadratic function. While this quadratic problem was mentioned explicitly in [15], no connection was made between the combinatorial and spectral solutions. Our result shows in particular that the 2-SUM minimization problem mentioned in [10], and defined below, is polynomially solvable for matrices coming from serial data. This result allows us to write seriation as a quadratic minimization problem over permutation matrices, and we then produce convex relaxations for this last problem. This relaxation appears to be more robust to noise than the spectral or combinatorial techniques in a number of examples. Perhaps more importantly, it allows us to impose additional structural constraints to solve semi-supervised seriation problems. We also develop a fast algorithm for projecting onto the set of doubly stochastic matrices, which is of independent interest.

The paper is organized as follows. In Section 2, we show a decomposition result for similarity matrices formed from the C1P problem. This decomposition allows us to make the connection between the seriation and 2-SUM minimization problems on these matrices. In Section 3 we use these results to write convex relaxations of the seriation problem, by relaxing permutation matrices to doubly stochastic matrices in the 2-SUM minimization problem. We also briefly discuss algorithmic and computational complexity issues. Finally, Section 4 discusses some applications and numerical experiments.

Notation. We write $\mathcal{P}$ for the set of permutations of {1, . . . , n}. The notation π will refer to a permuted vector of {1, . . . , n}, while the notation Π (capital letter) will refer to the corresponding permutation matrix, which is a {0, 1} matrix such that $\Pi_{ij} = 1$ iff π(j) = i.
For a vector $y \in \mathbb{R}^n$, we write var(y) for its variance, with $\mathrm{var}(y) = \sum_{i=1}^n y_i^2/n - \left(\sum_{i=1}^n y_i/n\right)^2$; we also write $y_{[u,v]} \in \mathbb{R}^{v-u+1}$ for the vector $(y_u, \ldots, y_v)^T$. Here, $e_i \in \mathbb{R}^n$ is the i-th Euclidean basis vector and 1 is the vector of ones. We write $S_n$ for the set of symmetric matrices of dimension n; $\|\cdot\|_F$ denotes the Frobenius norm and $\lambda_i(X)$ the i-th eigenvalue (in increasing order) of X.

2 Seriation & consecutive ones

Given a symmetric binary matrix A, we will focus on variations of the following 2-SUM combinatorial minimization problem, studied in e.g. [10], and written

$$\begin{array}{ll} \text{minimize} & \sum_{i,j=1}^n A_{ij}(\pi(i) - \pi(j))^2 \\ \text{subject to} & \pi \in \mathcal{P}. \end{array} \qquad (1)$$

This problem is used, for example, to reduce the envelope of sparse matrices and is shown in [10, Th. 2.2] to be NP-Complete. When A has a specific structure, [15] show that a related matrix reordering problem used for seriation can be solved explicitly by a spectral algorithm. However, the results in [15] do not explicitly link spectral ordering and the optimum of (1). For some instances of A related to seriation and consecutive ones problems, we show below that the spectral ordering directly minimizes the objective of problem (1). We first focus on binary matrices, then extend our results to more general unimodal matrices.

2.1 Binary matrices

Let $A \in S_n$ and $y \in \mathbb{R}^n$; we focus on a generalization of the 2-SUM minimization problem

$$\begin{array}{ll} \text{minimize} & f(y_\pi) \triangleq \sum_{i,j=1}^n A_{ij}(y_{\pi(i)} - y_{\pi(j)})^2 \\ \text{subject to} & \pi \in \mathcal{P}. \end{array} \qquad (2)$$

The main point of this section is to show that if A is the permutation of a similarity matrix formed from serial data, then minimizing (2) recovers the correct variable ordering. We first introduce a few definitions following the terminology in [15].

Definition 2.1 We say that the matrix $A \in S_n$ is an R-matrix (or Robinson matrix) iff it is symmetric and satisfies $A_{i,j} \le A_{i,j+1}$ and $A_{i+1,j} \le A_{i,j}$ in the lower triangle, where $1 \le j < i \le n$. Another way to write the R-matrix conditions is to impose $A_{ij} \ge A_{kl}$ if $|i-j| \le |k-l|$ off-diagonal, i.e.
the coefficients of A decrease as we move away from the diagonal (cf. Figure 1).

Figure 1: A Q-matrix A (see Def. 2.7), which has unimodal columns (left), its "circular square" $A \circ A^T$ (see Def. 2.8), which is an R-matrix (center), and a matrix $a \circ a^T$ where a is a unimodal vector (right).

Definition 2.2 We say that the {0, 1}-matrix $A \in \mathbb{R}^{n\times m}$ is a P-matrix (or Petrie matrix) iff, for each column of A, the ones form a consecutive sequence.

As in [15], we will say that A is pre-R (resp. pre-P) iff there is a permutation Π such that $\Pi A \Pi^T$ is an R-matrix (resp. ΠA is a P-matrix). We now define CUT matrices as follows.

Definition 2.3 For $u, v \in [1, n]$, we call CUT(u, v) the matrix such that

$$\mathrm{CUT}(u, v)_{ij} = \begin{cases} 1 & \text{if } u \le i, j \le v, \\ 0 & \text{otherwise,} \end{cases}$$

i.e. CUT(u, v) is symmetric, block diagonal and has one square block equal to one.

The motivation for this definition is that if A is a {0, 1} P-matrix, then $AA^T$ is a sum of CUT matrices (with blocks generated by the columns of A). This means that we can start by studying problem (2) on CUT matrices. We first show that the objective of (2) has a natural interpretation in this case, as the variance of a subset of y under a uniform probability measure.

Lemma 2.4 Let A = CUT(u, v); then $f(y) = \sum_{i,j=1}^n A_{ij}(y_i - y_j)^2 = (v-u+1)^2\,\mathrm{var}(y_{[u,v]})$.

Proof. We can write $\sum_{ij} A_{ij}(y_i - y_j)^2 = y^T L_A y$, where $L_A = \mathrm{diag}(A\mathbf{1}) - A$ is the Laplacian of the matrix A, which is a block matrix with entries $(v-u+1)\delta_{\{i=j\}} - 1$ for $u \le i, j \le v$.

This last lemma shows that solving (2) for CUT matrices amounts to finding a subset of y of size (v − u + 1) with minimum variance. The next lemma characterizes optimal solutions of problem (2) for CUT matrices and shows that the solution splits the coefficients of y into two disjoint intervals.

Lemma 2.5 Suppose A = CUT(u, v), and write $z = y_\pi$ the optimal solution to (2). If we call I = [u, v] and $I^c$ its complement in [1, n], then $z_j \notin [\min(z_I), \max(z_I)]$ for all $j \in I^c$; in other words, the coefficients in $z_I$ and $z_{I^c}$ belong to disjoint intervals.
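The role of CUT matrices is easy to see numerically. A small sketch of our own (not from the paper) builds CUT(u, v), evaluates the 2-SUM objective f, and confirms that on a CUT matrix f depends on y only through the variance of the block coefficients $y_{[u,v]}$:

```python
import numpy as np

def cut(n, u, v):
    """CUT(u, v): all-ones square block on indices u..v (1-indexed, inclusive)."""
    A = np.zeros((n, n))
    A[u - 1:v, u - 1:v] = 1.0
    return A

def f(A, y):
    """2-SUM objective: sum_ij A_ij (y_i - y_j)^2."""
    return float(np.sum(A * (y[:, None] - y[None, :]) ** 2))

n, u, v = 8, 3, 6
A = cut(n, u, v)
rng = np.random.default_rng(0)
ratios = []
for _ in range(5):
    y = rng.standard_normal(n)
    ratios.append(f(A, y) / np.var(y[u - 1:v]))   # population variance of y_[u,v]

# On a CUT matrix the objective is proportional to var(y_[u,v]),
# so the ratio is the same constant for every y:
assert np.allclose(ratios, ratios[0])
```

This is why minimizing (2) over a CUT matrix reduces to picking the (v − u + 1) coefficients of y with smallest variance, as Lemma 2.4 states.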
We can use these last results to show that, at least for some vectors y, when A is an R-matrix, the solution $y_\pi$ to (2) is monotonic.

Proposition 2.6 Suppose $C \in S_n$ is a {0, 1} pre-R matrix, $A = C^2$, and $y_i = ai + b$ for i = 1, . . . , n and $a, b \in \mathbb{R}$ with $a \ne 0$. If Π is such that $\Pi C \Pi^T$ (hence $\Pi A \Pi^T$) is an R-matrix, then the corresponding permutation π solves the combinatorial minimization problem (2) for $A = C^2$.

Proof. Suppose C is {0, 1} pre-R; then $C^2$ is pre-R, and Lemma 5.2 shows that there exists Π such that $\Pi C \Pi^T$ and $\Pi A \Pi^T$ are R-matrices, so we can write $\Pi A \Pi^T$ as a sum of CUT matrices. Furthermore, Lemmas 2.4 and 2.5 show that each CUT term is minimized by a monotonic sequence, but $y_i = ai + b$ means here that all monotonic subsets of y of a given length have the same (minimal) variance, attained by Πy. So the corresponding π also solves problem (2).

2.2 Unimodal matrices

Here, based on [6], we first define a generalization of P-matrices called (appropriately enough) Q-matrices, i.e. matrices with unimodal columns. We then show that minimizing (2) also recovers the correct ordering for these more general matrix classes.

Definition 2.7 We say that a matrix $A \in \mathbb{R}^{n\times m}$ is a Q-matrix if and only if each column of A is unimodal, i.e. its coefficients increase to a maximum, then decrease.

Note that R-matrices are symmetric Q-matrices. We call a matrix A pre-Q iff there is a permutation Π such that ΠA is a Q-matrix. Next, again based on [6], we define the circular product of two matrices.

Definition 2.8 Given $A, B^T \in \mathbb{R}^{n\times m}$ and a strictly positive weight vector $w \in \mathbb{R}^m$, their circular product A ◦ B is defined as

$$(A \circ B)_{ij} = \sum_{k=1}^m w_k \min\{A_{ik}, B_{kj}\}, \quad i, j = 1, \ldots, n;$$

note that when A is a symmetric matrix, A ◦ A is also symmetric.

Remark that when A and B are {0, 1} matrices and w = 1, $\min\{A_{ik}, B_{kj}\} = A_{ik}B_{kj}$, so the circular product matches the regular matrix product AB. In the appendix we first prove that when A is a Q-matrix, $A \circ A^T$ is a sum of CUT matrices.
This is illustrated in Figure 1.

Lemma 2.9 Let $A \in \mathbb{R}^{n\times m}$ be a Q-matrix; then $A \circ A^T$ is a conic combination of CUT matrices.

This last result also shows that $A \circ A^T$ is an R-matrix when A is a Q-matrix, as a sum of CUT matrices. These definitions are illustrated in Figure 1. We now recall the central result in [6, Th. 1].

Theorem 2.10 [6, Th. 1] Suppose $A \in \mathbb{R}^{n\times m}$ is pre-Q; then ΠA is a Q-matrix iff $\Pi(A \circ A^T)\Pi^T$ is an R-matrix.

We are now ready to state the main result of this section, linking permutations which order R-matrices and solutions to problem (2).

Proposition 2.11 Suppose $C \in \mathbb{R}^{n\times m}$ is a pre-Q matrix and $y_i = ai + b$ for i = 1, . . . , n and $a, b \in \mathbb{R}$ with $a \ne 0$. Let $A = C \circ C^T$. If Π is such that $\Pi A \Pi^T$ is an R-matrix, then the corresponding permutation π solves the combinatorial minimization problem (2).

Proof. If $C \in \mathbb{R}^{n\times m}$ is pre-Q, then Lemma 2.9 and Theorem 2.10 show that there is a permutation Π such that $\Pi(C \circ C^T)\Pi^T$ is a sum of CUT matrices (hence an R-matrix). Now, as in Proposition 2.6, all monotonic subsets of y of a given length have the same variance, hence Lemmas 2.4 and 2.5 show that π solves problem (2).

This result shows that if A is pre-R and can be written $A = C \circ C^T$ with C pre-Q, then the permutation that makes A an R-matrix also solves (2). Since [15] show that sorting the Fiedler vector also orders A as an R-matrix, Prop. 2.11 gives a polynomial-time solution to problem (2) when $A = C \circ C^T$ is pre-R with C pre-Q.

3 Convex relaxations for permutation problems

In the sections that follow, we use the combinatorial results derived above to produce convex relaxations of optimization problems written over the set of permutation matrices. Recall that the Fiedler value of a symmetric nonnegative matrix is the smallest non-zero eigenvalue of its Laplacian; the Fiedler vector is the corresponding eigenvector. We first recall the main result from [15], which shows how to reorder pre-R matrices in a noise-free setting.

Proposition 3.1 [15, Th.
3.3] Suppose A ∈ S_n is a pre-R matrix with a simple Fiedler value whose Fiedler vector v has no repeated values. If Π ∈ P is such that the permuted Fiedler vector Πv is monotonic, then ΠAΠ^T is an R-matrix.

The results in [15] provide a polynomial-time solution to the R-matrix ordering problem in a noiseless setting. While [15] also shows how to handle cases where the Fiedler vector is degenerate, these scenarios are highly unlikely to arise when observations on A are noisy, and we do not discuss them here. The results in the previous section made the connection between the spectral ordering in [15] and problem (2). In what follows, we use (2) to produce convex relaxations of matrix ordering problems in a noisy setting. We also show how to incorporate a priori knowledge in the optimization problem. Numerical experiments in Section 4 show that semi-supervised seriation solutions are sometimes significantly more robust to noise than the spectral solutions obtained by sorting the Fiedler vector.

Permutations and doubly stochastic matrices. We write D_n for the set of doubly stochastic matrices in R^{n×n}, i.e. D_n = {X ∈ R^{n×n} : X ≥ 0, X1 = 1, X^T 1 = 1}. Note that D_n is convex and polyhedral. Classical results show that the set of doubly stochastic matrices is the convex hull of the set of permutation matrices. We also have P = D ∩ O, i.e. a matrix is a permutation matrix if and only if it is both doubly stochastic and orthogonal. This means that we can directly write a convex relaxation of the combinatorial problem (2) by replacing P with its convex hull D_n, to get

minimize g^T Π^T L_A Π g   subject to Π ∈ D_n,   (3)

where g = (1, . . . , n). By symmetry, if a vector Πy minimizes (3), then the reverse vector also minimizes (3). This often has a significant negative impact on the quality of the relaxation, and we add the linear constraint e_1^T Πg + 1 ≤ e_n^T Πg to break symmetries, which means that we always pick monotonically increasing solutions.
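The noiseless spectral ordering recalled in Proposition 3.1 can be sketched as follows: build the Laplacian of the similarity matrix, take the eigenvector of its second-smallest eigenvalue, and sort. This is a minimal dense-matrix illustration (the function name is ours), not the paper's implementation.

```python
import numpy as np

def spectral_order(A):
    """Order the rows/columns of a symmetric similarity matrix A by sorting
    its Fiedler vector (eigenvector of the second-smallest eigenvalue of the
    Laplacian), following the spectral seriation result of Atkins et al."""
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian of A
    vals, vecs = np.linalg.eigh(L)      # eigenvalues returned in ascending order
    fiedler = vecs[:, 1]                # Fiedler vector
    return np.argsort(fiedler)
```

On a randomly permuted R-matrix with a simple Fiedler value, sorting the Fiedler vector recovers the underlying order up to reversal (the sign of an eigenvector is arbitrary).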
Because the Laplacian L_A is always positive semidefinite, problem (3) is a convex quadratic program in the variable Π and can be solved efficiently. To provide a solution to the combinatorial problem (2), we then generate permutations from the doubly stochastic optimal solution to (3) (we describe an efficient procedure for doing so in §3). The results of Section 2 show that the optimal solution to (2) also solves the seriation problem in the noiseless setting when the matrix A is of the form C ∘ C^T with C a Q-matrix and y is an affine transform of the vector (1, . . . , n). These results also hold empirically for small perturbations of the vector y, and to improve robustness to noisy observations of A, we can average the objective of (3) over several such perturbations, solving

minimize Tr(Y^T Π^T L_A Π Y)/p
subject to e_1^T Πg + 1 ≤ e_n^T Πg, Π1 = 1, Π^T 1 = 1, Π ≥ 0,   (4)

in the variable Π ∈ R^{n×n}, where Y ∈ R^{n×p} is a matrix whose columns are small perturbations of the vector g = (1, . . . , n)^T. Note that the objective of (4) can be rewritten in vector format as Vec(Π)^T (Y Y^T ⊗ L_A) Vec(Π)/p. Solving (4) is roughly p times faster than individually solving p versions of (3).

Regularized convex relaxation. As the set of permutation matrices P is the intersection of the set of doubly stochastic matrices D and the set of orthogonal matrices O, i.e. P = D ∩ O, we can add a penalty to the objective of the convex relaxed problem (4) to force the solution closer to the set of orthogonal matrices. Since a doubly stochastic matrix of Frobenius norm √n is necessarily orthogonal, we would ideally like to solve

minimize (1/p) Tr(Y^T Π^T L_A Π Y) − (μ/p) ‖Π‖²_F
subject to e_1^T Πg + 1 ≤ e_n^T Πg, Π1 = 1, Π^T 1 = 1, Π ≥ 0,   (5)

with μ large enough to guarantee that the global solution is indeed a permutation.
However, this problem is not convex for any μ > 0, since its Hessian Y Y^T ⊗ L_A − μ I ⊗ I is never positive semidefinite when μ > 0 (the first eigenvalue of L_A is 0). Instead, we propose a slightly modified version of (5), which has the same objective function up to a constant and is convex for some values of μ. Remember that the Laplacian matrix L_A is always positive semidefinite, with at least one eigenvalue equal to zero (exactly one if the graph is connected). Let P = I − (1/n) 11^T.

Proposition 3.2 The optimization problem

minimize (1/p) Tr(Y^T Π^T L_A Π Y) − (μ/p) ‖PΠ‖²_F
subject to e_1^T Πg + 1 ≤ e_n^T Πg, Π1 = 1, Π^T 1 = 1, Π ≥ 0,   (6)

is equivalent to problem (5), and their objectives differ by a constant. When μ ≤ λ₂(L_A) λ₁(Y Y^T), this problem is convex.

Incorporating structural constraints. The QP relaxation allows us to add convex structural constraints to the problem. For instance, in archeological applications, one may specify that observation i must appear before observation j, i.e. π(i) < π(j). In gene sequencing applications, one may want to constrain the distance between two elements (e.g. mate reads), which would read a ≤ π(i) − π(j) ≤ b and introduce an affine inequality on the variable Π in the QP relaxation of the form a ≤ e_i^T Πg − e_j^T Πg ≤ b. Linear constraints could also be extracted from a reference gene sequence. More generally, we can rewrite problem (6) with n_c additional linear constraints as

minimize (1/p) Tr(Y^T Π^T L_A Π Y) − (μ/p) ‖PΠ‖²_F
subject to D^T Πg + δ ≤ 0, Π1 = 1, Π^T 1 = 1, Π ≥ 0,   (7)

where D is a matrix of size n × n_c and δ is a vector of size n_c. The first column of D is equal to e₁ − e_n and δ₁ = 1 (to break symmetry).

Sampling permutations from doubly stochastic matrices. This procedure is based on the fact that a permutation can be defined from a doubly stochastic matrix D by the order induced on a monotonic vector. Suppose we generate a monotonic random vector v and compute Dv.
To each v we can associate a permutation Π such that ΠDv is monotonically increasing. If D is a permutation matrix, the permutation Π generated by this procedure is constant; if D is doubly stochastic but not a permutation, it may fluctuate. Starting from a solution D of problem (6), we use this procedure to generate many permutation matrices Π and pick the one with the lowest cost y^T Π^T L_A Π y in the combinatorial problem (2). We could also project Π onto the permutations using the Hungarian algorithm, but this proved more costly and less effective.

Orthogonal relaxation. Recall that P = D ∩ O, i.e. a matrix is a permutation matrix if and only if it is both doubly stochastic and orthogonal. So far, we have relaxed the orthogonality constraint and replaced it by a penalty on the Frobenius norm. Semidefinite relaxations of orthogonality constraints have been developed in e.g. [12, 13, 14], with excellent approximation bounds, and these could provide alternative relaxation schemes. However, these relaxations form semidefinite programs of dimension O(n²) (hence have O(n⁴) variables), which are out of reach numerically for most of the problems considered here.

Algorithms. The convex relaxation in (7) is a quadratic program in the variable Π ∈ R^{n×n}, which has dimension n². For reasonable values of n (around a few hundred), interior point solvers such as MOSEK [17] solve this problem very efficiently. Furthermore, most pre-R matrices formed by squaring pre-Q matrices are very sparse, which considerably speeds up the linear algebra. However, first-order methods remain the only alternative beyond a certain scale. We briefly discuss the implementation of two classes of methods: the Frank-Wolfe (a.k.a. conditional gradient) algorithm, and accelerated gradient methods. Solving (7) using the conditional gradient algorithm in [18] requires minimizing an affine function over the set of doubly stochastic matrices at each iteration.
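The sampling procedure above is simple enough to sketch directly; this is a minimal numpy version with names of our choosing, not the authors' code.

```python
import numpy as np

def sample_permutation(D, LA, y, n_samples=50, seed=0):
    """Generate candidate permutations from a doubly stochastic matrix D by
    the order induced on random monotonic vectors, and keep the candidate
    with the lowest combinatorial cost y^T Pi^T L_A Pi y."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        v = np.sort(rng.random(n))     # monotonic random vector
        order = np.argsort(D @ v)      # permutation making (Pi D v) increasing
        Py = y[order]                  # Pi y
        cost = Py @ LA @ Py
        if cost < best_cost:
            best, best_cost = order, cost
    return best, best_cost
```

When D is itself a permutation matrix, the returned ordering does not depend on the random vector, matching the remark above that the procedure is then constant.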
This amounts to solving a classical transportation (or matching) problem, for which very efficient solvers exist [19]. On the other hand, solving (7) using accelerated gradient algorithms requires solving a projection step onto the doubly stochastic matrices at each iteration [20]. Here too, exploiting structure significantly improves the complexity of these steps. Given some matrix Π₀, the projection problem is written

minimize (1/2) ‖Π − Π₀‖²_F
subject to D^T Πg + δ ≤ 0, Π1 = 1, Π^T 1 = 1, Π ≥ 0,   (8)

in the variable Π ∈ R^{n×n}, with parameter g ∈ R^n. The dual is written

maximize −(1/2) ‖x1^T + 1y^T + Dzg^T − Z‖²_F − Tr(Z^T Π₀) + x^T(Π₀1 − 1) + y^T(Π₀^T 1 − 1) + z^T(D^T Π₀g + δ)
subject to z ≥ 0, Z ≥ 0,   (9)

in the variables Z ∈ R^{n×n}, x, y ∈ R^n and z ∈ R^{n_c}. The dual is written over decoupled linear constraints in (z, Z) (with x and y unconstrained). Each subproblem is equivalent to computing a conjugate norm and can be solved in closed form. In particular, the matrix Z is updated at each iteration by Z = max{0, x1^T + 1y^T + Dzg^T − Π₀}. Warm-starting provides a significant speedup. This means that problem (9) can be solved very efficiently by block-coordinate ascent, whose convergence is guaranteed in this setting [21], and a solution to (8) can be reconstructed from the optimum of (9).

4 Applications & numerical experiments

Archeology. We reorder the rows of Hodson's Münsingen dataset (as provided by [22] and manually ordered by [6]) to date 59 graves from 70 recovered artifact types (graves from similar periods contain similar artifacts). The results are reported in Table 1 (and in the appendix). We use a fraction of the pairwise orders in [6] to solve the semi-supervised version.

              Sol. in [6]  Spectral   QP Reg       QP Reg + 0.1%  QP Reg + 47.5%
Kendall τ     1.00±0.00    0.75±0.00  0.73±0.22    0.76±0.16      0.97±0.01
Spearman ρ    1.00±0.00    0.90±0.00  0.88±0.19    0.91±0.16      1.00±0.00
Comb. Obj.    38520±0      38903±0    41810±13960  43457±23004    37602±775
# R-constr.   1556±0       1802±0     2021±484     2050±747       1545±43

Table 1: Performance metrics (median and stdev over 100 runs of the QP relaxation) for Kendall's τ and Spearman's ρ ranking correlations (large values are good), the objective value in (2), and the number of R-matrix monotonicity constraint violations (small values are good), comparing Kendall's original solution with that of the Fiedler vector, the seriation QP in (6), and the semi-supervised seriation QP in (7) with 0.1% and 47.5% of pairwise ordering constraints specified. Note that the semi-supervised solution actually improves on both Kendall's manual solution and on the spectral ordering.

Markov chains. Here, we observe many disordered samples from a Markov chain. The mutual information matrix of these variables must be decreasing with |i − j| when ordered according to the true generating Markov chain [23, Th. 2.8.1]; hence the mutual information matrix of these variables is a pre-R matrix. We can thus recover the order of the Markov chain by solving the seriation problem on this matrix. In the following example, we try to recover the order of a Gaussian Markov chain written X_{i+1} = b_i X_i + ε_i with ε_i ∼ N(0, σ_i²). The results are presented in Table 2 on 30 variables. We test performance in a noise-free setting where we observe the randomly ordered model covariance, in a noisy setting with enough samples (6000) to ensure that the spectral solution stays in a perturbative regime, and finally using far fewer samples (60), so that the spectral perturbation condition fails.

Gene sequencing. In next-generation shotgun gene sequencing experiments, genes are cloned about ten to a hundred times before being decomposed into very small subsequences called "reads", each fifty to a few hundred base pairs long. Current machines can only accurately sequence these small reads, which must then be reordered by "assembly" algorithms, using the overlaps between reads.
We generate artificial sequencing data by (uniformly) sampling reads from chromosome 22 of the human genome from NCBI, then store k-mer hits versus reads in a binary matrix (a k-mer is a fixed sequence of k base pairs). If the reads are ordered correctly, this matrix should be C1P, hence we solve the C1P problem on the {0, 1}-matrix whose rows correspond to k-mer hits for each read, i.e. the element (i, j) of the matrix is equal to one if k-mer j is included in read i. This matrix is extremely sparse, as it is approximately band-diagonal with roughly constant degree when reordered appropriately, and computing the Fiedler vector can be done with complexity O(n log n), as it amounts to computing the second largest eigenvector of λ_n(L)I − L, where L is the Laplacian of the matrix.

             No noise   Noise within spectral gap  Large noise
True         1.00±0.00  1.00±0.00                  1.00±0.00
Spectral     1.00±0.00  0.86±0.14                  0.41±0.25
QP Reg       0.50±0.34  0.58±0.31                  0.45±0.27
QP + 0.2%    0.65±0.29  0.40±0.26                  0.60±0.27
QP + 4.6%    0.71±0.08  0.70±0.07                  0.68±0.08
QP + 54.3%   0.98±0.01  0.97±0.01                  0.97±0.02

Table 2: Kendall's τ between the true Markov chain ordering, the Fiedler vector, the seriation QP in (6), and the semi-supervised seriation QP in (7) with varying numbers of pairwise orders specified. We observe the (randomly ordered) model covariance matrix (no noise), the sample covariance matrix with enough samples so the error is smaller than half of the spectral gap, and then a sample covariance computed using far fewer samples, so that the spectral perturbation condition fails.

In our experiments, computing the Fiedler vector of a million base pair sequence takes less than a minute using MATLAB's eigs on a standard desktop machine. In practice, besides sequencing errors (handled relatively well by the high coverage of the reads), there are often repeats in long genomes. If the repeats are longer than the k-mers, the C1P assumption is violated and the order given by the Fiedler vector is no longer reliable.
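Building the reads × k-mers matrix described above is straightforward; the following is a toy sketch (function name ours), not a sequencing pipeline, which ignores sequencing errors and reverse complements.

```python
import numpy as np

def kmer_matrix(reads, k):
    """Binary reads-by-k-mers matrix: entry (i, j) is 1 iff k-mer j occurs in
    read i, as used above to set up the C1P problem for read ordering."""
    # collect all distinct k-mers over all reads, in sorted order
    kmers = sorted({r[s:s + k] for r in reads for s in range(len(r) - k + 1)})
    index = {km: j for j, km in enumerate(kmers)}
    M = np.zeros((len(reads), len(kmers)), dtype=int)
    for i, r in enumerate(reads):
        for s in range(len(r) - k + 1):
            M[i, index[r[s:s + k]]] = 1
    return M, kmers
```

On overlapping reads taken in their true order, the ones in each column form a contiguous block, which is exactly the C1P structure the seriation step tries to recover after the rows are shuffled.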
On the other hand, handling repeats is possible using the information given by mate reads, i.e. reads that are known to be separated by a given number of base pairs in the original genome. This structural knowledge can be incorporated into the relaxation (7). While our algorithm for solving (7) only scales up to a few thousand base pairs on a regular desktop, it can be used to solve the sequencing problem hierarchically, i.e. to refine the spectral solution. Graph connectivity issues can be solved directly using spectral information.

Figure 2: We plot the reads × reads matrix measuring the number of common k-mers between read pairs, reordered according to the spectral ordering on two regions (two plots on the left), then the Fiedler and Fiedler+QP read orderings versus the true ordering (two plots on the right). The semi-supervised solution contains far fewer misplaced reads.

In Figure 2, the first two plots show the result of spectral ordering on simulated reads from human chromosome 22. The full R-matrix formed by squaring the reads × k-mers matrix is too large to be plotted in MATLAB, and we zoom in on two diagonal block submatrices. In the first one, the reordering is good and the matrix has very low bandwidth; the corresponding gene segment (or contig) is well reconstructed. In the second, the reordering is less reliable and the bandwidth is larger; the reconstructed gene segment contains errors. The last two plots show recovered read position versus true read position for the Fiedler vector alone and for the Fiedler vector followed by semi-supervised seriation, where the QP relaxation is applied to the reads assembled by the spectral solution, on 250,000 reads generated in our experiments. We see that the number of misplaced reads decreases significantly with the semi-supervised seriation solution.

Acknowledgements. AA, FF and RJ would like to acknowledge support from a European Research Council starting grant (project SIPA) and a gift from Google.
FB would like to acknowledge support from a European Research Council starting grant (project SIERRA). A much more complete version of this paper is available as [16] at arXiv:1306.4805.

References

[1] William S Robinson. A method for chronologically ordering archaeological deposits. American Antiquity, 16(4):293–301, 1951.
[2] Stephen T Barnard, Alex Pothen, and Horst Simon. A spectral algorithm for envelope reduction of sparse matrices. Numerical Linear Algebra with Applications, 2(4):317–334, 1995.
[3] D.R. Fulkerson and O.A. Gross. Incidence matrices and interval graphs. Pacific Journal of Mathematics, 15(3):835, 1965.
[4] Gemma C Garriga, Esa Junttila, and Heikki Mannila. Banded structure in binary matrices. Knowledge and Information Systems, 28(1):197–226, 2011.
[5] João Meidanis, Oscar Porto, and Guilherme P Telles. On the consecutive ones property. Discrete Applied Mathematics, 88(1):325–354, 1998.
[6] David G Kendall. Abundance matrices and seriation in archaeology. Probability Theory and Related Fields, 17(2):104–112, 1971.
[7] Chris Ding and Xiaofeng He. Linearized cluster assignment via spectral ordering. In Proceedings of the Twenty-First International Conference on Machine Learning, page 30. ACM, 2004.
[8] Niko Vuokko. Consecutive ones property and spectral ordering. In Proceedings of the 10th SIAM International Conference on Data Mining (SDM'10), pages 350–360, 2010.
[9] Innar Liiv. Seriation and matrix reordering methods: An historical overview. Statistical Analysis and Data Mining, 3(2):70–91, 2010.
[10] Alan George and Alex Pothen. An analysis of spectral envelope reduction via quadratic assignment problems. SIAM Journal on Matrix Analysis and Applications, 18(3):706–732, 1997.
[11] Eugene L Lawler. The quadratic assignment problem. Management Science, 9(4):586–599, 1963.
[12] Qing Zhao, Stefan E Karisch, Franz Rendl, and Henry Wolkowicz. Semidefinite programming relaxations for the quadratic assignment problem.
Journal of Combinatorial Optimization, 2(1):71–109, 1998.
[13] A. Nemirovski. Sums of random symmetric matrices and quadratic optimization under orthogonality constraints. Mathematical Programming, 109(2):283–317, 2007.
[14] Anthony Man-Cho So. Moment inequalities for sums of random matrices and their applications in optimization. Mathematical Programming, 130(1):125–151, 2011.
[15] J.E. Atkins, E.G. Boman, B. Hendrickson, et al. A spectral algorithm for seriation and the consecutive ones problem. SIAM Journal on Computing, 28(1):297–310, 1998.
[16] F. Fogel, R. Jenatton, F. Bach, and A. d'Aspremont. Convex relaxations for permutation problems. arXiv:1306.4805, 2013.
[17] Erling D Andersen and Knud D Andersen. The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. High Performance Optimization, 33:197–232, 2000.
[18] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.
[19] L Portugal, F Bastos, J Júdice, J Paixao, and T Terlaky. An investigation of interior-point algorithms for the linear transportation problem. SIAM Journal on Scientific Computing, 17(5):1202–1223, 1996.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization. Springer, 2003.
[21] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1998.
[22] Frank Roy Hodson. The La Tène cemetery at Münsingen-Rain: catalogue and relative chronology, volume 5. Stämpfli, 1968.
[23] Thomas M Cover and Joy A Thomas. Elements of Information Theory. Wiley-Interscience, 2012.
Online Learning of Dynamic Parameters in Social Networks

Shahin Shahrampour¹, Alexander Rakhlin², Ali Jadbabaie¹
¹Department of Electrical and Systems Engineering, ²Department of Statistics
University of Pennsylvania, Philadelphia, PA 19104 USA
¹{shahin,jadbabai}@seas.upenn.edu ²rakhlin@wharton.upenn.edu

Abstract

This paper addresses the problem of online learning in a dynamic setting. We consider a social network in which each individual observes a private signal about the underlying state of the world and communicates with her neighbors at each time period. Unlike many existing approaches, the underlying state is dynamic and evolves according to a geometric random walk. We view the scenario as an optimization problem in which agents aim to learn the true state while suffering the smallest possible loss. Based on the decomposition of the global loss function, we introduce two update mechanisms, each of which generates an estimate of the true state. We establish a tight bound on the rate of change of the underlying state, under which individuals can track the parameter with bounded variance. We then characterize explicit expressions for the steady-state mean-square deviation (MSD) of the estimates from the truth, per individual. We observe that only one of the estimators recovers the optimal MSD, which underscores the impact of the objective function decomposition on the learning quality. Finally, we provide an upper bound on the regret of the proposed methods, measured as an average of errors in estimating the parameter over a finite time.

1 Introduction

In recent years, distributed estimation, learning and prediction have attracted considerable attention in a wide variety of disciplines, with applications ranging from sensor networks to social and economic networks [1–6]. In this broad class of problems, agents aim to learn the true value of a parameter, often called the underlying state of the world.
The state could represent a product, an opinion, a vote, or a quantity of interest in a sensor network. Each agent observes a private signal about the underlying state at each time period and communicates with her neighbors to augment her imperfect observations. Despite the wealth of research in this area when the underlying state is fixed (see e.g. [1–3, 7]), the state is often subject to change over time (e.g. the price of stocks) [8–11]. It is therefore more realistic to study models which allow the parameter of interest to vary. In the non-distributed context, such models have been studied in the classical literature on time-series prediction and, more recently, in the literature on online learning under relaxed assumptions about the nature of sequences [12]. In this paper we study the sequential prediction problem in the context of a social network with noisy feedback to agents. We consider a stochastic optimization framework to describe an online social learning problem in which the underlying state of the world varies over time. Our motivation for the current study is the results of [8] and [9], where the authors propose a social learning scheme in which the underlying state follows a simple random walk. Unlike [8] and [9], however, we assume a geometric random walk evolution with an associated rate of change. This enables us to investigate the interplay of social learning, network structure, and the rate of state change, especially in the interesting case where the rate is greater than unity. We then pose social learning as an optimization problem in which individuals aim to suffer the smallest possible loss as they observe the stream of signals. Of particular relevance to this work is the work of Duchi et al. [13], where the authors develop a distributed method based on dual averaging of subgradients to converge to the optimal solution.
In this paper, we restrict our attention to quadratic loss functions regularized by a quadratic proximal function, but there is no fixed optimal solution, as the underlying state is dynamic. In this direction, the key observation is the decomposition of the global loss function into local loss functions. We consider two decompositions of the global objective, each of which gives rise to a single-consensus-step belief update mechanism. The first method incorporates the averaged prior beliefs among neighbors together with the new private observation, while the second one takes into account the observations in the neighborhood as well. In both scenarios, we establish that the estimates are eventually unbiased, and we characterize an explicit expression for the mean-square deviation (MSD) of the beliefs from the truth, per individual. Interestingly, this quantity depends on the whole spectrum of the communication matrix, which exhibits the formidable role of the network structure in asymptotic learning. We observe that the estimators outperform the upper bound on the MSD provided in previous work [8]. Furthermore, only one of the two proposed estimators can compete with the centralized optimal Kalman filter [14] in certain circumstances. This fact underscores the dependence of optimality on the decomposition of the global loss function. We further highlight the influence of connectivity on learning by quantifying the ratio of the MSD for a complete versus a disconnected network. This ratio is always less than unity and can get arbitrarily close to zero under some constraints. Our next contribution is an upper bound on the regret of the proposed methods, defined as an average of errors in estimating the parameter up to a given time, minus the long-run expected loss due to noise and dynamics alone.
This finite-time regret analysis is based on recently developed concentration inequalities for matrices, and it complements the asymptotic statements about the behavior of the MSD. Finally, we examine the trade-off between network sparsity and learning quality at a microscopic level. Under mild technical constraints, we see that losing each connection has a detrimental effect on learning, as it monotonically increases the MSD. On the other hand, capturing agents' communications with a graph, we introduce the notion of the optimal edge as the edge whose addition has the greatest effect on learning in the sense of MSD reduction. We prove that such a friendship is most likely to occur between a pair of individuals with high self-reliance who have the fewest common neighbors.

2 Preliminaries

2.1 State and Observation Model

We consider a network consisting of a finite number of agents V = {1, 2, ..., N}. The agents indexed by i ∈ V seek the underlying state of the world, x_t ∈ R, which varies over time and evolves according to

x_{t+1} = a x_t + r_t,   (1)

where r_t is a zero-mean innovation, independent over time with finite variance E[r_t²] = σ_r², and a ∈ R is the expected rate of change of the state of the world, assumed to be available to all agents, and potentially greater than unity. We assume the initial state x₀ is a finite random variable drawn independently by nature. At time period t, each agent i receives a private signal y_{i,t} ∈ R, which is a noisy version of x_t and can be described by the linear equation

y_{i,t} = x_t + w_{i,t},   (2)

where w_{i,t} is a zero-mean observation noise with finite variance E[w_{i,t}²] = σ_w², assumed independent over time and across agents, and uncorrelated with the innovation noise. Each agent i forms an estimate, or belief, about the true value of x_t at time t conforming to an update mechanism that will be discussed later.
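The state and observation model of Eqs. (1)-(2) is easy to simulate; here is a minimal sketch with Gaussian noise (the paper only assumes zero mean and finite variance, so the Gaussian choice and the function name are ours).

```python
import numpy as np

def simulate(a, sigma_r, sigma_w, N, T, x0=0.0, seed=0):
    """Simulate the geometric random walk x_{t+1} = a x_t + r_t of Eq. (1)
    and the private signals y_{i,t} = x_t + w_{i,t} of Eq. (2) for N agents
    over T time periods."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    y = np.empty((T, N))
    x[0] = x0
    for t in range(T):
        y[t] = x[t] + sigma_w * rng.standard_normal(N)    # noisy private signals
        if t + 1 < T:
            x[t + 1] = a * x[t] + sigma_r * rng.standard_normal()
    return x, y
```

Setting both noise levels to zero recovers the deterministic geometric trajectory x_t = a^t x_0, which is a quick sanity check on the recursion.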
Much of the difficulty of this problem stems from the hardness of tracking a dynamic state with noisy observations, especially when |a| > 1; communication mitigates the difficulty by reducing the effective noise.

2.2 Communication Structure

Agents communicate with each other to update their beliefs about the underlying state of the world. The interaction between agents is captured by an undirected graph G = (V, E), where V is the set of agents, and if there is a link between agent i and agent j, then {i, j} ∈ E. We let N̄_i = {j ∈ V : {i, j} ∈ E} be the set of neighbors of agent i, and N_i = N̄_i ∪ {i}. Each agent i can only communicate with her neighbors, and assigns a weight p_{ij} > 0 to any j ∈ N̄_i. We also let p_{ii} ≥ 0 denote the self-reliance of agent i.

Assumption 1. The communication matrix P = [p_{ij}] is symmetric and doubly stochastic, i.e., it satisfies p_{ij} ≥ 0, p_{ij} = p_{ji}, and ∑_{j∈N_i} p_{ij} = ∑_{j=1}^N p_{ij} = 1. We further assume that the eigenvalues of P are in descending order and satisfy −1 < λ_N(P) ≤ ... ≤ λ₂(P) < λ₁(P) = 1.

2.3 Estimate Updates

The goal of the agents is to learn x_t in a collaborative manner by making sequential predictions. From an optimization perspective, this can be cast as the online minimization of the separable, global, time-varying cost function

min_{x̄∈R} f_t(x̄) = (1/N) ∑_{i=1}^N f̂_{i,t}(x̄),  with f̂_{i,t}(x̄) ≜ (1/2) E[(y_{i,t} − x̄)²],
           = (1/N) ∑_{i=1}^N f̃_{i,t}(x̄),  with f̃_{i,t}(x̄) ≜ ∑_{j=1}^N p_{ij} f̂_{j,t}(x̄),   (3)

at each time period t. One approach to the stochastic learning problem formulated above is to employ distributed dual averaging regularized by a quadratic proximal function [13].
To this end, if agent i exploits f̂_{i,t} as the local loss function, she updates her belief as

x̂_{i,t+1} = a ∑_{j∈N_i} p_{ij} x̂_{j,t}  (consensus update)  +  α (y_{i,t} − x̂_{i,t})  (innovation update),   (4)

while using f̃_{i,t} as the local loss function results in the update

x̃_{i,t+1} = a ∑_{j∈N_i} p_{ij} x̃_{j,t}  (consensus update)  +  α (∑_{j∈N_i} p_{ij} y_{j,t} − x̃_{i,t})  (innovation update),   (5)

where α ∈ (0, 1] is a constant step size that agents place on their innovation update, which we refer to as the signal weight. Equations (4) and (5) are distinct single-consensus-step estimators differing in the choice of the local loss function: (4) uses only private observations, while (5) averages observations over the neighborhood. We analyze both classes of estimators, noting that one might expect (5) to perform better than (4) due to greater information availability. Note that the choice of a constant step size provides insight into the interplay of persistent innovation and the learning abilities of the network. We remark that agents can easily learn the fixed rate of change a by taking ratios of observations, and we assume that this has already been performed by the agents in the past. The case of a changing a is beyond the scope of the present paper. We also point out that the real-valued (rather than vector-valued) nature of the state is a simplification that forms a clean playground for the study of the effects of social learning, effects of friendships, and other properties of the problem.

2.4 Error Process

Defining the local error processes ξ̂_{i,t} and ξ̃_{i,t} at time t for agent i as

ξ̂_{i,t} ≜ x̂_{i,t} − x_t  and  ξ̃_{i,t} ≜ x̃_{i,t} − x_t,

and stacking the local errors in vectors ξ̂_t, ξ̃_t ∈ R^N such that

ξ̂_t ≜ [ξ̂_{1,t}, ..., ξ̂_{N,t}]^T  and  ξ̃_t ≜ [ξ̃_{1,t}, ..., ξ̃_{N,t}]^T,   (6)

one can show that these collective error processes are described by a linear dynamical system.

Lemma 2.
Given Assumption 1, the collective error processes ξ̂_t and ξ̃_t defined in (6) satisfy

ξ̂_{t+1} = Q ξ̂_t + ŝ_t  and  ξ̃_{t+1} = Q ξ̃_t + s̃_t,   (7)

respectively, where

Q = a(P − αI_N),   (8)

and

ŝ_t = (αa)[w_{1,t}, ..., w_{N,t}]^T − r_t 1_N  and  s̃_t = (αa)P[w_{1,t}, ..., w_{N,t}]^T − r_t 1_N,   (9)

with 1_N the vector of all ones. Throughout the paper, we let ρ(Q) denote the spectral radius of Q, which equals the largest singular value of Q due to symmetry.

3 Social Learning: Convergence of Beliefs and Regret Analysis

In this section, we study the behavior of estimators (4) and (5) in the mean and mean-square sense, and we provide the regret analysis. In the following proposition, we establish a tight bound on a under which agents can achieve asymptotically unbiased estimates using a proper signal weight.

Proposition 3 (Unbiased Estimates). Given the network G with corresponding communication matrix P satisfying Assumption 1, the rate of change of the social network in (4) and (5) must respect the constraint

|a| < 2/(1 − λ_N(P)),

to allow agents to form asymptotically unbiased estimates of the underlying state.

Proposition 3 determines the trade-off between the rate of change and the network structure. In other words, as long as the state changes more slowly than the rate given in the proposition, individuals can always track x_t with bounded variance by selecting an appropriate signal weight. However, the proposition makes no statement about the learning quality. To capture that, we define the steady-state mean square deviation (MSD) of the network from the truth as follows.

Definition 4 ((Steady-State) Mean Square Deviation). Given the network G with a rate of change which allows unbiased estimation, the steady state of the error processes in (7) is defined as

Σ̂ ≜ lim_{t→∞} E[ξ̂_t ξ̂_t^T]  and  Σ̃ ≜ lim_{t→∞} E[ξ̃_t ξ̃_t^T].
Hence, the (steady-state) mean square deviation of the network is the deviation from the truth in the mean-square sense, per individual, defined as

MSD̂ ≜ (1/N) Tr(Σ̂)  and  MSD̃ ≜ (1/N) Tr(Σ̃).

Theorem 5 (MSD). Given the error processes (7) with ρ(Q) < 1, the steady-state MSD for (4) and (5) is a function of the communication matrix P and the signal weight α:

MSD̂(P, α) = R_MSD(α) + Ŵ_MSD(P, α),
MSD̃(P, α) = R_MSD(α) + W̃_MSD(P, α),   (10)

where

R_MSD(α) ≜ σ_r² / (1 − a²(1 − α)²),   (11)

and

Ŵ_MSD(P, α) ≜ (1/N) ∑_{i=1}^N a²α²σ_w² / (1 − a²(λ_i(P) − α)²)  and
W̃_MSD(P, α) ≜ (1/N) ∑_{i=1}^N a²α²σ_w² λ_i²(P) / (1 − a²(λ_i(P) − α)²).   (12)

Theorem 5 shows that the steady-state MSD is governed by all eigenvalues of P, which contribute to the W_MSD term pertaining to the observation noise, while R_MSD is the penalty incurred due to the innovation noise. Moreover, (5) outperforms (4) due to richer information diffusion, which stresses the importance of the global loss function decomposition. One might conjecture that a complete network, in which all individuals can communicate with each other, achieves a lower steady-state MSD in the learning process, since it provides the most information diffusion among all networks. This intuition is discussed in the following corollary, alongside a few examples.

Corollary 6. Denote the complete, star, and cycle graphs on N vertices by K_N, S_N, and C_N, respectively, and their corresponding Laplacians by L_{K_N}, L_{S_N}, and L_{C_N}. Under the conditions of Theorem 5:

(a) For P = I − ((1−α)/N) L_{K_N}, we have lim_{N→∞} MSD̂_{K_N} = R_MSD(α) + a²α²σ_w².   (13)

(b) For P = I − ((1−α)/N) L_{S_N}, we have lim_{N→∞} MSD̂_{S_N} = R_MSD(α) + a²α²σ_w² / (1 − a²(1 − α)²).   (14)

(c) For P = I − βL_{C_N}, where β must preserve unbiasedness, we have lim_{N→∞} MSD̂_{C_N} = R_MSD(α) + ∫₀^{2π} [a²α²σ_w² / (1 − a²(1 − β(2 − 2cos τ) − α)²)] dτ/(2π).   (15)

(d) For P = I − (1/N) L_{K_N}, we have lim_{N→∞} MSD̃_{K_N} = R_MSD(α).   (16)

Proof.
Noting that the spectra of $L_{K_N}$, $L_{S_N}$, and $L_{C_N}$ are, respectively [15],
$$\{\lambda_N = 0,\ \lambda_{N-1} = N, \ldots, \lambda_1 = N\}, \quad \{\lambda_N = 0,\ \lambda_{N-1} = 1, \ldots, \lambda_2 = 1,\ \lambda_1 = N\}, \quad \text{and} \quad \Big\{\lambda_i = 2 - 2\cos\big(\tfrac{2\pi i}{N}\big)\Big\}_{i=0}^{N-1},$$
substituting each case in (10) and taking the limit over $N$, the proof follows immediately.

To study the effect of communication, let us consider the estimator (4). Under the purview of Theorem 5 and Corollary 6, the ratio of the steady-state MSD for a complete network (13) versus a fully disconnected network ($P = I_N$) can be computed as
$$\lim_{N\to\infty} \frac{\widehat{\mathrm{MSD}}_{K_N}}{\widehat{\mathrm{MSD}}_{\text{disconnected}}} = \frac{\sigma_r^2 + a^2\alpha^2\sigma_w^2\big(1 - a^2(1-\alpha)^2\big)}{\sigma_r^2 + a^2\alpha^2\sigma_w^2} \approx 1 - a^2(1-\alpha)^2, \quad \text{for } \sigma_r^2 \ll \sigma_w^2.$$
The ratio above can get arbitrarily close to zero, which indeed highlights the influence of communication on the quality of learning.

We now consider the Kalman filter (KF) [14] as the optimal centralized counterpart of (5). It is well known that the steady-state KF satisfies a Riccati equation, and when the parameter of interest is scalar, the Riccati equation simplifies to a quadratic with the positive root
$$\Sigma_{KF} = \frac{a^2\sigma_w^2 - \sigma_w^2 + N\sigma_r^2 + \sqrt{\big(a^2\sigma_w^2 - \sigma_w^2 + N\sigma_r^2\big)^2 + 4N\sigma_w^2\sigma_r^2}}{2N}.$$
Therefore, comparing with the complete graph (16), we have
$$\lim_{N\to\infty} \Sigma_{KF} = \sigma_r^2 \le \frac{\sigma_r^2}{1 - a^2(1-\alpha)^2},$$
and the upper bound can be made tight by choosing $\alpha = 1$ for $|a| < \frac{1}{|\lambda_N(P)-1|}$. If $|a| \ge \frac{1}{|\lambda_N(P)-1|}$, we should choose an $\alpha < 1$ to preserve unbiasedness as well.

On the other hand, to evaluate the performance of estimator (4), we consider the upper bound
$$\mathrm{MSD}_{\text{Bound}} = \frac{\sigma_r^2 + \alpha^2\sigma_w^2}{\alpha}, \qquad (17)$$
derived in [8] for $a = 1$ via a distributed estimation scheme. For simplicity, we assume $\sigma_w^2 = \sigma_r^2 = \sigma^2$, and let $\beta$ in (15) be any diminishing function of $N$. Optimizing (13), (14), (15), and (17) over $\alpha$, we obtain
$$\lim_{N\to\infty} \widehat{\mathrm{MSD}}_{K_N} \approx 1.55\sigma^2 \;<\; \lim_{N\to\infty} \widehat{\mathrm{MSD}}_{S_N} = \lim_{N\to\infty} \widehat{\mathrm{MSD}}_{C_N} \approx 1.62\sigma^2 \;<\; \mathrm{MSD}_{\text{Bound}} = 2\sigma^2,$$
which suggests a noticeable improvement in learning even in the star and cycle networks, where the numbers of individuals and connections are of the same order.
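The scalar Riccati root above is easy to evaluate directly. The sketch below (our own check; the helper name is ours) confirms that $\Sigma_{KF}$ exceeds $\sigma_r^2$ for small $N$ but converges to $\sigma_r^2$ as $N \to \infty$, the limit matched by the complete graph in (16) when $\alpha = 1$ is admissible:

```python
import math

def kf_steady_state(a, sigma_w2, sigma_r2, N):
    """Positive root of the scalar steady-state Riccati quadratic
    N*S^2 - (a^2*sigma_w2 - sigma_w2 + N*sigma_r2)*S - sigma_w2*sigma_r2 = 0."""
    b = a**2 * sigma_w2 - sigma_w2 + N * sigma_r2
    return (b + math.sqrt(b**2 + 4 * N * sigma_w2 * sigma_r2)) / (2 * N)

for N in [1, 10, 1000, 10**6]:
    print(N, kf_steady_state(0.9, 1.0, 1.0, N))
```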
Regret Analysis. We now turn to a finite-time regret analysis of our methods. The average loss of all agents in predicting the state, up until time $T$, is
$$\frac{1}{T}\sum_{t=1}^T \frac{1}{N}\sum_{i=1}^N (\hat x_{i,t} - x_t)^2 = \frac{1}{T}\sum_{t=1}^T \frac{1}{N}\mathrm{Tr}\big(\hat\xi_t\hat\xi_t^T\big).$$
As motivated earlier, it is not possible, in general, to drive this average loss to zero, and we need to subtract off the limit. We thus define the regret as
$$R_T \triangleq \frac{1}{T}\sum_{t=1}^T \frac{1}{N}\mathrm{Tr}\big(\hat\xi_t\hat\xi_t^T\big) - \frac{1}{T}\sum_{t=1}^T \frac{1}{N}\mathrm{Tr}(\hat\Sigma) = \frac{1}{N}\mathrm{Tr}\Big(\frac{1}{T}\sum_{t=1}^T \hat\xi_t\hat\xi_t^T - \hat\Sigma\Big),$$
where $\hat\Sigma$ is from Definition 4. We then have, for the spectral norm $\|\cdot\|$, that
$$R_T \le \Big\|\frac{1}{T}\sum_{t=1}^T \xi_t\xi_t^T - \Sigma\Big\|, \qquad (18)$$
where we drop the notation distinguishing the two estimators, since the analysis works for both of them. We first state a technical lemma from [16] that we invoke later for bounding the quantity $R_T$. For simplicity, we assume that the magnitudes of both the innovation and the observation noise are bounded.

Lemma 7. Let $\{s_t\}_{t=1}^T$ be an independent family of vector-valued random variables, and let $H$ be a function that maps $T$ variables to a self-adjoint matrix of dimension $N$. Consider a sequence $\{A_t\}_{t=1}^T$ of fixed self-adjoint matrices that satisfy
$$\big(H(\omega_1, \ldots, \omega_t, \ldots, \omega_T) - H(\omega_1, \ldots, \omega_t', \ldots, \omega_T)\big)^2 \preceq A_t^2,$$
where $\omega_i$ and $\omega_i'$ range over all possible values of $s_i$ for each index $i$. Letting $\mathrm{Var} = \big\|\sum_{t=1}^T A_t^2\big\|$, for all $c \ge 0$ we have
$$\mathbb{P}\big(\big\|H(s_1, \ldots, s_T) - \mathbb{E}[H(s_1, \ldots, s_T)]\big\| \ge c\big) \le N e^{-c^2/8\,\mathrm{Var}}.$$

Theorem 8. Under the conditions of Theorem 5, together with boundedness of the noise, $\max_{t\le T}\|s_t\| \le s$ for some $s > 0$, the regret function defined in (18) satisfies
$$R_T \le \frac{1}{T}\frac{\|\xi_0\|^2}{1 - \rho^2(Q)} + \frac{1}{T}\frac{2s\|\xi_0\|}{\big(1 - \rho(Q)\big)^2} + \frac{1}{T}\frac{s^2}{\big(1 - \rho^2(Q)\big)^2} + \frac{1}{\sqrt{T}}\frac{8s^2\sqrt{2\log\frac{N}{\delta}}}{\big(1 - \rho(Q)\big)^2}, \qquad (19)$$
with probability at least $1 - \delta$.

We mention that results similar in spirit have been studied for general unbounded stationary ergodic time series in [17–19] by employing techniques from the online learning literature. On the other hand, our problem has the network structure and the specific evolution of the hidden state, neither of which is present in the above works.
4 The Impact of New Friendships on Social Learning

In the social learning model we proposed, agents are cooperative and aim to accomplish a global objective. In this direction, the network structure contributes substantially to the learning process. In this section, we restrict our attention to the estimator (5), and characterize the intuitive idea that making (losing) friendships can influence the quality of learning by decreasing (increasing) the steady-state MSD of the network. To commence, letting $e_i$ denote the $i$-th unit vector in the standard basis of $\mathbb{R}^N$, we exploit the negative semi-definite edge function matrix
$$\Delta P(i,j) \triangleq -(e_i - e_j)(e_i - e_j)^T, \qquad (20)$$
for edge addition to (removal from) the graph. Essentially, if there is no connection between agents $i$ and $j$,
$$P' = P + \epsilon\,\Delta P(i,j), \qquad (21)$$
for $\epsilon < \min\{p_{ii}, p_{jj}\}$, corresponds to a new communication matrix adding the edge $\{i,j\}$ with a weight $\epsilon$ to the network $\mathcal{G}$, and subtracting $\epsilon$ from the self-reliance of agents $i$ and $j$.

Proposition 9. Let $\mathcal{G}^-$ be the network resulting from removing the bidirectional edge $\{i,j\}$ with weight $\epsilon$ from the network $\mathcal{G}$, and let $P^-$ and $P$ denote the communication matrices associated to $\mathcal{G}^-$ and $\mathcal{G}$, respectively. Given Assumption 1, for a fixed signal weight $\alpha$ the following relationship holds:
$$\widetilde{\mathrm{MSD}}(P, \alpha) \le \widetilde{\mathrm{MSD}}(P^-, \alpha), \qquad (22)$$
as long as $P$ is positive semi-definite and $|a| < \frac{1}{|\alpha|}$.

Under a mild technical assumption, Proposition 9 suggests that losing connections monotonically increases the MSD, so individuals tend to maintain their friendships to obtain a lower MSD as a global objective. However, this does not address whether there exist individuals with whom making or losing connections could have an outsized impact on learning. We bring this concept to light in the following proposition by finding a so-called optimal edge, which provides the largest MSD reduction when added to the network graph.

Proposition 10.
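Proposition 9 can be probed numerically. The sketch below (our own illustration, not the authors' code; the lazy-cycle $P$ and the helper names are ours) adds a previously non-existent edge to a cycle network via $\epsilon\,\Delta P(i,j)$ and checks that $\widetilde{\mathrm{MSD}}$ does not increase, under the proposition's conditions ($P \succeq 0$ and $|a| < 1/|\alpha|$):

```python
import numpy as np

def cycle_P(N, beta):
    """Doubly stochastic P = I - beta*L for the N-cycle."""
    L = 2 * np.eye(N)
    for i in range(N):
        L[i, (i + 1) % N] -= 1
        L[i, (i - 1) % N] -= 1
    return np.eye(N) - beta * L

def msd_tilde(P, a, alpha, s_r2=1.0, s_w2=1.0):
    """Steady-state MSD of estimator (5), from Theorem 5."""
    lam = np.linalg.eigvalsh(P)
    return (s_r2 / (1 - a**2 * (1 - alpha)**2)
            + np.mean(a**2 * alpha**2 * s_w2 * lam**2
                      / (1 - a**2 * (lam - alpha)**2)))

def add_edge(P, i, j, eps):
    """P + eps*DeltaP(i,j): edge weight eps taken from the self-reliance of i and j."""
    Pn = P.copy()
    Pn[i, i] -= eps; Pn[j, j] -= eps
    Pn[i, j] += eps; Pn[j, i] += eps
    return Pn

P = cycle_P(6, 0.15)              # PSD: eigenvalues lie in [0.4, 1]
a, alpha = 0.9, 0.5               # |a| < 1/|alpha|
P_plus = add_edge(P, 0, 3, 0.1)   # add the chord {0, 3}
print(msd_tilde(P, a, alpha), msd_tilde(P_plus, a, alpha))
```

The edge addition leaves $P$ symmetric and doubly stochastic, weakly decreases every eigenvalue (the perturbation (20) is negative semi-definite), and strictly lowers the MSD here.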
Given Assumption 1, a positive semi-definite $P$, and $|a| < \frac{1}{|\alpha|}$, to find the optimal edge with a pre-assigned weight $\epsilon \ll 1$ to add to the network $\mathcal{G}$, we need to solve the following optimization problem:
$$\min_{\{i,j\}\notin E}\ \sum_{k=1}^N h_k(i,j) \triangleq \min_{\{i,j\}\notin E}\ \sum_{k=1}^N z_k(i,j)\,\frac{2(1-\alpha^2 a^2)\lambda_k(P) + 2a^2\alpha\lambda_k^2(P)}{\big(1 - a^2(\lambda_k(P)-\alpha)^2\big)^2}, \qquad (23)$$
where
$$z_k(i,j) \triangleq \epsilon\,\big(v_k^T \Delta P(i,j)\, v_k\big), \qquad (24)$$
and $\{v_k\}_{k=1}^N$ are the orthonormal eigenvectors of $P$. In addition, letting $\zeta_{\max} = \max_{k>1} |\lambda_k(P) - \alpha|$,
$$\min_{\{i,j\}\notin E}\ \sum_{k=1}^N h_k(i,j) \;\ge\; \min_{\{i,j\}\notin E}\ \frac{-2\epsilon\Big((1-\alpha^2 a^2)(p_{ii} + p_{jj}) + a^2\alpha\big([P^2]_{ii} + [P^2]_{jj} - 2[P^2]_{ij}\big)\Big)}{\big(1 - a^2\zeta_{\max}^2\big)^2}. \qquad (25)$$

Proof. Writing the first-order approximation of $\lambda_k(P')$ for the perturbed matrix $P' = P + \epsilon\Delta P(i,j)$ using the definition of $z_k(i,j)$ in (24), we have $\lambda_k(P') \approx \lambda_k(P) + z_k(i,j)$ for $\epsilon \ll 1$. Based on Theorem 5, we now derive
$$\widetilde{\mathrm{MSD}}(P',\alpha) - \widetilde{\mathrm{MSD}}(P,\alpha) \propto \sum_{k=1}^N \big(\lambda_k(P') - \lambda_k(P)\big)\,\frac{(1-\alpha^2a^2)\big(\lambda_k(P') + \lambda_k(P)\big) + 2a^2\alpha\,\lambda_k(P')\lambda_k(P)}{\big(1 - a^2(\lambda_k(P')-\alpha)^2\big)\big(1 - a^2(\lambda_k(P)-\alpha)^2\big)}$$
$$\approx \sum_{k=1}^N z_k(i,j)\,\frac{2(1-\alpha^2a^2)\lambda_k(P) + 2a^2\alpha\lambda_k^2(P) + \big(1-\alpha^2a^2 + 2a^2\alpha\lambda_k(P)\big)z_k(i,j)}{\big(1 - a^2(\lambda_k(P)-\alpha)^2\big)\big(1 - a^2(\lambda_k(P)-\alpha + z_k(i,j))^2\big)}$$
$$= \sum_{k=1}^N z_k(i,j)\,\frac{2(1-\alpha^2a^2)\lambda_k(P) + 2a^2\alpha\lambda_k^2(P)}{\big(1 - a^2(\lambda_k(P)-\alpha)^2\big)^2} + O(\epsilon^2),$$
noting that $z_k(i,j)$ is $O(\epsilon)$ from the definition (24). Minimizing $\widetilde{\mathrm{MSD}}(P',\alpha) - \widetilde{\mathrm{MSD}}(P,\alpha)$ is, hence, equivalent to the optimization (23) when $\epsilon \ll 1$. Taking into account that $P$ is positive semi-definite, that $z_k(i,j) \le 0$ for $k \ge 2$, and that $v_1 = \mathbf{1}_N/\sqrt{N}$, which implies $z_1(i,j) = 0$, we proceed to the proof of the lower bound using the definitions of $h_k(i,j)$ and $\zeta_{\max}$ in the statement of the proposition, as follows:
$$\sum_{k=1}^N h_k(i,j) = \sum_{k=2}^N z_k(i,j)\,\frac{2(1-\alpha^2a^2)\lambda_k(P) + 2a^2\alpha\lambda_k^2(P)}{\big(1 - a^2(\lambda_k(P)-\alpha)^2\big)^2} \;\ge\; \frac{1}{\big(1 - a^2\zeta_{\max}^2\big)^2}\sum_{k=2}^N z_k(i,j)\Big(2(1-\alpha^2a^2)\lambda_k(P) + 2a^2\alpha\lambda_k^2(P)\Big).$$
Substituting $z_k(i,j)$ from (24) above, we have
$$\sum_{k=1}^N h_k(i,j) \ge \frac{2\epsilon}{\big(1 - a^2\zeta_{\max}^2\big)^2}\sum_{k=1}^N \big(v_k^T\Delta P(i,j)v_k\big)\Big((1-\alpha^2a^2)\lambda_k(P) + a^2\alpha\lambda_k^2(P)\Big)$$
$$= \frac{2\epsilon}{\big(1 - a^2\zeta_{\max}^2\big)^2}\,\mathrm{Tr}\Big(\Delta P(i,j)\sum_{k=1}^N \big((1-\alpha^2a^2)\lambda_k(P) + a^2\alpha\lambda_k^2(P)\big)v_kv_k^T\Big) = \frac{2\epsilon}{\big(1 - a^2\zeta_{\max}^2\big)^2}\,\mathrm{Tr}\Big(\Delta P(i,j)\big((1-\alpha^2a^2)P + a^2\alpha P^2\big)\Big).$$
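The first-order objective (23) can be evaluated directly from an eigendecomposition of $P$. The sketch below (our own check; the helper names and the lazy-cycle $P$ are ours) compares the exact change in $\widetilde{\mathrm{MSD}}$ against the first-order prediction for a small edge weight. Note that the code's `objective` leaves out the common factor $\epsilon$ from $z_k$, and that the "$\propto$" in the proof hides the factor $a^2\alpha^2\sigma_w^2/N$ coming from (12):

```python
import numpy as np

def cycle_P(N, beta):
    L = 2 * np.eye(N)
    for i in range(N):
        L[i, (i + 1) % N] -= 1
        L[i, (i - 1) % N] -= 1
    return np.eye(N) - beta * L

def msd_tilde(P, a, alpha, s_r2=1.0, s_w2=1.0):
    lam = np.linalg.eigvalsh(P)
    return (s_r2 / (1 - a**2 * (1 - alpha)**2)
            + np.mean(a**2 * alpha**2 * s_w2 * lam**2
                      / (1 - a**2 * (lam - alpha)**2)))

def objective(P, i, j, a, alpha):
    """sum_k h_k(i,j) from (23), with the common factor eps left out of z_k."""
    lam, V = np.linalg.eigh(P)
    d = np.zeros(P.shape[0]); d[i], d[j] = 1.0, -1.0
    z = -(V.T @ d)**2                      # v_k^T DeltaP(i,j) v_k, Eq. (20)
    num = 2 * (1 - alpha**2 * a**2) * lam + 2 * a**2 * alpha * lam**2
    return np.sum(z * num / (1 - a**2 * (lam - alpha)**2)**2)

N, a, alpha, eps = 6, 0.9, 0.5, 1e-4
P = cycle_P(N, 0.15)
D = np.zeros((N, N)); D[0, 0] = D[3, 3] = -1.0; D[0, 3] = D[3, 0] = 1.0
exact = msd_tilde(P + eps * D, a, alpha) - msd_tilde(P, a, alpha)
approx = eps * a**2 * alpha**2 / N * objective(P, 0, 3, a, alpha)
print(exact, approx)
```

Both quantities are negative (adding the edge helps), and they agree up to the $O(\epsilon^2)$ remainder in the proof.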
Using the facts that $\mathrm{Tr}(\Delta P(i,j)P) = -p_{ii} - p_{jj} + 2p_{ij}$ and $\mathrm{Tr}(\Delta P(i,j)P^2) = -[P^2]_{ii} - [P^2]_{jj} + 2[P^2]_{ij}$, according to the definition of $\Delta P(i,j)$ in (20), and that $p_{ij} = 0$ since we are adding a non-existent edge $\{i,j\}$, the lower bound (25) is derived.

Besides posing the optimal edge problem as an optimization, Proposition 10 also provides an upper bound on the best improvement that making a friendship brings to the network. In view of (25), forming a connection between two agents with more self-reliance and fewer common neighbors minimizes the lower bound, which leaves the most room for MSD reduction.

5 Conclusion

We studied a distributed online learning problem over a social network. The goal of the agents is to estimate the underlying state of the world, which follows a geometric random walk. Each individual receives a noisy signal about the underlying state at each time period, so she communicates with her neighbors to recover the true state. We viewed the problem through an optimization lens, where agents want to minimize a global loss function in a collaborative manner. To estimate the true state, we proposed two methodologies derived from different decompositions of the global objective. Given the structure of the network, we provided a tight upper bound on the rate of change of the parameter which allows agents to follow the state with bounded variance. Moreover, we computed the averaged, steady-state, mean-square deviation of the estimates from the true state. A key observation was the optimality of one of the estimators, indicating the dependence of the learning quality on the decomposition. Furthermore, defining the regret as the average of errors in the process of learning during a finite time $T$, we demonstrated that the regret function of the proposed algorithms decays at a rate $O(1/\sqrt{T})$. Finally, under mild technical assumptions, we characterized the influence of the network pattern on learning by observing that each connection brings a monotonic decrease in the MSD.
Acknowledgments

We gratefully acknowledge the support of AFOSR MURI CHASE, the ONR BRC Program on Decentralized, Online Optimization, NSF under grants CAREER DMS-0954737 and CCF-1116928, as well as the Dean's Research Fund.

References
[1] M. H. DeGroot, "Reaching a consensus," Journal of the American Statistical Association, vol. 69, no. 345, pp. 118–121, 1974.
[2] A. Jadbabaie, P. Molavi, A. Sandroni, and A. Tahbaz-Salehi, "Non-bayesian social learning," Games and Economic Behavior, vol. 76, no. 1, pp. 210–225, 2012.
[3] E. Mossel and O. Tamuz, "Efficient bayesian learning in social networks with gaussian estimators," arXiv preprint arXiv:1002.0747, 2010.
[4] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao, "Optimal distributed online prediction using mini-batches," The Journal of Machine Learning Research, vol. 13, pp. 165–202, 2012.
[5] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in Fourth International Symposium on Information Processing in Sensor Networks. IEEE, 2005, pp. 63–70.
[6] S. Kar, J. M. Moura, and K. Ramanan, "Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication," IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3575–3605, 2012.
[7] S. Shahrampour and A. Jadbabaie, "Exponentially fast parameter estimation in networks using distributed dual averaging," arXiv preprint arXiv:1309.2350, 2013.
[8] D. Acemoglu, A. Nedic, and A. Ozdaglar, "Convergence of rule-of-thumb learning rules in social networks," in 47th IEEE Conference on Decision and Control, 2008, pp. 1714–1720.
[9] R. M. Frongillo, G. Schoenebeck, and O. Tamuz, "Social learning in a changing world," in Internet and Network Economics. Springer, 2011, pp. 146–157.
[10] U. A. Khan, S. Kar, A. Jadbabaie, and J. M. Moura, "On connectivity, observability, and stability in distributed estimation," in 49th IEEE Conference on Decision and Control, 2010, pp. 6639–6644.
[11] R. Olfati-Saber, "Distributed kalman filtering for sensor networks," in 46th IEEE Conference on Decision and Control, 2007, pp. 5492–5498.
[12] N. Cesa-Bianchi and G. Lugosi, Prediction, learning, and games. Cambridge University Press, 2006.
[13] J. C. Duchi, A. Agarwal, and M. J. Wainwright, "Dual averaging for distributed optimization: convergence analysis and network scaling," IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 592–606, 2012.
[14] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[15] M. Mesbahi and M. Egerstedt, Graph theoretic methods in multiagent networks. Princeton University Press, 2010.
[16] J. A. Tropp, "User-friendly tail bounds for sums of random matrices," Foundations of Computational Mathematics, vol. 12, no. 4, pp. 389–434, 2012.
[17] G. Biau, K. Bleakley, L. Györfi, and G. Ottucsák, "Nonparametric sequential prediction of time series," Journal of Nonparametric Statistics, vol. 22, no. 3, pp. 297–317, 2010.
[18] L. Györfi and G. Ottucsák, "Sequential prediction of unbounded stationary time series," IEEE Transactions on Information Theory, vol. 53, no. 5, pp. 1866–1872, 2007.
[19] L. Györfi, G. Lugosi, et al., Strategies for sequential prediction of stationary time series. Springer, 2000.
Discovering Hidden Variables in Noisy-Or Networks using Quartet Tests Yacine Jernite, Yoni Halpern, David Sontag Courant Institute of Mathematical Sciences New York University {halpern, jernite, dsontag}@cs.nyu.edu Abstract We give a polynomial-time algorithm for provably learning the structure and parameters of bipartite noisy-or Bayesian networks of binary variables where the top layer is completely hidden. Unsupervised learning of these models is a form of discrete factor analysis, enabling the discovery of hidden variables and their causal relationships with observed data. We obtain an efficient learning algorithm for a family of Bayesian networks that we call quartet-learnable. For each latent variable, the existence of a singly-coupled quartet allows us to uniquely identify and learn all parameters involving that latent variable. We give a proof of the polynomial sample complexity of our learning algorithm, and experimentally compare it to variational EM. 1 Introduction We study the problem of discovering the presence of latent variables in data and learning models involving them. The particular family of probabilistic models that we consider are bipartite noisy-or Bayesian networks where the top layer is completely hidden. Unsupervised learning of these models is a form of discrete factor analysis and has applications in sociology, psychology, epidemiology, economics, and other areas of scientific inquiry that need to identify the causal relationships of hidden or latent variables with observed data (Saund, 1995; Martin & VanLehn, 1995). Furthermore, these models are widely used in expert systems, such as the QMR-DT network for medical diagnosis (Shwe et al. , 1991). The ability to learn the structure and parameters of these models from partially labeled data could dramatically increase their adoption. 
We obtain an efficient learning algorithm for a family of Bayesian networks that we call quartet-learnable, meaning that every latent variable has a singly-coupled quartet (i.e. four children of a latent variable for which there is no other latent variable that is shared by at least two of the children). We show that the existence of such a quartet allows us to uniquely identify each latent variable and to learn all parameters involving that latent variable. Furthermore, using a technique introduced by Halpern & Sontag (2013), we show how to subtract already learned latent variables to create new singly-coupled quartets, substantially expanding the class of structures that we can learn. Importantly, even if we cannot discover every latent variable, our algorithm guarantees the correctness of any latent variable that was discovered. We show in Sec. 4 that our algorithm can learn nearly all of the structure of the QMR-DT network for medical diagnosis (i.e., discovering the existence of hundreds of diseases) simply from data recording the symptoms of each patient.

Underlying our algorithm are two new techniques for structure learning. First, we introduce a quartet test to determine whether a set of binary variables is singly coupled. When singly-coupled variables are found, we use previous results in mixture model learning to identify the coupling latent variable. Second, we develop a conditional point-wise mutual information test to learn the parameters of other children of identified latent variables. We give a self-contained proof of the polynomial sample complexity of our structure and parameter learning algorithms, by bounding the error propagation due to finding roots of polynomials. Finally, we present an experimental comparison of our structure learning algorithm to the variational expectation maximization algorithm of Šingliar & Hauskrecht (2006) on a synthetic image-decomposition problem and show competitive results.

Figure 1: Left: Example of a quartet-learnable network. For this network, the order (X, Y, Z) satisfies the definition: {a, b, c, d} is singly coupled by X, {c, e, f, g} is singly coupled by Y given X, and {d, g, h, i} is singly coupled by Z given X, Y. Right: Example of two different networks that have the same observable moments (i.e., distribution on a, b, c): pX = 0.2, pY = 0.3, pZ = 0.37; fX = (0.1, 0.2, 0.3), fY = (0.6, 0.4, 0.5), fZ = (0.28, 0.23, 0.33). The noise probabilities and full moments are given in the supplementary material.

Related work. Martin & VanLehn (1995) study structure learning for noisy-or Bayesian networks, observing that any two observed variables that share a hidden parent must be correlated. Their algorithm greedily attempts to find a small set of cliques that cover the dependencies of which it is most certain. Kearns & Mansour (1998) give a polynomial-time algorithm with provable guarantees for structure learning of noisy-or bipartite networks with bounded in-degree. Their algorithm incrementally constructs the network, in each step adding a new observed variable, introducing edges from the existing latent variables to the observed variable, and then seeing if new latent variables should be created. This approach requires strong assumptions, such as identical priors for the hidden variables and all incoming edges for an observed variable having the same failure probabilities. Silva et al. (2006) study structure learning in linear models with continuous latent variables, giving an algorithm for discovering disjoint subsets of observed variables that have a single hidden variable as their parent. Recent work has used tensor methods and sparse recovery to learn linear latent variable models with graph expansion (Anandkumar et al., 2013), and also continuous admixture models such as latent Dirichlet allocation (Anandkumar et al., 2012a).
The discrete variable setting is not linear, making it non-trivial to apply these methods, which rely on linearity of expectation. An alternative approach is to perform gradient ascent on the likelihood or use expectation maximization (EM). Although more robust to model error, the likelihood is non-convex and these methods do not have consistency guarantees. Elidan et al. (2001) seek "structural signatures", in their case semicliques, to use as structure candidates within structural EM (Elidan & Friedman, 2006; Friedman, 1997; Lazic et al., 2013). Our algorithm could be used in the same way. Exact inference is intractable in noisy-or networks (Cooper, 1987), so Šingliar & Hauskrecht (2006) give a variational EM algorithm for unsupervised learning of the parameters of a bipartite noisy-or network. We will use this as a baseline in our experimental results. Spectral approaches to learning mixture models originated with Chang's spectral method (Chang, 1996; analyzed in Mossel & Roch, 2005; see also Anandkumar et al. (2012b)). The binary variable setting is a special case and is discussed in Lazarsfeld (1950) and Pearl & Tarsi (1986). In Halpern & Sontag (2013), the parameters of singly-coupled variables in bipartite networks of known structure are learned using mixture model learning. Quartet tests have been previously used for learning latent tree models (Anandkumar et al., 2011; Pearl & Tarsi, 1986). Our quartet test, like those of Ishteva et al. (2013) and Eriksson (2005), uses the full fourth-order moment and a similar unfolding of the fourth-order moment matrix.

Background. We consider bipartite noisy-or Bayesian networks $(G, \Theta)$ with $n$ binary latent variables $U$, which we denote with capital letters (e.g. $X$), and $m$ observed binary variables $O$, which we denote with lower-case letters (e.g. $a$). The edges in the model are directed from the latent variables to the observed variables, as shown in Fig. 1.
In the noisy-or framework, an observed variable is on if at least one of its parents is on and does not fail to activate it. The entire Bayesian network is parametrized by $n \times m + n + m$ parameters. These parameters consist of prior probabilities on the latent variables, $p_X$ for $X \in U$; failure probabilities between latent and observed variables, $\vec f_X$ (a vector of size $m$); and noise or leak probabilities $\vec\nu = \{\nu_1, \ldots, \nu_m\}$. An equivalent formulation includes the noise in the model by introducing a single 'noise' latent variable, $X_0$, which is present with probability $p_0 = 1$ and has failure probabilities $\vec f_0 = 1 - \vec\nu$. The Bayesian network only has an edge between latent variable $X$ and observed variable $a$ if $f_{X,a} < 1$. The generative process for the model is then:
• The states of the latent variables are drawn independently: $X \sim \mathrm{Bernoulli}(p_X)$ for $X \in U$.
• Each $X \in U$ with $X = 1$ activates observed variable $a$ with probability $1 - f_{X,a}$.
• An observed variable $a \in O$ is "on" ($a = 1$) if it is activated by at least one of its parents.

The algorithms described in this paper make substantial use of sets of moments of the observed variables, particularly the negative moments. Let $S \subseteq O$ be a set of observed variables, and $\mathcal{X} \subseteq U$ be the set of parents of $S$. The joint distribution of a bipartite noisy-or network can be shown to have the following factorization, where $S = \{o_1, \ldots, o_{|S|}\}$:
$$N_{G,S} = P(o_1 = 0, o_2 = 0, \ldots, o_{|S|} = 0) = \prod_{U \in \mathcal{X}} \Big(1 - p_U + p_U \prod_{i=1}^{|S|} f_{U,o_i}\Big). \qquad (1)$$
The full joint distribution can be obtained from the negative moments via inclusion-exclusion formulas. We denote by $N_G$ the set of negative moments of the observed variables under $(G, \Theta)$. In the remainder of this section we review two results described in Halpern & Sontag (2013).

Parameter learning of singly-coupled triplets. We say that a set $O$ of observed variables is singly coupled by a parent $X$ if $X$ is a parent of every member of $O$ and there is no other parent $Y$ that is shared by at least two members of $O$.
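Equation (1) is easy to verify by simulation. The sketch below (our own illustration, not the authors' code; the tiny network and the helper names are made up, and the leak is folded into an always-on parent $X_0$ as described above) samples from a bipartite noisy-or network and compares the empirical frequency of an all-off event with the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([1.0, 0.3])                # parent priors; p[0] is the leak X0
F = np.array([[0.9, 0.8, 0.95],         # failure probs X0 -> (o1, o2, o3)
              [0.2, 0.5, 1.0]])         # failure probs X  -> (o1, o2, o3)

def sample(n):
    """Draw n joint samples of the observed variables."""
    U = rng.random((n, len(p))) < p     # latent states
    # parent j activates child i with prob 1 - F[j, i] when U_j = 1
    act = U[:, :, None] & (rng.random((n,) + F.shape) > F)
    return act.any(axis=1)              # (n, m) boolean observed variables

def negative_moment(S):
    """Eq. (1): P(all observed variables in S are off)."""
    return np.prod([1 - pU + pU * np.prod(F[j, S]) for j, pU in enumerate(p)])

X = sample(200_000)
S = [0, 1]
print(negative_moment(S), np.mean(~X[:, S].any(axis=1)))
```

For these (arbitrary) parameters the closed form gives $0.72 \times 0.73 = 0.5256$, and the empirical frequency matches to sampling error.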
A singly coupled set of observations is a binary mixture model, which gives rise to the next result, based on a rank-2 tensor decomposition of the joint distribution. If $(a,b,c)$ are singly coupled by $X$, we can learn $p_X$ and $f_{X,a}$ as follows. Let $M_1 = P(b, c, a = 0)$, $M_2 = P(b, c, a = 1)$, and $M_3 = M_2 M_1^{-1}$. Solving for $(\lambda_1, \lambda_2) = \mathrm{eigenvalues}(M_3)$, we then have:
$$p_X = \frac{1 + \lambda_2}{\lambda_2 - \lambda_1}\,\mathbf{1}^T (M_2 - \lambda_1 M_1)\,\mathbf{1} \quad \text{and} \quad f_{X,a} = \frac{1 + \lambda_1}{1 + \lambda_2}. \qquad (2)$$

Subtracting off. Because of the factored form of Equation 1, we can remove the influence of a latent variable from the negative moments. Let $X$ be a latent variable of $G$. Let $S \subseteq O$ be a set of observations and $\mathcal{X}$ be the parents of $S$. If we know $N_{G,S}$, the prior of $X$, and the failure probabilities $f_{X,S}$, we can obtain the negative moments of $S$ under $(G \setminus \{X\}, \Theta)$. When $S$ includes all of the children of $X$, this operation "subtracts off" or removes $X$ from the network:
$$N_{G \setminus X,\,S} = \prod_{U \in \mathcal{X} \setminus X} \Big(1 - p_U + p_U \prod_{i=1}^{|S|} f_{U,o_i}\Big) = \frac{N_{G,S}}{1 - p_X + p_X \prod_{i=1}^{|S|} f_{X,o_i}}. \qquad (3)$$

2 Structure learning

Our paper focuses on learning the structure of these bipartite networks, including the number of latent variables. We begin with the observation that not all structures are identifiable, even given infinite data. Suppose we applied the tensor decomposition method to the marginal distribution (moments) of three observed variables that share two parents. Often we can learn a network with the same marginal distribution, but where these three variables have just one parent. Figure 1 gives an example of such a network. As a result, if we hope to be able to learn structure, we need to make additional assumptions (e.g., that every latent variable has at least four children). We give two variants of an algorithm based on quartet tests, and prove its correctness in Section 3. Our approach is based on decomposing the structure learning problem into two tasks: (1) identifying the latent variables, and (2) determining to which observed variables they are connected.
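Eq. (2) can be exercised on exact moments. In the sketch below (our own construction, not the authors' code; the parameter values are made up, and $M_1$, $M_2$ are represented as $2 \times 2$ matrices indexed by the values of $(b, c)$, with the leak folded in as an always-on noise parent), the eigenvalue computation recovers $p_X$ and $f_{X,a}$ exactly:

```python
import numpy as np

pX = 0.3
f0 = np.array([0.9, 0.85, 0.95])   # leak-failure probs (always-on noise parent)
fX = np.array([0.4, 0.5, 0.6])     # failure probs of X -> (a, b, c)

def cond_off(x):
    """P(child = 0 | X = x) for children (a, b, c)."""
    return f0 * fX**x

def joint_bc_given_a(a_val):
    """M[i, j] = P(b = i, c = j, a = a_val), exact, using cond. independence."""
    M = np.zeros((2, 2))
    for x, px in [(0, 1 - pX), (1, pX)]:
        qa, qb, qc = cond_off(x)
        pa = qa if a_val == 0 else 1 - qa
        M += px * pa * np.outer([qb, 1 - qb], [qc, 1 - qc])
    return M

M1, M2 = joint_bc_given_a(0), joint_bc_given_a(1)
lam1, lam2 = np.sort(np.linalg.eigvals(M2 @ np.linalg.inv(M1)).real)
pX_hat = (1 + lam2) / (lam2 - lam1) * (M2 - lam1 * M1).sum()
fXa_hat = (1 + lam1) / (1 + lam2)
print(pX_hat, fXa_hat)
```

Here the smaller eigenvalue corresponds to the $X = 0$ mixture component, and $(M_2 - \lambda_1 M_1).{\rm sum}()$ computes $\mathbf{1}^T(M_2 - \lambda_1 M_1)\mathbf{1}$.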
2.1 Finding singly coupled quartets

Since triplets are not sufficient to identify a latent variable (Figure 1), we propose a new approach based on identifying singly-coupled quartets. We present two methods to find such quartets.

Algorithm 1 STRUCTURE-LEARN
Input: Observations S, thresholds τq, τ′q, τe. Output: Latent structure Latent
1: Latent = {}
2: while not converged do
3:   for all quartets (a, b, c, d) in S do
4:     T ← JOINT(a, b, c, d)
5:     T ← ADJUST(T, Latent)
6:     if PRETEST(T, τe) and 4TEST(T, τq, τ′q) then
7:       // (a, b, c, d) are singly-coupled
8:       L ← MIXTURE(a, b, c, d)
9:       children ← EXTEND(L, Latent, τe)
10:      Latent ← Latent ∪ {(L, children)}
11:     end if
12:   end for
13: end while

Algorithm 2 EXTEND
Input: Latent variable L with singly-coupled children (a, b, c, d), currently known latent structure Latent, threshold τ. Output: children, all the children of L
1: children = {(a, fL,a), (b, fL,b), (c, fL,c), (d, fL,d)}
2: for all observable x ∉ {a, b, c, d} do
3:   subtract off coupling parents in Latent from the moments
4:   if P(ā, b̄)/(P(ā)P(b̄)) > P(ā, b̄ | x̄)/(P(ā | x̄)P(b̄ | x̄)) + τ then
5:     fL,x ← FAILURE(a, b, x, L)
6:     children ← children ∪ {(x, fL,x)}
7:   end if
8: end for
9: return children

Figure 2: Structure learning. Left: main routine of the algorithm. JOINT gives the joint distribution and ADJUST subtracts off the influence of the latent variables (Eq. 3). PRETEST filters the set of candidate quartets by determining whether every triplet in a quartet has a shared parent, using Lemma 2. 4TEST refers to either of the quartet tests described in Section 2.1; τ′q is only used in the coherence quartet test. MIXTURE refers to using Eq. 2 to learn the parameters for all triplets in a singly-coupled quartet; this yields multiple estimates for each parameter, and we take the median. Right: algorithm to identify all of the children of a latent variable. FAILURE uses the method outlined in Section 2.2 (see Eq. 6) to find the failure probability fL,x. The
first is based on a rank test on a matrix formed from the fourth-order moments, and the second uses the variance of parameters learned from third-order moments. We then present a method that uses the point-wise mutual information of a triplet to identify all the other children of the new latent variable. The outline of the learning algorithm is presented in Algorithm 1. While not all networks can be learned, this method allows us to define a class of noisy-or networks on which we can perform structure learning.

Definition 1. A noisy-or network is quartet-learnable if there exists an ordering of its latent variables such that each one has a quartet of children which are singly coupled once the previous latent variables are removed from the model. A noisy-or network is strongly quartet-learnable if all of its latent variables have a singly coupled quartet of children.

An example of a quartet-learnable network is given in Figure 1.

Rank test. A candidate quartet for the rank test is a quartet where all nodes have at least one common parent. One way to find out whether a candidate quartet is singly coupled is to look directly at the rank of its fourth-order moments matrix. We have three ways to unfold the $2 \times 2 \times 2 \times 2$ tensor defined by these moments into a $4 \times 4$ matrix: we can consider the joint probability matrix of the aggregated variables $(a,b)$ and $(c,d)$, of $(a,c)$ and $(b,d)$, or of $(a,d)$ and $(b,c)$. We discuss the rank property for the first unfolding, but note that it holds for all three. Let $M$ be the $4 \times 4$ matrix obtained this way, and $\mathcal{S}$ be the set of parents that are parents of both $(a,b)$ and $(c,d)$. For all $S \subset \mathcal{S}$, let $q_S$ and $r_S$ be the vectors of the probabilities of $(a,b)$ and $(c,d)$, respectively, given that $S$ is the set of parents that are active. Then:
$$M = \sum_{S \subset \mathcal{S}}\ \prod_{X \in S} p_X \prod_{Y \in \mathcal{S} \setminus S} (1 - p_Y)\; q_S r_S^T.$$
In particular, this means that if there is only one parent shared between $(a,b)$ and $(c,d)$, $M$ is the sum of two rank-1 matrices, and thus has rank at most 2.
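This rank property can be checked directly on exact moments. The sketch below (our own illustration; the parameter values are arbitrary and the leak is encoded as an always-on parent) builds the $2 \times 2 \times 2 \times 2$ moment tensor, unfolds it as the $(a,b) \times (c,d)$ matrix, and inspects the third singular value, which is zero exactly when the unfolding has rank at most 2 (we use singular values as a numerically convenient stand-in for the eigenvalue magnitudes used in the test):

```python
import numpy as np

def quartet_moments(priors, F):
    """Exact joint P(a, b, c, d) for a noisy-or quartet.
    priors: parent priors (include an always-on leak parent);
    F: (n_parents, 4) failure probabilities."""
    n = len(priors)
    T = np.zeros((2, 2, 2, 2))
    for bits in range(2 ** n):
        x = [(bits >> k) & 1 for k in range(n)]
        pr = np.prod([p if xi else 1 - p for p, xi in zip(priors, x)])
        off = np.prod([F[k] ** x[k] for k in range(n)], axis=0)  # P(o_i = 0 | x)
        for idx in np.ndindex(2, 2, 2, 2):
            T[idx] += pr * np.prod([off[i] if v == 0 else 1 - off[i]
                                    for i, v in enumerate(idx)])
    return T

def third_sv(T):
    """Third singular value of the (a, b) x (c, d) unfolding of T."""
    return np.linalg.svd(T.reshape(4, 4), compute_uv=False)[2]

leak = [0.9, 0.9, 0.9, 0.9]
single = quartet_moments([1.0, 0.3],
                         np.array([leak, [0.2, 0.3, 0.4, 0.5]]))
double = quartet_moments([1.0, 0.3, 0.4],
                         np.array([leak, [0.2, 0.3, 0.4, 0.5],
                                   [0.6, 0.5, 0.7, 0.8]]))
print(third_sv(single), third_sv(double))
```

With one shared parent, the third singular value is zero up to floating-point noise; with two shared parents it is clearly bounded away from zero, which is the separation the threshold $\tau_q$ exploits.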
Conversely, if $|\mathcal{S}| > 1$, $M$ is the sum of at least 4 rank-1 matrices, and its elements are polynomial expressions in the parameters of the model. The determinant itself is then a polynomial function of the parameters of the model, i.e. $P(p_X, f_{X,u}\ \forall X \in \mathcal{S},\ u \in \{a,b,c,d\})$. We give examples in the supplementary material of parameter settings showing that $P \not\equiv 0$; hence the set of its roots has measure 0, which means that the third largest eigenvalue (using the eigenvalues' absolute values) of $M$ is non-zero with probability one. This will allow us to determine whether a candidate quartet is singly coupled by looking at the third eigenvalues of the three unfoldings of its joint distribution tensor. However, for the algorithm to be practical, we need a slightly stronger formalization of the property:

Definition 2. We say that a model is $\epsilon$-rank-testable if, for any quartet $\{a,b,c,d\}$ that shares a parent $U$ and any non-empty set of latent variables $\mathcal{V}$ such that $U \notin \mathcal{V}$ and $\exists V \in \mathcal{V}, (f_{V,b} \ne 1 \wedge f_{V,c} \ne 1)$, the third eigenvalue of the moments matrix $M$ corresponding to the sub-network $\{U, a, b, c, d\} \cup \mathcal{V}$ is at least $\epsilon$.

Any (finite) noisy-or network whose parameters were drawn at random is $\epsilon$-rank-testable for some $\epsilon$ with probability 1. The special case where all failure probabilities are equal also falls within this framework, provided they are not too close to 0 or 1. We can then determine whether a quartet is singly coupled by testing whether the third eigenvalues of all three unfoldings of the joint distribution are below a threshold, $\tau_q$. If this test succeeds, we learn its parameters using Eq. 2.

Coherence test. Let $\{a,b,c,d\}$ be a quartet of observed variables. To determine whether it is singly coupled, we can also apply Eq. 2 to learn the parameters of the triplets $(a,b,c)$, $(a,b,d)$, $(a,c,d)$ and $(b,c,d)$ as if they were singly coupled. This gives us four overlapping sets of parameters.
If the variance of the parameter estimates exceeds a threshold, we know that the quartet is not singly coupled. Note that agreement between the learned parameters is necessary but not sufficient to determine that $(a,b,c,d)$ are singly coupled. For example, in the case of a fully connected graph with two parents, four children, and identical failure probabilities, the third-order moments of any triplet are identical; hence the parameters learned will be the same. Lemma 1, however, states that the moments generated from the estimated parameters can only be equal to the true moments if the quartet is actually singly coupled.

Lemma 1. If the model is $\epsilon$-rank-testable and $(a,b,c,d)$ are not singly coupled, then if $M_R$ represents the reconstructed moments and $M$ the true moments, we have
$$\|M_R - M\|_\infty > \Big(\frac{\epsilon}{8}\Big)^4.$$

This can be proved using a result on eigenvalue perturbation from Elsner (1985) applied to an unfolding of the moments tensor. These two properties lead to the following algorithm: first, try to learn the parameters as if the quartet were singly coupled. If the variance of the parameter estimates exceeds a threshold, reject the quartet. Next, check whether we can reconstruct the moments using the mean of the parameter estimates. Accept the quartet as singly coupled if the reconstruction error is below a second threshold.

2.2 Extending Latent Variables

Once we have found a singly coupled quartet $(a,b,c,d)$, the second step is to find all other children of the coupling parent $A$. To that end, we use a property of the conditional pointwise mutual information (CPMI) that we introduce in this section. In this section, we use the notation $\bar a$ to denote the event $a = 0$. The CPMI between $a$ and $b$ given $x$ is defined as $\mathrm{CPMI}(a,b|x) \triangleq P(\bar a, \bar b \mid \bar x)/\big(P(\bar a \mid \bar x)P(\bar b \mid \bar x)\big)$. We will compare it to the point-wise mutual information (PMI) between $a$ and $b$, defined as $\mathrm{PMI}(a,b) \triangleq P(\bar a, \bar b)/\big(P(\bar a)P(\bar b)\big)$.
Let (a, b) be two observed variables that we know share only one parent A, and let x be any other observed variable. We show how the CPMI between a and b given x can be used to find f_{A,x}, the failure probability of x given A. Our algorithm requires that the priors of all of the hidden variables be less than 1/2. For any observed variable x, the following lemma states that CPMI(a, b | x) ≠ PMI(a, b) if and only if a, b and x share a parent. Since the only latent variable that has both a and b as children is A, this is equivalent to saying that x is a child of A.

Lemma 2. Let (a, b, x) be three observed variables in a noisy-or network, and let U_{a,b} be the set of common parents of a and b. For U ∈ U_{a,b}, defining

p_{U|x̄} = P(U, x̄) / P(x̄) = p_U f_{U,x} / (1 − p_U + p_U f_{U,x}),   (4)

we have p_{U|x̄} ≤ p_U. Furthermore,

P(ā, b̄ | x̄) / (P(ā | x̄) P(b̄ | x̄)) = ∏_{U ∈ U_{a,b}} (1 − p_{U|x̄} + p_{U|x̄} f_{U,a} f_{U,b}) / [(1 − p_{U|x̄} + p_{U|x̄} f_{U,a})(1 − p_{U|x̄} + p_{U|x̄} f_{U,b})] ≤ P(ā, b̄) / (P(ā) P(b̄)),

with equality if and only if (a, b, x) do not share a parent.

The proof of Lemma 2 is given in the supplementary material. As a result, if a and b have only the parent A in common, we can write:

R ≡ CPMI(a, b | x) = P(ā, b̄ | x̄) / (P(ā | x̄) P(b̄ | x̄)) = (1 − p_{A|x̄} + p_{A|x̄} f_{A,a} f_{A,b}) / [(1 − p_{A|x̄} + p_{A|x̄} f_{A,a})(1 − p_{A|x̄} + p_{A|x̄} f_{A,b})].

We can equivalently write this equation as Q(p_{A|x̄}) = 0 for the quadratic function Q(x) given by:

Q(x) = R(f_{A,a} − 1)(f_{A,b} − 1)x² + [R(f_{A,a} + f_{A,b} − 2) − (f_{A,a} f_{A,b} − 1)]x + R − 1.   (5)

Moreover, we can show that Q′(x) = 0 for some x > 1/2, hence one of the roots of Q is always greater than 1/2. In our framework, we know that p_{A|x̄} ≤ p_A ≤ 1/2, hence p_{A|x̄} is simply the smaller root of Q. After solving for p_{A|x̄}, we can obtain f_{A,x} using Eq. 4:

f_{A,x} = p_{A|x̄}(1 − p_A) / (p_A(1 − p_{A|x̄})).   (6)

Extending step. Once we find a singly coupled quartet (a, b, c, d) with common parent A, Lemma 2 allows us to determine whether a new variable x is also a child of A.
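As a sketch (not the authors' code), solving the quadratic of Eq. 5 for p_{A|x̄} and then applying Eq. 6 might look like this; the function name and the use of `np.roots` are our own choices:

```python
import numpy as np

def failure_from_cpmi(R, f_Aa, f_Ab, p_A):
    """Recover f_{A,x} from R = CPMI(a, b | x) (a sketch of Eqs. 5-6).

    One root of the quadratic Q of Eq. 5 always exceeds 1/2, while
    p_{A|x-bar} <= p_A <= 1/2, so we keep the smaller root and then
    apply Eq. 6.
    """
    a2 = R * (f_Aa - 1.0) * (f_Ab - 1.0)
    a1 = R * (f_Aa + f_Ab - 2.0) - (f_Aa * f_Ab - 1.0)
    a0 = R - 1.0
    p_A_xbar = min(r.real for r in np.roots([a2, a1, a0]))
    # Eq. 6: f_{A,x} = p_{A|x-bar} (1 - p_A) / (p_A (1 - p_{A|x-bar}))
    return p_A_xbar * (1.0 - p_A) / (p_A * (1.0 - p_A_xbar))
```

A quick sanity check: generating R forward from known parameters via Eq. 4 and the expression for R recovers the original f_{A,x}.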
Notice that for this step we only need to use two of the children in {a, b, c, d}, which we arbitrarily choose to be a and b. If x is found to be a child of A, we can solve for f_{A,x} using Eqs. 5 and 6. Algorithm 2 combines these two steps to find the parameters of all the children of A after a singly coupled quartet has been found.

Parameter learning with known structure. When the structure of the network is known, singly coupled triplets are sufficient for identifiability without resorting to the quartet tests of Section 2.1. That setting was previously studied in Halpern & Sontag (2013), which required every edge to be part of a singly coupled triplet or pair for its parameters to be learnable (possibly after subtracting off latent variables). Our new CPMI technique improves this result by enabling us to learn all failure probabilities for a latent variable's children even if the variable has only one singly coupled triplet.

3 Sample complexity analysis

In Section 2, we gave two variants of an algorithm to learn the structure of a class of noisy-or networks. We now upper bound the number of samples it requires to learn the structure of the network correctly with high probability, as a function of the ranges in which the parameters lie. All priors are in [p_min, 1/2], all failure probabilities are in [f_min, f_max], and the marginal probability of an observed variable x being off is lower bounded by n_min ≤ P(x̄). The full proofs of these results are given in the supplementary material.

Theorem 1. If a network with m observed variables is strongly quartet-learnable and ζ-rank-testable, then its structure can be learned in polynomial time with probability (1 − δ) and with a polynomial number of samples equal to:

O( max(1/ζ^8, 1/(n_min^8 p_min^2 (1 − f_max)^8)) · ln(2m/δ) ).

After N samples, the additive error on any of the parameters, ϵ(N), is bounded with probability 1 − δ by:

ϵ(N) ≤ O( √(ln(2m/δ)) / (f_min^18 (1 − f_max)^6 n_min^28 p_min^13 √N) ).
We obtain this result by determining the accuracy we need for our tests to be provably correct, and bounding how much the error in the output of the parameter learning algorithms depends on the input. This proves that we can learn a class of strongly quartet-learnable noisy-or networks in polynomial time with polynomial sample complexity. Next, we show how to extend the analysis to quartet-learnable networks as defined in Section 2 by subtracting off latent variables that we have previously learned. If some of the removed latent variables were coupling for an otherwise singly coupled quartet, we then discover new latent variables, and repeat the operation. If a network is quartet-learnable, we can find all of the latent variables in a finite number of subtracting-off steps, which we call the depth of the network (thus, a strongly quartet-learnable network has depth 0). To prove that the structure learning algorithm remains correct, we simply need to show that the estimated subtracted-off moments remain close to the true ones.

Lemma 3. If the additive error on the estimated negative moments of an observed quartet C and on the parameters for W latent variables X_1, . . . , X_W whose influence we want to remove from C is at most ϵ, then the error on the subtracted-off moments for C is O(W 4^W ϵ).

We define the width of the network to be the maximum number of parents that need to be subtracted off to be able to learn the parameters for a new singly coupled quartet (this is typically a small constant). This leads to the following result:

Theorem 2. If a network with m observed variables is quartet-learnable at depth d, is ζ-rank-testable, and has width W, then its structure can be learned with probability (1 − δ) with N_S samples, where:

N_S = O( (W 4^W / (f_min^18 (1 − f_max)^6 n_min^28 p_min^13))^{2d} × max(1/ζ^8, 1/(n_min^8 p_min^2 (1 − f_max)^8)) · ln(2m/δ) ).
The left-hand side of this expression has to do with the error introduced in the estimate of the parameters each time we do a subtracting-off step, which by definition occurs at most d times, hence the exponent. We notice that the bounds do not depend directly on the number of latent variables, indicating that we can learn networks with many latent variables, as long as the number of subtraction steps is small. While this bound is useful for proving that the sample complexity is indeed polynomial, in the experiments section we show that in practice our algorithm obtains reasonable results on sample sizes well below the theoretical bound.

4 Experiments

Depth of aQMR-DT. Halpern & Sontag (2013) previously showed that the parameters of the anonymized QMR-DT network for medical diagnosis (provided by the University of Pittsburgh through the efforts of Frances Connell, Randolph A. Miller, and Gregory F. Cooper) could be learned from data recording only symptoms if the structure is known. We now show that the structure can also be learned. Here we assume that the quartet tests are perfect (i.e., the infinite-data setting). Table 1 compares the depth of the aQMR-DT network using triplets and quartets. Structure learning discovers all but four of the diseases, two of which would not be learnable even if the structure were known. These two diseases are discussed in Halpern & Sontag (2013) and share all of their children except for one symptom each, resulting in a situation where no singly-coupled triplets can be found. The additional two diseases that cannot be learned share all but two children with each other. Thus, for these two latent variables, singly-coupled triplets exist but singly-coupled quartets do not.

Implementation. We test the performance of our algorithm on the synthetic image dataset used in Šingliar & Hauskrecht (2006). The Bayesian network consists of 8 latent variables and 64 observed variables, arranged in an 8×8 grid of pixels.
Each of the latent variables connects to a subset of the observed pixels (see Figure 3). The latent variable priors are set to 0.25, the failure probabilities for all edges are set to 0.1, and leak probabilities are set to 0.001. We generate samples from the network and use them to test the ability of our algorithm to discover the latent variables and network structure from the samples. The network is quartet learnable, but the first and last of the ground truth sources shown in Figure 3 can only be learned after a subtraction step. We use variational EM (Šingliar & Hauskrecht, 2006) as a baseline, using 16 random initializations and choosing the run with the highest lower bound on the likelihood. We found that multiple initializations substantially improved the quality of its result. The variational algorithm is given the correct number of sources as input.

Triplets (known structure)             Quartets (unknown structure)
depth  priors learned  edges learned   depth  diseases discovered  edges learned
0      527             43,139          0      469                  39,522
1      39              2,109           1      82                   4,875
2      2               100             2      13                   789
3      0               0               3      2                    86
inf    2               122             inf    4                    198

Table 1: Right: The depth at which latent variables (i.e., diseases) are discovered and parameters learned in the aQMR-DT network for medical diagnosis (Shwe et al., 1991) using the quartet-based structure learning algorithm, assuming infinite data. Left: Comparison to parameter learning with known structure, using one singly-coupled triplet to learn the failure probabilities for all of a disease's symptoms. The parameters learned at level 0 can be learned without any subtracting-off step. Those marked depth inf cannot be learned.

For our algorithm, we use the rank-based quartet test, which has the advantage of requiring only one threshold, τ_q, compared to the two needed by the coherence test. In our algorithm, the thresholds determine the number of discovered latent variables (sources).
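The generative process of such a bipartite noisy-or network can be sketched as follows; the function name and array layout are our own, and the test below uses the parameter values of the synthetic setup (priors 0.25, failures 0.1, leak 0.001) on a tiny 2-latent, 4-pixel layout:

```python
import numpy as np

def sample_noisy_or(n, priors, fail, leak, seed=0):
    """Draw n samples from a bipartite noisy-or network (a sketch).

    priors: (K,) latent priors; fail: (K, D) failure probabilities,
    with fail[k, d] = 1 meaning no edge from latent k to observed d;
    leak: (D,) leak probabilities. An observed unit stays off only if
    the leak and every active parent all fail to turn it on.
    """
    rng = np.random.default_rng(seed)
    priors, fail, leak = map(np.asarray, (priors, fail, leak))
    Z = (rng.random((n, priors.size)) < priors).astype(float)  # latents
    # P(x_d = 0 | z) = (1 - leak_d) * prod_{k : z_k = 1} fail[k, d]
    p_off = (1.0 - leak) * np.exp(Z @ np.log(fail))
    X = (rng.random((n, leak.size)) >= p_off).astype(int)
    return Z.astype(int), X
```

This is the process the structure learning algorithm inverts: only X is observed, and the quartet tests must recover which latents exist and which pixels they touch.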
Quartets are pre-filtered using pointwise mutual information to reject quartets that contain non-siblings (i.e., quartets (a, b, c, d) where a and b are likely not siblings). All quartets that fail the pretest or the rank test are discarded. We sort the remaining quartets by third singular value and proceed from lowest to highest. For each quartet in sorted order, we check whether it overlaps with a latent variable previously learned in this round. If it does not, we create a new latent variable and use the EXTEND step to find all of its children. The algorithm converges when no quartets pass the threshold. Figure 3 shows how the algorithms perform on the synthetic dataset with varying numbers of samples. Unless otherwise specified, our experiments use threshold values τ_q = 0.01 and τ_e = 0.1. Experiments exploring the sensitivity of the algorithm to these thresholds can be found in the supplementary material. The running time of the quartet algorithm is under 6 minutes for 10,000 samples using a parallel implementation with 16 cores. For comparison, the variational algorithm on the same samples takes 4 hours using 16 cores simultaneously (one random initialization per core) on the same machine. The variational run-time scales linearly with sample size, while the quartet algorithm is independent of sample size once the quartet marginals are computed.

Figure 3: A comparison between the variational algorithm of Šingliar & Hauskrecht (2006) and the quartet algorithm as the number of samples increases. The true network structure is shown on the right, with one image for each of the eight latent variables (sources). For each edge from a latent variable to an observed variable, the corresponding pixel intensity specifies 1 − f_{X,a} (black means no edge). The results of the quartet algorithm are divided by depth.
Column d=0 shows the sources learned without any subtraction and d=1 shows the sources learned after a single subtraction step. Nothing was learned at d > 1. The sample size of 10,000* refers to 10,000 samples using an optimized value for the threshold of the rank-based quartet test (τ_q = 0.003).

5 Conclusion

We presented a novel algorithm for learning the structure and parameters of bipartite noisy-or Bayesian networks where the top layer consists entirely of latent variables. Our algorithm can learn a broad class of models that may be useful for factor analysis and unsupervised learning. The structure learning algorithm does not depend on an ability to estimate the parameters in strongly quartet-learnable networks. As a result, it may be possible to generalize the approach beyond the noisy-or setting to other bipartite Bayesian networks, including those with continuous variables and discrete variables of more than two states.

References

Anandkumar, Anima, Chaudhuri, Kamalika, Hsu, Daniel, Kakade, Sham, Song, Le, & Zhang, Tong. 2011. Spectral methods for learning multivariate latent tree structure. Proceedings of NIPS 24, 2025–2033.
Anandkumar, Anima, Foster, Dean, Hsu, Daniel, Kakade, Sham, & Liu, Yi-Kai. 2012a. A spectral algorithm for latent Dirichlet allocation. Proceedings of NIPS 25, 926–934.
Anandkumar, Animashree, Hsu, Daniel, & Kakade, Sham M. 2012b. A method of moments for mixture models and hidden Markov models. In: Proceedings of COLT 2012.
Anandkumar, Animashree, Javanmard, Adel, Hsu, Daniel J, & Kakade, Sham M. 2013. Learning linear Bayesian networks with latent variables. Pages 249–257 of: Proceedings of ICML.
Chang, Joseph T. 1996. Full reconstruction of Markov models on evolutionary trees: identifiability and consistency. Mathematical Biosciences, 137(1), 51–73.
Cooper, Gregory F. 1987. Probabilistic inference using belief networks is NP-hard. Technical Report BMIR-1987-0195. Medical Computer Science Group, Stanford University.
Elidan, Gal, & Friedman, Nir. 2006. Learning hidden variable networks: The information bottleneck approach. Journal of Machine Learning Research, 6(1), 81.
Elidan, Gal, Lotner, Noam, Friedman, Nir, & Koller, Daphne. 2001. Discovering hidden variables: A structure-based approach. Advances in Neural Information Processing Systems, 479–485.
Elsner, Ludwig. 1985. An optimal bound for the spectral variation of two matrices. Linear Algebra and its Applications, 71, 77–80.
Eriksson, Nicholas. 2005. Tree construction using singular value decomposition. Algebraic Statistics for Computational Biology, 347–358.
Friedman, Nir. 1997. Learning belief networks in the presence of missing values and hidden variables. Pages 125–133 of: ICML '97.
Halpern, Yoni, & Sontag, David. 2013. Unsupervised learning of noisy-or Bayesian networks. In: Conference on Uncertainty in Artificial Intelligence (UAI-13).
Ishteva, Mariya, Park, Haesun, & Song, Le. 2013. Unfolding latent tree structures using 4th order tensors. In: ICML '13.
Kearns, Michael, & Mansour, Yishay. 1998. Exact inference of hidden structure from sample data in noisy-OR networks. Pages 304–310 of: Proceedings of UAI 14.
Lazarsfeld, Paul. 1950. Latent structure analysis. In: Stouffer, Samuel, Guttman, Louis, Suchman, Edward, Lazarsfeld, Paul, Star, Shirley, & Clausen, John (eds), Measurement and Prediction. Princeton, New Jersey: Princeton University Press.
Lazic, Nevena, Bishop, Christopher M, & Winn, John. 2013. Structural expectation propagation: Bayesian structure learning for networks with latent variables. In: Proceedings of AISTATS 16.
Martin, J, & VanLehn, Kurt. 1995. Discrete factor analysis: Learning hidden variables in Bayesian networks. Tech. rept. Department of Computer Science, University of Pittsburgh.
Mossel, Elchanan, & Roch, Sébastien. 2005. Learning nonsingular phylogenies and hidden Markov models. Pages 366–375 of: Proceedings of 37th STOC. ACM.
Pearl, Judea, & Tarsi, Michael. 1986. Structuring causal trees. Journal of Complexity, 2(1), 60–77.
Saund, Eric. 1995. A multiple cause mixture model for unsupervised learning. Neural Computation, 7(1), 51–71.
Shwe, Michael A, Middleton, B, Heckerman, DE, Henrion, M, Horvitz, EJ, Lehmann, HP, & Cooper, GF. 1991. Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base. Meth. Inform. Med, 30, 241–255.
Silva, Ricardo, Scheines, Richard, Glymour, Clark, & Spirtes, Peter. 2006. Learning the structure of linear latent variable models. The Journal of Machine Learning Research, 7, 191–246.
Šingliar, Tomáš, & Hauskrecht, Miloš. 2006. Noisy-or component analysis and its application to link analysis. The Journal of Machine Learning Research, 7, 2189–2213.
Multi-Prediction Deep Boltzmann Machines

Ian J. Goodfellow, Mehdi Mirza, Aaron Courville, Yoshua Bengio
Département d'informatique et de recherche opérationnelle
Université de Montréal, Montréal, QC H3C 3J7
{goodfeli,mirzamom,courvila}@iro.umontreal.ca, Yoshua.Bengio@umontreal.ca

Abstract

We introduce the multi-prediction deep Boltzmann machine (MP-DBM). The MP-DBM can be seen as a single probabilistic model trained to maximize a variational approximation to the generalized pseudolikelihood, or as a family of recurrent nets that share parameters and approximately solve different inference problems. Prior methods of training DBMs either do not perform well on classification tasks or require an initial learning pass that trains the DBM greedily, one layer at a time. The MP-DBM does not require greedy layerwise pretraining, and outperforms the standard DBM at classification, classification with missing inputs, and mean field prediction tasks.1

1 Introduction

A deep Boltzmann machine (DBM) [18] is a structured probabilistic model consisting of many layers of random variables, most of which are latent. DBMs are well established as generative models and as feature learning algorithms for classifiers. Exact inference in a DBM is intractable. DBMs are usually used as feature learners, where the mean field expectations of the hidden units are used as input features to a separate classifier, such as an MLP or logistic regression. To some extent, this erodes the utility of the DBM as a probabilistic model: it can generate good samples, and provides good features for deterministic models, but it has not proven especially useful for solving inference problems such as predicting class labels given input features or completing missing input features. Another drawback of the DBM is the complexity of training it. Typically it is trained in a greedy, layerwise fashion, by training a stack of RBMs.
Training each RBM to model samples from the previous RBM's posterior distribution increases a variational lower bound on the likelihood of the DBM, and serves as a good way to initialize the joint model. Training the DBM from a random initialization generally does not work. It can be difficult for practitioners to tell whether a given lower layer RBM is a good starting point to build a larger model. We propose a new way of training deep Boltzmann machines called multi-prediction training (MPT). MPT uses the mean field equations for the DBM to induce recurrent nets that are then trained to solve different inference tasks. The resulting trained MP-DBM model can be viewed either as a single probabilistic model trained with a variational criterion, or as a family of recurrent nets that solve related inference tasks. We find empirically that the MP-DBM does not require greedy layerwise training, so its performance on the final task can be monitored from the start. This makes it more suitable than the DBM for practitioners who do not have extensive experience with layerwise pretraining techniques or Markov chains. Anyone with experience minimizing non-convex functions should find MP-DBM training familiar and straightforward. Moreover, we show that inference in the MP-DBM is useful: the MP-DBM does not need an extra classifier built on top of its learned features to obtain good inference accuracy. We show that it outperforms the DBM at solving a variety of inference tasks including classification, classification with missing inputs, and prediction of randomly selected subsets of variables. Specifically, we use the MP-DBM to outperform the classification results reported for the standard DBM by Salakhutdinov and Hinton [18] on both the MNIST handwritten character dataset [14] and the NORB object recognition dataset [13].

1 Code and hyperparameters available at http://www-etud.iro.umontreal.ca/~goodfeli/mp_dbm.html
2 Review of deep Boltzmann machines

Typically, a DBM contains a set of D input features v that are called the visible units because they are always observed during both training and evaluation. When a class label is present, the DBM typically represents it with a discrete-valued label unit y. The unit y is observed (on examples for which it is available) during training, but typically is not available at test time. The DBM also contains several latent variables that are never observed. These hidden units are usually organized into L layers h^{(i)} of size N_i, i ∈ {1, . . . , L}, with each unit in a layer conditionally independent of the other units in the layer given the neighboring layers. The DBM is trained to maximize the mean field lower bound on log P(v, y). Unfortunately, training the entire model simultaneously does not seem to be feasible. See [8] for an example of a DBM that has failed to learn using the naive training algorithm. Salakhutdinov and Hinton [18] found that for their joint training procedure to work, the DBM must first be initialized by training one layer at a time. After each layer is trained as an RBM, the RBMs can be modified slightly, assembled into a DBM, and the DBM may be trained with PCD [22, 21] and mean field. In order to achieve good classification results, an MLP designed specifically to predict y from v must be trained on top of the DBM model. Simply running mean field inference to predict y given v in the DBM model does not work nearly as well. See Figure 1 for a graphical description of the training procedure used by [18]. The standard approach to training a DBM requires training L + 2 different models using L + 2 different objective functions, and does not yield a single model that excels at answering all queries.
Our proposed approach requires training only one model with only one objective function, and the resulting model outperforms previous approaches at answering many kinds of queries (classification, classification with missing inputs, predicting arbitrary subsets of variables given the complementary subset).

3 Motivation

There are numerous reasons to prefer a single-model, single-training-stage approach to deep Boltzmann machine learning:

1. Optimization. As a greedy optimization procedure, layerwise training may be suboptimal. Small-scale experimental work has demonstrated this to be the case for deep belief networks [1]. In general, for layerwise training to be optimal, the training procedure for each layer must take into account the influence that the deeper layers will provide. The layerwise initialization procedure simply does not attempt to be optimal. The procedures used by Le Roux and Bengio [12] and Arnold and Ollivier [1] make an optimistic assumption that the deeper layers will be able to implement the best possible prior on the current layer's hidden units. This approach is not immediately applicable to Boltzmann machines because it is specified in terms of learning the parameters of P(h^{(i−1)} | h^{(i)}) assuming that the parameters of P(h^{(i)}) will be set optimally later. In a DBM, the symmetrical nature of the interactions between units means that these two distributions share parameters, so it is not possible to set the parameters of the one distribution, leave them fixed for the remainder of learning, and then set the parameters of the other distribution. Moreover, model architectures incorporating design features such as sparse connections, pooling, or factored multilinear interactions make it difficult to predict how best to structure one layer's hidden units in order for the next layer to make good use of them.

2.
Probabilistic modeling. Using multiple models and having some models specialized for exactly one task (like predicting y from v) loses some of the benefit of probabilistic modeling. If we have one model that excels at all tasks, we can use inference in this model to answer arbitrary queries, perform classification with missing inputs, and so on. The standard DBM training procedure gives this up by training a rich probabilistic model and then using it as just a feature extractor for an MLP.

3. Simplicity. Needing to implement multiple models and training stages makes the cost of developing software with DBMs greater, and makes using them more cumbersome. Beyond the software engineering considerations, it can be difficult to monitor training and tell what kind of results during layerwise RBM pretraining will correspond to good DBM classification accuracy later. Our joint training procedure allows the user to monitor the model's ability of interest (usually the ability to classify y given v) from the very start of training.

4 Methods

We now describe the new methods proposed in this paper, and some pre-existing methods that we compare against.

4.1 Multi-prediction Training

Our proposed approach is to directly train the DBM to be good at solving all possible variational inference problems. We call this multi-prediction training because the procedure involves training the model to predict any subset of variables given the complement of that subset of variables. Let O be a vector containing all variables that are observed during training. For a purely unsupervised learning task, O is just v itself. In the supervised setting, O = [v, y]^T. Note that y won't be observed at test time, only at training time. Let D be the training set, i.e. a collection of values of O. Let S be a sequence of subsets of the possible indices of O. Let Q_i be the variational (e.g., mean-field) approximation to the joint of O_{S_i} and h given O_{−S_i}:

Q_i(O_{S_i}, h) = argmin_Q D_{KL}( Q(O_{S_i}, h) ‖ P(O_{S_i}, h | O_{−S_i}) ).
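A minimal sketch of the recurrent net this criterion induces, for a two-layer DBM with no label unit; the weight names, the fixed iteration count, and the omission of biases are simplifying assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mp_inference(v, mask, W1, W2, n_iters=10):
    """Mean-field net induced by multi-prediction training (a sketch).

    `mask` marks observed visible units; masked-out units are the
    prediction targets and start at 0.5. Each iteration applies the
    factorial mean-field fixed-point updates of a two-layer DBM.
    """
    v_hat = np.where(mask, v, 0.5)
    h2 = np.full(W2.shape[1], 0.5)
    for _ in range(n_iters):
        h1 = sigmoid(W1.T @ v_hat + W2 @ h2)  # update h(1) given v, h(2)
        h2 = sigmoid(W2.T @ h1)               # update h(2) given h(1)
        v_hat = np.where(mask, v, sigmoid(W1 @ h1))  # fill in targets
    return v_hat, h1, h2

def mp_loss(v, mask, W1, W2):
    """-log Q_i(O_{S_i}): cross-entropy on the masked-out units."""
    v_hat, _, _ = mp_inference(v, mask, W1, W2)
    t = v[~mask]
    q = np.clip(v_hat[~mask], 1e-7, 1 - 1e-7)
    return -np.sum(t * np.log(q) + (1 - t) * np.log(1 - q))
```

In the full method this loss is backpropagated through the unrolled iterations; the sketch only shows the forward inference and the criterion being minimized.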
In all of the experiments presented in this paper, Q is constrained to be factorial, though one could design model families for which it makes sense to use richer structure in Q. Note that there is no explicit formula for Q; Q must be computed by an iterative optimization process. In order to accomplish this minimization, we run the mean field fixed point equations to convergence. Because each fixed point update uses the output of a previous fixed point update as input, this optimization procedure can be viewed as a recurrent neural network. (To simplify implementation, we don't explicitly test for convergence, but run the recurrent net for a pre-specified number of iterations that is chosen to be high enough that the net usually converges.) We train the MP-DBM by using minibatch stochastic gradient descent on the multi-prediction (MP) objective function

J(D, θ) = − Σ_{O∈D} Σ_i log Q_i(O_{S_i}).

In other words, the criterion for a single example O is a sum of several terms, with term i measuring the model's ability to predict (through a variational approximation) a subset of the variables in the training set, O_{S_i}, given the remainder of the observed variables, O_{−S_i}. During SGD training, we sample minibatches of values of O and S_i. Sampling O just means drawing an example from the training set. Sampling an S_i uniformly simply requires sampling one bit (1 with probability 0.5) for each variable, to determine whether that variable should be an input to the inference procedure or a prediction target. To compute the gradient, we simply backprop the error derivatives of J through the recurrent net defining Q. See Fig. 2 for a graphical description of this training procedure, and Fig. 3 for an example of the inference procedure run on MNIST digits.

Figure 1: The training procedure used by Salakhutdinov and Hinton [18] on MNIST. a) Train an RBM to maximize log P(v) using CD.
b) Train another RBM to maximize log P(h^{(1)}, y) where h^{(1)} is drawn from the first RBM's posterior. c) Stitch the two RBMs into one DBM. Train the DBM to maximize log P(v, y). d) Delete y from the model (don't marginalize it out, just remove the layer from the model). Make an MLP with inputs v and the mean field expectations of h^{(1)} and h^{(2)}. Fix the DBM parameters. Initialize the MLP parameters based on the DBM parameters. Train the MLP parameters to predict y.

Figure 2: Multi-prediction training: This diagram shows the neural nets instantiated to do multi-prediction training on one minibatch of data. The three rows show three different examples. Black circles represent variables the net is allowed to observe. Blue circles represent prediction targets. Green arrows represent computational dependencies. Each column shows a single mean field fixed point update. Each mean field iteration consists of two fixed point updates. Here we show only one iteration to save space, but in a real application MP training should be run with 5-15 iterations.

Figure 3: Mean field inference applied to MNIST digits. Within each pair of rows, the upper row shows pixels and the lower row shows class labels. The first column shows a complete, labeled example. The second column shows information to be masked out, using red pixels to indicate information that is removed. The subsequent columns show steps of mean field. The images show the pixels being filled back in by the mean field inference, and the blue bars show the probability of the correct class under the mean field posterior.

Figure 4: Multi-inference trick: When estimating y given v, a mean field iteration consists of first applying a mean field update to h^{(1)} and y, then applying one to h^{(2)}.
To use the multi-inference trick, start the iteration by computing r as the mean field update v would receive if it were not observed. Then use 0.5(r + v) in place of v and run a regular mean field iteration.

Figure 5: Samples generated by alternately sampling S_i uniformly and sampling O_{−S_i} from Q_i(O_{−S_i}).

This training procedure is similar to one introduced by Brakel et al. [6] for time-series models. The primary difference is that we use log Q as the loss function, while Brakel et al. [6] apply hard-coded loss functions such as mean squared error to the predictions of the missing values.

4.2 The Multi-Inference Trick

Mean field inference can be expensive due to needing to run the fixed point equations several times in order to reach convergence. In order to reduce this computational expense, it is possible to train using fewer mean field iterations than required to reach convergence. In this case, we are no longer necessarily minimizing J as written, but rather doing partial training of a large number of fixed-iteration recurrent nets that solve related problems. We can approximately take the geometric mean over all predicted distributions Q (for different subsets S_i) and renormalize in order to combine the predictions of all of these recurrent nets. This way, imperfections in the training procedure are averaged out, and we are able to solve inference tasks even if the corresponding recurrent net was never sampled during MP training. In order to approximate this average efficiently, we simply take the geometric mean at each step of inference, instead of attempting to take the correct geometric mean of the entire inference process. See Fig. 4 for a graphical depiction of the method. This is the same type of approximation used to take the average over several MLP predictions when using dropout [10]. Here, the averaging rule is slightly different. In dropout, the different MLPs we average over either include or exclude each variable.
To take the geometric mean over a unit h_j that receives input from v_i, we average together the contribution v_i W_{ij} from the model that contains v_i and the contribution 0 from the model that does not. The final contribution from v_i is 0.5 v_i W_{ij}, so the dropout model averaging rule is to run an MLP with the weights divided by 2. For the multi-inference trick, each recurrent net we average over solves a different inference problem. In half of the problems, v_i is observed, and contributes v_i W_{ij} to h_j's total input. In the other half of the problems, v_i is inferred. In contrast to dropout, v_i is never completely absent. If we represent the mean field estimate of v_i with r_i, then in this case that unit contributes r_i W_{ij} to h_j's total input. To run multi-inference, we thus replace references to v with 0.5(v + r), where r is updated at each mean field iteration. The main benefit of this approach is that it gives a good way to incorporate information from many recurrent nets trained in slightly different ways. If the recurrent net corresponding to the desired inference task is somewhat suboptimal due to not having been sampled enough during training, its defects can often be remedied by averaging its predictions with those of other similar recurrent nets. The multi-inference trick can also be understood as including an input denoising step built into the inference. In practice, multi-inference mostly seems to be beneficial if the network was trained without letting mean field run to convergence. When the model was trained with converged mean field, each recurrent net is just solving an optimization problem in a graphical model, and it doesn't matter whether every recurrent net has been individually trained. The multi-inference trick is mostly useful as a cheap alternative when getting the absolute best possible test set accuracy is not as important as fast training and evaluation.
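A sketch of the resulting inference procedure, with the y layer and biases omitted and the names our own; the only change from regular mean field is feeding 0.5(v + r) wherever v appears:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_inference(v, W1, W2, n_iters=10):
    """Multi-inference trick (a sketch; two hidden layers, no biases).

    r is the mean field update v would receive if it were unobserved;
    each iteration runs regular mean field on 0.5 * (v + r),
    approximating a geometric-mean ensemble over the recurrent nets
    induced by different observation masks.
    """
    r = np.full_like(v, 0.5, dtype=float)
    h2 = np.full(W2.shape[1], 0.5)
    for _ in range(n_iters):
        v_in = 0.5 * (v + r)          # replace references to v
        h1 = sigmoid(W1.T @ v_in + W2 @ h2)
        h2 = sigmoid(W2.T @ h1)
        r = sigmoid(W1 @ h1)          # refresh the reconstruction
    return h1, h2, r
```

Compared to the plain mean field net, the extra cost is one additional matrix-vector product per iteration to refresh r.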
4.3 Justification and advantages

In the case where we run the recurrent net for predicting Q to convergence, the multi-prediction training algorithm follows the gradient of the objective function J. This can be viewed as a mean field approximation to the generalized pseudolikelihood. While both pseudolikelihood and likelihood are asymptotically consistent estimators, their behavior in the limited data case is different. Maximum likelihood should be better if the overall goal is to draw realistic samples from the model, but generalized pseudolikelihood can often be better for training a model to answer queries conditioning on sets similar to the Si used during training. Note that our variational approximation is not quite the same as the way variational approximations are usually applied. We use variational inference to ensure that the distributions we shape using backprop are as close as possible to the true conditionals. This is different from the usual approach to variational learning, where Q is used to define a lower bound on the log likelihood and variational inference is used to make the bound as tight as possible. In the case where the recurrent net is not trained to convergence, there is an alternate way to justify MP training. Rather than doing variational learning on a single probabilistic model, the MP procedure trains a family of recurrent nets to solve related prediction problems by running for some fixed number of iterations. Each recurrent net is trained only on a subset of the data (and most recurrent nets are never trained at all, but only work because they share parameters with the others). In this case, the multi-inference trick allows us to justify MP training as approximately training an ensemble of recurrent nets using bagging. Stoyanov et al.
[20] have observed that a training strategy similar to MPT (but lacking the multi-inference trick) is useful because it trains the model to work well with the inference approximations it will be evaluated with at test time. We find these properties to be useful as well. The choice of this type of variational learning combined with the underlying generalized pseudolikelihood objective makes an MP-DBM very well suited for solving approximate inference problems but not very well suited for sampling. Our primary design consideration when developing multi-prediction training was ensuring that the learning rule was state-free. PCD training uses persistent Markov chains to estimate the gradient. These Markov chains are used to approximately sample from the model, and only sample from approximately the right distribution if the model parameters evolve slowly. The MP training rule does not make any reference to earlier training steps, and can be computed with no burn-in. This means that the accuracy of the MP gradient is not dependent on properties of the training algorithm such as the learning rate, which can easily break PCD for many choices of the hyperparameters. Another benefit of MP is that it is easy to obtain an unbiased estimate of the MP objective from a small number of samples of v and i. This is in contrast to the log likelihood, which requires estimating the log partition function. The best known method for doing so is AIS, which is relatively expensive [16]. Cheap estimates of the objective function enable early stopping based on the MP objective (though we generally use early stopping based on classification accuracy) and optimization based on line searches (though we do not explore that possibility in this paper).

4.4 Regularization

In order to obtain good generalization performance, Salakhutdinov and Hinton [18] regularized both the weights and the activations of the network, regularizing the weights using an L2 penalty.
We find that for joint training, it is critically important not to do this (on the MNIST dataset, we were not able to find any MP-DBM hyperparameter configuration involving weight decay that performs as well as layerwise DBMs, but without weight decay MP-DBMs outperform DBMs). When the second layer weights are not trained well enough for them to be useful for modeling the data, the weight decay term will drive them to become very small, and they will never have an opportunity to recover. It is much better to use constraints on the norms of the columns of the weight matrices, as done by Srebro and Shraibman [19]. Salakhutdinov and Hinton [18] regularize the activities of the hidden units with a somewhat complicated sparsity penalty. See http://www.mit.edu/~rsalakhu/DBM.html for details. We use max(|E_{h∼Q(h)}[h] − t| − λ, 0) and backpropagate this through the entire inference graph. t and λ are hyperparameters.

4.5 Related work: centering

Montavon and Müller [15] showed that an alternative, “centered” representation of the DBM results in successful generative training without a greedy layerwise pretraining step. However, centered DBMs have never been shown to have good classification performance. We therefore evaluate the classification performance of centering in this work. We consider two methods of variational PCD training. In one, we use Rao-Blackwellization [5, 11, 17] of the negative phase particles to reduce the variance of the negative phase. In the other variant (“centering+”), we use a special negative phase that Salakhutdinov and Hinton [18] found useful. This negative phase uses a small amount of mean field, which reduces the variance further but introduces some bias, and has better symmetry with the positive phase. See http://www.mit.edu/~rsalakhu/DBM.html for details.
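The sparsity penalty from Section 4.4 can be written compactly. A minimal sketch of the stated expression, max(|E[h] − t| − λ, 0); the surrounding training code that backpropagates it through the inference graph is not shown.

```python
import numpy as np

def sparsity_penalty(h_mean, t=0.1, lam=0.05):
    """Hinge-style sparsity penalty: max(|E_{h~Q}[h] - t| - lam, 0),
    summed over hidden units. h_mean holds the mean-field expectations
    of the hidden units (averaged over a minibatch); t and lam are
    hyperparameters. Units whose mean activation lies within lam of the
    target t incur zero penalty."""
    return np.maximum(np.abs(h_mean - t) - lam, 0.0).sum()
```

The dead zone of width 2λ around the target t means the penalty (and hence its gradient) vanishes for units that are already approximately as sparse as desired.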
Figure 6: Quantitative results on MNIST: (a) Cross-validation (vertical axis: validation set misclassification rate, log scale): during cross-validation, MP training performs well for most hyperparameters, while both centering and centering with the special negative phase do not perform as well and only perform well for a few hyperparameter values. (b) Missing inputs (test set misclassification rate vs. probability of dropping each input unit): when classifying with missing inputs, the MP-DBM outperforms the other DBMs for most amounts of missing inputs. (c) General queries (average test set log Q(vi) for i ∈ S vs. probability of including a unit in S): when using approximate inference to resolve general queries, the standard DBM, centered DBM, and MP-DBM all perform about the same when asked to predict a small number of variables. For larger queries, the MP-DBM performs the best.

4.6 Sampling, and a connection to GSNs

The focus of this paper is solving inference problems, not generating samples, so we do not investigate the sampling properties of MP-DBMs extensively. However, it is interesting to note that an MP-DBM can be viewed as a collection of dependency networks [9] with shared parameters. Dependency networks are a special case of generative stochastic networks or GSNs (Bengio et al. [3], section 3.4). This means that the MP-DBM is associated with a distribution arising out of the Markov chain in which at each step one samples an Si uniformly and then samples O from Qi(O).
Example samples are shown in Figure 5. Furthermore, it means that if MPT is a consistent estimator of the conditional distributions, then MPT is a consistent estimator of the probability distribution defined by the stationary distribution of this Markov chain. Samples drawn by Gibbs sampling in the DBM model do not look as good (probably because the variational approximation is too damaging). This suggests that the perspective of the MP-DBM as a GSN merits further investigation.

5 Experiments

5.1 MNIST experiments

In order to compare MP training and centering to standard DBM performance, we cross-validated each of the new methods by running 25 training experiments for each of three conditions: centered DBMs, centered DBMs with the special negative phase (“Centering+”), and MP training. All three conditions visited exactly the same set of 25 hyperparameter values for the momentum schedule, sparsity regularization hyperparameters, weight and bias initialization hyperparameters, weight norm constraint values, and number of mean field iterations. The centered DBMs also required one additional hyperparameter, the number of Gibbs steps to run for variational PCD. We used different values of the learning rate for the different conditions, because the different conditions require different ranges of learning rate to perform well. We use the same size of model, minibatch and negative chain collection as Salakhutdinov and Hinton [18], with 500 hidden units in the first layer, 1,000 hidden units in the second, 100 examples per minibatch, and 100 negative chains. The energy function for this model is

E(v, h, y) = -v^\top W^{(1)} h^{(1)} - h^{(1)\top} W^{(2)} h^{(2)} - h^{(2)\top} W^{(3)} y - v^\top b^{(0)} - h^{(1)\top} b^{(1)} - h^{(2)\top} b^{(2)} - y^\top b^{(3)}.

See Fig. 6a for the results of cross-validation. On the validation set, MP training consistently performs better and is much less sensitive to hyperparameters than the other methods.
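The energy function above can be transcribed directly. A sketch assuming 1-D arrays for the unit states and 2-D arrays for the weight matrices (for the MNIST model the shapes would be v: 784, h1: 500, h2: 1000, y: 10).

```python
import numpy as np

def dbm_energy(v, h1, h2, y, W1, W2, W3, b0, b1, b2, b3):
    """Energy of the two-hidden-layer DBM with a softmax label layer:
    E = -v'W1h1 - h1'W2h2 - h2'W3y - v'b0 - h1'b1 - h2'b2 - y'b3.
    Direct transcription of the stated energy function; lower energy
    means higher unnormalized probability."""
    return (-(v @ W1 @ h1) - (h1 @ W2 @ h2) - (h2 @ W3 @ y)
            - v @ b0 - h1 @ b1 - h2 @ b2 - y @ b3)
```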
This is likely because the state-free nature of the learning rule makes it perform better with settings of the learning rate and momentum schedule that result in the model distribution changing too fast for a method based on Markov chains to keep up. When we add an MLP classifier (as shown in Fig. 1d), the best “Centering+” DBM obtains a classification error of 1.22% on the test set. The best MP-DBM obtains a classification error of 0.88%. This compares to 0.95% obtained by Salakhutdinov and Hinton [18]. If instead of adding an MLP to the model, we simply train a larger MP-DBM with twice as many hidden units in each layer, and apply the multi-inference trick, we obtain a classification error rate of 0.91%. In other words, we are able to classify nearly as well using a single large DBM and a generic inference procedure, rather than using a DBM followed by an entirely separate MLP model specialized for classification. The original DBM was motivated primarily as a generative model with a high AIS score and as a means of initializing a classifier. Here we explore some more uses of the DBM as a generative model. Fig. 6b shows an evaluation of various DBMs’ ability to classify with missing inputs. Fig. 6c shows an evaluation of their ability to resolve queries about random subsets of variables. In both cases we find that the MP-DBM performs the best for most amounts of missing inputs.

5.2 NORB experiments

NORB consists of 96×96 binocular greyscale images of objects from five different categories, under a variety of pose and lighting conditions. Salakhutdinov and Hinton [18] preprocessed the images by resampling them with bigger pixels near the border of the image, yielding an input vector of size 8,976. We used this preprocessing as well.
Salakhutdinov and Hinton [18] then trained an RBM with 4,000 binary hidden units and Gaussian visible units to preprocess the data into an all-binary representation, and trained a DBM with two hidden layers of 4,000 units each on this representation. Since the goal of this work is to provide a single unified model and training algorithm, we do not train a separate Gaussian RBM. Instead we train a single MP-DBM with Gaussian visible units and three hidden layers of 4,000 units each. The energy function for this model is

E(v, h, y) = -(v - \mu)^\top \beta W^{(1)} h^{(1)} - h^{(1)\top} W^{(2)} h^{(2)} - h^{(2)\top} W^{(3)} h^{(3)} - h^{(3)\top} W^{(4)} y + \frac{1}{2}(v - \mu)^\top \beta (v - \mu) - h^{(1)\top} b^{(1)} - h^{(2)\top} b^{(2)} - h^{(3)\top} b^{(3)} - y^\top b^{(4)},

where µ is a learned vector of visible unit means and β is a learned diagonal precision matrix. By adding an MLP on top of the MP-DBM, following the same architecture as Salakhutdinov and Hinton [18], we were able to obtain a test set error of 10.6%. This is a slight improvement over the standard DBM’s 10.8%. On MNIST we were able to outperform the DBM without using the MLP classifier because we were able to train a larger MP-DBM. On NORB, the model size used by Salakhutdinov and Hinton [18] is already as large as we are able to fit on most of our graphics cards, so we were not able to do the same for this dataset. It is possible to do better on NORB using convolution or synthetic transformations of the training data. We did not evaluate the effect of these techniques on the MP-DBM because our present goal is not to obtain state-of-the-art object recognition performance but only to verify that our joint training procedure works as well as the layerwise training procedure for DBMs. There is no public demo code available for the standard DBM on this dataset, and we were not able to reproduce the standard DBM results (layerwise DBM training requires significant experience and intuition).
We therefore can’t compare the MP-DBM to the original DBM in terms of answering general queries or classification with missing inputs on this dataset.

6 Conclusion

This paper has demonstrated that MP training and the multi-inference trick provide a means of training a single model, with a single stage of training, that matches the performance of standard DBMs but still works as a general probabilistic model, capable of handling missing inputs and answering general queries. We have verified that MP training outperforms the standard training procedure at classification on the MNIST and NORB datasets where the original DBM was first applied. We have shown that MP training works well with binary, Gaussian, and softmax units, as well as architectures with either two or three hidden layers. In future work, we hope to apply the MP-DBM to more practical applications, and explore techniques, such as dropout, that could improve its performance further.

Acknowledgments

We would like to thank the developers of Theano [4, 2] and Pylearn2 [7]. We would also like to thank NSERC, Compute Canada, and Calcul Québec for providing computational resources.

References

[1] Arnold, L. and Ollivier, Y. (2012). Layer-wise learning of deep generative models. Technical report, arXiv:1212.1524.
[2] Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.
[3] Bengio, Y., Thibodeau-Laufer, E., and Yosinski, J. (2013). Deep generative stochastic networks trainable by backprop. Technical Report arXiv:1306.1091, Université de Montréal.
[4] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy). Oral Presentation.
[5] Blackwell, D. (1947). Conditional Expectation and Unbiased Sequential Estimation. Ann. Math. Statist., 18, 105–110.
[6] Brakel, P., Stroobandt, D., and Schrauwen, B. (2013). Training energy-based models for time-series imputation. Journal of Machine Learning Research, 14, 2771–2797.
[7] Goodfellow, I. J., Warde-Farley, D., Lamblin, P., Dumoulin, V., Mirza, M., Pascanu, R., Bergstra, J., Bastien, F., and Bengio, Y. (2013a). Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214.
[8] Goodfellow, I. J., Courville, A., and Bengio, Y. (2013b). Scaling up spike-and-slab models for unsupervised feature learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1902–1914.
[9] Heckerman, D., Chickering, D. M., Meek, C., Rounthwaite, R., and Kadie, C. (2000). Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1, 49–75.
[10] Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. Technical report, arXiv:1207.0580.
[11] Kolmogorov, A. (1953). Unbiased Estimates. American Mathematical Society translations. American Mathematical Society.
[12] Le Roux, N. and Bengio, Y. (2008). Representational power of restricted Boltzmann machines and deep belief networks. Neural Computation, 20(6), 1631–1649.
[13] LeCun, Y., Huang, F.-J., and Bottou, L. (????). Learning methods for generic object recognition with invariance to pose and lighting. In CVPR’2004, pages 97–104.
[14] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
[15] Montavon, G. and Müller, K.-R. (2012). Learning feature hierarchies with centered deep Boltzmann machines. CoRR, abs/1203.4416.
[16] Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2), 125–139.
[17] Rao, C. R. (1973). Linear Statistical Inference and its Applications. J. Wiley and Sons, New York, 2nd edition.
[18] Salakhutdinov, R. and Hinton, G. (2009). Deep Boltzmann machines. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS 2009), volume 8.
[19] Srebro, N. and Shraibman, A. (2005). Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560. Springer-Verlag.
[20] Stoyanov, V., Ropson, A., and Eisner, J. (2011). Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. In AISTATS’2011.
[21] Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML’2008, pages 1064–1071.
[22] Younes, L. (1999). On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates. Stochastics and Stochastic Reports, 65(3), 177–228.
Gaussian Process Conditional Copulas with Applications to Financial Time Series

José Miguel Hernández-Lobato, Engineering Department, University of Cambridge, jmh233@cam.ac.uk
James Robert Lloyd, Engineering Department, University of Cambridge, jrl44@cam.ac.uk
Daniel Hernández-Lobato, Computer Science Department, Universidad Autónoma de Madrid, daniel.hernandez@uam.es

Abstract

The estimation of dependencies between multiple variables is a central problem in the analysis of financial time series. A common approach is to express these dependencies in terms of a copula function. Typically the copula function is assumed to be constant, but this may be inaccurate when there are covariates that could have a large influence on the dependence structure of the data. To account for this, a Bayesian framework for the estimation of conditional copulas is proposed. In this framework the parameters of a copula are non-linearly related to some arbitrary conditioning variables. We evaluate the ability of our method to predict time-varying dependencies on several equities and currencies and observe consistent performance gains compared to static copula models and other time-varying copula methods.

1 Introduction

Understanding dependencies within multivariate data is a central problem in the analysis of financial time series, underpinning common tasks such as portfolio construction and calculation of value-at-risk. Classical methods estimate these dependencies in terms of a covariance matrix (possibly time varying) which is induced from the data [4, 5, 7, 1]. However, a more general approach is to use copula functions to model dependencies [6]. Copulas have become popular since they separate the estimation of marginal distributions from the estimation of the dependence structure, which is completely determined by the copula. The use of standard copula methods to estimate dependencies is likely to be inaccurate when the actual dependencies are strongly influenced by other covariates.
For example, dependencies can vary with time or be affected by observations of other time series. Standard copula methods cannot handle such conditional dependencies. To address this limitation, we propose a probabilistic framework to estimate conditional copulas. Specifically we assume parametric copulas whose parameters are specified by unknown non-linear functions of arbitrary conditioning variables. These latent functions are approximated using Gaussian processes (GP) [17]. GPs have previously been used to model conditional copulas in [12] but that work only applies to copulas specified by a single parameter. We extend this work to accommodate copulas with multiple parameters. This is an important improvement since it allows the use of a richer set of copulas including Student’s t and asymmetric copulas. We demonstrate our method by choosing the conditioning variables to be time and evaluating its ability to estimate time-varying dependencies on several currency and equity time series. Our method achieves consistently superior predictive performance compared to static copula models and other dynamic copula methods. These include models that allow their parameters to change with time, e.g. regime switching models [11] and methods proposing GARCH-style updates to copula parameters [20, 11].

Figure 1: Left, Gaussian copula density for τ = 0.3. Middle, Student’s t copula density for τ = 0.3 and ν = 1. Right, symmetrized Joe Clayton copula density for τU = 0.1 and τL = 0.6. The latter copula model is asymmetric along the main diagonal of the unit square.

2 Copulas and Conditional Copulas

Copulas provide a powerful framework for the construction of multivariate probabilistic models by separating the modeling of univariate marginal distributions from the modeling of dependencies between variables [6].
We focus on bivariate copulas since higher dimensional copulas are typically constructed using bivariate copulas as building blocks [e.g. 2, 12]. Sklar’s theorem [18] states that given two one-dimensional random variables, X and Y, with continuous marginal cumulative distribution functions (cdfs) FX(X) and FY(Y), we can express their joint cdf FX,Y as FX,Y(x, y) = CX,Y[FX(x), FY(y)], where CX,Y is the unique copula for X and Y. Since FX(X) and FY(Y) are marginally uniformly distributed on [0, 1], CX,Y is the cdf of a probability distribution on the unit square [0, 1] × [0, 1] with uniform marginals. Figure 1 shows plots of the copula densities for three parametric copula models: Gaussian, Student’s t and the symmetrized Joe Clayton (SJC) copulas. Copula models can be learnt in a two step process [10]. First, the marginals FX and FY are learnt by fitting univariate models. Second, the data are mapped to the unit square by U = FX(X), V = FY(Y) (i.e. a probability integral transform) and CX,Y is then fit to the transformed data.

2.1 Conditional Copulas

When one has access to a covariate vector Z, one may wish to estimate a conditional version of a copula model, i.e.

F_{X,Y|Z}(x, y \mid z) = C_{X,Y|Z}\left[ F_{X|Z}(x \mid z), F_{Y|Z}(y \mid z) \mid z \right].   (1)

Here, the same two-step estimation process can be used to estimate FX,Y|Z(x, y|z). The estimation of the marginals FX|Z and FY|Z can be implemented using standard methods for univariate conditional distribution estimation. However, the estimation of CX,Y|Z is constrained to have uniform marginal distributions; this is a problem that has only been considered recently [12]. We propose a general Bayesian non-parametric framework for the estimation of conditional copulas based on GPs and an alternating expectation propagation (EP) algorithm for efficient approximate inference.

3 Gaussian Process Conditional Copulas

Let DZ = {zi}_{i=1}^n and DU,V = {(ui, vi)}_{i=1}^n, where (ui, vi) is a sample drawn from CX,Y|zi.
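The first step of the two-step copula fit from Section 2, mapping data to the unit square via the probability integral transform, can be sketched in code. A minimal illustration, assuming an empirical cdf (rank transform) in place of a fitted parametric marginal model; the function name is ours, not from the paper.

```python
import numpy as np

def to_copula_scale(x):
    """Empirical probability integral transform: map a 1-D sample to (0, 1)
    via its empirical cdf, using rank/(n+1) so values never touch 0 or 1.
    Applying this to X and Y separately yields (U, V), to which the copula
    is then fit."""
    ranks = np.argsort(np.argsort(x)) + 1.0   # 1-based ranks of each observation
    return ranks / (len(x) + 1.0)
```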
We assume that CX,Y|Z is a parametric copula model Cpar[u, v | θ1(z), ..., θk(z)] specified by k parameters θ1, ..., θk that may be functions of the conditioning variable z. Let θi(z) = σi[fi(z)], where fi is an arbitrary real function and σi is a function that maps the real line to a set Θi of valid configurations for θi. For example, Cpar could be a Student’s t copula. In this case, k = 2 and θ1 and θ2 are the correlation and the degrees of freedom in the Student’s t copula, Θ1 = (−1, 1) and Θ2 = (0, ∞). One could then choose σ1(·) = 2Φ(·) − 1, where Φ is the standard Gaussian cdf, and σ2(·) = exp(·) to satisfy the constraint sets Θ1 and Θ2 respectively. Once we have specified the parametric form of Cpar and the mapping functions σ1, ..., σk, we need to learn the latent functions f1, ..., fk. We perform a Bayesian non-parametric analysis by placing GP priors on these functions and computing their posterior distribution given the observed data. Let fi = (fi(z1), ..., fi(zn))^T. The prior distribution for fi given DZ is p(fi | DZ) = N(fi | mi, Ki), where mi = (mi(z1), ..., mi(zn))^T for some mean function mi(z) and Ki is an n × n covariance matrix generated by the squared exponential covariance function, i.e.

[K_i]_{jk} = \mathrm{Cov}[f_i(z_j), f_i(z_k)] = \beta_i \exp\left(-(z_j - z_k)^\top \mathrm{diag}(\lambda_i)\,(z_j - z_k)\right) + \gamma_i,   (2)

where λi is a vector of inverse length-scales and βi, γi are amplitude and noise parameters. The posterior distribution for f1, ..., fk given DU,V and DZ is

p(f_1, \ldots, f_k \mid \mathcal{D}_{U,V}, \mathcal{D}_Z) = \frac{\left[\prod_{i=1}^{n} c_{\mathrm{par}}\big(u_i, v_i \mid \sigma_1[f_1(z_i)], \ldots, \sigma_k[f_k(z_i)]\big)\right]\left[\prod_{i=1}^{k} \mathcal{N}(f_i \mid m_i, K_i)\right]}{p(\mathcal{D}_{U,V} \mid \mathcal{D}_Z)},   (3)

where cpar is the density of the parametric copula model and p(DU,V | DZ) is a normalization constant often called the model evidence. Given a particular value of Z denoted by z⋆, we can make predictions about the conditional distribution of U and V using the standard GP prediction formula

p(u^\star, v^\star \mid z^\star) = \int c_{\mathrm{par}}\big(u^\star, v^\star \mid \sigma_1[f_1^\star], \ldots, \sigma_k[f_k^\star]\big)\, p(f^\star \mid f_1, \ldots, f_k, z^\star, \mathcal{D}_Z)\, p(f_1, \ldots, f_k \mid \mathcal{D}_{U,V}, \mathcal{D}_Z)\, df_1 \cdots df_k\, df^\star,   (4)

where f⋆ = (f⋆_1, ..., f⋆_k)^T, p(f⋆ | f1, ..., fk, z⋆, DZ) = ∏_{i=1}^k p(f⋆_i | fi, z⋆, DZ), f⋆_i = fi(z⋆), p(f⋆_i | fi, z⋆, DZ) = N(f⋆_i | mi(z⋆) + k_i^T K_i^{-1}(fi − mi), k_i − k_i^T K_i^{-1} k_i), k_i = Cov[fi(z⋆), fi(z⋆)] and k_i = (Cov[fi(z⋆), fi(z1)], ..., Cov[fi(z⋆), fi(zn)])^T. Unfortunately, (3) and (4) cannot be computed analytically, so we approximate them using expectation propagation (EP) [13].

3.1 An Alternating EP Algorithm for Approximate Bayesian Inference

The joint distribution for f1, ..., fk and DU,V given DZ can be written as a product of n + k factors:

p(f_1, \ldots, f_k, \mathcal{D}_{U,V} \mid \mathcal{D}_Z) = \left[\prod_{i=1}^{n} g_i(f_{1i}, \ldots, f_{ki})\right]\left[\prod_{i=1}^{k} h_i(f_i)\right],   (5)

where fji = fj(zi), hi(fi) = N(fi | mi, Ki) and gi(f1i, ..., fki) = cpar[ui, vi | σ1[f1i], ..., σk[fki]]. EP approximates each factor gi with an approximate Gaussian factor ˜gi that may not integrate to one, i.e. ˜gi(f1i, ..., fki) = si ∏_{j=1}^k exp(−(fji − m̃ji)² / [2ṽji]), where si > 0, m̃ji and ṽji are parameters to be calculated by EP. The other factors hi already have a Gaussian form so they do not need to be approximated. Since all the ˜gi and hi are Gaussian, their product is, up to a normalization constant, a multivariate Gaussian distribution q(f1, ..., fk) which approximates the exact posterior (3) and factorizes across f1, ..., fk. The predictive distribution (4) is approximated by first integrating p(f⋆ | f1, ..., fk, z⋆, DZ) with respect to q(f1, ..., fk). This results in a factorized Gaussian distribution q⋆(f⋆) which approximates p(f⋆ | DU,V, DZ). Finally, (4) is approximated by Monte Carlo: we sample from q⋆ and average cpar(u⋆, v⋆ | σ1[f⋆_1], ..., σk[f⋆_k]) over the samples. EP iteratively updates each ˜gi until convergence by first computing q\i ∝ q/˜gi and then minimizing the Kullback-Leibler divergence [3] between gi q\i and ˜gi q\i.
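The squared exponential covariance of Eq. (2) can be computed directly. A minimal sketch, assuming `Z` stores the conditioning inputs row-wise; the function name is ours, not from the paper.

```python
import numpy as np

def se_kernel(Z, beta, lam, gamma):
    """Covariance matrix from Eq. (2):
    [K]_jk = beta * exp(-(z_j - z_k)^T diag(lam) (z_j - z_k)) + gamma,
    with ARD inverse length-scales lam (one per input dimension).
    Z has shape (n, d); returns an (n, n) matrix."""
    diff = Z[:, None, :] - Z[None, :, :]              # (n, n, d) pairwise diffs
    sq = np.einsum('ijd,d,ijd->ij', diff, lam, diff)  # weighted squared distance
    return beta * np.exp(-sq) + gamma
```

Note that larger entries of `lam` make the kernel vary faster along the corresponding input dimension, since they are inverse length-scales.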
This involves updating ˜gi so that the first and second marginal moments of gi q\i and ˜gi q\i match. However, it is not possible to compute the moments of gi q\i analytically due to the complicated form of gi. A solution is to use numerical methods to compute these k-dimensional integrals. However, this typically has a computational cost that is exponential in k, which is prohibitive for k > 1. Instead we perform an additional approximation when computing the marginal moments of fji with respect to gi q\i. Without loss of generality, assume that we want to compute the expectation of f1i with respect to gi q\i. We make the following approximation:

\int f_{1i}\, g_i(f_{1i}, \ldots, f_{ki})\, q^{\backslash i}(f_{1i}, \ldots, f_{ki})\, df_{1i} \cdots df_{ki} \approx C \times \int f_{1i}\, g_i(f_{1i}, \bar{f}_{2i}, \ldots, \bar{f}_{ki})\, q^{\backslash i}(f_{1i}, \bar{f}_{2i}, \ldots, \bar{f}_{ki})\, df_{1i},   (6)

where \bar{f}_{1i}, ..., \bar{f}_{ki} are the means of f1i, ..., fki with respect to q\i, and C is a constant that approximates the width of the integrand around its maximum in all dimensions except f1i. In practice all moments are normalized by the 0-th moment, so C can be ignored. The right hand side of (6) is a one-dimensional integral that can be easily computed using numerical techniques. The approximation above is similar to approximating an integral by the product of the maximum value of the integrand and an estimate of its width. However, instead of maximizing gi(f1i, ..., fki) q\i(f1i, ..., fki) with respect to f2i, ..., fki, we are maximizing q\i. This is a much easier task because q\i is Gaussian and its maximizer is its own mean vector. In practice, gi(f1i, ..., fki) is very flat when compared to q\i and the maximizer of q\i approximates well the maximizer of gi(f1i, ..., fki) q\i(f1i, ..., fki). Since q factorizes across f1, . . .
, fk (as well as q\i), our implementation of EP decouples into k EP sub-routines among which we alternate; the j-th sub-routine approximates the posterior distribution of fj using as input the means of q\i generated by the other EP sub-routines. Each sub-routine finds a Gaussian approximation to a set of n one-dimensional factors, one factor per data point. In the j-th EP sub-routine, the i-th factor is given by gi(f1i, ..., fki), where each element of {f1i, ..., fki} \ {fji} is kept fixed to the current mean of q\i, as estimated by the other EP sub-routines. We iteratively alternate between sub-routines, running each one until convergence before re-running the next one. Convergence is achieved very quickly; we only run each EP sub-routine four times. The EP sub-routines are implemented using the parallel EP update scheme described in [21]. To speed up GP related computations, we use the generalized FITC approximation [19, 14]: each n × n covariance matrix Ki is approximated by K'_i = Q_i + diag(K_i − Q_i), where Q_i = K^i_{nn0} [K^i_{n0 n0}]^{-1} [K^i_{nn0}]^T, K^i_{n0 n0} is the n0 × n0 covariance matrix generated by evaluating (2) at n0 ≪ n pseudo-inputs, and K^i_{nn0} is the n × n0 matrix with the covariances between training points and pseudo-inputs. The cost of EP is O(k n n0²). Each time we call the j-th EP sub-routine, we optimize the corresponding kernel hyper-parameters λj, βj and γj and the pseudo-inputs by maximizing the EP approximation of the model evidence [17].

4 Related Work

The model proposed here is an extension of the conditional copula model of [12]. In the case of bivariate data and a copula based on one parameter the models are identical. We have extended the approximate inference for this model to accommodate copulas with multiple parameters; this was previously computationally infeasible because it required the numerical calculation of multidimensional integrals within an inner loop of EP inference.
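For concreteness, the FITC covariance approximation used in Section 3.1, K' = Q + diag(K − Q) with Q = K_{nn0} K_{n0n0}^{-1} K_{nn0}^T, can be sketched as follows (illustrative only; the function and argument names are ours).

```python
import numpy as np

def fitc_covariance(K_nm, K_mm, K_diag):
    """Generalized FITC approximation: K' = Q + diag(K - Q), where
    Q = K_nm K_mm^{-1} K_nm^T. K_nm holds covariances between the n
    training points and m << n pseudo-inputs, K_mm the covariances
    among pseudo-inputs, and K_diag the exact diagonal of K. The
    result is low-rank plus diagonal, which is what makes downstream
    linear algebra cheap."""
    Q = K_nm @ np.linalg.solve(K_mm, K_nm.T)
    return Q + np.diag(K_diag - np.diag(Q))
```

When the pseudo-inputs coincide with the training inputs, Q equals K and the approximation is exact, which is a convenient sanity check.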
We have also demonstrated that one can use this model to produce excellent predictive results on financial time series by conditioning the copula on time.

4.1 Dynamic Copula Models

In [11] a dynamic copula model is proposed based on a two-state hidden Markov model (HMM) (St ∈ {0, 1}) that assumes that the data generating process changes between two regimes of low/high correlation. At any time t the copula density is Student’s t with different parameters for the two values of the hidden state St. Maximum likelihood estimation of the copula parameters and transition probabilities is performed using an EM algorithm [e.g. 3]. A time-varying correlation (TVC) model based on the Student’s t copula is described in [20, 11]. The correlation parameter¹ of a Student’s t copula is assumed to satisfy ρ_t = (1 − α − β)ρ + α ε_{t−1} + β ρ_{t−1}, where ε_{t−1} is the empirical correlation of the previous 10 observations and ρ, α and β satisfy −1 ≤ ρ ≤ 1, 0 ≤ α, β ≤ 1 and α + β ≤ 1. The number of degrees of freedom ν
Also, the models above either assume Markov independence or GARCH-like updates to copula parameters. These assumptions have proven empirically effective for the estimation of univariate variances, but the consistent performance gains of our proposed method suggest these assumptions are less applicable to the estimation of dependencies.

4.2 Other Dynamic Covariance Models

A direct extension of the GARCH equations to multiple time series, VEC, was proposed by [5]. Let $x(t)$ be a multivariate time series assumed to satisfy $x(t) \sim \mathcal{N}(0, \Sigma(t))$. VEC(p, q) models the dynamics of $\Sigma(t)$ by an equation of the form

$\mathrm{vech}(\Sigma(t)) = c + \sum_{k=1}^{p} A_k\,\mathrm{vech}\left(x(t-k)x(t-k)^T\right) + \sum_{k=1}^{q} B_k\,\mathrm{vech}(\Sigma(t-k))$,   (9)

where vech is the operation that stacks the lower triangular part of a matrix into a column vector. The VEC model has a very large number of parameters and hence a more commonly used model is the BEKK(p, q) model [7], which assumes the following dynamics:

$\Sigma(t) = C^TC + \sum_{k=1}^{p} A_k^T x(t-k)x(t-k)^T A_k + \sum_{k=1}^{q} B_k^T \Sigma(t-k) B_k$.   (10)

This model also has many parameters, and many restricted versions of these models have been proposed to avoid over-fitting (see e.g. section 2 of [1]). An alternative solution to over-fitting due to over-parameterization is the Bayesian approach of [23], where Bayesian inference is performed in a dynamic BEKK(1, 1) model. Other Bayesian approaches include the non-parametric generalized Wishart process [22, 8]. In these works $\Sigma(t)$ is modeled by a generalized Wishart process, i.e.

$\Sigma(t) = \sum_{i=1}^{\nu} L u_i(t) u_i(t)^T L^T$,   (11)

where the $u_{id}(\cdot)$ are distributed as independent GPs.

5 Experiments

We evaluate the proposed Gaussian process conditional copula models (GPCC) on a one-step-ahead prediction task with synthetic data and financial time series. We use time as the conditioning variable and consider three parametric copula families: Gaussian (GPCC-G), Student's t (GPCC-T) and symmetrized Joe Clayton (GPCC-SJC).
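For concreteness, one BEKK(1,1) step from (10) reduces to a single matrix update; the parameter matrices below are illustrative, not fitted values:

```python
import numpy as np

def bekk_update(Sigma_prev, x_prev, C, A, B):
    """One step of the BEKK(1,1) recursion (10):
    Sigma(t) = C^T C + A^T x x^T A + B^T Sigma(t-1) B.
    Every term is a quadratic form, so Sigma(t) is symmetric positive
    semi-definite by construction (positive definite if C has full rank)."""
    xx = np.outer(x_prev, x_prev)
    return C.T @ C + A.T @ xx @ A + B.T @ Sigma_prev @ B
```

This built-in positive definiteness is the main structural advantage of BEKK over the unconstrained VEC parameterization (9).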
The parameters of these copulas are presented in Table 1 along with the transformations used to model them. Figure 1 shows plots of the densities of these three parametric copula models. The code and data are publicly available at http://jmhl.org.

¹The parameterization used in this paper is related by $\rho = \sin(0.5\tau\pi)$.

Copula       Parameters              Transformation        Synthetic parameter function
Gaussian     correlation, τ          0.99(2Φ[f(t)] − 1)    τ(t) = 0.3 + 0.2 cos(tπ/125)
Student's t  correlation, τ          0.99(2Φ[f(t)] − 1)    τ(t) = 0.3 + 0.2 cos(tπ/125)
             degrees of freedom, ν   1 + 10⁶Φ[g(t)]        ν(t) = 1 + 2(1 + cos(tπ/250))
SJC          upper dependence, τᵁ    0.01 + 0.98Φ[g(t)]    τᵁ(t) = 0.1 + 0.3(1 + cos(tπ/125))
             lower dependence, τᴸ    0.01 + 0.98Φ[g(t)]    τᴸ(t) = 0.1 + 0.3(1 + cos(tπ/125 + π/2))

Table 1: Copula parameters, modeling formulae and parameter functions used to generate synthetic data. Φ is the standard Gaussian cumulative distribution function; f and g are GPs.

The three variants of GPCC were compared against three dynamic copula methods and three constant copula models. The three dynamic methods are the HMM-based model, TVC and DSJCC introduced in Section 4. The three constant copula models use Gaussian, Student's t and SJC copulas with parameter values that do not change with time (CONST-G, CONST-T and CONST-SJC). We perform a one-step-ahead rolling-window prediction task on bivariate time series $\{(u_t, v_t)\}$. Each model is trained on the first $n_W$ data points and the predictive log-likelihood of the $(n_W + 1)$-th data point is recorded, where $n_W = 1000$. This is then repeated, shifting the training and test windows forward by one data point. The methods are then compared by average predictive log-likelihood, an appropriate performance measure for copula estimation since copulas are probability distributions.

5.1 Synthetic Data

We generated three synthetic datasets of length 5001 from copula models (Gaussian, Student's t, SJC) whose parameters vary as periodic functions of time, as specified in Table 1.
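The Gaussian-copula row of Table 1 can be simulated directly: draw a bivariate normal with time-varying correlation τ(t) = 0.3 + 0.2 cos(tπ/125) and push each margin through Φ. This sketch treats τ(t) as the copula correlation itself, ignoring the footnote's ρ = sin(0.5τπ) re-parameterization for simplicity:

```python
import numpy as np
from scipy.stats import norm

def sample_tv_gaussian_copula(T=5001, seed=0):
    """Sample (u_t, v_t) from a Gaussian copula whose correlation follows
    tau(t) = 0.3 + 0.2*cos(t*pi/125), the synthetic function in Table 1.
    A correlated bivariate normal draw is mapped through the normal cdf Phi
    so that each margin is Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    tau = 0.3 + 0.2 * np.cos(t * np.pi / 125)
    z1 = rng.standard_normal(T)
    z2 = tau * z1 + np.sqrt(1.0 - tau ** 2) * rng.standard_normal(T)
    return norm.cdf(z1), norm.cdf(z2), tau
```

Sampling the Student's t and SJC rows follows the same pattern with their respective parameter functions.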
Table 2 reports the average predictive log-likelihood for each method on each synthetic time series. The results of the best performing method on each synthetic time series are shown in bold. The results of any other method are underlined when the differences with respect to the best performing method are not statistically significant according to a paired t test at α = 0.05. GPCC-T and GPCC-SJC obtain the best results in the Student’s t and SJC time series respectively. However, HMM is the best performing method for the Gaussian time series. This technique successfully captures the two regimes of low/high correlation corresponding to the peaks and troughs of the sinusoid that maps time t to correlation τ. The proposed methods GPCC-[G,T,SJC] are more flexible and hence less efficient than HMM in this particular problem. However, HMM performs significantly worse in the Student’s t and SJC time series since the different periods for the different copula parameter functions cannot be captured by a two state model. Figure 2 shows how GPCC-T successfully tracks τ(t) and ν(t) in the Student’s t time series. The plots display the mean (red) and confidence bands (orange, 0.1 and 0.9 quantiles) for the predictive distribution of τ(t) and ν(t) as well as the ground truth values (blue). Finally, Table 2 also shows that the static copula methods CONST-[G,T,SJC] are usually outperformed by all dynamic techniques GPCC-[G,T,SJC], DSJCC, TVC and HMM. 5.2 Foreign Exchange Time Series We evaluated each method on the daily logarithmic returns of nine currencies shown in Table 3 (all priced with respect to the U.S. dollar).The date range of the data is 02-01-1990 to 15-01-2013; a total of 6011 observations. We evaluated the methods on eight bivariate time series, pairing each currency pair with the Swiss franc (CHF). CHF is known to be a safe haven currency, meaning that investors flock to it during times of uncertainty [16]. 
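Copula methods operate on marginally uniform pseudo-samples. The rank-based empirical-cdf transform below is a minimal stand-in for the AR(1)-GARCH(1,1) marginal models the paper uses to produce $(u_t, v_t)$ from raw returns:

```python
import numpy as np

def to_pseudo_copula(x):
    """Map a vector of raw returns to approximately Uniform(0,1) pseudo-copula
    samples via the rank-based empirical cdf: u_t = rank(x_t) / (n + 1).
    A simplified stand-in for fitted parametric marginal cdfs."""
    ranks = np.argsort(np.argsort(x))          # 0-based rank of each point
    return (ranks + 1) / (len(x) + 1)
```

The transform is strictly monotone, so it preserves the dependence structure of the data while discarding the marginal distributions, which is exactly the separation Sklar's theorem exploits.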
Consequently we expect correlations between CHF and other currencies to have large variability across time in response to changes in financial conditions. We first process our data using an asymmetric AR(1)-GARCH(1,1) process with non-parametric innovations [9] to estimate the univariate marginal cdfs at all time points. We train this GARCH model on $n_W = 2016$ data points and then predict the cdf of the next data point; subsequent cdfs are predicted by shifting the training window by one data point in a rolling-window methodology. The cdf estimates are used to transform the raw logarithmic returns $(x_t, y_t)$ into a pseudo-sample of the underlying copula $(u_t, v_t)$ as described in Section 2. We note that any method for predicting univariate cdfs could have been used to produce pseudo-samples from the copula. We then perform the rolling-window predictive likelihood experiment on the transformed data. The results are shown in Table 4; overall the best technique is GPCC-T, followed by GPCC-G. The dynamic copula methods GPCC-[G,T,SJC], HMM, and TVC outperform the static methods CONST-[G,T,SJC] in all the analyzed series. The dynamic method DSJCC occasionally performed poorly, doing worse than the static methods in 3 experiments.

Figure 2: Predictions made by GPCC-T for ν(t) and τ(t) on the synthetic time series sampled from a Student's t copula. [Two panels over t = 0 to 1000, each showing the predictive mean of GPCC-T against the ground truth: ν(t) on the left, τ(t) on the right.]

Method      Gaussian  Student  SJC
GPCC-G      0.3347    0.3879   0.2513
GPCC-T      0.3397    0.4656   0.2610
GPCC-SJC    0.3355    0.4132   0.2771
HMM         0.3555    0.4422   0.2547
TVC         0.3277    0.4273   0.2534
DSJCC       0.3329    0.4096   0.2612
CONST-G     0.3129    0.3201   0.2339
CONST-T     0.3178    0.4218   0.2499
CONST-SJC   0.3002    0.3812   0.2502

Table 2: Avg. test log-likelihood of each method on each time series.
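The one-step-ahead rolling-window evaluation used throughout the experiments can be sketched generically; `fit` and `loglik` here stand for whatever model-fitting and scoring routines are plugged in (a hypothetical API, not the paper's code):

```python
import numpy as np

def rolling_predictive_loglik(data, fit, loglik, n_window=1000):
    """One-step-ahead rolling-window evaluation: fit on points
    [t, t + n_window), score the (t + n_window)-th point, shift the window
    forward by one, and report the average predictive log-likelihood."""
    scores = []
    for t in range(len(data) - n_window):
        model = fit(data[t:t + n_window])
        scores.append(loglik(model, data[t + n_window]))
    return np.mean(scores)
```

Because every method is scored on the same sequence of held-out points, the averages in Tables 2, 4 and 5 are directly comparable across methods.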
Code  Currency Name
CHF   Swiss Franc
AUD   Australian Dollar
CAD   Canadian Dollar
JPY   Japanese Yen
NOK   Norwegian Krone
SEK   Swedish Krona
EUR   Euro
NZD   New Zealand Dollar
GBP   British Pound

Table 3: Currencies.

Method      AUD     CAD     JPY     NOK     SEK     EUR     GBP     NZD
GPCC-G      0.1260  0.0562  0.1221  0.4106  0.4132  0.8842  0.2487  0.1045
GPCC-T      0.1319  0.0589  0.1201  0.4161  0.4192  0.8995  0.2514  0.1079
GPCC-SJC    0.1168  0.0469  0.1064  0.3941  0.3905  0.8287  0.2404  0.0921
HMM         0.1164  0.0478  0.1009  0.4069  0.3955  0.8700  0.2374  0.0926
TVC         0.1181  0.0524  0.1038  0.3930  0.3878  0.7855  0.2301  0.0974
DSJCC       0.0798  0.0259  0.0891  0.3994  0.3937  0.8335  0.2320  0.0560
CONST-G     0.0925  0.0398  0.0771  0.3413  0.3426  0.6803  0.2085  0.0745
CONST-T     0.1078  0.0463  0.0898  0.3765  0.3760  0.7732  0.2231  0.0875
CONST-SJC   0.1000  0.0425  0.0852  0.3536  0.3544  0.7113  0.2165  0.0796

Table 4: Avg. test log-likelihood of each method on the currency data.

Figure 3: Left and middle, predictions made by GPCC-T for ν(t) and τ(t) on the time series EUR-CHF when trained on data from 10-10-2006 to 09-08-2010. There is a significant reduction in ν(t) at the onset of the 2008-2012 global recession. Right, predictions made by GPCC-SJC for τᵁ(t) and τᴸ(t) when trained on the same time-series data. The predictions for τᴸ(t) are much more erratic than those for τᵁ(t). [Three panels spanning Oct 06 to Aug 10, showing the predictive means of GPCC-T for ν(t) and τ(t) and of GPCC-SJC for τᵁ(t) and τᴸ(t).]

The proposed method GPCC-T can capture changes across time in the parameters of the Student's t copula. The left and middle plots in Figure 3 show predictions for ν(t) and τ(t) generated by GPCC-T.
In the left plot, we observe a reduction in ν(t) at the onset of the 2008-2012 global recession, indicating that the return series became more prone to outliers. The plot for τ(t) (middle) also shows large changes across time. In particular, we observe large drops in the dependence level between EUR-USD and CHF-USD during the fall of 2008 (at the onset of the global recession) and the summer of 2010 (corresponding to the worsening European sovereign debt crisis). For comparison, we include predictions for τᴸ(t) and τᵁ(t) made by GPCC-SJC in the right plot of Figure 3. In this case, the prediction for τᵁ(t) is similar to the one made by GPCC-T for τ(t), but the prediction for τᴸ(t) is much noisier and more erratic. This suggests that GPCC-SJC is less robust than GPCC-T. All the copula densities in Figure 1 take large values in the proximity of the points (0,0) and (1,1), i.e. positive correlation. However, the Student's t copula is the only one of these three copulas which can take high values in the proximity of the points (0,1) and (1,0), i.e. negative correlation. The plot on the left of Figure 3 shows how ν(t) takes very low values at the end of the time period, increasing the robustness of GPCC-T to negatively correlated outliers.

5.3 Equity Time Series

As a further comparison, we evaluated each method on the logarithmic returns of 8 equity pairs, from the same date range and processed using the same AR(1)-GARCH(1,1) model discussed previously. The equities were chosen to include pairs with both high correlation (e.g. RBS and BARC) and low correlation (e.g. AXP and BA). The results are shown in Table 5; again the best technique is GPCC-T, followed by GPCC-G.

Figure 4: Prediction for ν(t) on RBS-BARC. [One panel spanning Apr 09 to Apr 13, showing the predictive mean of GPCC-T.]
Method      HD-HON  AXP-BA  CNW-CSX  ED-EIX  HPQ-IBM  BARC-HSBC  RBS-BARC  RBS-HSBC
GPCC-G      0.1247  0.1133  0.1450   0.2072  0.1536   0.2424     0.3401    0.1860
GPCC-T      0.1289  0.1187  0.1499   0.2059  0.1591   0.2486     0.3501    0.1882
GPCC-SJC    0.1210  0.1095  0.1399   0.1935  0.1462   0.2342     0.3234    0.1753
HMM         0.1260  0.1119  0.1458   0.2040  0.1511   0.2486     0.3414    0.1818
TVC         0.1251  0.1119  0.1459   0.2011  0.1511   0.2449     0.3336    0.1823
DSJCC       0.0935  0.0750  0.1196   0.1721  0.1163   0.2188     0.3051    0.1582
CONST-G     0.1162  0.1027  0.1288   0.1962  0.1325   0.2307     0.2979    0.1663
CONST-T     0.1239  0.1091  0.1408   0.2007  0.1481   0.2426     0.3301    0.1775
CONST-SJC   0.1175  0.1046  0.1307   0.1891  0.1373   0.2268     0.2992    0.1639

Table 5: Average test log-likelihood for each method on each pair of stocks.

Figure 4 shows predictions for ν(t) generated by GPCC-T. We observe low values of ν during 2010, suggesting that a Gaussian copula would be a bad fit to the data. Indeed, GPCC-G performs significantly worse than GPCC-T on this equity pair.

6 Conclusions and Future Work

We have proposed an inference scheme to fit a conditional copula model to multivariate data where the copula is specified by multiple parameters. The copula parameters are modeled as unknown non-linear functions of arbitrary conditioning variables. We evaluated this framework by estimating time-varying copula parameters for bivariate financial time series. Our method consistently outperforms static copula models and other dynamic copula models. In this initial investigation we have focused on bivariate copulas. Higher dimensional copulas are typically constructed using bivariate copulas as building blocks [2, 12]. Our framework could be applied to these constructions, and our empirical predictive performance gains will likely transfer to this setting. Evaluating the effectiveness of this approach compared to other models of multivariate covariance would be a profitable area of empirical research.
One could also extend the analysis presented here by including additional conditioning variables as well as time. For example, including a prediction of univariate volatility as a conditioning variable would allow copula parameters to change in response to changing volatility. This would pose inference challenges as the dimension of the GP increases, but could create richer models.

Acknowledgements

We thank David López-Paz and Andrew Gordon Wilson for interesting discussions. José Miguel Hernández-Lobato acknowledges support from Infosys Labs, Infosys Limited. Daniel Hernández-Lobato acknowledges support from the Spanish Dirección General de Investigación, project ALLS (TIN2010-21575-C02-02).

References
[1] L. Bauwens, S. Laurent, and J. V. K. Rombouts. Multivariate GARCH models: a survey. Journal of Applied Econometrics, 21(1):79-109, 2006.
[2] T. Bedford and R. M. Cooke. Probability density decomposition for conditionally dependent random variables modeled by vines. Annals of Mathematics and Artificial Intelligence, 32(1-4):245-268, 2001.
[3] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2007.
[4] T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307-327, 1986.
[5] T. Bollerslev, R. F. Engle, and J. M. Wooldridge. A capital asset pricing model with time-varying covariances. The Journal of Political Economy, pages 116-131, 1988.
[6] G. Elidan. Copulas and machine learning. In Invited survey to appear in the proceedings of the Copulae in Mathematical and Quantitative Finance workshop, 2012.
[7] R. F. Engle and K. F. Kroner. Multivariate simultaneous generalized ARCH. Econometric Theory, 11(1):122-150, 1995.
[8] E. B. Fox and D. B. Dunson. Bayesian nonparametric covariance regression. arXiv:1101.2017, 2011.
[9] J. M. Hernández-Lobato, D. Hernández-Lobato, and A. Suárez. GARCH processes with non-parametric innovations for market risk estimation.
In Artificial Neural Networks - ICANN 2007, volume 4669 of Lecture Notes in Computer Science, pages 718-727. Springer Berlin Heidelberg, 2007.
[10] H. Joe. Asymptotic efficiency of the two-stage estimation method for copula-based models. Journal of Multivariate Analysis, 94(2):401-419, 2005.
[11] E. Jondeau and M. Rockinger. The Copula-GARCH model of conditional dependencies: An international stock market application. Journal of International Money and Finance, 25(5):827-853, 2006.
[12] D. López-Paz, J. M. Hernández-Lobato, and Z. Ghahramani. Gaussian process vine copulas for multivariate dependence. In S. Dasgupta and D. McAllester, editors, JMLR W&CP 28(2): Proceedings of The 30th International Conference on Machine Learning, pages 10-18. JMLR, 2013.
[13] T. P. Minka. Expectation Propagation for approximate Bayesian inference. Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362-369, 2001.
[14] A. Naish-Guzman and S. Holden. The generalized FITC approximation. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1057-1064. MIT Press, Cambridge, MA, 2008.
[15] A. J. Patton. Modelling asymmetric exchange rate dependence. International Economic Review, 47(2):527-556, 2006.
[16] A. Ranaldo and P. Söderlind. Safe haven currencies. Review of Finance, 14(3):385-407, 2010.
[17] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[18] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8(1):229-231, 1959.
[19] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1257-1264. MIT Press, Cambridge, MA, 2006.
[20] Y. K. Tse and A. K. C. Tsui.
A multivariate generalized autoregressive conditional heteroscedasticity model with time-varying correlations. Journal of Business & Economic Statistics, 20(3):351-362, 2002.
[21] M. A. J. van Gerven, B. Cseke, F. P. de Lange, and T. Heskes. Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior. NeuroImage, 50(1):150-161, 2010.
[22] A. G. Wilson and Z. Ghahramani. Generalised Wishart processes. In F. Cozman and A. Pfeffer, editors, Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-11), Barcelona, Spain, 2011. AUAI Press.
[23] Y. Wu, J. M. Hernández-Lobato, and Z. Ghahramani. Dynamic covariance models for multivariate financial time series. In S. Dasgupta and D. McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 558-566. JMLR Workshop and Conference Proceedings, 2013.
Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty

Haichao Zhang†‡ and David Wipf§
† School of Computer Science, Northwestern Polytechnical University, Xi'an, China
‡ Department of Electrical and Computer Engineering, Duke University, USA
§ Visual Computing Group, Microsoft Research Asia, Beijing, China
hczhang1@gmail.com davidwipf@gmail.com

Abstract

Typical blur from camera shake often deviates from the standard uniform convolutional assumption, in part because of problematic rotations which create greater blurring away from some unknown center point. Consequently, successful blind deconvolution for removing shake artifacts requires the estimation of a spatially-varying or non-uniform blur operator. Using ideas from Bayesian inference and convex analysis, this paper derives a simple non-uniform blind deblurring algorithm with a spatially-adaptive image penalty. Through an implicit normalization process, this penalty automatically adjusts its shape based on the estimated degree of local blur and image structure such that regions with large blur or few prominent edges are discounted. Remaining regions with modest blur and revealing edges therefore dominate on average without explicitly incorporating structure-selection heuristics. The algorithm can be implemented using an optimization strategy that is virtually tuning-parameter free and simpler than existing methods, and likely can be applied in other settings such as dictionary learning. Detailed theoretical analysis and empirical comparisons on real images serve as validation.

1 Introduction

Image blur is an undesirable degradation that often accompanies the image formation process and may arise, for example, because of camera shake during acquisition. Blind image deblurring strategies aim to recover a sharp image from only a blurry, compromised observation.
Extensive efforts have been devoted to the uniform blur (shift-invariant) case, which can be described with the convolutional model $y = k * x + n$, where x is the unknown sharp image, y is the observed blurry image, k is the unknown blur kernel (or point spread function), and n is a zero-mean Gaussian noise term [6, 21, 17, 5, 28, 14, 1, 27, 29]. Unfortunately, many real-world photographs contain blur effects that vary across the image plane, such as when unknown rotations are introduced by camera shake [17]. More recently, algorithms have been generalized to explicitly handle some degree of non-uniform blur using the more general observation model $y = Hx + n$, where each column of the blur operator H contains the spatially-varying effective blur kernel at the corresponding pixel site [25, 7, 8, 9, 11, 4, 22, 12]. Note that the original uniform blur model is recovered as a special case when H is forced to adopt certain structure (e.g., block-Toeplitz structure with Toeplitz blocks). In general, non-uniform blur may arise under several different contexts. This paper will focus on the blind removal of non-uniform blur caused by general camera shake (as opposed to blur from object motion) using only a single image, with no additional hardware assistance. While existing algorithms for addressing non-uniform camera shake have displayed a measure of success, several important limitations remain. First, some methods require either additional specialized hardware such as high-speed video capture [23] or inertial measurement sensors [13] for estimating motion, or else multiple images of the same scene [4]. Secondly, even the algorithms that operate given only data from a single image typically rely on carefully engineered initializations, heuristics, and trade-off parameters for selecting salient image structure or edges, in part to avoid undesirable degenerate, no-blur solutions [7, 8, 9, 11]. Consequently, enhancements and rigorous analysis may be problematic.
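The shift-invariant model $y = k * x + n$ can be sketched in a few lines, with scipy's `ndimage.convolve` standing in for the convolution operator:

```python
import numpy as np
from scipy.ndimage import convolve

def uniform_blur(x, k, noise_std=0.0, seed=0):
    """Shift-invariant observation model y = k * x + n: every pixel is
    blurred by the same kernel k, plus zero-mean Gaussian noise.
    mode='nearest' replicates border pixels to handle image edges."""
    rng = np.random.default_rng(seed)
    y = convolve(x, k, mode="nearest")
    return y + noise_std * rng.standard_normal(x.shape)
```

In the non-uniform setting discussed next, the single kernel k is replaced by a per-pixel kernel, i.e. a column of the operator H.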
To address these shortcomings, we present an alternative blind deblurring algorithm built upon a simple, closed-form cost function that automatically discounts regions of the image that contain little information about the blur operator, without introducing any additional salient-structure selection steps. This transparency leads to a nearly tuning-parameter-free algorithm based upon a sparsity penalty whose shape adapts to the estimated degree of local blur, and provides theoretical arguments regarding how to robustly handle non-uniform degradations. The rest of the paper is structured as follows. Section 2 briefly describes relevant existing work on non-uniform blind deblurring operators and implementation techniques. Section 3 then introduces the proposed non-uniform blind deblurring model, while further theoretical justification and analyses are provided in Section 4. Experimental comparisons with state-of-the-art methods are carried out in Section 5, followed by conclusions in Section 6.

2 Non-Uniform Deblurring Operators

Perhaps the most direct way of handling non-uniform blur is to simply partition the image into different regions and then learn a separate, uniform blur kernel for each region, possibly with an additional weighting function for smoothing the boundaries between two adjacent kernels. The resulting algorithm has been adopted extensively [18, 8, 22, 12] and admits an efficient implementation called efficient filter flow (EFF) [10]. The downside of this type of model is that geometric relationships between the blur kernels of different regions, derived from the physical motion path of the camera, are ignored.
In contrast, to explicitly account for camera motion, the projective motion path (PMP) model [23] treats a blurry image as the weighted summation of projectively transformed sharp images, leading to the revised observation model

$y = \sum_j w_j P_j x + n$,   (1)

where $P_j$ is the j-th projection or homography operator (a combination of rotations and translations) and $w_j$ is the corresponding combination weight representing the proportion of time spent at that particular camera pose during exposure. The uniform convolutional model can be obtained by restricting the general projection operators $\{P_j\}$ to be translations. In this regard, (1) represents a more general model that has been used in many recent non-uniform deblurring efforts [23, 25, 7, 11, 4]. PMP also retains the bilinear property of uniform convolution, meaning that

$y = Hx + n = Dw + n$,   (2)

where $H = \sum_j w_j P_j$ and $D = [P_1x, P_2x, \cdots, P_jx, \cdots]$ is a matrix of transformed sharp images. The disadvantage of PMP is that it typically leads to inefficient algorithms because the evaluation of the matrix-vector product $Hx = Dw$ requires generating many expensive intermediate transformed images. However, EFF can be combined with the PMP model by introducing a set of basis images efficiently generated by transforming a grid of delta peak images [9]. The computational cost can be further reduced by using an active set for pruning out the projection operators with small responses [11].

3 A New Non-Uniform Blind Deblurring Model

Following previous work [6, 16], we will work in the derivative domain of images for ease of modeling and better performance, meaning that $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ will denote the lexicographically ordered sharp and blurry image derivatives respectively.¹

¹The derivative filters used in this work are $\{[-1, 1], [-1, 1]^T\}$. Other choices are also possible.
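A toy version of the PMP forward model (1), restricted to translations so that `np.roll` can stand in for the general homography operators $P_j$ (the circular boundary handling is a simplification):

```python
import numpy as np

def pmp_blur(x, shifts, w):
    """Projective motion path model restricted to translations:
    y = sum_j w_j P_j x, where each P_j shifts the image by shifts[j]
    (rows, cols) and w_j is the fraction of exposure time at that pose."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                      # total exposure time normalizes to 1
    y = np.zeros_like(x, dtype=float)
    for (dy, dx), wj in zip(shifts, w):
        y += wj * np.roll(x, (dy, dx), axis=(0, 1))
    return y
```

Applying this to a delta image recovers the effective local blur kernel at that pixel, which is exactly how the columns $\bar{w}_i$ of H are characterized later in the paper.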
Maximum likelihood estimation of x and w using (3) is clearly ill-posed and so further regularization is required to constrain the solution space. For this purpose we adopt the Gaussian prior p(x) ∼N(x; 0, Γ), where Γ ≜diag[γ] with γ = [γ1, . . . , γm]T a vector of m hyperparameter variances, one for each element of x = [x 1, . . . , xm]T . While presently γ is unknown, if we first marginalize over the unknown x, we can estimate it jointly along with the blur parameters w and the unknown noise variance λ. This type II maximum likelihood procedure has been advocated in the context of sparse estimation, where the goal is to learn vectors with mostly zero-valued coefficients [24, 26]. The final sharp image can then be recovered using the estimated kernel and noise level along with standard non-blind deblurring algorithms (e.g., [15]). Mathematically, the proposed estimation scheme requires that we solve max γ,w,λ≥0 p(y|x, w)p(x)dx ≡ min γ,w,λ≥0 yT HΓHT + λI −1 y + log HΓHT + λI , (4) where a −log transformation has been included for convenience. Clearly (4) does not resemble the traditional blind non-uniform deblurring script, where estimation proceeds using the more transparent penalized regression model [4, 7, 9] min x;w≥0 ∥y −Hx∥2 2 + α i g(xi) + β j h(wj) (5) and α and β are user-defined trade-off parameters, g is an image penalty which typically favors sparsity, and h is usually assumed to be quadratic. Despite the differing appearances however, (4) has some advantageous properties with respect to deconvolution problems. In particular, it is devoid of tuning parameters and it possesses more favorable minimization conditions. For example, consider the simplified non-uniform deblurring situation where the true x has a single non-zero element and H is defined such that each column indexed by i is independently parameterized with finite support symmetric around pixel i. Moreover, assume this support matches the true support of the unknown blur operator. 
Then we have the following:

Lemma 1 Given the idealized non-uniform deblurring problem described above, the cost function (4) will be characterized by a unique minimizing solution that correctly locates the nonzero element in x and the corresponding true blur kernel at this location. No possible problem in the form of (5), with $g(x) = |x|^p$, $h(w) = w^q$, and $\{p, q\}$ arbitrary non-negative scalars, can achieve a similar result (there will always exist either multiple different minimizing solutions or a global minimum that does not produce the correct solution).

This result, which can be generalized with additional effort, can be shown by expanding on some of the derivations in [26]. Although obviously the conditions upon which Lemma 1 is based are extremely idealized, it is nonetheless emblematic of the potential of the underlying cost function to avoid local minima, etc., and [26] contains complementary results in the case where H is fixed. While optimizing (4) is possible using various general techniques such as the EM algorithm, it is computationally expensive in part because of the high-dimensional determinants involved with realistic-sized images. Consequently we are presently considering various specially-tailored optimization schemes for future work. But for the present purposes, we instead minimize a convenient upper bound allowing us to circumvent such computational issues. Specifically, using Hadamard's inequality we have

$\log\left|H\Gamma H^T + \lambda I\right| = n\log\lambda + \log|\Gamma| + \log\left|\lambda^{-1}H^TH + \Gamma^{-1}\right| \le n\log\lambda + \log|\Gamma| + \log\left|\mathrm{diag}\left[\lambda^{-1}H^TH + \Gamma^{-1}\right]\right| = \sum_i \log\left(\lambda + \gamma_i\|\bar{w}_i\|_2^2\right) + (n - m)\log\lambda$,   (6)

where $\bar{w}_i$ denotes the i-th column of H. Note that Hadamard's inequality is applied by writing $\lambda^{-1}H^TH + \Gamma^{-1} = V^TV$ for some matrix $V = [v_1, \ldots, v_m]$. We then have $\log|\lambda^{-1}H^TH + \Gamma^{-1}| = 2\log|V| \le 2\log\left(\prod_i \|v_i\|_2\right) = \log\left|\mathrm{diag}\left[\lambda^{-1}H^TH + \Gamma^{-1}\right]\right|$, leading to the stated result. Also, the quantity $\|\bar{w}_i\|_2$ which appears in (6) can be viewed as a measure of the degree of local blur at location i.
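The bound (6) can be verified numerically for a random instance; the script below compares both sides for an arbitrary H, γ, and λ:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 5
H = np.abs(rng.standard_normal((n, m)))
gamma = rng.uniform(0.5, 2.0, m)
lam = 0.3

# Left side of (6): log|H Gamma H^T + lam*I|.
lhs = np.linalg.slogdet(H @ np.diag(gamma) @ H.T + lam * np.eye(n))[1]

# Right side: sum_i log(lam + gamma_i * ||w_bar_i||_2^2) + (n - m) * log(lam),
# where ||w_bar_i||_2^2 is the squared norm of the i-th column of H.
col_norms_sq = (H ** 2).sum(axis=0)
rhs = np.log(lam + gamma * col_norms_sq).sum() + (n - m) * np.log(lam)

assert lhs <= rhs + 1e-10
```

The bound holds for every H, γ > 0 and λ > 0, since it only combines the determinant identity with Hadamard's inequality applied to the positive definite matrix $\lambda^{-1}H^TH + \Gamma^{-1}$.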
Given the feasible region $w \ge 0$ and, without loss of generality, the normalization constraint $\sum_i w_i = 1$, it can easily be shown that $1/L \le \|\bar{w}_i\|_2^2 \le 1$, where L is the maximum number of elements in any local blur kernel $\bar{w}_i$ or column of H. The upper bound is achieved when the local kernel is a delta solution, meaning only one nonzero element and therefore minimal blur. In contrast, the lower bound on $\|\bar{w}_i\|_2^2$ occurs when every element of $\bar{w}_i$ has an equal value, constituting the maximal possible blur. This metric, which will influence our analysis in the next section, can be computed using $\|\bar{w}_i\|_2^2 = w^T(B_i^TB_i)w$, where $B_i \triangleq [P_1e_i, P_2e_i, \cdots, P_je_i, \cdots]$ and $e_i$ denotes an all-zero image with a one at site i. In the uniform deblurring case, $B_i^TB_i = I$ ignoring edge effects, and therefore $\|\bar{w}_i\|_2 = \|w\|_2$ for all i. While optimizing (4) using the upper bound from (6) can be justified in part using Bayesian-inspired arguments and the lack of trade-off parameters, the augmented cost function unfortunately no longer satisfies Lemma 1. However, it is still well-equipped for estimating sparse image gradients and avoiding degenerate no-blur solutions. For example, consider the case of an asymptotically large image with iid distributed sparse image gradients, with some constant fraction exactly equal to zero and the remaining nonzero elements drawn from any continuous distribution. Now suppose that this image is corrupted with a non-uniform blur operator of the form $H = \sum_j w_j P_j$, where the cardinality of the summation is finite and H satisfies minimal regularity conditions. Then it can be shown that any global minimum of (4), with or without the bound from (6), will produce the true blur operator. Related intuition applies when noise is present or when the image gradients are not exactly sparse (we will defer more detailed analysis to a future publication).
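The local blur measure and its bounds are easy to check: when the columns of $B_i$ are orthonormal (translations sending the delta image to distinct pixels), a delta kernel gives the value 1 and a uniform kernel over L taps gives 1/L:

```python
import numpy as np

def local_blur_norm(w, Bi):
    """Local blur measure ||w_bar_i||_2^2 = w^T (B_i^T B_i) w, where the
    columns of B_i are the transformed delta images P_j e_i. With w >= 0
    summing to one, the value lies in [1/L, 1]: one for a delta (no-blur)
    kernel, 1/L for a maximally spread kernel with L equal taps."""
    w = np.asarray(w, dtype=float)
    return float(w @ (Bi.T @ Bi) @ w)
```

This is the quantity that scales each $|x_i|$ inside the penalty of (7), so heavily blurred locations (small $\|\bar{w}_i\|_2$) are automatically discounted.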
Regardless, the simplified $\gamma$-dependent cost function is still far less intuitive than the penalized regression models dependent on x, such as (5), that are typically employed for non-uniform blind deblurring. However, using the framework from [26], it can be shown that the kernel estimate obtained by this process is formally equivalent to the one obtained via

$\min_{x;\, w \ge 0,\, \lambda \ge 0} \frac{1}{\lambda}\|y - Hx\|_2^2 + \sum_i \psi\left(|x_i|\|\bar{w}_i\|_2, \lambda\right) + (n - m)\log\lambda$,   (7)

with

$\psi(u, \lambda) \triangleq \frac{2u}{u + \sqrt{4\lambda + u^2}} + \log\left[2\lambda + u^2 + u\sqrt{4\lambda + u^2}\right], \quad u \ge 0$.

The optimization from (7) closely resembles a standard penalized regression (or equivalently MAP) problem used for blind deblurring. The primary distinction is the penalty term $\psi$, which jointly regularizes x, w, and $\lambda$ as discussed in Section 4. The supplementary file derives a simple majorization-minimization algorithm for solving (7), along with additional implementational details. The underlying procedure is related to variational Bayesian (VB) models from [1, 16, 20]; however, these models are based on a completely different mean-field approximation and a uniform blur assumption, and they do not learn the noise parameter. Additionally, the analysis provided with these VB models is limited by relatively less transparent underlying cost functions.

4 Model Properties

The proposed blind deblurring strategy involves simply minimizing (7); no additional steps for trade-off parameter selection or structure/salient-edge detection are required, unlike other state-of-the-art approaches. This section will examine theoretical properties of (7) that ultimately allow such a simple algorithm to succeed. First, we will demonstrate a form of intrinsic column normalization that facilitates the balanced sparse estimation of the unknown latent image and implicitly de-emphasizes regions with large blur and few dominant edges. Later we describe an appealing form of noise-dependent shape adaptation that helps in avoiding local minima. While there are multiple, complementary perspectives for interpreting the behavior of this algorithm, more detailed analyses, as well as extensions to other types of underdetermined inverse problems such as dictionary learning, will be deferred to a later publication.

4.1 Column-Normalized Sparse Estimation

Using the simple reparameterization $z_i \triangleq x_i\|\bar{w}_i\|_2$, it follows that (7) is exactly equivalent to solving

$\min_{z;\, w \ge 0,\, \lambda \ge 0} \frac{1}{\lambda}\|y - \bar{H}z\|_2^2 + \sum_i \psi(|z_i|, \lambda) + (n - m)\log\lambda$,   (8)

where z = [z_1, . . .
, zm]ᵀ and H̄ is simply the ℓ₂-column-normalized version of H. Moreover, it can be shown that this ψ is a concave, non-decreasing function of |z|, and hence represents a canonical sparsity-promoting penalty function with respect to z [26]. Consequently, noise and kernel dependencies notwithstanding, this reparameterization places the proposed cost function in a form exactly consistent with nearly all prototypical sparse regression problems, where ℓ₂ column normalization is ubiquitous, at least in part, to avoid favoring one column over another during the estimation process (which can potentially bias the solution). To understand the latter point, note that ∥y − H̄z∥₂² ≡ zᵀH̄ᵀH̄z − 2yᵀH̄z. Among other things, because of the normalization, the quadratic factor H̄ᵀH̄ now has a unit diagonal, and likewise the inner products yᵀH̄ are scaled by the consistent induced ℓ₂ norms, which collectively avoids the premature favoring of any one element of z over another. Moreover, no additional heuristic kernel penalty terms such as in (5) are required, since H̄ is in some sense self-regularized by the normalization. Additional ancillary benefits of (8) will be described in Section 4.2. Of course we can always apply the same reparameterization to existing algorithms in the form of (5). While this will indeed result in normalized columns and a properly balanced data-fit term, these raw norms will now appear in the penalty function g, giving the equivalent objective

min_{z; w≥0} ∥y − H̄z∥₂² + α Σᵢ g(zᵢ∥w̄ᵢ∥₂⁻¹) + β Σⱼ h(wⱼ).   (9)

However, the presence of these norms now embedded in g may have undesirable consequences. Simply put, the problem (9) will favor solutions where the ratio zᵢ/∥w̄ᵢ∥₂ is sparse or nearly so, which can be achieved by either making many zᵢ zero or many ∥w̄ᵢ∥₂ big. If some zᵢ is estimated to be zero (and many zᵢ will provably be exactly zero at any local minimum if g(x) is a concave, non-decreasing function of |x|), then the corresponding ∥w̄ᵢ∥₂ will be unconstrained.
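The basic normalization identity is easy to verify on a toy system. The following sketch (with a hypothetical 2×2 operator H chosen for illustration, not the paper's implementation) checks that the data-fit term is unchanged under the substitution zᵢ = xᵢ∥hᵢ∥₂, and that the normalized quadratic factor acquires a unit diagonal:

```python
# Column normalization H -> Hbar with z_i = x_i * ||h_i||_2 leaves the
# data-fit residual unchanged: H x == Hbar z. Toy 2x2 example.
import math

H = [[3.0, 0.0], [4.0, 2.0]]            # columns h_1 = (3,4), h_2 = (0,2)
x = [0.5, -1.0]
norms = [math.hypot(H[0][j], H[1][j]) for j in range(2)]   # column l2 norms
Hbar = [[H[i][j] / norms[j] for j in range(2)] for i in range(2)]
z = [x[j] * norms[j] for j in range(2)]

Hx = [sum(H[i][j] * x[j] for j in range(2)) for i in range(2)]
Hbz = [sum(Hbar[i][j] * z[j] for j in range(2)) for i in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(Hx, Hbz))

# Hbar' Hbar has a unit diagonal, so no column is favored a priori
diag = [sum(Hbar[i][j] ** 2 for i in range(2)) for j in range(2)]
assert all(abs(d - 1.0) < 1e-12 for d in diag)
```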
In contrast, if a given zᵢ is non-zero, there will be a stronger push for the associated ∥w̄ᵢ∥₂ to be large, i.e., more like the delta kernel which maximizes the ℓ₂ norm. Thus, the relative penalization of the kernel norms will depend on the estimated local image gradients, and no-blur delta solutions may be arbitrarily favored in parts of the image plane dominated by edges, the very place where blur estimation information is paramount. In reality, the local kernel norms ∥w̄ᵢ∥₂, which quantify the degree of local blur as mentioned previously, should be completely independent of the sparsity of the image gradients in the same location. This is of course because the different blurring effects from camera shake are independent of the locations of strong edges in a given scene, since the blur operator is only a function of camera motion (at least to a first-order approximation). One way to enforce this independence would be to simply optimize (9) with ∥w̄ᵢ∥₂ removed from g. While this is possible in principle, enforcing the non-convex and coupled constraints required to maintain normalized columns is extremely difficult. Another option would be to carefully choose β and h to somehow compensate. In contrast, our algorithm handles these complications seamlessly without any additional penalty terms.

4.2 Noise-Dependent, Parameter-Free Homotopy Continuation

Column normalization can be viewed as a principled first step towards solving challenging sparse estimation problems. However, when non-convex sparse regularizers are used for the image penalty, e.g., ℓp norms with p < 1, local minima can be a significant problem. The rationalization for using such potentially problematic non-convexity is as follows; more details can be found in [17, 27].
When applied to a sharp image, any blur operator will necessarily contribute two opposing effects: (i) It reduces a measure of the image sparsity, which normally increases the penalty Σᵢ|yᵢ|ᵖ, and (ii) It broadly reduces the overall image variance, which actually reduces Σᵢ|yᵢ|ᵖ. Additionally, the greater the degree of blur, the more effect (ii) will begin to overshadow (i). Note that we can always apply greater and greater blur to any sharp image x such that the variance of the resulting blurry y is arbitrarily small. This then produces an arbitrarily small ℓp norm, which implies that Σᵢ|yᵢ|ᵖ < Σᵢ|xᵢ|ᵖ, meaning that the penalty actually favors the blurry image over the sharp one. In a practical sense though, the amount of blur that can be tolerated before this undesirable preference for y over x occurs is much larger as p approaches zero. This is because the more concave the image penalty becomes (as a function of coefficient magnitudes), the less sensitive it is to image variance and the more sensitive it is to image sparsity. In fact the scale-invariant special case where p → 0 depends only on sparsity, i.e., on the number of elements that are exactly equal to zero.² We may therefore expect such a highly concave, sparsity-promoting penalty to favor the sharp image over the blurry one in a broader range of blur conditions. Even with other families of penalty functions the same basic notion holds: greater concavity means greater sparsity preference and less sensitivity to variance changes that favor no-blur degenerate solutions. From an implementational standpoint, homotopy continuation methods provide one attractive means of dealing with difficult non-convex penalty functions and the associated constellation of local minima [3].
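The two opposing effects can be reproduced with a hand-built example. Below, x holds the gradients of a sharp double edge and y the gradients of the same edge blurred with a uniform [1/3, 1/3, 1/3] kernel (illustrative numbers, not taken from the paper): the ℓ1 penalty prefers the blurry gradients, while more concave choices of p prefer the sharp ones.

```python
# Gradients of a sharp edge pair vs. the same edge blurred with [1/3]*3.
x = [0.0, 1.0, 0.0, -1.0, 0.0]                  # sharp image gradients
y = [0.0, 1/3, 1/3, 0.0, -1/3, -1/3, 0.0]       # blurry image gradients

def lp(v, p):
    # sum |v_i|^p over nonzero entries; p == 0 counts nonzeros (l0 norm)
    return sum(abs(t) ** p for t in v if t != 0.0)

assert lp(y, 1.0) < lp(x, 1.0)    # l1 prefers the *blurry* gradients (4/3 < 2)
assert lp(y, 0.5) > lp(x, 0.5)    # p = 0.5 already prefers the sharp ones
assert lp(y, 0.0) > lp(x, 0.0)    # l0 sparsity count: 4 nonzeros vs. 2
```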
The basic idea is to use a parameterized family of sparsity-promoting functions g(x; θ), where different values of θ determine the relative degree of concavity, allowing a transition from something convex such as the ℓ1 norm (with θ large) to something concave such as the ℓ0 norm (with θ small). Moreover, to ensure cost-function descent (see below), we also require that g(x; θ₂) ≥ g(x; θ₁) whenever θ₂ ≥ θ₁, noting that this rules out simply setting θ = p and using the family of ℓp norms. We then begin optimization with a large θ value; later, as the estimation progresses and we are hopefully near a reasonably good basin of attraction, θ is reduced, introducing greater concavity, a process which is repeated until convergence, all the while guaranteeing cost-function descent. While potentially effective in practice, homotopy continuation methods require both a trade-off parameter for g(x; θ) and a pre-defined schedule or heuristic for adjusting θ, both of which could potentially be image dependent. The proposed deblurring algorithm automatically implements a form of noise-dependent, parameter-free homotopy continuation with several attractive auxiliary properties [26]. To make this claim precise and facilitate subsequent analysis, we first introduce the definition of relative concavity [19]:

Definition 1 Let u be a strictly increasing function on [a, b]. The function ν is concave relative to u on the interval [a, b] if and only if ν(y) ≤ ν(x) + (ν′(x)/u′(x)) [u(y) − u(x)] holds ∀x, y ∈ [a, b].

We will use ν ≺ u to denote that ν is concave relative to u on [0, ∞). This can be understood as a natural generalization of the traditional notion of concavity, in that a concave function is equivalently concave relative to a linear function per Definition 1. In general, if ν ≺ u, then when ν and u are set to have the same functional value and the same slope at any given point (i.e., by an affine transformation of u), then ν lies completely under u.
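For the special case where u is linear, Definition 1 reduces to ordinary concavity, which is easy to verify numerically. A small sketch with the illustrative choices ν(t) = √t and u(t) = t (so u′(t) = 1 and the inequality becomes the tangent-line bound):

```python
# Definition 1 with u(t) = t: nu(y) <= nu(x) + nu'(x) * (y - x) for all x, y,
# i.e., the tangent line of a concave function lies above its graph.
import math

def nu(t): return math.sqrt(t)
def nu_prime(t): return 0.5 / math.sqrt(t)

grid = [0.1 * k for k in range(1, 101)]     # points in (0, 10]
for px in grid:
    for py in grid:
        assert nu(py) <= nu(px) + nu_prime(px) * (py - px) + 1e-12
```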
In the context of homotopy continuation, an ideal candidate penalty would be one for which g(x; θ₁) ≺ g(x; θ₂) whenever θ₁ ≤ θ₂. This would ensure that greater sparsity-inducing concavity is introduced as θ is reduced. We now demonstrate that ψ(|z|, λ) is such a function, with λ occupying the role of θ. This dependency on the noise parameter is unlike other continuation methods and ultimately leads to several attractive attributes.

Theorem 1 If λ₁ < λ₂, then ψ(u, λ₁) ≺ ψ(u, λ₂) for u ≥ 0. Additionally, in the limit as λ → 0, Σᵢ ψ(|zᵢ|, λ) converges to the ℓ0 norm (up to an inconsequential scaling and translation). Conversely, as λ becomes large, Σᵢ ψ(|zᵢ|, λ) converges to 2∥z∥₁/√λ.

The proof has been deferred to the supplementary file. The relevance of this result can be understood as follows. First, at the beginning of the optimization process λ will be large, both because of initialization and because we have not yet found a relatively sparse z and associated w such that y can be well-approximated; hence the estimated λ should not be small. Based on Theorem 1, in this regime (8) approaches

min_z ∥y − H̄z∥₂² + 2√λ ∥z∥₁   (10)

assuming w and λ are fixed. Note incidentally that this square-root dependency on λ, which arises naturally from our model, is frequently advocated when performing regular ℓ1-norm penalized sparse regression given that the true noise variance is λ [2]. Additionally, because λ must be relatively large to arrive at this ℓ1 approximation, the estimation need only focus on reproducing the largest elements in z, since the sparse penalty will dominate the data-fit term. Furthermore, these larger elements are on average more likely to be in regions of relatively lower blurring, or high ∥w̄ᵢ∥₂ value, by virtue of the reparameterization zᵢ = xᵢ∥w̄ᵢ∥₂. Consequently, the less concave initial estimation can proceed successfully by de-emphasizing regions with high blur or low ∥w̄ᵢ∥₂, and focusing on coarsely approximating regions with relatively less blur.
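Assuming the closed form of ψ given with (7), the large-λ regime of Theorem 1 and the concave, non-decreasing shape of ψ can be checked numerically (a sketch, not a proof; the grid and λ values are arbitrary illustrative choices):

```python
import math

def psi(u, lam):
    r = math.sqrt(u * u + 4 * lam)
    return 2 * u / (u + r) + math.log(2 * lam + u * u + u * r)

# Large-lambda regime: psi(u, lam) - psi(0, lam) ~= 2 * u / sqrt(lam)
lam = 1e8
rel_err = abs((psi(1.0, lam) - psi(0.0, lam)) * math.sqrt(lam) / 2.0 - 1.0)
assert rel_err < 1e-3                       # matches the l1-like limit

# psi is non-decreasing and concave in u (second differences <= 0)
vals = [psi(0.1 * k, 0.01) for k in range(1, 50)]
assert all(b >= a for a, b in zip(vals, vals[1:]))
assert all(vals[i - 1] - 2 * vals[i] + vals[i + 1] <= 1e-9
           for i in range(1, len(vals) - 1))
```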
²Note that even if the true sharp image is not exactly sparse, as long as it can be reasonably well-approximated by some exactly sparse image in an ℓ₂-norm sense, then the analysis here still holds [27].

Figure 1: Effectiveness of spatially-adaptive sparsity. From left to right: the blurry image (Blurry Elephant); the deblurred image and estimated local kernels without spatially-adaptive column normalization; the analogous results with this normalization and its spatially-varying impact on image estimation; and the associated map of ∥w̄ᵢ∥₂⁻¹, which reflects the degree of estimated local blurring.

Later, as the estimation proceeds and w and z are refined, λ will be reduced, which in turn necessarily increases the relative concavity of the penalty ψ per Theorem 1. However, the added concavity will now be welcome for resolving increasingly fine details uncovered by a lower noise variance and the concomitant boosted importance of the data fidelity term, especially since many of these uncovered details may reside near increasingly blurry regions of the image and we need to avoid unwanted no-blur solutions. Eventually the penalty can even approach the ℓ0 norm (although images are generally not exactly sparse, and other noise factors and unmodeled artifacts are usually present, such that λ will never go all the way to zero). Importantly, all of this implicit, spatially-adaptive penalization occurs without the need for trade-off parameters or additional structure selection measures, meaning carefully engineered heuristics designed to locate prominent edges such that good global solutions can be found without strongly concave image penalties [21, 5, 28, 8, 9]. Figure 1 displays results of this procedure both with and without the spatially-varying column normalizations and the implicit adaptive penalization that help compensate for locally varying image blur.
5 Experimental Results This section compares the proposed method with several state-of-the-art algorithms for non-uniform blind deblurring using real-world images from previously published papers (note that source code is not available for conducting more widespread evaluations with most algorithms). The supplementary file contains a number of additional comparisons, including assessments with a benchmark uniform blind deblurring dataset where ground truth is available. Overall, our algorithm consistently performs comparably or better on all of these respective images. Experimental specifics of our implementation (e.g., regarding the non-blind deblurring step, projection operators, etc.) are also contained in the supplementary file for space considerations. Comparison with Harmeling et al. [8] and Hirsch et al. [9]: Results are based on three test images provided in [8]. Figure 2 displays deblurring comparisons based on the Butchershop and Vintage-car images. In both cases, the proposed algorithm reveals more fine details than the other methods, despite its simplicity and lack of salient structure selection heuristics or trade-off parameters. Note that with these images, ground truth blur kernels were independently estimated using a special capturing process [8]. As shown in the supplementary file, the estimated blur kernel patterns obtained from our algorithm better resemble the ground truth relative to the other methods, a performance result that compensates for any differences in the non-blind step. Comparison with Whyte et al. [25]: Results on the Pantheon test image from [25] are shown in Figure 3 (top row), where we observe that the deblurred image from Whyte et al. has noticeable ringing artifacts. In contrast, our result is considerably cleaner. Comparison with Gupta et al. [7]: We next experiment using the test image Building from [7], which contains large rotational blurring that can be challenging for blind deblurring algorithms. 
Figure 3 (middle row) reveals that our algorithm contains less ringing and more fine details relative to Gupta et al. Comparison with Joshi et al. [13]: Joshi et al. present a deblurring algorithm that relies upon additional hardware for estimating camera motion [13]. However, even without this additional information, our algorithm produces a better sharp estimate of the Sculpture image from [13], with fewer ringing artifacts and higher-resolution details. See Figure 3 (bottom row).

Figure 2: Non-uniform deblurring results. Comparison with Harmeling [8] and Hirsch [9] on the real-world images Butchershop and Vintage-car. (Better viewed electronically with zooming.)

Figure 3: Non-uniform deblurring results. Comparison with Whyte [25] (Pantheon), Gupta [7] (Building), and Joshi [13] (Sculpture) on real-world images. (Better viewed electronically with zooming.)

6 Conclusion

This paper presents a strikingly simple yet effective method for non-uniform camera shake removal based upon a principled, transparent cost function that is open to analysis and further extensions/refinements. For example, it can be combined with the model from [29] to perform joint multi-image alignment, denoising, and deblurring. Both theoretical and empirical evidence are provided demonstrating the efficacy of the blur-dependent, spatially-adaptive sparse regularization which emerges from our model. The framework also suggests exploring other related cost functions that, while deviating from the original probabilistic script, nonetheless share similar properties. One such simple example is a penalty of the form Σᵢ log(√λ + |xᵢ|∥w̄ᵢ∥₂); many others are possible.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (61231016).

References

[1] S. D. Babacan, R. Molina, M. N. Do, and A. K. Katsaggelos.
Bayesian blind deconvolution with general sparse image priors. In ECCV, 2012.
[2] E. Candès and Y. Plan. Near-ideal model selection by ℓ1 minimization. The Annals of Statistics, (5A):2145–2177.
[3] R. Chartrand and W. Yin. Iteratively reweighted algorithms for compressive sensing. In ICASSP, 2008.
[4] S. Cho, H. Cho, Y.-W. Tai, and S. Lee. Registration based non-uniform motion deblurring. Comput. Graph. Forum, 31(7-2):2183–2192, 2012.
[5] S. Cho and S. Lee. Fast motion deblurring. In SIGGRAPH ASIA, 2009.
[6] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. In SIGGRAPH, 2006.
[7] A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless. Single image deblurring using motion density functions. In ECCV, 2010.
[8] S. Harmeling, M. Hirsch, and B. Schölkopf. Space-variant single-image blind deconvolution for removing camera shake. In NIPS, 2010.
[9] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf. Fast removal of non-uniform camera shake. In ICCV, 2011.
[10] M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling. Efficient filter flow for space-variant multiframe blind deconvolution. In CVPR, 2010.
[11] Z. Hu and M.-H. Yang. Fast non-uniform deblurring using constrained camera pose subspace. In BMVC, 2012.
[12] H. Ji and K. Wang. A two-stage approach to blind spatially-varying motion deblurring. In CVPR, 2012.
[13] N. Joshi, S. B. Kang, C. L. Zitnick, and R. Szeliski. Image deblurring using inertial measurement sensors. In ACM SIGGRAPH, 2010.
[14] D. Krishnan, T. Tay, and R. Fergus. Blind deconvolution using a normalized sparsity measure. In CVPR, 2011.
[15] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Deconvolution using natural image priors. Technical report, MIT, 2007.
[16] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Efficient marginal likelihood optimization in blind deconvolution. In CVPR, 2011.
[17] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman.
Understanding blind deconvolution algorithms. IEEE Trans. Pattern Anal. Mach. Intell., 33(12):2354–2367, 2011.
[18] J. G. Nagy and D. P. O'Leary. Restoring images degraded by spatially variant blur. SIAM J. Sci. Comput., 19(4):1063–1082, 1998.
[19] J. A. Palmer. Relative convexity. Technical report, UCSD, 2003.
[20] J. A. Palmer, D. P. Wipf, K. Kreutz-Delgado, and B. D. Rao. Variational EM algorithms for non-Gaussian latent variable models. In NIPS, 2006.
[21] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. In SIGGRAPH, 2008.
[22] M. Sorel and F. Sroubek. Image Restoration: Fundamentals and Advances. CRC Press, 2012.
[23] Y.-W. Tai, P. Tan, and M. S. Brown. Richardson-Lucy deblurring for scenes under a projective motion path. IEEE Trans. Pattern Anal. Mach. Intell., 33(8):1603–1618, 2011.
[24] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[25] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce. Non-uniform deblurring for shaken images. In CVPR, 2010.
[26] D. P. Wipf, B. D. Rao, and S. S. Nagarajan. Latent variable Bayesian models for promoting sparsity. IEEE Trans. Information Theory, 57(9):6236–6255, 2011.
[27] D. P. Wipf and H. Zhang. Revisiting Bayesian blind deconvolution. Submitted to Journal of Machine Learning Research, 2013.
[28] L. Xu and J. Jia. Two-phase kernel estimation for robust motion deblurring. In ECCV, 2010.
[29] H. Zhang, D. P. Wipf, and Y. Zhang. Multi-image blind deblurring using a coupled adaptive sparse prior. In CVPR, 2013.
Online Learning in Episodic Markovian Decision Processes by Relative Entropy Policy Search

Alexander Zimin, Institute of Science and Technology Austria, alexander.zimin@ist.ac.at
Gergely Neu, INRIA Lille – Nord Europe, gergely.neu@gmail.com

Abstract

We study the problem of online learning in finite episodic Markov decision processes (MDPs) where the loss function is allowed to change between episodes. The natural performance measure in this learning problem is the regret, defined as the difference between the total loss of the best stationary policy and the total loss suffered by the learner. We assume that the learner is given access to a finite action space A and the state space X has a layered structure with L layers, so that state transitions are only possible between consecutive layers. We describe a variant of the recently proposed Relative Entropy Policy Search algorithm and show that its regret after T episodes is 2√(L|X||A|T log(|X||A|/L)) in the bandit setting and 2L√(T log(|X||A|/L)) in the full information setting, given that the learner has perfect knowledge of the transition probabilities of the underlying MDP. These guarantees largely improve previously known results under much milder assumptions and cannot be significantly improved under general assumptions.

1 Introduction

In this paper, we study the problem of online learning in a class of finite non-stationary episodic Markov decision processes. The learning problem that we consider can be formalized as a sequential interaction between a learner (often called agent) and an environment, where the interaction between the two entities proceeds in episodes. Every episode consists of multiple time steps: In every time step of an episode, a learner has to choose one of its available actions after observing some part of the current state of the environment. The chosen action influences the observable state of the environment in a stochastic fashion and imposes some loss on the learner.
However, the entire state (be it observed or not) also influences the loss. The goal of the learner is to minimize the total (non-discounted) loss that it suffers. In this work, we assume that the unobserved part of the state evolves autonomously from the observed part of the state or the actions chosen by the learner, thus corresponding to a state sequence generated by an oblivious adversary such as nature. Otherwise, absolutely no statistical assumption is made about the mechanism generating the unobserved state variables. As usual for such learning problems, we set our goal as minimizing the regret, defined as the difference between the total loss suffered by the learner and the total loss of the best stationary state-feedback policy. This setting fuses two important paradigms of learning theory: online learning [5] and reinforcement learning [21, 22]. The learning problem outlined above can be formalized as an online learning problem where the actions of the learner correspond to choosing policies in a known Markovian decision process where the loss function changes arbitrarily between episodes. This setting is a simplified version of the learning problem first addressed by Even-Dar et al. [8, 9], who consider online learning in unichain MDPs. In their variant of the problem, the learner faces a continuing MDP task where all policies are assumed to generate a unique stationary distribution over the state space and losses can change arbitrarily between consecutive time steps.

∗Parts of this work were done while Alexander Zimin was enrolled in the MSc. programme of the Central European University, Budapest, and Gergely Neu was working on his PhD thesis at the Budapest University of Technology and Economics and the MTA SZTAKI Institute for Computer Science and Control, Hungary. Both authors would like to express their gratitude to László Győrfi for making this collaboration possible.
Assuming that the learner observes the complete loss function after each time step (that is, assuming full information feedback), they propose an algorithm called MDP-E and show that its regret is O(τ²√(T log |A|)), where τ > 0 is an upper bound on the mixing time of any policy. The core idea of MDP-E is the observation that the regret of the global decision problem can be decomposed into regrets of simpler decision problems defined in each state. Yu et al. [23] consider the same setting and propose an algorithm that guarantees o(T) regret under bandit feedback, where the learner only observes the losses that it actually suffers, but not the whole loss function. Based on the results of Even-Dar et al. [9], Neu et al. [16] propose an algorithm that is shown to enjoy an O(T^{2/3}) bound on the regret in the bandit setting, given some further assumptions concerning the transition structure of the underlying MDP. For the case of continuing deterministic MDP tasks, Dekel and Hazan [7] describe an algorithm guaranteeing O(T^{2/3}) regret. The immediate precursor of the current paper is the work of Neu et al. [14], who consider online learning in episodic MDPs where the state space has a layered (or loop-free) structure and every policy visits every state with a positive probability of at least α > 0. Their analysis is based on a decomposition similar to the one proposed by Even-Dar et al. [9], and is sufficient to prove a regret bound of O(L²√(T|A| log |A|)/α) in the bandit case and O(L²√(T log |A|)) in the full information case. In this paper, we present a learning algorithm that directly aims to minimize the global regret of the algorithm instead of trying to minimize the local regrets in a decomposed problem. Our approach is motivated by the insightful paper of Peters et al. [17], who propose an algorithm called Relative Entropy Policy Search (REPS) for reinforcement learning problems. As Peters et al.
[17] and Kakade [11] point out, good performance of policy search algorithms requires that the information loss between consecutive policies selected by the algorithm is bounded, so that policies are only modified in small steps. Accordingly, REPS aims to select policies that minimize the expected loss while guaranteeing that the state-action distributions generated by the policies stay close in terms of Kullback–Leibler divergence. Further, Daniel et al. [6] point out that REPS is closely related to a number of previously known probabilistic policy search methods. Our paper is based on the observation that REPS is closely related to the Proximal Point Algorithm (PPA) first proposed by Martinet [13] (see also [20]). We propose a variant of REPS called online REPS or O-REPS and analyze it using fundamental results concerning the PPA family. Our analysis improves all previous results concerning online learning in episodic MDPs: we show that the expected regret of O-REPS is bounded by 2√(L|X||A|T log(|X||A|/L)) in the bandit setting and 2L√(T log(|X||A|/L)) in the full information setting. Unlike previous works in the literature, we do not have to make any assumptions about the transition dynamics apart from the loop-free assumption. The full discussion of our results is deferred to Section 5. Before we move to the technical content of the paper, we first fix some conventions. Random variables will be typeset in boldface (e.g., x, a) and indefinite sums over states and actions are to be understood as sums over the entire state and action spaces. For clarity, we assume that all actions are available in all states; however, this assumption is not essential. The indicator of any event A will be denoted by I{A}.
2 Problem definition

An episodic loop-free Markov decision process is formally defined by the tuple M = {X, A, P}, where X is the finite state space, A is the finite action space, and P : X × X × A → [0, 1] is the transition function, where P(x′|x, a) is the probability that the next state of the Markovian environment will be x′, given that action a is selected in state x. We will assume that M satisfies the following assumptions:

• The state space X can be decomposed into non-intersecting layers, i.e., X = ⋃_{k=0}^{L} X_k, where X_l ∩ X_k = ∅ for l ≠ k.
• X_0 and X_L are singletons, i.e., X_0 = {x_0} and X_L = {x_L}.
• Transitions are possible only between consecutive layers. Formally, if P(x′|x, a) > 0, then x′ ∈ X_{k+1} and x ∈ X_k for some 0 ≤ k ≤ L − 1.
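Such a layered, loop-free structure is straightforward to encode and validate programmatically. A toy sketch (all state names, actions, and transition probabilities below are hypothetical):

```python
# A toy loop-free layered MDP: layers X_0..X_L, transitions only k -> k+1.
layers = [["x0"], ["a", "b"], ["xL"]]            # X_0, X_1, X_2 (L = 2)
actions = ["u", "v"]
# P[(x, act)] maps each possible next state to its probability
P = {
    ("x0", "u"): {"a": 0.8, "b": 0.2},
    ("x0", "v"): {"a": 0.3, "b": 0.7},
    ("a", "u"): {"xL": 1.0}, ("a", "v"): {"xL": 1.0},
    ("b", "u"): {"xL": 1.0}, ("b", "v"): {"xL": 1.0},
}
layer_of = {x: k for k, xs in enumerate(layers) for x in xs}

# every transition distribution is normalized and moves one layer forward
for (x, act), nxt in P.items():
    assert abs(sum(nxt.values()) - 1.0) < 1e-12
    for x2, p in nxt.items():
        assert layer_of[x2] == layer_of[x] + 1
```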
For defining our performance measure, we need to specify a set of reference controllers that is made available to the learner. To this end, we define the concept of (stochastic stationary) policies: A policy is defined as a mapping π : A × X →[0, 1], where π(a|x) gives the probability of selecting action a in state x. The expected total loss of a policy π is defined as LT (π) = E " T X t=1 L−1 X k=0 ℓt(x′ k, a′ k) P, π # , where the notation E [·| P, π] is used to emphasize that the random variables x′ k and a′ k are generated by executing π in the MDP specified by the transition function P. Denote the total expected loss suffered by the learner as bLT = PT t=1 PL−1 k=0 E [ℓt(xk(t), ak(t))| P], where the expectation is taken over the internal randomization of the learner and the random transitions of the Markovian environment. Using these notations, we define the learner’s goal as minimizing the (total expected) regret defined as bRT = bLT −min π LT (π), where the minimum is taken over the complete set of stochastic stationary policies.3 It is beneficial to introduce the concept of occupancy measures on the state-action space X × A: the occupancy measure qπ of policy π is defined as the collection of distributions generated by executing policy π on the episodic MDP described by P: qπ(x, a) = P h x′ k(x) = x, a′ k(x) = a P, π i , where k(x) denotes the index of the layer that x belongs to. It is easy to see that the occupancy measure of any policy π satisfies X a qπ(x, a) = X x′∈Xk(x)−1 X a′ P(x|x′, a′)qπ(x′, a′), (1) 1Such MDPs naturally arise in episodic decision tasks where some notion of time is present in the state description. 2In the literature of online combinatorial optimization, this feedback scheme is often called semi-bandit feedback, see Audibert et al. [2]. 3The existence of this minimum is a standard result of MDP theory, see Puterman [18]. 3 for all x ∈X \{x0, xl}, with qπ(x0, a) = π(a|x0) for all a ∈A. 
The set of all occupancy measures satisfying the above equality in the MDP M will be denoted as ∆(M). The policy π is said to generate the occupancy measure q ∈ ∆(M) if

π(a|x) = q(x, a) / Σ_b q(x, b)

holds for all (x, a) ∈ X × A. It is clear that there exists a unique generating policy for each measure in ∆(M) and vice versa. The policy generating q will be denoted as π_q. In what follows, we will redefine the task of the learner from having to select individual actions a_k(t) to having to select occupancy measures q_t ∈ ∆(M) in each episode t. To see why this notion simplifies the treatment of the problem, observe that

E[ Σ_{k=0}^{L−1} ℓ_t(x′_k, a′_k) | P, π_q ] = Σ_{k=0}^{L−1} Σ_{x∈X_k} Σ_a q(x, a) ℓ_t(x, a) = Σ_{x,a} q(x, a) ℓ_t(x, a) =: ⟨q, ℓ_t⟩,   (2)

where we defined the inner product ⟨·, ·⟩ on X × A in the last line. Using this notation, we can reformulate our original problem as an instance of online linear optimization with decision space ∆(M). Assuming that the learner selects occupancy measure q_t in episode t, the regret can be rewritten as

R̂_T = max_{q∈∆(M)} E[ Σ_{t=1}^{T} ⟨q_t − q, ℓ_t⟩ ].

3 The algorithm: O-REPS

Using the formalism introduced in the previous section, we now describe our algorithm, called Online Relative Entropy Policy Search (O-REPS). O-REPS is an instance of the family of online linear optimization methods usually referred to as Follow-the-Regularized-Leader (FTRL), Online Stochastic Mirror Descent (OSMD) or the Proximal Point Algorithm (PPA); see, e.g., [1], [19], [3] and [2] for a discussion of these methods and their relations. To allow comparisons with the original derivation of REPS by Peters et al. [17], we formalize our algorithm as an instance of PPA. Before describing the algorithm, some more definitions are in order. First, define D(q∥q′) as the unnormalized Kullback–Leibler divergence between two occupancy measures q and q′:

D(q∥q′) = Σ_{x,a} q(x, a) log( q(x, a) / q′(x, a) ) − Σ_{x,a} ( q(x, a) − q′(x, a) ).
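Stepping back to identity (2): it can be verified numerically. On a small hypothetical layered MDP (made-up transition probabilities, policy, and losses), the expected episode loss computed by enumerating all trajectories equals the inner product ⟨q_π, ℓ⟩:

```python
# Hypothetical two-layer MDP, policy, and loss function.
P = {("x0", 0): {"s1": 0.8, "s2": 0.2}, ("x0", 1): {"s1": 0.3, "s2": 0.7},
     ("s1", 0): {"xL": 1.0}, ("s1", 1): {"xL": 1.0},
     ("s2", 0): {"xL": 1.0}, ("s2", 1): {"xL": 1.0}}
LAYERS = [["x0"], ["s1", "s2"]]
ACTS = (0, 1)
pi = {("x0", 0): 0.6, ("x0", 1): 0.4, ("s1", 0): 0.9, ("s1", 1): 0.1,
      ("s2", 0): 0.2, ("s2", 1): 0.8}
loss = {("x0", 0): 0.3, ("x0", 1): 0.9, ("s1", 0): 0.1, ("s1", 1): 1.0,
        ("s2", 0): 0.7, ("s2", 1): 0.4}

def occupancy(pi):
    """Forward recursion (1), layer by layer."""
    q = {("x0", a): pi[("x0", a)] for a in ACTS}
    for k in range(1, len(LAYERS)):
        for x in LAYERS[k]:
            inflow = sum(q[(xp, ap)] * P[(xp, ap)].get(x, 0.0)
                         for xp in LAYERS[k - 1] for ap in ACTS)
            for a in ACTS:
                q[(x, a)] = inflow * pi[(x, a)]
    return q

def expected_loss_by_paths():
    """Brute-force E[total episode loss] over all trajectories (x0, a0, x1, a1)."""
    total = 0.0
    for a0 in ACTS:
        for x1 in LAYERS[1]:
            for a1 in ACTS:
                prob = pi[("x0", a0)] * P[("x0", a0)][x1] * pi[(x1, a1)]
                total += prob * (loss[("x0", a0)] + loss[(x1, a1)])
    return total

q = occupancy(pi)
inner = sum(q[xa] * loss[xa] for xa in q)   # <q, l> of eq. (2)
```

This linearity in q is exactly what turns the problem into online linear optimization over the polytope ∆(M).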
Furthermore, let R(q) denote the unnormalized negative entropy of the occupancy measure q:

R(q) = Σ_{x,a} q(x, a) log q(x, a) − Σ_{x,a} q(x, a).

We are now ready to define O-REPS formally. In the first episode, O-REPS chooses the uniform policy with π_1(a|x) = 1/|A| for all x and a, and we let q_1 = q_{π_1}.⁴ Then, the algorithm proceeds recursively: after observing

u_t = (x_0(t), a_0(t), ℓ_t(x_0(t), a_0(t)), . . . , x_{L−1}(t), a_{L−1}(t), ℓ_t(x_{L−1}(t), a_{L−1}(t)), x_L(t))

in episode t, we define the loss estimates ℓ̂_t as

ℓ̂_t(x, a) = ( ℓ_t(x, a) / q_t(x, a) ) · I{(x, a) ∈ u_t},

where we use the notation (x, a) ∈ u_t to indicate that the state-action pair (x, a) was observed during episode t. After episode t, O-REPS selects the occupancy measure that solves the optimization problem

q_{t+1} = argmin_{q∈∆(M)} { η ⟨q, ℓ̂_t⟩ + D(q∥q_t) }.   (3)

⁴ Note that q_π can be simply computed by using (1) recursively.

In episode t, our algorithm follows the policy π_t = π_{q_t}. Defining U_t = (u_1, u_2, . . . , u_t), we clearly have that q_t(x, a) = P[(x, a) ∈ u_t | U_{t−1}], so ℓ̂_t(x, a) is an unbiased estimate of ℓ_t(x, a) for all (x, a) such that q_t(x, a) > 0:

E[ ℓ̂_t(x, a) | U_{t−1} ] = ( ℓ_t(x, a) / q_t(x, a) ) · P[(x, a) ∈ u_t | U_{t−1}] = ℓ_t(x, a).   (4)

We now proceed to explain how the policy update step (3) can be implemented efficiently. It is known (see, e.g., Bartók et al. [3, Lemma 8.6]) that performing this optimization can be reformulated as first solving the unconstrained optimization problem

q̃_{t+1} = argmin_q { η ⟨q, ℓ̂_t⟩ + D(q∥q_t) }

and then projecting the result to ∆(M) as

q_{t+1} = argmin_{q∈∆(M)} D(q∥q̃_{t+1}).

The first step can be simply carried out by setting q̃_{t+1}(x, a) = q_t(x, a) e^{−η ℓ̂_t(x, a)}. The projection step, however, requires more care. To describe the projection procedure, we need to introduce some more notation. For any function v : X → R and loss function ℓ : X × A → [0, 1] we define the function

δ(x, a|v, ℓ) = −η ℓ(x, a) − Σ_{x′∈X} v(x′) P(x′|x, a) + v(x).   (5)

As noted by Peters et al.
[17], the above function can be regarded as the Bellman error corresponding to the value function v. The next proposition provides a succinct formalization of the optimization problem (3).

Proposition 1. Let t > 1 and define the function

Z_t(v, k) = Σ_{x∈X_k, a∈A} q_t(x, a) e^{δ(x, a|v, ℓ̂_t)}.

The update step (3) can be performed as

q_{t+1}(x, a) = q_t(x, a) e^{δ(x, a|v̂_t, ℓ̂_t)} / Z_t(v̂_t, k(x)),

where

v̂_t = argmin_v Σ_{k=0}^{L} ln Z_t(v, k).   (6)

Minimizing the expression on the right-hand side of Equation (6) is an unconstrained convex optimization problem (see Boyd and Vandenberghe [4] and the comments of Peters et al. [17]) and can be solved efficiently. It is important to note that since q_1(x, a) > 0 holds for all (x, a) pairs, q_t(x, a) is also positive for all t > 0 by the multiplicative update rule, so Equation (4) holds for all state-action pairs (x, a) in all time steps. The proof follows the steps of Peters et al. [17]; however, their original formalization of REPS is slightly different, which results in small changes in the analysis as well. For further comments regarding the differences between O-REPS and REPS, see Section 5.

Proof of Proposition 1. We start with formulating the projection step as a constrained optimization problem:

min_q D(q∥q̃_{t+1})
subject to Σ_a q(x, a) = Σ_{x′,a′} P(x|x′, a′) q(x′, a′) for all x ∈ X \ {x_0, x_L},
Σ_{x∈X_k} Σ_a q(x, a) = 1 for all k = 0, 1, . . . , L − 1.

To solve the problem, consider the Lagrangian:

L_t(q) = D(q∥q̃_{t+1}) + Σ_{k=0}^{L−1} λ_k ( Σ_{x∈X_k, a∈A} q(x, a) − 1 ) + Σ_{k=1}^{L−1} Σ_{x∈X_k} v(x) ( Σ_{x′∈X_{k−1}} Σ_{a′} q(x′, a′) P(x|x′, a′) − Σ_a q(x, a) )
= D(q∥q̃_{t+1}) + Σ_a q(x_0, a) ( λ_0 + Σ_{x′} v(x′) P(x′|x_0, a) ) − Σ_{k=0}^{L−1} λ_k + Σ_{x≠x_0} Σ_a q(x, a) ( λ_{k(x)} + Σ_{x′} v(x′) P(x′|x, a) − v(x) ),

where {λ_k}_{k=0}^{L−1} and {v(x)}_{x∈X\{x_0, x_L}} are Lagrange multipliers. In what follows, we set v(x_0) = v(x_L) = 0 for convenience. Differentiating the Lagrangian with respect to any q(x, a), we get

∂L_t(q)/∂q(x, a) = ln q(x, a) − ln q̃_{t+1}(x, a) + λ_{k(x)} + Σ_{x′} v(x′) P(x′|x, a) − v(x).
Hence, setting the gradient to zero, we obtain the formula for q_{t+1}(x, a):

q_{t+1}(x, a) = q̃_{t+1}(x, a) e^{−λ_{k(x)} − Σ_{x′} v(x′) P(x′|x, a) + v(x)}.

Substituting the formula for q̃_{t+1}(x, a), we get

q_{t+1}(x, a) = q_t(x, a) e^{−λ_{k(x)} + δ(x, a|v, ℓ̂_t)}.

Using the second constraint, we have for every k = 0, 1, . . . , L − 1 that

Σ_{x∈X_k} Σ_a q_t(x, a) e^{−λ_k + δ(x, a|v, ℓ̂_t)} = 1,

yielding e^{−λ_k} = 1/Z_t(v, k), which leaves us with computing the value of v at the optimum. This can be done by solving the dual problem of maximizing

Σ_{x,a} q̃_{t+1}(x, a) − L − Σ_{k=0}^{L−1} λ_k

over {λ_k}_{k=0}^{L−1}. If we drop the constants and express each λ_k in terms of Z_t(v, k), then the problem is equivalent to maximizing −Σ_{k=0}^{L−1} ln Z_t(v, k), that is, to solving the optimization problem (6).

4 Analysis

The next theorem states our main result concerning the regret of O-REPS under bandit feedback. The proof of the theorem is based on rather common ideas used in the analysis of FTRL/OSMD/PPA-style algorithms (see, e.g., [24], Chapter 11 of [5], [1], [12], [2]). After proving the theorem, we also present the regret bound for O-REPS when used in a full information setting where the learner gets to observe ℓ_t after each episode t.

Theorem 1. Assuming bandit feedback, the total expected regret of O-REPS satisfies

R̂_T ≤ η |X||A| T + L log(|X||A|/L) / η.

In particular, setting η = √( L log(|X||A|/L) / (T |X||A|) ) yields

R̂_T ≤ 2 √( L |X||A| T log(|X||A|/L) ).

Proof. By standard arguments (see, e.g., [19, Lemma 12], [3, Lemma 9.2] or [5, Theorem 11.1]), we have

Σ_{t=1}^{T} ⟨q_t − q, ℓ̂_t⟩ ≤ Σ_{t=1}^{T} ⟨q_t − q̃_{t+1}, ℓ̂_t⟩ + D(q∥q_1)/η.   (7)

Using the exact form of q̃_{t+1} and the fact that e^x ≥ 1 + x, we get that q̃_{t+1}(x, a) ≥ q_t(x, a) − η q_t(x, a) ℓ̂_t(x, a), and thus

Σ_{t=1}^{T} ⟨q_t − q̃_{t+1}, ℓ̂_t⟩ ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) ℓ̂_t²(x, a) ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) ( ℓ_t(x, a)/q_t(x, a) ) ℓ̂_t(x, a) ≤ η Σ_{t=1}^{T} Σ_{x,a} ℓ̂_t(x, a).

Combining this with (7), we get

Σ_{t=1}^{T} ⟨q_t − q, ℓ̂_t⟩ ≤ η Σ_{t=1}^{T} Σ_{x,a} ℓ̂_t(x, a) + D(q∥q_1)/η.   (8)

Next, we take an expectation on both sides.
By Equation (4), we have

E[ Σ_{t=1}^{T} Σ_{x,a} ℓ̂_t(x, a) ] ≤ |X||A| T.

It also follows from Equation (4) that E[⟨q, ℓ̂_t⟩] = ⟨q, ℓ_t⟩ and E[⟨q_t, ℓ̂_t⟩] = E[⟨q_t, ℓ_t⟩]. Finally, notice that

D(q∥q_1) ≤ R(q) − R(q_1) ≤ Σ_{k=0}^{L−1} Σ_{x∈X_k} Σ_a q_1(x, a) log( 1/q_1(x, a) )   (since R(q) ≤ 0)
≤ Σ_{k=0}^{L−1} log(|X_k||A|) ≤ L log(|X||A|/L),

where we used the trivial upper bound on the entropy of distributions and Jensen's inequality in the last step. Plugging the above upper bound into Equation (8), we obtain the statement of the theorem.

Theorem 2. Assuming full feedback, the total expected regret of O-REPS satisfies

R̂_T ≤ η L T + L log(|X||A|/L) / η.

In particular, setting η = √( log(|X||A|/L) / T ) yields

R̂_T ≤ 2 L √( T log(|X||A|/L) ).

The proof of the statement follows directly from the proof of Theorem 1, with the only difference that we set ℓ̂_t = ℓ_t and can use the tighter upper bound

Σ_{t=1}^{T} ⟨q_t − q̃_{t+1}, ℓ_t⟩ ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) ℓ_t²(x, a) ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) = η L T,

where we used that Σ_{x∈X_k} Σ_a q_t(x, a) = 1 for all layers k.

5 Conclusions and future work

Comparison with previous results. We first compare our regret bounds with previous results from the literature. First, our guarantees for the full information case trade off a factor of L present in the bounds of Neu et al. [14] for a (usually much smaller) factor of √(log |X|). More importantly, our bounds trade off a factor of L^{3/2}/α in the bandit case for a factor of √|X|. This improvement is particularly remarkable considering that we do not need to assume that α > 0; that is, we drop the rather unnatural assumption that every stationary policy has to visit every state with positive probability. In particular, dropping this assumption enables our algorithm to work in deterministic loop-free MDPs, that is, to solve the online shortest path problem (see, e.g., [10]). In the shortest path setting, O-REPS provides an alternative implementation to the Component Hedge algorithm analyzed by Koolen et al.
[12], who prove identical bounds in the full information case. As shown by Audibert et al. [2], Component Hedge achieves the analog of our bounds in the bandit case as well. O-REPS also bears a close resemblance to the algorithms of Even-Dar et al. [9] and Neu et al. [16], who also use policy updates of the form

π_{t+1}(a|x) ∝ π_t(a|x) exp( −η ℓ_t(x, a) − Σ_{x′} P(x′|x, a) v_t(x′) ).

The most important difference between their algorithms and O-REPS is that their value functions v_t are computed as the solution of the Bellman equations instead of the solution of the optimization problem (6). By a simple combination of our analysis and that of Even-Dar et al. [9], it is possible to show that O-REPS attains a regret of Õ(√(τT)) in the unichain setting with full information feedback, improving their bound by a factor of τ^{3/2} under the same assumptions. It is an interesting open problem to find out whether using the O-REPS value functions is a strictly better idea than solving the Bellman equations in general. Another important direction of future work is to extend our results to the case of unichain MDPs with bandit feedback, and to the setting where the transition probabilities of the underlying MDP are unknown (see Neu et al. [15]).

Lower bounds. Following the proof of Theorem 10 in Audibert et al. [2], it is straightforward to construct an MDP consisting of |X|/L chains of L consecutive bandit problems, each with |A| actions, such that no algorithm can achieve smaller regret than 0.03 L √(T log(|X||A|)) in the full information case and 0.04 √(L |X||A| T) in the bandit case. These results suggest that our bounds cannot be significantly improved in general; however, finding an appropriate problem-dependent lower bound remains an interesting open problem in the much broader field of online linear optimization.

REPS vs. O-REPS. As noted several times above, our algorithm is directly inspired by the work of Peters et al. [17].
However, there is a slight difference between the original version of REPS and O-REPS: namely, Peters et al. aim to solve the optimization problem q_{t+1} = argmin_{q∈∆(M)} ⟨q, ℓ̂_t⟩ subject to the constraint D(q∥q_t) ≤ ε for some ε > 0. This is to be contrasted with the following property of the occupancy measures generated by O-REPS (proved in the supplementary material):

Lemma 1. For any t > 0, D(q_t∥q_{t+1}) ≤ (η²/2) ⟨q_t, ℓ̂_t²⟩.

In particular, if the losses are estimated by bounded sample averages, as done by Peters et al. [17], this gives D(q_t∥q_{t+1}) ≤ η²/2. While this is not the exact same property as the one desired by REPS, both inequalities imply that the occupancy measures stay close to each other in the 1-norm sense by Pinsker's inequality. Thus we conjecture that our formulation of O-REPS has similar properties to the one studied by Peters et al. [17], while it might be somewhat simpler to implement.

Acknowledgments

Alexander Zimin is an OMV scholar. Gergely Neu's work was carried out during the tenure of an ERCIM "Alain Bensoussan" Fellowship Programme. The research leading to these results has received funding from INRIA, the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements 246016 and 231495 (project CompLACS), the Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council and FEDER through the "Contrat de Projets Etat Region (CPER) 2007-2013".

References

[1] Abernethy, J., Hazan, E., and Rakhlin, A. (2008). Competing in the dark: An efficient algorithm for bandit linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 263–274.
[2] Audibert, J. Y., Bubeck, S., and Lugosi, G. (2013). Regret in online combinatorial optimization. Mathematics of Operations Research. To appear.
[3] Bartók, G., Pál, D., Szepesvári, C., and Szita, I. (2011). Online learning. Lecture notes, University of Alberta. https://moodle.cs.ualberta.ca/file.php/354/notes.pdf.
[4] Boyd, S. and Vandenberghe, L.
(2004). Convex Optimization. Cambridge University Press.
[5] Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.
[6] Daniel, C., Neumann, G., and Peters, J. (2012). Hierarchical relative entropy policy search. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22 of JMLR Workshop and Conference Proceedings, pages 273–281.
[7] Dekel, O. and Hazan, E. (2013). Better rates for any adversarial deterministic MDP. In Dasgupta, S. and McAllester, D., editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 675–683. JMLR Workshop and Conference Proceedings.
[8] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2005). Experts in a Markov decision process. In NIPS-17, pages 401–408.
[9] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2009). Online Markov decision processes. Mathematics of Operations Research, 34(3):726–736.
[10] György, A., Linder, T., Lugosi, G., and Ottucsák, Gy. (2007). The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research, 8:2369–2403.
[11] Kakade, S. (2001). A natural policy gradient. In Advances in Neural Information Processing Systems 14 (NIPS), pages 1531–1538.
[12] Koolen, W. M., Warmuth, M. K., and Kivinen, J. (2010). Hedging structured concepts. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), pages 93–105.
[13] Martinet, B. (1970). Régularisation d'inéquations variationnelles par approximations successives. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, 4(R3):154–158.
[14] Neu, G., György, A., and Szepesvári, Cs. (2010a). The online loop-free stochastic shortest-path problem. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), pages 231–243.
[15] Neu, G., György, A., and Szepesvári, Cs. (2012).
The adversarial stochastic shortest path problem with unknown transition probabilities. In AISTATS 2012, pages 805–813.
[16] Neu, G., György, A., Szepesvári, Cs., and Antos, A. (2010b). Online Markov decision processes under bandit feedback. In NIPS-23, pages 1804–1812. Curran Associates.
[17] Peters, J., Mülling, K., and Altun, Y. (2010). Relative entropy policy search. In AAAI 2010, pages 1607–1612.
[18] Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience.
[19] Rakhlin, A. (2009). Lecture notes on online learning.
[20] Rockafellar, R. T. (1976). Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898.
[21] Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press.
[22] Szepesvári, Cs. (2010). Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers.
[23] Yu, J. Y., Mannor, S., and Shimkin, N. (2009). Markov decision processes with arbitrary reward processes. Mathematics of Operations Research, 34(3):737–757.
[24] Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, pages 928–936.
Bayesian inference for low rank spatiotemporal neural receptive fields Mijung Park Electrical and Computer Engineering The University of Texas at Austin mjpark@mail.utexas.edu Jonathan W. Pillow Center for Perceptual Systems The University of Texas at Austin pillow@mail.utexas.edu Abstract The receptive field (RF) of a sensory neuron describes how the neuron integrates sensory stimuli over time and space. In typical experiments with naturalistic or flickering spatiotemporal stimuli, RFs are very high-dimensional, due to the large number of coefficients needed to specify an integration profile across time and space. Estimating these coefficients from small amounts of data poses a variety of challenging statistical and computational problems. Here we address these challenges by developing Bayesian reduced rank regression methods for RF estimation. This corresponds to modeling the RF as a sum of space-time separable (i.e., rank-1) filters. This approach substantially reduces the number of parameters needed to specify the RF, from 1K-10K down to mere 100s in the examples we consider, and confers substantial benefits in statistical power and computational efficiency. We introduce a novel prior over low-rank RFs using the restriction of a matrix normal prior to the manifold of low-rank matrices, and use “localized” row and column covariances to obtain sparse, smooth, localized estimates of the spatial and temporal RF components. We develop two methods for inference in the resulting hierarchical model: (1) a fully Bayesian method using blocked-Gibbs sampling; and (2) a fast, approximate method that employs alternating ascent of conditional marginal likelihoods. We develop these methods for Gaussian and Poisson noise models, and show that low-rank estimates substantially outperform full rank estimates using neural data from retina and V1. 
1 Introduction

A neuron's linear receptive field (RF) is a filter that maps high-dimensional sensory stimuli to a one-dimensional variable underlying the neuron's spike rate. In white noise or reverse-correlation experiments, the dimensionality of the RF is determined by the number of stimulus elements in the spatiotemporal window influencing a neuron's probability of spiking. For a stimulus movie with nx × ny pixels per frame, the RF has nx·ny·nt coefficients, where nt is the (experimenter-determined) number of movie frames in the neuron's temporal integration window. In typical neurophysiology experiments, this can result in RFs with hundreds to thousands of parameters, meaning we can think of the RF as a vector in a very high-dimensional space. In high-dimensional settings, traditional RF estimators like the whitened spike-triggered average (STA) exhibit large errors, particularly with naturalistic or correlated stimuli. A substantial literature has therefore focused on methods for regularizing RF estimates to improve accuracy in the face of limited experimental data. The Bayesian approach to regularization involves specifying a prior distribution that assigns higher probability to RFs with particular kinds of structure. Popular methods have involved priors that impose smallness, sparsity, smoothness, and localized structure on RF coefficients [1, 2, 3, 4, 5].

Here we develop a novel regularization method that exploits the fact that neural RFs can be modeled as low-rank matrices (or tensors). This approach is justified by the observation that RFs can be well described by summing a small number of space-time separable filters [6, 7, 8, 9]. Moreover, it can substantially reduce the number of RF parameters: a rank-p receptive field in nx·ny·nt dimensions requires only p(nx·ny + nt − 1) parameters, since a single space-time separable filter has nx·ny spatial coefficients and nt − 1 temporal coefficients (i.e., for a temporal unit vector).
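The parameter counts above can be made concrete (a minimal sketch; the function names are ours):

```python
def full_rank_params(nx, ny, nt):
    """Coefficients of an unconstrained spatiotemporal RF: a vector in R^(nx*ny*nt)."""
    return nx * ny * nt

def low_rank_params(nx, ny, nt, p):
    """Rank-p RF: p spatial maps of nx*ny coefficients each, plus p unit-norm
    temporal filters with nt - 1 free coefficients each."""
    return p * (nx * ny + nt - 1)
```

For example, a 10 x 10 pixel stimulus with a 25-frame temporal window gives 2500 full-rank coefficients, while a rank-2 parametrization needs only 2 * (100 + 24) = 248.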
When p ≪ min(nx·ny, nt), as commonly occurs in experimental settings, this parametrization yields considerable savings. In the statistics literature, the problem of estimating a low-rank matrix of regression coefficients is known as reduced rank regression [10, 11]. This problem has received considerable attention in the econometrics literature, but Bayesian formulations have tended to focus on non-informative or minimally informative priors [12]. Here we formulate a novel prior for reduced rank regression using a restriction of the matrix normal distribution [13] to the manifold of low-rank matrices. This results in a marginally Gaussian prior over RF coefficients, which puts it on an equal footing with “ridge”, AR1, and other Gaussian priors. Moreover, under a linear-Gaussian response model, the posteriors over RF rows and columns are conditionally Gaussian, leading to fast and efficient sampling-based inference methods. We use a “localized” form for the row and column covariances in the matrix normal prior, which have hyperparameters governing the smoothness and locality of RF components in space and time [5]. In addition to fully Bayesian sampling-based inference, we develop a fast approximate inference method using coordinate ascent of the conditional marginal likelihoods for the temporal (column) and spatial (row) hyperparameters. We apply this method under linear-Gaussian and linear-nonlinear-Poisson encoding models, and show that the latter gives the best performance on neural data. The paper is organized as follows. In Sec. 2, we describe the low-rank RF model with localized priors. In Sec. 3, we describe a fully Bayesian inference method using blocked-Gibbs sampling with interleaved Metropolis-Hastings steps. In Sec. 4, we introduce a fast method for approximate inference using conditional empirical Bayesian hyperparameter estimates. In Sec. 5, we extend our estimator to the linear-nonlinear Poisson encoding model. Finally, in Sec.
6, we show applications to simulated and real neural datasets from retina and V1.

2 Hierarchical low-rank receptive field model

2.1 Response model (likelihood)

We begin by defining two probabilistic encoding models that will provide likelihood functions for RF inference. Let y_i denote the number of spikes that occur in response to a (dt × dx) matrix stimulus X_i, where dt and dx denote the number of temporal and spatial elements in the RF, respectively. Let K denote the neuron's (dt × dx) matrix receptive field. We will consider, first, a linear Gaussian encoding model:

y_i | X_i ∼ N(x_i^⊤ k + b, γ),   (1)

where x_i = vec(X_i) and k = vec(K) denote the vectorized stimulus and vectorized RF, respectively, γ is the variance of the response noise, and b is a bias term. Second, we will consider a linear-nonlinear-Poisson (LNP) encoding model

y_i | X_i ∼ Poiss(g(x_i^⊤ k + b)),   (2)

where g denotes the nonlinearity. Examples of g include the exponential and the soft-rectifying function log(exp(·) + 1), both of which give rise to a concave log-likelihood [14].

2.2 Prior for low-rank receptive field

We can represent an RF of rank p using the factorization

K = K_t K_x^⊤,   (3)

where the columns of the matrix K_t ∈ R^{dt×p} contain temporal filters and the columns of the matrix K_x ∈ R^{dx×p} contain spatial filters. We define a prior over rank-p matrices using a restriction of the matrix normal distribution MN(0, C_x, C_t). The prior can be written:

p(K|C_t, C_x) = (1/Z) exp( −½ Tr[C_x^{-1} K^⊤ C_t^{-1} K] ),   (4)

where the normalizer Z involves an integration over the space of rank-p matrices, which has no known closed-form expression. The prior is controlled by a “column” covariance matrix C_t ∈ R^{dt×dt} and a “row” covariance matrix C_x ∈ R^{dx×dx}, which govern the temporal and spatial RF components, respectively. If we express K in factorized form (eq. 3), we can rewrite the prior

p(K|C_t, C_x) = (1/Z) exp( −½ Tr[(K_x^⊤ C_x^{-1} K_x)(K_t^⊤ C_t^{-1} K_t)] ).
(5)

This formulation makes it clear that we have conditionally Gaussian priors on K_t and K_x, that is:

k_t | k_x, C_x, C_t ∼ N(0, A_x^{-1} ⊗ C_t),   k_x | k_t, C_t, C_x ∼ N(0, A_t^{-1} ⊗ C_x),   (6)

where ⊗ denotes the Kronecker product, k_t = vec(K_t) ∈ R^{p·dt×1}, k_x = vec(K_x) ∈ R^{p·dx×1}, and where we define A_x = K_x^⊤ C_x^{-1} K_x and A_t = K_t^⊤ C_t^{-1} K_t. We define C_t and C_x to have a parametric form controlled by hyperparameters θ_t and θ_x, respectively. This form is adopted from the “automatic locality determination” (ALD) prior introduced in [5]. In the ALD prior, the covariance matrix encodes the tendency for RFs to be localized in both space-time and spatiotemporal frequency. For the spatial covariance matrix C_x, the hyperparameters are θ_x = {ρ, µ_s, µ_f, Φ_s, Φ_f}, where ρ is a scalar determining the overall scale of the covariance; µ_s and µ_f are length-D vectors specifying the center location of the RF support in space and spatial frequency, respectively (where D is the number of spatial dimensions, e.g., D = 2 for standard 2D visual pixel stimuli). The positive definite D × D matrices Φ_s and Φ_f determine the size of the local region of RF support in space and spatial frequency, respectively [15]. In the temporal covariance matrix C_t, the hyperparameters θ_t, which are directly analogous to θ_x, determine the localized RF structure in time and temporal frequency. Finally, we place a zero-mean Gaussian prior on the (scalar) bias term: b ∼ N(0, σ_b²).

3 Posterior inference using Markov Chain Monte Carlo

For a complete dataset D = {X, y}, where X ∈ R^{n×(dt·dx)} is a design matrix and y is a vector of responses, our goal is to infer the joint posterior over K and b,

p(K, b|D) ∝ ∫∫∫ p(D|K, b) p(K|θ_t, θ_x) p(b|σ_b²) p(θ_t, θ_x, σ_b²) dσ_b² dθ_t dθ_x.   (7)

We develop an efficient Markov chain Monte Carlo (MCMC) sampling method using blocked-Gibbs sampling. Blocked-Gibbs sampling is possible since the closed-form conditional priors in eq.
6 and the Gaussian likelihood yield a closed-form “conditional marginal likelihood” for θ_t|(k_x, θ_x, D) and θ_x|(k_t, θ_t, D), respectively.¹

¹ In this section and Sec. 4, we fix the likelihood to Gaussian (eq. 1). An extension to the Poisson likelihood model (eq. 2) is described in Sec. 5.

The blocked-Gibbs sampler first samples (σ_b², θ_t, γ) from the conditional evidence and simultaneously samples k_t from the conditional posterior. Given the samples of (σ_b², θ_t, γ, b, k_t), we then sample θ_x and k_x similarly. For sampling from the conditional evidence, we use the Metropolis-Hastings (MH) algorithm to sample the low-dimensional space of hyperparameters. For sampling (b, k_t) and k_x, we use the closed-form formula (to be introduced shortly) for the mean of the conditional posterior. The details of our algorithm are as follows.

Step 1. Given the (i−1)th samples of (k_x, θ_x), we draw the ith samples (b, k_t, θ_t, σ_b², γ) from

p(b^(i), k_t^(i), θ_t^(i), σ_b^2(i), γ^(i) | k_x^(i−1), θ_x^(i−1), D) = p(θ_t^(i), σ_b^2(i), γ^(i) | k_x^(i−1), θ_x^(i−1), D) · p(b^(i), k_t^(i) | θ_t^(i), σ_b^2(i), γ^(i), k_x^(i−1), θ_x^(i−1), D),

which is divided into two parts²:

• We sample (θ_t, σ_b², γ) from the conditional posterior given by

p(θ_t, σ_b², γ | k_x, θ_x, D) ∝ p(θ_t, σ_b², γ) ∫ p(D|b, k_t, k_x, γ) p(b, k_t|k_x, θ_x, θ_t) db dk_t
∝ p(θ_t, σ_b², γ) ∫ N(D|M′_x w_t, γI) N(w_t|0, C_{w_t}) dw_t,   (8)

where w_t is the vector [b, k_t^⊤]^⊤, M′_x is the concatenation of a vector of ones and the matrix M_x, which is generated by projecting each stimulus X_i onto K_x and then stacking the result in each row (meaning that the i-th row of M_x is [vec(X_i K_x)]^⊤), and C_{w_t} is a block-diagonal matrix whose diagonal blocks are σ_b² and A_x^{-1} ⊗ C_t.
Using the standard formula for a product of two Gaussians, we obtain the closed-form conditional evidence:

p(D|θ_t, σ_b², γ, k_x, θ_x) = |2πΛ_t|^{1/2} / ( |2πγI|^{1/2} |2πC_{w_t}|^{1/2} ) · exp[ ½ µ_t^⊤ Λ_t^{-1} µ_t − (1/2γ) y^⊤ y ],   (9)

where the mean and covariance of the conditional posterior over w_t given k_x are given by

µ_t = (1/γ) Λ_t M′_x^⊤ y,  and  Λ_t = ( C_{w_t}^{-1} + (1/γ) M′_x^⊤ M′_x )^{-1}.   (10)

We use the MH algorithm to search over the low-dimensional hyperparameter space, with the conditional evidence (eq. 9) as the target distribution, under a uniform hyperprior on (θ_t, σ_b², γ).

• We sample (b, k_t) from the conditional posterior given in eq. 10.

Step 2. Given the ith samples of (b, k_t, θ_t, σ_b², γ), we draw the ith samples (k_x, θ_x) from

p(k_x^(i), θ_x^(i) | b^(i), k_t^(i), σ_b^2(i), θ_t^(i), γ^(i), D) = p(θ_x^(i) | b^(i), k_t^(i), θ_t^(i), σ_b^2(i), γ^(i), D) · p(k_x^(i) | θ_x^(i), b^(i), k_t^(i), σ_b^2(i), θ_t^(i), γ^(i), D),

which is divided into two parts:

• We sample θ_x from the conditional posterior given by

p(θ_x | b, k_t, θ_t, σ_b², γ, D) ∝ p(θ_x) ∫ p(D|b, k_t, k_x, γ) p(k_x|k_t, θ_t, θ_x) dk_x   (11)
∝ p(θ_x) ∫ N(D|M_t k_x + b1, γI) N(k_x|0, A_t^{-1} ⊗ C_x) dk_x,

where the matrix M_t is generated by projecting each stimulus X_i onto K_t and then stacking the result in each row, meaning that the i-th row of M_t is [vec(X_i^⊤ K_t)]^⊤. Using the standard formula for a product of two Gaussians, we obtain the closed-form conditional evidence:

p(D|θ_x, k_t, b) = |2πΛ_x|^{1/2} / ( |2πγI|^{1/2} |2π(A_t^{-1} ⊗ C_x)|^{1/2} ) · exp[ ½ µ_x^⊤ Λ_x^{-1} µ_x − (1/2γ) (y − b1)^⊤ (y − b1) ],

where the mean and covariance of the conditional posterior over k_x given (b, k_t) are given by

µ_x = (1/γ) Λ_x M_t^⊤ (y − b1),  and  Λ_x = ( A_t ⊗ C_x^{-1} + (1/γ) M_t^⊤ M_t )^{-1}.   (12)

As in Step 1, with a uniform hyperprior on θ_x, the conditional evidence is the target distribution in the MH algorithm.

• We sample k_x from the conditional posterior given in eq. 12.

A summary of this algorithm is given in Algorithm 1.

² We omit the sample index (the superscripts (i) and (i−1)) for notational cleanness.
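The conditional posterior mean and covariance in eq. (10) (and, analogously, eq. (12)) are ordinary Gaussian linear-regression computations. A minimal two-dimensional sketch in pure Python, with a diagonal prior covariance and hypothetical design and data (the explicit 2×2 inverse stands in for a general matrix solve):

```python
def posterior_2d(M, y, prior_var, gamma):
    """Mean and covariance of eq. (10) for a 2-d weight vector with
    prior covariance prior_var * I and noise variance gamma."""
    # Precision matrix C^-1 + (1/gamma) M^T M, accumulated entrywise.
    a = sum(m[0] * m[0] for m in M) / gamma + 1.0 / prior_var
    b = sum(m[0] * m[1] for m in M) / gamma
    d = sum(m[1] * m[1] for m in M) / gamma + 1.0 / prior_var
    det = a * d - b * b
    Lam = [[d / det, -b / det], [-b / det, a / det]]                 # 2x2 inverse
    mty = [sum(m[i] * yi for m, yi in zip(M, y)) for i in (0, 1)]   # M^T y
    mu = [sum(Lam[i][j] * mty[j] for j in (0, 1)) / gamma for i in (0, 1)]
    return mu, Lam
```

With an essentially flat prior (large prior_var), the posterior mean approaches the least-squares solution, which is the sanity check below.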
Algorithm 1 Fully Bayesian low-rank RF inference using blocked-Gibbs sampling
Given data D, conditioned on samples for the other variables, iterate the following:
1. Sample (b, k_t, σ_b², θ_t, γ) from the conditional evidence for (θ_t, σ_b², γ) (in eq. 8) and the conditional posterior over (b, k_t) (in eq. 10).
2. Sample (k_x, θ_x) from the conditional evidence for θ_x (in eq. 11) and the conditional posterior over k_x (in eq. 12).
Until convergence.

4 Approximate algorithm for fast posterior inference

Here we develop an alternative, approximate algorithm for fast posterior inference. Instead of integrating over hyperparameters, we attempt to find point estimates that maximize the conditional marginal likelihood. This resembles empirical Bayesian inference, where the hyperparameters are set by maximizing the full marginal likelihood. In our model, the evidence has no closed form; however, the conditional evidence for (θ_t, σ_b², γ) given (k_x, θ_x) and the conditional evidence for θ_x given (b, k_t, θ_t, σ_b², γ) are available in closed form (in eq. 8 and eq. 11). Thus, we alternate between (1) maximizing the conditional evidence to set (θ_t, σ_b², γ) and finding the MAP estimates of (b, k_t), and (2) maximizing the conditional evidence to set θ_x and finding the MAP estimate of k_x; that is,

θ̂_t, γ̂, σ̂_b² = argmax_{θ_t, σ_b², γ} p(D|θ_t, σ_b², γ, k̂_x, θ̂_x),   (13)
b̂, k̂_t = argmax_{b, k_t} p(b, k_t | θ̂_t, γ̂, σ̂_b², k̂_x, θ̂_x, D),   (14)
θ̂_x = argmax_{θ_x} p(D|θ_x, b̂, k̂_t, θ̂_t, γ̂, σ̂_b²),   (15)
k̂_x = argmax_{k_x} p(k_x | θ̂_x, b̂, k̂_t, θ̂_t, γ̂, σ̂_b², D).   (16)

The approximate algorithm works well if the conditional evidence is tightly concentrated around its maximum. Note that if the hyperparameters are fixed, the iterative updates of (b, k_t) and k_x given above amount to alternating coordinate ascent of the posterior over (b, K).
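When the hyperparameters are held fixed and the prior is effectively flat, the alternating MAP updates (14) and (16) reduce to alternating least squares on the factors of a separable RF. A toy rank-1 sketch with hypothetical numbers (2 temporal × 2 spatial coefficients, noiseless responses, no bias term):

```python
import random

random.seed(1)
kt_true, kx_true = [1.0, 2.0], [3.0, -1.0]          # true separable RF K = kt kx^T
stims = [[[random.gauss(0, 1) for _ in range(2)] for _ in range(2)]
         for _ in range(20)]                         # each X_i is 2x2 (time x space)
resp = [sum(kt_true[i] * X[i][j] * kx_true[j]
            for i in range(2) for j in range(2)) for X in stims]

def ls2(Z, y):
    """Solve min_w sum_i (y_i - w . z_i)^2 for a 2-d w via normal equations."""
    a = sum(z[0] * z[0] for z in Z); b = sum(z[0] * z[1] for z in Z)
    d = sum(z[1] * z[1] for z in Z); det = a * d - b * b
    r0 = sum(z[0] * yi for z, yi in zip(Z, y))
    r1 = sum(z[1] * yi for z, yi in zip(Z, y))
    return [(d * r0 - b * r1) / det, (a * r1 - b * r0) / det]

kx = [1.0, 1.0]                                      # arbitrary initialization
for _ in range(200):
    # temporal step: y_i = kt . (X_i kx)
    kt = ls2([[sum(X[i][j] * kx[j] for j in range(2)) for i in range(2)]
              for X in stims], resp)
    # spatial step: y_i = kx . (X_i^T kt)
    kx = ls2([[sum(X[i][j] * kt[i] for i in range(2)) for j in range(2)]
              for X in stims], resp)
K_hat = [[kt[i] * kx[j] for j in range(2)] for i in range(2)]
```

The individual factors are only identified up to a scale exchange between kt and kx, but their outer product K_hat converges to the true RF.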
5 Extension to Poisson likelihood

When the likelihood is non-Gaussian, blocked-Gibbs sampling is not tractable, because we do not have a closed-form expression for the conditional evidence. Here, we introduce a fast, approximate inference algorithm for the low-rank RF model under the LNP likelihood. The basic steps are the same as those in the approximate algorithm (Sec. 4). However, we make a Gaussian approximation to the conditional posterior over $(b, k_t)$ given $k_x$ via the Laplace approximation. We then approximate the conditional evidence for $(\theta_t, \sigma_b^2)$ given $k_x$ at the posterior mode of $(b, k_t)$ given $k_x$. The details are as follows. The conditional evidence for $\theta_t$ given $k_x$ is
$$p(D \mid \theta_t, \sigma_b^2, k_x, \theta_x) \propto \int \mathrm{Poiss}\big(y \mid g(M_x' w_t)\big)\, \mathcal{N}(w_t \mid 0, C_{w_t})\, dw_t. \tag{17}$$
The integrand is proportional to the conditional posterior over $w_t$ given $k_x$, which we approximate by a Gaussian distribution via the Laplace approximation
$$p(w_t \mid \theta_t, \sigma_b^2, k_x, D) \approx \mathcal{N}(\hat w_t, \Sigma_t), \tag{18}$$
where $\hat w_t$ is the conditional MAP estimate of $w_t$, obtained by numerically maximizing the log conditional posterior for $w_t$ (e.g., using Newton's method; see Appendix A),
$$\log p(w_t \mid \theta_t, \sigma_b^2, k_x, D) = y^\top \log g(M_x' w_t) - \mathbf{1}^\top g(M_x' w_t) - \tfrac{1}{2} w_t^\top C_{w_t}^{-1} w_t + c, \tag{19}$$
and $\Sigma_t$ is the covariance of the conditional posterior, obtained from the second derivative of the log conditional posterior around its mode: $\Sigma_t^{-1} = H_t + C_{w_t}^{-1}$, where $H_t = -\frac{\partial^2}{\partial w_t^2} \log p(D \mid w_t, M_x')$ denotes the Hessian of the negative log-likelihood.

Figure 1: Simulated data. Data generated from the linear Gaussian response model with a rank-2 RF (16 by 64 pixels: 1024 parameters for the full-rank model; 160 for the rank-2 model). A. True rank-2 RF (left).
Estimates obtained by ML, full-rank ALD, the low-rank approximate method, and blocked-Gibbs sampling, using 250 samples (top) and 2000 samples (bottom), respectively. B. Average mean squared error of the RF estimate by each method (averaged over 10 independent repetitions).

Under the Gaussian posterior (eq. 18), the log conditional evidence (the log of eq. 17) at the posterior mode $w_t = \hat w_t$ is simply
$$\log p(D \mid \theta_t, \sigma_b^2, k_x) \approx \log p(D \mid \hat w_t, M_x') - \tfrac{1}{2} \hat w_t^\top C_{w_t}^{-1} \hat w_t - \tfrac{1}{2} \log |C_{w_t} \Sigma_t^{-1}|,$$
which we maximize to set $\theta_t$ and $\sigma_b^2$. Due to the space limit, we omit the derivations of the conditional posterior for $k_x$ and the conditional evidence for $\theta_x$ given $(b, k_t)$ (see Appendix B).

6 Results

6.1 Simulations

We first tested the performance of blocked-Gibbs sampling and the fast approximate algorithm on a simulated Gaussian neuron with a rank-2 RF of 16 temporal bins and 64 spatial pixels, shown in Fig. 1A. We compared these methods with the maximum likelihood estimate and the full-rank ALD estimate. Fig. 1 shows that the low-rank RF estimates obtained by blocked-Gibbs sampling and the approximate algorithm perform similarly, and achieve lower mean squared error than the full-rank RF estimates.

Figure 2: Simulated data. Data generated from the linear-nonlinear Poisson (LNP) response model with a rank-2 RF (shown in Fig. 1A) and a "softrect" nonlinearity. A. Estimates obtained by ML, full-rank ALD, and the low-rank approximate method under the linear Gaussian model, and the same methods under the LNP model, using 250 (top) and 2000 (bottom) samples, respectively. B. Average mean squared error of the RF estimate (from 10 independent repetitions). The low-rank RF estimates under the LNP model perform better than those under the linear Gaussian model.
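The Laplace step can be prototyped directly from eq. 19. The sketch below uses Newton's method with $g = \exp$ for simplicity (the paper uses a "softrect" nonlinearity), under our own naming; it returns the mode $\hat w_t$ and the Laplace covariance $\Sigma_t$:

```python
import numpy as np

def laplace_mode_poisson(M, y, Cinv, n_iter=50):
    """Newton ascent on the log conditional posterior of eq. (19), here with
    g = exp: log p = y^T (M w) - 1^T exp(M w) - 0.5 w^T Cinv w + const.
    Returns the MAP estimate w_hat and the Laplace covariance Sigma."""
    w = np.zeros(M.shape[1])
    for _ in range(n_iter):
        rate = np.exp(M @ w)
        grad = M.T @ (y - rate) - Cinv @ w
        H = M.T @ (rate[:, None] * M) + Cinv     # negative Hessian (SPD)
        step = np.linalg.solve(H, grad)
        w = w + step
        if np.linalg.norm(step) < 1e-10:
            break
    return w, np.linalg.inv(H)                    # Sigma = (H_t + Cinv)^{-1}
```

Because the exp-Poisson log posterior is concave, the negative Hessian stays positive definite and the Newton iteration is well behaved for moderately scaled designs.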
We then tested the performance of the above methods on a simulated linear-nonlinear Poisson (LNP) neuron with the same RF and the softrect nonlinearity. We estimated the RF using each method under the linear Gaussian model as well as under the LNP model. Fig. 2 shows that the low-rank RF estimates perform better than the full-rank estimates regardless of the model, and that the low-rank RF estimates under the LNP model achieved the lowest MSE.

Figure 3: Comparison of low-rank RF estimates for V1 simple cells (using white-noise flickering-bar stimuli [16]). A: Relative likelihood per test stimulus (left) and low-rank RF estimates for three different ranks (right). Relative likelihood is the ratio of the test likelihood of the rank-1 STA to that of the other estimates. Using 1 minute of training data, the rank-2 RF estimates obtained by blocked-Gibbs sampling and the approximate method achieve the highest test likelihood (estimates shown in the top row), while among the STA-based estimates the rank-1 STA achieves the highest test likelihood, since more noise is added to the low-rank STA as the rank increases (estimates shown in the bottom row). Relative likelihood under full-rank ALD is 2.25. B: Similar plot for another V1 simple cell. The rank-4 estimates obtained by blocked-Gibbs sampling and the approximate method achieve the highest test likelihood for this cell. Relative likelihood under full-rank ALD is 2.17.

6.2 Application to neural data

We applied our methods to estimate the RFs of V1 simple cells and retinal ganglion cells (RGCs). The details of data collection are described in [16, 9].
We performed 10-fold cross-validation using 1 minute of training and 2 minutes of test data. In Fig. 3 and Fig. 4, we show the average test likelihood as a function of RF rank under the linear Gaussian model. We also show the low-rank RF estimates obtained by our methods as well as the low-rank STA. The rank-$p$ low-rank STA is computed as $\hat K_{STA,p} = \sum_{i=1}^{p} d_i u_i v_i^\top$, where $d_i$ is the $i$th singular value, and $u_i$ and $v_i$ are the $i$th left and right singular vectors, respectively. If the stimulus distribution is non-Gaussian, the low-rank STA will have a larger bias than the low-rank ALD estimate.

Figure 4: Comparison of low-rank RF estimates for retinal data (using binary white-noise stimuli [9]). The RF consists of 10 by 10 spatial pixels and 25 temporal bins (2500 RF coefficients). A: Relative likelihood per test stimulus (left), top three left singular vectors (middle) and right singular vectors (right) of the estimated RF for an off-RGC cell. The sampling-based RF estimate benefits from a rank-3 representation, making use of three distinct spatial and temporal components, whereas the performance of the low-rank STA degrades above rank 1. Relative likelihood under full-rank ALD is 1.0146. B: Similar plot for an on-RGC cell. Relative likelihood under full-rank ALD is 1.006. Both estimates perform best with rank 1.
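The low-rank STA baseline is a plain truncated SVD of the spike-triggered average. A minimal sketch (our own helper name):

```python
import numpy as np

def low_rank_sta(sta, p):
    """Rank-p truncation of the spike-triggered average:
    K_{STA,p} = sum_{i=1..p} d_i u_i v_i^T, keeping the top-p
    singular components of the (time x space) STA matrix."""
    U, d, Vt = np.linalg.svd(sta, full_matrices=False)
    return (U[:, :p] * d[:p]) @ Vt[:p]
```

By the Eckart-Young theorem this truncation is the best rank-$p$ approximation of the STA in Frobenius norm, which is why noise accumulates in (rather than being suppressed by) the higher-rank components.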
Figure 5: RF estimates for a V1 simple cell (data from [16]). A: RF estimates obtained by ML (left), low-rank blocked-Gibbs sampling under the linear Gaussian model (middle), and the low-rank approximate algorithm under the LNP model (right), for two different amounts of training data (30 sec. and 2 min.). The RF consists of 16 temporal and 16 spatial dimensions (256 RF coefficients). B: Average prediction error (on spike count) across 10 subsets of the available data. The low-rank RF estimates under the LNP model achieved the lowest prediction error among all methods. C: Runtime of each method. The low-rank approximate algorithms took less than 10 sec., while the full-rank inference methods took 10 to 100 times longer.

Finally, we applied our methods to estimate the RF of a V1 simple cell with four different amounts of training data (0.25, 0.5, 1, and 2 minutes) and computed the prediction error of each estimate under the linear Gaussian and the LNP models. In Fig. 5, we show the estimates using 30 sec. and 2 min. of training data. We computed the test likelihood of each estimate to set the RF rank and found that the rank-2 RF estimates achieved the highest test likelihood. In terms of average prediction error, the low-rank RF estimates obtained by our fast approximate algorithm achieved the lowest error, while the runtime of the algorithm was significantly lower than that of the full-rank inference methods.

7 Conclusion

We have described a new hierarchical model for low-rank RFs. We introduced a novel prior for low-rank matrices based on a restricted matrix normal distribution, which has the feature of preserving a marginally Gaussian prior over the regression coefficients.
We used a “localized” form to define row and column covariance matrices in the matrix normal prior, which allows the model to flexibly learn smooth and sparse structure in RF spatial and temporal components. We developed two inference methods: an exact one based on MCMC with blocked-Gibbs sampling and an approximate one based on alternating evidence optimization. We applied the model to neural data using both Gaussian and Poisson noise models, and found that the Poisson (or LNP) model performed best despite the increased reliance on approximate inference. Overall, we found that low-rank estimates achieved higher prediction accuracy with significantly lower computation time compared to full-rank estimates. We believe our localized, low-rank RF model will be especially useful in high-dimensional settings, particularly in cases where the stimulus covariance matrix does not fit in memory. In future work, we will develop fully Bayesian inference methods for low-rank RFs under the LNP noise model, which will allow us to quantify the accuracy of our approximate method. Secondly, we will examine methods for inferring the RF rank, so that the number of space-time separable components can be determined automatically from the data. Acknowledgments We thank N. C. Rust and J. A. Movshon for V1 data, and E. J. Chichilnisky, J. Shlens, A. .M. Litke, and A. Sher for retinal data. This work was supported by a Sloan Research Fellowship, McKnight Scholar’s Award, and NSF CAREER Award IIS-1150186. 8 References [1] F. Theunissen, S. David, N. Singh, A. Hsu, W. Vinje, and J. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network: Computation in Neural Systems, 12:289–316, 2001. [2] D. Smyth, B. Willmore, G. Baker, I. Thompson, and D. Tolhurst. The receptive-field organization of simple cells in primary visual cortex of ferrets under natural scene stimulation. Journal of Neuroscience, 23:4746–4759, 2003. [3] M. 
Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, 15, 2003. [4] S.V. David and J.L. Gallant. Predicting neuronal responses during natural vision. Network: Computation in Neural Systems, 16(2):239–260, 2005. [5] M. Park and J. W. Pillow. Receptive field inference with localized priors. PLoS Comput Biol, 7(10):e1002219, 2011. [6] Jennifer F. Linden, Robert C. Liu, Maneesh Sahani, Christoph E. Schreiner, and Michael M. Merzenich. Spectrotemporal structure of receptive fields in areas ai and aaf of mouse auditory cortex. Journal of Neurophysiology, 90(4):2660–2675, 2003. [7] Anqi Qiu, Christoph E. Schreiner, and Monty A. Escab. Gabor analysis of auditory midbrain receptive fields: Spectro-temporal and binaural composition. Journal of Neurophysiology, 90(1):456–476, 2003. [8] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6(4):414–428, 4 2006. [9] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, and E. P. Chichilnisky, E. J. Simoncelli. Spatiotemporal correlations and visual signaling in a complete neuronal population. Nature, 454:995–999, 2008. [10] A.J. Izenman. Reduced-rank regression for the multivariate linear model. Journal of multivariate analysis, 5(2):248–264, 1975. [11] Gregory C Reinsel and Rajabather Palani Velu. Multivariate reduced-rank regression: theory and applications. Springer New York, 1998. [12] John Geweke. Bayesian reduced rank regression in econometrics. Journal of Econometrics, 75(1):121 – 146, 1996. [13] A.P. Dawid. Some matrix-variate distribution theory: notational considerations and a bayesian application. Biometrika, 68(1):265, 1981. [14] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004. [15] M. Park and J. W. Pillow. 
Bayesian active learning with localized priors for fast receptive field characterization. In NIPS, pages 2357–2365, 2012. [16] N. C. Rust, Schwartz O., J. A. Movshon, and Simoncelli E.P. Spatiotemporal elements of macaque v1 receptive fields. Neuron, 46(6):945–956, 2005. 9
Global MAP-Optimality by Shrinking the Combinatorial Search Area with Convex Relaxation
Bogdan Savchynskyy1, Jörg Kappes2, Paul Swoboda2, Christoph Schnörr1,2
1 Heidelberg Collaboratory for Image Processing, Heidelberg University, Germany
bogdan.savchynskyy@iwr.uni-heidelberg.de
2 Image and Pattern Analysis Group, Heidelberg University, Germany
{kappes,swoboda,schnoerr}@math.uni-heidelberg.de

Abstract

We consider energy minimization for undirected graphical models, also known as the MAP-inference problem for Markov random fields. Although combinatorial methods, which return a provably optimal integral solution of the problem, have made significant progress in the past decade, they are still typically unable to cope with large-scale datasets. On the other hand, large-scale datasets are often defined on sparse graphs, and convex relaxation methods, such as linear programming relaxations, then provide good approximations to integral solutions. We propose a novel method of combining combinatorial and convex programming techniques to obtain a global solution of the initial combinatorial problem. Based on the information obtained from the solution of the convex relaxation, our method confines the application of the combinatorial solver to a small fraction of the initial graphical model, which allows us to optimally solve much larger problems. We demonstrate the efficacy of our approach on a computer vision energy minimization benchmark.

1 Introduction

The focus of this paper is energy minimization for Markov random fields. In the most common pairwise case this problem reads
$$\min_{x \in X_G} E_{G,\theta}(x) := \min_{x \in X_G} \Big[\sum_{v \in V_G} \theta_v(x_v) + \sum_{uv \in E_G} \theta_{uv}(x_u, x_v)\Big], \tag{1}$$
where $G = (V_G, E_G)$ denotes an undirected graph with the set of nodes $V_G \ni v$ and the set of edges $E_G \ni uv$; variables $x_v$ belong to the finite label sets $X_v$, $v \in V_G$; potentials $\theta_v \colon X_v \to \mathbb{R}$ and $\theta_{uv} \colon X_u \times X_v \to \mathbb{R}$, $v \in V_G$, $uv \in E_G$, are associated with the nodes and the edges of $G$, respectively. We denote by $X_G$ the Cartesian product $\otimes_{v \in V_G} X_v$.
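To make the objective in (1) concrete, here is a tiny sketch of evaluating $E_{G,\theta}(x)$ for a given labeling, with the unary and pairwise potentials stored in dictionaries (a toy representation of ours, not the paper's code):

```python
def energy(unary, pairwise, x):
    """Energy E_{G,theta}(x) of labeling x, as in eq. (1): the sum of
    unary potentials theta_v(x_v) over nodes plus pairwise potentials
    theta_uv(x_u, x_v) over edges."""
    E = sum(theta[x[v]] for v, theta in unary.items())
    E += sum(theta[x[u]][x[v]] for (u, v), theta in pairwise.items())
    return E
```

MAP inference then amounts to minimizing this sum over all joint labelings, which is what makes the problem combinatorial.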
Problem (1) is known to be NP-hard in general, hence existing methods either consider its convex relaxations and/or apply combinatorial techniques such as branch-and-bound, combinatorial search, cutting planes etc. on top of convex relaxations. The main contribution of this paper is a novel method to combine convex and combinatorial approaches to compute a provably optimal solution. The method is very general in the sense that it is not restricted to a specific convex programming or combinatorial algorithm, although some algorithms are preferable to others. The main restriction of the method is the neighborhood structure of the graph G: it has to be sparse. Basic grid graphs of image data provide examples satisfying this requirement. The method is also applicable to higher-order problems defined on so-called factor graphs [1]; however, we will concentrate mainly on the pairwise case to keep our exposition simple.

Underlying idea. Fig. 1 demonstrates the main idea of our method. Let A and B be two subgraphs covering G. Select them so that the only common nodes of these subgraphs lie on their mutual border ∂A (≡ ∂B), defined in terms of the master graph G. Let x∗_A and x∗_B be optimal labelings computed independently on A and B. If these labelings coincide on the border ∂A, then under some additional conditions the concatenation of x∗_A and x∗_B is an optimal labeling for the initial problem (1), as we show in Section 3 (see Theorem 1).

Figure 1: Underlying idea of the proposed method: the initial graph is split into two subgraphs A (blue+yellow) and B (red+yellow), assigned to a convex and a combinatorial solver respectively. If the integral solutions provided by both solvers do not coincide on the common border ∂A (yellow) of the two subgraphs, the subgraph B is increased by appending the mismatching nodes (green) and the border is adjusted accordingly.
We select the subgraph A such that it contains a ”simple“ part of the problem, for which the convex relaxation is tight. This part is assigned to the respective convex program solver. The subgraph B contains in contrast the difficult, combinatorial subproblem and is assigned to a combinatorial solver. If the labelings x∗ A and x∗ B do not coincide on some border node v ∈∂A, we (i) increase the subgraph B by appending the node v and edges from v to B, (ii) correspondingly decrease A and (iii) recompute x∗ A and x∗ B. This process is repeated until either labelings x∗ A and x∗ B coincide on the border or B equals G. The sparsity of G is required to avoid fast growth of the subgraph B. We refer to Section 3 for a detailed description of the algorithm, where we in particular specify the initial selection of the subgraphs A and B and the methods for (i) encouraging consistency of x∗ A and x∗ B on the boundary ∂A and (ii) providing equivalent results with just a single run of the convex relaxation solver. These techniques will be described for the local polytope relaxation, known also as a linear programming relaxation of (1) [2,3]. Related work. The literature on problem (1) is very broad, both regarding convex programming and combinatorial methods. Here we will concentrate on the local polytope relaxation, that is essential to our approach. The local polytope relaxation (LP) of (1) was proposed and analyzed in [4] (see also the recent review [2]). An alternative view on the same relaxation was proposed in [5]. This view appeared to be very close to the idea of the Lagrangian or dual decomposition technique (see [6] for applications to (1)). This idea stimulated development of efficient solvers for convex relaxations of (1). Scalable solvers for the LP relaxation became a hot topic in recent years [7–14]. The algorithms however, which guarantee attainment of the optimum of the convex relaxation at least theoretically, are quite slow in practice, see e.g. 
comparisons in [11, 15]. Remarkably, the fastest scalable algorithms for convex relaxations are based on coordinate descent: the diffusion algorithm [2] known from the seventies and especially its dual decomposition based variant TRW-S [16]. There are other closely related methods [17, 18] based on the same principle. Although these algorithms do not guarantee attainment of the optimum, they converge [19] to points fulfilling a condition known as arc consistency [2] or weak tree agreement [16]. We show in Section 3 that this condition plays a significant role for our approach. It is a common observation that in the case of sparse graphs and/or strong evidence of the unary terms θv, v ∈VG, the approximate solutions delivered by such solvers are quite good from the practical viewpoint. The belief, that these solutions are close to optimal ones is evidenced by numerical bounds, which these solvers provide as a byproduct. The techniques used in combinatorial solvers specialized to problem (1) include most of the classical tools: cutting plane, combinatorial search and branch-and-bound methods were adapted to the problem (1). The ideas of the cutting plane method form the basis for tightening the LP relaxation within the dual decomposition framework (see the recent review [20] and references therein) and for finding an exact solution for Potts models [21], which is a special class of problem (1). Combinatorial search methods with dynamic programming based heuristics were successfully applied 2 to problems defined on dense and fully connected but small graphs [22]. The specialized branchand-bound solvers [23,24] also use convex (mostly LP) relaxations and/or a dynamic programming technique to produce bounds in the course of the combinatorial search [25]. However the reported applicability of most combinatorial solvers nowadays is limited to small graphs. Specialized solvers like [21] scale much better, but are focused on a certain narrow class of problems. 
The goal of this work is to exploit the fact that local polytope solvers provide good approximate solutions, and to restrict the computational effort of combinatorial solvers to a relatively small, and hence tractable, part of the initial problem.

Contribution. We propose a novel method for obtaining a globally optimal solution of the energy minimization problem (1) for sparse graphs and demonstrate its performance on a series of large-scale benchmark datasets. We were able to
• solve previously unsolved large-scale problems of several different types, and
• attain optimal solutions of hard instances of Potts models an order of magnitude faster than specialized state-of-the-art algorithms [21].
For an evaluation of our method we use datasets from the very recent benchmark [15].

Paper structure. In Section 2 we provide the definitions for the local polytope relaxation and arc consistency. Section 3 is devoted to the specification of our algorithm. In Sections 4 and 5 we provide the results of the experimental evaluation and conclusions.

2 Preliminaries

Notation. A vector $x$ with coordinates $x_v$, $v \in V_G$, will be called a labeling, and its coordinates $x_v \in X_v$ labels. The notation $x|_W$, $W \subset V_G$, stands for the restriction of $x$ to the subset $W$, i.e. for the subvector $(x_v,\, v \in W)$. To shorten notation we will sometimes write $x_{uv} \in X_{uv}$ in place of $(x_u, x_v) \in X_u \times X_v$ for $uv \in E_G$. Let $nb(v)$, $v \in V_G$, denote the set of neighbors of node $v$, that is, the set $\{u \in V_G : uv \in E_G\}$.

LP relaxation. The local polytope relaxation of (1) reads (see e.g. [2])
$$\min_{\mu \ge 0} \sum_{v \in V_G} \sum_{x_v \in X_v} \theta_v(x_v)\, \mu_v(x_v) + \sum_{uv \in E_G} \sum_{(x_u, x_v) \in X_{uv}} \theta_{uv}(x_u, x_v)\, \mu_{uv}(x_u, x_v) \tag{2}$$
$$\text{s.t.} \quad \sum_{x_v \in X_v} \mu_v(x_v) = 1, \;\; v \in V_G; \qquad \sum_{x_v \in X_v} \mu_{uv}(x_u, x_v) = \mu_u(x_u), \;\; x_u \in X_u,\; uv \in E_G; \qquad \sum_{x_u \in X_u} \mu_{uv}(x_u, x_v) = \mu_v(x_v), \;\; x_v \in X_v,\; uv \in E_G.$$
This formulation is based on the overcomplete representation of indicator vectors $\mu$ constrained to the local polytope, commonly used for discrete graphical models [3].
It is well known that the local polytope constitutes an outer bound (relaxation) of the convex hull of all indicator vectors of labelings (the marginal polytope; cf. [3]). The Lagrange dual of (2) reads
$$\max_{\phi, \gamma} \sum_{v \in V_G} \gamma_v + \sum_{uv \in E_G} \gamma_{uv} \tag{3}$$
$$\text{s.t.} \quad \gamma_v \le \tilde\theta^\phi_v(x_v) := \theta_v(x_v) - \sum_{u \in nb(v)} \phi_{v,u}(x_v), \quad v \in V_G,\; x_v \in X_v,$$
$$\phantom{\text{s.t.}} \quad \gamma_{uv} \le \tilde\theta^\phi_{uv}(x_u, x_v) := \theta_{uv}(x_u, x_v) + \phi_{v,u}(x_v) + \phi_{u,v}(x_u), \quad uv \in E_G,\; (x_u, x_v) \in X_{uv}.$$
In the constraints of (3) we introduced the reparametrized potentials $\tilde\theta^\phi$. One can see that for any values of the dual variables $\phi$ the reparametrized energy $E_{G,\tilde\theta^\phi}(x)$ is equal to the non-reparametrized one $E_{G,\theta}(x)$ for any labeling $x \in X_G$. The objective function of the dual problem is equal to $D(\phi) := \sum_{v \in V_G} \tilde\theta^\phi_v(x'_v) + \sum_{uv \in E_G} \tilde\theta^\phi_{uv}(x'_{uv})$, where $x'_w \in \arg\min_{x_w \in X_w} \tilde\theta^\phi_w(x_w)$ for each node or edge $w$. A reparametrization, that is, the reparametrized potentials $\tilde\theta^\phi$, will be called optimal if the corresponding $\phi$ is a solution of the dual problem (3). In general, neither the optimal $\phi$ nor the optimal reparametrization is unique.

Definition 1 (Strict arc consistency). We call the node $v \in V_G$ strictly arc consistent w.r.t. potentials $\theta$ if there exist labels $x'_v \in X_v$ and $x'_u \in X_u$ for all $u \in nb(v)$, such that $\theta_v(x'_v) < \theta_v(x_v)$ for all $x_v \in X_v \setminus \{x'_v\}$ and $\theta_{vu}(x'_v, x'_u) < \theta_{vu}(x_v, x_u)$ for all $(x_v, x_u) \in X_{vu} \setminus \{(x'_v, x'_u)\}$. The label $x'_v$ will be called locally optimal.

If all nodes $v \in V_G$ are strictly arc consistent w.r.t. the potentials $\tilde\theta^\phi$, the dual objective value $D(\phi)$ becomes equal to the energy
$$D(\phi) = E_{G,\tilde\theta^\phi}(x') = E_{G,\theta}(x') \tag{4}$$
of the labeling $x'$ constructed from the corresponding locally optimal labels. From duality it follows that $D(\phi)$ is a lower bound on the energies of all labelings $E_{G,\theta}(x)$, $x \in X_G$. Hence attainment of equality (4) shows that (i) $\phi$ is a solution of the dual problem (3) and (ii) $x'$ is a solution of both the energy minimization problem (1) and its relaxation (2). Strict arc consistency of all nodes is sufficient, but not necessary, for attaining the optimum of the dual objective (3).
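The defining property of a reparametrization, that every labeling keeps its original energy, can be checked numerically. A small sketch with our own dictionary-based representation, where the array `phi[(v, u)]` holds the message $\phi_{v,u}(\cdot)$ for the directed pair $(v, u)$ (the names are ours, not the paper's):

```python
import numpy as np

def reparametrize(unary, pairwise, phi):
    """Build theta~ from theta and messages phi, as in the constraints of (3):
      theta~_v(x_v)      = theta_v(x_v) - sum_{u in nb(v)} phi_{v,u}(x_v)
      theta~_uv(x_u,x_v) = theta_uv(x_u,x_v) + phi_{u,v}(x_u) + phi_{v,u}(x_v)
    Every labeling's energy is unchanged, since each message is subtracted
    from a node and added back on the incident edge."""
    new_unary = {v: th.copy() for v, th in unary.items()}
    new_pairwise = {}
    for (u, v), th in pairwise.items():
        new_unary[u] = new_unary[u] - phi[(u, v)]
        new_unary[v] = new_unary[v] - phi[(v, u)]
        new_pairwise[(u, v)] = th + phi[(u, v)][:, None] + phi[(v, u)][None, :]
    return new_unary, new_pairwise
```

Dual coordinate-descent solvers such as TRW-S can be viewed as searching over exactly this family of energy-preserving transformations.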
Its fulfillment means that our LP relaxation is tight, which is not always the case. However, in many practical cases the optimal reparametrization φ corresponds to strict arc consistency of a significant portion of, but not all graph nodes. The remaining non-consistent part is often much smaller and consists of many separate ”islands“. The strict arc consistency of a certain node v, even for the optimally reparametrized potentials ˜θφ, does not guarantee global optimality of the corresponding locally optimal label xv (unless it holds for all nodes), though it is a good and widely used heuristic to obtain an approximate solution of the non-relaxed problem (1). In this work we provide an algorithm, which is able to prove this optimality or discard it. The algorithm applies combinatorial optimization techniques only to the arc inconsistent part of the model, which is often much smaller than the whole model in applications. Remark 1. Efficient dual decomposition based algorithms optimize dual functions, which differ from (4) (see e.g. [6, 13, 16]), but are equivalent to it in the sense of equal optimal values. Getting reparametrizations ˜θφ is less straightforward in these cases, but can be efficiently computed (see e.g. [16, Sec. 2.2]). 3 Algorithm description The graph A = (VA, EA) will be called an (induced) subgraph of the graph G = (VG, EG), if VA ⊂VG and EA = {uv ∈EG : u, v ∈VA}. The graph G will be called supergraph of A. The subgraph ∂A induced by a set of nodes V∂A of the graph A, which are connected to VG\VA, is called its boundary w.r.t. G, i.e. V∂A = {v ∈VA : ∃uv ∈EG : u ∈VG\VA}. The complement B to A\∂A, given by VB = {v ∈VG : v ∈∂A ∪(VG\VA)}, EB = {uv ∈EG : u, v ∈VB}, is called boundary complement to A w.r.t. the graph G. Let A be a subgraph of G and potentials θv, v ∈VG, and θuv ∈EG be associated with nodes and edges of G respectively. We assume, that θv, v ∈VA, and θuv ∈EA are associated with the subgraph A. 
Hence we consider the energy function $E_{A,\theta}$ to be defined on $A$, together with an optimal labeling on $A$, which is the one that minimizes $E_{A,\theta}$. The following theorem formulates the conditions under which an optimal labeling $x^*$ on the graph $G$ can be produced from the optimal labelings on its mutually boundary complement subgraphs $A$ and $B$.

Algorithm 1
(1) Solve the LP and reparametrize $(G, \theta) \to (G, \tilde\theta^\phi)$.
(2) Initialize $(A, \tilde\theta^\phi)$ and $x^*_{A,v}$ from the arc consistent nodes.
(3) repeat
  Set $B$ as the boundary complement to $A$.
  Compute an optimal labeling $x^*_B$ on $B$.
  If $x^*_A|_{\partial A} = x^*_B|_{\partial A}$, return. Else set $C := \{v \in V_{\partial A} : x^*_{A,v} \ne x^*_{B,v}\}$, $A := A \setminus C$.
until $C = \emptyset$

Theorem 1. Let $A$ be a subgraph of $G$ and $B$ be the boundary complement to $A$ w.r.t. $G$. Let $x^*_A$ and $x^*_B$ be labelings minimizing $E_{A,\theta}$ and $E_{B,\theta}$ respectively, and let all nodes $v \in V_A$ be strictly arc consistent w.r.t. the potentials $\theta$. Then from
$$x^*_{A,v} = x^*_{B,v} \quad \text{for all } v \in V_{\partial A} \tag{5}$$
it follows that the labeling $x^*$ with coordinates
$$x^*_v = \begin{cases} x^*_{A,v}, & v \in V_A \\ x^*_{B,v}, & v \in V_B \setminus V_A \end{cases}, \qquad v \in V_G,$$
is optimal on $G$.

Proof. Let $\theta$ denote the potentials of the problem. Let us define other potentials $\theta'$ as
$$\theta'_w(x_w) := \begin{cases} 0, & w \in V_{\partial A} \cup E_{\partial A} \\ \theta_w(x_w), & w \notin V_{\partial A} \cup E_{\partial A} \end{cases}.$$
Then $E_{G,\theta}(x) = E_{A,\theta'}(x|_A) + E_{B,\theta}(x|_B)$. From strict
Fortunately, it is not needed to do so and it is enough to get only a sufficiently good approximation. We will return to this point at the end of this section. Step (2). We assign to the set VA the nodes of the graph G, which satisfy the strict arc consistency condition. The optimal labeling on A can be trivially computed from the reparametrized unary potentials ˜θφ v by x∗ A,v := arg minxv ˜θφ v (xv), v ∈A. Step (3). We define B as the boundary complement to A w.r.t. the master graph G and find an optimal labeling x∗ B on the subgraph B with a combinatorial solver. If the boundary condition (5) holds we have found the optimal labeling according to Theorem 1. Otherwise we remove the nodes where this condition fails from A and repeat the whole step until either (5) holds or B = G. 3.1 Remarks on Algorithm 1 Encouraging boundary consistency condition. It is quite unlikely, that the optimal boundary labeling x∗ A|∂A obtained based only on the subgraph A coincides with the boundary labeling x∗ B|∂A obtained for the subgraph B. To satisfy this condition the unary potentials should be quite strong on the border. In other words, they should be at least strictly arc consistent. Indeed they are so, since we consider the reparametrized potentials ˜θφ, obtained at the LP presolve step of the algorithm. Single run of LP solver. Reparametrization allows also to perform only a single run of the LP solver, keeping the results as if the subproblem over A has been solved at each iteration. The following theorem states this property formally. Theorem 2. Let all nodes of a graph A be strictly arc consistent w.r.t. potentials ˜θφ, x be the optimum of EA,˜θφ and A′ be a subgraph of A. Then x|A′ optimizes EA′,˜θφ. Proof. The proof follows directly from Definition 1. Equation (4) holds for the labeling x|A′ plugged in place of x′ and graph A′ in place of G. Hence x|A′ provides a minimum of EA′,˜θφ. Presolving B for combinatorial solver. 
Many combinatorial solvers use linear programming relaxations as a presolving step. Reparametrization of the subproblem over the subgraph B plays the role of such a presolver, since the optimal reparametrization corresponds to the solution of the dual problem and makes solving the primal one easier.

Connected components analysis. It is often the case that the subgraph B consists of several connected components. We apply the combinatorial solver to each of them independently.

Table 1: Results on Middlebury datasets. The column Dataset contains the dataset name, the number |VG| of nodes and the number |Xv| of labels. Columns Step (1) and Step (3) contain the number of iterations, the time and the attained energy at steps (1) and (3) of Algorithm 1, corresponding to solving the LP relaxation and to the use of a combinatorial solver, respectively. The column |B| presents the starting and final sizes of the "combinatorial" subgraph B. A dash "−" stands for a failure of CPLEX due to the size of the combinatorial subproblem.

name     | |VG|   | |Xv| | Step (1) LP (TRW-S): # it / time, s / E | Step (3) ILP (CPLEX): # it / time, s / E | |B| min / max
tsukuba  | 110592 | 16   | 250 / 186 / 369537                      | 24 / 36 / 369218                         | 130 / 656
venus    | 166222 | 20   | 2000 / 3083 / 3048296                   | 10 / 69 / 3048043                        | 66 / 233
teddy    | 168750 | 60   | 10000 / 14763 / 1345214                 | 1 / − / −                                | 2062 / −
family   | 425632 | 5    | 10000 / 20156 / 184825                  | 18 / 2 / 184813                          | 11 / 109
pano     | 514080 | 7    | 10000 / 34092 / 169224                  | 1 / − / −                                | 24474 / −

Subgraph B growing strategy. One can consider different strategies for increasing the subgraph B if the boundary condition (5) does not hold. Our greedy strategy is just one possible option.

Optimality of reparametrization. As one can see, the reparametrization plays a significant role in our algorithm: it (i) is required for Theorem 1 to hold; (ii) serves as a criterion for the initial splitting of G into A and B; (iii) makes the local potentials on the border ∂A stronger; (iv) allows us to avoid multiple runs of the LP solver when the subgraph A shrinks; (v) can speed up some combinatorial solvers by serving as a presolve result.
However, there is no real reason to search for an optimal reparametrization: all the mentioned functionality remains valid also if it is non-optimal. Of course, one pays a certain price for the non-optimality: (i) the initial subgraph B becomes larger; (ii) the local potentials become weaker; (iii) the presolve results for the combinatorial solver become less precise. Note that Theorem 2 holds even for non-optimal reparametrizations, and we need to run the LP solver only once.

4 Experimental evaluation

We tested our approach on problems from the Middlebury energy minimization benchmark [26] and the recently published discrete energy minimization benchmark [15], which includes the datasets of the first one. We selected computer vision benchmarks intentionally, because many problems in this area fulfill our requirements: the underlying graph is sparse (typically it has a grid structure) and the LP relaxation delivers good practical results. Since our experiments serve mainly as a proof of concept, we used general, though not always the most efficient, solvers: TRW-S [16] as the LP solver and CPLEX [27] as the combinatorial one, within the OpenGM framework [28]. Unfortunately, the original version of TRW-S neither provides information about strict arc consistency nor outputs a reparametrization; therefore we used our own implementation in the experiments. Depending on the type of the pairwise factors (Potts, truncated ℓ2- or ℓ1-norm), we found our implementation up to an order of magnitude slower than the freely available code of V. Kolmogorov. This suggests that the reported processing times can be significantly improved by more efficient future implementations.

In the first round of our experiments we considered problems (i.e. graphical models with the specified unary and pairwise factors) of the Middlebury MRF benchmark, most of which had remained unsolved, to the best of our knowledge. MRF stereo. This dataset consists of three models: tsukuba, venus and teddy.
Since an optimal integral solution of tsukuba was recently obtained by LP solvers [11, 13], we used this dataset to show how our approach performs with clearly non-optimal reparametrizations; to this end we ran TRW-S for only 250 iterations. The size of the subgraph B grew from 130 to 656 nodes, out of more than 100000 nodes of the original problem (see Table 1). On venus we obtained an optimal labeling after 10 iterations of our algorithm. During these iterations the size of the set B grew from 66 to 233 nodes, which is only 0.14% of the original problem size. The dataset teddy remains unsolved: though the size of the problem was reduced from the original 168750 nodes to 2062, these still constituted an unmanageable task for CPLEX, presumably because of the large number of labels (60 per node).

Dataset       EG,θ(x∗)    Step (1) LP         Step (3) ILP        MCA        MPLP
                          # it    time,s      # it    time,s      time,s     # LP it   LP time,s   ILP time,s
pfau          24010.44    1000    276         14      14          > 55496    10000     > 15000
palm          12253.75    200     65          17      93          561        700       1579        3701
clownfish     14794.18    100     32          8       10          328        350       790         181
crops         11853.12    100     32          6       6           355        350       797         1601
strawberry    11766.34    100     29          8       31          483        350       697         1114

Table 2: Exemplary Potts model comparison. Datasets are taken from the Color segmentation (N8) set. The column EG,θ(x∗) shows the optimal energy value; the columns Step (1) LP and Step (3) ILP contain the number of iterations and the time spent at steps (1) and (3) of Algorithm 1, corresponding to solving the LP relaxation and running the combinatorial solver, respectively. The column MCA gives the runtime of the multiway-cut solver reported in [21]. The MPLP [17] column provides the number of iterations and the runtime of the LP presolve, and the runtime of the tightening cutting-plane phase (ILP).

MRF photomontage. These models are difficult for dual solvers like TRW-S because the range of values in their pairwise factors is quite large, varying from 0 to more than 500000 within a single factor. Hence we used 10000 iterations of TRW-S at the first step of Algorithm 1.
For the family dataset the algorithm decreased the size of the problem passed to CPLEX from originally over 400000 nodes to slightly more than 100 and found a solution of the whole problem. In contrast to family, the initial subgraph B for the panorama dataset is much larger (about 25000 nodes), and CPLEX gave up.

MRF inpainting. Though applying TRW-S to both datasets penguin and house reduced the problems to about 0.5% of their original size, the resulting subgraphs B of 141 and 856 nodes, respectively, were still too large for CPLEX, presumably because of the large number (256) of labels.

Figure 2: Results for the pfau instance from [15]. Panels: (a) original image, (b) Kovtun's method, (c) our approach, (d) optimal labeling. Gray pixels in (b) and (c) mark nodes that need to be labeled by the combinatorial solver. Our approach (c) leads to much smaller combinatorial problem instances than Kovtun's method [29] (b) used in [30]. While Kovtun's method obtains partial optimality for only 5% of the nodes, our approach leaves only tiny problems to be solved by a combinatorial solver.

Potts models. Our approach turned out to be especially efficient for Potts models. We tested it on the following datasets from the benchmark [15]: Color segmentation (N4), Color segmentation (N8), Color segmentation and Brain, and managed to solve all 26 problem instances to optimality. Solving Potts models to optimality is no longer a big issue thanks to the recent work [21], which relates these problems to the multiway-cut problem [31] and adopts a quite efficient solver based on the cutting-plane technique. However, we were able to outperform even this specialized solver on hard instances, which we collected in Table 2. There is a simple explanation for this phenomenon: the difficult instances are those for which the optimal labeling contains many small areas corresponding to different labels, see e.g. Fig. 2.
This is atypical for Potts models, whose optimal labelings usually consist of a small number of large segments. Since the number of cutting planes that have to be processed by the multiway-cut solver grows with the total length of the segment borders, its overall performance drops significantly on such instances. Our approach correctly labels most of the borders already when solving the LP relaxation. Since the resulting subgraph B passed to the combinatorial solver is quite small, the corresponding subproblems turn out to be easy to solve even for a general-purpose solver like CPLEX. Indeed, we would expect a further increase in the overall performance of our method if the multiway-cut solver were used in place of CPLEX.

For Potts models there exist methods [29, 32] that provide a part of an optimal solution, known as partial optimality. They often allow to simplify the problem drastically, so that it can be solved to global optimality on the remaining variables very quickly, see [30]. However, for hard instances like pfau these methods can persistently label only a small fraction of the graph nodes; combinatorial solvers then either cannot solve the rest or require a lot of time. Our method does not provide partially optimal variables: if it cannot solve the whole problem, no node can be labeled as optimal at all. On the upside, the subgraph B handed to the combinatorial solver is typically much smaller, see Fig. 2.

For comparison we tested the MPLP solver [17], which is based on coordinate-descent LP iterations and tightens the LP relaxation with the cutting-plane approach described in [33]; we used its publicly available code [34]. However, this solver did not manage to solve any of the considered difficult problems (marked as unsolved in the OpenGM benchmark [15]), such as color-seg-n8/pfau, mrf stereo/{venus, teddy} and mrf photomontage/{family, pano}.
For easier instances of the Potts model, we found our solver an order of magnitude faster than MPLP (see Table 2 for an exemplary comparison), even though we tried different numbers of LP presolve iterations to speed up MPLP.

Summary. Our experiments show that our method, even when used with quite general and not always the most efficient solvers such as TRW-S and CPLEX, allows us to (i) find globally optimal solutions of large-scale problem instances that were previously unsolvable; (ii) solve hard instances of Potts models an order of magnitude faster than a modern specialized combinatorial multiway-cut method; and (iii) outperform the cutting-plane based MPLP method on the tested datasets.

5 Conclusions and future work

The method proposed in this paper provides a novel way of combining convex and combinatorial algorithms to solve large-scale optimization problems to a global optimum. It efficiently extracts the subgraph where the LP relaxation is not tight and combinatorial algorithms have to be applied. Since this subgraph often corresponds to only a tiny fraction of the initial problem, the combinatorial search becomes feasible. The method is very generic: any linear programming and combinatorial solvers can be used to carry out the respective steps of Algorithm 1. It is particularly efficient for sparse graphs and when the LP relaxation is almost tight. In the future we plan to generalize the method to higher-order models and tighter convex relaxations for the convex part of our solver, and to apply alternative, specialized solvers for both the convex and the combinatorial parts of our approach.

Acknowledgement. This work has been supported by the German Research Foundation (DFG) within the program Spatio-/Temporal Graphical Models and Applications in Image Analysis, grant GRK 1653. The authors thank A. Shekhovtsov, B. Flach, T. Werner, K. Antoniuk and V.
Franc from the Center for Machine Perception of the Czech Technical University in Prague for fruitful discussions.

References

[1] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[2] T. Werner. A linear programming approach to max-sum problem: A review. IEEE Trans. on PAMI, 29(7), July 2007.
[3] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1–305, 2008.
[4] M. Schlesinger. Syntactic analysis of two-dimensional visual signals in the presence of noise. Kibernetika, (4):113–130, 1976.
[5] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on (hyper)trees: message passing and linear programming approaches. IEEE Trans. on Inf. Th., 51(11), 2005.
[6] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. IEEE Trans. on PAMI, 33(3):531–552, March 2011.
[7] B. Savchynskyy, J. H. Kappes, S. Schmidt, and C. Schnörr. A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling. In CVPR 2011, 2011.
[8] S. Schmidt, B. Savchynskyy, J. H. Kappes, and C. Schnörr. Evaluation of a first-order primal-dual algorithm for MRF energy minimization. In EMMCVPR, pages 89–103, 2011.
[9] O. Meshi and A. Globerson. An alternating direction method for dual MAP LP relaxation. In ECML/PKDD (2), pages 470–483, 2011.
[10] A. F. T. Martins, M. A. T. Figueiredo, P. M. Q. Aguiar, N. A. Smith, and E. P. Xing. An augmented Lagrangian approach to constrained MAP inference. In ICML, 2011.
[11] B. Savchynskyy, S. Schmidt, J. H. Kappes, and C. Schnörr. Efficient MRF energy minimization via adaptive diminishing smoothing. In UAI 2012, pages 746–755.
[12] D. V. N. Luong, P. Parpas, D. Rueckert, and B. Rustem. Solving MRF minimization by mirror descent. In Advances in Visual Computing, volume 7431, pages 587–598. Springer Berlin Heidelberg, 2012.
[13] J. H. Kappes, B.
Savchynskyy, and C. Schnörr. A bundle approach to efficient MAP-inference by Lagrangian relaxation. In CVPR 2012, 2012.
[14] B. Savchynskyy and S. Schmidt. Getting feasible variable estimates from infeasible ones: MRF local polytope study. Technical report, arXiv:1210.4081, 2012.
[15] J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, J. Lellmann, N. Komodakis, and C. Rother. A comparative study of modern inference techniques for discrete energy minimization problems. In CVPR, 2013.
[16] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. IEEE Trans. on PAMI, 28(10):1568–1583, 2006.
[17] A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS, 2007.
[18] T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate inference. IEEE Trans. on Inf. Theory, 56(12):6294–6316, 2010.
[19] M. I. Schlesinger and K. V. Antoniuk. Diffusion algorithms and structural recognition optimization problems. Cybernetics and Systems Analysis, 47(2):175–192, 2011.
[20] V. Franc, S. Sonnenburg, and T. Werner. Cutting-Plane Methods in Machine Learning, chapter 7, pages 185–218. The MIT Press, Cambridge, USA, 2012.
[21] J. H. Kappes, M. Speth, B. Andres, G. Reinelt, and C. Schnörr. Globally optimal image partitioning by multicuts. In EMMCVPR, 2011.
[22] M. Bergtholdt, J. H. Kappes, S. Schmidt, and C. Schnörr. A study of parts-based object class detection using complete graphs. IJCV, 87(1-2):93–117, 2010.
[23] M. Sun, M. Telaprolu, H. Lee, and S. Savarese. Efficient and exact MAP-MRF inference using branch and bound. In AISTATS 2012.
[24] L. Otten and R. Dechter. Anytime AND/OR depth-first search for combinatorial optimization. In Proceedings of the Annual Symposium on Combinatorial Search (SOCS), 2011.
[25] M. C. Cooper, S. de Givry, M. Sanchez, T. Schiex, M. Zytnicki, and T. Werner.
Soft arc consistency revisited. Artificial Intelligence, 174(7-8):449–478, May 2010.
[26] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. IEEE Trans. PAMI, 30:1068–1080, June 2008.
[27] ILOG, Inc. ILOG CPLEX: High-performance software for mathematical programming and optimization. See http://www.ilog.com/products/cplex/.
[28] B. Andres, T. Beier, and J. H. Kappes. OpenGM: A C++ library for discrete graphical models. ArXiv e-prints, 2012. Project page: http://hci.iwr.uni-heidelberg.de/opengm2/.
[29] I. Kovtun. Partial optimal labeling search for a NP-hard subclass of (max, +) problems. In Proceedings of the DAGM Symposium, 2003.
[30] J. H. Kappes, M. Speth, G. Reinelt, and C. Schnörr. Towards efficient and exact MAP-inference for large scale discrete computer vision problems via combinatorial optimization. In CVPR, 2013.
[31] S. Chopra and M. R. Rao. On the multiway cut polyhedron. Networks, 21(1):51–89, 1991.
[32] P. Swoboda, B. Savchynskyy, J. H. Kappes, and C. Schnörr. Partial optimality via iterative pruning for the Potts model. In SSVM, 2013.
[33] D. Sontag, T. Meltzer, A. Globerson, Y. Weiss, and T. Jaakkola. Tightening LP relaxations for MAP using message-passing. In UAI 2008, pages 503–510.
[34] D. Sontag. C++ code for MAP inference in graphical models. See http://cs.nyu.edu/~dsontag/code/mplp_ver2.tgz.
Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion

Franz J. Király*
Department of Statistical Science and Centre for Inverse Problems
University College London
f.kiraly@ucl.ac.uk

Louis Theran†
Institute of Mathematics, Discrete Geometry Group
Freie Universität Berlin
theran@math.fu-berlin.de

Abstract

We propose a general framework for reconstructing and denoising single entries of incomplete and noisy matrices. We describe: effective algorithms for deciding if an entry can be reconstructed and, if so, for reconstructing and denoising it; and a priori bounds on the error of each entry, individually. In the noiseless case our algorithm is exact. For rank-one matrices, the new algorithm is fast, admits a highly parallel implementation, and produces an error-minimizing estimate that is qualitatively close to our theoretical bound and to the state-of-the-art Nuclear Norm and OptSpace methods.

1 Introduction

Matrix Completion is the task of reconstructing low-rank matrices from a subset of their entries; it occurs naturally in many practically relevant problems, such as missing feature imputation, multitask learning [2], transductive learning [4], or collaborative filtering and link prediction [1, 8, 9]. Almost all known methods performing matrix completion are optimization methods such as the max-norm and nuclear norm heuristics [3, 9, 10], or OptSpace [5], to name a few amongst many. These methods have in common that in general: (a) they reconstruct the whole matrix; (b) error bounds are given for all of the matrix, not single entries; (c) theoretical guarantees are given based on the sampling distribution of the observations. These properties are all problematic in scenarios where: (i) one is interested only in predicting or imputing a specific set of entries; (ii) the entire data set is unwieldy to work with; (iii) or there are non-random "holes" in the observations.
All of these possibilities are very natural for the typical "big data" setup. The recent results of [6] suggest that a method capable of handling challenges (i)–(iii) is within reach. By analyzing the algebraic-combinatorial structure of Matrix Completion, the authors provide algorithms that identify, for any fixed set of observations, exactly the entries that can, in principle, be reconstructed from them. Moreover, the theory developed there indicates that, when a missing entry can be determined, it can be found by first exposing combinatorially determined polynomial relations between the known entries and the unknown ones, and then selecting a common solution.

Bridging the gap between the theory of [6] and practice poses the following challenges: to efficiently find the relevant polynomial relations, and to extend the methodology to the noisy case. In this paper, we show how to do both of these things in the case of rank one, and discuss how to instantiate the same scheme for general rank. It will turn out that finding the right set of polynomials and noisy estimation are intimately related: we can treat each polynomial as providing an estimate of the missing entry, and then take as our estimate the variance-minimizing weighted average. This technique also gives a priori lower bounds for a broad class of unbiased single-entry estimators in terms of the combinatorial structure of the observations and the noise model only.

* Supported by the Mathematisches Forschungsinstitut Oberwolfach
† Supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 247029-SDModels.
In detail, our contributions include:

• the construction of a variance-minimal and unbiased estimator for any fixed missing entry of a rank-one matrix, under the assumption of known noise variances;
• an explicit form for the variance of that estimator, which is a lower bound on the variance of any unbiased estimator of that fixed missing entry and thus yields a quantitative measure of the trustworthiness of the entry reconstructed by any algorithm;
• the description of a strategy for generalizing the above to arbitrary rank;
• a comparison of the estimator with two state-of-the-art optimization algorithms (OptSpace and nuclear norm), and an error assessment of the three matrix completion methods against the variance bound.

As mentioned, the restriction to rank one is not inherent in the overall scheme. We depend on rank one only in the sense that we understand the combinatorial-algebraic structure of rank-one matrix completion exactly, whereas the behavior in higher rank is not yet as well understood. Nonetheless, it is, in principle, accessible and, once available, can be "plugged in" to the results here without changing the complexity much. In this sense, the present paper is a proof of concept for a new approach to estimating and denoising in algebraic settings, based on combinatorially enumerating a set of polynomial estimators and then averaging them. For us, computational efficiency comes via a connection to the topology of graphs that is specific to this problem, but we suspect that this part, too, can be generalized somewhat.

2 The Algebraic Combinatorics of Matrix Completion

We first briefly review facts about Matrix Completion that we require. The exposition is along the lines of [6].

Definition 2.1. A matrix M ∈ {0, 1}^(m×n) is called a mask. If A is a partially known matrix, then the mask of A is the mask which has ones in exactly the positions which are known in A and zeros otherwise.

Definition 2.2. Let M be an (m × n) mask.
We will call the unique bipartite graph G(M) which has M as bipartite adjacency matrix the completion graph of M. We will refer to the m vertices of G(M) corresponding to the rows of M as blue vertices, and to the n vertices of G(M) corresponding to the columns as red vertices. If e = (i, j) is an edge in K_{m,n} (where K_{m,n} is the complete bipartite graph with m blue and n red vertices), we will also write A_e instead of A_ij for any (m × n) matrix A.

A fundamental result, [6, Theorem 2.3.5], says that identifiability and reconstructability are, up to a null set, graph properties.

Theorem 2.3. Let A be a generic¹ and partially known (m × n) matrix of rank r, let M be the mask of A, and let i, j be integers. Whether A_ij is reconstructible (uniquely, or up to finite choice) depends only on M and the true rank r; in particular, it does not depend on the true A.

For rank one, as opposed to higher rank, the set of reconstructible entries is easily obtainable from G(M) by combinatorial means:

Theorem 2.4 ([6, Theorem 2.5.36 (i)]). Let G ⊆ K_{m,n} be the completion graph of a partially known (m × n) matrix A. Then the set of uniquely reconstructible entries of A is exactly the set of A_e with e in the transitive closure of G. In particular, all of A is reconstructible if and only if G is connected.

¹ In particular, if A is sampled from a continuous density, then the set of non-generic A is a null set.

2.1 Reconstruction on the transitive closure

We extend Theorem 2.4's theoretical reconstruction guarantee by describing an explicit, algebraic algorithm for actually doing the reconstruction.

Definition 2.5. Let P ⊆ K_{m,n} (or C ⊆ K_{m,n}) be a path (or cycle) with a fixed start and end. We will denote by E⁺(P) the set of edges in P (resp. E⁺(C) for C) traversed from a blue vertex to a red one, and by E⁻(P) the set of edges traversed from a red vertex to a blue one².
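Theorem 2.4 is easy to check computationally: an entry A_ij is uniquely reconstructible exactly when row-vertex i and column-vertex j lie in the same connected component of G(M), and in the noiseless case the entry is recovered by multiplying observed entries along a connecting path with alternating exponents (made precise in the rest of this subsection). A minimal sketch in Python/numpy; function and variable names are ours, not from the paper:

```python
import numpy as np

def same_component(M, i, j):
    """True iff row-vertex i and column-vertex j are connected in G(M)."""
    m, n = M.shape
    adj = {v: set() for v in range(m + n)}   # rows: 0..m-1, cols: m..m+n-1
    for r in range(m):
        for c in range(n):
            if M[r, c]:
                adj[r].add(m + c)
                adj[m + c].add(r)
    seen, stack = set(), [i]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return (m + j) in seen

A = np.outer([1.5, 2.0, 0.5], [3.0, 1.0, 4.0])   # generic rank-one matrix
M = np.array([[1, 1, 1],
              [1, 0, 0],
              [0, 0, 0]])                         # observed positions

print(same_component(M, 1, 2))    # True: path row1 - col0 - row0 - col2
print(same_component(M, 2, 0))    # False: row 2 is isolated in G(M)

# Noiseless reconstruction of A[1,2] along that path (alternating exponents):
est = A[1, 0] * A[0, 2] / A[0, 0]
print(abs(est - A[1, 2]) < 1e-12)  # True
```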
From now on, when we speak of "oriented paths" or "oriented cycles", we mean with this sign convention and some fixed traversal order. Let A = (A_ij) be an (m × n) matrix of rank 1, and identify the entries A_ij with the edges of K_{m,n}. For an oriented cycle C, we define the polynomials

P_C(A) = \prod_{e \in E^+(C)} A_e − \prod_{e \in E^-(C)} A_e,   and   L_C(A) = \sum_{e \in E^+(C)} \log A_e − \sum_{e \in E^-(C)} \log A_e,

where for negative entries of A we fix a branch of the complex logarithm.

Theorem 2.6. Let A = (A_ij) be a generic (m × n) matrix of rank 1, and let C ⊆ K_{m,n} be an oriented cycle. Then P_C(A) = L_C(A) = 0.

Proof: The determinantal ideal of rank one is a binomial ideal generated by the (2 × 2) minors of A (where the entries of A are considered as variables). The minor equations are exactly the P_C(A) where C is an elementary oriented four-cycle; if C is an elementary 4-cycle, denote its edges by a(C), b(C), c(C), d(C), with E⁺(C) = {a(C), d(C)}. Let 𝒞 be the collection of the elementary 4-cycles, and define L_𝒞(A) = {L_C(A) : C ∈ 𝒞} and P_𝒞(A) = {P_C(A) : C ∈ 𝒞}. By sending the term log A_e to a formal variable x_e, we see that the free Z-group generated by the L_C(A) is isomorphic to H₁(K_{m,n}, Z). With this equivalence, it is straightforward that, for any oriented cycle D, L_D(A) lies in the Z-span of elements of L_𝒞(A) and, therefore, formally,

L_D(A) = \sum_{C \in 𝒞} α_C · L_C(A)   with the α_C ∈ Z.

Thus L_D(·) vanishes when A is rank one, since the right-hand side does. Exponentiating completes the proof. □

Corollary 2.7. Let A = (A_ij) be an (m × n) matrix of rank 1, let v, w be two vertices in K_{m,n}, and let P, Q be two oriented paths in K_{m,n} starting at v and ending at w. Then, for all A, it holds that L_P(A) = L_Q(A).

3 A Combinatorial Algebraic Estimate for Missing Entries and Their Error

We now construct our estimator.

3.1 The sampling model

In all of the following, we will assume that the observations arise from the following sampling process:

Assumption 3.1.
There is an unknown, fixed, generic rank-one matrix A, and a known (m × n) mask M ∈ {0, 1}^(m×n). There is a (stochastic) noise matrix E ∈ R^(m×n) whose entries are uncorrelated and which is multiplicatively centered with finite, non-zero³ variance; i.e., E(log E_ij) = 0 and 0 < Var(log E_ij) < ∞ for all i and j. The observed data is the matrix A ∘ M ∘ E = Ω(A ∘ E), where ∘ denotes the Hadamard (i.e., component-wise) product. That is, the observation is a matrix with entries A_ij · M_ij · E_ij.

² Any fixed orientation of K_{m,n} will give us the same result.
³ The zero-variance case corresponds to exact reconstruction, which is handled already by Theorem 2.4.

The assumption of multiplicative noise is a necessary precaution in order for the presented estimator (and in fact, any estimator) of the missing entries to have bounded variance, as shown in Example 3.2 below. This is not, in practice, a restriction, since an infinitesimal additive error δA_ij on an entry of A is equivalent to an infinitesimal multiplicative error δ log A_ij = δA_ij / A_ij, and additive variances can be directly translated into multiplicative variances if the density function of the noise is known⁴. The previous observation implies that the multiplicative noise model is as powerful as any additive one that allows bounded-variance estimates.

Example 3.2. Consider a (2 × 2) matrix A of rank 1. The unique equation between the entries is then A₁₁A₂₂ = A₁₂A₂₁. Solving for any entry will have another entry in the denominator; for example, A₁₁ = A₁₂A₂₁ / A₂₂. Thus we get an estimator for A₁₁ when substituting observed and noisy entries for A₁₂, A₂₁, A₂₂. When A₂₂ approaches zero, the estimation error for A₁₁ approaches infinity. In particular, if the density function of the error E₂₂ of A₂₂ is too dense around the value −A₂₂, then the estimate for A₁₁ given by the equation will have unbounded variance. In such a case, one can show that no estimator for A₁₁ has bounded variance.
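The remark above, that an infinitesimal additive error δA_ij corresponds to the multiplicative error δA_ij/A_ij, can be illustrated numerically: for small additive Gaussian noise, Var(log(A + δ)) ≈ Var(δ)/A². A quick check (ours; the constants are arbitrary):

```python
# Numeric illustration of the additive-to-multiplicative variance
# translation: for additive noise that is small relative to the entry,
# the log-domain variance is approximately Var(delta) / A^2.
import numpy as np

rng = np.random.default_rng(2)
A, sd = 5.0, 0.01                      # entry value, additive noise std dev
delta = rng.normal(0.0, sd, size=500_000)
log_var = np.log(A + delta).var()      # empirical multiplicative variance
print(abs(log_var - sd**2 / A**2) / (sd**2 / A**2) < 0.05)   # True
```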
3.2 Estimating entries and error bounds

In this section, we construct the unbiased estimator for the entries of a rank-one matrix with minimal variance. First, we define some notation to ease the exposition:

Notations 3.3. We will denote by a_ij = log A_ij and ε_ij = log E_ij the logarithmic entries and noise. Thus, for a path P in K_{m,n} we obtain

L_P(A) = \sum_{e \in E^+(P)} a_e − \sum_{e \in E^-(P)} a_e.

Denote by b_ij = a_ij + ε_ij the logarithmic (observed) entries, and by B the (incomplete) matrix which has the (observed) b_ij as entries. Denote σ_ij = Var(b_ij) = Var(ε_ij).

The components of the estimator will be built from the L_P:

Lemma 3.4. Let G = G(M) be the graph of the mask M. Let x = (v, w) ∈ K_{m,n} be any edge with v red. Let P be an oriented path in G(M) starting at v and ending at w. Then

L_P(B) = \sum_{e \in E^+(P)} b_e − \sum_{e \in E^-(P)} b_e

is an unbiased estimator for a_x with variance Var(L_P(B)) = \sum_{e \in P} σ_e.

Proof: By linearity of expectation and centeredness of the ε_ij, it follows that

E(L_P(B)) = \sum_{e \in E^+(P)} E(b_e) − \sum_{e \in E^-(P)} E(b_e),

thus L_P(B) is unbiased. Since the ε_e are uncorrelated, the b_e are also; thus, by Bienaymé's formula, we obtain

Var(L_P(B)) = \sum_{e \in E^+(P)} Var(b_e) + \sum_{e \in E^-(P)} Var(b_e),

and the statement follows from the definition of σ_e.

In the following, we will consider the following parametric estimator as a candidate for estimating a_x:

Notations 3.5. Fix an edge x = (v, w) ∈ K_{m,n}. Let 𝒫 be a basis for the v–w path space and denote #𝒫 by p. For α ∈ R^p, set X(α) = \sum_{P \in 𝒫} α_P L_P(B). Furthermore, we will denote by 𝟙 the p-vector of ones.

⁴ The multiplicative noise assumption causes the observed entries and the true entries to have the same sign. A change of sign can be modeled by adding another multiplicative binary random variable to the model which takes values ±1; this adds an independent combinatorial problem for the estimation of the sign, which can be done by maximum likelihood. In order to keep the exposition short and easy, we did not include this in the exposition.
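The log-domain path estimate L_P(B) just introduced is unbiased with variance ∑_{e∈P} σ_e (this is the content of Lemma 3.4). A Monte-Carlo sanity check under log-normal noise, with made-up edge values and orientations (a sketch of ours, not the authors' code):

```python
# Check that L_P(B) = sum_{E+} b_e - sum_{E-} b_e has mean L_P(A) and
# variance sum_e sigma_e when the log-noise terms are independent normals.
import numpy as np

rng = np.random.default_rng(1)
a = {"e1": 0.5, "e2": -0.3, "e3": 1.1}       # true log-entries along a path
sigma = {"e1": 0.04, "e2": 0.01, "e3": 0.09} # per-edge log-variances
signs = {"e1": +1, "e2": -1, "e3": +1}       # E+ / E- orientation

N = 200_000
samples = np.zeros(N)
for e in a:
    noise = rng.normal(0.0, np.sqrt(sigma[e]), size=N)
    samples += signs[e] * (a[e] + noise)

target_mean = sum(signs[e] * a[e] for e in a)    # L_P(A) = 1.9
target_var = sum(sigma.values())                 # 0.14
print(abs(samples.mean() - target_mean) < 0.01)  # True: unbiased
print(abs(samples.var() - target_var) < 0.01)    # True: Bienaymé's formula
```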
The following lemma follows immediately from Lemma 3.4 and Theorem 2.6:

Lemma 3.6. E(X(α)) = 𝟙ᵀα · a_x; in particular, X(α) is an unbiased estimator for a_x if and only if 𝟙ᵀα = 1.

We will now show that minimizing the variance of X(α) can be formulated as a quadratic program with coefficients entirely determined by a_x, the measurements b_e and the graph G(M). In particular, we will expose an explicit formula for the α minimizing the variance. The formula will make use of the following path kernel. For fixed vertices s and t, an s–t path is the sum of a cycle in H₁(G, Z) and −x_{st}. The s–t path space is the linear span of all the s–t paths. We discuss its relevant properties in Appendix A.

Definition 3.7. Let e ∈ K_{m,n} be an edge. For an edge e and a path P, set c_{e,P} = ±1 if e ∈ E^±(P), and c_{e,P} = 0 otherwise. Let P, Q ∈ 𝒫 be any fixed oriented paths. Define the (weighted) path kernel k : 𝒫 × 𝒫 → R by

k(P, Q) = \sum_{e \in K_{m,n}} c_{e,P} · c_{e,Q} · σ_e.

Under our assumption that Var(b_e) > 0 for all e ∈ K_{m,n}, the path kernel is positive definite, since it is a sum of p independent positive semi-definite functions; in particular, its kernel matrix has full rank. Here is the variance-minimizing unbiased estimator:

Proposition 3.8. Let x = (s, t) be a pair of vertices, and 𝒫 a basis for the s–t path space in G with p elements. Let Σ be the p × p kernel matrix of the path kernel with respect to the basis 𝒫. For any α ∈ R^p, it holds that Var(X(α)) = αᵀΣα. Moreover, under the condition 𝟙ᵀα = 1, the variance Var(X(α)) is minimized by

α = (Σ⁻¹𝟙)(𝟙ᵀΣ⁻¹𝟙)⁻¹.

Proof: By inserting definitions, we obtain

X(α) = \sum_{P \in 𝒫} α_P L_P(B) = \sum_{P \in 𝒫} α_P \sum_{e \in K_{m,n}} c_{e,P} b_e.

Writing b = (b_e) ∈ R^{mn} as a vector and C = (c_{e,P}) ∈ R^{mn×p} as a matrix, we obtain X(α) = bᵀCα. Using that Var(λ·) = λ² Var(·) for any scalar λ, and the independence of the b_e, a calculation yields Var(X(α)) = αᵀΣα. In order to determine the minimum of the variance in α, consider the Lagrangian

L(α, λ) = αᵀΣα + λ(1 − \sum_{P \in 𝒫} α_P)
, where the slack term models the condition 𝟙ᵀα = 1. A straightforward computation yields

∂L/∂α = 2Σα − λ𝟙.

Due to the positive definiteness of Σ the function Var(X(α)) is convex; thus α = Σ⁻¹𝟙 / (𝟙ᵀΣ⁻¹𝟙) is the unique α minimizing the variance while satisfying 𝟙ᵀα = 1. □

Remark 3.9. The above setup works in wider generality: (i) if Var(b_e) = 0 is allowed and there is an s–t path of all zero-variance edges, the path kernel becomes positive semi-definite; (ii) similarly, if 𝒫 is replaced with any set of paths at all, the same may occur. In both cases, we may replace Σ⁻¹ with the Moore–Penrose pseudo-inverse and the proposition still holds: (i) reduces to the exact reconstruction case of Theorem 2.4; (ii) produces the optimal estimator with respect to 𝒫, which is optimal provided that 𝒫 is spanning, and adding paths to 𝒫 does not make the estimate worse.

Our estimator is optimal over a fairly large class.

Theorem 3.10. Let Â_ij be any estimator for an entry A_ij of the true matrix that is: (i) unbiased; (ii) a deterministic piecewise smooth function of the observations; (iii) independent of the noise model. Let A*_ij be the estimator from Proposition 3.8. Then Var(A*_ij) ≤ Var(Â_ij).

We give a complete proof in the full version. Here, we prove the special case of log-normal noise, which gives an alternate viewpoint on the path kernel.

Proof: As above, we work with the formal logarithm a_ij of A_ij. For log-normal noise, the ε_e are independently distributed normals with variances σ_e. It then follows that, for any P in the i–j path space,

L_P(B) ∼ N(a_ij, \sum_{e \in P} σ_e),

and the kernel matrix Σ of the path kernel is the covariance matrix of the L_P in our path basis. Thus, the L_P have distribution N(a_ij 𝟙, Σ). It is well known that any multivariate normal has a linear reparameterization in which the coordinates are independent; a computation shows that, here, Σ⁻¹𝟙 (𝟙ᵀΣ⁻¹𝟙)⁻¹ is the correct linear map. Thus, the estimator A*_ij is the sample mean of the coordinates in the new parameterization.
Since this is a sufficient statistic, we are done via the Lehmann–Scheffé Theorem. $\square$ 3.3 Rank 2 and higher An estimator for rank 2 and higher, together with a variance analysis, can be constructed similarly once all the solving polynomials are known. The main difficulty lies in the fact that these polynomials are not parameterized by cycles anymore, but by specific subgraphs of $G(M)$, see [6, Section 2.5], and that they are not necessarily linear in the missing entry $A_e$. However, even with approximate oracles for evaluating these polynomials and estimating their covariances, an estimator similar to $X(\alpha)$ can be constructed and analyzed; in particular, we still need only to consider a basis for the space of “circuits” through the missing entry and not a costly brute-force enumeration. 3.4 The algorithms We now give the algorithms for estimating/denoising entries and computing the variance bounds; an implementation is available from [7]. Since the path matrix $C$, the path kernel matrix $\Sigma$, and the optimal $\alpha$ are required for both, we show how to compute them first. Algorithm 1 Calculates the path kernel $\Sigma$ and $\alpha$. Input: index $(i, j)$, an $(m \times n)$ mask $M$, variances $\sigma$. Output: path matrix $C$, path kernel $\Sigma$ and minimizer $\alpha$. 1: Find a linearly independent set of paths $\mathcal{P}$ in the graph $G(M)$, starting from $i$ and ending at $j$. 2: Determine the matrix $C = (c_{e,P})$ with $e \in G(M)$, $P \in \mathcal{P}$; set $c_{e,P} = \pm 1$ if $e \in E^{\pm}(P)$, otherwise $c_{e,P} = 0$. 3: Define a diagonal matrix $S = \mathrm{diag}(\sigma)$, with $S_{ee} = \sigma_e$ for $e \in G(M)$. 4: Compute the kernel matrix $\Sigma = C^\top S C$. 5: Calculate $\alpha = (\Sigma^{-1}\mathbf{1})(\mathbf{1}^\top\Sigma^{-1}\mathbf{1})^{-1}$. 6: Output $C$, $\Sigma$ and $\alpha$. We can find a basis for the path space in linear time. To keep the notation manageable, we will conflate formal sums of the $x_e$, cycles in $H_1(G, \mathbb{Z})$, and their representations as vectors in $\mathbb{R}^{mn}$. Correctness is proven in Appendix A. Algorithm 2 Calculates a basis $\mathcal{P}$ of the path space. Input: index $(i, j)$, an $(m \times n)$ mask $M$. Output: a basis $\mathcal{P}$ for the space of oriented $i$–$j$ paths.
1: If $(i, j)$ is not an edge of $M$, and $i$ and $j$ are in different connected components, then $\mathcal{P}$ is empty. Output $\emptyset$. 2: Otherwise, if $(i, j)$ is not an edge of $M$, add a “dummy” copy. 3: Compute a spanning forest $F$ of $M$ that does not contain $(i, j)$, if possible. 4: For each edge $e \in M \setminus F$, compute the fundamental cycle $C_e$ of $e$ in $F$. 5: If $(i, j)$ is an edge in $M$, output $\{-x_{(i,j)}\} \cup \{C_e - x_{(i,j)} : e \in M \setminus F\}$. 6: Otherwise, let $P_{(i,j)} = C_{(i,j)} - x_{(i,j)}$. Output $\{C_e - P_{(i,j)} : e \in M \setminus (F \cup \{(i, j)\})\}$. Algorithms 3 and 4 can then make use of the calculated $C$, $\alpha$, $\Sigma$ to determine an estimate for any entry $A_{ij}$ and its minimum variance bound. The algorithms follow the exposition in Section 3.2, from where correctness follows; Algorithm 3 additionally provides treatment for the sign of the entries. Algorithm 3 Estimates the entry $a_{ij}$. Input: index $(i, j)$, an $(m \times n)$ mask $M$, log-variances $\sigma$, the partially observed and noisy matrix $B$. Output: The variance-minimizing estimate for $A_{ij}$. 1: Calculate $C$ and $\alpha$ with Algorithm 1. 2: Store $B$ as a vector $b = (\log|B_e|)$ and a sign vector $s = (\operatorname{sgn} B_e)$ with $e \in G(M)$. 3: Calculate $\hat{A}_{ij} = \pm\exp(b^\top C\alpha)$. The sign is $+$ if each column of $s^\top|C|$ ($|\cdot|$ component-wise) contains an odd number of entries $-1$, else $-$. 4: Return $\hat{A}_{ij}$. Algorithm 4 Determines the variance of the entry $\log(A_{ij})$. Input: index $(i, j)$, an $(m \times n)$ mask $M$, log-variances $\sigma$. Output: The variance lower bound for $\log(A_{ij})$. 1: Calculate $\Sigma$ and $\alpha$ with Algorithm 1. 2: Return $\alpha^\top \Sigma \alpha$. Algorithm 4 can be used to obtain the variance bound independently of the observations. The variance bound is relative, due to its multiplicativity, and can be used to approximate absolute bounds when any (in particular, not necessarily the one from Algorithm 3) reconstruction estimate $\hat{A}_{ij}$ is available.
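The linear-algebra core of Algorithms 1 and 4 can be sketched in a few lines of numpy. This assumes the path matrix $C$ has already been produced (e.g. by Algorithm 2); the toy $C$ below is a hypothetical example, not one derived from an actual mask:

```python
import numpy as np

def path_kernel_alpha(C, sigma):
    """Core of Algorithm 1: given the path matrix C (one row per observed
    edge, one column per basis path) and edge variances sigma, form the
    kernel matrix Sigma = C^T S C and the variance-minimizing weights
    alpha = (Sigma^{-1} 1) / (1^T Sigma^{-1} 1)."""
    Sigma = C.T @ np.diag(sigma) @ C
    ones = np.ones(C.shape[1])
    w = np.linalg.solve(Sigma, ones)
    alpha = w / (ones @ w)          # enforces 1^T alpha = 1
    return Sigma, alpha

def variance_bound(Sigma, alpha):
    """Algorithm 4: the minimum variance alpha^T Sigma alpha."""
    return alpha @ Sigma @ alpha

# Hypothetical toy data: two edge-disjoint paths of two edges each,
# all edge variances equal to 1.
C = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
Sigma, alpha = path_kernel_alpha(C, np.ones(4))
```

Each path alone has variance 2; averaging the two independent paths halves it, so `variance_bound(Sigma, alpha)` returns 1.0 here. The returned value bounds the variance of the logarithm of the estimate, which is what converts into multiplicative confidence bounds.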
Namely, if $\hat\sigma_{ij}$ is the estimated variance of the logarithm, we obtain an upper confidence/deviation bound $\hat{A}_{ij} \cdot \exp\bigl(\sqrt{\hat\sigma_{ij}}\bigr)$ for $\hat{A}_{ij}$, and a lower confidence/deviation bound $\hat{A}_{ij} \cdot \exp\bigl(-\sqrt{\hat\sigma_{ij}}\bigr)$, corresponding to the log-confidence $\log \hat{A}_{ij} \pm \sqrt{\hat\sigma_{ij}}$. Also note that if $A_{ij}$ is not reconstructible from the mask $M$, then the deviation bounds will be infinite. 4 Experiments 4.1 Universal error estimates For three different masks, we calculated the predicted minimum variance for each entry of the mask. The mask sizes are all $140 \times 140$. The multiplicative noise was assumed to be $\sigma_e = 1$ for each entry. Figure 1 shows the predicted a-priori minimum variances for each of the masks. The structure of the mask affects the expected error. Known entries generally have least variance, and it is less than the initial variance of 1, which implies that the (independent) estimates coming from other paths can be used to successfully denoise observed data. For unknown entries, the structure of the mask is mirrored in the pattern of the predicted errors; a diffuse mask gives a similar error on each missing entry, while the more structured masks have structured error which is determined by combinatorial properties of the completion graph. Figure 1: The figure shows three pairs of masks and predicted variances. A pair consists of two adjacent squares. The left half is the mask, depicted by a red/blue heatmap with red entries known and blue unknown. The right half is a multicolor heatmap with color scale, showing the predicted variance of the completion. Variances were calculated by our implementation of Algorithm 4. Figure 2: [panels: (a) mean squared errors; (b) error vs. predicted variance] For 10 randomly chosen masks and a $50 \times 50$ true matrix, matrix completions were performed with Nuclear Norm (green), OptSpace (red), and Algorithm 3 (blue) under multiplicative noise with variance increasing in increments of 0.1. For each completed entry, minimum variances were predicted by Algorithm 4. 2(a) shows the mean squared error of the three algorithms for each noise level, coded by the algorithms’ respective colors. 2(b) shows a bin-plot of errors (y-axis) versus predicted variances (x-axis) for each of the three algorithms: for each completed entry, a pair (predicted error, true error) was calculated, the predicted error being the predicted variance, and the actual prediction error being the squared logarithmic error (i.e., $(\log|a_{\mathrm{true}}| - \log|a_{\mathrm{predicted}}|)^2$ for an entry $a$). Then, the points were binned into 11 bins with equal numbers of points. The figure shows the mean of the errors (second coordinate) of the value pairs with predicted variance (first coordinate) in each of the bins; the color corresponds to the particular algorithm; each group of bars is centered on the minimum value of the associated bin. 4.2 Influence of noise level We generated 10 random masks of size $50 \times 50$ with 200 entries sampled uniformly, and a random $(50 \times 50)$ matrix of rank one. The multiplicative noise was chosen entry-wise independent, with variance $\sigma_i = (i-1)/10$ for each entry at noise level $i$. Figure 2(a) compares the Mean Squared Error (MSE) for three algorithms: Nuclear Norm (using the implementation of Tomioka et al. [10]), OptSpace [5], and Algorithm 3. It can be seen that on this particular mask, Algorithm 3 is competitive with the other methods and even outperforms them for low noise. 4.3 Prediction of estimation errors The data are the same as in Section 4.2, as are the compared algorithms. Figure 2(b) compares the error of each of the methods with the variance predicted by Algorithm 4 each time the noise level changed.
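The data generation just described is easy to reproduce; a minimal numpy sketch under the multiplicative log-normal model $B_e = A_e \cdot \exp(\varepsilon_e)$, $\varepsilon_e \sim N(0, \sigma)$ (the function name and seed are our own, not taken from the reference implementation [7]):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_rank_one(m, n, n_obs, sigma):
    """Random rank-one m x n matrix, observed at n_obs uniformly sampled
    positions under entry-wise multiplicative log-normal noise."""
    A = np.outer(rng.standard_normal(m), rng.standard_normal(n))
    mask = np.zeros(m * n, dtype=bool)
    mask[rng.choice(m * n, size=n_obs, replace=False)] = True
    mask = mask.reshape(m, n)
    eps = rng.normal(0.0, np.sqrt(sigma), size=(m, n))
    B = np.where(mask, A * np.exp(eps), 0.0)   # unobserved entries left at 0
    return A, mask, B

A, mask, B = noisy_rank_one(50, 50, 200, 0.3)
```

Sweeping `sigma` over 0, 0.1, ..., 0.9 reproduces the noise-level grid of these experiments.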
The figure shows that for any of the algorithms, the mean of the actual error increases with the predicted error, showing that the error estimate is useful for a-priori prediction of the actual error, independently of the particular algorithm. Note that by construction of the data this statement holds in particular for entry-wise predictions. Furthermore, in quantitative comparison Algorithm 4 also outperforms the other two in each of the bins. The qualitative reversal between the algorithms in Figures 2(a) and (b) comes from the different error measure and the conditioning on the bins. 5 Conclusion In this paper, we have introduced an algebraic combinatorics based method for reconstructing and denoising single entries of an incomplete and noisy matrix, and for calculating confidence bounds of single entry estimations for arbitrary algorithms. We have evaluated these methods against state-of-the-art matrix completion methods. Our method is competitive and yields the first known a priori variance bounds for reconstruction. These bounds coarsely predict the performance of all the methods. Furthermore, our method can reconstruct and estimate the error for single entries. It can be restricted to using only a small number of nearby observations and smoothly improves as more information is added, making it attractive for applications on large scale data. These results are an instance of a general algebraic-combinatorial scheme and viewpoint that we argue is crucial for the future understanding and practical treatment of big data. References [1] E. Acar, D. Dunlavy, and T. Kolda. Link prediction on evolving data using matrix and tensor factorizations. In Data Mining Workshops, 2009. ICDMW’09. IEEE International Conference on, pages 262–269. IEEE, 2009. [2] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In J. Platt, D. Koller, Y. Singer, and S.
Roweis, editors, Advances in NIPS 20, pages 25–32. MIT Press, Cambridge, MA, 2008. [3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717–772, 2009. ISSN 1615-3375. doi: 10.1007/s10208-009-9045-5. URL http://dx.doi.org/10.1007/s10208-009-9045-5. [4] A. Goldberg, X. Zhu, B. Recht, J. Xu, and R. Nowak. Transduction with matrix completion: Three birds with one stone. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 757–765. 2010. [5] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inform. Theory, 56(6):2980–2998, 2010. ISSN 0018-9448. doi: 10.1109/TIT.2010.2046205. URL http://dx.doi.org/10.1109/TIT.2010.2046205. [6] F. J. Király, L. Theran, R. Tomioka, and T. Uno. The algebraic combinatorial approach for low-rank matrix completion. Preprint, arXiv:1211.4116v4, 2012. URL http://arxiv.org/abs/1211.4116. [7] F. J. Király and L. Theran. AlCoCoMa, 2013. http://mloss.org/software/view/524/. [8] A. Menon and C. Elkan. Link prediction via matrix factorization. Machine Learning and Knowledge Discovery in Databases, pages 437–452, 2011. [9] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in NIPS 17, pages 1329–1336. MIT Press, Cambridge, MA, 2005. [10] R. Tomioka, K. Hayashi, and H. Kashima. On the extension of trace norm to tensors. In NIPS Workshop on Tensors, Kernels, and Machine Learning, 2010.
Decision Jungles: Compact and Rich Models for Classification Jamie Shotton Toby Sharp Pushmeet Kohli Sebastian Nowozin John Winn Antonio Criminisi Microsoft Research Abstract Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization. 1 Introduction Decision trees have a long history in machine learning and were one of the first models proposed for inductive learning [14]. Their use for classification and regression was popularized by the work of Breiman [6]. More recently, they have become popular in fields such as computer vision and information retrieval, partly due to their ability to handle large amounts of data and make efficient predictions. 
This has led to successes in tasks such as human pose estimation in depth images [29]. Although trees allow making predictions efficiently, learning the optimal decision tree is an NP-hard problem [15]. In his seminal work, Quinlan proposed efficient approximate methods for learning decision trees [27, 28]. Some researchers have argued that learning optimal decision trees could be harmful as it may lead to overfitting [21]. Overfitting may be reduced by controlling the model complexity, e.g. via various stopping criteria such as limiting the tree depth, and post-hoc pruning. These techniques for controlling model complexity impose implicit limits on the type of classification boundaries and feature partitions that can be induced by the decision tree. A number of approaches have been proposed in the literature to regularize tree models without limiting their modelling power. The work in [7] introduced a non-greedy Bayesian sampling-based approach for constructing decision trees. A prior over the space of trees and their parameters induces a posterior distribution, which can be used, for example, to marginalize over all tree models. There are similarities between the idea of randomly drawing multiple trees via a Bayesian procedure and construction of random tree ensembles (forests) using bagging, a method shown to be effective in many applications [1, 5, 9]. Another approach to improve generalization is via large-margin tree classifiers [4]. While the above-mentioned methods can reduce overfitting, decision trees face a fundamental limitation: their exponential growth with depth. For large datasets where deep trees have been shown to be more accurate than large ensembles (e.g. [29]), this exponential growth poses a problem for implementing tree models on memory-constrained hardware such as embedded or mobile processors.
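The growth gap is easy to make concrete by counting nodes: a complete binary tree of depth D has 2^(D+1) − 1 nodes, while a rooted DAG whose per-level width is capped at M has at most roughly MD. A small sketch (the depth and the cap M = 256 are illustrative values, not figures from this paper):

```python
def tree_nodes(depth):
    """Total nodes in a complete binary tree of the given depth."""
    return 2 ** (depth + 1) - 1

def jungle_nodes(depth, max_width):
    """Upper bound on nodes in a rooted DAG whose width per level is
    capped at max_width, as in a decision jungle merging schedule."""
    return sum(min(2 ** d, max_width) for d in range(depth + 1))

print(tree_nodes(20))         # 2097151
print(jungle_nodes(20, 256))  # 3583
```

At depth 20 the capped DAG needs roughly three orders of magnitude fewer nodes, which is the regime where the memory argument bites.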
In this paper, we investigate the use of randomized ensembles of rooted decision directed acyclic graphs (DAGs) as a means to obtain compact and yet accurate classifiers. We call these ensembles ‘decision jungles’, after the popular ‘decision forests’. We formulate the task of learning each DAG in a jungle as an energy minimization problem. Building on the information gain measure commonly used for training decision trees, we propose an objective that is defined jointly over the features of the split nodes and the structure of the DAG. We then propose two minimization methods for learning the optimal DAG. Both methods alternate between optimizing the split functions at the nodes of the DAG and optimizing the placement of the branches emanating from the parent nodes. As detailed later, they differ in the way they optimize the placement of branches. We evaluate jungles on a number of challenging labelling problems. Our experiments below quantify a substantially reduced memory footprint for decision jungles compared to standard decision forests and several baselines. Furthermore, the experiments also show an important side-benefit of jungles: our optimization strategy is able to achieve considerably improved generalization for only a small extra cost in the number of features evaluated per test example. Background and Prior Work. The use of rooted decision DAGs (‘DAGs’ for short) has been explored by a number of papers in the literature. In [16, 26], DAGs were used to combine the outputs of C ×C binary 1-v-1 SVM classifiers into a single C-class classifier. More recently, in [3], DAGs were shown to be a generalization of cascaded boosting. It has also been shown that DAGs lead to accurate predictions while having lower model complexity, subtree replication, and training data fragmentation compared to decision trees. Most existing algorithms for learning DAGs involve training a conventional tree that is later manipulated into a DAG. 
For instance, [17] merges same-level nodes which are associated with the same split function. They report performance similar to that of C4.5-trained trees, but with a much reduced number of nodes. Oliveira [23] used a local search method for constructing DAGs in which tree nodes are removed or merged together based on similarity of the underlying sub-graphs and the corresponding message length reduction. A message-length criterion is also employed by the node merging algorithm in [24]. Chou [8] investigated k-means clustering for learning decision trees and DAGs (similar to ‘ClusterSearch’ below), though did not jointly optimize the features with the DAG structure. Most existing work on DAGs has focused on showing how the size and complexity of the learned tree model can be reduced without substantially degrading its accuracy. However, their use for increasing test accuracy has attracted comparatively little attention [10, 20, 23]. In this paper we show how jungles, ensembles of DAGs, optimized so as to reduce a well-defined objective function, can produce results which are superior to those of analogous decision tree ensembles, both in terms of model compactness as well as generalization. Our work is related to [25], where the authors achieve compact classification DAGs via post-training removal of redundant subtrees in forests. In contrast, our probabilistic node merging is applied directly and efficiently during training, and both saves space as well as achieves greater generalization for multi-class classification. Contributions.
In summary, our contributions are: (i) we highlight that traditional decision trees grow exponentially in memory with depth, and propose decision jungles as a means to avoid this; (ii) we propose and compare two learning algorithms that, within each level, jointly optimize an objective function over both the structure of the graph and the features; (iii) we show that not only do the jungles dramatically reduce memory consumption, but they can also improve generalization. 2 Forests and Jungles Before delving into the details of our method for learning decision jungles, we first briefly discuss how decision trees and forests are used for classification problems and how they relate to jungles. Figure 1: Motivation and notation. (a) An example use of a rooted decision DAG for classifying image patches as belonging to grass, cow or sheep classes. Using DAGs instead of trees reduces the number of nodes and can result in better generalization. For example, differently coloured patches of grass (yellow and green) are merged together into node 4, because of similar class statistics. This may encourage generalization by representing the fact that grass may appear as a mix of yellow and green. (b) Notation for a DAG, its nodes, features and branches. See text for details. Binary decision trees. A binary decision tree is composed of a set of nodes, each with an in-degree of 1, except the root node. The out-degree for every internal (split) node of the tree is 2, and for the leaf nodes it is 0. Each split node contains a binary split function (‘feature’) which decides whether an input instance that reaches that node should progress through the left or right branch emanating from the node. Prediction in binary decision trees involves every input starting at the root and moving down as dictated by the split functions encountered at the split nodes.
Prediction concludes when the instance reaches a leaf node, each of which contains a unique prediction. For classification trees, this prediction is a normalized histogram over class labels. Rooted binary decision DAGs. Rooted binary DAGs have a different architecture compared to decision trees and were introduced by Platt et al. [26] as a way of combining binary classifiers for multi-class classification tasks. More specifically, a rooted binary DAG has: (i) one root node, with in-degree 0; (ii) multiple split nodes, with in-degree ≥1 and out-degree 2; (iii) multiple leaf nodes, with in-degree ≥1 and out-degree 0. Note that in contrast to [26], if we have a C-class classification problem, here we do not necessarily expect to have C DAG leaves. In fact, the leaf nodes are not necessarily pure; and each leaf remains associated with an empirical class distribution. Classification DAGs vs classification trees. We explain the relationship between decision trees and decision DAGs using the image classification task illustrated in Fig. 1(a) as an example. We wish to classify image patches into the classes: cow, sheep or grass. A labelled set of patches is used to train a DAG. Since patches corresponding to different classes may have different average intensity, the root node may decide to split them according to this feature. Similarly, the two child nodes may decide to split the patches further based on their chromaticity. This results in grass patches with different intensity and chromaticity (bright yellow and dark green) ending up in different subtrees. However, if we detect that two such nodes are associated with similar class distributions (peaked around grass in this case) and merge them, then we get a single node with training examples from both grass types. This helps capture the degree of variability intrinsic to the training data, and reduce the classifier complexity.
While this is clearly a toy example, we hope it gives some intuition as to why rooted DAGs are expected to achieve the improved generalization demonstrated in Section 4. 3 Learning Decision Jungles We train each rooted decision DAG in a jungle independently, though there is scope for merging across DAGs as future work. Our method for training DAGs works by growing the DAG one level at a time (jointly training all levels of the tree simultaneously remains an expensive operation [15]). At each level, the algorithm jointly learns the features and branching structure of the nodes. This is done by minimizing an objective function defined over the predictions made by the child nodes emanating from the nodes whose split features are being learned. Consider the set of nodes at two consecutive levels of the decision DAG (as shown in Fig. 1b). This set consists of the set of parent nodes $N_p$ and a set of child nodes $N_c$. We assume in this work a known value for $M = |N_c|$. $M$ is a parameter of our method and may vary per level. Let $\theta_i$ denote the parameters of the split feature function $f$ for parent node $i \in N_p$, and $S_i$ denote the set of labelled training instances $(x, y)$ that reach node $i$. Given $\theta_i$ and $S_i$, we can compute the sets of instances from node $i$ that travel through its left and right branches as $S_i^L(\theta_i) = \{(x, y) \in S_i \mid f(\theta_i, x) \le 0\}$ and $S_i^R(\theta_i) = S_i \setminus S_i^L(\theta_i)$, respectively. We use $l_i \in N_c$ to denote the current assignment of the left outward edge from parent node $i \in N_p$ to a child node, and similarly $r_i \in N_c$ for the right outward edge. Then, the set of instances that reach any child node $j \in N_c$ is: $S_j(\{\theta_i\}, \{l_i\}, \{r_i\}) = \bigcup_{i \in N_p:\, l_i = j} S_i^L(\theta_i) \,\cup\, \bigcup_{i \in N_p:\, r_i = j} S_i^R(\theta_i)$. (1) The objective function $E$ associated with the current level of the DAG is a function of $\{S_j\}_{j \in N_c}$. We can now formulate the problem of learning the parameters of the decision DAG as a joint minimization of the objective over the split parameters $\{\theta_i\}$ and the child assignments $\{l_i\}, \{r_i\}$.
Thus, the task of learning the current level of a DAG can be written as: $\min_{\{\theta_i\},\{l_i\},\{r_i\}} E(\{\theta_i\}, \{l_i\}, \{r_i\})$. (2) Maximizing the Information Gain. Although our method can be used for optimizing any objective $E$ that decomposes over nodes, including in theory a regression-based objective, for the sake of simplicity we focus in this work on the information gain objective commonly used for classification problems. The information gain objective requires the minimization of the total weighted entropy of instances, defined as: $E(\{\theta_i\}, \{l_i\}, \{r_i\}) = \sum_{j \in N_c} |S_j|\, H(S_j)$ (3) where $S_j$ is defined in (1), and $H(S)$ is the Shannon entropy of the class labels $y$ in the training instances $(x, y) \in S$. Note that if the number of child nodes $M$ is equal to twice the number of parent nodes, i.e. $M = 2|N_p|$, then the DAG becomes a tree and we can optimize the parameters of the different nodes independently, as done in standard decision tree training, to achieve optimal results. 3.1 Optimization The minimization problem described in (2) is hard to solve exactly. We propose two local-search-based algorithms for its solution: LSearch and ClusterSearch. As local optimizations, neither is likely to reach a global minimum, but in practice both are effective at minimizing the objective. The experiments below show that the simpler LSearch appears to be more effective. LSearch. The LSearch method starts from a feasible assignment of the parameters, and then alternates between two coordinate descent steps. In the first (split-optimization) step, it sequentially goes over every parent node $k$ in turn and tries to find the split function parameters $\theta_k$ that minimize the objective function, keeping the values of $\{l_i\}, \{r_i\}$ and the split parameters of all other nodes fixed: for $k \in N_p$: $\theta_k \leftarrow \arg\min_{\theta'_k} E(\{\theta'_k\} \cup \{\theta_i\}_{i \in N_p \setminus \{k\}}, \{l_i\}, \{r_i\})$. This minimization over $\theta'_k$ is done by random sampling in a manner similar to decision forest training [9].
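The quantity minimized throughout is the weighted entropy of (3); a minimal sketch, assuming the per-child label arrays have already been gathered via (1):

```python
import numpy as np

def weighted_entropy(child_sets):
    """Objective (3): sum over child nodes j of |S_j| * H(S_j), where each
    element of child_sets is the array of class labels reaching child j."""
    total = 0.0
    for labels in child_sets:
        if len(labels) == 0:
            continue
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        total -= len(labels) * (p * np.log2(p)).sum()
    return total

# Two children: one perfectly mixed (entropy 1 bit), one pure (entropy 0).
E = weighted_entropy([np.array([0, 0, 1, 1]), np.array([1, 1, 1, 1])])  # 4.0
```

Each candidate split or branch reassignment is scored by re-evaluating this quantity and keeping the best.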
In the second (branch-optimization) step, the algorithm redirects the branches emanating from each parent node to different child nodes, so as to yield a lower objective: for $k \in N_p$: $l_k \leftarrow \arg\min_{l'_k \in N_c} E(\{\theta_i\}, \{l'_k\} \cup \{l_i\}_{i \in N_p \setminus \{k\}}, \{r_i\})$; $r_k \leftarrow \arg\min_{r'_k \in N_c} E(\{\theta_i\}, \{l_i\}, \{r'_k\} \cup \{r_i\}_{i \in N_p \setminus \{k\}})$. The algorithm terminates when no changes are made, and is guaranteed to converge. We found that a greedy initialization of LSearch (allocating splits to the most energetic parent nodes first) resulted in a lower objective after optimization than a random initialization. We also found that a stochastic version of the above algorithm, where only a single randomly chosen node was optimized at a time, resulted in similar reductions in the objective for considerably less compute. ClusterSearch. The ClusterSearch algorithm also alternates between optimizing the branching variables and the split parameters, but differs in that it optimizes the branching variables more globally. First, $2|N_p|$ temporary child nodes are built via conventional tree-based training-objective minimization procedures. Second, the temporary nodes are clustered into $M = |N_c|$ groups to produce a DAG. Node clustering is done via the Bregman information objective optimization technique in [2]. 4 Experiments and results This section compares testing accuracy and computational performance of our decision jungles with state-of-the-art forests of binary decision trees and their variants on several classification problems. 4.1 Classification Tasks and Datasets We focus on semantic image segmentation (pixel-wise classification) tasks, where decision forests have proven very successful [9, 19, 29]. We evaluate our jungle model on the following datasets: (A) Kinect body part classification [29] (31 classes). We train each tree or DAG in the ensemble on a separate set of 1000 training images with 250 example pixels randomly sampled per image. Following [29], 3 trees or DAGs are used unless otherwise specified.
We test on (a common set of) 1000 images drawn randomly from the MSRC-5000 test set [29]. We use a DAG merging schedule of $|N_c^D| = \min(M,\ 2^{\min(5,D)} \cdot 1.2^{\max(0,D-5)})$, where $M$ is a fixed constant maximum width and $D$ is the current level (depth) in the tree. (B) Facial features segmentation [18] (8 classes including background). We train each of 3 trees or DAGs in the ensemble on a separate set of 1000 training images using every pixel. We use a DAG merging schedule of $|N_c^D| = \min(M, 2^D)$. (C) Stanford background dataset [12] (8 classes). We train on all 715 labelled images, seeding our feature generator differently for each of 3 trees or DAGs in the ensemble. Again, we use a DAG merging schedule of $|N_c^D| = \min(M, 2^D)$. (D) UCI data sets [22]. We use 28 classification data sets from the UCI corpus as prepared on the libsvm data set repository.² For each data set, all instances from the training, validation, and test set, if available, are combined into a large set of instances. We repeat the following procedure five times: randomly permute the instances, and divide them 50/50 into training and testing sets. Train on the training set, evaluate the multiclass accuracy on the test set. We use 8 trees or DAGs per ensemble. Further details regarding parameter choices can be found in the supplementary material. For all segmentation tasks we use the Jaccard index (intersection over union) as adopted in PASCAL VOC [11]. Note that this measure is stricter than e.g. the per-class average metric reported in [29]. On the UCI dataset we report the standard classification accuracy numbers. In order to keep training time low, the training sets are somewhat reduced compared to the original sources, especially for (A). However, identical trends were observed in limited experiments with more training data. 4.2 Baseline Algorithms We compare our decision jungles with several tree-based alternatives, listed below. Standard Forests of Trees.
We have implemented standard classification forests, as described in [9] and building upon their publicly available implementation. Baseline 1: Fixed-Width Trees (A). As a first variant on forests, we train binary decision trees with an enforced maximum width $M$ at each level, and thus a reduced memory footprint. This is useful to tease out whether the improved generalization of jungles is due more to the reduced model complexity or to the node merging. Training a tree with fixed width is achieved by ranking the leaf nodes $i$ at each level by decreasing value of $E(S_i)$ and then greedily splitting only the $M/2$ nodes with highest value of the objective. The leaves that are not split are discarded. Baseline 2: Fixed-Width Trees (B). A related, second tree-based variant is obtained by greedily optimizing the best split candidate for all leaf nodes, then ranking the leaves by reduction in the objective, and greedily taking only the $M/2$ splits that most reduce the objective.³ The leaf nodes that are not split are discarded from further consideration. ²http://www.csie.ntu.edu.tw/˜cjlin/libsvmtools/datasets/ Figure 2: Accuracy comparisons. Each graph compares Jaccard scores for jungles vs. standard decision forests and three other baselines. (a, b, c) Segmentation accuracy as a function of the total number of nodes in the ensemble (i.e. memory usage) for three different datasets. (d, e, f) Segmentation accuracy as a function of the maximum number of test comparisons per pixel (maximum depth × size of ensemble), for the same datasets. Jungles achieve the same accuracy with fewer nodes. Jungles also improve the overall generalization of the resulting classifier. Baseline 3: Priority Scheduled Trees. As a final variant, we consider priority-driven tree training. Current leaf nodes are ranked by the reduction in the objective that would be achieved by splitting them. At each iteration, the top $M$ nodes are split, optimal splits computed, and the new children added into the priority queue. This baseline is identical to baseline 2 above, except that nodes that are not split at a particular iteration are part of the ranking at subsequent iterations. This can be seen as a form of tree pruning [13], and in the limit, will result in standard binary decision trees. As shown later, the trees at intermediate iterations can give surprisingly good generalization. 4.3 Comparative Experiments Prediction Accuracy vs. Model Size. One of our two main hypotheses is that jungles can reduce the amount of memory used compared to forests.
To investigate this we compared jungles to the baseline forests on three different datasets. The results are shown in Fig. 2 (top row). Note that the jungles of merged DAGs achieve the same accuracy as the baselines with substantially fewer total nodes. For example, on the Kinect dataset, to achieve an accuracy of 0.2, the jungle requires around 3000 nodes whereas the standard forest requires around 22000 nodes. We use the total number of nodes as a proxy for memory usage; the two are strongly linked, and the proxy works well in practice. For example, the forest of 3 trees occupied 80MB on the Kinect dataset vs. 9MB for a jungle of 3 DAGs. On the Faces dataset the forest of 3 trees occupied 7.17MB vs. 1.72MB for 3 DAGs.

A second hypothesis is that merging provides a good way to regularize the training and thus increases generalization. Firstly, observe how all tree-based baselines saturate and in some cases start to overfit as the trees become larger. This is a common effect with deep trees and small ensembles. Our merged DAGs appear to be able to avoid this overfitting (at least insofar as we have trained them here), and further, actually have increased the generalization quite considerably.

3 In other words, baseline 1 optimizes the most energetic nodes, whereas baseline 2 optimizes all nodes and takes only the splits that most reduce the objective.

Figure 3: (a, b) Effect of ensemble size on test accuracy. (a) plots accuracy against the total number of nodes in the ensemble, whereas (b) plots accuracy against the maximum number of computations required at test time. For a fixed ensemble size jungles of DAGs achieve consistently better generalization than conventional forests. (c) Effect of merging parameter M on test accuracy. The model width M has a regularizing effect on our DAG model. For other results shown on this dataset, we set M = 256. See text for details.

Interestingly, the width-limited tree-based baselines perform substantially better than the standard tree training algorithm, and in particular the priority scheduling appears to work very well, though still inferior to our DAG model. This suggests that both reducing the model size and node merging have a substantial positive effect on generalization.

Prediction Accuracy vs. Depth. We do not expect the reduction in memory given by merging to come for free: there is likely to be a cost in terms of the number of nodes evaluated for any individual test example. Fig. 2 (bottom row) shows this trade-off. The large gains in memory footprint and accuracy come at a relatively small cost in the number of feature evaluations at test time. Again, however, the improved generalization is also evident. The need to train deeper also has some effect on training time. For example, training 3 trees for Kinect took 32 mins vs. 50 mins for 3 DAGs.

Effect of Ensemble Size. Fig. 3 (a, b) compares results for 1, 3, and 9 trees/DAGs in a forest/jungle. Note from (a) that in all cases, a jungle of DAGs uses substantially less memory than a standard forest for the same accuracy, and also that the merging consistently increases generalization.
In (b) we can see again that this comes at a cost in terms of test-time evaluations, but note that in several regions the upper envelope of the curves belongs to DAGs rather than trees.

LSearch vs. ClusterSearch Optimization. In experiments we observed the LSearch algorithm to perform better than the ClusterSearch optimization, both in terms of the objective achieved (reported in the table below for the face dataset) and also in test accuracy. The difference is slight, yet very consistent. In our experiments the LSearch algorithm was used with 250 iterations.

Number of nodes          2047   5631   10239  20223  30207  40191
LSearch objective        0.735  0.596  0.514  0.423  0.375  0.343
ClusterSearch objective  0.739  0.605  0.524  0.432  0.382  0.351

Effect of Model Width. We performed an experiment investigating changes to M, the maximum tree width. Fig. 3 (c) shows the results. The merged DAGs consistently outperform the standard trees both in terms of memory consumption and generalization, for all settings of M evaluated. Smaller values of M improve accuracy while keeping memory constant, but must be trained deeper.

Qualitative Image Segmentation Results. Fig. 4 shows some randomly chosen segmentation results on both the Kinect and Faces data. On the Kinect data, forests of 9 trees are compared to jungles of 9 DAGs. The jungles appear to give smoother segmentations than the standard forests, perhaps more so than the quantitative results would suggest. On the Faces data, small forests of 3 trees are compared to jungles of 3 DAGs, with each model containing only 48k nodes in total.

Results on UCI Datasets. Figure 5 reports the test classification accuracy as a function of model size for two UCI data sets. The full results for all UCI data sets are reported in the supplementary material.
Overall, using DAGs allows us to achieve higher accuracies at smaller model sizes, but in most cases the generalization performance is not improved or only slightly improved. The largest improvement for DAGs over trees is reported for the largest dataset (Poker).

Figure 4: Qualitative results. A few example results on the Kinect body parts and face segmentation tasks, comparing standard trees and merged DAGs with the same number of nodes.

Figure 5: UCI classification results for two data sets, MNIST-60k and Poker, eight trees or DAGs per ensemble. The MNIST result is typical in that the accuracy improvement of DAGs over trees is small but achieved at a smaller number of nodes (memory). The largest UCI data set (Poker, 1M instances) profits most from the use of randomized DAGs.

5 Conclusion

This paper has presented decision jungles as ensembles of rooted decision DAGs. These DAGs are trained, level-by-level, by jointly optimizing an objective function over both the choice of split function and the structure of the DAG. Two local optimization strategies were evaluated, with an efficient move-making algorithm producing the best results. Our evaluation on a number of diverse and challenging classification tasks has shown jungles to improve both memory efficiency and generalization for several tasks compared to conventional decision forests and their variants. We believe that decision jungles can be extended to regression tasks.
We also plan to investigate multiply rooted trees and merging between DAGs within a jungle.

Acknowledgements. The authors would like to thank Albert Montillo for initial investigation of related ideas.

References
[1] Y. Amit and D. Geman. Randomized inquiries about shape; an application to handwritten digit recognition. Technical Report 401, Dept. of Statistics, University of Chicago, IL, Nov 1994.
[2] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, Oct. 2005.
[3] D. Benbouzid, R. Busa-Fekete, and B. Kégl. Fast classification using sparse decision DAGs. In Proc. Intl Conf. on Machine Learning (ICML), New York, NY, USA, 2012. ACM.
[4] K. P. Bennett, N. Cristianini, J. Shawe-Taylor, and D. Wu. Enlarging the margins in perceptron decision trees. Machine Learning, 41(3):295–313, 2000.
[5] L. Breiman. Random forests. Machine Learning, 45(1), 2001.
[6] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and Regression Trees. Chapman and Hall/CRC, 1984.
[7] H. Chipman, E. I. George, and R. E. McCulloch. Bayesian CART model search. Journal of the American Statistical Association, 93:935–960, 1997.
[8] P. Chou. Optimal partitioning for classification and regression trees. IEEE Trans. PAMI, 13(4), 1991.
[9] A. Criminisi and J. Shotton. Decision Forests for Computer Vision and Medical Image Analysis. Springer, 2013.
[10] T. Elomaa and M. Kääriäinen. On the practice of branching program boosting. In European Conf. on Machine Learning (ECML), 2001.
[11] M. Everingham, L. van Gool, C. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) Challenge. http://www.pascal-network.org/challenges/VOC/.
[12] S. Gould, R. Fulton, and D. Koller. Decomposing a scene into geometric and semantically consistent regions. In Proc. IEEE ICCV, 2009.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[14] E. B. Hunt, J. Marin, and P. T. Stone. Experiments in Induction. Academic Press, New York, 1966.
[15] L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15–17, 1976.
[16] B. Kijsirikul, N. Ussivakul, and S. Meknavin. Adaptive directed acyclic graphs for multiclass classification. In Pacific Rim Intl Conference on Artificial Intelligence (PRICAI), 2002.
[17] R. Kohavi and C.-H. Li. Oblivious decision trees, graphs, and top-down pruning. In Intl Joint Conf. on Artificial Intelligence (IJCAI), 1995.
[18] P. Kontschieder, P. Kohli, J. Shotton, and A. Criminisi. GeoF: Geodesic forests for learning coupled predictors. In Proc. IEEE CVPR, 2013.
[19] V. Lepetit and P. Fua. Keypoint recognition using randomized trees. IEEE Trans. PAMI, 2006.
[20] J. Mahoney and R. J. Mooney. Initializing ID5R with a domain theory: some negative results. Technical Report 91-154, Dept. of Computer Science, University of Texas, Austin, TX, 1991.
[21] K. V. S. Murthy and S. L. Salzberg. On growing better decision trees from data. PhD thesis, Johns Hopkins University, 1995.
[22] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI repository of machine learning databases. Technical Report 28, University of California, Irvine, Department of Information and Computer Science, 1998.
[23] A. L. Oliveira and A. Sangiovanni-Vincentelli. Using the minimum description length principle to infer reduced ordered decision graphs. Machine Learning, 12, 1995.
[24] J. J. Oliver. Decision graphs – an extension of decision trees. Technical Report 92/173, Dept. of Computer Science, Monash University, Victoria, Australia, 1992.
[25] A. H. Peterson and T. R. Martinez. Reducing decision trees ensemble size using parallel decision DAGs. Intl Journ. on Artificial Intelligence Tools, 18(4), 2009.
[26] J. C. Platt, N. Cristianini, and J. Shawe-Taylor. Large margin DAGs for multiclass classification. In Proc. NIPS, pages 547–553, 2000.
[27] J. R. Quinlan. Induction of decision trees. Machine Learning, 1986.
[28] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
[29] J. Shotton, R. Girshick, A. Fitzgibbon, T. Sharp, M. Cook, M. Finocchio, R. Moore, P. Kohli, A. Criminisi, A. Kipman, and A. Blake. Efficient human pose estimation from single depth images. IEEE Trans. PAMI, 2013.
Bayesian Estimation of Latently-grouped Parameters in Undirected Graphical Models

Jie Liu, Dept of CS, University of Wisconsin, Madison, WI 53706, jieliu@cs.wisc.edu
David Page, Dept of BMI, University of Wisconsin, Madison, WI 53706, page@biostat.wisc.edu

Abstract

In large-scale applications of undirected graphical models, such as social networks and biological networks, similar patterns occur frequently and give rise to similar parameters. In this situation, it is beneficial to group the parameters for more efficient learning. We show that even when the grouping is unknown, we can infer these parameter groups during learning via a Bayesian approach. We impose a Dirichlet process prior on the parameters. Posterior inference usually involves calculating intractable terms, and we propose two approximation algorithms, namely a Metropolis-Hastings algorithm with auxiliary variables and a Gibbs sampling algorithm with “stripped” Beta approximation (Gibbs SBA). Simulations show that both algorithms outperform conventional maximum likelihood estimation (MLE). Gibbs SBA’s performance is close to Gibbs sampling with exact likelihood calculation. Models learned with Gibbs SBA also generalize better than the models learned by MLE on real-world Senate voting data.

1 Introduction

Undirected graphical models, a.k.a. Markov random fields (MRFs), have many real-world applications such as social networks and biological networks. In these large-scale networks, similar kinds of relations can occur frequently and give rise to repeated occurrences of similar parameters, but the grouping pattern among the parameters is usually unknown. For a social network example, suppose that we collect voting data over the last 20 years from a group of 1,000 people who are related to each other through different types of relations (such as family, co-workers, classmates, friends and so on), but the relation types are usually unknown.
If we use a binary pairwise MRF to model the data, each binary node denotes one person’s vote, and two nodes are connected if the two people are linked in the social network. Eventually we want to estimate the pairwise potential functions on edges, which can provide insights about how the relations between people affect their decisions. This can be done via standard maximum likelihood estimation (MLE), but the latent grouping pattern among the parameters is totally ignored, and the model can be overparametrized. Therefore, two questions naturally arise. Can MRF parameter learners automatically identify these latent parameter groups during learning? Will this further abstraction make the model generalize better, analogous to the lessons we have learned from hierarchical modeling [9] and topic modeling [5]?

This paper shows that it is feasible and potentially beneficial to identify the latent parameter groups during MRF parameter learning. Specifically, we impose a Dirichlet process prior on the parameters to accommodate our uncertainty about the number of the parameter groups. Posterior inference can be done by Markov chain Monte Carlo with proper approximations. We propose two approximation algorithms, a Metropolis-Hastings algorithm with auxiliary variables and a Gibbs sampling algorithm with stripped Beta approximation (Gibbs SBA). Algorithmic details are provided in Section 3 after we review related parameter estimation methods in Section 2. In Section 4, we evaluate our Bayesian estimates and the classical MLE on different models, and both algorithms outperform classical MLE. The Gibbs SBA algorithm performs very close to the Gibbs sampling algorithm with exact likelihood calculation. Models learned with Gibbs SBA also generalize better than the models learned by MLE on real-world Senate voting data in Section 5. We finally conclude in Section 6.

2 Maximum Likelihood Estimation and Bayesian Estimation for MRFs

Let X = {0, 1, ..., m−1} be a discrete space.
Suppose that we have an MRF defined on a random vector X ∈ X^d described by an undirected graph G(V, E) with d nodes in the node set V and r edges in the edge set E. The probability of one sample x from the MRF parameterized by θ is

P(x; θ) = P̃(x; θ)/Z(θ),   (1)

where Z(θ) is the partition function, P̃(x; θ) = ∏_{c∈C(G)} φ_c(x; θ_c) is some unnormalized measure, C(G) is some subset of cliques in G, and φ_c is the potential function defined on the clique c parameterized by θ_c. In this paper, we consider binary pairwise MRFs for simplicity, i.e. C(G) = E and m = 2. We also assume that each potential function φ_c is parameterized by one parameter θ_c, namely φ_c(X; θ_c) = θ_c^{I(X_u=X_v)} (1−θ_c)^{I(X_u≠X_v)}, where I(X_u=X_v) indicates whether the two nodes u and v connected by edge c take the same value, and 0 < θ_c < 1, for all c = 1, ..., r. Thus, θ = {θ_1, ..., θ_r}. Suppose that we have n independent samples X = {x^1, ..., x^n} from (1), and we want to estimate θ.

Maximum Likelihood Estimate: The MLE of θ maximizes the log-likelihood function L(θ|X), which is concave w.r.t. θ. Therefore, we can use gradient ascent to find the global maximum of the likelihood function and find the MLE of θ. The partial derivative of L(θ|X) with respect to θ_i is

∂L(θ|X)/∂θ_i = (1/n) ∑_{j=1}^n ψ_i(x^j) − E_θ ψ_i = E_X ψ_i − E_θ ψ_i,

where ψ_i is the sufficient statistic corresponding to θ_i after we rewrite the density into the exponential family form, and E_θ ψ_i is the expectation of ψ_i with respect to the distribution specified by θ. However, the exact computation of E_θ ψ_i takes time exponential in the treewidth of G. A few sampling-based methods have been proposed, with different ways of generating particles and computing E_θ ψ from the particles, including MCMC-MLE [11, 34], particle-filtered MCMC-MLE [1], contrastive divergence [15] and its variations such as persistent contrastive divergence (PCD) [29] and fast PCD [30].
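The moment-matching form of the gradient can be made concrete with a small sampling sketch. The toy graph, parameter values, and function names below are invented for illustration (this is not the samplers used in the papers above): for the agreement parameterization, E_θψ_c is simply Pr(X_u = X_v), estimated here from Gibbs particles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MRF: a 4-node cycle; theta[c] > 0.5 favours edge agreement.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
theta = np.array([0.8, 0.6, 0.8, 0.6])

def estimate_model_moments(theta, n_particles=200, n_sweeps=50, d=4):
    """Estimate E_theta[psi_c] = Pr(X_u = X_v) per edge from Gibbs particles."""
    agree = np.zeros(len(edges))
    for _ in range(n_particles):
        x = rng.integers(0, 2, size=d)
        for _ in range(n_sweeps):          # long Gibbs run per particle
            for v in range(d):
                p = np.ones(2)             # unnormalized P(x_v = 0), P(x_v = 1)
                for c, (a, b) in enumerate(edges):
                    if v in (a, b):
                        other = x[b] if v == a else x[a]
                        p[other] *= theta[c]        # value agreeing with neighbour
                        p[1 - other] *= 1 - theta[c]
                x[v] = int(rng.random() < p[1] / p.sum())
        agree += [x[a] == x[b] for a, b in edges]
    return agree / n_particles

model_moments = estimate_model_moments(theta)
# The gradient step would then use data_moments - model_moments per edge.
```

Edges with larger θ_c should show higher empirical agreement, mirroring the E_X ψ − E_θ ψ balance that gradient ascent drives to zero.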
Note that contrastive divergence is related to pseudo-likelihood [4] and ratio matching [17, 16], and together with other MRF parameter estimators [13, 31, 12] can be unified as minimum KL contraction [18].

Bayesian Estimate: Let π(θ) be a prior of θ; then its posterior is P(θ|X) ∝ π(θ) P̃(X; θ)/Z(θ). The Bayesian estimate of θ is its posterior mean. Exact sampling from P(θ|X) is known as doubly-intractable for general MRFs [21]. If we use the Metropolis-Hastings algorithm, the Metropolis-Hastings ratio is

a(θ*|θ) = [π(θ*) P̃(X; θ*) Q(θ|θ*)/Z(θ*)] / [π(θ) P̃(X; θ) Q(θ*|θ)/Z(θ)],   (2)

where Q(θ*|θ) is some proposal distribution from θ to θ*, and with probability min{1, a(θ*|θ)} we accept the move from θ to θ*. The real hurdle is that we have to evaluate the intractable ratio Z(θ)/Z(θ*). In [20], Møller et al. introduce one auxiliary variable y on the same space as x, and the state variable is extended to (θ, y). They set the new proposal distribution for the extended state to Q(θ, y|θ*, y*) = Q(θ|θ*) P̃(y; θ)/Z(θ) to cancel Z(θ)/Z(θ*) in (2). Therefore, by ignoring y, we can generate the posterior samples of θ via Metropolis-Hastings. Technically, this auxiliary-variable approach requires perfect sampling [25], but [20] pointed out that other, simpler Markov chain methods also work, with the proviso that the chain converges adequately to the equilibrium distribution.

3 Bayesian Parameter Estimation for MRFs with Dirichlet Process Prior

In order to model the latent parameter groups, we impose a Dirichlet process prior on θ, which accommodates our uncertainty about the number of groups. Then, the generating model is

G ∼ DP(α0, G0)
θ_i | G ∼ G, i = 1, ..., r    (3)
x^j | θ ∼ F(θ), j = 1, ..., n,

where F(θ) is the distribution specified by (1), G0 is the base distribution (e.g. Unif(0, 1)), and α0 is the concentration parameter. With probability 1.0, the distribution G drawn from DP(α0, G0) is discrete, and places its mass on a countably infinite collection of atoms drawn from G0.
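The DP prior in model (3) admits a sequential "Chinese restaurant process" view, under which the grouping of the r parameters can be sampled directly. The following is a hypothetical illustrative sketch (names, seed, and sizes are ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def crp_groups(r, alpha0):
    """Draw group labels c_1..c_r from the Chinese restaurant process,
    the marginal over groupings induced by DP(alpha0, G0)."""
    c = np.zeros(r, dtype=int)
    counts = [1]                    # theta_1 starts the first group
    for i in range(1, r):
        # existing group with prob proportional to its size; new group prop. to alpha0
        weights = np.array(counts + [alpha0], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(0)
        counts[k] += 1
        c[i] = k
    return c

c = crp_groups(r=8190, alpha0=1.0)             # roughly alpha0 * ln(r) groups on average
phi = rng.uniform(0.0, 1.0, size=c.max() + 1)  # one distinct value per group, G0 = Unif(0,1)
theta = phi[c]                                 # theta_i = phi_{c_i}
```

With r = 8190 and α0 = 1.0, the expected number of groups is on the order of α0 ln r, which matches the K = ⌊α0 ln r⌋ initialization used later in the samplers.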
In this model, X = {x^1, ..., x^n} is observed, and we want to perform posterior inference for θ = (θ_1, θ_2, ..., θ_r) and regard its posterior mean as its Bayesian estimate. We propose two Markov chain Monte Carlo (MCMC) methods. One is a Metropolis-Hastings algorithm with auxiliary variables, as introduced in Section 3.1. The second is a Gibbs sampling algorithm with stripped Beta approximation, as introduced in Section 3.2. In both methods, the state of the Markov chain is specified by two vectors, c and φ. In vector c = (c_1, ..., c_r), c_i denotes the group to which θ_i belongs. φ = (φ_1, ..., φ_k) records the k distinct values in {θ_1, ..., θ_r}, with θ_i = φ_{c_i} for i = 1, ..., r. This way of specifying the Markov chain is more efficient than setting the state variable directly to be (θ_1, θ_2, ..., θ_r) [22].

3.1 Metropolis-Hastings (MH) with Auxiliary Variables

In the MH algorithm (see Algorithm 1), the initial state of the Markov chain is set by performing K-means clustering on the MLE of θ (e.g. from the PCD algorithm [29]) with K = ⌊α0 ln r⌋. The Markov chain resembles Algorithm 5 in [22], and it is ergodic. We move the Markov chain forward for T steps. In each step, we update c first and then update φ. We update each element of c in turn; when resampling c_i, we fix c_{−i}, all elements in c other than c_i. When updating c_i, we repeat the following M times: propose a new value c*_i according to proposal Q(c*_i | c_i) and accept the move with probability min{1, a(c*_i | c_i)}, where a(c*_i | c_i) is the MH ratio. After we update every element of c in the current iteration, we draw a posterior sample of φ according to the current grouping c. We iterate T times, and get T posterior samples of θ. Unlike the tractable Algorithm 5 in [22], we need to introduce auxiliary variables to bypass the MRF's intractable likelihood in two places, namely calculating the MH ratio (in Section 3.1.1) and drawing samples of φ|c (in Section 3.1.2).
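As a concrete, hypothetical illustration of the (c, φ) state representation and of the conditional prior π(c_i | c_{−i}) it supports (the values and helper name below are invented for illustration):

```python
import numpy as np

# Hypothetical state: r = 6 parameters currently in k = 3 groups.
c = np.array([0, 0, 1, 2, 1, 0])   # c_i = group index of theta_i
phi = np.array([0.9, 0.3, 0.55])   # one distinct value per group

theta = phi[c]                      # recover theta_i = phi_{c_i}

def conditional_prior(i, c, alpha0):
    """pi(c_i = c | c_{-i}): an existing group c has weight n_{-i,c}/(r-1+alpha0),
    and a brand-new group has weight alpha0/(r-1+alpha0)."""
    r = len(c)
    counts = np.bincount(np.delete(c, i))   # n_{-i,c} for each existing group
    return np.append(counts, alpha0) / (r - 1 + alpha0)

p = conditional_prior(0, c, alpha0=1.0)
# last entry of p is the probability of opening a new group
```

Storing (c, φ) instead of the full θ vector means that r parameters are represented by only k distinct values plus r small integers, which is what makes the per-group updates below cheap.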
3.1.1 Calculating the Metropolis-Hastings Ratio

Algorithm 1: The Metropolis-Hastings algorithm
  Input: observed data X = {x^1, ..., x^n}
  Output: θ̂^{(1)}, ..., θ̂^{(T)}: T samples of θ|X
  Procedure:
    Perform the PCD algorithm to get θ̃, the MLE of θ
    Initialize c and φ via K-means on θ̃, with K = ⌊α0 ln r⌋
    for t = 1 to T do
      for i = 1 to r do
        for l = 1 to M do
          Draw a candidate c*_i from Q(c*_i | c_i)
          If c*_i ∉ c, draw a value for φ_{c*_i} from G0
          Set c_i = c*_i with probability min{1, a(c*_i | c_i)}
        end for
      end for
      Draw a posterior sample of φ according to the current c, and set θ̂^{(t)}_i = φ_{c_i} for i = 1, ..., r
    end for

The MH ratio of proposing a new value c*_i for c_i according to proposal Q(c*_i | c_i) is

a(c*_i | c_i) = [π(c*_i, c_{−i}) P(X; θ*_i) Q(c_i | c*_i)] / [π(c_i, c_{−i}) P(X; θ) Q(c*_i | c_i)]
             = [π(c*_i | c_{−i}) P̃(X; θ*_i) Q(c_i | c*_i)/Z(θ*_i)] / [π(c_i | c_{−i}) P̃(X; θ) Q(c*_i | c_i)/Z(θ)],

where θ*_i is the same as θ except that its i-th element is replaced with φ_{c*_i}. The conditional prior π(c*_i | c_{−i}) is

π(c_i = c | c_{−i}) = n_{−i,c}/(r−1+α0) if c ∈ c_{−i}, and α0/(r−1+α0) if c ∉ c_{−i},

where n_{−i,c} is the number of c_j with j ≠ i and c_j = c. We choose the proposal Q(c*_i | c_i) to be the conditional prior π(c*_i | c_{−i}), and the Metropolis-Hastings ratio can be further simplified as a(c*_i | c_i) = P̃(X; θ*_i) Z(θ) / [P̃(X; θ) Z(θ*_i)]. However, Z(θ)/Z(θ*_i) is intractable. Similar to [20], we introduce an auxiliary variable Z on the same space as X, and the state variable is extended to (c, Z). When proposing a move, we propose c*_i first and then propose Z* with proposal P(Z; θ*_i) to cancel the intractable Z(θ)/Z(θ*_i). We set the target distribution of Z to be P(Z; θ̃), where θ̃ is some estimate of θ (e.g. from PCD [29]). Then, the MH ratio with the auxiliary variable is

a(c*_i, Z* | c_i, Z) = [P(Z*; θ̃) P̃(X; θ*_i) P̃(Z; θ)] / [P(Z; θ̃) P̃(X; θ) P̃(Z*; θ*_i)] = [P̃(Z*; θ̃) P̃(X; θ*_i) P̃(Z; θ)] / [P̃(Z; θ̃) P̃(X; θ) P̃(Z*; θ*_i)].

Thus, the intractable computation of the MH ratio is replaced by generating particles Z* and Z under θ*_i and θ, respectively.
Ideally, we should use perfect sampling [25], but it is intractable for general MRFs. As a compromise, we use standard Gibbs sampling with long runs to generate these particles.

3.1.2 Drawing Posterior Samples of φ|c

We draw posterior samples of φ under grouping c via the MH algorithm, again following [20]. The state of the Markov chain is φ. The initial state of the Markov chain is set by running PCD [29] with parameters tied according to c. The proposal Q(φ*|φ) is a k-variate Gaussian N(φ, σ²_Q I_k), where σ²_Q I_k is the covariance matrix. The auxiliary variable Y is on the same space as X, and the state is extended to (φ, Y). The proposal distribution for the extended state variable is Q(φ, Y|φ*, Y*) = Q(φ|φ*) P̃(Y; φ)/Z(φ). We set the target distribution of Y to be P(Y; φ̃), where φ̃ is some estimate of φ, such as the estimate from the PCD algorithm [29]. Then, the MH ratio for the extended state is

a(φ*, Y* | φ, Y) = I(φ* ∈ Θ) [P̃(Y*; φ̃) P̃(X; φ*) P̃(Y; φ)] / [P̃(Y; φ̃) P̃(X; φ) P̃(Y*; φ*)],

where I(φ* ∈ Θ) indicates that every dimension of φ* is in the domain of G0. We set the state to the new values with probability min{1, a(φ*, Y*|φ, Y)}. We move the Markov chain for S steps, and get S samples of φ by ignoring Y. Eventually we draw one sample from them at random.

3.2 Gibbs Sampling with Stripped Beta Approximation

Algorithm 2: The Gibbs sampling algorithm
  Input: observed data X = {x^1, x^2, ..., x^n}
  Output: θ̂^{(1)}, ..., θ̂^{(T)}: T posterior samples of θ|X
  Procedure:
    Perform the PCD algorithm to get the MLE θ̃
    Initialize c and φ via K-means on θ̃, with K = ⌊α0 ln r⌋
    for t = 1 to T do
      for i = 1 to r do
        If the current c_i is unique in c, remove φ_{c_i} from φ
        Update c_i according to (4)
        If the new c_i ∉ c, draw a value for φ_{c_i} and add it to φ
      end for
      Draw a posterior sample of φ according to the current c, and set θ̂^{(t)}_i = φ_{c_i} for i = 1, ..., r
    end for

In the Gibbs sampling algorithm (see Algorithm 2), the initialization of the Markov chain is exactly the same as in the MH algorithm in Section 3.1.
The Markov chain resembles Algorithm 2 in [22] and it can be shown to be ergodic. We move the Markov chain forward for T steps. In each of the T steps, we update c first and then update φ. When we update c, we fix the values in φ, except that we may add one new value to φ or remove a value from φ. We update each element of c in turn. When we update c_i, we first examine whether c_i is unique in c. If so, we remove φ_{c_i} from φ first. We then update c_i by assigning it to an existing group or a new group with a probability proportional to a product of two quantities, namely

P(c_i = c | c_{−i}, X, φ_{c_{−i}}) ∝ [n_{−i,c}/(r−1+α0)] · P(X; φ_c, φ_{c_{−i}}) if c ∈ c_{−i}, and [α0/(r−1+α0)] · ∫ P(X; θ_i, φ_{c_{−i}}) dG0(θ_i) if c ∉ c_{−i}.   (4)

The first quantity is n_{−i,c}, the number of members already in group c. For starting a new group, the quantity is α0. The second quantity is the likelihood of X after assigning c_i to the new value c conditional on φ_{c_{−i}}. When considering a new group, we integrate the likelihood w.r.t. G0. After c_i is resampled, it is either set to an existing group or a new group. If a new group is assigned, we draw a new value for φ_{c_i} and add it to φ. After updating every element of c in the current iteration, we draw a posterior sample of φ under the current grouping c. In total, we run T iterations, and get T posterior samples of θ. This Gibbs sampling algorithm involves two intractable calculations, namely (i) calculating P(X; φ_c, φ_{c_{−i}}) and ∫ P(X; θ_i, φ_{c_{−i}}) dG0(θ_i) in (4), and (ii) drawing posterior samples for φ. We use a stripped Beta approximation in both places, as in Sections 3.2.1 and 3.2.2.

3.2.1 Calculating P(X; φ_c, φ_{c_{−i}}) and ∫ P(X; θ_i, φ_{c_{−i}}) dG0(θ_i) in (4)

In Formula (4), we evaluate P(X; φ_c, φ_{c_{−i}}) for different φ_c values with φ_{c_{−i}} fixed and X = {x^1, x^2, ..., x^n} observed. For ease of notation, we rewrite this quantity as a likelihood function of θ_i, L(θ_i | X, θ_{−i}), where θ_{−i} = {θ_1, ..., θ_{i−1}, θ_{i+1}, ..., θ_r} is fixed.
Suppose that the edge i connects variables X_u and X_v, and denote by X_{−uv} the variables other than X_u and X_v. Then

L(θ_i | X, θ_{−i}) = ∏_{j=1}^n P(x^j_u, x^j_v | x^j_{−uv}; θ_i, θ_{−i}) P(x^j_{−uv}; θ_i, θ_{−i})
                  ≈ ∏_{j=1}^n P(x^j_u, x^j_v | x^j_{−uv}; θ_i, θ_{−i}) P(x^j_{−uv}; θ_{−i})
                  ∝ ∏_{j=1}^n P(x^j_u, x^j_v | x^j_{−uv}; θ_i, θ_{−i}).

Above we approximate P(x^j_{−uv}; θ_i, θ_{−i}) with P(x^j_{−uv}; θ_{−i}) because the density of X_{−uv} mostly depends on θ_{−i}. The term P(x^j_{−uv}; θ_{−i}) can be dropped since θ_{−i} is fixed, and we only have to consider P(x^j_u, x^j_v | x^j_{−uv}; θ_i, θ_{−i}). Since θ_{−i} is fixed and we are conditioning on x^j_{−uv}, they together can be regarded as a fixed potential function telling how likely the rest of the graph thinks X_u and X_v should take the same value. Suppose that this fixed potential function (the message from the rest of the network x^j_{−uv}) is parameterized as η_i (0 < η_i < 1). Then

∏_{j=1}^n P(x^j_u, x^j_v | x^j_{−uv}; θ_i, θ_{−i}) ∝ ∏_{j=1}^n λ^{I(x^j_u=x^j_v)} (1−λ)^{I(x^j_u≠x^j_v)} = λ^{∑_{j=1}^n I(x^j_u=x^j_v)} (1−λ)^{∑_{j=1}^n I(x^j_u≠x^j_v)},   (5)

where λ = θ_i η_i / {θ_i η_i + (1−θ_i)(1−η_i)}. The right-hand side of (5) resembles a Beta distribution with parameters (∑_{j=1}^n I(x^j_u=x^j_v)+1, n−∑_{j=1}^n I(x^j_u=x^j_v)+1), except that only part of λ, namely θ_i, is random. Now we want to use a Beta distribution to approximate the likelihood with respect to θ_i, so we need to remove the contribution of η_i and only consider the contribution from θ_i. We choose Beta(⌊n θ̃_i⌋+1, n−⌊n θ̃_i⌋+1), where θ̃_i is the MLE of θ_i (e.g. from the PCD algorithm). This approximation is named the stripped Beta approximation. The simulation results in Section 4.2 indicate that the performance of the stripped Beta approximation is very close to that of the exact calculation. Also, this approximation only requires as much computation as in the tractable tree-structure MRFs, and it does not require generating expensive particles as in the MH algorithm with auxiliary variables. The integral ∫ P(X; θ_i, φ_{c_{−i}}) dG0(θ_i) in (4) can be calculated via Monte Carlo approximation.
We draw a number of samples of θ_i from G0, evaluate P(X; θ_i, φ_{c_{−i}}), and take the average.

3.2.2 Drawing Posterior Samples of φ|c

The stripped Beta approximation also allows us to draw posterior samples from φ|c approximately. Suppose that there are k groups according to c, and we have estimates for φ, denoted as φ̂ = (φ̂_1, ..., φ̂_k). We denote the numbers of elements in the k groups by m = {m_1, ..., m_k}. For group i, we draw a posterior sample for φ_i from Beta(⌊m_i n φ̂_i⌋+1, m_i n − ⌊m_i n φ̂_i⌋+1).

4 Simulations

We investigate the performance of our Bayesian estimators on three models: (i) a tree-MRF, (ii) a small grid-MRF whose likelihood is tractable, and (iii) a large grid-MRF whose likelihood is intractable. We first set the ground truth of the parameters, and then generate training and testing samples. On the training data, we apply our grouping-aware Bayesian estimators and two baseline estimators, namely a grouping-blind estimator and an oracle estimator. The grouping-blind estimator does not know groups exist in the parameters, and estimates the parameters in the normal MLE fashion. The oracle estimator knows the ground truth of the groupings, ties the parameters from the same group, and estimates them via MLE. For the tree-MRF, our Bayesian estimator is exact since the likelihood is tractable. For the small grid-MRF, we have three variations of the Bayesian estimator, namely Gibbs sampling with exact likelihood computation, MH with auxiliary variables, and Gibbs sampling with stripped Beta approximation. For the large grid-MRF, the computational burden only allows us to apply Gibbs sampling with stripped Beta approximation. We compare the estimators by three measures. The first is the average absolute error of estimate, (1/r) ∑_{i=1}^r |θ_i − θ̂_i|, where θ̂_i is the estimate of θ_i. The second measure is the log likelihood of the testing data, or the log pseudo-likelihood [4] of the testing data when exact likelihood is intractable.
Thirdly, we evaluate how informative the grouping yielded by the Bayesian estimator is. We use the variation of information metric [19] between the inferred grouping Ĉ and the ground-truth grouping C, namely VI(Ĉ, C). Since VI(Ĉ, C) is sensitive to the number of groups in Ĉ, we contrast it with VI(C̄, C), where C̄ is a random grouping with the same number of groups as Ĉ. We then evaluate Ĉ via the VI difference, namely VI(C̄, C) − VI(Ĉ, C). A larger VI difference indicates a more informative grouping yielded by our Bayesian estimator. Because we obtain one grouping in each of the T MCMC steps, we average the VI difference over the T steps.

4.1 Simulations on Tree-structure MRFs

For the structure of the MRF, we choose a perfect binary tree of height 12 (i.e. 8,191 nodes and 8,190 edges). We assume there are 25 groups among the 8,190 parameters. The base distribution G_0 is Unif(0, 1). We first generate the true parameters for the 25 groups from Unif(0, 1). We then randomly assign each of the 8,190 parameters to one of the 25 groups.

[Figure 1: Performance of the grouping-blind MLE, the oracle MLE and our Bayesian estimator on tree-structure MRFs in terms of (a) error of estimate and (b) log-likelihood of test data. Subfigure (c) shows the VI difference between the grouping yielded by our Bayesian estimator and random grouping.]

We then generate 1,000 testing samples and n training samples (n = 100, 200, ..., 1,000).
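The variation of information used above can be computed directly from two label vectors; the following is a minimal sketch (our own implementation of the standard definition VI(C, C′) = H(C) + H(C′) − 2I(C; C′), not the paper's code):

```python
import numpy as np

def variation_of_information(c1, c2):
    """Variation of information between two groupings given as label arrays."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    vi = 0.0
    for a in np.unique(c1):
        for b in np.unique(c2):
            p = np.mean(c1 == a)                 # marginal P(a)
            q = np.mean(c2 == b)                 # marginal P(b)
            r = np.mean((c1 == a) & (c2 == b))   # joint P(a, b)
            if r > 0:
                vi -= r * (np.log(r / p) + np.log(r / q))
    return vi

# Identical groupings (up to relabelling) have VI = 0; VI grows with disagreement.
same = variation_of_information([0, 0, 1, 1], [1, 1, 0, 0])
diff = variation_of_information([0, 0, 1, 1], [0, 1, 0, 1])
```

The second pair of labelings is statistically independent, so its VI equals H(C) + H(C′) = 2 log 2.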
Finally, we apply the grouping-blind MLE, the oracle MLE, and our grouping-aware Bayesian estimator on the training samples. For tree-structure MRFs, both MLE and Bayesian estimation have a closed-form solution. For the Bayesian estimator, we set the number of Gibbs sampling steps to 500 and set α_0 = 1.0. We replicate the experiment 500 times, and the averaged results are in Figure 1.

[Figure 2: Number of groups inferred by the Bayesian estimator and its run time.]

Our grouping-aware Bayesian estimator has a lower estimate error and a higher log-likelihood of test data than the grouping-blind MLE, demonstrating the "blessing of abstraction". Our Bayesian estimator performs worse than the oracle MLE, as we expect. In addition, as the training sample size increases, the performance of our Bayesian estimator approaches that of the oracle MLE. The VI difference in Figure 1(c) indicates that the Bayesian estimator also recovers the latent grouping to some extent, and that the inferred groupings become more and more reliable as the training size increases. The number of groups inferred by the Bayesian estimator and its running time are shown in Figure 2. We also investigate the asymptotic performance of the estimators and their performance when there are no parameter groups. The results are provided in the supplementary materials.

4.2 Simulations on Small Grid-MRFs

For the structure of the MRF, we choose a 4×4 grid with 16 nodes and 24 edges. Exact likelihood is tractable in this small model, which allows us to investigate how good the two types of approximation are.
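The Dirichlet-process grouping prior with concentration α_0 = 1.0 used in these experiments can be sampled via the Chinese restaurant process; a minimal sketch (our own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def crp_assignments(n_items, alpha=1.0):
    """Sample a random grouping of n_items from a Chinese restaurant
    process with concentration alpha (the prior over groupings
    implied by a Dirichlet process)."""
    assignments = [0]
    counts = [1]                      # current group sizes
    for _ in range(1, n_items):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        g = int(rng.choice(len(probs), p=probs))
        if g == len(counts):
            counts.append(1)          # open a new group
        else:
            counts[g] += 1
        assignments.append(g)
    return assignments

groups = crp_assignments(8190, alpha=1.0)  # a prior grouping of the 8,190 tree edges
```

With α = 1 the expected number of groups after n items grows like log n, so a prior draw over the 8,190 edge parameters typically produces around ten groups; the posterior then pulls this toward the data-supported grouping.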
We apply the grouping-blind MLE (the PCD algorithm), the oracle MLE (the PCD algorithm with the parameters from the same group tied) and three Bayesian estimators: Gibbs sampling with exact likelihood computation (Gibbs_ExactL), Metropolis-Hastings with auxiliary variables (MH_AuxVar), and Gibbs sampling with the stripped Beta approximation (Gibbs_SBA). We assume there are five parameter groups. The base distribution is Unif(0, 1). We first generate the true parameters for the five groups from Unif(0, 1). We then randomly assign each of the 24 parameters to one of the five groups. We then generate 1,000 testing samples and n training samples (n = 100, 200, ..., 1,000). For Gibbs_ExactL and Gibbs_SBA, we set the number of Gibbs sampling steps to 100. For MH_AuxVar, we set the number of MH steps to 500 and its proposal number M to 5. The parameter σ_Q in Section 3.1.2 is set to 0.001 and the parameter S is set to 100. For all three Bayesian estimators, we set α_0 = 1.0. We replicate the experiment 50 times, and the averaged results are in Figure 4.

[Figure 3: The number of groups inferred by Gibbs_ExactL, MH_AuxVar and Gibbs_SBA.]

Our grouping-aware Bayesian estimators have a lower estimate error and a higher log-likelihood of test data than the grouping-blind MLE, demonstrating the blessing of abstraction. All three Bayesian estimators perform worse than the oracle MLE, as we expect.
The VI difference in Figure 4(c) indicates that the Bayesian estimators also recover the grouping to some extent, and that the inferred groupings become more and more reliable as the training size increases. In Figure 3, we provide boxplots of the number of groups inferred by Gibbs_ExactL, MH_AuxVar and Gibbs_SBA. All three methods recover a reasonable number of groups, and Gibbs_SBA slightly over-estimates the number of groups.

[Figure 4: Performance of grouping-blind MLE, oracle MLE, Gibbs_ExactL, MH_AuxVar, and Gibbs_SBA on the small grid-structure MRFs in terms of (a) error of estimate and (b) log-likelihood of test data. Subfigure (c) shows the VI difference between the grouping yielded by our Bayesian estimators and random grouping.]

[Figure 5: Performance of the grouping-blind MLE, the oracle MLE and the Bayesian estimator (Gibbs_SBA) on large grid-structure MRFs in terms of (a) error of estimate and (b) log pseudo-likelihood of test data. Subfigure (c) shows the VI difference between the grouping yielded by our Bayesian estimator and random grouping.]
Table 1: The run time (in seconds) of Gibbs_ExactL, MH_AuxVar and Gibbs_SBA when the training size is n.

                 n=100       n=500       n=1,000
  Gibbs_ExactL   88,136.3    91,055.0    92,503.4
  MH_AuxVar      540.2       3,342.2     4,546.7
  Gibbs_SBA      8.1         10.8        14.2

Among the three Bayesian estimators, Gibbs_ExactL has the lowest estimate error and the highest log-likelihood of test data. Gibbs_SBA also performs well, and its performance is close to that of Gibbs_ExactL. MH_AuxVar works slightly worse, especially when there is less training data; however, MH_AuxVar recovers better groupings than Gibbs_SBA when there is more training data. The run times of the three Bayesian estimators are listed in Table 1. Gibbs_ExactL has a computational complexity that is exponential in the dimensionality d, and cannot be applied when d > 20. MH_AuxVar is also computationally intensive because it has to generate expensive particles. Gibbs_SBA runs fast, with its burden mainly from running PCD under a specific grouping in each Gibbs sampling step, and it scales well.

4.3 Simulations on Large Grid-MRFs

The large grid consists of 30 rows and 30 columns (i.e. 900 nodes and 1,740 edges). Exact likelihood is intractable for this large model, so we cannot run Gibbs_ExactL. The high dimension also prohibits MH_AuxVar. Therefore, we only run the Gibbs_SBA algorithm on this large grid-structure MRF. We assume that there are 10 groups among the 1,740 parameters. We evaluate the estimators by the log pseudo-likelihood of the testing data. The other settings of the experiments stay the same as in Section 4.2. We replicate the experiment 50 times, and the averaged results are in Figure 5.

[Figure 6: Number of groups inferred by Gibbs_SBA and its run time.]
For all 10 training sample sizes, our Bayesian estimator Gibbs_SBA has a lower estimate error and a higher log pseudo-likelihood of test data than the grouping-blind MLE (via the PCD algorithm). Gibbs_SBA has a higher estimate error and a lower log pseudo-likelihood of test data than the oracle MLE. The VI difference in Figure 5(c) indicates that Gibbs_SBA gradually recovers the grouping as the training size increases. The number of groups inferred by Gibbs_SBA and its running time are provided in Figure 6. Similarly to the observation in Section 4.2, Gibbs_SBA overestimates the number of groups. Gibbs_SBA finishes the simulations on 900 nodes and 1,740 edges in hundreds of minutes (depending on the training size), which is fast for a problem of this scale.

Table 2: Log pseudo-likelihood (LPL) of training and testing data from MLE (PCD) and the Bayesian estimate (Gibbs_SBA), the number of groups inferred by Gibbs_SBA, and its run time in the Senate voting experiments.

         LPL-Train              LPL-Test               # Groups   Runtime (mins)
         MLE        Gibbs_SBA   MLE        Gibbs_SBA
  Exp1   -10716.75  -10721.34   -9022.01   -8989.87    7.89       204
  Exp2   -8306.17   -8322.34    -11490.47  -11446.45   7.29       183

5 Real-world Application

We apply the Gibbs_SBA algorithm to US Senate voting data from the 109th Congress (available at www.senate.gov). The 109th Congress had two sessions, the first session in 2005 and the second session in 2006, with 366 votes and 278 votes in the two sessions, respectively. There are 100 senators in both sessions, but Senator Corzine only served in the first session and Senator Menendez only served in the second session, so we remove them. In total, we have 99 senators in our experiments, and we treat the votes of the 99 senators as the 99 variables of the MRF. We only consider contested votes; that is, we remove the votes with fewer than ten or more than ninety supporters. This leaves 292 votes and 221 votes in the two sessions, respectively. The structure of the MRF is from Figure 13 in [2].
There are 279 edges in total. The votes are coded as −1 for no and 1 for yes. We replace all missing votes with −1, staying consistent with [2]. We perform two experiments. First, we train the MRF on the first-session data and test on the second-session data. Then, we train on the second session and test on the first session. We compare our Bayesian estimator (via Gibbs_SBA) and the MLE (via PCD) by the log pseudo-likelihood of the testing data, since the exact likelihood is intractable. We set the number of Gibbs sampling steps to 3,000. Both experiments finish in around three hours on a single CPU. The results are summarized in Table 2. In the first experiment, the log pseudo-likelihood of test data is −9022.01 from MLE, whereas it is −8989.87 from our Bayesian estimate. In the second experiment, the log pseudo-likelihood of test data is −11490.47 from MLE, whereas it is −11446.45 from our Bayesian estimate. The increase in log pseudo-likelihood is comparable to the increase in log (pseudo-)likelihood we gain in the simulations (see Figures 1b, 4b and 5b at 200 and 300 training samples). Both experiments indicate that the models trained with the Gibbs_SBA algorithm generalize considerably better than the models trained with MLE. Gibbs_SBA also infers that there are around eight different types of relations among the senators. The two trained models are provided in the supplementary materials, and the estimated parameters of the two models are consistent.

6 Discussion

Bayesian nonparametric approaches [23, 10], such as the Dirichlet process [7], provide an elegant way of modeling mixtures with an unknown number of components.
These approaches have yielded advances in many areas of machine learning, such as infinite Gaussian mixture models [26], infinite mixtures of Gaussian processes [27], infinite HMMs [3, 8], infinite HMRFs [6], DP-nonlinear models [28], DP-mixture GLMs [14], infinite SVMs [33, 32], and infinite latent attribute models [24]. In this paper, we apply the same trick of replacing the prior distribution with a prior stochastic process to accommodate our uncertainty about the number of parameter groups. To the best of our knowledge, this is the first time a Bayesian nonparametric approach has been applied to models whose likelihood is intractable. Accordingly, we propose two types of approximation, namely a Metropolis-Hastings algorithm with auxiliary variables and a Gibbs sampling algorithm with the stripped Beta approximation. Both algorithms show superior performance over conventional MLE, and Gibbs_SBA also scales well to large MRFs. The Markov chains in both algorithms are ergodic, but may not satisfy detailed balance because we rely on approximations. Thus, both algorithms are guaranteed to converge for general MRFs, but they may not converge exactly to the target distribution. In this paper, we only consider the situation where the potential functions are pairwise and there is only one parameter in each potential function. For graphical models with more than one parameter in the potential functions, it is appropriate to group the parameters at the level of potential functions; a more sophisticated base distribution G_0 (such as a multivariate distribution) would then be needed. In this paper, we also assume the structures of the MRFs are given. When the structures are unknown, we still need to perform structure learning. Allowing structure learners to automatically identify structure modules will be another very interesting topic for future research.
Acknowledgements

The authors acknowledge the support of NIGMS R01GM097618-01 and NLM R01LM011028-01.

References

[1] A. U. Asuncion, Q. Liu, A. T. Ihler, and P. Smyth. Particle filtered MCMC-MLE with connections to contrastive divergence. In ICML, 2010.
[2] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR, 9:485–516, June 2008.
[3] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In NIPS, 2002.
[4] J. Besag. Statistical analysis of non-lattice data. JRSS-D, 24(3):179–195, 1975.
[5] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[6] S. P. Chatzis and G. Tsechpenakis. The infinite hidden Markov random field model. In ICCV, 2009.
[7] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230, 1973.
[8] J. V. Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In ICML, 2008.
[9] A. Gelman and J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, New York, 2007.
[10] S. J. Gershman and D. M. Blei. A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology, 56(1):1–12, 2012.
[11] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. Computing Science and Statistics, pages 156–163, 1991.
[12] M. Gutmann and J. Hirayama. Bregman divergence as general framework to estimate unnormalized statistical models. In UAI, pages 283–290, Corvallis, Oregon, 2011. AUAI Press.
[13] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010.
[14] L. A. Hannah, D. M. Blei, and W. B. Powell. Dirichlet process mixtures of generalized linear models. JMLR, 12:1923–1953, 2011.
[15] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[16] A. Hyvärinen. Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables. IEEE Transactions on Neural Networks, 18(5):1529–1531, 2007.
[17] A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis, 51(5):2499–2512, 2007.
[18] S. Lyu. Unifying non-maximum likelihood learning objectives with minimum KL contraction. In NIPS, 2011.
[19] M. Meila. Comparing clusterings by the variation of information. In COLT, 2003.
[20] J. Møller, A. Pettitt, R. Reeves, and K. Berthelsen. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants. Biometrika, 93(2):451–458, 2006.
[21] I. Murray, Z. Ghahramani, and D. J. C. MacKay. MCMC for doubly-intractable distributions. In UAI, 2006.
[22] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265, 2000.
[23] P. Orbanz and Y. W. Teh. Bayesian nonparametric models. In Encyclopedia of Machine Learning. Springer, 2010.
[24] K. Palla, D. A. Knowles, and Z. Ghahramani. An infinite latent attribute model for network data. In ICML, 2012.
[25] J. G. Propp and D. B. Wilson. Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures and Algorithms, 9(1-2):223–252, 1996.
[26] C. E. Rasmussen. The infinite Gaussian mixture model. In NIPS, 2000.
[27] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In NIPS, 2001.
[28] B. Shahbaba and R. Neal. Nonlinear models using Dirichlet process mixtures. JMLR, 10:1829–1850, 2009.
[29] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008.
[30] T. Tieleman and G. Hinton. Using fast weights to improve persistent contrastive divergence. In ICML, 2009.
[31] D. Vickrey, C. Lin, and D. Koller. Non-local contrastive objectives. In ICML, 2010.
[32] J. Zhu, N. Chen, and E. P. Xing. Infinite latent SVM for classification and multi-task learning. In NIPS, 2011.
[33] J. Zhu, N. Chen, and E. P. Xing. Infinite SVM: a Dirichlet process mixture of large-margin kernel machines. In ICML, 2011.
[34] S. C. Zhu and X. Liu. Learning in Gibbsian fields: How accurate and how fast can it be? IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:1001–1006, 2002.
(More) Efficient Reinforcement Learning via Posterior Sampling

Ian Osband (iosband@stanford.edu), Benjamin Van Roy (bvr@stanford.edu), Daniel Russo (djrusso@stanford.edu)
Stanford University, Stanford, CA 94305

Abstract

Most provably-efficient reinforcement learning algorithms introduce optimism about poorly-understood states and actions to encourage exploration. We study an alternative approach for efficient exploration: posterior sampling for reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of known duration. At the start of each episode, PSRL updates a prior distribution over Markov decision processes and takes one sample from this posterior. PSRL then follows the policy that is optimal for this sample during the episode. The algorithm is conceptually simple, computationally efficient and allows an agent to encode prior knowledge in a natural way. We establish an Õ(τS√(AT)) bound on expected regret, where T is time, τ is the episode length and S and A are the cardinalities of the state and action spaces. This bound is one of the first for an algorithm not based on optimism, and close to the state of the art for any reinforcement learning algorithm. We show through simulation that PSRL significantly outperforms existing algorithms with similar regret bounds.

1 Introduction

We consider the classical reinforcement learning problem of an agent interacting with its environment while trying to maximize the total reward accumulated over time [1, 2]. The agent's environment is modeled as a Markov decision process (MDP), but the agent is uncertain about the true dynamics of the MDP. As the agent interacts with its environment, it observes the outcomes that result from previous states and actions, and learns about the system dynamics.
This leads to a fundamental tradeoff: by exploring poorly-understood states and actions the agent can learn to improve future performance, but it may attain better short-run performance by exploiting its existing knowledge. Naïve optimization using point estimates for unknown variables overstates an agent's knowledge, and can lead to premature and suboptimal exploitation. To offset this, the majority of provably efficient learning algorithms use a principle known as optimism in the face of uncertainty [3] to encourage exploration. In such an algorithm, each state and action is afforded some optimism bonus such that their value to the agent is modeled to be as high as is statistically plausible. The agent will then choose a policy that is optimal under this "optimistic" model of the environment. This incentivizes exploration since poorly-understood states and actions will receive a higher optimism bonus. As the agent resolves its uncertainty, the effect of optimism is reduced and the agent's behavior approaches optimality. Many authors have provided strong theoretical guarantees for optimistic algorithms [4, 5, 6, 7, 8]. In fact, almost all reinforcement learning algorithms with polynomial bounds on sample complexity employ optimism to guide exploration.

We study an alternative approach to efficient exploration, posterior sampling, and provide finite-time bounds on regret. We model the agent's initial uncertainty over the environment through a prior distribution.^1 At the start of each episode, the agent chooses a new policy, which it follows for the duration of the episode. Posterior sampling for reinforcement learning (PSRL) selects this policy through two simple steps. First, a single instance of the environment is sampled from the posterior distribution at the start of an episode. Then, PSRL solves for and executes the policy that is optimal under the sampled environment over the episode.
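In the simpler multi-armed bandit setting (no state transitions), the same posterior-sampling idea reduces to Thompson sampling, which can be sketched as follows (a minimal sketch with Beta(1, 1) priors; the arm means and horizon are our own toy choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_bernoulli(true_means, horizon):
    """Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors."""
    k = len(true_means)
    alphas = np.ones(k)   # Beta posterior parameters: successes + 1
    betas = np.ones(k)    # failures + 1
    total = 0.0
    for _ in range(horizon):
        theta = rng.beta(alphas, betas)   # one posterior sample per arm
        arm = int(np.argmax(theta))       # act greedily w.r.t. the sample
        reward = float(rng.random() < true_means[arm])
        alphas[arm] += reward
        betas[arm] += 1.0 - reward
        total += reward
    return total

# The sampler should concentrate on the best arm (mean 0.8) over time.
total = thompson_bernoulli([0.2, 0.5, 0.8], horizon=2000)
```

Each round draws one mean per arm from its posterior and plays the argmax, so arms are chosen with exactly the posterior probability that they are optimal.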
PSRL randomly selects policies according to the probability they are optimal; exploration is guided by the variance of sampled policies as opposed to optimism. The idea of posterior sampling goes back to 1933 [9] and has been applied successfully to multi-armed bandits. In that literature, the algorithm is often referred to as Thompson sampling or as probability matching. Despite its long history, posterior sampling was largely neglected by the multi-armed bandit literature until empirical studies [10, 11] demonstrated that the algorithm could produce state-of-the-art performance. This prompted a surge of interest, and a variety of strong theoretical guarantees are now available [12, 13, 14, 15]. Our results suggest this method has great potential in reinforcement learning as well. PSRL was originally introduced in the context of reinforcement learning by Strens [16] under the name "Bayesian Dynamic Programming",^2 where it appeared primarily as a heuristic method. In reference to PSRL and other "Bayesian RL" algorithms, Kolter and Ng [17] write "little is known about these algorithms from a theoretical perspective, and it is unclear what (if any) formal guarantees can be made for such approaches." Those Bayesian algorithms for which performance guarantees exist are guided by optimism. BOSS [18] introduces a more complicated version of PSRL that samples many MDPs, instead of just one, and then combines them into an optimistic environment to guide exploration. BEB [17] adds an exploration bonus to states and actions according to how infrequently they have been visited. We show that it is not always necessary to introduce optimism via a complicated construction, and that the simple algorithm originally proposed by Strens [16] satisfies strong bounds itself. Our work is motivated by several advantages of posterior sampling relative to optimistic algorithms.
First, since PSRL only requires solving for an optimal policy for a single sampled MDP, it is computationally efficient relative both to many optimistic methods, which require simultaneous optimization across a family of plausible environments [4, 5, 18], and to computationally intensive approaches that attempt to approximate the Bayes-optimal solution directly [18, 19, 20]. Second, the presence of an explicit prior allows an agent to incorporate known environment structure in a natural way. This is crucial for most practical applications, as learning without prior knowledge requires exhaustive experimentation in each possible state. Finally, posterior sampling allows us to separate the algorithm from the analysis. In any optimistic algorithm, performance is greatly influenced by the manner in which optimism is implemented. Past works have designed algorithms, at least in part, to facilitate theoretical analysis for toy problems. Although our analysis of posterior sampling is closely related to the analysis in [4], this worst-case bound has no impact on the algorithm's actual performance. In addition, PSRL is naturally suited to more complex settings where the design of an efficiently optimistic algorithm might not be possible. We demonstrate through a computational study in Section 6 that PSRL outperforms the optimistic algorithm UCRL2 [4], a competitor with similar regret bounds, over some example MDPs.

2 Problem formulation

We consider the problem of learning to optimize a random finite-horizon MDP M = (S, A, R^M, P^M, τ, ρ) in repeated finite episodes of interaction. S is the state space, A is the action space, R^M_a(s) is a probability distribution over the reward realized when selecting action a while in state s, whose support is [0, 1], P^M_a(s′|s) is the probability of transitioning to state s′ if action a is selected while at state s, τ is the time horizon, and ρ is the initial state distribution.
We define the MDP and all other random variables we will consider with respect to a probability space (Ω, F, P). We assume S, A, and τ are deterministic, so the agent need not learn the state and action spaces or the time horizon.

^1 For an MDP, this might be a prior over transition dynamics and reward distributions.
^2 We alter terminology since PSRL is neither Bayes-optimal, nor a direct approximation of this.

A deterministic policy µ is a function mapping each state s ∈ S and i = 1, ..., τ to an action a ∈ A. For each MDP M = (S, A, R^M, P^M, τ, ρ) and policy µ, we define a value function

V^M_{µ,i}(s) := E_{M,µ}[ ∑_{j=i}^{τ} R̄^M_{a_j}(s_j) | s_i = s ],

where R̄^M_a(s) denotes the expected reward realized when action a is selected while in state s, and the subscripts of the expectation operator indicate that a_j = µ(s_j, j) and s_{j+1} ∼ P^M_{a_j}(·|s_j) for j = i, ..., τ. A policy µ is said to be optimal for MDP M if V^M_{µ,i}(s) = max_{µ′} V^M_{µ′,i}(s) for all s ∈ S and i = 1, ..., τ. We will associate with each MDP M a policy µ^M that is optimal for M.

The reinforcement learning agent interacts with the MDP over episodes that begin at times t_k = (k−1)τ + 1, k = 1, 2, .... At each time t, the agent selects an action a_t, observes a scalar reward r_t, and then transitions to s_{t+1}. If an agent follows a policy µ then, when in state s at time t during episode k, it selects the action a_t = µ(s, t − t_k). Let H_t = (s_1, a_1, r_1, ..., s_{t−1}, a_{t−1}, r_{t−1}) denote the history of observations made prior to time t. A reinforcement learning algorithm is a deterministic sequence {π_k | k = 1, 2, ...} of functions, each mapping H_{t_k} to a probability distribution π_k(H_{t_k}) over policies. At the start of the kth episode, the algorithm samples a policy µ_k from the distribution π_k(H_{t_k}). The algorithm then selects actions a_t = µ_k(s_t, t − t_k) at times t during the kth episode.
We define the regret incurred by a reinforcement learning algorithm π up to time T to be

Regret(T, π) := ∑_{k=1}^{⌈T/τ⌉} ∆_k,

where ∆_k denotes the regret over the kth episode, defined with respect to the MDP M* by

∆_k = ∑_{s∈S} ρ(s) (V^{M*}_{µ*,1}(s) − V^{M*}_{µ_k,1}(s)),

with µ* = µ^{M*} and µ_k ∼ π_k(H_{t_k}). Note that regret is not deterministic, since it can depend on the random MDP M*, the algorithm's internal random sampling and, through the history H_{t_k}, on previous random transitions and random rewards. We will assess and compare algorithm performance in terms of regret and its expectation.

3 Posterior sampling for reinforcement learning

The use of posterior sampling for reinforcement learning (PSRL) was first proposed by Strens [16]. PSRL begins with a prior distribution over MDPs with states S, actions A and horizon τ. At the start of each episode k, PSRL samples an MDP M_k from the posterior distribution conditioned on the history H_{t_k} available at that time. PSRL then computes and follows the policy µ_k = µ^{M_k} over episode k.

Algorithm: Posterior Sampling for Reinforcement Learning (PSRL)
Data: prior distribution f, t = 1
for episodes k = 1, 2, ... do
    sample M_k ∼ f(·|H_{t_k})
    compute µ_k = µ^{M_k}
    for timesteps j = 1, ..., τ do
        sample and apply a_t = µ_k(s_t, j)
        observe r_t and s_{t+1}
        t = t + 1
    end
end

We show that PSRL obeys performance guarantees intimately related to those for learning algorithms based upon OFU, as has been demonstrated for multi-armed bandit problems [15]. We believe that a posterior sampling approach offers some inherent advantages. Optimistic algorithms require explicit construction of confidence bounds on V^{M*}_{µ,1}(s) based on observed data, which is a complicated statistical problem even for simple models. In addition, even if strong confidence bounds for V^{M*}_{µ,1}(s) were known, solving for the best optimistic policy may be computationally intractable.
Algorithms such as UCRL2 [4] are computationally tractable, but must resort to separately bounding R̄^M_a(s) and P^M_a(s) with high probability for each (s, a). These bounds allow a "worst-case" mis-estimation simultaneously in every state-action pair and consequently give rise to a confidence set that may be far too conservative. By contrast, PSRL always selects policies according to the probability they are optimal. Uncertainty about each policy is quantified in a statistically efficient way through the posterior distribution. The algorithm only requires a single sample from the posterior, which may be approximated through algorithms such as Metropolis-Hastings if no closed form exists. As such, we believe PSRL will be simpler to implement, computationally cheaper and statistically more efficient than existing optimistic methods.

3.1 Main results

The following result establishes regret bounds for PSRL. The bounds have Õ(τS√(AT)) expected regret and, to our knowledge, provide the first guarantees for an algorithm not based upon optimism:

Theorem 1. If f is the distribution of M* then

E[Regret(T, π^{PS}_τ)] = O(τS √(AT log(SAT))).   (1)

This result holds for any prior distribution on MDPs, and so applies to an immense class of models. To accommodate this generality, the result bounds expected regret under the prior distribution (sometimes called Bayes risk or Bayesian regret). We feel this is a natural measure of performance, but should emphasize that it is more common in the literature to bound regret under a worst-case MDP instance. The next result provides a link between these notions of regret. Applying Markov's inequality to (1) gives convergence in probability.

Corollary 1. If f is the distribution of M* then, for any α > 1/2,

Regret(T, π^{PS}_τ) / T^α →_p 0.

As shown in the appendix, this also bounds the frequentist regret for any MDP with non-zero probability.
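The PSRL procedure of Section 3 can be sketched for a tabular MDP. This is a minimal sketch that assumes independent Dirichlet posteriors over transition distributions and a simple smoothed posterior-mean reward; these modeling choices are our illustration, not a prescription from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def backward_induction(R, P, tau):
    """Optimal time-dependent policy for a finite-horizon tabular MDP.
    R: (S, A) expected rewards; P: (S, A, S) transition probabilities."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    policy = np.zeros((tau, n_states), dtype=int)
    for i in reversed(range(tau)):
        Q = R + P @ V                # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] V[s']
        policy[i] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def psrl_policy(transition_counts, reward_sums, visit_counts, tau):
    """One PSRL planning step: sample an MDP from the posterior and
    return the policy that is optimal for the sample.
    transition_counts: (S, A, S) Dirichlet posterior counts;
    reward_sums, visit_counts: (S, A) reward sufficient statistics."""
    n_states, n_actions, _ = transition_counts.shape
    P = np.array([[rng.dirichlet(transition_counts[s, a])
                   for a in range(n_actions)] for s in range(n_states)])
    # Smoothed posterior-mean reward (an assumption of this sketch).
    R = (reward_sums + 1.0) / (visit_counts + 2.0)
    return backward_induction(R, P, tau)

# A two-state, two-action example with uninformative counts.
policy = psrl_policy(np.ones((2, 2, 2)), np.zeros((2, 2)), np.zeros((2, 2)), tau=3)
```

Between episodes the counts would be updated with the observed transitions and rewards, so the posterior concentrates and the sampled policies approach the optimal one.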
State-of-the-art guarantees similar to Theorem 1 are satisfied by the algorithms UCRL2 [4] and REGAL [5] for the case of non-episodic RL. Here UCRL2 gives regret bounds $\tilde{O}(DS\sqrt{AT})$ where $D = \max_{s' \neq s} \min_\pi \mathbb{E}[T(s' \mid M, \pi, s)]$ and $T(s' \mid M, \pi, s)$ is the first time step at which $s'$ is reached from $s$ under the policy π. REGAL improves this result to $\tilde{O}(\Psi S \sqrt{AT})$ where $\Psi \le D$ is the span of the optimal value function. However, there is so far no computationally tractable implementation of this algorithm. In many practical applications we may be interested in episodic learning tasks, where the constants D and Ψ could be improved to take advantage of the episode length τ. Simple modifications to both UCRL2 and REGAL will produce regret bounds of $\tilde{O}(\tau S \sqrt{AT})$, just as for PSRL. This is close to the theoretical lower bound of $\sqrt{SAT}$-dependence.

4 True versus sampled MDP

A simple observation, which is central to our analysis, is that, at the start of each kth episode, $M^*$ and $M_k$ are identically distributed. This fact allows us to relate quantities that depend on the true, but unknown, MDP $M^*$ to those of the sampled MDP $M_k$, which is fully observed by the agent. We introduce $\sigma(H_{t_k})$ as the σ-algebra generated by the history up to $t_k$. Readers unfamiliar with measure theory can think of this as "all information known just before the start of period $t_k$." When we say that a random variable X is $\sigma(H_{t_k})$-measurable, this intuitively means that, although X is random, it is deterministically known given the information contained in $H_{t_k}$. The following lemma is an immediate consequence of this observation [15].

Lemma 1 (Posterior Sampling). If f is the distribution of $M^*$ then, for any $\sigma(H_{t_k})$-measurable function g,
$\mathbb{E}[g(M^*) \mid H_{t_k}] = \mathbb{E}[g(M_k) \mid H_{t_k}]$.   (2)

Note that taking the expectation of (2) shows $\mathbb{E}[g(M^*)] = \mathbb{E}[g(M_k)]$ through the tower property. Recall, we have defined $\Delta_k = \sum_{s \in S} \rho(s)\big(V^{M^*}_{\mu^*,1}(s) - V^{M^*}_{\mu_k,1}(s)\big)$ to be the regret over period k.
A significant hurdle in analyzing this equation is its dependence on the optimal policy $\mu^*$, which we do not observe. For many reinforcement learning algorithms, there is no clean way to relate the unknown optimal policy to the states and actions the agent actually observes. The following result shows how we can avoid this issue using Lemma 1. First, define
$\tilde{\Delta}_k = \sum_{s \in S} \rho(s)\big(V^{M_k}_{\mu_k,1}(s) - V^{M^*}_{\mu_k,1}(s)\big)$   (3)
as the difference between the expected value of the policy $\mu_k$ under the sampled MDP $M_k$, which is known, and its performance under the true MDP $M^*$, which is observed by the agent.

Theorem 2 (Regret equivalence).
$\mathbb{E}\left[\sum_{k=1}^m \Delta_k\right] = \mathbb{E}\left[\sum_{k=1}^m \tilde{\Delta}_k\right]$   (4)
and for any δ > 0, with probability at least 1 − δ,

Proof. Note that $\Delta_k - \tilde{\Delta}_k = \sum_{s \in S} \rho(s)\big(V^{M^*}_{\mu^*,1}(s) - V^{M_k}_{\mu_k,1}(s)\big) \in [-\tau, \tau]$. By Lemma 1, $\mathbb{E}[\Delta_k - \tilde{\Delta}_k \mid H_{t_k}] = 0$. Taking expectations of these sums therefore establishes the claim.

This result bounds the agent's regret in episode k by the difference between the agent's estimate $V^{M_k}_{\mu_k,1}(s_{t_k})$ of the expected reward in $M_k$ from the policy it chooses, and the expected reward $V^{M^*}_{\mu_k,1}(s_{t_k})$ in $M^*$. If the agent has a poor estimate of the MDP $M^*$, we expect it to learn as the performance of following $\mu_k$ under $M^*$ differs from its expectation under $M_k$. As more information is gathered, its performance should improve. In the next section, we formalize these ideas and give a precise bound on the regret of posterior sampling.

5 Analysis

An essential tool in our analysis will be the dynamic programming, or Bellman, operator $T^M_\mu$, which for any MDP $M = (S, A, R^M, P^M, \tau, \rho)$, stationary policy $\mu: S \to A$ and value function $V: S \to \mathbb{R}$ is defined by
$T^M_\mu V(s) := R^M_{\mu(s)}(s) + \sum_{s' \in S} P^M_{\mu(s)}(s' \mid s)\, V(s')$.
This operation returns the expected value of state s when we follow the policy μ under the laws of M for one time step. The following lemma gives a concise form for the dynamic programming paradigm in terms of the Bellman operator.

Lemma 2 (Dynamic programming equation).
For any MDP $M = (S, A, R^M, P^M, \tau, \rho)$ and policy $\mu: S \times \{1, \ldots, \tau\} \to A$, the value functions $V^M_\mu$ satisfy
$V^M_{\mu,i} = T^M_{\mu(\cdot,i)} V^M_{\mu,i+1}$   (5)
for $i = 1, \ldots, \tau$, with $V^M_{\mu,\tau+1} := 0$.

In order to streamline our notation we will let $V^*_{\mu,i} := V^{M^*}_{\mu,i}$, $V^k_{\mu,i} := V^{M_k}_{\mu,i}$, $T^k_\mu := T^{M_k}_\mu$, $T^*_\mu := T^{M^*}_\mu$ and $P^*_\mu(\cdot \mid s) := P^{M^*}_{\mu(s)}(\cdot \mid s)$.

5.1 Rewriting regret in terms of Bellman error

$\mathbb{E}\big[\tilde{\Delta}_k \mid M^*, M_k\big] = \mathbb{E}\left[\sum_{i=1}^\tau \big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big) V^k_{\mu_k,i+1}(s_{t_k+i}) \,\Big|\, M^*, M_k\right]$   (6)

To see why (6) holds, simply apply the dynamic programming equation inductively:

$(V^k_{\mu_k,1} - V^*_{\mu_k,1})(s_{t_k+1}) = \big(T^k_{\mu_k(\cdot,1)} V^k_{\mu_k,2} - T^*_{\mu_k(\cdot,1)} V^*_{\mu_k,2}\big)(s_{t_k+1})$
$= \big(T^k_{\mu_k(\cdot,1)} - T^*_{\mu_k(\cdot,1)}\big) V^k_{\mu_k,2}(s_{t_k+1}) + \sum_{s' \in S} P^*_{\mu_k(\cdot,1)}(s' \mid s_{t_k+1})\,(V^k_{\mu_k,2} - V^*_{\mu_k,2})(s')$
$= \big(T^k_{\mu_k(\cdot,1)} - T^*_{\mu_k(\cdot,1)}\big) V^k_{\mu_k,2}(s_{t_k+1}) + (V^k_{\mu_k,2} - V^*_{\mu_k,2})(s_{t_k+2}) + d_{t_k+1}$
$= \cdots = \sum_{i=1}^\tau \big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big) V^k_{\mu_k,i+1}(s_{t_k+i}) + \sum_{i=1}^\tau d_{t_k+i}$,

where $d_{t_k+i} := \sum_{s' \in S} P^*_{\mu_k(\cdot,i)}(s' \mid s_{t_k+i})\,(V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1})(s') - (V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1})(s_{t_k+i+1})$.

This expresses the regret in terms of two factors. The first factor is the one-step Bellman error $\big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big) V^k_{\mu_k,i+1}(s_{t_k+i})$ under the sampled MDP $M_k$. Crucially, (6) depends only on the Bellman error under the observed policy $\mu_k$ and the states $s_1, \ldots, s_T$ that are actually visited over the first T periods. We go on to show the posterior distribution of $M_k$ concentrates around $M^*$ as these actions are sampled, and so this term tends to zero. The second term captures the randomness in the transitions of the true MDP $M^*$. In state $s_{t_k+i}$ under policy $\mu_k$, the conditional expectation of $(V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1})(s_{t_k+i+1})$ is exactly $\sum_{s' \in S} P^*_{\mu_k(\cdot,i)}(s' \mid s_{t_k+i})\,(V^k_{\mu_k,i+1} - V^*_{\mu_k,i+1})(s')$. Hence, conditioned on the true MDP $M^*$ and the sampled MDP $M_k$, the term $\sum_{i=1}^\tau d_{t_k+i}$ has expectation zero.

5.2 Introducing confidence sets

The last section reduced the algorithm's regret to its expected Bellman error.
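The recursion of Lemma 2 is just backward policy evaluation. A minimal tabular sketch (our own function names, not the authors' code) makes the operator and the recursion explicit:

```python
import numpy as np

def bellman_op(P, R, mu, V):
    """T^M_mu V from Lemma 2: one-step lookahead under policy mu.
    P[s, a, s'] transitions, R[s, a] mean rewards, mu[s] the action in s."""
    return np.array([R[s, mu[s]] + P[s, mu[s]] @ V for s in range(len(mu))])

def policy_value(P, R, mu_per_step, tau):
    """V^M_{mu,1} via the recursion V_i = T_{mu(.,i)} V_{i+1}, V_{tau+1} = 0."""
    V = np.zeros(P.shape[0])
    for i in reversed(range(tau)):
        V = bellman_op(P, R, mu_per_step[i], V)
    return V
```

Evaluating the same policy under $M_k$ and $M^*$ with this routine is precisely the quantity $\tilde{\Delta}_k$ compares.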
We will proceed by arguing that the sampled Bellman operator $T^k_{\mu_k(\cdot,i)}$ concentrates around the true Bellman operator $T^*_{\mu_k(\cdot,i)}$. To do this, we introduce high-probability confidence sets similar to those used in [4] and [5]. Let $\hat{P}^t_a(\cdot \mid s)$ denote the empirical distribution, up to period t, of transitions observed after sampling (s, a), and let $\hat{R}^t_a(s)$ denote the empirical average reward. Finally, define $N_{t_k}(s, a) = \sum_{t=1}^{t_k - 1} \mathbf{1}\{(s_t, a_t) = (s, a)\}$ to be the number of times (s, a) was sampled prior to time $t_k$. Define the confidence set for episode k:

$\mathcal{M}_k := \left\{ M : \big\|\hat{P}^t_a(\cdot \mid s) - P^M_a(\cdot \mid s)\big\|_1 \le \beta_k(s, a) \ \text{and} \ \big|\hat{R}^t_a(s) - R^M_a(s)\big| \le \beta_k(s, a) \ \ \forall (s, a) \right\}$,

where $\beta_k(s, a) := \sqrt{\frac{14 S \log(2 S A m t_k)}{\max\{1, N_{t_k}(s, a)\}}}$ is chosen conservatively so that $\mathcal{M}_k$ contains both $M^*$ and $M_k$ with high probability. It is worth pointing out that we have not tried to optimize this confidence bound; it can be improved, at least by a numerical factor, with more careful analysis.

Now, using that $\tilde{\Delta}_k \le \tau$, we can decompose regret as follows:

$\sum_{k=1}^m \tilde{\Delta}_k \le \sum_{k=1}^m \tilde{\Delta}_k \mathbf{1}\{M_k, M^* \in \mathcal{M}_k\} + \tau \sum_{k=1}^m \big[\mathbf{1}\{M_k \notin \mathcal{M}_k\} + \mathbf{1}\{M^* \notin \mathcal{M}_k\}\big]$   (7)

Now, since $\mathcal{M}_k$ is $\sigma(H_{t_k})$-measurable, by Lemma 1, $\mathbb{E}[\mathbf{1}\{M_k \notin \mathcal{M}_k\} \mid H_{t_k}] = \mathbb{E}[\mathbf{1}\{M^* \notin \mathcal{M}_k\} \mid H_{t_k}]$. Lemma 17 of [4] shows³ $P(M^* \notin \mathcal{M}_k) \le 1/m$ for this choice of $\beta_k(s, a)$, which implies

$\mathbb{E}\left[\sum_{k=1}^m \tilde{\Delta}_k\right] \le \mathbb{E}\left[\sum_{k=1}^m \tilde{\Delta}_k \mathbf{1}\{M_k, M^* \in \mathcal{M}_k\}\right] + 2\tau \sum_{k=1}^m P\{M^* \notin \mathcal{M}_k\}$
$\le \mathbb{E}\left[\sum_{k=1}^m \mathbb{E}\big[\tilde{\Delta}_k \mid M^*, M_k\big] \mathbf{1}\{M_k, M^* \in \mathcal{M}_k\}\right] + 2\tau$
$\le \mathbb{E}\left[\sum_{k=1}^m \sum_{i=1}^\tau \big|\big(T^k_{\mu_k(\cdot,i)} - T^*_{\mu_k(\cdot,i)}\big) V^k_{\mu_k,i+1}(s_{t_k+i})\big|\, \mathbf{1}\{M_k, M^* \in \mathcal{M}_k\}\right] + 2\tau$
$\le \tau\, \mathbb{E}\left[\sum_{k=1}^m \sum_{i=1}^\tau \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\}\right] + 2\tau$.   (8)

We also have the worst-case bound $\sum_{k=1}^m \tilde{\Delta}_k \le T$. In the technical appendix we go on to provide a worst-case bound on $\min\{\tau \sum_{k=1}^m \sum_{i=1}^\tau \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\}, T\}$ of order $\tau S \sqrt{AT \log(SAT)}$, which completes our analysis.

6 Simulation results

We compare the performance of PSRL to UCRL2 [4], an optimistic algorithm with similar regret bounds. We use the standard example of RiverSwim [21], as well as several randomly generated MDPs. We provide results in both the episodic case, where the state is reset every τ = 20 steps, and the setting without episodic reset.

Figure 1: RiverSwim - continuous and dotted arrows represent the MDP under the actions "right" and "left".

RiverSwim consists of six states arranged in a chain as shown in Figure 1. The agent begins at the far left state and at every time step has the choice to swim left or right. Swimming left (with the current) is always successful, but swimming right (against the current) often fails.
The agent receives a small reward for reaching the leftmost state, but the optimal policy is to attempt to swim right and receive a much larger reward. This MDP is constructed so that efficient exploration is required in order to obtain the optimal policy. To generate the random MDPs, we sampled 10-state, 5-action environments according to the prior. We express our prior in terms of Dirichlet and normal-gamma distributions over the transitions and rewards respectively.⁴ In both environments we perform 20 Monte Carlo simulations and compute the total regret over 10,000 time steps. We implement UCRL2 with δ = 0.05 and optimize the algorithm to take account of finite episodes where appropriate. PSRL outperformed UCRL2 across every environment, as shown in Table 1. In Figure 2, we show regret through time across 50 Monte Carlo simulations to 100,000 time steps in the RiverSwim environment: PSRL's outperformance is quite extreme.

³Our confidence sets are equivalent to those of [4] when the parameter δ = 1/m.
⁴These priors are conjugate to the multinomial and normal distributions. We used the values α = 1/S, µ = σ² = 1 and pseudocount n = 1 for a diffuse uniform prior.

Table 1: Total regret in simulation. PSRL outperforms UCRL2 over different environments.

            Random MDP    Random MDP    RiverSwim     RiverSwim
Algorithm   τ-episodes    ∞-horizon     τ-episodes    ∞-horizon
PSRL        1.04 × 10^4   7.30 × 10^3   6.88 × 10^1   1.06 × 10^2
UCRL2       5.92 × 10^4   1.13 × 10^5   1.26 × 10^3   3.64 × 10^3

6.1 Learning in MDPs without episodic resets

The majority of practical problems in reinforcement learning can be mapped to repeated episodic interactions of some length τ. Even in cases where there is no actual reset of episodes, one can show that PSRL's regret is bounded against all policies which work over horizon τ or less [6]. Any setting with discount factor α can be learned for τ ∝ (1 − α)⁻¹. One appealing feature of UCRL2 [4] and REGAL [5] is that they learn this optimal timeframe τ.
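For concreteness, the RiverSwim chain of Figure 1 can be constructed as below. The exact transition probabilities are illustrative assumptions (the paper does not list them here); only the qualitative structure, that "left" always succeeds while "right" usually fails, is taken from the text.

```python
import numpy as np

def make_riverswim(n=6, p_right=0.35, p_stay=0.6):
    """Six-state chain MDP. Actions: 0 = swim left, 1 = swim right.
    Probabilities p_right and p_stay are illustrative, not from [21]."""
    P = np.zeros((n, 2, n))
    R = np.zeros((n, 2))
    for s in range(n):
        P[s, 0, max(s - 1, 0)] = 1.0              # with the current: always works
        P[s, 1, min(s + 1, n - 1)] += p_right     # against the current: often fails
        P[s, 1, s] += p_stay
        P[s, 1, max(s - 1, 0)] += 1.0 - p_right - p_stay
    R[0, 0] = 0.005     # small reward at the leftmost state
    R[n - 1, 1] = 1.0   # much larger reward at the rightmost state
    return P, R
```

Because the large reward is only reachable by repeatedly succeeding against the current, dithering exploration takes a very long time to discover the optimal policy here.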
Instead of computing a new policy after a fixed number of periods, they begin a new episode when the total number of visits to any state-action pair doubles. We can apply this same rule for episodes to PSRL in the ∞-horizon case, as shown in Figure 2. Using optimism with KL-divergence instead of L1 balls has also shown improved performance over UCRL2 [22], but its regret remains orders of magnitude greater than PSRL's on RiverSwim.

Figure 2: Simulated regret on the ∞-horizon RiverSwim environment. (a) PSRL outperforms UCRL2 by large margins. (b) PSRL learns quickly despite a misspecified prior.

7 Conclusion

We establish posterior sampling for reinforcement learning not just as a heuristic, but as a provably efficient learning algorithm. We present $\tilde{O}(\tau S \sqrt{AT})$ Bayesian regret bounds, which are some of the first for an algorithm not motivated by optimism and are close to state of the art for any reinforcement learning algorithm. These bounds hold in expectation irrespective of prior or model structure. PSRL is conceptually simple, computationally efficient and can easily incorporate prior knowledge. Compared to feasible optimistic algorithms, we believe that PSRL is often statistically more efficient, simpler to implement and computationally cheaper. We demonstrate that PSRL performs well in simulation over several domains. We believe there is a strong case for the wider adoption of algorithms based upon posterior sampling in both theory and practice.

Acknowledgments

Osband and Russo are supported by Stanford Graduate Fellowships courtesy of PACCAR inc., and Burt and Deedee McMurty, respectively. This work was supported in part by Award CMMI-0968707 from the National Science Foundation.

References

[1] A. N. Burnetas and M. N. Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222–255, 1997.
[2] P. R. Kumar and P. Varaiya. Stochastic systems: estimation, identification and adaptive control. Prentice-Hall, Inc., 1986.
[3] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[4] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 99:1563–1600, 2010.
[5] P. L. Bartlett and A. Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42. AUAI Press, 2009.
[6] R. I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213–231, 2003.
[7] S. M. Kakade. On the sample complexity of reinforcement learning. PhD thesis, University of London, 2003.
[8] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
[9] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
[10] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In Neural Information Processing Systems (NIPS), 2011.
[11] S. L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658, 2010.
[12] S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. arXiv preprint arXiv:1209.3353, 2012.
[13] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. arXiv preprint arXiv:1209.3352, 2012.
[14] E. Kauffmann, N. Korda, and R. Munos. Thompson sampling: an asymptotically optimal finite time analysis. In International Conference on Algorithmic Learning Theory, 2012.
[15] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. CoRR, abs/1301.2609, 2013.
[16] M. Strens. A Bayesian framework for reinforcement learning.
In Proceedings of the 17th International Conference on Machine Learning, pages 943–950, 2000.
[17] J. Z. Kolter and A. Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 513–520. ACM, 2009.
[18] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In Proceedings of the 22nd International Conference on Machine Learning, pages 956–963. ACM, 2005.
[19] A. Guez, D. Silver, and P. Dayan. Efficient Bayes-adaptive reinforcement learning using sample-based search. arXiv preprint arXiv:1205.3109, 2012.
[20] J. Asmuth and M. L. Littman. Approaching Bayes-optimality using Monte-Carlo tree search. In Proc. 21st Int. Conf. Automat. Plan. Sched., Freiburg, Germany, 2011.
[21] A. L. Strehl and M. L. Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.
[22] S. Filippi, O. Cappé, and A. Garivier. Optimism in reinforcement learning based on Kullback-Leibler divergence. CoRR, abs/1004.5229, 2010.

A Relating Bayesian to frequentist regret

Let $\mathcal{M}$ be any family of MDPs with non-zero probability under the prior. Then, for any ϵ > 0 and α > 1/2,

$P\left( \frac{\mathrm{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} > \epsilon \,\Big|\, M^* \in \mathcal{M} \right) \to 0.$

This provides regret bounds even if $M^*$ is not distributed according to f. As long as the true MDP is not impossible under the prior, the asymptotic frequentist regret will be close to the theoretical lower bound of $O(\sqrt{T})$ in its T-dependence.

Proof.
We have, for any ϵ > 0:

$\frac{\mathbb{E}[\mathrm{Regret}(T, \pi^{PS}_\tau)]}{T^\alpha} \ge \mathbb{E}\left[\frac{\mathrm{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} \,\Big|\, M^* \in \mathcal{M}\right] P(M^* \in \mathcal{M}) \ge \epsilon\, P\left(\frac{\mathrm{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} > \epsilon \,\Big|\, M^* \in \mathcal{M}\right) P(M^* \in \mathcal{M}).$

Therefore, via Theorem 1, for any α > 1/2:

$P\left(\frac{\mathrm{Regret}(T, \pi^{PS}_\tau)}{T^\alpha} > \epsilon \,\Big|\, M^* \in \mathcal{M}\right) \le \frac{1}{\epsilon\, P(M^* \in \mathcal{M})} \cdot \frac{\mathbb{E}[\mathrm{Regret}(T, \pi^{PS}_\tau)]}{T^\alpha} \to 0.$

B Bounding the sum of confidence set widths

We are interested in bounding $\min\{\tau \sum_{k=1}^m \sum_{i=1}^\tau \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\}, T\}$, which we claim is $O\big(\tau S \sqrt{AT \log(SAT)}\big)$ for $\beta_k(s, a) := \sqrt{\frac{14 S \log(2SAmt_k)}{\max\{1, N_{t_k}(s, a)\}}}$.

Proof. In a manner similar to [4] we can write:

$\sum_{k=1}^m \sum_{i=1}^\tau \sqrt{\frac{14 S \log(2SAmt_k)}{\max\{1, N_{t_k}(s, a)\}}} \le \sum_{k=1}^m \sum_{i=1}^\tau \mathbf{1}\{N_{t_k} \le \tau\} + \sum_{k=1}^m \sum_{i=1}^\tau \mathbf{1}\{N_{t_k} > \tau\} \sqrt{\frac{14 S \log(2SAmt_k)}{\max\{1, N_{t_k}(s, a)\}}}.$

Now, consider the event that $(s_t, a_t) = (s, a)$ and $N_{t_k}(s, a) \le \tau$. This can happen fewer than 2τ times per state-action pair. Therefore, $\sum_{k=1}^m \sum_{i=1}^\tau \mathbf{1}\{N_{t_k}(s, a) \le \tau\} \le 2\tau SA$. Now suppose $N_{t_k}(s, a) > \tau$. Then for any $t \in \{t_k, \ldots, t_{k+1} - 1\}$, $N_t(s, a) + 1 \le N_{t_k}(s, a) + \tau \le 2 N_{t_k}(s, a)$. Therefore:

$\sum_{k=1}^m \sum_{t=t_k}^{t_{k+1}-1} \sqrt{\frac{\mathbf{1}\{N_{t_k}(s_t, a_t) > \tau\}}{N_{t_k}(s_t, a_t)}} \le \sum_{k=1}^m \sum_{t=t_k}^{t_{k+1}-1} \sqrt{\frac{2}{N_t(s_t, a_t) + 1}} = \sqrt{2} \sum_{t=1}^T (N_t(s_t, a_t) + 1)^{-1/2} \le \sqrt{2} \sum_{s,a} \sum_{j=1}^{N_{T+1}(s,a)} j^{-1/2} \le \sqrt{2} \sum_{s,a} \int_0^{N_{T+1}(s,a)} x^{-1/2}\, dx \le \sqrt{2SA \sum_{s,a} N_{T+1}(s, a)} = \sqrt{2SAT}.$

Note that, since all rewards and transitions are absolutely constrained to [0, 1], our regret satisfies

$\min\Big\{\tau \sum_{k=1}^m \sum_{i=1}^\tau \min\{\beta_k(s_{t_k+i}, a_{t_k+i}), 1\},\, T\Big\} \le \min\big\{2\tau^2 SA + \tau\sqrt{28 S^2 AT \log(SAT)},\, T\big\} \le \sqrt{2\tau^2 SAT} + \tau\sqrt{28 S^2 AT \log(SAT)} \le \tau S \sqrt{30\, AT \log(SAT)},$

which is our required result.
Forgetful Bayes and myopic planning: Human learning and decision-making in a bandit setting

Shunan Zhang
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093
s6zhang@ucsd.edu

Angela J. Yu
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093
ajyu@ucsd.edu

Abstract

How humans achieve long-term goals in an uncertain environment, via repeated trials and noisy observations, is an important problem in cognitive science. We investigate this behavior in the context of a multi-armed bandit task. We compare human behavior to a variety of models that vary in their representational and computational complexity. Our result shows that subjects' choices, on a trial-to-trial basis, are best captured by a "forgetful" Bayesian iterative learning model [21] in combination with a partially myopic decision policy known as Knowledge Gradient [7]. This model accounts for subjects' trial-by-trial choices better than a number of other previously proposed models, including optimal Bayesian learning and risk minimization, ε-greedy and win-stay-lose-shift. It has the added benefit of being closer in performance to the optimal Bayesian model than all the other heuristic models that have the same computational complexity (all are significantly less complex than the optimal model). These results constitute an advancement in the theoretical understanding of how humans negotiate the tension between exploration and exploitation in a noisy, imperfectly known environment.

1 Introduction

How humans achieve long-term goals in an uncertain environment, via repeated trials and noisy observations, is an important problem in cognitive science.
The computational challenges consist of the learning component, whereby the observer updates his/her representation of knowledge and uncertainty based on ongoing observations, and the control component, whereby the observer chooses an action that balances between the short-term objective of acquiring reward and the long-term objective of gaining information about the environment. A classic task used to study such sequential decision-making problems is the multi-armed bandit paradigm [15]. In a standard bandit setting, people are given a limited number of trials to choose among a set of alternatives, or arms. After each choice, an outcome is generated based on a hidden reward distribution specific to the arm chosen, and the objective is to maximize the total reward after all trials. The reward gained on each trial both has intrinsic value and informs the decision maker about the relative desirability of the arm, which can help with future decisions. In order to be successful, decision makers have to balance their decisions between general exploration (selecting an arm about which one is ignorant) and exploitation (selecting an arm that is known to have relatively high expected reward). Because bandit problems elegantly capture the tension between exploration and exploitation that is manifest in real-world decision-making situations, they have received attention in many fields, including statistics [10], reinforcement learning [11, 19], economics [e.g., 1], psychology and neuroscience [5, 4, 18, 12, 6]. There is no known analytical optimal solution to the general bandit problem, though properties of the optimal solution are known for special cases [10]. For relatively simple, finite-horizon problems, the optimal solution can be computed numerically via dynamic programming [11], though its computational complexity grows exponentially with the number of arms and trials.
In the psychology literature, a number of heuristic policies, with varying levels of complexity in the learning and control processes, have been proposed as possible strategies used by human subjects [5, 4, 18, 12]. Most models assume that humans either adopt simplistic policies that retain little information about the past and sidestep long-term optimization (e.g. win-stay-lose-shift and ε-greedy), or switch between an exploration and an exploitation mode either randomly [5] or discretely over time as more is learned about the environment [18]. In this work, we analyze a new model for human bandit choice behavior, whose learning component is based on the dynamic belief model (DBM) [21], and whose control component is based on the knowledge gradient (KG) algorithm [7]. DBM is a Bayesian iterative inference model that assumes that there exist statistical patterns in a sequence of observations, and that these patterns tend to change at a characteristic timescale [21]. DBM was proposed as a normative learning framework that is able to capture the commonly observed sequential effect in human choice behavior, where choice probabilities (and response times) are sensitive to the local history of preceding events in a systematic manner, even if the subjects are instructed that the design is randomized, so that any local trends arise merely by chance and are not truly predictive of upcoming stimuli [13, 8, 20, 3]. KG is a myopic approximation to the optimal policy for sequential informational control problems, originally developed for operations research applications [7]; KG is known to be exactly optimal in some special cases of bandit problems, such as when there are only two arms.
Conditioned on the previous observations at each step, KG chooses the option that maximizes the future cumulative reward gain, based on the myopic assumption that the next observation is the last exploratory choice, and that all remaining choices will be exploitative (choosing the option with the highest expected reward by the end of the next trial). Note that this myopic assumption is only used in reducing the complexity of computing the expected value of each option, and is not actually implemented in practice: the algorithm may end up executing arbitrarily many non-exploitative choices. KG tends to explore more when the number of trials left is large, because finding an arm with even a slightly better reward rate than the currently best known one can lead to a large cumulative advantage in future gain; on the other hand, when the number of trials left is small, KG tends to stay with the currently best known option, as the relative benefit of finding a better option diminishes against the risk of wasting limited time on a good option. KG has been shown to outperform several established models, including optimal Bayesian learning and risk minimization, ε-greedy and win-stay-lose-shift, for human decision-making in bandit problems, under two learning scenarios other than DBM [22]. In the following, we first describe the experiment, then describe all the learning and control models that we consider. We then compare the performance of the models both in terms of agreement with human behavior on a trial-to-trial basis, and in terms of computational optimality.

2 Experiment

We adopt data from [18], where a total of 451 subjects participated in the experiment as part of "test week" at the University of Amsterdam. In the experiment, each participant completed 20 bandit problems in sequence; all problems had 4 arms and 15 trials.
The reward rates were fixed for all arms in each game and were generated, prior to the start of data collection, independently from a Beta(2,2) distribution. All participants played the same reward rates, but the order of the games was randomized. Participants were instructed that the reward rates in all games were drawn from the same environment, and that the reward rates were drawn only once; participants were not told the exact form of the Beta environment, i.e. Beta(2,2). A screenshot of the experimental interface is shown in Fig 1a.

3 Models

There exist multiple levels of complexity and optimality in both the learning and the decision components of decision-making models of bandit problems. For the learning component, we examine whether people maintain any statistical representation of the environment at all, and if they do, whether they only keep a mean estimate (running average) of the reward probability of the different options, or also uncertainty about those estimates; in addition, we consider the possibility that they entertain trial-by-trial fluctuation of the reward probabilities. The decision component can also

Figure 1: (a) A screenshot of the experimental interface. The four panels correspond to the four arms, each of which can be chosen by clicking the corresponding button. In each panel, successes from previous trials are shown as green bars, and failures as red bars. At the top of each panel, the ratio of successes to failures, if defined, is shown. The top of the interface provides the count of the total number of successes to the current trial, the index of the current trial and the index of the current game. (b) Bayesian graphical model of the FBM, assuming fixed reward probabilities; θ ∈ [0,1], $R_t$ ∈ {0,1}. The inset shows an example of the Beta prior for the reward probabilities. The numbers in circles show example values for the variables.
(c) Bayesian graphical model of the DBM, assuming reward probabilities change from trial to trial; $P(\theta_t) = \gamma\,\delta(\theta_t = \theta_{t-1}) + (1-\gamma)\,P_0(\theta_t)$.

differ in complexity in at least two respects: the objective the decision policy tries to optimize (e.g. reward versus information), and the time horizon over which the decision policy optimizes its objective (e.g. greedy versus long-term). In this section, we introduce models that incorporate different combinations of learning and decision policies.

3.1 Bayesian Learning in Beta Environments

The observations are generated independently and identically (iid) from an unknown Bernoulli distribution for each arm. We consider two Bayesian learning scenarios below: the dynamic belief model (DBM), which assumes that the Bernoulli reward rates for all the arms can reset on any trial with probability 1 − γ, and the fixed belief model (FBM), a special case of DBM that assumes the reward rates to be stationary throughout each game. In either case, we assume the prior distribution that generates the Bernoulli rates is a Beta distribution, Beta(α, β), which is conjugate to the Bernoulli distribution, and whose two hyper-parameters, α and β, specify the pseudo-counts associated with the prior.

3.1.1 Dynamic Belief Model

Under the dynamic belief model (DBM), the reward probabilities can undergo discrete changes at times during the experimental session, such that on any trial the subject's prior belief is a mixture of the posterior belief from the previous trial and a generic prior. The subject's implicit task is then to track the evolving reward probability of each arm over the course of the experiment. Suppose on each game we have K arms with reward rates $\theta_k$, k = 1, ..., K, which are iid generated from Beta(α, β). Let $S^t_k$ and $F^t_k$ be the numbers of successes and failures obtained from the kth arm up to trial t. The estimated reward probability of arm k at trial t is $\theta^t_k$.
We assume $\theta^t_k$ has a Markovian dependence on $\theta^{t-1}_k$, such that there is a probability γ of them being the same, and a probability 1 − γ of $\theta^t_k$ being redrawn from the prior distribution Beta(α, β). The Bayesian ideal observer combines the sequentially developed prior belief about reward probabilities with the incoming stream of observations (successes and failures on each arm) to infer the new posterior distributions. The observation $R^t_k$ is assumed to be Bernoulli: $R^t_k \sim \mathrm{Bernoulli}(\theta^t_k)$. We use the notation $q^t_k(\theta^t_k) := \Pr(\theta^t_k \mid S^t_k, F^t_k)$ to denote the posterior distribution of $\theta^t_k$ given the observed sequence, also known as the belief state. On each trial, the new posterior distribution can be computed via Bayes' rule:

$q^t_k(\theta^t_k) \propto \Pr(R^t_k \mid \theta^t_k)\, \Pr(\theta^t_k \mid S^{t-1}_k, F^{t-1}_k)$,   (1)

where the prior probability is a weighted sum (parameterized by γ) of the last trial's posterior and the generic prior $q^0 := \mathrm{Beta}(\alpha, \beta)$:

$\Pr(\theta^t_k = \theta \mid S^{t-1}_k, F^{t-1}_k) = \gamma\, q^{t-1}_k(\theta) + (1 - \gamma)\, q^0(\theta)$.   (2)

3.1.2 Fixed Belief Model

A simpler generative model (and a more correct one given the true, stationary environment) is to assume that the statistical contingencies in the task remain fixed throughout each game, i.e. all bandit arms have fixed probabilities of giving a reward throughout the game. What the subjects would then learn about the task over the time course of the experiment is the true value of θ. We call this model the fixed belief model (FBM); it can be viewed as a special case of the DBM with γ = 1. In the Bayesian update rule, the prior on each trial is simply the posterior on the previous trial. Figure 1b,c illustrates the graphical models of the FBM and DBM, respectively.

3.2 Decision Policies

We consider four different decision policies. We first describe the optimal model, and then the three heuristic models with increasing levels of complexity.
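Before turning to the decision policies, note that the DBM learning update of Eqs. (1)-(2) is easy to state numerically on a discretized grid over θ; the sketch below is our own illustration (not the authors' code), with a grid in place of the continuous Beta density:

```python
import numpy as np

def dbm_update(q, r, theta, gamma, q0):
    """One DBM belief update on a discretized grid over theta.
    q: current posterior weights on the grid `theta`; r: 0/1 outcome;
    gamma: persistence probability; q0: discretized generic prior."""
    prior = gamma * q + (1.0 - gamma) * q0      # Eq. (2): mixture prior
    like = theta if r == 1 else 1.0 - theta     # Bernoulli likelihood
    post = like * prior                         # Eq. (1), unnormalized
    return post / post.sum()
```

Setting gamma = 1 recovers the FBM, i.e. the standard conjugate Beta-Bernoulli update; gamma < 1 continually leaks probability mass back toward the generic prior, which is the "forgetful" aspect of the model.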
3.2.1 The Optimal Model

The learning and decision problem for bandit problems can be viewed as a Markov Decision Process with a finite horizon [11], with the state being the belief state $q^t = (q^t_1, q^t_2, q^t_3, q^t_4)$, which obviously provides the sufficient statistics for all the data seen up through trial t. Due to the low dimensionality of the bandit problem here (i.e. small number of arms and number of trials per game), the optimal policy, up to a discretization of the belief state, can be computed numerically using Bellman's dynamic programming principle [2]. Let $V_t(q^t)$ be the expected total future reward on trial t. The optimal policy should satisfy the following iterative property:

$V_t(q^t) = \max_k \big( \theta^t_k + \mathbb{E}[V_{t+1}(q^{t+1})] \big)$   (3)

and the optimal action, $D^t$, is chosen according to

$D^t(q^t) = \arg\max_k \big( \theta^t_k + \mathbb{E}[V_{t+1}(q^{t+1})] \big)$.   (4)

We solve the equation using dynamic programming, backward in time from the last time step, whose value function and optimal policy are known for any belief state: always choose the arm with the highest expected reward, and the value function is just that expected reward. In the simulations, we compute the optimal policy off-line, for any conceivable setting of the belief state on each trial (up to a fine discretization of the belief state space), and then apply the computed policy to each sequence of choices and observations that each subject experiences. We use the term "the optimal solution" to refer to the specific solution under α = 2 and β = 2, which is the true experimental design.

3.2.2 Win-Stay-Lose-Shift

WSLS does not learn any abstract representation of the environment, and has a very simple decision policy. It assumes that the decision-maker will keep choosing the same arm as long as it continues to produce a reward, but shifts to other arms (with equal probabilities) following a failure to gain reward. It starts off on the first trial randomly (equal probability at all arms).
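The WSLS policy just described admits a few-line sketch (function name and argument conventions are ours):

```python
import random

def wsls(prev_arm, prev_reward, n_arms, rng=random):
    """Win-stay-lose-shift: repeat the previous arm after a success,
    otherwise move to one of the other arms uniformly at random;
    the first trial (prev_arm is None) is a uniform random choice."""
    if prev_arm is None:
        return rng.randrange(n_arms)
    if prev_reward == 1:
        return prev_arm                        # win-stay
    others = [k for k in range(n_arms) if k != prev_arm]
    return rng.choice(others)                  # lose-shift
```

Because the policy conditions only on the last trial, it retains no statistical representation of the environment, which is precisely why it serves as the low-complexity baseline here.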
3.2.3 ε-Greedy

The ε-greedy model assumes that decision-making is governed by a parameter ε that controls the balance between random exploration and exploitation. On each trial, with probability ε the decision-maker chooses randomly (exploration); otherwise, it chooses the arm with the greatest estimated reward rate (exploitation). ε-Greedy keeps simple estimates of the reward rates, but does not track the uncertainty of the estimates. It is not sensitive to the horizon: it maximizes the immediate gain at a constant rate, and otherwise searches for information by random selection. More concretely, ε-greedy adopts the stochastic policy:

Pr(D^t = k | ε, θ̄^t) = (1 − ε)/M^t  if k ∈ argmax_{k′} θ̄_{k′}^t,  and  ε/(K − M^t)  otherwise,

where M^t is the number of arms with the greatest estimated value on the t-th trial and K is the number of arms.

3.2.4 Knowledge Gradient

The knowledge gradient (KG) algorithm [16] is an approximation to the optimal policy: it pretends only one more exploratory measurement is allowed, and assumes that all remaining choices will exploit what is known after that measurement. It evaluates the expected change in each estimated reward rate if a certain arm were to be chosen, based on the current belief state. Its approximate value for choosing arm k on trial t given the current belief state q^t is:

v_k^{KG,t} = E[ max_{k′} θ̄_{k′}^{t+1} | D^t = k, q^t ] − max_{k′} θ̄_{k′}^t    (5)

The first term is the expected largest reward rate (the value of the subsequent exploitative choices) on the next step if the k-th arm were to be chosen, with the expectation taken over all possible outcomes of choosing k; the second term is the expected largest reward if no further observation is taken; their difference is the "knowledge gradient" of taking one more exploratory sample. The KG decision rule is:

D^{KG,t} = argmax_k ( θ̄_k^t + (T − t − 1) v_k^{KG,t} )    (6)

The first term of Equation 6 denotes the expected immediate reward of choosing the k-th arm on trial t, whereas the second term reflects the expected knowledge gain.
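For Beta–Bernoulli beliefs (the FBM case), the one-step lookahead in Equation 5 can be computed by enumerating the two possible outcomes of sampling arm k, since only that arm's posterior mean changes. The sketch below assumes this setting; the function names and the Beta(2, 2) default are ours.

```python
def posterior_means(counts, a0=2, b0=2):
    """Posterior mean reward rate per arm, given (successes, failures) counts."""
    return [(a0 + s) / (a0 + s + b0 + f) for s, f in counts]

def kg_value(counts, k, a0=2, b0=2):
    """Knowledge gradient of sampling arm k (Eq. 5): expected best posterior
    mean after one more observation of arm k, minus the current best mean."""
    means = posterior_means(counts, a0, b0)
    s, f = counts[k]
    p = means[k]                                   # P(success) under the belief
    m_win = (a0 + s + 1) / (a0 + s + 1 + b0 + f)   # arm k's mean after a success
    m_lose = (a0 + s) / (a0 + s + b0 + f + 1)      # arm k's mean after a failure
    best_other = max((m for i, m in enumerate(means) if i != k), default=0.0)
    e_best = p * max(m_win, best_other) + (1 - p) * max(m_lose, best_other)
    return e_best - max(means)

def kg_choice(counts, t, T, a0=2, b0=2):
    """Eq. (6): immediate expected reward plus horizon-weighted knowledge gain."""
    means = posterior_means(counts, a0, b0)
    score = [means[k] + (T - t - 1) * kg_value(counts, k, a0, b0)
             for k in range(len(counts))]
    return max(range(len(counts)), key=score.__getitem__)
```

Because E[max(·)] ≥ max(E[·]), the knowledge gradient is nonnegative, and the (T − t − 1) factor makes the exploration bonus shrink as the horizon approaches.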
The formula for calculating v_k^{KG,t} for binary bandit problems can be found in Chapter 5 of [14].

3.3 Model Inference and Evaluation

Unlike previous modeling papers on human decision-making in the bandit setting [5, 4, 18, 12], which generally look at the average statistics of how people distribute their choices among the options, here we use a more stringent trial-by-trial measure of model agreement, i.e. how well each model captures the subject's choices. We calculate the per-trial likelihood of the subject's choice conditioned on the previously experienced choices and outcomes. For WSLS, it is 1 for a win-stay decision, 1/3 for a lose-shift decision (because the model predicts shifting to the other three arms with equal probability), and 0 otherwise. For probabilistic models, taking ε-greedy as an example, it is (1 − ε)/M if the subject chooses an option with the highest predicted reward, where M is the number of arms with the highest predicted reward, and ε/(4 − M) for any other choice; when M = 4, all arms are considered to have the highest predicted reward. We use sampling to compute a posterior distribution over the following model parameters: the parameters of the prior Beta distribution (α and β) for all policies, γ for all DBM policies, and ε for ε-greedy. For this model-fitting process, we infer the re-parameterization α/(α+β) and α+β, with a uniform prior on the former and a weakly informative prior on the latter, Pr(α+β) ∝ (α+β)^{−3/2}, as suggested by [9]. This re-parameterization has a psychological interpretation as the mean reward probability and the certainty. We use uniform priors for ε and γ. Model inference uses a combined sampling algorithm, with Gibbs sampling of ε, and Metropolis sampling of γ, α and β. All chains contained 3000 steps, with a burn-in of 1000. All chains converged according to the R-hat measure [9].
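The per-trial likelihood computation described above can be sketched for the two simplest cases, WSLS and ε-greedy, in the four-arm setting of the task. The helper functions below are our own illustrative names.

```python
def wsls_likelihood(choice, prev_choice, prev_reward, n_arms=4):
    """Likelihood of the subject's choice under Win-Stay-Lose-Shift."""
    if prev_reward:                                  # win: stay deterministically
        return 1.0 if choice == prev_choice else 0.0
    if choice == prev_choice:                        # lose: never stay
        return 0.0
    return 1.0 / (n_arms - 1)                        # shift uniformly (1/3 here)

def eg_likelihood(choice, est_rates, eps):
    """Likelihood of the subject's choice under epsilon-greedy."""
    best = max(est_rates)
    greedy = [k for k, r in enumerate(est_rates) if r == best]
    M, K = len(greedy), len(est_rates)
    if M == K:                 # all arms tied: every arm counts as greedy
        return 1.0 / K
    return (1 - eps) / M if choice in greedy else eps / (K - M)
```

Averaging these quantities over trials, games, and subjects gives the model-agreement measure used in the results below.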
We calculate the average per-trial likelihood (across trials, games, and subjects) under each model based on its maximum a posteriori (MAP) parameterization. We fit each model across all subjects, assuming that every subject shares the same prior belief about the environment (α and β), rate of exploration (ε), and rate of change (γ). For further analyses shown in the results section, we also fit the ε-greedy policy and the KG policy, together with both learning models, to each individual subject. All model inferences are based on a leave-one-out cross-validation containing 20 runs. Specifically, for each run, we train the model while withholding one game (sampled without replacement) from each subject, and test the model on the withheld game.

[Figure 2 appears here: panels show (a) model agreement with optimal, (b) model agreement with subjects, (c) individually-fit model agreement, and (d) trial-wise model agreement across trials for DBM and individually-fit variants of eG and KG.]

Figure 1: Average reward achieved by the KG model forward-playing the bandit problems with the same reward rates. KG achieves a similar reward distribution to the human performance when playing at its maximum a posteriori probability (MAP) estimate, α = .1 and β = .8. KG achieves the same reward distribution as the optimal solution when playing with the correct prior knowledge of the environment.
Figure 2: (a) Model agreement with data simulated by the optimal solution, measured as the average per-trial likelihood. All models (except the optimal) are fit to data simulated by the optimal solution under the correct beta prior Beta(2,2). Each bar shows the mean per-trial likelihood (across all subjects, trials and games) of a decision policy coupled with a learning framework. For ε-greedy (eG) and KG, the error bars show the standard errors of the mean per-trial likelihood calculated across all tests in the cross-validation procedure (20-fold).
WSLS does not rely on any learning framework. (b) Model agreement with human data based on a leave-one(game)-out cross-validation, where we randomly withhold one game from each subject for testing, i.e. we train the model on a total of 19 × 451 games, with 19 games from each subject. For the current study, we implement the optimal policy under DBM using the γ estimated under the KG DBM model, in order to reduce the computational burden. (c) Mean per-trial likelihood of the ε-greedy model (eG) and KG with individually-fit parameters (for each subject), using cross-validation; the individualized DBM (abbreviated ind. in the legend) assumes each person has his/her own Beta prior and γ. (d) Trial-wise agreement of eG and KG under individually-fit MAP parameterization. The mean per-trial likelihood is calculated across all subjects for each trial, with the error bars showing the standard error of the mean per-trial likelihood across all tests.

4 Results

4.1 Model Agreement with the Optimal Policy

We first examine how well each of the decision policies agrees with the optimal policy on a trial-to-trial basis. Figure 2a shows the mean per-trial likelihood (averaged across all tests in the cross-validation procedure) of each model when fit to data simulated by the optimal solution under the true design Beta(2,2). The KG algorithm, under either learning framework, is most consistent (over 90%) with the optimal algorithm (separately under FBM and DBM assumptions). This is not surprising, given that KG is an approximation to the optimal policy. The inferred prior is Beta(1.93, 2.15), correctly recovering the actual environment. The simplest model, WSLS, nevertheless achieves model agreement well above 60%. In fact, the optimal model also almost always stays after a success; the only situation in which WSLS does not resemble the optimal decision occurs when it shifts away from an arm with which the optimal policy would otherwise stay.
Because the optimal solution (which simulated the data) knows the true environment, DBM has no advantage over FBM.

4.2 Model Agreement with Human Data

Figure 2b shows the mean per-trial likelihood (averaged across all tests in the cross-validation procedure) of each model when fit to the human data. KG with DBM outperforms the other models under consideration. The average posterior mean of γ across all tests is .81, with standard error .091. The average posterior means of α and β are .65 and 1.05, with standard errors .074 and .122, respectively. A γ value of .81 implies that the subjects behave as if they think the world changes on average about every 5 steps (calculated as 1/(1 − .81)). We did a pairwise comparison between models on the mean per-trial likelihood of the subject's choice given each model's predictive distribution, using a pairwise t-test. The tests between DBM-optimal and DBM-eG, and between DBM-optimal and FBM-optimal, are not significant at the .05 level.

[Figure 3 appears here: four panels plot, across trials, P(stay|win), P(shift|lose), P(best value), and P(least known) for humans, the optimal solution, FBM/DBM variants of KG and eG, and WSLS.]
Figure 3: Behavioral patterns in the human data and the simulated data from all models.
The four panels show the trial-wise probability of staying after winning, shifting after losing, choosing the arm with the greatest estimated value on any trial, and choosing the least-known arm when the exploitative choice is not made, respectively. Probabilities are calculated from data simulated from each model at its MAP parameterization, and are averaged across all games and all participants. The optimal solution shown here uses the correct prior Beta(2,2).

All other tests are significant. Table 1 shows the p-values for each pairwise comparison.

Table 1: P-values for all pairwise t-tests.

        KG FB   eG DB   eG FB   Op DB   Op FB
KG DB   .0480   .0001   .0000   .0001   .0000
KG FB           .0187   .0000   .0060   .0002
eG DB                   .0001   .5066   .0354
eG FB                           .0001   .0036
Op DB                                   .1476

Figure 2c shows the model agreement with human data of ε-greedy and KG when their parameters are individually fit. KG with DBM and individual parameterization performs best under cross-validation. ε-Greedy also gains substantially in model agreement when coupled with DBM; in fact, under DBM, ε-greedy and KG achieve comparable overall model agreement. However, Figure 2d shows a systematic difference between the two models in their agreement with human data on a trial-by-trial basis: during early trials, subjects' behavior is more consistent with ε-greedy, whereas during later trials it is more consistent with KG. We next break down the overall behavioral performance into four finer measures: how often people win-stay and lose-shift, how often they exploit, and whether they use random selection or search for the greatest amount of information during exploration. Figure 3 shows the results of model comparisons on these additional behavioral criteria.
We show the patterns of the subjects, the optimal solution with Beta(2,2), KG and eG under both learning frameworks, and the simplest model, WSLS. The first panel, for example, shows the trial-wise probability of staying with the same arm following a previous success. People do not always stay with the same arm after an immediate reward, whereas the optimal algorithm always does. Subjects also do not persistently explore, as predicted by ε-greedy. In fact, subjects explore more during early trials and become more exploitative later on, similar to KG. As implied by Equation 5, KG calculates the probability of an arm surpassing the known best if chosen, and weights the knowledge gain more heavily in the early stage of the game. During the early trials, it sometimes chooses the second-best arm to maximize the knowledge gain. Under DBM, a previous success causes the corresponding arm to appear more rewarding, resulting in a smaller knowledge gradient value; because knowledge is weighted more heavily during the early trials, the KG model then tends to choose second-best arms that have a larger knowledge gain. The second panel shows the trial-wise probability of shifting away given a previous failure. As the horizon approaches, it becomes increasingly important to stay with an arm that is known to be reasonably good, even if it occasionally yields a failure. All algorithms, except the naive WSLS, show a downward trend in shifting after losing as the horizon approaches, in line with human choices. ε-Greedy with DBM learning is closest to human behavior. The third panel shows the probability of choosing the arm with the largest success ratio. KG under FBM mimics the optimal model in that the probability of choosing the highest success ratio increases over time; both grossly overestimate subjects' tendency to select the highest success ratio, as well as predicting an unrealized upward trend.
WSLS underestimates how often subjects make this choice, while ε-greedy under DBM learning overestimates it. KG under DBM, and ε-greedy with FBM, are closest to subjects' behavior. The fourth panel shows how often subjects choose to explore the least-known option when they shift away from the choice with the highest expected reward. DBM with either KG or ε-greedy provides the best fit. In general, the KG model with DBM matches the second-order trend of the human data best, with ε-greedy following closely behind. However, a gap remains on the absolute scale, especially with respect to the probability of staying with a successful arm.

5 Discussion

Our analysis suggests that human behavior in the multi-armed bandit task is best captured by a knowledge gradient decision policy supported by a dynamic belief model learning process. Human subjects tend to explore more often than policies that optimize the specific utility of the bandit problems, and KG with DBM attributes this tendency to a belief in a stochastically changing environment, which induces sequential effects driven by recent trial history. Concretely, we find that people adopt a learning process that (erroneously) assumes the world to be non-stationary, and that they employ a semi-myopic choice policy that is sensitive to the horizon but assumes one-step exploration when comparing action values. Our results indicate that all decision policies considered here capture human data much better under the dynamic belief model than under the fixed belief model. By assuming the world is changeable, DBM discounts data from the distant past in favor of new data. Instead of attributing this discounting behavior to biological limitations (e.g. memory loss), DBM explains it as the automatic engagement of mechanisms that are critical for adapting to a changing environment. Indeed, there is previous work suggesting that people approach bandit problems as if expecting a changing world [17].
This is despite the subjects being informed that the arms have fixed reward probabilities. So far, our results also favor the knowledge gradient policy as the best model of human decision-making in the bandit task. It optimizes the semi-myopic goal of maximizing future cumulative reward while assuming only one more time step of exploration and strict exploitation thereafter. The KG model under the more general DBM has the largest proportion of correct predictions of human data, and captures the trial-wise dynamics of human behavior reasonably well. This result implies that humans may explore in a normative way, as captured by KG, by combining immediate reward expectation and long-term knowledge gain, in contrast to previously proposed behavioral models that typically assume exploration is random or arbitrary. In addition, KG achieves behavioral patterns similar to the optimal model and is computationally much less expensive (in particular, it is online and incurs a constant cost), making it a more plausible algorithm for human learning and decision-making. We observed that decision policies vary systematically in their abilities to predict human behavior on different kinds of trials. In the real world, people might use hybrid policies to solve bandit problems; they might also use smart heuristics that dynamically adjust the weight of the knowledge gain relative to the immediate reward gain. Figure 2d suggests that subjects may adopt a strategy that is aggressively greedy at the beginning of the game, and then switch to a policy that is sensitive both to the value of exploration and to the impending horizon as the end of the game approaches. One possibility is that subjects discount future rewards, which would result in more exploitative behavior than non-discounted KG, especially at the beginning of the game. These would all be interesting lines of future inquiry.
Acknowledgments

We thank M. Steyvers and E.-J. Wagenmakers for sharing the data. This material is based upon work supported by, or in part by, the U. S. Army Research Laboratory and the U. S. Army Research Office under contract/grant number W911NF1110391 and NIH NIDA B/START # 1R03DA030440-01A1.

References

[1] J. Banks, M. Olson, and D. Porter. An experimental analysis of the bandit problem. Economic Theory, 10:55–77, 1997.
[2] R. Bellman. On the theory of dynamic programming. Proceedings of the National Academy of Sciences, 1952.
[3] R. Cho, L. Nystrom, E. Brown, A. Jones, T. Braver, P. Holmes, and J. D. Cohen. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task. Cognitive, Affective and Behavioral Neuroscience, 2:283–299, 2002.
[4] J. D. Cohen, S. M. McClure, and A. J. Yu. Should I stay or should I go? Exploration versus exploitation. Philosophical Transactions of the Royal Society B: Biological Sciences, 362:933–942, 2007.
[5] N. D. Daw, J. P. O'Doherty, P. Dayan, B. Seymour, and R. J. Dolan. Cortical substrates for exploratory decisions in humans. Nature, 441:876–879, 2006.
[6] A. Ejova, D. J. Navarro, and A. F. Perfors. When to walk away: The effect of variability on keeping options viable. In N. Taatgen, H. van Rijn, L. Schomaker, and J. Nerbonne, editors, Proceedings of the 31st Annual Conference of the Cognitive Science Society, Austin, TX, 2009.
[7] P. Frazier, W. Powell, and S. Dayanik. A knowledge-gradient policy for sequential information collection. SIAM Journal on Control and Optimization, 47:2410–2439, 2008.
[8] W. R. Garner. An informational analysis of absolute judgments of loudness. Journal of Experimental Psychology, 46:373–380, 1953.
[9] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall/CRC, Boca Raton, FL, 2nd edition, 2004.
[10] J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, 41:148–177, 1979.
[11] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[12] M. D. Lee, S. Zhang, M. Munro, and M. Steyvers. Psychological models of human and optimal performance in bandit problems. Cognitive Systems Research, 12:164–174, 2011.
[13] M. I. Posner and Y. Cohen. Components of visual orienting. Attention and Performance Vol. X, 1984.
[14] W. Powell and I. Ryzhov. Optimal Learning. Wiley, 1st edition, 2012.
[15] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
[16] I. Ryzhov, W. Powell, and P. Frazier. The knowledge gradient algorithm for a general class of online learning problems. Operations Research, 60:180–195, 2012.
[17] J. Shin and D. Ariely. Keeping doors open: The effect of unavailability on incentives to keep options viable. Management Science, 50:575–586, 2004.
[18] M. Steyvers, M. D. Lee, and E.-J. Wagenmakers. A Bayesian analysis of human decision-making on bandit problems. Journal of Mathematical Psychology, 53:168–179, 2009.
[19] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[20] M. C. Treisman and T. C. Williams. A theory of criterion setting with an application to sequential dependencies. Psychological Review, 91:68–111, 1984.
[21] A. J. Yu and J. D. Cohen. Sequential effects: Superstition or rational behavior? In Advances in Neural Information Processing Systems, volume 21, pages 1873–1880, Cambridge, MA, 2009. MIT Press.
[22] S. Zhang and A. J. Yu. Cheap but clever: Human active learning in a bandit setting. In Proceedings of the Cognitive Science Society Conference, 2013.
Learning and using language via recursive pragmatic reasoning about other agents Nathaniel J. Smith∗ University of Edinburgh Noah D. Goodman Stanford University Michael C. Frank Stanford University Abstract Language users are remarkably good at making inferences about speakers’ intentions in context, and children learning their native language also display substantial skill in acquiring the meanings of unknown words. These two cases are deeply related: Language users invent new terms in conversation, and language learners learn the literal meanings of words based on their pragmatic inferences about how those words are used. While pragmatic inference and word learning have both been independently characterized in probabilistic terms, no current work unifies these two. We describe a model in which language learners assume that they jointly approximate a shared, external lexicon and reason recursively about the goals of others in using this lexicon. This model captures phenomena in word learning and pragmatic inference; it additionally leads to insights about the emergence of communicative systems in conversation and the mechanisms by which pragmatic inferences become incorporated into word meanings. 1 Introduction Two puzzles present themselves to language users: What do words mean in general, and what do they mean in context? Consider the utterances “it’s raining,” “I ate some of the cookies,” or “can you close the window?” In each, a listener must go beyond the literal meaning of the words to fill in contextual details (“it’s raining here and now”), infer that a stronger alternative is not true (“I ate some but not all of the cookies”), or more generally infer the speaker’s communicative goal (“I want you to close the window right now because I’m cold”), a process known as pragmatic reasoning. Theories of pragmatics frame the process of language comprehension as inference about the generating goal of an utterance given a rational speaker [14, 8, 9]. 
For example, a listener might reason, "if she had wanted me to think 'all' of the cookies, she would have said 'all'—but she didn't. Hence 'all' must not be true and she must have eaten some but not all of the cookies." This kind of reasoning is core to language use. But pragmatic reasoning about meaning-in-context relies on stable literal meanings that must themselves be learned. In both adults and children, uncertainty about word meanings is common, and often considering speakers' pragmatic goals can help to resolve this uncertainty. For example, if a novel word is used in a context containing both a novel and a familiar object, young children can make the inference that the novel word refers to the novel object [22].1 For adults who are proficient language users, there are also a variety of intriguing cases in which listeners seem to create situation- and task-specific ways of referring to particular objects. For example, when asked to refer to idiosyncratic geometric shapes, over the course of an experimental session, participants create conventionalized descriptions that allow them to perform accurately even though they do not begin with shared labels [19, 7]. In both of these examples, reasoning about another person's goals informs language learners' estimates of what words are likely to mean. Despite this intersection, there is relatively little work that takes pragmatic reasoning into account when considering language learning in context.

∗nathaniel.smith@ed.ac.uk

1. Very young children make inferences that are often labeled as "pragmatic" in that they involve reasoning about context [6, 1], though in some cases they are systematically 'too literal' (e.g. failing to strengthen SOME to SOME-BUT-NOT-ALL [23]). Here we remain agnostic about the age at which children are able to make such inferences robustly, as it may vary depending on the linguistic materials being used in the inference [2].
Recent work on grounded language learning has attempted to learn large sets of (sometimes relatively complex) word meanings from noisy and ambiguous input (e.g. [10, 17, 20]). And a number of models have begun to formalize the consequences of pragmatic reasoning in situations where limited learning takes place [12, 9, 3, 13]. But as yet these two strands of research have not been brought together so that the implications of pragmatics for learning can be investigated directly. The goal of our current work is to investigate the possibilities for integrating models of recursive pragmatic reasoning with models of language learning, with the hope of capturing phenomena in both domains. We begin by describing a proposal for bringing the two together, noting several issues in previous approaches based on recursive reasoning under uncertainty. We next simulate findings on pragmatic inference in one-shot games (replicating previous work). We then build on these results to simulate the results of pragmatic learning in the language acquisition setting where one communicator is uncertain about the lexicon and in iterated communication games where both communicators are uncertain about the lexicon. 2 Model We model a standard communication game [19, 7]: two participants each, separately, view identical arrays of objects. On the Speaker’s screen, one object is highlighted; their goal is to get the Listener to click on this item. To do this, they have available a fixed, finite set of words; they must pick one. The Listener then receives this word, and attempts to guess which object the Speaker meant by it. In the psychology literature, as in real-world interactions, games are typically iterated; one view of our contribution here is as a generalization of one-shot models [9, 3] to the iterated context. 2.1 Paradoxes in optimal models of pragmatic learning. Multi-agent interactions are difficult to model in a normative or optimal framework without falling prey to paradox. 
Consider a simple model of the agents in the above game. First we define a literal listener L0. This agent has a lexicon of associations between words and meanings; specifically, it assigns each word w a vector of numbers in (0, 1) describing the extent to which this word provides evidence for each possible object.2 To interpret a word, the literal listener simply re-weights their prior expectation about what is referred to using their lexicon's entry for this word:

P_L0(object | word, lexicon) ∝ lexicon(word, object) × P_prior(object).    (1)

Because of the normalization in this equation, there is a systematic but unimportant symmetry among lexicons; we remove this by assuming the lexicon sums to 1 over objects for each word. Confronted with such a listener, a speaker who chooses approximately optimal actions should attempt to choose a word which soft-maximizes the probability that the listener will assign to the target object, modulated by the effort or cost associated with producing this word:

P_S1(word | object, lexicon) ∝ exp( λ ( log P_L0(object | word, lexicon) − cost(word) ) ).    (2)

But given this speaker, the naive L0 strategy is no longer optimal for the listener. Instead, listeners should use Bayes' rule to invert the speaker's decision procedure [9]:

P_L2(object | word, lexicon) ∝ P_S1(word | object, lexicon) × P_prior(object).    (3)

Now a difficulty becomes apparent. Given such a listener, it is no longer optimal for speakers to implement strategy S1; instead, they should implement strategy S3, which soft-maximizes P_L2 instead of P_L0. And then listeners ought to implement L4, and so on. One option is to continue iterating such strategies until reaching a fixed-point equilibrium. While this strategy guarantees that each agent will behave normatively given the other agent's strategy, there is no guarantee that such strategies will be near the system's global optimum.

2. We assume words refer directly to objects, rather than to abstract semantic features.
Our simplification is without loss of generality, however, because we can interpret our model as marginalizing over such a representation, with our literal P_lexicon(object | word) = Σ_features P(object | features) P_lexicon(features | word).

There is a great deal of evidence that humans do not use such equilibrium strategies; their behavior in language games (and in other games [5]) can be well-modeled as implementing Sk or Lk for some small k [9]. Following this work, we recurse a finite (small) number of times, n. The consequence is that one agent, implementing Sn, is fully optimal with respect to the game, while the other, implementing Ln−1, is only nearly optimal—off by a single recursion. This resolves one problem, but as soon as we attempt to add uncertainty about the meanings of words to such a model, a new paradox arises. Suppose the listener is a young child who is uncertain about the lexicon their partner is using. The obvious solution is for them to place a prior on the lexicon; they then update their posterior based on whatever utterances and contextual cues they observe, and in the meantime interpret each utterance by making their best guess, marginalizing out this uncertainty. This basic structure is captured in previous models of Bayesian word learning [10]. But when combined with the recursive pragmatic model, a new question arises: Given such a listener, what model should the speaker use? A rational speaker attempts to maximize the listener’s likelihood of understanding, so if an uncertain listener interprets by marginalizing over some posterior, then a fully knowledgeable speaker should disregard their own lexical knowledge, and instead model and marginalize over the listener’s uncertainty.
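As a concrete illustration, the L0 → S1 → L2 recursion of Eqs. 1–3 can be sketched in a few lines. The toy "some"/"all" lexicon, the uniform prior, zero costs, and λ = 1 are our own illustrative choices, not the paper's implementation (which uses λ = 3 and Dirichlet lexicon priors):

```python
import math

objects = ["SOME-BUT-NOT-ALL", "ALL"]
words = ["some", "all"]
# lexicon[word][object]: evidence the word provides for each object;
# "some" is literally compatible with both states, "all" only with ALL.
lexicon = {"some": {"SOME-BUT-NOT-ALL": 0.5, "ALL": 0.5},
           "all":  {"SOME-BUT-NOT-ALL": 0.0, "ALL": 1.0}}
prior = {o: 0.5 for o in objects}
LAMBDA = 1.0
cost = {w: 0.0 for w in words}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(word):
    # Eq. 1: literal listener re-weights the prior by the lexicon entry
    return normalize({o: lexicon[word][o] * prior[o] for o in objects})

def S1(obj):
    # Eq. 2: soft-max speaker; words with zero literal probability get weight 0
    weights = {}
    for w in words:
        p = L0(w)[obj]
        weights[w] = 0.0 if p == 0 else math.exp(LAMBDA * (math.log(p) - cost[w]))
    return normalize(weights)

def L2(word):
    # Eq. 3: pragmatic listener inverts the speaker's decision procedure
    return normalize({o: S1(o)[word] * prior[o] for o in objects})
```

Scalar implicature (discussed in Section 3.1) falls out of the recursion: hearing "some", the literal listener L0 assigns 0.5 to SOME-BUT-NOT-ALL, while the pragmatic listener L2 assigns 0.75, because a speaker in the ALL state would likely have said "all".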
But if they do this, then their utterances will provide no data about their lexicon, and there is nothing for the rational listener to learn from observing them.³ One final problem is that under this model, when agents switch roles between listener and speaker, there is nothing constraining them to continue using the same language. Optimizing task performance requires my lexicon as a speaker to match your lexicon as a listener and vice-versa, but there is nothing that relates my lexicon as a speaker to my lexicon as a listener, because these never interact. This clearly represents a dramatic mismatch to typical human communication, which almost never proceeds with distinct languages spoken by each participant. 2.2 A conventionality-based model of pragmatic word learning. We resolve the problems described above by assuming that speakers and listeners deviate from normative behavior by assuming a conventional lexicon. Specifically, our final convention-based agents assume: (a) There is some single, specific literal lexicon which everyone should be using, (b) and everyone else knows this lexicon, and believes that I know it as well, (c) but in fact I don’t. These assumptions instantiate a kind of “social anxiety” in which agents are all trying to learn the correct lexicon that they assume everyone else knows. Assumption (a) corresponds to the lexicographer’s illusion: Naive language users will argue vociferously that words have specific meanings, even though these meanings are unobservable to everyone who purportedly uses them. It also explains why learners speak the language they hear (rather than some private language that they assume listeners will eventually learn): Under assumption (a), observing other speakers’ behavior provides data about not just that speaker’s idiosyncratic lexicon, but the consensus lexicon.
Assumption (b) avoids the explosion of hyperⁿ-distributions described above: If agent n knows the lexicon, they assume that all lower agents do as well, reducing to the original tractable model without uncertainty. And assumption (c) introduces a limited form of uncertainty at the top level, and thus the potential for learning. To the extent that a child’s interlocutors do use a stable lexicon and do not fully adapt their speech to accommodate the child’s limitations, these assumptions make a reasonable approximation for the child language learning case. In general, though, in arbitrary multi-turn interactions in which both agents have non-trivial uncertainty, these assumptions are incorrect, and thus induce complex and non-normative learning dynamics. Formally, let an unadorned L and S denote the listener and speaker who follow the above assumptions. If the lexicon were known then the listener would draw inferences as in Ln−1 above; but by assumption (c), they have uncertainty, which they marginalize out:

P_L(object | word, L’s data) = ∫ P_Ln−1(object | word, lexicon) P(lexicon | L’s data) d(lexicon) (4)

³Of course, in reality both parties will generally have some uncertainty, making the situation even worse. If we start from an uncertain listener with a prior over lexicons, then a first-level uncertain speaker needs a prior over priors on lexicons, a second-level uncertain listener needs a prior over priors over priors, etc. The original L0 → S1 → … recursion was bad enough, but at least each step had a constant cost. This new recursion produces hyperⁿ-distributions for which inference almost immediately becomes intractable even in principle, since the dimensionality of the learning problem increases with each step. Yet, without this addition of new uncertainty at each level, the model would dissolve back into certainty as in the previous paragraph, making learning impossible.
Phenomenon | Ref. | Models (WL, PI, PI+U, PI+WL) | Section
Interpreting scalar implicature | [14] | x x x | 3.1
Interpreting Horn implicature | [15] | x x | 3.2
Learning literal meanings despite scalar implicature | [21] | x | 4.1
Disambiguating new words using old words | [22] | x x x | 4.2
Learning new words using old words | [22] | x x | 4.2
Disambiguation without learning | [16] | x x | 4.2
Emergence of novel & efficient lexicons | [11] | x | 5.1
Lexicalization of Horn implicature | [15] | x | 5.2

Table 1: Empirical results and references. WL refers to the word learning model of [10]; PI refers to the recursive pragmatic inference model of [9]; PI+U refers to the pragmatic inference model of [3], which includes lexical uncertainty, marginalizes it out, and then recurses. Our current model is referred to here as PI+WL, and combines pragmatic inference with word learning.

Here L’s data consists of her previous experience with language. In particular, in the iterated games explored here it consists of S’s previous utterances together with whatever other information L may have about their intended referents (e.g. from contextual clues). By assumption (b), L treats these utterances as samples from the knowledgeable speaker Sn−2, not S, and thus as being informative about the lexicon. For instance, when the data is a set of fully observed word-referent pairs {wi, oi}:

P(lexicon | L’s data) ∝ P(lexicon) ∏_i P_Sn−2(wi | oi, lexicon) (5)

The top-level speaker S attempts to select the word which soft-maximizes their utility, with utility now being defined in terms of the informativity of the expectation (over lexicons) that the listener will have for the right referent⁴:

P_S(word | object, S’s data) ∝ exp(λ[log ∫ P_Ln−1(object | word, lexicon) P(lexicon | S’s data) d(lexicon) − cost(word)]) (6)

Here P(lexicon | S’s data) is defined similarly, when S observes L’s interpretations of various utterances, and treats them as samples from Ln−1, not L.
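A sketch of how Eqs. 4–5 can be implemented: we replace the integral with a small discrete set of candidate lexicons (standing in for the paper's importance samples from a Dirichlet prior), let S1 stand in for the knowledgeable speaker Sn−2 and L2 for Ln−1, and take word costs to be zero. The candidate set, the toy data, and all names are our own assumptions:

```python
import math

objects = ["A", "B"]
words = ["u", "v"]
LAMBDA = 3.0

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(word, lex):
    return normalize({o: lex[word][o] * 0.5 for o in objects})

def S1(obj, lex):
    # exp(lambda * log p) == p ** lambda; costs taken as zero for brevity
    return normalize({w: L0(w, lex)[obj] ** LAMBDA for w in words})

def L2(word, lex):
    return normalize({o: S1(o, lex)[word] * 0.5 for o in objects})

# Candidate lexicons: "u" favors A with strength p, "v" favors B symmetrically.
def make_lex(p):
    return {"u": {"A": p, "B": 1 - p}, "v": {"A": 1 - p, "B": p}}

hypotheses = [make_lex(p) for p in (0.2, 0.5, 0.8)]
data = [("u", "A"), ("u", "A"), ("v", "B")]

# Eq. 5: posterior over lexicons from fully observed word-referent pairs,
# treated as samples from the knowledgeable speaker (S1 stands in for Sn-2).
posterior = normalize({i: math.prod(S1(o, lex)[w] for w, o in data)
                       for i, lex in enumerate(hypotheses)})

# Eq. 4: marginalize out lexicon uncertainty when interpreting a word.
def L_uncertain(word):
    return {o: sum(L2(word, lex)[o] * posterior[i]
                   for i, lex in enumerate(hypotheses)) for o in objects}
```

After these three observations, the posterior concentrates on the p = 0.8 hypothesis and the marginal listener interprets "u" as A with high probability, illustrating how learning and marginal interpretation interact.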
However, notice that if S and L have the same subjective distributions over lexicons, then S is approximately optimal with respect to L in the same sense that Sk is approximately optimal with respect to Lk−1. In one-shot games, this model is conceptually equivalent to that of [3] restricted to n = 3; our key innovations are that we allow learning by replacing their P(lexicon) with P(lexicon|data), and provide a theoretical justification for how this learning can occur. In the remainder of the paper, we apply the model described above to a set of one-shot pragmatic inference games that have been well-studied in linguistics [14, 15] and are addressed by previous one-shot models of pragmatic inference [9, 3]. These situations set the stage for simulations investigating how learning proceeds in iterated versions of such games, described in the following section. Results captured by our model and previous models are summarized in Table 1. In our simulations throughout, we somewhat arbitrarily set the recursion depth n = 3 (the minimal value that produces all the qualitative phenomena), λ = 3, and assume that all agents have shared priors on the lexicon and full knowledge of the cost function. Inference is via importance sampling from a Dirichlet prior over lexicons. 3 Pragmatic inference in one-shot games 3.1 Scalar implicature. Many sets of words in natural language form scales in which each term makes a successively stronger claim. “Some” and “all” form a scale of this type. While “I ate some of the cookies” is compatible with the followup “in fact, I ate all of the cookies,” the reverse is not true.

⁴An alternative model would have the speaker take the expectation over informativity, instead of the informativity of the expectation, which would correspond to slightly different utility functions. We adopt the current formulation for consistency with [3].
“Might” and “must” are another example, as are “OK,” “good,” and “excellent.” All of these scales allow for scalar implicatures [14]: the use of a less specific term pragmatically implies that the more specific term does not apply. So although “I ate some of the cookies” could in principle be compatible with eating ALL of them, the listener is led to believe that SOME-BUT-NOT-ALL is the likely state of affairs. The recursive pragmatic reasoning portions of our model capture findings on scalar implicature in the same manner as previous models [3, 13]. 3.2 Horn implicature. Consider a world which contains two words and two types of objects. One word is expensive to use, and one is cheap (call them “expensive” and “cheap” for short). One object type is common and one is rare; denote these COMMON and RARE. Intuitively, there are two possible communicative systems here: a good system where “cheap” refers to COMMON and “expensive” refers to RARE, and a bad system where the opposite holds. Obviously we would prefer to use the good system, but it has historically proven very difficult to derive this conclusion in a game theoretic setting, because both systems are stable equilibria: if our partner uses the bad system, then we would rather follow and communicate at some cost than switch to the good system and fail entirely [3]. Humans, however, unlike traditional game theoretic models, do make the inference that given two otherwise equivalent utterances, the costly utterance should have a rare or unusual meaning. We call this pattern Horn implicature, after [15]. For instance, “Lee got the car to stop” implies that Lee used an unusual method (e.g. not the brakes) because, had he used the brakes, the speaker would have chosen the simpler and shorter (less costly) expression, “Lee stopped the car” [15]. Surprisingly, Bergen et al. [3] show that the key to achieving this favorable result is ignorance.
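This ignorance-based symmetry breaking can be checked numerically. Below we restrict the lexicon prior to just a soft 'good' and a soft 'bad' system (strength 0.7, an illustrative choice rather than the paper's full Dirichlet prior), so the resulting probabilities differ from those reported in Section 3.2, but the qualitative asymmetry is the same:

```python
import math

prior = {"COMMON": 0.8, "RARE": 0.2}
cost = {"cheap": 0.5, "expensive": 1.0}
LAMBDA = 3.0
objects, words = list(prior), list(cost)

good = {"cheap":     {"COMMON": 0.7, "RARE": 0.3},
        "expensive": {"COMMON": 0.3, "RARE": 0.7}}
bad = {"cheap": good["expensive"], "expensive": good["cheap"]}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(w, lex):
    return normalize({o: lex[w][o] * prior[o] for o in objects})

def S1(o, lex):
    # soft-max speaker with word costs: exp(lambda * (log p - cost))
    return normalize({w: math.exp(LAMBDA * (math.log(L0(w, lex)[o]) - cost[w]))
                      for w in words})

def marginal_speaker(o):
    # total ignorance: average the speaker over the two candidate systems
    return {w: 0.5 * S1(o, good)[w] + 0.5 * S1(o, bad)[w] for w in words}
```

Despite the uniform prior over systems, under these toy settings the marginal speaker produces roughly P("cheap"|COMMON) ≈ 0.77 versus P("cheap"|RARE) ≈ 0.54: the cheap word is preferred overall, but much more strongly for the common object, because the speaker's utilities are more sharply separated under the good system.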
If a listener assigns equal probability to her partner using the good system or the bad system, then their best bet is to estimate PS(word|object) as the average of PS(word|object, good system) and PS(word|object, bad system). These might seem to cancel out, but in fact they do not. In the good system, the utilities of the speaker’s actions are relatively strongly separated compared to the bad system; therefore, a soft-max agent in the bad system has noisier behavior than in the good system, and the behavior in the good system dominates the average. Similar reasoning applies to an uncertain speaker. For example, in our model with a uniform prior over lexicons and Pprior(COMMON) = 0.8, cost(“cheap”) = 0.5, cost(“expensive”) = 1.0, the symmetry breaks in the appropriate way: Despite total ignorance about the conventional system, our modeled speakers prefer to use simple words for common referents (PS(“cheap”|COMMON) = 0.88, PS(“cheap”|RARE) = 0.46), and listeners show a similar bias (PL(COMMON|“cheap”) = 0.77, PL(COMMON|“expensive”) = 0.65). This preference is weak; the critical point is that it exists at all, given the unbiased priors. We return to this in §5.2. [3] report a much stronger preference, which they accomplish by applying further layers of pragmatic recursion on top of these marginal distributions. On the one hand, this allows them to better fit their empirical data; on the other, it removes the possibility of learning the literal lexicon that underlies pragmatic inference – further recursion above the uncertainty means that it is only hypothetical agents who are ignorant, while the actual speaker and listener have no uncertainty about each other’s generative process. 4 Pragmatics in learning from a knowledgeable speaker 4.1 Learning literal meanings despite scalar implicatures.
The acquisition of quantifiers like “some” provides a puzzle for most models of word learning: given that in many contexts, the word “some” is used to mean SOME-BUT-NOT-ALL, how do children learn that SOME-BUT-NOT-ALL is not in fact its literal meaning? Our model is able to take scalar implicatures into account when learning, and thus provide a potential solution, congruent with the observation that no known language in fact lexicalizes SOME-BUT-NOT-ALL [21]. Following the details of §3.1, we created a simulation in which the model’s prior fixed the meaning of “all” to be a particular set ALL, but was ambiguous about whether “some” literally meant SOME-BUT-NOT-ALL (incorrect) or SOME-BUT-NOT-ALL OR ALL (correct). The model was then exposed to training situations in which “some” was used to refer to SOME-BUT-NOT-ALL. Despite this training, the model maintained substantial posterior probability on the correct hypothesis about the meaning of “some.” Essentially, the model reasoned that although it had unambiguous evidence for “some” being used to refer to SOME-BUT-NOT-ALL, this was nonetheless consistent with a literal meaning of SOME-BUT-NOT-ALL OR ALL which had then been pragmatically strengthened.

[Figure 1; panels: “2 words, 2 objects” and “3 words, 3 objects”; axes: dialogue turn vs. P(L understands S), traces for Runs 1 and 2.] Figure 1: Simulations of two pragmatic agents playing a naming game. Each panel shows two representative simulation runs, with run 1 chosen to show strong convergence and run 2 chosen to show relatively weaker convergence. At each stage, S and L have different, possibly contradictory posteriors over the conventional, consensus lexicon.
From these posteriors we derive the probability P(L understands S) (marginalizing over target objects and word choices), and also depict graphically S’s model of the listener (top row), and L’s actual model (bottom row). Thus, a pragmatically-informed learner might be able to maintain the true meaning of SOME despite seemingly conflicting evidence. 4.2 Disambiguation using known words. Children, when presented with both a novel and a familiar object (e.g. an eggbeater and a ball), will treat a novel label (e.g. “dax”) as referring to the novel object, for example by supplying the eggbeater when asked to “give me the dax” [22]. This phenomenon is sometimes referred to as “mutual exclusivity.” Simple probabilistic word learning models can produce a similar pattern of findings [10], but all such models assume that learners retain the mapping between novel word and novel object demonstrated in the experimental situation. This observation is contradicted, however, by evidence that children often do not retain the mappings that are demonstrated by their inferences in the moment [16]. Our model provides an intriguing possible explanation of this finding: when simulating a single disambiguation situation, the model gives a substantial probability (e.g. 75%) that the speaker is referring to the novel object. Nevertheless, this inference is not accompanied by an increased belief that the novel word literally refers to this object. The learner’s interpretation arises not from lexical mapping but instead from a variant of scalar implicature: the listener knows that the familiar word does not refer to the novel object—hence the novel word will be the best way to refer to the novel object, even if it literally could refer to either. Nevertheless, on repeated exposure to the same novel word, novel object situation, the learner does learn the mapping as part of the lexicon (congruent with other data on repeated training on disambiguation situations [4]). 
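The implicature-based account of disambiguation can be seen in a toy computation: with a lexicon in which the familiar word "ball" strongly picks out the familiar object while the novel word "dax" is literally uninformative, the pragmatic listener of Eq. 3 nonetheless resolves "dax" to the novel object. The lexicon strength 0.9 and λ = 3 are illustrative assumptions, not the paper's fitted values:

```python
objects = ["ball", "eggbeater"]
words = ["ball", "dax"]
lexicon = {"ball": {"ball": 0.9, "eggbeater": 0.1},   # familiar word, known
           "dax":  {"ball": 0.5, "eggbeater": 0.5}}   # novel word, uninformative
LAMBDA = 3.0

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(word):
    # literal listener with a uniform prior over the two objects
    return normalize({o: lexicon[word][o] * 0.5 for o in objects})

def S1(obj):
    # soft-max speaker; exp(lambda * log p) == p ** lambda (costs omitted)
    return normalize({w: L0(w)[obj] ** LAMBDA for w in words})

def L2(word):
    # pragmatic listener inverts the speaker
    return normalize({o: S1(o)[word] * 0.5 for o in objects})
```

Literally, L0("dax") is 50/50, but L2("dax") puts about 0.87 on the eggbeater: the disambiguation is pragmatic, and requires no change to the lexicon entry for "dax", which matches the finding that the in-the-moment inference need not be retained as a learned mapping.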
5 Pragmatic reasoning in the absence of conventional meanings 5.1 Emergence of efficient communicative conventions. Experimental results suggest that communicators who start without a usable communication system are able to establish novel, consensus-based systems. For example, adults playing a communication game using only novel symbols with no conventional meaning will typically converge on a set of new conventions which allow them to accomplish their task [11]. Or in a less extreme example, communicators asked to refer to novel objects invent conventional names for them over the course of repeated interactions (e.g., “the ice skater” for an abstract figure vaguely resembling an ice skater, [7]). From a pure learning perspective this behavior is anomalous, however: Since both agents know perfectly well that there is no existing convention to discover, there is nothing to learn from the other’s behavior. Furthermore, even if only one partner is producing the novel expressions, their behavior in these studies still becomes more regular (conventional) over time, which would seem to rule out a role for learning—even if there is some pattern in the expressions the speaker chooses to use, there is certainly nothing for the speaker to learn by observing these patterns, and thus their behavior should not change over time.

[Figure 2; axes: dialogue turn vs. P(L understands S), traces for Runs 1 and 2.] Figure 2: Example simulations showing the lexicalization of Horn implicatures. Plotting conventions are as above. In the first run, speaker and listener converge on a sparse and efficient communicative equilibrium, in which “cheap” means COMMON and “expensive” means RARE, while in the second they reach a sub-optimal equilibrium. As shown in Fig. 3, the former is more typical.
[Figure 3; left panel: mean P(L understands S) vs. dialogue turn for the 2x2 uniform prior, 3x3 uniform prior, and Horn implicature games; right panel: Horn lexicalization rate vs. dialogue turn for the good and bad lexicons.] Figure 3: Averaged behavior over 300 dialogues as in Figs. 1 and 2. Left: Communicative success by game type and dialogue turn. Right: Proportion of dyads in the Horn implicature game (§5.2) who have converged on the ‘good’ or ‘bad’ lexicons and believe that these are literal meanings.

To model such phenomena, we imagine two agents playing the simple referential game introduced in §2. On each turn the speaker is assigned a target object, utters some word referring to this object, the listener makes a guess at the object, and then, critically, the speaker observes the listener’s guess and the listener receives feedback indicating the correct answer (i.e., the speaker’s intended referent). Both agents then update their posterior over lexicons before proceeding to the next trial. As in [19, 7], the speaker and listener remain fixed in the same role throughout. Fig. 1 shows the result of simulating several such games when both parties begin with a uniform prior over lexicons. Notice that: (a) agents’ performance begins at chance, but quickly rises – a communicative system emerges where none previously existed; (b) they tend towards structured, sparse lexicons with a one-to-one correspondence between objects and words – these communicative systems are biased towards being useful and efficient; and (c) as the speaker and listener have entirely different data (the listener’s interpretations and the speaker’s intended referent, respectively), unlucky early guesses can lead them to believe in entirely contradictory lexicons—but they generally recover and converge. Each agent effectively uses their partner’s behavior as a basis for forming weak beliefs about the underlying lexicon that they assume must exist.
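The dialogue loop can be sketched as follows. For clarity we deviate from the paper's simulations in two ways, both our own simplifications: choices are argmax rather than sampled, and both agents update on the same revealed (word, target) pair, so their posteriors coincide; the tiny two-hypothesis lexicon space is likewise illustrative:

```python
LAMBDA = 3.0
objects, words = ["A", "B"], ["u", "v"]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(w, lex): return normalize({o: lex[w][o] * 0.5 for o in objects})
def S1(o, lex): return normalize({w: L0(w, lex)[o] ** LAMBDA for w in words})
def L2(w, lex): return normalize({o: S1(o, lex)[w] * 0.5 for o in objects})

lexA = {"u": {"A": 0.8, "B": 0.2}, "v": {"A": 0.2, "B": 0.8}}
lexB = {"u": {"A": 0.2, "B": 0.8}, "v": {"A": 0.8, "B": 0.2}}
hypotheses = [lexA, lexB]
posterior = {0: 0.5, 1: 0.5}   # shared posterior over the consensus lexicon

def marginal_S(o):  # speaker marginalizes lexicon uncertainty (cf. Eq. 6)
    return {w: sum(S1(o, h)[w] * posterior[i]
                   for i, h in enumerate(hypotheses)) for w in words}

def marginal_L(w):  # listener marginalizes lexicon uncertainty (cf. Eq. 4)
    return {o: sum(L2(w, h)[o] * posterior[i]
                   for i, h in enumerate(hypotheses)) for o in objects}

success = []
for target in ["A", "B"] * 5:
    ps = marginal_S(target)
    word = max(words, key=lambda w: ps[w])   # argmax speaker choice
    # expected P(L recovers the target) if the word were sampled from ps:
    # our summary statistic for P(L understands S) this turn
    success.append(sum(ps[w] * marginal_L(w)[target] for w in words))
    # feedback reveals (word, target); both agents update as if it were a
    # sample from the knowledgeable speaker (cf. Eq. 5)
    posterior = normalize({i: posterior[i] * S1(target, h)[word]
                           for i, h in enumerate(hypotheses)})
```

Communication starts at chance (0.5) and a convention crystallizes within a couple of turns: the first observed pair is equally likely under both hypotheses a priori, but whichever word happens to be chosen tips the shared posterior, and subsequent updates lock it in.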
Since they then each act on these beliefs, and their partner uses the resulting actions to form new beliefs, they soon converge on using similar lexicons, and what started as a “superstition” becomes normatively correct. And unlike some previous models of emergence across multiple generations of agents [18, 25], this occurs within individual agents in a single dialogue. 5.2 Lexicalization and loss of Horn implicatures. A stronger example of how pragmatics can create biases in emerging lexicons can be observed by considering a version of this game played in the “cheap”/“expensive”/COMMON/RARE domain introduced in our discussion of Horn implicature (§3.2). Here, a uniform prior over lexicons, combined with pragmatic reasoning, causes each agent to start out weakly biased towards the associations “cheap” ↔ COMMON, “expensive” ↔ RARE. A fully rational listener who observed an uncertain speaker using words in this manner would therefore discount it as arising from this bias, and conclude that the speaker was, in fact, highly uncertain. Our convention-based listener, however, believes that speakers do know which convention is in use, and therefore tends to misinterpret this biased behavior as positive evidence that the ‘good’ system is in use. Similarly, convention-based speakers will wager that since on average they will succeed more often if listeners are using the ‘good’ system, they might as well try it. When they succeed, they take their success as evidence that the listener was in fact using the good system all along. As a result, dyads in this game end up converging onto a stable system at a rate far above chance, and preferentially onto the ‘good’ system (Figs. 2 and 3). In the process, though, something interesting happens. In this model, Horn implicatures depend on uncertainty about literal meaning.
As the agents gather more data, their uncertainty is reduced, and thus through the course of a dialogue, the implicature is replaced by a belief that “cheap” literally means COMMON (and did all along). To demonstrate this phenomenon, we queried each agent in each simulated dyad about how they would refer to or interpret each object and word, if the two objects were equally common, which cancels the Horn implicature. As shown in Fig. 3 (right), after 30 turns, in nearly 70% of dyads both S and L used the ‘good’ mapping even in this implicature-free case, while less than 20% used the ‘bad’ mapping (with the rest being inconsistent). This points to a fundamental difference in how learning interacts with Horn versus scalar implicatures. Depending on the details of the input, it is possible for our convention-based agents to observe pragmatically strengthened uses of scalar terms (e.g., “some” used to refer to SOME-BUT-NOT-ALL), without becoming confused into thinking that “some” literally means SOME-BUT-NOT-ALL (§4.1). This occurs because scalar implicature depends only on recursive pragmatic reasoning (§2.1), which our convention-based agents’ learning rules are able to model and correct for. But, while our agents are able to use Horn implicatures in their own behavior (§3.2), this happens implicitly as a result of their uncertainty, and our agents do not model the uncertainty of other agents; thus, when they observe other agents using Horn implicatures, they cannot interpret this behavior as arising from an implicature. Instead, they take it as reflecting the actual literal meaning. And this result isn’t just a technical limitation of our implementation, but is intrinsic to our convention-based approach to combining pragmatics and learning: in our system, the only thing that makes word learning possible at all is each agent’s assumption that other agents are better informed; otherwise, other agents’ behavior would not provide any useful data for learning.
Our model therefore makes the interesting prediction that all else being equal, uncertainty-based implicatures should over time be more prone to lexicalizing and becoming part of literal meaning than recursion-based implicatures are. 6 Conclusion Language learners and language users must consider word meanings both within and across contexts. A critical part of this process is reasoning pragmatically about agents’ goals in individual situations. In the current work we treat agents communicating with one another as assuming that there is a shared conventional lexicon which they both rely on, but with differing degrees of knowledge. They then reason recursively about how this lexicon should be used to convey particular meanings in context. These assumptions allow us to create a model that unifies two previously separate strands of modeling work on language usage and acquisition and account for a variety of new phenomena. In particular, we consider new explanations of disambiguation in early word learning and the acquisition of quantifiers, and demonstrate that our model is capable of developing novel and efficient communicative systems through iterated learning within the context of a single simulated conversation. Our assumptions produce a tractable model, but because they deviate from pure rationality, they must introduce biases, of which we identify two: a tendency for pragmatic speakers and listeners to accentuate useful, sparse patterns in their communicative systems (§5.1), and for short, ‘low cost’ expressions to be assigned to common objects (§5.2). Strikingly, both of these biases systematically drive the overall communicative system towards greater global efficiency. In the long term, these processes should leave their mark on the structure of the language itself, which may contribute to explaining how languages become optimized for effective communication [26, 24]. 
More generally, understanding the interaction between pragmatics and learning is a precondition to developing a unified understanding of human language. Our work here takes a first step towards joining disparate strands of research that have treated language acquisition and language use as distinct. Acknowledgments This work was supported in part by the European Commission through the EU Cognitive Systems Project Xperience (FP7-ICT-270273), the John S. McDonnell Foundation, and ONR grant N000141310287.

References
[1] D.A. Baldwin. Early referential understanding: Infants’ ability to recognize referential acts for what they are. Developmental Psychology, 29(5):832–843, 1993.
[2] D. Barner, N. Brooks, and A. Bale. Accessing the unsaid: The role of scalar alternatives in children’s pragmatic inference. Cognition, 118(1):84, 2011.
[3] L. Bergen, N. D. Goodman, and R. Levy. That’s what she (could have) said: How alternative utterances affect language use. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012.
[4] R.A.H. Bion, A. Borovsky, and A. Fernald. Fast mapping, slow learning: Disambiguation of novel word–object mappings in relation to vocabulary learning at 18, 24, and 30 months. Cognition, 2012.
[5] C. F. Camerer, T.-H. Ho, and J.-K. Chong. A cognitive hierarchy model of games. The Quarterly Journal of Economics, 119(3):861–898, 2004.
[6] E.V. Clark. On the logic of contrast. Journal of Child Language, 15:317–335, 1988.
[7] Herbert H. Clark and Deanna Wilkes-Gibbs. Referring as a collaborative process. Cognition, 22(1):1–39, 1986.
[8] R. Dale and E. Reiter. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233–263, 1995.
[9] M. C. Frank and N. D. Goodman. Predicting pragmatic reasoning in language games. Science, 336(6084):998–998, 2012.
[10] M. C. Frank, N. D. Goodman, and J. B. Tenenbaum.
Using speakers’ referential intentions to model early cross-situational word learning. Psychological Science, 20:578–585, 2009.
[11] B. Galantucci. An experimental study of the emergence of human communication systems. Cognitive Science, 29(5):737–767, 2005.
[12] D. Golland, P. Liang, and D. Klein. A game-theoretic approach to generating spatial descriptions. In Proceedings of EMNLP 2010, pages 410–419. Association for Computational Linguistics, 2010.
[13] Noah D. Goodman and Andreas Stuhlmüller. Knowledge and implicature: Modeling language understanding as social cognition. Topics in Cognitive Science, 5:173–184, 2013.
[14] H.P. Grice. Logic and conversation. Syntax and Semantics, 3:41–58, 1975.
[15] L. Horn. Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. In Meaning, Form, and Use in Context, volume 42. Washington: Georgetown University Press, 1984.
[16] J. S. Horst and L. K. Samuelson. Fast mapping but poor retention by 24-month-old infants. Infancy, 13(2):128–157, 2008.
[17] G. Kachergis, C. Yu, and R. M. Shiffrin. An associative model of adaptive inference for learning word–referent mappings. Psychonomic Bulletin & Review, 19(2):317–324, April 2012.
[18] S. Kirby, H. Cornish, and K. Smith. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences, 105(31):10681–10686, 2008.
[19] R. M. Krauss and S. Weinheimer. Changes in reference phrases as a function of frequency of usage in social interaction: A preliminary study. Psychonomic Science, 1964.
[20] T. Kwiatkowski, S. Goldwater, L. Zettlemoyer, and M. Steedman. A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 234–244, 2012.
[21] S.C. Levinson.
Presumptive Meanings: The Theory of Generalized Conversational Implicature. MIT Press, 2000.
[22] E. M. Markman and G. F. Wachtel. Children’s use of mutual exclusivity to constrain the meanings of words. Cognitive Psychology, 20:121–157, 1988.
[23] A. Papafragou and J. Musolino. Scalar implicatures: Experiments at the semantics-pragmatics interface. Cognition, 86(3):253–282, 2003.
[24] S. T. Piantadosi, H. Tily, and E. Gibson. Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences, 108(9):3526–3529, 2011.
[25] R. van Rooy. Evolution of conventional meaning and conversational principles. Synthese, 139(2):331–366, 2004.
[26] G. Zipf. The Psychobiology of Language. Routledge, London, 1936.
Stochastic Optimization of PCA with Capped MSG Raman Arora TTI-Chicago Chicago, IL USA arora@ttic.edu Andrew Cotter TTI-Chicago Chicago, IL USA cotter@ttic.edu Nathan Srebro Technion, Haifa, Israel and TTI-Chicago nati@ttic.edu Abstract We study PCA as a stochastic optimization problem and propose a novel stochastic approximation algorithm which we refer to as “Matrix Stochastic Gradient” (MSG), as well as a practical variant, Capped MSG. We study the method both theoretically and empirically. 1 Introduction Principal Component Analysis (PCA) is a ubiquitous tool used in many data analysis, machine learning and information retrieval applications. It is used to obtain a lower dimensional representation of a high dimensional signal that still captures as much of the original signal as possible. Such a low dimensional representation can be useful for reducing storage and computational costs, as complexity control in learning systems, or to aid in visualization. PCA is typically phrased as a question about a fixed data set: given n vectors in Rd, what is the k-dimensional subspace that captures most of the variance in the data (or equivalently, that is best in reconstructing the vectors, minimizing the sum squared distances, or residuals, from the subspace)? It is well known that this subspace is the span of the leading k components of the singular value decomposition of the data matrix (or equivalently of the empirical second moment matrix). Hence, the study of computational approaches for PCA has mostly focused on methods for finding the SVD (or leading components of the SVD) for a given n×d matrix (Oja & Karhunen, 1985; Sanger, 1989). In this paper we approach PCA as a stochastic optimization problem, where the goal is to optimize a “population objective” based on i.i.d. draws from the population. 
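As a baseline for the stochastic view, the classical fixed-sample (ERM) solution described above is a short eigendecomposition of the empirical second-moment matrix; this sketch (with an assumed, illustrative toy sample) is ours, not the paper's code:

```python
import numpy as np

def pca_erm(X, k):
    """Top-k subspace of the empirical (uncentered) second-moment matrix of
    the n x d sample matrix X, i.e. the ERM solution for the PCA objective."""
    C = X.T @ X / X.shape[0]                    # d x d empirical second moment
    vals, vecs = np.linalg.eigh(C)              # eigenvalues in ascending order
    return vecs[:, np.argsort(vals)[::-1][:k]]  # d x k leading directions

# Toy sample whose variance is concentrated on the first coordinate axis.
X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
W = pca_erm(X, k=1)
```

Here C = diag(2, 0.5), so the recovered basis W is (up to sign) the first coordinate axis, the direction capturing the most sample variance.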
In this setting, we have some unknown source (“population”) distribution D over R^d, and the goal is to find the k-dimensional subspace maximizing the (uncentered) variance of D inside the subspace (or equivalently, minimizing the average squared residual in the population), based on i.i.d. samples from D. The main point here is that the true objective is not how well the subspace captures the sample (i.e. the “training error”), but rather how well the subspace captures the underlying source distribution (i.e. the “generalization error”). Furthermore, we are not concerned with capturing some “true” subspace, and so do not, for example, try to minimize the angle to such a subspace, but rather attempt to find a “good” subspace, i.e. one that is almost as good as the optimal one in terms of reconstruction error. Of course, finding the subspace that best captures the sample is a very reasonable approach to PCA on the population. This is essentially an Empirical Risk Minimization (ERM) approach. However, when comparing it to alternative, perhaps computationally cheaper, approaches, we argue that one should not compare the error on the sample, but rather the population objective. Such a view can justify and favor computational approaches that are far from optimal on the sample, but are essentially as good as ERM on the population. Such a population-based view of optimization has recently been advocated in machine learning, and has been used to argue for crude stochastic approximation approaches (online-type methods) over sophisticated deterministic optimization of the empirical (training) objective (i.e. “batch” methods) (Bottou & Bousquet, 2007; Shalev-Shwartz & Srebro, 2008). A similar argument was also made in the context of stochastic optimization, where Nemirovski et al. (2009) argue for stochastic approximation (SA) approaches over sample-average approximation (a.k.a. ERM) approaches.
Accordingly, SA approaches, mostly variants of Stochastic Gradient Descent, are often the methods of choice for many learning problems, especially when very large data sets are available (Shalev-Shwartz et al., 2007; Collins et al., 2008; Shalev-Shwartz & Tewari, 2009). We take the same view in order to advocate for, study, and develop stochastic approximation approaches for PCA. In an empirical study of stochastic approximation methods for PCA, a heuristic “incremental” method showed very good empirical performance (Arora et al., 2012). However, no theoretical guarantees or justification were given for incremental PCA. In fact, it was shown that for some distributions it can converge to a suboptimal solution with high probability (see Section 5.2 for more about this “incremental” algorithm). Also relevant is careful theoretical work on online PCA by Warmuth & Kuzmin (2008), in which an online regret guarantee was established. Using an online-to-batch conversion, this online algorithm can be converted to a stochastic approximation algorithm with good iteration complexity; however, the runtime of each iteration is essentially the same as that of ERM (i.e. of PCA on the sample), which makes little sense for a stochastic approximation method (see Section 3.3 for more on this algorithm). In this paper we borrow from these two approaches and present a novel algorithm for stochastic PCA—the Matrix Stochastic Gradient (MSG) algorithm. MSG enjoys similar iteration complexity to Warmuth and Kuzmin’s algorithm, and in fact we present a unified view of both algorithms as different instantiations of Mirror Descent for the same convex relaxation of PCA. We then present the capped MSG algorithm, which is a more practical variant of MSG, has very similar updates to those of the “incremental” method, works well in practice, and does not get stuck like the “incremental” method.
The Capped MSG algorithm is thus a clean, theoretically well-founded method, with interesting connections to other stochastic/online PCA methods, and excellent practical performance—a “best of both worlds” algorithm.

2 Problem Setup

We consider PCA as the problem of finding the maximal (uncentered) variance k-dimensional subspace with respect to an (unknown) distribution D over x ∈ R^d. We assume without loss of generality that the data are scaled in such a way that E_{x∼D}[||x||^2] ≤ 1. For our analysis, we also require that the fourth moment be bounded: E_{x∼D}[||x||^4] ≤ 1. We represent a k-dimensional subspace by an orthonormal basis, collected in the columns of a matrix U. With this parametrization, PCA is defined as the following stochastic optimization problem:

maximize: E_{x∼D}[x^T U U^T x]    (2.1)
subject to: U ∈ R^{d×k}, U^T U = I.

In a stochastic optimization setting we do not have direct knowledge of the distribution D, and instead may access it only through i.i.d. samples—these can be thought of as “training examples”. As in other studies of stochastic approximation methods, we are less concerned with the number of required samples, and instead care mostly about the overall runtime required to obtain an ε-suboptimal solution. The standard approach to Problem 2.1 is empirical risk minimization (ERM): given samples {x_t}_{t=1}^T drawn from D, we compute the empirical covariance matrix Ĉ = (1/T) Σ_{t=1}^T x_t x_t^T, and take the columns of U to be the eigenvectors of Ĉ corresponding to the top-k eigenvalues. This approach requires O(d^2) memory and O(d^2) operations just in order to compute the covariance matrix, plus some additional time for the SVD. We are interested in methods with much lower time and space complexity, preferably linear rather than quadratic in d.

3 MSG and MEG

A natural stochastic approximation (SA) approach to PCA is projected stochastic gradient descent (SGD) on Problem 2.1, with respect to U.
This leads to the stochastic power method, for which, at each iteration, the following update is performed:

U^{(t+1)} = P_orth(U^{(t)} + η x_t x_t^T U^{(t)})    (3.1)

Here, x_t x_t^T U^{(t)} is the gradient of the PCA objective w.r.t. U, η is a step size, and P_orth(·) projects its argument onto the set of matrices with orthonormal columns. Unfortunately, although SGD is well understood for convex problems, Problem 2.1 is non-convex. Consequently, obtaining a theoretical understanding of the stochastic power method, or of how the step size should be set, has proved elusive. Under some conditions, convergence to the optimal solution can be ensured, but no rate is known (Oja & Karhunen, 1985; Sanger, 1989; Arora et al., 2012). Instead, we consider a re-parameterization of the PCA problem under which the objective is convex. Instead of representing a linear subspace in terms of its basis matrix U, we parametrize it using the corresponding projection matrix M = U U^T. We can now reformulate the PCA problem as:

maximize: E_{x∼D}[x^T M x]    (3.2)
subject to: M ∈ R^{d×d}, λ_i(M) ∈ {0, 1}, rank M = k

where λ_i(M) is the i-th eigenvalue of M. We now have a convex (linear, in fact) objective, but the constraints are not convex. This prompts us to relax the problem by taking the convex hull of the feasible region:

maximize: E_{x∼D}[x^T M x]    (3.3)
subject to: M ∈ R^{d×d}, 0 ⪯ M ⪯ I, tr M = k

Since the objective is linear, and the feasible region of Problem 3.3 is the convex hull of that of Problem 3.2, an optimal solution is always attained at a “vertex”, i.e. a point on the boundary of the original constraints. The optima of the two problems are thus the same (strictly speaking—every optimum of Problem 3.2 is also an optimum of Problem 3.3), and solving Problem 3.3 is equivalent to solving Problem 3.2. Furthermore, if a suboptimal solution for Problem 3.3 is not rank-k, i.e. is not a feasible point of Problem 3.2, we can easily sample from it to obtain a rank-k solution with the same objective function value (in expectation).
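The relaxation above is what makes a simple projected stochastic gradient step tractable. The following is a minimal NumPy sketch (an illustration, not the authors' optimized implementation) of one such step: a rank-one gradient update followed by the eigenvalue projection onto {0 ⪯ M ⪯ I, tr M = k}, with the shift S found here by bisection rather than by the paper's exact O(n log n) search.

```python
import numpy as np

def project_eigs(sigma, k):
    # Find a shift S so that sum(clip(sigma + S, 0, 1)) == k.
    # The clipped sum is nondecreasing in S, so bisection suffices
    # (the paper's Algorithm 2 instead finds S exactly in O(n log n)).
    lo, hi = -sigma.max(), 1.0 - sigma.min()
    for _ in range(100):
        S = 0.5 * (lo + hi)
        if np.clip(sigma + S, 0.0, 1.0).sum() < k:
            lo = S
        else:
            hi = S
    return np.clip(sigma + 0.5 * (lo + hi), 0.0, 1.0)

def msg_update(M, x, eta, k):
    # One projected SGD step on the relaxed problem, in a naive O(d^3)
    # form for clarity: rank-one update, then eigenvalue projection.
    sigma, U = np.linalg.eigh(M + eta * np.outer(x, x))
    return (U * project_eigs(sigma, k)) @ U.T
```

After every such step the iterate is feasible for the relaxed problem: its eigenvalues lie in [0, 1] and sum to exactly k.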
This is shown by the following result of Warmuth & Kuzmin (2008):

Lemma 3.1 (Rounding (Warmuth & Kuzmin, 2008)). Any feasible solution of Problem 3.3 can be expressed as a convex combination of at most d feasible solutions of Problem 3.2.

Algorithm 2 of Warmuth & Kuzmin (2008) shows how to efficiently find such a convex combination. Since the objective is linear, treating the coefficients of the convex combination as defining a discrete distribution, and sampling according to this distribution, yields a rank-k matrix with the desired expected objective function value.

3.1 Matrix Stochastic Gradient

Performing SGD on Problem 3.3 (w.r.t. the variable M) yields the following update rule:

M^{(t+1)} = P(M^{(t)} + η x_t x_t^T),    (3.4)

The projection is now performed onto the (convex) constraints of Problem 3.3. This gives the Matrix Stochastic Gradient (MSG) algorithm, which, in detail, consists of the following steps:
1. Choose a step-size η, iteration count T, and starting point M^{(0)}.
2. Iterate the update rule (Equation 3.4) T times, each time using an independent sample x_t ∼ D.
3. Average the iterates as M̄ = (1/T) Σ_{t=1}^T M^{(t)}.
4. Sample a rank-k solution M̃ from M̄ using the rounding procedure discussed in the previous section.

Analyzing MSG is straightforward using a standard SGD analysis:

Theorem 1. After T iterations of MSG (on Problem 3.3), with step size η = √(k/T), and starting at M^{(0)} = 0,

E[E_{x∼D}[x^T M̃ x]] ≥ E_{x∼D}[x^T M* x] − (1/2)√(k/T),

where the expectation is w.r.t. the i.i.d. samples x_1, . . . , x_T ∼ D and the rounding, and M* is the optimum of Problem 3.2.

Algorithm 1 Matrix stochastic gradient (MSG) update: compute an eigendecomposition of M' + η x x^T from a rank-m eigendecomposition M' = U' diag(σ')(U')^T and project the resulting solution onto the constraint set. The computational cost is dominated by the matrix multiplication on lines 4 or 7, costing O(m^2 d) operations.
msg-step(d, k, m : N, U' : R^{d×m}, σ' : R^m, x : R^d, η : R)
 1  x̂ ← √η (U')^T x;  x⊥ ← √η x − U' x̂;  r ← ||x⊥||;
 2  if r > 0
 3      V, σ ← eig([diag(σ') + x̂ x̂^T, r x̂; r x̂^T, r^2]);
 4      U ← [U', x⊥/r] V;
 5  else
 6      V, σ ← eig(diag(σ') + x̂ x̂^T);
 7      U ← U' V;
 8  σ ← distinct eigenvalues in σ;  κ ← corresponding multiplicities;
 9  σ ← project(d, k, m, σ, κ);
10  return U, σ;

Proof. The SGD analysis of Nemirovski & Yudin (1983) yields that:

E[x^T M* x − x^T M̄ x] ≤ (η/2) E_{x∼D}[||g||_F^2] + ||M* − M^{(0)}||_F^2 / (2ηT)    (3.5)

where g = x x^T is the gradient of the PCA objective. Now, E_{x∼D}[||g||_F^2] = E_{x∼D}[||x||^4] ≤ 1 and ||M* − M^{(0)}||_F^2 = ||M*||_F^2 = k. In the last equality, we used the fact that M* has k eigenvalues of value 1 each, and hence ||M*||_F = √k.

3.2 Efficient Implementation and Projection

A naïve implementation of the MSG update requires O(d^2) memory and O(d^2) operations per iteration. In this section, we show how to perform this update efficiently by maintaining an up-to-date eigendecomposition of M^{(t)}. Pseudo-code for the update may be found in Algorithm 1. Consider the eigendecomposition M^{(t)} = U' diag(σ)(U')^T at the t-th iteration, where rank(M^{(t)}) = k_t and U' ∈ R^{d×k_t}. Given a new observation x_t, the eigendecomposition of M^{(t)} + η x_t x_t^T can be updated efficiently using a (k_t+1)×(k_t+1) SVD (Brand, 2002; Arora et al., 2012) (steps 1–7 of Algorithm 1). This rank-one eigen-update is followed by projection onto the constraints of Problem 3.3, invoked as project in step 9 of Algorithm 1, discussed in the following paragraphs and given as Algorithm 2. The projection procedure is based on the following lemma¹. See supplementary material for a proof.

Lemma 3.2. Let M' ∈ R^{d×d} be a symmetric matrix, with eigenvalues σ'_1, . . . , σ'_d and associated eigenvectors v'_1, . . . , v'_d. Its projection M = P(M') onto the feasible region of Problem 3.3 with respect to the Frobenius norm is the unique feasible matrix which has the same eigenvectors as M', with the associated eigenvalues σ_1, . . . , σ_d satisfying:

σ_i = max(0, min(1, σ'_i + S)),

with S ∈ R being chosen in such a way that Σ_{i=1}^d σ_i = k.

This result shows that projecting onto the feasible region amounts to finding the value of S such that, after shifting the eigenvalues by S and clipping the results to [0, 1], the result is feasible. Importantly, the projection operates only on the eigenvalues. Algorithm 2 contains pseudocode which finds S from a list of eigenvalues.
It is optimized to efficiently handle repeated eigenvalues—rather than receiving the eigenvalues in a length-d list, it instead receives a length-n list containing only the distinct eigenvalues, with κ containing the corresponding multiplicities. In Sections 4 and 5, we will see why this is an important optimization. The central idea motivating the algorithm is that, in a sorted array of eigenvalues, all elements with indices below some threshold i will be clipped to 0, and all of those with indices above another threshold j will be clipped to 1. The pseudocode simply searches over all possible pairs of such thresholds until it finds the one that works. The rank-one eigen-update combined with the fast projection step yields an efficient MSG update that requires O(d k_t) memory and O(d k_t^2) operations per iteration (recall that k_t is the rank of the iterate M^{(t)}). This is a significant improvement over the O(d^2) memory and O(d^2) computation required by a standard implementation of MSG, if the iterates have relatively low rank.

¹Our projection problem onto the capped simplex, even when seen in the vector setting, is substantially different from Duchi et al. (2008). We project onto the set {0 ≤ σ ≤ 1, ||σ||_1 = k} in Problem 3.3 and {0 ≤ σ ≤ 1, ||σ||_1 = k, ||σ||_0 ≤ K} in Problem 5.1, whereas Duchi et al. (2008) project onto {0 ≤ σ, ||σ||_1 = k}.

Algorithm 2 Routine which finds the S of Lemma 3.2. It takes as parameters the dimension d, “target” subspace dimension k, and the number of distinct eigenvalues n of the current iterate. The length-n arrays σ' and κ' contain the distinct eigenvalues and their multiplicities, respectively, of M' (with Σ_{i=1}^n κ'_i = d). Line 1 sorts σ' and re-orders κ' so as to match this sorting. The loop will be run at most 2n times (once for each possible increment to i or j on lines 12–15), so the computational cost is dominated by that of the sort: O(n log n).

project(d, k, n : N, σ' : R^n, κ' : N^n)
 1  σ', κ' ← sort(σ', κ');
 2  i ← 1;  j ← 1;  s_i ← 0;  s_j ← 0;  c_i ← 0;  c_j ← 0;
 3  while i ≤ n
 4      if (i < j)
 5          S ← (k − (s_j − s_i) − (d − c_j)) / (c_j − c_i);
 6          b ← (
 7              (σ'_i + S ≥ 0) and (σ'_{j−1} + S ≤ 1)
 8              and ((i ≤ 1) or (σ'_{i−1} + S ≤ 0))
 9              and ((j ≥ n) or (σ'_{j+1} ≥ 1))
10          );
11          return S if b;
12      if (j ≤ n) and (σ'_j − σ'_i ≤ 1)
13          s_j ← s_j + κ'_j σ'_j;  c_j ← c_j + κ'_j;  j ← j + 1;
14      else
15          s_i ← s_i + κ'_i σ'_i;  c_i ← c_i + κ'_i;  i ← i + 1;
16  return error;

3.3 Matrix Exponentiated Gradient

Since M is constrained by its trace, and not by its Frobenius norm, it is tempting to consider mirror descent (MD) (Beck & Teboulle, 2003) instead of SGD updates for solving Problem 3.3. Recall that Mirror Descent depends on a choice of “potential function” Ψ(·), which should be chosen according to the geometry of the feasible set and the subgradients (Srebro et al., 2011). Using the squared Frobenius norm as a potential function, i.e. Ψ(M) = ||M||_F^2, yields SGD, i.e. the MSG updates of Equation 3.4. The trace-norm constraint suggests using the von Neumann entropy as the potential function, i.e. Ψ_h(M) = Σ_i λ_i(M) log λ_i(M). This leads to multiplicative updates, yielding what we refer to as the Matrix Exponentiated Gradient (MEG) algorithm, which is similar to that of Warmuth & Kuzmin (2008). In fact, Warmuth and Kuzmin’s algorithm exactly corresponds to online Mirror Descent on Problem 3.3 with this potential function, but takes the optimization variable to be M⊥ = I − M (with the constraints tr M⊥ = d − k and 0 ⪯ M⊥ ⪯ I). In either case, using the entropy potential, despite being well suited for the trace geometry, does not actually lead to a better dependence² on d or k, and a Mirror Descent-based analysis again yields an excess loss of √(k/T).
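The multiplicative MEG update just described can be sketched in a few lines. This is a simplified, uncapped version: it enforces only the trace constraint by rescaling, and ignores the 0 ⪯ M ⪯ I cap and Warmuth and Kuzmin's M⊥ reparametrization, so it illustrates the multiplicative-update idea rather than the full algorithm.

```python
import numpy as np

def meg_update(M, x, eta, k):
    # Uncapped matrix exponentiated gradient step:
    #   M <- exp(log M + eta * x x^T), rescaled to trace k.
    # Matrix log/exp are computed through the eigendecomposition;
    # eigenvalues are floored to keep the log finite.
    s, U = np.linalg.eigh(M)
    L = (U * np.log(np.maximum(s, 1e-12))) @ U.T + eta * np.outer(x, x)
    s2, U2 = np.linalg.eigh(L)
    W = (U2 * np.exp(s2)) @ U2.T
    return W * (k / np.trace(W))
```

A natural starting point is the maximum-entropy iterate M^{(0)} = (k/d) I; unlike the Frobenius (MSG) update, the exponentiated update keeps every iterate strictly positive definite.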
Warmuth and Kuzmin present an “optimistic” analysis, with a dependence on the “reconstruction error” L* = E[x^T (I − M*) x], which yields an excess error of O(√(L* k log(d/k)/T) + k log(d/k)/T) (their logarithmic term can be avoided by a more careful analysis).

4 MSG runtime and the rank of the iterates

As we saw in Sections 3.1 and 3.2, MSG requires O(k/ε^2) iterations to obtain an ε-suboptimal solution, and each iteration costs O(k_t^2 d) operations, where k_t is the rank of iterate M^{(t)}. This yields a total runtime of O(k̄^2 d k/ε^2), where k̄^2 = (1/T) Σ_{t=1}^T k_t^2. Clearly, the runtime for MSG depends critically on the rank of the iterates. If k_t is as large as d, then MSG achieves a runtime that is cubic in the dimensionality. On the other hand, if the rank of the iterates is O(k), the runtime is linear in the dimensionality. Fortunately, in practice, each k_t is typically much lower than d. The reason for this is that the MSG update performs a rank-1 update followed by a projection onto the constraints. Since M' = M^{(t)} + η x_t x_t^T will have a larger trace than M^{(t)} (i.e. tr M' ≥ k), the projection, as is shown by Lemma 3.2, will subtract a quantity S from every eigenvalue of M', clipping each to 0 if it becomes negative. Therefore, each MSG update will increase the rank of the iterate by at most 1, and has the potential to decrease it, perhaps significantly. It is very difficult to theoretically quantify how the rank of the iterates will evolve over time, but we have observed empirically that the iterates do tend to have relatively low rank.

²This is because in our case, due to the other constraints, ||M*||_F = √(tr M*). Furthermore, the SGD analysis depends on the Frobenius norm of the stochastic gradients, but since all stochastic gradients are rank one, this is the same as their spectral norm, which comes up in the entropy-case analysis, and again there is no benefit.
We explore this issue in greater detail experimentally, on a distribution which we expect to be difficult for MSG. To this end, we generated data from known 32-dimensional distributions with diagonal covariance matrices Σ = diag(σ/||σ||), where σ_i = τ^{−i}/Σ_{j=1}^{32} τ^{−j}, for i = 1, . . . , 32 and for some τ > 1. Observe that Σ has a smoothly-decaying set of eigenvalues and the rate of decay is controlled by τ. As τ → 1, the spectrum becomes flatter, resulting in distributions that present challenging test cases for MSG. We experimented with τ = 1.1 and k ∈ {1, 2, 4}, where k is the desired subspace dimension used by each algorithm. The data is generated by sampling the i-th standard unit basis vector e_i with probability √Σ_ii. We refer to this as the “orthogonal distribution”, since it is a discrete distribution over 32 orthogonal vectors.

Figure 1: The ranks k_t (left) and the eigenvalues (right) of the MSG iterates M^{(t)}.

In Figure 1, we show the results with k = 4. We can see from the left-hand plot that MSG maintains a subspace of dimension around 15. The plot on the right shows how the set of nonzero eigenvalues of the MSG iterates evolves over time, from which we can see that many of the extra dimensions are “wasted” on very small eigenvalues, corresponding to directions which leave the state matrix only a handful of iterations after they enter. This suggests that constraining k_t can lead to significant speedups and motivates the capped MSG updates discussed in the next section.

5 Capped MSG

While, as was observed in the previous section, MSG’s iterates will tend to have ranks k_t smaller than d, they will nevertheless also be larger than k.
For this reason, we recommend imposing a hard constraint K on the rank of the iterates:

maximize: E_{x∼D}[x^T M x]    (5.1)
subject to: M ∈ R^{d×d}, 0 ⪯ M ⪯ I, tr M = k, rank M ≤ K

We will refer to MSG where the projection is replaced with a projection onto the constraints of Problem 5.1 (i.e. where the iterates are SGD iterates on Problem 5.1) as “capped MSG”. As before, as long as K ≥ k, Problem 5.1 and Problem 3.3 have the same optimum, it is achieved at a rank-k matrix, and the extra rank constraint in Problem 5.1 is inactive at the optimum. However, the rank constraint does affect the iterates, especially since Problem 5.1 is no longer convex. Nonetheless, if K > k (i.e. the hard rank constraint K is strictly larger than the target rank k), then we can easily check if we are at a global optimum of Problem 5.1, and hence of Problem 3.3: if the capped MSG algorithm converges to a solution of rank K, then the upper bound K should be increased. Conversely, if it has converged to a rank-deficient solution, then it must be the global optimum. There is thus an advantage in using K > k, and we recommend setting K = k + 1, as we do in our experiments, and increasing K only if a rank-deficient solution is not found in a timely manner. If we take K = k, then the only way to satisfy the trace constraint is to have all non-zero eigenvalues equal to one, and Problem 5.1 becomes identical to Problem 3.2. The detour through the convex objective of Problem 3.3 allows us to increase the search rank K, allowing for more flexibility in the iterates, while still forcing each iterate to be low-rank, and each update to therefore be efficient, through the rank constraint.

5.1 Implementing the projection

The only difference between the implementation of MSG and capped MSG is in the projection step.
Similar reasoning to that which was used in the proof of Lemma 3.2 shows that if M^{(t+1)} = P(M') with M' = M^{(t)} + η x_t x_t^T, then M^{(t)} and M' are simultaneously diagonalizable, and therefore we can consider only how the projection acts on the eigenvalues. Hence, if we let σ' be the vector of the eigenvalues of M', and suppose that more than K of them are nonzero, then there will be a size-K subset of σ' such that applying Algorithm 2 to this set gives the projected eigenvalues. Since we perform only a rank-1 update at every iteration, we must check at most K possibilities, at a total cost of O(K^2 log K) operations, which has no effect on the asymptotic runtime because Algorithm 1 requires O(K^2 d) operations.

5.2 Relationship to the incremental PCA method

The capped MSG algorithm with K = k is similar to the incremental algorithm of Arora et al. (2012), which maintains a rank-k approximation of the covariance matrix and updates according to:

M^{(t+1)} = P_{rank-k}(M^{(t)} + x_t x_t^T),

where the projection is onto the set of rank-k matrices. Unlike MSG, the incremental algorithm does not have a step-size. Updates can be performed efficiently by maintaining an eigendecomposition of each iterate, just as was done for MSG (see Section 3.2). In a recent survey of stochastic algorithms for PCA (Arora et al., 2012), the incremental algorithm was found to perform extremely well in practice—it was the best, in fact, among the compared algorithms. However, there exist cases in which it can get stuck at a suboptimal solution.

Figure 2: Comparison on simulated data for different values of parameter k.
For example, if the data are drawn from a discrete distribution D which samples [√3, 0]^T with probability 1/3 and [0, √2]^T with probability 2/3, and one runs the incremental algorithm with k = 1, then it will converge to [1, 0]^T with probability 5/9, despite the fact that the maximal eigenvector is [0, 1]^T. The reason for this failure is essentially that the orthogonality of the data interacts poorly with the low-rank projection: any update which does not entirely displace the maximal eigenvector in one iteration will be removed entirely by the projection, causing the algorithm to fail to make progress. The capped MSG algorithm with K > k will not get stuck in such situations, since it will use the additional dimensions to search in the new direction. Only as it becomes more confident in its current candidate will the trace of M become increasingly concentrated on the top k directions. To illustrate this empirically, we generalized this example by generating data using the 32-dimensional “orthogonal” distribution described in Section 4. This distribution presents a challenging test case for MSG, capped MSG and the incremental algorithm. Figure 2 shows plots of individual runs of MSG, capped MSG with K = k + 1, the incremental algorithm, and Warmuth and Kuzmin’s algorithm, all based on the same sequence of samples drawn from the orthogonal distribution. We compare algorithms in terms of the suboptimality on the population objective based on the largest k eigenvalues of the state matrix M^{(t)}. The plots show the incremental algorithm getting stuck for k ∈ {1, 4}, and the others intermittently plateauing at intermediate solutions before beginning to again converge rapidly towards the optimum. This behavior is to be expected of the capped MSG algorithm, due to the fact that the dimension of the subspace stored at each iterate is constrained.
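The two-point failure case above is easy to reproduce in a few lines of NumPy (a sketch of the incremental update, not the authors' implementation): once the first coordinate leads, every contribution along the second coordinate is wiped out by the rank-1 truncation.

```python
import numpy as np

def incremental_step(M, x, k):
    # Incremental update of Arora et al. (2012): rank-one update,
    # then keep only the top-k eigenpairs.
    s, U = np.linalg.eigh(M + np.outer(x, x))
    top = np.argsort(s)[-k:]
    return (U[:, top] * s[top]) @ U[:, top].T

# Suppose the first sample was [sqrt(3), 0]; afterwards, interleave the
# (more probable) [0, sqrt(2)] samples with further [sqrt(3), 0] samples.
M = np.diag([3.0, 0.0])
for t in range(20):
    x = np.array([0.0, np.sqrt(2.0)]) if t % 2 else np.array([np.sqrt(3.0), 0.0])
    M = incremental_step(M, x, k=1)
# M stays supported on the first coordinate: the algorithm is stuck,
# even though the second coordinate carries more variance.
```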
However, it is somewhat surprising that MSG and Warmuth and Kuzmin’s algorithm behaved similarly, and converged only barely faster than capped MSG.

6 Experiments

We also compared the algorithms on the real-world MNIST dataset, which consists of 70,000 binary images of handwritten digits of size 28×28, resulting in a dimensionality of 784. We pre-normalized the data by mean-centering the feature vectors and scaling each feature by the product of its standard deviation and the data dimension, so that each feature vector is zero mean and unit norm in expectation. In addition to MSG, capped MSG, the incremental algorithm and Warmuth and Kuzmin’s algorithm, we also compare to a Grassmannian SGD algorithm (Balzano et al., 2010). All algorithms except the incremental algorithm have a step-size parameter. In these experiments, we ran each algorithm with decreasing step sizes η_t = c/√t for c ∈ {2^{−12}, 2^{−11}, . . . , 2^5} and picked the best c, in terms of the average suboptimality over the run, on a validation set. Since we cannot evaluate the true population objective, we estimate it by evaluating on a held-out test set. We use 40% of the samples in the dataset for training, 20% for validation (tuning the step size), and 40% for testing.

Figure 3: Comparison on the MNIST dataset. The top row of plots shows suboptimality as a function of iteration count, while the bottom row shows suboptimality as a function of the estimated runtime Σ_{s=1}^t (k'_s)^2.
We are interested in learning a maximum variance subspace of dimension k ∈ {1, 4, 8} in a single “pass” over the training sample. In order to compare MSG, capped MSG, the incremental algorithm and Warmuth and Kuzmin’s algorithm in terms of runtime, we calculate the dominant term in the computational complexity: Σ_{s=1}^t (k'_s)^2. The results are averaged over 100 random splits into train-validation-test sets. We can see from Figure 3 that the incremental algorithm makes the most progress per iteration and is also the fastest of all algorithms. MSG is comparable to the incremental algorithm in terms of the progress made per iteration. However, its runtime is slightly worse because it will often keep a slightly larger representation (of dimension k_t). The capped MSG variant (with K = k + 1) is significantly faster (almost as fast as the incremental algorithm) while, as we saw in the previous section, being less prone to getting stuck. Warmuth and Kuzmin’s algorithm fares well with k = 1, but its performance drops for higher k. Inspection of the underlying data shows that, in the k ∈ {4, 8} experiments, it also tends to have a larger k_t than MSG, and therefore has a higher cost per iteration. Grassmannian SGD performs better than Warmuth and Kuzmin’s algorithm, but much worse than MSG and capped MSG.

7 Conclusions

In this paper, we presented a careful development and analysis of MSG, a stochastic approximation algorithm for PCA, which enjoys good theoretical guarantees and offers a computationally efficient variant, capped MSG. We show that capped MSG is well-motivated theoretically and that it does not get stuck at a suboptimal solution. Capped MSG is also shown to have excellent empirical performance, and it is therefore a much better alternative to the recently proposed incremental PCA algorithm of Arora et al. (2012).
Furthermore, we provided a cleaner interpretation of PCA updates of Warmuth & Kuzmin (2008) in terms of Matrix Exponentiated Gradient (MEG) updates and showed that both MSG and MEG can be interpreted as mirror descent algorithms on the same relaxation of the PCA optimization problem but with different distance generating functions. 8 References Arora, Raman, Cotter, Andrew, Livescu, Karen, and Srebro, Nathan. Stochastic optimization for PCA and PLS. In 50th Annual Allerton Conference on Communication, Control, and Computing, 2012. Balzano, Laura, Nowak, Robert, and Recht, Benjamin. Online identification and tracking of subspaces from highly incomplete information. In 48th Annual Allerton Conference on Communication, Control, and Computing, 2010. Beck, A. and Teboulle, M. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003. Bottou, Leon and Bousquet, Olivier. The tradeoffs of large scale learning. In NIPS’07, pp. 161–168, 2007. Boyd, Stephen and Vandenberghe, Lieven. Convex Optimization. Cambridge University Press, 2004. Brand, Matthew. Incremental singular value decomposition of uncertain data with missing values. In ECCV, 2002. Collins, Michael, Globerson, Amir, Koo, Terry, Carreras, Xavier, and Bartlett, Peter L. Exponentiated gradient algorithms for conditional random fields and max-margin markov networks. J. Mach. Learn. Res., 9:1775–1822, June 2008. Duchi, John, Shalev-Shwartz, Shai, Singer, Yoram, and Chandra, Tushar. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th international conference on Machine learning, ICML ’08, pp. 272–279, New York, NY, USA, 2008. ACM. Nemirovski, Arkadi and Yudin, David. Problem complexity and method efficiency in optimization. John Wiley & Sons Ltd, 1983. Nemirovski, Arkadi, Juditsky, Anatoli, Lan, Guanghui, and Shapiro, Alexander. Robust stochastic approximation approach to stochastic programming. 
SIAM Journal on Optimization, 19(4):1574–1609, January 2009. Oja, Erkki and Karhunen, Juha. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. Journal of Mathematical Analysis and Applications, 106:69–84, 1985. Sanger, Terence D. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 12:459–473, 1989. Shalev-Shwartz, Shai and Srebro, Nathan. SVM optimization: Inverse dependence on training set size. In ICML’08, pp. 928–935, 2008. Shalev-Shwartz, Shai and Tewari, Ambuj. Stochastic methods for l1 regularized loss minimization. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML’09, pp. 929–936, New York, NY, USA, 2009. ACM. Shalev-Shwartz, Shai, Singer, Yoram, and Srebro, Nathan. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In ICML’07, pp. 807–814, 2007. Srebro, N., Sridharan, K., and Tewari, A. On the universality of online mirror descent. Advances in Neural Information Processing Systems, 24, 2011. Warmuth, Manfred K. and Kuzmin, Dima. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research (JMLR), 9:2287–2320, 2008.
|
2013
|
140
|
4,866
|
Embed and Project: Discrete Sampling with Universal Hashing Stefano Ermon, Carla P. Gomes Dept. of Computer Science Cornell University Ithaca NY 14853, U.S.A. Ashish Sabharwal IBM Watson Research Ctr. Yorktown Heights NY 10598, U.S.A. Bart Selman Dept. of Computer Science Cornell University Ithaca NY 14853, U.S.A. Abstract We consider the problem of sampling from a probability distribution defined over a high-dimensional discrete set, specified for instance by a graphical model. We propose a sampling algorithm, called PAWS, based on embedding the set into a higher-dimensional space which is then randomly projected using universal hash functions to a lower-dimensional subspace and explored using combinatorial search methods. Our scheme can leverage fast combinatorial optimization tools as a blackbox and, unlike MCMC methods, samples produced are guaranteed to be within an (arbitrarily small) constant factor of the true probability distribution. We demonstrate that by using state-of-the-art combinatorial search tools, PAWS can efficiently sample from Ising grids with strong interactions and from software verification instances, while MCMC and variational methods fail in both cases. 1 Introduction Sampling techniques are one of the most widely used approaches to approximate probabilistic reasoning for high-dimensional probability distributions where exact inference is intractable. In fact, many statistics of interest can be estimated from sample averages based on a sufficiently large number of samples. Since this can be used to approximate #P-complete inference problems, sampling is also believed to be computationally hard in the worst case [1, 2]. Sampling from a succinctly specified combinatorial space is believed to be much harder than searching the space. Intuitively, not only do we need to be able to find areas of interest (e.g., modes of the underlying distribution) but also to balance their relative importance.
Typically, this is achieved using Markov Chain Monte Carlo (MCMC) methods. MCMC techniques are a specialized form of local search that only allows moves that maintain detailed balance, thus guaranteeing the right occupation probability once the chain has mixed. However, in the context of hard combinatorial spaces with complex internal structure, mixing times are often exponential. An alternative is to use complete or systematic search techniques such as Branch and Bound for integer programming, DPLL for SATisfiability testing, and constraint and answer-set programming (CP & ASP), which are preferred in many application areas, and have witnessed a tremendous success in the past few decades. It is therefore a natural question whether one can construct sampling techniques based on these more powerful complete search methods rather than local search. Prior work in cryptography by Bellare et al. [3] showed that it is possible to uniformly sample witnesses of an NP language leveraging universal hash functions and using only a small number of queries to an NP-oracle. This is significant because samples can be used to approximate #P-complete (counting) problems [2], a complexity class believed to be much harder than NP. Practical algorithms based on these ideas were later developed [4–6] to near-uniformly sample solutions of propositional SATisfiability instances, using a SAT solver as an NP-oracle. However, unlike SAT, most models used in Machine Learning, physics, and statistics are weighted (represented, e.g., as graphical models) and cannot be handled using these techniques. We fill this gap by extending this approach, based on hashing-based projections and NP-oracle queries, to the weighted sampling case. Our algorithm, called PAWS, uses a form of approximation by quantization [7] and an embedding technique inspired by slice sampling [8], before applying projections.
This parallels recent work [9] that extended similar ideas for unweighted counting to the weighted counting world, addressing the problem of discrete integration. Although in theory one could use that technique to produce samples by estimating ratios of discrete integrals [1, 2], the general sampling-by-counting reduction requires a large number of such estimates (proportional to the number of variables) for each sample. Further, the accuracy guarantees on the sampling probability quickly become loose when taking ratios of estimates. In contrast, PAWS is a more direct and practical sampling approach, providing better accuracy guarantees while requiring a much smaller number of NP-oracle queries per sample. Answering NP-oracle queries, of course, requires exponential time in the worst case, in accordance with the hardness of sampling. We rely on the fact that combinatorial search tools, however, are often extremely fast in practice, and any complete solver can be used as a black box in our sampling scheme. Another key advantage is that when combinatorial search succeeds, our analysis provides a certificate that, with high probability, any samples produced will be distributed within an (arbitrarily small) constant factor of the desired probability distribution. In contrast, with MCMC methods it is generally hard to assess whether the chain has mixed. We empirically demonstrate that PAWS outperforms MCMC as well as variational methods on hard synthetic Ising Models and on a real-world test case generation problem for software verification.

2 Setup and Problem Definition

We are given a probability distribution p over a (high-dimensional) discrete set X, where the probability of each item x ∈ X is proportional to a weight function w : X → ℝ₊, with ℝ₊ being the set of non-negative real numbers. Specifically, given x ∈ X, its probability p(x) is given by

p(x) = w(x)/Z,   Z = Σ_{x∈X} w(x),

where Z is a normalization constant known as the partition function.
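As a concrete illustration of this setup (not part of PAWS itself), the partition function and target distribution can be computed by brute force for a tiny, hypothetical weight function; this is only feasible for very small n:

```python
from itertools import product

def partition_function(w, n):
    """Brute-force Z = sum over x in {0,1}^n of w(x) (tiny n only)."""
    return sum(w(x) for x in product((0, 1), repeat=n))

def probability(w, n, x):
    """p(x) = w(x) / Z."""
    return w(x) / partition_function(w, n)

# Hypothetical toy weight: favor configurations whose neighbors agree.
w = lambda x: 2.0 ** (int(x[0] == x[1]) + int(x[1] == x[2]))
Z = partition_function(w, 3)  # sums over the 8 configurations of {0,1}^3
```

For this toy w, the all-agreeing states (0,0,0) and (1,1,1) get weight 4 and are four times as likely as a state with no agreements. The intractability of Z for large n is exactly what motivates the hashing-based approach.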
We assume w is specified compactly, e.g., as the product of factors or in a conjunctive normal form. As our driving example, we consider the case of undirected discrete graphical models [10] with n = |V| random variables {x_i, i ∈ V} where each x_i takes values in a finite set X_i. We consider a factor graph representation for a joint probability distribution over elements (or configurations) x ∈ X = X_1 × · · · × X_n:

p(x) = w(x)/Z = (1/Z) Π_{α∈I} ψ_α({x}_α).   (1)

This is a compact representation for p(x) based on the weight function w(x) = Π_{α∈I} ψ_α({x}_α), defined as the product of potentials or factors ψ_α : {x}_α → ℝ₊, where I is an index set and {x}_α ⊆ V the subset of variables factor ψ_α depends on. For simplicity of exposition, without loss of generality, we will focus on the case of binary variables, where X = {0, 1}^n. We consider the fundamental problem of (approximately) sampling from p(x), i.e., designing a randomized algorithm that takes w as input and outputs elements x ∈ X according to the probability distribution p. This is a hard computational problem in the worst case. In fact, it is more general than NP-complete decision problems (e.g., sampling solutions of a SATisfiability instance specified as a factor graph entails finding at least one solution, or deciding there is none). Further, samples can be used to approximate #P-complete problems [2], such as estimating a marginal probability.

3 Sampling by Embed, Project, and Search

Conceptually, our sampling strategy has three steps, described in Sections 3.1, 3.2, and 3.3, resp. (1) From the input distribution p we construct a new distribution p′ that is “close” to p but more discrete. Specifically, p′ is based on a new weight function w′ that takes values only in a discrete set of geometrically increasing weights. (2) From p′, we define a uniform probability distribution p′′ over a carefully constructed higher-dimensional embedding of X = {0, 1}^n.
(The previous discretization step allows us to specify p′′ in a compact form, and sampling from p′′ can be seen to be precisely equivalent to sampling from p′.) (3) Finally, we indirectly sample from the desired distribution p by sampling uniformly from p′′, by randomly projecting the embedding onto a lower-dimensional subspace using universal hash functions and then searching for feasible states. The first and third steps involve a bounded loss of accuracy, which we can trade off with computational efficiency by setting hyper-parameters of the algorithm. A key advantage is that our technique reduces the weighted sampling problem to that of solving one MAP query (i.e., finding the most likely state) and a polynomial number of feasibility queries (i.e., finding any state with non-zero probability) for the original graphical model augmented (through an embedding) with additional variables and carefully constructed factors. In practice, we use a combinatorial optimization package, which requires exponential time in the worst case (consistent with the hardness of sampling) but is often fast in practice. Our analysis shows that whenever the underlying combinatorial search and optimization queries succeed, the samples produced are guaranteed, with high probability, to be coming from an approximately accurate distribution.

3.1 Weight Discretization

We use a geometric discretization of the weights into “buckets”, i.e., a uniform discretization of the log-probability. As we will see, Θ(n) buckets are sufficient to preserve accuracy.

Definition 1. Let M = max_x w(x), r > 1, ε > 0, and ℓ = ⌈log_r(2^n/ε)⌉. Partition the configurations into the following weight-based disjoint buckets: B_i = {x | w(x) ∈ (M/r^{i+1}, M/r^i]} for i = 0, . . . , ℓ−1, and B_ℓ = {x | w(x) ∈ [0, M/r^ℓ]}. The discretized weight function w′ : {0, 1}^n → ℝ₊ is defined as follows: w′(x) = M/r^{i+1} if x ∈ B_i for i < ℓ, and w′(x) = 0 if x ∈ B_ℓ.
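Definition 1's geometric bucketing can be sketched in a few lines by brute-force enumeration. The toy weight function below is an assumption for illustration only, and the enumeration is feasible only for tiny n:

```python
import math
from itertools import product

def discretize(w, n, b=1, eps=0.1):
    """Geometric weight discretization in the spirit of Definition 1,
    by brute-force enumeration (tiny n only).

    Returns ({x: w'(x)}, r): w'(x) = M / r^(i+1) for x in bucket B_i with
    i < l, and w'(x) = 0 for the discarded tail bucket B_l.
    """
    r = 2.0 ** b / (2.0 ** b - 1.0)            # bucket ratio, r > 1
    l = math.ceil(math.log(2 ** n / eps, r))   # number of buckets
    xs = list(product((0, 1), repeat=n))
    M = max(w(x) for x in xs)
    wp = {}
    for x in xs:
        wx = w(x)
        if wx <= M / r ** l:
            wp[x] = 0.0                          # tail bucket B_l: discarded
        else:
            i = math.floor(math.log(M / wx, r))  # M/r^(i+1) < w(x) <= M/r^i
            wp[x] = M / r ** (i + 1)
    return wp, r

w = lambda x: 2.0 ** sum(x)   # hypothetical toy weight on {0,1}^3
wp, r = discretize(w, 3, b=2)
```

Lemma 1's guarantee that every retained state satisfies w(x)/r ≤ w′(x) ≤ w(x) can be checked directly on the output; the maximum-weight state always lands in B_0 with w′ = M/r.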
The corresponding discretized probability distribution is p′(x) = w′(x)/Z′, where Z′ is the normalization constant.

Lemma 1. Let ρ = r²/(1 − ε). For all x ∈ ∪_{i=0}^{ℓ−1} B_i, p(x) and p′(x) are within a factor of ρ of each other. Furthermore, Σ_{x∈B_ℓ} p(x) ≤ ε.

Proof. Since w maps to non-negative values, we have Z ≥ M. Further,

Σ_{x∈B_ℓ} p(x) = (1/Z) Σ_{x∈B_ℓ} w(x) ≤ |B_ℓ| M/(Z r^ℓ) ≤ (|B_ℓ|/2^n)(εM/Z) ≤ εM/Z ≤ ε.

This proves the second part of the claim. For the first part, note that by construction Z′ ≤ Z and

Z′ = Σ_{i=0}^{ℓ} Σ_{x∈B_i} w′(x) ≥ Σ_{i=0}^{ℓ−1} Σ_{x∈B_i} (1/r) w(x) = (1/r)(Z − Σ_{x∈B_ℓ} w(x)) ≥ ((1 − ε)/r) Z.

Thus Z and Z′ are within a factor of r/(1 − ε) of each other. For all x such that x ∉ B_ℓ, recalling that r > 1 > 1 − ε and that w(x)/r ≤ w′(x) ≤ r w(x), we have

(1/ρ) p(x) ≤ w(x)/(r Z) ≤ w(x)/(r Z′) ≤ w′(x)/Z′ = p′(x) ≤ r w(x)/Z′ ≤ (r²/(1 − ε)) w(x)/Z = ρ p(x).

This finishes the proof that p(x) and p′(x) are within a factor of ρ of each other.

Remark 1. If the weights w defined by the original graphical model are represented in finite precision (e.g., there are 2^64 possible weights in double precision floating point), for every b ≥ 1 there is a possibly large but finite value of ℓ (such that M/r^ℓ is smaller than the smallest representable weight) such that B_ℓ is empty and the discretization error ε is effectively zero.

3.2 Embed: From Weighted to Uniform Sampling

We now show how to reduce the problem of sampling from the discrete distribution p′ (weighted sampling) to the problem of uniformly sampling, without loss of accuracy, from a higher-dimensional discrete set into which X = {0, 1}^n is embedded. This is inspired by slice sampling [8], and can be intuitively understood as its discrete counterpart where we uniformly sample points (x, y) from a discrete representation of the area under the (y vs. x) probability density function of p′.

Definition 2. Let w : X → ℝ₊, M = max_x w(x), and r = 2^b/(2^b − 1). Then the embedding S(w, ℓ, b) of X in X × {0, 1}^{(ℓ−1)b} is defined as:

S(w, ℓ, b) = { (x, y_1^1, y_1^2, . . . , y_{ℓ−1}^{b−1}, y_{ℓ−1}^b) | w(x) ≤ M/r^i ⇒ ⋁_{k=1}^{b} y_i^k, 1 ≤ i ≤ ℓ−1; w(x) > M/r^ℓ },

where ⋁_{k=1}^{b} y_i^k may alternatively be thought of as the linear constraint Σ_{k=1}^{b} y_i^k ≥ 1. Further, let p′′ denote a uniform probability distribution over S(w, ℓ, b) and n′ = n + (ℓ−1)b. Given a compact representation of w within a combinatorial search or optimization framework, the set S(w, ℓ, b) can often be easily encoded using the disjunctive constraints on the y variables.

Lemma 2. Let (x, y) = (x, y_1^1, y_1^2, · · · , y_1^b, y_2^1, · · · , y_2^b, · · · , y_{ℓ−1}^1, · · · , y_{ℓ−1}^b) be a sample from p′′, i.e., a uniformly sampled element from S(w, ℓ, b). Then x is distributed according to p′. Informally, given x ∈ B_i and x′ ∈ B_{i+1} with i + 1 ≤ ℓ−1, there are precisely r = 2^b/(2^b − 1) times more valid configurations (x, y) than (x′, y′). Thus x is sampled r times more often than x′. A formal proof may be found in the Appendix.

3.3 Project and Search: Uniform Sampling with Hash Functions and an NP-oracle

In principle, using the technique of Bellare et al. [3] and n′-wise independent hash functions we can sample purely uniformly from S(w, ℓ, b) using an NP oracle to answer feasibility queries. However, such hash functions involve constructions that are difficult to implement and reason about in existing combinatorial search methods. Instead, we use a more practical algorithm based on pairwise independent hash functions that can be implemented using parity constraints (modular arithmetic) and still provides accuracy guarantees. The approach is similar to [5], but we include an algorithmic way to estimate the number of parity constraints to be used. We also use the pivot technique from [6] but extend that work in two ways: we introduce a parameter α (similar to [5]) that allows us to trade off uniformity against runtime and also provide upper bounds on the sampling probabilities. We refer to our algorithm as PArity-based Weighted Sampler (PAWS) and provide its pseudocode as Algorithm 1.
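Before moving on to the projection step, the counting behind Lemma 2 is easy to check numerically: for x in bucket B_i, the y-blocks 1..i must each be non-zero ((2^b − 1) choices apiece) while the remaining blocks are free (2^b choices), so consecutive buckets differ by exactly the factor r. A small sketch, with b and ℓ being assumed toy values:

```python
import math

def num_embeddings(i, l, b):
    """Number of y-configurations paired with an x in bucket B_i under the
    embedding of Definition 2: blocks 1..i constrained non-zero, rest free."""
    return (2 ** b - 1) ** i * (2 ** b) ** (l - 1 - i)

b, l = 2, 6                  # assumed toy parameters
r = 2 ** b / (2 ** b - 1)    # r = 4/3 for b = 2
# Uniform sampling over the embedding therefore weights bucket B_i
# proportionally to r^(-i), i.e., proportionally to w'(x) (Lemma 2).
ratios = [num_embeddings(i, l, b) / num_embeddings(i + 1, l, b)
          for i in range(l - 2)]
```

Every ratio equals r exactly, which is the factor by which consecutive discretized weights differ.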
The idea is to project by randomly constraining the configuration space using a family of universal hash functions, search for up to P “surviving” configurations, and then, if fewer than P survive, perform rejection sampling to choose one of them. The number k of constraints or factors (encoding a randomly chosen hash function) to add is determined first; this is where we depart from both Gomes et al. [5], who do not provide a way to compute k, and Chakraborty et al. [6], who do not fix k or provide upper bounds. Then we repeatedly add k such constraints, check whether fewer than P configurations survive, and if so output one configuration chosen using rejection sampling. Intuitively, we need the hashed space to contain no more than P solutions because that is a base case where we know how to produce uniform samples via enumeration. k is a guess (accurate with high probability) of the number of constraints that is likely to reduce (by hashing) the original problem to a situation where enumeration is feasible. If too many or too few configurations survive, the algorithm fails and is run again. The small failure probability, accounting for a potentially poor choice of random hash functions, can be bounded irrespective of the underlying graphical model. A combinatorial optimization procedure is used once in order to determine the maximum weight M through MAP inference. M is used in the discretization step. Subsequently, several feasibility queries are issued to the underlying combinatorial search procedure in order to, e.g., count the number of surviving configurations and produce one as a sample. We briefly review the construction and properties of universal hash functions [11, 12]. Definition 3. 
H = {h : {0, 1}^n → {0, 1}^m} is a family of pairwise independent hash functions if the following two conditions hold when a function H is chosen uniformly at random from H: 1) ∀x ∈ {0, 1}^n, the random variable H(x) is uniformly distributed in {0, 1}^m; 2) ∀x₁, x₂ ∈ {0, 1}^n with x₁ ≠ x₂, the random variables H(x₁) and H(x₂) are independent.

Proposition 1. Let A ∈ {0, 1}^{m×n}, c ∈ {0, 1}^m. The family H = {h_{A,c}(x) : {0, 1}^n → {0, 1}^m} where h_{A,c}(x) = Ax + c mod 2 is a family of pairwise independent hash functions. Further, H is also known to be a family of three-wise independent hash functions [5].

Algorithm 1: Algorithm PAWS for sampling configurations σ according to w

 1: procedure COMPUTEK(n′, δ, P, S)
 2:   T ← 24⌈ln(n′/δ)⌉; k ← −1; count ← 0
 3:   repeat
 4:     k ← k + 1; count ← 0
 5:     for t = 1, · · · , T do
 6:       Sample hash function h^k_{A,c} : {0, 1}^{n′} → {0, 1}^k
 7:       Let S_{k,t} ≜ {(x, y) ∈ S | h^k_{A,c}(x, y) = 0}
 8:       if |S_{k,t}| < P then        /* search for ≥ P different elements */
 9:         count ← count + 1
10:     end for
11:   until count ≥ ⌈T/2⌉ or k = n′
12:   return k
13: end procedure

14: procedure PAWS(w : {0, 1}^n → ℝ₊, ℓ, b, δ, P, α)
15:   M ← max_x w(x)                   /* compute with one MAP inference query on w */
16:   S ← S(w, ℓ, b); n′ ← n + b(ℓ−1)  /* as in Definition 2 */
17:   i ← COMPUTEK(n′, δ, P, S) + α
18:   Sample hash fn. h^i_{A,c} : {0, 1}^{n′} → {0, 1}^i, i.e., uniformly choose A ∈ {0, 1}^{i×n′}, c ∈ {0, 1}^i
19:   Let S_i ≜ {(x, y) ∈ S | h^i_{A,c}(x, y) = 0}
20:   Check if |S_i| ≥ P by searching for at least P different elements
21:   if |S_i| ≥ P or |S_i| = 0 then
22:     return ⊥                       /* failure */
23:   else
24:     Fix an arbitrary ordering of S_i  /* for rejection sampling */
25:     Uniformly sample p from {0, 1, . . . , P − 1}
26:     if p ≤ |S_i| then
27:       Select p-th element (x, y) of S_i; return x
28:     else
29:       return ⊥                     /* failure */
30: end procedure

Lemma 3 (see Appendix for a proof) shows that the subroutine COMPUTEK in Algorithm 1 outputs with high probability a value close to log(|S(w, ℓ, b)|/P).
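Proposition 1's hash family is simple to implement in isolation. The sketch below uses toy sizes (it is not the ToulBar2/CPLEX parity-constraint encoding used in the paper's experiments): it draws h_{A,c}(x) = Ax + c mod 2 and shows how conditioning on h(x) = 0 cuts a set of configurations down by roughly a factor 2^m:

```python
import random

def sample_parity_hash(n, m, rng):
    """Draw h_{A,c}(x) = Ax + c (mod 2) from the pairwise independent
    family of Proposition 1: A uniform in {0,1}^{m x n}, c in {0,1}^m."""
    A = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
    c = [rng.randrange(2) for _ in range(m)]
    def h(x):
        return tuple((sum(a * xi for a, xi in zip(row, x)) + ci) % 2
                     for row, ci in zip(A, c))
    return h

rng = random.Random(0)
n, m = 10, 4
# A stand-in "solution set" of 200 configurations (would come from the
# embedding S(w, l, b) in the actual algorithm).
S = [tuple(rng.randrange(2) for _ in range(n)) for _ in range(200)]
h = sample_parity_hash(n, m, rng)
survivors = [x for x in S if h(x) == (0,) * m]
# In expectation |survivors| = |S| / 2^m, here roughly 200 / 16.
```

This is the "project" step in miniature: adding m random parity constraints leaves, on average, |S|/2^m survivors, and COMPUTEK searches for the m (called k there) that leaves fewer than P.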
The idea is similar to an unweighted version of the WISH algorithm [9] but with tighter guarantees and using more feasibility queries. Lemma 3. Let S = S(w, ℓ, b) ⊆{0, 1}n′, δ > 0, and γ > 0. Further, let P ≥min{2, 2γ+2/(2γ − 1)2}, Z = |S|, k∗ P = log(Z/P), and k be the output of procedure COMPUTEK(n′, δ, P, S). Then, P[k∗ P −γ ≤k ≤k∗ P + 1 + γ] ≥1 −δ and COMPUTEK uses O(n′ ln (n′/δ)) feasibility queries. Lemma 4. Let S = S(w, ℓ, b) ⊆{0, 1}n′, δ > 0, P ≥2, and γ = log (P + 2 √ P + 1 + 2)/P . For any α ∈Z, α > γ, let c(α, P) = 1 −2γ−α/(1 − 1 P −2γ−α)2. Then with probability at least 1 −δ the following holds: PAWS(w, ℓ, b, δ, P, α) outputs a sample with probability at least c(α, P)2−(γ+α+1) P P −1 and, conditioned on outputting a sample, every element (x, y) ∈ S(w, ℓ, b) is selected (Line 27) with probability p′ s(x, y) within a constant factor c(α, P) of the uniform probability p′′(x, y) = 1/|S|. Proof Sketch. For lack of space, we defer details to the Appendix. Briefly, the probability P[σ ∈Si] that σ = (x, y) survives is 2−i by the properties of the hash functions in Definition 3, and the probability of being selected by rejection sampling is 1/(P −1). Conditioned on σ surviving, the mean and variance of the size of the surviving set |Si| are independent of σ because of 3-wise independence. When k∗ P −γ ≤k ≤k∗ P + 1 + γ and i = k + α, α > γ, on average |Si| < P and the size is concentrated around the mean. Using Chebychev’s inequality, one can upper bound by 1 −c(α, P) the probability P[Si ≥P | σ ∈Si] that the algorithm fails because |Si| is too large. Note that the bound is independent of σ and lets us bound the probability ps(σ) that σ is output: c(α, P) 2−i P −1 = 1 − 2γ−α (1 −1 P −2γ−α)2 2−i P −1 ≤ps(σ) ≤ 2−i P −1. (2) 5 From i = k + α ≤k∗ P + 1 + γ + α and summing the lower bound of ps(σ) over all σ, we obtain the desired lower bound on the success probability. 
Note that given σ, σ′, ps(σ) and ps(σ′) are within a constant factor c(α, P) of each other from (2). Therefore, the probabilities p′ s(σ) (for various σ) that σ is output conditioned on outputting a sample are also within a constant factor of each other. From the normalization P σ p′ s(σ) = 1, one gets the desired result that p′ s(x, y) is within a constant factor c(α, P) of the uniform probability p′′(x, y) = 1/|S|. 3.4 Main Results: Sampling with Accuracy Guarantees Combining pieces from the previous three sections, we have the following main result: Theorem 1. Let w : {0, 1}n →R+, ǫ > 0, b ≥1, δ > 0, and P ≥2. Fix α ∈Z as in Lemma 4, r = 2b/(2b−1), ℓ= ⌈logr(2n/ǫ)⌉, ρ = r2/(1−ǫ), bucket Bℓas in Definition 1, and κ = 1/c(α, P). Then P x∈Bℓp(x) ≤ǫ and with probability at least (1 −δ)c(α, P)2−(γ+α+1) P P −1, PAWS(w, ℓ, b, δ, P, α) succeeds and outputs a sample σ from {0, 1}n \ Bℓ. Upon success, each σ ∈{0, 1}n \ Bℓ is output with probability p′ s(σ) within a constant factor ρκ of the desired probability p(σ) ∝w(σ). Proof. Success probability follows from Lemma 4. For x ∈{0, 1}n \ Bℓ, combining Lemmas 1, 2, 4 we obtain 1 ρκp(x) ≤1 κp′(x) = X y:(x,y)∈S(w,ℓ,b) 1 κp′′(x, y) ≤ X y|(x,y)∈S(w,ℓ,b) p′ s(x, y) = p′ s(x) ≤ X y:(x,y)∈S(w,ℓ,b) κp′′(x, y) = κp′(x) ≤ρκp(x) where the first inequality accounts for discretization error from p(x) to p′(x) (Lemma 1), equality follows from Lemma 2, and the sampling error between p′′ and p′ s is bounded by Lemma 4. The rest is proved in Lemmas 1, 2. Remark 2. By appropriately setting the hyper-parameters b and ℓwe can make the discretization errors ρ and ǫ arbitrarily small. Although this does not change the number of required feasibility queries, it can significantly increase the runtime of combinatorial search because of the increased search space size |S(w, ℓ, b)|. Practically, one should set these parameters as large as possible, while ensuring combinatorial searches can be completed within the available time budget. 
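The hyper-parameter arithmetic behind Theorem 1 and this remark can be tabulated directly; the values of n, b, and ε below are assumptions chosen only to illustrate the trade-off:

```python
import math

def paws_params(n, b, eps):
    """Discretization quantities from Theorem 1: r = 2^b / (2^b - 1),
    l = ceil(log_r(2^n / eps)), distortion rho = r^2 / (1 - eps),
    and embedded dimension n' = n + b * (l - 1)."""
    r = 2.0 ** b / (2.0 ** b - 1.0)
    l = math.ceil(math.log(2.0 ** n / eps, r))
    rho = r ** 2 / (1.0 - eps)
    n_prime = n + b * (l - 1)
    return r, l, rho, n_prime

# Larger b: finer buckets (rho approaches 1/(1-eps)) but a larger search
# space, since the embedding adds b*(l-1) extra variables.
for b in (1, 2, 3):
    r, l, rho, n_prime = paws_params(n=10, b=b, eps=0.01)
```

For n = 10 and ε = 0.01, for instance, b = 1 gives r = 2, ℓ = 17 and ρ ≈ 4.04, while b = 2 already brings ρ below 1.8 at the cost of roughly 80 additional embedding variables.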
Increasing parameter P improves the accuracy as well, but also increases the number of feasibility queries issued, which is proportional to P (but does not affect the structure of the search space). Similarly, by increasing α we can make κ arbitrarily small. However, the probability of success of the algorithm decreases exponentially as α is increased. We will demonstrate in the next section that a practical tradeoff between computational complexity and accuracy can be achieved for reasonably sized problems of interest.

Corollary 2. Let w, b, ε, ℓ, δ, P, α, and B_ℓ be as in Theorem 1, and p′_s(σ) be the output distribution of PAWS(w, ℓ, b, δ, P, α). Let φ : {0, 1}^n → ℝ and η_φ = max_{x∈B_ℓ} |φ(x)| ≤ ‖φ‖_∞. Then,

(1/(ρκ)) E_{p′_s}[φ] − ε η_φ ≤ E_p[φ] ≤ ρκ E_{p′_s}[φ] + ε η_φ,

where E_{p′_s}[φ] can be approximated with a sample average using samples produced by PAWS.

4 Experiments

We evaluate PAWS on synthetic Ising Models and on a real-world test case generation problem for software verification. All experiments used Intel Xeon 5670 3GHz machines with 48GB RAM.

4.1 Ising Grids Models

We first consider the marginal computation task for synthetic grid-structured Ising models with random interactions (attractive and mixed).

[Figure 1: Estimated marginals vs. true marginals on 8 × 8 Ising Grid models, for (a) Mixed (w = 4.0, f = 0.6) and (b) Attractive (w = 3.0, f = 0.45) interactions. Closeness to the 45 degree line indicates accuracy. Methods shown: Gibbs, Belief Propagation, WISH, PAWS b=1, PAWS b=2. PAWS is run with b ∈ {1, 2}, P = 4, α = 1, and ℓ = 25 (mixed case) or ℓ = 40 (attractive case).]

Specifically, the corresponding graphical model has n binary variables x_i, i = 1, · · · , n, with single node potentials ψ_i(x_i) = exp(f_i x_i) and pairwise
interactions ψij(xi, xj) = exp(wijxixj), where fi ∈R [−f, f] and wij ∈R [−w, w] in the mixed case, while wij ∈R [0, w] in the attractive case. Our implementation of PAWS uses the open source solver ToulBar2 [13] to compute M = maxx w(x) and as an oracle to check the existence of at least P different solutions. We augmented ToulBar2 with the IBM ILOG CPLEX CP Optimizer 12.3 [14] based on techniques borrowed from [15] to efficiently reason about parity constraints (the hash functions) using GaussJordan elimination. We run the subroutine COMPUTEK in Algorithm 1 only once at the beginning, and then generate all the samples with the same value of i (Line 17). The comparison is with Gibbs sampling, Belief Propagation, and the recent WISH algorithm [9]. Ground truth is obtained using the Junction Tree method [16]. In Figure 1, we show a scatter plot of the estimated vs. true marginal probabilities for two Ising grids with mixed and attractive interactions, respectively, representative of the general behavior in the large-weights regime. Each sampling method is run for 10 minutes. Marginals computed with Gibbs sampling (run for about 108 iterations) are clearly very inaccurate (far from the 45 degree line), an indication that the Markov Chain had not mixed as an effect of the relatively large weights that tend to create barriers between modes which are hard to traverse. In contrast, samples from PAWS provide much more accurate marginals, in part because it does not rely on local search and hence is not directly affected by the energy landscape (with respect to the Hamming metric). Further, we see that we can improve the accuracy by increasing the hyper-parameter b. These results highlight the practical value of having accuracy guarantees on the quality of the samples after finite amounts of time vs. MCMC-style guarantees that hold only after a potentially exponential mixing time. 
Belief Propagation can be seen from Figure 1 to be quite inaccurate in this large-weights regime. Finally, we also compare to the recent WISH algorithm [9] which uses similar hash-based techniques to estimate the partition function of graphical models. Since producing samples with the general sampling-by-counting reduction [1, 2] or estimating each marginal as the ratio of two partition functions (with and without a variable clamped) would be too expensive (requiring n + 1 calls to WISH) we heuristically run it once and use the solutions of the optimization instances it solves in the inner loop as samples. We see in Figure 1 that while samples produced by WISH can sometimes produce fairly accurate marginal estimates, these estimates can also be far from the true value because of an inherent bias introduced by the arg max operator.

4.2 Test Case Generation for Software Verification

Hardware and software verification tools are becoming increasingly important in industrial system design. For example, IBM estimates $100 million savings over the past 10 years from hardware verification tools alone [17]. Given that complete formal verification is often infeasible, the paradigm of choice has become that of randomly generating “interesting” test cases to stress the code or chip with the hope of uncovering bugs. Typically, a model based on hard constraints is used to specify consistent input/output pairs, or valid program execution traces.

[Figure 2: Experiments on software verification benchmark. (a) Marginals: runtime and mean squared error (table below); (b) True vs. observed sampling frequencies (plot of Solution Frequency vs. Theoretical Sample Frequency).]

Instance    Vars  Factors  Time (s)  MSE (×10⁻⁵)
bench1039   330   785      1710      5.76
bench431    173   410      34.97     4.35
bench115    189   458      52.75     20.74
bench97     170   401      67.03     45.57
bench590    244   527      593.71    8.11
bench105    243   524      842.35    8.56
In addition, in some systems, domain knowledge can be specified by experts in the form of soft constraints, for instance to introduce a preference for test cases where operands are zero and bugs are more likely [17]. For our experiments, we focus on software (SW) verification, using an industrial benchmark [18] produced by Microsoft’s SAGE system [19, 20]. Each instance defines a uniform probability distribution over certain valid traces of a computer program. We modify this benchmark by introducing soft constraints defining a weighted distribution over valid traces, indicating traces that meet certain criteria should be sampled more often. Specifically, following Naveh et al. [17] we introduce a preference towards traces where certain registers are zero. The weight is chosen to be a power of two, so that there is no loss of accuracy due to discretization using the previous construction with b = 1. These instances are very difficult for MCMC methods because of the presence of very large regions of zero probability that cannot be traversed and thus can break the ergodicity assumption. Indeed we observed that Gibbs sampling often fails to find a non-zero probability state, and when it finds one it gets stuck there, because there might not be a non-zero probability path from one feasible state to another. In contrast, our sampling strategy is not affected and does not require any ergodicity assumption. Table 2a summarizes the results obtained using the propositional satisfiability (SAT) solver CryptoMiniSAT [21] as the feasibility query oracle for PAWS. CryptoMiniSAT has built-in support for parity constraints Ax = c mod 2. We report the time to collect 1000 samples and the Mean Squared Error (MSE) of the marginals estimated using these samples. We report results only on the subset of instances where we could enumerate all feasible states using the exact model counter Relsat [22] in order to obtain ground truth marginals for MSE computation. 
We see that PAWS scales to fairly large instances with hundreds of variables and gives accurate estimates of the marginals. Figure 2b shows the theoretical vs. observed sampling frequencies (based on 50000 samples) for a small instance with 810 feasible states (execution traces), where we see that the output distribution p′ s is indeed very close to the target distribution p. 5 Conclusions We introduced a new approach, called PAWS, to the fundamental problem of sampling from a discrete probability distribution specified, up to a normalization constant, by a weight function, e.g., by a discrete graphical model. While traditional sampling methods are based on the MCMC paradigm and hence on some form of local search, PAWS can leverage more advanced combinatorial search and optimization tools as a black box. A significant advantage over MCMC methods is that PAWS comes with a strong accuracy guarantee: whenever combinatorial search succeeds, our analysis provides a certificate that, with high probability, the samples are produced from an approximately correct distribution. In contrast, accuracy guarantees for MCMC methods hold only in the limit, with unknown and potentially exponential mixing times. Further, the hyper-parameters of PAWS can be tuned to trade off runtime with accuracy. Our experiments demonstrate that PAWS outperforms competing sampling methods on challenging domains for MCMC. 8 References [1] N.N. Madras. Lectures on Monte Carlo Methods. American Mathematical Society, 2002. ISBN 0821829785. [2] M. Jerrum and A. Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration. Approximation algorithms for NP-hard problems, pages 482– 520, 1997. [3] Mihir Bellare, Oded Goldreich, and Erez Petrank. Uniform generation of NP-witnesses using an NP-oracle. Information and Computation, 163(2):510–526, 2000. [4] Stefano Ermon, Carla P. Gomes, and Bart Selman. Uniform solution sampling using a constraint solver as an oracle. 
In UAI, pages 255–264, 2012. [5] C.P. Gomes, A. Sabharwal, and B. Selman. Near-uniform sampling of combinatorial spaces using XOR constraints. In NIPS-2006, pages 481–488, 2006. [6] S. Chakraborty, K. Meel, and M. Vardi. A scalable and nearly uniform generator of SAT witnesses. In CAV-2013, 2013. [7] Vibhav Gogate and Pedro Domingos. Approximation by quantization. In UAI, pages 247–255, 2011. [8] Radford M Neal. Slice sampling. Annals of statistics, pages 705–741, 2003. [9] Stefano Ermon, Carla Gomes, Ashish Sabharwal, and Bart Selman. Taming the curse of dimensionality: Discrete integration by hashing and optimization. In ICML, 2013. [10] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008. [11] S. Vadhan. Pseudorandomness. Foundations and Trends in Theoretical Computer Science, 2011. [12] O. Goldreich. Randomized methods in computation. Lecture Notes, 2011. [13] D. Allouche, S. de Givry, and T. Schiex. Toulbar2, an open source exact cost function network solver. Technical report, INRIA, 2010. [14] IBM ILOG. IBM ILOG CPLEX Optimization Studio 12.3, 2011. [15] Carla P. Gomes, Willem Jan van Hoeve, Ashish Sabharwal, and Bart Selman. Counting CSP solutions using generalized XOR constraints. In AAAI, 2007. [16] Steffen L Lauritzen and David J Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society. Series B (Methodological), pages 157–224, 1988. [17] Yehuda Naveh, Michal Rimon, Itai Jaeger, Yoav Katz, Michael Vinov, Eitan s Marcu, and Gil Shurek. Constraint-based random stimuli generation for hardware verification. AI magazine, 28(3):13, 2007. [18] Clark Barrett, Aaron Stump, and Cesare Tinelli. The Satisfiability Modulo Theories Library (SMT-LIB). www.SMT-LIB.org, 2010. [19] Patrice Godefroid, Michael Y Levin, David Molnar, et al. 
Automated whitebox fuzz testing. In NDSS, 2008. [20] Patrice Godefroid, Michael Y. Levin, and David Molnar. Sage: Whitebox fuzzing for security testing. Queue, 10(1):20:20–20:27, January 2012. ISSN 1542-7730. [21] M. Soos, K. Nohl, and C. Castelluccia. Extending SAT solvers to cryptographic problems. In SAT-2009. Springer, 2009. [22] Robert J. Bayardo and Joseph Daniel Pehoushek. Counting models using connected components. In AAAI-2000, pages 157–162, 2000.
|
2013
|
141
|
4,867
|
Fisher-Optimal Neural Population Codes for High-Dimensional Diffeomorphic Stimulus Representations Zhuo Wang Department of Mathematics University of Pennsylvania Philadelphia, PA 19104 wangzhuo@sas.upenn.edu Alan A. Stocker Department of Psychology University of Pennsylvania Philadelphia, PA 19104 astocker@sas.upenn.edu Daniel D. Lee Department of Electrical and Systems Engineering University of Pennsylvania Philadelphia, PA 19104 ddlee@seas.upenn.edu Abstract In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1 Introduction There has been much work investigating how information about stimulus variables is represented by a population of neurons in the brain [1]. Studies on motion perception [2, 3] and sound localization [4, 5] have demonstrated that these representations adapt to the stimulus statistics on various time scales [6, 7, 8, 9]. This raises the natural question of what encoding scheme is underlying this adaptive process? To address this question, several assumptions about the neural representation and its overall objective need to be made. 
In the case of a one-dimensional stimulus, a number of theoretical approaches have previously been investigated. Some work has focused on the scenario with a single neuron [10, 11, 12, 13, 14, 15], while other work has focused on the population level [16, 17, 18, 19, 20, 21, 22, 23], under different model and noise assumptions. However, the question becomes more difficult when considering adaptation to high-dimensional stimuli. An interesting class of solutions to this question is related to independent component analysis (ICA) [24, 25, 26], which considers maximizing the amount of information in the encoding given a distribution of stimulus inputs. The use of mutual information as a metric for neural coding quality has also been discussed in [27].

In this paper, we study Fisher-optimal population codes for the diffeomorphic encoding of stimuli with multivariate Gaussian distributions. Using Fisher information, we investigate the properties of representations that minimize the L2 reconstruction error assuming an optimal decoder. The optimization problem is derived under a diffeomorphic assumption, i.e. the number of encoding neurons matches the dimensionality of the input and the nonlinearity is monotonic. In this case, the optimal solution can be found analytically and can be given a geometric interpretation. Qualitative differences between this solution and the previously studied information maximization solutions are demonstrated and discussed.

2 Model and Methods

2.1 Encoding and Decoding Model

We consider an n-dimensional stimulus input s = (s1, . . . , sn) with prior distribution p(s). In general, a population of m neurons can have m individual activation functions h1(s), . . . , hm(s), which determine the average firing rate of each neuron in response to the stimulus. However, the encoding process is affected by neural noise.
Two commonly used models are the Poisson noise model and the constant Gaussian noise model, for which the observed firing rate vector r = (r1, . . . , rm) follows the distribution p(r|s), where

r_k T ∼ Poisson(h_k(s)T)        (Poisson noise)   (1)
r_k T ∼ Gaussian(h_k(s)T, VT)   (Gaussian noise)  (2)

As opposed to encoding, the decoding process involves constructing an estimator ŝ(r), which deterministically maps the response r to an estimate ŝ of the true stimulus s. We choose the maximum likelihood estimator ŝ_MLE(r) = argmax_s p(r|s) because its statistical properties, discussed in Section 2.3, simplify the calculation.

2.2 Fisher Information Matrix

The Fisher information is a key concept widely used in optimal coding theory. For multiple dimensions, the Fisher information matrix is defined element-wise for each s, as in [28],

I_F(s)_{ij} = ⟨ ∂/∂s_i log p(r|s) · ∂/∂s_j log p(r|s) ⟩_{r|s}   (3)

In supplementary section A we prove that the Fisher information matrix for a population of m neurons is

I_F(s) = T · Σ_{k=1}^m h_k(s)^{−1} ∇h_k(s) ∇h_k(s)ᵀ   (Poisson noise)   (4)
I_F(s) = T · Σ_{k=1}^m V^{−1} ∇h̃_k(s) ∇h̃_k(s)ᵀ      (Gaussian noise)  (5)

where T is the length of the encoding time window and V is the variance of the constant Gaussian noise. The equivalence of the two noise models can be established via the variance-stabilizing transformation h̃_k = 2√h_k [29]. Without loss of generality, throughout the paper we assume the Gaussian noise model for mathematical convenience. We also simply set V = 1 and T = 1 because they do not change the optimal solution for any Fisher information-related quantity.

2.3 Cramer-Rao Lower Bound

Ideally, a good neural population code should produce estimates ŝ that are close to the true value of the stimulus s, though multiple measures exist for how well an estimate matches the true value. One possibility is the L2 loss, which is related to the Fisher information matrix via the Cramer-Rao lower bound [28].
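Before stating the bound, the Gaussian-noise Fisher information matrix of Eq. (5) can be sanity-checked numerically. Below is a minimal sketch with V = T = 1, assuming for concreteness a small population of sigmoidal linear-nonlinear neurons of the kind introduced later in Section 2.5; all names and sizes are illustrative:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sigmoid_deriv(t):
    v = sigmoid(t)
    return v * (1.0 - v)

def fisher_info(W, s):
    """I_F(s) = sum_k grad h_k(s) grad h_k(s)^T for Gaussian noise
    with V = T = 1 (Eq. 5); here h_k(s) = sigmoid(w_k^T s)."""
    IF = np.zeros((W.shape[0], W.shape[0]))
    for k in range(W.shape[1]):
        grad = sigmoid_deriv(W[:, k] @ s) * W[:, k]   # grad of h_k at s
        IF += np.outer(grad, grad)
    return IF

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
W /= np.linalg.norm(W, axis=0)            # unit-norm basis vectors
s = rng.standard_normal(3)
IF = fisher_info(W, s)

# A sum of rank-one outer products is symmetric and positive semidefinite
assert np.allclose(IF, IF.T)
assert np.min(np.linalg.eigvalsh(IF)) > -1e-12
```

The symmetry and positive semidefiniteness checks are exactly the properties the Cramer-Rao bound below relies on.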
For any unbiased estimator ŝ, including the MLE,

cov[ŝ − s] ≥ I_F(s)^{−1}   (6)

in the sense that cov[ŝ − s] − I_F(s)^{−1} is a positive semidefinite matrix. Although Eq.(6) is only a lower bound, it is attained by the MLE ŝ because the MLE is asymptotically efficient. The local L2 decoding error satisfies ⟨∥ŝ − s∥²|s⟩_r = tr(cov(ŝ − s)) ≥ tr(I_F(s)^{−1}). In order to minimize the overall L2 decoding error, one should therefore minimize the attainable lower bound on the right side of Eq.(7), under appropriate constraints on h_k(·):

⟨∥ŝ − s∥²⟩_s ≥ ⟨tr(I_F(s)^{−1})⟩_s   (7)

2.4 Mutual Information Limit

Another possible measure of neural coding quality is the mutual information. This quantity does not explicitly rely on an estimator ŝ(r) but directly measures the statistical dependence between the response and the stimulus. The link between mutual information and the Fisher information matrix was established in [16]. One goal (infomax) is to maximize the mutual information I(r, s) = H(r) − H(r|s). Assuming perfect integration, the first term H(r) asymptotically converges to a constant H(s) for long encoding times because the noise is Gaussian. The second term satisfies H(r|s) = ⟨H(r|s*)⟩_{s*} because the noise is independent. For each s*, the conditional entropy is H(r|s = s*) = const − (1/2) log det I_F(s*), since the response is asymptotically a Gaussian variable with covariance proportional to I_F(s*)^{−1}. Therefore the mutual information is

I(r, s) = const + (1/2) ⟨log det I_F(s)⟩_s   (8)

2.5 Diffeomorphic Population

Before one can formalize the optimal coding problem, some assumptions about the neural population need to be made. Under the diffeomorphic assumption, the number of neurons (m) in the population matches the dimensionality (n) of the input stimulus. Each neuron projects the stimulus s onto its basis vector w_k and passes the one-dimensional projection t_k = w_kᵀ s through a sigmoidal tuning curve h_k(·), which is bounded, 0 ≤ h_k(·) ≤ 1:

r_k = h_k(w_kᵀ s).   (9)

We would like to optimize for the nonlinear functions h1(·), . . .
, h_n(·) and the basis {w_k}_{k=1}^n simultaneously. We may assume ∥w_k∥ = 1, since any scale can be absorbed by the nonlinearity. Such an encoding scheme is called diffeomorphic because the population establishes a smooth and invertible mapping from the stimulus space s ∈ S to the rate space r ∈ R. An arbitrary observation of the firing rate r can first be inverted to recover the hidden variables t_k = h_k^{−1}(r_k) and then linearly decoded to obtain ŝ_MLE. Fig.1a shows how the encoding scheme is implemented by a neural network. Fig.1b illustrates explicitly how a 2D stimulus s is encoded by two neurons with basis vectors w1, w2 and nonlinear mappings h1, h2.

Figure 1: (a) Illustration of a neural network with diffeomorphic encoding. (b) The Linear-Nonlinear (LN) encoding process for a 2D stimulus s.

3 Review of One Dimensional Solution

In the case of encoding a one-dimensional stimulus, the diffeomorphic population is just one neuron with a sigmoidal tuning curve r = h(w · s). The only two options, w = ±1, are determined by whether the sigmoidal tuning curve is increasing or decreasing; here we simply assume w = 1. For the L2-minimization problem, we want to minimize ⟨tr(I_F(s)^{−1})⟩ = ⟨h′(s)^{−2}⟩ because of Eq.(5) and (7). Applying Holder's inequality [30] to the non-negative functions p(s)/h′(s)² and h′(s),

∫ p(s)/h′(s)² ds · ( ∫ h′(s) ds )² ≥ ( ∫ p(s)^{1/3} ds )³   (10)

where the first factor is the overall L2 loss and the second factor equals 1. The minimum L2 loss is attained by the optimal h*(s) ∝ ∫_{−∞}^{s} p(t)^{1/3} dt. For a one-dimensional Gaussian with variance Var[s], the right side of Eq.(10) is 6√3 π Var[s]. This preliminary result will be useful for the high-dimensional case discussed in Sections 4 and 5. On the other hand, for the infomax problem we want to maximize I(r, s) because of Eq.(5) and (8). Note that ⟨log det I_F(s)⟩ = 2⟨log h′(s)⟩.
By treating the sigmoidal activation function h(s) as a cumulative probability distribution [10], we have

∫ p(s) log h′(s) ds ≤ ∫ p(s) log p(s) ds   (11)

because the KL divergence D_KL(p∥h′) = ∫ p(s) log p(s) ds − ∫ p(s) log h′(s) ds is non-negative. The optimal solution is h*(s) = ∫_{−∞}^{s} p(t) dt and the optimal value is 2H(p), where H(p) is the differential entropy of the distribution p(s). This h*(s) is exactly the equalization solution: it equalizes the output probability in order to maximize the output entropy. For a one-dimensional Gaussian with variance Var[s], the optimal value is log Var[s] + const.

4 Optimal Diffeomorphic Population

In the case of encoding a high-dimensional random stimulus with a diffeomorphic population code, n neurons encode n stimulus dimensions. The gradient of the k-th neuron's tuning curve is ∇_k = h′_k(w_kᵀ s) w_k, and the Fisher information matrix is thus

I_F(s) = Σ_{k=1}^n ∇_k ∇_kᵀ = Σ_{k=1}^n h′_k(w_kᵀ s)² w_k w_kᵀ = W H² Wᵀ   (12)

where W = (w1, . . . , wn) and H = diag(h′_1(w_1ᵀ s), . . . , h′_n(w_nᵀ s)). Using the fact that tr(AB) = tr(BA) for any matrices A, B, we have tr(I_F(s)^{−1}) = tr((Wᵀ)^{−1} H^{−2} W^{−1}) = tr((WᵀW)^{−1} H^{−2}). Because H^{−2} is diagonal, the L2-min problem simplifies to

minimize over {w_k, h_k(·)}, k = 1 . . . n:
L(W, H) = ⟨tr(I_F(s)^{−1})⟩ = Σ_{k=1}^n [(WᵀW)^{−1}]_{kk} ∫ p(s) / h′_k(w_kᵀ s)² ds   (13)

If we define the marginal distribution

p_k(t) = ∫ p(s) δ(t − w_kᵀ s) ds   (14)

then the optimization over w_k and h_k can be decoupled in the following way. For any fixed W, the integral term can be evaluated by marginalizing out all directions perpendicular to w_k. As discussed in Section 3, the optimal value (∫ p_k(t)^{1/3} dt)³ is attained when h*_k′(t) ∝ p_k(t)^{1/3}. The optimization problem is now

minimize over {w_k}, k = 1 . . . n:
L_{h*}(W) = Σ_{k=1}^n [(WᵀW)^{−1}]_{kk} ( ∫ p_k(t)^{1/3} dt )³   (15)

In general, analytically optimizing such a term for an arbitrary prior distribution p(s) is intractable.
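As a quick numerical check of the one-dimensional results of Section 3 before specializing to Gaussian priors: for a standard Gaussian, the cube-root rule h*′ ∝ p^{1/3} should achieve an L2 loss of 6√3·π·Var[s] and should beat the equalization rule h′ = p on L2 loss. A sketch on a truncated grid (the grid bounds and tolerances are ad hoc):

```python
import numpy as np

sigma = 1.0
s = np.linspace(-12, 12, 200001)
ds = s[1] - s[0]
p = np.exp(-s**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# L2-optimal slope h'(s) ∝ p(s)^(1/3), normalized so that ∫ h' ds = 1
hp_l2 = p ** (1.0 / 3.0)
hp_l2 /= hp_l2.sum() * ds
# Infomax slope h'(s) = p(s) (equalization; already integrates to 1)
hp_info = p

def l2_loss(hp):
    # overall L2 loss ⟨h'(s)^-2⟩ under the prior, cf. Eq. (10)
    return np.sum(p / hp**2) * ds

loss_l2 = l2_loss(hp_l2)
loss_info = l2_loss(hp_info)

assert loss_l2 < loss_info                                 # cube-root rule wins on L2
assert abs(loss_l2 - 6 * np.sqrt(3) * np.pi * sigma**2) < 1e-2
```

The equalization nonlinearity is heavily penalized here because its slope vanishes in the tails, which is exactly the qualitative difference between the two criteria discussed in the text.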
However, if p(s) is multivariate Gaussian then the optimization can be further simplified and solved analytically, as discussed in the following section.

5 Stimulus with Gaussian Prior

We consider the case when the stimulus prior is Gaussian N(0, Σ). This assumption allows us to calculate the marginal distribution along any direction w_k as a one-dimensional Gaussian with mean zero and variance w_kᵀ Σ w_k = (WᵀΣW)_{kk}. By plugging in the Gaussian density p_k(t) and using the result derived in Section 3, we can further simplify the L2-optimization problem as

minimize over {w_k}, k = 1 . . . n:
L_{h*}(W) = 6√3 π · Σ_{k=1}^n [(WᵀW)^{−1}]_{kk} (WᵀΣW)_{kk}   (16)

5.1 Geometric Interpretation

In the above optimization problem, (WᵀΣW)_{kk} has a clear and simple meaning: it is the variance of the marginal distribution p_k(t). For the term [(WᵀW)^{−1}]_{kk}, notice that WᵀW is the inner product (Gram) matrix of the basis {w_k}_{k=1}^n, i.e. (WᵀW)_{ij} = w_iᵀ w_j. Using the adjoint method we can calculate the diagonal elements of (WᵀW)^{−1},

[(WᵀW)^{−1}]_{kk} = det(W_kᵀW_k) / det(WᵀW)   (17)

where W_kᵀW_k is the inner product matrix of the leave-w_k-out basis {w1, . . . , w_{k−1}, w_{k+1}, . . . , w_n}. Let θ_k be the angle between w_k and the hyperplane spanned by all the other basis vectors (see Fig.2). The diagonal element is simply [(WᵀW)^{−1}]_{kk} = (sin θ_k)^{−2}, because the volume of the n-dimensional parallelotope spanned by {w1, . . . , w_n} equals the volume of the (n−1)-dimensional base parallelotope spanned by {w1, . . . , w_{k−1}, w_{k+1}, . . . , w_n} times the height:

Vol({w1, . . . , w_n}) = Vol({w1, . . . , w_{k−1}, w_{k+1}, . . . , w_n}) · |w_k| · sin θ_k   (18)

Figure 2: Illustration of θ_k. In this example, w1 and w2 are on the s1-s2 plane; θ3 is just the angle between w3 and its projection on the s1-s2 plane.

The optimization involves two competing parts. Minimizing (WᵀΣW)_{kk} favors directions with small variance, while minimizing [(WᵀW)^{−1}]_{kk} = (sin θ_k)^{−2} strongly penalizes neurons whose tuning directions are similar to those of the rest of the population.
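The geometric identity [(WᵀW)⁻¹]_kk = (sin θ_k)⁻² is easy to verify numerically. A small sketch with a random unit-norm basis (sizes and seed are arbitrary): the height sin θ_k is obtained as the norm of the residual of w_k after projecting it onto the span of the other basis vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
W = rng.standard_normal((n, n))
W /= np.linalg.norm(W, axis=0)          # unit-norm basis {w_k}

G_inv = np.linalg.inv(W.T @ W)          # inverse Gram matrix

for k in range(n):
    others = np.delete(W, k, axis=1)
    # Project w_k onto the span of the remaining basis vectors
    beta, *_ = np.linalg.lstsq(others, W[:, k], rcond=None)
    resid = W[:, k] - others @ beta
    sin_theta = np.linalg.norm(resid)   # |w_k| = 1, so the height is sin(theta_k)
    assert np.isclose(G_inv[k, k], sin_theta**-2, rtol=1e-6)
```

Note that each diagonal element is at least 1 (since sin θ_k ≤ 1), and it blows up as the basis becomes degenerate, which is exactly the diversity penalty described above.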
To summarize qualitatively, the optimal population tends to encode directions with small variance while maintaining a certain degree of population diversity.

5.2 General Solution

Due to space limitations, we only present the optimal solution here; the derivation can be found in Appendix C of the supplementary notes. For any covariance matrix Σ, the optimal solution for Eq.(16) is W* = Σ^{−1/4} U, where

UᵀU = I and (Uᵀ Σ^{1/2} U)_{kk} = (1/n) tr(Σ^{1/2}) for all k = 1, . . . , n   (19)

Such a unitary matrix U is guaranteed to exist, yet it may not be unique (see Appendix D for a detailed discussion). In general, for dimension n, the solution set has a manifold structure of dimension not less than (n−1)(n−2)/2. For n = 2 the solution can be derived easily. Let Σ = diag(σ_x², σ_y²). Then the optimal solution is given by

U = (1/√2) [[1, −1], [1, 1]],   W*_{L2} = Σ^{−1/4} U = (1/√2) [[1/√σ_x, −1/√σ_x], [1/√σ_y, 1/√σ_y]]   (20)

This 2D solution is special: it is unique up to reflection and permutation unless the prior distribution is spherically symmetric, i.e. Σ = aI.

6 Comparison with Infomax Solution

Previous studies have focused on finding solutions that maximize the mutual information (infomax) between the stimulus and the neural population response. This is related to independent component analysis (ICA) [24]. Mutual information is maximized if and only if each neuron encodes an independent component of the stimulus and uses the proper nonlinear tuning curve. Ideally, the joint distribution p(s) can then be decomposed as the product of n one-dimensional components ∏_{k=1}^n p_k(W_k(s)). For a Gaussian prior with covariance Σ, the infomax solution is

W*_info = Σ^{−1/2} U  ⇒  cov(W*_infoᵀ s) = Uᵀ Σ^{−1/2} · Σ · Σ^{−1/2} U = I   (21)

where Σ^{−1/2} is the whitening matrix and U is an arbitrary unitary matrix. The derivation can be found in Appendix E.
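The 2D solution of Eq. (20) can be checked directly against condition (19); a minimal sketch (the standard deviations σ_x, σ_y are chosen arbitrarily):

```python
import numpy as np

sx, sy = 2.0, 1.0                          # standard deviations (arbitrary)
U = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2.0)
W_l2 = np.diag([sx**-0.5, sy**-0.5]) @ U   # Sigma^(-1/4) U, Eq. (20)

# Condition (19): U is unitary and diag(U^T Sigma^(1/2) U) is constant
Sigma_half = np.diag([sx, sy])             # Sigma^(1/2) for Sigma = diag(sx^2, sy^2)
assert np.allclose(U.T @ U, np.eye(2))
d = np.diag(U.T @ Sigma_half @ U)
assert np.allclose(d, np.trace(Sigma_half) / 2)
```

The 45-degree rotation in U is what balances the two variances across the diagonal, which is why this particular U satisfies (19) while, say, U = I does not when σ_x ≠ σ_y.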
In the same 2D example where Σ = diag(σ_x², σ_y²), the family of optimal solutions is parametrized by an angular variable φ:

U(φ) = [[cos φ, −sin φ], [sin φ, cos φ]],   W*_info(φ) = Σ^{−1/2} U(φ) = [[cos φ/σ_x, −sin φ/σ_x], [sin φ/σ_y, cos φ/σ_y]]   (22)

In Fig.3 we compare W*_info(φ) and W*_{L2} for different prior covariances. One observation is that L2-optimal neurons do not fully decorrelate the input signals unless the Gaussian prior is spherical. By partially correlating the input signals and encoding redundant information, the channel signal-to-noise ratio (SNR) can be balanced to reduce the vulnerability of those independent channels with low SNR. As a consequence, the overall L2 performance is improved at the cost of transferring a suboptimal amount of information. Another important observation is that the infomax solution allows a greater degree of symmetry: Eq.(21) holds for arbitrary unitary matrices, while Eq.(19) holds only for a subset of them.

Figure 3: Comparison of the L2-min and infomax optimal solutions for the 2D case; each row shows the result for a different prior ratio σ_x/σ_y = 1, 2, 3. (a) The optimal pair of basis vectors w1, w2 for L2-min with the prior covariance ellipse; the solution is unique unless the prior distribution has rotational symmetry. (b) The loss function with "+" marking the optimal solution shown in (a). (c) One pair of optimal basis vectors w1, w2 for infomax with the prior covariance ellipse. (d) The loss function with "+" marking the optimal solution shown in (c).
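The contrast in Fig. 3 can be reproduced numerically: every member of the infomax family (22) fully whitens the projections, while the L2-optimal filter of Eq. (20) leaves residual correlations. A small sketch (σ_x = 3, σ_y = 1 assumed for illustration):

```python
import numpy as np

sx, sy = 3.0, 1.0
Sigma = np.diag([sx**2, sy**2])

def W_info(phi):
    U = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    return np.diag([1.0/sx, 1.0/sy]) @ U        # Sigma^(-1/2) U(phi), Eq. (22)

for phi in (0.0, 0.4, 1.1):
    C = W_info(phi).T @ Sigma @ W_info(phi)     # covariance of projections W^T s
    assert np.allclose(C, np.eye(2))            # fully decorrelated for any phi

# The L2-optimal filter only half-whitens: cov = U^T Sigma^(1/2) U != I
U = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)
W_l2 = np.diag([sx**-0.5, sy**-0.5]) @ U
C_l2 = W_l2.T @ Sigma @ W_l2
assert not np.allclose(C_l2, np.eye(2))
assert abs(C_l2[0, 1]) > 0.5                    # residual cross-correlation
```

The residual off-diagonal term is exactly the "encoded redundancy" the text describes: it balances the channel SNRs at the price of transmitting less information.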
7 Application – 16-by-16 Gaussian Images

In this section we apply our diffeomorphic coding scheme to an image representation problem. We assume that the intensity values of all pixels in a set of 16-by-16 images follow a 256-D Gaussian distribution. Instead of directly defining the pairwise covariance between pixels of s, we work with its real Fourier components ŝ:

ŝ = Fᵀ s  ⇔  s = F ŝ   (23)

where the real Fourier matrix is F = (f1, . . . , f256), with each filter f_a having spatial frequency k⃗_a. The covariance of the Fourier components ŝ is typically assumed to be diagonal, with power decaying according to a power law:

cov(ŝ) = D = diag(σ_1², . . . , σ_n²), where σ_a² ∝ |k⃗_a|^{−β}, β > 0   (24)

Therefore the original stimulus s has covariance cov(s) = Σ = F D Fᵀ. Such image statistics are called stationary because the covariance between a pair of pixels is fully determined by their relative position. For the stimulus s with covariance Σ, one naive choice of L2-optimal filter is simply

W*_{L2} = Σ^{−1/4} · I = F D^{−1/4} Fᵀ   (25)

because Σ^{1/2} = F D^{1/2} Fᵀ has constant diagonal terms (see Appendix F for the detailed calculation), so U = I satisfies Eq.(19). The covariance matrix and one sample image generated from Σ are plotted in Fig. 4(a)-(c) below.

Figure 4: For β = 2.5 in the power law: (a) The 256 × 256 covariance matrix Σ. (b) One column of Σ reshaped to a 16 × 16 matrix, representing the covariance between all pixels and a fixed pixel in the center. (c) A random sample from the Gaussian distribution with covariance Σ.

In addition, we have numerically computed the L2 loss using a family of filters

W_γ = F D^{−γ} Fᵀ, γ ∈ [0, 1/2]   (26)

Note that when γ = 0 we obtain the naive filter W_0 = F Fᵀ = I, which does nothing to the input stimulus; when γ = 1/4 or 1/2, we recover the L2-optimal filter or the infomax filter, respectively. As we can see from Fig.
5(a)-(d), the L2-optimal filter half-decorrelates the input stimulus channels, striking a balance between the simplicity of the filters and the simplicity of the correlation structure. In each simulation run, a set of 10,000 16-by-16 images is randomly sampled from the multivariate Gaussian distribution with zero mean and covariance matrix Σ. For each stimulus image s, we calculate y = W_γᵀ s and z_k = h_k(y_k) + η_k to simulate the encoding process. Here h_k(y) ∝ ∫_{−∞}^{y} p_k(t)^{1/3} dt and p_k(t) is Gaussian N(0, (W_γᵀ Σ W_γ)_{kk}). The additive noise η_k is independent Gaussian N(0, 10^{−4}). To decode, we calculate ŷ_k = h_k^{−1}(z_k) and ŝ = (W_γᵀ)^{−1} ŷ, and then measure the L2 loss ∥ŝ − s∥². This procedure is repeated 20 times and the results are plotted in Fig. 5(e).

Figure 5: (a) The 2D filter W_γ of one specific neuron for γ = 0, 1/4, 1/2, from top to bottom. (b) The cross-section of the filter W_γ on one specific row boxed in (a), plotted as a function. (c) The correlation of the 2D filtered stimulus between one specific neuron and all neurons. (d) The cross-section of the 2D correlation of the filtered stimulus between the neuron and other neurons on the same row. (e) The simulated L2 loss for different filters W_γ with the optimal nonlinearity h; the vertical bars show the ±3σ interval across trials.

8 Discussion and Conclusions

In this paper, we have studied an optimal diffeomorphic neural population code that minimizes the L2 reconstruction error. The population of neurons is assumed to have sigmoidal activation functions encoding linear combinations of a high-dimensional stimulus with a multivariate Gaussian distribution.
The optimal solution is provided and compared with solutions that maximize the mutual information. To derive the optimal solution, we first show that the Poisson noise model is equivalent to the constant Gaussian noise model under a variance-stabilizing transformation. We then relate the L2 reconstruction error to the trace of the inverse Fisher information matrix via the Cramer-Rao bound. Minimizing this bound leads to the globally optimal solution in the asymptotic limit of long integration time. The general L2-minimization problem can be simplified, and the optimal solution can be derived analytically when the stimulus distribution is Gaussian. Compared to the infomax solutions, L2 minimization requires a careful evaluation of the Fisher information matrix. The manifold of L2-optimal solutions possesses a lower-dimensional structure than that of the infomax solutions. Instead of decorrelating the input statistics, the L2-min solution maintains a certain degree of correlation across the channels. Our result suggests that maximizing mutual information and minimizing the overall decoding loss are not the same in general: encoding redundant information can be beneficial for reconstruction accuracy. This principle may explain the existence of correlations at many layers of biological perceptual systems. As an example, we have applied our theory to 16-by-16 images with stationary pixel statistics. The optimal solution exhibits center-surround receptive fields, but with a decay differing from those found by decorrelating solutions. We speculate that these solutions may better explain observed correlations measured in certain neural areas of the brain. Finally, we acknowledge the support of the Office of Naval Research.

References

[1] K Kang, RM Shapley, and H Sompolinsky. Information tuning of populations of neurons in primary visual cortex. Journal of Neuroscience, 24(15):3726–3735, 2004.
[2] AP Georgopoulos, AB Schwartz, and RE Kettner.
Neuronal population coding of movement direction. Science, 233:1416–1419, 1986.
[3] FE Theunissen and JP Miller. Representation of sensory information in the cricket cercal sensory system. II. Information theoretic calculation of system accuracy and optimal tuning-curve widths of four primary interneurons. J Neurophysiol, 66(5):1690–1703, 1991.
[4] DC Fitzpatrick, R Batra, TR Stanford, and S Kuwada. A neuronal population code for sound localization. Nature, 388:871–874, 1997.
[5] NS Harper and D McAlpine. Optimal neural population coding of an auditory spatial cue. Nature, 430:682–686, 2004.
[6] N Brenner, W Bialek, and R de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26:695–702, 2000.
[7] T von der Twer and DIA MacLeod. Optimal nonlinear codes for the perception of natural colours. Network: Computation in Neural Systems, 12(3):395–407, 2001.
[8] I Dean, NS Harper, and D McAlpine. Neural population coding of sound level adapts to stimulus statistics. Nature Neuroscience, 8:1684–1689, 2005.
[9] Y Ozuysal and SA Baccus. Linking the computational structure of variance adaptation to biophysical mechanisms. Neuron, 73:1002–1015, 2012.
[10] SB Laughlin. A simple coding procedure enhances a neuron's information capacity. Z. Naturforschung, 36c(3):910–912, 1981.
[11] J-P Nadal and N Parga. Non-linear neurons in the low noise limit: a factorial code maximizes information transfer, 1994.
[12] M Bethge, D Rotermund, and K Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Computation, 14:2317–2351, 2002.
[13] M Bethge, D Rotermund, and K Pawelzik. Optimal neural rate coding leads to bimodal firing rate distributions. Netw. Comput. Neural Syst., 14:303–319, 2003.
[14] MD McDonnell and NG Stocks. Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons and populations. Phys. Rev. Lett., 101:058103, 2008.
[15] Z Wang, A Stocker, and DD Lee. Optimal neural tuning curves for arbitrary stimulus distributions: discrimax, infomax and minimum Lp loss. Adv. Neural Information Processing Systems, 25:2177–2185, 2012.
[16] N Brunel and J-P Nadal. Mutual information, Fisher information and population coding. Neural Computation, 10(7):1731–1757, 1998.
[17] K Zhang and TJ Sejnowski. Neuronal tuning: to sharpen or broaden? Neural Computation, 11:75–84, 1999.
[18] A Pouget, S Deneve, J-C Ducom, and PE Latham. Narrow versus wide tuning curves: what's best for a population code? Neural Computation, 11:85–90, 1999.
[19] H Sompolinsky and H Yoon. The effect of correlations on the Fisher information of population codes. Advances in Neural Information Processing Systems, 11, 1999.
[20] AP Nikitin, NG Stocks, RP Morse, and MD McDonnell. Neural population coding is optimized by discrete tuning curves. Phys. Rev. Lett., 103:138101, 2009.
[21] D Ganguli and EP Simoncelli. Implicit encoding of prior probabilities in optimal neural populations. Adv. Neural Information Processing Systems, 23:658–666, 2010.
[22] S Yaeli and R Meir. Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons. Front Comput Neurosci, 4, 2010.
[23] E Doi and MS Lewicki. Characterization of minimum error linear coding with sensory and neural noise. Neural Computation, 23, 2011.
[24] AJ Bell and TJ Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129–1159, 1995.
[25] BA Olshausen and DJ Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[26] A Hyvarinen and E Oja. Independent component analysis: algorithms and applications. Neural Networks, 13:411–430, 2000.
[27] P Berens, A Ecker, S Gerwinn, AS Tolias, and M Bethge. Reassessing optimal neural population codes with neurometric functions.
Proceedings of the National Academy of Sciences, 108(11):4423–4428, 2011.
[28] TM Cover and J Thomas. Elements of Information Theory. Wiley, 1991.
[29] EL Lehmann and G Casella. Theory of Point Estimation. New York: Springer-Verlag, 1999.
[30] GH Hardy, JE Littlewood, and G Polya. Inequalities, 2nd ed. Cambridge University Press, 1988.
Year: 2013
Near-Optimal Entrywise Sampling for Data Matrices

Dimitris Achlioptas, UC Santa Cruz, optas@cs.ucsc.edu
Zohar Karnin, Yahoo Labs, zkarnin@ymail.com
Edo Liberty, Yahoo Labs, edo.liberty@ymail.com

Abstract

We consider the problem of selecting non-zero entries of a matrix A in order to produce a sparse sketch of it, B, that minimizes ‖A − B‖₂. For large m × n matrices, such that n ≫ m (for example, representing n observations over m attributes), we give sampling distributions that exhibit four important properties. First, they have closed forms computable from minimal information regarding A. Second, they allow sketching of matrices whose non-zeros are presented to the algorithm in arbitrary order as a stream, with O(1) computation per non-zero. Third, the resulting sketch matrices are not only sparse, but their non-zero entries are highly compressible. Lastly, and most importantly, under mild assumptions, our distributions are provably competitive with the optimal offline distribution. Note that the probabilities in the optimal offline distribution may be complex functions of all the entries in the matrix. Therefore, regardless of computational complexity, the optimal distribution might be impossible to compute in the streaming model.

1 Introduction

Given an m × n matrix A, it is often desirable to find a sparser matrix B that is a good proxy for A. Besides being a natural mathematical question, such sparsification has become a ubiquitous preprocessing step in a number of data analysis operations, including approximate eigenvector computations [AM01, AHK06, AM07], semi-definite programming [AHK05, d'A08], and matrix completion problems [CR09, CT10]. A fruitful measure for the approximation of A by B is the spectral norm of A − B, where for any matrix C the spectral norm is defined as ‖C‖₂ = max_{‖x‖₂=1} ‖Cx‖₂.
Randomization has been central in the context of matrix approximations, and the overall problem is typically cast as follows: given a matrix A and a budget s, devise a distribution over matrices B such that the (expected) number of non-zero entries in B is at most s and ‖A − B‖₂ is as small as possible. Our work is motivated by big data matrices that are generated by measurement processes. Each of the n matrix columns corresponds to an observation of m attributes; thus, we expect n ≫ m. We also expect the total number of non-zero entries in A to exceed available memory. We assume that the original data matrix A is accessed in the streaming model, where we know only very basic features of A a priori and the actual non-zero entries are presented to us one at a time in an arbitrary order. The streaming model is especially important for tasks like recommendation engines, where user-item preferences become available one by one in an arbitrary order. But it is also important in cases where A exists in durable storage and random access to its entries is prohibitively expensive. We establish that for such matrices the following approach gives provably near-optimal sparsification. Assign to each element A_ij of the matrix a weight that depends only on the elements in its row, q_ij = |A_ij| / ‖A_i‖₁. Take ρ to be an (appropriate) distribution over the rows. Sample s i.i.d. locations from A using the distribution p_ij = ρ_i q_ij. Return B, the mean of s matrices, each containing a single non-zero entry A_ij / p_ij in the corresponding selected location (i, j). As we will see, this simple form of the probabilities p_ij falls out naturally from generic optimization considerations. The fact that each entry is kept with probability proportional to its magnitude, besides being interesting in its own right, has a remarkably practical implication. Every non-zero in the i-th row of B will take the form k_ij ‖A_i‖₁ / (s ρ_i), up to sign, where k_ij is the number of times location (i, j) of A was selected.
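The sampling procedure above can be sketched in a few lines of NumPy. This is an illustrative dense-matrix implementation, not the paper's streaming algorithm, and the exact choice of ρ is the subject of later sections; here ρ defaults to plain row-L1 weights as an assumption:

```python
import numpy as np

def sparse_sketch(A, s, rho=None, seed=0):
    """Sample s i.i.d. locations from p_ij = rho_i * |A_ij| / ||A_i||_1
    and return the mean of the s unbiased single-entry estimates A_ij / p_ij."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_l1 = np.abs(A).sum(axis=1)
    if rho is None:                      # assumed default, not the paper's optimum
        rho = row_l1 / row_l1.sum()
    P = rho[:, None] * np.abs(A) / row_l1[:, None]   # p_ij; entries sum to 1
    idx = rng.choice(m * n, size=s, p=P.ravel())
    B = np.zeros_like(A, dtype=float)
    for t in idx:                        # B = (1/s) * sum of single-entry matrices
        i, j = divmod(t, n)
        B[i, j] += A[i, j] / (s * P[i, j])
    return B

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 100))
B = sparse_sketch(A, s=500)
err = np.linalg.norm(A - B, 2)           # spectral-norm error of the sketch
assert np.count_nonzero(B) <= 500
# Each kept entry is k_ij * ||A_i||_1 / (s * rho_i), up to sign:
nz = B != 0
assert np.all(np.sign(B[nz]) == np.sign(A[nz]))
```

Because each contribution is A_ij / (s p_ij) with probability p_ij, the sketch is unbiased, E[B] = A, and every non-zero in row i indeed has the single magnitude k_ij ‖A_i‖₁ / (s ρ_i), which is what makes the sketch so compressible.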
Note that since we sample with replacement, k_ij may be more than 1, but typically k_ij ∈ {0, 1}. The result is a matrix B representable in O(m log n + s log(nm/s)) bits. This is because there is no reason to store floating-point matrix entry values: we use O(m log n) bits to store the m values ‖A_i‖₁ / (s ρ_i) and O(s log(nm/s)) bits to store the non-zero index offsets. Note that k_ij ≤ s and that some of the offsets may be zero. In a simple experiment we measured the average number of bits per sample resulting from this approach (total size of the sketch divided by the number of samples s). The results were between 5 and 22 bits per sample, depending on the matrix and s. It is important to note that the number of bits per sample was usually less than even log₂ n + log₂ m, the minimal number of bits required to represent a pair (i, j). Our experiments show a reduction in disk space by a factor of between 2 and 5 relative to the compressed size of the file representing the sample matrix B in the standard row-column-value list format. Another insight of our work is that the distributions we propose are combinations of two L1-based distributions, and which distribution dominates depends on the sampling budget. When the number of samples s is small, ρ_i is nearly linear in ‖A_i‖₁, resulting in p_ij ∝ |A_ij|. However, as the number of samples grows, ρ_i tends towards ‖A_i‖₁², resulting in p_ij ∝ |A_ij| ‖A_i‖₁, a distribution we refer to as Row-L1 sampling. The dependence of the preferred distribution on the sample budget is also borne out in experiments, with sampling based on appropriately mixed distributions being consistently best. This highlights that the need to adapt the sampling distribution to the sample budget is a genuine phenomenon.

2 Measure of Error and Related Work

We measure the difference between A and B with respect to the L2 (spectral) norm, as it is highly revealing in the context of data analysis.
Let us define a linear trend in the data of A as any tendency of the rows to align with a particular unit vector x. To examine the presence of such a trend, we need only multiply A with x: the i-th coordinate of Ax is the projection of the i-th row of A onto x. Thus, ‖Ax‖₂ measures the strength of linear trend x in A, and ‖A‖₂ measures the strongest linear trend in A. Minimizing ‖A − B‖₂ therefore minimizes the strength of the strongest linear trend of A not captured by B. In contrast, measuring the difference using an entry-wise norm, e.g., the Frobenius norm, can be completely uninformative. This is because the best strategy would be to always pick the s largest matrix entries of A, a strategy that can easily be "fooled". As a stark example, when the matrix entries are A_ij ∈ {0, 1}, the quality of approximation of A by B is completely independent of which elements of A we keep. This is clearly bad; as long as A contains even a modicum of structure, certain approximations will be far better than others. By using the spectral norm to measure error we get a natural and sophisticated target: to minimize ‖A − B‖₂ is to make E = A − B a near-rotation, having only small variations in the amount by which it stretches different vectors. This idea that the error matrix E should be isotropic, thus packing as much Frobenius norm as possible for its L2 norm, motivated the first work on element-wise matrix sampling by Achlioptas and McSherry [AM07]. Concretely, to minimize ‖E‖₂ it is natural to aim for a matrix E that is zero-mean, i.e., such that B is an unbiased estimator of A, and whose entries are formed by sampling the entries of A (and, thus, of E) independently. In the work of [AM07], E is a matrix of i.i.d. zero-mean random variables. The study of the spectral characteristics of such matrices goes back all the way to Wigner's famous semi-circle law [Wig58].
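The 0/1 example can be made concrete: for such matrices the Frobenius error depends only on how many entries were dropped, while the spectral norm still distinguishes good samples from bad ones. A toy sketch (the matrices are chosen purely for illustration):

```python
import numpy as np

A = np.ones((10, 10))                  # a 0/1 matrix with maximal structure (rank 1)

B_col = A.copy(); B_col[:, 1:] = 0     # keep one full column (10 entries)
B_diag = np.diag(np.diag(A))           # keep the diagonal (10 entries)

# Frobenius error only counts dropped entries: identical for both samples
assert np.isclose(np.linalg.norm(A - B_col, 'fro'),
                  np.linalg.norm(A - B_diag, 'fro'))

# Spectral error distinguishes them: the diagonal sample spreads its budget
# across all rows and columns, better preserving the dominant linear trend
err_col = np.linalg.norm(A - B_col, 2)
err_diag = np.linalg.norm(A - B_diag, 2)
assert err_diag < err_col
```

Both samples keep 10 entries, so any entry-wise measure rates them identically; only the spectral measure notices that one of them captures the rank-1 structure better.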
Specifically, to bound ‖E‖₂ in [AM07], a bound due to Alon, Krivelevich and Vu [AKV02] was used, a refinement of a bound by Juhász [Juh81] and Füredi and Komlós [FK81]. The most salient feature of that bound is that it depends on the maximum entry-wise variance σ² of A − B, and therefore the distribution optimizing the bound is the one in which the variance of all entries in E is the same. In turn, this means keeping each entry of A independently with probability p_ij ∝ A_ij² (up to a small wrinkle discussed below). Several papers have since analyzed L2-sampling and variants [NDT09, NDT10, DZ11, GT09, AM07]. An inherent difficulty of L2-sampling based strategies is the need for special handling of small entries. This is because when each item A_ij is kept with probability p_ij ∝ A_ij², the resulting entry B_ij in the sample matrix has magnitude A_ij/p_ij ∝ 1/A_ij. Thus, if an extremely small element A_ij is accidentally picked, the largest entry of the sample matrix "blows up". In [AM07] this was addressed by sampling small entries with probability proportional to |A_ij| rather than A_ij². In the work of Gittens and Tropp [GT09], small entries are not handled separately and the bound derived depends on the ratio between the largest and the smallest non-zero magnitude.

[¹ It is harmless to assume any value in the matrix is kept using O(log n) bits of precision. Otherwise, truncating the trailing bits can be shown to be negligible.]

Random matrix theory has witnessed dramatic progress in the last few years, and [AW02, RV07, Tro12a, Rec11] provide a good overview of the results. This progress motivated Drineas and Zouzias in [DZ11] to revisit L2-sampling using concentration results for sums of random matrices [Rec11], as we do here. This is somewhat different from the original setting of [AM07], since now B is not a random matrix with independent entries, but a sum of many single-element independent matrices, each such matrix resulting from choosing a location of A with replacement.
Their work improved upon all previous L2-based sampling results and also upon the L1-sampling result of Arora, Hazan and Kale [AHK06], discussed below, while admitting a remarkably compact proof. The issue of small entries was handled in [DZ11] by deterministically discarding all sufficiently small entries, a strategy that gives a strong mathematical guarantee (but see the discussion regarding deterministic truncation in the experimental section). A completely different tack at the problem, avoiding random matrix theory altogether, was taken by Arora et al. [AHK06]. Their approximation keeps the largest entries in A deterministically (specifically all entries with |A_ij| ≥ ε/n, where the threshold ε needs to be known a priori) and randomly rounds the remaining smaller entries to sign(A_ij)·ε/n or 0. They exploit the simple fact ‖A − B‖ = sup_{‖x‖=1, ‖y‖=1} xᵀ(A − B)y by noting that, as a scalar quantity, its concentration around its expectation can be established by standard Bernstein-Bennett type inequalities. A union bound then allows them to prove that with high probability, |xᵀ(A − B)y| ≤ ε for every x and y. The result of [AHK06] admits a relatively simple proof. However, it also requires a truncation that depends on the desired approximation ε. Rather interestingly, this time the truncation amounts to keeping every entry larger than some threshold.

3 Our Approach

Following the discussion in Section 2 and in line with previous works, we: (i) measure the quality of B by ‖A − B‖₂, (ii) sample the entries of A independently, and (iii) require B to be an unbiased estimator of A. We are therefore left with the task of determining a good probability distribution p_ij from which to sample the entries of A in order to get B. As discussed in Section 2, prior art makes heavy use of beautiful results in the theory of random matrices. Specifically, each work proposes a specific sampling distribution and then uses results from random matrix theory to demonstrate that it has good properties.
In this work we reverse the approach, aiming for its logical conclusion. We start from a cornerstone result in random matrix theory and work backwards to reverse-engineer near-optimal distributions with respect to the notion of probabilistic deviations captured by the inequality. The inequality we use is the Matrix-Bernstein inequality for sums of independent random matrices (see, e.g., [Tro12b], Theorem 1.6). In the following, we often write ‖A‖ for ‖A‖₂ to lighten notation.

Theorem 3.1 (Matrix Bernstein inequality). Consider a finite sequence {X_i} of i.i.d. random m×n matrices, where E[X₁] = 0 and ‖X₁‖ ≤ R. Let σ² = max{ ‖E[X₁X₁ᵀ]‖, ‖E[X₁ᵀX₁]‖ }. For some fixed s ≥ 1, let X = (X₁ + ... + X_s)/s. For all ε ≥ 0,

  Pr[ ‖X‖ ≥ ε ] ≤ (m + n) · exp( −s·ε² / (σ² + R·ε/3) ).

To get a feeling for our approach, fix any probability distribution p over the non-zero elements of A. Let B be a random m×n matrix with exactly one non-zero element, formed by sampling an element A_ij of A according to p and letting B_ij = A_ij/p_ij. Observe that for every (i, j), regardless of the choice of p, we have E[B_ij] = A_ij, and thus B is always an unbiased estimator of A. Clearly, the same is true if we repeat this s times, taking i.i.d. samples B₁, ..., B_s, and let our matrix B be their average. With this approach in mind, the goal is now to find a distribution p minimizing E‖A − (B₁ + ... + B_s)/s‖. Writing sE = (A − B₁) + ... + (A − B_s), we see that s‖E‖ is the operator norm of a sum of i.i.d. zero-mean random matrices X_i = A − B_i, i.e., exactly the setting of Theorem 3.1. The relevant parameters are

  σ² = max{ ‖E[(A − B₁)(A − B₁)ᵀ]‖, ‖E[(A − B₁)ᵀ(A − B₁)]‖ }   (1)
  R = max ‖A − B₁‖ over all possible realizations of B₁.   (2)

Equations (1) and (2) mark the starting point of our work. Our goal is to find probability distributions over the elements of A that optimize (1) and (2) simultaneously with respect to their functional form in Theorem 3.1, thus yielding the strongest possible bound on ‖A − B‖.
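The single-entry construction above is easy to verify empirically. The following is a minimal numpy sketch (not the paper's streaming implementation) that draws s entries with replacement from an arbitrary distribution p, averages the single-entry unbiased matrices B_ℓ, and checks that the sketch concentrates around A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
p = np.abs(A) / np.abs(A).sum()        # any fixed distribution on the entries

def sketch(A, p, s, rng):
    m, n = A.shape
    # sample s entry locations with replacement according to p
    flat = rng.choice(m * n, size=s, replace=True, p=p.ravel())
    counts = np.bincount(flat, minlength=m * n).reshape(m, n)
    # B = (1/s) * sum_l B_l, where B_l(i, j) = A_ij / p_ij at the sampled location
    return counts * A / (p * s)

# Unbiasedness: with a large budget the sketch concentrates around A.
B = sketch(A, p, 200_000, rng)
assert np.linalg.norm(A - B, 2) < 0.2 * np.linalg.norm(A, 2)
```

The same routine works for any of the distributions discussed in this paper; only the array `p` changes.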
A conceptual contribution of our work is the discovery that good distributions depend on the sample budget s, a fact also borne out in experiments. The fact that minimizing the deviation metric of Theorem 3.1, i.e., σ² + R·ε/3, suffices to bring out this dependence can be viewed as testament to the theorem's sharpness. Theorem 3.1 is stated as a bound on the probability that the norm of the error matrix is greater than some target error ε, given the number of samples s. In practice, the target error ε is typically not known in advance, but rather is the quantity to minimize, given the matrix A, the number of samples s, and the target confidence δ. Specifically, for any given distribution p on the elements of A, define

  ε₁(p) = inf{ ε : (m + n) · exp( −s·ε² / (σ(p)² + R(p)·ε/3) ) ≤ δ }.   (3)

Our goal in the rest of the paper is to seek the distribution p* minimizing ε₁. Our result is an easily computable distribution p which comes within a factor of 3 of ε₁(p*) and, as a result, within a factor of 9 in terms of sample complexity (in practice we expect this factor to be even smaller, as the factor of 3 comes from consolidating bounds for a number of different worst-case matrices). To put this in perspective, note that the definition of p* does not place any restriction either on the access model for A while computing p*, or on the amount of time needed to compute p*. In other words, we are competing against an oracle which, in order to determine p*, has all of A in its purview at once and can spend an unbounded amount of computation to determine it. In contrast, the only global information regarding A we require is the ratios between the L1 norms of the rows of the matrix. Trivially, the exact L1 norms of the rows (and therefore their ratios) can be computed in a single pass over the matrix, yielding a 2-pass algorithm. Slightly less trivially, standard concentration arguments imply that these ratios can be estimated very well by sampling only a small number of columns.
In the setting of data analysis, though, it is in fact reasonable to expect that good estimates of these ratios are available a priori. This is because different rows correspond to different attributes, and the ratios between the row norms reflect the ratios between the average absolute values of the features. For example, if the matrix corresponds to text documents, knowing the ratios amounts to knowing global word frequencies. Moreover, these ratios do not need to be known exactly to apply the algorithm, as even rough estimates of them give highly competitive results. Indeed, even disregarding this issue completely and simply assuming that all ratios equal 1 yields an algorithm that appears quite competitive in practice, as demonstrated by our experiments.

4 Data Matrices and Statement of Results

Throughout, A_{i·} and A_{·j} will denote the i-th row and j-th column of A, respectively. Also, we use the notation ‖A‖₁ = Σ_{i,j} |A_ij| and ‖A‖_F² = Σ_{i,j} A_ij². Before we formally state our result we introduce a definition that expresses the class of matrices for which our results hold.

Definition 4.1. An m×n matrix A is a Data matrix if:
1. min_i ‖A_{i·}‖₁ ≥ max_j ‖A_{·j}‖₁.
2. ‖A‖₁² / ‖A‖₂² ≥ 30m.
3. m ≥ 30.

Regarding Condition 1, recall that we think of A as being generated by a measurement process of a fixed number of attributes (rows), each column corresponding to an observation. As a result, columns have bounded L1 norm, i.e., ‖A_{·j}‖₁ ≤ constant. While this constant may depend on the type of object and its dimensionality, it is independent of the number of objects. On the other hand, ‖A_{i·}‖₁ grows linearly with the number of columns (objects). As a result, we can expect Definition 4.1 to hold for all large enough data sets. Regarding Condition 2, it is easy to verify that unless the values of the entries of A exhibit unbounded variance as n grows, the ratio ‖A‖₁²/‖A‖₂² grows as Ω(n), and Condition 2 follows from n ≫ m. Condition 3 is trivial.
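The three conditions are cheap to check directly. A minimal numpy sketch follows; note that reading Condition 2's denominator as the squared spectral norm ‖A‖₂² is our reconstruction of the garbled source, so treat that line as an assumption:

```python
import numpy as np

def is_data_matrix(A):
    """Check the three conditions of Definition 4.1 (Condition 2's norm
    choice is an assumption; see the surrounding text)."""
    row_l1 = np.abs(A).sum(axis=1)            # ||A_i.||_1 for each row
    col_l1 = np.abs(A).sum(axis=0)            # ||A_.j||_1 for each column
    l1 = np.abs(A).sum()                      # ||A||_1 (entry-wise)
    spec = np.linalg.norm(A, 2)               # ||A||_2 (spectral)
    m = A.shape[0]
    c1 = row_l1.min() >= col_l1.max()         # Condition 1
    c2 = l1 ** 2 / spec ** 2 >= 30 * m        # Condition 2
    c3 = m >= 30                              # Condition 3
    return bool(c1 and c2 and c3)
```

For instance, a wide all-ones 30×2000 matrix passes all three conditions, while a small 5×5 matrix fails Condition 3.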
All in all, out of the three conditions the essential one is Condition 1. The other two are merely technical and hold in all non-trivial cases where Condition 1 applies. One last point is that to apply Theorem 3.1, the entries of A must be sampled with replacement. A simple way to achieve this in the streaming model was presented in [DKM06]; it uses O(s) operations per matrix element and O(s) active memory. In Section D (see supplementary material) we discuss how to implement sampling with replacement far more efficiently, using O(log s) active memory, Õ(s) space, and O(1) operations per element. To simplify the exposition of our algorithm, below we describe it in the non-streaming setting. That is, we assume we know m and n, that we can compute the numbers z_i = ‖A_{i·}‖₁, and that we can repeatedly sample entries from the matrix. We stress, however, that these conditions are not required and that the algorithm can be implemented efficiently in the streaming model, as discussed in Section D.

Algorithm 1 Construct a sketch B of a data matrix A
1: Input: Data matrix A ∈ R^{m×n}, sampling budget s, acceptable failure probability δ
2: Set ρ ← ComputeRowDistribution(A, s, δ)
3: Sample s elements of A with replacement, each A_ij having probability p_ij = ρ_i·|A_ij|/‖A_{i·}‖₁
4: For each sample ℓ = (i, j, A_ij), let B_ℓ be the matrix with B_ℓ(i, j) = A_ij/p_ij and zeros elsewhere
5: Output: B = (1/s)·Σ_{ℓ=1}^{s} B_ℓ

6: function ComputeRowDistribution(A, s, δ)
7:   Obtain z such that z_i = ‖A_{i·}‖₁ for i ∈ [m]
8:   Set α = log((m + n)/δ)/s and β = log((m + n)/δ)/(3s)
9:   Define ρ_i(ζ) = ( √α·z_i/(2ζ) + √( α·z_i²/(4ζ²) + β·z_i/ζ ) )²
10:  Find ζ₁ such that Σ_{i=1}^{m} ρ_i(ζ₁) = 1
11:  return ρ such that ρ_i = ρ_i(ζ₁) for i ∈ [m]

Steps 6–11 compute a distribution ρ over the rows. Assuming Step 7 can be implemented efficiently (or skipped altogether in the case the z_i are known a priori), we see that the running time of ComputeRowDistribution is independent of n. Specifically, finding ζ₁ in Step 10 can be done efficiently by binary search, because the function Σ_i ρ_i(ζ) is strictly decreasing in ζ.
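A minimal numpy sketch of Steps 6–11 follows. The closed form of ρ_i(ζ) and the definitions of α, β are reconstructed from the garbled source (the radicals do not survive PDF extraction), so treat the exact formula as an assumption; the binary-search structure is exactly as described:

```python
import numpy as np

def compute_row_distribution(z, s, delta, m, n):
    """Steps 6-11 of Algorithm 1; z[i] = ||A_i.||_1.
    alpha, beta and the rho formula are reconstructions (assumptions)."""
    t = np.log((m + n) / delta)
    alpha, beta = t / s, t / (3.0 * s)

    def rho(zeta):
        x = np.sqrt(alpha) * z / (2.0 * zeta)
        return (x + np.sqrt(x * x + beta * z / zeta)) ** 2

    # sum_i rho_i(zeta) is strictly decreasing in zeta: geometric bisection
    lo, hi = 1e-12, 1e12
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if rho(mid).sum() > 1.0 else (lo, mid)
    return rho(np.sqrt(lo * hi))
```

The full entry distribution of Step 3 is then `p[i, j] = rho[i] * abs(A[i, j]) / z[i]`. The geometric bisection is used because ζ₁ can live at very different scales depending on the matrix.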
Conceptually, we see that the probability assigned to each element A_ij in Step 3 is simply the probability ρ_i of its row times its intra-row weight |A_ij|/‖A_{i·}‖₁. We are now able to state our main lemma. We defer its proof to Section 5 and subsequent details to appendices (see supplementary material).

Theorem 4.2. If A is a Data matrix per Definition 4.1 and p is the probability distribution defined in Algorithm 1, then ε₁(p) ≤ 3·ε₁(p*), where p* is the minimizer of ε₁.

To compare our result with previous ones we first define several matrix metrics. We then state the bound implied by Theorem 4.2 on the minimal number of samples s₀ needed by our algorithm to achieve an approximation B of the matrix A such that ‖A − B‖ ≤ ε‖A‖ with constant probability.

Stable rank: Denoted as sr and defined as ‖A‖_F² / ‖A‖₂². This is a smooth analog of the algebraic rank, always bounded by it from above, and resilient to small perturbations of the matrix. For data matrices we expect it to be small, even constant, and to capture the "inherent dimensionality" of the data.

Numeric density: Denoted as nd and defined as ‖A‖₁² / ‖A‖_F². This is a smooth analog of the number of non-zero entries nnz(A). For 0-1 matrices it equals nnz(A), but when there is variance in the magnitude of the entries it is smaller.

Numeric row density: Denoted as nrd and defined as Σ_i ‖A_{i·}‖₁² / (‖A‖_F² · n). In practice, it is often close to the average numeric density of a single row, a quantity typically much smaller than n.

Theorem 4.3. Let A be a Data matrix per Definition 4.1 and let B be the matrix returned by Algorithm 1 for δ = 1/10, ε > 0, and any

  s ≥ s₀ = Θ( (nrd·sr/ε²)·log n + ((sr·nd/ε²)·log n)^{1/2} ).

With probability at least 9/10, ‖A − B‖ ≤ ε‖A‖.

The proof of Theorem 4.3 is given in Appendix C (see supplementary material). The third column of the table below shows the number of samples needed to guarantee that ‖A − B‖ ≤ ε‖A‖ occurs with constant probability, in terms of the matrix metrics defined above.
The fourth column presents the ratio of the samples needed by previous results divided by the samples needed by our method. (To simplify the expressions, we present the ratio between our bound and [AHK06] only when the result of [AHK06] gives superior bounds to [DZ11]; i.e., we always compare our bound to the stronger of the two bounds implied by these works.) Holding ε and the stable rank constant, we readily see that our method requires roughly 1/n of the samples needed by [AHK06]. In the comparison with [DZ11] we see that the key parameter is the ratio nrd/n, a quantity typically much smaller than 1 for data matrices. As a point of reference for the assumptions, in the experimental Section 6 we provide the values of all relevant matrix metrics for all the real data matrices we worked with, wherein the ratio nrd/n is typically around 10⁻². By this discussion, one would expect that L2-sampling should fare better than L1-sampling in experiments. As we will see, quite the opposite is true. A potential explanation for this phenomenon is the relative looseness of the bound of [AHK06] for the performance of L1-sampling.

Citation   | Method    | Number of samples needed                              | Improvement ratio of Theorem 4.3
[AM07]     | L1, L2    | (sr·n/ε²)·polylog(n)                                  | n·polylog(n)
[DZ11]     | L2        | (sr·n/ε²)·log n                                       | min{ n/nrd, (n/√nd)·√(sr·log n)/ε }
[AHK06]    | L1        | (√nd·n)/ε²                                            | n/(ε·√(sr·log n))
This paper | Bernstein | (nrd·sr/ε²)·log n + ((sr·nd/ε²)·log n)^{1/2}          | —

5 Proof outline

We start by iteratively replacing the objective functions (1) and (2) with simpler and simpler functions. Each replacement will incur a (small) loss in accuracy but will bring us closer to a function for which we can give a closed-form solution. Recalling the definitions of α, β from Algorithm 1 and rewriting the requirement in (3) as a quadratic form in ε gives ε² − ε·βR − α·σ² ≤ 0. Our first step is to observe that for any c, d ≥ 0, the equation ε² − ε·c − d = 0 has one negative and one positive solution, and that the latter is at least (c + √d)/2 and at most c + √d.
Therefore, if we define² ε₂ := √α·σ + βR, we see that 1/2 ≤ ε₁/ε₂ ≤ 1. Our next simplification encompasses Conditions 2 and 3 of Definition 4.1. Let ε₃ := √α·σ̃ + β·R̃, where

  σ̃² := max{ max_i Σ_j A_ij²/p_ij , max_j Σ_i A_ij²/p_ij } and R̃ := max_{ij} |A_ij|/p_ij.

Lemma 5.1. For every matrix A satisfying Conditions 2 and 3 of Definition 4.1, and for every probability distribution on the elements of A, |ε₂/ε₃ − 1| ≤ 1/30.

Lemma 5.1 is proved in Section A (see supplementary material) by showing that σ is close to σ̃ and R is close to R̃. This allows us to optimize p with respect to ε₃ instead of ε₂. In minimizing ε₃ we see that there is freedom to use different rows to optimize σ̃ and R̃. At a cost of a factor of 2, we will couple the two minimizations by minimizing ε₄ = max{ε₅, ε₆}, where

  ε₅ := max_i [ √( α·Σ_j A_ij²/p_ij ) + β·max_j |A_ij|/p_ij ],
  ε₆ := max_j [ √( α·Σ_i A_ij²/p_ij ) + β·max_i |A_ij|/p_ij ].   (4)

[² Here and in the following, to lighten notation, we will omit all arguments, i.e., p, σ(p), R(p), from the objective functions ε_i we seek to optimize, as they are readily understood from context.]

Note that the maximization of R̃ in ε₅ (and ε₆) is coupled with that of the σ̃-related term by constraining the optimization to consider only one row (column) at a time. Clearly, 1 ≤ ε₃/ε₄ ≤ 2. Next we focus on ε₅, the first term in the maximization of ε₄. The following key lemma establishes that for all data matrices satisfying Condition 1 of Definition 4.1, by minimizing ε₅ we also minimize ε₄ = max{ε₅, ε₆}.

Lemma 5.2. For every matrix satisfying Condition 1 of Definition 4.1, argmin_p ε₅ = argmin_p ε₄.

At this point we can derive in closed form the probability distribution p minimizing ε₅.

Lemma 5.3. The function ε₅ is minimized by p_ij = ρ_i·q_ij, where q_ij = |A_ij|/‖A_{i·}‖₁. To define ρ_i, let z_i = ‖A_{i·}‖₁ and define ρ_i(ζ) = ( √α·z_i/(2ζ) + √( α·z_i²/(4ζ²) + β·z_i/ζ ) )². Let ζ₁ > 0 be the unique solution to³ Σ_i ρ_i(ζ₁) = 1. Let ρ_i := ρ_i(ζ₁).

To prove Theorem 4.2, we see that Lemmas 5.2 and 5.3 combined imply that there is an efficient algorithm for minimizing ε₄ for every matrix A satisfying Condition 1 of Definition 4.1.
If A also satisfies Conditions 2 and 3 of Definition 4.1, then it is possible to lower and upper bound the ratios ε₁/ε₂, ε₂/ε₃ and ε₃/ε₄. Combined, these bounds guarantee a lower and an upper bound for ε₁/ε₄. In general, if c ≤ ε₄/ε₁ ≤ C, we can conclude that ε₁(argmin ε₄) ≤ (C/c)·min ε₁. Thus, calculating the constants shows ε₁(argmin ε₄) ≤ 3·min ε₁, yielding Theorem 4.2.

6 Experiments

We experimented with 4 matrices with different characteristics, summarized in the table below. See Section 4 for the definition of the different characteristics.

Measure   | m      | n      | nnz(A) | ‖A‖₁   | ‖A‖_F  | ‖A‖₂   | sr     | nd     | nrd
Synthetic | 1.0e+2 | 1.0e+4 | 5.0e+5 | 1.8e+7 | 3.2e+4 | 8.7e+3 | 1.3e+1 | 3.1e+5 | 3.2e+3
Enron     | 1.3e+4 | 1.8e+5 | 7.2e+5 | 4.0e+9 | 5.8e+6 | 1.0e+6 | 3.2e+1 | 4.9e+5 | 1.5e+3
Images    | 5.1e+3 | 4.9e+5 | 2.5e+8 | 6.5e+9 | 2.0e+6 | 1.8e+6 | 1.3e+0 | 1.1e+7 | 2.3e+3
Wikipedia | 4.4e+5 | 3.4e+6 | 5.3e+8 | 5.3e+9 | 7.5e+5 | 1.6e+5 | 2.1e+1 | 5.0e+7 | 1.9e+4

Enron: Subject lines of emails in the Enron email corpus [Sty11]. Columns correspond to subject lines, rows to words, and entries to tf-idf values. This matrix is extremely sparse to begin with.
Wikipedia: Term-document matrix of a fragment of Wikipedia in English. Entries are tf-idf values.
Images: A collection of images of buildings from Oxford [PCI 07]. Each column represents the wavelet transform of a single 128×128 pixel grayscale image.
Synthetic: This synthetic matrix simulates a collaborative filtering matrix. Each row corresponds to an item and each column to a user. Each user and each item was first assigned a random latent vector (i.i.d. Gaussian). Each value in the matrix is the dot product of the corresponding latent vectors plus additional Gaussian noise. We simulated the fact that some items are more popular than others by retaining each entry of each item i with probability 1 − i/m, where i ∈ {0, ..., m−1}.

6.1 Sampling techniques and quality measure

The experiments report the accuracy of sampling according to four different distributions.
In Figure 1, Bernstein denotes the distribution of this paper, defined in Lemma 5.3. The Row-L1 distribution is a simplified version of the Bernstein distribution, where p_ij ∝ |A_ij|·‖A_{i·}‖₁. L1 and L2 refer to p_ij ∝ |A_ij| and p_ij ∝ A_ij², respectively, as defined earlier in the paper. The case of L2 sampling was split into three sampling methods corresponding to different trimming thresholds. In the method referred to as L2, no trimming is made and p_ij ∝ A_ij². In the case referred to as L2 trim 0.1, p_ij ∝ A_ij² for any entry where A_ij² ≥ 0.1·E_{ij}[A_ij²] and p_ij = 0 otherwise. The sampling technique referred to as L2 trim 0.01 is analogous, with threshold 0.01·E_{ij}[A_ij²].

[³ Notice that the function ρ_i(ζ) is monotonically decreasing for ζ > 0, hence the solution is indeed unique.]

Although to derive our sampling probability distributions we targeted minimizing ‖A − B‖₂, in experiments it is more informative to consider a more sensitive measure of the quality of approximation. The reason is that for a number of values of s, the scaling of entries required for B to be an unbiased estimator of A results in ‖A − B‖ > ‖A‖, which would suggest that the all-zeros matrix is a better sketch for A than the sampled matrix. We will see that this is far from being the case. As a trivial example, consider the possibility B = 10A. Clearly, B is very informative of A, although ‖A − B‖ = 9‖A‖. To avoid this pitfall, we measure ‖P_k^B A‖_F / ‖A_k‖_F, where P_k^B is the projection onto the top k left singular vectors of B. Thus, A_k = P_k^A A is the optimal rank-k approximation of A. Intuitively, this measures how well the top k left singular vectors of B capture A, compared to A's own (optimal) top-k left singular vectors. We also compute ‖A Q_k^B‖_F / ‖A_k‖_F, where Q_k^B is the projection onto the top k right singular vectors of B. Note that, for a given k, approximating the row space is harder than approximating the column space, since it is of dimension n, which is significantly larger than m, a fact also borne out in the experiments.
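The two approximation ratios just described reduce to small SVD computations; a minimal numpy sketch (using that ‖P_k^B A‖_F = ‖U_kᵀA‖_F when U_k has orthonormal columns):

```python
import numpy as np

def approx_ratios(A, B, k):
    """Column- and row-space approximation ratios of a sketch B of A:
    ||P_k^B A||_F / ||A_k||_F and ||A Q_k^B||_F / ||A_k||_F."""
    Ua, sa, Vat = np.linalg.svd(A, full_matrices=False)
    Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
    Ak_norm = np.sqrt((sa[:k] ** 2).sum())            # ||A_k||_F
    col = np.linalg.norm(Ub[:, :k].T @ A) / Ak_norm   # top-k left sing. vecs of B
    row = np.linalg.norm(A @ Vbt[:k].T) / Ak_norm     # top-k right sing. vecs of B
    return col, row
```

Both ratios equal 1 when B = A and are at most 1 in general, since A's own top-k singular subspaces maximize the captured Frobenius mass.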
In the experiments we made sure to choose a sufficiently wide range of sample sizes, so that at least the best method for each matrix goes from poor to near-perfect, both in approximating the row space and the column space. In all cases we report on k = 20, which is close to the upper end of what could be efficiently computed on a single machine for matrices of this size. The results for all smaller values of k are qualitatively indistinguishable.

[Figure 1: Each vertical pair of plots corresponds to one matrix. Left to right: Wikipedia, Images, Enron, Synthetic. Each top plot shows the quality of the column-space approximation ratio, ‖P_k^B A‖_F/‖A_k‖_F, while the bottom plots show the row-space approximation ratio, ‖A Q_k^B‖_F/‖A_k‖_F. The number of samples s is on the x-axis in log scale (x = log₁₀ s).]

6.2 Insights

The experiments demonstrate three main insights. First and most important, Bernstein sampling is never worse than any of the other techniques and is often strictly better. A dramatic example of this is the Wikipedia matrix, for which it is far superior to all other methods. The second insight is that L1 sampling, i.e., simply taking p_ij = |A_ij|/‖A‖₁, performs rather well in many cases. Hence, if it is impossible to perform more than one pass over the matrix and one cannot even obtain an estimate of the ratios of the L1 weights of the rows, L1 sampling seems to be a highly viable option. The third insight is that for L2 sampling, discarding small entries may drastically improve the performance. However, it is not clear which threshold should be chosen in advance.
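The baseline distributions compared above can be written down in a few lines. A minimal numpy sketch, where (as described in Section 6.1) the trimming threshold is taken relative to the mean squared entry:

```python
import numpy as np

def l1_probs(A):
    """L1 sampling: p_ij proportional to |A_ij|."""
    P = np.abs(A)
    return P / P.sum()

def row_l1_probs(A):
    """Row-L1 sampling: p_ij proportional to |A_ij| * ||A_i.||_1."""
    z = np.abs(A).sum(axis=1)              # row L1 norms
    rho = z ** 2 / (z ** 2).sum()          # rho_i proportional to ||A_i.||_1^2
    return rho[:, None] * np.abs(A) / z[:, None]

def l2_trim_probs(A, frac=0.1):
    """L2 sampling with trimming: zero out entries below frac * E[A_ij^2]."""
    P = A ** 2
    P[P < frac * P.mean()] = 0.0
    return P / P.sum()
```

Each function returns a matrix of entry probabilities summing to 1, ready to feed into a with-replacement sampler.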
In any case, in all of the example matrices, both L1-sampling and Bernstein-sampling proved to outperform or perform equally to L2-sampling, even with the correct trimming threshold.

References

[AHK05] Sanjeev Arora, Elad Hazan, and Satyen Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on, pages 339–348. IEEE, 2005.
[AHK06] Sanjeev Arora, Elad Hazan, and Satyen Kale. A fast random sampling algorithm for sparsifying matrices. In Proceedings of the 9th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, and 10th International Conference on Randomization and Computation, APPROX'06/RANDOM'06, pages 272–279, Berlin, Heidelberg, 2006. Springer-Verlag.
[AKV02] Noga Alon, Michael Krivelevich, and Van H. Vu. On the concentration of eigenvalues of random symmetric matrices. Israel Journal of Mathematics, 131:259–267, 2002.
[AM01] Dimitris Achlioptas and Frank McSherry. Fast computation of low rank matrix approximations. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pages 611–618. ACM, 2001.
[AM07] Dimitris Achlioptas and Frank McSherry. Fast computation of low-rank matrix approximations. J. ACM, 54(2), April 2007.
[AW02] Rudolf Ahlswede and Andreas Winter. Strong converse for identification via quantum channels. IEEE Transactions on Information Theory, 48(3):569–579, 2002.
[Ber07] Aleš Berkopec. Hyperquick algorithm for discrete hypergeometric distribution. Journal of Discrete Algorithms, 5(2):341–347, 2007.
[CR09] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[CT10] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[d'A08] Alexandre d'Aspremont. Subsampling algorithms for semidefinite programming. arXiv preprint arXiv:0803.1990, 2008.
[DKM06] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM J. Comput., 36(1):132–157, July 2006.
[DZ11] Petros Drineas and Anastasios Zouzias. A note on element-wise matrix sparsification via a matrix-valued Bernstein inequality. Inf. Process. Lett., 111(8):385–389, 2011.
[FK81] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1(3):233–241, 1981.
[GT09] Alex Gittens and Joel A. Tropp. Error bounds for random matrix approximation schemes. arXiv preprint arXiv:0911.4108, 2009.
[Juh81] F. Juhász. On the spectrum of a random graph. In Algebraic Methods in Graph Theory, Vol. I, II (Szeged, 1978), volume 25 of Colloq. Math. Soc. János Bolyai, pages 313–316. North-Holland, Amsterdam, 1981.
[NDT09] N. H. Nguyen, Petros Drineas, and T. D. Tran. Matrix sparsification via the Khintchine inequality, 2009.
[NDT10] Nam H. Nguyen, Petros Drineas, and Trac D. Tran. Tensor sparsification via a bound on the spectral norm of random tensors. arXiv preprint arXiv:1005.4732, 2010.
[PCI 07] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[Rec11] Benjamin Recht. A simpler approach to matrix completion. J. Mach. Learn. Res., 12:3413–3430, December 2011.
[RV07] Mark Rudelson and Roman Vershynin. Sampling from large matrices: An approach through geometric functional analysis. J. ACM, 54(4), July 2007.
[Sty11] Will Styler. The EnronSent corpus. Technical Report 01-2011, University of Colorado at Boulder Institute of Cognitive Science, Boulder, CO, 2011.
[Tro12a] Joel A. Tropp. User-friendly tail bounds for sums of random matrices.
Foundations of Computational Mathematics, 12(4):389–434, 2012.
[Tro12b] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
[Wig58] Eugene P. Wigner. On the distribution of the roots of certain symmetric matrices. Annals of Mathematics, 67(2):325–327, 1958.
A Comparative Framework for Preconditioned Lasso Algorithms

Fabian L. Wauthier, Statistics and WTCHG, University of Oxford, flw@stats.ox.ac.uk
Nebojsa Jojic, Microsoft Research, Redmond, jojic@microsoft.com
Michael I. Jordan, Computer Science Division, University of California, Berkeley, jordan@cs.berkeley.edu

Abstract

The Lasso is a cornerstone of modern multivariate data analysis, yet its performance suffers in the common situation in which covariates are correlated. This limitation has led to a growing number of Preconditioned Lasso algorithms that pre-multiply X and y by matrices P_X, P_y prior to running the standard Lasso. A direct comparison of these and similar Lasso-style algorithms to the original Lasso is difficult because the performance of all of these methods depends critically on an auxiliary penalty parameter λ. In this paper we propose an agnostic framework for comparing Preconditioned Lasso algorithms to the Lasso without having to choose λ. We apply our framework to three Preconditioned Lasso instances and highlight cases when they will outperform the Lasso. Additionally, our theory reveals fragilities of these algorithms to which we provide partial solutions.

1 Introduction

Variable selection is a core inferential problem in a multitude of statistical analyses. Confronted with a large number of (potentially) predictive variables, the goal is to select a small subset of variables that can be used to construct a parsimonious model. Variable selection is especially relevant in linear observation models of the form

  y = Xβ∗ + w with w ∼ N(0, σ²I_{n×n}),   (1)

where X is an n × p matrix of features or predictors, β∗ is an unknown p-dimensional regression parameter, and w is a noise vector. In high-dimensional settings where n ≪ p, ordinary least squares is generally inappropriate.
Assuming that β∗ is sparse (i.e., the support set S(β∗) ≜ {i : β∗_i ≠ 0} has cardinality k < n), a mainstay algorithm for such settings is the Lasso [10]:

  Lasso: β̂ = argmin_{β∈R^p} (1/(2n))·‖y − Xβ‖₂² + λ‖β‖₁.   (2)

For a particular choice of λ, the variable selection properties of the Lasso can be analyzed by quantifying how well the estimated support S(β̂) approximates the true support S(β∗). More careful analyses focus instead on recovering the signed support S±(β∗), where

  S±(β∗_i) ≜ +1 if β∗_i > 0, −1 if β∗_i < 0, and 0 otherwise.   (3)

Theoretical developments during the last decade have shed light onto the support recovery properties of the Lasso and highlighted practical difficulties when the columns of X are correlated. These developments have led to various conditions on X for support recovery, such as the mutual incoherence or the irrepresentable condition [1, 3, 8, 12, 13]. In recent years, several modifications of the standard Lasso have been proposed to improve its support recovery properties [2, 7, 14, 15]. In this paper we focus on a class of "Preconditioned Lasso" algorithms [5, 6, 9] that pre-multiply X and y by suitable matrices P_X and P_y to yield X̄ = P_X X, ȳ = P_y y, prior to running the Lasso. Thus, the general strategy of these methods is

  Preconditioned Lasso: β̄̂ = argmin_{β∈R^p} (1/(2n))·‖ȳ − X̄β‖₂² + λ̄‖β‖₁.   (4)

Although this class of algorithms often compares favorably to the Lasso in practice, our theoretical understanding of them is at present still fairly poor. Huang and Jojic [5], for example, consider only empirical evaluations, while both Jia and Rohe [6] and Paul et al. [9] consider asymptotic consistency under various assumptions. Important and necessary as they are, consistency results do not provide insight into the relative performance of Preconditioned Lasso variants for finite data sets. In this paper we provide a new theoretical basis for making such comparisons. Although the focus of the paper is on problems of the form of Eq.
(4), we note that the core ideas can also be applied to algorithms that right-multiply X and/or y with some matrices (e.g., [4, 11]). For particular instances of X, β∗, we want to discover whether a given Preconditioned Lasso algorithm following Eq. (4) improves or degrades signed support recovery relative to the standard Lasso of Eq. (2). A major roadblock to a one-to-one comparison are the auxiliary penalty parameters, λ, ¯λ, which trade off the ℓ1 penalty to the quadratic objective in both Eq. (2) and Eq. (4). A correct choice of penalty parameter is essential for signed support recovery: If it is too small, the algorithm behaves like ordinary least squares; if it is too large, the estimated support may be empty. Unfortunately, in all but the simplest cases, pre-multiplying data X, y by matrices PX, Py changes the relative geometry of the ℓ1 penalty contours to the elliptical objective contours in a nontrivial way. Suppose we wanted to compare the Lasso to the Preconditioned Lasso by choosing for each λ in Eq. (2) a suitable, matching ¯λ in Eq. (4). For a fair comparison, the resulting mapping would have to capture the change of relative geometry induced by preconditioning of X, y, i.e. ¯λ = f(λ, X, y, PX, Py). It seems difficult to theoretically characterize such a mapping. Furthermore, it seems unlikely that a comparative framework could be built by independently choosing “ideal” penalty parameters λ, ¯λ: Meinshausen and B¨uhlmann [8], for example, demonstrate that a seemingly reasonable oracle estimator of λ will not lead to consistent support recovery in the Lasso. In the Preconditioned Lasso literature this problem is commonly sidestepped either by resorting to asymptotic comparisons [6, 9], empirically comparing regularization paths [5], or using modelselection techniques which aim to choose reasonably “good” matching penalty parameters [6]. 
We deem these approaches unsatisfactory: asymptotic and empirical analyses provide limited insight, and model selection strategies add a layer of complexity that may lead to unfair comparisons. In our view, all of these approaches place unnecessary emphasis on particular choices of penalty parameter. In this paper we propose an alternative strategy that instead compares the Lasso to the Preconditioned Lasso by comparing data-dependent upper and lower penalty parameter bounds. Specifically, we give bounds (λ_u, λ_l) on λ so that the Lasso in Eq. (2) is guaranteed to recover the signed support iff λ_l < λ < λ_u. Consequently, if λ_l > λ_u, signed support recovery is not possible. The Preconditioned Lasso in Eq. (4) uses data X̄ = P_X X, ȳ = P_y y and will thus induce new bounds (λ̄_u, λ̄_l) on λ̄. The comparison of Lasso and Preconditioned Lasso on an instance X, β* then proceeds by suitably comparing the bounds on λ and λ̄. The advantage of this approach is that the upper and lower bounds are easy to compute, even though a general mapping between specific penalty parameters cannot be readily derived. To demonstrate the effectiveness of our framework, we use it to analyze three Preconditioned Lasso algorithms [5, 6, 9]. Using our framework we make several contributions: (1) we confirm intuitions about advantages and disadvantages of the algorithms proposed in [5, 9]; (2) we show that for an SVD-based construction of n × p matrices X, the algorithm in [6] changes the bounds deterministically; (3) we show that in the context of our framework, this SVD-based construction can be thought of as a limit point of a Gaussian construction. The paper is organized as follows. In Section 2 we discuss three recent instances of Eq. (4). We outline our comparative framework in Section 3 and highlight some immediate consequences for [5] and [9] on general matrices X in Section 4. More detailed comparisons can be made by considering a generative model for X.
In Section 5 we introduce such a model based on a block-wise SVD of X and then analyze [6] for specific instances of this generative model. Finally, we show that in terms of signed support recovery, this generative model can be thought of as a limit point of a Gaussian construction. Section 6 concludes with some final thoughts. The proofs of all lemmas and theorems are in the supplementary material.

2 Preconditioned Lasso Algorithms

Our interest lies in the class of Preconditioned Lasso algorithms that is summarized by Eq. (4). Extensions to related algorithms, such as [4, 11], will follow readily. In this section we focus on three recent Preconditioned Lasso examples and instantiate the matrices P_X, P_y appropriately. Detailed derivations can be found in the supplementary material. For later reference, we will denote each algorithm by the author initials.

Huang and Jojic [5] (HJ). Huang and Jojic proposed Correlation Sifting [5], which, although not presented as a preconditioning algorithm, can be rewritten as one. Let the SVD of X be X = UDV^⊤. Given an algorithm parameter q, let U_A be the set of q smallest left singular vectors of X. Then HJ amounts to setting

P_X = P_y = U_A U_A^⊤. (5)

Paul et al. [9] (PBHT). An earlier instance of the preconditioning idea was put forward by Paul et al. [9]. For some algorithm parameter q, let A be the q column indices of X with largest absolute correlation to y (i.e., where |X_j^⊤ y| / ||X_j||_2 is largest). Define U_A to be the q largest left singular vectors of X_A. With this, PBHT can be expressed as setting

P_X = I_{n×n},    P_y = U_A U_A^⊤. (6)

Jia and Rohe [6] (JR). Jia and Rohe [6] propose a preconditioning method that amounts to whitening the matrix X. If X = UDV^⊤ is full rank, then JR defines

P_X = P_y = U (DD^⊤)^{−1/2} U^⊤. (7)

If n < p then X̄X̄^⊤ = P_X XX^⊤ P_X^⊤ ∝ I_{n×n}, and if n > p then X̄^⊤X̄ = X^⊤ P_X^⊤ P_X X ∝ I_{p×p}. Both HJ and PBHT estimate a basis U_A for a q-dimensional subspace onto which they project y and/or X.
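To make the three constructions concrete, here is a small numpy sketch (our illustration, not the authors' code) of the three preconditioners; `q` denotes the algorithm parameter of HJ and PBHT.

```python
import numpy as np

def hj_preconditioner(X, q):
    """HJ: project onto the q smallest left singular vectors of X (Eq. (5))."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    UA = U[:, -q:]            # numpy sorts singular values in decreasing order
    P = UA @ UA.T
    return P, P               # P_X = P_y

def pbht_preconditioner(X, y, q):
    """PBHT (Eq. (6)): P_X = I; P_y projects onto the top-q left singular
    vectors of X_A, where A holds the q columns most correlated with y."""
    corr = np.abs(X.T @ y) / np.linalg.norm(X, axis=0)
    A = np.argsort(corr)[-q:]
    UA, _, _ = np.linalg.svd(X[:, A], full_matrices=False)
    return np.eye(X.shape[0]), UA @ UA.T

def jr_preconditioner(X):
    """JR (Eq. (7)): whiten via P = U (D D^T)^{-1/2} U^T, assuming full rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = U @ np.diag(1.0 / s) @ U.T
    return P, P
```

With n < p and X full rank, the JR-transformed design satisfies X̄X̄^⊤ = I_{n×n} exactly, and the HJ/PBHT projectors are idempotent, as the projection interpretation requires.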
However, since the methods differ substantially in their assumptions, the estimators differ also. Empirical results in [5] and [9] suggest that the respective assumptions are useful in a variety of situations. In contrast, JR reweights the column space directions U and requires no extra parameter q to be estimated.

3 Comparative Framework

In this section we propose a new comparative approach for Preconditioned Lasso algorithms which avoids choosing particular penalty parameters λ, λ̄. We first derive upper and lower bounds for λ and λ̄, respectively, so that signed support recovery can be guaranteed iff λ and λ̄ satisfy the bounds. We then compare estimators by comparing the resulting bounds.

3.1 Conditions for signed support recovery

Before proceeding, we make some definitions motivated by Wainwright [12]. Suppose that the support set of β* is S ≜ S(β*), with |S| = k. To simplify notation, we will assume throughout that S = {1, . . . , k}, so that the corresponding off-support set is S^c = {1, . . . , p} \ S, with |S^c| = p − k. Denote by X_j column j of X and by X_A the submatrix of X consisting of columns indexed by set A. Define the following variables: for all j ∈ S^c and i ∈ S, let

µ_j = X_j^⊤ X_S (X_S^⊤ X_S)^{−1} sgn(β*_S),    η_j = X_j^⊤ (I_{n×n} − X_S (X_S^⊤ X_S)^{−1} X_S^⊤) w / n, (8)
γ_i = e_i^⊤ ((1/n) X_S^⊤ X_S)^{−1} sgn(β*_S),    ε_i = e_i^⊤ ((1/n) X_S^⊤ X_S)^{−1} X_S^⊤ w / n. (9)

Footnotes: (1) The choice of smallest singular vectors is considered for matrices X with sharply decaying spectrum. (2) We note that Jia and Rohe [6] let D be square, so that it can be directly inverted. If X is not full rank, the pseudo-inverse of D can be used.

[Figure 1 plots omitted; panels (a) and (b) plot the empirical probability P(S±(β̂) = S±(β*)) against the factor f, showing signed support recovery around λ_l and λ_u, respectively.] Figure 1: Empirical evaluation of the penalty parameter bounds of Lemma 1. For each of 500 synthetic Lasso problems (n = 300, p = 1000, k = 10) we computed λ_l, λ_u as per Lemma 1.
Then we ran the Lasso using penalty parameters f·λ_l in panel (a) and f·λ_u in panel (b), where the factor f ranges over 0.5, . . . , 1.5. The figures show the empirical probability of signed support recovery as a function of the factor f for both λ_l and λ_u. As expected, the probabilities change sharply at f = 1.

For the traditional Lasso of Eq. (2), results in (for example) Wainwright [12] connect settings of λ with instances of X, β*, w to certify whether or not the Lasso will recover the signed support. We invert these results and, for particular instances of X, β*, w, derive bounds on λ so that signed support recovery is guaranteed if and only if the bounds are satisfied. Specifically, we prove the following lemma in the supplementary material.

Lemma 1. Suppose that X_S^⊤ X_S is invertible, |µ_j| < 1 ∀j ∈ S^c, and sgn(β*_i) γ_i > 0 ∀i ∈ S. Then the Lasso has a unique solution β̂ which recovers the signed support (i.e., S±(β̂) = S±(β*)) if and only if λ_l < λ < λ_u, where

λ_l = max_{j∈S^c} η_j / ((2⟦η_j > 0⟧ − 1) − µ_j),    λ_u = min_{i∈S} |(β*_i + ε_i) / γ_i|_+, (10)

where ⟦·⟧ denotes the indicator function and |·|_+ = max(0, ·) denotes the hinge function. On the other hand, if X_S^⊤ X_S is not invertible, then the signed support cannot in general be recovered.

Lemma 1 recapitulates well-worn intuitions about when the Lasso has difficulty recovering the signed support. For instance, assuming that w has a symmetric distribution with mean 0, if 1 − |µ_j| is small (i.e., the irrepresentable condition almost fails to hold), then λ_l will tend to be large. In extreme cases we might have λ_l > λ_u, so that signed support recovery is impossible. Figure 1 empirically validates the bounds of Lemma 1 by estimating probabilities of signed support recovery for a range of penalty parameters on synthetic Lasso problems.

3.2 Comparisons

In this paper we propose to compare a preconditioning algorithm to the traditional Lasso by comparing the penalty parameter bounds produced by Lemma 1. As highlighted in Eq.
(4), the preconditioning framework runs the Lasso on modified variables X̄ = P_X X, ȳ = P_y y. For the purpose of applying Lemma 1, these transformations induce a new noise vector

w̄ = ȳ − X̄β* = P_y (Xβ* + w) − P_X Xβ*. (11)

Note that if P_X = P_y then w̄ = P_y w. Provided the conditions of Lemma 1 hold for X̄, β*, we can define updated variables µ̄_j, γ̄_i, η̄_j, ε̄_i, from which the bounds λ̄_u, λ̄_l on the penalty parameter λ̄ can be derived. In order for our comparison to be scale-invariant, we will compare algorithms by ratios of the resulting penalty parameter bounds. That is, we deem a Preconditioned Lasso algorithm to be more effective than the traditional Lasso if λ̄_u/λ̄_l > λ_u/λ_l. Intuitively, the upper bound λ̄_u is then disproportionately larger than λ̄_l relative to λ_u and λ_l, which in principle allows easier tuning of λ̄ (other functions of λ_l, λ_u and λ̄_l, λ̄_u could also be considered; however, we find the ratio to be a particularly intuitive measure). We will later encounter the special case λ̄_u ≠ 0, λ̄_l = 0, in which case we define λ̄_u/λ̄_l ≜ ∞ to indicate that the preconditioned problem is very easy. If λ̄_u/λ̄_l < 1 then signed support recovery is in general impossible. Finally, to match this intuition, we define λ̄_u/λ̄_l ≜ 0 if λ̄_u = λ̄_l = 0.

4 General Comparisons

We begin our comparisons with some immediate consequences of Lemma 1 for HJ and PBHT. In order to highlight the utility of the proposed framework, we focus in this section on special cases of P_X, P_y. The framework can of course also be applied to general matrices P_X, P_y. As we will see, both HJ and PBHT have the potential to improve signed support recovery relative to the traditional Lasso, provided the matrices P_X, P_y are suitably estimated. The following notation will be used during our comparisons: we write Ā ⪯ A to indicate that random variable A stochastically dominates Ā, that is, ∀t, P(Ā ≥ t) ≤ P(A ≥ t).
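Before turning to specific algorithms, note that the quantities of Lemma 1 and the comparison above are mechanical to compute on a given instance. The numpy sketch below (our illustration, not the paper's implementation) evaluates the bounds of Eq. (10) and, via the induced noise of Eq. (11), the bounds of a preconditioned problem; it assumes the preconditions of Lemma 1 hold.

```python
import numpy as np

def lemma1_bounds(X, beta_star, w):
    """(lambda_l, lambda_u) of Eq. (10); assumes the Lemma 1 preconditions hold."""
    n, p = X.shape
    S = np.flatnonzero(beta_star)
    Sc = np.setdiff1d(np.arange(p), S)
    XS, sgn = X[:, S], np.sign(beta_star[S])
    inv = np.linalg.inv(XS.T @ XS)
    mu = X[:, Sc].T @ XS @ inv @ sgn                              # Eq. (8)
    eta = X[:, Sc].T @ (np.eye(n) - XS @ inv @ XS.T) @ w / n      # Eq. (8)
    gamma = n * inv @ sgn                                         # Eq. (9)
    eps = inv @ XS.T @ w                                          # Eq. (9)
    assert np.all(np.abs(mu) < 1) and np.all(sgn * gamma > 0)
    lam_l = np.max(eta / ((2 * (eta > 0) - 1) - mu))
    lam_u = np.min(np.maximum(0.0, (beta_star[S] + eps) / gamma))
    return lam_l, lam_u

def preconditioned_bounds(X, beta_star, w, PX, Py):
    """Bounds for the preconditioned problem, using the induced noise of Eq. (11)."""
    w_bar = Py @ (X @ beta_star + w) - PX @ X @ beta_star
    return lemma1_bounds(PX @ X, beta_star, w_bar)
```

For an orthogonal design with X^⊤X = nI, the bounds reduce to the familiar soft-thresholding conditions: µ_j = 0 and γ_i = sgn(β*_i), so λ_l = max_{j∈S^c} |X_j^⊤w| / n and λ_u follows from β̂_i = sign(z_i)(|z_i| − λ)_+ with z_i = β*_i + X_i^⊤w/n.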
We also let U_S be a minimal basis for the column space of the submatrix X_S, and define span(U_S) = {x | ∃c ∈ R^k s.t. x = U_S c} ⊆ R^n. Finally, we let U_{S^c} be a minimal basis for the orthogonal complement of span(U_S).

Consequences for HJ. Recall from Section 2 that HJ uses P_X = P_y = U_A U_A^⊤, where U_A is a column basis estimated from X. We have the following theorem:

Theorem 1. Suppose that the conditions of Lemma 1 are met for a fixed instance of X, β*. If span(U_S) ⊆ span(U_A), then after preconditioning using HJ the conditions continue to hold, and

λ_u/λ_l ⪯ λ̄_u/λ̄_l, (12)

where the stochasticity on both sides is due to independent noise vectors w. On the other hand, if X_S^⊤ P_X^⊤ P_X X_S is not invertible, then HJ cannot in general recover the signed support.

We briefly sketch the proof of Theorem 1. If span(U_S) ⊆ span(U_A), then plugging the definition of P_X into µ̄_j, γ̄_i, η̄_j, ε̄_i, one can derive the following:

µ̄_j = µ_j,    γ̄_i = γ_i, (13)
η̄_j = X_j^⊤ (I_{n×n} − U_S U_S^⊤) U_A U_A^⊤ w / n,    ε̄_i = ε_i. (14)

If span(U_A) = span(U_S), then it is easy to see that η̄_j = 0. Notice that because µ̄_j and γ̄_i are unchanged, if the conditions of Lemma 1 hold for the original Lasso problem (i.e., X_S^⊤ X_S is invertible, |µ_j| < 1 ∀j ∈ S^c and sgn(β*_i) γ_i > 0 ∀i ∈ S), they will continue to hold for the preconditioned problem. Suppose then that the conditions set forth in Lemma 1 are met. With some additional work one can show that

λ̄_u = min_{i∈S} |(β*_i + ε̄_i) / γ̄_i|_+ = λ_u,    λ̄_l = max_{j∈S^c} η̄_j / ((2⟦η̄_j > 0⟧ − 1) − µ̄_j) ⪯ λ_l. (15)

The result then follows by showing that λ̄_l, λ_l are both independent of λ̄_u = λ_u. Note that if span(U_A) = span(U_S), then λ̄_l = 0 and so λ̄_u/λ̄_l ≜ ∞. In the more common case when span(U_S) ⊄ span(U_A), the performance of the Lasso depends on how misaligned U_A and U_S are. In extreme cases, X_S^⊤ P_X^⊤ P_X X_S is singular and so signed support recovery is not in general possible.

Consequences for PBHT. Recall from Section 2 that PBHT uses P_X = I_{n×n}, P_y = U_A U_A^⊤, where U_A is a column basis estimated from X.
We have the following theorem.

Theorem 2. Suppose that the conditions of Lemma 1 are met for a fixed instance of X, β*. If span(U_S) ⊆ span(U_A), then after preconditioning using PBHT the conditions continue to hold, and

λ_u/λ_l ⪯ λ̄_u/λ̄_l, (16)

where the stochasticity on both sides is due to independent noise vectors w. On the other hand, if span(U_{S^c}) = span(U_A), then PBHT cannot recover the signed support.

As before, we sketch the proof to build some intuition. Because PBHT does not set P_X = P_y as HJ does, there is no danger of X_S^⊤ P_X^⊤ P_X X_S becoming singular. On the other hand, this complicates the form of the induced noise vector w̄. Plugging P_X and P_y into Eq. (11), we find w̄ = (U_A U_A^⊤ − I_{n×n}) Xβ* + U_A U_A^⊤ w. However, even though the noise has a more complicated form, derivations in the supplementary material show that if span(U_S) ⊆ span(U_A), then

µ̄_j = µ_j,    γ̄_i = γ_i, (17)
η̄_j = X_j^⊤ (I_{n×n} − U_S U_S^⊤) U_A U_A^⊤ w / n,    ε̄_i = ε_i. (18)

[Figure 2 plots omitted; panel (a) plots the empirical c.d.f. P(λ_u/λ_l < t) against t for the Lasso and for projection dimensions 55, 35, 25, 15, 10; panel (b) plots (λ̄_u/λ̄_l)/(λ_u/λ_l) against f for orthogonal and Gaussian data.] Figure 2: Experimental evaluations. Figure (a) shows empirical c.d.f.'s of penalty parameter bounds ratios estimated from 1000 variable selection problems. Each problem consists of Gaussian X and w, and β*, with n = 100, p = 300, k = 5. The blue curve shows the c.d.f. for λ_u/λ_l estimated on the original data (Lasso). Then we projected the data using P_X = P_y = U_A U_A^⊤, where span(U_S) ⊆ span(U_A) but dim(U_A) = dim(span(U_A)) is variable (see legend), and estimated the resulting c.d.f. for the updated bounds ratio λ̄_u/λ̄_l. As predicted by Theorems 1 and 2, λ_u/λ_l ⪯ λ̄_u/λ̄_l. In Figure (b), the blue curve shows the scale factor (p − k)/(n + pκ² − k) predicted by Theorem 3 for problems constructed from Eq. (19) for κ = f·√(1 − n/p).
The red curve plots the corresponding factor estimated from the Gaussian construction in Eq. (25) (n = 100, m = 2000, p = 200, k = 5) using the same Σ_S, Σ_{S^c} as in Theorem 3, averaged over 50 problem instances and with error bars for one standard deviation. As in Theorem 3, the factor is approximately 1 if f = 1.

As with HJ, if span(U_A) = span(U_S), then η̄_j = 0. Because µ̄_j and γ̄_i are again unchanged, the conditions of Lemma 1 will continue to hold for the preconditioned problem if they hold for the original Lasso problem. With the previous equalities established, the remainder of the proof is identical to that of Theorem 1. The fact that the above µ̄_j, η̄_j, γ̄_i, ε̄_i are identical to those of HJ depends crucially on the fact that span(U_S) ⊆ span(U_A). In general the values will differ, because PBHT sets P_X = I_{n×n} but HJ does not. On the other hand, if span(U_S) ⊄ span(U_A), then the distribution of ε̄_i depends on how misaligned U_A and U_S are. In the extreme case when span(U_{S^c}) = span(U_A), one can show that ε̄_i = −β*_i, which results in λ̄_u = 0, λ̄_l ⪯ λ_l. Because P(λ̄_l ≥ 0) = 1, signed support recovery is not possible.

Remarks. Our theoretical analyses show that both HJ and PBHT can indeed lead to improved signed support recovery relative to the Lasso on finite datasets. To underline our findings, we empirically validate Theorems 1 and 2 in Figure 2(a), where we plot estimated c.d.f.'s of penalty parameter bounds ratios for the Lasso and the Preconditioned Lasso for various subspaces U_A. Our theorems focused on specific settings of P_X, P_y and ignored others. In general, the gains of HJ and PBHT over the Lasso depend on how much the decoy signals in X_{S^c} are suppressed and how much of the true signal due to X_S is preserved. Further comparison of HJ and PBHT must thus analyze how the subspaces span(U_A) are estimated in the context of the assumptions made in [5] and [9]. A final note concerns the dimension of the subspace span(U_A).
Both HJ and PBHT were proposed with the implicit goal of finding a basis U_A that has the same span as U_S. This of course requires estimating |S| = k by q, which adds another layer of complexity to these algorithms. Theorems 1 and 2 suggest that underestimating k can be more detrimental to signed support recovery than overestimating it. By overestimating (q > k), we can trade off milder improvement when span(U_S) ⊆ span(U_A) against poor behavior should we have span(U_S) ⊄ span(U_A).

5 Model-Based Comparisons

In the previous section we used Lemma 1 in conjunction with assumptions on U_A to make statements about HJ and PBHT. Of course, the quality of the estimated U_A depends on the specific instances X, β*, w, which hinders a general analysis. Similarly, a direct application of Lemma 1 to JR yields bounds that exhibit strong dependence on X. It is possible to crystallize prototypical examples by specializing X and w to come from a generative model. In this section we briefly present this model and show the resulting penalty parameter bounds for JR.

5.1 Generative model for X

As discussed in Section 2, many preconditioning algorithms can be phrased as truncating or reweighting column subspaces associated with X [5, 6, 9]. This suggests that a natural generative model for X can be formulated in terms of the SVD of submatrices of X. Assume p − k > n and let Σ_S, Σ_{S^c} be fixed-spectrum matrices of dimension n × k and n × (p − k), respectively. We will assume throughout this paper that the top-left "diagonal" entries of Σ_S, Σ_{S^c} are positive and the remainder are zero. Furthermore, we let U, V_S, V_{S^c} be orthonormal bases of dimension n × n, k × k and (p − k) × (p − k), respectively. We assume that these bases are chosen uniformly at random from the corresponding Stiefel manifolds. As before and without loss of generality, suppose S = {1, . . . , k}.
Then we let the Lasso problem be y = Xβ* + w with

X = U [Σ_S V_S^⊤, Σ_{S^c} V_{S^c}^⊤],    w ∼ N(0, σ² I_{n×n}). (19)

To ensure that the column norms of X are controlled, we compute the spectra Σ_S, Σ_{S^c} by normalizing spectra Σ̂_S and Σ̂_{S^c} with arbitrary positive elements on the diagonal. Specifically, we let

Σ_S = (Σ̂_S / ||Σ̂_S||_F) √(kn),    Σ_{S^c} = (Σ̂_{S^c} / ||Σ̂_{S^c}||_F) √((p − k)n). (20)

We verify in the supplementary material that with these assumptions the squared column norms of X are in expectation n (provided the orthonormal bases are chosen uniformly at random).

Intuition. Note that any matrix X can be decomposed using a block-wise SVD as

X = [X_S, X_{S^c}] = U [Σ_S V_S^⊤, T Σ_{S^c} V_{S^c}^⊤], (21)

with orthonormal bases U, T, V_S, V_{S^c}. Our model in Eq. (19) is only a minor restriction of this model, where we set T = I_{n×n}. To develop more intuition, let us temporarily set V_S = I_{k×k}, V_{S^c} = I_{(p−k)×(p−k)}. Then X = [X_S, X_{S^c}] = U [Σ_S, Σ_{S^c}], and we see that, up to scaling, X_S equals the first k columns of X_{S^c}. The difficulty for the Lasso thus lies in correctly selecting the columns in X_S, which are highly correlated with the first few columns in X_{S^c}.

5.2 Piecewise constant spectra

For notational clarity we will now focus on a special case of the above model. To begin, we develop some notation. In previous sections we used U_S to denote a basis for the column space of X_S. We will continue to use this notation, and let U_S contain the first k columns of U. Accordingly, we denote the last n − k columns of U by U_{S^c}. We let the diagonal elements of Σ_S, Σ̂_S, Σ_{S^c}, Σ̂_{S^c} be identified by their column indices. That is, the diagonal entries σ_{S,c} of Σ_S and σ̂_{S,c} of Σ̂_S are indexed by c ∈ {1, . . . , k}; the diagonal entries σ_{S^c,c} of Σ_{S^c} and σ̂_{S^c,c} of Σ̂_{S^c} are indexed by c ∈ {1, . . . , n}. Each of the diagonal entries in Σ_S, Σ_{S^c} is associated with a column of U. The set of diagonal entries of Σ_S and Σ_{S^c} associated with U_S is σ(S) = {1, . . . , k}, and the set of diagonal entries in Σ_{S^c} associated with U_{S^c} is σ(S^c) = {1, . . . , n} \ σ(S).
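Sampling from the model of Eqs. (19)-(20) is direct. The sketch below is our own illustration under the stated assumptions: the uniformly random orthonormal bases are obtained from sign-corrected QR factorizations of Gaussian matrices (which yields Haar-distributed orthogonal factors), and it assumes p − k ≥ n so that Σ_{S^c} has n positive diagonal entries.

```python
import numpy as np

def sample_X(n, p, k, sigma_S_hat, sigma_Sc_hat, rng):
    """Draw X = U [Sigma_S V_S^T, Sigma_Sc V_Sc^T] as in Eqs. (19)-(20).
    sigma_S_hat: k positive entries; sigma_Sc_hat: n positive entries.
    Assumes p - k >= n."""
    def haar(m):
        # QR of a Gaussian matrix; sign correction makes Q Haar-distributed.
        Q, R = np.linalg.qr(rng.standard_normal((m, m)))
        return Q * np.sign(np.diag(R))

    U, VS, VSc = haar(n), haar(k), haar(p - k)
    # Normalize spectra per Eq. (20): ||Sigma_S||_F = sqrt(kn),
    # ||Sigma_Sc||_F = sqrt((p - k) n).
    SS = np.zeros((n, k))
    SS[:k, :k] = np.diag(sigma_S_hat / np.linalg.norm(sigma_S_hat) * np.sqrt(k * n))
    SSc = np.zeros((n, p - k))
    SSc[:n, :n] = np.diag(sigma_Sc_hat / np.linalg.norm(sigma_Sc_hat)
                          * np.sqrt((p - k) * n))
    return U @ np.hstack([SS @ VS.T, SSc @ VSc.T])
```

Because the bases are orthonormal, ||X||_F² = ||Σ_S||_F² + ||Σ_{S^c}||_F² = kn + (p − k)n = pn for every draw, so the mean squared column norm is exactly n, matching the normalization claim above.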
We will construct spectrum matrices Σ_S, Σ_{S^c} that are piecewise constant on their diagonals. For some κ ≥ 0, we let σ̂_{S,i} = 1, σ̂_{S^c,i} = κ ∀i ∈ σ(S), and σ̂_{S^c,j} = 1 ∀j ∈ σ(S^c).

Consequences for JR. Recall that for JR, if X = UDV^⊤, then P_X = P_y = U (DD^⊤)^{−1/2} U^⊤. We have the following theorem.

Theorem 3. Assume the Lasso problem was generated according to the generative model of Eq. (19) with σ̂_{S,i} = 1, σ̂_{S^c,i} = κ ∀i ∈ σ(S) and σ̂_{S^c,j} = 1 ∀j ∈ σ(S^c), and that κ < √(n − k) / √(k(p − k − 1)). Then the conditions of Lemma 1 hold before and after preconditioning using JR. Moreover,

λ̄_u/λ̄_l = [(p − k) / (n + pκ² − k)] · λ_u/λ_l. (22)

In other words, JR deterministically scales the ratio of penalty parameter bounds. The proof idea is as follows. It is easy to see that X_S^⊤ X_S is always invertible. Furthermore, one can show that if κ < √(n − k) / √(k(p − k − 1)), we have |µ_j| < 1 ∀j ∈ S^c and sgn(β*_i) γ_i > 0 ∀i ∈ S. Thus, by our assumptions, the preconditions of Lemma 1 are satisfied for the original Lasso problem. Plugging the definitions of Σ_S, Σ_{S^c} into Eq. (19), we find that the SVD becomes X = UDV^⊤, where U is the same column basis as in Eq. (19) and the diagonal elements of D are determined by κ. Substituting this into the definitions of µ̄_j, γ̄_i, η̄_j, ε̄_i, we have that after preconditioning using JR

µ̄_j = µ_j,    γ̄_i = (n + n(p − k)κ² / (kκ² + n − k)) γ_i, (23)
η̄_j = [(kκ² + n − k) / (n(p − k))] η_j,    ε̄_i = ε_i. (24)

Thus, if the conditions of Lemma 1 hold for X, β*, they will continue to hold after preconditioning using JR. Furthermore, notice that (2⟦η̄_j > 0⟧ − 1) − µ̄_j = (2⟦η_j > 0⟧ − 1) − µ_j. Applying Lemma 1 then gives the new ratio λ̄_u/λ̄_l as claimed. According to Theorem 3, the ratio λ̄_u/λ̄_l will be larger than λ_u/λ_l iff κ < √(1 − n/p). Indeed, if κ = √(1 − n/p), then P_X = P_y ∝ I_{n×n} and so JR coincides with the standard Lasso.

5.3 Extension to Gaussian ensembles

The construction in Eq. (19) uses an orthonormal matrix U as the column basis of X. At first sight this may appear to be restrictive.
However, as we show in the supplementary material, one can construct Lasso problems using a Gaussian basis W^m which lead to penalty parameter bounds ratios that converge in distribution to those of the Lasso problem in Eq. (19). For some fixed β*, V_S, V_{S^c}, Σ_S and Σ_{S^c}, generate two independent problems: one using Eq. (19), and one according to y^m = X^m β* + w^m with

X^m = (1/√n) W^m [Σ_S V_S^⊤, Σ_{S^c} V_{S^c}^⊤],    w^m ∼ N(0, σ² (m/n) I_{m×m}), (25)

where W^m is an m × n standard Gaussian ensemble. Note that an X so constructed is low rank if n < p. The latter generative model bears some resemblance to Gaussian models considered in Paul et al. [9] (Eq. (7)) and Jia and Rohe [6] (Proposition 2). Note that while the problem in Eq. (19) uses n observations with noise variance σ², Eq. (25) has m observations with noise variance σ²m/n. The increased variance is necessary because the columns of W^m have expected squared length m, while the columns of U have length 1. We will think of n as fixed and will let m → ∞. Let the penalty parameter bounds ratio induced by the problem in Eq. (19) be λ_u/λ_l and that induced by Eq. (25) be λ_u^m/λ_l^m. Then we have the following result.

Theorem 4. Let V_S, V_{S^c}, Σ_S, Σ_{S^c} and β* be fixed. If the conditions of Lemma 1 hold for X, β*, then for m large enough they will hold for X^m, β*. Furthermore, as m → ∞,

λ_u^m / λ_l^m →_d λ_u / λ_l, (26)

where the stochasticity on the left is due to W^m, w^m and on the right is due to w. Thus, with respect to the bounds ratio λ_u/λ_l, the construction of Eq. (19) can be thought of as the limiting construction of Gaussian Lasso problems in Eq. (25) for large m. As such, we believe that Eq. (19) is a useful proxy for less restrictive generative models. Indeed, as the experiment in Figure 2(b) shows, Theorem 3 can be used to predict the scaling factor for penalty parameter bounds ratios (i.e., (λ̄_u/λ̄_l) / (λ_u/λ_l)) with good accuracy even for Gaussian ensembles.
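Closing the loop on Section 5, Theorem 3's deterministic scaling can be checked numerically: generate one instance of Eq. (19) with the piecewise constant spectra of Section 5.2, whiten with JR, recompute the Lemma 1 bounds, and compare the change in the ratio to (p − k)/(n + pκ² − k). The following is our own sketch (with the bound computation of Eq. (10) inlined and the support fixed to the first k columns), not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k, kappa, sigma = 40, 120, 2, 0.1, 0.01
assert kappa < np.sqrt(n - k) / np.sqrt(k * (p - k - 1))   # condition of Theorem 3

def haar(dim):
    Q, R = np.linalg.qr(rng.standard_normal((dim, dim)))
    return Q * np.sign(np.diag(R))

# Piecewise constant spectra of Section 5.2, normalized as in Eq. (20).
s_S = np.ones(k)
s_Sc = np.concatenate([kappa * np.ones(k), np.ones(n - k)])
SS = np.zeros((n, k))
SS[:k, :k] = np.diag(s_S / np.linalg.norm(s_S) * np.sqrt(k * n))
SSc = np.zeros((n, p - k))
SSc[:n, :n] = np.diag(s_Sc / np.linalg.norm(s_Sc) * np.sqrt((p - k) * n))
X = haar(n) @ np.hstack([SS @ haar(k).T, SSc @ haar(p - k).T])   # Eq. (19)
beta = np.zeros(p)
beta[:k] = 1.0
w = sigma * rng.standard_normal(n)

def bounds(Xa, wa):
    """(lambda_l, lambda_u) of Eq. (10) with S = {0, ..., k-1}."""
    XS, XSc = Xa[:, :k], Xa[:, k:]
    sgn = np.ones(k)
    inv = np.linalg.inv(XS.T @ XS)
    mu = XSc.T @ XS @ inv @ sgn
    eta = XSc.T @ (np.eye(n) - XS @ inv @ XS.T) @ wa / n
    gamma = n * inv @ sgn
    eps = inv @ XS.T @ wa
    lam_l = np.max(eta / ((2 * (eta > 0) - 1) - mu))
    lam_u = np.min(np.maximum(0.0, (beta[:k] + eps) / gamma))
    return lam_l, lam_u

# JR: P = U (D D^T)^{-1/2} U^T; since P_X = P_y, the induced noise is P w.
Ux, sx, _ = np.linalg.svd(X, full_matrices=False)
P = Ux @ np.diag(1.0 / sx) @ Ux.T
lam_l, lam_u = bounds(X, w)
lam_lb, lam_ub = bounds(P @ X, P @ w)
predicted = (p - k) / (n + p * kappa**2 - k)               # Eq. (22)
observed = (lam_ub / lam_lb) / (lam_u / lam_l)
```

On such instances the equalities (23)-(24) hold exactly, not just asymptotically, so `observed` matches `predicted` up to floating-point error.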
6 Conclusions

This paper proposes a new framework for comparing Preconditioned Lasso algorithms to the standard Lasso which skirts the difficulty of choosing penalty parameters. By eliminating this parameter from consideration, finite data comparisons can be greatly simplified, avoiding the use of model selection strategies. To demonstrate the framework's usefulness, we applied it to a number of Preconditioned Lasso algorithms and in the process confirmed intuitions and revealed fragilities and mitigation strategies. Additionally, we presented an SVD-based generative model for Lasso problems that can be thought of as the limit point of a less restrictive Gaussian model. We believe this work to be a first step towards a comprehensive theory for evaluating and comparing Lasso-style algorithms, and believe that the strategy can be extended to comparing other penalized likelihood methods on finite datasets.

References

[1] D.L. Donoho, M. Elad, and V.N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, 52(1):6–18, 2006.
[2] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96:1348–1360, 2001.
[3] J.J. Fuchs. Recovery of exact sparse representations in the presence of bounded noise. IEEE Transactions on Information Theory, 51(10):3601–3608, 2005.
[4] H.-C. Huang, N.-J. Hsu, D.M. Theobald, and F.J. Breidt. Spatial Lasso with applications to GIS model selection. Journal of Computational and Graphical Statistics, 19(4):963–983, 2010.
[5] J.C. Huang and N. Jojic. Variable selection through Correlation Sifting. In V. Bafna and S.C. Sahinalp, editors, RECOMB, volume 6577 of Lecture Notes in Computer Science, pages 106–123. Springer, 2011.
[6] J. Jia and K. Rohe. "Preconditioning" to comply with the irrepresentable condition. 2012.
[7] N. Meinshausen. Lasso with relaxation.
Technical Report 129, Eidgenössische Technische Hochschule, Zürich, 2005.
[8] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34(3):1436–1462, 2006.
[9] D. Paul, E. Bair, T. Hastie, and R. Tibshirani. "Preconditioning" for feature selection and regression in high-dimensional problems. Annals of Statistics, 36(4):1595–1618, 2008.
[10] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1994.
[11] R.J. Tibshirani. The solution path of the Generalized Lasso. Stanford University, 2011.
[12] M.J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
[13] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
[14] H. Zou. The Adaptive Lasso and its oracle properties. Journal of the American Statistical Association, 101:1418–1429, 2006.
[15] H. Zou and T. Hastie. Regularization and variable selection via the Elastic Net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.
| 2013 | 144 |
| 4,870 |
Universal models for binary spike patterns using centered Dirichlet processes Il Memming Park123, Evan Archer24, Kenneth Latimer12, Jonathan W. Pillow1234 1. Institute for Neuroscience, 2. Center for Perceptual Systems, 3. Department of Psychology 4. Division of Statistics & Scientific Computation The University of Texas at Austin {memming@austin., earcher@, latimerk@, pillow@mail.} utexas.edu Abstract Probabilistic models for binary spike patterns provide a powerful tool for understanding the statistical dependencies in large-scale neural recordings. Maximum entropy (or "maxent") models, which seek to explain dependencies in terms of low-order interactions between neurons, have enjoyed remarkable success in modeling such patterns, particularly for small groups of neurons. However, these models are computationally intractable for large populations, and low-order maxent models have been shown to be inadequate for some datasets. To overcome these limitations, we propose a family of "universal" models for binary spike patterns, where universality refers to the ability to model arbitrary distributions over all 2^m binary patterns. We construct universal models using a Dirichlet process centered on a well-behaved parametric base measure, which naturally combines the flexibility of a histogram and the parsimony of a parametric model. We derive computationally efficient inference methods using Bernoulli and cascaded logistic base measures, which scale tractably to large populations. We also establish a condition for equivalence between the cascaded logistic and the 2nd-order maxent or "Ising" model, making cascaded logistic a reasonable choice for base measure in a universal model. We illustrate the performance of these models using neural data. 1 Introduction Probability distributions over spike words form the fundamental building blocks of the neural code.
Accurate estimates of these distributions are difficult to obtain in the context of modern experimental techniques, which make it possible to record the simultaneous spiking activity of hundreds of neurons. These difficulties, both computational and statistical, arise fundamentally from the exponential scaling (in population size) of the number of possible words a given population is capable of expressing. One strategy for combating this combinatorial explosion is to introduce a parametric model which seeks to trade off flexibility, computational expense [1, 2], or mathematical completeness [3] in order to be applicable to large-scale neural recordings. A variety of parametric models have been proposed in the literature, including the 2nd-order maxent or Ising model [4, 5], the reliable interaction model [3], the restricted Boltzmann machine [6], deep learning [7], the mixture of Bernoulli model [8], and the dichotomized Gaussian model [9]. However, while the number of parameters in a model chosen from a given parametric family may increase with the number of neurons, it cannot increase exponentially with the number of words. Thus, as the size of a population increases, a parametric model rapidly loses flexibility in describing the full spike distribution. In contrast, nonparametric models allow flexibility to grow with the amount of data [10, 11, 12, 13, 14]. A naive nonparametric model, such as the histogram of spike words, theoretically preserves representational power and computational simplicity. Yet in practice, the empirical histogram may be extremely slow to converge, especially for the high-dimensional data we are primarily interested in.

[Figure 1 panels omitted; panel A depicts m neurons over time, panels C and D depict the independent Bernoulli and cascaded logistic base measures.] Figure 1: (A) Binary representation of neural population activity. A single spike word x is indicated in red. (B) Hierarchical Dirichlet process prior for the universal binary model (UBM) over spike words. Each word is drawn with probability π_j. The π's are drawn from a Dirichlet with parameters given by α and a base distribution over spike words with parameter θ. (C, D) Graphical models of two base measures over spike words: the independent Bernoulli model and the cascaded logistic model. The base measure is also a distribution over each spike word x = (x_1, . . . , x_m).

In most cases, we expect never to have enough data for the empirical histogram to converge. Perhaps even more concerning is that a naive histogram model fails to smooth over the space of words: unobserved words are not accounted for in the model. We propose a framework which combines the parsimony of parametric models with the flexibility of nonparametric models. We model the spike word distribution as a Dirichlet process centered on a parametric base measure. An appropriately chosen base measure smooths the observations, while the Dirichlet process allows for data that depart systematically from the base measure. These models are universal in the sense that they can converge to any distribution supported on the (2^m − 1)-dimensional simplex. The influence of any base measure diminishes with increasing sample size, and the model ultimately converges to the empirical distribution function. The choice of base measure influences the small-sample behavior and computational tractability of universal models, both of which are crucial for neural applications. We consider two base measures that exploit a priori knowledge about neural data while remaining computationally tractable for large populations: the independent Bernoulli spiking model, and the cascaded logistic model [15, 16]. Both the Bernoulli and cascaded logistic models show better performance when used as a base measure for a universal model than when used alone. We apply these models to several simulated and neural data examples.
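The role of "centering" can be previewed with a small simulation: for a discrete base distribution g over K words, a draw π ∼ Dir(αg) has mean g for every concentration α, so the base measure fixes the prior mean while α controls how far π may stray from it. A Monte Carlo sketch (our illustration; the Dir(αg) form of the prior is derived in Section 2):

```python
import numpy as np

rng = np.random.default_rng(0)
K, alpha = 8, 5.0                      # e.g. K = 2^m words for m = 3 neurons
g = rng.dirichlet(np.ones(K))          # an arbitrary base measure over words
# Draws pi ~ Dir(alpha * g_1, ..., alpha * g_K), one row per draw.
pis = rng.dirichlet(alpha * g, size=200_000)
mean_pi = pis.mean(axis=0)             # Monte Carlo estimate of E[pi | alpha]
```

The estimate `mean_pi` recovers `g` regardless of the value of `alpha`; only the spread of the draws around `g` depends on the concentration.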
2 Universal binary model

Consider a (random) binary spike word of length m, x ∈ {0, 1}^m, where m denotes the number of distinct neurons (and/or time bins; Fig. 1A). There are K = 2^m possible words, which we index by k ∈ {1, . . . , K}. The universal binary model is a hierarchical probabilistic model where on the bottom level (Fig. 1B), x is drawn from a multinomial (categorical) distribution with the probability of observing each word given by the vector π (the spike word distribution). On the top level, we model π as a Dirichlet process [11] with a discrete base measure Gθ; hence,

x ∼ Cat(π),  π ∼ DP(α Gθ),  θ ∼ p(θ|λ), (1)

where α is the concentration parameter, Gθ is the base measure, a discrete probability distribution over spike words parameterized by θ, and p(θ|λ) is the hyper-prior. We choose a discrete probability measure for Gθ such that it has positive measure only over {1, . . . , K}, and denote g_k = Gθ(k). Thus, the Dirichlet process has probability mass only on the K spike words, and is described by a (finite-dimensional) Dirichlet distribution,

π ∼ Dir(αg_1, . . . , αg_K). (2)

In the absence of data, the parametric base measure controls the mean of this nonparametric model,

E[π|α] = Gθ, (3)

regardless of α. Therefore, we loosely say that π is "centered" around Gθ.¹ We can start with good parametric models of neural populations, and extend them into a nonparametric model by using them as the base measure [17]. Under this scheme, the base measure quickly learns much of the basic structure of the data while the Dirichlet extension takes into account any deviations in the data which are not predicted by the parametric component. We call such an extension a universal binary model (UBM) with base measure Gθ. The marginal distribution of a collection of words X = {x_i}_{i=1}^{N} under the UBM is obtained by integrating over π, and has the form of a Polya (a.k.a.
Dirichlet-Multinomial) distribution:

P(X|α, Gθ) = [Γ(α) / Γ(N + α)] ∏_{k=1}^{K} Γ(n_k + αg_k) / Γ(αg_k), (4)

where n_k is the number of observations of the word k. This leads to a simple formula for sampling from the predictive distribution over words:

Pr(x_{N+1} = k | X_N, α, Gθ) = (n_k + αg_k) / (N + α). (5)

Thus, sampling proceeds exactly as in the Chinese restaurant process (CRP): we set the (N + 1)-th word to be k with probability proportional to n_k + αg_k, and with probability proportional to α we draw a new word from Gθ (which in turn increases the probability of getting word k on the next draw). Note that as α → 0, the predictive distribution converges to the histogram estimate n_k/N, and as α → ∞, it converges to the base measure itself. We use the Jensen-Shannon divergence to the predictive distribution to quantify the performance in our experiments.

2.1 Model fitting

Given data, we fit the UBM via maximum a posteriori (MAP) inference for α and θ, using coordinate ascent. The marginal log-likelihood from (4) is given by

L = log P(X_N|α, θ) = Σ_k log Γ(n_k + αg_k) − Σ_k log Γ(αg_k) + log Γ(α) − log Γ(N + α). (6)

Derivatives with respect to θ and α are

∂L/∂θ = α Σ_k (ψ(n_k + αg_k) − ψ(αg_k)) ∂g_k/∂θ, (7)

∂L/∂α = Σ_k g_k (ψ(n_k + αg_k) − ψ(αg_k)) + ψ(α) − ψ(N + α), (8)

where ψ denotes the digamma function. Note that the summation terms vanish when we have no observations (n_k = 0), so we only need to consider the words observed in the dataset. Note also that in the limit α → ∞, ∂L/∂θ converges to Σ_k (n_k/g_k) ∂g_k/∂θ, the derivative of the logarithm of the base measure likelihood with respect to θ. On the other hand, in the limit α → 0, the derivative goes to Σ_{k: n_k>0} (1/g_k) ∂g_k/∂θ, reflecting the fact that the number of observations n_k is ignored: the likelihood effectively reflects only a single draw from the base distribution with probability g_k. Even when the likelihood defined by the base measure is convex or log-convex in θ, the UBM likelihood is not guaranteed to be convex.
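Concretely, the Polya marginal likelihood (4)/(6) and the predictive rule (5) take only a few lines of code. The sketch below is our own illustration (variable names are ours, not the authors'):

```python
import math
from collections import Counter

def ubm_log_marginal(words, alpha, g):
    """Marginal log-likelihood of eqs. (4)/(6).
    words: list of observed word indices k; g[k]: base-measure prob of word k."""
    counts = Counter(words)
    N = len(words)
    ll = math.lgamma(alpha) - math.lgamma(N + alpha)
    # Terms with n_k = 0 cancel, so only observed words contribute.
    for k, nk in counts.items():
        ll += math.lgamma(nk + alpha * g[k]) - math.lgamma(alpha * g[k])
    return ll

def ubm_predictive(words, alpha, g):
    """Predictive distribution of eq. (5): Pr(x_{N+1} = k | data)."""
    counts = Counter(words)
    N = len(words)
    return [(counts.get(k, 0) + alpha * g[k]) / (N + alpha)
            for k in range(len(g))]
```

As the text notes, taking alpha very small recovers the empirical histogram n_k/N, while taking alpha very large recovers the base measure g itself.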
Hence, we optimize by a coordinate ascent procedure that alternates between optimizing α and θ.

2.2 Hyper-prior

When modeling large populations of neurons, the number of parameters θ of the base measure grows and over-fitting becomes a concern. Since the UBM relies on the base measure to provide smoothing over words, it is critical to properly regularize our estimate of θ.

¹ Technically, the mode of π is Gθ only for α ≥ 1; for α < 1, the distribution is symmetric around Gθ, but the probability mass is concentrated on the corners of the simplex.

We place a hyper-prior p(θ|λ) on θ for regularization. We consider both l2 and l1 regularization, which correspond to Gaussian and double-exponential priors, respectively. With regularization, the loss function for optimization is L − λ‖θ‖_p^p, where p = 1, 2. In a typical multi-neuron recording, the connectivity is known to be sparse and low-order [1, 3], and so we assume the connectivity is sparse; the l1 prior in particular promotes sparsity.

3 Base measures

The scalability of the UBM hinges on the scalability of its base measure. We describe two computationally efficient base measures.

3.1 Independent Bernoulli model

We consider the independent Bernoulli model, which assumes (statistically) independent spiking neurons. It is often used as a baseline model for its simplicity [4, 3]. The Bernoulli base measure takes the form

Gθ(k) = p(x_1, . . . , x_m|θ) = ∏_{i=1}^{m} p_i^{x_i} (1 − p_i)^{1−x_i}, (9)

where p_i ≥ 0 and θ = (p_1, . . . , p_m). The distribution has full support on all K spike words as long as every p_i lies strictly between 0 and 1. Although the Bernoulli model cannot capture the higher-order correlation structure of the spike word distribution with only m parameters, inference is fast and memory-efficient.

3.2 Cascaded logistic model

To introduce a rich dependence structure among the neurons, we assume the joint firing probability factors with a cascaded structure (see Fig. 1D):

p(x_1, x_2, . . .
, x_m) = p(x_1) p(x_2|x_1) p(x_3|x_1, x_2) ⋯ p(x_m|x_1, x_2, . . . , x_{m−1}). (10)

Along with a parametric form of the conditional distribution p(x_i|x_1, . . . , x_{i−1}), this provides a probabilistic model of spike words. A natural choice of conditional is the logistic-Bernoulli linear model, a widely used model for binary observations [2]:

p(x_i = 1|x_{1:i−1}, θ) = logistic(h_i + Σ_{j<i} w_ij x_j), (11)

where θ = (h_i, w_ij)_{i, j<i} are the parameters. The combination of the factorization and the likelihoods gives rise to the cascaded logistic (Bernoulli) model², which can be written as

Gθ(k) = p(x_1, . . . , x_m|θ) = ∏_{i=1}^{m} p(x_i|x_{1:i−1}), (12)

p(x_i|x_{1:i−1}, θ) = [1 + exp(−(2x_i − 1)(h_i + Σ_{j=1}^{i−1} w_ij x_j))]^{−1}. (13)

The cascaded logistic model and the Ising model (the second-order maxent model) have the same number of parameters, m(m+1)/2, but a different parametric form. The Ising model can be written as³

p(x_1, . . . , x_m|θ) = (1/Z(J)) exp(Σ_{i≤j} J_ij x_i x_j), (14)

where θ = J is an upper triangular matrix of parameters, and Z(J) is the normalizer. However, unlike the cascaded logistic model, it is difficult to evaluate the likelihood of the Ising model, since it does not have a computationally tractable normalizer (partition function). Hence, fitting an Ising model is typically challenging. Since each conditional can be independently fit with a logistic regression (a

Figure 2: Tight relation between the cascaded logistic model and the Ising model. (A) A cascaded logistic model depicted as a graphical model with at most two conditioning inputs (incoming arrows) per node (see Theorem 2).
The h_i parameters are given in the nodes, and the interaction terms w_ij are shown on the arrows between nodes. (B) Parameter matrix J of an Ising model equivalent to (A). (C) A scatter plot of three simulated Ising models fit with cascaded logistic (blue tones) and independent Bernoulli (red tones) models. Each point is a word in the m = 15 spike word space. The x-axis gives the probability of the word under the actual Ising model, and the y-axis shows the estimated probability from the fit model. The Ising model parameters were sparsely connected and generated randomly. The diagonal terms (J_ii) were drawn from a standard normal. 80% of the off-diagonal terms (J_ij, i ≠ j) were set to 0 and the rest drawn from a normal with mean 0 and standard deviation 3. Both models were fit by maximum likelihood using 10^7 samples. (D) A histogram of the Jensen-Shannon (JS) divergence between 100 random pairs of sparse Ising models and the fit models. (E, F) Same as (C, D) for Ising models generated with dense connectivity. The diagonal terms in the Ising model parameters were constant −2. The off-diagonal terms were drawn from a standard normal distribution.

convex optimization), the cascaded logistic model's estimation is computationally tractable for a large number of neurons [2]. Despite these differences, remarkably, the Ising model and the cascaded logistic model overlap substantially. Up to m = 3 neurons, the Ising model and the cascaded logistic model are equivalent. For larger populations, the following theorem describes the intersection of the two models.

Theorem 1 (Pentadiagonal Ising model is a cascaded logistic model). An Ising model with J_ij = 0 for j < i − 2 or j > i + 2 is also a cascaded logistic model. Moreover, the parameter transformation is bijective.
The mapping between the models' parameters is given by

J_{m,m} = h_m, (15)
J_{m−1,m} = w_{m,m−1}, (16)
J_{m−1,m−1} = h_{m−1} + log[(1 + exp(h_m)) / (1 + exp(h_m + w_{m,m−1}))], (17)
J_{i,i} = h_i + log[(1 + exp(h_{i+1})) / (1 + exp(h_{i+1} + w_{i+1,i}))] + log[(1 + exp(h_{i+2})) / (1 + exp(h_{i+2} + w_{i+2,i}))], (18)
J_{i,i+1} = w_{i+1,i} + log[(1 + exp(h_{i+2} + w_{i+2,i}))(1 + exp(h_{i+2} + w_{i+2,i+1})) / ((1 + exp(h_{i+2}))(1 + exp(h_{i+2} + w_{i+2,i+1} + w_{i+2,i})))], (19)
J_{i,i+2} = w_{i+2,i}, (20)

for 1 ≤ i ≤ m − 2, for a symmetric J. The proof can be found in the supplemental material.

² Also known as the logistic autoregressive network; see [15], chapter 3.2.
³ Note that for x_i ∈ {0, 1}, the h_i terms can be incorporated as the diagonal of J.

Figure 3: 3rd-order maxent distribution experiment. (A) Convergence in Jensen-Shannon (JS) divergence between the fit model and the true model. Error bars represent SEM over 10 repeats. (B) Histogram of the number of spikes per word. (C) Scatter plots of the log-likelihood ratio log(P_emp(k)) − log(P_model(k)) for each model (column), and two sample sizes of N = 1000 and N = 100000 (rows). Note the scale difference on the y-axes. Error lines represent twice the standard deviation over 10 repeats. Shaded areas represent the frequentist 95% confidence interval for the histogram estimator assuming the same amount of data. The number on the bottom right is the JS divergence.

Unlike the Ising model, the order of the neurons plays a role in the formulation of the cascaded logistic model. Since a permutation of a pentadiagonal matrix is not necessarily pentadiagonal, this poses a potential challenge to the application of this equivalence. However, the Cuthill-McKee algorithm can be used as a heuristic to find a permutation of J with the lowest bandwidth (i.e., closest to pentadiagonal) [18].
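Theorem 1 is easy to check numerically. The sketch below (our own Python, not the authors' MATLAB code) implements the cascaded logistic likelihood of eqs. (12)-(13), the Ising likelihood of eq. (14) with a brute-force partition function, and the mapping (15)-(20); for a pentadiagonal model the two distributions agree exactly:

```python
import itertools
import math

def logistic(z):
    # Numerically fine for the moderate z used here.
    return 1.0 / (1.0 + math.exp(-z))

def cascaded_logistic_prob(x, h, w):
    """p(x) under the cascaded logistic model, eqs. (12)-(13).
    h[i] is the bias of neuron i; w[i][j] (j < i) its coupling to x_j."""
    p = 1.0
    for i in range(len(x)):
        z = h[i] + sum(w[i][j] * x[j] for j in range(i))
        p *= logistic(z) if x[i] == 1 else 1.0 - logistic(z)
    return p

def ising_prob(x, J):
    """p(x) under eq. (14) with upper-triangular J (diagonal = biases),
    normalized by brute-force enumeration of all 2^m words."""
    m = len(x)
    def energy(y):
        return sum(J[i][j] * y[i] * y[j] for i in range(m) for j in range(i, m))
    Z = sum(math.exp(energy(y)) for y in itertools.product([0, 1], repeat=m))
    return math.exp(energy(x)) / Z

def pentadiagonal_ising_params(h, w):
    """Eqs. (15)-(20): map cascaded-logistic parameters (nonzero w[i][j]
    only for j in {i-1, i-2}) to an equivalent pentadiagonal Ising J.
    Indices are 0-based here; the paper's equations are 1-based."""
    m = len(h)
    J = [[0.0] * m for _ in range(m)]
    def r(a, b):  # log((1 + e^a) / (1 + e^b))
        return math.log1p(math.exp(a)) - math.log1p(math.exp(b))
    J[m-1][m-1] = h[m-1]                                     # eq. (15)
    J[m-2][m-1] = w[m-1][m-2]                                # eq. (16)
    J[m-2][m-2] = h[m-2] + r(h[m-1], h[m-1] + w[m-1][m-2])   # eq. (17)
    for i in range(m - 2):                                   # paper: 1 <= i <= m-2
        J[i][i] = (h[i] + r(h[i+1], h[i+1] + w[i+1][i])
                        + r(h[i+2], h[i+2] + w[i+2][i]))     # eq. (18)
        J[i][i+1] = w[i+1][i] + (                            # eq. (19)
            math.log1p(math.exp(h[i+2] + w[i+2][i]))
            + math.log1p(math.exp(h[i+2] + w[i+2][i+1]))
            - math.log1p(math.exp(h[i+2]))
            - math.log1p(math.exp(h[i+2] + w[i+2][i+1] + w[i+2][i])))
        J[i][i+2] = w[i+2][i]                                # eq. (20)
    return J
```

Enumerating all 2^m words and comparing the two likelihoods for any pentadiagonal parameter setting reproduces the bijection claimed by the theorem.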
This theorem can be generalized to sparse, structured cascaded logistic models.

Theorem 2 (Intersection between cascaded logistic model and Ising model). A cascaded logistic model with at most two interactions with other neurons is also an Ising model.

For example, a cascaded logistic model with the sparse cascade p(x_1)p(x_2|x_1)p(x_3|x_1)p(x_4|x_1, x_3)p(x_5|x_2, x_4) is an Ising model (Fig. 2A).⁴ We remark that although the cascaded logistic model can be written in exponential-family form, it does not correspond to a simple family of maximum entropy models in general. The theorems show that only a subset of Ising models are equivalent to cascaded logistic models. However, cascaded logistic models generally provide good approximations to the Ising model. We demonstrate this by drawing random Ising models (both with sparse and dense pairwise coupling J), and then fitting with a cascaded logistic model (Fig. 2C-F). Since Ising models are widely accepted as effective models of neural populations, the cascaded logistic model presents a computationally tractable alternative.

4 Simulations

We compare two parametric models (the independent Bernoulli and the cascaded logistic model) with three nonparametric models (two universal binary models centered on the parametric models, and a naive histogram estimator) on simulated data with 15 neurons. We find the MAP solution as the parameter estimate for each model. We use l1 regularization to fit the cascaded logistic model and the corresponding UBM. The l1 regularizer λ was selected by scanning over a grid until the cross-validation likelihood, evaluated on 10% of the training data, started decreasing. In Fig. 3, we simulate a maximum entropy (maxent) distribution with a third-order interaction. As the number of samples increases, the Jensen-Shannon (JS) divergence between the estimated model and the true maxent model decreases exponentially for the nonparametric models.
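For reference, the Jensen-Shannon divergence used as the error metric throughout these comparisons can be computed as follows (a minimal sketch; the paper does not state the log base, so base 2 — giving a value in bits, bounded by 1 — is our choice):

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two distributions given as
    equal-length probability lists (base-2 logs, so the result is in bits)."""
    def kl(a, b):
        # Terms with a_i = 0 contribute nothing; b_i > 0 whenever a_i > 0
        # here because b is the mixture m below.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

The JS divergence is symmetric and always finite, which is why it is convenient for comparing an estimated word distribution with a sparse empirical histogram.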
The JS-divergence of the parametric models saturates, since the actual distribution does not lie within the same parametric family. The cascaded logistic model and the UBM centered on it show the best performance in the small-sample regime, but eventually the other nonparametric models catch up with the cascaded logistic model. The scatter plot (Fig. 3C) displays the log-likelihood ratio log(P_true) − log(P_model) to quantify the accuracy of the predictive distribution. Where significant deviations from the base measure model can be observed in Fig. 3C, the corresponding UBM adapts to account for those deviations.

⁴ We provide MATLAB code to convert back and forth between a subset of Ising models and the corresponding subset of cascaded logistic models (see online supplemental material).

Figure 4: Synchrony histogram model. Each word with the same total number of spikes, regardless of neuron identity, has the same probability. Neither the Bernoulli nor the cascaded logistic model provides a good approximation in this case, and both saturate in terms of JS divergence. Same format as Fig. 3.

Figure 5: Ising model with 1-D nearest-neighbor interaction. Same format as Fig. 3. Note that the cascaded logistic model and the UBM with cascaded logistic base measure perform almost identically, and their convergence does not saturate (as expected from Theorem 1).

In Fig. 4, we draw samples from a distribution with higher-order dependencies; each word with the same total number of spikes is assigned the same probability.
For example, words with exactly 10 neurons spiking (and 5 not spiking, out of 15 neurons) occur with high probability, as can be seen from the histogram of the total spikes (Fig. 4B). Neither the Bernoulli model nor the cascaded logistic model can capture this structure accurately, as indicated by a plateau in the convergence plots (Fig. 4A,C). In this case, all three nonparametric models behave similarly: both UBMs converge with the histogram. In addition, we see that if the data comes from the model class assumed by the base measure, then the UBM is just as good as the base measure alone (Fig. 5). Together, these results suggest that the UBM supplements the base measure to flexibly model the observed firing patterns, and performs at least as well as the histogram in the worst case.

Figure 6: Various models fit to a population of ten retinal ganglion neurons' responses to a naturalistic movie [3]. Words consisted of 20 ms, binarized responses. 1 × 10^5 samples were reserved for testing. (A) JS divergence between the estimated model and a histogram constructed from the test data. The Ising model is included, and its trace is closely followed by the cascaded logistic model. (B) Histogram of the number of spikes per word. (C) Log-likelihood ratio scatter plot for the models trained with 10^5 randomized observations. (D) The concentration parameter α as a function of sample size.

5 Neural data

We apply UBMs to a simultaneously recorded population of 10 retinal ganglion cells, and compare to the Ising model. In Fig. 6A we evaluate the convergence of each model.
Three models—the cascaded logistic model, its corresponding UBM, and the Ising model—initially perform similarly; however, as more data is provided, the UBM predicts the probabilities better. In panel C, we confirm that the cascaded logistic UBM gives the best fit. The decrease in the corresponding α, shown in panel D, indicates that the cascaded logistic UBM becomes less confident that the data is from an actual cascaded logistic model as we obtain more data.

6 Conclusion

We proposed universal binary models (UBMs), a nonparametric framework that extends parametric models of neural recordings. UBMs flexibly trade off between smoothing from the base measure and "histogram-like" behavior. The Dirichlet process can incorporate deviations from the base measure when supported by the data, even as the base measure buttresses the nonparametric approach with desirable properties of parametric models, such as fast convergence and interpretability. Unlike the reliable interaction model [3], which aims to provide the same features in a heuristic manner, the UBM is a well-defined probabilistic model. Since the main source of smoothing is the base measure, the UBM's ability to extrapolate is limited to repeatedly observed words. However, the UBM is capable of adjusting the probabilities of the most frequent words, focusing its fit on the regularities of small-probability events. We proposed the cascaded logistic model for use as a powerful, but still computationally tractable, base measure. We showed, both theoretically and empirically, that the cascaded logistic model is an effective, scalable alternative to the Ising model, which is usually limited to smaller populations. The UBM model class has the potential to reveal complex structure in large-scale recordings without the limitations of a priori parametric assumptions.

Acknowledgments

We thank R. Segev and E. Ganmor for the retinal data.
This work was supported by a Sloan Research Fellowship, McKnight Scholar's Award, and NSF CAREER Award IIS-1150186 (JP).

References

[1] I. E. Ohiorhenuan, F. Mechler, K. P. Purpura, A. M. Schmid, Q. Hu, and J. D. Victor. Sparse coding and high-order correlations in fine-scale cortical networks. Nature, 466(7306):617–621, July 2010.
[2] P. Ravikumar, M. Wainwright, and J. Lafferty. High-dimensional Ising model selection using L1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
[3] E. Ganmor, R. Segev, and E. Schneidman. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences, 108(23):9679–9684, 2011.
[4] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, Apr 2006.
[5] J. Shlens, G. Field, J. Gauthier, M. Grivich, D. Petrusca, A. Sher, L. A. M., and E. J. Chichilnisky. The structure of multi-neuron firing patterns in primate retina. J Neurosci, 26:8254–8266, 2006.
[6] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pages 194–281. MIT Press, Cambridge, MA, USA, 1986.
[7] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[8] G. J. McLachlan and D. Peel. Finite Mixture Models. Wiley, 2000.
[9] M. Bethge and P. Berens. Near-maximum entropy models for binary neural representations of natural images. Advances in Neural Information Processing Systems, 20:97–104, 2008.
[10] P. Müller and F. A. Quintana. Nonparametric Bayesian data analysis. Statistical Science, 19(1):95–110, 2004.
[11] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes.
Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[12] W. Truccolo and J. P. Donoghue. Nonparametric modeling of neural point processes via stochastic gradient boosting regression. Neural Computation, 19(3):672–705, 2007.
[13] R. P. Adams, I. Murray, and D. J. C. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, New York, NY, USA, 2009.
[14] A. Kottas, S. Behseta, D. E. Moorman, V. Poynor, and C. R. Olson. Bayesian nonparametric analysis of neuronal intensity rates. Journal of Neuroscience Methods, 203(1):241–253, January 2012.
[15] B. J. Frey. Graphical Models for Machine Learning and Digital Communication. MIT Press, 1998.
[16] M. Pachitariu, B. Petreska, and M. Sahani. Recurrent linear models of simultaneously-recorded neural populations. Advances in Neural Information Processing Systems (NIPS), 2013.
[17] E. Archer, I. M. Park, and J. W. Pillow. Bayesian entropy estimation for binary spike train data using parametric prior knowledge. In Advances in Neural Information Processing Systems (NIPS), 2013.
[18] E. Cuthill and J. McKee. Reducing the bandwidth of sparse symmetric matrices. In Proceedings of the 1969 24th National Conference, ACM '69, pages 157–172, New York, NY, USA, 1969. ACM.
What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach

Zhenwen Dai
University of Sheffield, UK, and FIAS, Goethe-University Frankfurt, Germany
z.dai@sheffield.ac.uk

Georgios Exarchakis
Redwood Center for Theoretical Neuroscience, The University of California, Berkeley, US
exarchakis@berkeley.edu

Jörg Lücke
Cluster of Excellence Hearing4all, University of Oldenburg, Germany, and BCCN Berlin, Technical University Berlin, Germany
joerg.luecke@uni-oldenburg.de

Abstract

We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding to patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions, which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very differently from linear models, by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components.
By using reverse correlation, we estimate the receptive fields associated with the model's hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex.

Figure 1: An illustration of the generation process of our model.

1 Introduction

Probabilistic generative models are used to mathematically formulate the generation process of observed data. Based on a good probabilistic model of the data, we can infer the processes that have generated a given data point, i.e., we can estimate the hidden causes of the generation. These hidden causes are usually the objects we want to infer knowledge about, be it for medical data, biological processes, or sensory data such as acoustic or visual data. However, real data are usually very complex, which makes the formulation of an exact data model infeasible. Image data are a typical example of such complex data. The true generation process of images involves, for instance, different objects with different features at different positions, mutual occlusions, object shades, lighting
However, models incorporating both occlusions and invariances suffer from a very pronounced combinatorial complexity. They could, so far, only be trained with very low dimensional hidden spaces [2, 14, 15]. At first glance, occlusion modeling is, furthermore, mathematically more inconvenient. For these reasons, many studies including style and content models [16], other bi-linear models [17, 18], invariant sparse coding [19, 20], or invariant NMF [21] do not model occlusions. Analytical and computation reasons are often explicitly stated as the main motivation for the use of the linear superposition of components (see, e.g., [16, 17]). In this work, we for the first time study the encoding of natural image patches using a model with both non-linear feature combinations and translation invariances. 2 A Generative Model with Non-linear and Invariant Components The model used to study image patch encoding assumes an exclusive component combination, i.e., for each pixel exclusively one cause is made responsible. It thus shares the property of exclusiveness with visual occlusions. The model will later be shown to capture occluding components. We will, however, not model explicit occlusion using a depth variable (compare [2]) but will focus on the exclusiveness property. The applied model is a novel version of the invariant occlusive components model studied for mid-level vision earlier [22]. We first briefly reiterate the basic model in the following and discuss the main differences of the new version afterwards. We consider image patches ⃗y with D2 observed scalar variables, ⃗y = (y1, . . . , yD2). An image patch is assumed to contain a subset from a set of H components. Each component h can be located at a different position denoted by an index variable xh ∈{1, . . . , D2}, which is associated with a set of permutation matrices that covers all the possible planar translations {T1, . . . , TD2} (similar formulations have also been used in sprite models [14, 15]). 
Each component h is modeled to appear in an image patch with probability π_h ∈ (0, 1). Following [22], we do not model component presence and absence explicitly but, for mathematical convenience, assign the special 'position' −1 to all the components which are not chosen to generate the patch. Assuming a uniform distribution for the positions, the prior distribution for components and their positions is thus given by:

p(⃗x | ⃗π) = ∏_h p(x_h | π_h), where p(x_h | π_h) = 1 − π_h for x_h = −1, and π_h / D² otherwise, (1)

where the hidden variable ⃗x = (x_1, . . . , x_H) contains the information on presence/absence and position of all the image components. In contrast to linear models, the studied approach requires two sets of parameters for the encoding of image components: component masks and component features. Component masks describe where an image component is located, and component features describe what a component encodes (compare [2, 3, 14, 15]). High values of the mask parameters ⃗α_h encode the pixels most associated with a component h, but the encoding has to be understood relative to a global component position. The feature parameters ⃗w_h encode the values of a component's features. Fig. 1 shows an example of the mask and feature parameters for two typical low-level visual features. Given a particular position, the mask and feature parameters of the component are transformed to the target position by multiplication with the corresponding translation matrix, i.e., T_{x_h} ⃗α_h and T_{x_h} ⃗w_h. When generating an image patch, two or more components may occupy the same pixel, but according to occlusion the pixel value is exclusively determined by only one of them. This exclusiveness is formulated by defining a mask variable ⃗m = (m_1, . . . , m_{D²}). For a pixel at position d, m_d determines which component is responsible for the pixel value.
Therefore, m_d takes a value from the set of present components Γ = {h | x_h ≠ −1} plus a special value "0" indicating background, and the prior distribution of ⃗m is defined as:

p(⃗m | ⃗x, A) = ∏_{d=1}^{D²} p(m_d | ⃗x, A), with
p(m_d = 0 | ⃗x, A) = α₀ / (α₀ + Σ_{h′∈Γ} (T_{x_{h′}} ⃗α_{h′})_d),
p(m_d = h | ⃗x, A) = (T_{x_h} ⃗α_h)_d / (α₀ + Σ_{h′∈Γ} (T_{x_{h′}} ⃗α_{h′})_d) for h ∈ Γ, (2)

where A = (⃗α_1, . . . , ⃗α_H) contains the mask parameters of all components, and α₀ is the mask parameter of the background. The mask variable m_d chooses the component h with high likelihood if the translated mask parameter of the corresponding component is high at position d. Note that m_d follows a mixture model given the presence/absence and positions of all the components ⃗x. This can be thought of as an approximation to the distribution of mask variables marginalizing over the depth orderings and pixel transparency in the exact occlusive model (see Supplement A for a comparison). After drawing the values of the hidden variables ⃗x and ⃗m, an image patch can be generated with a Gaussian noise model:

p(⃗y | ⃗m, ⃗x, Θ) = ∏_{d=1}^{D²} p(y_d | m_d, ⃗x, Θ), with
p(y_d | m_d = 0, ⃗x, Θ) = N(y_d; B, σ²_B),
p(y_d | m_d = h, ⃗x, Θ) = N(y_d; (T_{x_h} ⃗w_h)_d, σ²) for h ∈ Γ, (3)

where σ² is the variance of the components, and Θ = (⃗π, W, A, σ², α₀, B, σ²_B) are the model parameters. The background distribution is a Gaussian with mean B and variance σ²_B. Compared to an occlusive model with exact EM (see Supplement A), our approach uses the exclusiveness approximation and a truncated posterior approximation in order to make learning tractable. The model described in (1) to (3) has been optimized for the encoding of image patches. First, feature variables are scalar, in order to encode light intensities or input from the lateral geniculate nucleus (LGN) rather than the color features used for mid-level vision. Second, to capture the frequency of presence of individual components, we implement the learning of the prior presence parameter ⃗π.
Third, the pre-selection function in the variational approximation (see below) has been adapted to the use of scalar-valued features. As a scalar value is much less distinctive than the sophisticated image features used in [22], pre-selection has been changed to operate on complete components instead of only salient features.

3 Efficient Likelihood Optimization

Given a set of image patches Y = (\vec{y}^{(1)}, \ldots, \vec{y}^{(N)}), learning is formulated as inferring the best model parameters w.r.t. the log-likelihood L = \log p(Y \mid \Theta). Following the Expectation Maximization (EM) approach, the parameter update equations are derived. The update equations for the mask parameters \vec{\alpha}_h and the feature parameters \vec{w}_h are the same as in [22]. Additionally, we derived the update equation for the prior presence parameters:

\pi_h = \frac{1}{N} \sum_{n=1}^{N} \sum_{\vec{x}:\, x_h \neq -1} p(\vec{x} \mid \vec{y}^{(n)}, \Theta). \quad (4)

By learning the prior parameters π_h, the probabilities of individual components’ presence can be estimated. This allows us to gain more insight into the statistics of image components. In the update equations, a posterior distribution is estimated for each data point, corresponding to the E-step of an EM algorithm. The posterior distribution of our model can be decomposed as:

p(\vec{m}, \vec{x} \mid \vec{y}, \Theta) = p(\vec{x} \mid \vec{y}, \Theta) \prod_{d=1}^{D^2} p(m_d \mid \vec{x}, \vec{y}, \Theta), \quad (5)

in which p(\vec{x} \mid \vec{y}, \Theta) and p(m_d \mid \vec{x}, \vec{y}, \Theta) are estimated separately. Computing the exact distribution p(\vec{x} \mid \vec{y}, \Theta) is intractable, as it involves the combinatorics of the presence/absence of components and their positions. An efficient posterior approximation, Expectation Truncation (ET), has been successfully employed. ET approximates the posterior by a truncated distribution [23]:

p(\vec{x} \mid \vec{y}, \Theta) \approx \frac{p(\vec{y}, \vec{x} \mid \Theta)}{\sum_{\vec{x}' \in K_n} p(\vec{y}, \vec{x}' \mid \Theta)} \quad \text{if } \vec{x} \in K_n, \quad (6)

and zero otherwise.
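As an illustrative sketch (not the authors' code; function and variable names are our own), the truncated normalization of Eq. (6) and the resulting π-update of Eq. (4) can be written as:

```python
import numpy as np

def truncated_posterior(log_joint):
    """Eq. (6): normalize p(y, x | Theta) over the candidate states in K_n
    only; `log_joint` holds log p(y, x | Theta) for each state in K_n."""
    w = np.exp(log_joint - log_joint.max())   # numerically stable
    return w / w.sum()

def update_pi_h(posteriors, presence):
    """Eq. (4): pi_h is the average posterior mass on states with x_h != -1.
    posteriors: per-patch truncated posteriors over K_n; presence: matching
    boolean masks marking the states in which component h is present."""
    return float(np.mean([q[s].sum() for q, s in zip(posteriors, presence)]))
```

With equal joint probabilities, for instance, the truncated posterior is uniform over K_n; the π-update then simply counts how often component h is present under the posterior.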
If K_n is chosen to be small but to contain the states carrying most of the posterior probability mass, the computation of the posterior distribution becomes tractable while a high accuracy of the approximation can be maintained [23].

Figure 2: Numerical Experiments on Artificial Data. (a) Eight samples of the generated data set. (b) The parameters of the eight components used to generate the data set: the 1st row contains the binary transparency parameters and the 2nd row shows the feature parameters. (c) The learned model parameters (H = 9). The top plot shows the learned prior probabilities \vec{\pi}; the 1st row shows the mask parameters A; the 2nd row shows the feature parameters W; the 3rd row visualizes only the frequently used elements/pixels (setting the feature parameter w_{hd} of elements/pixels with α_{hd} < 0.5 to zero). (d) The result of inference given an image patch (shown on the left). The right side shows the four components inferred to be present (one per column). The 1st and 2nd rows show the mask and feature parameters shifted according to the MAP inference \vec{x}^{MAP}, and the 3rd row shows the inferred posterior p(m_d \mid \vec{x}^{MAP}, \vec{y}, \Theta). All plots are heat-map (Jet color map) visualizations of scalar values.

To select a proper subspace K_n, τ features (pixel intensities) are chosen according to their mask parameters. Based on the chosen features, a score value S(x_h) is computed for each component at each position (see [22]). We select H′ components, denoted \mathcal{H}, as candidates that may appear in the given image, according to the probability p(\vec{y}, \check{x}_h \mid \Theta); here \check{x}_h denotes the vector \vec{x} with x_h = x^*_h and all other components absent (x_{h'} = -1, h' \neq h), where x^*_h is the best position of component h w.r.t. S(x_h). This differs from the earlier work [22], where K_n was constructed directly according to S(x_h). For each component, we select the set of its candidate positions X_h, x_h ∈ X_h, which contains the p best positions w.r.t. S(x_h).
Then the truncated subspace K_n is defined as:

K_n = \{\vec{x} \mid (\textstyle\sum_j s_j \le \gamma \ \text{and}\ s_i = 0\ \forall i \notin \mathcal{H}) \ \text{or}\ \textstyle\sum_{j'} s_{j'} \le 1\}, \quad (7)

where s_h represents the presence/absence state of component h (s_h = 0 if x_h = -1 or x_h \notin X_h, and s_h = 1 if x_h \in X_h). To avoid converging to local optima, we used the annealing scheme of [22] for our learning algorithm.

4 Numerical Experiments on Artificial Data

The goal of the experiments on artificial data is to verify that the model and inference method can recover the correct parameters, and to investigate inference on data generated according to occlusions with an explicit depth variable. We generated 4×4 gray-scale image patches. In the data set, eight different components are used: four vertical ‘bars’ and four horizontal ‘bars’. Each bar has a different intensity and a binary vector indicating its ‘transparency’ (1 for non-transparent and 0 for transparent, see Fig. 2b). When generating an image patch, a subset of components is selected according to their prior probabilities π_h = 0.25, and the selected components are combined according to a random depth ordering (flat priors on the ordering). A component with smaller depth occludes components with larger depth, and a new depth ordering is sampled for each image patch. For pixels in which all selected components are transparent, the value is determined by the background with zero intensity (B = 0). All pixels generated by components are subject to Gaussian noise with σ = 0.02, and pixels belonging to the background have Gaussian noise with σ_B = 0.001. In total, we generated N = 1,000 image patches. Fig. 2a shows eight samples. The artificial data is similar to data generated by the occlusive components analysis model (OCA; [2]), except for the use of scalar features and the assumption of shift-invariance. Fig. 2c shows the learned model parameters on the generated data set. We learned nine components (H = 9).
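A minimal sketch of this data-generating process (illustrative only; the bar intensities and helper names are our assumptions, the text only fixes π_h, B, σ and σ_B): each bar is present with probability 0.25, bars are painted in a random depth order so that nearer bars overwrite farther ones, and component and background pixels receive Gaussian noise of different magnitude:

```python
import numpy as np

def make_patch(rng, D=4, pi=0.25, sigma=0.02, sigma_b=0.001):
    """Generate one D x D patch from 2*D bars with a random depth ordering."""
    intensities = np.linspace(0.3, 1.0, 2 * D)       # one intensity per bar (assumed)
    patch = rng.normal(0.0, sigma_b, (D, D))         # background, B = 0
    chosen = np.flatnonzero(rng.random(2 * D) < pi)  # presence with prob. pi
    rng.shuffle(chosen)                              # random depth ordering
    for h in chosen:                                 # paint far-to-near: later
        vals = intensities[h] + rng.normal(0.0, sigma, D)  # bars occlude earlier
        if h < D:
            patch[:, h] = vals                       # vertical bar
        else:
            patch[h - D, :] = vals                   # horizontal bar
    return patch, chosen

patches = [make_patch(np.random.default_rng(i))[0] for i in range(1000)]
```

Averaged over many patches, the expected number of present bars is 8 × 0.25 = 2, matching the sparseness implied by the prior.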
The initial feature values W were set to randomly selected data points. The initial mask parameters A were drawn independently and uniformly from the interval (0, 1). The initial annealing temperature was set to T = 5. After being held constant for 20 iterations, the temperature was linearly decreased to 1 over 100 iterations. For robustness of learning, σ was decreased together with the temperature from 0.2 to 0.02, and additive Gaussian noise with zero mean and σ_w = 0.04 was injected into W, with σ_w gradually decreased to zero. The algorithm terminated when the temperature had reached 1 and the difference of the pseudo data log-likelihood between two consecutive iterations was sufficiently small (less than 0.1%). The approximation parameters used in learning were H′ = 8, γ = 4, p = 2 and τ = 3. In this run, all eight generative components were successfully learned. The second-to-last component (see Fig. 2c) is a dummy component (low π_h, i.e., very rarely used); its single-pixel structure is therefore an artifact. With the learned parameters, the model could infer the present components, their positions and the pixel-to-component assignment. Fig. 2d shows a typical example. Given the image patch on the left, the present components and their positions are correctly inferred. Furthermore, as shown in the 3rd row, the posterior probabilities of the mask variable p(m_d \mid \vec{x}, \vec{y}, \Theta) give a clear assignment of the contributing component for each pixel. This information is potentially very valuable for tasks like parts-based object segmentation or for inferring the depth ordering among the components. We assessed the reliability of our learning algorithm by repeating the learning procedure with the same configuration but different random parameter initializations. The algorithm recovered all generative components in 11 out of 20 runs. The 9 runs not recovering all bars still found reasonable solutions, usually with 7 of the 8 bars represented.
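The annealing schedule described above can be sketched as follows (an illustrative reading of the text; the linear-interpolation form and the parameter names are our assumptions):

```python
def schedule(it, T0=5.0, hold=20, ramp=100, s_hi=0.2, s_lo=0.02):
    """Annealing state at iteration `it`: hold the temperature at T0 for
    `hold` iterations, then decrease it linearly to 1 over `ramp` iterations;
    sigma is lowered from s_hi to s_lo along the same schedule."""
    frac = min(max(it - hold, 0) / ramp, 1.0)   # 0 -> 1 along the ramp
    return T0 + frac * (1.0 - T0), s_hi + frac * (s_lo - s_hi)
```

The defaults correspond to the artificial-data run (T = 5 held for 20 iterations, σ from 0.2 to 0.02); the same form with different constants covers the natural-image run described below.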
In general, bar stimuli seem to give rise to much more pronounced local optima than, e.g., natural image patches.

5 Numerical Experiments on Image Patches

After verifying the inference and learning algorithm on artificial data, we applied it to patches of natural images. As training set we used N = 100,000 patches of size 16 × 16 pixels, extracted at random positions from random images of the van Hateren natural image database [24]. We modeled the sensitivity of LGN neurons using a difference-of-Gaussians (DoG) filter at different positions, i.e., we processed all patches by convolving them with a DoG kernel. Following earlier studies (see [5] for references), the ratio between the standard deviations of the positive and the negative Gaussian was chosen to be 1/3, and the amplitudes were chosen to obtain a mean-free center-surround filter. Fig. 3a shows some samples of the image patches after preprocessing. Our algorithm learned H = 100 components from the natural image data set. The model parameters were initialized in the same way as for the artificial data. The annealing temperature was initialized at T = 10, held constant for 10 iterations, and then linearly decreased to 1 over 100 iterations. σ was decreased together with the temperature from 0.5 to 0.2, and additive Gaussian noise with zero mean and σ_w = 0.2 was injected into W, with σ_w gradually decreased to zero. The approximation parameters used for learning were H′ = 6, γ = 4, p = 2 and τ = 50. After 134 iterations, the model parameters had essentially converged. Figs. 3b,c show the learned mask parameters and the learned feature values for all 100 components. Mask parameters define the frequently used areas within a component, and feature parameters reveal the appearance of a component in image patches. As can be observed, image components are represented very differently than in linear models. See the component in Fig.
3d as an example: the mask parameters are localized and all positive, while the feature parameters have positive and negative values across the whole patch. Masks and features can be combined via point-wise multiplication to resemble a familiar Gabor function (see Fig. 3d). All component representations shown above are sorted in descending order according to the learned prior probabilities of occurrence \vec{\pi} (see Fig. 3e).

6 Estimation of Receptive Fields

For visualization, mask and feature parameters can be combined via point-wise multiplication. To interpret the learned components more systematically and quantitatively, and to compare them to biological experimental findings, we estimated the predicted receptive fields (RFs). RF estimates were computed with reverse correlation based on the model inference results. Reverse correlation can be defined as the procedure of finding the best linear approximation of a component’s presence given an image patch \vec{y}^{(n)}. More formally, we search for a set of predicted receptive fields \vec{R}_h, h ∈ \{1, \ldots, H\}, that minimize the following cost function:

f = \frac{1}{N} \sum_n \sum_{\vec{x} \in K_n} p(\vec{x} \mid \vec{y}^{(n)}, \Theta) \sum_h \big(\vec{R}_h^T \bar{T}_{x_h} \vec{y}^{(n)} - s_h\big)^2 + \lambda \sum_h \vec{R}_h^T \vec{R}_h, \quad (8)

where \vec{y}^{(n)} is the nth stimulus and λ is the coefficient of the L2 regularization. s_h is a binary variable representing the presence/absence state of component h: s_h = 0 if x_h = -1, and s_h = 1 otherwise.

Figure 3: The invariant occlusive components from natural image patches. (a) 20 samples of the pre-processed image patches. (b) The mask parameters and (c) the feature parameters. (d) An example of the relation between the learned model parameters and the estimated RFs. (e) The learned prior probabilities \vec{\pi}. (f) The estimated receptive fields (RFs). The RFs were fitted with 2-dimensional Gabor and DoG functions. Dashed lines mark RFs with a more globular structure; solid lines mark RFs that were fitted accurately by a Gabor function; dotted lines mark RFs that were not approximated well by the fitted functions. All model parameters in (b-c) and receptive fields in (f) are sorted in descending order according to \vec{\pi}. Plots (a-d) and (f) are heat-map visualizations with local scaling of individual fields (Jet color map), and (a), (c) and (f) fix light green to zero.

As our model allows the components to be at different locations, the reverse correlation is computed by shifting the stimuli according to the inferred location of each component. \bar{T}_{x_h} denotes the transformation matrix applied to the stimulus for component h, which is the inverse of the inferred transformation T_{x_h} (\bar{T}_{x_h} T_{x_h} = 1). For absent components, the stimulus is used without any transformation (T_{-1} = 1). Due to the intractability of computing the exact posterior distribution, the cost function sums only over the truncated subspace K_n of the variational approximation for each data point (see Sec. 3). Setting the derivative of the cost function to zero, \vec{R}_h can be estimated as:

\vec{R}_h = \Big( \lambda N \mathbf{1} + \sum_n \big\langle \bar{T}_{x_h} \vec{y}^{(n)} (\bar{T}_{x_h} \vec{y}^{(n)})^T \big\rangle_{q_n} \Big)^{-1} \sum_n \big\langle s_h\, \bar{T}_{x_h} \vec{y}^{(n)} \big\rangle_{q_n}, \quad (9)

where ⟨·⟩_{q_n} denotes the expectation w.r.t. the posterior distribution p(\vec{x} \mid \vec{y}^{(n)}, \Theta) and \mathbf{1} is the identity matrix. When solving for \vec{R}_h, we often observe that many eigenvalues of the data covariance matrix \sum_{n=1}^{N} \langle \bar{T}_{x_h} \vec{y}^{(n)} (\bar{T}_{x_h} \vec{y}^{(n)})^T \rangle_{q_n} are close to zero, which makes the solution for \vec{R}_h very unstable. We therefore add an L2 regularization term to the cost function. The regularization coefficient λ is chosen between the minimum and maximum elements of the data covariance matrix. The estimated receptive fields are not sensitive to the value of λ, as long as λ is large enough to resolve the numerical instability (see Supplement for a comparison of receptive fields estimated with different λ values).
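Ignoring the translations for clarity (i.e., taking T̄ to be the identity), the regularized estimator of Eq. (9) reduces to a ridge regression of the presence variable s_h on the stimuli. A hedged sketch under that simplification (names are ours, not the authors'):

```python
import numpy as np

def receptive_field(Y, s, lam):
    """Ridge solution R_h = (lam*I + Y^T Y)^{-1} Y^T s.
    Y: (N, D) stimuli; s: (N,) presence of component h; lam: L2 coefficient."""
    D = Y.shape[1]
    return np.linalg.solve(Y.T @ Y + lam * np.eye(D), Y.T @ s)
```

With λ small, a planted linear dependence between stimuli and presence is recovered essentially exactly; increasing λ trades bias for the numerical stability discussed above.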
From the experiments with artificial data and natural image patches, we observed that the L2 regularization successfully eliminated the numerical instability. Fig. 3f shows the RFs estimated according to our model. For further analysis, we fitted the RFs with Gabor functions and DoG functions, as suggested in [5]. Factoring in the occurrence probabilities, we found that the model considered about 17% of all components of the patches to be globular, 56% to be Gabor-like and 27% to have another structure (see Supplement for details). The prevalence of ‘center-on’ globular fields may be a consequence of the prevalence of convex object shapes.

7 Discussion

The encoding of image patches investigated in this study separates feature and position information of visual components. Functionally, such an encoding has been found very useful, e.g., for the construction of object recognition systems. Many state-of-the-art systems for visual object classification make use of convolutional neural networks [12, 25, 26]. Such networks compute the responses of a set of filters at all positions in a predefined area and use the maximal response for further processing (see [12] for a review). If we identify the predefined area with one image patch as processed by our approach, then the encoding studied here is to some extent similar to convolutional networks: (A) like convolutional networks, it uses one set of component parameters for all positions; and (B) a hidden component variable of the generative model integrates, or ‘pools’, information across all positions. As the approach studied here is based on a generative data model, the integration across positions can directly be interpreted as inversion of the generation process. Crucially, this inversion can take occlusions of visual features into account, while convolutional networks do not model occlusions.
Furthermore, the generative model uses a probabilistic encoding, i.e., it assigns probabilities over a joint feature-and-position space. Ambiguous visual input can therefore be represented appropriately. In contrast, convolutional networks represent each feature by a single position. In this sense, a convolutional encoding could be regarded as a MAP estimate of the feature position, while the generative integration could be interpreted as probabilistic pooling. Many bilinear models have also been applied to image patches, e.g., [17, 18]. Such studies report that neurally plausible receptive fields (RFs) in the form of Gabor functions emerge [17, 18]. Likewise, invariant versions of NMF [21] or ICA (in the form of ISA [9]) have been applied to image patches. In addition to Gabors, we observed in our study a large variety of further RF types: Gabor filters with different orientations, phases and frequencies, as well as globular fields and fields with more complex structures (Fig. 3f). Gabors have been studied for several decades; globular and more complex fields have only attracted attention in recent years. In particular, globular fields have attracted attention [5, 27, 28] as they have been reported alongside Gabors in macaques and other species ([29]; see [5] for further references). Such fields have been associated with occlusions before [5, 28, 30], and our study now for the first time reports globular fields for an occlusive and translation-invariant approach. The results may be taken as further evidence of the connection between occlusions and globular fields. However, linear convolutional approaches have also recently reported such fields [19, 31]. Linear approaches seem to require a high degree of overcompleteness or specific priors, while globular fields naturally emerge from occlusion-like non-linearities.
More concretely: for non-invariant linear sparse coding, globular fields only emerged beyond a sufficiently high degree of overcompleteness [32, 33] or with specific prior settings and overcompleteness [27]; for non-invariant occlusive models [5, 30], globular fields always emerge alongside Gabors for any overcompleteness. The results reported here can be taken as confirming this observation for position-invariant encoding. The invariant non-linear model assigns high occurrence frequencies (high π_h) to Gabor-like and to globular fields (first rows in Fig. 3f). Components with more complex structures are assigned lower occurrence frequencies. In total, the model assumes a fraction between 10 and 20% of all data components to be globular. Such high percentages may be related to the high percentages of globular fields (∼16-23%) measured in vivo ([29]; see [5] for references). In contrast, the highest occurrence frequencies, e.g., for convolutional matching pursuit [31], seem to be assigned exclusively to Gabor features; there, globular fields only emerge (alongside other non-Gabor fields) at higher degrees of overcompleteness. A direct comparison in terms of occurrence frequencies is difficult because the linear models do not infer occurrence frequencies from data. The closest match to such frequencies would be an (inverse) sparsity, which is set by hand in almost all linear approaches. The reason is their use of MAP-based point estimates, while our approach uses a more probabilistic posterior estimate. Because of their separate encoding of features and positions, all models with separate position encoding can represent high degrees of overcompleteness. Convolutional matching pursuit [31] shows results for up to 64 filters of size 8 × 8. With 8 horizontal and 8 vertical shifts, the number of non-invariant components would amount to 8 × 8 × 64 = 4096.
Convolutional sparse coding [19] reports results assuming 128 components for 9 × 9 patches. The number of non-invariant components would therefore amount to 9 × 9 × 128 = 10,368. For our network, we obtained results for up to 100 components of size 16 × 16. With 16 horizontal and 16 vertical shifts, this amounts to 16 × 16 × 100 = 25,600 non-invariant components. In terms of components per observed variable, invariant models are therefore now computationally feasible in the regime the visual cortex is estimated to operate in [33]. The hidden units associated with component features are fully translation invariant. In terms of neural encoding, their insensitivity to stimulus shifts would therefore place them in the category of V1 complex cells. Globular fields, or fields that seem sensitive to structures such as corners, would also warrant such units the label ‘complex cell’. No hidden variable in the model can directly be associated with simple-cell responses. However, a possible neural-network implementation of the model is an explicit representation of component features at different positions. The weight sharing of the model would be lost, but units with an explicit non-invariant representation could correspond to simple cells. While such a correspondence can connect our predictions to experimental studies of simple cells, recently developed approaches for estimating translation-invariant cell responses [34, 35] may represent a more direct connection. To approximately implement the non-linear generative model neurally, the integration of information would have to be a very active process. In contrast to passive pooling across units representing linear filters (such as simple cells), it would involve neural units with explicit position encoding. Such units would control or ‘gate’ the information transfer from simple cells to downstream complex cells.
As such, our probabilistic model can be related to ideas of active control units for individual components [6, 7, 10, 11, 36] (also compare [37]). A notable difference to all these models is that the approach studied here allows active control to be interpreted as optimal inference w.r.t. a generative model of translations and occlusions. Future work can go in different directions: different transformations could be considered or learned [37], explicit modeling in time could be incorporated (compare [17]), and/or further hierarchical stages could be added. The crucial challenge all such developments face is computational intractability due to large combinatorial hidden spaces. Based on the presented results, we believe, however, that advances in analytical and computational training technology will enable increasingly sophisticated modeling of image patches in the future.

Acknowledgement. We thank Richard E. Turner for helpful discussions and acknowledge funding by DFG grant LU 1196/4-2.

References
[1] D. Mumford and B. Gidas. Stochastic models for generic images. Q. Appl. Math., 59:85–111, 2001.
[2] J. Lücke, R. Turner, M. Sahani, and M. Henniges. Occlusive components analysis. NIPS, 22:1069–77, 2009.
[3] N. LeRoux, N. Heess, J. Shotton, and J. Winn. Learning a generative model of images by factoring appearance and shape. Neural Computation, 23:593–650, 2011.
[4] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. NIPS, 25:1745–1753, 2012.
[5] J. Bornschein, M. Henniges, and J. Lücke. Are V1 receptive fields shaped by low-level visual occlusions? A comparative study. PLoS Computational Biology, 9(6):e1003062, 2013.
[6] G. E. Hinton. A parallel computation that assigns canonical object-based frames of reference. In Proc. IJCAI, pages 683–685, 1981.
[7] C. H. Anderson and D. C. Van Essen. Shifter circuits: a computational strategy for dynamic aspects of visual processing. PNAS, 84(17):6297–6301, 1987.
[8] M. Lades, J. Vorbrüggen, J. Buhmann, J. Lange, C. v. d. Malsburg, R. Würtz, and W. Konen. Distortion invariant object recognition in the dynamic link architecture. IEEE Transactions on Computers, 42(3):300–311, 1993.
[9] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–20, 2000.
[10] D. W. Arathorn. Map-Seeking Circuits in Visual Cognition: A Computational Mechanism for Biological and Machine Vision. Stanford Univ. Press, Stanford, California, 2002.
[11] J. Lücke, C. Keck, and C. von der Malsburg. Rapid convergence to feature layer correspondences. Neural Computation, 20(10):2441–2463, 2008.
[12] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, pages 253–6, 2010.
[13] Y. Hu, K. Zhai, S. Williamson, and J. Boyd-Graber. Modeling images using transformed Indian buffet processes. In ICML, 2012.
[14] N. Jojic and B. Frey. Learning flexible sprites in video layers. In CVPR, 2001.
[15] C. K. I. Williams and M. K. Titsias. Greedy learning of multiple objects in images using robust statistics and factorial learning. Neural Computation, 16:1039–62, 2004.
[16] J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–83, 2000.
[17] P. Berkes, R. E. Turner, and M. Sahani. A structured model of video reproduces primary visual cortical organisation. PLoS Computational Biology, 5(9):e1000495, 2009.
[18] C. F. Cadieu and B. A. Olshausen. Learning intermediate-level representations of form and motion from natural movies. Neural Computation, 24(4):827–866, 2012.
[19] K. Kavukcuoglu, P. Sermanet, Y. L. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional feature hierarchies for visual recognition. NIPS, 23, 2010.
[20] K. Gregor and Y. LeCun. Efficient learning of sparse invariant representations. CoRR, abs/1105.5307, 2011.
[21] J. Eggert, H. Wersing, and E. Körner. Transformation-invariant representation and NMF. In 2004 IEEE International Joint Conference on Neural Networks, pages 2535–39, 2004.
[22] Z. Dai and J. Lücke. Unsupervised learning of translation invariant occlusive components. In CVPR, pages 2400–2407, 2012.
[23] J. Lücke and J. Eggert. Expectation truncation and the benefits of preselection in training generative models. Journal of Machine Learning Research, 11:2855–900, 2010.
[24] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265:359–66, 1998.
[25] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, volume 25, pages 1106–1114, 2012.
[27] M. Rehn and F. T. Sommer. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. Journal of Computational Neuroscience, 22(2):135–46, 2007.
[28] J. Lücke. Receptive field self-organization in a model of the fine-structure in V1 cortical columns. Neural Computation, 21(10):2805–45, 2009.
[29] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88:455–63, 2002.
[30] G. Puertas, J. Bornschein, and J. Lücke. The maximal causes of natural scenes are edge filters. In NIPS, volume 23, pages 1939–1947, 2010.
[31] A. Szlam, K. Kavukcuoglu, and Y. LeCun. Convolutional matching pursuit and dictionary training. arXiv preprint arXiv:1010.0422, 2010.
[32] B. A. Olshausen, C. F. Cadieu, and D. K. Warland. Learning real and complex overcomplete representations from the statistics of natural images. In Proc. SPIE, volume 7446, page 74460S, 2009.
[33] B. A. Olshausen. Highly overcomplete sparse coding. In Proc. of HVEI, page 86510S, 2013.
[34] M. Eickenberg, R. J. Rowekamp, M. Kouh, and T. O. Sharpee. Characterizing responses of translation-invariant neurons to natural stimuli: maximally informative invariant dimensions. Neural Computation, 24(9):2384–421, 2012.
[35] B. Vintch, A. Zaharia, J. A. Movshon, and E. P. Simoncelli. Efficient and direct estimation of a neural subunit model for sensory coding. In Proc. of NIPS, pages 3113–3121, 2012.
[36] B. Olshausen, C. Anderson, and D. Van Essen. A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J. Neuroscience, 13(11):4700–4719, 1993.
[37] R. Memisevic and G. E. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22(6):1473–1492, 2010.
[38] M. J. D. Powell. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7(2):155–162, 1964.
Correlations strike back (again): the case of associative memory retrieval

Cristina Savin1 cs664@cam.ac.uk
Peter Dayan2 dayan@gatsby.ucl.ac.uk
Máté Lengyel1 m.lengyel@eng.cam.ac.uk
1Computational & Biological Learning Lab, Dept. Engineering, University of Cambridge, UK
2Gatsby Computational Neuroscience Unit, University College London, UK

Abstract

It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate.

1 Introduction

Auto-associative memories have a venerable history in computational neuroscience. However, it is only rather recently that the statistical revolution in the wider field has provided theoretical traction for this problem [1]. The idea is to see memory storage as a form of lossy compression – information on the item being stored is mapped into a set of synaptic changes – with the neural dynamics during retrieval representing a biological analog of a corresponding decompression algorithm. This implies there should be a tight, and indeed testable, link between the learning rule used for encoding and the neural dynamics used for retrieval [2].
One issue that has been either ignored or trivialized in these treatments of recall is correlations among the synapses [1–4] – beyond the perfect (anti-)correlations emerging between reciprocal synapses with precisely (anti-)symmetric learning rules [5]. There is ample experimental data for the existence of such correlations: for example, in rat visual cortex, synaptic connections tend to cluster together in the form of overrepresented patterns, or motifs, with reciprocal connections being much more common than expected by chance, and the strengths of the connections to and from each neuron being correlated [6]. The study of neural coding has indicated that it is essential to treat correlations in neural activity appropriately in order to extract stimulus information well [7–9]. Similarly, it becomes pressing to examine the nature of correlations among synaptic weights in auto-associative memories, the consequences for retrieval of ignoring them, and methods by which they might be accommodated. Here, we consider several well-known learning rules, from simple additive ones to bounded synapses with metaplasticity, and show that, with a few significant exceptions, they induce correlations between synapses that share a pre- or a post-synaptic partner. To assess the importance of these dependencies for recall, we adopt the strategy of comparing the performance of decoders which either do or do not take them into account [10], showing that they do indeed have an important effect on efficient retrieval. Finally, we show that approximately optimal retrieval involves particular forms of nonlinear interactions between different neuronal inputs, as observed experimentally [11].
2 General problem formulation

We consider a network of N binary neurons that enjoy all-to-all connectivity.¹ As is conventional, and indeed plausibly underpinned by neuromodulatory interactions [12], we assume that network dynamics do not play a role during storage (with stimuli being imposed as patterns of activity on the neurons), and that learning does not occur during retrieval. To isolate the effects of different plasticity rules on synaptic correlations from other sources of correlations, we assume that the patterns of activity inducing the synaptic changes have no particular structure, i.e. their distribution factorizes. For further simplicity, we take these activity patterns to be binary with pattern density f, i.e. a prior over patterns defined as:

P_{store}(\vec{x}) = \prod_i P_{store}(x_i), \qquad P_{store}(x_i) = f^{x_i} \cdot (1-f)^{1-x_i} \quad (1)

During recall, the network is presented with a cue, \tilde{\vec{x}}, which is a noisy or partial version of one of the originally stored patterns. Network dynamics should complete this partial pattern, using the information in the weights W (and the cue). We start by considering arbitrary dynamics; later we impose the critical constraint for biological realisability that they be strictly local, i.e. the activity of neuron i should depend exclusively on inputs through its incoming synapses W_{i,\cdot}. Since information storage by synaptic plasticity is lossy, recall is inherently a probabilistic inference problem [1, 13] (Fig. 1a), requiring estimation of the posterior over patterns, given the information in the weights and the recall cue:

P(\vec{x} \mid W, \tilde{\vec{x}}) \propto P_{store}(\vec{x}) \cdot P_{noise}(\tilde{\vec{x}} \mid \vec{x}) \cdot P(W \mid \vec{x}) \quad (2)

This formulation has formed the foundation of recent work on constructing efficient auto-associative recall dynamics for a range of different learning rules [2–4]. In this paper, we focus on the last term, P(W|\vec{x}), which expresses the probability of obtaining W as the synaptic weight matrix when \vec{x} is stored along with T − 1 other random patterns (sampled from the prior, Eq. 1).
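As a quick sketch (illustrative code, not from the paper), drawing T patterns from the factorized prior of Eq. (1) amounts to independent Bernoulli draws with density f:

```python
import numpy as np

def sample_patterns(T, N, f, rng):
    """T binary activity patterns over N neurons, with P(x_i = 1) = f (Eq. 1)."""
    return (rng.random((T, N)) < f).astype(int)
```

The empirical fraction of active neurons then concentrates around f, which is the only structure these patterns carry by construction.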
Critically, this is where we diverge from previous analyses, which assumed this distribution was factorised, or only trivially correlated due to reciprocal synapses being precisely (anti-)symmetric [1, 2, 4]. In contrast, we explicitly study the emergence and effects of non-trivial correlations in the synaptic weight matrix distribution, because almost all synaptic plasticity rules induce statistical dependencies between the synaptic weights of each neuron (Fig. 1a, d). The inference problem expressed by Eq. 2 can be translated into neural dynamics in several ways – dynamics could be deterministic, attractor-like, converging to the most likely pattern (a MAP estimate) of the distribution of x [2], or to a mean-field approximate solution [3]; alternatively, the dynamics could be stochastic, with the activity over time representing samples from the posterior, and hence implicitly capturing the uncertainty associated with the answer [4]. We consider the latter. Since we estimate performance by average errors, the optimal response is the mean of the posterior, which can be estimated by integrating the activity of the network during retrieval. We start by analysing the class of additive learning rules, to get a sense for the effect of correlations on retrieval. Later, we focus on multi-state synapses, for which learning rules are described by transition probabilities between the states [14]. These have been used to capture a variety of important biological constraints, such as bounds on synaptic strengths and metaplasticity, i.e. the fact that synaptic changes induced by a certain activity pattern depend on the history of activity at the synapse [15]. The two classes of learning rule are radically different; so if synaptic correlations matter during retrieval in both cases, then the conclusion likely applies in general.
1 Complete connectivity simplifies the computation of the parameters for the optimal dynamics for the cascade-like learning rules considered in the following, but is not necessary for the theory.

Figure 1: Memory recall as inference and additive learning rules. a. Top: Synaptic weights, W, arise by storing the target pattern x together with T − 1 other patterns, $\{x^{(t)}\}_{t=1...T-1}$. During recall, the cue, $\tilde{x}$, is a noisy version of the target pattern. The task of recall is to infer x given W and $\tilde{x}$ (by marginalising out $\{x^{(t)}\}$). Bottom: The activity of neuron i across the stored patterns is a source of shared variability between synapses connecting it to neurons j and k. b–c. Covariance rule: patterns of synaptic correlations and recall performance for retrieval dynamics ignoring or considering synaptic correlations; T = 5. d–e. Same for the simple Hebbian learning rule. The control is an optimal decoder that ignores W.

3 Additive learning rules

Local additive learning rules assume that synaptic changes induced by different activity patterns combine additively, such that storing a sequence of T patterns from $P_{store}(x)$ results in weights $W_{ij} = \sum_t \Omega(x_i^{(t)}, x_j^{(t)})$, with the function $\Omega(x_i, x_j)$ describing the change in synaptic strength induced by presynaptic activity $x_j$ and postsynaptic activity $x_i$. We consider a generalized Hebbian form for this function, with $\Omega(x_i, x_j) = (x_i - \alpha)(x_j - \beta)$. This class includes, for example, the covariance rule ($\alpha = \beta = f$), classically used in Hopfield networks, or simple Hebbian learning ($\alpha = \beta = 0$). As synaptic changes are deterministic, the only source of uncertainty in the distribution $P(W|x)$ is the identity of the other stored patterns.
To estimate this, let us first consider the distribution of the weights after storing one random pattern from $P_{store}(x)$. The mean µ and covariance C of the weight change induced by this event can be computed as:2

$$\mu = \int P_{store}(x)\,\Omega_|(x)\,dx, \qquad C = \int P_{store}(x)\,\Omega_|(x)\,\Omega_|(x)^T\,dx \;-\; \mu \cdot \mu^T \quad (3)$$

Since the rule is additive and the patterns are independent, the mean and covariance scale linearly with the number of intervening patterns. Hence, the distribution over possible weight values at recall, given that pattern x is stored along with T − 1 other, random, patterns has mean $\mu_W = \Omega_|(x) + (T-1)\cdot\mu$, and covariance $C_W = (T-1)\cdot C$. Most importantly, because the rule is additive, in the limit of many stored patterns (and in practice even for modest values of T), the distribution $P(W|x)$ approaches a multivariate Gaussian that is characterized completely by these two quantities; moreover, its covariance is independent of x. For retrieval dynamics based on Gibbs sampling, the key quantity is the log-odds ratio

$$I_i = \log \frac{P(x_i = 1 \mid x_{\neg i}, W, \tilde{x})}{P(x_i = 0 \mid x_{\neg i}, W, \tilde{x})} \quad (4)$$

for neuron i, which could be represented by the total current entering the unit. This would translate into a probability of firing given by the sigmoid activation function $f(I_i) = 1/(1 + e^{-I_i})$. The total current entering a neuron is a sum of two terms: one term from the external input, of the form $c_1 \cdot \tilde{x}_i + c_2$ (with constants $c_1$ and $c_2$ determined by parameters f and r [16]), and one term from the recurrent input, of the form:

$$I^{rec}_i = \frac{1}{2(T-1)} \left[ \left(W_| - \mu_W^{(0)}\right)^T C^{-1} \left(W_| - \mu_W^{(0)}\right) - \left(W_| - \mu_W^{(1)}\right)^T C^{-1} \left(W_| - \mu_W^{(1)}\right) \right] \quad (5)$$

where $\mu_W^{(0/1)} = \Omega_|(x^{(0/1)}) + (T-1)\mu$ and $x^{(0/1)}$ is the vector of activities obtained from x in which the activity of neuron i is set to 0, or 1, respectively.

2 For notational convenience, we use a column-vector form of the matrix of weight changes Ω, and of the weight matrix W, marked by the subscript |.
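The exact computation in Eq. 3, and the linear scaling of the mean with the number of stored patterns, can be checked numerically. This is a hypothetical sketch (the tiny network size and all parameter values are our choices, not the paper's):

```python
import itertools
import numpy as np

N, f, alpha, beta = 3, 0.3, 0.0, 0.0      # simple Hebbian: Omega(x_i, x_j) = x_i * x_j
pairs = [(i, j) for i in range(N) for j in range(N) if i != j]

def omega_vec(x):
    # column-vector form Omega_| of the weight changes for one pattern x
    return np.array([(x[i] - alpha) * (x[j] - beta) for i, j in pairs])

# exact mu and C of Eq. 3, summing over the factorised pattern prior (Eq. 1)
mu = np.zeros(len(pairs))
C = np.zeros((len(pairs), len(pairs)))
for bits in itertools.product([0, 1], repeat=N):
    x = np.array(bits)
    p = np.prod(f**x * (1 - f)**(1 - x))
    v = omega_vec(x)
    mu += p * v
    C += p * np.outer(v, v)
C -= np.outer(mu, mu)

# Monte-Carlo check of additive scaling: after storing T i.i.d. random patterns,
# the mean of the weight vector is T * mu (the text's mu_W with no fixed target)
rng = np.random.default_rng(1)
T, trials = 10, 5000
Ws = np.array([sum(omega_vec((rng.random(N) < f).astype(float)) for _ in range(T))
               for _ in range(trials)])
```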
It is easy to see that for the covariance rule, $\Omega(x_i, x_j) = (x_i - f)(x_j - f)$, synapses sharing a single pre- or post-synaptic partner happen to be uncorrelated (Fig. 1b). Moreover, as for any (anti-)symmetric additive learning rule, reciprocal connections are perfectly correlated ($W_{ij} = W_{ji}$). The (non-degenerate part of the) covariance matrix in this case becomes diagonal, and the total current in optimal retrieval reduces to simple linear dynamics:

$$I_i = \frac{1}{(T-1)\,\sigma_W^2}\Bigg[\underbrace{\sum_j W_{ij} x_j}_{\text{recurrent input}} \;-\; \underbrace{\frac{(1-2f)^2}{2} \sum_j x_j}_{\text{feedback inhibition}} \;-\; \underbrace{f \sum_j W_{ij}}_{\text{homeostatic term}} \;-\; \underbrace{\frac{f^2 (1-2f)}{2}}_{\text{constant}}\Bigg] \quad (6)$$

where $\sigma_W^2$ is the variance of a synaptic weight resulting from storing a single pattern. This expression includes a contribution from recurrent excitatory input, dynamic feedback inhibition (proportional to the total population activity) and a homeostatic term that reduces neuronal excitability as a function of the net strength of its synapses (a proxy for the average current the neuron expects to receive) [17]. Reassuringly, the optimal decoder for the covariance rule recovers a form for the input current that is closely related to classic Hopfield-like [5] dynamics (with external field [1, 18]): feedback inhibition is needed only when the stored patterns are not balanced (f ≠ 0.5); for the balanced case, the homeostatic term can be integrated into the recurrent current by rewriting neural activities as spins. In sum, for the covariance rule, synapses are fortuitously uncorrelated (except for symmetric pairs, which are perfectly correlated), and thus simple, classical linear recall dynamics suffice (Fig. 1c). The covariance rule is, however, the exception rather than the rule. For example, for simple Hebbian learning, $\Omega(x_i, x_j) = x_i \cdot x_j$, synapses sharing a pre- or post-synaptic partner are correlated (Fig. 1d) and so the covariance matrix C is no longer diagonal.
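The contrast between the two rules can be verified directly by simulation. In this sketch (our illustration; f, T and the sample size are arbitrary choices), we estimate the correlation between two synapses $W_{ij}$ and $W_{ik}$ that share postsynaptic neuron i after additive storage:

```python
import numpy as np

rng = np.random.default_rng(0)
f, T, trials = 0.3, 10, 50000

def corr_shared_post(alpha, beta):
    # W_ij and W_ik: same postsynaptic neuron i, presynaptic partners j and k;
    # columns of x are the activities of i, j, k across T stored patterns
    x = (rng.random((trials, T, 3)) < f).astype(float)
    w_ij = ((x[:, :, 0] - alpha) * (x[:, :, 1] - beta)).sum(axis=1)
    w_ik = ((x[:, :, 0] - alpha) * (x[:, :, 2] - beta)).sum(axis=1)
    return np.corrcoef(w_ij, w_ik)[0, 1]

r_cov = corr_shared_post(f, f)        # covariance rule: correlation ~ 0
r_hebb = corr_shared_post(0.0, 0.0)   # simple Hebb: clearly positive correlation
```

For the covariance rule the shared factor $(x_i - f)$ multiplies zero-mean presynaptic terms, so the cross-covariance vanishes; for simple Hebb it does not.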
Interestingly, the final expression for the recurrent current to a neuron remains strictly local (because of additivity and symmetry), and very similar to Eq. 6, but feedback inhibition becomes a non-linear function of the total activity in the network [16]. In this case, synaptic correlations have a dramatic effect: using the optimal non-linear dynamics ensures high performance, but trying to retrieve information using a decoder that assumes synaptic independence (and thus uses linear dynamics) yields extremely poor performance, which is even worse than the obvious control of relying only on the information in the recall cue and the prior over patterns (Fig. 1e). For the generalized Hebbian case, $\Omega(x_i, x_j) = (x_i - \alpha)(x_j - \beta)$ with α ≠ β, the optimal decoder becomes even more complex, with the total current including additional terms accounting for pairwise correlations between any two synapses that have neuron i as a pre- or post-synaptic partner [16]. Hence, retrieval is no longer strictly local3 and a biological implementation will require approximating the contribution of non-local terms as a function of locally available information, as we discuss in detail for palimpsest learning below.

4 Palimpsest learning rules

Though additive learning rules are attractive for their analytical tractability, they ignore several important aspects of synaptic plasticity, e.g. they assume that synapses can grow without bound. We investigate the effects of bounded weights by considering another class of learning rules, which assumes synaptic efficacies can only take binary values, with stochastic transitions between the two underpinned by paired cascades of latent internal states [14] (Fig. 2). These learning rules, though very simple, capture an important aspect of memory – the fact that memory is leaky, and information about the past is overwritten by newly stored items (usually referred to as the palimpsest property).
Additionally, such rules can account for experimentally observed synaptic metaplasticity [15].

3 For additive learning rules, the current to neuron i always depends only on synapses local to the neuron, but these can also include its outgoing synapses, whose weights, $W_{\cdot,i}$, should not influence its dynamics. We refer to such dynamics as 'semi-local'. For other learning rules, the optimal current to neuron i may depend on all connections in the network, including $W_{jk}$ with j, k ≠ i ('non-local' dynamics).

Figure 2: Palimpsest learning. a. The cascade model. Colored circles are latent states (V) that belong to two different synaptic weights (W); arrows are state transitions (blue: depression, red: potentiation). b. Different variants of mapping pre- and post-synaptic activations to depression (D) and potentiation (P): R1 – postsynaptically gated, R2 – presynaptically gated, R3 – XOR rule. c. Correlation structure induced by these learning rules. d. Retrieval performance for each rule.

Learning rule
Learning is stochastic and local, with changes in the state of a synapse $V_{ij}$ being determined only by the activation of the pre- and post-synaptic neurons, $x_j$ and $x_i$. In general, one could define separate transition matrices for each activity pattern, $M(x_i, x_j)$, describing the probability of a synaptic state transitioning between any two states $V_{ij}$ to $V'_{ij}$ following an activity pattern $(x_i, x_j)$. For simplicity, we define only two such matrices, for potentiation, $M^+$, and depression, $M^-$, respectively, and then map different activity patterns to these events. In particular, we assume Fusi's cascade model [14]4 and three possible mappings (Fig. 2b [16]): 1) a postsynaptically gated learning rule, where changes occur only when the postsynaptic neuron is active, with co-activation of pre- and post-synaptic neurons leading to potentiation, and to depression otherwise5; 2) a presynaptically gated learning rule, typically assumed when analysing cascades [20, 21]; and 3) an XOR-like learning rule, which assumes potentiation occurs whenever the pre- and post-synaptic activity levels are the same, with depression otherwise. The last rule, proposed by Ref. 22, was specifically designed to eliminate correlations between synapses, and can be viewed as a version of the classic covariance rule fashioned for binary synapses.

Estimating the mean and covariance of synaptic weights
At the level of a single synapse, the presentation of a sequence of uncorrelated patterns from $P_{store}(x)$ corresponds to a Markov random walk, defined by a transition matrix M which averages over possible neural activity patterns: $M = \sum_{x_i, x_j} P_{store}(x_i) \cdot P_{store}(x_j) \cdot M(x_i, x_j)$. The distribution over synaptic states t steps after the initial encoding can be calculated by starting from the stationary distribution of the weights $\pi_V^0$ (assuming a large number of other patterns have previously been stored; formally, this is the eigenvector of M corresponding to eigenvalue λ = 1), then storing the pattern $(x_i, x_j)$, and finally t − 1 other patterns from the prior:

$$\pi_V(x_i, x_j, t) = M^{t-1} \cdot M(x_i, x_j) \cdot \pi_V^0, \quad (7)$$

with the distribution over states given as a column vector, $\pi_V^l = P(V_{ij} = l \mid x_i, x_j)$, $l \in \{1 \ldots 2n\}$, where n is the depth of the cascade. Lastly, the distribution over weights, $P(W_{ij} \mid x_i, x_j)$, can be derived as $\pi_W = M_{V \to W} \cdot \pi_V$, where $M_{V \to W}$ is a deterministic map from states to observed weights (Fig. 2a). As in the additive case, the states of synapses sharing a pre- or post-synaptic partner will be correlated (Figs. 1a, 2c).
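The machinery of Eq. 7 can be sketched in a few lines. The transition probabilities below are a toy two-states-per-weight chain of our own invention, standing in for the cascade model of [14], with a presynaptically gated mapping of activity patterns to potentiation/depression events:

```python
import numpy as np

# Toy 4-state chain: states 0,1 carry weight W = 0 (weak), states 2,3 carry W = 1
# (strong). Matrices are column-stochastic and act on column probability vectors.
M_plus = np.array([[0.2, 0.0, 0.0, 0.0],    # potentiation event
                   [0.0, 0.5, 0.0, 0.0],
                   [0.8, 0.5, 0.3, 0.0],
                   [0.0, 0.0, 0.7, 1.0]])
M_minus = np.array([[0.3, 0.0, 0.8, 0.5],   # depression event
                    [0.7, 1.0, 0.0, 0.5],
                    [0.0, 0.0, 0.2, 0.0],
                    [0.0, 0.0, 0.0, 0.0]])
f = 0.2                                     # pattern density

def M_rule(xi, xj):
    # presynaptically gated mapping: the synapse is plastic only when the
    # presynaptic neuron (xj) is active; potentiate if the postsynaptic neuron
    # (xi) is also active, depress otherwise
    if xj == 0:
        return np.eye(4)
    return M_plus if xi == 1 else M_minus

# average transition matrix M over the factorised pattern prior
M_bar = sum(f**xi * (1 - f)**(1 - xi) * f**xj * (1 - f)**(1 - xj) * M_rule(xi, xj)
            for xi in (0, 1) for xj in (0, 1))

# stationary distribution pi_V^0: eigenvector of M_bar with eigenvalue 1
vals, vecs = np.linalg.eig(M_bar)
pi0 = np.real(vecs[:, np.argmax(np.real(vals))])
pi0 /= pi0.sum()

# Eq. 7: state distribution t steps after a potentiating event, (xi, xj) = (1, 1)
def pi_V(t):
    return np.linalg.matrix_power(M_bar, t - 1) @ (M_plus @ pi0)

W_map = np.array([0, 0, 1, 1])              # deterministic map from states to weights
p_strong = lambda t: W_map @ pi_V(t)        # P(W = 1 | event was t steps ago)
```

As t grows, $p\_strong(t)$ decays back to the stationary value, which is exactly the palimpsest property described in the text.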
The degree of correlation for different synaptic configurations can be estimated by generalising the above procedure to computing the joint distribution of the states of pairs of synapses, which we represent as a matrix ρ. E.g. for a pair of synapses sharing a postsynaptic partner (Figs. 1b, d, and 2c), element (u, v) is $\rho_{uv} = P(V_{post,pre1} = u, V_{post,pre2} = v)$. Hence, the presentation of an activity pattern $(x_{pre1}, x_{pre2}, x_{post})$ induces changes in the corresponding pair of incoming synapses to neuron post as $\rho^{(1)} = M(x_{post}, x_{pre1}) \cdot \rho^{(0)} \cdot M(x_{post}, x_{pre2})^T$, where $\rho^{(0)}$ is the stationary distribution corresponding to storing an infinite number of triplets from the pattern distribution [16]. Replacing $\pi_V$ with ρ (which is now a function of the triplet $(x_{pre1}, x_{pre2}, x_{post})$), and the multiplication by M with the slightly more complicated operator above, we can estimate the evolution of the joint distribution over synaptic states in a manner very similar to Eq. 7:

$$\rho^{(t)} = \sum_{x_i} P_{store}(x_i) \cdot \hat{M}(x_i) \cdot \rho^{(t-1)} \cdot \hat{M}(x_i)^T, \quad (8)$$

where $\hat{M}(x_i) = \sum_{x_j} P_{store}(x_j) M(x_i, x_j)$. Also as above, the final joint distribution over states can be mapped into a joint distribution over synaptic weights as $M_{V \to W} \cdot \rho^{(t)} \cdot M_{V \to W}^T$. This approach can be naturally extended to all other correlated pairs of synapses [16]. The structure of correlations for different synaptic pairs varies significantly as a function of the learning rule (Fig. 2c), with the overall degree of correlation depending on a range of factors. Correlations tend to decrease with cascade depth and pattern sparsity.

4 Other models, e.g. serial [19], could be used as well without qualitatively affecting the results.

5 One could argue that this is the most biologically relevant variant, as plasticity is often NMDA-receptor dependent, and hence requires postsynaptic depolarisation for any effect to occur.
The first two variants of the learning rule considered are not symmetric, and so induce different patterns of correlations than the additive rules above. The XOR rule is similar to the covariance rule, but the reciprocal connections are no longer perfectly correlated (due to metaplasticity), which means that it is no longer possible to factorize $P(W|x)$. Hence, assuming independence at decoding seems bound to introduce errors.

Approximately optimal retrieval when synapses are independent
If we ignore synaptic correlations, the evidence from the weights factorizes, $P(W|x) = \prod_{i,j} P(W_{ij} \mid x_i, x_j)$, and so the exact dynamics would be semi-local3. We can further approximate the contribution of the outgoing weights by its mean, which recovers the same simple dynamics derived for the additive case:

$$I_i = \log \frac{P(x_i = 1 \mid x_{\neg i}, W, \tilde{x})}{P(x_i = 0 \mid x_{\neg i}, W, \tilde{x})} = c_1 \sum_j W_{ij} x_j + c_2 \sum_j W_{ij} + c_3 \sum_j x_j + c_4 \tilde{x}_i + c_5 \quad (9)$$

The parameters $c_{1 \ldots 5}$ depend on the prior over x, the noise model, the parameters of the learning rule, and t. Again, the optimal decoder is similar to previously derived attractor dynamics; in particular, for stochastic binary synapses with presynaptically gated learning, the optimal dynamics require dynamic inhibition only for sparse patterns, and no homeostatic term, as used in [21]. To validate these dynamics, we remove synaptic correlations by a pseudo-storage procedure in which synapses are allowed to evolve independently according to the transition matrix M, rather than changing as actual intermediate patterns are stored. The dynamics work well in this case, as expected (Fig. 2d, blue bars). However, when storing actual patterns drawn from the prior, performance becomes extremely poor, and often worse than the control (Fig. 2d, gray bars). Moreover, performance worsens as the network size increases (not shown). Hence, ignoring correlations is highly detrimental for this class of learning rules too.
Approximately optimal retrieval when synapses are correlated
To accommodate synaptic correlations, we approximate $P(W|x)$ with a maximum entropy distribution with the same marginals and covariance structure, ignoring the higher-order moments.6 Specifically, we assume the evidence from the weights has the functional form:

$$P(W \mid x, t) = \frac{1}{Z(x, t)} \exp\Bigg( \sum_{ij} k_{ij}(x, t) \cdot W_{ij} + \frac{1}{2} \sum_{ijkl} J_{(ij)(kl)}(x, t) \cdot W_{ij} W_{kl} \Bigg) \quad (10)$$

We use the TAP mean-field method [23] to find the parameters k and J and the partition function, Z, for each possible activity pattern x, given the mean and covariance of the synaptic weight matrix computed above7 [16].

6 This is just a generalisation of the simple dynamics, which assume a first-order max entropy model; moreover, the resulting weight distribution is a binary analog of the multivariate normal used in the additive case, allowing the two to be directly compared.

7 Here, we ask whether it is possible to accommodate correlations in appropriate neural dynamics at all, ignoring the issue of how the optimal values for the parameters of the network dynamics would come about.

Figure 3: Implications for neural dynamics. a. R1: parameters for $I^{rec}_i$; linear modulation by network activity, $n_b$. b. R2: nonlinear modulation of the pairwise term by network activity (cf. middle panel in a); other parameters have linear dependences on $n_b$. c. R1: total current as a function of the number of coactivated inputs, $\sum_j W_{ij} x_j$; lines: different levels of neural excitability, $\sum_j W_{ij}$; line widths scale with frequency of occurrence in a sample run. d. Same for R2. e. Nonlinear integration in dendrites, reproduced from [11]; cf. curves in c.
Exact retrieval dynamics based on Eq. 10, but not respecting locality constraints, work substantially better in the presence of synaptic correlations, for all rules (Fig. 2d, yellow bars). It is important to note that for the XOR rule, which was supposed to be the closest analog to the covariance rule and hence afford simple recall dynamics [22], error rates stay above control, suggesting that it is actually a case in which even dependencies beyond second-order correlations would need to be considered. As in the additive case, exact recall dynamics are biologically implausible, as the total current to the neuron depends on the full weight matrix. It is possible to approximate the dynamics using strictly local information by replacing the non-local term by its mean, which, however, is no longer a constant, but rather a linear function of the total activity in the network, $n_b = \sum_{j \neq i} x_j$ [16]. Under this approximation, the current from recurrent connections corresponding to the evidence from the weights becomes:

$$I^{rec}_i = \log \frac{P(W \mid x^{(1)})}{P(W \mid x^{(0)})} = \sum_j k^{\triangle}_{ij}(x) W_{ij} + \frac{1}{2} \sum_{jk} J^{\triangle}_{(ij)(ik)}(x) W_{ij} W_{ik} - Z^{\triangle} \quad (11)$$

where i is the index of the neuron to be updated, and the activity vector $x^{(0/1)}$ has the to-be-updated neuron's activity set to 0 or 1, respectively, and all other components given by the current network state. The functions $k^{\triangle}_{ij}(x) = k_{ij}(x^{(1)}) - k_{ij}(x^{(0)})$, $J^{\triangle}_{(ij)(kl)}(x) = J_{(ij)(kl)}(x^{(1)}) - J_{(ij)(kl)}(x^{(0)})$, and $Z^{\triangle} = \log Z(x^{(1)}) - \log Z(x^{(0)})$ depend on the local activity at the indexed synapses, modulated by the number of active neurons in the network, $n_b$. This approximation is again consistent with our previous analysis, i.e. in the absence of synaptic correlations, the complex dynamics recover the simple case presented before. Importantly, this approximation also does about as well as the exact dynamics (Fig. 2d, red bars).

For post-synaptically gated learning, comparing the parameters of the dynamics in the case of independent versus correlated synapses (Fig. 3a) reveals a modest modulation of the recurrent input by the total activity. More importantly, the net current to the postsynaptic neuron depends non-linearly (formally, quadratically) on the number of co-active inputs, $n_{W1} = \sum_j x_j W_{ij}$ (Fig. 3c), which is reminiscent of experimentally observed dendritic non-linearities [11] (Fig. 3e). Conversely, for the presynaptically gated learning rule, approximately optimal dynamics predict a non-monotonic modulation of activity by lateral inhibition (Fig. 3b), but linear neural integration (Fig. 3d).8 Lastly, retrieval based on the XOR rule has the same form as the simple dynamics derived for the factorized case [16]. However, the total current has to be rescaled to compensate for the correlations introduced by reciprocal connections.

8 The difference between the two rules emerges exclusively because of the constraint of strict locality of the approximation, since the exact form of the dynamics is essentially the same for the two.

RULE                           EXACT DYNAMICS             NEURAL IMPLEMENTATION
additive: covariance           strictly local, linear     linear feedback inh., homeostasis
additive: simple Hebbian       strictly local, nonlinear  nonlinear feedback inh.
additive: generalized Hebbian  semi-local, nonlinear      nonlinear feedback inh.
cascade: presyn. gated         nonlocal, nonlinear        nonlinear feedback inh., linear dendritic integr.
cascade: postsyn. gated        nonlocal, nonlinear        linear feedback inh., non-linear dendritic integr.
cascade: XOR                   beyond correlations        ?

Table 1: Results summary: circuit adaptations against correlations for different learning rules.

5 Discussion

Statistical dependencies between synaptic efficacies are a natural consequence of activity-dependent synaptic plasticity, and yet their implications for network function have been unexplored. Here, in the context of an auto-associative memory network, we investigated the patterns of synaptic correlations induced by several well-known learning rules and their consequent effects on retrieval.
We showed that most rules considered do indeed induce synaptic correlations and that failing to take them into account greatly damages recall. One fortuitous exception is the covariance rule, for which there are no synaptic correlations. This might explain why the bulk of classical treatments of autoassociative memories, using the covariance rule, could achieve satisfying capacity levels despite overlooking the issue of synaptic correlations [5, 24, 25]. In general, taking correlations into account optimally during recall requires dynamics in which there are non-local interactions between neurons. However, we derived approximations that perform well and are biologically realisable without such non-locality (Table 1). Examples include the modulation of neural responses by the total activity of the population, which could be mediated by feedback inhibition, and specific dendritic nonlinearities. In particular, for the post-synaptically gated learning rule, which may be viewed as an abstract model of hippocampal NMDA receptor-dependent plasticity, our model predicts a form of non-linear mapping of recurrent inputs into postsynaptic currents which is similar to experimentally observed dendritic integration in cortical pyramidal cells [11]. In general, the tight coupling between the synaptic plasticity used for encoding (manifested in patterns of synaptic correlations) and circuit dynamics offers an important route for experimental validation [2]. None of the rules governing synaptic plasticity that we considered perfectly reproduced the pattern of correlations in [6]; and indeed, exactly which rule applies in what region of the brain under which neuromodulatory influences is unclear. Furthermore, results in [6] concern the neocortex rather than the hippocampus, which is a more common target for models of auto-associative memory. 
Nonetheless, our analysis has shown that synaptic correlations matter for a range of very different learning rules that span the spectrum of empirical observations. Another strategy to handle the negative effects of synaptic correlations is to weaken or eliminate them. For instance, in the palimpsest synaptic model [14], the deeper the cascade, the weaker the correlations, and so metaplasticity may have the beneficial effect of making recall easier. Another, popular, idea is to use very sparse patterns [21], although this reduces the information content of each one. More speculatively, one might imagine a process of off-line synaptic pruning or recoding, in which strong correlations are removed or the weights adjusted so that simple recall methods will work. Here, we focused on second-order correlations. However, for plasticity rules such as XOR, we showed that this does not suffice. Rather, higher-order correlations would need to be considered, and thus, presumably, higher-order interactions between neurons approximated. Finally, we know from work on neural coding of sensory stimuli that there are regimes in which correlations either help or hurt the informational quality of the code, assuming that decoding takes them into account. Given our results, it becomes important to look at the relative quality of different plasticity rules, assuming realizable decoding – it is not clear whether rules that strive to eliminate correlations will be bested by ones that do not.

Acknowledgments

This work was supported by the Wellcome Trust (CS, ML), the Gatsby Charitable Foundation (PD), and the European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 269921 (BrainScaleS) (ML).

References

1. Sommer, F.T. & Dayan, P. Bayesian retrieval in associative memories with storage errors. IEEE Transactions on Neural Networks 9, 705–713 (1998).
2. Lengyel, M., Kwag, J., Paulsen, O. & Dayan, P. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience 8, 1677–1683 (2005).
3. Lengyel, M. & Dayan, P. Uncertainty, phase and oscillatory hippocampal recall. Advances in Neural Information Processing Systems (2007).
4. Savin, C., Dayan, P. & Lengyel, M. Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories. In Advances in Neural Information Processing Systems 24 (MIT Press, Cambridge, MA, 2011).
5. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982).
6. Song, S., Sjöström, P.J., Reigl, M., Nelson, S. & Chklovskii, D.B. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology 3, e68 (2005).
7. Dayan, P. & Abbott, L. Theoretical Neuroscience (MIT Press, 2001).
8. Averbeck, B.B., Latham, P.E. & Pouget, A. Neural correlations, population coding and computation. Nature Reviews Neuroscience 7, 358–366 (2006).
9. Pillow, J.W. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999 (2008).
10. Latham, P.E. & Nirenberg, S. Synergy, redundancy, and independence in population codes, revisited. Journal of Neuroscience 25, 5195–5206 (2005).
11. Branco, T. & Häusser, M. Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron 69, 885–892 (2011).
12. Hasselmo, M.E. & Bower, J.M. Acetylcholine and memory. Trends Neurosci. 16, 218–222 (1993).
13. MacKay, D.J.C. Maximum entropy connections: neural networks. In Maximum Entropy and Bayesian Methods, Laramie, 1990 (eds. Grandy, Jr., W.T. & Schick, L.H.) 237–244 (Kluwer, Dordrecht, The Netherlands, 1991).
14. Fusi, S., Drew, P.J. & Abbott, L.F. Cascade models of synaptically stored memories. Neuron 45, 599–611 (2005).
15. Abraham, W.C. Metaplasticity: tuning synapses and networks for plasticity. Nature Reviews Neuroscience 9, 387 (2008).
16. For details, see Supplementary Information.
17. Zhang, W. & Linden, D. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nature Reviews Neuroscience (2003).
18. Engel, A., Englisch, H. & Schütte, A. Improved retrieval in neural networks with external fields. Europhysics Letters (EPL) 8, 393–397 (1989).
19. Leibold, C. & Kempter, R. Sparseness constrains the prolongation of memory lifetime via synaptic metaplasticity. Cerebral Cortex 18, 67–77 (2008).
20. Amit, Y. & Huang, Y. Precise capacity analysis in binary networks with multiple coding level inputs. Neural Computation 22, 660–688 (2010).
21. Huang, Y. & Amit, Y. Capacity analysis in multi-state synaptic models: a retrieval probability perspective. Journal of Computational Neuroscience (2011).
22. Dayan Rubin, B. & Fusi, S. Long memory lifetimes require complex synapses and limited sparseness. Frontiers in Computational Neuroscience (2007).
23. Thouless, D.J., Anderson, P.W. & Palmer, R.G. Solution of 'Solvable model of a spin glass'. Philosophical Magazine 35, 593–601 (1977).
24. Amit, D., Gutfreund, H. & Sompolinsky, H. Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys. Rev. Lett. 55, 1530–1533 (1985).
25. Treves, A. & Rolls, E.T. What determines the capacity of autoassociative memories in the brain? Network 2, 371–397 (1991).
Understanding Dropout

Pierre Baldi
Department of Computer Science
University of California, Irvine
Irvine, CA 92697
pfbaldi@uci.edu

Peter Sadowski
Department of Computer Science
University of California, Irvine
Irvine, CA 92697
pjsadows@ics.uci.edu

Abstract

Dropout is a relatively new algorithm for training neural networks which relies on stochastically "dropping out" neurons during training in order to avoid the co-adaptation of feature detectors. We introduce a general formalism for studying dropout on either units or connections, with arbitrary probability values, and use it to analyze the averaging and regularizing properties of dropout in both linear and non-linear networks. For deep neural networks, the averaging properties of dropout are characterized by three recursive equations, including the approximation of expectations by normalized weighted geometric means. We provide estimates and bounds for these approximations and corroborate the results with simulations. Among other results, we also show how dropout performs stochastic gradient descent on a regularized error function.

1 Introduction

Dropout is an algorithm for training neural networks that was described at NIPS 2012 [7]. In its simplest form, during training, at each example presentation, feature detectors are deleted with probability q = 1 − p = 0.5 and the remaining weights are trained by backpropagation. All weights are shared across all example presentations. During prediction, the weights are divided by two. The main motivation behind the algorithm is to prevent the co-adaptation of feature detectors, or overfitting, by forcing neurons to be robust and to rely on population behavior, rather than on the activity of other specific units. In [7], dropout is reported to achieve state-of-the-art performance on several benchmark datasets.
It is also noted that for a single logistic unit dropout performs a kind of "geometric averaging" over the ensemble of possible subnetworks, and conjectured that something similar may occur also in multilayer networks, leading to the view that dropout may be an economical approximation to training and using a very large ensemble of networks. In spite of the impressive results that have been reported, little is known about dropout from a theoretical standpoint, in particular about its averaging, regularization, and convergence properties. Likewise, little is known about the importance of using q = 0.5, whether different values of q can be used, including different values for different layers or different units, and whether dropout can be applied to the connections rather than the units. Here we address these questions.

2 Dropout in Linear Networks

It is instructive to first look at some of the properties of dropout in linear networks, since these can be studied exactly in the most general setting of a multilayer feedforward network described by an underlying acyclic graph. The activity in unit i of layer h can be expressed as:

$S^h_i(I) = \sum_{l<h} \sum_j w^{hl}_{ij} S^l_j$ with $S^0_j = I_j$   (1)

where the variables $w$ denote the weights and $I$ the input vector. Dropout applied to the units can be expressed in the form

$S^h_i = \sum_{l<h} \sum_j w^{hl}_{ij} \delta^l_j S^l_j$ with $S^0_j = I_j$   (2)

where $\delta^l_j$ is a gating 0-1 Bernoulli variable with $P(\delta^l_j = 1) = p^l_j$. Throughout this paper we assume that the variables $\delta^l_j$ are independent of each other, independent of the weights, and independent of the activity of the units. Similarly, dropout applied to the connections leads to the random variables

$S^h_i = \sum_{l<h} \sum_j \delta^{hl}_{ij} w^{hl}_{ij} S^l_j$ with $S^0_j = I_j$   (3)

For brevity, in the rest of this paper we focus exclusively on dropout applied to the units, but all the results remain true for the case of dropout applied to the connections with minor adjustments.
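The ensemble-averaging property derived next for linear networks is easy to check numerically. A small numpy sketch (weights are arbitrary illustrative values): averaging the stochastic activities of Equation 2 over many sampled gate configurations matches a single deterministic pass with the dropped layer's activities scaled by p.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer linear network; dropout with keep probability p on layer-1 units.
W1 = np.array([[1.0, -2.0], [0.5, 3.0]])   # layer-1 weights (illustrative)
W2 = np.array([[2.0, 1.0]])                # layer-2 weights (illustrative)
I = np.array([1.0, 1.0])                   # input vector
p = 0.5

def dropout_pass():
    s1 = W1 @ I                                       # layer-1 activities
    delta = (rng.random(s1.shape) < p).astype(float)  # Bernoulli gates
    return (W2 @ (delta * s1))[0]

# Ensemble mean over subnetworks vs. forward pass with scaled activities:
mc = np.mean([dropout_pass() for _ in range(50000)])
exact = (W2 @ (p * (W1 @ I)))[0]
```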
For a fixed input vector, the expectation of the activity of all the units, taken over all possible realizations of the gating variables, hence all possible subnetworks, is given by:

$E(S^h_i) = \sum_{l<h} \sum_j w^{hl}_{ij} p^l_j E(S^l_j)$ for $h > 0$   (4)

with $E(S^0_j) = I_j$ in the input layer. In short, the ensemble average can easily be computed by feedforward propagation in the original network, simply replacing the weights $w^{hl}_{ij}$ by $w^{hl}_{ij} p^l_j$.

3 Dropout in Neural Networks

3.1 Dropout in Shallow Neural Networks

Consider first a single logistic unit with n inputs, $O = \sigma(S) = 1/(1 + ce^{-\lambda S})$ and $S = \sum_{j=1}^n w_j I_j$. To achieve the greatest level of generality, we assume that the unit produces different outputs $O_1, \ldots, O_m$, corresponding to different sums $S_1, \ldots, S_m$, with different probabilities $P_1, \ldots, P_m$ ($\sum_i P_i = 1$). In the most relevant case, these outputs and these sums are associated with the $m = 2^n$ possible subnetworks of the unit. The probabilities $P_1, \ldots, P_m$ could be generated, for instance, by using Bernoulli gating variables, although this is not necessary for this derivation. It is useful to define the following four quantities: the mean $E = \sum_i P_i O_i$; the mean of the complements $E' = \sum_i P_i (1 - O_i) = 1 - E$; the weighted geometric mean (WGM) $G = \prod_i O_i^{P_i}$; and the weighted geometric mean of the complements $G' = \prod_i (1 - O_i)^{P_i}$. We also define the normalized weighted geometric mean $NWGM = G/(G + G')$. We can now prove the key averaging theorem for logistic functions:

$NWGM(O_1, \ldots, O_m) = \frac{1}{1 + ce^{-\lambda E(S)}} = \sigma(E(S))$   (5)

To prove this result, we write

$NWGM(O_1, \ldots, O_m) = \frac{1}{1 + \frac{\prod_i (1 - O_i)^{P_i}}{\prod_i O_i^{P_i}}} = \frac{1}{1 + \frac{\prod_i (1 - \sigma(S_i))^{P_i}}{\prod_i \sigma(S_i)^{P_i}}}$   (6)

The logistic function satisfies the identity $[1 - \sigma(x)]/\sigma(x) = ce^{-\lambda x}$ and thus

$NWGM(O_1, \ldots, O_m) = \frac{1}{1 + \prod_i [ce^{-\lambda S_i}]^{P_i}} = \frac{1}{1 + ce^{-\lambda \sum_i P_i S_i}} = \sigma(E(S))$   (7)

Thus, in the case of Bernoulli gating variables, we can compute the NWGM over all possible dropout configurations by simple forward propagation: $NWGM = \sigma(\sum_{j=1}^n w_j p_j I_j)$.
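The averaging theorem (5) can be verified by brute force for a small unit; a numpy sketch enumerating all $2^n$ subnetworks (weights and inputs are illustrative values):

```python
import itertools
import numpy as np

def sigma(s):
    return 1.0 / (1.0 + np.exp(-s))   # logistic with c = 1, lambda = 1

w = np.array([0.8, -1.2, 0.5])   # weights (illustrative)
x = np.array([1.0, 2.0, -1.0])   # inputs (illustrative)
p = 0.5                          # keep probability of each input

# Enumerate the m = 2^n subnetworks of the unit and their probabilities.
outs, probs = [], []
for mask in itertools.product([0, 1], repeat=3):
    m = np.array(mask, dtype=float)
    outs.append(sigma(np.dot(w, m * x)))
    probs.append(np.prod(np.where(m == 1, p, 1 - p)))
outs, probs = np.array(outs), np.array(probs)

G = np.prod(outs ** probs)            # weighted geometric mean
Gc = np.prod((1 - outs) ** probs)     # WGM of the complements
nwgm = G / (G + Gc)

# Theorem (5): the NWGM equals sigma applied to the mean input sum.
target = sigma(np.dot(p * w, x))
```

Unlike the later approximations, this identity is exact, so the two quantities agree to machine precision.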
A similar result is true also for normalized exponential transfer functions. Finally, one can also show that the only class of functions f that satisfy NWGM(f) = f(E) are the constant functions and the logistic functions [1].

3.2 Dropout in Deep Neural Networks

We can now deal with the most interesting case of deep feedforward networks of sigmoidal units¹, described by a set of equations of the form

$O^h_i = \sigma(S^h_i) = \sigma\big(\sum_{l<h} \sum_j w^{hl}_{ij} O^l_j\big)$ with $O^0_j = I_j$   (8)

where $O^h_i$ is the output of unit i in layer h. Dropout on the units can be described by

$O^h_i = \sigma(S^h_i) = \sigma\big(\sum_{l<h} \sum_j w^{hl}_{ij} \delta^l_j O^l_j\big)$ with $O^0_j = I_j$   (9)

using the Bernoulli selector variables $\delta^l_j$. For each sigmoidal unit

$NWGM(O^h_i) = \frac{\prod_{\mathcal N} (O^h_i)^{P(\mathcal N)}}{\prod_{\mathcal N} (O^h_i)^{P(\mathcal N)} + \prod_{\mathcal N} (1 - O^h_i)^{P(\mathcal N)}}$   (10)

where $\mathcal N$ ranges over all possible subnetworks. Assume for now that the NWGM provides a good approximation to the expectation (this point will be analyzed in the next section). Then the averaging properties of dropout are described by the following three recursive equations. First, the approximation of means by NWGMs:

$E(O^h_i) \approx NWGM(O^h_i)$   (11)

Second, using the result of the previous section, the propagation of expectation symbols:

$NWGM(O^h_i) = \sigma^h_i\big(E(S^h_i)\big)$   (12)

And third, using the linearity of the expectation with respect to sums and to products of independent random variables:

$E(S^h_i) = \sum_{l<h} \sum_j w^{hl}_{ij} p^l_j E(O^l_j)$   (13)

Equations 11, 12, and 13 are the fundamental equations explaining the averaging properties of the dropout procedure. The only approximation is of course Equation 11, which is analyzed in the next section. If the network contains linear units, then Equation 11 is not necessary for those units and their average can be computed exactly. In the case of regression with linear units in the top layers, this allows one to shave off one layer of approximations.
The same is true in binary classification by requiring the output layer to compute directly the NWGM of the ensemble rather than the expectation. It can be shown that for any error function that is convex up (∪), the error of the mean, weighted geometric mean, and normalized weighted geometric mean of an ensemble is always less than the expected error of the models [1]. Equation 11 is exact if and only if the numbers Oh i are identical over all possible subnetworks N. Thus it is useful to measure the consistency C(Oh i , I) of neuron i in layer h for input I by using the variance V ar Oh i (I) taken over all subnetworks N and their distribution when the input I is fixed. The larger the variance is, the less consistent the neuron is, and the worse we can expect the approximation in Equation 11 to be. Note that for a random variable O in [0,1] the variance cannot exceed 1/4 anyway. This is because V ar(O) = E(O2) −(E(O))2 ≤E(O) −(E(O))2 = E(O)(1 −E(O)) ≤1/4. This measure can also be averaged over a training set or a test set. 1Given the results of the previous sections, the network can also include linear units or normalized exponential units. 3 4 The Dropout Approximation Given a set of numbers O1, . . . , Om between 0 and 1, with probabilities P1, . . . , PM (corresponding to the outputs of a sigmoidal neuron for a fixed input and different subnetworks), we are primarily interested in the approximation of E by NWGM. The NWGM provides a good approximation because we show below that to a first order of approximation: E ≈NWGM and E ≈G. Furthermore, there are formulae in the literature for bounding the error E −G in terms of the consistency (e.g. the Cartwright and Field inequality [6]). However, one can suspect that the NWGM provides even a better approximation to E than the geometric mean. 
For instance, if the numbers $O_i$ satisfy $0 < O_i \le 0.5$ (consistently low), then $\frac{G}{G'} \le \frac{E}{E'}$ and therefore

$G \le \frac{G}{G + G'} \le E$   (14)

This is proven by applying Jensen's inequality to the function $\ln x - \ln(1 - x)$ for $x \in (0, 0.5]$. It is also known as the Ky Fan inequality [2, 8, 9]. To get even better results, one must consider a second-order approximation. For this, we write $O_i = 0.5 + \epsilon_i$ with $0 \le |\epsilon_i| \le 0.5$. Thus we have $E(O) = 0.5 + E(\epsilon)$ and $Var(O) = Var(\epsilon)$. Using a Taylor expansion:

$G = \frac{1}{2} \prod_i \sum_{n=0}^{\infty} \binom{p_i}{n} (2\epsilon_i)^n = \frac{1}{2}\Big[1 + \sum_i 2 p_i \epsilon_i + \sum_i \frac{p_i(p_i - 1)}{2} (2\epsilon_i)^2 + \sum_{i<j} 4 p_i p_j \epsilon_i \epsilon_j + R_3(\epsilon_i)\Big]$   (15)

where $R_3(\epsilon_i)$ is the remainder and

$R_3(\epsilon_i) = \binom{p_i}{3} (2\epsilon_i)^3 (1 + u_i)^{3 - p_i}$   (16)

where $|u_i| \le 2|\epsilon_i|$. Expanding the product gives

$G = \frac{1}{2} + \sum_i p_i \epsilon_i + \Big(\sum_i p_i \epsilon_i\Big)^2 - \sum_i p_i \epsilon_i^2 + R_3(\epsilon) = \frac{1}{2} + E(\epsilon) - Var(\epsilon) + R_3(\epsilon) = E(O) - Var(O) + R_3(\epsilon)$   (17)

By symmetry, we have

$G' = \prod_i (1 - O_i)^{p_i} = 1 - E(O) - Var(O) + R_3(\epsilon)$   (18)

where $R_3(\epsilon)$ is the higher-order remainder. Neglecting the remainder and writing $E = E(O)$ and $V = Var(O)$, we have

$\frac{G}{G + G'} \approx \frac{E - V}{1 - 2V}$ and $\frac{G'}{G + G'} \approx \frac{1 - E - V}{1 - 2V}$   (19)

Thus, to a second order, the differences between the mean and the geometric mean and the normalized geometric means satisfy

$E - G \approx V$ and $E - \frac{G}{G + G'} \approx \frac{V(1 - 2E)}{1 - 2V}$   (20)

and

$1 - E - G' \approx V$ and $(1 - E) - \frac{G'}{G + G'} \approx \frac{V(1 - 2E)}{1 - 2V}$   (21)

Finally, it is easy to check that the factor $(1 - 2E)/(1 - 2V)$ is always less than or equal to 1. In addition, we always have $V \le E(1 - E)$, with equality achieved only for 0-1 Bernoulli variables. Thus

$\Big|E - \frac{G}{G + G'}\Big| \approx \frac{V|1 - 2E|}{1 - 2V} \le \frac{E(1 - E)|1 - 2E|}{1 - 2V} \le 2E(1 - E)|1 - 2E|$   (22)

The first inequality is optimal in the sense that it is attained in the case of a Bernoulli variable with expectation E and, intuitively, the second inequality shows that the approximation error is always small, regardless of whether E is close to 0, 0.5, or 1. In short, the NWGM provides a very good approximation to E, better than the geometric mean G.
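These second-order relations are easy to probe numerically; a numpy sketch with simulated activations (the uniform distribution here is illustrative, standing in for a fairly consistent neuron):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated outputs of one neuron over equally likely subnetworks.
O = rng.uniform(0.3, 0.7, size=1000)
P = np.full(O.size, 1.0 / O.size)

E = np.sum(P * O)
V = np.sum(P * (O - E) ** 2)
G = np.prod(O ** P)                  # weighted geometric mean
Gc = np.prod((1 - O) ** P)           # WGM of the complements
nwgm = G / (G + Gc)

gap_est = V * (1 - 2 * E) / (1 - 2 * V)   # second-order estimate, Eq. 20
bound = 2 * E * (1 - E) * abs(1 - 2 * E)  # bound from Eq. 22
```

For these values one can check that E - G is close to V and that |E - NWGM| is far smaller than E - G, illustrating why the NWGM is the better approximation.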
The property is always true to a second order of approximation, and it is exact when the activities are consistently low, or when $NWGM \le E$, since the latter implies $G \le NWGM \le E$. Several additional properties of the dropout approximation, including the extension to rectified linear units and other transfer functions, are studied in [1].

5 Dropout Dynamics

Dropout performs gradient descent on-line with respect to both the training examples and the ensemble of all possible subnetworks. As such, and with appropriately decreasing learning rates, it is almost surely convergent, like other forms of stochastic gradient descent [11, 4, 5]. To further understand the properties of dropout, it is again instructive to look at the properties of the gradient in the linear case.

5.1 Single Linear Unit

In the case of a single linear unit, consider the two error functions $E_{ENS}$ and $E_D$ associated with the ensemble of all possible subnetworks and the network with dropout. For a single input I, these are defined by:

$E_{ENS} = \frac{1}{2}(t - O_{ENS})^2 = \frac{1}{2}\Big(t - \sum_{i=1}^n p_i w_i I_i\Big)^2$   (23)

$E_D = \frac{1}{2}(t - O_D)^2 = \frac{1}{2}\Big(t - \sum_{i=1}^n \delta_i w_i I_i\Big)^2$   (24)

We use a single training input I for notational simplicity; otherwise the errors of each training example can be combined additively. The learning gradient is given by

$\frac{\partial E_{ENS}}{\partial w_i} = -(t - O_{ENS})\frac{\partial O_{ENS}}{\partial w_i} = -(t - O_{ENS})\, p_i I_i$   (25)

$\frac{\partial E_D}{\partial w_i} = -(t - O_D)\frac{\partial O_D}{\partial w_i} = -(t - O_D)\,\delta_i I_i = -t\delta_i I_i + w_i \delta_i^2 I_i^2 + \sum_{j \ne i} w_j \delta_i \delta_j I_i I_j$   (26)

The dropout gradient is a random variable and we can take its expectation. A short calculation yields

$E\Big(\frac{\partial E_D}{\partial w_i}\Big) = \frac{\partial E_{ENS}}{\partial w_i} + w_i p_i (1 - p_i) I_i^2 = \frac{\partial E_{ENS}}{\partial w_i} + w_i I_i^2 \, Var(\delta_i)$   (27)

Thus, remarkably, in this case the expectation of the gradient with dropout is the gradient of the regularized ensemble error

$E = E_{ENS} + \frac{1}{2} \sum_{i=1}^n w_i^2 I_i^2 \, Var(\delta_i)$   (28)

The regularization term is the usual weight decay or Gaussian prior term based on the square of the weights to prevent overfitting.
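Equation 27 can be confirmed by Monte Carlo for a single linear unit; a numpy sketch (target, input, and weights are illustrative values): the average dropout gradient matches the ensemble gradient plus the adaptive weight-decay term.

```python
import numpy as np

rng = np.random.default_rng(3)

I = np.array([1.0, 2.0, -1.0])   # input (illustrative)
w = np.array([0.5, -0.3, 0.8])   # weights (illustrative)
t, p = 1.0, 0.5                  # target and keep probability

# Ensemble gradient (Eq. 25) plus the weight-decay term of Eq. 27:
O_ens = np.dot(p * w, I)
g_ens = -(t - O_ens) * p * I
g_reg = g_ens + w * p * (1 - p) * I ** 2

# Monte Carlo estimate of the expected dropout gradient (Eq. 26):
D = (rng.random((200000, 3)) < p).astype(float)   # Bernoulli gates
O_d = (D * I) @ w                                 # dropped-out outputs
g_mc = (-(t - O_d)[:, None] * (D * I)).mean(axis=0)
```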
Dropout provides immediately the magnitude of the regularization term which is adaptively scaled by the inputs and by the variance of the dropout variables. Note that pi = 0.5 is the value that provides the highest level of regularization. 5 5.2 Single Sigmoidal Unit The previous result generalizes to a sigmoidal unit O = σ(S) = 1/(1 + ce−λS) trained to minimize the relative entropy error E = −(t log O + (1 −t) log(1 −O)). In this case, ∂ED ∂wi = −λ(t −O) ∂S ∂wi = −λ(t −O)δiIi (29) The terms O and Ii are not independent but using a Taylor expansion with the NWGM approximation gives E ∂ED ∂wi ≈∂EENS ∂wi + λσ′(SENS)wiI2 i V ar(δi) (30) with SENS = P j wjpjIj. Thus, as in the linear case, the expectation of the dropout gradient is approximately the gradient of the ensemble network regularized by weight decay terms with the proper adaptive coefficients. A similar analysis, can be carried also for a set of normalized exponential units and for deeper networks [1]. 5.3 Learning Phases and Sparse Coding During dropout learning, we can expect three learning phases: (1) At the beginning of learning, when the weights are typically small and random, the total input to each unit is close to 0 for all the units and the consistency is high: the output of the units remains roughly constant across subnetworks (and equal to 0.5 with c = 1). (2) As learning progresses, activities tend to move towards 0 or 1 and the consistency decreases, i.e. for a given input the variance of the units across subnetworks increases. (3) As the stochastic gradient learning procedure converges, the consistency of the units converges to a stable value. Finally, for simplicity, assume that dropout is applied only in layer h where the units have an output of the form Oh i = σ(Sh i ) and Sh i = P l<h whl ij δl jOl j. For a fixed input, Ol j is a constant since dropout is not applied to layer l. 
Thus V ar(Sh i ) = X l<h (whl ij )2(Ol j)2pl j(1 −pl j) (31) under the usual assumption that the selector variables δl j are independent of each other. Thus V ar(Sh i ) depends on three factors. Everything else being equal, it is reduced by: (1) Small weights which goes together with the regularizing effect of dropout; (2) Small activities, which shows that dropout is not symmetric with respect to small or large activities. Overall, dropout tends to favor small activities and thus sparse coding; and (3) Small (close to 0) or large (close to 1) values of the dropout probabilities pl j. Thus values pl j = 0.5 maximize the regularization effect but may also lead to slower convergence to the consistent state. Additional results and simulations are given in [1]. 6 Simulation Results We use Monte Carlo simulation to partially investigate the approximation framework embodied by the three fundamental dropout equations 11, 12, and 13, the accuracy of the second-order approximation and bounds in Equations 20 and 22, and the dynamics of dropout learning. We experiment with an MNIST classifier of four hidden layers (784-1200-1200-1200-1200-10) that replicates the results in [7] using the Pylearn2 and Theano software libraries[12, 3]. The network is trained with a dropout probability of 0.8 in the input, and 0.5 in the four hidden layers. For fixed weights and a fixed input, 10,000 Monte Carlo simulations are used to estimate the distribution of activity O in each neuron. Let O∗be the activation under the deterministic setting with the weights scaled appropriately. The left column of Figure 1 confirms empirically that the second-order approximation in Equation 20 and the bound in Equation 22 are accurate. The right column of Figure 1 shows the difference between the true ensemble average E(O) and the prediction-time neuron activity O∗. This difference grows very slowly in the higher layers, and only for active neurons. 
Figure 1: Left: The difference E(O) − NWGM(O), its second-order approximation in Equation 20, and the bound from Equation 22, plotted for four hidden layers and a typical fixed input. Right: The difference between the true ensemble average E(O) and the final neuron prediction O*.

Next, we examine the neuron consistency during dropout training. Figure 2a shows the three phases of learning for a typical neuron. In Figure 2b, we observe that the consistency does not decline in higher layers of the network. One clue into how this happens is the distribution of neuron activity. As noted in [10] and Section 5 above, dropout training results in sparse activity in the hidden layers (Figure 3). This increases the consistency of neurons in the next layer.

Figure 2: (a) The three phases of learning. For a particular input, a typical active neuron (red) starts out with low variance, experiences a large increase in variance during learning, and eventually settles to some steady constant value. In contrast, a typical inactive neuron (blue) quickly learns to stay silent. Shown are the mean with 5% and 95% percentiles. (b) Consistency does not noticeably decline in the upper layers. Shown here are the mean Std(O) for active neurons (0.1 < O after training) in each layer, along with the 5% and 95% percentiles.

Figure 3: In every hidden layer of a dropout-trained network, the distribution of neuron activations O* is sparse and not symmetric. These histograms were totalled over a set of 100 random inputs.

References

[1] P. Baldi and P. Sadowski. The Dropout Learning Algorithm. Artificial Intelligence, 2014. In press.
[2] E. F. Beckenbach and R. Bellman. Inequalities. Springer-Verlag, Berlin, 1965.
[3] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), Austin, TX, June 2010. Oral presentation.
[4] L. Bottou. Online algorithms and stochastic approximations. In D. Saad, editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK, 1998.
[5] L. Bottou. Stochastic learning. In O. Bousquet and U. von Luxburg, editors, Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence, LNAI 3176, pages 146–168. Springer Verlag, Berlin, 2004.
[6] D. Cartwright and M. Field. A refinement of the arithmetic mean-geometric mean inequality. Proceedings of the American Mathematical Society, pages 36–38, 1978.
[7] G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. http://arxiv.org/abs/1207.0580, 2012.
[8] E. Neuman and J. Sándor. On the Ky Fan inequality and related inequalities I. Mathematical Inequalities and Applications, 5:49–56, 2002.
[9] E. Neuman and J. Sándor. On the Ky Fan inequality and related inequalities II. Bulletin of the Australian Mathematical Society, 72(1):87–108, 2005.
[10] S. Nitish. Improving Neural Networks with Dropout. PhD thesis, University of Toronto, Toronto, Canada, 2013.
[11] H. Robbins and D. Siegmund. A convergence theorem for non negative almost supermartingales and some applications. Optimizing Methods in Statistics, pages 233–257, 1971.
[12] D. Warde-Farley, I. Goodfellow, P. Lamblin, G. Desjardins, F. Bastien, and Y. Bengio. Pylearn2. 2011. http://deeplearning.net/software/pylearn2.
Efficient Supervised Sparse Analysis and Synthesis Operators Pablo Sprechmann Duke University pablo.sprechmann@duke.edu Roee Litman Tel Aviv University roeelitman@post.tau.ac.il Tal Ben Yakar Tel Aviv University talby10@gmail.com Alex Bronstein Tel Aviv University bron@eng.tau.ac.il Guillermo Sapiro Duke University guillermo.sapiro@duke.edu ∗ Abstract In this paper, we propose a new computationally efficient framework for learning sparse models. We formulate a unified approach that contains as particular cases models promoting sparse synthesis and analysis type of priors, and mixtures thereof. The supervised training of the proposed model is formulated as a bilevel optimization problem, in which the operators are optimized to achieve the best possible performance on a specific task, e.g., reconstruction or classification. By restricting the operators to be shift invariant, our approach can be thought as a way of learning sparsity-promoting convolutional operators. Leveraging recent ideas on fast trainable regressors designed to approximate exact sparse codes, we propose a way of constructing feed-forward networks capable of approximating the learned models at a fraction of the computational cost of exact solvers. In the shift-invariant case, this leads to a principled way of constructing a form of taskspecific convolutional networks. We illustrate the proposed models on several experiments in music analysis and image processing applications. 1 Introduction Parsimony, preferring a simple explanation to a more complex one, is probably one of the most intuitive principles widely adopted in the modeling of nature. The past two decades of research have shown the power of parsimonious representation in a vast variety of applications from diverse domains of science. Parsimony in the form of sparsity has been shown particularly useful in the fields of signal and image processing and machine learning. 
Sparse models impose sparsity-promoting priors on the signal, which can be roughly categorized as synthesis or analysis. Synthesis priors are generative, asserting that the signal is approximated well as a superposition of a small number of vectors from a (possibly redundant) synthesis dictionary. Analysis priors, on the other hand, assume that the signal admits a sparse projection onto an analysis dictionary. Many classes of signals, in particular, speech, music, and natural images, have been shown to be sparsely representable in overcomplete wavelet and Gabor frames, which have been successfully adopted as synthesis dictionaries in numerous applications [14]. Analysis priors involving differential operators, of which total variation is a popular instance, have also been shown very successful in regularizing ill-posed image restoration problems [19]. ∗Work partially supported by ARO, BSF, NGA, ONR, NSF, NSSEFF, and Israel-Us Binational. 1 Despite the spectacular success of these axiomatically constructed synthesis and analysis operators, significant empirical evidence suggests that better performance is achieved when a data- or problemspecific dictionary is used instead of a predefined one. Works [1, 16], followed by many others, demonstrated that synthesis dictionaries can be constructed to best represent training data by solving essentially a matrix factorization problem. Despite the lack of convexity, many efficient dictionary learning procedures have been proposed. This unsupervised or data-driven approach to synthesis dictionary learning is well-suited for reconstruction tasks such as image restoration. For example, synthesis models with learned dictionaries, have achieved excellent results in denoising [9, 13]. However, in discriminative tasks such as classification, good data reconstruction is not necessarily required or even desirable. 
Attempts to replicate the success of sparse models in discriminative tasks led to the recent interest in supervised or a task- rather than data-driven dictionary learning, which appeared to be a significantly more difficult modeling and computational problem compared to its unsupervised counterpart [6]. Supervised learning also seems to be the only practical option for learning unstructured nongenerative analysis operators, for which no simple unsupervised alternatives exist. While the supervised analysis operator learning has been mainly used as regularization on inverse problems, e.g., denoising [5], we argue that it is often better suited for classification tasks than it synthesis counterpart, since the feature learning and the reconstruction are separated. Recent works proposed to address the supervised learning of ℓ1 norm synthesis [12] and analysis [5, 17] priors via bilevel optimization [8], in which the minimization of a task-specific loss with respect to a dictionary depends in turn on the minimizer of a representation pursuit problem using that dictionary. For the synthesis case, the task-oriented bilevel optimization problem is smooth and can be efficiently solved using stochastic gradient descent (SGD) [12]. However, [12] heavily relies on the separability of the proximal operator of the ℓ1 norm, and thus cannot be extended to the analysis case, where the ℓ1 term is not separable. The approach proposed in [17] formulates an analysis model with a smoothed ℓ1-type prior and uses implicit differentiation to obtain its gradients with respect to the dictionary required for the solution of the bilevel problem. However, such approximate priors are known to produce inferior results compared to their exact counterparts. Main contributions. 
This paper focuses on supervised learning of synthesis and analysis priors, making three main contributions: First, we consider a more general sparse model encompassing analysis and synthesis priors as particular cases, and formulate its supervised learning as a bilevel optimization problem. We propose a new analysis technique, for which the (almost everywhere) smoothness of the proposed bilevel problem is shown, and its exact subgradients are derived. We also show that the model can be extended to include a sensing matrix and a non-Euclidean metric in the data term, both of which can be learned as well. We relate the learning of the latter metric matrix to task-driven metric learning techniques. Second, we show a systematic way of constructing fast fixed-complexity approximations to the solution of the proposed exact pursuit problem by unrolling a few iterations of the exact iterative solver into a feed-forward network, whose parameters are learned in the supervised regime. The idea of deriving a fast approximation of sparse codes from an iterative algorithm has been recently successfully advocated in [11] for the synthesis model. We present an extension of this line of research to the various settings of analysis-flavored sparse models. Third, we dedicate special attention to the shift-invariant particular case of our model. The fast approximation in this case assumes the form of a convolutional neural network.

2 Analysis, synthesis, and mixed sparse models

We consider a generalization of the Lasso-type [21, 22] pursuit problem

$\min_y \; \frac{1}{2}\|M_1 x - M_2 y\|_2^2 + \lambda_1 \|\Omega y\|_1 + \frac{\lambda_2}{2}\|y\|_2^2,$   (1)

where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^k$, $M_1$ and $M_2$ are $m \times n$ and $m \times k$, respectively, $\Omega$ is $r \times k$, and $\lambda_1, \lambda_2 > 0$ are parameters.
Pursuit problem (1) encompasses many important particular cases that have been extensively studied in the literature. By setting $M_1 = I$, $\Omega = I$, and $M_2 = D$ to be a column-overcomplete dictionary ($k > m$), the standard sparse synthesis model is obtained, which attempts to represent the data vector x as a sparse linear combination of the atoms of D. The case where the data are unavailable directly, but rather through a set of (usually fewer, m < n) linear measurements, is handled by supplying $x \in \mathbb{R}^m$ and setting $M_2 = \Phi D$, with $\Phi$ being an $m \times n$ sensing matrix. Such a case arises frequently in compressed sensing applications as well as in general inverse problems. On the other hand, by setting $M_1, M_2 = I$, and $\Omega$ a row-overcomplete dictionary ($r > k$), the standard sparse analysis model is obtained, which attempts to approximate the data vector x by another vector y in the same space admitting a sparse projection on $\Omega$. For example, setting $\Omega$ to be the matrix of discrete derivatives leads to total variation regularization, which has been shown extremely successful in numerous signal processing applications. The analysis model can also be extended by adding an $m \times k$ sensing operator $M_2 = \Phi$, assuming that x is given in the m-dimensional measurement space. This leads to popular analysis formulations of image deblurring, super-resolution, and other inverse problems. Keeping both the analysis and the synthesis dictionaries and setting $M_2 = D$, $\Omega = [\Omega' D; I]$, leads to the mixed model.

Algorithm 1: Alternating direction method of multipliers (ADMM).
  input: Data x, matrices $M_1, M_2, \Omega$, weights $\lambda_1, \lambda_2$, parameter $\rho > 0$.
  output: Sparse code y.
  Initialize $\mu^0 = 0$, $z^0 = 0$
  for $j = 1, 2, \ldots$ until convergence do
    $y^{j+1} = (M_2^T M_2 + \rho \Omega^T \Omega + \lambda_2 I)^{-1}(M_2^T M_1 x + \rho \Omega^T (z^j - \mu^j))$
    $z^{j+1} = \sigma_{\lambda_1/\rho}(\Omega y^{j+1} + \mu^j)$
    $\mu^{j+1} = \mu^j + \Omega y^{j+1} - z^{j+1}$
  end
Here, $\sigma_t(z) = \mathrm{sign}(z) \cdot \max\{|z| - t, 0\}$ denotes the element-wise soft thresholding (the proximal operator of $\ell_1$).
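Algorithm 1 is straightforward to implement; a minimal numpy sketch (the function name is ours, and a fixed iteration count stands in for a convergence test):

```python
import numpy as np

def admm_pursuit(x, M1, M2, Omega, lam1, lam2, rho=1.0, iters=500):
    """ADMM iterations of Algorithm 1 for pursuit problem (1)."""
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    k, r = Omega.shape[1], Omega.shape[0]
    y, z, mu = np.zeros(k), np.zeros(r), np.zeros(r)
    # The y-update system matrix is fixed, so invert (or factor) it once.
    A = np.linalg.inv(M2.T @ M2 + rho * Omega.T @ Omega + lam2 * np.eye(k))
    b = M2.T @ (M1 @ x)
    for _ in range(iters):
        y = A @ (b + rho * Omega.T @ (z - mu))
        z = soft(Omega @ y + mu, lam1 / rho)      # proximal step on z
        mu = mu + Omega @ y - z                   # dual (multiplier) update
    return y

# Sanity check: with M1 = M2 = Omega = I and lam2 = 0, problem (1) reduces
# to the lasso, whose solution is element-wise soft thresholding of x.
I3 = np.eye(3)
y = admm_pursuit(np.array([3.0, -0.5, 1.0]), I3, I3, I3, lam1=1.0, lam2=0.0)
```

For large problems one would cache a Cholesky factorization instead of an explicit inverse, but the structure of the three updates is exactly that of Algorithm 1.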
Note that the reconstructed data vector is now obtained by $\hat{x} = Dy$ with sparse y; as a result, the $\ell_1$ term is extended to make sparse the projection of $\hat{x}$ on the analysis dictionary $\Omega'$, as well as to impose sparsity of y. A sensing matrix can be incorporated in this setting as well, by setting $M_1 = \Phi$ and $M_2 = \Phi D$. Alternatively, we can interpret $\Phi$ as the projection matrix parametrizing a $\Phi^T\Phi$ Mahalanobis metric, thus generalizing the traditional Euclidean data term. A particularly important family of analysis operators is obtained when the operator is restricted to be shift-invariant. In this case, the operator can be expressed as a convolution with a filter, $\gamma * y$, whose impulse response $\gamma \in \mathbb{R}^f$ is generally of a much smaller dimension than y. A straightforward generalization would be to consider an analysis operator consisting of q filters,

$\Omega(\gamma_1, \ldots, \gamma_q) = [\Omega_1(\gamma_1); \cdots; \Omega_q(\gamma_q)]$ with $\Omega_i y = \gamma_i * y$, $1 \le i \le q$.   (2)

This model includes as a particular case the isotropic total variation priors. In this case, q = 2 and the filters correspond to the discrete horizontal and vertical derivatives. In general, the exact form of the operator depends on the dimension of the convolution and the type of boundary conditions. One of the most attractive properties of pursuit problem (1) is convexity, which becomes strict for $\lambda_2 > 0$. While for $\Omega = I$, (1) can be solved efficiently using the popular proximal methods [15] (such as FISTA [2]), this is no longer an option in the case of a non-trivial $\Omega$, as $\|\Omega y\|_1$ no longer has a closed-form proximal operator. A way to circumvent this difficulty is by introducing an auxiliary variable $z = \Omega y$ and solving the constrained convex program

$\min_{y,z} \; \frac{1}{2}\|M_1 x - M_2 y\|_2^2 + \lambda_1 \|z\|_1 + \frac{\lambda_2}{2}\|y\|_2^2 \quad \text{s.t.} \quad z = \Omega y,$   (3)

with an unscaled $\ell_1$ term.
This leads to a family of the so-called split-Bregman methods; the application of augmented Lagrangian techniques to solve (3) is known in the literature as alternating direction method of multipliers (ADMM) [4], summarized in Algorithm 1. Particular instances might be solved more efficiently with alternative algorithms (i.e. proximal splitting methods). 3 Bilevel sparse models A central focus of this paper is to develop a framework for supervised learning of the parameters in (1), collectively denoted by Θ = {M1, M2, D, Ω}, to achieve the best possible performance in a 3 specific task such as reconstruction or classification. Supervised schemes arise very naturally when dealing with analysis operators. In sharp contrast to the generative synthesis models, where data reconstruction can be enforced unsupervisedly, there is no trivial way for unsupervised training of analysis operators without restricting them to satisfy some external, frequently arbitrary, constraints. Clearly, unconstrained minimization of (1) over Ωwould lead to a trivial solution Ω= 0. The ideas proposed in [12] fit very well here, and were in fact used in [5, 17] for learning of unstructured analysis operators. However, in both cases the authors used a smoothed version of the ℓ1 penalty, which is known to produce inferior results. In this work we extend these ideas, without smoothing the penalty. Formally, given an observed variable x ∈Rn coming from a certain distribution PX , we aim at predicting a corresponding latent variable y ∈Rk. The latter can be discrete, representing a label in a classification task, or continuous like in regression or reconstruction problems. As noted before, when λ2 > 0, problem (1) is strictly convex and, consequently, has a unique minimizer. The solution of the pursuit problem defines, therefore, an unambiguous deterministic map from the space of the observations to the space of the latent variables, which we denote by y∗ Θ(x). 
The map depends on the model parameters Θ. The goal of supervised learning is to select such Θ that minimize the expectation over PX of some problem-specific loss function ℓ. In practice, the distribution PX is usually unknown, and the expected loss is substituted by an empirical loss computed on a training set of pairs (x, y) ∈(X, Y). The task-driven model learning problem becomes [12] min Θ 1 |X| X (x,y)∈(X,Y) ℓ(y, x, y∗ Θ(x)) + φ(Θ), (4) where φ(Θ) denotes a regularizer on the model parameters added to stabilize the solution. Problem (4) is a bilevel optimization problem [8], as we need to optimize the loss function ℓ, which in turn depends on the minimizer of (1). As an example, let us examine the generic class of signal reconstruction problems, in which, as explained in Section 2, the matrix M2 = Φ plays the role of a linear degradation (e.g., blur and subsampling in case of image super-resolution problems), producing the degraded and, possibly, noisy observation x = Φy+n from the latent clean signal y. The goal of the model learning problem is to select the model parameters Θ yielding the most accurate inverse operator, y∗ Θ(Φy) ≈y. Assuming a simple white Gaussian noise model, this can be achieved through the following loss ℓ(y, x, y∗) = 1 2∥y −y∗∥2 2. (5) While the supervised learning of analysis operator has been considered for solving denoising problems [5, 17], here we address more general scenarios. In particular, we argue that, when used along with metric learning, it is often better suited for classification tasks than its synthesis counterpart, because the non-generative nature of analysis models is more suitable for feature learning. For simplicity, we consider the case of a linear binary classifier of the form sign(wTz + b) operating on the “feature vector” z = Ωy∗ Θ(x). 
Using a loss of the form ℓ(y, x, z) = f(−y(wᵀz + b)), with f being, e.g., the logistic regression function f(t) = log(1 + e^{−t}), we train the model parameters Θ simultaneously with the classifier parameters w, b. In this context, the learning of Θ can be interpreted as feature learning. The generalization to multi-class classification problems is straightforward, using a matrix W and a vector b instead of w and b. It is worth noting that more stable classifiers are obtained by adding a regularization of the form φ = ∥W∥²_F to the learning problem (4).

Optimization. A local minimizer of the non-convex model learning problem (4) can be found via stochastic optimization [8, 12, 17], by performing gradient descent steps on each of the variables in Θ, with the pair (x, y) drawn at random from the training set at each step. Specifically, the parameters at iteration i + 1 are obtained by

Θ_{i+1} ← Θ_i − η_i ∇_Θ ℓ(x, y, y∗_{Θ_i}(x)),   (6)

where 0 ≤ η_i ≤ η is a decreasing sequence of step sizes. Following [12], we use a step size of the form η_i = min(η, η i₀/i) in all our experiments: a fixed step size is used during the first i₀ iterations, after which it decays according to the 1/i annealing strategy. Note that the learning requires the gradient ∇_Θ ℓ, which in turn relies on the gradient of y∗_Θ(x) with respect to Θ. Even though y∗_Θ(x) is obtained by solving a non-smooth optimization problem, we will show that it is almost everywhere differentiable, and that one can compute its gradient with respect to Θ = {M1, M2, D, Ω} explicitly and in closed form. In the next section, we briefly summarize the derivation of the gradients ∇_{M2}ℓ and ∇_Ω ℓ, as these two are the most interesting cases. The gradients needed for the remaining model settings described in Section 2 can be obtained straightforwardly from ∇_{M2}ℓ and ∇_Ω ℓ.

Gradient computation.
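The stochastic descent loop (6) with the annealed step-size schedule can be sketched generically as follows; `grad_loss` stands in for the closed-form gradients derived in the next section, and all names and default values are illustrative.

```python
import numpy as np

def step_size(i, eta=0.1, i0=100):
    # eta_i = min(eta, eta * i0 / i): constant for the first i0 iterations,
    # then decaying with the 1/i annealing strategy
    return min(eta, eta * i0 / max(i, 1))

def train(theta, training_pairs, grad_loss, n_iter=1000, eta=0.1, i0=100, seed=0):
    """Generic stochastic descent for the bilevel problem (4).
    `grad_loss(theta, x, y)` must return the gradient of the loss at the
    pair (x, y); here it is an abstract callable (an assumption)."""
    rng = np.random.default_rng(seed)
    for i in range(1, n_iter + 1):
        x, y = training_pairs[rng.integers(len(training_pairs))]
        theta = theta - step_size(i, eta, i0) * grad_loss(theta, x, y)
    return theta
```

On a toy scalar problem with loss (θ − y)², the loop converges to the target value, which checks that the schedule behaves as described.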
To obtain the gradients of the cost function with respect to the matrices M2 and Ω, we consider a version of (3) in which the equality constraint is relaxed by a penalty,

min_{z,y} (1/2)∥M1x − M2y∥₂² + (t/2)∥Ωy − z∥₂² + λ1∥z∥₁ + (λ2/2)∥y∥₂²,   (7)

with t > 0 being the penalty parameter. We denote by y∗_t and z∗_t the unique minimizers of this strongly convex optimization problem with t, x, M1, M2 and Ω fixed. Naturally, y∗_t and z∗_t are functions of x and Θ, in the same way as y∗_Θ(x); throughout this section we omit this dependence to simplify notation. The first-order optimality conditions of (7) lead to the equalities

M2ᵀ(M2 y∗_t − M1x) + tΩᵀ(Ω y∗_t − z∗_t) + λ2 y∗_t = 0,   (8)
t(z∗_t − Ω y∗_t) + λ1(sign(z∗_t) + α) = 0,   (9)

where the sign of zero is defined as zero and α is a vector in R^r such that α_Λ = 0 and |α_{Λᶜ}| ≤ 1. Here, α_Λ denotes the sub-vector of α whose rows are restricted to Λ, the set of non-zero coefficients (active set) of z∗_t. It has been shown that the solutions of the synthesis [12], analysis [23], and generalized Lasso [22] regularization problems are all piecewise affine functions of the observations and of the regularization parameter. This means that the active set of the solution is constant on intervals of the regularization parameter λ1. Moreover, the number of transition points (values of λ1 at which, for a given observation x, the active set of the solution changes) is finite and thus negligible. It can be shown that if λ1 is not a transition point of x, then a small perturbation in Ω, M1, or M2 leaves Λ and the signs of the coefficients of the solution unchanged [12]. Applying this result to the optimality conditions, we can state that sign(z∗_t) = sign(Ω y∗_t). Let I_Λ be the projection onto Λ, and let P_Λ = I_Λᵀ I_Λ = diag{|sign(z∗_t)|} denote the matrix setting to zero the rows corresponding to Λᶜ. Multiplying the second optimality condition by P_Λ, we have z∗_t = P_Λ z∗_t = P_Λ Ω y∗_t − (λ1/t) sign(z∗_t), where we used the fact that P_Λ sign(z∗_t) = sign(z∗_t).
We can plug the latter result into (8), obtaining

y∗_t = Q_t (M2ᵀ M1 x − λ1 Ωᵀ sign(z∗_t)),   (10)

where Q_t = (tΩᵀP_{Λᶜ}Ω + B)^{−1} and B = M2ᵀM2 + λ2 I. Using a first-order Taylor expansion of (10), we obtain expressions for the gradients of ℓ(y∗_t) with respect to M2 and Ω,

∇_Ω ℓ(y∗_t) = −λ1 sign(z∗_t) β_tᵀ − P_{Λᶜ} Ω (t y∗_t β_tᵀ + t β_t y∗_tᵀ),   (11)
∇_{M2} ℓ(y∗_t) = M2 (y∗_t β_tᵀ + β_t y∗_tᵀ),   (12)

where β_t = Q_t ∇_{y∗} ℓ(y∗_t). Note that since the (unique) solution of (7) can be made arbitrarily close to the (unique) solution of (1) by increasing t, we can obtain the exact gradients of y∗ by taking the limit t → ∞ in the above expressions. First, observe that Q_t = (tΩᵀP_{Λᶜ}Ω + B)^{−1} = (B(tB^{−1}ΩᵀP_{Λᶜ}Ω + I))^{−1} = (tC + I)^{−1}B^{−1}, where C = B^{−1}ΩᵀP_{Λᶜ}Ω. Note that B is invertible if M2 is full-rank or if λ2 > 0. Let C = UHU^{−1} be the eigendecomposition of C, with H a diagonal matrix with elements h_i, 1 ≤ i ≤ k. Then Q_t = UH_tU^{−1}B^{−1}, where H_t is diagonal with 1/(t h_i + 1) on the diagonal. In the limit, 1/(t h_i + 1) → 1 if h_i = 0 and → 0 otherwise, yielding

Q = lim_{t→∞} Q_t = UH′U^{−1}B^{−1}, with H′ = diag{h′_i}, where h′_i = 0 if h_i ≠ 0 and h′_i = 1 if h_i = 0.   (13)

The optimum of (1) is given by y∗ = Q(M2ᵀM1x − λ1Ωᵀ sign(z∗)). Analogously, we take the limit in the gradient expressions (11) and (12). We summarize our main result in Proposition 1 below, for which we define

˜Q = lim_{t→∞} t Q_t = UH″U^{−1}B^{−1}, with H″ = diag{h″_i}, where h″_i = 1/h_i if h_i ≠ 0 and h″_i = 0 if h_i = 0.   (14)

Figure 1: ADMM neural network encoder. The network comprises K identical layers parameterized by the matrices A and B and the threshold vector t, and one output layer parameterized by the matrices U and V.
The initial values of the learned parameters are given by ADMM (see Algorithm 1) as U = (M2ᵀM2 + ρΩᵀΩ + λ2I)^{−1}M2ᵀM1, V = ρ(M2ᵀM2 + ρΩᵀΩ + λ2I)^{−1}Ωᵀ, A = ΩU, H = 2ΩV − I, G = 2I − ΩV, F = ΩV − I, and t = (λ1/ρ)1.

Proposition 1. The functional y∗ = y∗_Θ(x) in (1) is almost everywhere differentiable for λ2 > 0, and its gradients satisfy

∇_Ω ℓ(y∗) = −λ1 sign(Ωy∗) βᵀ − P_{Λᶜ} Ω (˜y∗ βᵀ + ˜β y∗ᵀ),
∇_{M2} ℓ(y∗) = M2 (y∗ βᵀ + β y∗ᵀ),

where the vectors β, ˜β and ˜y∗ in R^k are defined as β = Q ∇_{y∗}ℓ(x, Θ), ˜β = ˜Q ∇_{y∗}ℓ(x, Θ), and ˜y∗ = ˜Q(M2ᵀM1x − λ1Ωᵀ sign(z∗)), with Q and ˜Q given by (13) and (14), respectively.

In addition to being a useful analytic tool, the relationship between (1) and its relaxed version (7) also has practical implications. Obtaining the exact gradients given in Proposition 1 requires computing the eigendecomposition of C, which is in general computationally expensive. In practice, we approximate the gradients using the expressions in (11) and (12) with a fixed, sufficiently large value of t. The supervised model learning framework can be straightforwardly specialized to the shift-invariant case, in which the filters γ_i in (2) are learned instead of a full matrix Ω. The gradients of ℓ with respect to the filter coefficients are obtained using Proposition 1 and the chain rule.

4 Fast approximation

The discussed sparse models rely on an iterative optimization scheme such as ADMM to solve the pursuit problem (1). This has relatively high computational complexity and latency, which are furthermore data-dependent. ADMM typically requires hundreds or thousands of iterations to converge, depending greatly on the problem and the input. While classical optimization theory provides worst-case (data-independent) convergence rate bounds for many families of iterative algorithms, very little is known about their behavior on specific data coming, e.g., from a distribution supported on a low-dimensional manifold, characteristics often exhibited by real data.
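Returning briefly to the gradient computation: the limit matrices Q and ˜Q of (13)-(14), which Proposition 1 requires, can be formed from the eigendecomposition of C. The numpy sketch below is illustrative (shapes and the encoding of the active set via a sign vector are assumptions); it exploits the fact that C has real non-negative eigenvalues here.

```python
import numpy as np

def limit_matrices(Omega, M2, lam2, active_signs):
    """Compute Q = lim_{t->inf} Q_t and Qtil = lim_{t->inf} t*Q_t from
    eqs. (13)-(14), where Q_t = (t Omega^T P_{Lc} Omega + B)^{-1},
    B = M2^T M2 + lam2*I, and P_{Lc} zeroes the rows in the active set
    of z* (encoded by the sign vector `active_signs`)."""
    k = Omega.shape[1]
    B = M2.T @ M2 + lam2 * np.eye(k)
    P_Lc = np.diag(1.0 - np.abs(np.sign(active_signs)))   # complement of the active set
    C = np.linalg.solve(B, Omega.T @ P_Lc @ Omega)        # C = B^{-1} Omega^T P_{Lc} Omega
    h, U = np.linalg.eig(C)
    zero = np.isclose(h, 0.0)
    Hp = np.diag(np.where(zero, 1.0, 0.0))                # H' : 1 on null directions of C
    Hpp = np.diag(np.where(zero, 0.0, 1.0 / np.where(zero, 1.0, h)))  # H'': 1/h_i elsewhere
    Binv = np.linalg.inv(B)
    Q = np.real(U @ Hp @ np.linalg.inv(U) @ Binv)
    Qtil = np.real(U @ Hpp @ np.linalg.inv(U) @ Binv)
    return Q, Qtil
```

A useful sanity check follows from the definitions: C˜Q + Q = B^{−1}, and Q agrees with Q_t for large finite t.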
The common practice of sparse modeling concentrates on creating sophisticated data models, and then relies on computational and analytic techniques that are totally agnostic of the data structure. Such a discrepancy hides a (possibly dramatic) potential for computational improvement [11]. From the perspective of the pursuit process, the minimization of (1) is merely a proxy for obtaining a highly non-linear map between the data vector x and the representation vector y (which can also be the "feature" vector ΩDy or the reconstructed data vector Dy, depending on the application). Adopting ADMM, such a map can be expressed by unrolling a sufficient number K of iterations into a feed-forward network comprising K identical layers, depicted in Figure 1, where the parameters A, B, U, V, and t, collectively denoted as Ψ, are prescribed by the ADMM iteration. Fixing K, we obtain a fixed-complexity, fixed-latency encoder ˆy_{K,Ψ}(x), parameterized by Ψ. Note that for a sufficiently large K, ˆy_{K,Ψ}(x) ≈ y∗(x), the latter denoting the exact minimizer of (1) given the input x. However, when complexity budget constraints require K to be truncated at a small fixed number, the output of ˆy_{K,Ψ} is usually unsatisfactory, and the worst-case analysis provided by classical optimization theory is of little use. Nevertheless, within the family of functions {ˆy_{K,Ψ} : Ψ}, there might exist parameters for which ˆy performs better on relevant input data. Such parameters can be obtained via learning, as described in the sequel. Similar ideas were first advocated by [11], who considered Lasso sparse synthesis models and showed that, by unrolling iterative shrinkage-thresholding algorithms (ISTA) into a neural network and learning a new set of parameters, approximate solutions to the pursuit problem could be obtained at a fraction of the cost of the exact solution, provided the inputs were restricted to data coming from a distribution similar to that used at training.
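A minimal sketch of the fixed-depth encoder ˆy_{K,Ψ}: here each layer is written directly as one truncated-ADMM iteration, with Ψ initialized from the model parameters. This is a simplification of the exact layer parameterization of Figure 1; names and shapes are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unrolled_encoder(x, Psi, K):
    """Fixed-depth encoder y_hat_{K,Psi}(x): K unrolled ADMM iterations.
    Psi = (S, W, Omega, tau); at initialization S and W are the matrices
    prescribed by the ADMM y-update (after training they may deviate
    from any valid ADMM iteration)."""
    S, W, Omega, tau = Psi
    r = Omega.shape[0]
    z = np.zeros(r)
    u = np.zeros(r)
    for _ in range(K):
        y = S @ x + W @ (z - u)                  # affine "layer"
        z = soft_threshold(Omega @ y + u, tau)   # shrinkage non-linearity
        u = u + Omega @ y - z
    return y

def admm_init(M1, M2, Omega, lam1, lam2, rho=1.0):
    # Initialization of Psi prescribed by ADMM (cf. the caption of Figure 1)
    k = M2.shape[1]
    Ainv = np.linalg.inv(M2.T @ M2 + rho * Omega.T @ Omega + lam2 * np.eye(k))
    return (Ainv @ M2.T @ M1, rho * Ainv @ Omega.T, Omega, lam1 / rho)
```

With the ADMM initialization and K large enough, the encoder output approaches the exact minimizer; learning then tunes Ψ so that a small K already performs well on data of interest.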
This approach was later extended to more elaborate structured sparse and low-rank models, with applications in audio separation and denoising [20]. Ours is the first attempt to extend it to sparse analysis and mixed analysis-synthesis models. The learning of the fast encoder is performed by plugging it into the training problem (4) in place of the exact encoder. The minimization of a loss function ℓ(Ψ) with respect to Ψ requires the computation of the (sub)gradients dℓ(y)/dΨ, which is achieved by the back-propagation procedure (essentially, an iterated application of the chain rule). Back-propagation starts by differentiating ℓ(Ψ) with respect to the output of the last network layer, and propagates the (sub)gradients down to the input layer, multiplying them by the Jacobian matrices of the traversed layers. For completeness, we summarize the procedure in the supplementary materials. There is no principled way of choosing the number of layers K; in practice this is done via cross-validation. In Section 5 we discuss the selection of K for a particular example. In the particular setting of a shift-invariant analysis model, the described neural network encoder assumes a structure resembling that of a convolutional network. The matrices A, B, U, and V parameterizing the network in Figure 1 are replaced by sets of filter coefficients. The initial inverse kernels of the form (ρΩᵀΩ + (1 + λ2)I)^{−1} prescribed by ADMM are approximated by finite-support filters, computed using a standard least-squares procedure.

5 Experimental results and discussion

In what follows, we illustrate the proposed approaches on two experiments: single-image super-resolution (demonstrating a reconstruction problem) and polyphonic music transcription (demonstrating a classification problem). Additional figures are provided in the supplementary materials.

Single-image super-resolution.
Single-image super-resolution is an inverse problem in which a high-resolution image is reconstructed from its blurred and down-sampled version lacking the high-frequency details. Low-resolution images were created by blurring the original ones with an anti-aliasing filter, followed by a down-sampling operator. In [25], it has been demonstrated that pre-filtering a high-resolution image with a Gaussian kernel with σ = 0.8s guarantees that the subsequent s × s sub-sampling generates an almost aliasing-free low-resolution image. This models practical image decimation schemes very well, since allowing a certain amount of aliasing improves visual perception. Super-resolution consists in inverting the blurring and sub-sampling together, as a compound operator. Since the amount of aliasing is limited, bi-cubic spline interpolation is more accurate than lower-order interpolations for restoring the images to their original size. As shown in [26], up-sampling the low-resolution image in this way produces an image that is very close to its pre-filtered high-resolution counterpart. The problem then reduces to deconvolution with a Gaussian kernel. In all our experiments we used the scaling factor s = 2. A shift-invariant analysis model was tested in three configurations: a TV prior created using horizontal and vertical derivative filters; a bank of 48 non-constant 7 × 7 DCT filters (referred to henceforth as A-DCT); and a combination of the former two settings tuned using the proposed supervised scheme with the loss function (5). The training set consisted of random image patches from [24]. We also tested a convolutional neural network approximation of the third model, trained under similar conditions. The pursuit problem was solved using ADMM with ρ = 1, requiring about 100 iterations to converge. Table 1 reports the obtained PSNR results on seven standard images used in super-resolution experiments. Visual results are shown in the supplementary materials.
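The degradation pipeline used to synthesize the low-resolution inputs can be sketched as follows; this is a pure-numpy separable Gaussian blur (kernel radius and the zero-padded 'same' convolution are implementation assumptions), and the bicubic upsampling back to the original grid is delegated to an image library and omitted here.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1
    if radius is None:
        radius = int(np.ceil(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def degrade(img, s=2):
    """Blur with a Gaussian of sigma = 0.8*s (the anti-aliasing choice
    of [25]) and subsample by a factor s in each dimension."""
    k = gaussian_kernel(0.8 * s)
    # Separable blur: filter each row, then each column ('same' keeps size)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred[::s, ::s]
```

Because the kernel is normalized, flat image regions are preserved away from the zero-padded borders.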
We observe that, on average, the supervised model outperforms A-DCT and TV by 1–3 dB PSNR. While performing slightly worse than the exact supervised model, the neural network approximation is about ten times faster.

Table 1: PSNR in dB of different image super-resolution methods: bicubic interpolation (Bicubic), shift-invariant analysis models with TV and DCT priors (TV and A-DCT), the supervised shift-invariant analysis model (SI-ADMM), and its fast approximation with K = 10 layers (SI-NN).

method          mean ± std. dev.   man    woman  barbara  boats  lena   house  peppers
Bicubic         29.51 ± 4.39       28.52  38.22  24.02    27.38  30.77  29.75  27.95
TV              29.04 ± 3.51       30.23  33.39  24.25    29.44  31.75  29.91  24.31
A-DCT           31.06 ± 4.84       29.85  40.23  24.32    28.89  32.72  31.68  29.71
SI-ADMM         32.03 ± 4.84       31.05  40.62  24.55    30.06  34.06  32.91  30.93
SI-NN (K = 10)  31.53 ± 5.03       30.42  40.99  24.53    29.12  33.58  31.82  30.21

Automatic polyphonic music transcription. The goal of automatic music transcription is to obtain a musical score from an input audio signal. This task is particularly difficult when the audio signal is polyphonic, i.e., contains multiple pitches sounding simultaneously. Like the majority of music and speech analysis techniques, music transcription typically operates on the magnitude of an audio time-frequency representation such as the short-time Fourier transform or the constant-Q transform (CQT) [7] (adopted here). Given a spectral frame x at some time, the transcription problem consists of producing a binary label vector p ∈ {−1, +1}^k, whose i-th element indicates the presence (+1) or absence (−1) of the i-th pitch at that time. We use k = 88, corresponding to the span of the standard piano keyboard (MIDI pitches 21–108). We used an analysis model with a square dictionary Ω and a square metric matrix M1 = M2 to produce the feature vector z = Ωy, which was then fed to a classifier of the form p = sign(Wz + b). The parameters Ω, M2, W, and b were trained using the logistic loss on the MAPS Disklavier dataset [10], containing examples of polyphonic piano recordings with time-aligned ground truth. Testing was performed on another annotated real piano dataset from [18]. Transcription was performed frame by frame, and the output of the classifier was temporally filtered using the hidden Markov model proposed in [3]. For comparison, we show the performance of a supervised non-negative synthesis model and of two leading methods [3, 18] evaluated in the same settings. Performance was measured using the standard precision-recall curve depicted in Figure 2 (right); in addition, we used the accuracy measure Acc = TP/(FP + FN + TP), where TP (true positives) is the number of correctly predicted pitches, and FP (false positives) and FN (false negatives) are the numbers of pitches incorrectly transcribed as ON or OFF, respectively. This measure is frequently used in the music analysis literature [3, 18]. The supervised analysis model outperforms leading pitch transcription methods. Figure 2 (left) shows that replacing the exact ADMM solver by the fast approximation described in Section 4 achieves comparable performance at significantly lower complexity.

Figure 2: Left: Accuracy of the proposed analysis model (Analysis-ADMM) and its fast approximation (Analysis-NN) as a function of the number of iterations or layers K. For reference, the accuracy of a non-negative synthesis model as well as of two leading methods [3, 18] is shown. Right: Precision-recall curve.
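The accuracy and precision/recall measures above can be computed from the ±1 label vectors as follows; the function name is illustrative.

```python
import numpy as np

def transcription_metrics(pred, truth):
    """Frame-level metrics for binary pitch labels in {-1, +1}.
    Acc = TP / (FP + FN + TP), the accuracy measure used in [3, 18]."""
    tp = np.sum((pred == 1) & (truth == 1))   # correctly predicted pitches
    fp = np.sum((pred == 1) & (truth == -1))  # incorrectly transcribed as ON
    fn = np.sum((pred == -1) & (truth == 1))  # incorrectly transcribed as OFF
    acc = tp / (fp + fn + tp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return acc, precision, recall
```

Note that Acc penalizes both false positives and false negatives in a single number, while the precision-recall curve keeps them separate.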
In this example, ten layers suffice for a good representation, and the improvement obtained by adding layers becomes very marginal beyond this point.

Conclusion. We presented a bilevel optimization framework for the supervised learning of a superset of sparse analysis and synthesis models. We also showed that, in applications requiring low complexity or latency, a fast approximation to the exact solution of the pursuit problem can be achieved by a feed-forward architecture derived from truncated ADMM. The obtained fast regressor can be initialized with the model parameters trained through the supervised bilevel framework, and tuned similarly to the training and adaptation of neural networks. We observed that the structure of the network becomes essentially that of a convolutional network in the case of shift-invariant models. The generative setting of the proposed approaches was demonstrated on an image restoration experiment, while the discriminative setting was tested in a polyphonic piano transcription experiment. In the former we obtained a very good and fast solution, while in the latter the results were comparable or superior to the state of the art.

References
[1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Sig. Proc., 54(11):4311–4322, 2006.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Img. Sci., 2:183–202, March 2009.
[3] E. Benetos and S. Dixon. Multiple-instrument polyphonic music transcription using a convolutive probabilistic model. In Sound and Music Computing Conference, pages 19–24, 2011.
[4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[5] H. Bischof, Y. Chen, and T. Pock. Learning ℓ1-based analysis and synthesis sparsity priors using bi-level optimization. NIPS workshop, 2012.
[6] M. M. Bronstein, A. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi.
Blind deconvolution of images using optimal sparse representations. IEEE Trans. Im. Proc., 14(6):726–736, 2005.
[7] J. C. Brown. Calculation of a constant Q spectral transform. The Journal of the Acoustical Society of America, 89:425, 1991.
[8] B. Colson, P. Marcotte, and G. Savard. An overview of bilevel optimization. Annals of Operations Research, 153(1):235–256, 2007.
[9] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Im. Proc., 15(12):3736–3745, 2006.
[10] V. Emiya, R. Badeau, and B. David. Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. IEEE Trans. Audio, Speech, and Language Proc., 18(6):1643–1654, 2010.
[11] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, pages 399–406, 2010.
[12] J. Mairal, F. Bach, and J. Ponce. Task-driven dictionary learning. IEEE Trans. PAMI, 34(4):791–804, 2012.
[13] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. Im. Proc., 17(1):53–69, 2008.
[14] S. Mallat. A Wavelet Tour of Signal Processing, Second Edition. Academic Press, 1999.
[15] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE, Catholic University of Louvain, Louvain-la-Neuve, Belgium, 2007.
[16] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[17] G. Peyré and J. Fadili. Learning analysis sparsity priors. SAMPTA'11, 2011.
[18] G. E. Poliner and D. Ellis. A discriminative model for polyphonic piano transcription. EURASIP J. Adv. in Sig. Proc., 2007, 2006.
[19] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60(1-4):259–268, 1992.
[20] P. Sprechmann, A. M. Bronstein, and G. Sapiro. Learning efficient sparse and low rank models. arXiv preprint arXiv:1212.3631, 2012.
[21] R. Tibshirani. Regression shrinkage and selection via the LASSO. J. Royal Stat. Society: Series B, 58(1):267–288, 1996.
[22] R. J. Tibshirani. The solution path of the generalized lasso. Stanford University, 2011.
[23] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. Robust sparse analysis regularization. IEEE Trans. Information Theory, 59(4):2001–2016, 2013.
[24] J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution as sparse representation of raw image patches. In Proc. CVPR, pages 1–8. IEEE, 2008.
[25] G. Yu and J.-M. Morel. On the consistency of the SIFT method. Inverse Problems and Imaging, 2009.
[26] G. Yu, G. Sapiro, and S. Mallat. Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity. IEEE Trans. Im. Proc., 21(5):2481–2499, 2012.
Reinforcement Learning in Robust Markov Decision Processes

Shiau Hong Lim
Department of Mechanical Engineering, National University of Singapore
mpelsh@nus.edu.sg

Huan Xu
Department of Mechanical Engineering, National University of Singapore
mpexuh@nus.edu.sg

Shie Mannor
Department of Electrical Engineering, Technion, Israel
shie@ee.technion.ac.il

Abstract

An important challenge in Markov decision processes is to ensure robustness with respect to unexpected or adversarial system behavior while taking advantage of well-behaving parts of the system. We consider a problem setting where some unknown parts of the state space can have arbitrary transitions while other parts are purely stochastic. We devise an algorithm that is adaptive to potentially adversarial behavior and show that it achieves similar regret bounds as the purely stochastic case.

1 Introduction

Markov decision processes (MDPs) [Puterman, 1994] have been widely used to model and solve sequential decision problems in stochastic environments. Given the parameters of an MDP, namely the rewards and transition probabilities, an optimal policy can be computed. In practice, these parameters are often estimated from noisy data and, furthermore, they may change during the execution of a policy. Hence, the performance of the chosen policy may deteriorate significantly; see [Mannor et al., 2007] for numerical experiments. The robust MDP framework has been proposed to address this issue of parameter uncertainty (e.g., [Nilim and El Ghaoui, 2005] and [Iyengar, 2005]). The robust MDP setting assumes that the true parameters fall within some uncertainty set U and seeks a policy that performs best under the worst realization of the parameters. These solutions, however, can be overly conservative since they are based on the worst-case realization.
Variants of robust MDP formulations have been proposed to mitigate this conservativeness when additional information on the parameter distribution [Strens, 2000, Xu and Mannor, 2012] or on coupling among the parameters [Mannor et al., 2012] is known. A major drawback of previous work on robust MDPs is that it focused on the planning problem, with no effort to learn the uncertainty. Since in practice it is often difficult to accurately quantify the uncertainty, the solutions to the robust MDP can be conservative if an overly large uncertainty set is used. In this work, we make the first attempt to perform learning in robust MDPs. We assume that some of the state-action pairs are adversarial, in the sense that their parameters can change arbitrarily within U from one step to another. Others, however, are benign, in the sense that they are fixed and behave purely stochastically. The learner is given only the uncertainty set U and knows neither the parameters nor the true nature of each state-action pair. In this setting, a traditional robust MDP approach would be equivalent to assuming that all parameters are adversarial and would therefore always execute the minimax policy. This is too conservative, since it could be the case that most of the parameters are stochastic. Alternatively, one could use an existing online learning algorithm such as UCRL2 [Jaksch et al., 2010] and assume that all parameters are stochastic. This, as we show in the next section, may lead to suboptimal performance when some of the states are adversarial. Instead, we propose an online learning approach to robust MDPs. We show that the cumulative reward obtained by this method is as good as that of the minimax policy that knows the true nature of each state-action pair. This means that by incorporating learning in robust MDPs, we can effectively resolve the "conservativeness due to not knowing the uncertainty" effect. The rest of the paper is structured as follows.
Section 2 discusses the key difficulties in our setting and explains why existing solutions are not applicable. In subsequent sections, we present our algorithm, its theoretical performance bound, and its analysis. Sections 3 and 4 cover the finite-horizon case, while Section 5 deals with the infinite-horizon case. We present some experimental results in Section 6 and conclude in Section 7.

2 Problem setting

We consider an MDP M with a finite state space S and a finite action space A. Let S = |S| and A = |A|. Executing action a in state s results in a random transition according to a distribution ps,a(·), where ps,a(s′) gives the probability of transitioning to state s′, and accumulates an immediate reward r(s, a). A robust MDP considers the case where the transition probability is determined in an adversarial way. That is, when action a is taken at state s, the transition probability ps,a(·) can be an arbitrary element of the uncertainty set U(s, a). In particular, for different visits of the same (s, a), the realization of ps,a can be different, possibly depending on the history. This can model cases where the system dynamics are influenced by competitors or exogenous factors that are hard to model, or where the MDP is a simplification of a complicated dynamic system. Previous research in robust MDPs focused exclusively on the planning problem. Here, the power of the adversary (the uncertainty set of the parameters) is precisely known, and the goal is to find the minimax policy: the policy with the best performance under the worst admissible parameters. This paper considers the learning problem of robust MDPs. We ask the following question: suppose the power of the adversary (the extent to which it can affect the system) is not completely revealed to the decision maker; if we are allowed to play the MDP many times, can we still obtain an optimal policy as if we knew the true extent of its power?
Or, put another way, can we develop a procedure that provides the exact amount of protection against the unknown adversary? Our specific setup is as follows: for each (s, a) ∈ S × A an uncertainty set U(s, a) is given. However, not all states are adversarial. Only a subset F ⊂ S × A is truly adversarial, while all the other state-action pairs behave purely stochastically, i.e., with a fixed unknown ps,a. Moreover, the set F is not known to the algorithm. This setting differs from existing setups, and is challenging for the following reasons:
1. The adversarial actions ps,a are not directly observable.
2. The adversarial behavior is not constrained, except that it must belong to the uncertainty set.
3. Ignoring the adversarial component results in sub-optimal behavior.
The first challenge precludes the use of algorithms based on stochastic games such as R-Max [Brafman and Tennenholtz, 2002]. The R-Max algorithm deals with stochastic games where the opponent's action set for each state is known and the opponent's actions are always observable. In our setting, only the outcome (i.e., the next state and the reward) of each transition is observable. The algorithm does not observe the action ps,a taken by the adversary. Indeed, because the set F is unknown, even the action set of the adversary is unknown to the algorithm. The second challenge is due to the unconstrained adversarial behavior. For state-action pairs (s, a) ∈ F, the opponent is free to choose any ps,a ∈ U(s, a) for each transition, possibly depending on the history and the strategy of the decision maker (i.e., it is non-oblivious). This affects the sort of performance guarantee one can reasonably expect from any algorithm. In particular, when considering the regret against the best stationary policy "in hindsight", [Yu and Mannor, 2009] show that a small change in transition probabilities can cause large regret.
Even with additional constraints on the allowed adversarial behavior, they showed that the regret bound still does not vanish with respect to the number of steps. Indeed, most results for adversarial MDPs [Even-Dar et al., 2005, Even-Dar et al., 2009, Yu et al., 2009, Neu et al., 2010, Neu et al., 2012] only deal with adversarial rewards while the transitions are assumed stochastic and fixed, which is considerably simpler than our setting. Since it is not possible to achieve vanishing regret against the best stationary policy in hindsight, we choose to measure the regret against the performance of a minimax policy that knows exactly which state-action pairs are adversarial (i.e., the set F) as well as the true ps,a for all stochastic state-action pairs. Intuitively, this means that if the adversary chooses to play "nicely", we are not required to exploit this. Finally, given that we are competing against the minimax policy, one might ask whether we could simply apply existing algorithms such as UCRL2 [Jaksch et al., 2010] and treat every state-action pair as stochastic. The following example shows that ignoring any adversarial behavior may lead to large regret compared to the minimax policy.

Figure 1: Example MDP with adversarial transitions.

Consider the MDP in Figure 1. Suppose that a UCRL2-like algorithm is used, where all transitions are assumed purely stochastic. There are 3 alternative policies, each corresponding to choosing action a1, a2 or a3, respectively, in state s0. Action a1 leads to the optimal minimax average reward g∗. State s2 leads to an average reward of g∗ + β for some β > 0. State s1 has adversarial transitions, where both s2 and s4 are possible next states. State s4 behaves similarly: it may lead either to g∗ + β or to a "bad" region with average reward g∗ − α, for some 2β < α < 3β. We consider two phases. In phase 1, the adversary behaves "benignly" by choosing all solid-line transitions.
Since both a2 and a3 lead to similar outcomes, we assume that in phase 1 both a2 and a3 are chosen for T steps each. In phase 2, the adversary chooses the dashed-line transitions in both s1 and s4. Because a2 and a3 have similar values (both g∗ + β > g∗), we can assume that a2 is always chosen in phase 2 (if a3 is ever chosen in phase 2, its value will quickly drop below that of a2). Suppose that a2 also runs for T steps in phase 2. A little algebra (see the supplementary material for details) shows that at the end of phase 2 the expected value of s4 (from the learner's point of view) is g4 = g∗ + (β − α)/2, and therefore the expected value of s1 is g1 = g∗ + (3β − α)/4 > g∗. The total accumulated reward over both phases is, however, 3Tg∗ + T(2β − α). Let c = α − 2β > 0. This means that the overall total regret is cT, which is linear in T. Note that in the above example, the expected value of a2 remains greater than the minimax value g∗ throughout phase 2, and therefore the algorithm will continue to prefer a2, even though the actual accumulated average value is already way below g∗. The reason is that the Markov property, which is crucial for UCRL2-like algorithms to work, has been violated: s1 and s4 behave in a non-independent way caused by the adversary.

3 Algorithm and main result

In this section, we present our algorithm and the main result for the finite-horizon case with the total reward as the performance measure. Section 5 provides the corresponding algorithm and result for the infinite-horizon average-reward case. For simplicity, we assume without loss of generality a deterministic and known reward function r(s, a). We also assume that rewards are bounded such that r(s, a) ∈ [0, 1]. It is straightforward, by introducing additional states, to extend the algorithm and analysis to the case where the reward function is random, unknown, or even adversarial.
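Before proceeding, the algebra of the motivating example above can be double-checked numerically; the concrete values of g∗, β, and α below are arbitrary illustrations satisfying 2β < α < 3β.

```python
# Numerical check of the example's algebra (g_star, beta, alpha are
# illustrative values with 2*beta < alpha < 3*beta).
g_star, beta, alpha = 1.0, 0.2, 0.5
T = 1000

# Phase 1: a2 and a3 each run T steps through the benign transitions,
# collecting g* + beta per step. Phase 2: a2 runs T more steps, but the
# adversary routes s1 -> s4 -> bad region worth g* - alpha per step.
total = 2 * T * (g_star + beta) + T * (g_star - alpha)
assert abs(total - (3 * T * g_star + T * (2 * beta - alpha))) < 1e-9

# Learner's empirical estimates: s4 averages its two observed outcomes,
# and s1 averages the value of s2 with the estimate of s4.
g4 = ((g_star + beta) + (g_star - alpha)) / 2
g1 = ((g_star + beta) + g4) / 2
assert abs(g4 - (g_star + (beta - alpha) / 2)) < 1e-9
assert abs(g1 - (g_star + (3 * beta - alpha) / 4)) < 1e-9

# Regret against the minimax policy (value g* per step over 3T steps)
c = alpha - 2 * beta
assert abs((3 * T * g_star - total) - c * T) < 1e-9
```

With α < 3β the learner's estimate g1 stays above g∗, so a2 keeps being preferred even while the realized regret grows linearly at rate c = α − 2β.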
In the finite horizon case, we consider an episodic setting where each episode has a fixed and known length T. The algorithm starts at a (possibly random) state s0 and executes T stages. After that, a new episode begins, with an arbitrarily chosen start state (it can simply be the last state of the previous episode). This goes on indefinitely. Let π be a finite-horizon (non-stationary) policy where πt(s) gives the action to be executed in state s at step t in an episode, where t = 0, . . . , (T −1). Let Pt be a particular choice of ps,a ∈U(s, a) for every (s, a) ∈F at step t. For each t = 0, . . . , (T −1), we define V π t (s) = min Pt,...,PT −2 EPt,...,PT −2 T −1 X t′=t r(st′, πt′(st′)) and V ∗ t (s) = max π V π t (s), where st = s and st+1, . . . , sT −1 are random variables due to the random transitions. We assume that U is such that the minimum above exists (e.g., compact set). It is not hard to show that given state s, there exists a policy π with V π 0 (s) = V ∗ 0 (s) and we can compute such a minimax policy if the algorithm knows F and ps,a for all (s, a) /∈F, from literature of robust MDP (e.g., [Nilim and El Ghaoui, 2005] and [Iyengar, 2005]). The main message of this paper is that we can determine a policy as good as the minimax policy without knowing either F or ps,a for (s, a) /∈F. To make this formal, we define the regret (against the minimax performance) in episode i, for i = 1, 2, . . . as ∆i = V ∗ 0 (si 0) − T −1 X t=0 r(si t, ai t), where si t and ai t denote the actual state visited and action taken at step t of episode i.1 The total regret for m episodes, which we want to minimize, is thus defined as ∆(m) = m X i=1 ∆i. The main algorithm is given in Figure 2. OLRM is basically UCRL2 [Jaksch et al., 2010] with an additional stochastic check to detect adversarial state-action pairs. Like UCRL2, the algorithm employs the “optimism under uncertainty” principle. We start by assuming that all states are stochastic. 
If the adversary plays “nicely”, nothing else would have to be done. The key challenge, however, is to successfully identify the adversarial state-action pairs when they start to behave maliciously. A similar scenario in the multi-armed bandit setting has been addressed by [Bubeck and Slivkins, 2012]. They show that it is possible to achieve near-optimal regret without knowing a priori whether a bandit is stochastic or adversarial. In [Bubeck and Slivkins, 2012], the key is to check some consistency conditions that would be satisfied if the behavior is stochastic. We use the same strategy and the question is then, which condition? We discuss this in section 3.2. Note that the index k = 1, 2, . . . tracks the number of policies. A policy is executed until either a new pair (s, a) fails the stochastic check, and hence deemed to be adversarial, or some state-action pair has been executed too many times. In either case, we need to re-compute the current optimistic policy (see Section 3.1 for the detail). Every time a new policy is computed we call it a new epoch. While each episode has the same length (T), each epoch can span multiple episodes, and an epoch can begin in the middle of an episode. 3.1 Computing an optimistic policy Figure 3 shows the algorithm for computing the optimistic minimax policy, where we treat all stateaction pairs in the set F as adversarial, and (similar to UCRL2) use optimistic values for other state-action pairs. 1We provide high-probability regret bounds for any single trial, from which the expected regret can be readily derived, if desired. 4 Input: S, A, T, δ, and for each (s, a), U(s, a) 1. Initialize the set F ←{}. 2. Initialize k ←1. 3. Compute an optimistic policy ˜π, assuming all state-action pairs in F are adversarial (Section 3.1). 4. Execute ˜π until one of the followings happen: • The execution count of some state-action (s, a) has been doubled. • The executed state-action pair (s, a) fails the stochastic check (Section 3.2). 
In this case (s, a) is added to F. 5. Increment k. Go back to step 3.

Figure 2: The OLRM algorithm

Here, to simplify notation, we frequently use V(·) to mean the vector whose elements are V(s) for each s ∈ S. This applies to value functions as well as to probability distributions over S. In particular, we use p(·)V(·) to mean the dot product between two such vectors, i.e. \sum_s p(s)V(s). We use N_k(s, a) to denote the total number of times the state-action pair (s, a) has been executed before epoch k. The corresponding empirical next-state distribution based on these transitions is denoted \hat{P}_k(·|s, a). If (s, a) has never been executed before epoch k, we define N_k(s, a) = 1 and let \hat{P}_k(·|s, a) be arbitrary.

Input: S, A, T, δ, F, k, and for each (s, a): U(s, a), \hat{P}_k(·|s, a) and N_k(s, a).
1. Set \tilde{V}^k_{T-1}(s) = \max_a r(s, a) for all s.
2. Repeat, for t = T−2, ..., 0:
   • For each (s, a) ∈ F, set
     \tilde{Q}^k_t(s, a) = \min\{ T − t,\ \min_{p ∈ U(s,a)} [ r(s, a) + p(·)\tilde{V}^k_{t+1}(·) ] \}.
   • For each (s, a) ∉ F, set
     \tilde{Q}^k_t(s, a) = \min\{ T − t,\ r(s, a) + \hat{P}_k(·|s, a)\tilde{V}^k_{t+1}(·) + T\sqrt{ \frac{2}{N_k(s, a)} \log\frac{2SATk^2}{δ} } \}.
   • For each s, set \tilde{V}^k_t(s) = \max_a \tilde{Q}^k_t(s, a) and \tilde{π}_t(s) = \arg\max_a \tilde{Q}^k_t(s, a).
3. Output \tilde{π}.

Figure 3: Algorithm for computing an optimistic minimax policy.

3.2 Stochasticity check

Every time a state-action pair (s, a) ∉ F is executed, the outcome is recorded and subjected to a "stochasticity check". Let n be the total number of times (s, a) has been executed (including the latest one), and let s'_1, ..., s'_n be the next-states of these transitions. Let k_1, ..., k_n be the epochs in which these transitions happen, and let t_1, ..., t_n be the steps within the episodes (i.e. episode stages) at which they happen. Let τ be the total number of steps executed by the algorithm (from the beginning) so far. The stochastic check fails if:

  \sum_{j=1}^n \hat{P}_{k_j}(·|s, a)\tilde{V}^{k_j}_{t_j+1}(·) − \sum_{j=1}^n \tilde{V}^{k_j}_{t_j+1}(s'_j) > 5T\sqrt{ nS \log\frac{4SATτ^2}{δ} }.
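As a concrete illustration, the check can be implemented by keeping, for each executed pair (s, a) ∉ F, the record of the empirical model and optimistic values in force at each execution. The sketch below is a minimal, hypothetical rendering; the function name and the data layout of `records` are our assumptions, not the paper's:

```python
import math

def stochastic_check(records, T, S, A, tau, delta):
    """Return True if the state-action pair still looks stochastic.

    `records` is a list of tuples (P_hat, V_next, s_next) for every past
    execution of the pair: the empirical next-state distribution and the
    optimistic value vector V~^{k_j}_{t_j+1} in force at that time, plus the
    realised next state s'_j.  (This layout is our assumption.)
    """
    n = len(records)
    # Sum of predicted next-state values under the empirical models.
    predicted = sum(sum(P_hat[s] * V_next[s] for s in range(len(V_next)))
                    for P_hat, V_next, _ in records)
    # Sum of values actually realised by the observed next states.
    realised = sum(V_next[s_next] for _, V_next, s_next in records)
    threshold = 5 * T * math.sqrt(n * S * math.log(4 * S * A * T * tau**2 / delta))
    # The check fails (the pair is declared adversarial) when realised values
    # fall too far below what the empirical model predicted.
    return predicted - realised <= threshold
```

For a truly stochastic pair the difference `predicted - realised` is a mean-zero martingale, so by Azuma-Hoeffding the check passes with high probability (this is the content of Lemma 2 below).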
The stochastic check follows the adage "if it ain't broke, don't fix it": it tests whether the value of the actual transitions from (s, a) falls below what is expected from the parameter estimates. One can show that with high probability, all stochastic state-action pairs will always pass the stochastic check. Now consider an adversarial (s, a) pair: if the adversary plays "nicely", the current policy accumulates satisfactory reward and hence nothing needs to be changed, even if the transitions themselves fail to "look" stochastic; if the adversary plays "nasty", then the stochastic check will detect it, and subsequently protect against it.

3.3 Main result

The following theorem summarizes the performance of OLRM. Here and in the sequel, we write Õ(·) to suppress logarithmic factors. Our result for the infinite-horizon case is similar (see Section 5).

Theorem 1. Given δ, T, S, A, the total regret of OLRM is ∆(m) ≤ Õ(ST^{3/2}\sqrt{Am}) for all m, with probability at least 1 − δ.

Note that the above is with respect to the total number of episodes m. Since the total number of steps is τ = mT, the regret bound in terms of τ is therefore Õ(ST\sqrt{Aτ}). This gives the familiar √τ regret as in UCRL2. Also, the bound has the same dependencies on S and A as in UCRL2. The horizon length T plays the role that the "diameter" plays in the infinite-horizon case, and again it has the same dependency as its counterpart in UCRL2. The result shows that even though the algorithm deals with unknown stochastic and potentially adversarial states, it achieves the same regret bound as in the fully stochastic case. When all states are in fact stochastic, this recovers the UCRL2 result.

4 Analysis of OLRM

We briefly explain the roadmap of the proof of Theorem 1. The complete proof can be found in the supplementary material. Our proof starts with the following technical lemma.

Lemma 1. The following holds for all state-action pairs (s, a) ∉ F and for t = 0, . . .
, (T −1) in all epochs k ≥1, with probability at least 1 −δ: ˆPk(·|s, a) ˜V k t+1(·) −ps,a(·) ˜V k t+1(·) ≤T s 2S Nk(s, a) log 4SATk2 δ . Proof sketch. Since (s, a) /∈F is stochastic, we apply the bound from [Weissman et al., 2003] for the 1-norm deviation between ˆPk(·|s, a) and ps,a. The bound follows from ∥˜V k t+1(·)∥∞≤T. Using Lemma 1, we show the following lemma that with high probability, all purely stochastic state-action pairs will always pass the stochastic check. Lemma 2. The probability that any state-action pair (s, a) /∈F gets added into set F while running the algorithm is at most 2δ. Proof sketch. Each (s, a) /∈F is purely stochastic. Suppose (s, a) has been executed n times and s′ 1, . . . , s′ n are the next-states for these transitions. Recall that the check fails if n X j=1 ˆPkj(·|s, a) ˜V kj tj+1(·) − n X j=1 ˜V kj tj+1(s′ j) > 5T r nS log 4SATτ 2 δ . We can derive a high-probability bound that satisfies the stochastic check by applying the AzumaHoeffding inequality on the martingale difference sequence Xj = ps,a(·) ˜V kj tj+1(·) −˜V kj tj+1(s′ j) followed by an application of Lemma 1. 6 We then show that all value estimates ˜V k t are always optimistic. Lemma 3. With probability at least 1 −δ, and assume that no state-action pairs (s, a) /∈F have been added to F, the following holds for every state s ∈S, every t ∈{0, . . . , T −1} and every k ≥1: ˜V k t (s) ≥V ∗ t (s). Proof sketch. The key challenge is to prove that state-actions in F (adversarial) that have not been identified (i.e. all past transitions passed the test) would have optimistic ˜Q values. This can be done by, again, applying the Azuma-Hoeffding inequality. Equipped with the previous three lemmas, we are now able to establish Theorem 1. Proof sketch. Lemma 3 established that all value estimates ˜V k t are always optimistic. We can therefore bound the regret by bounding the difference between ˜V k t and the actual rewards received by the algorithm. 
The “optimistic gap” shrinks in an expected manner as the number of steps executed by the algorithm grows if all state-actions are stochastic. For an adversarial state-action (s, a) ∈F, we use the following facts to ensure the above: (i) If (s, a) has been added to F (i.e., it failed the stochastic check) then all policies afterwards would correctly evaluate its value; (ii) All transitions before (s, a) is added to F (if ever) must have passed the stochastic check and the check condition ensures that its behavior is consistent with what one would expect if (s, a) was stochastic. 5 Infinite horizon case In the infinite horizon case, let P be a particular choice of ps,a ∈U(s, a) for every (s, a) ∈F. Given a (stationary) policy π, its average undiscounted reward (or “gain”) is defined as follows: gπ P (s) = lim τ→∞ 1 τ EP " τ X t=1 r(si, π(si)) # where s1 = s. The limit always exists for finite MDPs [Puterman, 1994]. We make the assumption that regardless of the choice of P, the resulting MDP is communicating and unichain. 2 In this case gπ P (s) is a constant and independent of s so we can drop the argument s. We define the worst-case average reward of π over all possible P as gπ = minP gπ P . An optimal minimax policy π∗is any policy whose gain gπ∗= g∗= maxπ gπ. We define the regret after executing the MDP M for τ steps as ∆(τ) = τg∗− τ X t=1 r(st, at). The main algorithm for the infinite-horizon case, which we refer as OLRM2, is essentially identical to OLRM. The main difference is in computing the optimistic policy and the corresponding stochastic check. The detailed algorithm is presented in the supplementary material. The algorithms from [Tewari and Bartlett, 2007] can be used to compute an optimistic minimax policy. In particular, for each (s, a) ∈F, its transition function is chosen pessimistically from U(s, a). 
For each (s, a) ∉ F, its transition function is chosen optimistically from the following set:

  \{ p : \|p(·) − \hat{P}_k(·|s, a)\|_1 ≤ σ \},  where  σ = \sqrt{ \frac{2S}{N_k(s, a)} \log\frac{4SAk^2}{δ} }.

²In more general settings, such as communicating or weakly communicating MDPs, although the optimal policies (for a fixed P) always have constant gain, the optimal minimax policies (over all possible P) might have non-constant gain. Additional assumptions on U, as well as a slight change in the definition of the regret, are needed to deal with these cases. This is left for future research.

Let \tilde{P}_k(·|s, \tilde{π}_k(s)) be the minimax choice of transition functions for each s at which the minimax gain g^{\tilde{π}_k} is attained. The bias h^k can be obtained by solving the following system of equations for h(·) (see [Puterman, 1994]):

  ∀s ∈ S:  g^{\tilde{π}_k} + h(s) = r(s, \tilde{π}_k(s)) + \tilde{P}_k(·|s, \tilde{π}_k(s))h(·).   (1)

The stochastic check for the infinite-horizon case is mostly identical to the finite-horizon case, except that we replace T with the maximal span \tilde{H} of the bias, defined as follows:

  \tilde{H} = \max_{k ∈ \{k_1,...,k_n\}} ( \max_s h^k(s) − \min_s h^k(s) ).

The stochastic check fails if:

  \sum_{j=1}^n \tilde{P}_{k_j}(·|s, a)h^{k_j}(·) − \sum_{j=1}^n h^{k_j}(s'_j) > 5\tilde{H}\sqrt{ nS \log\frac{4SAτ^2}{δ} }.

Let H be the maximal span of the bias of any optimal minimax policy. The following summarizes the performance of OLRM2. The proof, deferred to the supplementary material, is similar to that of Theorem 1.

Theorem 2. Given δ, S, A, the total regret of OLRM2 is ∆(τ) ≤ Õ(SH\sqrt{Aτ}) for all τ, with probability at least 1 − δ.

6 Experiment

[Figure 4: Total accumulated rewards over time for OLRM2, UCRL2, a standard robust MDP, and the optimal minimax policy. The vertical line marks the start of the "breakdown".]

We run both our algorithm and UCRL2 on the example MDP in Figure 1 for the infinite-horizon case. Figure 4 shows the result for g∗ = 0.18, β = 0.07 and α = 0.17.
It shows that UCRL2 accumulates smaller total rewards than the optimal minimax policy while our algorithm actually accumulates larger total rewards than the minimax policy. We also include the result for a standard robust MDP that treats all state-action pairs as adversarial and therefore performs poorly. Additional details are provided in the supplementary material. 7 Conclusion We presented an algorithm for online learning of robust MDPs with unknown parameters, some can be adversarial. We show that it achieves similar regret bound as in the fully stochastic case. A natural extension is to allow the learning of the uncertainty sets in adversarial states, where the true uncertainty set is unknown. Our preliminary results show that very similar regret bounds can be obtained for learning from a class of nested uncertainty sets. Acknowledgments This work is partially supported by the Ministry of Education of Singapore through AcRF Tier Two grant R-265-000-443-112 and NUS startup grant R-265-000-384-133. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ ERC Grant Agreement n.306638. 8 References [Brafman and Tennenholtz, 2002] Brafman, R. I. and Tennenholtz, M. (2002). R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231. [Bubeck and Slivkins, 2012] Bubeck, S. and Slivkins, A. (2012). The best of both worlds: Stochastic and adversarial bandits. Journal of Machine Learning Research - Proceedings Track, 23:42.1– 42.23. [Even-Dar et al., 2005] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2005). Experts in a markov decision process. In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17, pages 401–408. MIT Press, Cambridge, MA. [Even-Dar et al., 2009] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2009). 
Online markov decision processes. Math. Oper. Res., 34(3):726–736. [Iyengar, 2005] Iyengar, G. N. (2005). Robust dynamic programming. Math. Oper. Res., 30(2):257– 280. [Jaksch et al., 2010] Jaksch, T., Ortner, R., and Auer, P. (2010). Near-optimal regret bounds for reinforcement learning. J. Mach. Learn. Res., 99:1563–1600. [Mannor et al., 2012] Mannor, S., Mebel, O., and Xu, H. (2012). Lightning does not strike twice: Robust mdps with coupled uncertainty. In ICML. [Mannor et al., 2007] Mannor, S., Simester, D., Sun, P., and Tsitsiklis, J. N. (2007). Bias and variance approximation in value function estimates. Manage. Sci., 53(2):308–322. [McDiarmid, 1989] McDiarmid, C. (1989). On the method of bounded differences. In Surveys in Combinatorics, number 141 in London Mathematical Society Lecture Note Series, pages 148– 188. Cambridge University Press. [Neu et al., 2012] Neu, G., Gy¨orgy, A., and Szepesv´ari, C. (2012). The adversarial stochastic shortest path problem with unknown transition probabilities. Journal of Machine Learning Research - Proceedings Track, 22:805–813. [Neu et al., 2010] Neu, G., Gy¨orgy, A., Szepesv´ari, C., and Antos, A. (2010). Online markov decision processes under bandit feedback. In NIPS, pages 1804–1812. [Nilim and El Ghaoui, 2005] Nilim, A. and El Ghaoui, L. (2005). Robust control of markov decision processes with uncertain transition matrices. Oper. Res., 53(5):780–798. [Puterman, 1994] Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience. [Strens, 2000] Strens, M. (2000). A bayesian framework for reinforcement learning. In In Proceedings of the Seventeenth International Conference on Machine Learning, pages 943–950. ICML. [Tewari and Bartlett, 2007] Tewari, A. and Bartlett, P. (2007). Bounded parameter markov decision processes with average reward criterion. Learning Theory, pages 263–277. 
[Weissman et al., 2003] Weissman, T., Ordentlich, E., Seroussi, G., Verdu, S., and Weinberger, M. J. (2003). Inequalities for the l1 deviation of the empirical distribution. Technical report, Information Theory Research Group, HP Laboratories. [Xu and Mannor, 2012] Xu, H. and Mannor, S. (2012). Distributionally robust markov decision processes. Math. Oper. Res., 37(2):288–300. [Yu and Mannor, 2009] Yu, J. Y. and Mannor, S. (2009). Arbitrarily modulated markov decision processes. In CDC, pages 2946–2953. [Yu et al., 2009] Yu, J. Y., Mannor, S., and Shimkin, N. (2009). Markov decision processes with arbitrary reward processes. Math. Oper. Res., 34(3):737–757. 9
The Pareto Regret Frontier Wouter M. Koolen Queensland University of Technology wouter.koolen@qut.edu.au Abstract Performance guarantees for online learning algorithms typically take the form of regret bounds, which express that the cumulative loss overhead compared to the best expert in hindsight is small. In the common case of large but structured expert sets we typically wish to keep the regret especially small compared to simple experts, at the cost of modest additional overhead compared to more complex others. We study which such regret trade-offs can be achieved, and how. We analyse regret w.r.t. each individual expert as a multi-objective criterion in the simple but fundamental case of absolute loss. We characterise the achievable and Pareto optimal trade-offs, and the corresponding optimal strategies for each sample size both exactly for each finite horizon and asymptotically. 1 Introduction One of the central problems studied in online learning is prediction with expert advice. In this task a learner is given access to K strategies, customarily referred to as experts. He needs to make a sequence of T decisions with the objective of performing as well as the best expert in hindsight. This goal can be achieved with modest overhead, called regret. Typical algorithms, e.g. Hedge [1] with learning rate η = p 8/T ln K, guarantee LT −Lk T ≤ p T/2 ln K for each expert k. (1) where LT and Lk T are the cumulative losses of the learner and expert k after all T rounds. Here we take a closer look at that right-hand side. For it is not always desirable to have a uniform regret bound w.r.t. all experts. Instead, we may want to single out a few special experts and demand to be really close to them, at the cost of increased overhead compared to the rest. When the number of experts K is large or infinite, such favouritism even seems unavoidable for non-trivial regret bounds. The typical proof of the regret bound (1) suggests that the following can be guaranteed as well. 
For each choice of probability distribution q on experts, there is an algorithm that guarantees LT −Lk T ≤ p T/2(−ln q(k)) for each expert k. (2) However, it is not immediately obvious how this can be achieved. For example, the Hedge learning rate η would need to be tuned differently for different experts. We are only aware of a single (complex) algorithm that achieves something along these lines [2]. On the flip side, it is also not obvious that this trade-off profile is optimal. In this paper we study the Pareto (achievable and non-dominated) regret trade-offs. Let us say that a candidate trade-off ⟨r1, . . . , rK⟩∈RK is T-realisable if there is an algorithm that guarantees LT −Lk T ≤rk for each expert k. Which trade-offs are realisable? Among them, which are optimal? And what is the strategy that witnesses these realisable strategies? 1 1.1 This paper We resolve the preceding questions for the simplest case of absolute loss, where K = 2. We first obtain an exact characterisation of the set of realisable trade-offs. We then construct for each realisable profile a witnessing strategy. We also give a randomised procedure for optimal play that extends the randomised procedures for balanced regret profiles from [3] and later [4, 5]. We then focus on the relation between priors and regret bounds, to see if the particular form (2) is achievable, and if so, whether it is optimal. To this end, we characterise the asymptotic Pareto frontier as T →∞. We find that the form (2) is indeed achievable but fundamentally sub-optimal. This is of philosophical interest as it hints that approaching absolute loss by essentially reducing it to information theory (including Bayesian and Minimum Description Length methods, relative entropy based optimisation (instance of Mirror Descent), Defensive Forecasting etc.) is lossy. Finally, we show that our solution for absolute loss equals that of K = 2 experts with bounded linear loss. 
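For concreteness, the uniform bound (1) quoted above is easy to observe empirically. The sketch below is our own illustration of Hedge (exponential weights) with the learning rate η = √(8/T ln K) mentioned in the introduction; the function name and the random loss data are ours:

```python
import math, random

random.seed(1)

def hedge_regret(losses, eta):
    """Run Hedge (exponential weights) on a T x K loss matrix with entries in
    [0, 1]; return the learner's (mixture) loss minus the best expert's loss."""
    K = len(losses[0])
    w = [1.0] * K
    total = 0.0
    for row in losses:
        W = sum(w)
        total += sum(wi * li for wi, li in zip(w, row)) / W  # expected loss
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, row)]
    best = min(sum(row[k] for row in losses) for k in range(K))
    return total - best

T, K = 2000, 2
losses = [[random.random() for _ in range(K)] for _ in range(T)]
eta = math.sqrt(8 * math.log(K) / T)
regret = hedge_regret(losses, eta)
assert regret <= math.sqrt(T / 2 * math.log(K))  # the uniform bound (1)
```

The bound holds deterministically for any loss sequence in [0, 1]; the random data here merely exercises it.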
We then show how to obtain the bound (1) for K ≥2 experts using a recursive combination of two-expert predictors. Counter-intuitively, this cannot be achieved with a balanced binary tree of predictors, but requires the most unbalanced tree possible. Recursive combination with non-uniform prior weights allows us to obtain (2) (with higher constant) for any prior q. 1.2 Related work Our work lies in the intersection of two lines of work, and uses ideas from both. On the one hand there are the game-theoretic (minimax) approaches to prediction with expert advice. In [6] CesaBianchi, Freund, Haussler, Helmbold, Schapire and Warmuth analysed the minimax strategy for absolute loss with a known time horizon T. In [5] Cesa-Bianchi and Shamir used random walks to implement it efficiently for K = 2 experts or K ≥2 static experts. A similar analysis was given by Koolen in [4] with an application to tracking. In [7] Abernethy, Langford and Warmuth obtained the optimal strategy for absolute loss with experts that issue binary predictions, now controlling the game complexity by imposing a bound on the loss of the best expert. Then in [3] Abernethy, Warmuth and Yellin obtained the worst case optimal algorithm for K ≥2 arbitrary experts. More general budgets were subsequently analysed by Abernethy and Warmuth in [8]. Connections between minimax values and algorithms were studied by Rakhlin, Shamir and Sridharan in [9]. On the other hand there are the approaches that do not treat all experts equally. Freund and Schapire obtain a non-uniform bound for Hedge in [1] using priors, although they leave the tuning problem open. The tuning problem was addressed by Hutter and Poland in [2] using two-stages of Follow the Perturbed Leader. Even-Dar, Kearns, Mansour and Wortman characterise the achievable tradeoffs when we desire especially small regret compared to a fixed average of the experts’ losses in [10]. Their bounds were subsequently tightened by Kapralov and Panigrahy in [11]. 
An at least tangentially related problem is to ensure smaller regret when there are several good experts. This was achieved by Chaudhuri, Freund and Hsu in [12], and later refined by Chernov and Vovk in [13].

2 Setup

The absolute loss game is one of the core decision problems studied in online learning [14]. In it, the learner sequentially predicts T binary outcomes. Each round t ∈ {1, ..., T} the learner assigns a probability p_t ∈ [0, 1] to the next outcome being a 1, after which the actual outcome x_t ∈ {0, 1} is revealed, and the learner suffers absolute loss |p_t − x_t|. Note that the absolute loss equals the expected 0/1 loss, that is, the probability of a mistake if a "hard" prediction in {0, 1} is sampled with bias p on 1. Realising that the learner cannot avoid high cumulative loss without assumptions on the origin of the outcomes, the learner's objective is defined to ensure low cumulative loss compared to a fixed set of baseline strategies. Meeting this goal ensures that the easier the outcome sequence (i.e. the lower the loss of some reference strategy), the lower the cumulative loss incurred by the learner.

[Figure 1: Exact regret trade-off profile. (a) The Pareto trade-off profiles for small T; the sets G_T consist of the points to the north-east of each curve. (b) Realisable trade-off profiles for T = 0, 1, 2, 3; the vertices on the profile for each horizon T are numbered 0, ..., T from left to right.]

The regret w.r.t. the strategy k ∈ {0, 1} that always predicts k is given by¹

  R^k_T := \sum_{t=1}^T ( |p_t − x_t| − |k − x_t| ).

Minimising regret, defined in this way, is a multi-objective optimisation problem. The classical approach is to "scalarise" it into the single objective R_T := \max_k R^k_T, that is, to ensure small regret compared to the best expert in hindsight. In this paper we study the full Pareto trade-off curve.

Definition 1.
A candidate trade-off ⟨r0, r1⟩∈R2 is called T-realisable for the T-round absolute loss game if there is a strategy that keeps the regret w.r.t. each k ∈{0, 1} below rk, i.e. if ∃p1∀x1 · · · ∃pT ∀xT : R0 T ≤r0 and R1 T ≤r1 where pt ∈[0, 1] and xt ∈{0, 1} in each round t. We denote the set of all T-realisable pairs by GT . This definition extends easily to other losses, many experts, fancy reference combinations of experts (e.g. shifts, drift, mixtures), protocols with side information etc. We consider some of these extension in Section 5, but for now our goal is to keep it as simple as possible. 3 The exact regret trade-off profile In this section we characterise the set GT ⊂R2 of T-realisable trade-offs. We show that it is a convex polygon, that we subsequently characterise by its vertices and edges. We also exhibit the optimal strategy witnessing each Pareto optimal trade-off and discuss the connection with random walks. We first present some useful observations about GT . The linearity of the loss as a function of the prediction already renders GT highly regular. Lemma 2. The set GT of T-realisable trade-offs is convex for each T. Proof. Take rA and rB in GT . We need to show that αrA +(1−α)rB ∈GT for all α ∈[0, 1]. Let A and B be strategies witnessing the T-realisability of these points. Now consider the strategy that in each round t plays the mixture αpA t + (1 −α)pB t . As the absolute loss is linear in the prediction, this strategy guarantees LT = αLA T +(1−α)LB T ≤Lk T +αrA k +(1−α)rB k for each k ∈{0, 1}. Guarantees violated early cannot be restored later. Lemma 3. A strategy that guarantees Rk T ≤rk must maintain Rk t ≤rk for all 0 ≤t ≤T. 1One could define the regret Rk T for all static reference probabilities k ∈[0, 1], but as the loss is minimised by either k = 0 or k = 1, we immediately restrict to only comparing against these two. 3 Proof. Suppose toward contradiction that Rk t > rk at some t < T. An adversary may set all xt+1 . . . 
xT to k to fix Lk T = Lk t . As LT ≥Lt, we have Rk T = LT −Lk T ≥Lt−Lk t = Rk t > rk. The two extreme trade-offs ⟨0, T⟩and ⟨T, 0⟩are Pareto optimal. Lemma 4. Fix horizon T and r1 ∈R. The candidate profile ⟨0, r1⟩is T-realisable iff r1 ≥T. Proof. The static strategy pt = 0 witnesses ⟨0, T⟩∈GT for every horizon T. To ensure R1 T < T, any strategy will have to play pt > 0 at some time t ≤T. But then it cannot maintain R0 t = 0. It is also intuitive that maintaining low regret becomes progressively harder with T. Lemma 5. G0 ⊃G1 ⊃. . . Proof. Lemma 3 establishes ⊇, whereas Lemma 4 establishes ̸=. We now come to our first main result, the characterisation of GT . We will directly characterise its south-west frontier, that is, the set of Pareto optimal trade-offs. These frontiers are graphed up to T = 10 in Figure 1a. The vertex numbering we introduce below is illustrated by Figure 1b. Theorem 6. The Pareto frontier of GT is the piece-wise linear curve through the T + 1 vertices fT (i), fT (T −i) for i ∈{0, . . . , T} where fT (i) := i X j=0 j2j−T T −j −1 T −i −1 . Moreover, for T > 0 the optimal strategy at vertex i assigns to the outcome x = 1 the probability pT (0) := 0, pT (T) := 1, and pT (i) := fT −1(i) −fT −1(i −1) 2 for 0 < i < T, and the optimal probability interpolates linearly in between consecutive vertices. Proof. By induction on T. We first consider the base case T = 0. By Definition 1 G0 = ⟨r0, r1⟩ r0 ≥0 and r1 ≥0 is the positive orthant, which has the origin as its single Pareto optimal vertex, and indeed ⟨f0(0), f0(0)⟩= ⟨0, 0⟩. We now turn to T ≥1. Again by Definition 1 ⟨r0, r1⟩∈GT if ∃p ∈[0, 1]∀x ∈{0, 1} : r0 −|p −x| + |0 −x|, r1 −|p −x| + |1 −x| ∈GT −1, that is if ∃p ∈[0, 1] : r0 −p, r1 −p + 1 ∈GT −1 and r0 + p, r1 + p −1 ∈GT −1. By the induction hypothesis we know that the south-west frontier curve for GT −1 is piecewise linear. We will characterise GT via its frontier as well. 
For each r_0, let r_1(r_0) and p(r_0) denote the value and minimiser of the optimisation problem

  \min_{p ∈ [0,1]} r_1  subject to both ⟨r_0, r_1⟩ ± ⟨p, p − 1⟩ ∈ G_{T−1}.

We also refer to ⟨r_0, r_1(r_0)⟩ ± ⟨p(r_0), p(r_0) − 1⟩ as the rear (−) and front (+) contact points. For r_0 = 0 we find r_1(0) = T, with witness p(0) = 0 and rear/front contact points ⟨0, T + 1⟩ and ⟨0, T − 1⟩; for r_0 = T we find r_1(T) = 0, with witness p(T) = 1 and rear/front contact points ⟨T − 1, 0⟩ and ⟨T + 1, 0⟩. It remains to consider the intermediate trajectory of r_1(r_0) as r_0 runs from 0 to T. Initially, at r_0 = 0, the rear contact point lies on the edge of G_{T−1} entering vertex i = 0 of G_{T−1}, while the front contact point lies on the edge emanating from that same vertex. So if we increase r_0 slightly, the contact points will slide along their respective lines. By Lemma 11 (supplementary material), r_1(r_0) will trace along a straight line as a result. Once we increase r_0 enough, the rear and front contact points will hit the vertices at the ends of their edges simultaneously (a fortunate fact that greatly simplifies our analysis), as shown in Lemma 12 (supplementary material). The contact points then transition to tracing the next pair of edges of G_{T−1}. At this point the slope of r_1(r_0) changes, and we have discovered a vertex of G_T. Given that at each such transition ⟨r_0, r_1(r_0)⟩ is the midpoint between the two contact points, this implies that all midpoints between successive vertices of G_{T−1} are vertices of G_T. In addition, there are the two boundary vertices ⟨0, T⟩ and ⟨T, 0⟩.

[Figure 2: Pareto frontier of G, the asymptotically realisable trade-off rates. (a) Normal scale. (b) Log-log scale to highlight the tail behaviour.]
There is no noticeable difference with the normalised regret trade-off profile GT / √ T for T = 10000. We also graph the curve p −ln(q), p −ln(1 −q) for all q ∈[0, 1]. 3.1 The optimal strategy and random walks In this section we describe how to follow the optimal strategy. First suppose we desire to witness a T-realisable trade-off that happens to be a vertex of GT , say vertex i at ⟨fT (i), fT (T −i)⟩. With T rounds remaining and in state i, the strategy predicts with pT (i). Then the outcome x ∈{0, 1} is revealed. If x = 0, we need to witness in the remaining T −1 rounds the trade-off ⟨fT (i), fT (T − i)⟩−⟨pT (i), pT (i) + 1⟩= ⟨fT −1(i −1), fT −1(T −1)⟩, which is vertex i −1 of GT −1. So the strategy transition to state i −1. Similarly upon x = 1 we update our internal state to i. If the state ever either exceeds the number of rounds remaining or goes negative we simply clamp it. Second, if we desire to witness a T-realisable trade-off that is a convex combination of successive vertices, we simply follow the mixture strategy as constructed in Lemma 2. Third, if we desire to witness a sub-optimal element of GT , we may follow any strategy that witnesses a Pareto optimal dominating trade-off. The probability p issued by the algorithm is sometimes used to randomly sample a “hard prediction” from {0, 1}. The expression |p −x| then denotes the expected loss, which equals the probability of making a mistake. We present, following [3], a random-walk based method to sample a 1 with probability pT (i). Our random walk starts in state ⟨T, i⟩. In each round it transitions from state ⟨T, i⟩ to either state ⟨T −1, i⟩or state ⟨T −1, i −1⟩with equal probability. It is stopped when the state ⟨T, i⟩becomes extreme in the sense that i ∈{0, T}. Note that this process always terminates. Then the probability that this process is stopped with i = T equals pT (i). In our case of absolute loss, evaluating pT (i) and performing the random walk both take T units of time. 
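Both the frontier vertices of Theorem 6 and the optimal strategy p_T(i) are easy to check numerically. The sketch below is our own code: it builds G_T's vertices by the midpoint recursion from the proof of Theorem 6, compares them against the closed form f_T(i), and evaluates the interior strategy probabilities:

```python
from fractions import Fraction
from math import comb

def f(T, i):
    """Vertex coordinate f_T(i) from Theorem 6, exact rational arithmetic.
    Valid for 0 <= i < T; the remaining boundary vertex is f_T(T) = T."""
    return sum(Fraction(j * 2**j, 2**T) * comb(T - j - 1, T - i - 1)
               for j in range(i + 1))

def frontier(T):
    """Pareto vertices of G_T via the midpoint recursion in the proof of
    Theorem 6: midpoints of consecutive G_{T-1} vertices plus the two
    boundary vertices <0, T> and <T, 0>."""
    vs = [(Fraction(0), Fraction(0))]
    for t in range(1, T + 1):
        mids = [((a0 + b0) / 2, (a1 + b1) / 2)
                for (a0, a1), (b0, b1) in zip(vs, vs[1:])]
        vs = [(Fraction(0), Fraction(t))] + mids + [(Fraction(t), Fraction(0))]
    return vs

T = 6
vs = frontier(T)
assert len(vs) == T + 1
assert vs[0] == (0, T) and vs[T] == (T, 0)
for i in range(1, T):  # interior vertices match the closed form exactly
    assert vs[i] == (f(T, i), f(T, T - i))

# Optimal strategy at interior vertices: p_T(i) = (f_{T-1}(i) - f_{T-1}(i-1)) / 2.
p = [(f(T - 1, i) - f(T - 1, i - 1)) / 2 for i in range(1, T - 1)]
assert all(Fraction(0) < q < Fraction(1) for q in p)
```

For instance, frontier(3) returns the vertices ⟨0, 3⟩, ⟨1/4, 5/4⟩, ⟨5/4, 1/4⟩, ⟨3, 0⟩ shown in Figure 1b.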
The random walks considered in [3] for K ≥ 2 experts still take T steps, whereas direct evaluation of the optimal strategy scales rather badly with K.
4 The asymptotic regret rate trade-off profile
In the previous section we obtained for each time horizon T a combinatorial characterisation of the set G_T of T-realisable trade-offs. In this section we show that properly normalised Pareto frontiers for increasing T are better and better approximations of a certain intrinsic smooth limit curve. We obtain a formula for this curve, and use it to study the question of realisability for large T.
Definition 7. Let us define the set G of asymptotically realisable regret rate trade-offs by
G := lim_{T→∞} G_T/√T.
Despite the disappearance of the horizon T from the notation, the set G still captures the trade-offs that can be achieved with prior knowledge of T. Each achievable regret rate trade-off ⟨ρ0, ρ1⟩ ∈ G may be witnessed by a different strategy for each T. This is fine for our intended interpretation of √T G as a proxy for G_T. We briefly mention horizon-free algorithms at the end of this section.
The literature [2] suggests that, for some constant c, ⟨√(−c ln q), √(−c ln(1 − q))⟩ should be asymptotically realisable for each q ∈ [0, 1]. We indeed confirm this below, and determine the optimal constant to be c = 1. We then discuss the philosophical implications of the quality of this bound.
We now come to our second main result, the characterisation of the asymptotically realisable trade-off rates. The Pareto frontier is graphed in Figure 2, both on normal axes for comparison to Figure 1a, and on a log-log scale to show its tails. Note the remarkable quality of the approximation to G_T/√T.
Theorem 8. The Pareto frontier of the set G of asymptotically realisable trade-offs is the curve ⟨f(u), f(−u)⟩ for u ∈ R, where
f(u) := u erf(√2 u) + e^{−2u²}/√(2π) + u,
and erf(u) = (2/√π) ∫₀^u e^{−v²} dv is the error function. Moreover, the optimal strategy converges to p(u) = (1 − erf(√2 u))/2.
Proof.
We calculate the limit of the normalised Pareto frontiers at vertex i = T/2 + u√T, and obtain (writing C(n, k) for the binomial coefficient n choose k)
lim_{T→∞} f_T(T/2 + u√T)/√T
= lim_{T→∞} (1/√T) Σ_{j=0}^{T/2+u√T} j 2^{j−T} C(T − j − 1, T/2 − u√T − 1)
= lim_{T→∞} (1/√T) ∫₀^{T/2+u√T} j 2^{j−T} C(T − j − 1, T/2 − u√T − 1) dj
= lim_{T→∞} ∫_{−√T/2}^{u} (u − v) 2^{(u−v)√T − T} C(T − (u − v)√T − 1, T/2 − u√T − 1) √T dv
= ∫_{−∞}^{u} (u − v) lim_{T→∞} [2^{(u−v)√T − T} C(T − (u − v)√T − 1, T/2 − u√T − 1) √T] dv
= ∫_{−∞}^{u} (u − v) e^{−(u+v)²/2}/√(2π) dv
= u erf(√2 u) + e^{−2u²}/√(2π) + u.
In the first step we replace the sum by an integral. We can do this as the summand is continuous in j, and the approximation error is multiplied by 2^{−T} and hence goes to 0 with T. In the second step we perform the variable substitution v = u − j/√T. We then exchange limit and integral, subsequently evaluate the limit, and in the final step we evaluate the integral.
To obtain the optimal strategy, we observe the following relation between the slope of the Pareto curve and the optimal strategy for each horizon T. Let g and h denote the Pareto curves at times T and T + 1 as functions of r0. The optimal strategy p for T + 1 at r0 satisfies the system of equations
h(r0) + p − 1 = g(r0 + p) and h(r0) − p + 1 = g(r0 − p),
whose solution satisfies
1 − 1/p = (g(r0 + p) − g(r0 − p))/(2p) ≈ dg(r0)/dr0, so that p ≈ 1/(1 − dg(r0)/dr0).
Since slope is invariant under normalisation, this relation between slope and optimal strategy becomes exact as T tends to infinity, and we find
p(u) = 1/(1 + f′(u)/f′(−u)) = (1 − erf(√2 u))/2.
We believe this last argument is more insightful than a direct evaluation of the limit.
4.1 Square root of min log prior
Results for Hedge suggest — modulo a daunting tuning problem — that a trade-off featuring the square root of the negative log prior akin to (2) should be realisable. We first show that this is indeed the case, we then determine the optimal leading constant, and we finally discuss its sub-optimality.
Theorem 9.
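For a quick numerical sanity check of Theorem 8, the frontier f and the limiting strategy p can be evaluated directly via the standard error function (a sketch; the function names are ours):

```python
import math

def f(u):
    """Parametrisation of the Pareto frontier of G (Theorem 8)."""
    return u * math.erf(math.sqrt(2) * u) + math.exp(-2 * u * u) / math.sqrt(2 * math.pi) + u

def p(u):
    """Limiting optimal strategy of Theorem 8."""
    return (1 - math.erf(math.sqrt(2) * u)) / 2
```

One can check that f(0) = 1/√(2π) ≈ 0.399, the symmetric optimum quoted below, that f(u) ≈ 2u for large u as used in the tail analysis, and that p(0) = 1/2 with p(u) decreasing in u.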
The parametric curve ⟨√(−c ln q), √(−c ln(1 − q))⟩ for q ∈ [0, 1] is contained in G (i.e. asymptotically realisable) iff c ≥ 1.
Proof. By Theorem 8, the frontier of G is of the form ⟨f(u), f(−u)⟩. Our argument revolves around the tails (extreme u) of G. For large u ≫ 0, we find that f(u) ≈ 2u. For small u ≪ 0, we find that f(u) ≈ e^{−2u²}/(4√(2π) u²). This is obtained by a 3rd-order Taylor series expansion around u = −∞; we need to go to 3rd order since all prior orders evaluate to 0. The additive approximation error is of order e^{−2u²} u^{−4}, which is negligible. So for large r0 ≫ 0, the least realisable r1 is approximately
r1 ≈ e^{−r0²/2 − 2 ln r0}/√(2π). (3)
With the candidate relation r0 = √(−c ln q) and r1 = √(−c ln(1 − q)), still for large r0 ≫ 0 so that q is small and −ln(1 − q) ≈ q, we would instead find the least realisable r1 approximately equal to
r1 ≈ √c · e^{−r0²/(2c)}. (4)
The candidate tail (4) must be at least the actual tail (3) for all large r0. The minimal c for which this holds is c = 1. The graphs of Figure 2 illustrate this tail behaviour for c = 1, and at the same time verify that there are no violations for moderate u.
Even though the sqrt-min-log-prior trade-off is realisable, we see that its tail (4) exceeds the actual tail (3) by the factor r0² √(2π), which gets progressively worse with the extremity of the tail r0. Figure 2a shows that its behaviour for moderate ⟨r0, r1⟩ is also not brilliant. For example, it gives us a symmetric bound of √(ln 2) ≈ 0.833, whereas f(0) = 1/√(2π) ≈ 0.399 is optimal.
For certain log loss games, each Pareto regret trade-off is witnessed uniquely by the Bayesian mixture of expert predictions w.r.t. a certain non-uniform prior and vice versa (not shown). In this sense the Bayesian method is the ideal answer to data compression/investment/gambling. Be that as it may, we conclude that the world of absolute loss is not information theory: simply putting a prior is not the definitive answer to non-uniform guarantees.
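The gap discussed in the proof is easy to reproduce numerically (an illustrative sketch with our own function names):

```python
import math

def frontier_point(u):
    """Point <f(u), f(-u)> on the asymptotically realisable frontier (Theorem 8)."""
    def f(v):
        return v * math.erf(math.sqrt(2) * v) + math.exp(-2 * v * v) / math.sqrt(2 * math.pi) + v
    return f(u), f(-u)

def sqrt_min_log_prior_point(q, c=1.0):
    """Candidate trade-off <sqrt(-c ln q), sqrt(-c ln(1-q))> of Theorem 9."""
    return math.sqrt(-c * math.log(q)), math.sqrt(-c * math.log(1 - q))
```

At the symmetric point, sqrt_min_log_prior_point(0.5) gives roughly (0.833, 0.833), while frontier_point(0) gives roughly (0.399, 0.399), the two numbers quoted above.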
It is a useful intuition that leads to the convenient sqrt-min-log-prior bounds. We hope that our results contribute to obtaining tighter bounds that remain manageable.
4.2 The asymptotic algorithm
The previous theorem immediately suggests an approximate algorithm for finite horizon T. To approximately witness ⟨r0, r1⟩, find the value of u for which √T ⟨f(u), f(−u)⟩ is closest to it. Then play p(u). This will not guarantee ⟨r0, r1⟩ exactly, but intuitively it will be close. We leave analysing this idea to the journal version.
Conversely, by taking the limit of the game protocol, which involves the absolute loss function, we might obtain an interesting protocol and "asymptotic" loss function², for which u is the natural state, p(u) is the optimal strategy, and u is updated in a certain way. Investigating such questions will probably lead to interesting insights, for example horizon-free strategies that maintain R_T^k/√T ≤ ρ_k for all T simultaneously. Again this will be pursued for the journal version.
²We have seen an instance of this before. When the Hedge algorithm with learning rate η plays weights w and faces loss vector ℓ, its dot loss is given by w^⊤ℓ. Now consider the same loss vector handed out in identical pieces ℓ/n over the course of n trials, during which the weights w update as usual. In the limit of n → ∞, the resulting loss becomes the mix loss −(1/η) ln Σ_k w(k) e^{−η ℓ_k}.
5 Extension
5.1 Beyond absolute loss
In this section we consider the general setting with K = 2 experts, which we still refer to as 0 and 1. Here the learner plays p ∈ [0, 1], which is now interpreted as the weight allocated to expert 1, the adversary chooses a loss vector ℓ = ⟨ℓ^0, ℓ^1⟩ ∈ [0, 1]², and the learner incurs the dot loss (1 − p)ℓ^0 + pℓ^1. The regrets are now redefined as follows:
R_T^k := Σ_{t=1}^T (p_t ℓ_t^1 + (1 − p_t) ℓ_t^0) − Σ_{t=1}^T ℓ_t^k for each expert k ∈ {0, 1}.
Theorem 10. The T-realisable trade-offs for absolute loss and K = 2 expert dot loss coincide.
Proof. By induction on T.
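The finite-horizon recipe of Section 4.2 can be sketched as a simple grid search over u (a rough illustration of the idea, not the analysed algorithm; the names and grid parameters are ours):

```python
import math

def _f(u):
    # Pareto frontier parametrisation of Theorem 8
    return u * math.erf(math.sqrt(2) * u) + math.exp(-2 * u * u) / math.sqrt(2 * math.pi) + u

def asymptotic_play(r0, r1, T, lo=-5.0, hi=5.0, grid=10001):
    """Approximately witness <r0, r1> with horizon T: pick the u whose scaled
    frontier point sqrt(T)*<f(u), f(-u)> is closest to <r0, r1>, then play p(u)."""
    s = math.sqrt(T)
    us = (lo + (hi - lo) * k / (grid - 1) for k in range(grid))
    best = min(us, key=lambda u: (s * _f(u) - r0) ** 2 + (s * _f(-u) - r1) ** 2)
    return (1 - math.erf(math.sqrt(2) * best)) / 2
```

A symmetric target yields p close to 1/2, and skewed targets move p away from 1/2 accordingly.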
The loss is irrelevant in the base case T = 0. For T > 0, a trade-off ⟨r0, r1⟩ is T-realisable for dot loss if
∃p ∈ [0, 1] ∀ℓ ∈ [0, 1]²: ⟨r0 + pℓ^1 + (1 − p)ℓ^0 − ℓ^0, r1 + pℓ^1 + (1 − p)ℓ^0 − ℓ^1⟩ ∈ G_{T−1},
that is, if
∃p ∈ [0, 1] ∀δ ∈ [−1, 1]: ⟨r0 − pδ, r1 + (1 − p)δ⟩ ∈ G_{T−1}.
We recover the absolute loss case by restricting δ to {−1, 1}. These requirements are equivalent since G_T is convex by Lemma 2.
5.2 More than 2 experts
In the general experts problem we compete with K instead of 2 experts. We now argue that an algorithm guaranteeing R_T^k ≤ √(cT ln K) w.r.t. each expert k can be obtained. The intuitive approach, combining the K experts in a balanced binary tree of two-expert predictors, does not achieve this goal: each internal node contributes the optimal symmetric regret of √(T/(2π)). This accumulates to R_T^k ≤ ln K √(cT), where the log sits outside the square root. Counter-intuitively, the maximally unbalanced binary tree does result in a √(ln K) factor when the internal nodes are properly skewed. At each level we combine K experts one-vs-all, permitting large regret w.r.t. the first expert but tiny regret w.r.t. the recursive combination of the remaining K − 1 experts. The argument can be found in Appendix A.1. The same argument shows that, for any prior q on k = 1, 2, . . ., combining the expert with the smallest prior with the recursive combination of the rest guarantees regret √(−cT ln q(k)) w.r.t. each expert k.
6 Conclusion
We studied asymmetric regret guarantees for the fundamental online learning setting of the absolute loss game. We obtained exactly the achievable skewed regret guarantees, and the corresponding optimal algorithm. We then studied the profile in the limit of large T. We conclude that the expected √T ⟨√(−ln q), √(−ln(1 − q))⟩ trade-off is achievable for any prior probability q ∈ [0, 1], but that it is not tight. We then showed how our results transfer from absolute loss to general linear losses, and to more than two experts.
Major next steps are to determine the optimal trade-offs for K > 2 experts, to replace our traditional √T budget by modern variants √(L_T^k) [15], √(L_T^k (T − L_T^k)/T) [16], √(Varmax_T) [17], √(D_∞) [18], Δ_T [19], etc., and to find the Pareto frontier for horizon-free strategies maintaining R_T^k ≤ ρ_k √T at any T.
Acknowledgements
This work benefited substantially from discussions with Peter Grünwald.
References
[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
[2] Marcus Hutter and Jan Poland. Adaptive online prediction by following the perturbed leader. Journal of Machine Learning Research, 6:639–660, 2005.
[3] Jacob Abernethy, Manfred K. Warmuth, and Joel Yellin. When random play is optimal against an adversary. In Rocco A. Servedio and Tong Zhang, editors, COLT, pages 437–446. Omnipress, 2008.
[4] Wouter M. Koolen. Combining Strategies Efficiently: High-quality Decisions from Conflicting Advice. PhD thesis, Institute of Logic, Language and Computation (ILLC), University of Amsterdam, January 2011.
[5] Nicolò Cesa-Bianchi and Ohad Shamir. Efficient online learning via randomized rounding. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 343–351, 2011.
[6] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[7] Jacob Abernethy, John Langford, and Manfred K. Warmuth. Continuous experts and the Binning algorithm. In Learning Theory, pages 544–558. Springer, 2006.
[8] Jacob Abernethy and Manfred K. Warmuth. Repeated games against budgeted adversaries. In J. Lafferty, C.K.I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1–9, 2010.
[9] Sasha Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: From value to algorithms. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2150–2158, 2012.
[10] Eyal Even-Dar, Michael Kearns, Yishay Mansour, and Jennifer Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37, 2008.
[11] Michael Kapralov and Rina Panigrahy. Prediction strategies without loss. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 828–836, 2011.
[12] Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu. A parameter-free hedging algorithm. In Y. Bengio, D. Schuurmans, J. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 297–305, 2009.
[13] Alexey V. Chernov and Vladimir Vovk. Prediction with advice of unknown number of experts. In Peter Grünwald and Peter Spirtes, editors, UAI, pages 117–125. AUAI Press, 2010.
[14] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[15] Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48–75, 2002.
[16] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
[17] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2-3):165–188, 2010.
[18] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In Proceedings of the 25th Annual Conference on Learning Theory, number 23 in JMLR W&CP, pages 6.1–6.20, June 2012.
[19] Steven de Rooij, Tim van Erven, Peter D. Grünwald, and Wouter M. Koolen. Follow the leader if you can, Hedge if you must. ArXiv, 1301.0534, January 2013.
Approximate Dynamic Programming Finally Performs Well in the Game of Tetris
Victor Gabillon, INRIA Lille - Nord Europe, Team SequeL, FRANCE, victor.gabillon@inria.fr
Mohammad Ghavamzadeh∗, INRIA Lille - Team SequeL & Adobe Research, mohammad.ghavamzadeh@inria.fr
Bruno Scherrer, INRIA Nancy - Grand Est, Team Maia, FRANCE, bruno.scherrer@inria.fr
Abstract
Tetris is a video game that has been widely used as a benchmark for various optimization techniques including approximate dynamic programming (ADP) algorithms. A look at the literature of this game shows that while ADP algorithms that have been (almost) entirely based on approximating the value function (value function based) have performed poorly in Tetris, the methods that search directly in the space of policies by learning the policy parameters using an optimization black box, such as the cross entropy (CE) method, have achieved the best reported results. This makes us conjecture that Tetris is a game in which good policies are easier to represent, and thus learn, than their corresponding value functions. So, in order to obtain a good performance with ADP, we should use ADP algorithms that search in a policy space, instead of the more traditional ones that search in a value function space. In this paper, we put our conjecture to test by applying such an ADP algorithm, called classification-based modified policy iteration (CBMPI), to the game of Tetris. Our experimental results show that for the first time an ADP algorithm, namely CBMPI, obtains the best results reported in the literature for Tetris in both small 10 × 10 and large 10 × 20 boards. Although CBMPI's results are similar to those of the CE method in the large board, CBMPI uses considerably fewer (almost 1/6) samples (calls to the generative model) than CE.
1 Introduction
Tetris is a popular video game created by Alexey Pajitnov in 1985.
The game is played on a grid originally composed of 20 rows and 10 columns, where pieces of 7 different shapes fall from the top – see Figure 1. The player has to choose where to place each falling piece by moving it horizontally and rotating it. When a row is filled, it is removed and all the cells above it move one line down. The goal is to remove as many rows as possible before the game is over, i.e., when there is no space available at the top of the grid for the new piece.
Figure 1: A screen-shot of the game of Tetris with its seven pieces (shapes).
In this paper, we consider the variation of the game in which the player knows only the current falling piece, and not the next several coming pieces. This game constitutes an interesting optimization benchmark in which the goal is to find a controller (policy) that maximizes the average (over multiple games) number of lines removed in a game (score).¹ This optimization problem is known to be computationally hard. It contains a huge number of board configurations (about 2²⁰⁰ ≈ 1.6 × 10⁶⁰), and even in the case that the sequence of pieces is known in advance, finding the optimal strategy is an NP-hard problem [4].
Approximate dynamic programming (ADP) and reinforcement learning (RL) algorithms have been used in Tetris. These algorithms formulate Tetris as a Markov decision process (MDP) in which the state is defined by the current board configuration plus the falling piece, the actions are the possible orientations of the piece and the possible locations that it can be placed on the board,² and the reward is defined such that maximizing the expected sum of rewards from each state coincides with maximizing the score from that state.
∗Mohammad Ghavamzadeh is currently at Adobe Research, on leave of absence from INRIA.
¹Note that this number is finite because it was shown that Tetris is a game that ends with probability one [3].
Since the state space is large in Tetris, these methods use value function approximation schemes (often linear approximation) and try to tune the value function parameters (weights) from game simulations. The first application of ADP in Tetris seems to be by Tsitsiklis and Van Roy [22]. They used the approximate value iteration algorithm with two state features: the board height and the number of holes in the board, and obtained a low score of 30 to 40. Bertsekas and Ioffe [1] proposed the λ-Policy Iteration (λ-PI) algorithm (a generalization of value and policy iteration) and applied it to Tetris. They approximated the value function as a linear combination of a more elaborate set of 22 features and reported the score of 3,200 lines. The exact same empirical study was revisited recently by Scherrer [16], who corrected an implementation bug in [1], and reported more stable learning curves and the score of 4,000 lines. At least three other ADP and RL papers have used the same set of features, which we refer to as the "Bertsekas features", in the game of Tetris. Kakade [11] applied a natural policy gradient method to Tetris and reported a score of about 6,800 lines. Farias and Van Roy [6] applied a linear programming algorithm to the game and achieved the score of 4,700 lines. Furmston and Barber [8] proposed an approximate Newton method to search in a policy space and were able to obtain a score of about 14,000. Despite all the above applications of ADP in Tetris (and possibly more), for a long time the best Tetris controller was the one designed by Dellacherie [5]. He used a heuristic evaluation function to give a score to each possible strategy (in a way similar to a value function in ADP), and eventually returned the one with the highest score. Dellacherie's evaluation function is made of 6 high-quality features with weights chosen by hand, and achieved a score of about 5,000,000 lines [19].
Szita and Lőrincz [18] used the "Bertsekas features" and optimized the weights by running a black box optimizer based on the cross entropy (CE) method [15]. They reported the score of 350,000 lines averaged over 30 games, outperforming the ADP and RL approaches that used the same features. More recently, Thiery and Scherrer [20] selected a set of 9 features (including those of Dellacherie's) and optimized the weights with the CE method. This led to the best publicly known controller (to the best of our knowledge) with the score of around 35,000,000 lines. Due to the high variance of the score and its sensitivity to some implementation details [19], it is difficult to have a precise evaluation of Tetris controllers. However, our brief tour d'horizon of the literature, and in particular the work by Szita and Lőrincz [18] (optimizing the "Bertsekas features" by CE), indicates that ADP algorithms, even with relatively good features, have performed far worse than the methods that directly search in the space of policies (such as CE and genetic algorithms). It is important to note that almost all these ADP methods are value function based algorithms that first define a value function representation (space) and then search in this space for a good function, which later gives us a policy.
The main motivation of our work comes from the above observation. This observation makes us conjecture that Tetris is a game whose policy space is easier to represent, and as a result to search in, than its value function space. Therefore, in order to obtain a good performance with ADP algorithms in this game, we should use those ADP methods that search in a policy space, instead of the more traditional ones that search in a value function space. Fortunately, a class of such ADP algorithms, called classification-based policy iteration (CbPI), has recently been developed and analyzed [12, 7, 13, 9, 17].
These algorithms differ from the standard value function based ADP methods in how the greedy policy is computed. Specifically, at each iteration CbPI algorithms approximate the entire greedy policy as the output of a classifier, while in the standard methods, at every given state, the required action from the greedy policy is individually calculated based on the approximation of the value function of the current policy. Since CbPI methods search in a policy space (defined by a classifier) instead of a value function space, we believe that they should perform better than their value function based counterparts in problems in which good policies are easier to represent than their corresponding value functions. In this paper, we put our conjecture to test by applying an algorithm in this class, called classification-based modified policy iteration (CBMPI) [17], to the game of Tetris, and compare its performance with the CE method and the λ-PI algorithm. The choice of CE and λ-PI is because the former has achieved the best known results in Tetris and the latter's performance is among the best reported for value function based ADP algorithms. Our extensive experimental results show that for the first time an ADP algorithm, namely CBMPI, obtains the best results reported in the literature for Tetris in both small 10 × 10 and large 10 × 20 boards. Although
²The total number of actions at a state depends on the falling piece, with a maximum of 32, i.e., |A| ≤ 32.
Input: parameter space Θ, number of parameter vectors n, proportion ρ ≤ 1, noise η
Initialize: Set the parameter µ = 0̄ and σ² = 100I (I is the identity matrix)
for k = 1, 2, . . . do
Generate a random sample of n parameter vectors {θ_i}_{i=1}^n ∼ N(µ, σ²I)
For each θ_i, play L games and calculate the average number of rows removed (score) by the controller
Select the ρn parameters with the highest score, θ′_1, . . .
, θ′_ρn
Update µ and σ: µ(j) = (1/ρn) Σ_{i=1}^{ρn} θ′_i(j) and σ²(j) = (1/ρn) Σ_{i=1}^{ρn} [θ′_i(j) − µ(j)]² + η
Figure 2: The pseudo-code of the cross-entropy (CE) method used in our experiments.
the CBMPI's results are similar to those achieved by the CE method in the large board, CBMPI uses considerably fewer (almost 1/6) samples (calls to the generative model of the game) than CE. In Section 2, we briefly describe the algorithms used in our experiments. In Section 3, we outline the setting of each algorithm in our experiments and report our results, followed by discussion.
2 Algorithms
In this section, we briefly describe the algorithms used in our experiments: the cross entropy (CE) method, classification-based modified policy iteration (CBMPI) [17] and its slight variation direct policy iteration (DPI) [13], and λ-policy iteration (see [16] for a description of λ-PI). We begin by defining some terms and notations. A state s in Tetris consists of two components: the description of the board b and the type of the falling piece p. All controllers rely on an evaluation function that gives a value to each possible action at a given state. Then, the controller chooses the action with the highest value. In ADP, algorithms aim at tuning the weights such that the evaluation function approximates well the optimal expected future score from each state. Since the total number of states is large in Tetris, the evaluation function f is usually defined as a linear combination of a set of features φ, i.e., f(·) = φ(·)^⊤θ. We can think of the parameter vector θ as a policy (controller) whose performance is specified by the corresponding evaluation function f(·) = φ(·)^⊤θ. The features used in Tetris for a state-action pair (s, a) may depend on the description of the board b resulting from taking action a in state s, e.g., the maximum height of b. Computing such features requires knowledge of the game's dynamics, which is known in Tetris.
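The CE update of Figure 2 is straightforward to implement. The sketch below is our own code, not the authors'; the `score` callback is assumed to evaluate a parameter vector (e.g. by playing L games) and return its average score:

```python
import numpy as np

def cross_entropy_search(score, dim, n=100, rho=0.1, eta=4.0, iters=10, rng=None):
    """Sketch of the CE method of Figure 2: sample parameter vectors from a
    diagonal Gaussian, keep the top rho*n by score, and refit mean/variance."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mu = np.zeros(dim)
    sigma2 = 100.0 * np.ones(dim)                        # initial sigma^2 = 100 I
    for _ in range(iters):
        thetas = rng.normal(mu, np.sqrt(sigma2), size=(n, dim))
        scores = np.array([score(th) for th in thetas])
        elite = thetas[np.argsort(scores)[-int(rho * n):]]  # top rho*n vectors
        mu = elite.mean(axis=0)
        sigma2 = elite.var(axis=0) + eta                 # noise eta keeps exploring
    return mu
```

On a toy quadratic score the mean converges to the maximiser; in the Tetris experiments the evaluation is the expensive part, since every sampled θ_i costs L full games.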
2.1 Cross Entropy Method
Cross-entropy (CE) [15] is an iterative method whose goal is to optimize a function f parameterized by a vector θ ∈ Θ by direct search in the parameter space Θ. Figure 2 contains the pseudo-code of the CE algorithm used in our experiments [18, 20]. At each iteration k, we sample n parameter vectors {θ_i}_{i=1}^n from a multivariate Gaussian distribution N(µ, σ²I). At the beginning, the parameters of this Gaussian have been set to cover a wide region of Θ. For each parameter θ_i, we play L games and calculate the average number of rows removed by this controller (an estimate of the evaluation function). We then select the ρn of these parameters with the highest score, θ′_1, . . . , θ′_ρn, and use them to update the mean µ and variance σ² of the Gaussian distribution, as shown in Figure 2. This updated Gaussian is used to sample the n parameters at the next iteration. The goal of this update is to sample more parameters from the promising part of Θ at the next iteration, and eventually converge to a global maximum of f.
2.2 Classification-based Modified Policy Iteration (CBMPI)
Modified policy iteration (MPI) [14] is an iterative algorithm to compute the optimal policy of an MDP that starts with initial policy π_1 and value v_0, and generates a sequence of value–policy pairs
v_k = (T_{π_k})^m v_{k−1} (evaluation step), π_{k+1} = G[(T_{π_k})^m v_{k−1}] (greedy step),
where G v_k is a greedy policy w.r.t. v_k, T_{π_k} is the Bellman operator associated with the policy π_k, and m ≥ 1 is a parameter. MPI generalizes the well-known value and policy iteration algorithms for the values m = 1 and m = ∞, respectively. CBMPI [17] is an approximation of MPI that uses an explicit representation for the policies π_k, in addition to the one used for the value functions v_k.
The idea is similar to the classification-based PI algorithms [12, 7, 13] in which we search for the greedy policy in a policy space Π (defined by a classifier) instead of computing it from the estimated value function (as in the standard implementation of MPI). As described in Figure 3, CBMPI begins with an arbitrary initial policy π_1 ∈ Π and value function v_0 ∈ F.³
³Note that the function space F and policy space Π are defined by the choice of the regressor and classifier.
Input: value function space F, policy space Π, state distribution µ
Initialize: Set π_1 ∈ Π and v_0 ∈ F to an arbitrary policy and value function
for k = 1, 2, . . . do
• Perform rollouts:
Construct the rollout set D_k = {s^(i)}_{i=1}^N, s^(i) iid∼ µ
for all states s^(i) ∈ D_k do
Perform a rollout and return v̂_k(s^(i)) (using Equation 1)
Construct the rollout set D′_k = {s^(i)}_{i=1}^N, s^(i) iid∼ µ
for all states s^(i) ∈ D′_k and actions a ∈ A do
for j = 1 to M do
Perform a rollout and return R_k^j(s^(i), a) (using Equation 4)
Q̂_k(s^(i), a) = (1/M) Σ_{j=1}^M R_k^j(s^(i), a)
• Approximate value function: v_k ∈ argmin_{v∈F} L̂_k^F(µ̂; v) (regression) (see Equation 2)
• Approximate greedy policy: π_{k+1} ∈ argmin_{π∈Π} L̂_k^Π(µ̂; π) (classification) (see Equation 3)
Figure 3: The pseudo-code of the CBMPI algorithm.
At each iteration k, a new value function v_k is built as the best approximation of the m-step Bellman operator (T_{π_k})^m v_{k−1} in F (evaluation step). This is done by solving a regression problem whose target function is (T_{π_k})^m v_{k−1}. To set up the regression problem, we build a rollout set D_k by sampling N states i.i.d. from a distribution µ. For each state s^(i) ∈ D_k, we generate a rollout (s^(i), a_0^(i), r_0^(i), s_1^(i), . . . , a_{m−1}^(i), r_{m−1}^(i), s_m^(i)) of size m, where a_t^(i) = π_k(s_t^(i)), and r_t^(i) and s_{t+1}^(i) are the reward and next state induced by this choice of action.
From this rollout, we compute an unbiased estimate v̂_k(s^(i)) of ((T_{π_k})^m v_{k−1})(s^(i)) as
v̂_k(s^(i)) = Σ_{t=0}^{m−1} γ^t r_t^(i) + γ^m v_{k−1}(s_m^(i)), (γ is the discount factor), (1)
and use it to build a training set {(s^(i), v̂_k(s^(i)))}_{i=1}^N. This training set is then used by the regressor to compute v_k as an estimate of (T_{π_k})^m v_{k−1}. The regressor finds a function v ∈ F that minimizes the empirical error
L̂_k^F(µ̂; v) = (1/N) Σ_{i=1}^N (v̂_k(s^(i)) − v(s^(i)))². (2)
The greedy step at iteration k computes the policy π_{k+1} as the best approximation of G[(T_{π_k})^m v_{k−1}] by minimizing the cost-sensitive empirical error (cost-sensitive classification)
L̂_k^Π(µ̂; π) = (1/N) Σ_{i=1}^N [max_{a∈A} Q̂_k(s^(i), a) − Q̂_k(s^(i), π(s^(i)))]. (3)
To set up this cost-sensitive classification problem, we build a rollout set D′_k by sampling N states i.i.d. from a distribution µ. For each state s^(i) ∈ D′_k and each action a ∈ A, we build M independent rollouts of size m + 1, i.e., (s^(i), a, r_0^(i,j), s_1^(i,j), a_1^(i,j), . . . , a_m^(i,j), r_m^(i,j), s_{m+1}^(i,j)) for j = 1, . . . , M, where for t ≥ 1, a_t^(i,j) = π_k(s_t^(i,j)), and r_t^(i,j) and s_{t+1}^(i,j) are the reward and next state induced by this choice of action. From these rollouts, we compute an unbiased estimate of Q_k(s^(i), a) as Q̂_k(s^(i), a) = (1/M) Σ_{j=1}^M R_k^j(s^(i), a), where each rollout estimate is defined as
R_k^j(s^(i), a) = Σ_{t=0}^m γ^t r_t^(i,j) + γ^{m+1} v_{k−1}(s_{m+1}^(i,j)). (4)
If we remove the regressor from CBMPI and only use the m-truncated rollouts R_k^j(s^(i), a) = Σ_{t=0}^m γ^t r_t^(i,j) to compute Q̂_k(s^(i), a), then CBMPI becomes the direct policy iteration (DPI) algorithm [13] that we also use in our experiments (see [17] for more details on the CBMPI algorithm).
In our implementation of CBMPI (DPI) in Tetris (Section 3), we use the same rollout set (D_k = D′_k) and rollouts for the classifier and regressor. This is mainly to be more sample-efficient. Fortunately, we observed that this does not affect the overall performance of the algorithm. We set the discount factor γ = 1.
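The rollout estimate of Equation 1 can be sketched as a small generic helper (our own illustration, not the authors' code; `env.step(state, action) -> (reward, next_state)` is an assumed interface):

```python
def rollout_value(env, state, policy, v_prev, m, gamma=1.0):
    """Unbiased rollout estimate of ((T_pi)^m v_{k-1})(state), as in Equation 1:
    follow `policy` for m steps, accumulate discounted rewards, then bootstrap
    with the previous value function v_prev at the final state."""
    total, s = 0.0, state
    for t in range(m):
        reward, s = env.step(s, policy(s))
        total += gamma ** t * reward
    return total + gamma ** m * v_prev(s)
```

Dropping the bootstrap term (taking v_prev identically 0) gives the m-truncated estimate that turns CBMPI into DPI.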
Regressor: We use linear function approximation for the value function, i.e., v_k(s^(i)) = φ(s^(i))^⊤w, where φ(·) and w are the feature and weight vectors, and minimize the empirical error L̂_k^F(µ̂; v) using the standard least-squares method.
Classifier: The training set of the classifier is of size N, with s^(i) ∈ D′_k as input and (max_a Q̂_k(s^(i), a) − Q̂_k(s^(i), a_1), . . . , max_a Q̂_k(s^(i), a) − Q̂_k(s^(i), a_|A|)) as output. We use policies of the form π_u(s) = argmax_a ψ(s, a)^⊤u, where ψ is the policy feature vector (possibly different from the value function feature vector φ) and u is the policy parameter vector. We compute the next policy π_{k+1} by minimizing the empirical error L̂_k^Π(µ̂; π_u), defined by (3), using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm [10]. In order to evaluate a policy u in CMA-ES, we only need to compute L̂_k^Π(µ̂; π_u), and given the training set, this procedure does not require any simulation of the game. This is in contrast with policy evaluation in CE, which requires playing several games, and it is the main reason that we obtain the same performance as CE with almost 1/6 the number of samples (see Section 3.2).
3 Experimental Results
In this section, we evaluate the performance of CBMPI (DPI) and compare it with CE and λ-PI. CE is the state-of-the-art method in Tetris with a huge performance advantage over ADP/RL methods [18, 19, 20]. In our experiments, we show that for a well-selected set of features, CBMPI improves over all the previously reported ADP results. Moreover, its performance is comparable to that of the CE method, while using considerably fewer samples (calls to the generative model of the game).
3.1 Experimental Setup
In our experiments, the policies learned by the algorithms are evaluated by their score (average number of rows removed in a game) averaged over 200 games in the small 10 × 10 board and over 20 games in the large 10 × 20 board.
The performance of each algorithm is represented by a learning curve whose value at each iteration is the average score of the policies learned by the algorithm at that iteration in 100 separate runs of the algorithm. In addition to their score, we also evaluate the algorithms by the number of samples they use. In particular, we show that CBMPI/DPI use 6 times fewer samples than CE. As discussed in Section 2.2, this is due to the fact that although the classifier in CBMPI/DPI uses a direct search in the space of policies (for the greedy policy), it evaluates each candidate policy using the empirical error of Eq. 3, and thus does not require any simulation of the game (other than those used to estimate the $\hat{Q}_k$'s in its training set). In fact, the budget B of CBMPI/DPI is fixed in advance by the number of rollouts NM and the rollout length m as B = (m + 1)NM|A|. In contrast, CE evaluates a candidate policy by playing several games, a process that can be extremely costly (sample-wise), especially for good policies on the large board. In our CBMPI/DPI experiments, we set the number of rollouts per state-action pair to M = 1, as this value has shown the best performance. Thus, we only study the behavior of CBMPI/DPI as a function of m and N. In CBMPI, the parameter m balances the errors in evaluating the value function and the policy. For large values of m, the size of the rollout set decreases as N = O(B/m), which in turn decreases the accuracy of both the regressor and the classifier. This leads to a trade-off between long rollouts and the number of states in the rollout set. The solution to this trade-off (a bias/variance trade-off in the estimation of the $\hat{Q}_k$'s) strictly depends on the capacity of the value function space F. A rich value function space resolves the trade-off at small values of m, while a poor space, or no space in the case of DPI, suggests large values of m, but not too large, so as to still guarantee a large enough N.
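The budget relation B = (m + 1)NM|A| can be checked with a one-liner; the |A| = 32 action count below is taken from the m = 2 computation reported later in the text.

```python
def rollout_set_size(B, m, M, n_actions):
    """Largest rollout-set size N not exceeding the budget
    B = (m + 1) * N * M * |A|."""
    return B // ((m + 1) * M * n_actions)
```

With B = 8,000,000, M = 1, and |A| = 32 this gives N ≈ 83,000 for m = 2, in line with the ≈84,000 quoted in the text, and shows how N shrinks as O(B/m).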
We sample the rollout states in CBMPI/DPI from the trajectories generated by a very good policy for Tetris, namely the DU controller [20]. Since the DU policy is good, this rollout set is biased towards boards with small height. We noticed from our experiments that the performance can be significantly improved if we use boards with different heights in the rollout sets. This means that better performance can be achieved with a more uniform sampling distribution, which is consistent with what we can learn from the CBMPI and DPI performance bounds. We set the initial value function parameter to w = 0 and select the initial policy π1 (policy parameter u) randomly. We also set the CMA-ES parameters (classifier parameters) to ρ = 0.5, η = 0, and n equal to 15 times the number of features. In the CE experiments, we set ρ = 0.1 and η = 4, the best parameters reported in [20]. We also set n = 1000 and L = 10 on the small board, and n = 100 and L = 1 on the large board.

Set of Features: We use the following features, plus a constant offset feature, in our experiments: (i) Bertsekas features: First introduced by [2], this set of 22 features has been mainly used in the ADP/RL community and consists of: the number of holes in the board, the height of each column, the difference in height between two consecutive columns, and the maximum height of the board. (ii) Dellacherie-Thiery (D-T) features: This set consists of the six features of Dellacherie [5], i.e., the landing height of the falling piece, the number of eroded piece cells, the row transitions, the column transitions, the number of holes, and the number of board wells; plus 3 additional features proposed in [20], i.e., the hole depth, the number of rows with holes, and the pattern diversity feature. Note that the best policies reported in the literature have been learned using this set of features. (iii) RBF height features: These 5 new features are defined as $\exp\big(-(c - ih/4)^2 / (2(h/5)^2)\big)$, $i = 0, \ldots$
, 4, where c is the average height of the columns and h = 10 or 20 is the total number of rows in the board.

3.2 Experiments
We first run the algorithms on the small board to study the role of their parameters and to select the best features and parameters (Section 3.2.1). We then use the selected features and parameters to apply the algorithms to the large board (Figure 5 (d)). Finally, we compare the best policies found in our experiments with the best controllers reported in the literature (Tables 1 and 2).

3.2.1 Small (10 × 10) Board
Here we run the algorithms with two different feature sets: Dellacherie-Thiery (D-T) and Bertsekas. D-T features: Figure 4 shows the learning curves of the CE, λ-PI, DPI, and CBMPI algorithms. Here we use D-T features for the evaluation function in CE, the value function in λ-PI, and the policy in DPI and CBMPI. We ran CBMPI with different feature sets for the value function, and "D-T plus the 5 RBF features" achieved the best performance (Figure 4 (d)). The budget of CBMPI and DPI is set to B = 8,000,000 per iteration. The CE method reaches a score of 3000 after 10 iterations using an average budget B = 65,000,000. λ-PI with the best value of λ only manages to score 400. In Figure 4 (c), we report the performance of DPI for different values of m. DPI achieves its best performance for m = 5 and m = 10, removing 3400 lines on average. As explained in Section 3.1, having short rollouts (m = 1) in DPI leads to poor action-value estimates $\hat{Q}$, while having too long rollouts (m = 20) decreases the size N of the training set of the classifier. CBMPI outperforms the other algorithms, including CE, by reaching a score of 4300 for m = 2. The value m = 2 corresponds to $N = \frac{8{,}000{,}000}{(2+1) \times 32} \approx 84{,}000$. Note that unlike DPI, CBMPI achieves good performance with very short rollouts m = 1.
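The five RBF height features can be sketched directly from their definition; representing the board by its list of column heights is an assumption of this illustration.

```python
import math

def rbf_height_features(column_heights, h):
    """The 5 RBF height features exp(-(c - i*h/4)^2 / (2*(h/5)^2)),
    i = 0..4, where c is the average column height and h the total
    number of rows (10 or 20)."""
    c = sum(column_heights) / len(column_heights)
    return [math.exp(-((c - i * h / 4) ** 2) / (2 * (h / 5) ** 2))
            for i in range(5)]
```

Each feature peaks when the average height c is near its center i·h/4, so together they softly encode how full the board is.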
This indicates that CBMPI is able to approximate the value function well and, as a result, to build a more accurate training set for its classifier than DPI. The results of Figure 4 show that an ADP algorithm, namely CBMPI, outperforms the CE method using a similar budget (80 vs. 65 million after 10 iterations). Note that CBMPI takes fewer iterations to converge than CE. More generally, Figure 4 confirms the superiority of the policy search and classification-based PI methods over value-function-based ADP algorithms (λ-PI). This suggests that the D-T features are more suitable for representing policies than value functions in Tetris. Bertsekas features: Figures 5 (a)-(c) show the performance of the CE, λ-PI, DPI, and CBMPI algorithms. Here all the approximations in the algorithms use the Bertsekas features. CE achieves a score of 500 after about 60 iterations and outperforms λ-PI, which scores 350. It is clear that the Bertsekas features lead to much weaker results than those obtained with the D-T features in Figure 4 for all the algorithms. We may then conclude that the D-T features are more suitable than the Bertsekas features for representing both value functions and policies in Tetris. With DPI and CBMPI, we managed to obtain results similar to CE only after multiplying the per-iteration budget B used in the D-T experiments by 10. However, CBMPI and CE use the same number of samples, 150,000,000, when they converge after 2 and 60 iterations, respectively (see Figure 5). Note that DPI and CBMPI obtain the same performance, which means that the use of a value function approximation by CBMPI
4 For a precise definition of the features, see [19] or the documentation of their code [21].
5 Note that we use D-T+5 features only for the value function of CBMPI, and thus we have a fair comparison between CBMPI and DPI.
To have a fair comparison with λ-PI, we ran this algorithm with D-T+5 features, and it only raised its performance to 800, still far from CBMPI's performance.

Figure 4: Learning curves of the CE, λ-PI, DPI, and CBMPI algorithms using the 9 Dellacherie-Thiery (D-T) features on the small 10 × 10 board. The results are averaged over 100 runs of the algorithms. Panels: (a) the cross-entropy (CE) method; (b) λ-PI with λ = {0, 0.4, 0.7, 0.9}; (c) DPI with budget B = 8,000,000 per iteration and m = {1, 2, 5, 10, 20}; (d) CBMPI with budget B = 8,000,000 per iteration and m = {1, 2, 5, 10, 20}. [Plots not shown.]

does not lead to a significant performance improvement over DPI. At the end, we tried several values of m in this setting, among which m = 10 achieved the best performance for both DPI and CBMPI.

3.2.2 Large (10 × 20) Board
We now use the best parameters and features from the small board experiments, run the CE, DPI, and CBMPI algorithms on the large board, and report their results in Figure 5 (d). The per-iteration budget of DPI and CBMPI is set to B = 16,000,000. While λ-PI with a per-iteration budget of 620,000, at its best, achieves a score of 2500 (due to space limitations, we do not report these results here), DPI and CBMPI, with m = 10, reach scores of 12,000,000 and 21,000,000 after 3 and 6 iterations, respectively. CE matches the performance of CBMPI with a score of 20,000,000 after 8 iterations. However, this is achieved with almost 6 times more samples, i.e., after 8 iterations, CBMPI and CE use 256,000,000 and 1,700,000,000 samples, respectively.
Comparison of the best policies: So far, the reported scores for each algorithm were averaged over the policies learned in 100 separate runs. Here we select the best policies observed in all our experiments and compute their scores more accurately by averaging over 10,000 games. We then compare these results with the best policies reported in the literature, i.e., DU and BDU [20], on both the small and large boards in Table 1. The DT-10 and DT-20 policies, whose weights and features are given in Table 2, are policies learned by CBMPI with D-T features on the small and large boards, respectively. As shown in Table 1, DT-10 removes 5000 lines and outperforms DU, BDU, and DT-20 on the small board. Note that DT-10 is the only policy among these four that has been learned on the small board. On the large board, DT-20 obtains a score of 51,000,000 and not only outperforms the other three policies, but also achieves the best reported result in the literature (to the best of our knowledge).

Figure 5: (a)-(c) Learning curves of the CE, λ-PI, DPI, and CBMPI algorithms using the 22 Bertsekas features on the small 10 × 10 board: (a) the cross-entropy (CE) method; (b) λ-PI with λ = {0, 0.4, 0.7, 0.9}; (c) DPI (dash-dotted line) and CBMPI (dashed line) with budget B = 80,000,000 per iteration and m = 10. (d) Learning curves of the CE (solid line), DPI (dash-dotted line), and CBMPI (dashed line) algorithms with m = {5, 10}, using the 9 Dellacherie-Thiery (D-T) features on the large 10 × 20 board. [Plots not shown.]
Boards \ Policies        DU            BDU           DT-10         DT-20
Small (10 × 10) board    3800          4200          5000          4300
Large (10 × 20) board    31,000,000    36,000,000    29,000,000    51,000,000

Table 1: Average (over 10,000 games) score of the DU, BDU, DT-10, and DT-20 policies.

feature               DT-10    DT-20
landing height        -2.18    -2.68
eroded piece cells     2.42     1.38
row transitions       -2.17    -2.41
column transitions    -3.31    -6.32
holes                  0.95     2.03
board wells           -2.22    -2.71
hole depth            -0.81    -0.43
rows with holes       -9.65    -9.48
diversity              1.27     0.89

Table 2: The weights of the 9 Dellacherie-Thiery features in the DT-10 and DT-20 policies.

4 Conclusions
The game of Tetris has always been challenging for approximate dynamic programming (ADP) algorithms. Surprisingly, much simpler black-box optimization methods, such as cross-entropy (CE), have produced controllers far superior to those learned by ADP algorithms. In this paper, we applied a relatively novel ADP algorithm, called classification-based modified policy iteration (CBMPI), to Tetris. Our results showed that, for the first time, an ADP algorithm (CBMPI) performed extremely well on both the small 10 × 10 and large 10 × 20 boards, achieving performance either better (on the small board) than, or equal to with considerably fewer samples (on the large board), that of the state-of-the-art CE methods. In particular, the best policy learned by CBMPI obtained a performance of 51,000,000 lines on average, a new record on the large board of Tetris.

References
[1] D. Bertsekas and S. Ioffe. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Technical report, MIT, 1996.
[2] D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[3] H. Burgiel. How to Lose at Tetris. Mathematical Gazette, 81:194–200, 1997.
[4] E. Demaine, S. Hohenberger, and D. Liben-Nowell. Tetris is hard, even to approximate. In Proceedings of the Ninth International Computing and Combinatorics Conference, pages 351–363, 2003.
[5] C. Fahey. Tetris AI, Computer plays Tetris, 2003. http://colinfahey.com/tetris/tetris.html.
[6] V. Farias and B. Van Roy. Tetris: A study of randomized constraint sampling. Springer-Verlag, 2006.
[7] A. Fern, S. Yoon, and R. Givan. Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes. Journal of Artificial Intelligence Research, 25:75–118, 2006.
[8] T. Furmston and D. Barber. A unifying perspective of parametric policy search methods for Markov decision processes. In Proceedings of the Advances in Neural Information Processing Systems, pages 2726–2734, 2012.
[9] V. Gabillon, A. Lazaric, M. Ghavamzadeh, and B. Scherrer. Classification-based policy iteration with a critic. In Proceedings of ICML, pages 1049–1056, 2011.
[10] N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9:159–195, 2001.
[11] S. Kakade. A natural policy gradient. In Proceedings of the Advances in Neural Information Processing Systems, pages 1531–1538, 2001.
[12] M. Lagoudakis and R. Parr. Reinforcement Learning as Classification: Leveraging Modern Classifiers. In Proceedings of ICML, pages 424–431, 2003.
[13] A. Lazaric, M. Ghavamzadeh, and R. Munos. Analysis of a Classification-based Policy Iteration Algorithm. In Proceedings of ICML, pages 607–614, 2010.
[14] M. Puterman and M. Shin. Modified policy iteration algorithms for discounted Markov decision problems. Management Science, 24(11), 1978.
[15] R. Rubinstein and D. Kroese. The cross-entropy method: A unified approach to combinatorial optimization, Monte-Carlo simulation, and machine learning. Springer-Verlag, 2004.
[16] B. Scherrer. Performance Bounds for λ-Policy Iteration and Application to the Game of Tetris. Journal of Machine Learning Research, 14:1175–1221, 2013.
[17] B. Scherrer, M. Ghavamzadeh, V. Gabillon, and M. Geist. Approximate modified policy iteration. In Proceedings of ICML, pages 1207–1214, 2012.
[18] I. Szita and A. Lőrincz. Learning Tetris Using the Noisy Cross-Entropy Method. Neural Computation, 18(12):2936–2941, 2006.
[19] C. Thiery and B. Scherrer. Building Controllers for Tetris. International Computer Games Association Journal, 32:3–11, 2009.
[20] C. Thiery and B. Scherrer. Improvements on Learning Tetris with Cross Entropy. International Computer Games Association Journal, 32, 2009.
[21] C. Thiery and B. Scherrer. MDPTetris features documentation, 2010. http://mdptetris.gforge.inria.fr/doc/feature_functions_8h.html.
[22] J. Tsitsiklis and B. Van Roy. Feature-based methods for large scale dynamic programming. Machine Learning, 22:59–94, 1996.
Learning Feature Selection Dependencies in Multi-task Learning Daniel Hern´andez-Lobato Computer Science Department Universidad Aut´onoma de Madrid daniel.hernandez@uam.es Jos´e Miguel Hern´andez-Lobato Department of Engineering University of Cambridge jmh233@cam.ac.uk Abstract A probabilistic model based on the horseshoe prior is proposed for learning dependencies in the process of identifying relevant features for prediction. Exact inference is intractable in this model. However, expectation propagation offers an approximate alternative. Because the process of estimating feature selection dependencies may suffer from over-fitting in the model proposed, additional data from a multi-task learning scenario are considered for induction. The same model can be used in this setting with few modifications. Furthermore, the assumptions made are less restrictive than in other multi-task methods: The different tasks must share feature selection dependencies, but can have different relevant features and model coefficients. Experiments with real and synthetic data show that this model performs better than other multi-task alternatives from the literature. The experiments also show that the model is able to induce suitable feature selection dependencies for the problems considered, only from the training data. 1 Introduction Many linear regression problems are characterized by a large number d of features or explaining attributes and by a reduced number n of training instances. In this large d but small n scenario there is an infinite number of potential model coefficients that explain the training data perfectly well. To avoid over-fitting problems and to obtain estimates with good generalization properties, a typical regularization is to assume that the model coefficients are sparse, i.e., most coefficients are equal to zero [1]. This is equivalent to considering that only a subset of the features or attributes are relevant for prediction. 
The sparsity assumption can be introduced by carrying out Bayesian inference under a sparsity enforcing prior for the model coefficients [2, 3], or by minimizing a loss function penalized by some sparse regularizer [4, 5]. Among the priors that enforce sparsity, the horseshoe has some attractive properties that are very convenient for the scenario described [3]. In particular, this prior has heavy tails, to model coefficients that significantly differ from zero, and an infinitely tall spike at the origin, to favor coefficients that take negligible values. The estimation of the coefficients under the sparsity assumption can be improved by introducing dependencies in the process of determining which coefficients are zero [6, 7]. An extreme case of these dependencies appears in group feature selection methods in which groups of coefficients are considered to be jointly equal or different from zero [8, 9]. However, a practical limitation is that the dependency structure (groups) is often assumed to be given. Here, we propose a model based on the horseshoe prior that induces the dependencies in the feature selection process from the training data. These dependencies are expressed by a correlation matrix that is specified by O(d) parameters. Unfortunately, the estimation of these parameters from the training data is difficult since we consider n < d instances only. Thus, over-fitting problems are likely to appear. To improve the estimation process we assume a multi-task learning setting, where several learning tasks share feature selection dependencies. The method proposed can be adapted to such a scenario with few modifications. Traditionally, methods for multi-task learning under the sparsity assumption have considered common relevant and irrelevant features among tasks [8, 10, 11, 12, 13, 14]. Nevertheless, recent research cautions against this assumption when the supports and values of the coefficients for each task can vary widely [15].
The model proposed here limits the impact of this problem because it has fewer restrictions. The tasks used for induction can have, besides different model coefficients, different relevant features. They must share only the dependency structure for the selection process. The model described here is most related to the method for sparse coding introduced in [16], where spike-and-slab priors [2] are considered for multi-task linear regression under the sparsity assumption, and dependencies in the feature selection process are specified by a Boltzmann machine. Exactly fitting the parameters of a Boltzmann machine to the observed data has a cost that is exponential in the number of dimensions of the learning problem. Thus, when compared to the proposed model, the model considered in [16] is particularly difficult to train. For this, an approximate algorithm based on block-coordinate optimization has been described in [17]. The algorithm alternates between greedy MAP estimation of the sparsity patterns of each task and maximum pseudo-likelihood estimation of the Boltzmann parameters. Nevertheless, this algorithm lacks a proof of convergence, and we have observed that it is prone to getting trapped in sub-optimal solutions. Our experiments with real and synthetic data show the better performance of the proposed model when compared to other methods that try to overcome the problem of different supports among tasks. These methods include the model described in [16] and the model for dirty data proposed in [15]. These experiments also illustrate the benefits of the proposed model for inducing dependencies in the feature selection process. Specifically, the dependencies obtained are suitable for the multi-task learning problems considered. Finally, a difficulty of the model proposed is that exact Bayesian inference is intractable. Therefore, expectation propagation (EP) is employed for efficient approximate inference.
In our model, EP has a cost that is $O(Kn^2d)$, where K is the number of learning tasks, n is the number of samples of each task, and d is the dimensionality of the data. The rest of the paper is organized as follows: Section 2 describes the proposed model for learning feature selection dependencies. Section 3 shows how to use expectation propagation to approximate the quantities required for induction. Section 4 compares this model with others from the literature on synthetic and real data regression problems. Finally, Section 5 gives the conclusions of the paper and some ideas for future work.

2 A Model for Learning Feature Selection Dependencies
We describe a linear regression model that can be used for learning dependencies in the process of identifying relevant features or attributes for prediction. For simplicity, we first deal with the case of a single learning task. Then, we show how this model can be extended to address multi-task learning problems. In the single-task scenario we consider some training data in the form of n d-dimensional vectors summarized in a design matrix $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_n)^\top$ and associated targets $\mathbf{y} = (y_1, \ldots, y_n)^\top$, with $y_i \in \mathbb{R}$. A linear predictive rule is assumed for y given X. Namely, $\mathbf{y} = \mathbf{X}\mathbf{w} + \boldsymbol{\epsilon}$, where w is a vector of latent coefficients and ϵ is a vector of independent Gaussian noise with variance σ², i.e., $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$. Given X and y, the likelihood for w is
$$p(\mathbf{y}|\mathbf{X}, \mathbf{w}) = \prod_{i=1}^n p(y_i|\mathbf{x}_i, \mathbf{w}) = \prod_{i=1}^n \mathcal{N}(y_i|\mathbf{w}^\top\mathbf{x}_i, \sigma^2) = \mathcal{N}(\mathbf{y}|\mathbf{X}\mathbf{w}, \sigma^2\mathbf{I})\,. \quad (1)$$
Consider the under-determined scenario n < d. In this case, the likelihood is not strictly concave and infinitely many values of w fit the training data perfectly well. A strong regularization technique that is often used in this context is to assume that only some features are relevant for prediction [1]. This is equivalent to assuming that w is sparse, with many zeros. This inductive bias can be naturally incorporated into the model using a horseshoe sparsity enforcing prior for w [3].
The horseshoe prior lacks a closed form but can be defined as a scale mixture of Gaussians:
$$p(\mathbf{w}|\tau) = \prod_{j=1}^d p(w_j|\tau)\,, \qquad p(w_j|\tau) = \int \mathcal{N}(w_j|0, \lambda_j^2\tau^2)\, \mathcal{C}^+(\lambda_j|0, 1)\, d\lambda_j\,, \quad (2)$$
where $\lambda_j$ is a latent scale for coefficient $w_j$, $\mathcal{C}^+(\cdot|0, 1)$ is a half-Cauchy distribution with zero location and unit scale, and τ > 0 is a global shrinkage parameter that controls the level of sparsity. The smaller the value of τ, the sparser the prior, and vice versa. Figure 1 (left) and (middle) show a comparison of the horseshoe with other priors from the literature. The horseshoe has an infinitely tall spike at the origin, which favors coefficients with small values, and heavy tails, which favor coefficients that take values significantly different from zero. Furthermore, assume that τ = σ² = 1 and that X = I, and define $\kappa_j = 1/(1 + \lambda_j^2)$. Then, the posterior mean for $w_j$ is $(1 - \kappa_j)y_j$, where $\kappa_j$ is a random shrinkage coefficient that can be interpreted as the amount of weight placed at the origin [3]. Figure 1 (right) shows the prior density for $\kappa_j$ that results from the horseshoe. It is from the shape of this figure that the horseshoe takes its name. We note that one expects to see two things under this prior: relevant coefficients ($\kappa_j \approx 0$, no shrinkage) and zeros ($\kappa_j \approx 1$, total shrinkage). The horseshoe is therefore very convenient for the sparse inducing scenario described before.

Figure 1: (left) Density of different priors (horseshoe, Gaussian, Student-t with one degree of freedom, Laplace) near the origin; note the infinitely tall spike of the horseshoe. (middle) Tails of the same priors. (right) Prior density of the shrinkage parameter $\kappa_j$ for the horseshoe prior. [Plots not shown.]
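The scale-mixture form in (2) gives a direct sampler, and with τ = 1 the implied shrinkage $\kappa_j = 1/(1 + \lambda_j^2)$ follows a Beta(1/2, 1/2) distribution, which is the horseshoe shape of Figure 1 (right). A minimal sketch (Monte Carlo, not part of the paper's inference machinery):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_horseshoe(n, tau=1.0):
    """Draw n coefficients from the horseshoe prior via its mixture form:
    lambda_j ~ C+(0, 1) (half-Cauchy), w_j | lambda_j ~ N(0, lambda_j^2 tau^2)."""
    lam = np.abs(rng.standard_cauchy(n))
    w = rng.normal(0.0, lam * tau)
    return w, lam

w, lam = sample_horseshoe(200_000)
kappa = 1.0 / (1.0 + lam ** 2)  # random shrinkage coefficients (tau = 1)
```

Histogramming `kappa` reproduces the U-shape: most mass piles up near 0 (no shrinkage) and near 1 (total shrinkage).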
A limitation of the horseshoe is that it does not consider dependencies in the feature selection process. Specifically, the fact that one feature is actually relevant for prediction has no impact at all on the prior relevancy or irrelevancy of other features. We now describe how to introduce these dependencies into the horseshoe. Consider the definition of a Cauchy distribution as the ratio of two independent standard Gaussian random variables [18]. An equivalent representation of the prior is
$$p(\mathbf{w}|\rho^2, \gamma^2) = \int \prod_{j=1}^d \mathcal{N}(w_j|0, u_j^2/v_j^2)\, \mathcal{N}(u_j|0, \rho^2)\, \mathcal{N}(v_j|0, \gamma^2)\, du_j\, dv_j\,, \quad (3)$$
where $u_j$ and $v_j$ are latent variables introduced for each dimension j. In particular, $\lambda_j = u_j\gamma/(v_j\rho)$. Furthermore, τ has been incorporated into the prior for $u_j$ and $v_j$ using $\tau^2 = \rho^2/\gamma^2$. The latent variables $u_j$ and $v_j$ can be interpreted as indicators of the relevance or irrelevance of feature j: the larger $u_j^2$, the more relevant the feature; conversely, the larger $v_j^2$, the more irrelevant. A simple way of introducing dependencies in the feature selection process is to consider correlations among the variables $u_j$ and $v_j$, with j = 1, . . . , d. These correlations can be introduced in (3) as follows:
$$p(\mathbf{w}|\rho^2, \gamma^2, \mathbf{C}) = \int \prod_{j=1}^d \mathcal{N}(w_j|0, u_j^2/v_j^2)\, \mathcal{N}(\mathbf{u}|\mathbf{0}, \rho^2\mathbf{C})\, \mathcal{N}(\mathbf{v}|\mathbf{0}, \gamma^2\mathbf{C})\, d\mathbf{u}\, d\mathbf{v}\,, \quad (4)$$
where $\mathbf{u} = (u_1, \ldots, u_d)^\top$, $\mathbf{v} = (v_1, \ldots, v_d)^\top$, C is a correlation matrix that specifies the dependencies in the feature selection process, and $\rho^2$ and $\gamma^2$ act as regularization parameters that control the level of sparsity. When C = I, (4) factorizes and gives the same prior as the one in (2) and (3). In practice, however, C has to be estimated from the data. This can be problematic since it involves the estimation of $O(d^2)$ free parameters, which can lead to over-fitting. To alleviate this problem, and also to allow for efficient approximate inference, we consider a special form for C:
$$\mathbf{C} = \boldsymbol{\Delta}\mathbf{M}\boldsymbol{\Delta}\,, \qquad \mathbf{M} = \mathbf{D} + \mathbf{P}\mathbf{P}^\top\,, \qquad \boldsymbol{\Delta} = \text{diag}\big(1/\sqrt{M_{11}}, \ldots, 1/\sqrt{M_{dd}}\big)\,, \quad (5)$$
where diag(a1, . .
. , ad) denotes a diagonal matrix with entries a1, . . . , ad; D is a diagonal matrix whose entries are all equal to some small positive constant (this matrix guarantees that $\mathbf{C}^{-1}$ exists); the products by Δ ensure that the entries of C are in the range (−1, 1); and P is a d × m matrix of real entries that specifies the correlation structure of C. Thus, C is fully determined by P and has only O(md) free parameters, with m < d. The value of m is a regularization parameter that limits the complexity of C: the larger its value, the more expressive C is. For computational reasons described later on, we set m equal to n, the number of data instances, in our experiments.

2.1 Inference, Prediction and Learning Feature Selection Dependencies
Denote by $\mathbf{z} = (\mathbf{w}^\top, \mathbf{u}^\top, \mathbf{v}^\top)^\top$ the vector of latent variables of the model described above. Based on the formulation of the previous section, the joint probability distribution of y and z is
$$p(\mathbf{y}, \mathbf{z}|\mathbf{X}, \sigma^2, \rho^2, \gamma^2, \mathbf{C}) = \mathcal{N}(\mathbf{y}|\mathbf{X}\mathbf{w}, \sigma^2\mathbf{I})\, \mathcal{N}(\mathbf{u}|\mathbf{0}, \rho^2\mathbf{C})\, \mathcal{N}(\mathbf{v}|\mathbf{0}, \gamma^2\mathbf{C}) \prod_{j=1}^d \mathcal{N}(w_j|0, u_j^2/v_j^2)\,. \quad (6)$$
Figure 2 shows the factor graph corresponding to this joint probability distribution. This graph summarizes the interactions between the random variables in the model. All the factors in (6) are Gaussian, except the ones corresponding to the prior for $w_j$ given $u_j$ and $v_j$, $\mathcal{N}(w_j|0, u_j^2/v_j^2)$. Given the observed targets y, one is typically interested in inferring the latent variables z of the model. For this, Bayes' theorem can be used:
$$p(\mathbf{z}|\mathbf{X}, \mathbf{y}, \sigma^2, \rho^2, \gamma^2, \mathbf{C}) = \frac{p(\mathbf{y}, \mathbf{z}|\mathbf{X}, \sigma^2, \rho^2, \gamma^2, \mathbf{C})}{p(\mathbf{y}|\mathbf{X}, \sigma^2, \rho^2, \gamma^2, \mathbf{C})}\,, \quad (7)$$
where the numerator on the r.h.s. of (7) is the joint distribution (6) and the denominator is simply a normalization constant (the model evidence), which can be used for Bayesian model selection [19]. The posterior distribution in (7) is useful for computing a predictive distribution for the target $y_{\text{new}}$ associated with a new unseen data instance $\mathbf{x}_{\text{new}}$:
$$p(y_{\text{new}}|\mathbf{x}_{\text{new}}, \mathbf{X}, \mathbf{y}, \sigma^2, \rho^2, \gamma^2, \mathbf{C}) = \int p(y_{\text{new}}|\mathbf{x}_{\text{new}}, \mathbf{w})\, p(\mathbf{z}|\mathbf{X}, \mathbf{y}, \sigma^2, \rho^2, \gamma^2, \mathbf{C})\, d\mathbf{z}\,.$$
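The low-rank-plus-diagonal construction of C in (5) is straightforward to sketch; the constant used for D below is an arbitrary illustrative choice, and the paper does not specify its value.

```python
import numpy as np

def build_C(P, d_small=1e-3):
    """C = Delta M Delta with M = D + P P^T, as in Eq. (5).

    D = d_small * I keeps M (and hence C) invertible, and Delta rescales
    M to unit diagonal, so all off-diagonal entries of C lie in (-1, 1).
    P has shape (d, m) with m < d, giving O(m d) free parameters.
    """
    d = P.shape[0]
    M = d_small * np.eye(d) + P @ P.T
    delta = 1.0 / np.sqrt(np.diag(M))
    return delta[:, None] * M * delta[None, :]
```

The result is a valid correlation matrix for any real P, which is what makes gradient-based type-II ML estimation of P convenient.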
(8)
Similarly, one can marginalize (7) with respect to w to obtain a posterior distribution for u and v, which can be useful for identifying the most relevant or irrelevant features.

Figure 2: Factor graph of the probabilistic model. The factor f(·) corresponds to the likelihood $\mathcal{N}(\mathbf{y}|\mathbf{X}\mathbf{w}, \sigma^2\mathbf{I})$, and each $g_j(\cdot)$ to the prior for $w_j$ given $u_j$ and $v_j$, $\mathcal{N}(w_j|0, u_j^2/v_j^2)$. Finally, $h_u(\cdot)$ and $h_v(\cdot)$ correspond to $\mathcal{N}(\mathbf{u}|\mathbf{0}, \rho^2\mathbf{C})$ and $\mathcal{N}(\mathbf{v}|\mathbf{0}, \gamma^2\mathbf{C})$, respectively. Only the targets y are observed; the other variables are latent. [Diagram not shown.]

Ideally, however, one should also infer C, the correlation matrix that describes the dependencies in the feature selection process, and compute a posterior distribution for it. This can be complicated, even for approximate inference methods. Denote by Z the model evidence, i.e., the denominator on the r.h.s. of (7). A simpler alternative is to use gradient ascent to maximize log Z (and therefore Z) with respect to P, the matrix that completely specifies C. This corresponds to type-II maximum likelihood (ML) estimation and allows us to determine P from the training data alone, without resorting to cross-validation [19]. The gradient of log Z with respect to P, i.e., ∂log Z/∂P, can be used for this task. The other hyper-parameters of the model, σ², ρ², and γ², can be found following a similar approach. Unfortunately, neither (7), (8), nor the model evidence can be computed in closed form. Specifically, it is not possible to compute the required integrals analytically. Thus, one has to resort to approximate inference. For this, we use expectation propagation [20]. See Section 3 for details.

2.2 Extension to the Multi-Task Learning Setting
In the single-task learning setting, maximizing the model evidence with respect to P is not expected to be effective for improving the prediction accuracy. The reason is the difficulty of obtaining an accurate estimate of P.
This matrix has m × d free parameters, and these have to be induced from a small number n < d of training instances. The estimation process is hence likely to be affected by over-fitting. One way to mitigate over-fitting problems is to consider additional data for the estimation process. These additional data may come from a multi-task learning setting, where there are K related but different tasks available for induction. A simple assumption is that all these tasks share a common dependency structure C for the feature selection process, although the model coefficients and the actual relevant features may differ between tasks. This assumption is less restrictive than assuming jointly relevant and irrelevant features across tasks, and it can be incorporated into the learning process using the described model with few modifications. By using the data from the K tasks for the estimation of P, we expect to obtain better estimates and to improve the prediction accuracy. Assume there are K learning tasks available for induction and that each task k = 1, . . . , K consists of a design matrix $\mathbf{X}_k$ with $n_k$ d-dimensional data instances and target values $\mathbf{y}_k$. As in (1), a linear predictive rule with additive Gaussian noise $\sigma_k^2$ is considered for each task. Let $\mathbf{w}_k$ be the model coefficients of task k. Assume for the model coefficients of each task a horseshoe prior like the one specified in (4), with a shared correlation matrix C but with task-specific hyper-parameters $\rho_k^2$ and $\gamma_k^2$. Denote by $\mathbf{u}_k$ and $\mathbf{v}_k$ the vectors of latent Gaussian variables of the prior for task k. Similarly, let $\mathbf{z}_k = (\mathbf{w}_k^\top, \mathbf{u}_k^\top, \mathbf{v}_k^\top)^\top$ be the vector of latent variables of task k. Then, the joint posterior distribution of the latent variables of the different tasks factorizes as follows:
$$p\big(\{\mathbf{z}_k\}_{k=1}^K \,\big|\, \{\mathbf{X}_k, \mathbf{y}_k, \sigma_k^2, \rho_k^2, \gamma_k^2\}_{k=1}^K, \mathbf{C}\big) = \prod_{k=1}^K \frac{p(\mathbf{y}_k, \mathbf{z}_k|\mathbf{X}_k, \sigma_k^2, \rho_k^2, \gamma_k^2, \mathbf{C})}{p(\mathbf{y}_k|\mathbf{X}_k, \sigma_k^2, \rho_k^2, \gamma_k^2, \mathbf{C})}\,, \quad (9)$$
where each factor on the r.h.s. of (9) is given by (7).
This indicates that the K models, one per task, can be learnt independently given C and σ_k², ρ_k², γ_k² ∀k. Denote by Z_MT the denominator on the r.h.s. of (9), i.e., Z_MT = ∏_{k=1}^K p(y_k | X_k, σ_k², ρ_k², γ_k², C) = ∏_{k=1}^K Z_k, with Z_k the evidence for task k. Then, Z_MT is the model evidence for the multi-task setting. As in single-task learning, specific values for the hyper-parameters of each task and C can be found by a type-II maximum likelihood (ML) approach. For this, log Z_MT is maximized using gradient ascent. Specifically, the gradient of log Z_MT with respect to σ_k², ρ_k², γ_k² and P can be easily computed in terms of the gradient of each log Z_k. In summary, if there is a method to approximate the required quantities for learning a single task using the proposed model, implementing a multi-task learning method that assumes shared feature selection dependencies but task-dependent hyper-parameters is straightforward.

3 Approximate Inference

Expectation propagation (EP) [20] is used to approximate the posterior distribution and the evidence of the model described in Section 2. For clarity of presentation we focus on the model for a single learning task; the multi-task extension of Section 2.2 is straightforward. Consider the posterior distribution of z, (6). Up to a normalization constant this distribution can be written as

  p(z | X, y, σ², ρ², γ²) ∝ f(w) h_u(u) h_v(v) ∏_{j=1}^d g_j(z) ,   (10)

where the factors on the r.h.s. of (10) are displayed in Figure 2. Note that all factors except the g_j's are Gaussian. EP approximates (10) by a distribution q(z) ∝ f(w) h_u(u) h_v(v) ∏_{j=1}^d g̃_j(z), which is obtained by replacing each non-Gaussian factor g_j in (10) with an approximate factor g̃_j that is Gaussian but need not be normalized. Since the Gaussian distribution belongs to the exponential family of distributions, which is closed under the product and division operations [21], q is Gaussian with natural parameters equal to the sum of the natural parameters of each factor.
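This closure property is what makes the EP bookkeeping cheap: in natural parameters (precision and precision-times-mean), multiplying Gaussian factors amounts to adding their parameters, and dividing to subtracting them. A minimal one-dimensional sketch, with arbitrary illustrative factor values:

```python
import numpy as np

def to_natural(mean, var):
    """Gaussian (mean, variance) -> natural parameters (precision, precision*mean)."""
    prec = 1.0 / var
    return prec, prec * mean

def to_moment(prec, prec_mean):
    """Natural parameters -> (mean, variance)."""
    var = 1.0 / prec
    return prec_mean * var, var

# Three Gaussian factors; their product is Gaussian with natural
# parameters equal to the sum of the factors' natural parameters.
factors = [(0.0, 4.0), (1.0, 2.0), (-0.5, 1.0)]  # (mean, variance) pairs
prec = sum(1.0 / v for _, v in factors)
prec_mean = sum(m / v for m, v in factors)
mean, var = to_moment(prec, prec_mean)

# A "cavity" distribution q^{\j} is obtained by *subtracting* one
# factor's natural parameters from those of q.
p0, pm0 = to_natural(*factors[0])
cav_mean, cav_var = to_moment(prec - p0, prec_mean - pm0)
print(mean, var, cav_mean, cav_var)
```

The same additions and subtractions carry over unchanged to the multivariate factors used by the model; only the parameter dimensions change.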
EP iteratively updates each g̃_j until convergence by first computing the cavity distribution q^\j ∝ q / g̃_j and then minimizing the Kullback-Leibler (KL) divergence between g_j q^\j and q^new, KL(g_j q^\j || q^new), with respect to q^new. The new approximate factor is obtained as g̃_j^new = s_j q^new / q^\j, where s_j is the normalization constant of g_j q^\j. This update rule ensures that g̃_j looks similar to g_j in regions of high posterior probability, as measured by q^\j [20]. Minimizing the KL divergence is a convex problem whose optimum is found by matching the means and the covariance matrices of g_j q^\j and q^new. These expectations can be readily obtained from the derivatives of log s_j with respect to the natural parameters of q^\j [21]. Unfortunately, the computation of s_j is intractable under the horseshoe. As a practical alternative, our EP implementation employs numerical quadrature to evaluate s_j and its derivatives. Importantly, g_j, and therefore g̃_j, depends only on w_j, u_j and v_j, so a three-dimensional quadrature would suffice. However, using arguments similar to those in [7], more efficient alternatives exist. Assume that q^\j(w_j, u_j, v_j) = N(w_j | m_j, η_j) N(u_j | 0, ν_j) N(v_j | 0, ξ_j), i.e., q^\j factorizes with respect to w_j, u_j and v_j, and the means of u_j and v_j are zero. Since g_j is symmetric with respect to u_j and v_j, E[u_j] = E[v_j] = E[u_j v_j] = E[u_j w_j] = E[v_j w_j] = 0 under g_j q^\j. Thus, if the initial approximate factors g̃_j factorize with respect to w_j, u_j and v_j, and have zero mean with respect to u_j and v_j, any updated factor will also satisfy these properties and q^\j will have the assumed form. The crucial point here is that the dependencies introduced by g_j do not lead to correlations that need to be tracked under a Gaussian approximation. In this situation, the integral of g_j q^\j with respect to w_j is given by the convolution of two Gaussians, and the integral of the result with respect to u_j and v_j can be simplified using arguments similar to those employed to obtain (3).
Namely,

  s_j = ∫ N(m_j | 0, ν_j ξ_j λ_j² + η_j) C⁺(λ_j | 0, 1) dλ_j ,   (11)

where m_j, η_j, ν_j and ξ_j are the parameters of q^\j. The derivatives of log s_j with respect to the natural parameters of q^\j can also be evaluated using a one-dimensional quadrature. Therefore, each update of g̃_j requires five quadratures: one to evaluate s_j and four to evaluate its derivatives.

Instead of sequentially updating each g̃_j, we follow [7] and update these factors in parallel. For this, we compute all the q^\j at the same time and then update each g̃_j. Only the marginals of q are strictly required for this task. These can be efficiently obtained using the low-rank structure of the covariance matrix of q, which results from the fact that all the g̃_j's are factorizing univariate Gaussians and from the assumed form for C in (5). Specifically, if m (the number of columns of P) is equal to n, the cost of this operation (and hence the cost of EP) is O(n²d). Lastly, we damp the update of each g̃_j as follows: g̃_j = (g̃_j^new)^α (g̃_j^old)^{1−α}, where g̃_j^new and g̃_j^old respectively denote the new and the old g̃_j, and α ∈ [0, 1] is a parameter that controls the amount of damping. Damping significantly improves the convergence of EP and leaves the fixed points of the algorithm invariant [22].

After EP has converged, q can be used instead of the exact posterior in (8) to make predictions. Similarly, the model evidence in (7) can be approximated by Z̃, the normalization constant of q:

  Z̃ = ∫ f(w) h_u(u) h_v(v) ∏_{j=1}^d g̃_j(z) dw du dv .   (12)

Since all the factors in (12) are Gaussian, log Z̃ can be readily computed and maximized with respect to σ², ρ², γ² and P to find good values for these hyper-parameters. Specifically, once EP has converged, the gradient of the natural parameters of the g̃_j's with respect to these hyper-parameters is zero [21]. Thus, the gradient of log Z̃ with respect to σ², ρ², γ² and P can be computed in terms of the gradient of the exact factors.
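The one-dimensional integral in (11) can be evaluated with off-the-shelf quadrature. The sketch below takes C⁺(λ|0, 1) to be the standard half-Cauchy density; the cavity parameters passed in are made-up illustrative values, not ones produced by an actual EP run:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, halfcauchy

def s_j(m, eta, nu, xi):
    """One-dimensional quadrature for (11): a Gaussian in m with variance
    nu*xi*lambda^2 + eta, integrated against a standard half-Cauchy in lambda."""
    integrand = lambda lam: (norm.pdf(m, loc=0.0,
                                      scale=np.sqrt(nu * xi * lam**2 + eta))
                             * halfcauchy.pdf(lam))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# Illustrative cavity parameters (hypothetical values).
val = s_j(m=0.3, eta=0.5, nu=1.0, xi=1.0)
print(val)
```

The derivatives of log s_j can be evaluated the same way, by differentiating the Gaussian term inside the integrand with respect to the cavity parameters.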
The derivations are long and tedious and hence omitted here, but by careful consideration of the covariance structure of q it is possible to limit the complexity of these computations to O(n²d) when m is equal to n. Therefore, to fit a model that maximizes log Z̃, we alternate between running EP to obtain the estimate of log Z̃ and its gradient, and performing a gradient-ascent step to maximize this estimate with respect to σ², ρ², γ² and P. The derivation details of the EP algorithm and an R-code implementation can be found in the supplementary material.

4 Experiments

We carry out experiments to evaluate the performance of the model described in Section 2, which we refer to as HSDep. Other methods from the literature are also evaluated. The first one, HSST, is the particular case of HSDep obtained when each task is learnt independently and correlations in the feature selection process are ignored (i.e., C = I). A multi-task learning model, HSMT, which assumes common relevant and irrelevant features among tasks, is also considered. The details of this model are omitted, but it follows [10] closely. It assumes a horseshoe prior in which the scale parameters λ_j in (2) are shared among tasks, i.e., each feature is either relevant or irrelevant in all tasks. A variant of HSMT, SSMT, is also evaluated. SSMT considers a spike-and-slab prior for joint feature selection across all tasks, instead of a horseshoe prior. The details about the prior of SSMT are given in [10]. EP is used for approximate inference in both HSMT and SSMT. The dirty model, DM, described in [15] is also considered. This model assumes shared relevant and irrelevant features among tasks. However, some tasks are allowed to have specific relevant features. For this, a loss function is minimized via combined ℓ1 and ℓ1/ℓ∞ block regularization. Particular cases of DM are the lasso [4] and the group lasso [8]. Finally, we evaluate the model introduced in [16].
This model, BM, uses spike-and-slab priors for feature selection and specifies dependencies in this process using a Boltzmann machine. BM is trained using the approximate block-coordinate algorithm described in [17]. All models considered assume Gaussian additive noise around the targets.

4.1 Experiments with Synthetic Data

A first batch of experiments is carried out using synthetic data. We generate K = 64 different tasks of n = 64 samples and d = 128 features. In each task, the entries of X_k are sampled from a standard Gaussian distribution and the model coefficients, w_k, are all set to zero except for the i-th group of 8 consecutive coefficients, with i chosen randomly for each task from the set {1, 2, ..., 16}. The values of these 8 non-zero coefficients are uniformly distributed in the interval [−1, 1]. Thus, in each task there are only 8 relevant features for prediction. Given each X_k and each w_k, the targets y_k are obtained using (1) with σ_k² = 0.5 ∀k.

The hyper-parameters of each method are set as follows. In HSST, ρ_k² and γ_k² are found by type-II ML. In HSMT, ρ² and γ² are set to the average values found by HSST for ρ_k² and γ_k², respectively. In SSMT, the parameters of the spike-and-slab prior are found by type-II ML. In HSDep, m = n; furthermore, ρ_k² and γ_k² take the values found by HSST, while P is obtained using type-II ML. In all models we set the variance of the noise for task k, σ_k², equal to 0.5. Finally, in DM we try different hyper-parameters and report the best results observed.

Figure 3: (top) Average reconstruction error of each method. (bottom) Average absolute value of the entries of the matrix C estimated by HSDep in gray scale (white = 0 and black = 1). Black squares are groups of jointly relevant / irrelevant features.

  Method | Error
  HSST   | 0.29±0.01
  HSMT   | 0.38±0.03
  SSMT   | 0.77±0.01
  DM     | 0.37±0.01
  BM     | 0.24±0.02
  HSDep  | 0.21±0.01

After training each model on the data, we measure the average reconstruction error of w_k.
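The synthetic data-generating protocol above can be reproduced in a few lines. The sketch below also includes the relative ℓ2 reconstruction error used for evaluation; the random seed and helper names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, d, g = 64, 64, 128, 8          # tasks, samples, features, group size
sigma2 = 0.5

def make_task():
    """One task of the synthetic protocol: group-sparse coefficients with a
    randomly chosen block of 8 consecutive non-zero entries in [-1, 1]."""
    X = rng.standard_normal((n, d))
    w = np.zeros(d)
    i = rng.integers(0, d // g)       # which of the 16 groups is active
    w[i * g:(i + 1) * g] = rng.uniform(-1.0, 1.0, size=g)
    y = X @ w + np.sqrt(sigma2) * rng.standard_normal(n)
    return X, w, y

tasks = [make_task() for _ in range(K)]

def rel_error(w_hat, w):
    """Relative L2 reconstruction error, ||w_hat - w||_2 / ||w||_2."""
    return np.linalg.norm(w_hat - w) / np.linalg.norm(w)
```

Each task therefore has 8 relevant features, but which block is relevant varies across tasks, which is exactly the regime where shared supports are violated while shared dependencies still hold.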
Denote by ŵ_k the estimate of the model coefficients for task k (the posterior mean, except in BM and DM). The reconstruction error is measured as ∥ŵ_k − w_k∥₂ / ∥w_k∥₂, where ∥·∥₂ is the ℓ2-norm and w_k are the exact coefficients of task k. Figure 3 (top) shows the average reconstruction error of each method over 50 repetitions of the experiments described. HSDep obtains the lowest error. The observed differences in performance are significant according to a Student's t-test (p-value < 5%). BM performs worse than HSDep because the greedy MAP estimation of the sparsity pattern of each task is sometimes trapped in sub-optimal solutions. The poor results of HSMT, SSMT and DM are due to the assumption made by these models that all tasks share relevant features, which is not satisfied here. Figure 3 (bottom) shows the average entries, in absolute value, of the correlation matrix C estimated by HSDep. The matrix has a block-diagonal form, with blocks of size 8 × 8 (8 is the number of relevant coefficients in each task). Thus, within each block the corresponding latent variables u_j and v_j are strongly correlated, indicating jointly relevant or irrelevant features. This is the expected estimate for the scenario considered.

4.2 Reconstruction of Images of Hand-written Digits from MNIST

A second batch of experiments considers the reconstruction of images of hand-written digits extracted from the MNIST data set [23]. These images are in gray scale with pixel values between 0 and 255. Most pixels are inactive and equal to 0. Thus, the images are sparse and suitable to be reconstructed using the proposed model. The images are reduced to size 10 × 10 pixels and the pixel intensities are normalized to lie in the interval [0, 1]. Then, K = 100 tasks of n = 75 samples each are generated. For this, we randomly choose 50 images corresponding to the digit 3 and 50 images corresponding to the digit 5 (these digits are chosen because they differ significantly).
Similar results (not shown) to the ones reported here are obtained for other pairs of digits. For each task, the entries of X_k are sampled from a standard Gaussian. The model coefficients, w_k, are simply the pixel values of each image (i.e., d = 100). Importantly, unlike in the previous experiments, the model coefficients are not synthetically generated but correspond to actual images. Furthermore, since the tasks contain images of different digits, they are expected to have different relevant features. Given X_k and w_k, the targets y_k are generated using (1) with σ_k² = 0.1 ∀k. The objective is to reconstruct w_k from X_k and y_k for each task k. The hyper-parameters are set as in Section 4.1, with σ_k² = 0.1 ∀k. The reconstruction error is also measured as in that section.

Figure 4 (top) shows the average reconstruction error of each method over 50 repetitions of the experiments described. Again, HSDep performs best, and the differences in performance are statistically significant. The second-best result corresponds to HSMT, probably due to the background pixels, which are irrelevant in all the tasks, and to the heavy tails of the horseshoe prior. HSST, SSMT, BM and DM perform significantly worse. DM performs poorly probably because of the inferior shrinkage properties of the ℓ1 norm compared to the horseshoe [3]. The poor results of SSMT are due to the lack of heavy tails in the spike-and-slab prior. In BM we have observed that the greedy MAP estimation of the task supports is more frequently trapped in sub-optimal solutions; furthermore, the algorithm described in [17] fails to converge most times in this scenario. Figure 4 (right, bottom) shows a representative subset of the images reconstructed by each method. The best reconstructions correspond to HSDep.
Finally, Figure 4 (left, bottom) shows in gray scale the average correlations, in absolute value, induced by HSDep for the selection process of each pixel of the image with respect to the selection of a particular pixel, which is displayed in green. Correlations are high so as to avoid the selection of background pixels and to select pixels that actually correspond to the digits 3 and 5. The correlations induced are hence appropriate for the multi-task problem considered.

  Method | HSST      | HSMT      | SSMT      | DM        | BM        | HSDep
  Error  | 0.36±0.02 | 0.25±0.02 | 0.39±0.01 | 0.37±0.01 | 0.52±0.03 | 0.20±0.01

Figure 4: (top) Average reconstruction error of each method. (left, bottom) Average absolute value of the correlation, in gray scale (white = 0 and black = 1), between the latent variables u_j and v_j corresponding to the pixel displayed in green and the variables u_j and v_j corresponding to all the other pixels of the image. (right, bottom) Examples of actual images and of the reconstructions obtained by each method. The best reconstruction results correspond to HSDep.

5 Conclusions and Future Work

We have described a linear sparse model for learning dependencies in the feature selection process. The model can be used in a multi-task learning setting with several tasks available for induction that need not share relevant features, but only dependencies in the feature selection process. Exact inference is intractable in such a model. However, expectation propagation provides an efficient approximate alternative with a cost in O(Kn²d), where K is the number of tasks, n is the number of samples per task, and d is the dimensionality of the data. Experiments with real and synthetic data illustrate the benefits of the proposed method. Specifically, this model performs better than other multi-task alternatives from the literature.
Our experiments also show that the proposed model is able to induce relevant feature selection dependencies from the training data alone. Future paths of research include the evaluation of this model in practical problems of sparse coding, i.e., when all tasks share a common design matrix X that has to be induced from the data alongside the model coefficients, with potential applications to image denoising and image inpainting [24].

Acknowledgment: Daniel Hernández-Lobato is supported by the Spanish MCyT (Ref. TIN2010-21575-C02-02). José Miguel Hernández-Lobato is supported by Infosys Labs, Infosys Limited.

References

[1] I. M. Johnstone and D. M. Titterington. Statistical challenges of high-dimensional data. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906):4237, 2009.
[2] T. J. Mitchell and J. J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[3] C. M. Carvalho, N. G. Polson, and J. G. Scott. Handling sparsity via the horseshoe. Journal of Machine Learning Research W&CP, 5:73–80, 2009.
[4] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[5] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[6] J. M. Hernández-Lobato, D. Hernández-Lobato, and A. Suárez. Network-based sparse Bayesian classification. Pattern Recognition, 44:886–900, 2011.
[7] M. Van Gerven, B. Cseke, R. Oostenveld, and T. Heskes. Bayesian source localization with the multivariate Laplace prior. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1901–1909, 2009.
[8] J. E. Vogt and V. Roth. The group-lasso: ℓ1,∞ regularization versus ℓ1,2 regularization.
In Goesele et al., editors, 32nd Annual Symposium of the German Association for Pattern Recognition, volume 6376, pages 252–261. Springer, 2010.
[9] Y. Kim, J. Kim, and Y. Kim. Blockwise sparse regression. Statistica Sinica, 16(2):375, 2006.
[10] D. Hernández-Lobato, J. M. Hernández-Lobato, T. Helleputte, and P. Dupont. Expectation propagation for Bayesian multi-task feature selection. In José L. Balcázar, Francesco Bonchi, Aristides Gionis, and Michèle Sebag, editors, Proceedings of the European Conference on Machine Learning, volume 6321, pages 522–537. Springer, 2010.
[11] G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, pages 1–22, 2009.
[12] T. Xiong, J. Bi, B. Rao, and V. Cherkassky. Probabilistic joint feature selection for multi-task learning. In Proceedings of the Seventh SIAM International Conference on Data Mining, pages 332–342. SIAM, 2007.
[13] T. Jebara. Multi-task feature and kernel selection for SVMs. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 55–62. ACM, 2004.
[14] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 41–48. MIT Press, Cambridge, MA, 2007.
[15] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 964–972, 2010.
[16] P. Garrigues and B. Olshausen. Learning horizontal connections in a sparse coding model of natural images. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 505–512. MIT Press, Cambridge, MA, 2008.
[17] T. Peleg, Y. C. Eldar, and M. Elad.
Exploiting statistical dependencies in sparse representations for signal recovery. IEEE Transactions on Signal Processing, 60(5):2286–2303, 2012.
[18] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, 1984.
[19] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, August 2006.
[20] T. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[21] M. W. Seeger. Expectation propagation for exponential families. Technical report, Department of EECS, University of California, Berkeley, 2006.
[22] T. Minka. Power EP. Technical report, Carnegie Mellon University, Department of Statistics, 2004.
[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[24] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11:19–60, 2010.
Dimension-Free Exponentiated Gradient

Francesco Orabona
Toyota Technological Institute at Chicago
Chicago, USA
francesco@orabona.com

Abstract

I present a new online learning algorithm that extends the exponentiated gradient framework to infinite-dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, U, achieving a regret bound of the order of O(U log(UT + 1) √T), instead of the standard O((U² + 1) √T) achievable without knowing U. For this analysis, I introduce novel tools for algorithms with time-varying regularizers, through the use of local smoothness. Through a lower bound, I also show that the algorithm is optimal up to a √log(UT) term for linear and Lipschitz losses.

1 Introduction

Online learning provides a scalable and flexible approach for solving a wide range of prediction problems, including classification, regression, ranking, and portfolio management. These algorithms work in rounds: at each round a new instance is given and the algorithm makes a prediction. After the true label of the instance is revealed, the learning algorithm updates its internal hypothesis. The aim of the classifier is to minimize the cumulative loss it suffers due to its predictions, such as the total number of mistakes. Popular online algorithms for classification include the standard Perceptron and its many variants, such as the kernel Perceptron [6] and the p-norm Perceptron [7]. Other online algorithms, with properties different from those of the standard Perceptron, are based on multiplicative (rather than additive) updates, such as Winnow [10] for classification and Exponentiated Gradient (EG) [9] for regression. Recently, Online Mirror Descent (OMD)¹ has been proposed as a general meta-algorithm for online learning, parametrized by a regularizer [16]. By appropriately choosing the regularizer, most online learning algorithms are recovered as special cases of OMD.
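As a concrete illustration of how a specific algorithm falls out of the OMD meta-algorithm, the sketch below instantiates it with the Euclidean regularizer f_t(w) = (√t/η)∥w∥², whose conjugate satisfies ∇f_t*(θ) = ηθ/(2√t); feeding in negative loss subgradients as the updates z_t recovers online gradient descent with a decaying step size. The data, η, and the horizon are arbitrary illustrative choices:

```python
import numpy as np

# OMD with the Euclidean regularizer f_t(w) = (sqrt(t)/eta) * ||w||^2.
# Prediction: w_t = grad f_t^*(theta_t) = eta * theta_t / (2 sqrt(t));
# update: theta_{t+1} = theta_t + z_t with z_t the negative loss subgradient.
rng = np.random.default_rng(1)
eta, T, d = 1.0, 200, 5
u = rng.standard_normal(d)               # hidden target for the absolute loss
theta = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    w = eta * theta / (2.0 * np.sqrt(t))  # w_t = grad f_t^*(theta_t)
    x = rng.standard_normal(d)
    y = u @ x
    pred = w @ x
    total_loss += abs(pred - y)           # 1-Lipschitz absolute loss
    z = -np.sign(pred - y) * x            # negative subgradient w.r.t. w
    theta = theta + z
print(total_loss / T)
```

Swapping in a different regularizer changes only the map from θ_t to w_t, which is the sense in which OMD is a meta-algorithm.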
Moreover, performance guarantees can also be derived simply by instantiating the general OMD bounds with the specific regularizer being used. So, for all the first-order online learning algorithms, it is possible to prove regret bounds of the order of O(f(u) √T), where T is the number of rounds and f(u) is the regularizer used in OMD, evaluated at the competitor vector u. Hence, different choices of the regularizer give rise to different algorithms and guarantees. For example, p-norm algorithms can be derived from the squared Lp-norm regularization, while EG can be derived from the entropic one. In particular, for the Euclidean regularizer (√T/η) ∥u∥², we have a regret bound of O(√T (∥u∥²/η + η)). Knowing ∥u∥, it is possible to tune η to obtain an O(∥u∥ √T) bound, which is optimal [1]. On the other hand, EG has a regret bound of O(√(T log d)), where d is the dimension of the space.

In this paper, I use OMD to extend EG to infinite-dimensional spaces, through the use of a carefully designed time-varying regularizer. The algorithm, which I call Dimension-Free Exponentiated Gradient (DFEG), does not need direct access to single components of the vectors; rather, it only requires access to them through inner products. Hence, DFEG can be used with kernels too, extending EG for the first time to the kernel domain. I prove a regret bound of O(∥u∥ log(∥u∥T + 1) √T). Up to logarithmic terms, the bound of DFEG is equal to the optimal bound obtained through the knowledge of ∥u∥, but it does not require the tuning of any parameter. I built upon ideas of [19], but designed my new algorithm as an instantiation of OMD, rather than using an ad-hoc analysis.

¹The algorithm should more correctly be called Follow the Regularized Leader; however, here I follow Shalev-Shwartz [16] and denote it by OMD.
I believe that this route improves the understanding of the inner workings of the algorithm and of its relation to other algorithms, and makes it easier to extend it in other directions as well. In order to analyze DFEG, I also introduce new and general techniques to cope with time-varying regularizers for OMD, using the local smoothness of the dual of the regularization function, which might be of independent interest. I also extend and improve the lower bound in [19], to match the upper bound of DFEG up to a √log T term, and to show an implicit trade-off of the regret versus different competitors.

1.1 Related works

Exponentiated gradient algorithms were proposed by [9]. These algorithms have multiplicative updates and regret bounds that depend logarithmically on the dimension of the input space. In particular, [9] proposed a version of EG where the weights are not normalized, called EGU. A closer algorithm to mine is the epoch-free one in [19]. Indeed, DFEG is equivalent to theirs when used on one-dimensional problems. However, the extension to infinite-dimensional spaces is nontrivial and very different in nature from their extension to d-dimensional problems, which consists of running a copy of the algorithm independently on each coordinate. Their regret bound depends on the dimension of the space and can be used neither with infinite-dimensional spaces nor with kernels. Vovk proposed two algorithms for the square loss, with regret bounds of O((∥u∥ + Y) √T) and O(∥u∥ √T) respectively, where Y is an upper bound on the range of the target values [20]. A matching lower bound is also presented, proving the optimality of the second algorithm. However, the algorithms seem specific to the square loss, and it is not possible to adapt them to other losses. Indeed, the lower bound I prove shows that for linear and Lipschitz losses a √log(∥u∥T) term is unavoidable.
Moreover, the second algorithm, being an instantiation of the Aggregating Algorithm [21], does not seem to have an efficient implementation. My algorithm also shares similarities in spirit with the family of self-confident algorithms [2, 7, 15], in which the algorithm self-tunes its parameters based on internal estimates. From the point of view of the proof technique, the primal-dual analysis of OMD is due to [15, 17]. Starting from the work of [8], it is now clear that OMD can be easily analyzed using only a few basic convex duality properties; see the recent survey [16] for a lucid description of these developments. Time-varying regularization for OMD has been explored in [4, 12, 15], but in none of these works do the negative terms in the bound due to the time-varying regularizer play a decisive role. The use of local estimates of strong smoothness is new, as far as I know. A related way to obtain a local analysis is through local norms [16], but my approach is better tailored to my needs.

2 Problem setting and definitions

In the online learning scenario, learning algorithms work in rounds [3]. Let X be a Euclidean vector space². At each round t, an instance x_t ∈ X is presented to the algorithm, which then predicts a label ŷ_t ∈ R. Then, the correct label y_t is revealed, and the algorithm pays a loss ℓ(ŷ_t, y_t) for having predicted ŷ_t instead of y_t. The aim of the online learning algorithm is to minimize the cumulative sum of the losses, on any sequence of data/labels {(x_t, y_t)}_{t=1}^T. Typical examples of loss functions are the absolute loss, |ŷ_t − y_t|, and the hinge loss, max(1 − ŷ_t y_t, 0). Note that the loss function can change over time, so in the following I denote by ℓ_t : R → R the generic loss function received by the algorithm at time t. In this paper I focus on linear predictions of the form ŷ_t = ⟨w_t, x_t⟩, where w_t ∈ X represents the hypothesis of the online algorithm at time t.
²All the theorems hold also in general Hilbert spaces, but for simplicity of exposition I consider a Euclidean setting.

Algorithm 1 Dimension-Free Exponentiated Gradient
  Parameters: 0.882 ≤ a ≤ 1.109, L > 0, δ > 0
  Initialize: θ_1 = 0 ∈ X, H_0 = δ
  for t = 1, 2, ... do
    Receive ∥x_t∥, where x_t ∈ X
    Set H_t = H_{t−1} + L² max(∥x_t∥, ∥x_t∥²)
    Set α_t = a √H_t, β_t = H_t^{3/2}
    if ∥θ_t∥ = 0 then choose w_t = 0
    else choose w_t = θ_t / (β_t ∥θ_t∥) · exp(∥θ_t∥ / α_t)
    Suffer loss ℓ_t(⟨w_t, x_t⟩)
    Update θ_{t+1} = θ_t − ∂ℓ_t(⟨w_t, x_t⟩) x_t
  end for

We strive to design online learning algorithms for which it is possible to prove a relative regret bound. Such an analysis bounds the regret, that is, the difference between the cumulative loss of the algorithm, ∑_{t=1}^T ℓ_t(⟨w_t, x_t⟩), and that of an arbitrary and fixed competitor u, ∑_{t=1}^T ℓ_t(⟨u, x_t⟩). We consider L-Lipschitz losses, that is, |ℓ_t(y) − ℓ_t(y′)| ≤ L |y − y′|, ∀y, y′.

I now introduce some basic notions of convex analysis that are used in the paper; I refer to [14] for definitions and terminology. I consider functions f : X → R that are closed and convex. Given a closed and convex function f with domain S ⊆ X, its Fenchel conjugate f* : X → R is defined as f*(u) = sup_{v∈S} (⟨v, u⟩ − f(v)). The Fenchel-Young inequality states that f(u) + f*(v) ≥ ⟨u, v⟩ for all v, u. A vector x is a subgradient of a convex function f at v if f(u) − f(v) ≥ ⟨u − v, x⟩ for any u in the domain of f. The differential set of f at v, denoted by ∂f(v), is the set of all the subgradients of f at v. If f is also differentiable at v, then ∂f(v) contains a single vector, denoted by ∇f(v), which is the gradient of f at v. Strong convexity and strong smoothness are key properties in the design of online learning algorithms; they are defined as follows. A function f is γ-strongly convex with respect to a norm ∥·∥ if, for any u, v in its domain, and any x ∈ ∂f(u),

  f(v) ≥ f(u) + ⟨v − u, x⟩ + (γ/2) ∥u − v∥² .
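Algorithm 1 can be sketched in code. The toy stream below uses the 1-Lipschitz absolute loss on noiseless linear data; the data, the target vector u, and the parameter values are illustrative choices, and this is a sketch of the update rule rather than a tuned implementation:

```python
import numpy as np

def dfeg(data, a=1.0, L=1.0, delta=1.0):
    """Sketch of Algorithm 1 (DFEG) run on the absolute loss.
    `data` is a sequence of (x_t, y_t) pairs; a, L, delta as in the text."""
    d = len(data[0][0])
    theta = np.zeros(d)
    H = delta
    total_loss = 0.0
    for x, y in data:
        nx = np.linalg.norm(x)
        H += L**2 * max(nx, nx**2)          # H_t = H_{t-1} + L^2 max(|x|, |x|^2)
        alpha, beta = a * np.sqrt(H), H**1.5
        nt = np.linalg.norm(theta)
        if nt == 0:
            w = np.zeros(d)
        else:
            w = theta / (beta * nt) * np.exp(nt / alpha)
        pred = w @ x
        total_loss += abs(pred - y)
        # subgradient of |pred - y| w.r.t. the prediction is sign(pred - y)
        theta = theta - np.sign(pred - y) * x
    return total_loss

rng = np.random.default_rng(0)
u = np.array([0.5, -1.0, 2.0])
data = [(x, float(u @ x)) for x in rng.standard_normal((300, 3))]
loss = dfeg(data)
print(loss)
```

Note that the algorithm touches x_t and θ_t only through norms and inner products, which is why the same loop carries over to a kernel setting.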
The Fenchel conjugate f* of a γ-strongly convex function f is everywhere differentiable and (1/γ)-strongly smooth [8]; this means that for all u, v ∈ X,

  f*(v) ≤ f*(u) + ⟨v − u, ∇f*(u)⟩ + (1/(2γ)) ∥u − v∥²_* .

In the remainder of the paper all the norms considered are the L2 ones.

3 Dimension-Free Exponentiated Gradient

In this section I describe the DFEG algorithm; the pseudo-code is in Algorithm 1. It shares some similarities with the exponentiated gradient with unnormalized weights algorithm [9], with the self-tuning variant of exponentiated gradient in [15], and with the epoch-free algorithm in [19]. However, note that it does not access single coordinates of w_t and x_t, but only their inner products. Hence, we expect the algorithm not to depend on the dimension of X, which can even be infinite. In other words, DFEG can be used with kernels as well, contrary to all the algorithms mentioned above. For the DFEG algorithm we have the following regret bound, which will be proved in Section 4.

Theorem 1. Let 0.882 ≤ a ≤ 1.109 and δ > 0. Then, for any sequence of input vectors {x_t}_{t=1}^T, any sequence of L-Lipschitz convex losses {ℓ_t(·)}_{t=1}^T, and any u ∈ X, the following bound on the regret holds for Algorithm 1:

  ∑_{t=1}^T ℓ_t(⟨w_t, x_t⟩) − ∑_{t=1}^T ℓ_t(⟨u, x_t⟩) ≤ 4 exp(1 + 1/a) L/√δ + a ∥u∥ √H_T ( ln( H_T^{3/2} ∥u∥ ) − 1 ) ,

where H_T = δ + ∑_{t=1}^T L² max(∥x_t∥, ∥x_t∥²).

The bound has a logarithmic part, typical of the family of exponentiated gradient algorithms, but instead of depending on the dimension, it depends on the norm of the competitor, ∥u∥. Hence, the regret bound of DFEG holds for infinite-dimensional spaces as well; that is, it is dimension-free. It is interesting to compare this bound with the usual bound for online learning using an L2 regularizer. Using a time-varying regularizer f_t(w) = (√t/η) ∥w∥², it is easy to see, e.g. [15], that the bound would be³ O((∥u∥²/η + η) √T). If an upper bound U on ∥u∥ is known, we can use it to tune η to obtain an upper bound of the order of O(U √T).
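For concreteness, the tuning step can be checked numerically: √T (∥u∥²/η + η) is minimized at η = ∥u∥, giving 2∥u∥√T. The sketch below compares this tuned value with the leading term ∥u∥ log(∥u∥T + 1) √T of the DFEG bound; the hidden constants of the O(·) notation are ignored, so the numbers are only illustrative:

```python
import numpy as np

T = 10_000
etas = np.linspace(1e-3, 20.0, 200_000)      # grid search over eta
rows = []
for norm_u in [0.1, 1.0, 10.0]:
    tuned = np.sqrt(T) * np.min(norm_u**2 / etas + etas)  # tuned Euclidean bound
    closed_form = 2.0 * norm_u * np.sqrt(T)               # optimum at eta = ||u||
    dfeg_like = norm_u * np.log(norm_u * T + 1) * np.sqrt(T)
    rows.append((norm_u, tuned, closed_form, dfeg_like))
    print(f"||u||={norm_u:5.1f}  tuned={tuned:10.2f}  "
          f"2||u||sqrt(T)={closed_form:10.2f}  DFEG-like={dfeg_like:11.2f}")
```

The grid minimum agrees with the closed form, and the DFEG-like term pays only the logarithmic factor while requiring no knowledge of ∥u∥.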
On the other hand, we obtain for DFEG a bound of O(∥u∥log(∥u∥T + 1)√T), which is an optimal bound, up to logarithmic terms, without knowing U. Hence my bound becomes a constant if the norm of the competitor goes to zero. However, note that, for any fixed competitor, the gradient descent bound is asymptotically better. The lower bound on the range of a comes from technical details of the analysis. The parameter a is directly linked to the leading constant of the regret bound; therefore, it is intuitive that the range of acceptable values must have a lower bound different from zero. This is also confirmed by the lower bound in Theorem 2 below. Notice that the bound is data-dependent because it depends on the sequence of observed input vectors xt. A data-independent bound can be easily obtained from an upper bound on the norm of the input vectors. The use of the function max(∥xt∥, ∥xt∥²) is necessary to have such a data-dependent bound, and it seems that it cannot be avoided in order to prove the regret bound. It is natural to ask if the log term in the bound can be avoided. Extending Theorem 7 in [19], we can answer this question in the negative. In particular, the following theorem shows that the regret of any online learning algorithm must satisfy a trade-off between the guarantee against the competitor with norm equal to zero and the guarantees against other competitors. A similar trade-off has been proven in the expert setting [5]. Theorem 2. Fix a non-trivial vector space X and a specific online learning algorithm, and let the sequence of losses be composed of linear losses. If the algorithm guarantees zero regret against the competitor with zero L2 norm, then there exists a sequence of T vectors in X such that the regret against any other competitor is Ω(T).
On the other hand, if the algorithm guarantees a regret of at most ϵ > 0 against the competitor with zero L2 norm, then, for any 0 < η < 1, there exist a T0, a T ≥ T0, a sequence of T unit-norm vectors zt ∈ X, and a vector u ∈ X such that Σ_{t=1}^T ⟨u, zt⟩ − Σ_{t=1}^T ⟨wt, zt⟩ ≥ (1 − η)∥u∥ √(1/log 2) √(T log(η∥u∥√T/(3ϵ))) − 2. The proof can be found in the supplementary material. It is possible to show that the optimal η is of the order of 1/log T, so that the leading constant approaches √(1/log 2) ≈ 1.2011 when T goes to infinity. It is also interesting to note that an L2 regularizer suffers a loss of O(√T) against a competitor with zero norm, which cancels the √(log T) term. 4 Analysis In this section I prove my main result. I will first briefly introduce the general OMD algorithm with time-varying regularizers on which my algorithm is based. 4.1 Online mirror descent and local smoothness Algorithm 2 is a generic meta-algorithm for online learning. Most online learning algorithms can be derived from it by choosing the functions ft and the vectors zt. The following lemma, which is a generalization of Corollary 4 in [8], Corollary 3 in [4], and Lemma 1 in [12], is the main tool to prove the regret bound for the DFEG algorithm. The proof is in the supplementary material. ³Despite what is claimed in Section 1 of [19], the use of the time-varying regularizer ft(w) = (√t/η)∥w∥² guarantees a sublinear regret for unconstrained online convex optimization, for any η > 0.

Algorithm 2 Time-varying Online Mirror Descent
Parameters: A sequence of convex functions f1, f2, . . . defined on S ⊆ X.
Initialize: θ1 = 0 ∈ X
for t = 1, 2, . . . do
  Choose wt = ∇f*_t(θt)
  Observe zt ∈ X
  Update θt+1 = θt + zt
end for

Lemma 1. Assume Algorithm 2 is run with functions f1, f2, . . . defined on a common domain S ⊆ X. Then for any w′_t, u ∈ S we have Σ_{t=1}^T ⟨zt, u − w′_t⟩ ≤ fT(u) + Σ_{t=1}^T (f*_t(θt+1) − f*_{t−1}(θt) − ⟨w′_t, zt⟩), where we set f*_0(w′_1) = 0. Moreover, if f*_1, f*_2, . . .
are twice differentiable, and max_{0≤τ≤1} ∥∇²f*_t(θt + τzt)∥ ≤ λt, then we have f*_t(θt+1) − f*_{t−1}(θt) − ⟨wt, zt⟩ ≤ f*_t(θt) − f*_{t−1}(θt) + (λt/2)∥zt∥². Note that the above lemma is usually stated assuming the strong convexity of ft, which is equivalent to the strong smoothness of f*_t, which in turn, for twice differentiable functions, is equivalent to a global bound on the norm of the Hessian of f*_t (see Theorem 2.1.6 in [11]). Here I take a different route, assuming the functions f*_t to be twice differentiable, but using the weaker hypothesis of local boundedness of the Hessian of f*_t. Hence, for twice differentiable conjugate functions, this bound is always tighter than the ones in [4, 8, 12]. Indeed, in our case, global strong smoothness cannot be used to prove any meaningful regret bound. We derive the Dimension-Free Exponentiated Gradient from the general OMD above. Set in Algorithm 2 ft(w) = αt∥w∥(log(βt∥w∥) − 1), where αt and βt are defined in Algorithm 1, and zt = −∂ℓt(⟨wt, xt⟩)xt. The proof idea of my theorem is the following. First, assume that we are on a round where we have a local upper bound on the norm of the Hessian of f*_t. The usual approach in this kind of proof is to have a regularizer that grows over time as √t, so that the terms f*_t(θt) − f*_{t−1}(θt) are negative and can be safely discarded. At the same time the sum of the squared norms of the gradients will typically be of the order of O(√T), giving an O(√T) regret bound (see for example the proofs in [4]). However, following this approach in DFEG we would have that the sum of the squared norms of the gradients grows much faster than O(√T). This is due to the fact that the global strong smoothness is too small. Hence I introduce a different proof method. In the following, I will show the surprising result that, with my choice of the regularizers ft, the terms f*_t(θt) − f*_{t−1}(θt) and the squared norm of the gradient cancel out.
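The generic scheme of Algorithm 2 above, which this derivation instantiates, can be sketched directly in code. The callback-based interface and the quadratic-regularizer example below are my own illustrative choices, not part of the paper.

```python
import math

def omd(grad_f_conj, z_stream, dim):
    """Sketch of Algorithm 2 (time-varying Online Mirror Descent).

    grad_f_conj(t, theta) must return the gradient of the conjugate f*_t at theta;
    z_stream yields the vectors z_t. Returns the list of iterates w_1, w_2, ...
    """
    theta = [0.0] * dim                                # theta_1 = 0
    ws = []
    for t, z in enumerate(z_stream, start=1):
        ws.append(grad_f_conj(t, theta))               # w_t = grad f*_t(theta_t)
        theta = [ti + zi for ti, zi in zip(theta, z)]  # theta_{t+1} = theta_t + z_t
    return ws

def quad_grad(t, theta, eta=1.0):
    # For f_t(w) = (sqrt(t)/eta) * ||w||^2 the conjugate gradient is
    # grad f*_t(theta) = (eta / (2 sqrt(t))) * theta.
    return [eta / (2.0 * math.sqrt(t)) * v for v in theta]
```

Plugging in `quad_grad` recovers the gradient-descent-style baseline with the time-varying L2 regularizer discussed above; plugging in the gradient of the conjugate from Lemma 2 would recover DFEG.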
Notice that already in [12, 13] it has been advocated not to discard those terms, in order to obtain tighter bounds. Here the same terms play a major role in the proof, and they are present thanks to the time-varying regularization. This is in agreement with Theorem 9 in [19], which rules out algorithms with a fixed regularizer from obtaining regret bounds like Theorem 1. It remains to bound the regret in the rounds where we do not have an upper bound on the norm of the Hessian. In these rounds I show that the norm of wt (and θt) is small enough that the regret is still bounded, thanks to the choice of βt. 4.2 Proof of the main result We start by defining the new regularizer and showing its properties in the following lemma (proof in the supplementary material). Note the similarities with EGU, where the regularizer is Σ_{i=1}^d w_i(log(w_i) − 1), w ∈ R^d, w_i ≥ 0 [9]. Lemma 2. Define f(w) = α∥w∥(ln(β∥w∥) − 1), for α, β > 0. The following properties hold:
• f*(θ) = (α/β) exp(∥θ∥/α).
• ∇f*(θ) = θ/(β∥θ∥) · exp(∥θ∥/α).
• ∥∇²f*(θ)∥₂ ≤ 1/(β min(∥θ∥, α)) · exp(∥θ∥/α).
Equipped with a local upper bound on the Hessian of f*, we can now use Lemma 1. We notice that Lemma 1 also guides us in the choice of the sequences αt: in fact, if we want the regret to be Õ(√T), αt must be Õ(√T) too. In the proof of Theorem 1 we also use the following three technical lemmas, whose proofs are in the supplementary material. The first two are used to upper bound the exponential function with quadratic functions. Lemma 3. Let M > 0. Then for any exp(M)/(M² + 1) ≤ p ≤ exp(M) and 0 ≤ x ≤ M, we have exp(x) ≤ p + ((exp(M) − p)/M²) x². Lemma 4. Let M > 0. Then for any 0 ≤ x ≤ M, we have exp(x) ≤ 1 + x + ((exp(M) − 1 − M)/M²) x². Lemma 5. For any p, q > 0 we have 2/√p − 2/√(p + q) ≥ q/(p + q)^{3/2}. Proof of Theorem 1. In the following, denote by n(x) := max(∥x∥, ∥x∥²). We will use Lemma 1 to upper bound the regret of DFEG. Hence, using the notation in Algorithm 1, set zt = −∂ℓt(⟨wt, xt⟩)xt and ft(w) = αt∥w∥(log(βt∥w∥) − 1).
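Before continuing with the proof, the closed form for the conjugate in Lemma 2 can be sanity-checked numerically. The brute-force grid search below is my own check, not part of the paper; it compares sup_v (vθ − f(v)) in one dimension against (α/β)exp(|θ|/α).

```python
import math

def reg(v, alpha, beta):
    """The regularizer of Lemma 2 in one dimension,
    f(v) = alpha * |v| * (ln(beta * |v|) - 1), with f(0) = 0 by continuity."""
    n = abs(v)
    return 0.0 if n == 0.0 else alpha * n * (math.log(beta * n) - 1.0)

def conjugate_numeric(theta, alpha, beta, hi=10.0, steps=200_000):
    """Brute-force sup_v (v*theta - f(v)) over a fine grid on [-hi, hi],
    to check the closed form f*(theta) = (alpha/beta) * exp(|theta|/alpha)."""
    best = float("-inf")
    for i in range(steps + 1):
        v = -hi + 2.0 * hi * i / steps
        best = max(best, v * theta - reg(v, alpha, beta))
    return best
```

The supremum is attained at v = exp(θ/α)/β (set the derivative α ln(βv) equal to θ), and substituting back gives exactly (α/β)exp(θ/α), matching the first property of Lemma 2.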
Observe that, by the hypothesis on ℓt, we have ∥zt∥≤L∥xt∥. We first consider two cases, based on the norm of θt. Case 1: ∥θt∥> αt + ∥zt∥. With this assumption, and using the third property of Lemma 2, we have max 0≤τ≤1 ∥∇2f ∗ t (θt + τzt)∥≤max 0≤τ≤1 exp ∥θt+τzt∥ αt βt min(∥θt + τzt∥, αt) ≤ exp ∥θt∥+∥zt∥ αt βtαt . We now use the second statement of Lemma 1. We have that λt∥zt∥2 2 + f ∗ t (θt) −f ∗ t−1(θt) can be upper bounded by ∥zt∥2 2αtβt exp ∥θt∥+ ∥zt∥ αt + αt βt exp ∥θt∥ αt −αt−1 βt−1 exp ∥θt∥ αt−1 ≤∥zt∥2 2αtβt exp ∥θt∥+ ∥zt∥ αt + αt βt exp ∥θt∥ αt −αt−1 βt−1 exp ∥θt∥ αt = exp ∥θt∥ αt ∥zt∥2 2aH2 t exp ∥zt∥ αt + a Ht − a Ht−1 . (1) We will now prove that the term in the parenthesis of (1) is negative. It can be rewritten as ∥zt∥2 2aH2 t exp ∥zt∥ αt + a Ht − a Ht−1 = ∥zt∥2Ht−1 exp ∥zt∥ αt −2a2Ht−1L2n(xt) −2a2L4(n(xt))2 2aH2 t Ht−1 , and from the expression of αt we have that ∥zt∥ αt ≤1 a, so we now use Lemma 3 with p = 2a2 and M = 1/a. These are valid settings because exp( 1 a ) 1 a2 +1 ≤2a2 ≤exp( 1 a), ∀0.825 ≤a ≤1.109, as it can be verified numerically. ∥zt∥2 2aH2 t exp ∥zt∥ αt + a Ht − a Ht−1 ≤ ∥zt∥2Ht−1 2a2 + a2(exp( 1 a) −2a2) ∥zt∥2 α2 t −2a2Ht−1L2n(xt) −2a2L4(n(xt))2 2aH2 t Ht−1 ≤ L2∥xt∥2Ht−1 2a2 + a2(exp( 1 a) −2a2) L2∥xt∥2 a2Ht −2a2Ht−1L2∥xt∥2 −2a2L4∥xt∥2 2aH2 t Ht−1 ≤L4∥xt∥4(exp( 1 a) −4a2) 2aH2 t Ht−1 ≤0, (2) 6 where in last step we used the fact that exp( 1 a) ≤4a2, ∀a ≥0.882, as again it can be verified numerically. Case 2: ∥θt∥≤αt + ∥zt∥. We use the first statement of Lemma 1, setting w′ t = wt if ∥θ∦= 0, and w′ t = 0 otherwise. In this way, from the second property of Lemma 2, we have that ∥w′ t∥≤ 1 βt exp( ∥θt∥ αt ). Note that any other choice of w′ t satisfying the the previous relation on the norm of w′ t would have worked as well. f ∗ t (θt+1) −f ∗ t−1(θt) = αt βt exp ∥θt+1∥ αt −αt−1 βt−1 exp ∥θt∥ αt−1 ≤exp ∥θt∥ αt αt βt exp ∥zt∥ αt −αt−1 βt−1 = a exp ∥θt∥ αt exp ∥zt∥ a√Ht Ht−1 −Ht Ht−1Ht . 
(3) Remembering that ∥zt∥ αt ≤1 a, and using Lemma 4 with M = 1 a, we have Ht−1 exp ∥zt∥ a√Ht −Ht−1 −L2n(xt) ≤Ht−1 exp L∥xt∥ a√Ht −Ht−1 −L2∥xt∥2 ≤Ht−1 1 + L∥xt∥ a√Ht + a2 exp 1 a −1 −1 a L2∥xt∥2 a2Ht −Ht−1 −L2∥xt∥2 = LHt−1∥xt∥ a√Ht + exp 1 a −1 −1 a L2Ht−1∥xt∥2 Ht −L2∥xt∥2 ≤LHt−1∥xt∥ a√Ht + L2∥xt∥2 exp 1 a −2 −1 a ≤LHt−1∥xt∥ a√Ht , (4) where in the last step we used the fact that exp( 1 a) −2 −1 a ≤0, ∀a ≥0.873, verified numerically. Putting together (3) and (4), we have f ∗ t (θt+1) −f ∗ t−1(θt) −⟨w′ t, zt⟩≤exp ∥θt∥ αt L∥xt∥ H 3 2 t −⟨w′ t, zt⟩ ≤exp ∥θt∥ αt L∥xt∥ H 3 2 t + L∥w′ t∥∥xt∥≤exp ∥θt∥ αt L∥xt∥ H 3 2 t + exp ∥θt∥ αt L∥xt∥ βt = 2 exp ∥θt∥ αt L∥xt∥ H 3 2 t ≤2 exp(1 + 1 a)L∥xt∥ H 3 2 t , (5) where in the second inequality we used the Cauchy-Schwarz inequality and the Lipschitzness of ℓt, in the third the bound on the norm of w′ t, and in the last inequality the fact that ∥θt∥≤αt + ∥zt∥ implies exp( ∥θt∥ αt ) ≤exp(1 + 1 a). Putting together (2) and (5) and summing over t, we have T X t=1 f ∗ t (θt+1) −f ∗ t−1(θt) −⟨w′ t, zt⟩ ≤ T X t=1 2 exp(1 + 1 a)L∥xt∥ H 3 2 t ≤2 L T X t=1 exp(1 + 1 a)L2∥xt∥ (Pt j=1 L2∥xt∥+ δ) 3 2 ≤4 exp(1 + 1 a) L T X t=1 1 qPt−1 j=1 L2∥xt∥+ δ − 1 qPt j=1 L2∥xt∥+ δ ≤4 exp(1 + 1 a) L √ δ , where in the third inequality we used Lemma 5. The stated bound can be obtained observing that ℓt(⟨wt, xt⟩) −ℓt(⟨u, xt⟩) ≤⟨u −wt, zt⟩, from the convexity of ℓt and the definition of zt. 5 Experiments A full empirical evaluation of DFEG is beyond the scope of this paper. Here I just want to show the empirical effect of some of its theoretical properties. 
In all the experiments I used the absolute loss, so L = 1; a is set to the minimal value allowed by Theorem 1, and δ = 1. Figure 1: Left: regret versus number of input vectors on the synthetic dataset. Center and Right: total loss for DFEG and Kernel GD on the cadata and cpusmall datasets, respectively. I denote by Kernel GD the OMD with the regularizer (√t/η)∥w∥². First, I generated synthetic data as in the proof of Theorem 2: the input vectors are all the same, and yt is equal to 1 for t even and −1 otherwise. In this case we know that the optimal predictor has norm equal to zero, and we can exactly calculate the value of the regret. In Figure 1 (left) I have plotted the regret as a function of the number of input vectors. As predicted by the theory, DFEG has a constant regret, while Kernel GD has a regret of the form O(η√T). Hence, it can have a constant regret only when η is set to zero, and this can be done only with prior knowledge of ∥u∥, which is impossible in practical applications. For the second experiment, I analyzed the behavior of DFEG on two real-world regression datasets, cadata and cpusmall⁴. I used the Gaussian kernel with variance equal to the average distance between training input vectors. I have plotted in Figure 1 (center) the final cumulative loss of DFEG and that of Kernel GD with varying values of η. We see that, while the performance of Kernel GD can be better than that of DFEG, as predicted by the theory, its performance varies greatly with η.
On the other hand, the performance of DFEG is close to the optimal one without the need to tune any parameters. It is also worth noting the catastrophic result we can get from a wrong tuning of η in GD. Similar considerations hold for the cpusmall dataset in Figure 1 (right). 6 Discussion I have presented a new algorithm for online learning, the first one in the family of exponentiated gradient to be dimension-free. Thanks to new analysis tools, I have proved that DFEG attains a regret bound of O(U log(UT + 1)√T), without any parameter to tune. I also proved a lower bound that shows that the algorithm is optimal up to a √(log T) term for linear and Lipschitz losses. The problem of deriving a regret bound that depends on the sequence of the gradients, rather than on the xt, remains open. Resolving this issue would yield the tighter O(√(Σ_{t=1}^T ℓt(⟨wt, xt⟩))) regret bounds in the case that the ℓt are smooth [18]. The difficulty in proving this kind of bound seems to lie in the fact that (2) is negative only because Ht − Ht−1 is bigger than ∥zt∥². Acknowledgments I am thankful to Jennifer Batt for her help and support during the writing of this paper, to Nicolò Cesa-Bianchi for the useful comments on an early version of this work, and to Tamir Hazan for his writing style suggestions. I also thank the anonymous reviewers for their precise comments, which helped me to improve the clarity of this manuscript. ⁴http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ References [1] J. Abernethy, A. Agarwal, P. L. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality. In COLT, 2009. [2] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. J. Comput. Syst. Sci., 64(1):48–75, 2002. [3] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006. [4] J. C. Duchi, E. Hazan, and Y. Singer.
Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011. [5] E. Even-Dar, M. Kearns, Y. Mansour, and J. Wortman. Regret to the best vs. regret to the average. In N. H. Bshouty and C. Gentile, editors, COLT, volume 4539 of Lecture Notes in Computer Science, pages 233–247. Springer, 2007. [6] Y. Freund and R. E. Schapire. Large margin classification using the Perceptron algorithm. Machine Learning, pages 277–296, 1999. [7] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265–299, 2003. [8] S. M. Kakade, S. Shalev-Shwartz, and A. Tewari. Regularization techniques for learning with matrices. CoRR, abs/0910.0610, 2009. [9] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, January 1997. [10] N. Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2(4):285–318, 1988. [11] Y. Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer, 2003. [12] F. Orabona and K. Crammer. New adaptive algorithms for online classification. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1840–1848. 2010. [13] F. Orabona, K. Crammer, and N. Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression, 2013. arXiv:1304.2994. [14] R. T. Rockafellar. Convex Analysis (Princeton Mathematical Series). Princeton University Press, 1970. [15] S. Shalev-Shwartz. Online learning: Theory, algorithms, and applications. Technical report, The Hebrew University, 2007. PhD thesis. [16] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2), 2012. [17] S. Shalev-Shwartz and Y. Singer. 
A primal-dual perspective of online learning algorithms. Machine Learning Journal, 2007. [18] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 2199–2207. 2010. [19] M. Streeter and B. McMahan. No-regret algorithms for unconstrained online convex optimization. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2411–2419. 2012. [20] V. Vovk. On-line regression competitive with reproducing kernel hilbert spaces. In Jin-Yi Cai, S.Barry Cooper, and Angsheng Li, editors, Theory and Applications of Models of Computation, volume 3959 of Lecture Notes in Computer Science, pages 452–463. Springer Berlin Heidelberg, 2006. [21] V. G. Vovk. Aggregating strategies. In COLT, pages 371–386, 1990. 9
Memory Limited, Streaming PCA Ioannis Mitliagkas Dept. of Electrical and Computer Engineering The University of Texas at Austin ioannis@utexas.edu Constantine Caramanis Dept. of Electrical and Computer Engineering The University of Texas at Austin constantine@utexas.edu Prateek Jain Microsoft Research Bangalore, India prajain@microsoft.com Abstract We consider streaming, one-pass principal component analysis (PCA), in the high-dimensional regime, with limited memory. Here, p-dimensional samples are presented sequentially, and the goal is to produce the k-dimensional subspace that best approximates these points. Standard algorithms require O(p²) memory; meanwhile no algorithm can do better than O(kp) memory, since this is what the output itself requires. Memory (or storage) complexity is most meaningful when understood in the context of computational and sample complexity. Sample complexity for high-dimensional PCA is typically studied in the setting of the spiked covariance model, where p-dimensional points are generated from a population covariance equal to the identity (white noise) plus a low-dimensional perturbation (the spike) which is the signal to be recovered. It is now well-understood that the spike can be recovered when the number of samples, n, scales proportionally with the dimension, p. Yet all algorithms that provably achieve this have memory complexity O(p²). Meanwhile, algorithms with memory complexity O(kp) do not have provable bounds on sample complexity comparable to p. We present an algorithm that achieves both: it uses O(kp) memory (meaning storage of any kind) and is able to compute the k-dimensional spike with O(p log p) sample complexity – the first algorithm of its kind. While our theoretical analysis focuses on the spiked covariance model, our simulations show that our algorithm is successful on much more general models for the data.
1 Introduction Principal component analysis is a fundamental tool for dimensionality reduction, clustering, classification, and many more learning tasks. It is a basic preprocessing step for learning, recognition, and estimation procedures. The core computational element of PCA is performing a (partial) singular value decomposition, and much work over the last half century has focused on efficient algorithms (e.g., Golub & Van Loan (2012) and references therein) and hence on computational complexity. The recent focus on understanding high-dimensional data, where the dimensionality of the data scales together with the number of available sample points, has led to an exploration of the sample complexity of covariance estimation. This direction was largely influenced by Johnstone's spiked covariance model, where data samples are drawn from a distribution whose (population) covariance is a low-rank perturbation of the identity matrix Johnstone (2001). Work initiated there, and also work done in Vershynin (2010a) (and references therein), has explored the power of batch PCA in the p-dimensional setting with sub-Gaussian noise, and demonstrated that the singular value decomposition (SVD) of the empirical covariance matrix succeeds in recovering the principal components (extreme eigenvectors of the population covariance) with high probability, given n = O(p) samples. This paper brings the focus to another critical quantity: memory/storage. The only currently available algorithms with provable sample complexity guarantees either store all n = O(p) samples (note that for more than a single pass over the data, the samples must all be stored) or explicitly form or approximate the empirical p × p (typically dense) covariance matrix. All cases require as much as O(p²) storage for exact recovery.
In certain high-dimensional applications, where data points are high resolution photographs, biometrics, video, etc., p often is of the order of 10^10 − 10^12, making the need for O(p²) memory prohibitive. At many computing scales, manipulating vectors of length O(p) is possible, when storage of O(p²) is not. A typical desktop may have 10-20 GB of RAM, but will not have more than a few TB of total storage. A modern smart-phone may have as much as a GB of RAM, but has a few GB, not TB, of storage. In distributed storage systems, the scalability in storage comes at the heavy cost of communication. In this light, we consider the streaming data setting, where the samples xt ∈ Rp are collected sequentially, and unless we store them, they are irretrievably gone.¹ On the spiked covariance model (and natural generalizations), we show that a simple algorithm requiring O(kp) storage – the best possible – performs as well as batch algorithms (namely, SVD on the empirical covariance matrix), with sample complexity O(p log p). To the best of our knowledge, this is the only algorithm with both storage complexity and sample complexity guarantees. We discuss connections to past work in detail in Section 2, introduce the model in Section 3, and present the solution to the rank 1 case, the rank k case, and the perturbed-rank-k case in Sections 4.1, 4.2 and 4.3, respectively. In Section 5 we provide experiments that not only confirm the theoretical results, but demonstrate that our algorithm works well outside the assumptions of our main theorems. 2 Related Work Memory- and computation-efficient algorithms that operate on streaming data are plentiful in the literature and many seem to do well in practice. However, there is no algorithm that provably recovers the principal components in the same noise and sample-complexity regime as the batch PCA algorithm does and maintains a provably light memory footprint. Because of the practical relevance, there is renewed interest in this problem.
The fact that it is an important unresolved issue has been pointed out in numerous places, e.g., Warmuth & Kuzmin (2008); Arora et al. (2012). Online-PCA for regret minimization is considered in several papers, most recently in Warmuth & Kuzmin (2008). There the multiplicative weights approach is adapted to this problem, with experts corresponding to subspaces. The goal is to control the regret, improving on the natural follow-the-leader algorithm that performs batch-PCA at each step. However, the algorithm can require O(p²) memory, in order to store the multiplicative weights. A memory-light variant described in Arora et al. (2012) typically requires much less memory, but there are no guarantees for this, and moreover, for certain problem instances, its memory requirement is on the order of p². Sub-sampling, dimensionality-reduction and sketching form another family of low-complexity and low-memory techniques; see, e.g., Clarkson & Woodruff (2009); Nadler (2008); Halko et al. (2011). These save on memory and computation by performing SVD on the resulting smaller matrix. The results in this line of work provide worst-case guarantees over the pool of data, and typically require a rapidly decaying spectrum, not required in our setting, to produce good bounds. More fundamentally, these approaches are not appropriate for data coming from a statistical model such as the spiked covariance model. It is clear that subsampling approaches, for instance, simply correspond to discarding most of the data, and for fundamental sample complexity reasons, cannot work. Sketching produces a similar effect: each column of the sketch is a random (+/−) sum of the data points. If the data points are, e.g., independent Gaussian vectors, then so will be each element of the sketch, and thus this approach again runs against fundamental sample complexity constraints. Indeed, it is straightforward to check that the guarantees presented in (Clarkson & Woodruff (2009); Halko et al.
(2011)) are not strong enough to guarantee recovery of the spike. This is not because the results are weak; it is because they are geared towards worst-case bounds. 1This is similar to what is sometimes referred to as the single pass model. 2 Algorithms focused on sequential SVD (e.g., Brand (2002, 2006), Comon & Golub (1990),Li (2004) and more recently Balzano et al. (2010); He et al. (2011)) seek to have the best subspace estimate at every time (i.e., each time a new data sample arrives) but without performing full-blown SVD at each step. While these algorithms indeed reduce both the computational and memory burden of batch-PCA, there are no rigorous guarantees on the quality of the principal components or on the statistical performance of these methods. In a Bayesian mindset, some researchers have come up with expectation maximization approaches Roweis (1998); Tipping & Bishop (1999), that can be used in an incremental fashion. The finite sample behavior is not known. Stochastic-approximation-based algorithms along the lines of Robbins & Monro (1951) are also quite popular, due to their low computational and memory complexity, and excellent performance. They go under a variety of names, including Incremental PCA (though the term Incremental has been used in the online setting as well Herbster & Warmuth (2001)), Hebbian learning, and stochastic power method Arora et al. (2012). The basic algorithms are some version of the following: upon receiving data point xt at time t, update the estimate of the top k principal components via: U (t+1) = Proj(U (t) + ηtxtx⊤ t U (t)), (1) where Proj(·) denotes the “projection” that takes the SVD of the argument, and sets the top k singular values to 1 and the rest to zero (see Arora et al. (2012) for discussion). While empirically these algorithms perform well, to the best of our knowledge - and efforts - there is no associated finite sample guarantee. 
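For k = 1, the projection in update (1) reduces to normalizing the current estimate, so the stochastic power method can be sketched in a few lines. This is a hypothetical toy version for illustration (the step size, seeding, and function name are mine), and, as noted above, it carries no finite-sample guarantee.

```python
import math
import random

def stochastic_power_rank1(stream, p, eta=0.02, seed=0):
    """Sketch of update (1) for k = 1: u <- normalize(u + eta * <u, x> * x).
    stream yields samples x (lists of p floats); returns the unit estimate u."""
    rng = random.Random(seed)
    u = [rng.gauss(0.0, 1.0) for _ in range(p)]   # random initialization
    for x in stream:
        dot = sum(ui * xi for ui, xi in zip(u, x))
        u = [ui + eta * dot * xi for ui, xi in zip(u, x)]
        norm = math.sqrt(sum(ui * ui for ui in u))
        u = [ui / norm for ui in u]               # Proj(.) for k = 1
    return u
```

On data with a single dominant covariance direction this update typically aligns with that direction in practice, which is exactly the empirical behavior the text describes.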
The analytical challenge lies in the high variance at each step, which makes direct analysis difficult. In summary, while much work has focused on memory-constrained PCA, there has as of yet been no work that simultaneously provides sample complexity guarantees competitive with batch algorithms, and also memory/storage complexity guarantees close to the minimal requirement of O(kp) – the memory required to store only the output. We present an algorithm that provably does both. 3 Problem Formulation and Notation We consider the streaming model: at each time step t, we receive a point xt ∈Rp. Any point that is not explicitly stored can never be revisited. Our goal is to compute the top k principal components of the data: the k-dimensional subspace that offers the best squared-error estimate for the points. We assume a probabilistic generative model, from which the data is sampled at each step t. Specifically, xt = Azt + wt, (2) where A ∈Rp×k is a fixed matrix, zt ∈Rk×1 is a multivariate normal random variable, i.e., zt ∼N(0k×1, Ik×k), and wt ∈Rp×1 is the “noise” vector, also sampled from a multivariate normal distribution, i.e., wt ∼N(0p×1, σ2Ip×p). Furthermore, we assume that all 2n random vectors (zt, wt, ∀1 ≤t ≤n) are mutually independent. In this regime, it is well-known that batch-PCA is asymptotically consistent (hence recovering A up to unitary transformations) with number of samples scaling as n = O(p) Vershynin (2010b). It is interesting to note that in this high-dimensional regime, the signal-to-noise ratio quickly approaches zero, as the signal, or “elongation” of the major axis, ∥Az∥2, is O(1), while the noise magnitude, ∥w∥2, scales as O(√p). The central goal of this paper is to provide finite sample guarantees for a streaming algorithm that requires memory no more than O(kp) and matches the consistency results of batch PCA in the sampling regime n = O(p) (possibly with additional log factors, or factors depending on σ and k). 
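The generative model (2) is straightforward to simulate. Below is a small sketch for the rank-1 case (k = 1, so A is a single column u); the helper name and parameter choices are illustrative, not from the paper.

```python
import random

def spiked_samples(n, u, sigma, seed=0):
    """Draw n samples from model (2) in the rank-1 case:
    x_t = u * z_t + w_t, with z_t ~ N(0, 1) and w_t ~ N(0, sigma^2 I)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)  # one shared z_t per sample, not per coordinate
        samples.append([z * ui + rng.gauss(0.0, sigma) for ui in u])
    return samples
```

The population covariance of these samples is uu^T + σ²I, so the empirical second moment along u approaches ∥u∥² + σ² while every orthogonal direction approaches σ², matching the spiked structure described above.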
We denote matrices by capital letters (e.g. A) and vectors by lower-case bold-face letters (x). ∥x∥q denotes the ℓq norm of x; ∥x∥denotes the ℓ2 norm of x. ∥A∥or ∥A∥2 denotes the spectral norm of A while ∥A∥F denotes the Frobenius norm of A. Without loss of generality (WLOG), we assume that: ∥A∥2 = 1, where ∥A∥2 = max∥x∥2=1 ∥Ax∥2 denotes the spectral norm of A. Finally, we write ⟨a, b⟩= a⊤b for the inner product between a, b. In proofs the constant C is used loosely and its value may vary from line to line. 3 Algorithm 1 Block-Stochastic Power Method Block-Stochastic Orthogonal Iteration input {x1, . . . , xn}, Block size: B 1: q0 ∼N(0, Ip×p) (Initialization) Hi ∼N(0, Ip×p), 1 ≤i ≤k (Initialization) 2: q0 ←q0/∥q0∥2 H ←Q0R0 (QR-decomposition) 3: for τ = 0, . . . , n/B −1 do 4: sτ+1 ←0 Sτ+1 ←0 5: for t = Bτ + 1, . . . , B(τ + 1) do 6: sτ+1 ←sτ+1 + 1 B ⟨qτ, xt⟩xt Sτ+1 ←Sτ+1 + 1 B xtx⊤ t Qτ 7: end for 8: qτ+1 ←sτ+1/∥sτ+1∥2 Sτ+1 = Qτ+1Rτ+1 (QR-decomposition) 9: end for output 4 Algorithm and Guarantees In this section, we present our proposed algorithm and its finite sample analysis. It is a block-wise stochastic variant of the classical power-method. Stochastic versions of the power method already exist in the literature; see Arora et al. (2012). The main impediment to the analysis of such stochastic algorithms (as in (1)) is the large variance of each step, in the presence of noise. This motivates us to consider a modified stochastic power method algorithm, that has a variance reduction step built in. At a high level, our method updates only once in a “block” and within one block we average out noise to reduce the variance. Below, we first illustrate the main ideas of our method as well as our sample complexity proof for the simpler rank-1 case. The rank-1 and rank-k algorithms are so similar, that we present them in the same panel. We provide the rank-k analysis in Section 4.2. We note that, while our algorithm describes {x1, . . . 
, xn} as “input,” we mean this in the streaming sense: the data are no-where stored, and can never be revisited unless the algorithm explicitly stores them. 4.1 Rank-One Case We first consider the rank-1 case for which each sample xt is generated using: xt = uzt + wt where u ∈Rp is the principal component that we wish to recover. Our algorithm is a block-wise method where all the n samples are divided in n/B blocks (for simplicity we assume that n/B is an integer). In the (τ + 1)-st block, we compute sτ+1 = 1 B B(τ+1) X t=Bτ+1 xtx⊤ t qτ. (3) Then, the iterate qτ is updated using qτ+1 = sτ+1/∥sτ+1∥2. Note that, sτ+1 can be computed online, with O(p) operations per step. Furthermore, storage requirement is also linear in p. 4.1.1 Analysis We now present the sample complexity analysis of our proposed method. Using O(σ4p log(p)/ǫ2) samples, Algorithm 1 obtains a solution qT of accuracy ǫ, i.e. ∥qT −u∥2 ≤ǫ. Theorem 1. Denote the data stream by x1, . . . , xn, where xt ∈Rp, ∀t is generated by (2). Set the total number of iterations T = Ω( log(p/ǫ) log((σ2+.75)/(σ2+.5))) and the block size B = Ω( (1+3(σ+σ2)√p)2 log(T ) ǫ2 ). Then, with probability 0.99, ∥qT −u∥2 ≤ǫ, where qT is the T-th iterate of Algorithm 1. That is, Algorithm 1 obtains an ǫ-accurate solution with number of samples (n) given by: n = ˜Ω (1 + 3(σ + σ2)√p)2 log(p/ǫ) ǫ2 log((σ2 + .75)/(σ2 + .5)) . Note that in the total sample complexity, we use the notation ˜Ω(·) to suppress the extra log(T) factor for clarity of exposition, as T already appears in the expression linearly. 4 Proof. The proof decomposes the current iterate into the component of the current iterate, qτ, in the direction of the true principal component (the spike) u, and the perpendicular component, showing that the former eventually dominates. 
Doing so hinges on three key components: (a) for large enough B, the empirical covariance matrix F_{τ+1} = (1/B) Σ_{t=Bτ+1}^{B(τ+1)} x_t x_t^⊤ is close to the true covariance matrix M = uu^⊤ + σ^2 I, i.e., ∥F_{τ+1} − M∥_2 is small; in the process, we obtain "tighter" bounds for |u^⊤(F_{τ+1} − M)u| for fixed u; (b) with probability 0.99 (or any other constant probability), the initial point q_0 has a component of at least O(1/√p) magnitude along the true direction u; (c) after τ iterations, the error in estimation is at most O(γ^τ), where γ < 1 is a constant.

There are several results that we use repeatedly, which we collect here and prove individually in the full version of the paper (Mitliagkas et al. (2013)).

Lemmas 4, 5 and 6. Let B, T and the data stream {x_i} be as defined in the theorem. Then:
• (Lemma 4): With probability 1 − C/T, for C a universal constant, we have:
∥ (1/B) Σ_t x_t x_t^⊤ − uu^⊤ − σ^2 I ∥_2 ≤ ǫ.
• (Lemma 5): With probability 1 − C/T, for C a universal constant, we have:
u^⊤ s_{τ+1} ≥ u^⊤ q_τ (1 + σ^2) ( 1 − ǫ/(4(1 + σ^2)) ),
where s_{τ+1} = (1/B) Σ_{Bτ<t≤B(τ+1)} x_t x_t^⊤ q_τ.
• (Lemma 6): Let q_0 be the initial guess for u, given by Steps 1 and 2 of Algorithm 1. Then, w.p. 0.99: |⟨q_0, u⟩| ≥ C_0/√p, where C_0 > 0 is a universal constant.

Step (a) is proved in Lemmas 4 and 5, while Lemma 6 provides the required result for the initial vector q_0. Using these lemmas, we next complete the proof of the theorem. We note that both (a) and (b) follow from well-known results; we provide them for completeness.

Let q_τ = √(1 − δ_τ) u + √δ_τ g_τ, 1 ≤ τ ≤ n/B, where g_τ is the component of q_τ perpendicular to u and √(1 − δ_τ) is the magnitude of the component of q_τ along u. Note that g_τ may well change at each iteration; we only wish to show δ_τ → 0. Now, using Lemma 5, the following holds with probability at least 1 − C/T:

u^⊤ s_{τ+1} ≥ √(1 − δ_τ) (1 + σ^2) ( 1 − ǫ/(4(1 + σ^2)) ). (4)

Next, we consider the component of s_{τ+1} that is perpendicular to u:

g_{τ+1}^⊤ s_{τ+1} = g_{τ+1}^⊤ (1/B) Σ_{t=Bτ+1}^{B(τ+1)} x_t x_t^⊤ q_τ = g_{τ+1}^⊤ (M + E_τ) q_τ,

where M = uu^⊤ + σ^2 I and E_τ is the error matrix E_τ = (1/B) Σ_{t=Bτ+1}^{B(τ+1)} x_t x_t^⊤ − M. Using Lemma 4, ∥E_τ∥_2 ≤ ǫ (w.p. ≥ 1 − C/T). Hence, w.p. ≥ 1 − C/T:

g_{τ+1}^⊤ s_{τ+1} = σ^2 g_{τ+1}^⊤ q_τ + g_{τ+1}^⊤ E_τ q_τ ≤ σ^2 √δ_τ + ǫ. (5)

Now, since q_{τ+1} = s_{τ+1}/∥s_{τ+1}∥_2,

δ_{τ+1} = (g_{τ+1}^⊤ q_{τ+1})^2 = (g_{τ+1}^⊤ s_{τ+1})^2 / ( (u^⊤ s_{τ+1})^2 + (g_{τ+1}^⊤ s_{τ+1})^2 )
(i) ≤ (g_{τ+1}^⊤ s_{τ+1})^2 / ( (1 − δ_τ)(1 + σ^2 − ǫ/4)^2 + (g_{τ+1}^⊤ s_{τ+1})^2 )
(ii) ≤ (σ^2 √δ_τ + ǫ)^2 / ( (1 − δ_τ)(1 + σ^2 − ǫ/4)^2 + (σ^2 √δ_τ + ǫ)^2 ), (6)

where (i) follows from (4) and (ii) follows from (5) along with the fact that x/(c + x) is an increasing function of x for c, x ≥ 0. Assuming √δ_τ ≥ 2ǫ and using (6), and bounding the failure probability with a union bound, we get (w.p. ≥ 1 − τ · C/T)

δ_{τ+1} ≤ δ_τ (σ^2 + 1/2)^2 / ( (1 − δ_τ)(σ^2 + 3/4)^2 + δ_τ (σ^2 + 1/2)^2 ) (i) ≤ γ^{2τ} δ_0 / ( 1 − (1 − γ^{2τ}) δ_0 ) (ii) ≤ C_1 γ^{2τ} p, (7)

where γ = (σ^2 + 1/2)/(σ^2 + 3/4) and C_1 > 0 is a global constant. Inequality (ii) follows from Lemma 6; to prove (i), we need the following lemma.
It shows that in the recursion (7), δ_τ decreases at a fast rate. The rate of decrease in δ_τ might initially (for small τ) be sub-linear, but for large enough τ the rate is linear. We defer the proof to the full version of the paper (Mitliagkas et al. (2013)).

Lemma 2. If δ_{τ+1} ≤ γ^2 δ_τ / (1 − δ_τ + γ^2 δ_τ) for every τ ≥ 0, with 0 < γ < 1, then
δ_{τ+1} ≤ γ^{2τ+2} δ_0 / ( 1 − (1 − γ^{2τ+2}) δ_0 ).

Hence, using the above equation, after T = O( log(p/ǫ) / log(1/γ) ) updates, with probability at least 1 − C, √δ_T ≤ 2ǫ. The result now follows by noting that ∥u − q_T∥_2 ≤ 2√δ_T.

Remark: In Theorem 1, the probability of recovery is a constant and does not decay with p. One can correct this by paying a price of O(log p) in either storage or sample complexity: for the former, we can run O(log p) instances of Algorithm 1 in parallel; alternatively, we can run Algorithm 1 O(log p) times on fresh data each time, using the next block of data to evaluate the old solutions and always keeping the best one. Either approach guarantees a success probability of at least 1 − 1/p^{O(1)}.

4.2 General Rank-k Case
In this section we consider the general rank-k PCA problem, where each sample is assumed to be generated using the model of equation (2), and A ∈ R^{p×k} represents the k principal components that need to be recovered. Let A = UΛV^⊤ be the SVD of A, where U ∈ R^{p×k} and Λ, V ∈ R^{k×k}. The matrices U and V are orthogonal, i.e., U^⊤U = I and V^⊤V = I, and Λ is a diagonal matrix with diagonal elements λ_1 ≥ λ_2 ≥ · · · ≥ λ_k. The goal is to recover the space spanned by A, i.e., span(U). Without loss of generality, we can assume that ∥A∥_2 = λ_1 = 1. Similar to the rank-1 problem, our algorithm for the rank-k problem can be viewed as a streaming variant of the classical orthogonal iteration used for SVD. But unlike the rank-1 case, we require a more careful analysis, as we need to bound spectral norms of various quantities in intermediate steps, and a simple, crude analysis can lead to significantly worse bounds.
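As a concrete illustration, the rank-1 procedure analyzed above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the spiked-model test harness, seeds, and parameter values below are illustrative choices.

```python
import numpy as np

def block_power_method(stream, p, B, rng=None):
    """Sketch of Algorithm 1 (rank-1): a single pass over `stream` with
    O(p) memory, updating the iterate q once per block of B samples."""
    rng = rng or np.random.default_rng(0)
    q = rng.standard_normal(p)
    q /= np.linalg.norm(q)                 # steps 1-2: random unit-norm start
    s, count = np.zeros(p), 0
    for x in stream:
        s += (q @ x) * x / B               # step 6: s += (1/B) <q, x> x
        count += 1
        if count == B:                     # block finished:
            q = s / np.linalg.norm(s)      # step 8: one normalized power step
            s, count = np.zeros(p), 0
    return q

# Toy check on the spiked model x_t = u z_t + w_t (illustrative sizes):
rng = np.random.default_rng(1)
p, sigma = 50, 0.5
u = rng.standard_normal(p)
u /= np.linalg.norm(u)
stream = (u * rng.standard_normal() + sigma * rng.standard_normal(p)
          for _ in range(100_000))
q = block_power_method(stream, p, B=2_000, rng=rng)
err = min(np.linalg.norm(q - u), np.linalg.norm(q + u))
```

The `min` over ±u accounts for the sign ambiguity of the principal component; with these illustrative settings the error is typically far below the crude theoretical constants.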
Interestingly, the analysis is entirely different from the standard analysis of the orthogonal iteration: there, the empirical estimate of the covariance matrix is fixed, while in our case it varies with each block. For the general rank-k problem, we use the largest-principal-angle-based distance function between any two given subspaces:

dist(span(U), span(V )) = dist(U, V ) = ∥U_⊥^⊤ V∥_2 = ∥V_⊥^⊤ U∥_2,

where U_⊥ and V_⊥ represent orthogonal bases of the subspaces perpendicular to span(U) and span(V ), respectively. For the spiked covariance model, it is straightforward to see that this is equivalent to the usual PCA figure of merit, the expressed variance.

Theorem 3. Consider a data stream where x_t ∈ R^p for every t is generated by (2), and the SVD of A ∈ R^{p×k} is given by A = UΛV^⊤. Let, wlog, λ_1 = 1 ≥ λ_2 ≥ · · · ≥ λ_k > 0. Let

T = Ω( log(p/(kǫ)) / log( (σ^2 + 0.75 λ_k^2)/(σ^2 + 0.5 λ_k^2) ) ),
B = Ω( ( (1 + σ)^2 √k + σ √((1 + σ^2 k) p) )^2 log(T) / (λ_k^4 ǫ^2) ).

Then, after T B-size block updates, w.p. 0.99, dist(U, Q_T) ≤ ǫ. Hence, the sufficient number of samples for ǫ-accurate recovery of all the top-k principal components is:

n = Ω~( ( (1 + σ)^2 √k + σ √((1 + σ^2 k) p) )^2 log(p/(kǫ)) / ( λ_k^4 ǫ^2 log( (σ^2 + 0.75 λ_k^2)/(σ^2 + 0.5 λ_k^2) ) ) ).

Again, we use Ω~(·) to suppress the extra log(T) factor. The key part of the proof requires the following additional lemmas, which bound the energy of the current iterate along the desired subspace and its perpendicular space (Lemmas 8 and 9), and Lemma 10, which controls the quality of the initialization.

Lemmas 8, 9 and 10. Let the data stream, A, B and T be as defined in Theorem 3, let σ be the standard deviation of the noise, let F_{τ+1} = (1/B) Σ_{Bτ<t≤B(τ+1)} x_t x_t^⊤, and let Q_τ be the τ-th iterate of Algorithm 1.
• (Lemma 8): For all v ∈ R^k with ∥v∥_2 = 1, w.p. 1 − 5C/T we have:
∥U^⊤ F_{τ+1} Q_τ v∥_2 ≥ (λ_k^2 + σ^2 − λ_k^2 ǫ/4) √(1 − ∥U_⊥^⊤ Q_τ∥_2^2).
• (Lemma 9): With probability at least 1 − 4C/T, ∥U_⊥^⊤ F_{τ+1} Q_τ∥_2 ≤ σ^2 ∥U_⊥^⊤ Q_τ∥_2 + λ_k^2 ǫ/2.
• (Lemma 10): Let Q_0 ∈ R^{p×k} be sampled uniformly at random as in Algorithm 1. Then, w.p.
at least 0.99: σ_k(U^⊤ Q_0) ≥ C √(1/(kp)).

We provide the proofs of the lemmas and the theorem in the full version (Mitliagkas et al. (2013)).

4.3 Perturbation-tolerant Subspace Recovery
While our results thus far assume that A has rank exactly k and that k is known a priori, here we show that both assumptions can be relaxed; hence our results hold in a quite broad setting. Let x_t = A z_t + w_t be the t-th sample, with A = UΛV^⊤ ∈ R^{p×r} and U ∈ R^{p×r}, where r ≥ k is the unknown true rank of A. We run Algorithm 1 with rank k to recover a subspace Q_T that is contained in U. The largest-principal-angle-based distance from the previous section can be used directly in our more general setting: dist(U, Q_T) = ∥U_⊥^⊤ Q_T∥_2 measures the component of Q_T "outside" the subspace U. Our analysis can be easily modified to handle this case. Naturally, the number of samples we require now increases with r. In particular, if

n = Ω~( ( (1 + σ)^2 √r + σ √((1 + σ^2 r) p) )^2 log(p/(rǫ)) / ( λ_r^4 ǫ^2 log( (σ^2 + 0.75 λ_r^2)/(σ^2 + 0.5 λ_r^2) ) ) ),

then dist(U, Q_T) ≤ ǫ. Furthermore, if we assume r ≥ C · k (for a large enough constant C > 0), then the initialization step provides a better distance, i.e., dist(U, Q_0) ≤ C′/√p rather than the dist(U, Q_0) ≤ C′/√(kp) bound obtained when r = k. This initialization step enables us to give a tighter sample complexity, as the r√p in the numerator above can be replaced by √(rp).

5 Experiments
In this section we show that, as predicted by our theoretical results, our algorithm performs close to the optimal batch SVD. We provide results from simulating the spiked covariance model and demonstrate the phase transition in the probability of successful recovery that is inherent to the statistical problem. We then stray from the analyzed model and performance metric and test our algorithm on real-world (and some very large) datasets, using the metric of explained variance. In the experiments for Figures 1(a)-(b), we draw data from the generative model of (2). Our results are averaged over at least 200 independent runs.
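The rank-k variant analyzed above replaces vector normalization with a QR step. The following sketch (again NumPy with illustrative parameters, not the authors' code) mirrors the right column of Algorithm 1:

```python
import numpy as np

def block_orthogonal_iteration(stream, p, k, B, rng=None):
    """Sketch of Algorithm 1 (rank-k): streaming block orthogonal iteration.
    Keeps only the p-by-k iterate Q and a p-by-k accumulator: O(pk) memory."""
    rng = rng or np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((p, k)))  # random orthonormal start
    S, count = np.zeros((p, k)), 0
    for x in stream:
        S += np.outer(x, x @ Q) / B   # S += (1/B) x x^T Q, never forming x x^T
        count += 1
        if count == B:
            Q, _ = np.linalg.qr(S)    # QR step replaces vector normalization
            S, count = np.zeros((p, k)), 0
    return Q

def subspace_dist(U, Q):
    """dist(U, Q) = ||U_perp^T Q||_2, computed as ||(I - U U^T) Q||_2."""
    return np.linalg.norm(Q - U @ (U.T @ Q), 2)
```

On data from the rank-k spiked model, `subspace_dist(U, Q)` shrinks toward the noise floor as more blocks are processed, matching the behavior guaranteed by Theorem 3.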
Algorithm 1 uses the block size prescribed in Theorem 3, with the empirically tuned constant 0.2. As expected, our algorithm exhibits linear scaling of the sample complexity with respect to the ambient dimension p, the same as the batch SVD. The missing point on batch SVD's curve (Figure 1(a)) corresponds to p > 2.4 · 10^4: performing SVD on a dense p × p matrix of that size either fails or takes a very long time on most modern desktop computers, whereas our streaming algorithm easily runs on problems of this size.

[Figure 1: (a) Number of samples required for recovery of a single component (k = 1) from the spiked covariance model, with noise standard deviation σ = 0.5 and desired accuracy ǫ = 0.05, for batch SVD versus our streaming algorithm. (b) Fraction of trials in which Algorithm 1 successfully recovers the principal component (k = 1) in the same model, with ǫ = 0.05 and n = 1000 samples, as a function of the noise standard deviation σ and the ambient dimension p. (c) Explained variance by Algorithm 1 compared to the optimal batch SVD on the NIPS bag-of-words dataset. (d) Explained variance by Algorithm 1 on the large bag-of-words datasets NY Times (300K samples, p = 103K) and PubMed (8.2M samples, p = 140K).]

The phase-transition plot in Figure 1(b) shows the empirical sample complexity on a large class of problems and corroborates the scaling with respect to the noise variance that we obtain theoretically. Figures 1(c)-(d) complement our complete treatment of the spiked covariance model with some out-of-model experiments.
We used three bag-of-words datasets from Porteous et al. (2008). We evaluated our algorithm's performance with respect to the fraction-of-explained-variance metric: given the p × k matrix V output by the algorithm and all the provided samples in a matrix X, the fraction of explained variance is defined as Tr(V^⊤ X X^⊤ V) / Tr(X X^⊤). To be consistent with our theory, for a dataset of n samples of dimension p, we set the number of blocks to T = ⌈log(p)⌉ and the size of blocks to B = ⌊n/T⌋ in our algorithm. The NIPS dataset is the smallest, with 1500 documents and 12K words, and allowed us to compare our algorithm with the optimal batch SVD. We had the two algorithms work in the document space (p = 1500) and report the results in Figure 1(c); the dashed line represents the optimum using B samples. The figure is consistent with our theoretical result: our algorithm performs as well as the batch method, with an added log(p) factor in the sample complexity. Finally, in Figure 1(d), we show our algorithm's ability to tackle very large problems. Both the NY Times and PubMed datasets are of prohibitive size for traditional batch methods (the latter comprises 8.2 million documents over a vocabulary of 141 thousand words), so we report only the performance of Algorithm 1. It was able to extract the top 7 components for each dataset in a few hours on a desktop computer. A second pass was made over the data to evaluate the results, and we saw 7-10 percent of the variance explained on spaces with p > 10^4.

References
Arora, R., Cotter, A., Livescu, K., and Srebro, N. Stochastic optimization for PCA and PLS. In 50th Allerton Conference on Communication, Control, and Computing, Monticello, IL, 2012.
Balzano, L., Nowak, R., and Recht, B. Online identification and tracking of subspaces from highly incomplete information. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pp. 704–711, 2010.
Brand, M.
Fast low-rank modifications of the thin singular value decomposition. Linear Algebra and its Applications, 415(1):20–30, 2006.
Brand, M. Incremental singular value decomposition of uncertain data with missing values. Computer Vision - ECCV 2002, pp. 707–720, 2002.
Clarkson, K. L. and Woodruff, D. P. Numerical linear algebra in the streaming model. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pp. 205–214, 2009.
Comon, P. and Golub, G. H. Tracking a few extreme singular values and vectors in signal processing. Proceedings of the IEEE, 78(8):1327–1343, 1990.
Golub, G. H. and Van Loan, C. F. Matrix Computations, volume 3. JHUP, 2012.
Halko, N., Martinsson, P.-G., and Tropp, J. A. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
He, J., Balzano, L., and Lui, J. Online robust subspace tracking from partial information. arXiv preprint arXiv:1109.3827, 2011.
Herbster, M. and Warmuth, M. K. Tracking the best linear predictor. The Journal of Machine Learning Research, 1:281–309, 2001.
Johnstone, I. M. On the distribution of the largest eigenvalue in principal components analysis. The Annals of Statistics, 29(2):295–327, 2001.
Li, Y. On incremental and robust subspace learning. Pattern Recognition, 37(7):1509–1518, 2004.
Mitliagkas, I., Caramanis, C., and Jain, P. Memory limited, streaming PCA. arXiv preprint arXiv:1307.0032, 2013.
Nadler, B. Finite sample approximation results for principal component analysis: a matrix perturbation approach. The Annals of Statistics, pp. 2791–2817, 2008.
Porteous, I., Newman, D., Ihler, A., Asuncion, A., Smyth, P., and Welling, M. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 569–577, 2008.
Robbins, H. and Monro, S. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407, 1951.
Roweis, S. EM algorithms for PCA and SPCA. Advances in Neural Information Processing Systems, pp. 626–632, 1998.
Rudelson, M. and Vershynin, R. Smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics, 62(12):1707–1739, 2009.
Tipping, M. E. and Bishop, C. M. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999.
Vershynin, R. How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability, pp. 1–32, 2010a.
Vershynin, R. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010b.
Warmuth, M. K. and Kuzmin, D. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9:2287–2320, 2008.
Σ-Optimality for Active Learning on Gaussian Random Fields

Yifei Ma, Machine Learning Department, Carnegie Mellon University, yifeim@cs.cmu.edu
Roman Garnett, Computer Science Department, University of Bonn, rgarnett@uni-bonn.de
Jeff Schneider, Robotics Institute, Carnegie Mellon University, schneide@cs.cmu.edu

Abstract
A common classifier for unlabeled nodes on undirected graphs uses label propagation from the labeled nodes, equivalent to the harmonic predictor on Gaussian random fields (GRFs). For active learning on GRFs, the commonly used V-optimality criterion queries nodes that reduce the L2 (regression) loss. V-optimality satisfies a submodularity property showing that greedy reduction produces a solution within a (1 − 1/e) factor of the global optimum. However, L2 loss may not characterize the true nature of 0/1 loss in classification problems and thus may not be the best choice for active learning. We consider a new criterion we call Σ-optimality, which queries the node that minimizes the sum of the elements in the predictive covariance. Σ-optimality directly optimizes the risk of the surveying problem, which is to determine the proportion of nodes belonging to one class. In this paper we extend submodularity guarantees from V-optimality to Σ-optimality using properties specific to GRFs. We further show that GRFs satisfy the suppressor-free condition in addition to the conditional independence inherited from Markov random fields. We test Σ-optimality on real-world graphs with both synthetic and real data and show that it outperforms V-optimality and other related methods on classification.

1 Introduction
Real-world data are often presented as a graph whose nodes bear labels that vary smoothly along edges. For example, for scientific publications, the content of one paper is highly correlated with the content of the papers it references or is referenced by; the field of interest of a scholar is highly correlated with those of the scholars s/he coauthors with; etc.
Many of these networks can be described by an undirected graph with nonnegative edge weights set to the strengths of the connections between nodes. The model for label prediction in this paper is the harmonic function on the Gaussian random field (GRF) by Zhu et al. (2003). It generalizes two popular and intuitive algorithms: label propagation (Zhu & Ghahramani, 2002) and random walk with absorptions (Wu et al., 2012). GRFs can be seen as a Gaussian process (GP) (Rasmussen & Williams, 2006) whose (possibly improper) prior covariance matrix has (pseudo)inverse set to be the graph Laplacian. Like other learning problems, labels may be insufficient and expensive to gather, especially if one wants to discover a new phenomenon on a graph. Active learning addresses these issues by making automated decisions on which nodes to query for labels from experts or the crowd. Some popular criteria are empirical risk minimization (Settles, 2010; Zhu et al., 2003), mutual information gain (Krause et al., 2008), and V-optimality (Ji & Han, 2012). Here we consider an alternative criterion, Σ-optimality, and establish several related theoretical results. Namely, we show that greedy reduction of Σ-optimality provides a (1 − 1/e) approximation bound to the global optimum. We also show that Gaussian random fields satisfy the suppressor-free condition, described below. Finally, we show that Σ-optimality outperforms other approaches for active learning with GRFs for classification.

1.1 V-optimality on Gaussian Random Fields
Ji & Han (2012) proposed greedy variance minimization as a cheap and high-profile surrogate active classification criterion. To decide which node to query next, the active learning algorithm finds the unlabeled node which leads to the smallest average predictive variance on all other unlabeled nodes. This corresponds to standard V-optimality in optimal experiment design. We will discuss several aspects of V-optimality on GRFs below: 1.
The motivation behind V-optimality can be paraphrased as expected risk minimization with the L2 surrogate loss (Section 2.1). 2. The greedy solution to the set optimization problem in V-optimality is comparable to the global solution up to a constant (Theorem 1). 3. The greedy application of V-optimality can also be interpreted as a heuristic which selects nodes that have high correlation to nodes with high variances (Observation 4).

Some previous work is related to point 2 above. Nemhauser et al. (1978) show that any submodular, monotone and normalized set function yields a (1 − 1/e) global optimality guarantee for greedy solutions. Our proof technique coincides with Friedland & Gaubert (2011) in principle, but we are not restricted to spectral functions. Krause et al. (2008) showed a counterexample where the V-optimality objective function with GP models does not satisfy submodularity.

1.2 Σ-optimality on Gaussian Random Fields
We define Σ-optimality on GRFs to be another variance minimization criterion, one that minimizes the sum of all entries in the predictive covariance matrix. As we will show in Lemma 7, the predictive covariance matrix is nonnegative entry-wise, so the definition is proper. Σ-optimality was originally proposed by Garnett et al. (2012) in the context of active surveying, which is to determine the proportion of nodes belonging to one class. Here, however, we focus on its performance as a criterion in active classification heuristics. The survey risk of Σ-optimality replaces the L2 risk of V-optimality as an alternative surrogate for the 0/1 risk. We also prove that the greedy application of Σ-optimality has a theoretical bound similar to V-optimality's, and we will show that greedily minimizing Σ-optimality empirically outperforms greedily minimizing V-optimality on classification problems.
The exact reason for the superiority of Σ-optimality as a surrogate loss in the GRF model is still an open question, but we observe that Σ-optimality tends to select cluster centers whereas V-optimality goes after outliers (Section 3.1). Finally, the greedy application of both Σ-optimality and V-optimality needs O(N) time per query-candidate evaluation after a one-time inversion of an N × N matrix.

1.3 GRFs Are Suppressor Free
In linear regression, an explanatory variable is called a suppressor if adding it as a new variable enhances correlations between the old variables and the dependent variable (Walker, 2003; Das & Kempe, 2008). Suppressors are persistent in real-world data. We show GRFs to be suppressor-free. Intuitively, this means that with more labels acquired, the conditional correlation between unlabeled nodes decreases, even before their Markov blanket has formed. That GRFs present natural examples of the otherwise obscure suppressor-free condition is interesting in itself.

2 Learning Model & Active Learning Objectives
We use the Gaussian random field/label propagation (GRF/LP) model as our learning model. Suppose the dataset can be represented as a connected undirected graph G = (V, E), where each node has an (either known or unknown) label and each edge e_ij has a fixed nonnegative weight w_ij (= w_ji) that reflects the proximity, similarity, etc. of nodes v_i and v_j. Define the graph Laplacian of G to be L = diag(W1) − W, i.e., l_ii = Σ_j w_ij and l_ij = −w_ij when i ≠ j. Let L_δ = L + δI be the generalized Laplacian obtained by adding self-loops. In the following, we will write L to also encompass βL_δ for hyper-parameters β > 0 and δ ≥ 0.

The binary GRF is a Bayesian model that generates y_i ∈ {0, +1} for every node v_i according to

p(y) ∝ exp{ −(β/2) [ Σ_{i,j} w_ij (y_i − y_j)^2 + δ Σ_i y_i^2 ] } = exp( −(1/2) y^⊤ L y ). (2.1)

Suppose nodes ℓ = {v_{ℓ1}, . . . , v_{ℓ|ℓ|}} are labeled as y_ℓ = (y_{ℓ1}, . . . , y_{ℓ|ℓ|})^⊤. A GRF infers the output distribution on the unlabeled nodes, y_u = (y_{u1}, . . .
, y_{u|u|})^⊤, via the conditional distribution given y_ℓ:

Pr(y_u | y_ℓ) = N( ŷ_u, L_u^{−1} ) = N( ŷ_u, L_{(v−ℓ)}^{−1} ), (2.2)

where ŷ_u = −L_u^{−1} L_{uℓ} y_ℓ is the vector of predictive means on the unlabeled nodes and L_u is the principal submatrix consisting of the unlabeled rows and columns of L, that is, the lower-right block of

L = [ L_ℓ  L_{ℓu} ; L_{uℓ}  L_u ].

By convention, L_{(v−ℓ)}^{−1} means the inverse of the principal submatrix. We use L_{(v−ℓ)} and L_u interchangeably because ℓ and u partition the set of all nodes v. Finally, the GRF, or GRF/LP, is a relaxation of the binary GRF to continuous outputs, because the latter is computationally intractable even for a priori generation. LP stands for label propagation, because the predictive mean on a node is the probability that a random walk leaving that node hits a positive label before hitting a zero label. For multi-class problems, Zhu et al. (2003) proposed the harmonic predictor, which looks at predictive means in one-versus-all comparisons.

Remark: An alternative approximation to the binary GRF is the GRF-sigmoid model, which draws the binary outputs from Bernoulli distributions with means set to the sigmoid of the GRF (latent) variables. However, this alternative is very slow to compute and may not be compatible with the theoretical results in this paper.

2.1 Active Learning Objective 1: L2 Risk Minimization (V-Optimality)
Since in GRFs regression responses are taken directly as probability predictions, it is computationally and analytically more convenient to apply the regression loss directly in the GRF, as in Ji & Han (2012). Taking the L2 loss as our classification loss, the risk function, whose input variable is the labeled subset ℓ, is:

R_V(ℓ) = E_{y_ℓ y_u} [ Σ_{u_i∈u} (y_{u_i} − ŷ_{u_i})^2 ] = E[ E[ Σ_{u_i∈u} (y_{u_i} − ŷ_{u_i})^2 | y_ℓ ] ] = tr(L_u^{−1}). (2.3)

This risk is written with a subscript V because minimizing (2.3) is also the V-optimality criterion, which minimizes the mean prediction variance in active learning.
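As a small worked example of the quantities just defined, the following sketch computes the harmonic prediction (2.2) and the two variance-based risks on a 4-node path graph. The graph, the labeled set, and δ = 0.05 are made-up illustration values, not from the paper.

```python
import numpy as np

# Path graph 0 - 1 - 2 - 3 with unit weights; generalized Laplacian L_delta.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W + 0.05 * np.eye(4)

ell, u = [0, 3], [1, 2]              # labeled and unlabeled nodes
y_ell = np.array([0., 1.])           # node 0 labeled 0, node 3 labeled 1

L_u    = L[np.ix_(u, u)]
L_uell = L[np.ix_(u, ell)]
y_hat = -np.linalg.solve(L_u, L_uell @ y_ell)   # predictive mean, eq. (2.2)
C_u = np.linalg.inv(L_u)                        # predictive covariance
risk_V = np.trace(C_u)                          # V-optimality risk, tr(L_u^-1)
risk_Sigma = np.ones(2) @ C_u @ np.ones(2)      # survey risk, 1^T L_u^-1 1
print(y_hat)   # approximately [0.312, 0.640]: interpolates along the path
```

The prediction stays in [0, 1] and increases monotonically from the 0-labeled end of the path to the 1-labeled end, as Corollary 8 later guarantees.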
In active learning, we strive to select a subset ℓ of nodes to query for labels, constrained by a given budget C, such that the risk is minimized. Formally,

arg min_{ℓ: |ℓ|≤C} R(ℓ) = R_V(ℓ) = tr( L_{(v−ℓ)}^{−1} ). (2.4)

2.2 Active Learning Objective 2: Survey Risk Minimization (Σ-Optimality)
Another objective building on the GRF model (2.2) is to determine the proportion of nodes belonging to class 1, as would happen when performing a survey. For active surveying, the risk would be

R_Σ(ℓ) = E_{y_ℓ y_u} [ ( Σ_{u_i∈u} y_{u_i} − Σ_{u_i∈u} ŷ_{u_i} )^2 ] = E[ E[ (1^⊤ y_u − 1^⊤ ŷ_u)^2 | y_ℓ ] ] = 1^⊤ L_u^{−1} 1, (2.5)

which could substitute for the risk R(ℓ) in (2.4) and yield another heuristic for selecting nodes in batch active learning. We will refer to this modified optimization objective as the Σ-optimality heuristic:

arg min_{ℓ: |ℓ|≤C} R(ℓ) = R_Σ(ℓ) = 1^⊤ L_{(v−ℓ)}^{−1} 1. (2.6)

Further, we will also consider the application of Σ-optimality in active classification, because (2.6) is another metric of the predictive variance. Surprisingly, although both (2.3) and (2.5) are approximations of the real objective (the 0/1 risk), greedy reduction of the Σ-optimality criterion outperforms greedy reduction of the V-optimality criterion in active classification (Sections 3.1 and 5.1), as well as several other methods including expected error reduction.

2.3 Greedy Sequential Application of V/Σ-Optimality
Both (2.4) and (2.6) are subset optimization problems, and computing the global optimum may be intractable. As will be shown in the theoretical results, the reduction of both risks is a submodular set function, and the greedy sequential update algorithm yields a solution with a guaranteed approximation ratio to the optimum (Theorem 1). At the k-th query decision, denote the covariance matrix conditioned on the previous (k − 1) queries by C = ( L_{(v−ℓ^{(k−1)})} )^{−1}.
By Schur's lemma (or the GP regression update rule), the one-step look-ahead covariance matrix conditioned on ℓ^{(k−1)} ∪ {v}, denoted C′ = ( L_{(v−(ℓ^{(k−1)}∪{v}))} )^{−1}, satisfies the update formula

[ C′ 0 ; 0 0 ] = C − (1/C_{vv}) C_{:v} C_{v:}, (2.7)

where, without loss of generality, v was positioned as the last node. Further writing C_{ij} = ρ_{ij} σ_i σ_j, we can substitute (2.7) into R_Σ(·) and R_V(·) to obtain the following equivalent criteria:

V-optimality: v_*^{(k)} = arg max_{v∈u} Σ_{t∈u} (C_{vt})^2 / C_{vv} = arg max_{v∈u} Σ_{t∈u} ρ_{vt}^2 σ_t^2, (2.8)
Σ-optimality: v_*^{(k)} = arg max_{v∈u} ( Σ_{t∈u} C_{vt} )^2 / C_{vv} = arg max_{v∈u} ( Σ_{t∈u} ρ_{vt} σ_t )^2. (2.9)

3 Theoretical Results & Insights
For the general GP model, greedy optimization of the L2 risk has no guarantee that the solution is comparable to the brute-force global optimum (which takes exponential time to compute), because the objective function, the trace of the predictive covariance matrix, fails to satisfy submodularity in all cases (Krause et al., 2008). However, in the special case of GPs with kernel matrix equal to the inverse of a graph Laplacian (with ℓ ≠ ∅ or δ > 0), the GRF does provide such theoretical guarantees, both for V-optimality and for Σ-optimality. The latter is a novel result. The following theoretical results concern greedy maximization of the risk reduction function (which is shown to be submodular): R_∆(ℓ) = R(∅) − R(ℓ), for either R(·) = R_V(·) or R_Σ(·).

Theorem 1 (Near-optimal guarantee for greedy applications of V/Σ-optimality). In risk reduction,

R_∆(ℓ^g) ≥ (1 − 1/e) · R_∆(ℓ^*), (3.1)

where R_∆(ℓ) = R(∅) − R(ℓ) for either R(·) = R_V(·) or R_Σ(·), e is Euler's number, ℓ^g is the greedy optimizer, and ℓ^* is the true global optimizer under the constraint |ℓ^*| ≤ |ℓ^g|.

By Nemhauser et al. (1978), it suffices to show the following properties of R_∆(ℓ):

Lemma 2 (Normalization, Monotonicity, and Submodularity). For all ℓ_1 ⊂ ℓ_2 ⊂ v and v ∈ v,
R_∆(∅) = 0, (3.2)
R_∆(ℓ_2) ≥ R_∆(ℓ_1), (3.3)
R_∆(ℓ_1 ∪ {v}) − R_∆(ℓ_1) ≥ R_∆(ℓ_2 ∪ {v}) − R_∆(ℓ_2).
(3.4)

Another sufficient condition for Theorem 1, which is itself an interesting observation, is the suppressor-free condition. Walker (2003) describes a suppressor as a variable, knowing which will suddenly create a strong correlation between the predictors. An example is y_i + y_j = y_k: knowing any one of these will create correlations between the others. Walker further states that suppressors are common in regression problems. Das & Kempe (2008) extend the suppressor-free condition to sets and show that this condition is sufficient to prove (3.1). Formally, the condition is:

corr(y_i, y_j | ℓ_1 ∪ ℓ_2) ≤ corr(y_i, y_j | ℓ_1), ∀ v_i, v_j ∈ v, ∀ ℓ_1, ℓ_2 ⊂ v. (3.5)

It may be easier to understand (3.5) as a decreasing-correlation property. It is well known for Markov random fields that the labels of two nodes on a graph become independent given the labels of their Markov blanket. Here we establish that the GRF boasts more than that: the correlation between any two nodes decreases as more nodes get labeled, even before a Markov blanket is formed. Formally:

Theorem 3 (Suppressor-Free Condition). (3.5) holds for pairs of nodes in the GRF model. Note that since the conditional covariance of the GRF model is L_{(v−ℓ)}^{−1}, we can properly define the corresponding conditional correlation to be

corr(y_u | ℓ) = D^{−1/2} L_{(v−ℓ)}^{−1} D^{−1/2}, with D = diag( L_{(v−ℓ)}^{−1} ). (3.6)

3.1 Insights From Comparing the Greedy Applications of the Σ/V-Optimality Criteria
Both V- and Σ-optimality are approximations to the 0/1 risk minimization objective. Unfortunately, we cannot theoretically explain why greedy Σ-optimality outperforms V-optimality in the experiments. However, we made two observations during our investigation that provide some insight; an illustrative toy example is also provided in Section 5.1.

Observation 4. Eqs. (2.8) and (2.9) suggest that greedy Σ- and V-optimality both select nodes that (1) have high variance and (2) are highly correlated to high-variance nodes, conditioned on the labeled nodes.
Notice that Lemma 7 proves that predictive correlations are always nonnegative. To contrast Σ- with V-optimality, rewrite (2.9) as:

(Σ-optimality): arg max_{v∈u} ( Σ_{t∈u} ρ_{vt} σ_t )^2 = Σ_{t∈u} ρ_{vt}^2 σ_t^2 + Σ_{t_1≠t_2∈u} ρ_{vt_1} ρ_{vt_2} σ_{t_1} σ_{t_2}. (3.7)

Observation 5. Σ-optimality has an extra term involving the cross products of (ρ_{vt_1} σ_{t_1}) and (ρ_{vt_2} σ_{t_2}) (which are nonnegative according to Lemma 9). By the Cauchy-Schwarz inequality, the sum of these cross products is maximized when they are all equal. So Σ-optimality additionally favors nodes that (3) have consistent global influence, i.e., that are more likely to be in cluster centers.

4 Proof Sketches
Our results predicate on, and extend to, GPs whose inverse covariance matrix meets Proposition 6.

Proposition 6. L satisfies the following.1
p6.1 (L has proper signs): l_ij ≥ 0 if i = j and l_ij ≤ 0 if i ≠ j.
p6.2 (L is undirected and connected): l_ij = l_ji for all i, j, and Σ_{j≠i} (−l_ij) > 0.
p6.3 (node degree no less than total incident edge weight): l_ii ≥ Σ_{j≠i} (−l_ij) = Σ_{j≠i} (−l_ji) > 0, ∀i.
p6.4 (L is nonsingular and positive-definite): ∃i : l_ii > Σ_{j≠i} (−l_ij) = Σ_{j≠i} (−l_ji) > 0.

Although the properties of V-optimality fall into the more general class of spectral functions (Friedland & Gaubert, 2011), we have seen no proof of either the suppressor-free condition or the submodularity of Σ-optimality on GRFs. We sketch the ideas behind the proofs below; details are in the appendix.2

Lemma 7. For any L satisfying (p6.1-4), L^{−1} ≥ 0 entry-wise.3

Proof. Sketch: Suppose L = D − W = D(I − D^{−1}W), with D = diag(L). Then we can show the convergence of the Taylor expansion (Appendix A.1):

L^{−1} = [ I + Σ_{r=1}^{∞} (D^{−1}W)^r ] D^{−1}. (4.1)

It suffices to observe that every term on the right-hand side (RHS) is nonnegative.

Corollary 8. The GRF prediction operator y_ℓ ↦ −L_u^{−1} L_{uℓ} y_ℓ maps y_ℓ ∈ [0, 1]^{|ℓ|} to ŷ_u = −L_u^{−1} L_{uℓ} y_ℓ ∈ [0, 1]^{|u|}. When L is singular, the mapping is onto.

1 Property p6.4 holds after the first query is made or when the regularizer δ > 0 in (2.1).
²Available at http://www.autonlab.org/autonweb/21763.html
³In the following, for any vector or matrix A, A ≥ 0 always stands for A being (entry-wise) nonnegative.

Proof. For y_ℓ = 1, (L_u, L_ul) · 1 ≥ 0 and L_u^{-1} ≥ 0 imply (I, L_u^{-1}L_ul) · 1 ≥ 0, i.e. 1 ≥ −L_u^{-1}L_ul 1 = ŷ_u. As both L_u^{-1} ≥ 0 and −L_ul ≥ 0, we have y_ℓ ≥ 0 ⇒ ŷ_u ≥ 0 and y_ℓ ≥ y'_ℓ ⇒ ŷ_u ≥ ŷ'_u.

Lemma 9. Suppose L = [[L_11, L_12], [L_21, L_22]]. Then L^{-1} − [[L_11^{-1}, 0], [0, 0]] ≥ 0 and is positive-semidefinite.

Proof. As L^{-1} ≥ 0 and is PSD, the RHS below is term-wise nonnegative and the middle term PSD (Appendix A.2):

L^{-1} − [[L_11^{-1}, 0], [0, 0]] = [[L_11^{-1}(−L_12)], [I]] (L_22 − L_21 L_11^{-1} L_12)^{-1} [(−L_21)L_11^{-1}, I].

As a corollary, the monotonicity in (3.3) for both R(·) = R_V(·) and R(·) = R_Σ(·) can be shown. Both proofs of the submodularity in (3.4) and of Theorem 3 result from more careful execution of matrix inversions similar to Lemma 9 (detailed in Appendix A.4). We sketch Theorem 3 as an example.

Proof. Without loss of generality, let u = v − ℓ = {1, . . . , k}. By Schur's lemma (Appendix A.3):

L_{(v−ℓ)} := [[A_u, b_u], [b_u^T, c_u]] ⇒ Cov(y_i, y_k | ℓ)/Var(y_k | ℓ) = (L^{-1}_{(v−ℓ)})_{ik} / (L^{-1}_{(v−ℓ)})_{kk} = (A_u^{-1}(−b_u))_i, ∀i ≠ k, (4.2)

where the LHS is a reparametrization with c_u being a scalar. Lemma 9 shows that u_1 ⊃ u_2 ⇒ A_{u_1}^{-1} ≥ A_{u_2}^{-1} at corresponding entries. Also notice that −b_{u_1} ≥ −b_{u_2} at corresponding entries, and so the RHS of (4.2) is larger with u_1. It suffices to draw a similar inequality in the other direction, Cov(y_k, y_i | ℓ)/Var(y_i | ℓ).

5 A Toy Example and Some Simulations

5.1 Comparing V-Optimality and Σ-Optimality: Active Node Classification on a Graph

Figure 1: Toy graph demonstrating the behavior of Σ-optimality vs. V-optimality (legend: class 1, class 2, class 3; Σ-optimality; V-optimality).

To visualize the intuitions described in Section 3.1, Figure 1 shows the first few nodes selected by the different optimality criteria. This graph was constructed by a breadth-first search from a random node in a larger DBLP coauthorship network graph that we will introduce in the next section.
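The two greedy rules being compared (stated precisely at the start of Section 6) can be sketched directly, if inefficiently, by re-inverting the reduced Laplacian for each candidate query. The toy graph below is our own illustration, not the one in Figure 1.

```python
import numpy as np

def greedy_select(L, n_queries, score):
    """Pick nodes one at a time, minimizing `score` applied to the conditional
    covariance (L restricted to the remaining unlabeled nodes, inverted)."""
    u = list(range(len(L)))
    picked = []
    for _ in range(n_queries):
        def risk(v):
            rest = [w for w in u if w != v]
            return score(np.linalg.inv(L[np.ix_(rest, rest)]))
        best = min(u, key=risk)
        picked.append(best)
        u.remove(best)
    return picked

sigma_score = lambda C: C.sum()        # Sigma-optimality: 1^T C 1
v_score = lambda C: np.trace(C)        # V-optimality: tr(C)

# Two triangles {0,1,2} and {3,4,5} joined through a bridge node 6.
W = np.zeros((7, 7))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 6), (6, 3)]:
    W[i, j] = W[j, i] = 1.0
L = 0.01 * np.eye(7) + np.diag(W.sum(1)) - W

print("Sigma-optimality picks:", greedy_select(L, 3, sigma_score))
print("V-optimality picks:    ", greedy_select(L, 3, v_score))
```

The paper's implementation would use rank-one updates rather than full inverses; this O(n⁴) version is only meant to make the two objectives concrete.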
On this toy graph, both criteria pick the same center node for the first query. For the second and third queries, however, V-optimality weighs the uncertainty of the candidate node more heavily, choosing outliers, whereas Σ-optimality favors nodes with universal influence over the graph and goes to cluster centers.

5.2 Simulating Labels on a Graph

To further investigate the behavior of Σ- and V-optimality, we conducted experiments on synthetic labels generated on real-world network graphs. The node labels were first simulated using the model, in order to compare the active learning criteria directly without raising questions of model fit. We carry out tests on the same graphs with real data in the next section. We simulated the binary labels with the GRF-sigmoid model and performed active learning with the GRF/LP model for predictions. The parameters in the generation phase were β = 0.01 and δ = 0.05, which maximize the increase in average classification accuracy from 50 to 200 random training nodes when using the GRF/LP model for predictions. Figure 2 shows the binary classification accuracy versus the number of queries on both the DBLP coauthorship graph

Figure 2: Simulating binary labels by the GRF-sigmoid; learning with the GRF/LP, 480 repetitions. Each panel plots classification accuracy against the number of queries for Σ-optimality, V-optimality, and random selection. (a) DBLP coauthorship, 68.3% LOO accuracy. (b) CORA citation, 60.5% LOO accuracy.
Figure 2 can be a surprise due to the reasoning behind the L2 surrogate loss, especially when the predictive means are trapped between [−1, 1], but we see here that our reasoning in Sections (3.1 and 5.1) can lead to the greedy survey loss actually making a better active learning objective. We have also performed experiments with different values of β and δ. Despite the fact that larger β and δ increase label independence on the graph structure and undermine the effectiveness of both V/Σ-optimality heuristics, we have seen that whenever the V-optimality establishes a superiority over random selections, Σ-optimality yields better performance. 6 Real-World Experiments The active learning heuristics to be compared are:4 1. The new Σ-optimality with greedy sequential updates: minv′ 1⊤(Luk\{v′})−11 . 2. Greedy V-optimality (Ji & Han, 2012): minv′ tr (Luk\{v′})−1 . 3. Mutual information gain (MIG) (Krause et al., 2008): maxv′ L−1 uk v′,v′ (Lℓk∪{v′})−1 v′,v′ 4. Uncertainty sampling (US) picking the largest prediction margin: maxv′ ˆy(1) v′ −ˆy(2) v′ . 5. Expected error reduction (EER) (Settles, 2010; Zhu et al., 2003). Selected nodes maximize the average prediction confidence in expectation: maxv′ Eyv′ hP ui∈uˆy(1) ui yv′ yℓk i . 6. Random selection with 12 repetitions. Comparisons are made on three real-world network graphs. 1. DBLP coauthorship network.5 The nodes represent scholars and the weighted edges are the number of papers bearing both scholars’ names. The largest connected component has 1711 nodes and 2898 edges. The node labels were hand assigned in Ji & Han (2012) to one of the four expertise areas of the scholars: machine learning, data mining, information retrieval, and databases. Each class has around 400 nodes. 2. 
Cora citation network.⁶ This is a citation graph of 2708 publications, each of which is classified into one of seven classes: case based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, and theory. The network has 5429 links. We took its largest connected component, with 2485 nodes and 5069 undirected and unweighted edges.

⁴Code available at http://www.autonlab.org/autonweb/21763
⁵http://www.informatik.uni-trier.de/~ley/db/
⁶http://www.cs.umd.edu/projects/linqs/projects/lbc/index.html

Figure 3: Classification accuracy vs. the number of queries for Σ-optimality, V-optimality, random selection, MIG, uncertainty sampling, and EER; β = 1, δ = 0; randomized first query. (a) DBLP, 84% LOO accuracy. (b) CORA, 86% LOO accuracy. (c) CITESEER, 76% LOO accuracy.

3. CiteSeer citation network.⁶ This is another citation graph, of 3312 publications, each of which is classified into one of six classes: agents, artificial intelligence, databases, information retrieval, machine learning, and human-computer interaction. The network has 4732 links. We took its largest connected component, with 2109 nodes and 3665 undirected and unweighted edges.

On all three datasets, Σ-optimality outperforms the other methods by a large margin, especially during the first five to ten queries. The runner-up, EER, catches up to Σ-optimality in some cases, but EER has no theoretical guarantees. The win of Σ-optimality over V-optimality was intuitively explained in Section 5.1: Σ-optimality has better exploration ability and robustness against outliers. The node choices made by both criteria were also visually inspected after embedding the graph into two-dimensional space using the OpenOrd method of Martin et al. (2011). The analysis there was similar to Figure 1.
We also performed real-world experiments on the root-mean-square error of the class-proportion estimates, which is the survey risk that Σ-optimality minimizes. Σ-optimality beats V-optimality; details are omitted for reasons of space.

7 Conclusion

For active learning on GRFs, it is common to use variance-minimization criteria with greedy one-step lookahead heuristics. V-optimality and Σ-optimality are two such criteria based on statistics of the predictive covariance matrix. Both are also risk-minimization criteria: V-optimality minimizes the L2 risk (2.3), whereas Σ-optimality minimizes the survey risk (2.5). Active learning with either criterion can be seen as a subset optimization problem, (2.4) or (2.6). Both objective functions are supermodular set functions; therefore risk reduction is submodular, and the greedy one-step lookahead heuristics achieve a (1 − 1/e) global optimality ratio. Moreover, we have shown that GRFs serve as a tangible example of the suppressor-free condition.

While V-optimality on GRFs inherits from label propagation (and random walks with absorption) and has good empirical performance, it does not directly minimize the 0/1 classification risk. We found that Σ-optimality performs even better; the intuition is described in Section 5.1. Future work includes a deeper understanding of the direct motivations behind Σ-optimality on the GRF classification model, and extending the GRF to continuous spaces.

Acknowledgments

This work is funded in part by NSF grant IIS0911032 and DARPA grant FA87501220324.

References

Das, Abhimanyu and Kempe, David. Algorithms for subset selection in linear regression. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pp. 45-54. ACM, 2008.

Friedland, S. and Gaubert, S. Submodular spectral functions of principal submatrices of a Hermitian matrix, extensions and applications. Linear Algebra and its Applications, 2011.
Garnett, Roman, Krishnamurthy, Yamuna, Xiong, Xuehan, Schneider, Jeff, and Mann, Richard. Bayesian optimal active search and surveying. In ICML, 2012.

Ji, Ming and Han, Jiawei. A variance minimization criterion to active learning on graphs. In AISTATS, 2012.

Krause, Andreas, Singh, Ajit, and Guestrin, Carlos. Near-optimal sensor placements in Gaussian processes: theory, efficient algorithms and empirical studies. Journal of Machine Learning Research (JMLR), 9:235-284, February 2008.

Martin, Shawn, Brown, W. Michael, Klavans, Richard, and Boyack, Kevin W. OpenOrd: an open-source toolbox for large graph layout. In IS&T/SPIE Electronic Imaging, pp. 786806-786806. International Society for Optics and Photonics, 2011.

Nemhauser, George L., Wolsey, Laurence A., and Fisher, Marshall L. An analysis of approximations for maximizing submodular set functions I. Mathematical Programming, 14(1):265-294, 1978.

Rasmussen, Carl Edward and Williams, Christopher KI. Gaussian Processes for Machine Learning, volume 1. MIT Press, Cambridge, MA, 2006.

Settles, Burr. Active learning literature survey. University of Wisconsin, Madison, 2010.

Walker, David A. Suppressor variable(s) importance within a regression model: an example of salary compression from career services. Journal of College Student Development, 44(1):127-133, 2003.

Wu, Xiao-Ming, Li, Zhenguo, So, Anthony Man-Cho, Wright, John, and Chang, Shih-Fu. Learning with partially absorbing random walks. In Advances in Neural Information Processing Systems 25, pp. 3086-3094, 2012.

Zhu, Xiaojin and Ghahramani, Zoubin. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.

Zhu, Xiaojin, Lafferty, John, and Ghahramani, Zoubin. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. In ICML 2003 Workshop on The Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, pp. 58-65, 2003.
Recurrent linear models of simultaneously-recorded neural populations Marius Pachitariu, Biljana Petreska, Maneesh Sahani Gatsby Computational Neuroscience Unit University College London, UK {marius,biljana,maneesh}@gatsby.ucl.ac.uk Abstract Population neural recordings with long-range temporal structure are often best understood in terms of a common underlying low-dimensional dynamical process. Advances in recording technology provide access to an ever-larger fraction of the population, but the standard computational approaches available to identify the collective dynamics scale poorly with the size of the dataset. We describe a new, scalable approach to discovering low-dimensional dynamics that underlie simultaneously recorded spike trains from a neural population. We formulate the Recurrent Linear Model (RLM) by generalising the Kalman-filter-based likelihood calculation for latent linear dynamical systems to incorporate a generalised-linear observation process. We show that RLMs describe motor-cortical population data better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations. We also introduce the cascaded generalised-linear model (CGLM) to capture low-dimensional instantaneous correlations in neural populations. The CGLM describes the cortical recordings better than either Ising or Gaussian models and, like the RLM, can be fit exactly and quickly. The CGLM can also be seen as a generalisation of a low-rank Gaussian model, in this case factor analysis. The computational tractability of the RLM and CGLM allow both to scale to very high-dimensional neural data. 1 Introduction Many essential neural computations are implemented by large populations of neurons working in concert, and recent studies have sought both to monitor increasingly large groups of neurons [1, 2] and to characterise their collective behaviour [3, 4].
In this paper we introduce a new computational tool to model coordinated behaviour in very large neural data sets. While we explicitly discuss only multi-electrode extracellular recordings, the same model can be readily used to characterise 2-photon calcium-marker image data, EEG, fMRI or even large-scale biologically-faithful simulations. Population neural data may be represented at each time point by a vector y_t with as many dimensions as neurons, and as many indices t as time points in the experiment. For spiking neurons, y_t will have positive integer elements corresponding to the number of spikes fired by each neuron in the time interval corresponding to the t-th bin. As others have before [5, 6], we assume that the coordinated activity reflected in the measurement y_t arises from a low-dimensional set of processes, collected into a vector x_t, which is not directly observed. However, unlike the previous studies, we construct a recurrent model in which the hidden processes x_t are driven directly and explicitly by the measured neural signals y_1 . . . y_{t−1}. This assumption simplifies the estimation process. We assume for simplicity that x_t evolves with linear dynamics and affects the future state of the neural signal y_t in a generalised-linear manner, although both assumptions may be relaxed. As in the latent dynamical system, the resulting model enforces a "bottleneck", whereby predictions of y_t based on y_1 . . . y_{t−1} must be carried by the low-dimensional x_t.

State prediction in the RLM is related to the Kalman filter [7], and we show in the next section a formal equivalence between the likelihoods of the RLM and the latent dynamical model when observation noise is Gaussian distributed. However, spiking data is not well modelled as Gaussian, and the generalisation of our approach to Poisson noise leads to a departure from the latent dynamical approach.
Unlike latent linear models with conditionally Poisson observations, the parameters of our model can be estimated efficiently and without approximation. We show that, perhaps in consequence, the RLM can provide superior descriptions of neural population data.

2 From the Kalman filter to the recurrent linear model (RLM)

Consider a latent linear dynamical system (LDS) model with linear-Gaussian observations. Its graphical model is shown in Fig. 1A. The latent process is parametrised by a dynamics matrix A and innovations covariance Q that describe the evolution of the latent state x_t:

P(x_t|x_{t−1}) = N(x_t|Ax_{t−1}, Q),

where N(x|µ, Σ) represents a normal distribution on x with mean µ and (co)variance Σ. For brevity, we omit here and below the special case of the first time step, in which x_1 is drawn from a multivariate Gaussian. The output distribution is determined by an observation loading matrix C and a noise covariance R, often taken to be diagonal so that all covariance is modelled by the latent process:

P(y_t|x_t) = N(y_t|Cx_t, R).

In the LDS, the joint likelihood of the observations {y_t} can be written as the product

P(y_1 . . . y_T) = P(y_1) Π_{t=2}^{T} P(y_t|y_1 . . . y_{t−1}),

and in the Gaussian case can be computed using the usual Kalman filter approach to find the conditional distribution at time t iteratively:

P(y_{t+1}|y_1 . . . y_t) = ∫ dx_{t+1} P(y_{t+1}|x_{t+1}) P(x_{t+1}|y_1 . . . y_t)
 = ∫ dx_{t+1} N(y_{t+1}|Cx_{t+1}, R) N(x_{t+1}|Ax̂_t, V_{t+1})
 = N(y_{t+1}|CAx̂_t, CV_{t+1}C^⊤ + R),

where we have introduced the (filtered) state estimate x̂_t = E[x_t|y_1 . . . y_t] and (predictive) uncertainty V_{t+1} = E[(x_{t+1} − Ax̂_t)(x_{t+1} − Ax̂_t)^⊤|y_1 . . . y_t]. Both quantities are computed recursively using the Kalman gain K_t = V_tC^⊤(CV_tC^⊤ + R)^{−1}, giving the following recursive recipe to calculate the conditional likelihood of y_{t+1}:

x̂_t = Ax̂_{t−1} + K_t(y_t − ŷ_t)
V_{t+1} = A(I − K_tC)V_tA^⊤ + Q
ŷ_{t+1} = CAx̂_t
P(y_{t+1}|y_1 . . .
y_t) = N(y_{t+1}|ŷ_{t+1}, CV_{t+1}C^⊤ + R).

For the Gaussian LDS, the Kalman gain K_t and state uncertainty V_{t+1} (and thus the output covariance CV_{t+1}C^⊤ + R) depend on the model parameters (A, C, R, Q) and on the time step, although as time grows they both converge to stationary values. Neither depends on the observations. Thus, we might consider a relaxation of the Gaussian LDS model in which these matrices are taken to be stationary from the outset, and are parametrised independently so that they are no longer constrained to take on the "correct" values as computed for Kalman inference. Let us call this parametric form of the Kalman gain W and the parametric form of the output covariance S. Then the conditional likelihood iteration becomes

x̂_t = Ax̂_{t−1} + W(y_t − ŷ_t)
ŷ_{t+1} = CAx̂_t
P(y_{t+1}|y_1 . . . y_t) = N(y_{t+1}|ŷ_{t+1}, S).

Figure 1: Graphical representations of the latent linear dynamical system (LDS: A, B) and the recurrent linear model (RLM: C). Shaded variables are observed, unshaded circles are latent random variables, and squares are variables that depend deterministically on their parents. In B the LDS is redrawn in terms of the random innovations η_t = x_t − Ax_{t−1}, facilitating the transition towards the RLM. The RLM is then obtained by replacing η_t with a deterministically derived estimate W(y_t − ŷ_t).

The parameters of this new model are A, C, W and S. This is a relaxation of the Gaussian latent LDS model because W has more degrees of freedom than Q, as does S than R (at least if R is constrained to be diagonal). The new model has a recurrent linear structure in that the random observation y_t is fed back linearly to perturb the otherwise deterministic evolution of the state x̂_t. A graphical representation of this model is shown in Fig. 1C, along with a redrawn graph of the LDS model.
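The Gaussian recursion above can be verified numerically: the product of the one-step conditionals must equal the joint Gaussian likelihood of the whole sequence. The following sketch (toy dimensions and parameter values of our own choosing) runs the recursion exactly as written and checks it against the joint computed directly.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, T = 2, 3, 5
A = 0.8 * np.eye(nx) + 0.1 * rng.standard_normal((nx, nx))
C = rng.standard_normal((ny, nx))
Q, R, P1 = 0.2 * np.eye(nx), 0.5 * np.eye(ny), np.eye(nx)  # x_1 ~ N(0, P1)

# Simulate one sequence from the LDS.
x = np.zeros((T, nx)); y = np.zeros((T, ny))
x[0] = rng.multivariate_normal(np.zeros(nx), P1)
for t in range(T):
    if t > 0:
        x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(nx), Q)
    y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(ny), R)

def gauss_logpdf(r, S):
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (len(r) * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(S, r))

# Kalman recursion as in the text; V_t is the predictive uncertainty of x_t.
xf, V, ll = np.zeros(nx), P1.copy(), 0.0
for t in range(T):
    ypred = C @ (A @ xf) if t > 0 else np.zeros(ny)   # \hat{y}_t (zero prior mean)
    S = C @ V @ C.T + R
    ll += gauss_logpdf(y[t] - ypred, S)
    K = V @ C.T @ np.linalg.inv(S)                    # Kalman gain K_t
    xf = (A @ xf if t > 0 else np.zeros(nx)) + K @ (y[t] - ypred)
    V = A @ (np.eye(nx) - K @ C) @ V @ A.T + Q        # V_{t+1}

# Cross-check against the joint Gaussian over all T observations.
P = [[None] * T for _ in range(T)]
P[0][0] = P1
for t in range(1, T):
    P[t][t] = A @ P[t - 1][t - 1] @ A.T + Q
for s in range(T):
    for t in range(s + 1, T):
        P[s][t] = P[s][s] @ np.linalg.matrix_power(A, t - s).T
        P[t][s] = P[s][t].T
Sy = np.block([[C @ P[s][t] @ C.T + (R if s == t else np.zeros((ny, ny)))
                for t in range(T)] for s in range(T)])
assert np.isclose(ll, gauss_logpdf(y.ravel(), Sy))
```

The same check fails once K_t and the output covariance are replaced by freely parametrised W and S, which is exactly the relaxation the RLM makes.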
The RLM can be viewed as replacing the random innovation variables η_t = x_t − Ax_{t−1} with data-derived estimates W(y_t − ŷ_t), estimates made possible by the fact that η_t contributes to the variability of y_t around ŷ_t.

3 Recurrent linear models with Poisson observations

The discussion above has transformed a stochastic-latent LDS model with Gaussian output into an RLM with a deterministic latent state, but still with Gaussian output. Our goal, however, is to fit a model with an output distribution better suited to the binned point processes that characterise neural spiking. Both the linear Kalman-filtering steps above and the eventual stationarity of the inference parameters depend on the joint Gaussian structure of the assumed LDS model. They would not apply if we were to begin a similar derivation from an LDS with Poisson output. However, a tractable approach to modelling point-process data with low-dimensional temporal structure may be provided by introducing a generalised-linear output stage directly into the RLM. This model is given by:

x̂_t = Ax̂_{t−1} + W(y_t − ŷ_t)
g(ŷ_{t+1}) = CAx̂_t        (1)
P(y_{t+1}|y_1 . . . y_t) = ExpFam(y_{t+1}|ŷ_{t+1}),

where ExpFam is an exponential-family distribution such as Poisson, and the element-wise link function g allows for a nonlinear mapping from x_t to the predicted mean ŷ_{t+1}. In the following, we will write f for the inverse link, as is more common for neural models, so that ŷ_{t+1} = f(CAx̂_t). The simplest Poisson-based generalised-linear RLM might take as its output distribution

P(y_t|ŷ_t) = Π_i Poisson(y_{ti}|ŷ_{ti}); ŷ_t = f(CAx̂_{t−1}),

where y_{ti} is the spike count of the ith cell in bin t and the function f is non-negative. However, comparison with the output distribution derived for the Gaussian RLM suggests that this choice would fail to capture the instantaneous covariance that the LDS formulation transfers to the output distribution (and which appears in the low-rank structure of S above). We can address this concern in two ways.
One option is to bin the data more finely, thus diminishing the influence of the instantaneous covariance. The alternative is to replace the independent Poissons with a correlated output distribution on spike counts. The cascaded generalised-linear model introduced below is a natural choice, and we will show that it captures instantaneous correlations faithfully with very few hidden dimensions.

In practice, we also sometimes add a fixed input µ_t to equation 1 that varies in time and determines the average behavior of the population, or the peri-stimulus time histogram (PSTH):

ŷ_{t+1} = f(µ_t + CAx̂_t)

Note that the matrices A and C retain their interpretation from the LDS models. The matrix A controls the evolution of the dynamical process x_t. The phenomenology of its dynamics is determined by the complex eigenvalues of A. Eigenvalues with moduli close to 1 correspond to long timescales of fluctuation around the PSTH. Eigenvalues with non-zero imaginary part correspond to oscillatory components. Finally, the dynamics are stable iff all the eigenvalues lie within the unit disc. The matrix C describes the dependence of the high-dimensional neural signals on the low-dimensional latent processes x_t. In particular, equation 2 determines the firing rate of the neurons. This generalised-linear stage ensures that the firing rates are positive through the link function f, and the observation process is Poisson. For other types of data, the generalised-linear stage might be replaced by other appropriate link functions and output distributions.

3.1 Relationship to other models

RLMs are related to recurrent neural networks [8]. The differences lie in the state evolution, which in the neural network is nonlinear, x_t = h(Ax_{t−1} + Wy_{t−1}), and in the recurrent term, which depends on the observation rather than the prediction error.
On the data considered here, we found that using sigmoidal or threshold-linear functions h resulted in models comparable in likelihood to the RLM, and so we restricted our attention to simple linear dynamics. We also found that using the prediction-error term W(y_{t−1} − ŷ_{t−1}) resulted in better models than the simple neural-net formulation, and we attribute this difference to the link between the RLM and Kalman inference.

It is also possible to work within the stochastic latent LDS framework, replacing the Gaussian output distribution with a generalised-linear Poisson output (e.g. [6]). The main difficulty here is the intractability of the estimation procedure. For an unobserved latent process x_t, an inference procedure needs to be devised to estimate the posterior distribution over the entire sequence x_1 . . . x_t. For linear-Gaussian observations, this inference is tractable and is provided by Kalman smoothing. However, with generalised-linear observations, inference becomes intractable, and the necessary approximations [6] are computationally intensive and can jeopardize the quality of the fitted models. By contrast, in the RLM x_t is a deterministic function of the data. In effect, the Kalman filter has been built into the model as the accurate estimation procedure, and efficient fitting is possible by direct gradient ascent on the log-likelihood. Empirically, we did not encounter difficulties with local minima during optimization, as has been reported for LDS models fit by approximate EM [9]. Multiple restarts from different random values of the parameters always led to models with similar likelihoods.

Note that to estimate the matrices A and W, the gradient must be backpropagated through successive iterations of equation 1. This technique, known as backpropagation-through-time, was first described by [10] as a way to fit recurrent neural network models. Recent implementations have demonstrated state-of-the-art language models [11].
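Because the latent state is a deterministic function of the data, the RLM log-likelihood can be evaluated in a single forward pass of equation 1 and then differentiated directly. A minimal evaluation sketch with f = exp and arbitrary toy parameters (ours, not fitted values) is:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, T = 10, 3, 100                     # neurons, latent dims, time bins
A = 0.9 * np.eye(K)                      # toy dynamics matrix
C = 0.3 * rng.standard_normal((N, K))    # toy loading matrix
W = 0.05 * rng.standard_normal((K, N))   # parametric "Kalman gain"
mu = np.log(0.5) * np.ones(N)            # constant baseline log-rate

y = rng.poisson(0.5, size=(T, N)).astype(float)   # stand-in spike counts

def rlm_loglik(y, A, C, W, mu):
    """Forward pass of eq (1) with f = exp; Poisson ll up to log(y!) terms."""
    x = np.zeros(K)
    yhat = np.exp(mu)                    # prediction for the first bin
    ll = 0.0
    for t in range(len(y)):
        ll += np.sum(y[t] * np.log(yhat) - yhat)
        x = A @ x + W @ (y[t] - yhat)    # deterministic state update
        yhat = np.exp(mu + C @ (A @ x))  # \hat{y}_{t+1} = f(C A \hat{x}_t)
    return ll

ll = rlm_loglik(y, A, C, W, mu)
assert np.isfinite(ll)
```

Gradients with respect to A and W flow through every iteration of the loop, which is what makes backpropagation-through-time necessary when fitting.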
Backpropagation-through-time is thought to be inherently unstable when propagated past many time steps, and often the gradient is truncated prematurely [11]. We found that using large values of momentum in the gradient ascent alleviated these instabilities and allowed us to use backpropagation without truncation.

4 The cascaded generalised-linear model (CGLM)

The link between the RLM and the LDS raises the possibility that a model for simultaneously-recorded correlated spike counts might be derived in a similar way, starting from a non-dynamical, but low-dimensional, Gaussian model. Stationary models of population activity have attracted recent interest for their own sake (e.g. [1]), and would also provide a way to model correlations introduced by common innovations that were neglected by the simple Poisson form of the RLM. Thus, we consider vectors y of spike counts from N neurons, without explicit reference to the time at which they were collected. A Gaussian model for y can certainly describe correlations between the cells, but is ill-matched to discrete count observations. Thus, as with the derivation of the RLM from the Kalman filter, we derive here a new generalisation of a low-dimensional, structured Gaussian model to spike-count data.

The distribution of any multivariate variable y can be factorized into a "cascaded" product of multiple one-dimensional distributions:

P(y) = Π_{n=1}^{N} P(y_n|y_{<n}). (2)

Here n indexes the neurons up to the last neuron N, and y_{<n} is the (n−1)-vector [y_1 . . . y_{n−1}]. For a Gaussian-distributed y, the conditionals P(y_n|y_{<n}) would be linear-Gaussian. Thus, we propose the "cascaded generalised-linear model" (CGLM), in which each such one-dimensional conditional distribution is a generalised-linear model:

ŷ_n = f(µ_n + S_n^T y_{<n}) (3)
P(y_n|y_{<n}) = ExpFam(ŷ_n) (4)

and in which the linear weights S_n take on a structured form developed below.
Equations 3 and 4 subsume the Gaussian distribution with arbitrary covariance in the case that f is linear and the ExpFam conditionals are Gaussian. In this case, for a joint covariance of Σ, it is straightforward to derive the expression

S_n = (1 / ((Σ_{≤n,≤n})^{−1})_{n,n}) ((Σ_{≤n,≤n})^{−1})_{n,<n}, (5)

where the subscripts <n and ≤n restrict the matrix to the first (n − 1) and n rows and/or columns respectively. Thus, we might construct suitably structured linear weights for the CGLM by applying this result to the covariance matrix induced by the low-dimensional Gaussian model known as factor analysis [12]. Factor analysis assumes that data are generated from a K-dimensional latent process x ∼ N(0, I), where I is the K×K identity matrix, and y has the conditional distribution P(y|x) = N(Λx, Ψ), with Ψ a diagonal matrix and Λ an N×K loading matrix. This leads to a covariance of y given by Σ = Ψ + ΛΛ^T. If we repeat the derivation of equations 3, 4 and 5 for this covariance matrix, we obtain an expression for S_n via the matrix inversion lemma:

S_n = (1 / ((Σ_{≤n,≤n})^{−1})_{n,n}) [(Ψ_{≤n,≤n} + Λ_{≤n,·}Λ_{≤n,·}^T)^{−1}]_{n,<n}
 = (1 / ((Σ_{≤n,≤n})^{−1})_{n,n}) [Ψ^{−1}_{≤n,≤n} − Ψ^{−1}_{≤n,≤n}Λ_{≤n,·} (· · ·) Λ^T_{≤n,·}Ψ^{−1}_{≤n,≤n}]_{n,<n} (6)
 = −(1 / ((Σ_{≤n,≤n})^{−1})_{n,n}) [(Ψ^{−1}Λ)_{≤n,·} (· · ·) (Ψ^{−1}Λ)^T_{≤n,·}]_{n,<n},

where the omitted factor (· · ·) is a K×K matrix. The first term in equation 6 vanishes because it involves only the off-diagonal entries of Ψ. The surviving factor shows that S_n is formed by taking a linear combination of the columns of Ψ^{−1}Λ and then truncating to the first n − 1 elements. Thus, if we arrange all the S_n as the upper columns of an N×N matrix S, we can write S = upper(zw^T) for some low-dimensional matrices z = Ψ^{−1}Λ and w, where the operation upper(·) extracts the strictly upper triangular part of a matrix. This is the natural structure imposed on the cascaded conditionals by factor analysis. Thus, we adopt the same constraint on S in the case of generalised-linear observations.
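In the Gaussian case the cascade weights can be checked numerically. The sketch below (our own toy sizes, not the paper's code) forms S_n from the precision of Σ_{≤n,≤n}, with the explicit minus sign needed so that µ_n + S_n^T y_{<n} is the Gaussian conditional mean, and confirms both the regression identity and that each S_n lies in the column space of Ψ^{−1}Λ, as the derivation above predicts.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 6, 2
Lam = rng.standard_normal((N, K))          # factor loadings Lambda
Psi = np.diag(0.5 + rng.random(N))         # diagonal noise Psi
Sigma = Psi + Lam @ Lam.T                  # factor-analysis covariance

# Cascade weights S_n = -Prec_{n,<n} / Prec_{n,n}, Prec = inv(Sigma_{<=n,<=n}),
# so that mu_n + S_n^T y_{<n} is the Gaussian conditional mean of y_n.
S = np.zeros((N, N))
for n in range(1, N):
    Prec = np.linalg.inv(Sigma[:n + 1, :n + 1])
    S[:n, n] = -Prec[n, :n] / Prec[n, n]

# Identity check: the same weights come from direct regression on y_{<n}:
# E[y_n | y_{<n}] uses Sigma_{<n,<n}^{-1} Sigma_{<n,n}.
for n in range(1, N):
    assert np.allclose(S[:n, n], np.linalg.solve(Sigma[:n, :n], Sigma[:n, n]))

# Structure check: each column of S lies in the column space of z = Psi^{-1} Lam,
# i.e. S agrees with upper(z w^T) for a suitable K-row matrix w.
z = np.linalg.solve(Psi, Lam)
for n in range(1, N):
    coef, *_ = np.linalg.lstsq(z[:n], S[:n, n], rcond=None)
    assert np.allclose(z[:n] @ coef, S[:n, n])
```

With K = 2 latent dimensions, the full N(N−1)/2 set of cascade weights is thus parametrised by only O(NK) numbers, which is what makes the CGLM scale.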
The resulting CGLM is shown below to provide better fits to binarized neural data than standard Ising models (see the Results section), even with as few as three latent dimensions. Another useful property of the CGLM is that it allows stimulus-dependent inputs in equation 3. The CGLM can also be used in combination with the generalised-linear RLM, with the CGLM replacing the otherwise independent observation model. This approach can be useful when large bins are used to discretize spike trains. In both cases the model can be estimated quickly with standard gradient-ascent techniques.

5 Alternative models

5.1 Alternative for temporal interactions: the causally-coupled generalised linear model

One popular and simple model of simultaneously recorded neuronal populations [3] constructs temporal dependencies between units by directly coupling each neuron's probability of firing to the past
The models we propose here, the RLM and the CGLM, are aimed at discovering such inputs. 5.2 Alternative for instantaneous interactions: the Ising model Instantaneous interactions between binary data (as would be obtained by counting spikes in short intervals) can be modelled in terms of their pairwise interactions [1] embodied in the Ising model: P (y) = 1 Z eyT Jy. (7) where J is a pairwise interaction matrix and Z is the partition function, or the normalization constant of the model. The model’s attractiveness is that for a given covariance structure it makes the weakest possible assumptions about the distribution of y, that is, like a Gaussian for continuous data, it has the largest possible entropy under the covariance constraint. However, the Ising model and the so-called functional interactions J have no physical interpretation when applied to neural data. Furthermore, Ising models are difficult to fit as they require estimates of the gradients of the partition function Z; they also suffer from the same quadratic scaling in number of paramters as does the directly-coupled GLM. Ising models are even harder to estimate when stimulus-dependent inputs are added in equation 7, but for data collected in the retina or other sensory areas [1], much of the covariation in y may be expected to arise from common stimulus input. Another short-coming of the Ising model is that it can only model binarized data and cannot be normalized for integer y-s [6], so either the time bins need to be reduced to ensure no neuron fires more than one spike in a single bin or the spike counts must be capped at 1. 6 Results 6.1 Simulated data We began by evaluating RLM models fit to simulated data where the true generative parameters were known. 
Two aspects of the estimated models were of particular interest: the phenomenology of the dynamics (captured by the eigenvalues of the dynamics matrix A) and the relationship between the dynamical subspace and measured neural activity (captured by the output matrix C). We evaluated the agreement between the estimated and generative output matrices by measuring the principal angles between the corresponding subspaces. These report, in succession, the smallest angle achievable between a line in one subspace and a line in the second subspace, once all previous such vectors of maximal agreement have been projected out. Exactly aligned n-dimensional subspaces have all n principal angles equal to 0◦. Unrelated low-dimensional subspaces embedded in high dimensions are close to orthogonal and so have principal angles near 90◦. We first verified the robustness of maximisation of the generalised-linear RLM likelihood by fitting models to simulated data generated by a known RLM. Fig. 2(a) shows eigenvalues from several simulated RLMs and the eigenvalues recovered by fitting parameters to simulated data. The agreement is generally good. In particular, the qualitative aspects of the dynamics reflected in the absolute values and imaginary parts of the eigenvalues are well characterised. Fig. 
2(d) shows that the RLM fits also recover the subspace defined by the loading matrix C, and do so substantially more accurately than either principal components analysis (PCA) or GLDS models.

[Figure 2: Experiments on 100-dimensional simulated data generated from a 5-dimensional latent process. Generating models were a Poisson RLM (a, d), a Poisson LDS with random parameters (b, e), and a Poisson LDS model with parameters fit to neural data (c, f). The models fit were PCA, LDS with Gaussian (LDS/GLDS) or Poisson (PLDS) output, and RLM with Poisson output (RLM). Panels a–c plot recovered against ground-truth eigenvalues of the dynamics (real and imaginary parts); panels d–f show principal angles (degrees) between the true and identified subspaces. In the upper plots, eigenvalues from different runs are shown in different colors.]

It is important to note that the likelihoods of LDS models with Poisson observations are difficult to optimise, and so may yield poor results even when fit to within-class data. In practice we did not observe local optima with the RLM or CGLM. We also asked whether the RLM could recover the dynamical properties and latent subspace of data generated by a latent LDS model with Poisson observations. Fig. 2(b) shows that the dynamical eigenvalues of the maximum-likelihood RLM are close to the eigenvalues of the generative LDS dynamics, whilst Fig. 2(e) shows that the dynamical subspace is also correctly recovered.
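The principal angles used in these comparisons can be computed from the singular values of the product of orthonormal bases for the two subspaces. A minimal NumPy sketch (the matrices below are random stand-ins for the loading matrices; the routine matches scipy.linalg.subspace_angles up to ordering):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (degrees) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)                         # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)  # cosines of the principal angles
    return np.degrees(np.arccos(np.clip(s, -1.0, 1.0)))

rng = np.random.default_rng(0)
C = rng.normal(size=(100, 5))                        # a 5-dim subspace in 100 dimensions
same = principal_angles(C, C @ rng.normal(size=(5, 5)))   # same subspace, different basis
other = principal_angles(C, rng.normal(size=(100, 5)))    # unrelated random subspace
```

As in the text, exactly aligned subspaces give all angles near 0°, while unrelated low-dimensional subspaces embedded in high dimensions give angles near 90°.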
Parameters for these simulations were chosen randomly. We then asked whether the quality of parameter identification extended to Poisson-output LDS models with realistic parameters, by generating data from a Poisson-output LDS model that had been fit to a neural recording. As seen in figs. 2(c) and 2(f), the RLM fits remain accurate in this regime, yielding better subspace estimates than either PCA or a Gaussian LDS.

6.2 Array recorded data

We next compared the performance of the novel models on neural data. The RLM was compared to the directly-coupled GLM (fit by gradient-based likelihood optimisation) as well as LDS models with Gaussian or Poisson outputs (fit by EM, with a Laplace approximation E-step). The CGLM was compared to the Ising model. We used a dataset of 92 neurons recorded with a Utah array implanted in the premotor and motor cortices of a rhesus macaque monkey performing a delayed center-out reach task. For all comparisons below we use datasets of 108 trials in which the monkey made movements to the same target. We discretized spike trains into time bins of 10 ms. The directly-coupled GLM needed substantial regularization in order to make good predictions on held-out test data. Figure 3(a) shows only the best cross-validation result for the GLM, but results without regularization for models with low-dimensional parametrisation.

[Figure 3: a. Predictive performance of various models on test data, measured as MSE_baseline − MSE, where the baseline is a low-rank PSTH model (higher is better). GLM-type models are helped greatly by self-coupling filters (which the other models do not have). The best model is an RLM with three latent dimensions and a low-rank model of the PSTH (see the supplementary material for more information about this model). Adding self-coupling filters to this model further increases its predictive performance by 5 (not shown). b. The likelihood per spike, in bits above baseline, of Ising models as well as CGLM models with small numbers of hidden dimensions (rank 1–5). The CGLM saturates at three dimensions and performs better than Ising models.]

Performance was measured by the causal mean-squared error in prediction, subtracted from the error of a low-rank smoothed PSTH model (based on a singular-value decomposition of the matrix of all smoothed PSTHs). The number of dimensions (5) and the standard deviation of the Gaussian smoothing filter (20 ms) were cross-validated to find the best possible PSTH performance. Thus, our evaluation focuses on each model's ability to predict trial-to-trial co-variation in firing around the mean. A second measure of performance for the RLM was obtained by studying probabilistic samples drawn from the fitted model. Figure 4 in the supplemental material shows averaged noise cross-correlograms obtained from a large set of samples. Note that the PSTHs have been subtracted from each trial to reveal only the extra correlation structure that is not repeated amongst trials. Even with few hidden dimensions, the model captures well the full temporal structure of the noise correlations. In the case of the Ising model we binarized the data by replacing all spike counts larger than 1 with 1. The log-likelihood of the Ising model could only be estimated for small numbers of neurons, so for comparison we took only the 30 most active neurons. The measure of performance reported in figure 3(b) is the extra log-likelihood per spike obtained above that of a model that makes constant predictions equal to the mean firing rate of each neuron. The CGLM model with only three hidden dimensions achieves the best generalisation performance, surpassing the Ising model.
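The "extra log-likelihood per spike" measure can be made concrete: with binarized counts, score a model's Bernoulli log-likelihood against the constant-rate baseline and normalize by the total spike count and ln 2 to obtain bits per spike. Everything below (the simulated counts and the placeholder model_p) is illustrative rather than the paper's pipeline; only the metric itself mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(2)
y = (rng.poisson(0.2, size=(1000, 30)) > 0).astype(float)   # binarized spike counts, T x N

def bernoulli_ll(p, y):
    """Log-likelihood of binary data y under per-bin firing probabilities p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float((y * np.log(p) + (1 - y) * np.log(1 - p)).sum())

base_p = np.ones_like(y) * y.mean(axis=0)    # constant prediction: mean rate per neuron
model_p = base_p.copy()                      # placeholder: a real model varies bin by bin
bits_per_spike = (bernoulli_ll(model_p, y) - bernoulli_ll(base_p, y)) / (y.sum() * np.log(2))
```

For the baseline itself the measure is exactly zero; a model that captures trial-to-trial structure yields a positive value, as in Figure 3(b).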
Similar results for the performance of the CGLM can be seen on the full dataset of 92 neurons with non-binarized data, indicating that three latent dimensions suffice to describe the full space visited by the neuronal population on a trial-by-trial basis.

7 Discussion

The generalised-linear RLM model, while sharing motivation with latent LDS models, can be fit more efficiently and without approximation to non-Gaussian data. We have shown improved performance on both simulated data and on population recordings from the motor cortex of behaving monkeys. The model is easily extended to other output distributions (such as Bernoulli or negative binomial), to mixed continuous and discrete data, to nonlinear outputs, and to nonlinear dynamics. For the motor data considered here, the generalised-linear model performed as well as models with further non-linearities. However, preliminary results on data from sensory cortical areas suggest that nonlinear models may be of greater value in other settings.

8 Acknowledgments

We thank Krishna Shenoy and members of his lab for generously providing access to data. Funding from the Gatsby Charitable Foundation and DARPA REPAIR N66001-10-C-2010.

References

[1] E Schneidman, MJ Berry, R Segev, and W Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440:1007–1012, 2005.
[2] Gyorgy Buzsaki. Large-scale recording of neuronal ensembles. Nat Neurosci, 7(5):446–51, 2004.
[3] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatiotemporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
[4] Mark M. Churchland, Byron M. Yu, Maneesh Sahani, and Krishna V. Shenoy. Techniques for extracting single-trial activity patterns from large-scale neural recordings. Curr Opin Neurobiol, 17(5):609–618, 2007.
[5] BM Yu, A Afshar, G Santhanam, SI Ryu, KV Shenoy, and M Sahani.
Extracting dynamical structure embedded in neural activity. Advances in Neural Information Processing Systems, 18:1545–1552, 2006.
[6] JH Macke, L Büsing, JP Cunningham, BM Yu, KV Shenoy, and M Sahani. Empirical models of spiking in neural populations. Advances in Neural Information Processing Systems, 24:1350–1358, 2011.
[7] R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
[8] JL Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.
[9] L Buesing, JH Macke, and M Sahani. Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. Advances in Neural Information Processing Systems, 25, 2012.
[10] DE Rumelhart, GE Hinton, and RJ Williams. Learning internal representations by error propagation. MIT Press Computational Models of Cognition and Perception Series, pages 318–462, 1986.
[11] T Mikolov, A Deoras, S Kombrink, L Burget, and JH Cernocky. Empirical evaluation and combination of advanced language modeling techniques. Conference of the International Speech Communication Association, 2011.
[12] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
On the Complexity and Approximation of Binary Evidence in Lifted Inference

Guy Van den Broeck and Adnan Darwiche
Computer Science Department, University of California, Los Angeles
{guyvdb,darwiche}@cs.ucla.edu

Abstract

Lifted inference algorithms exploit symmetries in probabilistic models to speed up inference. They show impressive performance when calculating unconditional probabilities in relational models, but often resort to non-lifted inference when computing conditional probabilities. The reason is that conditioning on evidence breaks many of the model's symmetries, which can preempt standard lifting techniques. Recent theoretical results show, for example, that conditioning on evidence which corresponds to binary relations is #P-hard, suggesting that no lifting is to be expected in the worst case. In this paper, we balance this negative result by identifying the Boolean rank of the evidence as a key parameter for characterizing the complexity of conditioning in lifted inference. In particular, we show that conditioning on binary evidence with bounded Boolean rank is efficient. This opens up the possibility of approximating evidence by a low-rank Boolean matrix factorization, which we investigate both theoretically and empirically.

1 Introduction

Statistical relational models are capable of representing both probabilistic dependencies and relational structure [1, 2]. Due to their first-order expressivity, they concisely represent probability distributions over a large number of propositional random variables, causing inference in these models to quickly become intractable. Lifted inference algorithms [3] attempt to overcome this problem by exploiting symmetries found in the relational structure of the model. In the absence of evidence, exact lifted inference algorithms can work well.
For large classes of statistical relational models [4], they perform inference that is polynomial in the number of objects in the model [5], and are thereby exponentially faster than classical inference algorithms. When conditioning a query on a set of evidence literals, however, these lifted algorithms lose their advantage over classical ones. The intuitive reason is that evidence breaks the symmetries in the model. The technical reason is that these algorithms perform an operation called shattering, which ends up reducing the first-order model to a propositional one. This issue is implicitly reflected in the experiment sections of exact lifted inference papers. Most report on experiments without evidence. Examples include publications on FOVE [3, 6, 7] and WFOMC [8, 5]. Others found ways to efficiently deal with evidence on only unary predicates. They perform experiments without evidence on binary or higher-arity relations. There are examples for FOVE [9, 10], WFOMC [11], PTP [12] and CP [13]. This evidence problem has largely been ignored in the exact lifted inference literature until recently, when Bui et al. [10] and Van den Broeck and Davis [11] showed that conditioning on unary evidence is tractable. More precisely, conditioning on unary evidence is polynomial in the size of evidence. This type of evidence expresses attributes of objects in the world, but not relations between them. Unfortunately, Van den Broeck and Davis [11] also showed that this tractability does not extend to evidence on binary relations, for which conditioning on evidence is #P-hard. Even if conditioning is hard in general, its complexity should depend on properties of the specific relation that is conditioned on. It is clear that some binary evidence is easy to condition on, even if it talks about a large number of objects, for example when all atoms are true (∀X, Y p(X, Y )) or false (∀X, Y ¬ p(X, Y )).
As our first main contribution, we formalize this intuition and characterize the complexity of conditioning more precisely in terms of the Boolean rank of the evidence. We show that it is a measure of how much lifting is possible, and that one can efficiently condition on large amounts of evidence, provided that its Boolean rank is bounded. Despite the limitations, useful applications of exact lifted inference were found by sidestepping the evidence problem. For example, in lifted generative learning [14], the most challenging task is to compute partition functions without evidence. Regardless, the lack of symmetries in real applications is often cited as a reason for rejecting the idea of lifted inference entirely (informally called the “death sentence for lifted inference”). This problem has been avoided for too long, and as lifted inference gains maturity, solving it becomes paramount. As our second main contribution, we present a first general solution to the evidence problem. We propose to approximate evidence by an over-symmetric matrix, and will show that this can be achieved by minimizing Boolean rank. The need for approximating evidence is new and specific to lifted inference: in (undirected) probabilistic graphical models, more evidence typically makes inference easier. Practically, we will show that existing tools from the data mining community can be used for this low-rank Boolean matrix factorization task. The evidence problem is less pronounced in the approximate lifted inference literature. These algorithms often introduce approximations that lead to symmetries in their computation, even when there are no symmetries in the model. Also for approximate methods, however, the benefits of lifting will decrease with the amount of symmetry-breaking evidence (e.g., Kersting et al. [15]). We will show experimentally that over-symmetric evidence approximation is also a viable technique for approximate lifted inference. 
2 Encoding Binary Relations in Unary

Our analysis of conditioning is based on a reduction, turning evidence on a binary relation into evidence on several unary predicates. We first introduce some necessary background.

2.1 Background

An atom p(t1, . . . , tn) consists of a predicate p/n of arity n followed by n arguments, which are either (lowercase) constants or (uppercase) logical variables. A literal is an atom a or its negation ¬a. A formula combines atoms with logical connectives (e.g., ∨, ∧, ⇔). A formula is ground if it does not contain any logical variables. A possible world assigns a truth value to each ground atom. Statistical relational languages define a probability distribution over possible worlds, where ground atoms are individual random variables. Numerous languages have been proposed in recent years, and our analysis will apply to many, including MLNs [16], parfactors [3] and WFOMC problems [8].

Example 1. The following MLNs model the dependencies between web pages. A first, peer-to-peer model says that student web pages are more likely to link to other student pages:

w : studentpage(X) ∧ linkto(X, Y) ⇒ studentpage(Y)

It increases the probability of a world by a factor e^w with every pair of pages X, Y that satisfies the formula. A second, hierarchical model says that professors are more likely to link to course pages:

w : profpage(X) ∧ linkto(X, Y) ⇒ coursepage(Y)

In this context, evidence e is a truth-value assignment to a set of ground atoms, and is often represented as a conjunction of literals. In unary evidence, atoms have one argument (e.g., studentpage(a)), while in binary evidence they have two (e.g., linkto(a, b)). Without loss of generality, we assume full evidence on certain predicates (i.e., all their ground atoms are in e).¹ We will sometimes represent unary evidence as a Boolean vector and binary evidence as a Boolean matrix.
¹Partial evidence on the relation p can be encoded as full evidence on predicates p0 and p1 by adding formulas ∀X, Y p(X, Y) ⇐ p1(X, Y) and ∀X, Y ¬p(X, Y) ⇐ p0(X, Y) to the model.

Example 2. Evidence e = p(a, a) ∧ p(a, b) ∧ ¬p(a, c) ∧ · · · ∧ ¬p(d, c) ∧ p(d, d) is represented by

P:  p(X,Y)  Y=a  Y=b  Y=c  Y=d
    X=a      1    1    0    0
    X=b      1    1    0    1
    X=c      0    0    1    0
    X=d      1    0    0    1

We will look at computing conditional probabilities Pr(q | e) for single ground atoms q. Finally, we assume a representation language that can express universally quantified logical constraints.

2.2 Vector-Product Binary Evidence

Certain binary relations can be represented by a pair of unary predicates. By adding the formula

∀X, ∀Y, p(X, Y) ⇔ q(X) ∧ r(Y)   (1)

to our statistical relational model and conditioning on the q and r relations, we can condition on certain types of binary p relations. Assuming that we condition on the q and r predicates, adding this formula (as hard clauses) to the model does not change the probability distribution over the atoms in the original model. It is merely an indirect way of conditioning on the p relation. If we now represent these unary relations by vectors q and r, and the binary relation by the binary matrix P, the above technique allows us to condition on any relation P that can be factorized as the outer vector product P = q r⊺.

Example 3. Consider the following outer vector factorization of the Boolean matrix P:

P =
  0 0 0 0
  1 0 0 1
  0 0 0 0
  1 0 0 1
= q r⊺,  with q = (0, 1, 0, 1)⊺ and r = (1, 0, 0, 1)⊺.

In a model containing Formula 1, this factorization indicates that we can condition on the 16 binary evidence literals ¬p(a, a) ∧ ¬p(a, b) ∧ · · · ∧ ¬p(d, c) ∧ p(d, d) of P by conditioning on the 8 unary literals ¬q(a) ∧ q(b) ∧ ¬q(c) ∧ q(d) ∧ r(a) ∧ ¬r(b) ∧ ¬r(c) ∧ r(d) represented by q and r.

2.3 Matrix-Product Binary Evidence

This idea of encoding a binary relation in unary relations can be generalized to n pairs of unary relations, by adding the following formula to our model.
∀X, ∀Y, p(X, Y) ⇔ (q1(X) ∧ r1(Y)) ∨ (q2(X) ∧ r2(Y)) ∨ · · · ∨ (qn(X) ∧ rn(Y))   (2)

By conditioning on the qi and ri relations, we can now condition on a much richer set of binary p relations. The relations that can be expressed this way are all the matrices that can be represented by a sum of outer products (in Boolean algebra, where + is ∨ and 1 ∨ 1 = 1):

P = q1 r1⊺ ∨ q2 r2⊺ ∨ · · · ∨ qn rn⊺ = Q R⊺   (3)

where the columns of Q and R are the qi and ri vectors respectively, and the matrix multiplication is performed in Boolean algebra, that is, (Q R⊺)i,j = ⋁r Qi,r ∧ Rj,r.

Example 4. Consider the following P, its decomposition into a sum/disjunction of outer vector products, and the corresponding Boolean matrix multiplication:

P =
  1 1 0 0
  1 1 0 1
  0 0 1 0
  1 0 0 1
= q1 r1⊺ ∨ q2 r2⊺ ∨ q3 r3⊺ = Q R⊺,

with q1 = (0, 1, 0, 1)⊺, r1 = (1, 0, 0, 1)⊺; q2 = (1, 1, 0, 0)⊺, r2 = (1, 1, 0, 0)⊺; q3 = (0, 0, 1, 0)⊺, r3 = (0, 0, 1, 0)⊺, where the columns of Q are q1, q2, q3 and the columns of R are r1, r2, r3.

This factorization shows that we can condition on the binary evidence literals of P (see Example 2) by conditioning on the unary literals

e = [¬q1(a) ∧ q1(b) ∧ ¬q1(c) ∧ q1(d)] ∧ [r1(a) ∧ ¬r1(b) ∧ ¬r1(c) ∧ r1(d)]
  ∧ [q2(a) ∧ q2(b) ∧ ¬q2(c) ∧ ¬q2(d)] ∧ [r2(a) ∧ r2(b) ∧ ¬r2(c) ∧ ¬r2(d)]
  ∧ [¬q3(a) ∧ ¬q3(b) ∧ q3(c) ∧ ¬q3(d)] ∧ [¬r3(a) ∧ ¬r3(b) ∧ r3(c) ∧ ¬r3(d)].

3 Boolean Matrix Factorization

Matrix factorization (or decomposition) is a popular linear algebra tool. Some well-known instances are singular value decomposition and non-negative matrix factorization (NMF) [17, 18]. NMF factorizes into a product of non-negative matrices, which are more easily interpretable, and therefore attracted much attention for unsupervised learning and feature extraction. These factorizations all work with real-valued matrices. We instead consider Boolean-valued matrices, with only 0/1 entries.

3.1 Boolean Rank

Factorizing a matrix P as Q R⊺ in Boolean algebra is a known problem called Boolean Matrix Factorization (BMF) [19, 20].
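As a concrete check of the Boolean product (Q R⊺)i,j = ⋁r Qi,r ∧ Rj,r, the factorization of Example 4 can be verified in a few lines of NumPy (matrix values copied from the example):

```python
import numpy as np

def bool_matmul(Q, R):
    """(Q R^T)_{i,j} = OR_r (Q_{i,r} AND R_{j,r}), over {0,1} matrices."""
    return (Q[:, None, :] & R[None, :, :]).any(axis=2).astype(int)

Q = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns are q1, q2, q3
R = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns are r1, r2, r3
P = np.array([[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 0, 1]])

assert (bool_matmul(Q, R) == P).all()        # the rank-3 factorization is exact
```

Reading off the columns of Q and R recovers exactly the unary literals of Example 4 (column q1 = (0, 1, 0, 1) gives ¬q1(a) ∧ q1(b) ∧ ¬q1(c) ∧ q1(d), and so on).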
BMF factorizes a (k × l) matrix P into a (k × n) matrix Q and an (l × n) matrix R, where potentially n ≪ k and n ≪ l, and we always have that n ≤ min(k, l). Any Boolean matrix can be factorized this way, and the smallest number n for which it is possible is called the Boolean rank of the matrix. Unlike (textbook) real-valued rank, computing the Boolean rank is NP-hard and cannot be approximated unless P=NP [19]. The Boolean and real-valued rank are incomparable, and the Boolean rank can be exponentially smaller than the real-valued rank.

Example 5. The factorization in Example 4 is a BMF with Boolean rank 3. It is only a decomposition in Boolean algebra and not over the real numbers. Indeed, the matrix product over the reals contains an incorrect value of 2:

Q ×real R⊺ =
  1 1 0 0
  2 1 0 1
  0 0 1 0
  1 0 0 1
≠ P

Note that P is of full real-valued rank (having four non-zero singular values) and that its Boolean rank is lower than its real-valued rank.

3.2 Approximate Boolean Factorization

Computing Boolean ranks is a theoretical problem. Because most real-world matrices will have nearly full rank (i.e., almost min(k, l)), applications of BMF look at approximate factorizations. The goal is to find a pair of (small) Boolean matrices Q (k × n) and R (l × n) such that P (k × l) ≈ Q R⊺, or more specifically, to find matrices that optimize some objective that trades off approximation error and Boolean rank n. When n ≪ k and n ≪ l, this approximation extracts interesting structure and removes noise from the matrix. This has caused BMF to receive considerable attention in the data mining community recently, as a tool for analyzing high-dimensional data. It is used to find important and interpretable (i.e., Boolean) concepts in a data matrix. Unfortunately, the approximate BMF optimization problem is NP-hard as well, and inapproximable [20]. However, several algorithms have been proposed that work well in practice.
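Returning to Example 5, its contrast between the two algebras can be checked numerically: over the reals the same factors produce an entry of 2 and P has full rank 4, while the Boolean product recovers P exactly with only three factor pairs (values again from Examples 4 and 5):

```python
import numpy as np

Q = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns q1, q2, q3
R = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns r1, r2, r3
P = np.array([[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 0, 1]])

real_product = Q @ R.T                                        # ordinary arithmetic
boolean_product = (Q[:, None, :] & R[None, :, :]).any(axis=2).astype(int)

assert real_product[1, 0] == 2              # the incorrect value of 2 from Example 5
assert np.linalg.matrix_rank(P) == 4        # full real-valued rank
assert (boolean_product == P).all()         # Boolean rank-3 factorization is exact
```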
Algorithms exist that find good approximations for fixed values of n [20], or when P is sparse [21]. BMF is related to other data mining tasks, such as biclustering [22] and tiling databases [23], whose algorithms could also be used for approximate BMF. In the context of social network analysis, BMF is related to stochastic block models [24] and their extensions, such as infinite relational models.

4 Complexity of Binary Evidence

Our goal in this section is to provide a new complexity result for reasoning with binary evidence in the context of lifted inference. Our result can be thought of as a parametrized complexity result, similar to ones based on treewidth in the case of propositional inference. To state the new result, however, we must first define formally the computational task. We will also review the key complexity result that is known about this computation now (i.e., the one we will be improving on). Consider an MLN ∆ and let Γm contain a set of ground literals representing binary evidence. That is, for some binary predicate p(X, Y), evidence Γm contains precisely one literal (positive or negative) for each grounding of predicate p(X, Y). Here, m represents the number of objects that parameters X and Y may take.² Therefore, evidence Γm must contain precisely m² literals.

²We assume without loss of generality that all logical variables range over the same set of objects.

Suppose now that Prm is the distribution induced by MLN ∆ over m objects, and q is a ground literal. Our analysis will apply to classes of models ∆ that are domain-liftable [4], which means that the complexity of computing Prm(q) without evidence is polynomial in m. One such class is the set of MLNs with two logical variables per formula [5]. Our task is then to compute the posterior probability Prm(q|em), where em is a conjunction of the ground literals in binary evidence Γm. Moreover, our goal here is to characterize the complexity of this computation as a function of evidence size m.
The following recent result provides a lower bound on the complexity of this computation [11].

Theorem 1. Suppose that evidence Γm is binary. Then there exists a domain-liftable MLN ∆ with a corresponding distribution Prm, and a posterior marginal Prm(q|em) that cannot be computed by any algorithm whose complexity grows polynomially in evidence size m, unless P = NP.

This is an analogue of results according to which, for example, the complexity of computing posterior probabilities in propositional graphical models is exponential in the worst case. Yet, for these models, the complexity of inference can be parametrized, allowing one to bound the complexity of inference on some models. Perhaps the best example of such a parametrized complexity is the one based on treewidth, which can be thought of as a measure of the model's sparsity (or tree-likeness). In this case, inference can be shown to be linear in the size of the model and exponential only in its treewidth. Hence, this parametrized complexity result allows us to state that inference can be done efficiently on models with bounded treewidth. We now provide a similar parametrized complexity result, but for evidence in lifted inference. In this case, the parameter we use to characterize complexity is that of Boolean rank.

Theorem 2. Suppose that evidence Γm is binary and has a bounded Boolean rank. Then for every domain-liftable MLN ∆ and corresponding distribution Prm, the complexity of computing posterior marginal Prm(q|em) grows polynomially in evidence size m.

The proof of this theorem is based on the reduction from binary to unary evidence, which is described in Section 2. In particular, our reduction first extends the MLN ∆ with Formula 2, leading to the new MLN ∆′ and new pairs of unary predicates qi and ri. This does not change the domain-liftability of ∆′, as Formula 2 is itself liftable. We then replace binary evidence Γm by unary evidence Γ′.
That is, the ground literals of the binary predicate p are replaced by ground literals of the unary predicates qi and ri (see Example 4). This unary evidence is obtained by Boolean matrix factorization. As the matrix size in our reduction is m², the following lemma implies that the first step of our reduction is polynomial in m for bounded-rank evidence.

Lemma 3 (Miettinen [25]). The complexity of Boolean matrix factorization for matrices with bounded Boolean rank is polynomial in their size.

The main observation in our reduction is that Formula 2 has size n, which is the Boolean rank of the given binary evidence. Hence, when the Boolean rank n is bounded by a constant, the size of the extended MLN ∆′ is independent of the evidence size and is proportional to the size of the original MLN ∆. We have now reduced inference on MLN ∆ and binary evidence Γm into inference on an extended MLN ∆′ and unary evidence Γ′. The second observation behind the proof is the following.

Lemma 4 (Van den Broeck and Davis [11], Van den Broeck [26]). Suppose that evidence Γm is unary. Then for every domain-liftable MLN ∆ and corresponding distribution Prm, the complexity of computing posterior marginal Prm(q|em) grows polynomially in evidence size m.

Hence, computing posterior probabilities can be done in time which is polynomial in the size of unary evidence m, which completes our proof. We can now identify additional similarities between treewidth and Boolean rank. Exact inference algorithms for probabilistic graphical models typically perform two steps, namely to (a) compute a tree decomposition of the graphical model (or a corresponding variable order), and (b) perform inference that is polynomial in the size of the decomposition, but potentially exponential in its (tree)width. The analogous steps for conditioning are to (a) perform a BMF, and (b) perform inference that is polynomial in the size of the BMF, but potentially exponential in its rank.
The (a) steps are both NP-hard, yet are efficient assuming bounded treewidth [27] or bounded Boolean rank (Lemma 3). Whereas treewidth is a measure of tree-likeness and sparsity of the graphical model, Boolean rank seems to be a fundamentally different property, more related to the presence of symmetries in evidence.

5 Over-Symmetric Evidence Approximation

Theorem 2 opens up many new possibilities. Even for evidence with high Boolean rank, it is possible to find a low-rank approximate BMF of the evidence, as is commonly done for other data mining and machine learning problems. Algorithms already exist for solving this task (cf. Section 3).

Example 6. The evidence matrix from Example 4 has Boolean rank three. Dropping the third pair of vectors reduces the Boolean rank to two:

P =
  1 1 0 0
  1 1 0 1
  0 0 1 0
  1 0 0 1
≈ q1 r1⊺ ∨ q2 r2⊺  (the third term, q3 r3⊺, is dropped)
=
  1 1 0 0
  1 1 0 1
  0 0 0 0
  1 0 0 1

This factorization is approximate, as it flips the evidence for atom p(c, c) from true to false (the entry in row c, column c). By paying this price, the evidence has more symmetries, and we can condition on the binary relation by introducing only two instead of three new pairs (qi, ri) of unary predicates.

Low-rank approximate BMF is an instance of a more general idea: that of over-symmetric evidence approximation. This means that when we want to compute Pr(q | e), we approximate it by computing Pr(q | e′) instead, with evidence e′ that permits more efficient inference. In this case, it is more efficient because it maintains more symmetries of the model and permits more lifting. Because all lifted inference algorithms, exact or approximate, exploit symmetries, we expect this general idea, and low-rank approximate BMF in particular, to improve the performance of any lifted inference algorithm. Having a small amount of incorrect evidence in the approximation need not be a problem.
As these literals are not covered by the first, most important vector pairs, they can be considered as noise in the original matrix. Hence, a low-rank approximation may actually improve the performance of, for example, a lifted collective classification algorithm. On the other hand, the approximation made in Example 6 may not be desirable if we are querying attributes of the constant c, and we may prefer to approximate other areas of the evidence matrix instead. There are many challenges in finding appropriate evidence approximations, which makes the task all the more interesting.

6 Empirical Evaluation

To complement the theoretical analysis from the previous sections, we will now report on experiments that investigate the following practical questions.

Q1 How well can we approximate a real-world relational data set by a low-rank Boolean matrix?
Q2 Is Boolean rank a good indicator of the complexity of inference, as suggested by Theorem 2?
Q3 Is over-symmetric evidence approximation a viable technique for approximate lifted inference?

To answer Q1, we compute approximations of the linkto binary relation in the WebKB data set using the ASSO algorithm for approximate BMF [20]. The WebKB data set consists of web pages from the computer science departments of four universities [28]. The data has information about words that appear on pages, labels of pages and links between web pages (the linkto relation). There are four folds, one for each university. The exact evidence matrix for the linkto relation ranges in size from 861 by 861 to 1240 by 1240. Its real-valued rank ranges from 384 to 503. Performing a BMF approximation in this domain adds or removes hyperlinks between web pages, so that more web pages can be grouped together that behave similarly. Figure 1 plots the approximation error for increasing Boolean ranks, measured as the number of incorrect evidence literals.
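At the toy scale of Example 6, this error measure (the number of incorrect literals) is easy to compute: dropping the third factor pair of Example 4 flips exactly one entry, p(c, c).

```python
import numpy as np

def bool_matmul(Q, R):
    # (Q R^T)_{i,j} = OR_r (Q_{i,r} AND R_{j,r})
    return (Q[:, None, :] & R[None, :, :]).any(axis=2).astype(int)

P = np.array([[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 0, 1]])
Q = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns q1, q2, q3
R = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns r1, r2, r3

approx = bool_matmul(Q[:, :2], R[:, :2])     # keep only the first two vector pairs
errors = int((approx != P).sum())            # number of flipped evidence literals
assert errors == 1                           # only p(c, c) changes, as in Example 6
```

The WebKB curves in Figure 1 are this same count applied to matrices with around a million entries.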
The error goes down quickly for low rank, and is reduced by half after Boolean rank 70 to 80, even though the matrix dimensions and real-valued rank are much higher. Note that these evidence matrices contain around a million entries, and are sparse. Hence, these approximations correctly label 99.7% to 99.95% of the atoms.

Figure 1: Approximate BMF error, in number of incorrect literals, for the WebKB linkto relation (x-axis: Boolean rank 0-120; y-axis: error 0-3000; one curve per fold: cornell, texas, washington, wisconsin).

Rank n | Circuit size (a) | Circuit size (b)
0      | 18               | 24
1      | 58               | 50
2      | 160              | 129
3      | 1873             | 371
4      | > 2129           | 1098
5      | ?                | 3191
6      | ?                | 9571

Figure 2: First-order NNF circuit size (number of nodes) for increasing Boolean rank n, for (a) the peer-to-peer and (b) the hierarchical model.

Figure 3: KLD of LMCMC on different BMF approximations, relative to the KLD of vanilla MCMC on the same approximation, for (a) the Texas and (b) the Wisconsin data sets. From top to bottom, the lines represent exact evidence (blue), and approximations (red) of rank 150, 100, 75, 50, 20, 10, 5, 2, and 1.

To answer Q2, we perform two sets of experiments. First, we look at exact lifted inference and investigate the influence of adding Formula 2 to the "peer-to-peer" and "hierarchical" MLNs from Example 1. The goal is to condition on linkto relations of increasing rank n. These models are compiled using the WFOMC algorithm [8] into first-order NNF circuits, which allow for exact domain-lifted inference (cf. Lemma 4). Figure 2 shows the sizes of these circuits. As expected, circuit sizes grow exponentially with n. Evidence breaks more symmetries in the peer-to-peer model than in the hierarchical model, causing the circuit size to increase more quickly with Boolean rank.
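The experiments in this section rely on the ASSO algorithm [20] to compute the approximate factorizations. For illustration only, here is a naive greedy BMF sketch (not ASSO): each round picks, among the columns of the evidence matrix, the pattern u whose best OR-cover most reduces the number of incorrect literals.

```python
import numpy as np

def greedy_bmf(E, k):
    """Naive rank-k Boolean factorization of a 0/1 matrix E (for illustration).

    Candidate basis vectors are the columns of E; each round greedily adds the
    rank-1 pattern u v^T that most reduces the error |cover XOR E|, where v_j
    is set to 1 exactly when OR-ing u into column j helps.
    Returns the resulting 0/1 cover matrix (same shape as E).
    """
    E = np.asarray(E, dtype=bool)
    m, n = E.shape
    cover = np.zeros((m, n), dtype=bool)
    for _ in range(k):
        best_gain, best_cover = 0, None
        for u in E.T:                          # candidate patterns: columns of E
            new = cover | u[:, None]           # OR u into every column...
            old_err = (cover ^ E).sum(axis=0)
            new_err = (new ^ E).sum(axis=0)
            keep = new_err < old_err           # ...but keep only where it helps
            gain = (old_err[keep] - new_err[keep]).sum()
            if gain > best_gain:
                best_gain = gain
                best_cover = np.where(keep[None, :], new, cover)
        if best_cover is None:                 # no improving pattern left
            break
        cover = best_cover
    return cover.astype(int)

# The 4x4 evidence matrix of Example 6; a rank-2 greedy cover leaves 2 errors
# (OR-covers can never remove spurious ones, so greedy is weaker than ASSO).
E = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0],
              [1, 0, 0, 1]])
cover = greedy_bmf(E, 2)
```

The optimal rank-2 BMF of this matrix achieves a single error; the gap illustrates why a dedicated algorithm such as ASSO is used in the experiments.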
Since the connection between Boolean rank and exact inference follows directly from Theorem 2, the more interesting question in Q2 is whether Boolean rank is also indicative of the complexity of approximate lifted inference. We therefore investigate its influence on the Lifted MCMC algorithm (LMCMC) [29] with Rao-Blackwellized probability estimation [30]. LMCMC interleaves standard MCMC steps (here Gibbs sampling) with jumps to states that are symmetric in the graphical model, in order to speed up mixing of the chain. We run LMCMC on the WebKB MLN of Davis and Domingos [31], which has 333 first-order formulas and over 1 million random variables. It classifies web pages into 6 categories, based on their link structure and the 50 most predictive words they contain. We learn its parameters with the Alchemy package and obtain evidence sets of varying Boolean rank from the factorizations of Figure 1 (when synthetically generating evidence of these ranks, results are comparable). For these, we run both vanilla and lifted MCMC, and measure the KL divergence (KLD) between the marginal distribution at each iteration and a ground truth obtained from 3 million iterations on the corresponding evidence set. Figure 3 plots the KLD of LMCMC divided by the KLD of MCMC. It shows that the improvement of LMCMC over MCMC diminishes with increasing Boolean rank, answering Q2 positively.

To answer Q3, we look at the KLD between the approximate marginals Pr(· | e′_n), conditioned on evidence of rank n, and the true marginals Pr(· | e), conditioned on exact evidence. As this requires a good estimate of Pr(· | e), we make our learned WebKB model more tractable by removing the formulas about word content. For two approximations e′_a and e′_b with ranks a < b, we expect LMCMC to converge faster to Pr(· | e′_a) than to Pr(· | e′_b), as suggested by Figure 3. However, because Pr(· | e′_a) is a cruder approximation of Pr(· | e) than Pr(· | e′_b) is, the KLD at convergence should be worse for a
than for b. (Runtime per iteration is comparable for both algorithms, and the BMF runtime is negligible.)

Figure 4: Error of LMCMC under different low-rank approximations of WebKB, measured as KLD from the true marginals: (a) Cornell, ranks 2 and 10; (b) Cornell, ranks 75 and 150; (c) Washington, ranks 75 and 150; (d) Wisconsin, ranks 75 and 150. Each panel also shows ground MCMC and lifted MCMC on the exact evidence.

Hence, we expect to see a trade-off, where the lowest ranks are optimal in the beginning, higher ranks become optimal later on, and the exact model is optimal at convergence. Figure 4 shows exactly that, for a representative sample of ranks and data sets. In Figure 4(a), ranks 2 and 10 outperform LMCMC with the exact evidence at first. Exact evidence overtakes rank 2 after 40k iterations, and rank 10 after 50k. After 80k iterations, even non-lifted MCMC outperforms these crude approximations. Figure 4(b) shows the other side of the spectrum, where the rank 75 and 150 approximations are overtaken at iterations 90k and 125k. Figure 4(c) is representative of the other datasets. Note that at around iteration 50k, rank 75 in turn outperforms the rank 150 approximation, which has fewer symmetries and does not permit as much lifting. Finally, Figure 4(d) shows the ideal case for low-rank approximation. This is the largest dataset, and therefore the most challenging inference task. Here, LMCMC on e converges slowly compared to its approximations e′, and e′ results in almost perfect marginals.
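The per-iteration error used throughout these plots is a KL divergence between estimated and ground-truth marginals. For the Bernoulli marginals of individual atoms this reduces to a simple sum; the following is one plausible formulation (the paper does not spell out its exact estimator, so this sketch is an assumption):

```python
import math

def bernoulli_kld(p_true, p_est, eps=1e-12):
    """Mean KL divergence between per-atom Bernoulli marginals.

    p_true, p_est: sequences of ground-truth and estimated marginal
    probabilities, one entry per query atom. eps guards against log(0).
    """
    total = 0.0
    for p, q in zip(p_true, p_est):
        q = min(max(q, eps), 1 - eps)
        for a, b in ((p, q), (1 - p, 1 - q)):
            if a > 0:
                total += a * math.log(a / b)
    return total / len(p_true)
```

For example, an estimate of 0.5 for an atom that is true with certainty contributes log 2 ≈ 0.69 nats, while a perfect estimate contributes nothing.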
The crossover point where exact inference outperforms the approximation is never reached in practice. This answers Q3 positively.

7 Conclusions

We presented two main results. The first is a more precise complexity characterization of conditioning on binary evidence, in terms of its Boolean rank. The second is a technique for approximating binary evidence by a low-rank Boolean matrix factorization. This is a first type of over-symmetric evidence approximation that can speed up lifted inference. We showed empirically that low-rank BMF speeds up approximate inference while yielding improved approximations. For future work, we want to evaluate the practical implications of the developed theory for other lifted inference algorithms, such as lifted BP, and look at the performance of over-symmetric evidence approximation on machine learning tasks such as collective classification. There are many remaining challenges in finding good evidence-approximation schemes, including ones that are query-specific (cf. de Salvo Braz et al. [32]) or that incrementally run inference to find better approximations (cf. Kersting et al. [33]). Furthermore, we want to investigate other subsets of binary relations for which conditioning could be efficient, in particular functional relations p(X, Y), where each X has at most a limited number of associated Y values.

Acknowledgments

We thank Pauli Miettinen, Mathias Niepert, and Jilles Vreeken for helpful suggestions. This work was supported by ONR grant #N00014-12-1-0423, NSF grant #IIS-1118122, NSF grant #IIS-0916161, and the Research Foundation-Flanders (FWO-Vlaanderen).

References

[1] L. Getoor and B. Taskar, editors. An Introduction to Statistical Relational Learning. MIT Press, 2007.
[2] Luc De Raedt, Paolo Frasconi, Kristian Kersting, and Stephen Muggleton, editors. Probabilistic Inductive Logic Programming: Theory and Applications. Springer-Verlag, 2008.
[3] David Poole. First-order probabilistic inference.
In Proceedings of IJCAI, pages 985-991, 2003.
[4] Manfred Jaeger and Guy Van den Broeck. Liftability of probabilistic inference: Upper and lower bounds. In Proceedings of the 2nd International Workshop on Statistical Relational AI, 2012.
[5] Guy Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In Advances in Neural Information Processing Systems 24 (NIPS), pages 1386-1394, 2011.
[6] Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1319-1325, 2005.
[7] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence, 2008.
[8] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of IJCAI, pages 2178-2185, 2011.
[9] N. Taghipour, D. Fierens, J. Davis, and H. Blockeel. Lifted variable elimination with arbitrary constraints. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, 2012.
[10] H. H. Bui, T. N. Huynh, and R. de Salvo Braz. Exact lifted inference with distinct soft evidence on every object. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, 2012.
[11] Guy Van den Broeck and Jesse Davis. Conditioning in first-order knowledge compilation and lifted probabilistic inference. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, 2012.
[12] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence (UAI), pages 256-265, 2011.
[13] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference seen from the other side: The tractable features.
In Proceedings of the 24th Conference on Neural Information Processing Systems (NIPS), 2010.
[14] Guy Van den Broeck, Wannes Meert, and Jesse Davis. Lifted generative parameter learning. In Statistical Relational AI (StaRAI) Workshop, July 2013.
[15] K. Kersting, B. Ahmadi, and S. Natarajan. Counting belief propagation. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI), pages 277-284, 2009.
[16] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1):107-136, 2006.
[17] D. Seung and L. Lee. Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems, 13:556-562, 2001.
[18] M. Berry, M. Browne, A. Langville, V. Pauca, and R. Plemmons. Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics and Data Analysis, 2006.
[19] Pauli Miettinen, Taneli Mielikäinen, Aristides Gionis, Gautam Das, and Heikki Mannila. The discrete basis problem. In Knowledge Discovery in Databases, pages 335-346. Springer, 2006.
[20] Pauli Miettinen, Taneli Mielikäinen, Aristides Gionis, Gautam Das, and Heikki Mannila. The discrete basis problem. IEEE Transactions on Knowledge and Data Engineering, 20(10):1348-1362, 2008.
[21] Pauli Miettinen. Sparse Boolean matrix factorizations. In IEEE 10th International Conference on Data Mining (ICDM), pages 935-940. IEEE, 2010.
[22] Boris Mirkin. Mathematical Classification and Clustering, volume 11. Kluwer Academic Publishers, 1996.
[23] Floris Geerts, Bart Goethals, and Taneli Mielikäinen. Tiling databases. In Discovery Science, 2004.
[24] Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
[25] Pauli Miettinen. Matrix Decomposition Methods for Data Mining: Computational Complexity and Algorithms. PhD thesis, 2009.
[26] Guy Van den Broeck. Lifted Inference and Learning in Statistical Relational Models.
PhD thesis, KU Leuven, January 2013.
[27] Hans L. Bodlaender. Treewidth: Algorithmic Techniques and Results. Springer, 1997.
[28] M. Craven and S. Slattery. Relational learning with statistical predicate invention: Better models for hypertext. Machine Learning Journal, 43(1/2):97-119, 2001.
[29] Mathias Niepert. Markov chains on orbits of permutation groups. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[30] Mathias Niepert. Symmetry-aware marginal density estimation. In Proceedings of the 27th Conference on Artificial Intelligence (AAAI), 2013.
[31] Jesse Davis and Pedro Domingos. Deep transfer via second-order Markov logic. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 217-224, 2009.
[32] R. de Salvo Braz, S. Natarajan, H. Bui, J. Shavlik, and S. Russell. Anytime lifted belief propagation. In Proceedings of the 6th International Workshop on Statistical Relational Learning, 2009.
[33] K. Kersting, Y. El Massaoudi, B. Ahmadi, and F. Hadiji. Informed lifting for message-passing. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010.
Pass-Efficient Unsupervised Feature Selection

Crystal Maung, Department of Computer Science, The University of Texas at Dallas, Crystal.Maung@gmail.com
Haim Schweitzer, Department of Computer Science, The University of Texas at Dallas, HSchweitzer@utdallas.edu

Abstract

The goal of unsupervised feature selection is to identify a small number of important features that can represent the data. We propose a new algorithm, a modification of the classical pivoted QR algorithm of Businger and Golub, that requires a small number of passes over the data. The improvements are based on two ideas: keeping track of multiple features in each pass, and skipping calculations that can be shown not to affect the final selection. Our algorithm selects the exact same features as the classical pivoted QR algorithm, and has the same favorable numerical stability. We describe experiments on real-world datasets which sometimes show improvements of several orders of magnitude over the classical algorithm. These results appear to be competitive with recently proposed randomized algorithms in terms of pass efficiency and run time. On the other hand, the randomized algorithms may produce more accurate features, at the cost of a small probability of failure.

1 Introduction

Work on unsupervised feature selection has received considerable attention; see, e.g., [1, 2, 3, 4, 5, 6, 7, 8]. In numerical linear algebra, unsupervised feature selection is known as the column subset selection problem, where one attempts to identify a small subset of matrix columns that can approximate the entire column space of the matrix. See, e.g., [9, Chapter 12]. The distinction between supervised and unsupervised feature selection is as follows: in the supervised case one is given labeled objects as training data, and features are selected to help predict that label; in the unsupervised case nothing is known about the labels. We describe an improvement to the classical Businger and Golub pivoted QR algorithm [9, 10].
We refer to the original algorithm as the QRP, and to our improved algorithm as the IQRP. The QRP selects features one by one, using k passes to select k features. In each pass the selected feature is the one that is hardest to approximate by the previously selected features. We achieve improvements in run time and pass efficiency without affecting the selection or the excellent numerical stability of the original algorithm. Our algorithm is deterministic, runs in a small number of passes over the data, and is based on the following two ideas:

1. In each pass we identify multiple features that are hard to approximate with the previously selected features. A second selection step among these features uses an upper bound on unselected features that enables identifying multiple features that are guaranteed to have been selected by the QRP. See Section 4 for details.

2. Since the error of approximating a feature can only decrease when additional features are added to the selection, there is no need to evaluate candidates whose error is already "too small". This allows a significant reduction in the number of candidate features that need to be considered in each pass. See Section 4 for details.

2 Algorithms for unsupervised feature selection

The algorithms that we consider take as input large matrices of numeric values. We denote by m the number of rows, by n the number of columns (features), and by k the number of features to be selected. Criteria for evaluating algorithms include their run time and memory requirements, the number of passes over the data, and the algorithm accuracy. The accuracy is a measure of the error of approximating the entire data matrix by a linear combination of the selection. We review some classical and recent algorithms for unsupervised feature selection.

2.1 Related work in numerical linear algebra

The QRP was established by Businger and Golub [9, 10]. We discuss it in detail in Section 3.
It requires k passes for selecting k features, and its run time is 4kmn − 2k^2(m + n) + 4k^3/3. A recent study [11] experimentally compares the accuracy of the QRP as a feature selection algorithm to some recently proposed state-of-the-art algorithms. Even though the accuracy of the QRP is somewhat below that of the other algorithms, the results are quite similar. (The only exception was the performance on the Kahan matrix, where the QRP was much less accurate.)

The Gu and Eisenstat algorithm [1] was considered the most accurate prior to the work on randomized algorithms that started with [12]. It computes an initial selection (typically by using the QRP), and then repeatedly swaps selected columns with unselected columns. The swapping is done so that the product of singular values of the matrix formed by the selected columns increases with each swap. The algorithm requires random access memory, and it is not clear how to implement it as a series of passes over the data. Its run time is O(m^2 n).

2.2 Randomized algorithms

Randomized algorithms come with a small probability of failure, but otherwise appear to be more accurate than the classical deterministic algorithms. Frieze et al. [12, 13] proposed a randomized algorithm that requires only two passes over the data. This assumes that the norms of all matrix columns are known in advance, and guarantees only an additive approximation error. We discuss the run time and accuracy of several generalizations that followed their studies.

Volume sampling. Deshpande et al. [14] studied a randomized algorithm that samples k-tuples of columns with probability proportional to their "volume". The volume is the square of the product of the singular values of the submatrix formed by these columns. They show that this sampling scheme gives rise to a randomized algorithm that computes the best possible solution in the Frobenius norm.
They describe an efficient O(kmn) randomized algorithm that can be implemented in k passes and approximates this sampling scheme. These results were improved (in terms of accuracy) in [15], by computing the exact volume sampling. The resulting algorithm is slower but much more accurate. Further improvements to the speed of volume sampling in [6] reduced the run time complexity to O(km^2 n). As shown in [15, 6], this optimal (in terms of accuracy) algorithm can also be derandomized, with a deterministic run time of O(km^3 n).

Leverage sampling. The idea behind leverage sampling is to randomly select features with probability proportional to their "leverage". Leverage values are the norms of the rows of the n × k right eigenvector matrix in the truncated SVD expansion of the data matrix. See [16, 2]. In particular, the "two stage" algorithm described in [2] requires only 2 passes if the leverage values are known. Its run time is dominated by the calculation of the leverage values. To the best of our knowledge, the currently best algorithms for estimating leverage values are randomized [17, 18]. One run takes 2 passes and O(mn log n + m^3) time. This is dominated by the mn term, and [18] show that it can be further reduced to the number of nonzero values. We note that these algorithms do not compute reliable leverage values in 2 passes, since they may fail with a relatively high (e.g., 1/3) probability. As stated in [18], "the success probability can be amplified by independent repetition and taking the coordinate-wise median". Therefore, accurate estimates of leverage can be computed in a constant number of passes, but the constant would be larger than 2.

Input: The features (matrix columns) x_1, ..., x_n, and an integer k ≤ n.
Output: An ordered list S of k indices.
1. In the initial pass compute:
   1.1. For i = 1, ..., n set x̃_i = x_i, v_i = |x̃_i|^2.
        (x̃_i is the error vector of approximating x_i by a linear combination of the columns in S.)
        At the end of the pass set z_1 = arg max_i v_i, and initialize S = (z_1).
2. For each pass j = 2, ..., k:
   2.1. For i = 1, ..., n set v_i to the squared error of approximating x_i by a linear combination of the columns in S.
        At the end of pass j set z_j = arg max_i v_i, and add z_j to S.

Figure 1: The main steps of the QRP algorithm.

2.3 Randomized ID

In a recent survey [19], Halko et al. describe how to compute a QR factorization using their randomized Interpolative Decomposition. Their approach produces an accurate Q as a basis of the data matrix column space. They propose an efficient "row extraction" method for computing R that works when k, the desired rank, is similar to the rank of the data matrix. Otherwise the row extraction introduces unacceptable inaccuracies, which leads Halko et al. to recommend an alternative O(kmn) technique in such cases.

2.4 Our result: the complexity of the IQRP

The savings that the IQRP achieves depend on the data. The algorithm takes as input an integer value l, the length of a temporary buffer. As explained in Section 4, our implementation requires temporary storage of l + 1 columns, which takes (l + 1)m floats. The following values depend on the data: the number of passes p, the number of IO-passes q (explained below), and a unit cost of orthogonalization c (see Section 4.3). In terms of l and c the run time is 2mn + 4mnc + 4mlk. Our experiments show that for typical datasets the value of c is below k. For l ≈ k our experiments show that the number of passes is typically much smaller than k. The number of passes is even smaller if one counts IO-passes. To explain what we mean by IO-passes, consider as an example a situation where the algorithm runs three passes over the data. In the first pass all n features are accessed; in the second, only two features; in the third, only one feature. In this case we take the number of IO-passes to be q = 1 + 3/n.
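To make the QRP of Figure 1 concrete, here is a minimal NumPy sketch of the selection loop (a Modified Gram-Schmidt-style variant; the function and variable names are ours, not from the paper):

```python
import numpy as np

def qrp_select(X, k):
    # Greedy pivoted-QR (QRP) selection: in each pass pick the column whose
    # residual, after projecting out the already-selected columns, is largest.
    Xt = np.array(X, dtype=float)        # residual vectors ~x_i
    v = (Xt ** 2).sum(axis=0)            # squared residual norms v_i
    S = []
    for _ in range(k):
        z = int(np.argmax(v))            # hardest-to-approximate feature
        S.append(z)
        q = Xt[:, z] / np.linalg.norm(Xt[:, z])  # new orthonormal direction
        Xt -= np.outer(q, q @ Xt)        # deflate all residuals against q
        v = (Xt ** 2).sum(axis=0)
        v[S] = -np.inf                   # selected columns are fully explained
    return S
```

For example, on X = [[2, 0, 1], [0, 1, 1]] with k = 2, the first pass picks column 0 (largest norm), and after deflation columns 1 and 2 tie, so column 1 is selected next.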
We believe that q is a relevant measure of the algorithm's pass complexity when skipping is cheap, so that the cost of a pass over the data is the amount of data that actually needs to be read.

3 The Businger Golub algorithm (QRP)

In this section we describe the QRP [9, 10], which forms the basis of the IQRP. The main steps are described in Figure 1. There are two standard implementations of Step 2.1 in Figure 1: by means of "Modified Gram-Schmidt" (e.g., [9]), or by Householder orthogonalization (e.g., [9]). Both methods require approximately the same number of flops, but error analysis (see [9]) shows that the Householder approach is significantly more stable.

3.1 Memory-efficient implementations

The implementations shown in Figure 2 update the memory where the matrix A is stored. Specifically, A is overwritten by the R component of the QR factorization. Since we are not interested in R, overwriting A may not be acceptable. The procedure shown in Figure 3 does not overwrite A, but it is more costly. Its flops count is dominated by Steps 1 and 2, which cost at most 4(j − 1)mn at pass j. Summing over j = 1, ..., k gives a total of approximately 2k^2 mn flops.

Modified Gram-Schmidt (compute z_j, q_j, Q_j):
  For i = 1, ..., n:
    1. w_i = q_{j−1}^T x̃_i.
    2. x̃_i ← x̃_i − w_i q_{j−1}.
    3. v_i ← v_i − w_i^2.
  At the end of the pass:
    4. z_j = arg max_i v_i.
    5. q_j = x̃_{z_j} / |x̃_{z_j}|.
    6. Q_j = (Q_{j−1}, q_j).

Householder orthogonalization (compute z_j, h_j, H_j):
  For i = 1, ..., n:
    1. x̃_i ← h_{j−1} x̃_i.
    2. w_i = x̃_i(j) (the j'th coordinate of x̃_i).
    3. v_i ← v_i − w_i^2.
  At the end of the pass:
    4. z_j = arg max_i v_i.
    5. Create the Householder matrix h_j from x̃_{z_j}.
    6. H_j = H_{j−1} h_j.

Figure 2: Standard implementations of Step 2.1 of the QRP.

Modified Gram-Schmidt (compute z_j, q_j, Q_j):
  For i = 1, ..., n:
    1. w_i = Q_{j−1}^T x_i.
    2. v_i = |x_i|^2 − |w_i|^2.
  At the end of the pass:
    3. z_j = arg max_i v_i.
    4. q̃_j = x_{z_j} − Q_{j−1} w_{z_j}, q_j = q̃_j / |q̃_j|.
    5. Q_j = (Q_{j−1}, q_j).

Householder orthogonalization (compute z_j, h_j, H_j):
  For i = 1, ..., n:
    1. y_i = H_{j−1} x_i.
    2.
v_i = Σ_{t=j+1}^{m} y_i(t)^2.
  At the end of the pass:
    3. z_j = arg max_i v_i.
    4. Create h_j from y_{z_j}.
    5. H_j = H_{j−1} h_j.

Figure 3: Memory-efficient implementations of Step 2.1 of the QRP (Modified Gram-Schmidt on the left, Householder orthogonalization on the right).

4 The IQRP algorithm

In this section we describe our main result: the improved QRP. The algorithm maintains three ordered lists of columns: the list F is the input list containing all columns; the list S contains columns that have already been selected; the list L is of size l, where l is a user-defined parameter. For each column x_i in F the algorithm maintains an integer value r_i and a real value v_i. These values can be kept in core or in secondary memory. They are defined as follows:

  r_i ≤ |S|,   v_i = v_i(r_i) = ∥x_i − Q_{r_i} Q_{r_i}^T x_i∥^2     (1)

where Q_{r_i} = (q_1, ..., q_{r_i}) is an orthonormal basis of the first r_i columns in S. Thus, v_i(r_i) is the (squared) error of approximating x_i with the first r_i columns in S. In each pass the algorithm identifies the l candidate columns x_i corresponding to the l largest values of v_i(|S|). That is, the v_i values are computed as the error of predicting each candidate by all columns currently in S. The identified l columns with the largest v_i(|S|) are stored in the list L. In addition, the value of the (l+1)'th largest v_i(|S|) is kept as the constant B_F. Thus, after a pass terminates the following condition holds:

  v_α(r_α) ≤ B_F for all x_α ∈ F \ L.     (2)

The list L and the value B_F can be calculated in one pass using a binary heap data structure, at a cost of at most n log(l + 1) comparisons. See [20, Chapter 9]. The main steps of the algorithm are described in Figure 4.

Details of Steps 2.0 and 2.1 of the IQRP. The threshold value T is defined by:

  T = −∞ if the heap is not full; T = the top of the heap if the heap is full.     (3)

Input: The matrix columns (features) x_1, ..., x_n, and an integer k ≤ n.
Output: An ordered list S of k indices.
1. (The initial pass over F.)
   1.0. Create a min-heap of size l+1.
        In one pass go over x_i, i = 1, ..., n:
   1.1. Set v_i(0) = |x_i|^2, r_i = 0. Fill the heap with the candidates corresponding to the l+1 largest v_i(0).
   1.2. At the end of the pass: Set B_F to the value at the top of the heap. Set L to the heap content excluding the top element. Add to S as many candidates from L as possible using B_F.
2. Repeat until S has k candidates:
   2.0. Create a min-heap of size l+1. Let T be defined by (3). In one pass go over x_i, i = 1, ..., n:
   2.1. Skip x_i if v_i(r_i) ≤ T. Otherwise update v_i, r_i, and the heap.
   2.2. At the end of the pass: Set B_F = T. Set L to the heap content excluding the top element. Add to S as many candidates from L as possible using B_F.

Figure 4: The main steps of the IQRP algorithm.

Thus, when the heap is full, T is the value of v associated with the (l+1)'th largest candidate encountered so far. The details of Step 2.1 are shown in Figure 5. Step A.2.2.1 can be computed using either Gram-Schmidt or Householder, as shown in Figures 2 and 3.

A.1. If v_i(r_i) ≤ T, skip x_i.
A.2. Otherwise check r_i:
   A.2.1. If r_i = |S|, conditionally insert x_i into the heap.
   A.2.2. If r_i < |S|:
      A.2.2.1. Compute v_i(|S|). Set r_i = |S|.
      A.2.2.2. Conditionally insert x_i into the heap.

Figure 5: Details of Step 2.1.

Details of Steps 1.2 and 2.2 of the IQRP. Here we are given the list L and the value B_F satisfying (2). To move candidates from L to S, run the QRP on L as long as the pivot value is above B_F. (The pivot value is the largest value of v_i(|S|) in L.) The details are shown in Figure 6.

B.1. z = arg max_{i∈L} v_i(|S|).
B.2. If v_z(|S|) < B_F, we are done exploiting L.
B.3. Otherwise:
   B.3.1. Move z from L to S.
   B.3.2. Update the remaining candidates in L using either the Gram-Schmidt or the Householder procedure. For example, with Householder:
      B.3.2.1. Create the Householder matrix h_j from x_z.
      B.3.2.2. For all x in L, replace x with h_j x.

Figure 6: Details of Steps 1.2 and 2.2.

4.1 Correctness

In this section we show that the IQRP computes the same selection as the QRP.
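Before the proof, the key invariant behind the skipping test of Step A.1 — v_i(r) is monotonically decreasing in r, so a stale v_i(r_i) upper-bounds the current error — can be illustrated with a simplified sketch. It selects one column per pass (the heap and the buffer L of Figures 4-6 are elided), assuming NumPy:

```python
import numpy as np

def iqrp_like_select(X, k):
    # Simplified IQRP-style selection: lazy error bounds v_i plus skipping.
    # Because v_i(r) only decreases as r grows, a stale v_i is an upper bound
    # on the current error, so columns with v_i <= the current leader can be
    # skipped without ever reading them in this pass.
    m, n = X.shape
    Q = np.zeros((m, 0))                       # orthonormal basis of selection
    v = (np.asarray(X, dtype=float) ** 2).sum(axis=0)  # stale bounds v_i(r_i)
    S = []
    for _ in range(k):
        best, best_i = -1.0, -1
        for i in range(n):
            if i in S or v[i] <= best:
                continue                       # Step A.1: bound cannot win
            w = Q.T @ X[:, i]                  # refresh v_i against current S
            v[i] = (X[:, i] ** 2).sum() - (w ** 2).sum()
            if v[i] > best:
                best, best_i = v[i], i
        S.append(best_i)
        x = np.asarray(X[:, best_i], dtype=float)
        q = x - Q @ (Q.T @ x)                  # Gram-Schmidt step for the pivot
        Q = np.hstack([Q, (q / np.linalg.norm(q))[:, None]])
    return S
```

Since the skip only discards columns whose upper bound already loses to the current maximum, this returns the same selection as the greedy QRP.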
The proof is by induction on j, the number of columns in S. For j = 0 the QRP selects x_j with v_j = |x_j|^2 = max_i |x_i|^2. The IQRP selects v′_j as the largest among the l largest values in F. Therefore v′_j = max_{x_i∈L} |x_i|^2 = max_{x_i∈F} |x_i|^2 = v_j.

Now assume that for j = |S| the QRP and the IQRP select the same columns in S (this is the inductive assumption). Let v_j(|S|) be the value of the (j+1)'th selection by the QRP, and let v′_j(|S|) be the value of the (j+1)'th selection by the IQRP. We need to show that v′_j(|S|) = v_j(|S|). The QRP selection of j satisfies: v_j(|S|) = max_{x_i∈F} v_i(|S|). Observe that if x_i ∈ L then r_i = |S|. (Initially L is created from the heap elements that have r_i = |S|. Once S is increased in Step B.3.1, the columns in L are updated according to B.3.2 so that they all satisfy r_i = |S|.) The IQRP selection satisfies:

  v′_j(|S|) = max_{x_i∈L} v_i(|S|) and v′_j(|S|) ≥ B_F.     (4)

Additionally, for all x_α ∈ F \ L:

  B_F ≥ v_α(r_α) ≥ v_α(|S|).     (5)

This follows from (2), the observation that v_α(r) is monotonically decreasing in r, and r_α ≤ |S|. Therefore, combining (4) and (5), we get v′_j(|S|) = max_{x_i∈F} v_i(|S|) = v_j(|S|). This completes the proof by induction.

4.2 Termination

To see that the algorithm terminates, it is enough to observe that at least one column is selected in each pass: the condition at Step B.2 in Figure 6 cannot hold the first time it is tested on a new L, because the value of B_F is the (l+1)'th largest v_i(|S|), while the maximum at B.1 is among the l largest v_i(|S|).

4.3 Complexity

The formulas in this section describe the complexity of the IQRP in terms of the following quantities:

  n  the number of features (matrix columns)
  m  the number of objects (matrix rows)
  k  the number of selected features
  l  a user-provided parameter, 1 ≤ l ≤ n
  p  the number of passes
  q  the number of IO-passes
  c  a unit cost of orthogonalizing F

The value of c depends on the implementation of Step A.2.2.1 in Figure 5.
We write c_memory for the value of c in the memory-efficient implementation, and c_flops for the faster implementation (in terms of flops). We use the following notation: at pass j the number of selected columns is k_j, and the number of columns that were not skipped in Step 2.1 of the IQRP (same as Step A.1) is n_j. The number of flops in the memory-efficient implementation can be shown to be

  flops_memory = 2mn + 4mnc + 4mlk, where c = Σ_{j=2}^{p} (n_j / n) Σ_{j′=1}^{j−1} k_{j′}.     (6)

Observe that c ≤ k^2/2, so that for l < n the worst-case behavior is the same as that of the memory-optimized QRP algorithm, which is O(k^2 mn). We show in Section 5 that the typical run time is much faster; in particular, the dependency on k appears to be linear rather than quadratic. For the faster implementation that overwrites the input it can be shown that:

  flops_time = 2mn + 4m Σ_{i=1}^{n} r̃_i, where r̃_i is the value of r_i at termination.     (7)

Since r̃_i ≤ k − 1, it follows that flops_time ≤ 4kmn. Thus, the worst-case behavior is the same as that of the flops-efficient QRP algorithm.

Memory. The memory-efficient implementation requires km in-core floats, plus additional memory for the heap, which can be reused for the list L. The additional memory to store and manipulate v_i, r_i for i = 1, ..., n is roughly 2n floats. Observe that these memory locations are accessed consecutively, and can be efficiently stored and manipulated out-of-core. The data itself, the matrix A, is stored out-of-core. When the method of Figure 3 is used in A.2.2.1, these matrix values are read-only.

IO-passes. We wish to distinguish between a pass where the entire data is accessed and a pass where most of the data is skipped. This suggests the following definition of the number of IO-passes:

  q = Σ_{j=1}^{p} n_j / n = 1 + Σ_{j=2}^{p} n_j / n.

Number of floating point comparisons. Testing for skipping and manipulating the heap require floating point comparisons. The number of comparisons is n(p − 1 + (q − 1) log_2(l + 1)).
This does not affect the asymptotic complexity since the number of flops is much larger.

5 Experimental results

We describe results on several commonly used datasets. "Day1", with m = 20,000 and n = 3,231,957, is part of the "URL reputation" collection at the UCI Repository. "thrombin", with m = 1,909 and n = 139,351, is the data used in KDD Cup 2001. "Amazon", with m = 1,500 and n = 10,000, is part of the "Amazon Commerce reviews set" and was obtained from the UCI Repository. "gisette", with m = 6,000 and n = 5,000, was used in the NIPS 2003 feature selection challenge.

Measurements. We vary k and report the following: flops_memory and flops_time are the ratios between the number of flops used by the IQRP and kmn, for the memory-efficient orthogonalization and the time-efficient orthogonalization. #passes is the number of passes needed to select k features. #IO-passes is discussed in Sections 2.4 and 4.3; it is the number of times that the entire data is read. Thus, the ratio between the number of IO-passes and the number of passes is the fraction of the data that was not skipped.

Run time. The number of flops of the QRP is between 2kmn and 4kmn. We describe experiments with the list size l taken as l = k. For Day1 the number of flops beats the QRP by a factor of more than 100. For the other datasets the results are not as impressive. There are still significant savings for small and moderate values of k (say up to k = 600), but for larger values the savings are smaller. Most interesting is the observation that the memory-efficient implementation of Step 2.1 is not much slower than the implementation optimized for time. Recall that the memory-optimized QRP is k times slower than the time-optimized QRP; in our experiments the two implementations differ by no more than a factor of 4.

Number of passes. We describe experiments with the list size l taken as l = k, and also with l = 100 regardless of the value of k. The QRP takes k passes to select k features.
For the Day1 dataset we observed a reduction by a factor of between 50 and 250 in the number of passes. For IO-passes, the reduction goes up to a factor of almost 1000. Similar improvements are observed for the Amazon and gisette datasets. For thrombin it is slightly worse, typically a reduction by a factor of about 70. The number of IO-passes is always significantly below the number of passes, giving a reduction by factors of up to 1000. For the recommended setting of l = k we observed the following: in absolute terms, the number of passes was below 10 for most of the data, and the number of IO-passes was below 2 for most of the data.

6 Concluding remarks

This paper describes a new algorithm for unsupervised feature selection. Based on the experiments we recommend using the memory-efficient implementation and setting the parameter l = k. As explained earlier, the algorithm maintains 2 numbers for each column, and these can also be kept in-core. This gives a 2(km + n) memory footprint. Our experiments show that for typical datasets the number of passes is significantly smaller than k. In situations where data can be skipped, the notion of IO-passes may be more accurate than passes: IO-passes indicate the amount of data that was actually read and not skipped.
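A toy, plain-Python sketch of the greedy pivoting that both the QRP and the IQRP realize (`qrp_select` is a hypothetical name; the heap, the list L, and the out-of-core bookkeeping are omitted):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def qrp_select(cols, k):
    """Greedily select k column indices; v[i] tracks the squared residual norm v_i(|S|)."""
    cols = [list(c) for c in cols]
    v = [dot(c, c) for c in cols]
    selected = []
    for _ in range(k):
        # QRP pivot rule: pick the column with the largest residual norm.
        j = max((i for i in range(len(cols)) if i not in selected), key=lambda i: v[i])
        selected.append(j)
        norm = v[j] ** 0.5
        if norm == 0.0:
            break
        q = [x / norm for x in cols[j]]
        # Orthogonalize the remaining columns against q; each v_i can only decrease,
        # which is the monotonicity of v_i(r) used in the correctness argument of Section 4.
        for i in range(len(cols)):
            if i in selected:
                continue
            r = dot(q, cols[i])
            cols[i] = [x - r * qx for x, qx in zip(cols[i], q)]
            v[i] -= r * r
    return selected
```

The IQRP computes the same selections while deferring most of the orthogonalization work, which is where the savings in passes and IO come from.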
[Figure 7 shows, for each of four datasets (Day1: m = 20,000, n = 3,231,957; thrombin: m = 1,909, n = 139,351; Amazon: m = 1,500, n = 10,000; gisette: m = 6,000, n = 5,000), plots of flops/kmn (flops_memory and flops_time) and of the number of passes and IO-passes, for l = k and for l = 100, as k varies up to 1000.]

Figure 7: Results of applying the IQRP to several datasets with varying k, and l = k.

The performance of the IQRP depends on the data. Therefore, the improvements that we observe can also be viewed as an indication that typical datasets are "easy". This appears to suggest that worst case analysis should not be considered as the only criterion for evaluating feature selection algorithms. Comparing the IQRP to the current state-of-the-art randomized algorithms that were reviewed in Section 2.2, we observe that the IQRP is competitive in terms of the number of passes and appears to outperform these algorithms in terms of the number of IO-passes. On the other hand, it may be less accurate.
Adaptive dropout for training deep neural networks

Lei Jimmy Ba, Brendan Frey
Department of Electrical and Computer Engineering, University of Toronto
{jimmy, frey}@psi.utoronto.ca

Abstract

Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called ‘standout’ in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This ‘adaptive dropout network’ can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using back-propagation, and using stochastic gradient descent. Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural network parameters, suggesting that a good dropout network regularizes activities according to magnitude. When evaluated on the MNIST and NORB datasets, we found that our method achieves lower classification error rates than other feature learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80% and 5.8% errors on the MNIST and NORB test sets, which is better than state-of-the-art results obtained using feature learning methods, including those that use convolutional architectures.

1 Introduction

For decades, deep networks with broad hidden layers and full connectivity could not be trained to produce useful results, because of overfitting, slow convergence and other issues. One approach that has proven to be successful for unsupervised learning of both probabilistic generative models and auto-encoders is to train a deep network layer by layer in a greedy fashion [7].
Each layer of connections is learnt using contrastive divergence in a restricted Boltzmann machine (RBM) [6] or backpropagation through a one-layer auto-encoder [1], and then the hidden activities are used to train the next layer. When the parameters of a deep network are initialized in this way, further fine tuning can be used to improve the model, e.g., for classification [2]. The unsupervised pre-training stage is a crucial component for achieving competitive overall performance on classification tasks; e.g., Coates et al. [4] have achieved improved classification rates by using different unsupervised learning algorithms. Recently, a technique called dropout was shown to significantly improve the performance of deep neural networks on various tasks [8], including vision problems [10]. Dropout randomly sets hidden unit activities to zero with a probability of 0.5 during training. Each training example can thus be viewed as providing gradients for a different, randomly sampled architecture, so that the final neural network efficiently represents a huge ensemble of neural networks, with good generalization capability. Experimental results on several tasks show that dropout frequently and significantly improves the classification performance of deep architectures. Injecting noise for the purpose of regularization has been studied previously, but in the context of adding noise to the inputs [3], [21] and to network components [16]. Unfortunately, when dropout is used to discriminatively train a deep fully connected neural network on input with high variation, e.g., in viewpoint and angle, little benefit is achieved (Section 5.5), unless spatial structure is built in.

In this paper, we describe a generalization of dropout, where the dropout probability for each hidden variable is computed using a binary belief network that shares parameters with the deep network. Our method works well both for unsupervised and supervised learning of deep networks.
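For contrast with the adaptive scheme, the fixed-probability dropout described above can be sketched as follows (a toy helper, not the paper's implementation; the seeded generator is only for reproducibility):

```python
import random

def dropout(activities, p_drop=0.5, rng=None):
    """Standard dropout: zero each hidden activity independently with probability p_drop."""
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < p_drop else a for a in activities]
```

At test time, standard dropout replaces this stochastic masking by scaling each activity by its keep probability, which is the expectation of the mask.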
We present results on the MNIST and NORB datasets showing that our ‘standout’ technique can learn better feature detectors for handwritten digit and object recognition tasks. Interestingly, we also find that our method enables the successful training of deep auto-encoders from scratch, i.e., without layer-by-layer pre-training.

2 The model

The original dropout technique [8] uses a constant probability for omitting a unit, so a natural question we considered is whether it may help to let this probability be different for different hidden units. In particular, there may be hidden units that can individually make confident predictions for the presence or absence of an important feature or combination of features. Dropout will ignore this confidence and drop the unit out 50% of the time. Viewed another way, suppose that after dropout is applied, it is found that several hidden units are highly correlated in the pre-dropout activities. They could be combined into a single hidden unit with a lower dropout probability, freeing up hidden units for other purposes. We denote the activity of unit j in a deep neural network by $a_j$ and assume that its inputs are $\{a_i : i < j\}$. In dropout, $a_j$ is randomly set to zero with probability 0.5. Let $m_j$ be a binary variable that is used to mask the activity $a_j$, so that its value is
$$a_j = m_j \, g\Big(\sum_{i:i<j} w_{j,i} a_i\Big), \qquad (1)$$
where $w_{j,i}$ is the weight from unit i to unit j, $g(\cdot)$ is the activation function, and $a_0 = 1$ accounts for biases. Whereas in standard dropout $m_j$ is Bernoulli with probability 0.5, here we use an adaptive dropout probability that depends on input activities:
$$P(m_j = 1 \mid \{a_i : i < j\}) = f\Big(\sum_{i:i<j} \pi_{j,i} a_i\Big), \qquad (2)$$
where $\pi_{j,i}$ is the weight from unit i to unit j in the standout network, or adaptive dropout network; $f(\cdot)$ is a sigmoidal function, $f : \mathbb{R} \to [0, 1]$. We use the logistic function, $f(z) = 1/(1 + \exp(-z))$.
The standout network is an adaptive dropout network that can be viewed as a binary belief network that overlays the neural network and stochastically adapts its architecture, depending on the input. Unlike a traditional belief network, the distribution over the output variable is not obtained by marginalizing over the hidden mask variables. Instead, the distribution over the hidden mask variables should be viewed as specifying a Bayesian posterior distribution over models. Traditional Bayesian inference generates a posterior distribution that does not depend on the input at test time, whereas the posterior distribution described here does depend on the test input. At first, this may seem inappropriate. However, if we could exactly compute the Bayesian posterior distribution over neural networks (parameters and architectures), we would find strong correlations between components, such as the connectivity and weight magnitudes in one layer and the connectivity and weight magnitudes in the next layer. The standout network described above can be viewed as approximately taking into account these dependencies through the use of a parametric family of distributions. The standout method described here can be simplified to obtain other dropout techniques. The original dropout method is obtained by clamping $\pi_{j,i} = 0$ for $0 \leq i < j$. Another interesting setting is obtained by clamping $\pi_{j,i} = 0$ for $1 \leq i < j$, but learning the input-independent dropout parameter $\pi_{j,0}$ for each unit $a_j$. As in standard dropout, to process an input at test time, the stochastic feedforward process is replaced by taking the expectation of equation 1:
$$E[a_j] = f\Big(\sum_{i:i<j} \pi_{j,i} a_i\Big)\, g\Big(\sum_{i:i<j} w_{j,i} a_i\Big). \qquad (3)$$
We found that this method provides very similar results to randomly simulating the stochastic process and computing the expected output of the neural network.
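A minimal sketch of equations (1)-(3) for a single unit, with $g(\cdot)$ taken as the ReLU used later in the paper; the weights below are illustrative, not learned:

```python
import math
import random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

def standout_unit(a_prev, w, pi, rng):
    """One stochastic forward pass for a single unit: Eq. (2), then the mask in Eq. (1)."""
    keep_p = logistic(sum(p * a for p, a in zip(pi, a_prev)))  # P(m_j = 1 | inputs)
    m = 1 if rng.random() < keep_p else 0
    return m * relu(sum(wi * a for wi, a in zip(w, a_prev)))

def standout_unit_expected(a_prev, w, pi):
    """Deterministic test-time value, Eq. (3): f(pi . a) * g(w . a)."""
    return logistic(sum(p * a for p, a in zip(pi, a_prev))) * \
           relu(sum(wi * a for wi, a in zip(w, a_prev)))
```

Averaging many stochastic passes approaches the deterministic expression, matching the paper's observation that the two give very similar results.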
3 Learning

For a specific configuration m of the mask variables, let L(m, w) denote the likelihood of a training set or a minibatch, where w is the set of neural network parameters; it may include a prior as well. The dependence of L on the input and output has been suppressed for notational simplicity. Given the current dropout parameters π, the standout network acts like a binary belief network that generates a distribution over the mask variables for the training set or minibatch, denoted P(m|π, w). Again, we have suppressed the dependence on the input to the neural network. As described above, this distribution should not be viewed as the distribution over hidden variables in a latent variable model, but as an approximation to a Bayesian posterior distribution over model architectures. The goal is to adjust π and w to make P(m|π, w) close to the true posterior over architectures as given by L(m, w), while also adjusting L(m, w) so as to maximize the data likelihood w.r.t. w. Since both the approximate posterior P(m|π, w) and the likelihood L(m, w) depend on the neural network parameters, we use a crude approximation that we found works well in practice. If the approximate posterior were as close as possible to the true posterior, then the derivative of the free energy F(P, L) w.r.t. P would be zero and we could ignore terms of the form ∂P/∂w. So, we adjust the neural network parameters using the approximate derivative
$$-\sum_{m} P(m \mid \pi, w)\, \frac{\partial}{\partial w} \log L(m, w), \qquad (4)$$
which can be computed by sampling from P(m|π, w). For a given setting of the neural network parameters, the standout network can in principle be adjusted to be closer to the Bayesian posterior by following the derivative of the free energy F(P, L) w.r.t. π. This is difficult in practice, so we use an approximation where we assume the approximate posterior is correct and sample a configuration of m from it.
Then, for each hidden unit, we consider $m_j = 0$ and $m_j = 1$ and determine the partial contribution to the free energy. The standout network parameters are adjusted for that hidden unit so as to decrease the partial contribution to the free energy. Namely, the standout network updates are obtained by sampling the mask variables using the current standout network, performing forward propagation in the neural network, and computing the data likelihood. The mask variables are sequentially perturbed by combining the standout network probability for the mask variable with the data likelihood under the neural network, using a partial forward propagation. The resulting mask variables are used as complete data for updating the standout network. The above learning technique is approximate, but works well in practice and achieves models that outperform standard dropout and other feature learning techniques, as described below.

Algorithm 1: Standout learning algorithm (alg1 and alg2)
  Notation: H(·) is the Heaviside step function
  Input: w, π, α, β
  alg1: initialize w, π randomly;  alg2: initialize w randomly, set π = w
  while not stopping criteria do
      for hidden unit j = 1, 2, ... do
          P(m_j = 1 | {a_i : i < j}) = f(α Σ_{i:i<j} π_{j,i} a_i + β)
          m_j ~ P(m_j = 1 | {a_i : i < j})
          a_j = m_j g(Σ_{i:i<j} w_{j,i} a_i)
      end
      Update neural network parameters w using ∂/∂w log L(m, w)
      /* alg1 */
      for hidden unit j = 1, 2, ... do
          t_j = H(L(m, w | m_j = 1) − L(m, w | m_j = 0))
      end
      Update standout network π using target t
      /* alg2 */
      Update standout network π using π ← w
  end

3.1 Stochastic adaptive mixtures of local experts

A neural network of N hidden units can be viewed as $2^N$ possible models, given the standout mask M. Each of the $2^N$ models acts like a separate "expert" network that performs well for a subset of the input space. Training all $2^N$ models separately could easily over-fit the data, but weight sharing among the models can prevent over-fitting.
Therefore, the standout network, much like a gating network, also produces a distributed representation that stochastically chooses which expert to turn on for a given input. This means $2^N$ models are chosen by N binary numbers in this distributed representation. The standout network partitions the input space into different regions that are suitable for each expert. We can visualize the effect of the standout network by showing the units that output high standout probability for one class but not others. The standout network learns that some hidden units are important for one class and tends to keep those. These hidden units are then more likely to be dropped out when the input comes from a different class.

Figure 1: Weights from hidden units that are least likely to be dropped out, for examples from each of the 10 classes, for (top) auto-encoder and (bottom) discriminative neural networks trained using standout.

Figure 2: First layer standout network filters and neural network filters learnt from MNIST data using our method.

4 Exploratory experiments

Here, we study different aspects of our method using MNIST digits (see below for more details). We trained a shallow one-hidden-layer auto-encoder on MNIST using the approximate learning algorithm. We can visualize the effect of the standout network by showing the units that output low dropout probability for one class but not others. The standout network learns that some hidden units are important for one class and tends to keep those. These hidden units are more likely to be dropped when the input comes from a different class (see figure 1). The first layer filters of both the standout network and the neural network are shown in figure 2. We noticed that the weights in the two networks are very similar. Since the learning algorithm for adjusting the dropout parameters is computationally burdensome (see above), we considered tying the parameters w and π.
To account for different scales and shifts, we set π = αw + β, where α and β are learnt. Concretely, we found empirically that the standout network parameters trained in this way are quite similar (although not identical) to the neural network parameters, up to an affine transformation. This motivated our second algorithm, alg2 in Algorithm 1, where the neural network parameters are trained as described in Section 3, but the standout parameters are set to an affine transformation of the neural network parameters with hyper-parameters α and β. These hyper-parameters are determined as explained below. We found that this technique works very well in practice, for the MNIST and NORB datasets (see below). For example, for unsupervised learning on MNIST using the architecture described below, we obtained 153 errors for tied parameters and 158 errors for separately learnt parameters. This tied-parameter learning algorithm is used for the experiments in the rest of the paper. In the above description of our method, we mentioned two hyper-parameters that need to be considered: the scale parameter α and the bias parameter β. Here we explore the choice of these parameters by presenting some experimental results obtained by training a dropout model as described below using MNIST handwritten digit images. α controls the sensitivity of the dropout function to the weighted sum of inputs that is used to determine the hidden activity. In particular, α scales the weighted sum of the activities from the layer before. In contrast, the bias β shifts the dropout probability to be high or low and ultimately controls the sparsity of the hidden unit activities. A model with a more negative β will have most of its hidden activities concentrated near zero. Figure 3(a) illustrates how choices of α and β change the dependence of the dropout probability on the input.
It shows a histogram of hidden unit activities after training networks with different α's and β's on MNIST images.

Figure 3: Histograms of hidden unit activities for various choices of hyper-parameters using the logistic dropout function, including configurations that are equivalent to dropout and to no dropout-based regularization (AE), and histograms of hidden unit activities for various dropout functions.

Various standout functions f(·). We also consider forms of the dropout function other than the logistic function, as shown in figure 3(b). The effect of different functional forms can be observed in the histogram of the activities after training on the MNIST images. The logistic dropout function creates a sparse distribution of activation values, whereas functions such as f(z) = 1 − 4(1 − σ(z))σ(z) produce a multi-modal distribution over the activation values.

5 Experimental results

We consider both unsupervised learning and discriminative learning tasks, and compare results obtained using standout to those obtained using restricted Boltzmann machines (RBMs) and auto-encoders trained using dropout, for unsupervised feature learning tasks. We also investigate classification performance by applying standout during discriminative training using the MNIST and NORB [11] datasets. In our experiments, we have made a few engineering choices that are consistent with previous publications in the area, so that our results are comparable to the literature. We used ReLU units, a linear momentum schedule, and an exponentially decaying learning rate (c.f. Nair et al. 2009 [13]; Hinton et al. 2012 [8]). In addition, we used cross-validation to search over the learning rate (0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03), the values of α and β (−2, −1.5, −1, −0.5, 0, 0.5, 1, 1.5, 2) and, for the NORB dataset, the number of hidden units (1000, 2000, 4000, 6000).
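The tied parameterization used by alg2, where the keep probability is $f(\alpha \sum_i w_{j,i} a_i + \beta)$ with the standout weights tied to the neural network weights, can be sketched as follows (the inputs below are illustrative):

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def tied_keep_prob(a_prev, w, alpha, beta):
    """Keep probability f(alpha * sum_i w_ji a_i + beta), standout weights tied to w."""
    return logistic(alpha * sum(wi * ai for wi, ai in zip(w, a_prev)) + beta)
```

With α = 1 and β = 0 this reduces to the logistic of the unit's pre-activation; driving β negative pushes keep probabilities, and hence activities, toward zero, which is the sparsity effect described above.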
5.1 Datasets

The MNIST handwritten digit dataset is generally considered a well-studied problem, which offers the ability to ensure that new algorithms produce sensible results when compared to the many other techniques that have been benchmarked. It consists of ten classes of handwritten digits, ranging from 0 to 9. There are, in total, 60,000 training images and 10,000 test images. Each image is 28×28 pixels in size. Following the common convention, we randomly separate the original training set into 50,000 training cases and 10,000 cases used for validating the choice of hyper-parameters. We concatenate all the pixels in an image in a raster scan fashion to create a 784-dimensional vector. The task is to predict the 10 class labels from the 784-dimensional input vector. The small NORB normalized-uniform dataset contains 24,300 training examples and 24,300 test examples. It consists of 50 different objects from five classes: cars, trucks, planes, animals, and humans. Each data point is represented by a stereo image pair of size 96×96 pixels. The training and test sets use different object instances, and images are created under different lighting conditions, elevations and azimuths. Performing well on NORB demands learning algorithms that learn features which generalize to the test set, and that can handle the large input dimension. This makes NORB significantly more challenging than the MNIST dataset. The objects in the NORB dataset are 3D, seen under different out-of-plane rotations, and so on. Therefore, the models trained on NORB have to learn and store implicit representations of 3D structure, lighting and so on. We formulate the data vector following Snoek et al. [17] by down-sampling from 96×96 to 32×32, so that the final training data vector has 2048 dimensions. Data points have the mean subtracted and are divided by the standard deviation along each input dimension across the whole training set to normalize the contrast.
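The per-dimension contrast normalization just described can be sketched as follows (plain Python, toy scale; `normalize_features` is a hypothetical helper name):

```python
import math

def normalize_features(X):
    """Subtract the per-dimension mean and divide by the per-dimension standard
    deviation, both computed across the whole training set X (rows are examples)."""
    m, n = len(X), len(X[0])
    means = [sum(row[j] for row in X) / m for j in range(n)]
    stds = []
    for j in range(n):
        s = math.sqrt(sum((row[j] - means[j]) ** 2 for row in X) / m)
        stds.append(s if s > 0 else 1.0)  # guard against constant dimensions
    return [[(row[j] - means[j]) / stds[j] for j in range(n)] for row in X]
```

In practice the same means and standard deviations estimated on the training set would also be applied to the validation and test examples.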
The goal is to predict the five class labels for the previously unseen 24,300 test examples. The training set is separated into 20,000 examples for training and 4,300 for validation.

5.2 Nonlinearity for the feedforward network

We used the ReLU [13] activation function for all of the results reported here, on both unsupervised and discriminative tasks. The ReLU function can be written as g(x) = max(0, x). We found that its use significantly speeds up training, by up to 10-fold, compared to the commonly used logistic activation function. The speed-up we observed can be explained in two ways. First, computations are saved when using max instead of the exponential function. Second, ReLUs do not suffer from the vanishing gradient problem that logistic functions have for very large inputs.

5.3 Momentum

We optimized the model parameters using stochastic gradient descent with the Nesterov momentum technique [19], which can effectively speed up learning when applied to large models compared to standard momentum. When using Nesterov momentum, the cost function J and its derivatives ∂J/∂θ are evaluated at $\theta + v_k$, where
$$v_k = \gamma v_{k-1} + \eta \frac{\partial J}{\partial \theta}$$
is the velocity and θ is the model parameter; γ < 1 is the momentum coefficient and η is the learning rate. Nesterov momentum takes into account the velocity in parameter space when computing updates, and therefore further reduces oscillations compared to standard momentum. We schedule the momentum coefficient γ to further speed up the learning process: γ starts at 0.5 in the first epoch and linearly increases to 0.99. The momentum stays at 0.99 during the major portion of learning and is then linearly ramped down to 0.5 towards the end of learning.

5.4 Computation time

We used the publicly available gnumpy library [20] to implement our models. The models mentioned in this work are trained on a single Nvidia GTX 580 GPU.
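The momentum machinery of Section 5.3 can be sketched as follows; the update uses the common look-ahead form of Nesterov momentum (with the velocity descending the gradient), and the ramp lengths are illustrative, since the paper specifies only the endpoints 0.5 and 0.99:

```python
def gamma_schedule(epoch, n_epochs, ramp_up, ramp_down, lo=0.5, hi=0.99):
    """Linear ramp lo -> hi, plateau at hi, then linear ramp back down to lo."""
    if epoch < ramp_up:
        return lo + (hi - lo) * epoch / ramp_up
    if epoch > n_epochs - ramp_down:
        return hi - (hi - lo) * (epoch - (n_epochs - ramp_down)) / ramp_down
    return hi

def nesterov_step(theta, v, grad_fn, eta, gamma):
    """One Nesterov update: the gradient is evaluated at the look-ahead point."""
    g = grad_fn(theta + gamma * v)
    v = gamma * v - eta * g
    return theta + v, v
```

As a sanity check, running these updates on a simple quadratic cost drives the parameter toward the minimum while the schedule ramps γ up and back down.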
As noted in Algorithm 1, the first algorithm is relatively slow, since the number of computations is O(n²), where n is the number of hidden units. The second algorithm is much faster and takes O(kn) time, where k is the number of configurations of the hyper-parameters α and β that are searched over. In particular, for a 784-1000-784 auto-encoder model with mini-batches of size 100 and 50,000 training cases on a GTX 580 GPU, learning takes 1.66 seconds per epoch for standard dropout and 1.73 seconds for our second algorithm. The computational cost of the improved representations produced by our algorithm is that a hyper-parameter search is needed. We note that some other recently developed dropout-related methods, such as maxout, also involve an additional computational factor.

5.5 Unsupervised feature learning

Having good features is crucial for obtaining competitive performance in classification and other high-level tasks. Learning algorithms that can take advantage of unlabeled data are appealing due to the increasing amount of unlabeled data. Furthermore, on more challenging datasets such as NORB, a fully connected discriminative neural network trained from scratch tends to perform poorly, even with the help of dropout. (We trained a two-hidden-layer neural network on NORB, obtained a 13% error rate, and saw no improvement from using dropout.) Such disappointing performance motivated us to investigate unsupervised feature learning and pre-training strategies with our new method. Below, we show that our method can extract useful features in a self-taught fashion. The features extracted using our method not only outperform other common feature learning methods; our method is also quite computationally efficient compared to techniques like sparse coding. We use the following procedure for feature learning. We first extract features using one of the unsupervised learning algorithms in figure 4.
The usefulness of the extracted features is then evaluated by training a linear classifier to predict the object class from the extracted features. This process is similar to that employed in other feature learning research [14]. We trained a number of architectures on MNIST, including standard auto-encoders, dropout auto-encoders and standout auto-encoders. As described previously, we compute the expected value of each hidden activity and use that as the feature when training a classifier. We also examined RBMs, where we used the soft probability for each hidden unit as a feature. Different classifiers can be used and give similar performance; we used a linear SVM because it is fast and straightforward to apply. However, on a subset of problems we tried logistic classifiers and they achieved indistinguishable classification rates. Results for the different architectures and learning methods are compared in table 4(a). The auto-encoder trained using our proposed technique with α = 1 and β = 0 performed the best on MNIST. We performed extensive experiments on the NORB dataset with larger models. The hyper-parameters used for the best result are α = 1 and β = 1. Overall, we observed similar trends to the ones we observed for MNIST. Our standout method consistently performs better than other methods, as shown in table 4(b).

(a) MNIST
  model                              arch.          act. func.   err.
  raw pixel                          784                         7.2%
  RBM (weight decay)                 784-1000       σ(·)         1.81%
  DAE                                784-1000-784   ReLU(·)      1.95%
  dropout AE (50% hidden dropout)    784-1000-784   ReLU(·)      1.70%
  standout AE                        784-1000-784   ReLU(·)      1.53%

(b) NORB
  model                              arch.           act. func.   err.
  raw pixel                          8976                         23.6%
  RBM (weight decay)                 2048-4000       σ(·)         10.6%
  DAE                                2048-4000-2048  ReLU(·)      9.5%
  dropout AE (50% hidden dropout)    2048-4000-2048  ReLU(·)      10.1%
  dropout AE * (22% hidden dropout)  2048-4000-2048  ReLU(·)      8.9%
  standout AE                        2048-4000-2048  ReLU(·)      7.3%

Figure 4: Performance of unsupervised feature learning methods. The dropout probability in the DAE * was optimized using [18].
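The evaluation protocol (extract features, then fit a linear classifier on them) can be sketched at toy scale. Here a simple perceptron stands in for the linear SVM, which is a hedged substitution (the text above notes that logistic classifiers gave indistinguishable rates), and the "features" are raw toy vectors rather than learned expected activities:

```python
def train_linear_classifier(feats, labels, epochs=100, lr=1.0):
    """Toy stand-in for the linear-classifier evaluation step: a perceptron."""
    d = len(feats[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:  # update only on mistakes
                for i in range(d):
                    w[i] += lr * (y - pred) * x[i]
                b += lr * (y - pred)
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

In the paper's setting the inputs to this classifier would be the expected hidden activities of the trained auto-encoder or the soft hidden probabilities of the RBM.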
5.6 Discussion

The proposed standout method was able to outperform other feature learning methods on both datasets by a noticeable margin. The stochasticity introduced by the standout network successfully removes hidden units that are unnecessary for good performance and that hinder performance. By inspecting the weights from auto-encoders regularized by dropout and standout, we find that the standout auto-encoder weights are sharper than those learnt using dropout, which may be consistent with the improved performance on classification tasks.

Figure 5: Classification error rate (%) on NORB as a function of the number of hidden units (500 to 4500), for the DAE, dropout AE, deterministic standout AE and standout AE.

The effect of the number of hidden units was studied using networks with sizes 500, 1000, 1500, and up to 4500. Figure 5 shows that all algorithms generally perform better as the number of hidden units increases. One notable trend for dropout regularization is that it achieves significantly better performance with large numbers of hidden units, since all units have an equal chance of being omitted. In comparison, standout can achieve similar performance with only half as many hidden units, because highly useful hidden units are kept more often while only the less effective units are dropped. One question is whether it is the stochasticity of the standout network that helps, or just a different nonlinearity obtained by the expected activity in equation 3. To address this, we trained a deterministic auto-encoder with hidden activation functions given by equation 3. The result of this ‘deterministic standout method’ is shown in figure 5 and it performs quite poorly. It is believed that sparse features can help improve the performance of linear classifiers.
We found that auto-encoders trained using ReLU units and standout produce sparse features. We wondered whether training a sparse auto-encoder with a sparsity level matching the one obtained by our method would yield similar performance. We applied an L1 penalty on the hidden units and trained an auto-encoder to match the sparsity obtained by our method (figure 4). The final features extracted using the sparse auto-encoder achieved 10.2% error on NORB, which is significantly worse than our method. Further gains can be achieved by tuning hyper-parameters, but the hyper-parameters for our method are easier to tune and, as shown above, have little effect on the final performance. Moreover, the sparse features learnt using standout are also computationally efficient compared to more sophisticated encoding algorithms, e.g., [5]. To find the code for data points with more than 4000 dimensions and 4000 dictionary elements, the sparse coding algorithm quickly becomes impractical.

Figure 6: Performance of fine-tuned classifiers, where FT is fine-tuning.
(a) MNIST fine-tuned error rate: RBM + FT, 1.24%; DAE + FT, 1.3%; shallow dropout AE + FT, 1.10%; deep dropout AE + FT, 0.89%; standout shallow AE + FT, 1.06%; standout deep AE + FT, 0.80%.
(b) NORB fine-tuned error rate: DBN [15], 8.3%; DBM [15], 7.2%; third order RBM [12], 6.5%; dropout shallow AE + FT, 7.5%; dropout deep AE + FT, 7.0%; standout shallow AE + FT, 6.2%; standout deep AE + FT, 5.8%.

Surprisingly, a shallow network with standout regularization (table 4(b)) outperforms some of the much larger and deeper networks shown. Some of those deeper models have three or four times more parameters than the shallow network we trained here. This particular result shows that a simpler model trained using our regularization technique can achieve higher performance compared to other, more complicated methods.
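To make the standout mechanism discussed above concrete, the sketch below implements a hedged version of it. For illustration only, it assumes the keep probability of each hidden unit is a sigmoid of a scaled pre-activation governed by the hyper-parameters α and β (the exact form of the standout network is specified earlier in the paper); the function names and the weight-sharing choice are ours, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standout_forward(X, W, b, alpha=1.0, beta=0.0, rng=None):
    """Stochastic standout layer (a sketch): each hidden unit is kept with a
    probability computed from its own pre-activation, so useful units are
    dropped less often than in fixed-probability dropout.

    X: (batch, n_in), W: (n_in, n_hid), b: (n_hid,).
    Returns the masked hidden activity and the keep probabilities.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    a = X @ W + b                        # pre-activation, shared with the AE
    p_keep = sigmoid(alpha * a + beta)   # adaptive dropout probability (assumed form)
    mask = (rng.random(p_keep.shape) < p_keep).astype(X.dtype)
    return mask * np.maximum(0.0, a), p_keep

def standout_expected(X, W, b, alpha=1.0, beta=0.0):
    """Deterministic counterpart: the expected hidden activity
    E[mask * ReLU(a)] = p_keep * ReLU(a), used as the feature vector
    for the linear classifier (cf. equation 3)."""
    a = X @ W + b
    return sigmoid(alpha * a + beta) * np.maximum(0.0, a)
```

At test time only `standout_expected` is needed, which is why extracting features with this method adds essentially no cost over a plain auto-encoder forward pass.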
5.7 Discriminative learning

In deep learning, a common practice is to use the encoder weights learnt by an unsupervised learning method to initialize the early layers of a multilayer discriminative model. The backpropagation algorithm is then used to learn the weights for the last hidden layer and also fine-tune the weights in the earlier layers. This procedure is often referred to as discriminative fine-tuning. We initialized neural networks using the models described above. The regularization method that we used for unsupervised learning (RBM, dropout, standout) is also used for the corresponding discriminative fine-tuning. For example, if a neural network is initialized using an auto-encoder trained with standout, the neural network will also be fine-tuned using standout for all its hidden units, with the same standout function and hyper-parameters as the auto-encoder. During discriminative fine-tuning, we hold the weights fixed for all layers except the last one for the first 10 epochs, and the weights are updated jointly after that. As found by previous authors, we find that classification performance is usually improved by the use of discriminative fine-tuning. Impressively, we found that a two-hidden-layer neural network with 1000 ReLU units in its first and second hidden layers trained with standout is able to achieve 80 errors on the MNIST data after fine-tuning (an error rate of 0.80%). This performance is better than the current best non-convolutional result [8] and the training procedure is simpler. On the NORB dataset, we similarly achieved a 6.2% error rate by fine-tuning the simple shallow auto-encoder from table 4(b). Furthermore, a two-hidden-layer neural network with 4000 ReLU units in both hidden layers that is pre-trained using standout achieved a 5.8% error rate after fine-tuning. It is worth mentioning that a small weight decay of 0.0005 is applied to this network during fine-tuning to further prevent overfitting.
It outperforms other models that do not exploit spatial structure. As far as we know, this result is better than any previously published results without distortion or jitter. It even outperforms carefully designed convolutional neural networks found in [9]. Figure 6 reports the classification accuracy obtained by different models, including state-of-the-art deep networks.

6 Conclusions

Our results demonstrate that the proposed use of standout networks can significantly improve the performance of feature-learning methods. Further, our results provide additional support for the ‘regularization by noise’ hypothesis that has been used to regularize other deep architectures, including RBMs and denoising auto-encoders, and in dropout. An obvious missing piece in this research is a good theoretical understanding of why the standout network provides better regularization compared to the fixed dropout probability of 0.5. While we have motivated our approach as one of approximating the Bayesian posterior, further theoretical justifications are needed.

References

[1] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 19:153, 2007.
[2] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[3] C.M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.
[4] A. Coates and A.Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, volume 8, page 10, 2011.
[5] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In International Conference on Machine Learning, 2010.
[6] G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[7] G.E. Hinton and R.R. Salakhutdinov.
Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[8] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[9] Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.
[10] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[11] Yann LeCun, Fu Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages II–97. IEEE, 2004.
[12] V. Nair and G. Hinton. 3D object recognition with deep belief nets. Advances in Neural Information Processing Systems, 22:1339–1347, 2009.
[13] V. Nair and G.E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th International Conference on Machine Learning, pages 807–814. Omnipress, Madison, WI, 2010.
[14] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the Twenty-eighth International Conference on Machine Learning (ICML-11), 2011.
[15] Ruslan Salakhutdinov and Hugo Larochelle. Efficient learning of deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, 2010.
[16] J. Sietsma and R.J.F. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1):67–79, 1991.
[17] Jasper Snoek, Ryan P. Adams, and Hugo Larochelle. Nonparametric guidance of autoencoder representations using label information.
Journal of Machine Learning Research, 13:2567–2588, 2012.
[18] Jasper Snoek, Hugo Larochelle, and Ryan Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25, pages 2960–2968, 2012.
[19] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, 2013.
[20] Tijmen Tieleman. Gnumpy: an easy way to use GPU boards in Python. Department of Computer Science, University of Toronto, 2010.
[21] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.
Regularized Spectral Clustering under the Degree-Corrected Stochastic Blockmodel Tai Qin Department of Statistics University of Wisconsin-Madison Madison, WI qin@stat.wisc.edu Karl Rohe Department of Statistics University of Wisconsin-Madison Madison, WI karlrohe@stat.wisc.edu Abstract Spectral clustering is a fast and popular algorithm for finding clusters in networks. Recently, Chaudhuri et al. [1] and Amini et al. [2] proposed inspired variations on the algorithm that artificially inflate the node degrees for improved statistical performance. The current paper extends the previous statistical estimation results to the more canonical spectral clustering algorithm in a way that removes any assumption on the minimum degree and provides guidance on the choice of the tuning parameter. Moreover, our results show how the “star shape” in the eigenvectors–a common feature of empirical networks–can be explained by the Degree-Corrected Stochastic Blockmodel and the Extended Planted Partition model, two statistical models that allow for highly heterogeneous degrees. Throughout, the paper characterizes and justifies several of the variations of the spectral clustering algorithm in terms of these models. 1 Introduction Our lives are embedded in networks–social, biological, communication, etc.– and many researchers wish to analyze these networks to gain a deeper understanding of the underlying mechanisms. Some types of underlying mechanisms generate communities (aka clusters or modularities) in the network. As machine learners, our aim is not merely to devise algorithms for community detection, but also to study the algorithm’s estimation properties, to understand if and when we can make justifiable inferences from the estimated communities to the underlying mechanisms. Spectral clustering is a fast and popular technique for finding communities in networks. 
Several previous authors have studied the estimation properties of spectral clustering under various statistical network models (McSherry [3], Dasgupta et al. [4], Coja-Oghlan and Lanka [5], Ames and Vavasis [6], Rohe et al. [7], Sussman et al. [8] and Chaudhuri et al. [1]). Recently, Chaudhuri et al. [1] and Amini et al. [2] proposed two inspired ways of artificially inflating the node degrees in ways that provide statistical regularization to spectral clustering. This paper examines the statistical estimation performance of regularized spectral clustering under the Degree-Corrected Stochastic Blockmodel (DC-SBM), an extension of the Stochastic Blockmodel (SBM) that allows for heterogeneous degrees (Holland and Leinhardt [9], Karrer and Newman [10]). The SBM and the DC-SBM are closely related to the planted partition model and the extended planted partition model, respectively. We extend the previous results in the following ways: (a) In contrast to previous studies, this paper studies the regularization step with a canonical version of spectral clustering that uses k-means. The results do not require any assumptions on the minimum expected node degree; instead, there is a threshold demonstrating that higher degree nodes are easier to cluster. This threshold is a function of the leverage scores that have proven essential in other contexts, for both graph algorithms and network data analysis (see Mahoney [11] and references therein). These are the first results that relate leverage scores to the statistical performance of spectral clustering. (b) This paper provides more guidance for data analytic issues than previous approaches. First, the results suggest an appropriate range for the regularization parameter. Second, our analysis gives a (statistical) model-based explanation for the “star-shaped” figure that often appears in empirical eigenvectors.
This demonstrates how projecting the rows of the eigenvector matrix onto the unit sphere (an algorithmic step proposed by Ng et al. [12]) removes the ancillary effects of heterogeneous degrees under the DC-SBM. Our results highlight when this step may be unwise.

Preliminaries: Throughout, we study undirected and unweighted graphs or networks. Define a graph as G(E, V), where V = {v_1, v_2, ..., v_N} is the vertex or node set and E is the edge set. We will refer to node v_i as node i. E contains a pair (i, j) if there is an edge between nodes i and j. The edge set can be represented by the adjacency matrix A ∈ {0, 1}^{N×N}, with A_ij = A_ji = 1 if (i, j) is in the edge set and A_ij = A_ji = 0 otherwise. Define the diagonal matrix D and the normalized graph Laplacian L, both elements of R^{N×N}, in the following way: D_ii = Σ_j A_ij, L = D^{-1/2} A D^{-1/2}. The following notation will be used throughout the paper: ||·|| denotes the spectral norm and ||·||_F denotes the Frobenius norm. For two sequences of variables {x_N} and {y_N}, we say x_N = ω(y_N) if and only if y_N/x_N = o(1). δ_{·,·} is the indicator function, where δ_{x,y} = 1 if x = y and δ_{x,y} = 0 if x ≠ y.

2 The Algorithm: Regularized Spectral Clustering (RSC)

For a sparse network with strong degree heterogeneity, standard spectral clustering often fails to function properly (Amini et al. [2], Jin [13]). To account for this, Chaudhuri et al. [1] proposed the regularized graph Laplacian, defined as L_τ = D_τ^{-1/2} A D_τ^{-1/2} ∈ R^{N×N}, where D_τ = D + τI for τ ≥ 0. The spectral algorithm proposed and studied by Chaudhuri et al. [1] divides the nodes into two random subsets and only uses the induced subgraph on one of those random subsets to compute the spectral decomposition. In this paper, we will study the more traditional version of the spectral algorithm that uses the spectral decomposition of the entire matrix (Ng et al. [12]). Define the regularized spectral clustering (RSC) algorithm as follows: 1.
Given an input adjacency matrix A, the number of clusters K, and a regularizer τ, calculate the regularized graph Laplacian L_τ. (As discussed later, a good default for τ is the average node degree.)

2. Find the eigenvectors X_1, ..., X_K ∈ R^N corresponding to the K largest eigenvalues of L_τ. Form X = [X_1, ..., X_K] ∈ R^{N×K} by putting the eigenvectors into the columns.

3. Form the matrix X* ∈ R^{N×K} from X by normalizing each of X's rows to have unit length. That is, project each row of X onto the unit sphere of R^K: X*_{ij} = X_{ij}/(Σ_k X_{ik}^2)^{1/2}.

4. Treat each row of X* as a point in R^K, and run k-means with K clusters. This creates K non-overlapping sets V_1, ..., V_K whose union is V.

5. Output V_1, ..., V_K. Node i is assigned to cluster r if the i'th row of X* is assigned to V_r.

This paper will refer to “standard spectral clustering” as the above algorithm with L replacing L_τ. These spectral algorithms have two main steps: (a) find the principal eigenspace of the (regularized) graph Laplacian; (b) determine the clusters in the low-dimensional eigenspace. Later, we will study RSC under the Degree-Corrected Stochastic Blockmodel and show rigorously how regularization helps to maintain cluster information in step (a) and why normalizing the rows of X helps in step (b). From now on, we use X_τ and X*_τ instead of X and X* to emphasize that they are related to L_τ. Let X_τ^i and [X*_τ]_i denote the i'th rows of X_τ and X*_τ. The next section introduces the Degree-Corrected Stochastic Blockmodel and its matrix formulation.

3 The Degree-Corrected Stochastic Blockmodel (DC-SBM)

In the Stochastic Blockmodel (SBM), each node belongs to one of K blocks. Each edge corresponds to an independent Bernoulli random variable, where the probability of an edge between any two nodes depends only on the block memberships of the two nodes (Holland and Leinhardt [9]). The formal definition is as follows.

Definition 3.1.
For a node set {1, 2, ..., N}, let z : {1, 2, ..., N} → {1, 2, ..., K} partition the N nodes into K blocks; that is, z_i equals the block membership of node i. Let B be a K × K matrix where B_ab ∈ [0, 1] for all a, b. Then, under the SBM, the probability of an edge between i and j is P_ij = P_ji = B_{z_i z_j} for any i, j = 1, 2, ..., N. Given z, all edges are independent.

One limitation of the SBM is that it presumes all nodes within the same block have the same expected degree. The Degree-Corrected Stochastic Blockmodel (DC-SBM) (Karrer and Newman [10]) is a generalization of the SBM that adds an additional set of parameters (θ_i > 0 for each node i) that control the node degrees. Let B be a K × K matrix where B_ab ≥ 0 for all a, b. Then the probability of an edge between node i and node j is θ_i θ_j B_{z_i z_j}, where θ_i θ_j B_{z_i z_j} ∈ [0, 1] for any i, j = 1, 2, ..., N. The parameters θ_i are arbitrary to within a multiplicative constant that is absorbed into B. To make the model identifiable, Karrer and Newman [10] suggest imposing the constraint that, within each block, the θ_i's sum to one; that is, Σ_i θ_i δ_{z_i,r} = 1 for any block label r. Under this constraint, B has an explicit meaning: if s ≠ t, B_st represents the expected number of links between block s and block t, and if s = t, B_st is twice the expected number of links within block s. Throughout the paper, we assume that B is positive definite.

Under the DC-SBM, define the population adjacency matrix 𝒜 ≜ E[A]. This matrix can be expressed as a product of matrices, 𝒜 = ΘZBZ^TΘ, where (1) Θ ∈ R^{N×N} is a diagonal matrix whose ii'th element is θ_i and (2) Z ∈ {0, 1}^{N×K} is the membership matrix, with Z_it = 1 if and only if node i belongs to block t (i.e. z_i = t).

3.1 Population Analysis

Under the DC-SBM, if the partition is identifiable, then one should be able to determine the partition from 𝒜. This section shows that with the population adjacency matrix 𝒜 and a proper regularizer τ, RSC perfectly reconstructs the block partition.
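The DC-SBM and the RSC algorithm of Section 2 can be exercised together in a short simulation. The sketch below is our own illustration (with a bare-bones Lloyd's k-means so it stays dependency-free, and arbitrary parameter choices): it samples an adjacency matrix from the DC-SBM and checks that RSC recovers the planted blocks.

```python
import numpy as np

def sample_dcsbm(z, B, theta, rng):
    """Draw a symmetric adjacency matrix with P(A_ij = 1) = theta_i theta_j B[z_i, z_j]."""
    theta = np.asarray(theta, dtype=float).copy()
    for r in np.unique(z):                  # identifiability: theta sums to 1 per block
        theta[z == r] /= theta[z == r].sum()
    P = theta[:, None] * theta[None, :] * B[z][:, z]
    A = np.triu((rng.random(P.shape) < P).astype(float), 1)
    return A + A.T

def rsc(A, K, tau=None, n_iter=50):
    """Regularized spectral clustering, following steps 1-5 of Section 2."""
    d = A.sum(axis=1)
    tau = d.mean() if tau is None else tau  # default: average node degree
    s = 1.0 / np.sqrt(d + tau)
    L_tau = s[:, None] * A * s[None, :]
    vals, vecs = np.linalg.eigh(L_tau)
    X = vecs[:, np.argsort(vals)[-K:]]      # top-K eigenvectors
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                 # guard isolated nodes
    X_star = X / norms
    centers = X_star[[0]]                   # farthest-point init, then Lloyd's steps
    for _ in range(K - 1):
        d2 = ((X_star[:, None] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, X_star[d2.argmax()]])
    for _ in range(n_iter):
        labels = ((X_star[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X_star[labels == k].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
z = np.repeat([0, 1], 150)                  # two blocks of 150 nodes
B = np.array([[2000.0, 250.0], [250.0, 2000.0]])  # expected link counts between blocks
theta = rng.uniform(0.2, 1.8, size=300)     # heterogeneous degree parameters
A = sample_dcsbm(z, B, theta, rng)
labels = rsc(A, K=2)
accuracy = max(np.mean(labels == z), np.mean(labels == 1 - z))
```

With this B, the expected average degree is about 15 and the in/out signal is strong, so `accuracy` should be close to 1 up to the arbitrary labeling of the two clusters.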
Define the diagonal matrix 𝒟 to contain the expected node degrees, 𝒟_ii = Σ_j 𝒜_ij, and define 𝒟_τ = 𝒟 + τI, where τ ≥ 0 is the regularizer. Then, define the population graph Laplacian 𝓛 and the population version of the regularized graph Laplacian 𝓛_τ, both elements of R^{N×N}, in the following way: 𝓛 = 𝒟^{-1/2} 𝒜 𝒟^{-1/2}, 𝓛_τ = 𝒟_τ^{-1/2} 𝒜 𝒟_τ^{-1/2}. Define D_B ∈ R^{K×K} as a diagonal matrix whose (s, s)'th element is [D_B]_ss = Σ_t B_st. A couple of lines of algebra show that [D_B]_ss = W_s, the total expected degree of nodes from block s, and that 𝒟_ii = θ_i [D_B]_{z_i z_i}. Using these quantities, the next lemma gives an explicit form for 𝓛_τ as a product of the parameter matrices.

Lemma 3.2. (Explicit form for 𝓛_τ) Under the DC-SBM with K blocks and parameters {B, Z, Θ}, define θ_i^τ as θ_i^τ = θ_i^2/(θ_i + τ/W_{z_i}) = θ_i · 𝒟_ii/(𝒟_ii + τ). Let Θ_τ ∈ R^{N×N} be a diagonal matrix whose ii'th entry is θ_i^τ, and define B_L = D_B^{-1/2} B D_B^{-1/2}. Then 𝓛_τ can be written 𝓛_τ = 𝒟_τ^{-1/2} 𝒜 𝒟_τ^{-1/2} = Θ_τ^{1/2} Z B_L Z^T Θ_τ^{1/2}.

Recall that 𝒜 = ΘZBZ^TΘ. Lemma 3.2 demonstrates that 𝓛_τ has a similarly simple form that separates the block-related information (B_L) from the node-specific information (Θ_τ). Notice that if τ = 0, then Θ_0 = Θ and 𝓛 = 𝒟^{-1/2} 𝒜 𝒟^{-1/2} = Θ^{1/2} Z B_L Z^T Θ^{1/2}. The next lemma shows that 𝓛_τ has rank K and describes how its eigen-decomposition can be expressed in terms of Z and Θ.

Lemma 3.3. (Eigen-decomposition for 𝓛_τ) Under the DC-SBM with K blocks and parameters {B, Z, Θ}, 𝓛_τ has K positive eigenvalues; the remaining N − K eigenvalues are zero. Denote the K positive eigenvalues of 𝓛_τ as λ_1 ≥ λ_2 ≥ ... ≥ λ_K > 0, and let 𝒳_τ ∈ R^{N×K} contain the eigenvector corresponding to λ_i in its i'th column. Define 𝒳*_τ to be the row-normalized version of 𝒳_τ, similar to X*_τ as defined in the RSC algorithm in Section 2. Then, there exists an orthogonal matrix U ∈ R^{K×K}, depending on τ, such that 1. 𝒳_τ = Θ_τ^{1/2} Z (Z^T Θ_τ Z)^{-1/2} U; 2. 𝒳*_τ = ZU, and Z_i ≠ Z_j ⇔ Z_i U ≠ Z_j U, where Z_i denotes the i'th row of the membership matrix Z.
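Lemma 3.2 is straightforward to verify numerically. The check below is our own (on an arbitrary small parameter choice): it builds 𝓛_τ directly from 𝒜 and the definitions, then compares it with the lemma's explicit form, and also confirms the rank-K claim of Lemma 3.3.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 4]
K = len(sizes)
z = np.repeat(np.arange(K), sizes)
Z = np.eye(K)[z]                                   # membership matrix
theta = rng.uniform(0.5, 1.5, size=len(z))
for r in range(K):                                 # constraint: theta sums to 1 per block
    theta[z == r] /= theta[z == r].sum()
B = np.array([[4.0, 1.0], [1.0, 5.0]])             # positive definite
tau = 0.7

# Population quantities computed directly from the definitions.
Theta = np.diag(theta)
A_pop = Theta @ Z @ B @ Z.T @ Theta                # population adjacency E[A]
d_tau = A_pop.sum(axis=1) + tau
L_tau = A_pop / np.sqrt(np.outer(d_tau, d_tau))

# Explicit form from Lemma 3.2.
W = B.sum(axis=1)                                  # W_s = [D_B]_ss
theta_tau = theta**2 / (theta + tau / W[z])
B_L = np.diag(W**-0.5) @ B @ np.diag(W**-0.5)
L_explicit = (np.sqrt(theta_tau)[:, None] * (Z @ B_L @ Z.T)
              * np.sqrt(theta_tau)[None, :])

assert np.allclose(L_tau, L_explicit)              # the two expressions agree
assert np.linalg.matrix_rank(L_tau) == K           # Lemma 3.3: exactly K nonzero eigenvalues
```

Running the same check with τ = 0 reduces to the unregularized identity 𝓛 = Θ^{1/2} Z B_L Z^T Θ^{1/2} noted after the lemma.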
This lemma provides four useful facts about the matrices 𝒳_τ and 𝒳*_τ. First, if two nodes i and j belong to the same block, then the corresponding rows of 𝒳_τ (denoted 𝒳_τ^i and 𝒳_τ^j) both point in the same direction, but with different lengths: ||𝒳_τ^i||_2 = (θ_i^τ / Σ_j θ_j^τ δ_{z_j,z_i})^{1/2}. Second, if two nodes i and j belong to different blocks, then 𝒳_τ^i and 𝒳_τ^j are orthogonal to each other. Third, if z_i = z_j, then after projecting these points onto the sphere as in 𝒳*_τ, the rows are equal: [𝒳*_τ]_i = [𝒳*_τ]_j = U_{z_i}. Finally, if z_i ≠ z_j, then the rows are perpendicular: [𝒳*_τ]_i ⊥ [𝒳*_τ]_j. Figure 1 illustrates the geometry of 𝒳_τ and 𝒳*_τ when there are three underlying blocks. Notice that running k-means on the rows of 𝒳*_τ (in the right panel of Figure 1) will return perfect clusters. Note that if Θ were the identity matrix, then the left panel in Figure 1 would look like the right panel; without degree heterogeneity, there would be no star shape and no need for a projection step. This suggests that the star-shaped figure often observed in data analysis stems from the degree heterogeneity in the network.

Figure 1: In this numerical example, 𝒜 comes from the DC-SBM with three blocks. Each point corresponds to one row of the matrix 𝒳_τ (left panel) or 𝒳*_τ (right panel). The different colors correspond to three different blocks. The hollow circle is the origin. Without normalization (left panel), nodes with the same block membership share the same direction in the projected space. After normalization (right panel), nodes with the same block membership share the same position in the projected space.

4 Regularized Spectral Clustering with the Degree-Corrected Model

This section bounds the mis-clustering rate of regularized spectral clustering under the DC-SBM.
The section proceeds as follows: Theorem 4.1 shows that L_τ is close to 𝓛_τ. Theorem 4.2 shows that X_τ is close to 𝒳_τ and that X*_τ is close to 𝒳*_τ. Finally, Theorem 4.4 shows that the output from RSC with L_τ is close to the true partition in the DC-SBM (using Lemma 3.3).

Theorem 4.1. (Concentration of the regularized graph Laplacian) Let G be a random graph with independent edges and pr(v_i ∼ v_j) = p_ij. Let δ be the minimum expected degree of G, that is, δ = min_i 𝒟_ii. For any ϵ > 0, if δ + τ > 3 ln N + 3 ln(4/ϵ), then with probability at least 1 − ϵ,

||L_τ − 𝓛_τ|| ≤ 4 √(3 ln(4N/ϵ)/(δ + τ)). (1)

Remark: This theorem builds on the results of Chung and Radcliffe [14] and Chaudhuri et al. [1], which give seemingly similar bounds on ||L − 𝓛|| and ||D_τ^{-1} A − 𝒟_τ^{-1} 𝒜||. However, the previous papers require that δ ≥ c ln N, where c is some constant. This assumption is not satisfied in a large proportion of sparse empirical networks with heterogeneous degrees. In fact, the regularized graph Laplacian is most interesting when this condition fails, i.e. when there are several nodes with very low degrees. Theorem 4.1 only assumes that δ + τ > 3 ln N + 3 ln(4/ϵ). This is the fundamental reason that RSC works for networks containing some nodes with extremely small degrees. It shows that, by introducing a proper regularizer τ, ||L_τ − 𝓛_τ|| can be well bounded, even with δ very small. Later we will show that a suitable choice of τ is the average degree. The next theorem bounds the difference between the empirical and population eigenvectors (and their row-normalized versions) in terms of the Frobenius norm.
Let X∗ τ and X ∗ τ ∈RN×K be the row normalized versions of Xτ and Xτ, as defined in step 3 of the RSC algorithm. For any ϵ > 0 and sufficiently large N, assume that (a) r K ln(4N/ϵ) δ + τ ≤ 1 8 √ 3λK, (b) δ + τ > 3 ln N + 3 ln(4/ϵ), then with probability at least 1 −ϵ, the following holds, ||Xτ −XτO||F ≤c0 1 λK r K ln(4N/ϵ) δ + τ , and ||X∗ τ −X ∗ τ O||F ≤c0 1 mλK r K ln(4N/ϵ) δ + τ . (2) The proof of Theorem 4.2 can be found in the supplementary materials. Next we use Theorem 4.2 to derive a bound on the mis-clustering rate of RSC. To define “misclustered”, recall that RSC applies the k-means algorithm to the rows of X∗ τ , where each row is a point in RK. Each row is assigned to one cluster, and each of these clusters has a centroid from k-means. Define C1, . . . , Cn ∈RK such that Ci is the centroid corresponding to the i’th row of X∗ τ . Similarly, run k-means on the rows of the population eigenvector matrix X ∗ τ and define the population centroids C1, . . . , Cn ∈RK. In essence, we consider node i correctly clustered if Ci is closer to Ci than it is to any other Cj for all j with Zj ̸= Zi. The definition is complicated by the fact that, if any of the λ1, . . . , λK are equal, then only the subspace spanned by their eigenvectors is identifiable. Similarly, if any of those eigenvalues are close together, then the estimation results for the individual eigenvectors are much worse that for the estimation results for the subspace that they span. Because clustering only requires estimation of the correct subspace, our definition of correctly clustered is amended with the rotation OT ∈RK×K, the matrix which minimizes ∥X∗ τ OT −X ∗ τ ∥F . This is referred to as the orthogonal Procrustes problem and [15] shows how the singular value decomposition gives the solution. Definition 4.3. If CiOT is closer to Ci than it is to any other Cj for j with Zj ̸= Zi, then we say that node i is correctly clustered. 
Define the set of mis-clustered nodes:

𝓜 = {i : ∃ j ≠ i s.t. ||C_i O_T − 𝒞_i||_2 > ||C_i O_T − 𝒞_j||_2}. (3)

The next theorem bounds the mis-clustering rate |𝓜|/N.

Theorem 4.4. (Main Theorem) Suppose A ∈ R^{N×N} is the adjacency matrix of a graph G generated from the DC-SBM with K blocks and parameters {B, Z, Θ}. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_K > 0 be the K positive eigenvalues of 𝓛_τ. Define 𝓜, the set of mis-clustered nodes, as in Definition 4.3. Let δ be the minimum expected degree of G. For any ϵ > 0 and sufficiently large N, assume (a) and (b) as in Theorem 4.2. Then, with probability at least 1 − ϵ, the mis-clustering rate of RSC with regularization constant τ is bounded:

|𝓜|/N ≤ c_1 K ln(N/ϵ) / (N m^2 (δ + τ) λ_K^2). (4)

Remark 1 (Choice of τ): The quality of the bound in Theorem 4.4 depends on τ through three terms: (δ + τ), λ_K, and m. Setting τ equal to the average node degree balances these terms. In essence, if τ is too small, there is insufficient regularization. Specifically, if the minimum expected degree δ = O(ln N), then we need τ ≥ c(ϵ) ln N to have enough regularization to satisfy condition (b) on δ + τ. Alternatively, if τ is too large, it washes out significant eigenvalues. To see that τ should not be too large, note that

C = (Z^T Θ_τ Z)^{1/2} B_L (Z^T Θ_τ Z)^{1/2} ∈ R^{K×K} (5)

has the same eigenvalues as the largest K eigenvalues of 𝓛_τ (see supplementary materials for details). The matrix Z^T Θ_τ Z is diagonal, and its (s, s)'th element is the sum of the θ_i^τ within block s. If E[M] = ω(N ln N), where M = Σ_i D_ii is the sum of the node degrees, then τ = ω(M/N) sends the smallest diagonal entry of Z^T Θ_τ Z to 0, sending λ_K, the smallest eigenvalue of C, to zero. The trade-off between these two suggests that a proper range of τ is (α E[M]/N, β E[M]/N), where 0 < α < β are two constants. Keeping τ within this range guarantees that λ_K is lower bounded by some constant depending only on K. In simulations, we find that τ = M/N (i.e. the average node degree) provides good results.
The theoretical results only suggest that this is the correct rate, so one could adjust it by a multiplicative constant. Our simulations suggest that the results are not sensitive to such adjustments.

Remark 2 (Thresholding m): Mahoney [11] (and references therein) shows how the leverage scores of A and L are informative for both data analysis and algorithmic stability. For L, the leverage score of node i is ||X^i||_2^2, the squared length of the i'th row of the matrix containing the top K eigenvectors. Theorem 4.4 is the first result that explicitly relates the leverage scores to the statistical performance of spectral clustering. Recall that m^2 is the minimum of the squared row lengths in X_τ and 𝒳_τ, that is, the minimum leverage score in both L_τ and 𝓛_τ. This appears in the denominator of (4). The leverage scores in 𝓛_τ have an explicit form: ||𝒳_τ^i||_2^2 = θ_i^τ / (Σ_j θ_j^τ δ_{z_j,z_i}). So, if node i has a small expected degree, then θ_i^τ is small, rendering ||𝒳_τ^i||_2 small. This can deteriorate the bound in Theorem 4.4. The problem arises from projecting 𝒳_τ^i onto the unit sphere for a node i with small leverage; it amplifies a noisy measurement. Motivated by this intuition, the next corollary focuses on the high-leverage nodes. More specifically, let m* denote a threshold, and define S to be the subset of nodes whose leverage scores in 𝓛_τ and L_τ exceed the threshold: S = {i : ||𝒳_τ^i||_2 ≥ m*, ||X_τ^i||_2 ≥ m*}. Then, by applying k-means to the set of vectors {[X*_τ]_i : i ∈ S}, we cluster these nodes. The following corollary bounds the mis-clustering rate on S.

Corollary 4.5. Let N_1 = |S| denote the number of nodes in S, and define 𝓜_1 = 𝓜 ∩ S as the set of mis-clustered nodes restricted to S. With the same settings and assumptions as in Theorem 4.4, let γ > 0 be a constant and set m* = γ/√N.
If N/N_1 = O(1), then by applying k-means to the set of vectors {[X*_τ]_i : i ∈ S}, we have, with probability at least 1 − ϵ, that there exists a constant c_2, independent of ϵ, such that

|𝓜_1|/N_1 ≤ c_2 K ln(N_1/ϵ) / (γ^2 (δ + τ) λ_K^2). (6)

In the main theorem (Theorem 4.4), the denominator of the upper bound contains m^2. Since we do not make a minimum-degree assumption, this value can approach zero, making the bound useless. Corollary 4.5 replaces N m^2 with the constant γ^2, providing a superior bound when there are several small leverage scores. If λ_K (the K'th largest eigenvalue of 𝓛_τ) is bounded below by some constant and τ = ω(ln N), then Corollary 4.5 implies that |𝓜_1|/N_1 = o_p(1).

The above thresholding procedure only clusters the nodes in S. To cluster all of the nodes, define thresholded RSC (t-RSC) as follows:

(a) Follow steps (1), (2), and (3) of RSC as in Section 2.

(b) Apply k-means with K clusters to the set S = {i : ||X_τ^i||_2 ≥ γ/√N} and assign each of these nodes to one of V_1, ..., V_K. Let C_1, ..., C_K denote the K centroids given by k-means.

(c) For each node i ∉ S, find the centroid C_s such that ||[X*_τ]_i − C_s||_2 = min_{1≤t≤K} ||[X*_τ]_i − C_t||_2. Assign node i to V_s. Output V_1, ..., V_K.

Remark 3 (Applying to SC): Theorem 4.4 can easily be applied to the standard SC algorithm under both the SBM and the DC-SBM by setting τ = 0. In this setting, Theorem 4.4 improves upon the previous results for spectral clustering. Define the four-parameter Stochastic Blockmodel SBM(p, r, s, K) as follows: p is the probability of an edge occurring between two nodes from the same block, r is the probability of an out-block linkage, s is the number of nodes within each block, and K is the number of blocks. Because the SBM lacks degree heterogeneity within blocks, the rows of X within the same block already share the same length, so it is not necessary to project the X^i's onto the unit sphere. Under the four-parameter model, λ_K = (K[r/(p − r)] + 1)^{-1} (Rohe et al. [7]).
Using Theorem 4.4, with p and r fixed and p > r, and applying k-means to the rows of X, we have
$$|M|/N \;=\; O_p\!\left(\frac{K^2 \ln N}{N}\right). \qquad (7)$$
If $K = o(\sqrt{N/\ln N})$, then |M|/N → 0 in probability. This improves the previous results, which required K = o(N^{1/3}) (Rohe et al. [7]). Moreover, it makes the results for spectral clustering comparable to the results for the MLE in Choi et al. [16].

5 Simulation and Analysis of Political Blogs

This section compares five different methods of spectral clustering. Experiment 1 generates networks from the DC-SBM with a power-law degree distribution. Experiment 2 generates networks from the standard SBM. Finally, the benefits of regularization are illustrated on an empirical network from the political blogosphere during the 2004 presidential election (Adamic and Glance [17]). The simulations compare (1) standard spectral clustering (SC), (2) RSC as defined in Section 2, (3) RSC without projecting X_τ onto the unit sphere (RSC_wp), (4) regularized SC with thresholding (t-RSC), and (5) spectral clustering with perturbation (SCP) (Amini et al. [2]), which applies SC to the perturbed adjacency matrix $A_{per} = A + a\,\mathbf{1}\mathbf{1}^T$. In addition, Experiment 2 compares the performance of RSC on the subset of nodes with high leverage scores (RSC on S) with the other five methods. We set τ = M/N, threshold parameter γ = 1, and a = M/N² unless otherwise specified.

Experiment 1. This experiment examines how degree heterogeneity affects the performance of the spectral clustering algorithms. The Θ parameters (from the DC-SBM) are drawn from the power-law distribution with lower bound x_min = 1 and shape parameter β ∈ {2, 2.25, 2.5, 2.75, 3, 3.25, 3.5}. A smaller β indicates greater degree heterogeneity. For each fixed β, thirty networks are sampled. In each sample, K = 3 and each block contains 300 nodes (N = 900). Define the signal-to-noise ratio (SNR) to be the expected number of in-block edges divided by the expected number of out-block edges.
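For concreteness, here is one way to sample a DC-SBM network with power-law degree parameters Θ in the spirit of Experiment 1. This is our own illustrative sketch: the normalization of θ and the clipping of edge probabilities to [0, 1] are our choices, not taken from the paper.

```python
import numpy as np

def sample_dcsbm(z, p_in, p_out, beta, rng):
    """Sample a symmetric, hollow adjacency matrix from a DC-SBM.

    z: block labels; p_in/p_out: within/between-block base rates;
    beta: power-law shape for theta (x_min = 1, via inverse-CDF sampling)."""
    z = np.asarray(z)
    N = len(z)
    theta = rng.random(N) ** (-1.0 / (beta - 1.0))   # Pareto(x_min=1, beta)
    theta /= theta.mean()                            # normalize scale (our choice)
    B = np.where(z[:, None] == z[None, :], p_in, p_out)
    P = np.clip(np.outer(theta, theta) * B, 0.0, 1.0)
    A = (rng.random((N, N)) < P).astype(int)
    A = np.triu(A, 1)                                # keep the upper triangle
    return A + A.T                                   # symmetric, zero diagonal
```

Smaller β produces heavier-tailed θ and hence more heterogeneous degrees, matching the role β plays in the experiment above.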
Throughout the simulations, the SNR is set to three and the expected average degree is set to eight. The left panel of Figure 2 plots β against the mis-clustering rate for SC, RSC, RSC_wp, t-RSC, SCP and RSC on S. Each point is the average over 30 sampled networks. Each line represents one method. If a method assigns more than 95% of the nodes to one block, then we consider all nodes to be mis-clustered. The experiment shows that (1) if the degrees are more heterogeneous (β ≤ 3.5), then regularization improves the performance of the algorithms; (2) if β < 3, then RSC and t-RSC outperform RSC_wp and SCP, verifying that the normalization step helps when the degrees are highly heterogeneous; and, finally, (3) uniformly across the settings of β, it is easier to cluster nodes with high leverage scores.

Experiment 2. This experiment compares SC, RSC, RSC_wp, t-RSC and SCP under the SBM with no degree heterogeneity. Each simulation has K = 3 blocks and N = 1500 nodes. As in the previous experiment, the SNR is set to three. In this experiment, the average degree has three different settings: 10, 21, 30. For each setting, the results are averaged over 50 samples of the network. The right panel of Figure 2 shows the mis-clustering rate of SC and RSC for the three different values of the average degree. SCP, RSC_wp and t-RSC perform similarly to RSC, demonstrating that under the standard SBM (i.e., without degree heterogeneity) all spectral clustering methods perform comparably. The one exception is that under the sparsest model, SC is less stable than the other methods.

Figure 2: Left Panel: Comparison of performance for SC, RSC, RSC_wp, t-RSC, SCP and RSC on S under different degree heterogeneity. Smaller β corresponds to greater degree heterogeneity.
Right Panel: Comparison of performance for SC and RSC under the SBM with different sparsity.

Analysis of Blog Network. This empirical network comprises political blogs during the 2004 US presidential election (Adamic and Glance [17]). Each blog has a known label as liberal or conservative. As in Karrer and Newman [10], we symmetrize the network and consider only the largest connected component, with 1222 nodes. The average degree of the network is roughly 15. We apply RSC to the data set with τ ranging from 0 to 30. In the case τ = 0, it is standard spectral clustering. SC assigns 1144 out of 1222 nodes to the same block, failing to detect the ideological partition. RSC detects the partition, and its performance is insensitive to the choice of τ. With τ ∈ [1, 30], RSC misclusters 80 ± 2 nodes out of 1222. If RSC is applied to the 90% of nodes with the largest leverage scores (i.e., excluding the nodes with the smallest leverage scores), then the mis-clustering rate among these high-leverage nodes is 44/1100, which is almost 50% lower. This illustrates how the leverage score of a node can gauge the strength of the clustering evidence for that node relative to the other nodes. We tried to compare these results to the regularized algorithm in [1]. However, because there are several very small-degree nodes in these data, the values computed in step 4 of the algorithm in [1] sometimes take negative values, so step 5(b) cannot be performed.

6 Discussion

In this paper, we give theoretical, simulation, and empirical results that demonstrate how a simple adjustment to the standard spectral clustering algorithm can give dramatically better results for networks with heterogeneous degrees. Our theoretical results add to the current literature by studying the regularization step in a more canonical version of the spectral clustering algorithm. Moreover, our main results require no assumptions on the minimum node degree.
This is crucial because it allows us to study situations where several nodes have small leverage scores; in these situations, regularization is most beneficial. Finally, our results demonstrate that choosing a tuning parameter close to the average degree provides a balance between several competing objectives.

Acknowledgements

Thanks to Sara Fernandes-Taylor for helpful comments. Research of TQ is supported by NSF Grant DMS-0906818 and NIH Grant EY09946. Research of KR is supported by grants from WARF and NSF grant DMS-1309998.

References
[1] K. Chaudhuri, F. Chung, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. Journal of Machine Learning Research, pages 1–23, 2012.
[2] A. A. Amini, A. Chen, P. J. Bickel, and E. Levina. Pseudo-likelihood methods for community detection in large sparse networks. 2012.
[3] F. McSherry. Spectral partitioning of random graphs. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, pages 529–537. IEEE, 2001.
[4] A. Dasgupta, J. E. Hopcroft, and F. McSherry. Spectral analysis of random graphs with skewed degree distributions. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pages 602–610. IEEE, 2004.
[5] A. Coja-Oghlan and A. Lanka. Finding planted partitions in random graphs with general degree distributions. SIAM Journal on Discrete Mathematics, 23(4):1682–1714, 2009.
[6] B. P. W. Ames and S. A. Vavasis. Convex optimization for the planted k-disjoint-clique problem. arXiv preprint arXiv:1008.2814, 2010.
[7] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878–1915, 2011.
[8] D. L. Sussman, M. Tang, D. E. Fishkind, and C. E. Priebe. A consistent adjacency spectral embedding for stochastic blockmodel graphs. Journal of the American Statistical Association, 107(499):1119–1128, 2012.
[9] P. W. Holland and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[10] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011.
[11] M. W. Mahoney. Randomized algorithms for matrices and data. In Advances in Machine Learning and Data Mining for Astronomy, pages 647–672. CRC Press, Taylor & Francis Group, 2012.
[12] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[13] J. Jin. Fast community detection by SCORE. arXiv preprint arXiv:1211.5803, 2012.
[14] F. Chung and M. Radcliffe. On the spectra of general random graphs. The Electronic Journal of Combinatorics, 18(P215):1, 2011.
[15] P. H. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10, 1966.
[16] D. S. Choi, P. J. Wolfe, and E. M. Airoldi. Stochastic blockmodels with a growing number of classes. Biometrika, 99(2):273–284, 2012.
[17] L. A. Adamic and N. Glance. The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, pages 36–43. ACM, 2005.
On the Representational Efficiency of Restricted Boltzmann Machines

James Martens*, Arkadev Chattopadhyay+, Toniann Pitassi*, Richard Zemel*
*Department of Computer Science, University of Toronto; +School of Technology & Computer Science, Tata Institute of Fundamental Research
{jmartens,toni,zemel}@cs.toronto.edu, arkadev.c@tifr.res.in

Abstract

This paper examines the question: What kinds of distributions can be efficiently represented by Restricted Boltzmann Machines (RBMs)? We characterize the RBM's unnormalized log-likelihood function as a type of neural network, and through a series of simulation results relate these networks to ones whose representational properties are better understood. We show the surprising result that RBMs can efficiently capture any distribution whose density depends only on the number of 1's in the input. We also provide the first known example of a particular type of distribution that provably cannot be efficiently represented by an RBM, assuming a realistic exponential upper bound on the weights. By formally demonstrating that a relatively simple distribution cannot be represented efficiently by an RBM, our results provide a new rigorous justification for the use of potentially more expressive generative models, such as deeper ones.

1 Introduction

Standard Restricted Boltzmann Machines (RBMs) are a type of Markov Random Field (MRF) characterized by a bipartite dependency structure between a group of binary visible units x ∈ {0,1}^n and binary hidden units h ∈ {0,1}^m. Their energy function is given by
$$E_\theta(x, h) = -x^\top W h - c^\top x - b^\top h,$$
where W ∈ R^{n×m} is the matrix of weights, and c ∈ R^n and b ∈ R^m are vectors that store the input and hidden biases (respectively); together these are referred to as the RBM's parameters θ = {W, c, b}. The energy function specifies the probability distribution over the joint space (x, h) via the Boltzmann distribution $p(x, h) = \frac{1}{Z_\theta} \exp(-E_\theta(x, h))$, with the partition function $Z_\theta = \sum_{x,h} \exp(-E_\theta(x, h))$.
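For very small n and m, the Boltzmann distribution defined above can be evaluated by brute-force enumeration of all 2^(n+m) joint states. The following sketch is our own and is only meant to make the definitions concrete; it is infeasible beyond toy sizes precisely because of the intractable partition function.

```python
import itertools
import numpy as np

def rbm_joint(W, c, b):
    """Enumerate p(x, h) = exp(-E(x, h)) / Z for a tiny RBM."""
    n, m = W.shape
    states, unnorm = [], []
    for x in itertools.product([0, 1], repeat=n):
        for h in itertools.product([0, 1], repeat=m):
            x_, h_ = np.array(x), np.array(h)
            E = -x_ @ W @ h_ - c @ x_ - b @ h_   # energy E_theta(x, h)
            states.append((x, h))
            unnorm.append(np.exp(-E))
    unnorm = np.array(unnorm)
    return states, unnorm / unnorm.sum()          # divide by Z_theta
```

The normalized probabilities sum to one by construction, which is the only nontrivial property to check here.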
Based on this definition, the probability for any subset of variables can be obtained by conditioning and marginalization, although this can only be done efficiently up to a multiplicative constant due to the intractability of the RBM's partition function (Long and Servedio, 2010). RBMs have been widely applied to various modeling tasks, both as generative models (e.g. Salakhutdinov and Murray, 2008; Hinton, 2000; Courville et al., 2011; Marlin et al., 2010; Tang and Sutskever, 2011), and for pre-training feed-forward neural nets in a layer-wise fashion (Hinton and Salakhutdinov, 2006). This method has led to many new applications in general machine learning problems, including object recognition and dimensionality reduction. While promising for practical applications, the scope and basic properties of these statistical models have only begun to be studied. As with any statistical model, it is important to understand the expressive power of RBMs, both to gain insight into the range of problems where they can be successfully applied, and to provide justification for the use of potentially more expressive generative models. In particular, we are interested in the question of how large the number of hidden units m must be in order to capture a particular distribution to arbitrarily high accuracy. The question of size is of practical interest, since very large models will be computationally more demanding (or totally impractical), and will tend to overfit much more during training.

It was shown by Freund and Haussler (1994), and later by Le Roux and Bengio (2008), that for binary-valued x, any distribution over x can be realized (up to an approximation error which vanishes exponentially quickly in the magnitude of the parameters) by an RBM, as long as m is allowed to grow exponentially fast in the input dimension n.
Intuitively, this construction works by instantiating, for each of the up to 2^n possible values of x that have support, a single hidden unit which turns on only for that particular value of x (with overwhelming probability), so that the corresponding probability mass can be individually set by manipulating that unit's bias parameter. An improvement to this result was obtained by Montufar and Ay (2011); however, this construction still requires that m grow exponentially fast in n. Recently, Montufar et al. (2011) generalized the construction used by Le Roux and Bengio (2008) so that each hidden unit turns on for, and assigns probability mass to, not just a single x, but a "cubical set" of possible x's, which is defined as a subset of {0,1}^n where some entries of x are fixed/determined, and the rest are free. By combining such hidden units that are each specialized to a particular cubical set, they showed that any k-component mixture of product distributions over the free variables of mutually disjoint cubical sets can be approximated arbitrarily well by an RBM with m = k hidden units. Unfortunately, families of distributions of this specialized form (for some m = k bounded by a polynomial function of n) constitute only a very limited subset of all distributions that have some kind of meaningful/interesting structure. For example, this result would not allow us to efficiently construct simple distributions where the mass is a function of Σᵢ xᵢ (e.g., for p(x) ∝ PARITY(x)). In terms of what kinds of distributions provably cannot be efficiently represented by RBMs, even less is known. Cueto et al. (2009) characterized the distributions that can be realized by an RBM with k parameters as residing within a manifold, inside the entire space of distributions on {0,1}^n, whose dimension depends on k. For sub-exponential k this implies the existence of distributions which cannot be represented.
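The Le Roux and Bengio (2008) intuition described above, a hidden unit that turns on essentially only for one chosen pattern v and boosts its probability mass, can be made concrete for tiny n. The weight scale M and the bias bookkeeping below are our own illustrative choices, not the construction's exact constants.

```python
import itertools
import numpy as np

def add_pattern_unit(W, b, v, M=20.0, mass=3.0):
    """Append a hidden unit whose input is M/2 + mass at x = v
    and at most mass - M/2 at every other x (so it is 'off' there)."""
    v = np.asarray(v, dtype=float)
    w = M * (2 * v - 1)                       # +M on 1-bits of v, -M on 0-bits
    bias = -M * (v.sum() - 0.5) + mass        # hypothetical bias bookkeeping
    return np.hstack([W, w[:, None]]), np.hstack([b, bias])

def rbm_marginal(W, b, c):
    """Brute-force p(x) for an RBM, using the factorized sum over h."""
    xs = [np.array(x) for x in itertools.product([0, 1], repeat=W.shape[0])]
    logp = np.array([c @ x + np.sum(np.log1p(np.exp(x @ W + b))) for x in xs])
    p = np.exp(logp - logp.max())
    return xs, p / p.sum()
```

Starting from an empty RBM and adding one such unit makes v the mode of the distribution, matching the intuition that each unit's bias individually controls one pattern's mass.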
However, this kind of result gives us no indication of what these hard-to-represent distributions might look like, leaving the possibility that they might all be structureless or otherwise uninteresting. In this paper we first develop some tools and simulation results which relate RBMs to certain easier-to-analyze approximations, and to neural networks with one hidden layer of threshold units, for which many results about representational efficiency are already known (Maass, 1992; Maass et al., 1994; Hajnal et al., 1993). This opens the door to a range of potentially relevant complexity results, some of which we apply in this paper. Next, we present a construction that shows how RBMs with m = n² + 1 can produce arbitrarily good approximations to any distribution where the mass is a symmetric function of the inputs (that is, it depends only on Σᵢ xᵢ). One example of such a function is the (in)famous PARITY function, which was shown to be hard to compute in the perceptron model by the classic Minsky and Papert book from 1968. This distribution is highly non-smooth and has exponentially many modes. Having ruled out distributions with symmetric mass functions as candidates for ones that are hard for RBMs to represent, we provide a concrete example of one whose mass computation involves only one additional operation vs. computing PARITY, and yet whose representation by an RBM provably requires m to grow exponentially with n (assuming an exponential upper bound on the size of the RBM's weights). Because this distribution is particularly simple, it can be viewed as a special case of many other more complex types of distributions, and thus our results speak to the hardness of representing those distributions with RBMs as well. Our results provide a fine delineation between what is "easy" for RBMs to represent, and what is "hard".
Perhaps more importantly, they demonstrate that the distributions that cannot be efficiently represented by RBMs can have a relatively basic structure, and are not simply random in appearance as one might hope given the previous results. This provides perhaps the first completely rigorous justification for the use of deeper generative models such as Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009) and contrastive backpropagation networks (Hinton et al., 2006) over standard RBMs. The rest of the paper is organized as follows. Section 2 characterizes the unnormalized log-likelihood as a type of neural network (called an "RBM network") and shows how this type is related to single-hidden-layer neural networks of threshold neurons, and to an easier-to-analyze approximation (which we call a "hardplus RBM network"). Section 3 describes an m = n² + 1 construction for distributions whose mass is a function of Σᵢ xᵢ, and in Section 4 we present an exponential lower bound on m for a slightly more complicated class of explicit distributions. Note that all proofs can be found in the Appendix.

Figure 1: Left: An illustration of a basic RBM network with n = 3 and m = 5. The hidden biases are omitted to avoid clutter. Right: A plot comparing the soft and hard activation functions.

2 RBM networks

2.1 Free energy function

In an RBM, the (negative) unnormalized log probability of x, after h has been marginalized out, is known as the free energy. Denoted by F_θ(x), the free energy satisfies the property that p(x) = exp(−F_θ(x))/Z_θ, where Z_θ is the usual partition function. It is well known (see Appendix A.1 for a derivation) that, due to the bipartite structure of RBMs, computing F is tractable and has a particularly nice form:
$$F_\theta(x) = -c^\top x - \sum_j \log(1 + \exp(x^\top [W]_j + b_j)) \qquad (1)$$
where [W]_j is the j-th column of W.
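Equation (1) can be checked numerically against the defining marginalization exp(−F_θ(x)) = Σ_h exp(−E_θ(x, h)), since the sum over h factorizes across the hidden units. The helper names below are ours, and the brute-force version is only feasible for small m.

```python
import itertools
import numpy as np

def free_energy(x, W, c, b):
    """Closed form of Eq. (1): tractable, no sum over hidden states."""
    return -c @ x - np.sum(np.log1p(np.exp(x @ W + b)))

def neg_log_unnorm_brute(x, W, c, b):
    """-log sum_h exp(-E(x, h)), enumerated over all 2^m hidden states."""
    m = W.shape[1]
    s = sum(np.exp(x @ W @ h + c @ x + b @ h)
            for h in map(np.array, itertools.product([0, 1], repeat=m)))
    return -np.log(s)
```

The two quantities agree exactly (up to floating point), because Σ_h exp(x^T W h + c^T x + b^T h) = exp(c^T x) Π_j (1 + exp(x^T [W]_j + b_j)).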
Because the free energy completely determines the log probability of x, it fully characterizes an RBM's distribution. So studying what kinds of distributions an RBM can represent amounts to studying the kinds of functions that can be realized by the free energy function for some setting of θ.

2.2 RBM networks

The form of an RBM's free energy function can be expressed as a standard feed-forward neural network, or equivalently, a real-valued circuit, where instead of hidden units with the usual sigmoidal activation functions, we have m "neurons" (a term we will use to avoid confusion with the original meaning of a "unit" in the context of RBMs) that use the softplus activation function: soft(y) = log(1 + exp(y)). Note that at the cost of increasing m by one (which does not matter asymptotically) and introducing an arbitrarily small approximation error, we can assume that the visible biases (c) of an RBM are all zero. To see this, note that, up to an additive constant, we can very closely approximate c^⊤x by soft(K + c^⊤x) ≈ K + c^⊤x for a suitably large value of K (i.e., K ≫ ∥c∥₁ ≥ max_x(c^⊤x)). Proposition 11 in the Appendix quantifies the very rapid convergence of this approximation as K increases. These observations motivate the following definition of an RBM network, which computes functions with the same form as the negative free energy function of an RBM (assumed to have c = 0), or equivalently the log probability (negative energy) function of an RBM. RBM networks are illustrated in Figure 1.

Definition 1. An RBM network with parameters W, b is defined as a neural network with one hidden layer containing m softplus neurons, with weights and biases given by W and b, so that each neuron j's output is soft(x^⊤[W]_j + b_j). The output layer contains one neuron whose weights and bias are given by 1 ≡ [11...1]^⊤ and the scalar B, respectively.
For convenience, we include the bias constant B so that RBM networks shift their output by an additive constant (which does not affect the probability distribution implied by the RBM network, since any additive constant is canceled out by log Z in the full log probability).

2.3 Hardplus RBM networks

A function which is somewhat easier to analyze than the softplus function is the so-called hardplus function (aka 'plus' or 'rectification'), defined by hard(y) = max(0, y). As their names suggest, the softplus function can be viewed as a smooth approximation of the hardplus, as illustrated in Figure 1. We define a hardplus RBM network in the obvious way: as an RBM network with the softplus activation functions of the hidden neurons replaced with hardplus functions. The strategy we use to prove many of the results in this paper is to first establish them for hardplus RBM networks, and then show how they can be adapted to the standard softplus case via the simulation results given in the following section.

2.4 Hardplus RBM networks versus (softplus) RBM networks

In this section we present some approximate simulation results which relate hardplus and standard (softplus) RBM networks. The first result formalizes the simple observation that for large input magnitudes, the softplus and hardplus functions behave very similarly (see Figure 1, and Proposition 11 in the Appendix).

Lemma 2. Suppose we have softplus and hardplus RBM networks with identical sizes and parameters. If, for each possible input x ∈ {0,1}^n, the magnitude of the input to each neuron is bounded from below by C, then the two networks compute the same real-valued function, up to an error (measured by |·|) which is bounded by m exp(−C).

The next result demonstrates how to approximately simulate an RBM network with a hardplus RBM network while incurring an approximation error which shrinks as the number of neurons increases.
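The bound in Lemma 2 rests on the pointwise fact that |soft(y) − hard(y)| = log(1 + e^(−|y|)) ≤ e^(−|y|), so each of the m neurons contributes at most exp(−C) error when its input magnitude is at least C. A quick numerical check (our own, with an arbitrary C = 5):

```python
import numpy as np

def softplus(y):
    # numerically stable log(1 + exp(y))
    return np.maximum(y, 0) + np.log1p(np.exp(-np.abs(y)))

def hardplus(y):
    return np.maximum(y, 0)

C = 5.0
# sample inputs whose magnitude is bounded below by C
y = np.concatenate([np.linspace(-50, -C, 200), np.linspace(C, 50, 200)])
gap = np.abs(softplus(y) - hardplus(y))
```

At y = 0 the gap is log 2, its largest value, which is why the lemma needs the magnitude lower bound.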
The basic idea is to simulate individual softplus neurons with groups of hardplus neurons that compute what amounts to a piecewise-linear approximation of the smooth region of a softplus function.

Theorem 3. Suppose we have a (softplus) RBM network with m hidden neurons and parameters bounded in magnitude by C. Let p > 0. Then there exists a hardplus RBM network with at most 2m²p log(mp) + m hidden neurons, and with parameters bounded in magnitude by C, which computes the same function up to an approximation error of 1/p.

Note that if p and m are polynomial functions of n, then the simulation produces hardplus RBM networks whose size is also polynomial in n.

2.5 Thresholded Networks and Boolean Functions

Many relevant results and proof techniques concerning the properties of neural networks focus on the case where the output is thresholded to compute a Boolean function (i.e., a binary classification). In this section we define some key concepts regarding output thresholding, and present some basic propositions that demonstrate how hardness results for computing Boolean functions via thresholding yield analogous hardness results for computing certain real-valued functions. We say that a real-valued function g represents a Boolean function f with margin δ if, for all x, g satisfies |g(x)| ≥ δ and thresh(g(x)) = f(x), where thresh is the 0/1-valued threshold function defined by thresh(a) = 1 if a ≥ 0, and thresh(a) = 0 if a < 0. We define a thresholded neural network (a distinct concept from a "threshold network", which is a neural network with hidden neurons whose activation function is thresh) to be a neural network whose output is a single real value, which is followed by an application of the threshold function. Such a network will be said to compute a given Boolean function f with margin δ (similar to the concept of "separation" from Maass et al. (1994)) if the real-valued input g to the final threshold represents f according to the above definition.
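The piecewise-linear idea behind Theorem 3 can be illustrated directly: softplus is convex, so its piecewise-linear interpolant is a sum of hardplus terms with nonnegative slope increments, and each term incr·hard(y − t) rewrites as hard(incr·y − incr·t), i.e., a single hardplus neuron whose output weight stays fixed at 1. This is an illustrative sketch with our own knot placement, not the theorem's exact construction or neuron count.

```python
import numpy as np

def softplus(y):
    return np.maximum(y, 0) + np.log1p(np.exp(-np.abs(y)))

def hardplus_approx(y, R=10.0, num_knots=200):
    """Sum of hardplus pieces interpolating softplus at knots on [-R, R]."""
    t = np.linspace(-R, R, num_knots)
    v = softplus(t)
    slopes = np.diff(v) / np.diff(t)        # increasing, since softplus is convex
    incr = np.diff(slopes, prepend=0.0)     # nonnegative slope increments
    # each term incr_k * hard(y - t_k) == hard(incr_k*y - incr_k*t_k):
    # one hardplus neuron with unit output weight
    y = np.atleast_1d(np.asarray(y, dtype=float))
    return np.sum(incr[None, :] * np.maximum(y[:, None] - t[None, :-1], 0.0),
                  axis=1)
```

With 200 knots on [−10, 10] the approximation error is already well below 10⁻³, consistent with the 1/p error scaling as the number of pieces grows.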
While the output of a thresholded RBM network does not correspond to the log probability of an RBM, the following observation spells out how we can use thresholded RBM networks to establish lower bounds on the size of an RBM network required to compute certain simple functions (i.e., real-valued functions that represent certain Boolean functions):

Proposition 4. If an RBM network of size m can compute a real-valued function g which represents f with margin δ, then there exists a thresholded RBM network that computes f with margin δ.

This statement clearly holds if we replace each instance of "RBM network" with "hardplus RBM network" above. Using Theorem 3, we can prove a more interesting result, which states that any lower bound for thresholded hardplus RBM networks implies a somewhat weaker lower bound for standard RBM networks:

Proposition 5. If an RBM network of size ≤ m with parameters bounded in magnitude by C computes a function which represents a Boolean function f with margin δ, then there exists a thresholded hardplus RBM network of size ≤ 4m² log(2m/δ)/δ + m, with parameters bounded in magnitude by C (C can be ∞), that computes f(x) with margin δ/2.

This proposition implies that any exponential lower bound on the size of a thresholded hardplus RBM network will yield an exponential lower bound for (softplus) RBM networks that compute functions of the given form, provided that the margin δ is bounded from below by some function of the form 1/poly(n). Intuitively, if f is a Boolean function and no RBM network of size m can compute a real-valued function that represents f (with a margin δ), this means that no RBM of size m can represent any distribution where the log probability of each member of {x | f(x) = 1} is at least 2δ higher than that of each member of {x | f(x) = 0}. In other words, RBMs of this size cannot generate any distribution where the two "classes" implied by f are separated in log probability by more than 2δ.
2.6 RBM networks versus standard neural networks

Viewing the RBM log probability function through the formalism of neural networks (or real-valued circuits) allows us to make use of known results for general neural networks, and helps highlight important differences between what an RBM can effectively "compute" (via its log probability) and what a standard neural network can compute. There is a rich literature studying the complexity of various forms of neural networks, with diverse classes of activation functions, e.g., Maass (1992); Maass et al. (1994); Hajnal et al. (1993). RBM networks are distinguished from these primarily because they have a single hidden layer and because the upper-level weights are constrained to be 1. For some activation functions this restriction may not be significant, but for soft/hard-plus neurons, whose output is always positive, it makes particular computations much more awkward (or perhaps impossible) to express efficiently. Intuitively, the j-th softplus neuron acts as a "feature detector", which, when "activated" by an input such that x^⊤w_j + b_j ≫ 0, can only contribute positively to the log probability of x, according to an (asymptotically) affine function of x given by that neuron's input. For example, it is easy to design an RBM network that (approximately) outputs 1 for the input x = 0 and 0 otherwise (i.e., with a single hidden neuron with weights −M1 for a large M and bias b such that soft(b) = 1), but it is not immediately obvious how an RBM network could efficiently compute (or approximate) the function which is 1 on all inputs except x = 0, and 0 otherwise (it turns out that a non-obvious construction exists for m = n). By comparison, a standard threshold network requires only one hidden neuron to compute such a function.
In fact, it is easy to show¹ that without the constraint on upper-level weights, an RBM network would be, up to a linear factor, at least as efficient at representing real-valued functions as a neural network with one hidden layer of threshold neurons. From this, and from Theorem 4.1 of Maass et al. (1994), it follows that a thresholded RBM network is, up to a polynomial increase in size, at least as efficient at computing Boolean functions as one-hidden-layer neural networks with any "sigmoid-like" activation function² and polynomially bounded weights.

¹To see this, note that we could use 2 softplus neurons to simulate a single neuron with a "sigmoid-like" activation function (i.e., by setting the weights that connect them to the output neuron to have opposite signs). Then, by increasing the size of the weights so the sigmoid saturates in both directions for all inputs, we could simulate a threshold function arbitrarily well, thus allowing the network to compute any function computable by a one-hidden-layer threshold network while using only twice as many neurons.
²This is a broad class that includes the standard logistic sigmoid. See Maass et al. (1994) for a precise technical definition.

Figure 2: Left: The functions computed by the 5 building blocks as constructed by Theorem 7 when applied to the PARITY function for n = 5. Right: The total output of the hardplus RBM network constructed in Theorem 7. The dotted lines indicate the target 0 and 1 values. Note: for purposes of illustration we have extended the function outputs over all real values of X in the obvious way.

2.7 Simulating hardplus RBM networks by a one-hidden-layer threshold network

Here we provide a natural simulation of hardplus RBM networks by threshold networks with one hidden layer.
Because this is an efficient (polynomial) and exact simulation, it implies that a hardplus RBM network can be no more powerful than a threshold network with one hidden layer, for which several lower bound results are already known.

Theorem 6. Let f be a real-valued function computed by a hardplus RBM network of size m. Then f can be computed by a single-hidden-layer threshold network of size mn. Furthermore, if the weights of the RBM network have magnitude at most C, then the weights of the corresponding threshold network have magnitude at most (n + 1)C.

3 (n² + 1)-sized RBM networks can compute any symmetric function

In this section we present perhaps the most surprising results of this paper: a construction of an (n² + 1)-sized RBM network (or hardplus RBM network) for computing any given symmetric function of x. Here, a symmetric function is defined as any real-valued function whose output depends only on the number of 1-bits in the input x. This quantity is denoted X ≡ Σᵢ xᵢ. A well-known example of a symmetric function is PARITY. Symmetric functions are already known³ to be computable by single-hidden-layer threshold networks (Hajnal et al., 1993) with m = n. Meanwhile, (qualified) exponential lower bounds on m exist for functions which are only slightly more complicated (Hajnal et al., 1993; Forster, 2002). Given that hardplus RBM networks appear to be strictly less expressive than such threshold networks (as discussed in Section 2.6), it is surprising that they can nonetheless efficiently compute functions that test the limits of what those networks can compute efficiently.

Theorem 7. Let f : {0,1}^n → R be a symmetric function defined by f(x) = t_k for Σᵢ xᵢ = k. Then (i) there exists a hardplus RBM network of size n² + 1, with weights polynomial in n and t_0, ..., t_n, that computes f exactly, and (ii) for every ε there is a softplus RBM network of size n² + 1, with weights polynomial in n, t_0, ...
, tn and log(1/ϵ) that computes f within an additive error ϵ. The high level idea of this construction is as follows. Our hardplus RBM network consists of n “building blocks”, each composed of n hardplus neurons, plus one additional hardplus neuron, for a total size of m = n2 + 1. Each of these building blocks is designed to compute a function of the form: max(0, γX(e −X)) for parameters γ > 0 and e > 0. This function, examples of which are illustrated in Figure 2, is quadratic from X = 0 to X = e and is 0 otherwise. The main technical challenge is then to choose the parameters of these building blocks so that the sum of n of these “rectified quadratics”, plus the output of the extra hardplus neuron (which handles 3The construction in Hajnal et al. (1993) is only given for Boolean-valued symmetric functions but can be generalized easily. 6 the X = 0 case), yields a function that matches f, up to a additive constant (which we then fix by setting the bias B of the output neuron). This would be easy if we could compute more general rectified quadratics of the form max(0, γ(X −g)(e −X)), since we could just take g = k −1/2 and e = k + 1/2 for each possible value k of X. But the requirement that g = 0 makes this more difficult since significant overlap between non-zero regions of these functions will be unavoidable. Further complicating the situation is the fact that we cannot exploit linear cancelations due to the restriction on the RBM network’s second layer weights. Figure 2 depicts an example of the solution to this problem as given in our proof of Theorem 7. Note that this construction is considerably more complex than the well-known construction used for computing symmetric functions with 1 hidden layer threshold networks Hajnal et al. (1993). 
While we cannot prove that ours is the most efficient possible construction for RBM networks, we can prove that a construction directly analogous to the one used for one-hidden-layer threshold networks, where each individual neuron computes a symmetric function, cannot possibly work for RBM networks. To see this, first observe that any neuron that computes a symmetric function must compute a function of the form $g(\beta X + b)$, where g is the activation function and $\beta$ is some scalar. Then, noting that both soft(y) and hard(y) are convex functions of y, and that the composition of an affine function and a convex function is convex, we have that each neuron computes a convex function of X. And because a positive sum of convex functions is convex, the output of the RBM network (which is the unweighted sum of the outputs of its neurons, plus a constant) is itself convex in X. Thus the symmetric functions computable by such RBM networks must be convex in X, a severe restriction which rules out most examples.

4 Lower bounds on the size of RBM networks for certain functions

4.1 Existential results

In this section we prove a result which establishes the existence of functions that cannot be computed by RBM networks unless they are exponentially large. Instead of identifying non-representable distributions as lying in the complement of some low-dimensional manifold (as was done previously), we will establish the existence of Boolean functions which cannot be represented with a sufficiently large margin by the output of any sub-exponentially large RBM network. However, this result, like previous such existential results, says nothing about what these Boolean functions actually look like.
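The convexity obstruction discussed above is easy to check numerically. The following is our own illustrative sketch (neuron counts and constants chosen arbitrarily): any positive sum of neurons computing soft(βX + b) is convex in X, while PARITY's target values are not.

```python
import math
import random

def soft(y):
    """Numerically stable softplus log(1 + exp(y))."""
    return math.log1p(math.exp(-abs(y))) + max(y, 0.0)

random.seed(0)
n = 8
# Each neuron computes soft(beta*X + b): a convex function of X, since softplus is
# convex and composition with an affine map preserves convexity.
neurons = [(random.uniform(-2, 2), random.uniform(-3, 3)) for _ in range(20)]

def net(X):
    # RBM-network-style output: an unweighted (positive) sum of the neurons.
    return sum(soft(beta * X + b) for beta, b in neurons)

vals = [net(X) for X in range(n + 1)]
second_diffs = [vals[i + 1] - 2 * vals[i] + vals[i - 1] for i in range(1, n)]
# Discrete convexity in X: all second differences are nonnegative.
assert all(d >= -1e-9 for d in second_diffs)

# PARITY's targets 0,1,0,1,... are not convex in X, so no such network matches them.
parity = [k % 2 for k in range(n + 1)]
parity_diffs = [parity[i + 1] - 2 * parity[i] + parity[i - 1] for i in range(1, n)]
assert any(d < 0 for d in parity_diffs)
```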
To prove this result, we will make use of Proposition 5 and a classical result of Muroga (1971) which allows us to discretize the incoming weights of a threshold neuron (without changing the function it computes), thus allowing us to bound the number of possible Boolean functions computable by one-hidden-layer threshold networks of size m.

Theorem 8. Let $F_{m,\delta,n}$ denote the set of those Boolean functions on $\{0,1\}^n$ that can be computed by a thresholded RBM network of size m with margin $\delta$. Then there exists a fixed number K such that
$$|F_{m,\delta,n}| \le 2^{K(n^2 s \log n + s^2 \log s)}, \quad \text{where } s(m,\delta,n) = \frac{4m^2 n}{\delta}\log\frac{2m}{\delta} + m.$$
In particular, when $m^2 \le \delta 2^{\alpha n}$ for any constant $\alpha < 1/2$, the ratio of the size of the set $F_{m,\delta,n}$ to the total number of Boolean functions on $\{0,1\}^n$ (which is $2^{2^n}$) rapidly converges to zero with n.

4.2 Qualified lower bound results for the IP function

While interesting, existential results such as the one above do not give us a clear picture of what a particular hard-to-compute function for RBM networks might look like. Perhaps these functions resemble purely random maps without any interesting structure. Perhaps they consist only of functions that require exponential time to compute on a Turing machine, or, even worse, ones that are non-computable. In such cases, not being able to compute such functions would not constitute a meaningful limitation on the expressive efficiency of RBM networks. In this sub-section we present strong evidence that this is not the case, by exhibiting a simple Boolean function that provably requires exponentially many neurons to be computed by a thresholded RBM network, provided that the margin is not allowed to be exponentially smaller than the weights. Prior to these results, there was no formal separation between the kinds of unnormalized log-likelihoods realizable by polynomially sized RBMs and the class of functions computable efficiently by almost any reasonable model of computation, such as arbitrarily deep Boolean circuits.
The Boolean function we will consider is the well-known "inner product mod 2" function, denoted IP(x), which is defined as the parity of the inner product of the first half of x with the second half (we assume for convenience that n is even). This function can be thought of as a strictly harder-to-compute version of PARITY (since PARITY is trivially reducible to it), which, as we saw in Section 3, can be efficiently computed by a thresholded RBM network (indeed, an RBM network can efficiently compute any possible real-valued representation of PARITY). Intuitively, IP(x) should be harder than PARITY, since it involves an extra "stage" or "layer" of sequential computation, and our formal results with RBMs agree with this intuition. There are many computational problems that IP can be reduced to, so showing that RBM networks cannot compute IP proves that RBMs cannot efficiently model a wide range of distributions whose unnormalized log-likelihoods are sufficiently complex in a computational sense. Examples of such log-likelihoods include ones given by the multiplication of binary-represented integers, or the evaluation of the connectivity of an encoded graph. For other examples, see Corollary 3.5 of Hajnal et al. (1993).

Using the simulation of hardplus RBM networks by one-hidden-layer threshold networks (Theorem 6), Proposition 5, and an existing result due to Hajnal et al. (1993) on the hardness of computing IP by one-hidden-layer thresholded networks with bounded weights, we can prove the following basic result:

Theorem 9. If
$$m < \min\left\{\frac{2^{n/3}}{C},\ 2^{n/6}\sqrt{\frac{\delta}{4C\log(2/\delta)}},\ 2^{n/9}\sqrt[3]{\frac{\delta}{4C}}\right\},$$
then no RBM network of size m, whose weights are bounded in magnitude by C, can compute a function which represents n-dimensional IP with margin $\delta$. In particular, for C and $1/\delta$ bounded by polynomials in n, for n sufficiently large, this condition is satisfied whenever $m < 2^{(1/9-\epsilon)n}$ for some $\epsilon > 0$.
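For concreteness, here is a small sketch (our own illustration, not from the paper) of the IP function and of the trivial reduction of PARITY to it, obtained by fixing the second half of the input to all-ones:

```python
import itertools

def IP(x):
    """Inner product mod 2 of the first half of x with the second half."""
    assert len(x) % 2 == 0
    half = len(x) // 2
    return sum(a * b for a, b in zip(x[:half], x[half:])) % 2

def parity(x):
    return sum(x) % 2

def parity_via_ip(x):
    # PARITY on m bits reduces to IP on 2m bits: pair every bit with a 1.
    return IP(tuple(x) + (1,) * len(x))

# The reduction is exact on every input.
for x in itertools.product([0, 1], repeat=4):
    assert parity(x) == parity_via_ip(x)
```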
Translating the definitions, this result says the following about the limitations of efficient representation by RBMs: unless either the weights or the number of units of an RBM are exponentially large in n, an RBM cannot capture any distribution with the property that the x's s.t. IP(x) = 1 are significantly more probable than the remaining x's.

While the above theorem is easy to prove from known results and the simulation/hardness results given in previous sections, by generalizing the techniques used in Hajnal et al. (1993) we can (with much more effort) derive a stronger result. This gives an improved bound on m and lets us partially relax the magnitude bound on the parameters so that they can be arbitrarily negative:

Theorem 10. If
$$m < \frac{\delta}{2\cdot\max\{\log 2,\ nC + \log 2\}}\cdot 2^{n/4},$$
then no RBM network of size m, whose weights are upper bounded in value by C, can compute a function which represents n-dimensional IP with margin $\delta$. In particular, for C and $1/\delta$ bounded by polynomials in n, for n sufficiently large, this condition is satisfied whenever $m < 2^{(1/4-\epsilon)n}$ for some $\epsilon > 0$.

The general theorem we use to prove this second result (Theorem 17 in the Appendix) requires only that the neural network have one hidden layer of neurons with activation functions that are monotonic and that contribute to the top neuron (after multiplication by the outgoing weight) a quantity which can be bounded by a certain exponentially growing function of n (that also depends on $\delta$). This technique can therefore be applied to produce lower bounds for much more general types of neural networks, and so may be independently interesting.

5 Conclusions and Future Work

In this paper we significantly advanced the theoretical understanding of the representational efficiency of RBMs. We treated the RBM's unnormalized log-likelihood as a neural network, which allowed us to relate an RBM's representational efficiency to that of threshold networks, which are much better understood.
We showed that, quite surprisingly, RBMs can efficiently represent distributions given by symmetric functions such as PARITY, but cannot efficiently represent distributions which are only slightly more complicated, assuming an exponential bound on the weights. This provides rigorous justification for the use of potentially more expressive/deeper generative models. Going forward, some promising research directions and open problems include characterizing the expressive power of Deep Boltzmann Machines and more general Boltzmann machines, and proving an exponential lower bound for some specific distribution without any qualifications on the weights.

Acknowledgments

This research was supported by NSERC. JM is supported by a Google Fellowship; AC by a Ramanujan Fellowship of the DST, India.

References

Aaron Courville, James Bergstra, and Yoshua Bengio. Unsupervised models of images by spike-and-slab RBMs. In Proceedings of the 28th International Conference on Machine Learning, pages 952–960, 2011.

Maria Angélica Cueto, Jason Morton, and Bernd Sturmfels. Geometry of the Restricted Boltzmann Machine. arXiv:0908.4425v1, 2009.

J. Forster. A linear lower bound on the unbounded error probabilistic communication complexity. J. Comput. Syst. Sci., 65(4):612–625, 2002.

Yoav Freund and David Haussler. Unsupervised learning of distributions on binary vectors using two layer networks, 1994.

A. Hajnal, W. Maass, P. Pudlák, M. Szegedy, and G. Turán. Threshold circuits of bounded depth. J. Comput. System Sci., 46:129–154, 1993.

G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. ISSN 1095-9203.

Geoffrey Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.

Geoffrey E. Hinton, Simon Osindero, Max Welling, and Yee Whye Teh. Unsupervised discovery of nonlinear structure using contrastive backpropagation. Cognitive Science, 30(4):725–731, 2006.
Nicolas Le Roux and Yoshua Bengio. Representational power of Restricted Boltzmann Machines and deep belief networks. Neural Computation, 20(6):1631–1649, 2008.

Philip Long and Rocco Servedio. Restricted Boltzmann Machines are hard to approximately evaluate or simulate. In Proceedings of the 27th International Conference on Machine Learning, pages 952–960, 2010.

Wolfgang Maass. Bounds for the computational power and learning complexity of analog neural nets (extended abstract). In Proc. of the 25th ACM Symp. on Theory of Computing, pages 335–344, 1992.

Wolfgang Maass, Georg Schnitger, and Eduardo D. Sontag. A comparison of the computational power of sigmoid and boolean threshold circuits. In Theoretical Advances in Neural Computation and Learning, pages 127–151. Kluwer, 1994.

Benjamin M. Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for Restricted Boltzmann Machine learning. Journal of Machine Learning Research - Proceedings Track, 9:509–516, 2010.

G. Montufar, J. Rauh, and N. Ay. Expressive power and approximation errors of Restricted Boltzmann Machines. In Advances in Neural Information Processing Systems, 2011.

Guido Montufar and Nihat Ay. Refinements of universal approximation results for deep belief networks and Restricted Boltzmann Machines. Neural Computation, 23(5):1306–1319, May 2011.

Saburo Muroga. Threshold logic and its applications. Wiley, 1971.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann Machines. Journal of Machine Learning Research - Proceedings Track, 5:448–455, 2009.

Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of Deep Belief Networks. In Andrew McCallum and Sam Roweis, editors, Proceedings of the 25th Annual International Conference on Machine Learning (ICML 2008), pages 872–879. Omnipress, 2008.

Yichuan Tang and Ilya Sutskever. Data normalization in the learning of Restricted Boltzmann Machines. Technical Report UTML-TR-11-2, Department of Computer Science, University of Toronto, 2011.
A Appendix

A.1 Free-energy derivation

The following is a derivation of the well-known formula for the free energy of an RBM. This tractable form is made possible by the bipartite interaction structure of the RBM's units:
$$p(x) = \sum_h \frac{1}{Z_\theta}\exp(x^\top W h + c^\top x + b^\top h) = \frac{1}{Z_\theta}\exp(c^\top x)\prod_j \sum_{h_j\in\{0,1\}}\exp(x^\top [W]_j h_j + b_j h_j)$$
$$= \frac{1}{Z_\theta}\exp(c^\top x)\exp\Big(\sum_j \log\sum_{h_j\in\{0,1\}}\exp(x^\top [W]_j h_j + b_j h_j)\Big)$$
$$= \frac{1}{Z_\theta}\exp\Big(c^\top x + \sum_j \log\big[1 + \exp(x^\top [W]_j + b_j)\big]\Big) = \frac{1}{Z_\theta}\exp(-F_\theta(x))$$

A.2 Proofs for Section 2.4

We begin with a useful technical result:

Proposition 11. For arbitrary $y \in \mathbb{R}$, the following basic facts about the softplus function hold:
$$y - \mathrm{soft}(y) = -\mathrm{soft}(-y), \qquad \mathrm{soft}(y) \le \exp(y).$$

Proof. The first fact follows from:
$$y - \mathrm{soft}(y) = \log(\exp(y)) - \log(1 + \exp(y)) = \log\frac{\exp(y)}{1 + \exp(y)} = \log\frac{1}{\exp(-y) + 1} = -\log(1 + \exp(-y)) = -\mathrm{soft}(-y).$$
To prove the second fact, we will show that the function $f(y) = \exp(y) - \mathrm{soft}(y)$ is positive. Note that f tends to 0 as y goes to $-\infty$, since both $\exp(y)$ and $\mathrm{soft}(y)$ do. It remains to show that f is monotonically increasing, which we establish by showing that its derivative is positive:
$$f'(y) = \exp(y) - \frac{1}{1 + \exp(-y)} > 0 \iff \frac{\exp(y)(1 + \exp(-y)) - 1}{1 + \exp(-y)} > 0 \iff \exp(y) + 1 - 1 > 0 \iff \exp(y) > 0.$$

Proof of Lemma 2. Consider a single neuron in the RBM network and the corresponding neuron in the hardplus RBM network, whose net inputs are both given by $y = w^\top x + b$. For each x, there are two cases for y. If $y \ge 0$, we have by hypothesis that $y \ge C$, and so:
$$|\mathrm{hard}(y) - \mathrm{soft}(y)| = |y - \mathrm{soft}(y)| = |-\mathrm{soft}(-y)| = \mathrm{soft}(-y) \le \exp(-y) \le \exp(-C).$$
And if $y < 0$, we have by hypothesis that $y \le -C$, and so:
$$|\mathrm{hard}(y) - \mathrm{soft}(y)| = |0 - \mathrm{soft}(y)| = \mathrm{soft}(y) \le \exp(y) \le \exp(-C).$$
Thus each corresponding pair of neurons computes the same function up to an error bounded by $\exp(-C)$. From this it is easy to show that the two networks compute the same function up to an error bounded by $m\exp(-C)$, as required.

Proof of Theorem 3. Suppose we have a softplus RBM network with a number of hidden neurons given by m.
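The closed-form free energy derived in A.1 above can be sanity-checked against the brute-force sum over all hidden configurations. The following is our own small sketch with arbitrary random parameters (names are ours):

```python
import math
import itertools
import random

random.seed(1)
n, m = 4, 3
W = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
b = [random.uniform(-1, 1) for _ in range(m)]
c = [random.uniform(-1, 1) for _ in range(n)]

def energy_sum(x):
    """Brute-force: sum exp(x'Wh + c'x + b'h) over all 2^m hidden vectors h."""
    total = 0.0
    for h in itertools.product([0, 1], repeat=m):
        e = (sum(x[i] * W[i][j] * h[j] for i in range(n) for j in range(m))
             + sum(c[i] * x[i] for i in range(n))
             + sum(b[j] * h[j] for j in range(m)))
        total += math.exp(e)
    return total

def neg_free_energy(x):
    """Closed form: -F(x) = c'x + sum_j log(1 + exp(x'[W]_j + b_j))."""
    return (sum(c[i] * x[i] for i in range(n))
            + sum(math.log1p(math.exp(sum(x[i] * W[i][j] for i in range(n)) + b[j]))
                  for j in range(m)))

# The two agree on every visible configuration.
for x in itertools.product([0, 1], repeat=n):
    assert abs(math.log(energy_sum(x)) - neg_free_energy(x)) < 1e-9
```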
To simulate this with a hardplus RBM network, we will replace each neuron with a group of hardplus neurons whose weights and biases are chosen so that the sum of their outputs approximates the output of the original softplus neuron to within a maximum error of 1/p, where p is some constant > 0.

First we describe the construction for the simulation of a single softplus neuron by a group of hardplus neurons. Let g be a positive integer and a > 0. We will define these more precisely later, but for what follows their precise values are not important. At a high level, this construction works by approximating soft(y), where y is the input to the neuron, by a piecewise-linear function expressed as the sum of a number of hardplus functions whose "corners" all lie inside $[-a, a]$. Outside this range of values, we use the fact that soft(y) converges exponentially fast (in a) to 0 on the left and to y on the right (both of which can be trivially computed by hardplus functions).

Formally, for $i = 1, 2, \ldots, g, g+1$, let
$$q_i = \frac{(i-1)2a}{g} - a.$$
For $i = 1, 2, \ldots, g$, let
$$\nu_i = \frac{\mathrm{soft}(q_{i+1}) - \mathrm{soft}(q_i)}{q_{i+1} - q_i},$$
and also let $\nu_0 = 0$ and $\nu_{g+1} = 1$. Finally, for $i = 1, 2, \ldots, g, g+1$, let
$$\eta_i = \nu_i - \nu_{i-1}.$$
With these definitions it is straightforward to show that $1 \ge \nu_i > 0$, $\nu_i > \nu_{i-1}$, and consequently $0 < \eta_i < 1$ for each i. It is also easy to show that $q_i > q_{i-1}$, $q_1 = -a$ and $q_{g+1} = a$.

For $i = 1, 2, \ldots, g, g+1$, we will set the weight vector $w_i$ and bias $b_i$ of the i-th hardplus neuron in our group so that the neuron outputs $\mathrm{hard}(\eta_i(y - q_i))$. This is accomplished by taking $w_i = \eta_i w$ and $b_i = \eta_i(b - q_i)$, where w and b (without the subscripts) are the weight vector and bias of the original softplus neuron. Note that since $|\eta_i| \le 1$, the weights of these hard neurons are smaller in magnitude than the weights of the original soft neuron, and thus bounded by C as required.
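As a quick numerical sanity check of this construction (an illustrative sketch of ours, with a and g chosen arbitrarily, not part of the proof), one can build the group of hardplus neurons and compare its summed output, the quantity T(y) analyzed below, against soft(y):

```python
import math

def soft(y):
    return math.log1p(math.exp(-abs(y))) + max(y, 0.0)

def hard(y):
    return max(0.0, y)

def simulate_softplus(a, g):
    """Return T(y), the summed output of the g+1 hardplus neurons."""
    q = [j * 2 * a / g - a for j in range(g + 1)]        # q[j] holds q_{j+1}
    nu = [0.0]                                           # nu_0 = 0
    for i in range(1, g + 1):                            # secant slopes nu_1..nu_g
        nu.append((soft(q[i]) - soft(q[i - 1])) / (q[i] - q[i - 1]))
    nu.append(1.0)                                       # nu_{g+1} = 1
    eta = [nu[j + 1] - nu[j] for j in range(g + 1)]      # eta_1..eta_{g+1}
    return lambda y: sum(hard(eta[j] * (y - q[j])) for j in range(g + 1))

a, g = 5.0, 200
T = simulate_softplus(a, g)
bound = max(2 * a / g, math.exp(-a))                     # error bound from the proof
err = max(abs(T(-10 + 0.05 * k) - soft(-10 + 0.05 * k)) for k in range(401))
assert err <= bound + 1e-12
```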
The total output (sum) for this group is
$$T(y) = \sum_{i=1}^{g+1} \mathrm{hard}(\eta_i(y - q_i)).$$
We will now bound the approximation error $|T(y) - \mathrm{soft}(y)|$ of our single-neuron simulation. Note that for a given y, the i-th hardplus neuron in the group has a non-negative input iff $y \ge q_i$. Thus for $y < -a$, all of the neurons have a negative input. And for $y \ge -a$, if we take j to be the largest index i s.t. $q_i \le y$, then each neuron from $i = 1$ to $i = j$ will have positive input and each neuron from $i = j+1$ to $i = g+1$ will have negative input.

Consider the case $y < -a$. Since the input to each neuron is negative, they each output 0 and thus $T(y) = 0$. This results in an approximation error $\le \exp(-a)$:
$$|T(y) - \mathrm{soft}(y)| = |0 - \mathrm{soft}(y)| = \mathrm{soft}(y) < \mathrm{soft}(-a) \le \exp(-a).$$

Next, consider the case $y \ge -a$, and let j be as given above. In this case we have:
$$T(y) = \sum_{i=1}^{g+1} \mathrm{hard}(\eta_i(y - q_i)) = \sum_{i=1}^{j} \eta_i(y - q_i) + 0 = \sum_{i=1}^{j} (\nu_i - \nu_{i-1})(y - q_i)$$
$$= y\sum_{i=1}^{j}(\nu_i - \nu_{i-1}) - \sum_{i=1}^{j}(\nu_i - \nu_{i-1})q_i = y\nu_j - y\nu_0 - \nu_j q_j + \sum_{i=1}^{j-1}\nu_i(q_{i+1} - q_i) + \nu_0 q_1$$
$$= \nu_j(y - q_j) + \sum_{i=1}^{j-1}\big(\mathrm{soft}(q_{i+1}) - \mathrm{soft}(q_i)\big) = \nu_j(y - q_j) + \mathrm{soft}(q_j) - \mathrm{soft}(q_1).$$

For $y \le a$ we note that $\nu_j(y - q_j) + \mathrm{soft}(q_j)$ is a secant approximation to soft(y), generated by the secant from $q_j$ to $q_{j+1}$, and upper-bounds soft(y) for $y \in [q_j, q_{j+1}]$. Thus a crude bound on the error is $\mathrm{soft}(q_{j+1}) - \mathrm{soft}(q_j)$, which only makes use of the fact that soft(y) is monotonic. Then, because the slope (derivative) of soft(y) is $\sigma(y) = 1/(1 + \exp(-y)) < 1$, we can further (crudely) bound this by $q_{j+1} - q_j$. Thus the approximation error at such y's may be bounded as:
$$|T(y) - \mathrm{soft}(y)| = |(\nu_j(y - q_j) + \mathrm{soft}(q_j) - \mathrm{soft}(q_1)) - \mathrm{soft}(y)|$$
$$\le \max\{|\nu_j(y - q_j) + \mathrm{soft}(q_j) - \mathrm{soft}(y)|,\ \mathrm{soft}(q_1)\} \le \max\{q_{j+1} - q_j,\ \exp(-a)\} = \max\left\{\frac{2a}{g},\ \exp(-a)\right\},$$
where we have also used $\mathrm{soft}(q_1) = \mathrm{soft}(-a) \le \exp(-a)$.

For the case $y > a$, we have $q_i \le y$ for all i, so the largest index j such that $q_j \le y$ is $j = g+1$. So $\nu_j(y - q_j) + \mathrm{soft}(q_j) - \mathrm{soft}(q_1) = y - a + \mathrm{soft}(a) - \mathrm{soft}(-a) = y$.
Thus the approximation error at such y's is:
$$|y - \mathrm{soft}(y)| = |-\mathrm{soft}(-y)| = \mathrm{soft}(-y) \le \mathrm{soft}(-a) \le \exp(-a).$$
Having covered all cases for y, we conclude that the general approximation error for a single softplus neuron satisfies the bound
$$|T(y) - \mathrm{soft}(y)| \le \max\left\{\frac{2a}{g},\ \exp(-a)\right\}.$$
For a softplus RBM network with m neurons, the hardplus RBM network constructed by replacing each neuron with a group of hardplus neurons as described above will require a total of $m(g+1)$ neurons, and will have an approximation error bounded by the sum of the individual approximation errors, which is itself bounded by
$$m\max\left\{\frac{2a}{g},\ \exp(-a)\right\}.$$
Taking $a = \log(mp)$ and $g = \lceil 2mpa\rceil$ gives
$$m\max\left\{\frac{2a}{\lceil 2mpa\rceil},\ \frac{1}{mp}\right\} \le m\max\left\{\frac{2a}{2mpa},\ \frac{1}{mp}\right\} = \max\left\{\frac{1}{p},\ \frac{1}{p}\right\} = \frac{1}{p}.$$
Thus we see that with $m(g+1) = m(\lceil 2mp\log(mp)\rceil + 1) \le 2m^2 p\log(mp) + m$ neurons we can produce a hardplus RBM network which approximates the output of our softplus RBM network with error bounded by 1/p.

Remark 12. Note that the construction used in the above lemma is likely far from optimal, as the placement of the $q_i$'s could be done more carefully. Also, the error bound we proved is crude and does not make strong use of the properties of the softplus function. Nonetheless, it seems good enough for our purposes.

A.3 Proofs for Section 2.5

Proof of Proposition 5. Suppose that an RBM network of size m with weights bounded in magnitude by C computes a function g which represents f with margin $\delta$. Then, taking $p = 2/\delta$ and applying Theorem 3, there exists a hardplus RBM network of size $4m^2\log(2m/\delta)/\delta + m$ which computes a function g′ s.t. $|g(x) - g'(x)| \le 1/p = \delta/2$ for all x. Note that
$$f(x) = 1 \Rightarrow \mathrm{thresh}(g(x)) = 1 \Rightarrow g(x) \ge \delta \Rightarrow g'(x) \ge \delta - \delta/2 = \delta/2,$$
and similarly,
$$f(x) = 0 \Rightarrow \mathrm{thresh}(g(x)) = 0 \Rightarrow g(x) \le -\delta \Rightarrow g'(x) \le -\delta + \delta/2 = -\delta/2.$$
Thus we conclude that g′ represents f with margin $\delta/2$.

A.4 Proofs for Section 2.7

Proof of Theorem 6. Let f be a function on n variables computed by a hardplus RBM network of size m, with parameters (W, b, d).
We will first construct a three-layer hybrid Boolean/threshold circuit/network where the output gate is a simple weighted sum, the middle layer consists of AND gates, and the bottom hidden layer consists of threshold neurons. There will be $n \cdot m$ AND gates, one for every $i \in [n]$ and $j \in [m]$. The $(i,j)$-th AND gate will have inputs: (1) $x_i$ and (2) $(x^\top [W]_j \ge b_j)$. The weights going from the $(i,j)$-th AND gate to the output will be given by $[W]_{i,j}$. It is not hard to see that our three-layer network computes the same function as the original hardplus RBM network.

In order to obtain a single-hidden-layer threshold network, we replace each sub-network rooted at an AND gate of the middle layer by a single threshold neuron. Consider a general sub-network consisting of an AND of: (1) a variable $x_j$ and (2) a threshold neuron computing $(\sum_{i=1}^n a_i x_i \ge b)$. Let Q be some number greater than the sum of all the $a_i$'s. We replace this sub-network by a single threshold gate that computes $(\sum_{i=1}^n a_i x_i + Qx_j \ge b + Q)$. Note that if the input x is such that $\sum_i a_i x_i \ge b$ and $x_j = 1$, then $\sum_i a_i x_i + Qx_j$ will be at least $b + Q$, so the threshold gate will output 1. In all other cases, the threshold gate will output zero. (If $\sum_i a_i x_i < b$, then even if $x_j = 1$, the sum will still be less than Q + b. Similarly, if $x_j = 0$, then since $\sum_i a_i x_i$ is never greater than $\sum_i a_i$, the total sum will be less than $Q \le (n+1)C$.)

A.5 Proof of Theorem 7

Proof. We will first describe how to construct a hardplus RBM network which satisfies the properties required for part (i). It will be composed of n special groups of hardplus neurons (which are defined and discussed below), and one additional neuron we call the "zero-neuron", which will be defined later.
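The gate-merging step in the proof of Theorem 6 above can be verified exhaustively on a small example. This is our own illustrative sketch; the weights `a`, threshold `b`, index `j`, and the choice of `Q` are arbitrary:

```python
import itertools

n = 4
a = [3, -2, 5, 1]   # hypothetical incoming weights of the threshold neuron
b = 2               # the neuron fires iff a.x >= b
j = 2               # the variable AND-ed with the threshold
Q = sum(v for v in a if v > 0) + 1   # Q exceeds any achievable value of a.x

def and_gate(x):
    """AND of x_j with the threshold (a.x >= b)."""
    return int(x[j] == 1 and sum(a[i] * x[i] for i in range(n)) >= b)

def merged(x):
    """The single threshold neuron replacing the AND sub-network."""
    return int(sum(a[i] * x[i] for i in range(n)) + Q * x[j] >= b + Q)

# The replacement is exact on every Boolean input.
for x in itertools.product([0, 1], repeat=n):
    assert and_gate(x) == merged(x)
```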
Definition 13. A "building block" is a group of n hardplus neurons, parameterized by the scalars $\gamma$ and e, where the i-th neuron in the group has weight $w_i = M - \gamma$ on input coordinate i, weight $w_j = -\gamma$ on every other coordinate $j \ne i$, and bias $b = \gamma e - M$, where M is a constant chosen so that $M > \gamma e$.

For a given x, the input to the i-th neuron of a particular building block is given by:
$$\sum_{j=1}^{n} w_j x_j + b = w_i x_i + \sum_{j\ne i} w_j x_j + b = (M-\gamma)x_i - \gamma(X - x_i) + \gamma e - M = \gamma(e - X) - M(1 - x_i).$$
When $x_i = 0$, this is $\gamma(e - X) - M < 0$, and so the neuron will output 0 (by definition of the hardplus function). On the other hand, when $x_i = 1$, the input to the neuron will be $\gamma(e - X)$, and thus the output will be $\max(0, \gamma(e - X))$. In general, the output is given by
$$x_i \max(0, \gamma(e - X)).$$
From this it follows that the combined output of the neurons in the building block is
$$\sum_{i=1}^{n} x_i \max(0, \gamma(e - X)) = \max(0, \gamma(e - X))\sum_{i=1}^{n} x_i = \max(0, \gamma(e - X))X = \max(0, \gamma X(e - X)).$$
Note that whenever X is positive, the output is a concave quadratic function of X, with zeros at $X = 0$ and $X = e$, maximized at $X = e/2$ with value $\gamma e^2/4$.

Next we show how the parameters of the n building blocks used in our construction can be set to produce a hardplus RBM network with the desired output. First, define d to be any number greater than or equal to $2n^2\sum_j |t_j|$. Indexing the building blocks by j for $1 \le j \le n$, we define their respective parameters $\gamma_j, e_j$ as follows:
$$\gamma_n = \frac{t_n + d}{n^2}, \qquad \gamma_j = \frac{t_j + d}{j^2} - \frac{t_{j+1} + d}{(j+1)^2}$$
$$e_n = 2n, \qquad e_j = \frac{2}{\gamma_j}\left(\frac{t_j + d}{j} - \frac{t_{j+1} + d}{j+1}\right),$$
where we have assumed that $\gamma_j \ne 0$ (which will be established, along with some other properties of these definitions, in the next claim).

Claim 1. (i) For all j, $1 \le j \le n$, $\gamma_j > 0$, and (ii) for all j, $1 \le j \le n-1$, $j \le e_j \le j+1$.

Proof of Claim 1. Part (i): For $j = n$, by definition we know that $\gamma_n = \frac{t_n + d}{n^2}$. For $d \ge 2n^2\sum_j |t_j| > |t_n|$, the numerator is positive and therefore $\gamma_n$ is positive.
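The building-block identity derived above can be checked by enumerating all binary inputs and comparing the summed neuron outputs with $\max(0, \gamma X(e - X))$. This is our own sketch; n, γ and e are chosen arbitrarily:

```python
import itertools

def hard(y):
    return max(0.0, y)

n, gamma, e = 5, 0.7, 3.2        # illustrative parameters
M = gamma * e + 1.0              # any M > gamma*e works

def block(x):
    """Sum of the n hardplus neurons of one building block."""
    total = 0.0
    for i in range(n):           # neuron i: weight M-gamma on x_i, -gamma elsewhere
        pre = sum((M - gamma if j == i else -gamma) * x[j] for j in range(n))
        pre += gamma * e - M     # shared bias
        total += hard(pre)
    return total

# The block computes the rectified quadratic max(0, gamma*X*(e - X)) exactly.
for x in itertools.product([0, 1], repeat=n):
    X = sum(x)
    assert abs(block(x) - max(0.0, gamma * X * (e - X))) < 1e-9
```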
For $j < n$, we have:
$$\gamma_j > 0 \iff \frac{t_j + d}{j^2} > \frac{t_{j+1} + d}{(j+1)^2} \iff (j+1)^2(t_j + d) > j^2(t_{j+1} + d)$$
$$\iff d\big((j+1)^2 - j^2\big) > j^2 t_{j+1} - (j+1)^2 t_j \iff d > \frac{j^2 t_{j+1} - (j+1)^2 t_j}{2j+1}.$$
The right side of the final inequality is at most $\frac{(j+1)^2(|t_{j+1}| + |t_j|)}{2j+1} \le (j+1)(|t_{j+1}| + |t_j|)$, which is strictly upper-bounded by $2n^2\sum_j |t_j|$, and thus by d. So it follows that $\gamma_j > 0$, as needed.

Part (ii): For the lower bound on $e_j$:
$$j \le e_j = \frac{2}{\gamma_j}\left(\frac{t_j+d}{j} - \frac{t_{j+1}+d}{j+1}\right) \iff j\gamma_j \le 2\left(\frac{t_j+d}{j} - \frac{t_{j+1}+d}{j+1}\right)$$
$$\iff \frac{t_j+d}{j} - \frac{j(t_{j+1}+d)}{(j+1)^2} \le 2\left(\frac{t_j+d}{j} - \frac{t_{j+1}+d}{j+1}\right) \iff -\frac{j(t_{j+1}+d)}{(j+1)^2} \le \frac{t_j+d}{j} - \frac{2(t_{j+1}+d)}{j+1}$$
$$\iff -j^2(t_{j+1}+d) \le (j+1)^2(t_j+d) - 2j(j+1)(t_{j+1}+d)$$
$$\iff d\big(j^2 - 2j(j+1) + (j+1)^2\big) \ge -j^2 t_{j+1} + 2j(j+1)t_{j+1} - (j+1)^2 t_j \iff d \ge -j^2 t_{j+1} + 2j(j+1)t_{j+1} - (j+1)^2 t_j,$$
where we have used $j^2 - 2j(j+1) + (j+1)^2 = (j-(j+1))^2 = 1$ at the last step. Thus it suffices to make d large enough to ensure that $j \le e_j$, and for our choice of d this will be true.

For the upper bound:
$$e_j = \frac{2}{\gamma_j}\left(\frac{t_j+d}{j} - \frac{t_{j+1}+d}{j+1}\right) \le j+1 \iff 2\left(\frac{t_j+d}{j} - \frac{t_{j+1}+d}{j+1}\right) \le (j+1)\gamma_j = \frac{(j+1)(t_j+d)}{j^2} - \frac{t_{j+1}+d}{j+1}$$
$$\iff \frac{2(t_j+d)}{j} - \frac{t_{j+1}+d}{j+1} \le \frac{(j+1)(t_j+d)}{j^2} \iff 2j(j+1)(t_j+d) - j^2(t_{j+1}+d) \le (j+1)^2(t_j+d)$$
$$\iff d\big(j^2 - 2j(j+1) + (j+1)^2\big) \ge -j^2 t_{j+1} + 2j(j+1)t_j - (j+1)^2 t_j \iff d \ge -j^2 t_{j+1} + 2j(j+1)t_j - (j+1)^2 t_j,$$
where we have again used $j^2 - 2j(j+1) + (j+1)^2 = 1$ at the last step. Again, for our choice of d the above inequality is satisfied.

Finally, define M to be any number greater than $\max(t_0 + d, \max_i\{\gamma_i e_i\})$. In addition to the n building blocks, our hardplus RBM network will include an additional unit that we call the zero-neuron, which handles $x = 0$. The zero-neuron has weights $w_i = -M$ for each i and bias $b = t_0 + d$. Finally, the output bias B of our hardplus RBM network is set to $-d$.
The total output of the network is simply the sum of the outputs of the n building blocks, the zero-neuron, and the constant bias $-d$. To show part (i) of the theorem, we want to prove that for all k, whenever $X = k$, our circuit outputs the value $t_k$. We make the following definitions:
$$a_k \equiv -\sum_{j=k}^{n}\gamma_j, \qquad b_k \equiv \sum_{j=k}^{n}\gamma_j e_j.$$

Claim 2.
$$a_k = -\frac{t_k + d}{k^2}, \qquad b_k = \frac{2(t_k + d)}{k}, \qquad b_k = -2k a_k.$$
This claim follows directly from the definitions of $\gamma_j$ and $e_j$, upon observing that $a_k$ and $b_k$ are telescoping sums.

Given these facts, we can prove the following:

Claim 3. For all k, $1 \le k \le n$, when $X = k$ the sum of the outputs of all n building blocks is $t_k + d$.

Proof of Claim 3. For $X = n$, the $(\gamma_n, e_n)$-block computes $\max(0, \gamma_n X(e_n - X)) = \max(0, -\gamma_n X^2 + \gamma_n e_n X)$. By the definition of $e_n$, $n \le e_n$, and thus when $X \le n$, $\gamma_n X(e_n - X) \ge 0$. Every other building block $(\gamma_j, e_j)$, $j < n$, outputs zero, since $e_j \le j+1 \le n$ means that $\gamma_j X(e_j - X) \le 0$ at $X = n$. Thus the sum of all the building blocks when $X = n$ is just the output of the $(\gamma_n, e_n)$-block, which is
$$\gamma_n \cdot n(e_n - n) = -\gamma_n n^2 + \gamma_n e_n n = -(t_n + d) + 2(t_n + d) = t_n + d,$$
as desired.

For $X = k$, $1 \le k < n$, the argument is similar. For each building block $j \ge k$, by Claim 1 we know that $e_j \ge j$, and therefore this block's output at $X = k$ is non-negative and contributes to the sum. On the other hand, for each building block $j < k$, by Claim 1 we know that $e_j \le j+1$, and therefore this block outputs 0 and does not contribute to the sum. Thus the sum of all the building blocks equals the sum over the non-zero regions of the blocks $j \ge k$. Since each of these is a quadratic function of X, the sum can be written as a single quadratic polynomial of the form $a_k X^2 + b_k X$, where $a_k$ and $b_k$ are defined as before.
Plugging in the above expressions for $a_k$ and $b_k$ from Claim 2, we see that the value of this polynomial at $X = k$ is:
$$a_k k^2 + b_k k = -\frac{t_k + d}{k^2}k^2 + \frac{2(t_k + d)}{k}k = -(t_k + d) + 2(t_k + d) = t_k + d.$$

Finally, it remains to ensure that our hardplus RBM network outputs $t_0$ for $X = 0$. Note that the sum of the outputs of all n building blocks plus the output bias is $-d$ at $X = 0$. To correct this, we set the incoming weights and the bias of the zero-neuron according to $w_i = -M$ for each i and $b = t_0 + d$. When $X = 0$, this neuron outputs $t_0 + d$, making the total output of the network $-d + t_0 + d = t_0$, as needed. Furthermore, the addition of the zero-neuron does not affect the output of the network when $X = k > 0$, because the zero-neuron outputs 0 on all such inputs as long as $M \ge t_0 + d$.

This completes the proof of part (i) of the theorem, and it remains to prove part (ii). Observe that the size of the weights grows linearly in M and d, which follows directly from their definitions. Also note that the magnitude of the input to each neuron is lower-bounded by a positive linear function of M and d (a non-trivial fact which we prove below). From these two observations it follows that, to achieve the condition that the magnitude of the input to each neuron is greater than C(n) for some function C of n, the weights need only grow linearly with C. Noting that the error-bound condition $\epsilon \le (n^2 + 1)\exp(-C)$ in Lemma 2 can be rewritten as $C \le \log(n^2 + 1) + \log(1/\epsilon)$, part (ii) of the theorem follows.

There are two cases where a hardplus neuron in building block j has a negative input. Either the input is $\gamma_j(e_j - X) - M$, or it is $\gamma_j(e_j - X)$ for $X \ge j+1$. In the first case it is clear that as M grows the net input becomes more negative, since $e_j$ does not depend on M at all.
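The part (i) construction can be checked end to end numerically. The following sketch (ours, with random targets $t_k$; all variable names are our own) assembles the n building blocks, the zero-neuron, and the output bias, and verifies that the network outputs $t_k$ whenever $X = k$:

```python
import random

def hard(y):
    return max(0.0, y)

random.seed(0)
n = 6
t = [random.uniform(-3, 3) for _ in range(n + 1)]      # targets t_0 .. t_n
d = 2 * n * n * sum(abs(v) for v in t)                 # d >= 2 n^2 sum |t_j|

gamma = [None] * (n + 1)
e = [None] * (n + 1)
gamma[n] = (t[n] + d) / n ** 2
e[n] = 2 * n
for j in range(1, n):
    gamma[j] = (t[j] + d) / j ** 2 - (t[j + 1] + d) / (j + 1) ** 2
    e[j] = (2 / gamma[j]) * ((t[j] + d) / j - (t[j + 1] + d) / (j + 1))
M = max(t[0] + d, max(gamma[j] * e[j] for j in range(1, n + 1))) + 1.0

def block(x, g, ee):
    """One building block: n hardplus neurons with parameters (g, ee)."""
    return sum(hard(sum((M - g if i == k else -g) * x[i] for i in range(n))
                    + g * ee - M)
               for k in range(n))

def net(x):
    X = sum(x)
    zero_neuron = hard(-M * X + t[0] + d)              # handles X = 0
    return (sum(block(x, gamma[j], e[j]) for j in range(1, n + 1))
            + zero_neuron - d)                         # output bias B = -d

# The network reproduces every target t_k exactly (up to float rounding).
for k in range(n + 1):
    x = [1] * k + [0] * (n - k)
    assert abs(net(x) - t[k]) < 1e-6
```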
Then for any X ≥j + 1 and j ≤n −1 we have: γj(ej −X) ≤γj(ej −(j + 1)) = γj 2(j + 1)aj+1 −jaj γj −(j + 1) = 2(j + 1)aj+1 −2jaj −(j + 1)γj = 2(j + 1)aj+1 −2jaj −(j + 1)(aj+1 −aj) = (j + 1)aj+1 −2jaj + (j + 1)aj = −(d −tj+1) j + 1 + 2(d + tj+1) j −(j + 1)d + tj+1 j2 = −j2(d + tj+1) + 2j(j + 1)(d + tj) −(j + 1)2(d + tj) j2(j + 1) = −(j2 −2j(j + 1) + (j + 1)2)d −j2tj + 2j(j + 1)tj j2(j + 1) = −(j −(j + 1))2d −j2tj + 2j(j + 1)tj j2(j + 1) = −d −j2tj + 2j(j + 1)tj j2(j + 1) = −d j2(j + 1) + −j2tj + 2j(j + 1)tj j2(j + 1) So we see that as d increases, this bound guarantees that γj(ej −X) becomes more negative for each X ≥j + 1. Also note that for the special zero-neuron, for X ≥1 the net input will be −MX + t0 + d ≤−M + t0 + d, which will shrink as M grows. For neurons belonging to building block j which have a positive valued input, we have that X < ej. Note that for any X ≤j and j < n we have: γj(ej −X) ≥γj(ej −j) = γj 2(j + 1)aj+1 −jaj γj −j = 2(j + 1)aj+1 −2jaj −jγj = 2(j + 1)aj+1 −2jaj −j(aj+1 −aj) = 2(j + 1)aj+1 −jaj −jaj+1 = 2−(d + tj+1) j + 1 + (d + tj) j + j (d + tj+1) (j + 1)2 = −2j(j + 1)(d + tj+1) + (j + 1)2(d + tj) + j2(d + tj+1) j(j + 1)2 = ((j + 1)2 −2j(j + 1) + j2)d + (j + 1)2tj −2j(j + 1)tj+1 + j2tj+1 j(j + 1)2 = (j + 1 −j)2d + (j + 1)2tj −2j(j + 1)tj+1 + j2tj+1 j(j + 1)2 = d + (j + 1)2tj −2j(j + 1)tj+1 + j2tj+1 j(j + 1)2 = d j(j + 1)2 + (j + 1)2tj −2j(j + 1)tj+1 + j2tj+1 j(j + 1)2 And for the case j = n, we have for X ≤j that: γj(ej −X) ≥γj(ej −j) = d + tn n2 (2n −n) = d n + tn n 17 So in all cases we see that as d increases, this bound guarantees that γj(ej −X) grows linearly. Also note that for the special zero-neuron, the net input will be t0 + d for X = 0, which will grow linearly as d increases. A.6 Proofs for Section 4 A.6.1 Proof of Theorem 8 We first state some basic facts which we need. Fact 14 (Muroga (1971)). Let f : {0, 1}n →{0, 1} be a Boolean function computed by a threshold neuron with arbitrary real incoming weights and bias. 
There exists a constant K and another threshold neuron computing f, all of whose incoming weights and bias are integers with magnitude at most 2^{Kn log n}.

A direct consequence of the above fact is the following fact, by now folklore, whose simple proof we present for the sake of completeness.

Fact 15. Let F_n be the set of all Boolean functions on {0, 1}^n, and let F_{s,n} be the subset of such Boolean functions that are computable by threshold networks with one hidden layer with at most s neurons. Then there exists a constant K such that |F_{s,n}| ≤ 2^{K(n^2 s log n + s^2 log s)}.

Proof. Let s be the number of hidden neurons in our threshold network. By using Fact 14 repeatedly for each of the hidden neurons, we obtain another threshold network, still with s hidden units, computing the same Boolean function, such that the incoming weights and biases of all hidden neurons are bounded by 2^{Kn log n}. Finally, applying Fact 14 to the output neuron, we convert it to a threshold gate with parameters bounded by 2^{Ks log s}. Henceforth, we count only the total number of Boolean functions that can be computed by such threshold networks with integer weights. We do this by establishing a simple upper bound on the total number of distinct such networks. Clearly, there are at most 2^{Kn^2 log n} ways to choose the incoming weights of a given neuron in the hidden layer. There are s incoming weights to choose for the output threshold, each of which is an integer of magnitude at most 2^{Ks log s}. Combining these observations, there are at most 2^{Ksn^2 log n} × 2^{Ks^2 log s} distinct networks. Hence, the total number of distinct Boolean functions that can be computed is at most 2^{K(n^2 s log n + s^2 log s)}.

With these basic facts in hand, we prove below Theorem 8 using Proposition 5 and Theorem 6.

Proof of Theorem 8. Consider any thresholded RBM network with m hidden units that computes an n-dimensional Boolean function with margin δ.
Using Proposition 5, we can obtain a thresholded hardplus RBM network of size 4m^2/δ · log(2m/δ) + m that computes the same Boolean function as the thresholded original RBM network. Applying Theorem 6 and thresholding the output, we obtain a thresholded network with 1 hidden layer of thresholds which is the same size and computes the same Boolean function. This argument shows that the set of Boolean functions computed by thresholded RBM networks of m hidden units and margin δ is a subset of the Boolean functions computed by 1-hidden-layer threshold networks of size 4m^2 n/δ · log(2m/δ) + mn. Hence, invoking Fact 15 establishes our theorem.

A.6.2 Proof of Theorem 9

Note that the theorems from Hajnal et al. (1993) assume integer weights, but this hypothesis can easily be removed from their Theorem 3.6. In particular, Theorem 3.6 assumes nothing about the lower weights, and as we will see, the integrality assumption on the top level weights can easily be replaced with a margin condition. First note that their Lemma 3.3 only uses the integrality of the upper weights to establish that the margin must be ≥ 1. Otherwise it is easy to see that with a margin δ, Lemma 3.3 implies that a threshold neuron in a thresholded network of size m is a (2δ/α)-discriminator (α is the sum of the absolute values of the 2nd-level weights in their notation). Then Theorem 3.6's proof gives m ≥ δ·2^{(1/3−ϵ)n} for sufficiently large n (instead of just m ≥ 2^{(1/3−ϵ)n}). A more precise bound that they implicitly prove in Theorem 3.6 is m ≥ 6δ·2^{n/3}/C. Thus we have the following fact adapted from Hajnal et al. (1993):

Fact 16. For a neural network of size m with a single hidden layer of threshold neurons and weights bounded by C that computes a function that represents IP with margin δ, we have m ≥ 6δ·2^{n/3}/C.

Proof of Theorem 9. By Proposition 5 it suffices to show that no thresholded hardplus RBM network of size ≤ 4m^2 log(2m/δ)/δ + m with parameters bounded by C can compute IP with margin δ/2.
Suppose by contradiction that such a thresholded RBM network exists. Then by Theorem 6 there exists a single hidden layer threshold network of size ≤ 4m^2 n log(2m/δ)/δ + mn with weights bounded in magnitude by (n + 1)C that computes the same function, i.e. one which represents IP with margin δ/2. Applying the above fact we have 4m^2 n log(2m/δ)/δ + mn ≥ 3δ·2^{n/3}/((n + 1)C). It is simple to check that this bound is violated if m is bounded as in the statement of this theorem.

A.6.3 Proof of Theorem 10

We prove a more general result here, from which we easily derive Theorem 10 as a special case. To state this general result, we introduce some simple notions. Let h : R → R be an activation function. We say h is monotone if it satisfies the following: either h(x) ≤ h(y) for all x < y, OR h(x) ≥ h(y) for all x < y. Let ℓ : {0, 1}^n → R be an inner function. An (h, ℓ) gate/neuron G_{h,ℓ} is just one that is obtained by composing h and ℓ in the natural way, i.e. G_{h,ℓ}(x) = h(ℓ(x)). We write ∥(h, ℓ)∥_∞ = max_{x ∈ {0,1}^n} |G_{h,ℓ}(x)|.

We assume for the discussion here that the number of input variables (or observables) is even and is divided into two halves, called x and y, each being a Boolean string of n bits. In this language, the inner product Boolean function, denoted by IP(x, y), is just defined as x_1 y_1 + · · · + x_n y_n (mod 2). We call an inner function of a neuron/gate (x, y)-separable if it can be expressed as g(x) + f(y). For instance, all affine inner functions are (x, y)-separable. Finally, given a set of activation functions H and a set of inner functions I, an (H, I)-network is one each of whose hidden units is a neuron of the form G_{h,ℓ} for some h ∈ H and ℓ ∈ I. Let ∥(H, I)∥_∞ = sup{∥(h, ℓ)∥_∞ : h ∈ H, ℓ ∈ I}.

Theorem 17. Let H be any set of monotone activation functions and I be a set of (x, y)-separable inner functions. Then every (H, I) network with one layer of m hidden units computing IP with a margin of δ must satisfy the following: m ≥ (δ / (2∥(H, I)∥_∞)) · 2^{n/4}.
In order to prove Theorem 17, it would be convenient to consider the following 1/−1 valued function: (−1)^{IP(x,y)} = (−1)^{x_1 y_1 + ··· + x_n y_n}. Note that when IP evaluates to 0, (−1)^{IP} evaluates to 1, and when IP evaluates to 1, (−1)^{IP} evaluates to −1. We also consider a matrix M_n with entries in {1, −1} which has 2^n rows and 2^n columns. Each row of M_n is indexed by a unique Boolean string in {0, 1}^n; the columns of the matrix are indexed similarly. The entry M_n[x, y] is just the 1/−1 value of (−1)^{IP(x,y)}. We need the following fact, which is a special case of the classical result of Lindsey.

Lemma 18 (Chor and Goldreich, 1988). The magnitude of the sum of elements in every r × s submatrix of M_n is at most √(r s 2^n).

We use Lemma 18 to prove the following key fact about monotone activation functions:

Lemma 19. Let G_{h,ℓ} be any neuron with a monotone activation function h and an inner function ℓ that is (x, y)-separable. Then,

|E_{x,y}[G_{h,ℓ}(x, y)(−1)^{IP(x,y)}]| ≤ ∥(h, ℓ)∥_∞ · 2^{−Ω(n)}.   (2)

Proof. Let ℓ(x, y) = g(x) + f(y) and let 0 < α < 1 be some constant specified later. Define a total order ≺_g on {0, 1}^n by setting x ≺_g x′ whenever g(x) ≤ g(x′) and x occurs before x′ in the lexicographic ordering. We divide {0, 1}^n into t = 2^{(1−α)n} groups of equal size as follows: the first group contains the first 2^{αn} elements in the order specified by ≺_g, the second group has the next 2^{αn} elements, and so on. The i-th such group is denoted by X_i for i ≤ 2^{(1−α)n}. Likewise, we define the total order ≺_f and use it to define equal sized blocks Y_1, . . . , Y_{2^{(1−α)n}}. The way we estimate the LHS of (2) is to pair points in the block (X_i, Y_j) with (X_{i+1}, Y_{j+1}) in the following manner: wlog assume that the activation function h is non-decreasing. Then, G_{h,ℓ}(x, y) ≤ G_{h,ℓ}(x′, y′) for each (x, y) ∈ (X_i, Y_j) and (x′, y′) ∈ (X_{i+1}, Y_{j+1}).
Further, applying Lemma 18, we will argue that the total number of points in (X_i, Y_j) at which the product in the LHS evaluates negative (positive) is very close to the number of points in (X_{i+1}, Y_{j+1}) at which the product evaluates to positive (negative). Moreover, the composed function (h, ℓ) does not take very large values in our domain by assumption. These observations will be used to show that the points in blocks that are diagonally across each other almost cancel each other's contribution to the LHS. There are too few uncancelled blocks and hence the sum in the LHS will be small. Forthwith the details.

Let P^+_{i,j} = {(x, y) ∈ (X_i, Y_j) | (−1)^{IP(x,y)} = 1} and P^−_{i,j} = {(x, y) ∈ (X_i, Y_j) | (−1)^{IP(x,y)} = −1}. Let t = 2^{(1−α)n}. Let h_{i,j} be the max value that the gate takes on points in (X_i, Y_j). Note that the non-decreasing assumption on h implies that h_{i,j} ≤ h_{i+1,j+1}. Using this observation, we get the following:

E_{x,y}[G_{h,ℓ}(x, y)(−1)^{IP(x,y)}] ≤ (1/4^n) Σ_{(i,j)<t} h_{i,j} (|P^+_{i,j}| − |P^−_{i+1,j+1}|) + (1/4^n) Σ_{i=t OR j=t} h_{i,j} |P_{i,j}|   (3)

We apply Lemma 18 to conclude that ||P^+_{i+1,j+1}| − |P^−_{i,j}|| is at most 2 · 2^{(α+1/2)n}. Thus, we get

RHS of (3) ≤ ∥(h, ℓ)∥_∞ · (2 · 2^{−(α−1/2)n} + 4 · 2^{−(1−α)n}).   (4)

Thus, setting α = 3/4 gives us the bound that the RHS above is arbitrarily close to ∥(h, ℓ)∥_∞ · 2^{−n/4}. Similarly, pairing things slightly differently, we get

E_{x,y}[G_{h,ℓ}(x, y)(−1)^{IP(x,y)}] ≥ (1/4^n) Σ_{(i,j)<t} h_{i+1,j+1} (|P^+_{i+1,j+1}| − |P^−_{i,j}|) − (1/4^n) Σ_{i=t OR j=t} |h_{i,j}| · |P_{i,j}|   (5)

Again, similar conditions and settings of α imply that the RHS of (5) is no smaller than −∥(h, ℓ)∥_∞ · 2^{−n/4}, thus proving our lemma.

We are now ready to prove Theorem 17.

Proof of Theorem 17. Let C be any (H, I) network having m hidden units, G_{h_1,ℓ_1}, . . . , G_{h_m,ℓ_m}, where each h_i ∈ H and each ℓ_i ∈ I is (x, y)-separable. Further, let the output threshold gate be such that whenever the sum is at least b, C outputs 1, and whenever it is at most a, C outputs −1.
Then, let f be the sum total of the functions feeding into the top threshold gate of C, and define t = f − (a + b)/2. Hence,

E_{x,y}[f(x, y)(−1)^{IP(x,y)}] = E_{x,y}[t(x, y)(−1)^{IP(x,y)}] + ((a + b)/2) E_{x,y}[(−1)^{IP(x,y)}] ≥ (b − a)/2 + ((a + b)/2) E_{x,y}[(−1)^{IP(x,y)}].

Thus, it follows easily that

E_{x,y}[f(x, y)(−1)^{IP(x,y)}] ≥ (b − a)/2 − (|a + b|/2) · 2^{−n}.   (6)

On the other hand, by linearity of expectation and applying Lemma 19, we get

E_{x,y}[f(x, y)(−1)^{IP(x,y)}] ≤ Σ_{j=1}^{m} |E_{x,y}[G_{h_j,ℓ_j}(x, y)(−1)^{IP(x,y)}]| ≤ m · ∥(H, I)∥_∞ · 2^{−n/4}.   (7)

Comparing (6) and (7), observing that each of |a| and |b| is at most m∥(H, I)∥_∞, and recalling that δ = (b − a), our desired bound on m follows.

Proof of Theorem 10. The proof follows quite simply by noting that the set of activation functions in this case is just the singleton set containing only the monotone function soft(y) = log(1 + exp(y)). The set of inner functions consists of all affine functions with each coefficient having value at most C. As affine functions are (x, y)-separable, we can apply Theorem 17. We do so by noting that ∥(H, I)∥_∞ ≤ log(1 + exp(nC)) ≤ max{log 2, nC + log 2}. That yields our result.

Remark 20. It is also interesting to note that Theorem 17 appears to be tight in the sense that none of the hypotheses can be removed. That is, for neurons with general non-monotonic activation functions, or for neurons with monotonic activation functions whose output magnitude violates the aforementioned bounds, there are example networks that can efficiently compute any real-valued function. Thus, to improve this result (e.g. removing the weight bounds), it appears one would need to use a stronger property of the particular activation function than monotonicity.
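The matrix M_n and the submatrix-sum bound of Lemma 18 are easy to check numerically for small n. The sketch below (function names are ours, not the paper's) builds M_n with entries (−1)^{IP(x,y)} and verifies the Lindsey bound √(r s 2^n) on randomly chosen row/column subsets.

```python
import itertools
import math
import random

def ip_sign_matrix(n):
    """Build M_n with entries (-1)^{IP(x,y)} for x, y ranging over {0,1}^n."""
    points = list(itertools.product([0, 1], repeat=n))
    return [[(-1) ** (sum(xi * yi for xi, yi in zip(x, y)) % 2)
             for y in points] for x in points]

def submatrix_sum(M, rows, cols):
    return sum(M[i][j] for i in rows for j in cols)

n = 4
M = ip_sign_matrix(n)
N = 2 ** n
random.seed(0)
for _ in range(200):
    r, s = random.randint(1, N), random.randint(1, N)
    rows = random.sample(range(N), r)
    cols = random.sample(range(N), s)
    # Lemma 18 (Lindsey): any r x s submatrix sums to at most sqrt(r * s * 2^n)
    assert abs(submatrix_sum(M, rows, cols)) <= math.sqrt(r * s * N) + 1e-9
```

For n = 4 the check passes on all sampled blocks; only the "diagonal" blocks carrying the all-zeros string come close to the bound, which is the imbalance that the proof of Lemma 19 exploits.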
Robust Spatial Filtering with Beta Divergence

Wojciech Samek1,4  Duncan Blythe1,4  Klaus-Robert Müller1,2  Motoaki Kawanabe3
1Machine Learning Group, Berlin Institute of Technology (TU Berlin), Berlin, Germany
2Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
3ATR Brain Information Communication Research Laboratory Group, Kyoto, Japan
4Bernstein Center for Computational Neuroscience, Berlin, Germany

Abstract

The efficiency of Brain-Computer Interfaces (BCI) largely depends upon a reliable extraction of informative features from the high-dimensional EEG signal. A crucial step in this protocol is the computation of spatial filters. The Common Spatial Patterns (CSP) algorithm computes filters that maximize the difference in band power between two conditions, thus it is tailored to extract the relevant information in motor imagery experiments. However, CSP is highly sensitive to artifacts in the EEG data, i.e. a few outliers may alter the estimate drastically and decrease classification performance. Inspired by concepts from the field of information geometry we propose a novel approach for robustifying CSP. More precisely, we formulate CSP as a divergence maximization problem and utilize the property of a particular type of divergence, namely beta divergence, for robustifying the estimation of spatial filters in the presence of artifacts in the data. We demonstrate the usefulness of our method on toy data and on EEG recordings from 80 subjects.

1 Introduction

Spatial filtering is a crucial step in the reliable decoding of user intention in Brain-Computer Interfacing (BCI) [1, 2]. It reduces the adverse effects of volume conduction and simplifies the classification problem by increasing the signal-to-noise ratio. The Common Spatial Patterns (CSP) [3, 4, 5, 6] method is one of the most widely used algorithms for computing spatial filters in motor imagery experiments.
A spatial filter computed with CSP maximizes the difference in band power between two conditions, thus it aims to enhance detection of the synchronization and desynchronization effects occurring over different locations of the sensorimotor cortex after performing motor imagery. It is well known that CSP may provide poor results when artifacts are present in the data or when the data is non-stationary [7, 8]. Note that artifacts in the data are often unavoidable and cannot always be removed by preprocessing, e.g. with Independent Component Analysis. They may be due to eye movements, muscle movements, loose electrodes, sudden changes of attention, circulation, respiration, or external events, among many other possibilities. A straightforward way to robustify CSP against overfitting is to regularize the filters or the covariance matrix estimation [3, 7, 9, 10, 11]. Several other strategies have been proposed for estimating spatial filters under non-stationarity [12, 8, 13, 14].

In this work we propose a novel approach for robustifying CSP inspired by recent results in the field of information geometry [15, 16]. We show that CSP may be formulated as a divergence maximization problem; in particular, we prove by using Cauchy's interlacing theorem [17] that the spatial filters found by CSP span a subspace with maximum symmetric Kullback-Leibler divergence between the distributions of both classes. In order to robustify the CSP algorithm against the influence of outliers, we propose solving the divergence maximization problem with a particular type of divergence, namely beta divergence. This divergence has been successfully used for robustifying algorithms such as Independent Component Analysis (ICA) [18] and Non-negative Matrix Factorization (NMF) [19].
In order to capture artifacts on a trial-by-trial basis we reformulate the CSP problem as a sum of trial-wise divergences and show that our method downweights the influence of artifactual trials, thus it robustly integrates information from all trials.

The remainder of this paper is organized as follows. Section 2 introduces the divergence-based framework for CSP. Section 3 describes the beta-divergence CSP method and discusses its robustness property. Section 4 evaluates the method on toy data and EEG recordings from 80 subjects and interprets the performance improvement. Section 5 concludes the paper with a discussion. An implementation of our method is available at http://www.divergence-methods.org.

2 Divergence-Based Framework for CSP

Spatial filters computed by the Common Spatial Patterns (CSP) [3, 4, 5] algorithm have been widely used in Brain-Computer Interfacing as they are well suited to discriminate between distinct motor imagery patterns. A CSP spatial filter w maximizes the variance of band-pass filtered EEG signals in one condition while minimizing it in the other condition. Mathematically, the CSP solution can be obtained by solving the generalized eigenvalue problem

Σ_1 w_i = λ_i Σ_2 w_i   (1)

where Σ_1 and Σ_2 are the estimated (average) D × D covariance matrices of class 1 and class 2, respectively. Note that the spatial filters W = [w_1 . . . w_D] can be sorted by importance α_1 = max{λ_1, 1/λ_1} > . . . > α_D = max{λ_D, 1/λ_D}.

2.1 divCSP Algorithm

Information geometry [15] has provided useful frameworks for developing various machine learning (ML) algorithms, e.g. by optimizing divergences between two different probability distributions [20, 21]. In particular, a series of robust ML methods have been successfully obtained from Bregman divergences, which are generalizations of the Kullback-Leibler (KL) divergence [22]. Among them, we employ in this work the beta divergence.
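The generalized eigenvalue problem of Eq. (1), together with the sorting by α_i, can be sketched in a few lines of NumPy via simultaneous diagonalization (a minimal illustration on toy covariances; `csp_filters` is our name, not the paper's):

```python
import numpy as np

def csp_filters(S1, S2):
    """CSP via whitening: solves S1 w = lam * S2 w for symmetric PD S1, S2,
    returning filters sorted by importance alpha = max(lam, 1/lam)."""
    d, U = np.linalg.eigh(S1 + S2)
    P = U @ np.diag(d ** -0.5) @ U.T        # whitening: P (S1+S2) P = I
    mu, R = np.linalg.eigh(P @ S1 @ P)      # mu in (0,1): whitened class-1 power
    W = P @ R                               # columns w_i jointly diagonalize S1, S2
    lam = mu / (1.0 - mu)                   # eigenvalues of S1 w = lam S2 w
    order = np.argsort(np.maximum(lam, 1.0 / lam))[::-1]
    return W[:, order], lam[order]

# Toy covariances: dimension 0 has high variance in class 1 and low in class 2.
S1 = np.diag([4.0, 1.0, 1.0])
S2 = np.diag([0.25, 1.0, 1.0])
W, lam = csp_filters(S1, S2)
assert np.isclose(lam[0], 16.0)   # most discriminative filter: lam = 4 / 0.25
```

By construction W⊤(Σ_1 + Σ_2)W = I, so W⊤Σ_1W and W⊤Σ_2W are simultaneously diagonal, which is the property the divergence view of the next paragraphs builds on.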
Before proposing our novel algorithm, we show that CSP can also be interpreted as maximization of the symmetric KL divergence.

Theorem 1: Let W = [w_1 . . . w_d] be the d top (sorted by α_i) spatial filters computed by CSP and let Σ_1 and Σ_2 denote the covariance matrices of class 1 and 2. Let V⊤ = ˜R P be a d × D dimensional matrix that can be decomposed into a whitening projection P ∈ R^{D×D} (with P(Σ_1 + Σ_2)P⊤ = I) and an orthogonal projection ˜R ∈ R^{d×D}. Then

span(W) = span(V*)   (2)

with

V* = argmax_V ˜D_kl(V⊤Σ_1V || V⊤Σ_2V)   (3)

where ˜D_kl(· || ·) denotes the symmetric Kullback-Leibler divergence¹ between zero mean Gaussians and span(M) stands for the subspace spanned by the columns of matrix M. Note that [23] has provided a proof for the special case of one spatial filter, i.e. for V ∈ R^{D×1}.

Proof: See appendix and supplement material.

The objective function that is maximized in Eq. (3) can be written as

L_kl(V) = (1/2) tr((V⊤Σ_1V)^{−1}(V⊤Σ_2V)) + (1/2) tr((V⊤Σ_2V)^{−1}(V⊤Σ_1V)) − d.   (4)

In order to cater for artifacts on a trial-by-trial basis we need to reformulate the above objective function. Instead of maximizing the divergence between the average class distributions we propose to optimize the sum of trial-wise divergences

L_sumkl(V) = Σ_{i=1}^{N} ˜D_kl(V⊤Σ_1^i V || V⊤Σ_2^i V),   (5)

where Σ_1^i and Σ_2^i denote the covariance matrices estimated from the i-th trial of class 1 and class 2, respectively, and N is the number of trials per class. Note that the reformulated problem is not equivalent to CSP; in Eq. (4) averaging is performed w.r.t. the covariance matrices, whereas in Eq. (5) it is performed w.r.t. the divergences. We denote the former approach by kl-divCSP and the latter one by sumkl-divCSP. The following theorem relates both approaches in the asymptotic case.

¹The symmetric Kullback-Leibler divergence between distributions f(x) and g(x) is defined as ˜D_kl(f(x) || g(x)) = ∫ f(x) · log(f(x)/g(x)) dx + ∫ g(x) · log(g(x)/f(x)) dx.
Theorem 2: Suppose that the number of discriminative sources is one; then let c be such that D/n → c as D, n → ∞ (D dimensions, n data points per trial). Then if there exists γ(c) with N/D → γ(c) for N → ∞ (N the number of trials), then the empirical maximizer of L_sumkl(v) (and of course also of L_kl(v)) converges almost surely to the true solution.

Sketched Proof: See appendix.

Thus Theorem 2 says that both divergence-based CSP variants kl-divCSP and sumkl-divCSP almost surely converge to the same (true) solution in the asymptotic case. The theorem can easily be extended to multiple discriminative sources.

2.2 Optimization Framework

We use the methods developed in [24], [25] and [26] for solving the maximization problems in Eq. (4) and Eq. (5). The projection V ∈ R^{D×d} to the d-dimensional subspace can be decomposed into three parts, namely V⊤ = I_d R P, where I_d is an identity matrix truncated to the first d rows, R is a rotation matrix with RR⊤ = I, and P is a whitening matrix. The optimization process consists of finding the rotation R that maximizes our objective function and can be performed by gradient descent on the manifold of orthogonal matrices. More precisely, we start with an orthogonal matrix R_0 and find an orthogonal update U in the k-th step such that R_{k+1} = U R_k. The update matrix is chosen by identifying the direction of steepest descent in the set of orthogonal transformations and then performing a line search along this direction to find the optimal step. Since the basis of the extracted subspace is arbitrary (one can right multiply a rotation matrix to V without changing the divergence), we select the principal axes of the data distribution of one class (after projection) as basis in order to maximally separate the two classes. The optimization process is summarized in Algorithm 1 and explained in the supplement material of the paper.
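An update of the form R_{k+1} = U R_k can be sketched with a generic retraction onto the orthogonal group. Below we use a Cayley transform of the skew-symmetric part of a hypothetical Euclidean gradient; the paper's exact gradient expression and line search live in its supplement, so this is only an illustration of the manifold step, not the authors' implementation.

```python
import numpy as np

def rotation_step(R, euclid_grad, step):
    """One ascent step on the manifold of orthogonal matrices: R_{k+1} = U R_k.
    The search direction is the skew-symmetric part of G R^T (G: Euclidean
    gradient of the objective w.r.t. R), mapped to an orthogonal update U
    by a Cayley transform."""
    n = R.shape[0]
    S = euclid_grad @ R.T
    A = 0.5 * step * (S - S.T)                          # skew-symmetric direction
    U = np.linalg.solve(np.eye(n) - A, np.eye(n) + A)   # Cayley: U is orthogonal
    return U @ R

rng = np.random.default_rng(0)
R0 = np.linalg.qr(rng.standard_normal((4, 4)))[0]       # random initial rotation
R1 = rotation_step(R0, rng.standard_normal((4, 4)), 0.1)
assert np.allclose(R1 @ R1.T, np.eye(4))                # iterate stays on the manifold
```

Because A is skew-symmetric, (I − A)^{−1}(I + A) is exactly orthogonal, so the rotation constraint RR⊤ = I is maintained at every iteration without re-orthogonalization.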
Algorithm 1 Divergence-based Framework for CSP
1: function DIVCSP(Σ_1, Σ_2, d)
2:   Compute the whitening matrix P = (Σ_1 + Σ_2)^{−1/2}
3:   Initialise R_0 with a random rotation matrix
4:   Whiten and rotate the data Σ_c ← (R_0 P) Σ_c (R_0 P)⊤ with c = {1, 2}
5:   repeat
6:     Compute the gradient matrix and determine the step size (see supplement material)
7:     Update the rotation matrix R_{k+1} = U R_k
8:     Apply the rotation to the data Σ_c ← U Σ_c U⊤
9:   until convergence
10:  Let V⊤ = I_d R_{k+1} P
11:  Rotate V by G ∈ R^{d×d} where G are the eigenvectors of V⊤Σ_1V
12:  return V
13: end function

3 Beta Divergence CSP

Robustness is a desirable property of algorithms that work in data setups which are known to be contaminated by outliers. For example, in the biomedical fields, signals such as EEG may be highly affected by artifacts, i.e. outliers, which may drastically influence statistical estimation. Note that both of the above approaches, kl-divCSP and sumkl-divCSP, are not robust w.r.t. artifacts, as they both perform simple (non-robust) averaging of the covariance matrices and of the divergence terms, respectively. In this section we show that by using beta divergence we robustify the averaging of the divergence terms, as beta divergence downweights the influence of outlier trials.

Beta divergence was proposed in [16, 27] and is defined (for β > 0) as

D_β(f(x) || g(x)) = (1/β) ∫ (f^β(x) − g^β(x)) f(x) dx − (1/(β + 1)) ∫ (f^{β+1}(x) − g^{β+1}(x)) dx,   (6)

where f(x) and g(x) are two probability distributions. Like every statistical divergence it is always positive and equals zero iff g = f [15]. The symmetric version of beta divergence ˜D_β(f(x) || g(x)) = D_β(f(x) || g(x)) + D_β(g(x) || f(x)) can be interpreted as a discrepancy between two probability distributions. One can show easily that beta and Kullback-Leibler divergence coincide as β → 0.
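The limit β → 0 mentioned above can be verified numerically for one-dimensional Gaussians by evaluating Eq. (6) on a grid (a rough sketch; the grid, variances, and tolerances are our arbitrary choices):

```python
import numpy as np

def gauss(x, s2):
    """Density of N(0, s2) on a grid x."""
    return np.exp(-0.5 * x ** 2 / s2) / np.sqrt(2 * np.pi * s2)

x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]
f, g = gauss(x, 1.0), gauss(x, 2.0)

def beta_div(f, g, beta):
    # Eq. (6): D_beta(f||g) = (1/b) int (f^b - g^b) f dx - (1/(b+1)) int (f^{b+1} - g^{b+1}) dx
    t1 = np.sum((f ** beta - g ** beta) * f) * dx / beta
    t2 = np.sum(f ** (beta + 1) - g ** (beta + 1)) * dx / (beta + 1)
    return t1 - t2

kl = np.sum(f * np.log(f / g)) * dx        # KL(f || g) on the same grid
assert abs(kl - 0.5 * (0.5 - 1 + np.log(2))) < 1e-6   # closed form for N(0,1) vs N(0,2)
assert abs(beta_div(f, g, 1e-4) - kl) < 1e-2          # D_beta -> KL as beta -> 0
```

As β → 0 the first integrand tends to f·log(f/g) and the second integral tends to ∫(f − g)dx = 0, which is the KL limit; for larger β the g-dependent terms are damped wherever f is small, which is the source of the robustness discussed next.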
In the context of parameter estimation, one can show that minimizing the divergence from an empirical distribution p to the statistical model q(φ) is equivalent to maximizing the Ψ-likelihood ¯L_Ψβ(φ):

argmin_{q(φ)} D_β(p || q(φ)) = argmax_{q(φ)} ¯L_Ψβ(q(φ))   (7)

with

¯L_Ψβ(q(φ)) = (1/n) Σ_{i=1}^{n} Ψ_β(ℓ(x_i, q(φ))) − b_Ψβ(φ)  and  Ψ_β(z) = (exp(βz) − 1)/β,   (8)

where ℓ(x_i, q(φ)) denotes the log-likelihood of observation x_i under distribution q(φ), and b_Ψβ(φ) := (β + 1)^{−1} ∫ q(φ)^{β+1} dx. Basu et al. [27] showed that the Ψ-likelihood method weights each observation according to the magnitude of the likelihood evaluated at the observation; if an observation is an outlier, i.e. of lower likelihood, then it is downweighted. Thus, beta divergence allows one to construct robust estimators, as samples with low likelihood are downweighted (see also M-estimators [28]).

β-divCSP Algorithm

We propose applying beta divergence to the objective function in Eq. (5) in order to downweight the influence of artifacts in the computation of spatial filters. An overview of the different divergence-based CSP variants is provided in Figure 1. The objective function of our β-divCSP approach is

L_β(V) = Σ_i ˜D_β(V⊤Σ_1^i V || V⊤Σ_2^i V)   (9)
       = (1/β) Σ_i [ ∫ g_i^{β+1} dx + ∫ f_i^{β+1} dx − ∫ f_i^β g_i dx − ∫ f_i g_i^β dx ],   (10)

with f_i = N(0, ¯Σ_1^i) and g_i = N(0, ¯Σ_2^i) being the zero-mean Gaussian distributions with projected covariances ¯Σ_1^i = V⊤Σ_1^i V ∈ R^{d×d} and ¯Σ_2^i = V⊤Σ_2^i V ∈ R^{d×d}, respectively. One can show easily (see the supplement file to this paper) that L_β(V) has the explicit form

γ Σ_i [ |¯Σ_1^i|^{−β/2} + |¯Σ_2^i|^{−β/2} − (β + 1)^{d/2} ( |¯Σ_2^i|^{(1−β)/2} |β¯Σ_1^i + ¯Σ_2^i|^{−1/2} + |¯Σ_1^i|^{(1−β)/2} |β¯Σ_2^i + ¯Σ_1^i|^{−1/2} ) ],   (11)

with γ = (1/β) · ((2π)^{βd}(β + 1)^d)^{−1/2}. We use Algorithm 1 to maximize the objective function of β-divCSP.

In the following we show that the robustness property of β-divCSP can be directly understood from inspection of its objective function. Assume ¯Σ_1^i and ¯Σ_2^i are full rank d × d covariance matrices.
We investigate the behaviour of the objective functions of β-divCSP and sumkl-divCSP when ¯Σ_1^i is constant and ¯Σ_2^i becomes very large, e.g. because it is affected by artifacts. It is not hard to see that for β > 0 the objective function L_β does not go to infinity but approaches a constant as ¯Σ_2^i becomes arbitrarily large. The first term of the objective function, |¯Σ_1^i|^{−β/2}, is constant with respect to changes of ¯Σ_2^i, and all the other terms go to zero as ¯Σ_2^i increases. Thus the influence function of the β-divCSP estimator is bounded w.r.t. changes in ¯Σ_2^i (the same argument holds for changes of ¯Σ_1^i). Note that this robustness property vanishes when applying the Kullback-Leibler divergences of Eq. (4), as the trace term tr((¯Σ_1^i)^{−1}¯Σ_2^i) is not bounded when ¯Σ_2^i becomes arbitrarily large; this artifactual term will then dominate the solution.

Figure 1: Relation between the different CSP formulations outlined in this paper.

4 Experimental Evaluation

4.1 Simulations

In order to investigate the effects of artifactual trials on CSP and β-divCSP we generate data x(t) using the following mixture model

x(t) = A [s_dis(t); s_ndis(t)] + ϵ,   (12)

where A ∈ R^{10×10} is a random orthogonal mixing matrix, s_dis is a discriminative source sampled from a zero mean Gaussian with variance 1.8 in one condition and 0.2 in the other, s_ndis are 9 sources with variance 1 in both conditions, and ϵ is a noise variable with variance 2. We generate 100 trials per condition, each consisting of 200 data points. Furthermore, we randomly add artifacts with variance 10 independently to each data dimension (i.e. virtual electrode) and trial with varying probability, and evaluate the angle between the true filter extracting the source activity of s_dis and the spatial filter computed by CSP and β-divCSP. The median angles over 100 repetitions are shown in Figure 2.
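The trial generation of Eq. (12) can be sketched as follows (a rough NumPy sketch under our reading of the setup; `make_trial` and the exact artifact mechanism are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
D, T = 10, 200                                     # 10 virtual electrodes, 200 samples per trial
A = np.linalg.qr(rng.standard_normal((D, D)))[0]   # random orthogonal mixing matrix

def make_trial(condition, p_artifact=0.01):
    """One trial of Eq. (12): x(t) = A [s_dis; s_ndis] + eps, plus optional artifacts."""
    var_dis = 1.8 if condition == 1 else 0.2       # discriminative source variance
    s = rng.standard_normal((D, T))                # sources 2..10 keep variance 1
    s[0] *= np.sqrt(var_dis)
    x = A @ s + np.sqrt(2.0) * rng.standard_normal((D, T))   # noise with variance 2
    mask = rng.random(D) < p_artifact              # artifact per virtual electrode
    x[mask] += np.sqrt(10.0) * rng.standard_normal((mask.sum(), T))
    return x

X = make_trial(1)
assert X.shape == (10, 200)
```

Feeding such trials into a CSP implementation and measuring the angle between the recovered filter and the row of A^{−1} belonging to s_dis reproduces the kind of comparison reported for Figure 2.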
One can clearly see that the angle error between the spatial filter extracted by CSP and the true one increases with larger artifact probability. Furthermore, one can see from the figure that using very small β values does not attenuate the artifact problem; rather, it increases the error by adding up trial-wise divergences without downweighting outliers. However, as the β value increases, the artifactual trials are downweighted and a robust average is computed over the trial-wise divergence terms. This increased robustness significantly reduces the angle error.

Figure 2: Angle between the true spatial filter and the filter computed by CSP and β-divCSP for different probabilities of artifacts (0, 0.001, 0.005, 0.01, 0.02, 0.05) and β values between 0.001 and 2. The robustness of our approach increases with the β value and significantly outperforms the CSP solution.

4.2 Data Sets and Experimental Setup

The data set [29] used for the evaluation contains EEG recordings from 80 healthy BCI-inexperienced volunteers performing motor imagery tasks with the left and right hand or feet. The subjects performed motor imagery first in a calibration session and then in a feedback mode in which they were required to control a 1D cursor application.
Activity was recorded from the scalp with multi-channel EEG amplifiers using 119 Ag/AgCl electrodes in an extended 10-20 system, sampled at 1000 Hz (downsampled to 100 Hz) with a band-pass from 0.05 to 200 Hz. Three runs with 25 trials of each motor condition were recorded in the calibration session and the two best classes were selected; the subjects performed feedback with three runs of 100 trials. Both sessions were recorded on the same day.

For the offline analysis we manually select 62 electrodes densely covering the motor cortex, extract a time segment located from 750 ms to 3500 ms after the cue indicating the motor imagery class, and filter the signal in 8-30 Hz using a 5th-order Butterworth filter. We do not apply manual or automatic rejection of trials or electrodes and use six spatial filters for feature extraction. For classification we apply Linear Discriminant Analysis (LDA) after computing the logarithm of the variance on the spatially filtered data. We measure performance as the misclassification rate and normalize the covariance matrices by dividing them by their traces. The parameter β is selected from the set of 15 candidates {0, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.5, 0.75, 1, 1.5, 2, 5} by 5-fold cross-validation on the calibration data using minimal training error rate as selection criterion. For faster convergence we use the rotation part of the CSP solution as the initial rotation matrix R_0.

4.3 Results

We compare our β-divCSP method with three CSP baselines using different estimators for the covariance matrices. The first baseline uses the standard empirical estimator, the second one applies a standard analytic shrinkage estimator [9], and the third one relies on the minimum covariance determinant (MCDE) estimate [30]. Note that the shrinkage estimator usually provides better estimates in small-sample settings, whereas MCDE is robust to outliers. In order to perform a fair comparison we applied MCDE over various ranges [0, 0.05, 0.1, . . .
0.5] of parameters and selected the best one by cross-validation (as with β-divCSP). The MCDE parameter determines the expected proportion of artifacts in the data.

The results are shown in Figure 3. Each circle denotes the error rate of one subject. One can see that the β-divCSP method outperforms the baselines, as most circles are below the solid line. Furthermore, the performance increases are significant according to the one-sided Wilcoxon sign rank test, as the p-values are smaller than 0.05.

Figure 3: Performance results of the CSP (p = 0.0005), shrinkage + CSP (p = 0.0178) and MCDE + CSP (p = 0.0407) baselines compared to β-divCSP. Each circle represents the error rate of one subject. Our method outperforms the baselines for circles that are below the solid line. The p-values of the one-sided Wilcoxon sign rank test are shown in the lower right corner.

We made an interesting observation when analysing the subject with the largest improvement over the CSP baseline; the error rates were 48.6% (CSP), 48.6% (MCDE+CSP) and 11.0% (β-divCSP). Over all ranges of MCDE parameters this subject had an error rate higher than 48%, i.e. MCDE was not able to help in this case. This example shows that β-divCSP and MCDE are not equivalent. Enforcing robustness on the CSP algorithm may in some cases be better than enforcing robustness when estimating the covariance matrices.

In the following we study the robustness property of the β-divCSP method on subject 74, the user with the largest improvement (CSP error rate: 48.6%, β-divCSP error rate: 11.0%). The left panel of Figure 4 displays the activity pattern associated with the most important CSP filter of subject 74.
One can clearly see that the pattern does not encode neurophysiologically relevant activity but focuses on a single electrode, namely FFC6. When analysing the (filtered) EEG signal of this electrode, one can identify a strong artifact in one of the trials. Since neither the empirical covariance estimator nor the CSP algorithm is robust to this kind of outlier, it dominates the solution. The resulting pattern, however, is meaningless, as it does not capture motor-imagery-related activity. The right panel of Figure 4 shows the relative importance of the divergence term of the artifactual trial with respect to the average divergence term of the other trials. The divergence term computed from the artifactual trial is over 1800 times larger than the average of the other trials. This ratio decreases rapidly for larger β values, and with it the influence of the artifact. Our experiments thus provide a clear example of the robustness property of the β-divCSP approach.

[Figure 4: Left: The CSP pattern of subject 74 does not reflect neurophysiological activity but represents the artifact (red ellipse) in electrode FFC6. Right: The relative importance of this artifactual trial, measured as the quotient between the divergence term of the artifactual trial and the average divergence term of the other trials, decreases with the β parameter.]

5 Discussion

Analysis of EEG data is challenging because the signal of interest is typically present at a low signal-to-noise ratio; moreover, artifacts and non-stationarity call for robust algorithms. This paper has focused on robust estimation and proposed a novel algorithm family, giving rise to a beta-divergence algorithm that allows robust spatial filter computation for BCI.
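As a concrete reference for the pipeline evaluated above (trace-normalized trial covariances, CSP spatial filters, log-variance features, LDA), here is a minimal sketch; the code and the synthetic data are illustrative assumptions, not the authors' implementation:

```python
# Minimal CSP + log-variance + LDA sketch (illustrative, with synthetic data).
import numpy as np
from scipy.linalg import eigh

def trial_covariance(trial):
    """Trace-normalized spatial covariance of one trial (channels x samples)."""
    c = trial @ trial.T
    return c / np.trace(c)

def csp_filters(trials_a, trials_b, n_filters=6):
    """Solve the generalized eigenproblem Sigma_a w = lambda (Sigma_a + Sigma_b) w;
    the eigenvectors with extreme eigenvalues are the most discriminative filters."""
    sig_a = np.mean([trial_covariance(t) for t in trials_a], axis=0)
    sig_b = np.mean([trial_covariance(t) for t in trials_b], axis=0)
    evals, evecs = eigh(sig_a, sig_a + sig_b)     # eigenvalues ascending
    order = np.argsort(evals)
    half = n_filters // 2
    picked = np.concatenate([order[:half], order[-half:]])
    return evecs[:, picked].T                     # (n_filters, channels)

def log_variance_features(W, trials):
    return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])

# Synthetic two-class data: classes differ in band power of one latent source.
rng = np.random.default_rng(0)
mix = rng.normal(size=(8, 8))                     # fixed mixing matrix
def make_trials(scale, n=30, T=200):
    out = []
    for _ in range(n):
        s = rng.normal(size=(8, T))
        s[0] *= scale                             # class-dependent source power
        out.append(mix @ s)
    return out

train_a, train_b = make_trials(3.0), make_trials(1.0)
W = csp_filters(train_a, train_b)
Xa = log_variance_features(W, make_trials(3.0))   # fresh test trials
Xb = log_variance_features(W, make_trials(1.0))

# Two-class LDA on the log-variance features.
mu_a, mu_b = Xa.mean(0), Xb.mean(0)
Sw = np.cov(Xa.T) + np.cov(Xb.T)
w = np.linalg.solve(Sw, mu_a - mu_b)
b = w @ (mu_a + mu_b) / 2
acc = np.mean(np.concatenate([Xa @ w - b > 0, Xb @ w - b < 0]))
```

On this toy problem the CSP filters isolate the discriminative source and the LDA classifier separates the two classes almost perfectly.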
In the very common setting where EEG electrodes become loose or movement-related artifacts occur in some trials, it is a practical necessity either to ignore these trials (which further reduces an already small sample size) or to build intrinsic invariance to these disturbances into the learning procedure. Here we have used CSP, the standard filtering technique in BCI, as a starting point and reformulated it as an optimization problem maximizing the divergence between the class distributions that correspond to two cognitive states. By borrowing the concept of beta divergences, we could adapt the optimization problem and arrive at a robust, CSP-based spatial filter computation. We showed that our novel method can significantly reduce the influence of artifacts in the data and thus makes it possible to robustly extract relevant filters for BCI applications. In future work we will investigate the properties of other divergences for brain-computer interfacing and also consider further applications such as ERP-based BCIs [31], as well as applications beyond the neurosciences.

Acknowledgment

We thank Daniel Bartz and Frank C. Meinecke for valuable discussions. This work was supported by the German Research Foundation (GRK 1589/1), by the Federal Ministry of Education and Research (BMBF) under the project Adaptive BCI (FKZ 01GQ1115) and by the Brain Korea 21 Plus Program through the National Research Foundation of Korea funded by the Ministry of Education.

Appendix

Sketch of proof of Theorem 1. Cauchy's interlacing theorem [17] establishes a relation between the eigenvalues µ1 ≤ ... ≤ µD of the original covariance matrix Σ and the eigenvalues ν1 ≤ ... ≤ νd of the projected one VΣV⊤. The theorem says that µj ≤ νj ≤ µD−d+j. In the proof we split the optimal projection V∗ into two parts U1 ∈ R^{k×D} and U2 ∈ R^{(d−k)×D}, based on whether the first or second trace term in Eq. (4) is larger when applying the spatial filters.
By using Cauchy's theorem we then show that Lkl(U) ≤ Lkl(W), where W consists of the k eigenvectors with largest eigenvalues; equality holds only if U and W coincide (up to linear transformations). We show an analogous relation for U2 and conclude that V∗ must be the CSP solution (up to linear transformations). See the full proof in the supplementary material.

Sketch of the proof of Theorem 2. Since there is only one discriminative direction, we may perform the analysis in a basis in which the covariances of the two classes have the form diag(a, 1, . . . , 1) and diag(b, 1, . . . , 1); if consistency holds in this basis, it is then a simple matter to prove consistency in the original basis. We want to show that as the number of trials N increases, the filter provided by sumkl-divCSP converges to the true solution v∗. If the support of the density of the eigenvalues included a region around 0, there would be no hope of showing that the matrix inversion is stable. However, it has been shown in the random matrix theory literature [32] that if D and n tend to ∞ in a fixed ratio c = D/n, then all of the eigenvalues apart from the largest lie between (1 − √c)^2 and (1 + √c)^2, whereas the largest sample eigenvalue converges almost surely to α + cα/(α − 1) provided α > 1 + √c (where α denotes the true non-unit eigenvalue), independently of the distribution of the data; a similar result applies if one true eigenvalue is smaller than the rest. This implies that, for sufficient discriminability in the true distribution and sufficiently many data points per trial, the filter maximizing each term in the sum has non-zero dot product with the true maximizing filter. Since the trials are independent, this implies that in the limit of N trials the maximizing filter corresponds to the true filter. Note that the full proof goes well beyond the scope of this contribution.

References

[1] G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, Eds., Toward Brain-Computer Interfacing.
Cambridge, MA: MIT Press, 2007.
[2] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, 2002.
[3] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Müller, "Optimizing spatial filters for robust EEG single-trial analysis," IEEE Signal Proc. Magazine, vol. 25, no. 1, pp. 41–56, 2008.
[4] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans. Rehab. Eng., vol. 8, no. 4, pp. 441–446, 1998.
[5] L. C. Parra, C. D. Spence, A. D. Gerson, and P. Sajda, "Recipes for the linear analysis of EEG," NeuroImage, vol. 28, pp. 326–341, 2005.
[6] S. Lemm, B. Blankertz, T. Dickhaus, and K.-R. Müller, "Introduction to machine learning for brain imaging," NeuroImage, vol. 56, no. 2, pp. 387–399, 2011.
[7] F. Lotte and C. Guan, "Regularizing common spatial patterns to improve BCI designs: Unified theory and new algorithms," IEEE Trans. Biomed. Eng., vol. 58, no. 2, pp. 355–362, 2011.
[8] W. Samek, C. Vidaurre, K.-R. Müller, and M. Kawanabe, "Stationary common spatial patterns for brain-computer interfacing," Journal of Neural Engineering, vol. 9, no. 2, p. 026013, 2012.
[9] O. Ledoit and M. Wolf, "A well-conditioned estimator for large-dimensional covariance matrices," Journal of Multivariate Analysis, vol. 88, no. 2, pp. 365–411, 2004.
[10] H. Lu, H.-L. Eng, C. Guan, K. Plataniotis, and A. Venetsanopoulos, "Regularized common spatial pattern with aggregation for EEG classification in small-sample setting," IEEE Transactions on Biomedical Engineering, vol. 57, no. 12, pp. 2936–2946, 2010.
[11] D. Devlaminck, B. Wyns, M. Grosse-Wentrup, G. Otte, and P. Santens, "Multi-subject learning for common spatial patterns in motor-imagery BCI," Computational Intelligence and Neuroscience, vol. 2011, no. 217987, pp. 1–9, 2011.
[12] B. Blankertz, M. Kawanabe, R. Tomioka, F. U. Hohlefeld, V. Nikulin, and K.-R. Müller, "Invariant common spatial patterns: Alleviating nonstationarities in brain-computer interfacing," in Adv. in NIPS 20, 2008, pp. 113–120.
[13] W. Samek, F. C. Meinecke, and K.-R. Müller, "Transferring subspaces between subjects in brain-computer interfacing," IEEE Transactions on Biomedical Engineering, vol. 60, no. 8, pp. 2289–2298, 2013.
[14] M. Arvaneh, C. Guan, K. K. Ang, and C. Quek, "Optimizing spatial filters by minimizing within-class dissimilarities in electroencephalogram-based brain-computer interface," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 4, pp. 610–619, 2013.
[15] S. Amari, H. Nagaoka, and D. Harada, Methods of Information Geometry. American Mathematical Society, 2000.
[16] S. Eguchi and Y. Kano, "Robustifying maximum likelihood estimation," Institute of Statistical Mathematics, Tokyo, Japan, Tech. Rep., 2001.
[17] R. Bhatia, Matrix Analysis, ser. Graduate Texts in Mathematics. Springer, 1997, vol. 169.
[18] M. Mihoko and S. Eguchi, "Robust blind source separation by beta divergence," Neural Comput., vol. 14, no. 8, pp. 1859–1886, 2002.
[19] C. Févotte and J. Idier, "Algorithms for nonnegative matrix factorization with the β-divergence," Neural Comput., vol. 23, no. 9, pp. 2421–2456, 2011.
[20] A. Hyvärinen, "Survey on independent component analysis," Neural Computing Surveys, vol. 2, pp. 94–128, 1999.
[21] M. Kawanabe, W. Samek, P. von Bünau, and F. Meinecke, "An information geometrical view of stationary subspace analysis," in Artificial Neural Networks and Machine Learning – ICANN 2011, ser. LNCS. Springer Berlin / Heidelberg, 2011, vol. 6792, pp. 397–404.
[22] N. Murata, T. Takenouchi, and T. Kanamori, "Information geometry of U-Boost and Bregman divergence," Neural Computation, vol. 16, pp. 1437–1481, 2004.
[23] H. Wang, "Harmonic mean of Kullback-Leibler divergences for optimizing multi-class EEG spatio-temporal filters," Neural Processing Letters, vol. 36, no. 2, pp. 161–171, 2012.
[24] P. von Bünau, F. C. Meinecke, F. C. Király, and K.-R. Müller, "Finding stationary subspaces in multivariate time series," Physical Review Letters, vol. 103, no. 21, pp. 214101+, 2009.
[25] P. von Bünau, "Stationary subspace analysis – towards understanding non-stationary data," Ph.D. dissertation, Technische Universität Berlin, 2012.
[26] W. Samek, M. Kawanabe, and K.-R. Müller, "Divergence-based framework for common spatial patterns algorithms," IEEE Reviews in Biomedical Engineering, 2014, in press.
[27] A. Basu, I. R. Harris, N. L. Hjort, and M. C. Jones, "Robust and efficient estimation by minimising a density power divergence," Biometrika, vol. 85, no. 3, pp. 549–559, 1998.
[28] P. J. Huber, Robust Statistics, ser. Wiley Series in Probability and Statistics. Wiley-Interscience, 1981.
[29] B. Blankertz, C. Sannelli, S. Halder, E. M. Hammer, A. Kübler, K.-R. Müller, G. Curio, and T. Dickhaus, "Neurophysiological predictor of SMR-based BCI performance," NeuroImage, vol. 51, no. 4, pp. 1303–1309, 2010.
[30] P. J. Rousseeuw and K. V. Driessen, "A fast algorithm for the minimum covariance determinant estimator," Technometrics, vol. 41, no. 3, pp. 212–223, 1999.
[31] B. Blankertz, S. Lemm, M. S. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components – a tutorial," NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.
[32] J. Baik and J. Silverstein, "Eigenvalues of large sample covariance matrices of spiked population models," Journal of Multivariate Analysis, vol. 97, no. 6, pp. 1382–1408, 2006.
DeViSE: A Deep Visual-Semantic Embedding Model

Andrea Frome*, Greg S. Corrado*, Jonathon Shlens*, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, Tomas Mikolov
* These authors contributed equally.
{afrome, gcorrado, shlens, bengio, jeff, ranzato†, tmikolov}@google.com
Google, Inc., Mountain View, CA, USA

Abstract

Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources, such as text data, both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data and semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions, achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.

1 Introduction

The visual world is populated with a vast number of objects, the most appropriate labeling of which is often ambiguous, task specific, or admits multiple equally correct answers. Yet state-of-the-art vision systems attempt to solve recognition tasks by artificially assigning images to a small number of rigidly defined classes. This has led to building labeled image data sets according to these artificial categories and, in turn, to building visual recognition systems based on N-way discrete classifiers.
While growing the number of labels and labeled images has improved the utility of visual recognition systems [7], scaling such systems beyond a limited number of discrete categories remains an unsolved problem. The problem is exacerbated by the fact that N-way discrete classifiers treat all labels as disconnected and unrelated, resulting in visual recognition systems that cannot transfer semantic information about learned labels to unseen words or phrases. One way of dealing with this issue is to respect the natural continuity of visual space instead of artificially partitioning it into disjoint categories [20]. We propose an approach that addresses these shortcomings by training a visual recognition model with both labeled images and a comparatively large and independent dataset: semantic information from unannotated text data. This deep visual-semantic embedding model (DeViSE) leverages textual data to learn semantic relationships between labels and explicitly maps images into a rich semantic embedding space. We show that this model performs comparably to state-of-the-art visual object classifiers when trained and evaluated on flat 1-of-N metrics, while simultaneously making fewer semantically unreasonable mistakes along the way. Furthermore, we show that the model leverages visual and semantic similarity to correctly predict object category labels for unseen categories, i.e. "zero-shot" classification, even when the number of unseen visual categories is 20,000 for a model trained on just 1,000 categories.

†Current affiliation: Facebook, Inc.

2 Previous Work

The current state-of-the-art approach to image classification is a deep convolutional neural network trained with a softmax output layer (i.e. multinomial logistic regression) that has as many units as the number of classes (see, for instance, [11]).
However, as the number of classes grows, the distinction between classes blurs, and it becomes increasingly difficult to obtain sufficient numbers of training images for rare concepts. One solution to this problem, termed WSABIE [20], is to train a joint embedding model of both images and labels, by employing an online learning-to-rank algorithm. The proposed model contained two sets of parameters: (1) a linear mapping from image features to the joint embedding space, and (2) an embedding vector for each possible label. Compared to the proposed approach, WSABIE only explored linear mappings from image features to the embedding space, and the available labels were only those provided in the image training set. It could thus not generalize to new classes. More recently, Socher et al [18] presented a model for zero-shot learning where a deep neural network was first trained in an unsupervised manner from many images in order to obtain a rich image representation [3]; in parallel, a neural network language model [2] was trained in order to obtain embedding representations for thousands of common terms. The authors trained a linear mapping between the image representations and the word embeddings representing 8 classes for which they had labeled images, thus linking the image representation space to the embedding space. This last step was performed using a mean-squared error criterion. They also trained a simple model to determine if a given image was from any of the 8 original classes or not (i.e., an outlier detector). When the model determined an image to be in the set of 8 classes, a separately trained softmax model was used to perform the 8-way classification; otherwise the model predicted the nearest class in the embedding space (in their setting, only 2 outlier classes were considered). 
Their model differs from our proposed approach in several ways: first and foremost in scale, as our model considers 1,000 known classes for the image model and up to 20,000 unknown classes, instead of 8 and 2 respectively; second, in [18] there is an inherent trade-off between the quality of predictions for trained and outlier classes; third, by using a different visual model, a different language model, and a different training objective, we were able to train a single unified model that uses only embeddings. There has been other recent work showing impressive zero-shot performance on visual recognition tasks [12, 17, 16]; however, all of these rely on a curated source of semantic information for the labels: the WordNet hierarchy is used in [12] and [17], and [16] uses a knowledge base containing descriptive properties for each class. By contrast, our approach learns its semantic representation directly from unannotated data.

3 Proposed Approach

Our objective is to leverage semantic knowledge learned in the text domain and transfer it to a model trained for visual object recognition. We begin by pre-training a simple neural language model well-suited for learning semantically meaningful, dense vector representations of words [13]. In parallel, we pre-train a state-of-the-art deep neural network for visual object recognition [11], complete with a traditional softmax output layer. We then construct a deep visual-semantic model by taking the lower layers of the pre-trained visual object recognition network and re-training them to predict the vector representation of the image label text as learned by the language model. These three training phases are detailed below.

3.1 Language Model Pre-training

The skip-gram text modeling architecture introduced by Mikolov et al. [13, 14] has been shown to efficiently learn semantically meaningful floating point representations of terms from unannotated text.
The model learns to represent each term as a fixed-length embedding vector by predicting adjacent terms in the document (Figure 1a, right). We call these vector representations embedding vectors.

[Figure 1: (a) Left: a visual object categorization network with a softmax output layer; Right: a skip-gram language model; Center: our joint model, which is initialized with parameters pre-trained at the lower layers of the other two models. (b) t-SNE visualization [19] of a subset of the ILSVRC 2012 1K label embeddings learned using skip-gram.]

Because synonyms tend to appear in similar contexts, this simple objective function drives the model to learn similar embedding vectors for semantically related words. We trained a skip-gram text model on a corpus of 5.7 million documents (5.4 billion words) extracted from wikipedia.org. The text of the web pages was tokenized into a lexicon of roughly 155,000 single- and multi-word terms consisting of common English words and phrases, as well as terms from commonly used visual object recognition datasets [7]. Our skip-gram model used a hierarchical softmax layer for predicting adjacent terms and was trained using a 20-word window with a single pass through the corpus. For more details and a pointer to open-source code, see [13]. We trained skip-gram models of varying hidden dimensions, ranging from 100-D to 2,000-D, and found 500- and 1,000-D embeddings to be a good compromise between training speed, semantic quality, and the ultimate performance of the DeViSE model described below.
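As a toy illustration of the skip-gram objective described above, the (source word, nearby word) training pairs within a context window can be generated as follows. This is our own minimal sketch, not the authors' code, and it uses a window of 1 instead of the paper's 20 for readability:

```python
# Generate skip-gram (source, nearby-word) training pairs within a window.
def skipgram_pairs(tokens, window=2):
    pairs = []
    for i, source in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((source, tokens[j]))
    return pairs

toks = "the tiger shark is a requiem shark".split()
pairs = skipgram_pairs(toks, window=1)
# contains e.g. ('tiger', 'the') and ('tiger', 'shark')
```

A model trained to predict the second element of each pair from the first is driven to give words that share contexts similar embedding vectors.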
The semantic quality of the embedding representations learned by these models is impressive.¹ A visualization of the language embedding space over a subset of ImageNet labels indicates that the language model learned a rich semantic structure that could be exploited in vision tasks (Figure 1b).

3.2 Visual Model Pre-training

The visual model architecture we employ is based on the winning model for the 1,000-class ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 [11, 6]. The deep neural network model consists of several convolutional filtering, local contrast normalization, and max-pooling layers, followed by several fully connected neural network layers trained using the dropout regularization technique [10]. We trained this model with a softmax output layer, as described in [11], to predict one of 1,000 object categories from the ILSVRC 2012 1K dataset [7], and were able to reproduce their results. This trained model serves both as our benchmark for performance comparisons and as the initialization for our joint model.

3.3 Deep Visual-Semantic Embedding Model

Our deep visual-semantic embedding model (DeViSE) is initialized from these two pre-trained neural network models (Figure 1a). The embedding vectors learned by the language model are unit normed and used to map label terms into target vector representations.² The core visual model, with its softmax prediction layer now removed, is trained to predict these vectors for each image by means of a projection layer and a similarity metric. The projection layer is a linear transformation that maps the 4,096-D representation at the top of our core visual model into the 500- or 1,000-D representation native to our language model.

¹For example, the 9 nearest terms to tiger shark using cosine distance are bull shark, blacktip shark, shark, oceanic whitetip shark, sandbar shark, dusky shark, blue shark, requiem shark, and great white shark.
The 9 nearest terms to car are cars, muscle car, sports car, compact car, automobile, racing car, pickup truck, dealership, and sedans.
²In [13], which introduced the skip-gram model for text, cosine similarity between vectors is used for measuring semantic similarity. Unit-norming the vectors and using dot-product similarity is an equivalent similarity measurement.

The choice of loss function proved to be important. We used a combination of dot-product similarity and hinge rank loss (similar to [20]) such that the model was trained to produce a higher dot-product similarity between the visual model output and the vector representation of the correct label than between the visual output and other randomly chosen text terms. We defined the per-training-example hinge rank loss:

loss(image, label) = Σ_{j ≠ label} max[0, margin − t_label M v(image) + t_j M v(image)]   (1)

where v(image) is a column vector denoting the output of the top layer of our core visual network for the given image, M is the matrix of trainable parameters in the linear transformation layer, t_label is a row vector denoting the learned embedding vector for the provided text label, and the t_j are the embeddings of the other text terms. In practice, we found it expedient to randomize the algorithm both by (1) restricting the set of false text terms to possible image labels, and (2) truncating the sum after the first margin-violating false term was encountered. The t vectors were constrained to be unit norm, and a fixed margin of 0.1 was used in all experiments.³ We also experimented with an L2 loss between visual and label embeddings, as suggested by Socher et al. [18], but that consistently yielded about half the accuracy of the rank loss model.
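A minimal sketch of this per-example loss, with the unit-norm label vectors, the 0.1 margin, and the truncation at the first margin-violating term (our illustrative code, with made-up dimensions, not the authors' implementation):

```python
import numpy as np

def hinge_rank_loss(v, M, t_label, t_others, margin=0.1, truncate=True):
    """Per-example hinge rank loss of Eq. (1).
    v: core visual output (d_v,); M: trainable projection (d_t, d_v);
    t_label, t_others: unit-norm label embedding vectors (d_t,)."""
    proj = M @ v                       # M v(image)
    correct = t_label @ proj           # t_label M v(image)
    loss = 0.0
    for t_j in t_others:
        term = max(0.0, margin - correct + t_j @ proj)
        loss += term
        if truncate and term > 0:      # stop at first margin-violating term
            break
    return loss

# Toy check: an aligned label incurs (near-)zero loss, a misaligned one does not.
rng = np.random.default_rng(0)
d_v, d_t = 8, 4                        # toy sizes (the paper uses 4096 -> 500/1000)
M = rng.normal(size=(d_t, d_v))
v = rng.normal(size=d_v)
unit = lambda x: x / np.linalg.norm(x)
t_true = unit(M @ v)                   # label embedding aligned with the image
t_false = [unit(rng.normal(size=d_t)) for _ in range(5)]
loss_aligned = hinge_rank_loss(v, M, t_true, t_false)
loss_misaligned = hinge_rank_loss(v, M, t_false[0], [t_true] + t_false[1:])
```

The gradient of this loss is what is backpropagated through M (and later into the visual network) during DeViSE training.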
We believe this is because the nearest neighbor evaluation is fundamentally a ranking problem and is best solved with a ranking loss, whereas the L2 loss only aims to make the vectors close to one another but remains agnostic to incorrect labels that are closer to the target image. The DeViSE model was trained by asynchronous stochastic gradient descent on a distributed computing platform described in [4]. As above, the model was presented only with images drawn from the ILSVRC 2012 1K training set, but now trained to predict the term strings as text.⁴ The parameters of the projection layer M were first trained while holding both the core visual model and the text representation fixed. In the later stages of training, the derivative of the loss function was backpropagated into the core visual model to fine-tune its output,⁵ which typically improved accuracy by 1-3% (absolute). Adagrad per-parameter dynamic learning rates were utilized to keep gradients well scaled at the different layers of the network [9]. At test time, when a new image arrives, one first computes its vector representation using the visual model and the transformation layer; one then looks for the nearest labels in the embedding space. This last step can be done efficiently using either a tree or a hashing technique, in order to be faster than a naive linear search (see for instance [1]). The nearest labels are then mapped back to ImageNet synsets for scoring (see Supplementary Materials for details).

4 Results

The goals of this work are to develop a vision model that makes semantically relevant predictions even when it makes errors, and that generalizes to classes outside of its labeled training set, i.e. zero-shot learning.
We compare DeViSE to two models that employ the same high-quality core vision model but lack the semantic structure imparted by our language model: (1) a softmax baseline model, a state-of-the-art vision model [11] employing a 1000-way softmax classifier; (2) a random embedding model, a version of our model that uses random unit-norm embedding vectors in place of those learned by the language model. Both use the trained visual model described in Section 3.2. In order to demonstrate parity with the softmax baseline on the most commonly reported metric, we compute "flat" hit@k metrics: the percentage of test images for which the model returns the one true label in its top k predictions. To measure the semantic quality of predictions beyond the true label, we employ a hierarchical precision@k metric based on the label hierarchy provided with the
³The margin was chosen to be a fraction of the norm of the vectors, which is 1.0. A wide range of values would likely work well.
⁴ImageNet image labels are synsets, sets of synonymous terms, where each term is a word or phrase. We found training the model to predict the first term in each synset to be sufficient, but sampling from the synset terms might work equally well.
⁵In principle the gradients can also be back-propagated into the vector representations of the text labels. In this case, the language model should continue to train simultaneously in order to maintain the global semantic structure over all terms in the vocabulary.
                              Flat hit@k (%)           Hierarchical precision@k
Model type          dim     1     2     5     10      2      5      10     20
Softmax baseline    N/A   55.6  67.4  78.5  85.0    0.452  0.342  0.313  0.319
DeViSE              500   53.2  65.2  76.7  83.3    0.447  0.352  0.331  0.341
DeViSE              1000  54.9  66.9  78.4  85.0    0.454  0.351  0.325  0.331
Random embeddings   500   52.4  63.9  74.8  80.6    0.428  0.315  0.271  0.248
Random embeddings   1000  50.5  62.2  74.2  81.5    0.418  0.318  0.290  0.292
Chance              N/A    0.1   0.2   0.5   1.0    0.007  0.013  0.022  0.042

Table 1: Comparison of model performance on our test set, taken from the ImageNet ILSVRC 2012 1K validation set. Note that hierarchical precision@1 is equivalent to flat hit@1. See text for details.

ImageNet image repository [7]. In particular, for each true label and value of k, we generate a ground truth list from the semantic hierarchy and compute a per-example precision equal to the fraction of the model's k predictions that overlap with the ground truth list. We report mean precision across the test set. Detailed descriptions of the generation of the ground truth lists, the hierarchical scoring metric, and the train/validation/test dataset splits are provided in the Supplementary Materials.

4.1 ImageNet (ILSVRC) 2012 1K Results

This section presents flat and hierarchical results on the ILSVRC 2012 1K dataset, where the classes of the examples presented at test time are the same as those used for training. Table 1 shows results for the DeViSE model with 500- and 1,000-dimensional skip-gram models, compared to the random embedding and softmax baseline models, on both the flat and hierarchical metrics.⁶ On the flat metric, the softmax baseline shows higher accuracy for k = 1, 2. At k = 5, 10, the 1000-D DeViSE model has reached parity, and at k = 20 (not shown) it performs slightly better. We expected the softmax model to be the best-performing model on the flat metric, given that its cross-entropy training objective is most well matched to the evaluation metric, and are surprised that the performance of DeViSE is so close to softmax performance.
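Both evaluation metrics can be sketched in a few lines. These are illustrative helpers under our own naming; the construction of the hierarchical ground-truth lists is described in the paper's supplement, so here the list is simply given:

```python
def flat_hit_at_k(ranked_preds, true_label, k):
    """1 if the true label appears among the top-k predictions, else 0.
    Averaged over a test set, this gives the flat hit@k percentage."""
    return int(true_label in ranked_preds[:k])

def hierarchical_precision_at_k(ranked_preds, ground_truth_list, k):
    """Fraction of the top-k predictions that fall in the expanded
    ground-truth list derived from the label hierarchy."""
    return sum(p in ground_truth_list for p in ranked_preds[:k]) / k

preds = ["tiger shark", "bull shark", "submarine", "shark"]
gt = {"shark", "tiger shark", "bull shark"}          # hypothetical expansion
hit1 = flat_hit_at_k(preds, "shark", 1)              # true label not ranked first
hit5 = flat_hit_at_k(preds, "shark", 5)              # but within the top 5
hp2 = hierarchical_precision_at_k(preds, gt, 2)
```

Note how hierarchical precision@k rewards near-misses ("bull shark" for "shark") that the flat metric counts as plain errors, which is exactly the behavior the comparison in Table 1 probes.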
On the hierarchical metric, the DeViSE models show better semantic generalization than the softmax baseline, especially for larger k. At k = 5, the 500-D DeViSE model shows a 3% relative improvement over the softmax baseline, and at k = 20 almost a 7% relative improvement. This is a surprisingly large gain, considering that the softmax baseline is a reproduction of the best published model on these data. The gap between the DeViSE model and the softmax baseline on the hierarchical metric reflects the benefit of semantic information above and beyond visual similarity [8]. The gap between the DeViSE model and the random embeddings model establishes that the source of the gain is the well-structured embeddings learned by the language model, not some other property of our architecture.

4.2 Generalization and Zero-Shot Learning

A distinct advantage of our model is its ability to make reasonable inferences about candidate labels it has never visually observed. For example, a DeViSE model trained on images labeled tiger shark, bull shark, and blue shark, but never with images labeled shark, would likely have the ability to generalize to this more coarse-grained descriptor, because the language model has learned a representation of the general concept of shark which is similar to all of the specific sharks. Similarly, if tested on images of highly specific classes which the model has never seen before, for example a photo of an oceanic whitetip shark, and asked whether the correct label is more likely oceanic whitetip shark or some other unfamiliar label (say, nuclear submarine), our model stands a fighting chance of guessing correctly, because the language model ensures that the representation of oceanic whitetip shark is closer to the representations of the sharks the model has seen, while the representation of nuclear submarine is closer to those of other sea vessels.
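The zero-shot argument above can be made concrete with a toy example. The 2-D "embeddings" below are entirely made up for illustration (real skip-gram embeddings are 500- or 1,000-D); the point is only the mechanism: an image vector that lands near the shark cluster is nearer to the unseen label shark than to nuclear submarine.

```python
import numpy as np

def unit(x):
    return x / np.linalg.norm(x)

# Hypothetical unit-norm label embeddings (hand-crafted for illustration).
embeddings = {
    "tiger shark": unit(np.array([1.0, 0.1])),
    "bull shark": unit(np.array([1.0, 0.2])),
    "blue shark": unit(np.array([1.0, 0.0])),
    "shark": unit(np.array([1.0, 0.15])),              # never seen as an image label
    "nuclear submarine": unit(np.array([-0.2, 1.0])),  # never seen either
}

# Stand-in for the projected visual output M v(image) of a shark photo.
image_vec = unit(np.array([0.95, 0.12]))

def nearest_label(vec, candidates):
    """Pick the candidate label with the highest dot-product similarity."""
    return max(candidates, key=lambda name: embeddings[name] @ vec)

pred = nearest_label(image_vec, ["shark", "nuclear submarine"])
```

Because the language model places shark among the specific sharks the visual model was trained on, the nearest-label lookup resolves the unseen choice correctly.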
⁶Note that our softmax baseline results differ from the results in [11] due to a simplification in the evaluation procedure: [11] creates several distorted versions of each test image and aggregates the results for a final label, whereas in our experiments we evaluate using only the original test image. Our softmax baseline is able to reproduce the performance of the model in [11] when evaluated with the same procedure.

[Figure 2: For each image (panels a-f), the top 5 zero-shot predictions of DeViSE+1K from the 2011 21K label set and the softmax baseline model, both trained on ILSVRC 2012 1K. Predictions are ordered by decreasing score, with correct predictions in bold.]
Ground truth: (a) telephoto lens, zoom lens; (b) English horn, cor anglais; (c) babbler, cackler; (d) pineapple, pineapple plant, Ananas comosus; (e) salad bar; (f) spacecraft, ballistic capsule, space vehicle.

                                                   Flat hit@k (%)
Data Set            Model       # Candidate Labels     1     2     5    10    20
2-hop               DeViSE-0     1,589               6.0  10.0  18.1  26.4  36.4
                    DeViSE+1K    2,589               0.8   2.7   7.9  14.2  22.7
3-hop               DeViSE-0     7,860               1.7   2.9   5.3   8.2  12.5
                    DeViSE+1K    8,860               0.5   1.4   3.4   5.9   9.7
ImageNet 2011 21K   DeViSE-0    20,841               0.8   1.4   2.5   3.9   6.0
                    DeViSE+1K   21,841               0.3   0.8   1.9   3.2   5.3

Table 2: Flat hit@k performance of DeViSE on ImageNet-based zero-shot datasets of increasing difficulty from top to bottom. DeViSE-0 and DeViSE+1K are the same trained model, but DeViSE-0 is restricted to predicting only zero-shot classes, whereas DeViSE+1K predicts both the zero-shot and the 1K training labels. For all datasets, the zero-shot classes did not occur in the image training set.

To test this hypothesis, we extracted images from the ImageNet 2011 21K dataset with labels that were not included in the ILSVRC 2012 1K dataset on which DeViSE was trained. These are "zero-shot" data sets in the sense that our model has no visual knowledge of these labels, though embeddings for the labels were learned by the language model. The softmax baseline is only able to predict labels from ILSVRC 2012 1K. The zero-shot experiments were performed with the same trained 500-D DeViSE model used for the results in Section 4.1, but it is evaluated in two ways: DeViSE-0 only predicts the zero-shot labels, and DeViSE+1K predicts zero-shot labels and the ILSVRC 2012 1K training labels. Figure 2 shows label predictions for a handful of selected examples from this dataset to qualitatively illustrate model behavior. Note that DeViSE successfully predicts a wide range of labels outside its training set, and furthermore, the incorrect predictions are generally semantically "close" to the desired label.
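The flat hit@k measure reported in Table 2 can be sketched as a small helper. A minimal stand-in (names are ours), assuming each test example may carry several acceptable synonym labels, as in the ground-truth list above:

```python
def flat_hit_at_k(predictions, true_labels, k):
    """Fraction of examples whose true label appears among the top-k predictions.

    predictions: one ranked list of labels per example, best first.
    true_labels: one set of acceptable labels per example (ImageNet images
    can carry several valid synonyms, e.g. "pineapple, pineapple plant").
    """
    hits = sum(1 for ranked, truth in zip(predictions, true_labels)
               if truth & set(ranked[:k]))
    return hits / len(true_labels)
```

By this definition a pure softmax model scores 0 at every k on zero-shot labels, since the true label is never in its output vocabulary.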
Figure 2 (a), (b), (c), and (d) show cases where our model makes significantly better top-5 predictions than the softmax-based model. For example, in Figure 2 (a), the DeViSE model is able to predict a number of lens-related labels even though it was not trained on images in any of the predicted categories. Figure 2 (d) illustrates a case where the top softmax prediction is quite good, but where it is unable to generalize to new labels and its remaining predictions are off the mark, while our model's predictions are more plausible. Figure 2 (e) highlights a case where neither model gets the exact true label, but both models give plausible labels. Figure 2 (f) shows a case where the softmax model emits more nearly correct labels than the DeViSE model. To quantify the performance of the model on zero-shot data, we constructed from our ImageNet 2011 21K zero-shot data three test data sets of increasing difficulty based on the image labels' tree distance from the training ILSVRC 2012 1K labels in the ImageNet label hierarchy [7]. The easiest dataset, "2-hop", comprises the 1,589 labels that are within two tree hops of the training labels, making them visually and semantically similar to the training set. A more difficult "3-hop" dataset was constructed in the same manner. Finally, we built a third, particularly challenging dataset consisting of all the labels in ImageNet 2011 21K that are not in ILSVRC 2012 1K.

                              Hierarchical precision@k
Data Set            Model                  1      2      5     10     20
2-hop               DeViSE-0            0.06  0.152  0.192  0.217  0.233
                    DeViSE+1K          0.008  0.204  0.196  0.201  0.214
                    Softmax baseline       0  0.236  0.181  0.174  0.179
3-hop               DeViSE-0           0.017  0.037  0.191  0.214  0.236
                    DeViSE+1K          0.005  0.053  0.192  0.201  0.214
                    Softmax baseline       0  0.053  0.157  0.143  0.130
ImageNet 2011 21K   DeViSE-0           0.008  0.017  0.072  0.085  0.096
                    DeViSE+1K          0.003  0.025  0.083  0.092  0.101
                    Softmax baseline       0  0.023  0.071  0.069  0.065

Table 3: Hierarchical precision@k results on zero-shot classification.
Performance of DeViSE compared to the softmax baseline model across the same datasets as in Table 2. Note that the softmax model can never directly predict the correct label, so its precision@1 is 0.

Model                        200 labels   1000 labels
DeViSE                          31.8%         9.0%
Mensink et al. 2012 [12]        35.7%         1.9%
Rohrbach et al. 2011 [17]       34.8%           -

Table 4: Flat hit@5 accuracy on the zero-shot task from [12]. DeViSE experiments were performed with a 500-D model. The [12] model uses a curated hierarchy over labels for zero-shot classification; without using this information, our model is close in performance on the 200 zero-shot class label task. When the models can predict any of the 1000 labels, we achieve better accuracy, indicating DeViSE has less of a bias toward training classes than [12]. As in [12], we include a result on a similar task from [17], though their work used a different set of 200 zero-shot classes.

We again calculated the flat hit@k measure to determine how frequently DeViSE-0 and DeViSE+1K predicted the correct label for each of these data sets (Table 2). DeViSE-0's top prediction was the correct label 6.0% of the time across 1,589 novel labels, and the rate increases with k, reaching 36.4% within the top 20 predictions. As the zero-shot data sets become more difficult, the accuracy decreases in absolute terms, though it is better relative to chance (not shown). Since a traditional softmax visual model can never produce the correct label on zero-shot data, its performance would be 0% for all k. The DeViSE+1K model performed uniformly worse than the plain DeViSE-0 model, by a margin that indicates it has a bias toward training classes. To provide a stronger baseline for comparison, we compared the performance of our model and the softmax model on the hierarchical metric we employed above.
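The hierarchical metric gives credit for predictions that land near the true label in the ImageNet hierarchy. A hedged sketch of the scoring step: the construction of the hierarchy-derived set of acceptable labels for each example follows the paper's evaluation protocol and is not reproduced here, so the set is simply passed in as an argument (all names are ours):

```python
def hierarchical_precision_at_k(ranked, valid_set, k):
    """Fraction of the top-k predictions that fall inside valid_set,
    the hierarchy-derived neighborhood of the true label.

    ranked: predicted labels, best first.
    valid_set: acceptable labels for this example; how it is built from
    the ImageNet hierarchy is defined by the evaluation protocol.
    """
    return sum(1 for label in ranked[:k] if label in valid_set) / k
```

Averaging this quantity over the test set yields one row of a table like Table 3; a softmax baseline can still earn credit for k > 1 by predicting visually similar neighbors of the true label.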
Although the softmax baseline model can never predict exactly the correct label, the hierarchical metric will give the model credit for predicting labels that are in the neighborhood of the correct label in the ImageNet hierarchy (for k > 1). Visual similarity is strongly correlated with semantic similarity for nearby object categories [8], and the softmax model does leverage visual similarity between zero-shot and training images to make predictions that will be scored favorably (e.g. Figure 2d). The easiest dataset, “2-hop”, contains object categories that are as visually and semantically similar to the training set as possible. For this dataset the softmax model outperforms the DeViSE model for hierarchical precision@2, demonstrating just how large a role visual similarity plays in predicting semantically “nearby” labels (Table 3). However, for k = 5, 10, 20, our model produces superior predictions relative to the ImageNet hierarchy, even on this easiest dataset. For the two more difficult datasets, where there are more novel categories and the novel categories are less closely related to those in the training data set, DeViSE outperforms the softmax model at all measured hierarchical precisions. The quantitative gains can be quite large, as much as 82% relative improvement over softmax performance, and qualitatively, the softmax model’s predictions can be surprisingly unreasonable in some cases (e.g. Figure 2c). The random embeddings model we described above performed substantially worse than either of the real models. These results indicate that our architecture succeeds in leveraging the semantic knowledge captured by the language model to make reasonable predictions, even as test images become increasingly dissimilar from those used in the training set. To provide a comparison with other work in zero-shot learning, we also directly compare to the zero-shot results from [12]. 
These were performed on a particular 800/200 split of the 1000 classes from ImageNet 2010: training and model tuning are performed using the 800 classes, and test images are drawn from the remaining 200 classes. Results are shown in Table 4. Taken together, these zero-shot experiments indicate that the DeViSE model can exploit both visual and semantic information to predict novel classes never before observed. Furthermore, the presence of semantic information in the model substantially improves the quality of its predictions.

5 Conclusion

In contrast to previous attempts in this area [18], we have shown that our joint visual-semantic embedding model can be trained to give performance comparable to a state-of-the-art softmax-based model on a flat object classification metric, while simultaneously making more semantically reasonable errors, as indicated by its improved performance on a hierarchical label metric. We have also shown that this model is able to make correct predictions across thousands of previously unseen classes by leveraging semantic knowledge elicited only from unannotated text. The advantages of this architecture, however, extend beyond the experiments presented here. First, we believe that our model's unusual compatibility with larger, less manicured data sets will prove to be a major strength moving forward. In particular, the skip-gram language model we constructed included only a modestly sized vocabulary, and was exposed only to the text of a single online encyclopedia; we believe that the gains available to models with larger vocabularies and trained on vastly larger text corpora will be significant, and will easily outstrip methods which rely on manually constructed semantic hierarchies (e.g. [17]).
Perhaps more importantly, though here we trained on a curated academic image dataset, our model's architecture naturally lends itself to being trained on all available images that can be annotated with any text term contained in the (larger) vocabulary. We believe that training on massive "open" image datasets of this form will dramatically improve the quality of visual object categorization systems. Second, we believe that the 1-of-N (and nearly balanced) visual object classification problem is soon to be outmoded by practical visual object categorization systems that can handle very large numbers of labels [5] and the re-definition of valid label sets at test time. For example, our model can be trained once on all available data, and simultaneously used in one application requiring only coarse object categorization (e.g. house, car, pedestrian) and another application requiring fine categorization in a very specialized subset (e.g. Honda Civic, Ferrari F355, Tesla Model-S). Moreover, because test-time computation can be sub-linear in the number of labels contained in the training set, our model can be used in exactly such systems with much larger numbers of labels, including overlapping or never-observed categories. Moving forward, we are experimenting with techniques which more directly leverage the structure inherent in the learned language embedding, greatly reducing training costs of the joint model and allowing even greater scaling [15].

Acknowledgments

Special thanks to those who lent their insight and technical support for this work, including Matthieu Devin, Alex Krizhevsky, Quoc Le, Rajat Monga, Ilya Sutskever, and Wojciech Zaremba.

References

[1] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems, NIPS, 2010.
[2] Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155, 2003.
[3] A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning (ICML), 2011.
[4] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, NIPS, 2012.
[5] Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, and Jay Yagnik. Fast, accurate detection of 100,000 object classes on a single machine. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
[6] Jia Deng, Alex Berg, Sanjeev Satheesh, Hao Su, Aditya Khosla, and Fei-Fei Li. ImageNet large scale visual recognition challenge, 2012.
[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[8] Thomas Deselaers and Vittorio Ferrari. Visual and semantic similarity in ImageNet. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[9] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
[10] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[11] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, NIPS, 2012.
[12] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In European Conference on Computer Vision (ECCV), 2012.
[13] Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations (ICLR), Scottsdale, Arizona, USA, 2013.
[14] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, NIPS, 2013.
[15] Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Jonathon Shlens, Andrea Frome, Greg S. Corrado, and Jeffrey Dean. Zero-shot learning by convex combination of semantic embeddings. arXiv (to be submitted), 2013.
[16] Mark Palatucci, Dean Pomerleau, Geoffrey E. Hinton, and Tom M. Mitchell. Zero-shot learning with semantic output codes. In Advances in Neural Information Processing Systems, NIPS, 2009.
[17] Marcus Rohrbach, Michael Stark, and Bernt Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[18] R. Socher, M. Ganjoo, H. Sridhar, O. Bastani, C. D. Manning, and A. Y. Ng. Zero-shot learning through cross-modal transfer. In International Conference on Learning Representations (ICLR), Scottsdale, Arizona, USA, 2013.
[19] L.J.P. van der Maaten and G.E. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.
[20] Jason Weston, Samy Bengio, and Nicolas Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81(1):21-35, 2010.
Symbolic Opportunistic Policy Iteration for Factored-Action MDPs

Aswin Raghavan^a, Roni Khardon^b, Alan Fern^a, Prasad Tadepalli^a
a School of EECS, Oregon State University, Corvallis, OR, USA
{nadamuna,afern,tadepall}@eecs.orst.edu
b Department of Computer Science, Tufts University, Medford, MA, USA
roni@cs.tufts.edu

Abstract

This paper addresses the scalability of symbolic planning under uncertainty with factored states and actions. Our first contribution is a symbolic implementation of Modified Policy Iteration (MPI) for factored actions that views policy evaluation as policy-constrained value iteration (VI). Unfortunately, a naïve approach to enforcing policy constraints can lead to large memory requirements, sometimes making symbolic MPI worse than VI. We address this through our second and main contribution, symbolic Opportunistic Policy Iteration (OPI), a novel convergent algorithm lying between VI and MPI that applies policy constraints only when doing so does not increase the size of the value function representation, and otherwise performs VI backups. We also give a memory-bounded version of this algorithm, allowing a space-time tradeoff. Empirical results show significantly improved scalability over state-of-the-art symbolic planners.

1 Introduction

We study symbolic dynamic programming (SDP) for Markov Decision Processes (MDPs) with exponentially large factored state and action spaces. Most prior SDP work has focused on exact [1] and approximate [2, 3] solutions to MDPs with factored states, assuming just a handful of atomic actions. In contrast, many applications are most naturally modeled as having factored actions described in terms of multiple action variables, which yields an exponential number of joint actions. This occurs, e.g., when controlling multiple actuators in parallel, such as in robotics, traffic control, and real-time strategy games.
In recent work [4] we have extended SDP to factored actions by giving a symbolic VI algorithm that explicitly reasons about action variables. The key bottleneck of that approach is the space and time complexity of computing symbolic Bellman backups, which requires reasoning about all actions at all states simultaneously. This paper is motivated by addressing this bottleneck via the introduction of alternative and potentially much cheaper backups. We start by considering Modified Policy Iteration (MPI) [5], which adds a few policy evaluation steps between consecutive Bellman backups. MPI is attractive for factored-action spaces because policy evaluation does not require reasoning about all actions at all states, but rather only about the current policy's action at each state. Existing work on symbolic MPI [6] assumes a small atomic action space and does not scale to factored actions. Our first contribution (Section 3) is a new algorithm, Factored Action MPI (FA-MPI), that conducts exact policy evaluation steps by treating the policy as a constraint on normal Bellman backups. While FA-MPI is shown to improve scalability compared to VI in some cases, we observed that in practice the strict enforcement of the policy constraint can cause the representation of value functions to become too large and dominate run time. Our second and main contribution (Section 4) is to overcome this issue using a new backup operator that lies between policy evaluation and a Bellman backup and hence is guaranteed to converge. This new algorithm, Opportunistic Policy Iteration (OPI), constrains a select subset of the actions in a way that guarantees that there is no growth in the representation of the value function. We also give a memory-bounded version of the above algorithm (Section 5). Our empirical results (Section 6) show that these algorithms are significantly more scalable than FA-MPI and other state-of-the-art algorithms.

Figure 1: Example of a DBN MDP with factored actions.
2 MDPs with Factored State and Action Spaces

In a factored MDP M, the state space S and action space A are specified by finite sets of binary variables X = (X_1, ..., X_l) and A = (A_1, ..., A_m) respectively, so that |S| = 2^l and |A| = 2^m. For emphasis we refer to such MDPs as factored-action MDPs (FA-MDPs). The transition function T and reward function R are specified compactly using a Dynamic Bayesian Network (DBN). The DBN model consists of a two-time-step graphical model that shows, for each next-state variable X' and the immediate reward, the set of current state and action variables, denoted by parents(X'). Further, following [1], the conditional probability functions are represented by algebraic decision diagrams (ADDs) [7], which represent real-valued functions of boolean variables as a directed acyclic graph (DAG) (i.e., an ADD maps assignments to n boolean variables to real values). We let P^{X'_i} denote the ADD representing the conditional probability table for variable X'_i. For example, Figure 1 shows a DBN for the SysAdmin domain (Section 6.1). The DBN encodes that the computers c1, c2 and c3 are arranged in a directed ring so that the running status of each is influenced by its reboot action and the status of its predecessor. The right part of Figure 1 shows the ADD representing the dynamics for the state variable running_c1. The variable running_c1' represents the truth value of running_c1 in the next state. The ADD shows that running_c1 becomes true if it is rebooted, and otherwise the next state depends on the status of the neighbors. When not rebooted, c1 fails w.p. 0.3 if its neighboring computer c3 has also failed, and w.p. 0.05 otherwise. When not rebooted, a failed computer becomes operational w.p. 0.05.
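The transition just described for running c1 can be written directly as a small function, a tabular stand-in for the ADD of Figure 1 (function and argument names are ours):

```python
def p_running_c1_next(reboot_c1, running_c1, running_c3):
    """P(running_c1' = true) for the SysAdmin ring of Figure 1.

    Mirrors the ADD: rebooting guarantees the machine runs next step;
    otherwise a running machine fails w.p. 0.3 if its neighbor c3 is down
    (w.p. 0.05 if not), and a failed machine recovers w.p. 0.05.
    """
    if reboot_c1:
        return 1.0
    if running_c1:
        return 1.0 - (0.3 if not running_c3 else 0.05)
    return 0.05
```

The point of the ADD representation is that this same function is stored as a small shared graph rather than a table over all 2^|parents| assignments, which is what lets the symbolic operations below scale.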
ADDs support binary operations over the functions they represent (F op G = H if and only if ∀x, F(x) op G(x) = H(x)) and marginalization operators (e.g., marginalize x via maximization in G(y) = max_x F(x, y) and via sum in G(y) = Σ_x F(x, y)). Operations between diagrams will be represented using the usual symbols +, ×, max, etc., and the distinction between scalar operations and operations over functions should be clear from context. Importantly, these operations are carried out symbolically and scale polynomially in the size of the ADD rather than the potentially exponentially larger tabular representation of the function. ADD operations assume a total ordering O on the variables and impose that ordering in the DAG structure (interior nodes) of any ADD.

SDP uses the compact MDP model to derive compact value functions by iterating symbolic Bellman backups that avoid enumerating all states. It has the advantage that the value function is exact while often being much more compact than explicit tables. Early SDP approaches such as SPUDD [1] only represented the structure in the state variables and enumerated over actions, so that space and time are at least linearly related to the number of actions, and hence exponential in m. In recent work, we extended SDP to factored action spaces by computing Bellman backups using an algorithm called Factored Action Regression (FAR) [4]. This is done by implementing the following equations using ADD operations over a representation like Figure 1. Let T^Q(V) denote the backup operator that computes the next iterate of the Q-value function starting with value function V,

T^Q(V) = R + γ · Σ_{X'_1} P^{X'_1} · · · Σ_{X'_l} P^{X'_l} × primed(V)    (1)

Then T(V) = max_{A_1} · · · max_{A_m} T^Q(V) gives the next iterate of the value function. Repeating this process we get the VI algorithm. Here primed(V) swaps the state variables X in the diagram V with next-state variables X' (cf. the DBN representation for next-state variables).
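The pointwise binary operations on decision diagrams (F op G = H) that Equation 1 relies on can be sketched with a recursive Shannon-expansion apply. This is a simplified tree version, assuming an alphabetical variable ordering and omitting the node sharing and memoization that real ADD packages use to stay polynomial in diagram size:

```python
class ADD:
    """A decision node testing `var` (hi/lo children), or a leaf when var is None."""
    def __init__(self, var=None, hi=None, lo=None, value=None):
        self.var, self.hi, self.lo, self.value = var, hi, lo, value

def apply_op(f, g, op):
    """Pointwise op over the functions f and g, as in F op G = H."""
    if f.var is None and g.var is None:      # both leaves: combine values
        return ADD(value=op(f.value, g.value))
    # split on whichever variable comes first in the (alphabetical) ordering
    top = min(v for v in (f.var, g.var) if v is not None)
    fh, fl = (f.hi, f.lo) if f.var == top else (f, f)
    gh, gl = (g.hi, g.lo) if g.var == top else (g, g)
    return ADD(var=top, hi=apply_op(fh, gh, op), lo=apply_op(fl, gl, op))

def evaluate(d, assignment):
    """Follow the path selected by a boolean assignment down to a leaf."""
    while d.var is not None:
        d = d.hi if assignment[d.var] else d.lo
    return d.value
```

Marginalization operators such as max_x or Σ_x can be built the same way, by applying op to the hi and lo cofactors of the variable being eliminated.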
Equation 1 should be read right to left as follows: each probability diagram P^{X'_i} assigns a probability to X'_i from assignments to Parents(X'_i) ⊆ (X, A), introducing the variables Parents(X'_i) into the value function. The Σ marginalization then eliminates the variable X'_i. We arrive at the Q-function that maps assignments to variables in (X, A) to real values. Written in this way, where the domain dynamics are explicitly expressed in terms of action variables and where max_A = max_{A_1,...,A_m} is a symbolic marginalization operation over action variables, we get the Factored Action Regression (FAR) algorithm [4]. In the following, we use T() to denote a Bellman-like backup, where the superscript in T^Q() denotes that actions are not maximized out, so the output is a function of state and actions, and the subscript in T_π(), defined below, denotes that the update is restricted to the actions in π. Similarly, T^Q_π() restricts to a (possibly partial) policy π and does not maximize over the unspecified action choice.

In this work we build on Modified Policy Iteration (MPI), which generalizes value iteration and policy iteration by interleaving k policy evaluation steps between successive Bellman backups [5]. Here a policy evaluation step corresponds to iterating exact policy backups, denoted by T_π, where the action is prescribed by the policy π in each state. MPI has the potential to speed up convergence over VI because, at least for flat action spaces, policy evaluation is considerably cheaper than full Bellman backups. In addition, when k > 0, one might hope for larger jumps in policy improvement because the greedy action in T is based on a more accurate estimate of the value of the policy. Interestingly, the first approach to symbolic planning in MDPs was a version of MPI for factored states called Structured Policy Iteration (SPI) [6], which was later adapted to relational problems [8].
SPI represents the policy as a decision tree with state variables labeling interior nodes and a concrete action at each leaf node. The policy backup uses the graphical form of the policy. In each such backup, for each leaf node (policy action) a in the policy tree, its Q-function Q_a is computed and attached to the leaf. Although SPI leverages the factored state representation, it represents the policy in terms of concrete joint actions, which fails to capture the structure among the action variables in FA-MDPs. In addition, in factored action spaces this requires an explicit calculation of Q-functions for all joint actions. Finally, the space required for policy backup can be prohibitive because each Q-function Q_a is joined to each leaf of the policy. SPI goes to great lengths in order to enforce a policy backup which, intuitively, ought to be much easier to compute than a Bellman backup. In fact, we are not aware of any implementations of this algorithm that scale well for FA-MDPs or even for factored state spaces. The next section provides an alternative algorithm.

3 Factored Action MPI (FA-MPI)

In this section, we introduce Factored Action MPI (FA-MPI), which uses a novel form of policy backup. Pseudocode is given in Figure 2. Each iteration of the outer while loop starts with one full Bellman backup using Equation 1, i.e., policy improvement. The inner loop performs k steps of policy backups using a new algorithm described below that avoids enumerating all actions. We represent the policy using a Binary Decision Diagram (BDD) with state and action variables, where a leaf value of 1 denotes any combination of action variables that is the policy action, and a leaf value of −∞ indicates otherwise. Using this representation, we perform policy backups using T^Q_π(V) given in Equation 2 below, followed by a max over the actions in the resulting diagram.
In this equation, the diagram resulting from the product π × primed(V) sets the value of all off-policy state-action pairs to −∞ before computing any value for them (notice that T^Q_π is equivalent to π × T^Q, but the former is easier to compute), and this ensures correctness of the update as indicated by the next proposition.

T^Q_π(V) = R + γ · Σ_{X'_1} P^{X'_1} · · · Σ_{X'_l} P^{X'_l} × (π × primed(V))    (2)

Algorithm 3.1: FA-MPI/OPI(k)
  V^0 ← 0, i ← 0
  (V^{i+1}_0, π^{i+1}) ← max_A T^Q(V^i)
  while ||V^{i+1}_0 − V^i|| > ε do
    for j ← 1 to k do
      for FA-MPI: V^{i+1}_j ← max_A T^Q_{π^{i+1}}(V^{i+1}_{j−1})
      for OPI:    V^{i+1}_j ← max_A T̂^Q_{π^{i+1}}(V^{i+1}_{j−1})
    V^{i+1} ← V^{i+1}_k
    i ← i + 1
    (V^{i+1}_0, π^{i+1}) ← max_A T^Q(V^i)
  return π^{i+1}

Figure 2: Factored Action MPI and OPI.

Algorithm 3.2: P(D, π)
  d ← variable at the root node of D
  c ← variable at the root node of π
  if d occurs after c in the ordering then
    return P(D, max(π_T, π_F))
  else if d = c then
    return ADD(d, P(D_T, π_T), P(D_F, π_F))
  else if d occurs before c in the ordering then
    return ADD(d, P(D_T, π), P(D_F, π))
  else if π = −∞ then return −∞
  else return D

Figure 3: Pruning procedure for an ADD. Subscripts T and F denote the true and false child respectively.

Proposition 1. FA-MPI computes exact policy backups, i.e., max_A T^Q_π = T_π.

The proof uses the fact that (s, a) pairs that do not agree with the policy get a value −∞ via the constraints and therefore do not affect the maximum. While FA-MPI can lead to improvements over VI (i.e., FAR), like SPI, FA-MPI can lead to large space requirements in practice. In this case, the bottleneck is the ADD product π × primed(V), which can be exponentially larger than primed(V) in the worst case. The next section shows how to approximate the backup in Equation 2 while ensuring no growth in the size of the ADD.

4 Opportunistic Policy Iteration (OPI)

Here we describe Opportunistic Policy Iteration (OPI), which addresses the shortcomings of FA-MPI.
As seen in Figure 2, OPI is identical to FA-MPI except that it uses an alternative, more conservative policy backup. The sequence of policies generated by FA-MPI (and MPI) may not all have compactly representable ADDs. Fortunately, finding the optimal value function may not require representing the values of the intermediate policies exactly. The key idea in OPI is to enforce the policy constraint opportunistically, i.e., only when doing so does not increase the size of the value function representation. In an exponential action space, we can sometimes expect a Bellman backup to be a coarser partitioning of state variables than the value function of a given policy (e.g., two states that have the same value under the optimal action have different values under the policy action). In this case enforcing the policy constraint via T^Q_π(V) is actually harmful in terms of the size of the representation. OPI is motivated by retaining the coarseness of Bellman backups in some states, and otherwise enforcing the policy constraint. The OPI backup is sensitive to the size of the value ADD, so that its result is guaranteed to be no larger than the results of both the Bellman backup and the policy backup.

First we describe the symbolic implementation of OPI. The trade-off between policy evaluation and policy improvement is made via a pruning procedure (pseudocode in Figure 3). This procedure assigns a value of −∞ to only those paths in a value function ADD that violate the policy constraint π. The interesting case is when the root variable of π is ordered below the root of D (and thus does not appear in D), so that the only way to violate the constraint is to violate both the true and false branches. We therefore recurse on D with the diagram max{π_T, π_F}.

Example 1. The pruning procedure is illustrated in Figure 4. Here the input function D does not contain the root variable X of the constraint, and the max under X is also shown.
The result of pruning P(D, π) is no more complex than D, whereas the product D × π is more complex. Clearly, the pruning procedure is not sound for ADDs because there may be paths that violate the policy but are not explicitly represented in the input function D.

Figure 4: An example for pruning. D and π denote the given function and constraint respectively. The result of pruning is no larger than D, as opposed to multiplication. T (true) and F (false) branches are denoted by the left and the right child respectively.

In order to understand the result of P, let p be a path from a root to a leaf in an ADD. The path p induces a partial assignment to the variables in the diagram. Let E(p) be the set of all extensions of this partial assignment to complete assignments to all variables. As established in the following proposition, a path is pruned if none of its extensions satisfies the constraint.

Proposition 2. Let G = P(D, π) where leaves in D do not have the value −∞. Then for all paths p in G we have:
1. p leads to −∞ in G iff ∀y ∈ E(p), π(y) = −∞.
2. p does not lead to −∞ in G iff ∀y ∈ E(p), G(y) = D(y).
3. The size of the ADD G is smaller than or equal to the size of D.

The proof (omitted due to space constraints) uses structural induction on D and π. The novel backup introduced in OPI interleaves the application of pruning with the summation steps so as to prune the diagram as early as possible. Let P_π(D) be shorthand for P(D, π). The backup used by OPI, shown in Figure 2, is

T̂^Q_π(V) = P_π( P_π(R) + γ · P_π( Σ_{X'_1} P^{X'_1} · · · P_π( Σ_{X'_l} P^{X'_l} × primed(V) ) · · · ) )    (3)

Using the properties of P we can show that T̂^Q_π(V) overestimates the true backup of a policy, but is still bounded by the true value function.

Theorem 1. The policy backup used by OPI is bounded between the full Bellman backup and the true policy backup, i.e., T_π ≤ max_A T̂^Q_π ≤ T.
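The pruning procedure P(D, π) of Figure 3 can be sketched over a simplified tree representation of diagrams. As in Proposition 2, paths of D are sent to −∞ only when no extension can satisfy π, and variables of π absent from D are never introduced; node sharing and memoization are omitted, and an alphabetical variable ordering is assumed:

```python
NEG_INF = float("-inf")

class Node:
    """Decision node (var, hi, lo) or leaf (value); variables compare
    alphabetically, standing in for the total ordering O on variables."""
    def __init__(self, var=None, hi=None, lo=None, value=None):
        self.var, self.hi, self.lo, self.value = var, hi, lo, value

def node_max(a, b):
    """Pointwise max of two diagrams, used for max(pi_T, pi_F)."""
    if a.var is None and b.var is None:
        return Node(value=max(a.value, b.value))
    top = min(v for v in (a.var, b.var) if v is not None)
    ah, al = (a.hi, a.lo) if a.var == top else (a, a)
    bh, bl = (b.hi, b.lo) if b.var == top else (b, b)
    return Node(var=top, hi=node_max(ah, bh), lo=node_max(al, bl))

def prune(d, pi):
    """P(D, pi): mark as -inf only those paths of D that cannot satisfy pi,
    without ever adding variables of pi that D does not test."""
    if pi.var is None:                       # pi is a leaf: -inf or 1
        return Node(value=NEG_INF) if pi.value == NEG_INF else d
    if d.var is None or pi.var < d.var:      # D skips pi's root variable
        return prune(d, node_max(pi.hi, pi.lo))
    if d.var == pi.var:
        return Node(var=d.var, hi=prune(d.hi, pi.hi), lo=prune(d.lo, pi.lo))
    return Node(var=d.var, hi=prune(d.hi, pi), lo=prune(d.lo, pi))

def evaluate(d, assignment):
    while d.var is not None:
        d = d.hi if assignment[d.var] else d.lo
    return d.value
```

When the constraint's root variable does not occur in D (Example 1), the recursion replaces π by max(π_T, π_F), so the returned diagram is never larger than D, matching item 3 of Proposition 2.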
Since none of the value functions generated by OPI overestimate the optimal value function, it follows that both OPI and FA-MPI converge to the optimal policy under the same conditions as MPI [5]. However, the sequence of value functions/policies generated by OPI is in general different from, and potentially more compact than, the one generated by FA-MPI. The relative compactness of these policies is empirically investigated in Section 6. The theorem also implies that OPI converges at least as fast as FA-MPI to the optimal policy, and may converge faster. In terms of a flat MDP, OPI can be interpreted as sometimes picking a greedy off-policy action while evaluating a fixed policy, when the value function of the greedy policy is at least as good and more compact than that of the given policy. Thus, OPI may be viewed as asynchronous policy iteration [9]. However, unlike traditional asynchronous PI, the policy improvement in OPI is motivated by the size of the representation rather than by any measure of the magnitude of improvement.

Example 2. Consider the example in Figure 5. Suppose that $\pi$ is a policy constraint that says that the action variable $A_1$ must be true when the state variable $X_2$ is false. The backup $T^Q(R)$ does not involve $X_2$ and therefore pruning does not change the diagram, so $P_\pi(T^Q(R)) = T^Q(R)$. The max chooses $A_1 = \text{true}$ in all states, regardless of the value of $X_2$, a greedy improvement. Note that the improved policy (always set $A_1$) is more compact than $\pi$, and so is its value. In addition, $P_\pi(T^Q(R))$ is coarser than $\pi \times T^Q(R)$.

Figure 5: An illustration where OPI computes an incorrect but more compact value function that is a partial policy improvement. (a) A simple policy for an MDP with two state variables, $X_1$ and $X_2$, and one action variable $A_1$. (b) Optimal policy backup in FA-MPI. (c) OPI backup; note the smaller size of the value function. T (true) and F (false) branches are denoted by the left and the right child respectively.

5 Memory-Bounded OPI

Memory is usually a limiting factor for symbolic planning. In [4] we proposed a symbolic memory-bounded (MB) VI algorithm for FA-MDPs, which we refer to below as Memory Bounded Factored Action Regression (MBFAR). MBFAR generalizes SPUDD and FAR by flexibly trading off computation time for memory. The key idea is that a backup can be computed over a partially instantiated action, by fixing the value of an action variable. MBFAR computes what [10] called "Z-value functions", i.e., optimal value functions for partially specified actions. But in contrast to their work, where the set of partial actions is hand-coded by the designer, MBFAR is domain-independent and depends on the complexity of the value function. In terms of time to convergence, computing these subsets on the fly may lead to some overhead, but in some cases may lead to a speedup. Memory Bounded FA-MPI (MB-MPI) is a simple extension that uses MBFAR in place of FAR for the backups in Figure 2. MB-MPI is parametrized by $k$, the number of policy backups, and $M$, the maximum size (in nodes) of a Z-value function. MB-MPI generalizes MPI in that MB-MPI($k$, 0) is the same as SPI($k$) [6] and MB-MPI($k$, $\infty$) is FA-MPI($k$). Also, MB-MPI(0, 0) is SPUDD [1] and MB-MPI(0, $\infty$) is FAR [4]. We can also combine OPI with the memory-bounded backup; we call this algorithm MB-OPI. Since both MB-MPI and OPI address space issues in FA-MPI, the question is whether one dominates the other and whether their combination is useful. This is addressed in the experiments.

6 Experiments

In this section, we experimentally evaluate the algorithms and the contributions of their different components.

6.1 Domain descriptions

The following domains were described using the Relational Dynamic Influence Diagram Language (RDDL) [11]. We ground the relational description to arrive at an MDP similar to Figure 1.
In our experiments the variables in the ADDs are ordered so that $\mathrm{parents}(X'_i)$ occur above $X'_i$, and the $X'_i$ are ordered by $|\mathrm{parents}(X'_i)|$. We heuristically chose to compute the expectation over state variables in a top-down manner, and the maximization over action variables in a bottom-up manner with respect to the variable ordering.

Inventory Control (IC): This domain consists of $n$ independent shops, each full or empty, that can be filled by a deterministic action. The total number of shops that can be filled in one time step is restricted. The arrival of a customer is distributed independently and identically for all shops as Bernoulli($p$) with $p = 0.05$. A customer at an empty shop continues to wait with a reward of -1 until the shop is filled, and gives a reward of -0.35. An instance of IC with $n$ shops and $m$ trucks has a joint state and action space of size $2^{2n}$ and $\sum_{i=0}^{m} \binom{n}{i}$, respectively.

SysAdmin: The SysAdmin domain was part of the IPC 2011 benchmark and was introduced in earlier work [12]. It consists of a network of $n$ computers connected in a given topology. Each computer is either running (reward of +1) or failed (reward of 0), so that $|S| = 2^n$, and each computer has an associated deterministic action of rebooting it (with a cost of -0.75), so that $|A| = 2^n$. We restrict the number of computers that can be rebooted in one time step. Unlike the previous domain, the exogenous events are not independent of one another.
A running computer that is not being rebooted is running in the next state with a probability $p$ proportional to the number of its running neighbors, $p = 0.45 + 0.5\,\frac{1+n_r}{1+n_c}$, where $n_r$ is the number of neighboring computers that have not failed and $n_c$ is the number of neighbors. We test this domain on three topologies of increasing difficulty, viz. a star topology, a unidirectional ring, and a bidirectional ring.

Figure 6: Impact of policy evaluation: parallel actions vs. solution time, for Inventory Control (8 shops), a SysAdmin star network (11 computers), a bidirectional ring (10 computers), and a unidirectional ring (10 computers), comparing VI, OPI(2), and OPI(5). In the star and unidirectional networks VI was stopped at a time limit of six hours and the Bellman error is annotated.

Figure 7: Impact of pruning, for Inventory Control (uniform, 8 shops), the bidirectional ring (2 parallel actions), and Elevator Control (4 floors, 2 elevators), comparing FA-MPI(5) and OPI(5). EML denotes Exceeded Memory Limit and the Bellman error is denoted in parentheses.

Elevator control: We consider the problem of controlling $m$ elevators in a building with $n$ floors. A state is described as follows: for each floor, whether a person is waiting to go up or down; for each elevator, whether a person inside the elevator is going up or down, whether the elevator is at each floor, and its current direction (up or down).
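The combinatorics and transition probabilities above can be sanity-checked in a few lines. The snippet below is purely illustrative (the function names are our own, not part of the planners); it evaluates the IC action-space count and the SysAdmin survival probability exactly as stated in the text.

```python
from math import comb

def ic_action_space(n_shops, m_trucks):
    # Number of joint actions in Inventory Control:
    # choose at most m_trucks of the n_shops to fill in one step.
    return sum(comb(n_shops, i) for i in range(m_trucks + 1))

def sysadmin_p_running(n_running_neighbors, n_neighbors):
    # Probability that a running, non-rebooted computer keeps running:
    # p = 0.45 + 0.5 * (1 + n_r) / (1 + n_c)
    return 0.45 + 0.5 * (1 + n_running_neighbors) / (1 + n_neighbors)

print(2 ** (2 * 8))               # IC(8) state space: 65536
print(ic_action_space(8, 3))      # 1 + 8 + 28 + 56 = 93 joint actions
print(sysadmin_p_running(2, 2))   # both neighbors running: 0.95
```

Note that with all neighbors running the survival probability is 0.95, and it degrades toward 0.45 plus a small term as neighbors fail, which is what couples the exogenous events.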
A person arrives at a floor $f$, independently of the other floors, with probability Bernoulli($p_f$), where $p_f$ is drawn from Uniform(0.1, 0.3) for each floor. Each person gets into an elevator if it is at the same floor and has the same direction (up or down), and exits at the top or bottom floor according to their direction. Each person gets a reward of -1 when waiting at a floor and -1.5 when in an elevator that is moving in a direction opposite to their destination. There is no reward if the directions agree. Each elevator has three actions: move up or down by one floor, or flip its direction.

6.2 Experimental validation

In order to evaluate scaling with respect to the action space, we fix the size of the state space and measure time to convergence (Bellman error less than 0.1 with a discount factor of 0.9). Experiments were run on a single core of an Intel Core 2 Quad 2.83GHz with a 4GB memory limit. The charts denote OPI with $k$ steps of evaluation as OPI($k$), and MB-OPI with memory bound $M$ as MB-OPI($k$, $M$) (similarly FA-MPI($k$) and MB-MPI($k$, $M$)). In addition, we compare to symbolic value iteration:

Figure 8: Impact of policy evaluation in Elevators (4 floors, 2 elevators): Bellman error vs. CPU time for VI, OPI(2), and OPI(5).

Figure 9: Impact of memory bounding, for Inventory Control (8 shops) and the bidirectional ring (2 parallel actions, 8 to 12 computers), comparing OPI(5), FA-MPI(5), MB-OPI(5, 20k), and MB-MPI(5, 20k). EML denotes Exceeded Memory Limit.
the well-established baseline for factored states, SPUDD [1], and its extension to factored states and actions, FA-MPI(0). Since both are variants of VI, we denote the better of the two as VI in the charts.

```
              Compression in V (# parallel actions)     Compression in pi (# parallel actions)
Domain        2      3      4      5      6      7      2       3       4       5       6       7
IC(8)         0.06   0.03   0.03   0.02   0.02   0.02   0.28    0.36    0.35    0.20    0.09    0.03
Star(11)      0.67   0.58   0.50   0.40   0.37   0.35   1.8e-4  2.3e-4  2.1e-4  1.9e-4  1.4e-4  9.6e-5
Biring(10)    0.96   0.96   0.95   0.94   0.88   0.80   1.1e-3  1.3e-3  1.2e-3  1.1e-3  9.8e-4  7.4e-4
Uniring(10)   0.99   0.99   0.99   0.99   0.99   0.99   9.3e-4  1e-3    9.4e-4  8.2e-4  5.2e-4  2.9e-4
```

Table 1: Ratio of the size of the ADD representation to a tabular representation.

Impact of policy evaluation: We compare symbolic VI and OPI in Figure 6. For Inventory Control, as the number of parallel actions increases, SPUDD takes increasingly more time but FA-MPI(0) takes increasingly less time, giving VI a bell-shaped profile. An increase in the steps of evaluation in OPI(2) and OPI(5) leads to a significant speedup. For the SysAdmin domain, we tested three different topologies. For all of them, as the size of the action space increases, VI takes an increasing amount of time. OPI scales significantly better and does better with more steps of policy evaluation, suggesting that more lookahead is useful in this domain. In the Elevator Control domain (Figure 8), OPI(2) is significantly better than VI and OPI(5) is marginally better than OPI(2). Overall, we see that more evaluation helps, and that OPI is consistently better than VI.

Impact of pruning: We compare OPI vs. FA-MPI to assess the impact of pruning. Figure 7 shows that with increasing state and action spaces FA-MPI exceeds the memory limit (EML) whereas OPI does not, and that when both converge, OPI converges much faster. In Inventory Control, FA-MPI exceeds the memory limit on five out of the seven instances, whereas OPI converges in all cases. In SysAdmin, the plot shows the percentage of time by which FA-MPI exceeds OPI.
On the largest problem, FA-MPI exceeds the memory limit and is at least 150% slower than OPI. In Elevator Control, FA-MPI exceeds the memory limit while OPI does not, and FA-MPI is at least 250% slower.

Impact of memory-bounding: Even though memory bounding can mitigate the memory problem in FA-MPI, it can cause a large overhead in time, and FA-MPI can still exceed the limit due to intermediate steps in the exact policy backups. Figure 9 shows the effect of memory bounding. MB-OPI scales better than either MB-MPI or OPI. In the IC domain, MB-MPI is much worse than MB-OPI in time, and MB-MPI exceeds the memory limit in two instances. In the SysAdmin domain, the figure shows that combining pruning and memory-bounding is better than either one separately. A similar time profile is seen in the Elevators domain (results omitted).

Representation compactness: The main bottleneck to scalability beyond our current results is the growth of the value and policy diagrams with problem complexity, which is a function of the suitability of our ADD representation to the problem at hand. To illustrate this, Table 1 shows the compression provided by representing the optimal value functions and policies as ADDs versus tables. We observe orders of magnitude compression for representing policies, which shows that the ADDs are able to capture the rich structure in policies. The compression ratio for value functions is less impressive and surprisingly close to 1 for the Uniring domain. This shows that for these domains ADDs are less effective at capturing the structure of the value function. Possible future directions include better alternative symbolic representations as well as approximations.

7 Discussion

This paper presented symbolic variants of MPI that scale to large action spaces and that generalize and improve over state-of-the-art algorithms. The insight that the policy can be treated as a loose constraint within value iteration steps gives a new interpretation of MPI.
Our algorithm OPI computes some policy improvements during policy evaluation and is related to Asynchronous Policy Iteration [9]. Further scalability can be achieved by incorporating approximate value backups (e.g., similar to APRICODD [2]) as well as potentially more compact representations (e.g., Affine ADDs [3]). Another avenue for scalability is to use initial-state information to focus computation. Previous work [13] has studied theoretical properties of such approximations of MPI, but no efficient symbolic version exists. Developing such algorithms is an interesting direction for future work.

Acknowledgements

This work is supported by NSF under grant numbers IIS-0964705 and IIS-0964457.

References

[1] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic Planning Using Decision Diagrams. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI), 1999.
[2] Robert St-Aubin, Jesse Hoey, and Craig Boutilier. APRICODD: Approximate Policy Construction Using Decision Diagrams. In Advances in Neural Information Processing Systems (NIPS), 2001.
[3] Scott Sanner, William Uther, and Karina Valdivia Delgado. Approximate Dynamic Programming with Affine ADDs. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2010.
[4] Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tadepalli, and Roni Khardon. Planning in Factored Action Spaces with Symbolic Dynamic Programming. In Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2012.
[5] Martin L. Puterman and Moon Chirl Shin. Modified Policy Iteration Algorithms for Discounted Markov Decision Problems. Management Science, 1978.
[6] Craig Boutilier, Richard Dearden, and Moises Goldszmidt. Exploiting Structure in Policy Construction. In International Joint Conference on Artificial Intelligence (IJCAI), 1995.
[7] R. Iris Bahar, Erica A. Frohm, Charles M. Gaona, Gary D. Hachtel, Enrico Macii, Abelardo Pardo, and Fabio Somenzi.
Algebraic Decision Diagrams and their Applications. In International Conference on Computer-Aided Design (ICCAD), 1993.
[8] Chenggang Wang and Roni Khardon. Policy Iteration for Relational MDPs. arXiv preprint arXiv:1206.5287, 2012.
[9] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. 1996.
[10] Jason Pazis and Ronald Parr. Generalized Value Functions for Large Action Sets. In Proceedings of ICML, 2011.
[11] Scott Sanner. Relational Dynamic Influence Diagram Language (RDDL): Language Description. Unpublished manuscript, Australian National University, 2010.
[12] Carlos Guestrin, Daphne Koller, and Ronald Parr. Multiagent Planning with Factored MDPs. In Advances in Neural Information Processing Systems (NIPS), 2001.
[13] Bruno Scherrer, Victor Gabillon, Mohammad Ghavamzadeh, and Matthieu Geist. Approximate Modified Policy Iteration. In ICML, 2012.
Least Informative Dimensions

Fabian H. Sinz, Department for Neuroethology, Eberhard Karls University Tübingen, fabee@epagoge.de
Anna Stöckl, Department for Functional Zoology, Lund University, Sweden, Anna.Stockl@biol.lu.se
Jan Grewe, Department for Neuroethology, Eberhard Karls University Tübingen, jan.grewe@uni-tuebingen.de
Jan Benda, Department for Neuroethology, Eberhard Karls University Tübingen, jan.benda@uni-tuebingen.de

Abstract

We present a novel non-parametric method for finding a subspace of stimulus features that contains all information about the response of a system. Our method generalizes similar approaches to this problem, such as spike-triggered average, spike-triggered covariance, or maximally informative dimensions. Instead of maximizing the mutual information between features and responses directly, we use integral probability metrics in kernel Hilbert spaces to minimize the information between uninformative features and the combination of informative features and responses. Since estimators of these metrics access the data via kernels, are easy to compute, and exhibit good theoretical convergence properties, our method can easily be generalized to populations of neurons or spike patterns. By using a particular expansion of the mutual information, we can show that the informative features must contain all information if we can make the uninformative features independent of the rest.

1 Introduction

An important aspect of deciphering the neural code is to determine those stimulus features that populations of sensory neurons are most sensitive to. Approaches to this problem include white noise analysis [2, 14], in particular spike-triggered average [4] or spike-triggered covariance [3, 19], canonical correlation analysis or population receptive fields [12], generalized linear models [18, 15], and maximally informative dimensions [22].
All these techniques have in common that they optimize a statistical dependency measure between stimuli and spike responses over the choice of a linear subspace. The particular algorithms differ in the dimensionality of the subspace they extract (one- vs. multi-dimensional), the statistical measure they use (correlation, likelihood, relative entropy), and whether an extension to population responses is feasible or not. While the spike-triggered average uses correlation and is restricted to a single subspace, spike-triggered covariance and canonical correlation analysis can already extract multi-dimensional subspaces but are still restricted to second-order statistics. Maximally informative dimensions is the only technique of the above that can extract multiple dimensions that are informative also with respect to higher-order statistics. However, an extension to spike patterns or population responses is not straightforward because of the curse of dimensionality. Here we approach the problem from a different perspective and propose an algorithm that can extract a multi-dimensional subspace containing all relevant information about the neural responses $Y$ in terms of Shannon's mutual information (if such a subspace exists). Our method does not commit to a particular parametric model, and can easily be extended to spike patterns or population responses.

In general, the problem of finding the most informative subspace of the stimuli $X$ about the responses $Y$ can be described as finding an orthogonal matrix $Q$ (a basis for $\mathbb{R}^n$) that separates $X$ into informative and non-informative features $(U, V)^\top = QX$. Since $Q$ is orthogonal, the mutual information $I[X : Y]$ between $X$ and $Y$ can be decomposed as [5]
$$I[Y : X] = I[Y : U, V] = \mathbb{E}_{X,Y}\left[\log\frac{p(U, V, Y)}{p(U, V)\,p(Y)}\right] = I[Y : U] + \mathbb{E}_{Y,V}\left[\log\frac{p(Y, V \mid U)}{p(Y \mid U)\,p(V \mid U)}\right] = I[Y : U] + \mathbb{E}_U\big[I[Y \mid U : V \mid U]\big]. \qquad (1)$$
Since the two terms on the right hand side of equation (1) are always positive and sum to the mutual information between $Y$ and $X$, two ways to obtain maximally informative features $U$ about $Y$ would be to either maximize $I[Y : U]$ or to minimize $\mathbb{E}_U\big[I[Y \mid U : V \mid U]\big]$ via the choice of $Q$. The first possibility is along the lines of maximally informative dimensions [22] and involves direct estimation of the mutual information. The second possibility, which avoids direct estimation, has been proposed by Fukumizu and colleagues [5, 6] (we discuss both in Section 3). Here, we explore a third possibility, which trades practical advantages against a slightly more restrictive objective. The idea is to obtain maximally informative features $U$ by making $V$ as independent as possible from the combination of $U$ and $Y$. For this reason, we name our approach least informative dimensions (LID). Formally, least informative dimensions tries to minimize the mutual information between the pair $(Y, U)$ and $V$. Using the chain rule for multi-information we can write it as (see supplementary material)
$$I[Y, U : V] = I[Y : X] + I[U : V] - I[Y : U]. \qquad (2)$$
This means that minimizing $I[Y, U : V]$ is equivalent to maximizing $I[Y : U]$ while simultaneously minimizing $I[U : V]$. Note that $I[Y, U : V] = 0$ implies $I[U : V] = 0$. Therefore, if $Q$ can be chosen such that $I[Y, U : V] = 0$, equation (2) reduces to $I[Y : X] = I[Y : U]$, pushing all information about $Y$ into $U$. Since each new choice of $Q$ requires the estimation of the mutual information between (potentially high-dimensional) variables, direct optimization is hard or infeasible. For this reason, we resort to another dependency measure which is easier to estimate but shares its minimum with the mutual information; that is, it is zero if and only if the mutual information is zero. The objective is to choose $Q$ such that $(Y, U)$ and $V$ are independent in that dependency measure.
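For completeness, equation (2) follows from two applications of the chain rule for mutual information; this is a sketch of the step deferred to the supplementary material, using $I[Y : U, V] = I[Y : X]$, which holds because $Q$ is invertible:

```latex
\begin{align*}
I[Y : U, V] &= I[Y : U] + I[Y : V \mid U], \\
I[Y, U : V] &= I[U : V] + I[Y : V \mid U], \\
\text{hence}\quad I[Y, U : V] &= I[U : V] + I[Y : U, V] - I[Y : U] \\
  &= I[Y : X] + I[U : V] - I[Y : U].
\end{align*}
```

Subtracting the first identity from the second eliminates the common conditional term $I[Y : V \mid U]$, which is exactly what makes the objective estimable without conditioning.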
If we can find such a $Q$, then we know that $I[Y, U : V]$ is zero as well, which means that $U$ contains the most informative features in terms of the Shannon mutual information. This allows us to obtain maximally informative features without ever having to estimate a mutual information. The easier estimation procedure comes at the cost of only being able to link the alternative dependency measure to the mutual information when both of them are zero. If there is no $Q$ that achieves this, we will still get informative features in the alternative measure, but it is not clear how informative they are in terms of mutual information.

2 Least informative dimensions

This section describes how to efficiently find a $Q$ such that $I[Y, U : V] = 0$ (if such a $Q$ exists). Unless noted otherwise, $(U, V)^\top = QX$, where $U$ denotes the informative and $V$ the uninformative features. The mutual information is a special case of the relative entropy $D_{KL}[p \,\|\, q] = \mathbb{E}_{X\sim p}\left[\log\frac{p(X)}{q(X)}\right]$ between two distributions $p$ and $q$. While being linked to the rich theoretical background of Shannon information theory, the relative entropy is known to be hard to estimate [25]. Alternatives to the relative entropy of increasing practical interest are the integral probability metrics (IPM), defined as [25, 17]
$$\gamma_{\mathcal{F}}(X : Z) = \sup_{f\in\mathcal{F}}\left|\mathbb{E}_X[f(X)] - \mathbb{E}_Z[f(Z)]\right|. \qquad (3)$$
Intuitively, the metric in equation (3) searches for a function $f$ which can detect a difference between the distributions of the two random variables $X$ and $Z$. If no such witness function can be found, the
A kernel k : X × X →R is a symmetric function such that the matrix Kij = k (xi, xj) is positive (semi)-definite for every selection of points x1, ..., xm ∈X [21]. In that case, the functions k (·, x) are elements of a reproducing kernel Hilbert space (RKHS) of functions H. This space is endowed with a dot product ⟨·, ·⟩H with the so called reproducing property ⟨k (·, x) , f⟩H = f (x) for f ∈H. In particular, ⟨k (·, x) , k (·, x′)⟩H = k (x, x′). When setting F in equation (3) to be the unit ball in H, then the IPM can be computed in closed form as the norm of the difference between the mean functions in H [7, 10, 8, 26]: γH (X : Z) = ∥EX [k (·, X)] −EZ [k (·, Z)]∥H (4) = EX,X′ k X, X′ −2EX,Z [k (X, Z)] + EZ,Z′ k Z, Z′ 1 2 , where the first equality is derived in [7], and second equality uses the bi-linearity of the dot product and the reproducing property of k. Furthermore, (X, X′) ∼PX ×PX and (Z, Z′) ∼PZ ×PZ are two independent random variables drawn from the marginal distributions of X and Z, respectively. The function EX [k (·, X)] is an embedding of the distribution of X into the RKHS H via X 7→EX [k (·, X)]. If this map is injective, that is, if it uniquely represents the probability distribution of X, then equation (4) is zero if and only if the probability distributions of X and X′ are the same. Kernels with that property are called characteristic in analogy to the characteristic function φX (t) 7→EX exp it⊤X [26, 27]. This means that for characteristic kernels MMD is zero exactly if the relative entropy DKL [p∥q] is zero as well. Since the mutual information is the relative entropy between the joint distribution and the products of the marginals, we can use MMD to search for a Q such that γH (PY ,U,V : PY ,U × PV ) is zero1, which then implies that I [Y , U : V ] = 0 as well. The finite sample version of (4) is simply given by replacing the expectations with the empirical mean (and possibly some bias correction) [7, 10, 8]. 
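A biased empirical estimator of the squared MMD with a Gaussian kernel takes only a few lines of NumPy. This is a generic sketch of equation (4) with the expectations replaced by empirical means; the bandwidth `sigma` is a free parameter here, not a value from the paper.

```python
import numpy as np

def gauss_gram(A, B, sigma):
    # pairwise Gaussian kernel values k(a_i, b_j) = exp(-||a_i - b_j||^2 / sigma^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def mmd2(X, Z, sigma=1.0):
    """Biased estimator of squared MMD:
    mean k(X, X') - 2 mean k(X, Z) + mean k(Z, Z')."""
    return (gauss_gram(X, X, sigma).mean()
            - 2 * gauss_gram(X, Z, sigma).mean()
            + gauss_gram(Z, Z, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
print(mmd2(X, X))        # identical samples: the three means cancel to 0.0
print(mmd2(X, X + 5.0))  # shifted samples: strictly positive
```

For identical samples the three empirical means are equal, so the biased estimator cancels exactly; a shifted copy of the sample drives the cross term toward zero and the estimate away from zero.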
The estimation of $\gamma_\mathcal{H}$ therefore only involves summation over three kernel matrices and can be done in a few lines of code. Unlike for the relative entropy, the empirical estimation of MMD is therefore much more feasible. Furthermore, the residual error of the empirical estimator can be shown to decrease on the order of $1/\sqrt{m}$, where $m$ is the number of data points [25]. Note in particular that this rate does not depend on the dimensionality of the data.

Objective function: The objective function for our optimization problem now has the following form. We transform input examples $x_i$ into features $u_i$ and $v_i$ via $(u_i, v_i) = Qx_i$. Then we use a kernel $k\big((u_i, v_i, y_i), (u_j, v_j, y_j)\big)$ to compute and minimize MMD with respect to the choice of $Q$. In order to do that efficiently, a few adaptations are required. First, without loss of generality, we minimize the squared MMD instead of MMD itself:
$$\gamma^2_\mathcal{H}(Z_1, Z_2) = \mathbb{E}_{Z_1,Z_1'}\big[k(Z_1, Z_1')\big] - 2\,\mathbb{E}_{Z_1,Z_2}\big[k(Z_1, Z_2)\big] + \mathbb{E}_{Z_2,Z_2'}\big[k(Z_2, Z_2')\big], \qquad (5)$$
where $Z_1 = (Y, U, V) \sim P_{Y,U,V}$ and $Z_2 = (Y, U, V) \sim P_{Y,U}\times P_V$. Second, in order to get samples from $P_{Y,U}\times P_V$, we assume that our kernel takes the form $k\big((u_i, v_i, y_i), (u_j, v_j, y_j)\big) = k_1\big((u_i, y_i), (u_j, y_j)\big)\cdot k_2(v_i, v_j)$. For this special case, one can incorporate the independence assumption between $(U, Y)$ and $V$ directly by using the fact that for independent random variables the expectation of the product equals the product of the expectations, that is,
$$\mathbb{E}\big[k_1\big((u_i, y_i), (u_j, y_j)\big)\cdot k_2(v_i, v_j)\big] = \mathbb{E}\big[k_1\big((u_i, y_i), (u_j, y_j)\big)\big]\,\mathbb{E}\big[k_2(v_i, v_j)\big].$$
This special case of MMD is equivalent to the Hilbert-Schmidt Independence Criterion (HSIC) [9, 23] and can be computed as
$$\hat\gamma^2_{hs} = \frac{1}{(m-1)^2}\,\mathrm{tr}\big(K_1 H K_2 H\big), \qquad (6)$$
where $K_1$ and $K_2$ denote the matrices of pairwise kernel values between the data sets $\{(u_i, y_i)\}_{i=1}^m$ and $\{v_i\}_{i=1}^m$, respectively, and $H_{ij} = \delta_{ij} - m^{-1}$.

¹ With some abuse of notation, we write MMD as a function of the probability measures.
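Equation (6) translates directly into NumPy. This is a sketch of the estimator only; kernel choice and bias correction are left aside, and the constant-feature example is our own illustration.

```python
import numpy as np

def hsic(K1, K2):
    """Empirical HSIC, tr(K1 H K2 H) / (m-1)^2, with H = I - 1/m (eq. 6)."""
    m = K1.shape[0]
    H = np.eye(m) - 1.0 / m
    return np.trace(K1 @ H @ K2 @ H) / (m - 1) ** 2

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 1))
K = np.exp(-(x - x.T) ** 2)       # RBF Gram matrix of x
K_const = np.ones((100, 100))     # Gram matrix of a constant feature

print(hsic(K, K))                 # a variable depends on itself: > 0
print(hsic(K, K_const))           # a constant is independent of anything: ~ 0
```

Centering with $H$ annihilates the constant Gram matrix, which is why the second value vanishes up to floating-point error; this is the same mechanism that makes HSIC zero for independent factorizing kernels in expectation.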
Note, however, that one could in principle also optimize (5) for a non-factorizing kernel by simply shuffling the $(u_i, y_i)$ and $v_i$ across examples. We can also use shuffling to assess whether the optimal value $\hat\gamma^2_{hs}$ found during the optimization is significantly different from zero, by comparing the value to a null distribution over $\hat\gamma^2_{hs}$ obtained from datasets in which the $(u_i, y_i)$ and $v_i$ have been permuted across examples.

Minimization procedure and gradients: For optimizing (6) with respect to $Q$ we use gradient descent over the orthogonal group $SO(n)$. The optimization can be carried out by computing the unconstrained gradient $\nabla_Q\gamma$ of the objective function with respect to $Q$ (treating $Q$ as an ordinary matrix), projecting that gradient onto the tangent space of $SO(n)$, and performing a line search along the gradient direction. We now present the necessary formulae to implement the optimization in a modular fashion. We first show how to compute the gradient $\nabla_Q\gamma$ in terms of the gradients $\nabla_{u_i,v_i}\hat\gamma^2_{hs}$, then we show how to compute the $\nabla_{u_i,v_i}\hat\gamma^2_{hs}$ in terms of derivatives of kernel functions, and finally demonstrate how the formulae change when approximating the kernel matrices with an incomplete Cholesky decomposition. Given the unconstrained gradient $\nabla_Q\gamma$, the projection onto the tangent space is given by $\zeta = Q\nabla_Q\gamma^\top Q - \nabla_Q\gamma$ [13, eq. (22)]. The function is then minimized by performing a line search along $\pi(Q + t\zeta)$, where $\pi$ is the projection onto $SO(n)$, which can easily be computed via a singular value decomposition of $Q + t\zeta$ and setting the singular values to one [13, prop. 7]. This means that all we need for the gradient descent on $SO(n)$ is the unconstrained gradient $\nabla_Q\gamma$. This gradient takes the form of a sum of outer products [16, eq. (20)]
$$\nabla_Q\hat\gamma^2_{hs} = \sum_{i=1}^m \frac{\partial\hat\gamma^2_{hs}}{\partial(u_i, v_i)}\cdot x_i^\top = J^\top \Xi, \qquad J = \left[\frac{\partial\hat\gamma^2_{hs}}{\partial(u_i, v_i)}\right]_i,$$
where the matrix $\Xi$ contains the stimuli $x_i$ in its rows.
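The tangent-space projection and the SVD retraction can be sketched as follows; this is a generic implementation of the two formulae above, with a random matrix standing in for the actual HSIC gradient (the gradient `g` here is hypothetical).

```python
import numpy as np

def tangent_project(Q, g):
    """Project an unconstrained gradient g onto the tangent space of the
    orthogonal group at Q: zeta = Q g^T Q - g."""
    return Q @ g.T @ Q - g

def retract(M):
    """Project a matrix back onto the orthogonal matrices by setting all
    singular values to one (pi in the text)."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

rng = np.random.default_rng(2)
Q = retract(rng.normal(size=(4, 4)))   # a random orthogonal starting point
g = rng.normal(size=(4, 4))            # stand-in for the objective's gradient
zeta = tangent_project(Q, g)
Q_new = retract(Q + 0.1 * zeta)        # one line-search step along zeta
```

A quick check on the two operations: $Q^\top\zeta = \nabla^\top Q - Q^\top\nabla$ is skew-symmetric, so $\zeta$ is indeed tangent to the manifold at $Q$, and the retraction returns an orthogonal matrix for any step size $t$.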
The first $k$ columns $J^{(u)}$, corresponding to the dimensions of the features $u_i$, and the last $n-k$ columns $J^{(v)}$, corresponding to the dimensions of the features $v_i$, are given by
$$J^{(u)}_\eta = \frac{2}{(m-1)^2}\,\mathrm{diag}\Big(H K_2 H D^{(u)\top}_\eta\Big) \qquad\text{and}\qquad J^{(v)}_\eta = \frac{2}{(m-1)^2}\,\mathrm{diag}\Big(H K_1 H D^{(v)\top}_\eta\Big),$$
where $\big[D^{(u)}_\eta\big]_{ij} = \frac{\partial}{\partial u_{i\eta}}\, k\big((u_i, v_i, y_i), (u_j, v_j, y_j)\big)$ contains the partial derivatives of the kernel with respect to the $\eta$th dimension of $u$ (and analogously for $v$) in the first argument (see supplementary material for the derivation).

Efficient implementation with incomplete Cholesky decomposition of the kernel matrix: So far, the evaluation of HSIC requires the computation of two $m\times m$ kernel matrices in each step. For larger datasets this can quickly become computationally prohibitive. In order to speed up computation, we approximate the kernel matrices by an incomplete Cholesky decomposition $K = LL^\top$, where $L \in \mathbb{R}^{m\times\ell}$ is a "tall" matrix [1]. In that case, HSIC can be computed much faster, as the trace of a product of two $\ell\times\ell$ matrices, because $\mathrm{tr}(K_1 H K_2 H) = \mathrm{tr}\big(L_1^\top H L_2\, L_2^\top H L_1\big)$, where $HL_k$ can be efficiently computed by centering $L_k$ on its mean. Also in this case, the matrix $J$ can be computed efficiently in terms of derivatives of sub-matrices of the kernel matrix (see supplementary material for the exact formulae).

3 Related work

Kernel dimension reduction in regression [5, 6]: Fukumizu and colleagues find maximally informative features $U$ by minimizing $\mathbb{E}_U\big[I[V \mid U : Y \mid U]\big]$ in equation (1) via conditional kernel covariance operators. They show that the covariance operator equals zero if and only if $Y$ is conditionally independent of $V$ given $U$, that is, $Y \perp\!\!\!\perp V \mid U$. In that case, $U$ carries all information about $Y$. Although their approach is closest to ours, it differs in a few key aspects: in contrast to our approach, their objective involves the inversion of a (potentially large) kernel matrix, which needs additional regularization in order to be invertible.
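The low-rank trace identity is easy to verify numerically: with $K_k = L_k L_k^\top$ and the idempotent centering matrix $H$, the $m\times m$ trace reduces to products of $\ell\times\ell$ matrices. The check below uses random low-rank factors in place of an actual incomplete Cholesky routine, which is all the identity itself requires.

```python
import numpy as np

rng = np.random.default_rng(3)
m, l = 300, 10
L1 = rng.normal(size=(m, l))   # stand-ins for incomplete Cholesky factors
L2 = rng.normal(size=(m, l))
K1, K2 = L1 @ L1.T, L2 @ L2.T
H = np.eye(m) - 1.0 / m

# naive evaluation: O(m^2) memory for each Gram matrix
full = np.trace(K1 @ H @ K2 @ H)

# low-rank evaluation: centering the columns of L_k equals applying H,
# and H is idempotent, so tr(K1 H K2 H) = tr((HL1)^T (HL2) (HL2)^T (HL1))
HL1, HL2 = L1 - L1.mean(0), L2 - L2.mean(0)
low_rank = np.trace(HL1.T @ HL2 @ (HL2.T @ HL1))

assert np.allclose(full, low_rank)
```

Only $m\times\ell$ arrays and $\ell\times\ell$ products appear in the second evaluation, which is the source of the speedup claimed in the text.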
A conceptual difference is that we are optimizing a slightly more restrictive problem, because their objective does not attempt to make $U$ independent of $V$ as well. However, this will not make a difference in many practical cases, since many stimulus distributions are Gaussian, for which the dependencies between $U$ and $V$ can be removed by prewhitening the stimulus data before training LID. In that case $I[U : V] = 0$ for every choice of $Q$, and equation (2) becomes equivalent to maximizing the mutual information between $U$ and $Y$. The advantage of our formulation of the problem is that it allows us to detect and quantify independence by comparing the current $\hat\gamma_{hs}$ to its null distribution obtained by shuffling the $(y_i, u_i)$ against the $v_i$ across examples. This is hardly possible in the conditional case. Also note that for spherically symmetric data $I[U : V] = \text{const.}$ for every choice of $Q$. In that case equation (2) becomes equivalent to maximizing $I[Y : U]$. However, a residual redundancy remains, which would show up when comparing $\hat\gamma^2_{hs}$ to its null distribution. Finally, the use of kernel covariance operators is bound to kernels that factorize. In principle, our method is also applicable to non-factorizing kernels if we use $\gamma_\mathcal{H}$ instead of $\gamma_{hs}$ and obtain the samples from the product distribution $P_{Y,U}\times P_V$ via shuffling.

Maximally informative dimensions [22]: Sharpee and colleagues maximize the relative entropy
$$I_{\text{spike}} = D_{KL}\big[p(v^\top s \mid \text{spike}) \,\big\|\, p(v^\top s)\big]$$
between the distribution of stimuli projected onto the informative dimensions given a spike and the marginal distribution of the projection. This relative entropy is the part of the mutual information carried by the arrival of a single spike, since $I\big[v^\top s : \{\text{spike}, \text{no spike}\}\big] = p(\text{spike})\cdot I_{\text{spike}} + p(\text{no spike})\cdot I_{\text{no spike}}$. Their method is also completely non-parametric and captures higher-order dependencies between a stimulus and a single spike.
However, by focusing on single spikes and the spike-triggered density only, it neglects the dependencies between spikes and the information carried by the silence of the neuron [28]. Additionally, the generalization to spike patterns or population responses is non-trivial, because the information between the projected stimuli and spike patterns $\varpi_1, \ldots, \varpi_\ell$ becomes $I\big[v^\top s : \varpi\big] = \sum_i p(\varpi_i) \cdot I_{\varpi_i}$. This requires the estimation of a conditional distribution $p\big(v^\top s \mid \varpi_i\big)$ for each pattern $\varpi_i$, which can quickly become prohibitive when the number of patterns grows exponentially.

4 Experiments

In all the experiments below, we demonstrate the validity of our methods on controlled artificial examples and on P-unit recordings from electric fish. We use an RBF kernel on the $v_i$ and a tensor RBF kernel on the $(u_i, y_i)$:
$$k(v_i, v_j) = \exp\left(-\frac{\|v_i - v_j\|^2}{\sigma^2}\right) \quad \text{and} \quad k\big((u_i, y_i), (u_j, y_j)\big) = \exp\left(-\frac{\|u_i y_i^\top - u_j y_j^\top\|^2}{\sigma^2}\right).$$
The derivatives of the kernels can be found in the supplementary material. Unless noted otherwise, the $\sigma$ were chosen to be the median of pairwise Euclidean distances between data points. In all artificial experiments, $Q$ was chosen randomly.

Linear Non-Linear Poisson Model (LNP) In this experiment, we trained LID on a simple linear nonlinear Poisson (LNP) neuron $y_i \sim \mathrm{Poisson}\big(\lfloor \langle w, x_i \rangle - \theta \rfloor_+\big)$ with an exponentially decaying filter and a rectifying non-linearity (see Figure 1, left). We used $m = 5000$ data points $x_i$ from a 20-dimensional standard normal distribution $\mathcal{N}(0, I)$ as input. The offset was chosen such that approximately 35% of the spike counts $y_i$ were non-zero. We used one informative and 19 non-informative dimensions, and set $\sigma = 1$ for the tensor kernel. After optimization, the first dimension $q_1$ of $Q$ converged to the filter $w$ (Figure 1). We compared the HSIC values $\hat\gamma_{hs}\big[\{(y_i, u_i)\}_{i=1,\ldots,m} : \{v_i\}_{i=1,\ldots,m}\big]$ before and after the optimization to their null distribution obtained by shuffling. Before the optimization, the dependence of $(Y, U)$ and $V$ is correctly detected (Figure 1, left, insets). After convergence, the actual HSIC value lies to the left of the null distribution's domain. Since the appropriate test for independence would be one-sided, the null hypothesis "$(Y, U)$ is independent of $V$" would not be rejected in this case.

Figure 1: Left: LNP model. The informative dimension (gray during optimization, black after optimization) converges to the true filter of an LNP model (blue line). Before optimization, $(Y, U)$ and $V$ are dependent, as shown by the left inset (null distribution obtained via shuffling in gray; the dashed line shows the actual HSIC value). After the optimization (right inset), the HSIC value is even below the null distribution. Right: Two-state neuron. LID correctly identifies the subspace (blue dashed) in which the two true filters (solid black) reside, since projections of the filters onto the subspace (red dashed) closely resemble the original filters.

Two state neuron In this experiment, we simulated a neuron with two states that were both attained in 50% of the trials (see Figure 1, right). This time, the output consisted of four "bins" whose statistics varied depending on the state. In the first (steady rate) state, the four bins contained spike counts drawn from an LNP neuron with an exponentially decaying filter as above. In the second (burst) state, the first two bins were drawn from a Poisson distribution with a fixed base rate independent of the stimulus. The second two bins were drawn from an LNP neuron with a modulated exponential filter and higher gain. We used $m = 8000$ input stimuli from a 20-dimensional standard normal distribution. We used two informative dimensions and set the $\sigma$ of the tensor kernel to two times the median of the pairwise distances. LID correctly identified the subspace associated with the two filters also in this case (Figure 1, right).

Artificial complex cell In a second experiment, we estimated the two-dimensional subspace associated with an artificial complex cell.
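The energy-model construction described next (a quadrature pair of filters driving a Poisson rate $\lambda_i = \langle w_1, x_i\rangle^2 + \langle w_2, x_i\rangle^2$) can be sketched as follows; the Gabor shapes and constants are our own hypothetical choices, not the authors':

```python
import numpy as np

def gabor_quadrature(n=10, freq=1.5):
    """Hypothetical 90-degree phase shifted (cosine/sine) Gabor pair."""
    t = np.linspace(-2, 2, n)
    env = np.exp(-t ** 2)
    w1 = env * np.cos(freq * np.pi * t)
    w2 = env * np.sin(freq * np.pi * t)
    return w1 / np.linalg.norm(w1), w2 / np.linalg.norm(w2)

rng = np.random.default_rng(0)
w1, w2 = gabor_quadrature()
X = rng.standard_normal((8000, 10))       # stimuli x_i
rate = (X @ w1) ** 2 + (X @ w2) ** 2      # energy model: depends only on projection energy
y = rng.poisson(rate)                     # Poisson spike counts
```

Because the rate depends only on the energy of the projection, any rotation of $(w_1, w_2)$ within their span produces the same cell, which is why only the subspace (not the individual filters) can be recovered.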
We generated a quadrature pair $w_1$ and $w_2$ of two 10-dimensional filters (see Figure 2, left). We used $m = 8000$ input points from a standard normal distribution. Responses were generated from a Poisson distribution with the rate given by $\lambda_i = \langle w_1, x_i \rangle^2 + \langle w_2, x_i \rangle^2$. This led to about 34% non-zero neural responses. When using two informative subspaces, LID was able to identify the subspace correctly (Figure 2, left). When comparing the HSIC value against the null distribution found via shuffling, the final value indicated no further dependencies. When only a one-dimensional subspace was used (Figure 2, right), LID did not converge to the correct subspace. Importantly, the HSIC value after optimization was clearly outside the support of the null distribution, thereby correctly indicating residual dependencies.

Figure 2: Artificial complex cell. Left: The original filters are 90° phase shifted Gabor filters which form an orthogonal basis for a two-dimensional subspace. After optimization, the two informative dimensions of LID (the first two rows of $Q$) converge to that subspace and also form a pair of 90° phase shifted filters (note that even if the filters are not the same, they span the same subspace). Comparing the HSIC values before and after optimization shows that this subspace contains the relevant information (left and right inset). Right: If only a one-dimensional informative subspace is used, the filter only partially converges to the subspace. After optimization, a comparison of the HSIC value to the null distribution obtained via shuffling indicates residual dependencies which are not explained by the one-dimensional subspace (left and right inset).

P-Unit recordings from weakly electric fish Finally, we applied our method to P-unit recordings from the weakly electric fish Eigenmannia virescens. These weakly electric fish generate a dipole-like electric field which changes polarity with a frequency of about 300 Hz. Sensors in the skin of the fish are tuned to this carrier frequency and respond to amplitude changes caused by close-by objects with conductive properties different from those of water [20]. In the present recordings, the immobilized fish was stimulated with 10 s of 300–600 Hz low-pass filtered full-field frozen Gaussian white noise amplitude modulations of its own field. Neural activity was recorded intracellularly from the P-unit afferents. Spikes were binned with 1 ms precision. We selected $m = 8400$ random time points in the spike response and the corresponding preceding 20 ms of the input (20 dimensions). We used the same kernels as in the experiment on the LNP model. We initialized the first row of $Q$ with the normalized spike triggered average (STA; Figure 3, left, red trace). We neither pre-whitened the data for computing the STA nor for the optimization of LID. Unlike a random feature (Figure 3, left, blue trace), the spike triggered average already achieves HSIC values within the null distribution (Figure 3, left and middle inset).

Figure 3: Most informative feature for a weakly electric fish P-unit: A random filter (blue trace) exhibits HSIC values that are clearly outside the domain of the null distribution (left inset). Using the spike triggered average (red trace) moves the HSIC values of the first feature of $Q$ already inside the null distribution (middle inset). Further optimization with LID refines the feature (black trace) and brings the HSIC values closer to zero (right inset). After optimization, the informative feature $U$ is independent of the features $V$, because the first row and column of the covariance matrix of the transformed Gaussian input show no correlations. The fact that one informative feature is sufficient to bring the HSIC values inside the null distribution indicates that a single subspace captures all information conveyed by these sensory neurons.
The most informative feature corresponding to $U$ looks very similar to the STA but shifts the HSIC value deeper into the domain of the null distribution (Figure 3, right inset). This indicates that one single subspace in the input is sufficient to carry all information between the input and the neural response.

5 Discussion

Here we presented a non-parametric method to estimate a subspace of the stimulus space that contains all information about a response variable $Y$. Even though our method is completely generic and applicable to arbitrary input-output pairs of data, we focused on the application in the context of sensory neuroscience. The advantage of the generic approach is that $Y$ can in principle be anything from spike counts, to spike patterns or population responses. Since our method finds the most informative dimensions by making the complement of those dimensions as independent from the data as possible, we termed it least informative dimensions (LID). We use the Hilbert-Schmidt independence criterion to minimize the dependencies between the uninformative features and the combination of informative features and outputs. This measure is easy to implement, avoids the need to estimate mutual information, and its estimator has good convergence properties independent of the dimensionality of the data. Even though our approach only estimates the informative features and not mutual information itself, it can help to estimate mutual information by reducing the number of dimensions. As in the approach by Fukumizu and colleagues, it might be that no $Q$ exists such that $I[Y, U : V] = 0$. In that situation, the price to pay for an easier measure is that it is hard to make definite statements about the informativeness of the features $U$ in terms of the Shannon information, since $\gamma_H = I[Y, U : V] = 0$ is the point that connects $\gamma_H$ to the mutual information.
As demonstrated in the experiments, we can detect this case by comparing the actual value of $\hat\gamma_H$ to an empirical null distribution of $\hat\gamma_H$ values obtained by shuffling the $v_i$ against the $(u_i, y_i)$ pairs. However, if $\gamma_H \neq 0$, theoretical upper bounds on the mutual information are unfortunately not available. In fact, using results from [25] and Pinsker's inequality, one can show that $\gamma_H^2$ bounds the mutual information from below. One might now be tempted to think that maximizing $\gamma_H[Y, U]$ might be a better way to find informative features. While this might be a way to get some informative features [24], it is not possible to link the features to informativeness in terms of Shannon mutual information, because the point that builds the bridge between the two dependency measures is where both of them are zero. Anywhere else the bound may not be tight, so the maximally informative features in terms of $\gamma_H$ and in terms of mutual information can be different. Another problem our approach shares with many algorithms that detect higher-order dependencies is the non-convexity of the objective function. In practice, we found that the degree to which this poses a problem very much depends on the problem at hand. For instance, while the subspaces of the LNP or the two state neuron were detected reliably, the two-dimensional subspace of the artificial complex cell seems to pose a harder problem. It is likely that the choice of kernel has an influence on the landscape of the objective function. We plan to explore this relationship in more detail in the future. In general, a good initialization of $Q$ helps to get close to the global optimum. Beyond that, however, integral probability metric approaches to maximally informative dimensions offer a great chance to avoid many problems associated with direct estimation of mutual information, and to extend it to much more interesting output structures than single spikes.
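The shuffling construction used throughout the experiments is simple to reproduce. Below is our own toy sketch (data, kernel widths, and permutation count are illustrative, not the authors' settings): permuting one variable across examples destroys any dependence while preserving both marginals, so HSIC values recomputed under random permutations provide the empirical null against which the actual value is compared.

```python
import numpy as np

def rbf_gram(Z, sigma):
    """RBF Gram matrix of one set of points."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def hsic(K1, K2):
    """Biased HSIC estimator tr(K1 H K2 H) / (m - 1)^2."""
    m = K1.shape[0]
    H = np.eye(m) - 1.0 / m
    return np.trace(K1 @ H @ K2 @ H) / (m - 1) ** 2

def shuffle_null(K1, K2, n_perm=200, seed=0):
    """Empirical null: shuffle the second variable across examples."""
    rng = np.random.default_rng(seed)
    m = K1.shape[0]
    vals = []
    for _ in range(n_perm):
        p = rng.permutation(m)
        vals.append(hsic(K1, K2[np.ix_(p, p)]))  # permute rows and columns together
    return np.array(vals)

# toy demo: y depends on x, so the actual HSIC should lie far outside the null
rng = np.random.default_rng(1)
x = rng.standard_normal((80, 1))
y = x + 0.1 * rng.standard_normal((80, 1))
Kx, Ky = rbf_gram(x, 1.0), rbf_gram(y, 1.0)
actual, null = hsic(Kx, Ky), shuffle_null(Kx, Ky)
```

For the one-sided test discussed in the paper, independence is rejected when the actual value exceeds a high quantile of the null; values at or below the null are consistent with independence.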
Acknowledgements Fabian Sinz would like to thank Lucas Theis and Sebastian Gerwinn for helpful discussions and comments on the manuscript. This study is part of the research program of the Bernstein Center for Computational Neuroscience, Tübingen, funded by the German Federal Ministry of Education and Research (BMBF; FKZ: 01GQ1002).

References

[1] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. In Proceedings of the 22nd International Conference on Machine Learning (ICML '05), pages 33–40, New York, NY, USA, 2005. ACM Press.
[2] E. de Boer and P. Kuyper. Triggered correlation, 1968.
[3] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26(3):695–702, 2000.
[4] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[5] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5(1):73–99, 2004.
[6] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimension reduction in regression. Annals of Statistics, 37(4):1871–1905, 2009.
[7] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520, Cambridge, MA, 2007. MIT Press.
[8] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723–773, 2012.
[9] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In S. Jain, H. U. Simon, and E. Tomita, editors, Algorithmic Learning Theory, pages 63–77. Springer, Berlin/Heidelberg, 2005.
[10] A. Gretton, K. Fukumizu, Z. Harchaoui, and B. K. Sriperumbudur.
A fast, consistent kernel two-sample test. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems, pages 673–681. Curran, Red Hook, NY, USA, 2009.
[11] J. D. Hunter. Matplotlib: a 2D graphics environment. Computing in Science & Engineering, 9(3):90–95, 2007.
[12] J. Macke, G. Zeck, and M. Bethge. Receptive fields without spike-triggering. In Advances in Neural Information Processing Systems 20, pages 1–8, 2007.
[13] J. H. Manton. Optimization algorithms exploiting unitary constraints. IEEE Transactions on Signal Processing, 50(3):635–650, 2002.
[14] P. Z. Marmarelis and K. Naka. White-noise analysis of a neuron chain: an application of the Wiener theory. Science, 175(27):1276–1278, 1972.
[15] P. McCullagh and J. A. Nelder. Generalized Linear Models, second edition. Chapman and Hall, 1989.
[16] T. P. Minka. Old and new matrix algebra useful for statistics. MIT Media Lab note, pages 1–19, 2000.
[17] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
[18] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15(4):243–262, 2004.
[19] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: an information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6(4):414–428, 2006.
[20] H. Scheich, T. H. Bullock, and R. H. Hamstra. Coding properties of two classes of afferent nerve fibers: high-frequency electroreceptors in the electric fish, Eigenmannia. Journal of Neurophysiology, 36(1):39–60, 1973.
[21] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, volume 98 of Adaptive Computation and Machine Learning. MIT Press, 2001.
[22] T. Sharpee, N. C. Rust, and W. Bialek.
Analyzing neural responses to natural signals: maximally informative dimensions. Neural Computation, 16(2):223–250, 2004.
[23] A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Algorithmic Learning Theory: 18th International Conference, pages 13–31. Springer-Verlag, Berlin/Heidelberg, 2007.
[24] L. Song, A. Smola, A. Gretton, J. Bedo, and K. Borgwardt. Feature selection via dependence maximization. Journal of Machine Learning Research, 13(May):1393–1434, 2012.
[25] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, and G. R. G. Lanckriet. On integral probability metrics, φ-divergences and binary classification. Technical report, arXiv, 2009.
[26] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In Proceedings of the 21st Annual Conference on Learning Theory, pages 111–122. Omnipress, 2008.
[27] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11(1):48, 2010.
[28] R. S. Williamson, M. Sahani, and J. W. Pillow. Equating information-theoretic and likelihood-based methods for neural dimensionality reduction. Technical report, arXiv, 2013.
A memory frontier for complex synapses Subhaneil Lahiri and Surya Ganguli Department of Applied Physics, Stanford University, Stanford CA sulahiri@stanford.edu, sganguli@stanford.edu Abstract An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises the fundamental question, how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Moreover, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function. 1 Introduction It is widely thought that our very ability to remember the past over long time scales depends crucially on our ability to modify synapses in our brain in an experience dependent manner. Classical models of synaptic plasticity model synaptic efficacy as an analog scalar value, denoting the size of a postsynaptic potential injected into one neuron from another. 
Theoretical work has shown that such models have a reasonable, extensive memory capacity, in which the number of long term associations that can be stored by a neuron is proportional to its number of afferent synapses [1–3]. However, recent experimental work has shown that many synapses are more digital than analog; they cannot robustly assume an infinite continuum of analog values, but rather can only take on a finite number of distinguishable strengths, a number that can be as small as two [4–6] (though see [7]). This one simple modification leads to a catastrophe in memory capacity: classical models with digital synapses, when operating in a palimpsest mode in which the ongoing storage of new memories can overwrite previous memories, have a memory capacity proportional to the logarithm of the number of synapses [8, 9]. Intuitively, when synapses are digital, the storage of a new memory can flip a population of synaptic switches, thereby rapidly erasing previous memories stored in the same synaptic population. This result indicates that the dominant theoretical basis for the storage of long term memories in modifiable synaptic switches is flawed. Recent work [10–12] has suggested that a way out of this logarithmic catastrophe is to expand our theoretical conception of a synapse from a single scalar value to an entire stochastic dynamical system in its own right. This conceptual expansion is further necessitated by the experimental reality that synapses contain within them immensely complex molecular signaling pathways, with many internal molecular functional states (e.g. see [4, 13, 14]). While externally, synaptic efficacy could be digital, candidate patterns of electrical activity leading to potentiation or depression could yield transitions between these internal molecular states without necessarily inducing an associated change in synaptic efficacy.
This form of synaptic change, known as metaplasticity [15, 16], can allow the probability of synaptic potentiation or depression to acquire a rich dependence on the history of prior changes in efficacy, thereby potentially improving memory capacity. Theoretical studies of complex, metaplastic synapses have focused on analyzing the memory performance of a limited number of very specific molecular dynamical systems, characterized by a number of internal states in which potentiation and depression each induce a specific set of allowable transitions between states (e.g. see Figure 1 below). While these models can vastly outperform simple binary synaptic switches, these analyses leave open several deep and important questions. For example, how does the structure of a synaptic dynamical system determine its memory performance? What are the fundamental limits of memory performance over the space of all possible synaptic dynamical systems? What is the structural organization of synaptic dynamical systems that achieve these limits? Moreover, from an experimental perspective, it is unlikely that all synapses can be described by a single canonical synaptic model; just like the case of neurons, there is an incredible diversity of molecular networks underlying synapses both across species and across brain regions within a single organism [17]. In order to elucidate the functional contribution of this diverse molecular complexity to learning and memory, it is essential to move beyond the analysis of specific models and instead develop a general theory of learning and memory for complex synapses. Moreover, such a general theory of complex synapses could aid in development of novel artificial memory storage devices. Here we initiate such a general theory by proving upper bounds on the memory curve associated with any synaptic dynamical system, within the well established ideal observer framework of [10, 11, 18]. 
Along the way we develop principles based on first passage time theory to order the structure of synaptic dynamical systems and relate this structure to memory performance. We summarize our main results in the discussion section. 2 Overall framework: synaptic models and their memory curves In this section, we describe the class of models of synaptic plasticity that we are studying and how we quantify their memory performance. In the subsequent sections, we will find upper bounds on this performance. We use a well established formalism for the study of learning and memory with complex synapses (see [10, 11, 18]). In this approach, electrical patterns of activity corresponding to candidate potentiating and depressing plasticity events occur randomly and independently at all synapses at a Poisson rate r. These events reflect possible synaptic changes due to either spontaneous network activity, or the storage of new memories. We let f pot and f dep denote the fraction of these events that are candidate potentiating or depressing events respectively. Furthermore, we assume our synaptic model has M internal molecular functional states, and that a candidate potentiating (depotentiating) event induces a stochastic transition in the internal state described by an M × M discrete time Markov transition matrix Mpot (Mdep). In this framework, the states of different synapses will be independent, and the entire synaptic population can be fully described by the probability distribution across these states, which we will indicate with the row-vector p(t). Thus the i’th component of p(t) denotes the fraction of the synaptic population in state i. Furthermore, each state i has its own synaptic weight, wi, which we take, in the worst case scenario, to be restricted to two values. After shifting and scaling these two values, we can assume they are ±1, without loss of generality. 
We also employ an "ideal observer" approach to the memory readout, where the synaptic weights are read directly. This provides an upper bound on the quality of any readout using neural activity. For any single memory, stored at time $t = 0$, we assume there will be an ideal pattern of synaptic weights across a population of $N$ synapses, the $N$-element vector $\vec{w}_{\mathrm{ideal}}$, that is $+1$ at all synapses that experience a candidate potentiation event, and $-1$ at all synapses that experience a candidate depression event at the time of memory storage. We assume that any pattern of synaptic weights close to $\vec{w}_{\mathrm{ideal}}$ is sufficient to recall the memory. However, the actual pattern of synaptic weights at some later time, $t$, will change to $\vec{w}(t)$ due to further modifications from the storage of subsequent memories. We can use the overlap between these, $\vec{w}_{\mathrm{ideal}} \cdot \vec{w}(t)$, as a measure of the quality of the memory. As $t \to \infty$, the system will return to its steady state distribution, which will be uncorrelated with the memory stored at $t = 0$.

Figure 1: Models of complex synapses. (a) The cascade model of [10], showing transitions between states of high/low synaptic weight (red/blue circles) due to potentiation/depression (solid red/dashed blue arrows). (b) The serial model of [12]. (c) The memory curves of these two models, showing the decay of the signal-to-noise ratio (to be defined in §2) as subsequent memories are stored.

The probability distribution of the quantity $\vec{w}_{\mathrm{ideal}} \cdot \vec{w}(\infty)$ can be used as a "null model" for comparison. The extent to which the memory has been stored is described by a signal-to-noise ratio (SNR) [10, 11]:
$$\mathrm{SNR}(t) = \frac{\langle \vec{w}_{\mathrm{ideal}} \cdot \vec{w}(t)\rangle - \langle \vec{w}_{\mathrm{ideal}} \cdot \vec{w}(\infty)\rangle}{\sqrt{\mathrm{Var}\big(\vec{w}_{\mathrm{ideal}} \cdot \vec{w}(\infty)\big)}}. \quad (1)$$
The noise in the denominator is essentially $\sqrt{N}$.
There is a correction when potentiation and depression are imbalanced, but this will not affect the upper bounds that we will discuss below and will be ignored in the subsequent formulae. A simple average memory curve can be derived as follows. All of the preceding plasticity events, prior to $t = 0$, will put the population of synapses in its steady-state distribution, $p^\infty$. The memory we are tracking at $t = 0$ will change the internal state distribution to $p^\infty M^{\mathrm{pot}}$ (or $p^\infty M^{\mathrm{dep}}$) in those synapses that experience a candidate potentiation (or depression) event. As the potentiating/depressing nature of the subsequent memories is independent of $\vec{w}_{\mathrm{ideal}}$, we can average over all sequences, resulting in the evolution of the probability distribution:
$$\frac{dp(t)}{dt} = r\, p(t)\, W^{\mathrm{F}}, \quad \text{where } W^{\mathrm{F}} = f^{\mathrm{pot}} M^{\mathrm{pot}} + f^{\mathrm{dep}} M^{\mathrm{dep}} - I. \quad (2)$$
Here $W^{\mathrm{F}}$ is a continuous time transition matrix that models the process of forgetting the memory stored at time $t = 0$ due to random candidate potentiation/depression events occurring at each synapse due to the storage of subsequent memories. Its stationary distribution is $p^\infty$. This results in the following SNR:
$$\mathrm{SNR}(t) = \sqrt{N}\, 2 f^{\mathrm{pot}} f^{\mathrm{dep}}\, p^\infty \big(M^{\mathrm{pot}} - M^{\mathrm{dep}}\big)\, e^{rtW^{\mathrm{F}}} w. \quad (3)$$
A detailed derivation of this formula can be found in the supplementary material. We will frequently refer to this function as the memory curve. It can be thought of as the excess fraction of synapses (relative to equilibrium) that maintain their ideal synaptic strength at time $t$, as dictated by the stored memory at time $t = 0$. Much of the previous work on these types of complex synaptic models has focused on understanding the memory curves of specific models, or choices of $M^{\mathrm{pot/dep}}$. Two examples of these models are shown in Figure 1. We see that they have different memory properties. The serial model performs relatively well at one particular timescale, but it performs poorly at other times. The cascade model does not perform quite as well at that time, but it maintains its performance over a wider range of timescales.
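Equations (2) and (3) are straightforward to evaluate numerically. The sketch below is our own illustration, not the authors' code: it builds a serial chain in the style of Figure 1(b) with deterministic transitions and $f^{\mathrm{pot}} = f^{\mathrm{dep}} = 1/2$, obtains $p^\infty$ as the left null vector of $W^{\mathrm{F}}$, and evaluates the memory curve by diagonalizing $W^{\mathrm{F}}$:

```python
import numpy as np

def serial_model(M):
    """Serial chain: potentiation hops one state right, depression one state left."""
    Mpot, Mdep = np.zeros((M, M)), np.zeros((M, M))
    for i in range(M - 1):
        Mpot[i, i + 1] = 1.0
        Mdep[i + 1, i] = 1.0
    Mpot[M - 1, M - 1] = 1.0       # reflecting boundaries
    Mdep[0, 0] = 1.0
    return Mpot, Mdep

def memory_curve(Mpot, Mdep, w, ts, fpot=0.5, r=1.0, N=1.0):
    """SNR(t) = sqrt(N) 2 f_pot f_dep p_inf (Mpot - Mdep) exp(r t W_F) w  (eq. 3)."""
    fdep = 1.0 - fpot
    WF = fpot * Mpot + fdep * Mdep - np.eye(len(w))     # eq. (2)
    evals, V = np.linalg.eig(WF)
    Vinv = np.linalg.inv(V)
    lam, U = np.linalg.eig(WF.T)                        # stationary distribution:
    pinf = np.real(U[:, np.argmin(np.abs(lam))])        # left null vector of WF
    pinf /= pinf.sum()
    front = 2 * fpot * fdep * (pinf @ (Mpot - Mdep))
    return np.array([np.sqrt(N) * np.real(
        front @ V @ np.diag(np.exp(r * t * evals)) @ Vinv @ w) for t in ts])

Mpot, Mdep = serial_model(6)
w = np.array([-1., -1., -1., 1., 1., 1.])
snr = memory_curve(Mpot, Mdep, w, ts=[0.0, 1.0, 10.0, 200.0])
```

For this six-state serial chain the curve starts at $\mathrm{SNR}(0) = \sqrt{N}/3$ and decays over a timescale set by the smallest nonzero eigenvalue of $W^{\mathrm{F}}$.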
In this work, rather than analyzing specific models, we take a different approach, in order to obtain a more general theory. We consider the entire space of these models and find upper bounds on the memory capacity of any of them. The space of models with a fixed number of internal states $M$ is parameterized by the pair of $M \times M$ discrete time stochastic transition matrices $M^{\mathrm{pot}}$ and $M^{\mathrm{dep}}$, in addition to $f^{\mathrm{pot/dep}}$. The parameters must satisfy the following constraints:
$$M^{\mathrm{pot/dep}}_{ij} \in [0, 1], \quad f^{\mathrm{pot/dep}} \in [0, 1], \quad p^\infty W^{\mathrm{F}} = 0, \quad w_i = \pm 1, \quad \sum_j M^{\mathrm{pot/dep}}_{ij} = 1, \quad f^{\mathrm{pot}} + f^{\mathrm{dep}} = 1, \quad \sum_i p^\infty_i = 1. \quad (4)$$
The upper bounds on $M^{\mathrm{pot/dep}}_{ij}$ and $f^{\mathrm{pot/dep}}$ follow automatically from the other constraints. The critical question is: what do these constraints imply about the space of achievable memory curves in (3)? To answer this question, especially for limits on achievable memory at finite times, it will be useful to employ the eigenmode decomposition:
$$W^{\mathrm{F}} = \sum_a -q_a u_a v_a, \quad v_a u_b = \delta_{ab}, \quad W^{\mathrm{F}} u_a = -q_a u_a, \quad v_a W^{\mathrm{F}} = -q_a v_a. \quad (5)$$
Here $q_a$ are the negatives of the eigenvalues of the forgetting process $W^{\mathrm{F}}$, $u_a$ are the right (column) eigenvectors and $v_a$ are the left (row) eigenvectors. This decomposition allows us to write the memory curve as a sum of exponentials,
$$\mathrm{SNR}(t) = \sqrt{N} \sum_a I_a e^{-rt/\tau_a}, \quad (6)$$
where $I_a = (2f^{\mathrm{pot}}f^{\mathrm{dep}})\, p^\infty \big(M^{\mathrm{pot}} - M^{\mathrm{dep}}\big) u_a v_a w$ and $\tau_a = 1/q_a$. We can then ask the question: what are the constraints on these quantities, namely the eigenmode initial SNRs, $I_a$, and time constants, $\tau_a$, implied by the constraints in (4)? We will derive some of these constraints in the next section.

3 Upper bounds on achievable memory capacity

In the previous section, in (3), we described an analytic expression for a memory curve as a function of the structure of a synaptic dynamical system, described by the pair of stochastic transition matrices $M^{\mathrm{pot/dep}}$.
Since the performance measure for memory is an entire memory curve, and not just a single number, there is no universal scalar notion of optimal memory in the space of synaptic dynamical systems. Instead there are tradeoffs between storing proximal and distal memories; in the specific models considered so far [10–12], attempts to increase memory at late (early) times by changing $M^{\mathrm{pot/dep}}$ often incur a performance loss in memory at early (late) times. Thus our end goal, achieved in §4, is to derive an envelope memory curve in the SNR-time plane, a curve that forms an upper bound on the entire memory curve of any model. In order to achieve this goal, in this section we must first derive upper bounds, over the space of all possible synaptic models, on two different scalar functions of the memory curve: its initial SNR, and the area under the memory curve. In the process of upper-bounding the area, we will develop an essential framework to organize the structure of synaptic dynamical systems based on first passage time theory.

3.1 Bounding initial SNR

We now give an upper bound on the initial SNR,
$$\mathrm{SNR}(0) = \sqrt{N}\, 2 f^{\mathrm{pot}} f^{\mathrm{dep}}\, p^\infty \big(M^{\mathrm{pot}} - M^{\mathrm{dep}}\big) w, \quad (7)$$
over all possible models, and also find the class of models that saturate this bound. A useful quantity is the equilibrium probability flux between two disjoint sets of states, $A$ and $B$:
$$\Phi_{AB} = \sum_{i \in A} \sum_{j \in B} r\, p^\infty_i W^{\mathrm{F}}_{ij}. \quad (8)$$
The initial SNR is closely related to the flux from the states with $w_i = -1$ to those with $w_j = +1$ (see supplementary material):
$$\mathrm{SNR}(0) \le \frac{4\sqrt{N}\,\Phi_{-+}}{r}. \quad (9)$$
This inequality becomes an equality if potentiation never decreases the synaptic weight and depression never increases it, which should be a property of any sensible model. To maximize this flux, potentiation from a weak state must be guaranteed to end in a strong state, and depression must do the reverse. An example of such a model is shown in Figure 2(a,b).
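The flux relation (8)–(9) and its saturation are easy to check numerically. The model below is our own hypothetical 4-state example in the style of Figure 2(a,b): potentiation from either weak state always lands in a strong state, depression is the mirror image, and $f^{\mathrm{pot}} = f^{\mathrm{dep}} = 1/2$:

```python
import numpy as np

# hypothetical 4-state model: states 0, 1 weak (w = -1), states 2, 3 strong (w = +1);
# potentiation always lands in a strong state, depression always lands in a weak state
Mpot = np.array([[0., 0., .5, .5],
                 [0., 0., .5, .5],
                 [0., 0., 1., 0.],
                 [0., 0., 0., 1.]])
Mdep = Mpot[::-1, ::-1].copy()       # mirror image: depression sends strong -> weak
w = np.array([-1., -1., 1., 1.])
fpot = fdep = 0.5
r, N = 1.0, 1.0

WF = fpot * Mpot + fdep * Mdep - np.eye(4)
lam, U = np.linalg.eig(WF.T)
pinf = np.real(U[:, np.argmin(np.abs(lam))])
pinf /= pinf.sum()                   # stationary distribution of the forgetting process

snr0 = np.sqrt(N) * 2 * fpot * fdep * (pinf @ (Mpot - Mdep) @ w)         # eq. (7)
weak, strong = w < 0, w > 0
flux = r * (pinf[weak] @ WF[np.ix_(weak, strong)].sum(axis=1))           # eq. (8)
```

Here $\mathrm{SNR}(0)$ equals both $\sqrt{N}$ and $4\sqrt{N}\,\Phi_{-+}/r$, i.e. the inequality (9) is tight for this model.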
These models have a property known as "lumpability" (see [19, §6.3] for the discrete-time version and [20, 21] for continuous time). They are completely equivalent to (i.e. have the same memory curve as) a two-state model with transition probabilities equal to 1, as shown in Figure 2(c).

Figure 2: Synaptic models that maximize initial SNR. (a) For potentiation, all transitions starting from a weak state lead to a strong state, and the probabilities for all transitions leaving a given weak state sum to 1. (b) Depression is similar to potentiation, but with strong and weak interchanged. (c) The equivalent two-state model, with transition probabilities under potentiation and depression equal to one.

This two-state model has the equilibrium distribution \mathbf{p}^\infty = (f^{dep}, f^{pot}) and its flux is given by \Phi^{-+} = r f^{pot} f^{dep}. This is maximized when f^{pot} = f^{dep} = \tfrac{1}{2}, leading to the upper bound:

\mathrm{SNR}(0) \leq \sqrt{N}. \qquad (10)

We note that while this model has high initial SNR, it also has very fast memory decay, with a timescale \tau \sim \tfrac{1}{r}. As the synapse is very plastic, the initial memory is encoded very easily, but subsequent memories also overwrite it rapidly. This is one example of the tradeoff between optimizing memory at early versus late times.

3.2 Imposing order on internal states through first passage times

Our goal of understanding the relationship between structure and function in the space of all possible synaptic models is complicated by the fact that this space contains many different possible network topologies, encoded in the nonzero matrix elements of M^{pot/dep}. To systematically analyze this entire space, we develop an important organizing principle using the theory of first passage times in the stochastic process of forgetting, described by W^F. The mean first passage time matrix, T_{ij}, is defined as the average time it takes to reach state j for the first time, starting from state i. The diagonal elements are defined to be zero.
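Mean first passage times of a finite chain can be computed in closed form from the fundamental matrix of Kemeny and Snell [19]. The sketch below (with a random chain standing in for the forgetting process, an illustrative choice) computes T_{ij} and also previews the constancy property exploited next: the p^∞-weighted row sums of T are the same from every starting state.

```python
import numpy as np

# Mean first passage times T_ij for a discrete-time chain, via the
# fundamental matrix Z = (I - P + 1 pi)^{-1} of Kemeny & Snell [19].
# The random chain P is an illustrative stand-in for the forgetting process.
rng = np.random.default_rng(1)
M = 5
P = rng.random((M, M))
P /= P.sum(axis=1, keepdims=True)

# stationary distribution pi: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

Z = np.linalg.inv(np.eye(M) - P + np.outer(np.ones(M), pi))
T = (np.diag(Z)[None, :] - Z) / pi[None, :]   # T[i, j], with T[i, i] = 0

# the pi-weighted sum of first passage times is the same from every state
eta = T @ pi
assert np.allclose(eta, eta[0])
```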
A remarkable theorem we will exploit is that the quantity

\eta \equiv \sum_j T_{ij} \, p^\infty_j, \qquad (11)

known as Kemeny's constant (see [19, §4.4]), is independent of the starting state i. Intuitively, (11) states that the average time it takes to reach any state, weighted by its equilibrium probability, is independent of the starting state, implying a hidden constancy inherent in any stochastic process. In the context of complex synapses, we can define the partial sums

\eta^+_i = \sum_{j \in +} T_{ij} \, p^\infty_j, \qquad \eta^-_i = \sum_{j \in -} T_{ij} \, p^\infty_j. \qquad (12)

These can be thought of as the average times it takes to reach the strong and weak states, respectively. Using these definitions, we can then impose an order on the states by arranging them in order of decreasing \eta^+_i or increasing \eta^-_i. Because \eta^+_i + \eta^-_i = \eta is independent of i, the two orderings are the same. In this order, which depends sensitively on the structure of M^{pot/dep}, states later (to the right in the figures below) can be considered to be more potentiated than states earlier (to the left in the figures below), despite the fact that they have the same synaptic efficacy. In essence, in this order, a state is considered to be more potentiated if the average time it takes to reach all the strong-efficacy states is shorter. We will see that synaptic models that optimize various measures of memory have an exceedingly simple structure when, and only when, their states are arranged in this order.¹

¹Note that we do not need to worry about the order of the \eta^\pm_i changing during the optimization: necessary conditions for a maximum only require that there is no infinitesimal perturbation that increases the area. Therefore we need only consider an infinitesimal neighborhood of the model, in which the order will not change.

Figure 3: Perturbations that increase the area. (a) Perturbations that increase elements of M^{pot} above the diagonal and decrease the corresponding elements of M^{dep}. It can no longer be used when M^{dep} is lower triangular, i.e.
depression must move synapses to "more depressed" states. (b) Perturbations that decrease elements of M^{pot} below the diagonal and increase the corresponding elements of M^{dep}. It can no longer be used when M^{pot} is upper triangular, i.e. potentiation must move synapses to "more potentiated" states. (c) A perturbation that decreases "shortcut" transitions and increases the bypassed "direct" transitions. It can no longer be used when there are only nearest-neighbor "direct" transitions.

3.3 Bounding area

Now consider the area under the memory curve:

A = \int_0^\infty dt \, \mathrm{SNR}(t). \qquad (13)

We will find an upper bound on this quantity, as well as the model that saturates this bound. The first passage time theory introduced in the previous section becomes useful because the area has a simple expression in terms of the quantities introduced in (12) (see supplementary material):

A = \sqrt{N} (4 f^{pot} f^{dep}) \sum_{ij} p^\infty_i \left( M^{pot}_{ij} - M^{dep}_{ij} \right) \left( \eta^+_i - \eta^+_j \right) = \sqrt{N} (4 f^{pot} f^{dep}) \sum_{ij} p^\infty_i \left( M^{pot}_{ij} - M^{dep}_{ij} \right) \left( \eta^-_j - \eta^-_i \right). \qquad (14)

With the states in the order described above, we can find perturbations of M^{pot/dep} that always increase the area, while leaving the equilibrium distribution p^\infty unchanged. Some of these perturbations are shown in Figure 3; see supplementary material for details. For example, in Figure 3(a), for two states i on the left and j on the right, with j being more "potentiated" than i (i.e. \eta^+_i > \eta^+_j), we have proven that increasing M^{pot}_{ij} and decreasing M^{dep}_{ij} leads to an increase in area. The only thing that can prevent these perturbations from increasing the area is when they require the decrease of a matrix element that has already been set to 0. This determines the topology (non-zero transition probabilities) of the model with maximal area. It is of the form shown in Figure 4(c), with potentiation moving one step to the right and depression moving one step to the left. Any other topology would allow some class of perturbations (e.g. in Figure 3) to further increase the area.
As these perturbations do not change the equilibrium distribution, this means that the area of any model is bounded by that of a linear chain with the same equilibrium distribution. The area of a linear chain model can be expressed directly in terms of its equilibrium state distribution p^\infty, yielding the following upper bound on the area of any model with the same p^\infty (see supplementary material):

A \leq \frac{2 \sqrt{N}}{r} \sum_k \left( k - \sum_j j \, p^\infty_j \right) p^\infty_k \, w_k = \frac{2 \sqrt{N}}{r} \sum_k \left| k - \sum_j j \, p^\infty_j \right| p^\infty_k, \qquad (15)

where we chose w_k = \mathrm{sgn}\big[ k - \sum_j j \, p^\infty_j \big]. We can then maximize this by pushing all of the equilibrium distribution symmetrically to the two end states. This can be done by reducing the transition probabilities out of these states, as in Figure 4(c). This makes it very difficult to exit these states once they have been entered. The resulting area is

A \leq \frac{\sqrt{N} (M - 1)}{r}. \qquad (16)

This analytical result is similar to a numerical result found in [18] under a slightly different, information-theoretic measure of memory performance.

The "sticky" end states result in very slow decay of memory, but they also make it difficult to encode the memory in the first place, since only a small fraction of synapses are able to change synaptic efficacy during the storage of a new memory. Thus models that maximize area optimize memory at late times, at the expense of early times.

4 Memory curve envelope

Now we will look at the implications of the upper bounds found in the previous section for the SNR at finite times. As argued in (6), the memory curve can be written in the form

\mathrm{SNR}(t) = \sqrt{N} \sum_a \mathcal{I}_a \, e^{-rt/\tau_a}. \qquad (17)

The upper bounds on the initial SNR, (10), and the area, (16), imply the following constraints on the parameters \{\mathcal{I}_a, \tau_a\}:

\sum_a \mathcal{I}_a \leq 1, \qquad \sum_a \mathcal{I}_a \tau_a \leq M - 1. \qquad (18)

We are not claiming that these are a complete set of constraints: not every set \{\mathcal{I}_a, \tau_a\} that satisfies these inequalities will actually be achievable by a synaptic model. However, any set that violates either inequality will definitely not be achievable.
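The area bound (16) can itself be checked numerically. The sketch below (with an illustrative random-model ensemble of our own choosing) computes the area from the eigenmode expansion, A = (√N/r) Σ_a I_a/q_a, and compares it to √N(M−1)/r:

```python
import numpy as np

# Check the area bound (16), A <= sqrt(N)(M-1)/r, on random models.
# The area is summed over eigenmodes: A = (sqrt(N)/r) sum_a I_a / q_a.
rng = np.random.default_rng(2)
N, M, r = 100, 4, 1.0
f_pot = f_dep = 0.5
w = np.array([-1.0, -1.0, 1.0, 1.0])

def area(M_pot, M_dep):
    W_F = f_pot * M_pot + f_dep * M_dep - np.eye(M)
    evals, evecs = np.linalg.eig(W_F.T)
    p_inf = np.real(evecs[:, np.argmin(np.abs(evals))])
    p_inf /= p_inf.sum()
    lam, U = np.linalg.eig(W_F)               # lam_a = -q_a
    V = np.linalg.inv(U)                      # rows are left eigenvectors
    I_a = 2 * f_pot * f_dep * (p_inf @ (M_pot - M_dep) @ U) * (V @ w)
    keep = np.abs(lam) > 1e-10                # drop the equilibrium mode
    return float(np.real(np.sqrt(N) / r * np.sum(I_a[keep] / (-lam[keep]))))

for _ in range(50):
    A_ = rng.random((M, M)); B_ = rng.random((M, M))
    a = area(A_ / A_.sum(1, keepdims=True), B_ / B_.sum(1, keepdims=True))
    assert a <= np.sqrt(N) * (M - 1) / r + 1e-7    # bound (16)
```

The equilibrium mode contributes nothing to the area, since (M^pot − M^dep) annihilates the all-ones right eigenvector, so dropping it is exact.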
Now we can pick some fixed time, t_0, and maximize the SNR at that time with respect to the parameters \{\mathcal{I}_a, \tau_a\}, subject to the constraints above. This always results in a single nonzero \mathcal{I}_a; in essence, optimizing memory at a single time requires a single exponential. The resulting optimal memory curve, along with the achieved memory at the chosen time, depends on t_0 as follows:

t_0 \leq \frac{M-1}{r} \;\Longrightarrow\; \mathrm{SNR}(t) = \sqrt{N} e^{-rt/(M-1)} \;\Longrightarrow\; \mathrm{SNR}(t_0) = \sqrt{N} e^{-rt_0/(M-1)},
t_0 \geq \frac{M-1}{r} \;\Longrightarrow\; \mathrm{SNR}(t) = \frac{\sqrt{N}(M-1) \, e^{-t/t_0}}{r t_0} \;\Longrightarrow\; \mathrm{SNR}(t_0) = \frac{\sqrt{N}(M-1)}{e \, r t_0}. \qquad (19)

Both the initial SNR bound and the area bound are saturated at early times. At late times, only the area bound is saturated. The function SNR(t_0) (the green curve in Figure 4(a)) forms a memory curve envelope with late-time power-law decay \sim t_0^{-1}. No synaptic model can have an SNR that is greater than this at any time. We can use this to find an upper bound on the memory lifetime, \tau(\epsilon), by finding the point at which the envelope crosses \epsilon:

\tau(\epsilon) \leq \frac{\sqrt{N}(M-1)}{\epsilon \, e \, r}, \qquad (20)

where we assume N > (\epsilon e)^2. Intriguingly, both the lifetime and the memory envelope grow linearly with the number of internal states M, and increase as the square root of the number of synapses N.

This leaves the question of whether this bound is achievable. At any time, can we find a model whose memory curve touches the envelope? The red curves in Figure 4(a) show the closest we have come to the envelope with actual models, by repeated numerical optimization of SNR(t_0) over M^{pot/dep} with random initialization, and by hand-designed models. We see that at early, but not late, times there is a gap between the upper bound that we can prove and what we can achieve with actual models. There may be other models we haven't found that could beat the ones we have and come closer to our proven envelope. However, we suspect that the area constraint is not the bottleneck for optimizing memory at times less than O(M/r).
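The piecewise envelope (19) is straightforward to evaluate; a minimal sketch, with default parameters chosen to match the N = 100, M = 12 of Figure 4:

```python
import numpy as np

# Memory-curve envelope (19): exponential for t0 <= (M-1)/r, then a
# power law ~ 1/t0. Defaults match the N = 100, M = 12 of Figure 4.
def envelope(t0, N=100, M=12, r=1.0):
    t0 = np.asarray(t0, dtype=float)
    early = np.sqrt(N) * np.exp(-r * t0 / (M - 1))
    late = np.sqrt(N) * (M - 1) / (np.e * r * np.maximum(t0, 1e-300))
    return np.where(t0 <= (M - 1) / r, early, late)
```

The two branches meet continuously at t_0 = (M − 1)/r, where both equal √N/e; solving envelope(t_0) = ε on the late branch recovers the lifetime bound (20).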
We believe there is some other constraint that prevents models from approaching the envelope, and we are currently exploring several mathematical conjectures for the precise form of this constraint in order to obtain a potentially tighter envelope. Nevertheless, we have proven rigorously that no model's memory curve can ever exceed this envelope, and that it is at least tight for late times, longer than O(M/r), where models of the form in Figure 4(c) can come close to the envelope.

Figure 4: The memory curve envelope for N = 100, M = 12. (a) An upper bound on the SNR at any time is shown in green. The red dashed curve shows the result of numerical optimization of synaptic models with random initialization. The solid red curve shows the highest SNR we have found with hand-designed models. At early times these models are of the form shown in (b), with different numbers of states and all transition probabilities equal to 1. At late times they are of the form shown in (c), with different values of ε. The model shown in (c) also saturates the area bound (16) in the limit ε → 0.

5 Discussion

We have initiated the development of a general theory of learning and memory with complex synapses, allowing for an exploration of the entire space of complex synaptic models, rather than analyzing individual models one at a time. In doing so, we have obtained several new mathematical results delineating the functional limits of memory achievable by synaptic complexity, and the structural characterization of synaptic dynamical systems that achieve these limits.
In particular, operating within the ideal observer framework of [10, 11, 18], we have shown that for a population of N synapses with M internal states: (a) the initial SNR of any synaptic model cannot exceed √N, and any model that achieves this bound is equivalent to a binary synapse; (b) the area under the memory curve of any model cannot exceed that of a linear chain model with the same equilibrium distribution; (c) both the area and the memory lifetime of any model cannot exceed O(√N M), and the model that achieves this limit has a linear chain topology with only nearest-neighbor transitions; (d) we have derived an envelope memory curve in the SNR-time plane that cannot be exceeded by the memory curve of any model, and models that approach this envelope for times greater than O(M/r) are linear chain models; and (e) this late-time envelope is a power law proportional to O(√N M/(rt)), indicating that synaptic complexity can strongly enhance the limits of achievable memory.

This theoretical study opens up several avenues for further inquiry. In particular, the tightness of our envelope for early times, less than O(M/r), remains an open question, and we are currently pursuing several conjectures. We have also derived memory-constrained envelopes by asking, within the space of models that achieve a given SNR at a given time, what the maximal SNR achievable at other times is. If these two times are beyond a threshold separation, optimal constrained models require two exponentials. It would be interesting to systematically analyze the space of models that achieve good memory at multiple times, to understand their structural organization and how they give rise to multiple exponentials, leading to power-law memory decays.
Finally, it would be interesting to design physiological experiments in order to perform optimal systems identification of potential Markovian dynamical systems hiding within biological synapses, given measurements of pre- and post-synaptic spike trains along with changes in post-synaptic potentials. Then, given our theory, we could match this measured synaptic model to optimal models to understand for which timescales of memory, if any, biological synaptic dynamics may be tuned. In summary, we hope that a deeper theoretical understanding of the functional role of synaptic complexity, initiated here, will help advance our understanding of the neurobiology of learning and memory, aid in the design of engineered memory circuits, and lead to new mathematical theorems about stochastic processes.

Acknowledgements

We thank the Sloan, Genentech, Burroughs-Wellcome, and Swartz foundations for support. We thank Larry Abbott, Marcus Benna, Stefano Fusi, Jascha Sohl-Dickstein and David Sussillo for useful discussions.

References

[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. U.S.A. 79 (1982) no. 8, 2554–2558.
[2] D. J. Amit, H. Gutfreund, and H. Sompolinsky, "Spin-glass models of neural networks," Phys. Rev. A 32 (Aug, 1985) 1007–1018.
[3] E. Gardner, "The space of interactions in neural network models," Journal of Physics A: Mathematical and General 21 (1988) no. 1, 257.
[4] T. V. P. Bliss and G. L. Collingridge, "A synaptic model of memory: long-term potentiation in the hippocampus," Nature 361 (Jan, 1993) 31–39.
[5] C. C. H. Petersen, R. C. Malenka, R. A. Nicoll, and J. J. Hopfield, "All-or-none potentiation at CA3-CA1 synapses," Proc. Natl. Acad. Sci. U.S.A. 95 (1998) no. 8, 4732–4737.
[6] D. H. O'Connor, G. M. Wittenberg, and S. S.-H. Wang, "Graded bidirectional synaptic plasticity is composed of switch-like unitary events," Proc. Natl. Acad. Sci. U.S.A. 102 (2005) no. 27, 9679–9684.
[7] R.
Enoki, Y. ling Hu, D. Hamilton, and A. Fine, "Expression of Long-Term Plasticity at Individual Synapses in Hippocampus Is Graded, Bidirectional, and Mainly Presynaptic: Optical Quantal Analysis," Neuron 62 (2009) no. 2, 242–253.
[8] D. J. Amit and S. Fusi, "Constraints on learning in dynamic synapses," Network: Computation in Neural Systems 3 (1992) no. 4, 443–464.
[9] D. J. Amit and S. Fusi, "Learning in neural networks with material synapses," Neural Computation 6 (1994) no. 5, 957–982.
[10] S. Fusi, P. J. Drew, and L. F. Abbott, "Cascade models of synaptically stored memories," Neuron 45 (Feb, 2005) 599–611.
[11] S. Fusi and L. F. Abbott, "Limits on the memory storage capacity of bounded synapses," Nat. Neurosci. 10 (Apr, 2007) 485–493.
[12] C. Leibold and R. Kempter, "Sparseness Constrains the Prolongation of Memory Lifetime via Synaptic Metaplasticity," Cerebral Cortex 18 (2008) no. 1, 67–77.
[13] D. S. Bredt and R. A. Nicoll, "AMPA Receptor Trafficking at Excitatory Synapses," Neuron 40 (2003) no. 2, 361–379.
[14] M. P. Coba, A. J. Pocklington, M. O. Collins, M. V. Kopanitsa, R. T. Uren, S. Swamy, M. D. Croning, J. S. Choudhary, and S. G. Grant, "Neurotransmitters drive combinatorial multistate postsynaptic density networks," Sci Signal 2 (2009) no. 68, ra19.
[15] W. C. Abraham and M. F. Bear, "Metaplasticity: the plasticity of synaptic plasticity," Trends in Neurosciences 19 (1996) no. 4, 126–130.
[16] J. M. Montgomery and D. V. Madison, "State-Dependent Heterogeneity in Synaptic Depression between Pyramidal Cell Pairs," Neuron 33 (2002) no. 5, 765–777.
[17] R. D. Emes and S. G. Grant, "Evolution of Synapse Complexity and Diversity," Annual Review of Neuroscience 35 (2012) no. 1, 111–131.
[18] A. B. Barrett and M. C. van Rossum, "Optimal learning rules for discrete synapses," PLoS Comput. Biol. 4 (Nov, 2008) e1000230.
[19] J. Kemeny and J. Snell, Finite Markov Chains. Springer, 1960.
[20] C. Burke and M.
Rosenblatt, "A Markovian function of a Markov chain," The Annals of Mathematical Statistics 29 (1958) no. 4, 1112–1122.
[21] F. Ball and G. F. Yeo, "Lumpability and Marginalisability for Continuous-Time Markov Chains," Journal of Applied Probability 30 (1993) no. 3, 518–528.
Data-driven Distributionally Robust Polynomial Optimization Martin Mevissen IBM Research—Ireland martmevi@ie.ibm.com Emanuele Ragnoli IBM Research—Ireland eragnoli@ie.ibm.com Jia Yuan Yu IBM Research—Ireland jy@osore.ca

Abstract

We consider robust optimization for polynomial optimization problems where the uncertainty set is a set of candidate probability density functions. This set is a ball around a density function estimated from data samples, i.e., it is data-driven and random. Polynomial optimization problems are inherently hard due to nonconvex objectives and constraints. However, we show that by employing polynomial and histogram density estimates, we can introduce robustness with respect to distributional uncertainty sets without making the problem harder. We show that the optimum of the distributionally robust problem is the limit of a sequence of tractable semidefinite programming relaxations. We also give finite-sample consistency guarantees for the data-driven uncertainty sets. Finally, we apply our model and solution method in a water network optimization problem.

1 Introduction

For many optimization problems, the objective and constraint functions are not adequately modeled by linear or convex functions (e.g., physical phenomena such as fluid or gas flow, energy conservation, etc.). Non-convex polynomial functions are needed to describe the model accurately. The resulting polynomial optimization problems are hard in general. Another salient feature of real-world problems is uncertainty in the parameters of the problem (e.g., due to measurement errors, fundamental principles, or incomplete information), and the need for optimal solutions to be robust against worst-case realizations of the uncertainty. Robust optimization and polynomial optimization are already important topics in machine learning and operations research. In this paper, we combine the polynomial and uncertain features and consider robust polynomial optimization.
We introduce a new notion of data-driven distributional robustness: the uncertain problem parameter is a probability distribution from which samples can be observed. Consequently, it is natural to take as the uncertainty set a set of functions, such as a norm ball around an estimated probability distribution. This approach gives solutions that are less conservative than classical robust optimization with a set for the uncertain parameters. It is easy to see that the set-uncertainty setting is an extreme case of a distributional uncertainty set comprising Dirac densities. This stands in sharp contrast with real-world problems, where more information is at hand than the support of the distribution of the parameters affected by uncertainty. Uncertain parameters may follow normal, Poisson, or unknown nonparametric distributions. Such parameters arise in queueing theory, economics, etc. We employ methods from both machine learning and optimization. First, we take care to estimate the distribution of the uncertain parameter using polynomial basis functions. This ensures that the resulting robust optimization problem can be reduced to a polynomial optimization problem. In turn, we can then employ an iterative method of SDP relaxations to solve it. Using tools from machine learning, we give a finite-sample consistency guarantee on the estimated uncertainty set. Using tools from optimization, we give an asymptotic guarantee on the solutions of the SDP relaxations.

Section 2 presents the model of data-driven distributionally robust polynomial optimization—DRO for short. Section 3 situates our work in the context of the literature. Our contributions are the following. In Section 4, we consider the general case of an uncertain multivariate distribution, which yields a generalized problem of moments for the distributionally robust counterpart.
In Section 5, we introduce an efficient histogram approximation for the case of uncertain univariate distributions, which yields instead a polynomial optimization problem for the distributionally robust counterpart. In Section 6, we present an application of our model and solution method in the domain of water network optimization with real data.

2 Problem statement

Consider the following polynomial optimization problem:

\min_{x \in X} h(x, \xi), \qquad (1)

where \xi \in \mathbb{R}^n is an uncertain parameter of the problem. We allow h to be a polynomial in x \in \mathbb{R}^m and X to be a basic closed semialgebraic set. That is, even if \xi is fixed, (1) is a hard problem in general. In this work, we are interested in distributionally robust optimization (DRO) problems that take the form

(DRO) \quad \min_{x \in X} \max_{f \in D_{\varepsilon,N}} \mathbb{E}_f \, h(x, \xi), \qquad (2)

where x is the decision variable and \xi is a random variable distributed according to an unknown probability density function f^*, which is the uncertain parameter in this setting. The expectation \mathbb{E}_f is with respect to a density function f, which belongs to an uncertainty set D_{\varepsilon,N}. This uncertainty set is itself a set of possible probability density functions constructed from a given sequence of samples \xi_1, \ldots, \xi_N distributed i.i.d. according to the unknown density function f^* of the uncertain parameter \xi. We call D_{\varepsilon,N} a distributional uncertainty set; it is a random set constructed as follows:

D_{\varepsilon,N} = \{ f : f \text{ a probability density s.t. } \| f - \hat{f}_N \| \leq \varepsilon \}, \qquad (3)

where \varepsilon > 0 is a given constant, \|\cdot\| is a norm, and \hat{f}_N is a density function estimated from the samples \xi_1, \ldots, \xi_N. We describe the construction of the distributional uncertainty set in the cases of multivariate and univariate samples in Sections 4 and 5, respectively. We say that a robust optimization problem is data-driven when the uncertainty set is an element of a sequence of uncertainty sets D_{\varepsilon,1} \supseteq D_{\varepsilon,2} \supseteq \ldots, where the index N represents the number of samples of \xi observed by the decision-maker.
This definition allows us to completely separate the problem of robust optimization from that of constructing the appropriate uncertainty set D_{\varepsilon,N}. The underlying assumption is that the uncertainty set (due to finite-sample estimation of the parameter \xi) adapts continuously to the data as the sample size N increases. By considering data-driven problems, we are essentially employing tools from statistical learning theory to derive consistency guarantees.

Let \mathbb{R}[x] denote the vector space of real-valued multivariate polynomials, i.e., every g \in \mathbb{R}[x] is a function g : \mathbb{R}^m \to \mathbb{R} such that

g(x) = \sum_{|\alpha| \leq d} g_\alpha x^\alpha = \sum_{|\alpha| \leq d} g_\alpha x_1^{\alpha_1} \cdots x_m^{\alpha_m}, \qquad \alpha \in \mathbb{N}^m,

where \{g_\alpha\} is a set of real numbers. A polynomial optimization problem (POP) is given by

\min_{x \in K} q(x), \qquad (4)

where K = \{ x \in \mathbb{R}^d \mid g_1(x) \geq 0, \ldots, g_m(x) \geq 0 \}, q \in \mathbb{R}[x], and g_j \in \mathbb{R}[x] for j = 1, \ldots, m. One of our key results arises from the observation that the distributionally robust counterpart of a POP is a POP as well. A set K defined by a finite number of multivariate polynomial inequality constraints is called a basic closed semialgebraic set. As shown in [1], if the basic closed semialgebraic set K is compact and archimedean, there is a hierarchy of SDP relaxations whose minima converge to the minimum of (4) for increasing order of the relaxation. Moreover, if (4) has a unique minimal solution x^\star, then the optimal solution y^\star_\tau of the \tau-th order SDP relaxation converges to x^\star as \tau \to \infty.

Our work combines robust optimization with notions from statistical machine learning, such as density estimation and consistency. Our data-driven robust polynomial optimization method applies to a number of machine learning problems. One example arises in Markov decision problems where a high-dimensional value function is approximated by a low-dimensional polynomial V.
A distributionally robust variant of value iteration can be cast as

\max_{a \in A} \min_{f \in D_{\varepsilon,N}} \mathbb{E}_f \Big\{ r(x, a, \xi) + \gamma \sum_{x' \in X} P(x' \mid x, a, \xi) \, V(x') \Big\},

where \xi is a random parameter with unknown distribution and the uncertainty set D_{\varepsilon,N} of possible distributions is constructed by estimation. We next present two further examples.

Example 2.1 (Distributionally robust ridge regression). We are given an i.i.d. sequence of observation-label samples \{(\xi_i, y_i) \in \mathbb{R}^{n-1} \times \mathbb{R} : i = 1, \ldots, N\} from an unknown distribution f^*, where each observation \xi_i has an associated label y_i \in \mathbb{R}. Ridge regression minimizes the empirical residual with \ell_2-regularization and uses the samples to construct the residual function. The distributionally robust version of ridge regression is a conceptually different approach: it uses the samples to construct a random uncertainty set D_{\varepsilon,N} to estimate the distribution f^*, and can be formulated as

\min_{u \in \mathbb{R}^n} \max_{f \in D_{\varepsilon,N}} \mathbb{E}_f (y_{N+1} - \xi_{N+1} \cdot u)^2 + \lambda (u \cdot u),

where D_{\varepsilon,N} is the uncertainty set of possible densities constructed from the N samples. Our solution methods can even be applied to regression problems with nonconvex loss and penalty functions.

Example 2.2 (Robust investment). Optimization problems of the form of (2) arise in problems that involve monetary measures of risk in finance [2]. For instance, the problem of robust investment in a vector of (random) financial positions \xi \in \mathbb{R}^n is

\min_{v \in \Delta_n} \sup_{Q \in \mathcal{Q}} -\mathbb{E}_Q \, U(v \cdot \xi),

where \mathcal{Q} denotes a set of probability distributions, U is a utility function, and v \cdot \xi is an allocation among financial positions. If U is polynomial, then the robust utility functional is a special case of DRO.

3 Our contribution in context

To situate our work within the literature, it is important to note that we consider distributional uncertainty sets together with polynomial constraints and objectives. In this section, we outline related works with different and similar uncertainty sets, constraints, and objectives.
Robust optimization problems of the form of (2) have been studied in the literature with different uncertainty sets. In several works, the uncertainty sets are defined in terms of moment constraints [3, 4, 5]. Moment-based uncertainty sets are motivated by the fact that probabilistic constraints can be replaced by constraints on the first and second moments in some cases [6]. In contrast, we do not consider moment constraints, but distributional uncertainty sets based on probability density functions with the Lp-norm as the metric. One reason for our approach is that higher moments are difficult to estimate [7]. In contrast, probability density functions can be readily estimated using a variety of data-driven methods, e.g., empirical histograms, kernel-based estimates [8, 9], and orthogonal-basis estimates [10].

Uncertainty sets defined by distribution-based constraints also appear in problems involving risk measures [11]. For example, uncertainty sets defined using the Kantorovich distance are considered in [5, Section 4] and [11], while [5, Section 3] and [12] consider distributional uncertainty with both measure bounds (of the form \mu_1 \leq \mu \leq \mu_2) and moment constraints. [13] considers distributional uncertainty sets with a \varphi-divergence metric. A notion of distributional uncertainty set has also been studied in the setting of Markov decision problems [14]. However, in those works, the uncertainty set is not data-driven.

Robust optimization formulations for polynomial optimization problems have been studied in [1, 15] with deterministic uncertainty sets (i.e., neither distributional nor data-driven). One contribution of this work is to show how to transform distributionally robust counterparts of polynomial optimization problems into polynomial optimization problems. In order to solve these POPs, we take advantage of the hierarchy of SDP relaxations from [1].
Another contribution of this work is to use sampled information to construct distributional uncertainty sets more suitable for problems where more and more data is collected over time.

4 Multivariate uncertainty around polynomial density estimate

In this section, we construct a data-driven uncertainty set in the L2-space, with the norm \|\cdot\|_2. Furthermore, we assume the support of \xi is contained in some basic closed semialgebraic set S := \{ z \in \mathbb{R}^n \mid s_j(z) \geq 0, \; j = 1, \ldots, r \}, where s_j \in \mathbb{R}[z]. In order to construct a data-driven distributional uncertainty set, we need to estimate the density f^* of the parameter \xi. Various density estimation approaches exist, e.g., kernel-density and histogram estimation. Some of these give rise to a computational problem due to the curse of dimensionality. However, to ensure that the resulting robust optimization problem remains a polynomial optimization problem, we define the empirical density estimate \hat{f}_N as a multivariate polynomial (cf. Section 2). Let \{\pi_k\} denote the univariate Legendre polynomials:

\pi_k(a) = \sqrt{\frac{2k+1}{2}} \, \frac{1}{2^k k!} \, \frac{d^k}{da^k} (a^2 - 1)^k, \qquad a \in \mathbb{R}, \; k = 0, 1, \ldots

Let \alpha \in \mathbb{N}^n, z \in \mathbb{R}^n, and let \pi_\alpha(z) = \pi_{\alpha_1}(z_1) \cdots \pi_{\alpha_n}(z_n) denote the multivariate Legendre polynomial. In this section, we employ the following Legendre series density estimator [10]:

\hat{f}_N(z) = \sum_{|\alpha| \leq d} \Big( \frac{1}{N} \sum_{j=1}^N \pi_\alpha(\xi_j) \Big) \pi_\alpha(z).

In turn, we define the following uncertainty set:

D_{d,\varepsilon,N} = \Big\{ f \in \mathbb{R}[z]_d \;\Big|\; \int_S f(z) \, dz = 1, \; \| f - \hat{f}_N \|_2 \leq \varepsilon \Big\},

where \mathbb{R}[z]_d denotes the vector space of polynomials in \mathbb{R}[z] of degree at most d. Observe that the polynomials in D_{d,\varepsilon,N} are not required to be non-negative on S. However, the non-negativity constraint on S can be added at the expense of making the resulting DRO problem for a POP a generalized problem of moments.

4.1 Solving the DRO

Next, we present asymptotic guarantees for solving distributionally robust polynomial optimization through SDP relaxations. This result rests on the following assumptions, which are detailed in [1].

Assumption 4.1. The sets X = \{ x \in \mathbb{R}^m \mid k_j(x) \geq 0, \; j = 1, \ldots, t \} and S = \{ z \in \mathbb{R}^n \mid s_j(z) \geq 0, \; j = 1, \ldots, r \} are compact. There exist u \in \mathbb{R}[x] and v \in \mathbb{R}[z] such that u = u_0 + \sum_{j=1}^t u_j k_j and v = v_0 + \sum_{j=1}^r v_j s_j for some sum-of-squares polynomials \{u_j\}_{j=0}^t, \{v_j\}_{j=0}^r, and the level sets \{ x \mid u(x) \geq 0 \} and \{ z \mid v(z) \geq 0 \} are compact.

Note that sets X and S satisfying Assumption 4.1 are called archimedean. This assumption is not much more restrictive than compactness; e.g., if S := \{ z \in \mathbb{R}^n \mid s_j(z) \geq 0, \; j = 1, \ldots, r \} is compact, then there exists an L2-ball of radius R that contains S. Thus, S = \tilde{S} = \{ z \in \mathbb{R}^n \mid s_j(z) \geq 0, \; j = 1, \ldots, r, \; \sum_{i=1}^n z_i^2 \leq R \}. With Theorem 1 in [22] it follows that \tilde{S} satisfies Assumption 4.1.

Theorem 4.1. Suppose that Assumption 4.1 holds. Let h \in \mathbb{R}[x, z], \hat{f}_N \in \mathbb{R}[z], and let X and S be basic closed semialgebraic sets. Let V^\star \in \mathbb{R} denote the optimum of the problem

\min_{x \in X} \max_{f \in D_{d,\varepsilon,N}} \int_S h(x, z) f(z) \, dz. \qquad (5)

(i) Then there exists a sequence of SDP relaxations SDP_r such that \min \mathrm{SDP}_r \nearrow V^\star for r \to \infty. (ii) If (5) has a unique minimizer x^\star, let m_r be the sequence of subvectors of optimal solutions of SDP_r associated with the first-order moments of monomials in x only. Then m_r \to x^\star componentwise for r \to \infty.

All proofs appear in the appendix of the supplementary material.

4.2 Consistency of the uncertainty set

In this section, we show that the uncertainty set that we constructed is consistent.
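The Legendre-series estimator is simple to implement; the sketch below treats the univariate case for brevity (the paper's estimator uses the multivariate product basis), with illustrative samples and degree. By orthonormality of the basis, the estimate automatically integrates to 1 over [−1, 1].

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Sketch of the Legendre-series density estimator \hat f_N (univariate case;
# samples and degree are illustrative choices).
rng = np.random.default_rng(3)
xi = rng.beta(2.0, 5.0, size=2000) * 2.0 - 1.0   # toy samples on [-1, 1]
d = 8

def pi_k(k, z):
    """Normalized Legendre polynomial pi_k = sqrt((2k+1)/2) * P_k."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return np.sqrt((2 * k + 1) / 2.0) * leg.legval(z, coef)

# empirical coefficients (1/N) sum_j pi_k(xi_j); estimate is their expansion
c = np.array([pi_k(k, xi).mean() for k in range(d + 1)])
f_hat = lambda z: sum(c[k] * pi_k(k, z) for k in range(d + 1))

# sanity check via Gauss-Legendre quadrature (exact for this degree):
# the estimate has unit mass on [-1, 1] by orthonormality
nodes, wts = leg.leggauss(16)
mass = float(np.sum(wts * f_hat(nodes)))
```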
In other words, given constants ε and δ, we give the number of samples N needed to ensure that the closest polynomial to the unknown density f* belongs to the uncertainty set D_{d,ε,N} with probability 1 − δ.

Theorem 4.2 ([10, Section 3]). Let c_α denote the coefficients c_α = ∫ π_α f* for all values of the multi-index α. Suppose that the density function f* is square-integrable. We have

E ∥f* − f̂_N∥_2² ⩽ C_H Σ_{α:|α|⩽d} min(1/N, c_α²),

where C_H is a constant that depends only on f*.

As a corollary of Theorem 4.2, we obtain the following.

Corollary 4.3. Suppose that the assumptions of Theorem 4.2 hold. Let g*_d denote the polynomial function g*_d(x) = Σ_{α:|α|⩽d} c_α π_α(x). There exists a function^1 Φ such that Φ(d) ↘ 0 as d → ∞ and such that

P(g*_d ∈ D_{d,ε,N}) ⩾ 1 − [ C_H Σ_{α:|α|⩽d} min(1/N, c_α²) + Φ²(d) ] / (ε − Φ(d))²,   for ε > Φ(d).

Remark 1. Observe that since Σ_{α:|α|⩽d} min(1/N, c_α²) ⩽ C(n+d, d)/N = (n + d)!/(N d! n!), by an appropriate choice of N, it is possible to guarantee that the right-hand side tends to zero, even as d → ∞.

5 Univariate uncertainty around histogram density estimate

In this section, we describe an additional layer of approximation for the univariate uncertainty setting. In contrast to Section 4, by approximating the uncertainty set D_{ε,N} by a set of histogram density functions, we reduce the DRO problem to a polynomial optimization problem of degree identical to that of the original problem. Moreover, we derive finite-sample consistency guarantees. We assume that samples ξ_1, . . . , ξ_N are given for the uncertain parameter ξ, which takes values in a given interval [A, B] ⊂ R. That is, in contrast to the previous section, we assume that the uncertain parameter takes values in a bounded interval. We partition [A, B] into K intervals u_0, . . . , u_{K−1}, such that |u_k| = |B − A|/K for all k = 0, . . . , K − 1. Let m_0, . . . , m_{K−1} denote the midpoints of the respective intervals. We define the empirical density vector p̂_{N,K}:

p̂_{N,K}(k) = (1/N) Σ_{i=1}^N 1[ξ_i ∈ u_k]   for all k = 0, . . . , K − 1.
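The empirical density vector p̂_{N,K} is a plain histogram. A minimal sketch (illustrative; variable names and sample sizes are made up):

```python
import numpy as np

def empirical_density_vector(samples, A, B, K):
    """Histogram estimate p_hat with K equal-width bins on [A, B].

    Also returns the bin midpoints m_k, which appear in the
    histogram-approximated objective.
    """
    counts, edges = np.histogram(samples, bins=K, range=(A, B))
    midpoints = (edges[:-1] + edges[1:]) / 2.0
    return counts / len(samples), midpoints

rng = np.random.default_rng(1)
xi = rng.uniform(0.0, 1.0, size=4000)
p_hat, mids = empirical_density_vector(xi, A=0.0, B=1.0, K=4)
```

For a uniform sample, each entry of p_hat is close to 1/K and the entries sum to one.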
Recall that the L∞-norm of a function G : X → R^n is ∥G∥∞ = sup_{x∈X} |G(x)|. In this section, we approximate the uncertainty set D_{ε,N} by a subset of the simplex in R^K:

W_{ε,N} = { p ∈ Δ_K : ∥p − p̂_{N,K}∥∞ ⩽ ε },

where p = (p_1, . . . , p_K) denotes a vector in R^K. In turn, this will allow us to approximate the DRO problem (2) by the following:

(ADRO): min_{x∈X} max_{p∈W_{ε,N}} Σ_{k=0}^{K−1} h(x, m_k) p_k.   (6)

^1 The function Φ(d) quantifies the error due to estimation within a basis of polynomials of finite degree d.

5.1 Solving the DRO

The following result is an analogue of Theorem 4.1.

Theorem 5.1. Suppose that Assumption 4.1 holds. Let h ∈ R[x, z], and let X be basic closed semialgebraic^2. Let W* ∈ R denote the optimum of the problem

min_{x∈X} max_{p∈W_{ε,N}} Σ_{k=0}^{K−1} h(x, m_k) p_k.   (7)

(i) Then, there exists a sequence of SDP relaxations SDP_r such that min SDP_r ↗ W* for r → ∞.
(ii) If (7) has a unique minimizer x*, let m_r denote the sequence of subvectors of optimal solutions of SDP_r associated with the first-order moments of the monomials in x only. Then, m_r → x* componentwise for r → ∞.

5.2 Approximation error

Next, we bound the error of approximating D_{ε,N} with W_{ε,N}. This error depends only on the "degree" K of the histogram approximation.

Theorem 5.2. Suppose that the support of ξ is the interval [A, B]. Suppose that |h(x, z)| ⩽ H for all x ∈ X and z ∈ [A, B]. Let M̃ ≜ sup{f''(z) : f ∈ D_{γ,N}, z ∈ [A, B]} be finite. Let g_x(z) ≜ h(x, z) f(z) and let M ≜ sup{g'_x(z) : f ∈ D_{γ,N}, z ∈ [A, B]} be finite. For every γ ⩽ Kε/(B − A) and density function f ∈ D_{γ,N}, there is a density vector p ∈ W_{ε,N} such that

| ∫_{[A,B]} h(x, z) f(z) dz − Σ_{k=0}^{K−1} h(x, m_k) p_k | ⩽ (M + H M̃)(B − A)³ / (24 K²).

5.3 Consistency of the uncertainty set

Given ε and δ, we consider in this section the number of samples N that we need to ensure that the unknown probability density is in the uncertainty set D_{ε,N} with probability 1 − δ.
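The Dvoretzky–Kiefer–Wolfowitz inequality, in its standard form P(∥F* − F̂_N∥∞ > ε) ⩽ 2 exp(−2Nε²), directly yields a sample size: setting 2 exp(−2Nε²) ⩽ δ gives N ⩾ ln(2/δ)/(2ε²). A quick numerical check (illustrative, with made-up ε and δ):

```python
import math

def dkw_sample_size(eps, delta):
    """Smallest N with 2 * exp(-2 * N * eps**2) <= delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

n = dkw_sample_size(eps=0.1, delta=0.05)   # 185 samples suffice here
```

For example, guaranteeing a sup-norm deviation of at most 0.1 with probability 0.95 requires only 185 samples, independent of the underlying distribution.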
The consistency guarantee for the univariate histogram uncertainty set follows as a corollary of the following univariate Dvoretzky–Kiefer–Wolfowitz inequality.

Theorem 5.3 ([16]). Let F̂_{N,K} denote the distribution function associated with the probabilities p̂_{N,K}, and F* the distribution function associated with the density function f*. If F* is continuous, then P(∥F* − F̂_{N,K}∥∞ > ε) ⩽ 2 exp(−2Nε²).

Corollary 5.4. Let p* denote the histogram density vector of ξ induced by the true density f*. As N → ∞, we have P(p* ∈ W_{ε,N}) ⩾ 1 − 2 exp(−2Nε²).

Remark 2. Provided that the density f* is Lipschitz continuous, it follows that the optimal value of (A1) converges to the optimal value without uncertainty as the size ε of the uncertainty set tends to zero and the number of samples N tends to infinity.

6 Application to water network optimization

In this section, we consider a problem of optimal operation of a water distribution network (WDN). Let G = (V, E) denote a graph, i.e., V is the set of nodes and E the set of pipes connecting the nodes in a WDN. Let w_i denote the pressure, e_i the elevation, and ξ_i the demand at node i ∈ V, q_{i,j} the flow from i to j, and ℓ_{i,j} the loss caused by friction in case of flow from i to j for (i, j) ∈ E. Our objective is to minimize the overall pressure at selected critical points V_1 ⊂ V in the WDN by optimally setting a number of pressure reducing valves (PRVs) located on certain pipes in the network, while adhering to the conservation laws for flow and pressure:

min_{(w,q)∈X} h(w, q, ξ),   where   (8)

h(w, q, ξ) := Σ_{i∈V_1} w_i + σ Σ_{j∈V} ( ξ_j − Σ_{k≠j} q_{k,j} + Σ_{l≠j} q_{j,l} )²,

X := { (w, q) ∈ R^{|V|+2|E|} | w_min ⩽ w_i ⩽ w_max, q_min ⩽ q_{i,j} ⩽ q_max,
      q_{i,j} (w_j + e_j − w_i − e_i + ℓ_{i,j}(q_{i,j})) ⩽ 0,
      w_j + e_j − w_i − e_i + ℓ_{i,j}(q_{i,j}) ⩾ 0, ∀(i, j) }.

We assume that ℓ_{i,j} is a quadratic function in q_{i,j}. The PRV sets the pressure w_i at the node i. The derivation of (8) and a detailed description of the problem appear in [17].

^2 Since S is an interval, the assumption is trivially satisfied for S.
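To make the penalty objective h(w, q, ξ) concrete, here is a toy evaluation on a hypothetical 3-node line network. All node values, demands, and σ are made up for illustration; this is not the paper's benchmark instance.

```python
import numpy as np

def wdn_objective(w, q, xi, critical, sigma):
    """h(w, q, xi) = sum of pressures at critical nodes plus
    sigma times the squared flow-conservation violation at every node.

    q[i][j] is the flow from node i to node j (0 if there is no pipe).
    """
    n = len(w)
    penalty = 0.0
    for j in range(n):
        inflow = sum(q[k][j] for k in range(n) if k != j)
        outflow = sum(q[j][l] for l in range(n) if l != j)
        penalty += (xi[j] - inflow + outflow) ** 2
    return sum(w[i] for i in critical) + sigma * penalty

# Line network 0 -> 1 -> 2; flows chosen so demand is met exactly.
q = np.zeros((3, 3)); q[0][1] = 5.0; q[1][2] = 2.0
w = [20.0, 18.0, 15.0]
xi = [-5.0, 3.0, 2.0]     # node 0 acts as a source (negative demand)
val = wdn_objective(w, q, xi, critical=[2], sigma=1.0)
```

With flows balancing demand exactly, the penalty vanishes and the objective reduces to the pressure at the critical node.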
Thus, h ∈ R[w, q, ξ] and X is a basic closed semialgebraic set. For a fixed vector of demands ξ = (ξ_1, . . . , ξ_{|V|}), (8) falls into the class (1). In real-world water networks, the demand ξ is uncertain. Given are ranges for the possible realization of nodal demands, i.e., the support of ξ is given by S := {z̃ ∈ R^{|V|} | z_i^min ⩽ z̃_i ⩽ z_i^max}. Moreover, we assume that samples ξ_1, . . . , ξ_N of ξ are given and that they correspond to sensor measurements. Therefore, the distributionally robust counterpart of (8) is of the form of ADRO (6).

Figure 1: (a) 25 node network with PRVs on pipes 1, 5 and 11. (b) Scatter plot of demand at node 15 over four months overlaid over the 24 hours of a day.

We consider the benchmark WDN with |V| = 25 and |E| = 37 of [18], which is illustrated in Figure 1 (a). We assign demand values at the nodes of this WDN according to real data collected in an anonymous major city. In our experiment we assume the demands at all nodes, except at node 15, are fixed; for node 15, N = 120 samples of daily demands were collected over four months; the dataset is shown in Figure 1 (b). Node 15 has been selected because it is one of the largest consumers and has a demand profile with the largest variation. First, we consider the uncertainty set W_{ε,N} constructed from a histogram estimation with K = 5 bins.
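As an aside (this shortcut is not part of the paper's SDP machinery): for a fixed decision, the inner maximization over W_{ε,N} in ADRO (6) is a small linear program over the box-constrained simplex, solvable greedily by pushing mass toward the bins with the largest h(x, m_k). A sketch with made-up numbers:

```python
import numpy as np

def worst_case_expectation(h_vals, p_hat, eps):
    """max sum_k h_k * p_k  s.t.  p in the simplex, |p_k - p_hat_k| <= eps.

    Greedy: start every bin at its lower bound, then spend the
    remaining probability mass on bins in decreasing order of h_k.
    """
    lo = np.maximum(p_hat - eps, 0.0)
    hi = np.minimum(p_hat + eps, 1.0)
    p = lo.copy()
    budget = 1.0 - lo.sum()          # mass still to distribute
    for k in np.argsort(-h_vals):    # largest h first
        add = min(hi[k] - lo[k], budget)
        p[k] += add
        budget -= add
    return float(h_vals @ p), p

val, p = worst_case_expectation(np.array([3.0, 1.0, 2.0]),
                                np.array([0.3, 0.4, 0.3]), eps=0.1)
```

Here the worst-case distribution shifts 0.1 of mass from the middle bin to the bin with the largest h value, giving the value 2.1.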
We consider (a) the deterministic problem (8) with the three values ξ^min := min_i ξ_i^15, ξ̄ := (1/N) Σ_i ξ_i^15 and ξ^max := max_i ξ_i^15 as the demand at node 15, (b) the distributionally robust counterpart (A1) with ε = 0.2 and σ = 1, and (c) the classical robust formulation of (8) with an uncertainty range [ξ^min, ξ^max] without any distributional assumption, i.e., the problem min_{(w,q)∈X} max_{ξ^15∈[ξ^min, ξ^max]} h(w, q, ξ^15), which is equivalent to

min_{(w,q)∈X} max{ h(w, q, ξ^min), h(w, q, ξ^max) }   (9)

since the term (ξ^15 − Σ_{k≠15} q_{k,15} + Σ_{l≠15} q_{15,l})² in (8) is convex quadratic in ξ^15 and attains its maximum at the boundary of [ξ^min, ξ^max]. We solve (9) by solving the two polynomial optimization problems. All three cases (a)–(c) are polynomial optimization problems, which we solve by first applying the sparse SDP relaxation of first order [19] with SDPA [20] as the SDP solver, and then applying IPOPT [21] with the SparsePOP solution as starting point. Computations were performed on a single blade server with 100 GB (total, 80 GB free) of RAM and a processor speed of 3.5 GHz. Total computation time is denoted as tC.

ξ^15     tC     optimal setting       Σ_{i∈V1} w_i
ξ^min    738    (15.0, 15.7, 15.9)    46.7
ξ̄        868    (15.0, 15.5, 15.6)    46.1
ξ^max    624    (15.0, 15.4, 15.5)    45.9

Table 1: Results for non-robust case (a).

Problem    tC      optimal setting       objective     Σ w_i
DRO (b)    1315    (15.0, 15.5, 15.7)    6.62 × 10^5   46.2
RO (c)     1460    (15.0, 16.9, 17.3)    1.54 × 10^6   49.2

Table 2: Results for DRO case (b) and classical robust case (c).

The results for the deterministic case (a) show that the optimal setting and the overall pressure sum Σ_{i∈V1} w_i differ even when the demand at only one node changes, as reported in Table 1. Comparing the distributionally robust (b) and robust (c) optimal solutions for the optimal PRV setting problem, we observe that the objective value of the distributionally robust counterpart is substantially smaller than the robust one. Thus, the distributionally robust solution is less conservative than the robust solution.
Moreover, the distributionally robust setting is very close to the average-case deterministic solution ξ̄, but it does not coincide with it. It seems to hedge the solution against the worst-case realization of the demand, given by the scenario ξ = ξ^min, which results in the highest pressure profile. Moreover, note that solving the distributionally robust (and robust) counterpart requires the same order of magnitude in computational time as the deterministic problem. That may be due to the fact that both the deterministic and the robust problems are hard polynomial optimization problems.

7 Discussion

We introduced a notion of distributional robustness for polynomial optimization problems. Distributional uncertainty sets based on statistical estimates of the probability density function have the advantage that they are data-driven and consistent with the data for increasing sample size. Moreover, they give solutions that are less conservative than classical robust optimization with value-based uncertainty sets. We have shown that these distributionally robust counterparts of polynomial optimization problems remain in the same class of problems from the perspective of computational complexity. This methodology is promising for numerous real-world decision problems where one faces the combined challenge of hard, non-convex models and uncertainty in the input parameters. We can extend the histogram method of Section 5 to the case of multivariate uncertainty, but it is well known that the sample complexity of histogram density estimation is greater than that of polynomial density estimation. An alternative definition of the distributional uncertainty set D_{ε,N} is to allow functions that are not proper density functions by removing some constraints; this gives a trade-off between reduced computational complexity and more conservative solutions. The solution method of SDP relaxations comes without any finite-time guarantees.
Although such guarantees are hard to come by in general, an open problem is to identify special cases that give insight into the rate of convergence of this method. Acknowledgments J. Y. Yu was supported in part by the EU FP7 project INSIGHT under grant 318225. References [1] J. B. Lasserre. A semidefinite programming approach to the generalized problem of moments. Math. Programming, 112:65–92, 2008. 8 [2] A. Schied. Optimal investments for robust utility functionals in complete market models. Math. Oper. Research, 30(3):750–764, 2005. [3] E. Delage and Y. Ye. Distributionally robust optimization under moment uncertainty with applications to data-driven problems. Operations Research, 2009. [4] D. Bertsimas, X. V. Doan, K. Natarajan, and C.-P. Teo. Models for minimax stochastic linear optimization problems with risk aversion. Math. Oper. Res., 35(3):580–602, 2010. [5] S. Mehrotra and H. Zhang. Models and algorithms for distributionally robust least squares problems. Preprint, 2011. [6] D. Bertsimas and I. Popescu. Optimal inequalities in probability theory: a convex optimization approach. SIAM J. Optimization, 15:780–804, 2000. [7] P. R. Halmos. The theory of unbiased estimation. The Annals of Mathematical Statistics, 17(1):34–43, 1946. [8] B.W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC, 1998. [9] L. Devroye and L. Györfi. Nonparametric Density Estimation. Wiley, 1985. [10] P. Hall. On the rate of convergence of orthogonal series density estimators. Journal of the Royal Statistical Society. Series B, 48(1):115–122, 1986. [11] G. Pflug and D. Wozabal. Ambiguity in portfolio selection. Quantitative Finance, 7(4):435– 442, 2007. [12] A. Shapiro and S. Ahmed. On a class of minimax stochastic programs. SIAM J. Optim., 14(4):1237–1249, 2004. [13] A. Ben-Tal, D. den Hertog, A. de Waegenaere, B. Melenerg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 2012. [14] H. 
Xu and S. Mannor. Distributionally robust markov decision processes. Mathematics of Operations Research, 37(2):288–300, 2012. [15] R. Laraki and J. B. Lasserre. Semidefinite programming for min-max problems and games. Math. Programming A, 131:305–332, 2010. [16] P. Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Annals of Probability, 18(3):1269–1283, 1990. [17] B. J. Eck and M. Mevissen. Valve placement in water networks. Technical report, IBM Research, 2012. Report No. RC25307 (IRE1209-014). [18] A. Sterling and A. Bargiela. Leakage reduction by optimised control of valves in water networks. Transactions of the Institute of Measurement and Control, 6(6):293–298, 1984. [19] H. Waki, S. Kim, M. Kojima, M. Muramatsu, and H. Sugimoto. SparsePOP: a sparse semidefinite programming relaxation of polynomial optimization problems. ACM Transactions on Mathematical Software, 35(2), 2008. [20] M. Yamashita, K. Fujisawa, K. Nakata, M. Nakata, M. Fukuda, K. Kobayashi, and K. Goto. A high-performance software package for semidefinite programs: SDPA 7. Technical report, Tokyo Institute of Technology, 2010. [21] A. Waechter and L. T. Biegler. On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006. [22] M. Schweighofer. Optimization of polynomials on compact semialgebraic sets. SIAM J. Optimization, 15:805–825, 2005. 9
Learning Stochastic Inverses Andreas Stuhlmüller Brain and Cognitive Sciences MIT Jessica Taylor Department of Computer Science Stanford University Noah D. Goodman Department of Psychology Stanford University Abstract We describe a class of algorithms for amortized inference in Bayesian networks. In this setting, we invest computation upfront to support rapid online inference for a wide range of queries. Our approach is based on learning an inverse factorization of a model’s joint distribution: a factorization that turns observations into root nodes. Our algorithms accumulate information to estimate the local conditional distributions that constitute such a factorization. These stochastic inverses can be used to invert each of the computation steps leading to an observation, sampling backwards in order to quickly find a likely explanation. We show that estimated inverses converge asymptotically in number of (prior or posterior) training samples. To make use of inverses before convergence, we describe the Inverse MCMC algorithm, which uses stochastic inverses to make block proposals for a Metropolis-Hastings sampler. We explore the efficiency of this sampler for a variety of parameter regimes and Bayes nets. 1 Introduction Bayesian inference is computationally expensive. Even approximate, sampling-based algorithms tend to take many iterations before they produce reasonable answers. In contrast, human recognition of words, objects, and scenes is extremely rapid, often taking only a few hundred milliseconds—only enough time for a single pass from perceptual evidence to deeper interpretation. Yet human perception and cognition are often well-described by probabilistic inference in complex models. How can we reconcile the speed of recognition with the expense of coherent probabilistic inference? How can we build systems, for applications like robotics and medical diagnosis, that exhibit similarly rapid performance at challenging inference tasks?
One response to such questions is that these problems are not, and should not be, solved from scratch each time they are encountered. Humans and robots are in the setting of amortized inference: they have to solve many similar inference problems, and can thus offload part of the computational work to shared precomputation and adaptation over time. This raises the question of which kinds of precomputation and adaptation are useful. There is substantial previous work on adaptive inference algorithms, including Cheng and Druzdzel (2000); Haario et al. (2006); Ortiz and Kaelbling (2000); Roberts and Rosenthal (2009). While much of this work is focused on adaptation for a single posterior inference, amortized inference calls for adaptation across many different inferences. In this setting, we will often have considerable training data available in the form of posterior samples from previous inferences; how should we use this data to adapt our inference procedure? We consider using training samples to learn the inverse structure of a directed model. Posterior inference is the task of inverting a probabilistic model: Bayes’ theorem turns p(d|h) into p(h|d); vision is commonly understood as inverse graphics (Horn, 1977) and, more recently, as inverse physics (Sanborn et al., 2013; Watanabe and Shimojo, 2001); and conditional inference in probabilistic programs can be described as “running a program backwards” (e.g., Wingate and Weber, 2013). However, while this is a good description of the problem that inference solves, conditional sampling usually does not proceed backwards step-by-step. 
We suggest taking this view more literally and actually learning the inverse conditionals needed to invert the model.

Figure 1: A Bayesian network modeling brightness constancy in visual perception, a possible inverse factorization, and two of the local joint distributions that determine the inverse conditionals.

For example, consider the Bayesian network shown in Figure 1. In addition to the default "forward" factorization shown on the left, we can consider an "inverse" factorization shown on the right. Knowing the conditionals for this inverse factorization would allow us to rapidly sample the latent variables given an observation. In this paper, we will explore what these factorizations look like for Bayesian networks, how to learn them, and how to use them to construct block proposals for MCMC.

2 Inverse factorizations

Let p be a distribution on latent variables x = (x_1, . . . , x_m) and observed variables y = (y_1, . . . , y_n). A Bayesian network G is a directed acyclic graph that expresses a factorization of this joint distribution in terms of the distribution of each node conditioned on its parents in the graph:

p(x, y) = Π_{i=1}^m p(x_i | pa_G(x_i)) · Π_{j=1}^n p(y_j | pa_G(y_j)).

When interpreted as a generative (causal) model, the observations y typically depend on a non-empty set of parents, but are not themselves parents of any nodes. In general, a distribution can be represented using many different factorizations.
We say that a Bayesian network H expresses an inverse factorization of p if the observations y do not have parents (but may themselves be parents of some x_i):

p(x, y) = p(y) · Π_{i=1}^m p(x_i | pa_H(x_i)).

As an example, consider the forward and inverse networks shown in Figure 1. We call the conditional distributions p(x_i | pa_H(x_i)) stochastic inverses, with inputs pa_H(x_i) and output x_i. If we could sample from these distributions, we could produce samples from p(x|y) for arbitrary y, which solves the problem of inference for all queries with the same set of observation nodes. In general, there are many possible inverse factorizations. For each latent node, we can find a factorization such that this node does not have children. This fact will be important in Section 4 when we resample subsets of inverse graphs. Algorithm 1 gives a heuristic method for computing an inverse factorization given Bayes net G, observation nodes y, and desired leaf node x_i. We compute an ordering on the nodes of the original Bayes net from observations to leaf node. We then add the nodes in order to the inverse graph, with dependencies determined by the graph structure of the original network. In the setting of amortized inference, past tasks provide approximate posterior samples for the corresponding observations. We therefore investigate learning inverses from such samples, and ways of using approximate stochastic inverses for improving the efficiency of solving future inference tasks.
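On a minimal two-node network x → y, the inverse factorization p(x, y) = p(y) p(x|y) can be checked by enumeration. The probability tables below are made up for illustration:

```python
# Forward model: p(x) and p(y | x) on binary variables.
p_x = {0: 0.7, 1: 0.3}
p_y_x = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}}   # p_y_x[x][y]

# Forward factorization of the joint: p(x, y) = p(x) * p(y | x).
joint = {(x, y): p_x[x] * p_y_x[x][y] for x in (0, 1) for y in (0, 1)}

# Inverse factorization: marginalize out x, then condition on y.
p_y = {y: joint[(0, y)] + joint[(1, y)] for y in (0, 1)}
inverse = {(x, y): joint[(x, y)] / p_y[y] for x in (0, 1) for y in (0, 1)}

# The inverse factorization reconstructs the joint: p(y) * p(x | y).
recon = {(x, y): p_y[y] * inverse[(x, y)] for x in (0, 1) for y in (0, 1)}
```

Here the observation y becomes the root of the inverse graph, and p(x|y) is the stochastic inverse of the forward conditional.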
2 Algorithm 1: Heuristic inverse factorization Input: Bayesian network G with latent nodes x and observed nodes y; desired leaf node xi Output: Ordered inverse graph H 1: order x such that nodes close to y are first, leaf node xi is last 2: initialize H to empty graph 3: add nodes y to H 4: for node xj in x do 5: add xj to H 6: set paH(xj) to a minimal set of nodes in H that d-separates xj from the remainder of H based on the graph structure of G 7: end for 3 Learning stochastic inverses It is easy to see that we can estimate conditional distributions p(xi|paH(xi)) using samples S drawn from the prior p(x, y). For simplicity, consider discrete variables and an empirical frequency estimator: θS(xi|paH(xi)) = |{s ∈S : x(s) i ∧pa(s) H (xi)}| |{s ∈S : pa(s) H (xi)| Because θS is a consistent estimator of the probability of each outcome for each setting of the parent variables, the following theorem follows immediately from the strong law of large numbers: Theorem 1. (Learning from prior samples) Let H be an inverse factorization. For samples S drawn from p(x, y), θS(xi|paH(xi)) →p(xi|paH(xi)) almost surely as |S| →∞. Samples generated from the prior may be sparse in regions that have high probability under the posterior, resulting in slow convergence of the inverses. We now show that valid inverse factorizations allow us to learn from posterior samples as well. Theorem 2. (Learning from posterior samples) Let H be an inverse factorization. For samples S drawn from p(x|y), θ(xi|paH(xi)) →p(xi|paH(xi)) almost surely as |S| →∞for values of paH(xi) that have positive probability under p(x|y). Proof. For values paH(xi) that are not in the support of p(x|y), θ(xi|paH(xi)) is undefined. For values paH(xi) in the support, θ(xi|paH(xi)) →p(xi|paH(xi), y) almost surely. By definition, any node in a Bayesian network is independent of its non-descendants given its parent variables. The nodes y are root nodes in H and hence do not descend from xi. 
Therefore, p(xi|paH(xi), y) = p(xi|paH(xi)) and the theorem holds. Theorem 2 implies that we can use posterior samples from one observation set to learn inverses that apply to all other observation sets—while samples from p(x|y) only provide global estimates for the given posterior, it is guaranteed that the local estimates created by the procedure above are equivalent to the query-independent conditionals p(xi|paH(xi)). In addition, we can combine samples from distributions conditioned on several different observation sets to produce more accurate estimates of the inverse conditionals. In the discussion above, we can replace θ with any consistent estimator of p(xi|paH(xi)). We can also trade consistency for faster learning and generalization. This framework can make use of any supervised machine learning technique that supports sampling from a distribution on predicted outputs. For example, for discrete variables we can employ logistic regression, which provides fast generalization and efficient sampling, but cannot, in general, represent the posterior exactly. Our choice of predictor can be data-dependent—for example, we can add interaction terms to a logistic regression predictor as more data becomes available. For continuous variables, consider a predictor based on k-nearest neighbors that produces samples as follows (Algorithm 2): Given new input values z, retrieve the k previously observed input-output 3 Algorithm 2: K-nearest neighbor density predictor Input: Variable index i, inverse inputs z, samples S, number of neighbors k Output: Sampled value for node xi 1: retrieve k nearest pairs (z(1), x(1) i ), . . . , (z(k), x(k) i ) in S based on distance to z 2: construct density estimate q on x(1) i , . . . , x(k) i 3: sample from q pairs that are closest to the current input values. Then, use a consistent density estimator to construct a density estimate on the nearby previous outputs and sample an output xi from the estimated distribution. 
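A minimal sketch of the k-nearest-neighbor density predictor, assuming one-dimensional inputs and outputs and a simple Gaussian-kernel density estimate over the neighbors' outputs (the bandwidth and data are made up; the paper leaves the density estimator unspecified):

```python
import numpy as np

def knn_density_sample(z, inputs, outputs, k, rng, bandwidth=0.05):
    """Sample an output for input z: find the k nearest stored
    input-output pairs, then sample from a kernel density estimate
    built on their outputs."""
    nearest = np.argsort(np.abs(inputs - z))[:k]
    # KDE sample = uniformly chosen neighbor output + Gaussian kernel noise
    pick = rng.choice(outputs[nearest])
    return pick + rng.normal(0.0, bandwidth)

rng = np.random.default_rng(2)
z_train = rng.uniform(0.0, 1.0, size=5000)
x_train = z_train + rng.normal(0.0, 0.1, size=5000)   # x | z ~ N(z, 0.1^2)
draws = np.array([knn_density_sample(0.5, z_train, x_train, k=50, rng=rng)
                  for _ in range(500)])
```

For this synthetic model, samples drawn at z = 0.5 concentrate around the true conditional mean 0.5.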
Showing that this estimator converges to the true conditional density p(x|z) is more subtle. If the conditional densities are smooth in the sense that: ∀ε > 0 ∃δ > 0 : ∀z1, z2 d(z1, z2) < δ ⇒DKL(p(x|z1), p(x|z2)) < ε then we can achieve any desired accuracy of approximation by assuring that the nearest neighbors used all lie within a δ-ball, but that the number of neighbors goes to infinity. We can achieve this by increasing k slowly enough in |S|. The exact rate at which we may increase depends on the distribution and may be difficult to determine. 4 Inverse MCMC We have described how to compute the structure of inverse Bayes nets, and how to learn the associated conditional distributions and densities from prior and posterior samples. This produces fast, but possibly biased recognition models. To get a consistent estimator, we use these recognition models as part of a Metropolis-Hastings scheme that, as the amount of training data grows, converges to Gibbs sampling for proposals of size 1, to blocked-Gibbs for larger proposals, and to perfect posterior sampling for proposals of size |G|. We propose the following Inverse MCMC procedure (Algorithm 3): Offline, use Algorithm 1 to compute an inverse graph for each latent node and train each local inverse in this graph from (posterior or prior) samples. Online, run Metropolis-Hastings with the proposal mechanism shown in Algorithm 4, which resamples a set of up to k variables using the trained inverses1. With little training data, we will want to make small proposals (small k) in order to achieve a reasonable acceptance rate; with more training data, we can make larger proposals and expect to succeed. Theorem 3. Let G be a Bayesian network, let θ be a consistent estimator (for inverse conditionals), let {Hi}i∈1..m be a collection of inverse graphs produced using Algorithm 1, and assume a source of training samples (prior or posterior) with full support. 
Then, as training set size |S| →∞, Inverse MCMC with proposal size k converges to block-Gibbs sampling where blocks are the last k nodes in each Hi. In particular, it converges to Gibbs sampling for proposal size k = 1 and to exact posterior sampling for k = |G|. Proof. We must show that proposals are made from the conditional posterior in the limit of large training data. Fix an inverse H, and let x be the last k variables in H. Let paH(x) be the union of H-parents of variables in x that are not themselves in x. By construction according to Algorithm 1, paH(x) form a Markov blanket of x (that is, x is conditionally independent of other variables in G, given paH(x)). Now the conditional distribution over x factorizes along the inverse graph: p(x|paH(x)) = Q|H| i=k p(xi|paH(xi)). But by theorems 1 and 2, the estimators θ converge, when they are defined, to the corresponding conditional distributions, θ(xi|paH(xi)) →p(xi|paH(xi)); since we assume full support, θ(xi|paH(xi)) is defined wherever p(xi|paH(xi)) is defined. Hence, using the estimated inverses to sequentially sample the x variables results, in the limit, in samples from the conditional distribution given remaining variables. (Note that, in the limit, these proposals will always be accepted.) This is the definition of block-Gibbs sampling. The special cases of k = 1 (Gibbs) and k = |G| (posterior sampling) follow immediately. 1In a setting where we only ever resample up to k variables, we only need to estimate the relevant inverses, i.e., not all conditionals for the full inverse graph. 4 Algorithm 3: Inverse MCMC Input: Prior or posterior samples S Output: Samples x(1), . . . , x(T ) Offline (train inverses): 1: for i in 1 . . . m do 2: Hi ←from Algorithm 1 3: for j in 1 . . . m do 4: train inverse θS(xj|paHi(xj)) 5: end for 6: end for Online (MH with inverse proposals): 1: for t in 1 . . . 
T do 2: x′, pfw, pbw from Algorithm 4 3: x ←x′ with MH acceptance rule 4: end for Algorithm 4: Inverse MCMC proposer Input: State x, observations y, ordered inverse graphs {Hi}i∈1..m, proposal size kmax, inverses θ Output: Proposed state x′, forward and backward probabilities pfw and pbw 1: H ∼Uniform({Hi}i∈1..m) 2: k ∼Uniform({0, 1, . . . , kmax −1}) 3: x′ ←x 4: pfw, pbw ←0 5: for j in n −k, . . . , n do 6: let xl be jth variable in H 7: x′ l ∼θ(xl|paH(x′ l)) 8: pfw ←pfw · pθ(x′ l|paH(x′ l)) 9: pbw ←pbw · pθ(xl|paH(xl)) 10: end for Instead of learning the k=1 “Gibbs” conditionals for each inverse graph, we can often precompute these distributions to “seed” our sampler. This suggests a bootstrapping procedure for amortized inference on observations y(1), . . . , y(t): first, precompute the “Gibbs” distributions so that k=1 proposals will be reasonably effective; then iterate between training on previously generated approximate posterior samples and doing inference on the next observation. Over time, increase the size of proposals, possibly depending on acceptance ratio or other heuristics. For networks with near-deterministic dependencies, Gibbs may be unable to generate training samples of sufficient quality. This poses a chicken-and-egg problem: we need a sufficiently good posterior sampler to generate the data required to train our sampler. To address this problem, we propose a simple annealing scheme: We introduce a temperature parameter t that controls the extent to which (almost-)deterministic dependencies in a network are relaxed. We produce a sequence of trained samplers, one for each temperature, by generating samples for a network with temperature ti+1 using a sampler trained on approximate samples for the network with next-higher temperature ti. Finally, we discard all samplers except for the sampler trained on the network with t = 0, the network of interest. 
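Putting the pieces together, here is a toy two-variable run of Inverse MCMC (an illustration under simplifying assumptions, not the paper's implementation): learn the inverse θ(x|y) from prior samples with the empirical frequency estimator, then use it as a Metropolis-Hastings proposal given the observation y = 1. The forward probabilities are made up.

```python
import random

random.seed(0)
p_x1, p_y1 = 0.3, {0: 0.2, 1: 0.9}       # forward model: x -> y

# 1) Learn the inverse theta(x=1 | y) from prior samples (cf. Theorem 1).
counts = {0: [0, 0], 1: [0, 0]}           # counts[y][x]
for _ in range(100000):
    x = 1 if random.random() < p_x1 else 0
    y = 1 if random.random() < p_y1[x] else 0
    counts[y][x] += 1
theta = {y: counts[y][1] / sum(counts[y]) for y in (0, 1)}

# 2) MH with inverse proposals, conditioning on the observation y = 1.
def target(x):                            # unnormalized p(x, y=1)
    return (p_x1 if x else 1 - p_x1) * p_y1[x]

x, hits, steps = 0, 0, 50000
for _ in range(steps):
    xp = 1 if random.random() < theta[1] else 0   # propose from inverse
    q_x = theta[1] if x else 1 - theta[1]         # prob of proposing x
    q_xp = theta[1] if xp else 1 - theta[1]       # prob of proposing xp
    if random.random() < min(1.0, target(xp) * q_x / (target(x) * q_xp)):
        x = xp
    hits += x
freq = hits / steps                       # estimate of p(x=1 | y=1)
```

Because the learned proposal is close to the true posterior p(x=1|y=1) = 0.27/0.41 ≈ 0.659, almost every proposal is accepted and the chain mixes immediately, which is the k = |G| limit described in Theorem 3.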
In the next section, we explore the practicality of such bootstrapping schemes as well as the general approach of Inverse MCMC.

5 Experiments

We are interested in networks such that (1) there are many layers of nodes, with some nodes far removed from the evidence, (2) there are many observation nodes, allowing for a variety of queries, and (3) there are strong dependencies, making local Gibbs moves challenging. We start by studying the behavior of the Inverse MCMC algorithm with the empirical frequency estimator on a 225-node rectangular grid network from the UAI 2008 inference competition. This network has binary nodes and approximately 50% deterministic dependencies, which we relax to dependencies with strength .99. We select the 15 nodes on the diagonal as observations and remove any nodes below, leaving a triangular network with 120 nodes and treewidth 15 (Figure 2). We compute the true marginals P* using IJGP (Mateescu et al., 2010), and calculate the error of our estimates P^s as

error = (1/N) Σ_{i=1}^N (1/|X_i|) Σ_{x_i∈X_i} |P*(X_i = x_i) − P^s(X_i = x_i)|.

We generate 20 inference tasks as sources of training samples by sampling values for the 15 observation nodes uniformly at random. We precompute the "final" inverse conditionals as outlined above, producing a Gibbs sampler when k=1. For each inference task, we use this sampler to generate 10^5 approximate posterior samples.

Figure 2: Schema of the Bayes net structure used in experiment 1. Thick arrows indicate almost-deterministic dependencies, shaded nodes are observed. The actual network has 15 layers with a total of 120 nodes.

Figure 3: The effect of training on approximate posterior samples for 10 inference tasks. As the number of training samples per task increases, Inverse MCMC with proposals of size 20 performs new inference tasks more quickly.
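The marginal-error metric can be computed directly; a small sketch with made-up marginal tables (two binary nodes):

```python
def marginal_error(true_marginals, est_marginals):
    """Mean over nodes of the mean absolute difference between
    true and estimated marginal probabilities."""
    total = 0.0
    for p_true, p_est in zip(true_marginals, est_marginals):
        total += sum(abs(a - b) for a, b in zip(p_true, p_est)) / len(p_true)
    return total / len(true_marginals)

err = marginal_error([[0.2, 0.8], [0.5, 0.5]],
                     [[0.3, 0.7], [0.5, 0.5]])
```

Here the first node's marginal is off by 0.1 in each entry and the second is exact, giving an error of 0.05.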
Figure 4: Learning an inverse distribution for the brightness constancy model (Figure 1) from prior samples using the KNN density predictor. More training samples result in better estimates after the same number of MCMC steps.

Figures 3 and 5 show the effect of training the frequency estimator on 10 inference tasks and testing on a different task (averaged over 20 runs). Inverse proposals of (up to) size k=20 do worse than pure Gibbs sampling with little training (due to higher rejection rate), but they speed convergence as the number of training samples increases. More generally, large proposals are likely to be rejected without training, but improve convergence after training. Figure 6 illustrates how the number of inference tasks influences error and MH acceptance ratio in a setting where the total number of training samples is kept constant. Surprisingly, increasing the number of training tasks from 5 to 15 has little effect on error and acceptance ratio for this network. That is, it seems relatively unimportant which posterior the training samples are drawn from; we may expect different results when posteriors are more sparse. Figure 7 shows how different sources of training data affect the quality of the trained sampler (averaged over 20 runs). As the strength of near-deterministic dependencies increases, direct training on Gibbs samples becomes infeasible. In this regime, we can still train on prior samples and on Gibbs samples for networks with relaxed dependencies. Alternatively, we can employ the annealing scheme outlined in the previous section.
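Relaxing determinism at temperature t amounts to keeping CPT entries away from 0 and 1. The paper does not spell out the exact relaxation, so the helper below is one plausible reading, with names of our own: clamp each probability into [t, 1 − t] and renormalize.

```python
def relax_cpt(probs, t):
    """Clamp CPT probabilities into [t, 1 - t] and renormalize,
    softening (near-)deterministic dependencies at temperature t."""
    clamped = [min(max(p, t), 1.0 - t) for p in probs]
    z = sum(clamped)
    return [p / z for p in clamped]

print(relax_cpt([0.0, 1.0], 0.2))  # [0.2, 0.8]: a relaxed network
print(relax_cpt([0.0, 1.0], 0.0))  # [0.0, 1.0]: the network of interest
```

Walking a ladder such as [.2, .1, .05, .02, .01, 0] through this helper reproduces the sequence of progressively less relaxed networks used for annealed training.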
In this example, we take the temperature ladder to be [.2, .1, .05, .02, .01, 0]—that is, we start by learning inverses for the relaxed network where all CPT probabilities are constrained to lie within [.2, .8]; we then use these inverses as proposers for MCMC inference on a network constrained to CPT probabilities in [.1, .9], learn the corresponding inverses, and continue, until we reach the network of interest (at temperature 0). While the empirical frequency estimator used in the above experiments provides an attractive asymptotic convergence guarantee (Theorem 3), it is likely to generalize slowly from small amounts of training data. For practical purposes, we may be more interested in getting useful generalizations quickly than converging to a perfect proposal distribution. Fortunately, the Inverse MCMC algorithm can be used with any estimator for local conditionals, consistent or not. We evaluate this idea on a 12-node subset of the network used in the previous experiments. We learn complete inverses, resampling up to 12 nodes at once. We compare inference using a logistic regression estimator with L2 regularization (with and without interaction terms) to inference using the empirical frequency estimator. Figure 9 shows the error (integrated over time to better reflect convergence speed) against the number of training examples, averaged over 300 runs. The regression estimator with interaction terms results in significantly better results when training on few posterior samples, but is ultimately overtaken by the consistent empirical estimator. Next, we use the KNN density predictor to learn inverse distributions for the continuous Bayesian network shown in Figure 1. 
To evaluate the quality of the learned distributions, we take 1000 samples using Inverse MCMC and compare marginals to a solution computed by JAGS (Plummer et al., 2003). As we refine the inverses using forward samples, the error in the estimated marginals decreases towards 0, providing evidence for convergence towards a posterior sampler (Figure 4). To evaluate Inverse MCMC in more breadth, we run the algorithm on all binary Bayes nets with up to 500 nodes that have been submitted to the UAI 08 inference competition (216 networks).

Figure 5: Without training, big inverse proposals result in high error, as they are unlikely to be accepted. As we increase the number of approximate posterior samples used to train the MCMC sampler, the acceptance probability for big proposals goes up, which decreases overall error.

Figure 6: For the network under consideration, increasing the number of tasks (i.e., samples for other observations) we train on has little effect on acceptance ratio (and error) if we keep the total number of training samples constant.

Figure 7: For networks without hard determinism, we can train on Gibbs samples. For others, we can use prior samples, Gibbs samples for relaxed networks, and samples from a sequence of annealed Inverse samplers.
Since many of these networks exhibit strong determinism, we train on prior samples and apply the annealing scheme outlined above to generate approximate posterior samples. For training and testing, we use the evidence provided with each network. We compute the error in marginals as described above for both Gibbs (proposal size 1) and Inverse MCMC (maximum proposal size 20). To summarize convergence over the 1200s of test time, we compute the area under the error curves (Figure 8). Each point represents a single run on a single model. We label different classes of networks. For the grid networks, grid-k denotes a network with k% deterministic dependencies. While performance varies across network classes—with extremely deterministic networks making the acquisition of training data challenging—the comparison with Gibbs suggests that learned block proposals frequently help. Overall, these results indicate that Inverse MCMC is of practical benefit for learning block proposals in reasonably large Bayes nets and using a realistic amount of training data (an amount that might result from amortizing over five or ten inferences).

Figure 8: Each mark represents a single run of a model from the UAI 08 inference competition. Marks below the line indicate that integrated error over 1200s of inference is lower for Inverse MCMC than for Gibbs sampling.
Figure 9: Integrated error (over 1s of inference) as a function of the number of samples used to train inverses, comparing logistic regression with and without interaction terms to an empirical frequency estimator.

6 Related work

A recognition network (Morris, 2001) is a multilayer perceptron used to predict posterior marginals. In contrast to our work, a single global predictor is used instead of small, compositional prediction functions. By learning local inverses our technique generalizes in a more fine-grained way, and can be combined with MCMC to provide unbiased samples. Adaptive MCMC techniques such as those presented in Roberts and Rosenthal (2009) and Haario et al. (2006) are used to tune parameters of MCMC algorithms, but do not allow arbitrarily close adaptation of the underlying model to the posterior, whereas our method is designed to allow such close approximation. A number of adaptive importance sampling algorithms have been proposed for Bayesian networks, including Shachter and Peot (1989), Cheng and Druzdzel (2000), Yuan and Druzdzel (2012), Yu and Van Engelen (2012), Hernandez et al. (1998), Salmeron et al. (2000), and Ortiz and Kaelbling (2000). These techniques typically learn Bayes nets which are directed “forward”, which means that the conditional distributions must be learned from posterior samples, creating a chicken-and-egg problem. Because our trained model is directed “backwards”, we can learn from both prior and posterior samples. Gibbs sampling and single-site Metropolis-Hastings are known to converge slowly in the presence of determinism and long-range dependencies. It is well-known that this can be addressed using block proposals, but such proposals typically need to be built manually for each model.
In our framework, block proposals are learned from past samples, with a natural parameter for adjusting the block size. 7 Conclusion We have described a class of algorithms, for the setting of amortized inference, based on the idea of learning local stochastic inverses—the information necessary to “run a model backward”. We have given simple methods for estimating and using these inverses as part of an MCMC algorithm. In exploratory experiments, we have shown how learning from past inference tasks can reduce the time required to estimate quantities of interest. Much remains to be done to explore this framework. Based on our results, one particularly promising avenue is to explore estimators that initially generalize quickly (such as regression), but back off to a sound estimator as the training data grows. Acknowledgments We thank Ramki Gummadi and anonymous reviewers for useful comments. This work was supported by a John S. McDonnell Foundation Scholar Award. 8 References J. Cheng and M. Druzdzel. AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large bayesian networks. Journal of Artificial Intelligence Research, 2000. H. Haario, M. Laine, A. Mira, and E. Saksman. DRAM: efficient adaptive MCMC. Statistics and Computing, 16(4):339–354, 2006. L. D. Hernandez, S. Moral, and A. Salmeron. A Monte Carlo algorithm for probabilistic propagation in belief networks based on importance sampling and stratified simulation techniques. International Journal of Approximate Reasoning, 18(1):53–91, 1998. B. K. Horn. Understanding image intensities. Artificial intelligence, 8(2):201–231, 1977. R. Mateescu, K. Kask, V. Gogate, and R. Dechter. Join-graph propagation algorithms. Journal of Artificial Intelligence Research, 37(1):279–328, 2010. Q. Morris. Recognition networks for approximate inference in BN20 networks. Morgan Kaufmann Publishers Inc., Aug. 2001. L. E. Ortiz and L. P. Kaelbling. 
Adaptive importance sampling for estimation in structured domains. In Proc. of the 16th Ann. Conf. on Uncertainty in A.I. (UAI-00), pages 446–454. Morgan Kaufmann Publishers, 2000. M. Plummer et al. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. URL http://citeseer.ist.psu.edu/plummer03jags.html, 2003. G. Roberts and J. Rosenthal. Examples of adaptive MCMC. Journal of Computational and Graphical Statistics, 18(2):349–367, 2009. A. Salmeron, A. Cano, and S. Moral. Importance sampling in Bayesian networks using probability trees. Computational Statistics and Data Analysis, 34(4):387–413, Oct. 2000. A. N. Sanborn, V. K. Mansinghka, and T. L. Griffiths. Reconciling intuitive physics and Newtonian mechanics for colliding objects. Psychological Review, 120(2):411, Apr. 2013. R. D. Shachter and M. A. Peot. Simulation approaches to general probabilistic inference on belief networks. In Proc. of the 5th Ann. Conf. on Uncertainty in A.I. (UAI-89), pages 311–318, New York, NY, 1989. Elsevier Science. K. Watanabe and S. Shimojo. When sound affects vision: effects of auditory grouping on visual motion perception. Psychological Science, 12(2):109–116, 2001. D. Wingate and T. Weber. Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013. H. Yu and R. A. Van Engelen. Refractor importance sampling. arXiv preprint arXiv:1206.3295, 2012. C. Yuan and M. J. Druzdzel. Importance sampling in Bayesian networks: An influence-based approximation strategy for importance functions. arXiv preprint arXiv:1207.1422, 2012.
Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs

Yann N. Dauphin, Yoshua Bengio
Département d'informatique et de recherche opérationnelle
Université de Montréal
Montréal, QC H3C 3J7
dauphiya@iro.umontreal.ca, Yoshua.Bengio@umontreal.ca

Abstract

Sparse high-dimensional data vectors are common in many application domains where a very large number of rarely non-zero features can be devised. Unfortunately, this creates a computational bottleneck for unsupervised feature learning algorithms such as those based on auto-encoders and RBMs, because they involve a reconstruction step where the whole input vector is predicted from the current feature values. An algorithm was recently developed to successfully handle the case of auto-encoders, based on an importance sampling scheme stochastically selecting which input elements to actually reconstruct during training for each particular example. To generalize this idea to RBMs, we propose a stochastic ratio-matching algorithm that inherits all the computational advantages and unbiasedness of the importance sampling scheme. We show that stochastic ratio matching is a good estimator, allowing the approach to beat the state-of-the-art on two bag-of-word text classification benchmarks (20 Newsgroups and RCV1), while keeping computational cost linear in the number of non-zeros.

1 Introduction

Unsupervised feature learning algorithms have recently attracted much attention, with the promise of letting the data guide the discovery of good representations. In particular, unsupervised feature learning is an important component of many Deep Learning algorithms (Bengio, 2009), such as those based on auto-encoders (Bengio et al., 2007) and Restricted Boltzmann Machines or RBMs (Hinton et al., 2006). Deep Learning of representations involves the discovery of several levels of representation, with some algorithms able to exploit unlabeled examples and unsupervised or semi-supervised learning.
Whereas Deep Learning has mostly been applied to computer vision and speech recognition, an important set of application areas involve high-dimensional sparse input vectors, for example in some Natural Language Processing tasks (such as the text categorization tasks tackled here), as well as in information retrieval and other web-related applications where a very large number of rarely non-zero features can be devised. We would like learning algorithms whose computational requirements grow with the number of non-zeros in the input but not with the total number of features. Unfortunately, auto-encoders and RBMs are computationally inconvenient when it comes to handling such high-dimensional sparse input vectors, because they require a form of reconstruction of the input vector, for all the elements of the input vector, even the ones that were zero. In Section 2, we recapitulate the Reconstruction Sampling algorithm (Dauphin et al., 2011) that was proposed to handle that problem in the case of auto-encoder variants. The basic idea is to use an importance sampling scheme to stochastically select a subset of the input elements to reconstruct, and importance weights to obtain an unbiased estimator of the reconstruction error gradient. In this paper, we are interested in extending these ideas to the realm of RBMs. In Section 3 we briefly review the basics of RBMs and the Gibbs chain involved in training them. Ratio matching (Hyvärinen, 2007) is an inductive principle and training criterion that can be applied to train RBMs but does not require a Gibbs chain. In Section 4, we present and justify a novel algorithm based on ratio matching in order to achieve our objective of taking advantage of highly sparse inputs. The new algorithm is called Stochastic Ratio Matching or SRM.
In Section 6 we present a wide array of experimental results demonstrating the successful application of Stochastic Ratio Matching, both in terms of computational performance (flat growth of computation as the number of non-zeros is increased, linear speedup with respect to regular training) and in terms of generalization performance: the state-of-the-art on two text classification benchmarks is achieved and surpassed. An interesting and unexpected result is that we find the biased version of the algorithm (without reweighting) to yield more discriminant features. 2 Reconstruction Sampling An auto-encoder learns an encoder function f mapping inputs x to features h = f(x), and a decoding or reconstruction function g such that g(f(x)) ≈x for training examples x. See Bengio et al. (2012) for a review. In particular, with the denoising auto-encoder, x is stochastically corrupted into ˜x (e.g. by flipping some bits) and trained to make g(f(˜x)) ≈x. To avoid the expensive reconstruction g(h) when the input is very high-dimensional, Dauphin et al. (2011) propose that for each example, a small random subset of the input elements be selected for which gi(h) and the associated reconstruction error is computed. To make the corresponding estimator of reconstruction error (and its gradient) unbiased, they propose to use an importance weighting scheme whereby the loss on the i-th input is weighted by the inverse of the probability that it be selected. To reduce the variance of the estimator, they propose to always reconstruct the i-th input if it was one of the non-zeros in x or in ˜x, and to choose uniformly at random an equal number of zero elements. They show that the unbiased estimator yields the expected linear speedup in training time compared to the deterministic gradient computation, while maintaining good performance for unsupervised feature learning. We would like to extend similar ideas to RBMs. 
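Reconstruction sampling as recapitulated above can be sketched in a few lines. This illustrative version (our names, squared error for concreteness) selects all non-zeros plus an equal number of random zeros and reweights each sampled zero by the inverse of its selection probability:

```python
import random

def sampled_reconstruction_loss(x, recon, rng):
    """Unbiased estimate of sum_i (x_i - recon_i)^2 that only touches the
    non-zeros of x plus an equal-sized random subset of its zeros."""
    nonzeros = [i for i, v in enumerate(x) if v != 0]
    zeros = [i for i, v in enumerate(x) if v == 0]
    k = min(len(nonzeros), len(zeros))
    loss = sum((x[i] - recon[i]) ** 2 for i in nonzeros)
    # Each zero is selected with probability k/len(zeros); the importance
    # weight len(zeros)/k makes the estimator unbiased.
    for i in rng.sample(zeros, k):
        loss += (len(zeros) / k) * (x[i] - recon[i]) ** 2
    return loss

rng = random.Random(0)
x = [1, 0, 1, 0, 0, 0]
recon = [0.5, 0.1, 0.4, 0.2, 0.3, 0.0]
full = sum((xi - ri) ** 2 for xi, ri in zip(x, recon))
est = sum(sampled_reconstruction_loss(x, recon, rng) for _ in range(20000)) / 20000
print(full, est)  # the Monte Carlo average converges to the full loss
```

Only the selected coordinates of the reconstruction need to be computed in practice, which is where the linear speedup comes from.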
3 Restricted Boltzmann Machines

A restricted Boltzmann machine (RBM) is an undirected graphical model with binary variables (Hinton et al., 2006): observed variables x and hidden variables h. In this model, the hidden variables help uncover higher order correlations in the data. The energy takes the form

−E(x, h) = hᵀWx + bᵀh + cᵀx

with parameters θ = (W, b, c). The RBM can be trained by following the gradient of the negative log-likelihood

−∂log P(x)/∂θ = E_data[∂F(x)/∂θ] − E_model[∂F(x)/∂θ]

where F(x) is the free energy (unnormalized log-probability associated with P(x)). However, this gradient is intractable because the second expectation is combinatorial. Stochastic Maximum Likelihood or SML (Younes, 1999; Tieleman, 2008) estimates this expectation using sample averages taken from a persistent MCMC chain (Tieleman, 2008). Starting from x^i, a step in this chain is taken by sampling h^i ∼ P(h|x^i); then we have x^{i+1} ∼ P(x|h^i). SML-k is the variant where k is the number of steps between parameter updates, with SML-1 being the simplest and most common choice, although better results (at greater computational expense) can be achieved with more steps. Training the RBM using SML-1 is on the order of O(dn) where d is the dimension of the input variables and n is the number of hidden variables. In the case of high-dimensional sparse vectors with p non-zeros, SML does not take advantage of the sparsity. More precisely, sampling P(h|x) (inference) can take advantage of sparsity and costs O(pn) computations, while “reconstruction”, i.e., sampling from P(x|h), requires O(dn) computations. Thus scaling to larger input sizes d yields a linear increase in training time even if the number of non-zeros p in the input remains constant.

4 Ratio Matching

Ratio matching (Hyvärinen, 2007) is an estimation method for statistical models where the normalization constant is not known.
It is similar to score matching (Hyvärinen, 2005) but applied on discrete data, whereas score matching is limited to continuous inputs; both are computationally simple and yield consistent estimators. The use of ratio matching in RBMs is of particular interest because their normalization constant is computationally intractable. The core idea of ratio matching is to match ratios of probabilities between the data and the model. Thus Hyvärinen (2007) proposes to minimize the following objective function

Σ_x P_x(x) Σ_{i=1}^{d} [ (g(P_x(x)/P_x(x̄_i)) − g(P(x)/P(x̄_i)))² + (g(P_x(x̄_i)/P_x(x)) − g(P(x̄_i)/P(x)))² ]    (1)

where P_x is the true probability distribution, P the distribution defined by the model, g(x) = 1/(1+x) is an activation function, and x̄_i = (x_1, x_2, . . . , 1 − x_i, . . . , x_d). In this form, we can see the similarity between score matching and ratio matching. The normalization constant is canceled because P(x)/P(x̄_i) = e^{−F(x)}/e^{−F(x̄_i)}; however, this objective requires access to the true distribution P_x, which is rarely available. Hyvärinen (2007) shows that the Ratio Matching (RM) objective can be simplified to

J_RM(x) = Σ_{i=1}^{d} g(P(x)/P(x̄_i))²    (2)

which does not require knowledge of the true distribution P_x. This objective can be described as ensuring that the training example x has the highest probability in the neighborhood of points at Hamming distance 1. We propose to rewrite Eq. 2 in a form reminiscent of auto-encoders:

J_RM(x) = Σ_{i=1}^{d} (x_i − P(x_i = 1 | x_{−i}))².    (3)

This will be useful for reasoning about this estimator. The main difference with auto-encoders is that each input variable is predicted by excluding it from the input. Applying Equation 2 to the RBM we obtain

J_RM(x) = Σ_{i=1}^{d} σ(F(x) − F(x̄_i))².

The gradients have the familiar form

−∂J_RM(x)/∂θ = Σ_{i=1}^{d} 2η_i (∂F(x)/∂θ − ∂F(x̄_i)/∂θ)    (4)

with η_i = σ(F(x) − F(x̄_i))² − σ(F(x) − F(x̄_i))³. A naive implementation of this objective is O(d²n) because it requires d computations of the free energy per example.
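The RBM form of Equation 2 can be written down directly from the free energy. A naive O(d²n) NumPy sketch (our names, intended only for checking small models):

```python
import numpy as np

def free_energy(x, W, b, c):
    """RBM free energy: F(x) = -c^T x - sum_j log(1 + exp(W_j x + b_j))."""
    return -c @ x - np.sum(np.logaddexp(0.0, W @ x + b))

def ratio_matching_loss(x, W, b, c):
    """J_RM(x) = sum_i sigmoid(F(x) - F(xbar_i))^2, recomputing each
    flipped free energy from scratch (hence O(d^2 n))."""
    fx = free_energy(x, W, b, c)
    loss = 0.0
    for i in range(len(x)):
        x_flip = x.copy()
        x_flip[i] = 1 - x_flip[i]
        sig = 1.0 / (1.0 + np.exp(free_energy(x_flip, W, b, c) - fx))
        loss += sig ** 2
    return loss

# With all parameters zero, every probability ratio is 1 and each of the d
# terms equals sigma(0)^2 = 0.25:
d, n = 4, 3
x = np.array([1.0, 0.0, 1.0, 0.0])
W, b, c = np.zeros((n, d)), np.zeros(n), np.zeros(d)
print(ratio_matching_loss(x, W, b, c))  # 1.0 (= 0.25 * d)
```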
This is much more expensive than SML, as noted by Marlin et al. (2010). Thankfully, as we argue here, it is possible to greatly reduce this complexity by reusing computation and taking advantage of the parametrization of RBMs. This can be done by saving the results of the computations α = cᵀx and β_j = Σ_i W_ji x_i + b_j when computing F(x). The computation of F(x̄_i) can be reduced to O(n) with the formula

−F(x̄_i) = α − (2x_i − 1)c_i + Σ_j log(1 + e^{β_j − (2x_i − 1)W_ji}).

This implementation is O(dn), which is the same complexity as SML. However, like SML, RM does not take advantage of sparsity in the input.

5 Stochastic Ratio Matching

We propose Stochastic Ratio Matching (SRM) as a more efficient form of ratio matching for high-dimensional sparse distributions. The ratio matching objective requires the summation of d terms, each in O(n). The basic idea of SRM is to estimate this sum using a very small fraction of the terms, randomly chosen. If we rewrite the ratio matching objective as an expectation over a discrete distribution

J_RM(x) = d Σ_{i=1}^{d} (1/d) g²(P(x)/P(x̄_i)) = d · E[g²(P(x)/P(x̄_i))]    (5)

we can use Monte Carlo methods to estimate J_RM without computing all the terms in Equation 2. However, in practice this estimator has a high variance. Thus it is a poor estimator, especially if we want to use very few Monte Carlo samples. The solution proposed for SRM is to use an importance sampling scheme to obtain a lower variance estimator of J_RM. Combining Monte Carlo with importance sampling, we obtain the SRM objective

J_SRM(x) = Σ_{i=1}^{d} (γ_i / E[γ_i]) g²(P(x)/P(x̄_i))    (6)

where γ_i ∼ P(γ_i = 1|x) is the so-called proposal distribution of our importance sampling scheme. The proposal distribution determines which terms will be used to estimate the objective, since only the terms where γ_i = 1 are non-zero.
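The cached-statistics trick for F(x̄_i) above is easy to verify numerically; a sketch with our own names, checking the O(n) formula against direct recomputation:

```python
import numpy as np

def free_energy_with_stats(x, W, b, c):
    """Compute F(x) while caching alpha = c^T x and beta_j = sum_i W_ji x_i + b_j."""
    alpha = c @ x
    beta = W @ x + b
    return -(alpha + np.sum(np.logaddexp(0.0, beta))), alpha, beta

def flipped_free_energy(i, x, W, c, alpha, beta):
    """F(xbar_i) in O(n) from cached statistics:
    -F(xbar_i) = alpha - (2 x_i - 1) c_i + sum_j log(1 + e^{beta_j - (2 x_i - 1) W_ji})."""
    s = 2.0 * x[i] - 1.0
    return -(alpha - s * c[i] + np.sum(np.logaddexp(0.0, beta - s * W[:, i])))

rng = np.random.default_rng(0)
d, n = 6, 4
W, b, c = rng.normal(size=(n, d)), rng.normal(size=n), rng.normal(size=d)
x = rng.integers(0, 2, size=d).astype(float)
f, alpha, beta = free_energy_with_stats(x, W, b, c)
x_flip = x.copy()
x_flip[0] = 1 - x_flip[0]
direct = free_energy_with_stats(x_flip, W, b, c)[0]
print(abs(flipped_free_energy(0, x, W, c, alpha, beta) - direct) < 1e-9)  # True
```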
J_SRM(x) is an unbiased estimator of J_RM(x), i.e.,

E[J_SRM(x)] = Σ_{i=1}^{d} (E[γ_i]/E[γ_i]) g²(P(x)/P(x̄_i)) = J_RM(x).

The intuition behind importance sampling is that the variance of the estimator can be reduced by focusing sampling on the largest terms of the expectation. More precisely, it is possible to show that the variance of the estimator is minimized when P(γ_i = 1|x) ∝ g²(P(x)/P(x̄_i)). Thus we would like the probability P(γ_i = 1|x) to reflect how large the error (x_i − P(x_i = 1|x_{−i}))² will be. The challenge is finding a good approximation for (x_i − P(x_i = 1|x_{−i}))² and defining a proposal distribution that is efficient to sample from. Following Dauphin et al. (2011), we propose such a distribution for high-dimensional sparse distributions. In these types of distributions the marginals P_x(x_i = 1) are very small. They can easily be learned by the biases c of the model, and may even be initialized very close to their optimal value. Once the marginals are learned, the model will likely only make wrong predictions when P_x(x_i = 1|x_{−i}) differs significantly from P_x(x_i = 1). If x_i = 0, then the error (0 − P(x_i = 1|x_{−i}))² is likely small because the model has a high bias towards P(x_i = 0). Conversely, the error will be high when x_i = 1. In other words, the model will mostly make errors for terms where x_i = 1 and a small number of dimensions where x_i = 0. We can use this to define the heuristic proposal distribution

P(γ_i = 1|x) = 1 if x_i = 1, and p / (d − Σ_j 1_{x_j>0}) otherwise,    (7)

where p is the average number of non-zeros in the data. The idea is to always sample the terms where x_i = 1 and a subset of k of the (d − Σ_j 1_{x_j>0}) remaining terms where x_i = 0. Note that if we sampled the γ_i independently, we would get E[k] = p. However, instead of sampling those γ_i bits independently, we find that much smaller variance is obtained by sampling a number of zeros k that is constant for all examples, i.e., k = p.
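The proposal in Equation 7, with the fixed k = p choice, can be sketched as follows (our names; it returns the selected indices together with their importance weights 1/E[γ_i]):

```python
import random

def srm_selection(x, p, rng):
    """Select every i with x_i = 1 plus exactly k = p random zero positions.
    Returns (index, weight) pairs, where weight = 1 / P(gamma_i = 1)."""
    nonzeros = [i for i, v in enumerate(x) if v != 0]
    zeros = [i for i, v in enumerate(x) if v == 0]
    k = min(p, len(zeros))
    pairs = [(i, 1.0) for i in nonzeros]                      # always selected
    pairs += [(i, len(zeros) / k) for i in rng.sample(zeros, k)]
    return pairs

# Unbiasedness check on arbitrary per-term values t_i (stand-ins for the
# g^2 ratios): the weighted subsample matches the full sum in expectation.
rng = random.Random(0)
x = [1, 0, 1, 0, 0, 0, 0, 0]
t = [0.10, 0.01, 0.20, 0.02, 0.03, 0.00, 0.05, 0.04]
full = sum(t)
est = sum(sum(w * t[i] for i, w in srm_selection(x, 2, rng))
          for _ in range(20000)) / 20000
print(full, est)
```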
A random k can cause very significant variance in the gradients, and this makes stochastic gradient descent more difficult. In our experiments we set k = p = E[Σ_j 1_{x_j>0}], which is a small number by definition of these sparse distributions, and guarantees that computation costs will remain constant as n increases for a fixed number of non-zeros. The computational cost of SRM per training example is O(pn), as opposed to O(dn) for RM. While simple, we find that this heuristic proposal distribution works well in practice, as shown below. For comparison, we also perform experiments with a biased version of Equation 6:

J_BiasedSRM(x) = Σ_{i=1}^{d} γ_i g²(P(x)/P(x̄_i)).    (8)

This will allow us to gauge the effectiveness of our importance weights for unbiasing the objective. The biased objective can be thought of as down-weighting the ratios where x_i = 0 by a factor of E[γ_i]. SRM is related to previous work (Dahl et al., 2012) on applying RBMs to high-dimensional sparse inputs, more precisely multinomial observations, e.g., one K-ary multinomial for each word in an n-gram window. A careful choice of Metropolis-Hastings transitions replaces Gibbs transitions and makes it possible to handle large vocabularies. In comparison, SRM is geared towards general sparse vectors and involves an extremely simple procedure without MCMC.

6 Experimental Results

In this section, we demonstrate the effectiveness of SRM for training RBMs. Additionally, we show that RBMs are useful feature extractors for topic classification.

Datasets  We have performed experiments with the Reuters Corpus Volume I (RCV1) and 20 Newsgroups (20 NG). RCV1 is a benchmark for document classification of over 800,000 news wire stories (Lewis et al., 2004). The documents are represented as bag-of-words vectors with 47,236 dimensions. The training set contains 23,149 documents and the test set has 781,265. While there are 3 types of labels for the documents, we focus on the task of predicting the topic.
There is a set of 103 non-mutually exclusive topics for a document. We report the performance using the F1.0 measure for comparison with the state of the art. 20 Newsgroups is a collection of Usenet posts composing a training set of 11,269 examples and 7,505 test examples. The bag-of-words vectors contain 61,188 dimensions. The postings are to be classified into one of 20 categories. We use the by-date train/test split, which ensures that the training set contains postings preceding the test examples in time. Following Larochelle et al. (2012), we report the classification error, and for a fair comparison we use the same preprocessing.¹

Methodology  We compare the different estimation methods for the RBM based on the log-likelihoods they achieve. To do this we use Annealed Importance Sampling or AIS (Salakhutdinov and Murray, 2008). For all models we average 100 AIS runs with 10,000 uniformly spaced reverse temperatures β_k. We compare RBMs trained with ratio matching, stochastic ratio matching, and biased stochastic ratio matching. We include experiments with RBMs trained with SML-1 for comparison. Additionally, we provide experiments to motivate the use of high-dimensional RBMs in NLP. We use the RBM to pretrain the hidden layers of a feed-forward neural network (Hinton et al., 2006). This acts as a regularization for the network and it helps optimization by initializing the network close to a good local minimum (Erhan et al., 2010). The hyper-parameters are cross-validated on a validation set consisting of 5% of the training set. In our experiments with AIS, we use the validation log-likelihood as the objective. For classification, we use the discriminative performance on the validation set. The hyper-parameters are found using random search (Bergstra and Bengio, 2012) with 64 trials per set of experiments. The learning rate for the RBMs is sampled from 10^{−[0,3]}, the number of hidden units from [500, 2000], and the number of training epochs from [5, 20].
The learning rate for the MLP is sampled from 10^{−[2,0]}. It is trained for 32 epochs using early stopping based on the validation set. We regularize the MLP by dropping out 50% of the hidden units during training (Hinton et al., 2012). We adapt the learning rate dynamically by multiplying it by 0.95 when the validation error increases. All experiments are run on a cluster of double quad-core Intel Xeon E5345 machines running at 2.33 GHz with 2 GB of RAM.

Table 1: Log-probabilities estimated by AIS for the RBMs trained with the different estimation methods. With a fixed budget of epochs, SRM achieves likelihoods on the test set comparable with RM and SML-1.

                         ESTIMATES                 AVG. LOG-PROB.
             log Ẑ      log(Ẑ ± σ̂)          TRAIN      TEST
RCV1
BIASED SRM   1084.96    1079.66, 1085.65    -758.73    -793.20
SRM          325.26     325.24, 325.27      -139.79    -151.30
RM           499.88     499.48, 500.17      -119.98    -147.32
SML-1        323.33     320.69, 323.99      -138.90    -153.50
20 NG
BIASED SRM   1723.94    1718.65, 1724.63    -960.34    -1018.73
SRM          546.52     546.55, 546.49      -178.39    -190.72
RM           975.42     975.62, 975.18      -159.92    -185.61
SML-1        612.15     611.68, 612.46      -173.56    -188.82

6.1 Using SRM to train RBMs

We can measure the effectiveness of SRM by comparing it with various estimation methods for the RBM. As the RBM is a generative model, we must compare these methods based on the log-likelihoods they achieve. Note that Dauphin et al. (2011) relies on the classification error because there is no accepted performance measure for DAEs. As both RM and SML scale badly with input dimension, we restrict the dimension of the dataset to the p = 1,000 most frequent words. We will describe experiments with all dimensions in the next section. As seen in Table 1, SRM is a good estimator for training RBMs and is a good approximation of RM. We see that with the same budget of epochs SRM achieves log-likelihoods comparable with RM on both datasets. The striking difference of more than 500 nats with Biased SRM shows that the importance weights successfully unbias the estimator.
Interestingly, we observe that RM is able to learn better generative models than SML-1 for both datasets. This is similar to Marlin et al. (2010), where pseudolikelihood achieves better log-likelihood than SML on a subset of 20 Newsgroups. We observe this is an optimization issue, since the training log-likelihood is also higher for RM. One explanation is that SML-1 might experience mixing problems (Bengio et al., 2013).

Figure 1: Average speedup in the calculation of gradients by using the SRM objective compared to RM. The speed-up is linear and reaches up to 2 orders of magnitude.

Figure 1 shows that, as expected, SRM achieves a linear speed-up compared to RM, reaching speedups of 2 orders of magnitude. In fact, we observed that the computation time of the gradients for RM scales linearly with the size of the input, while the computation time of SRM remains fairly constant because the number of non-zeros varies little. This is an important property of SRM, which makes it suitable for very large scale inputs.

¹http://qwone.com/~jason/20Newsgroups/20news-bydate-matlab.tgz

Figure 2: Average norm of the gradients for the terms in Equation 2 where xi = 1 and where xi = 0. Confirming the hypothesis behind the proposal distribution, the terms where xi = 1 are 2 orders of magnitude larger.

The importance sampling scheme of SRM (Equation 7) relies on the hypothesis that terms where xi = 1 produce a larger gradient than terms where xi = 0. We can verify this by monitoring the average gradients during learning on RCV1. Figure 2 demonstrates that the average gradient for the terms where xi = 1 is 2 orders of magnitude larger than for those where xi = 0. This confirms the hypothesis underlying the sampling scheme of SRM.

6.2 Using RBMs as feature extractors for NLP

Having established that SRM is an efficient unbiased estimator of RM, we turn to the task of using RBMs not as generative models but as feature extractors. We find that keeping the bias in SRM is helpful for classification.
This is similar to the known result that contrastive divergence, which is biased, yields better classification results than persistent contrastive divergence, which is unbiased. The bias increases the weight of the non-zero features. The superior performance of the biased objective suggests that the non-zero features contain more information about the classification task. In other words, for these tasks it is more important to focus on what is there than on what is not there.

Table 2: Classification results on RCV1 with all 47,326 dimensions. The DBN trained with SRM achieves state-of-the-art performance.

 MODEL                      TEST SET F1
 ROCCHIO                    0.693
 k-NN                       0.765
 SVM                        0.816
 SDA-MLP (REC. SAMPLING)    0.831
 RBM-MLP (UNBIASED SRM)     0.816
 RBM-MLP (BIASED SRM)       0.829
 DBN-MLP (BIASED SRM)       0.836

On RCV1, we train our models on all 47,326 dimensions. The RBM trained with SRM improves on the state-of-the-art (Lewis et al., 2004), as shown in Table 2. The total training time for this RBM using SRM is 57 minutes. We also train a Deep Belief Net (DBN) by stacking an RBM trained with SML on top of the RBMs learned with SRM. This type of 2-layer deep architecture significantly improves the performance on that task (Table 2). In particular, the DBN does significantly better than a stack of denoising auto-encoders we trained using biased reconstruction sampling (Dauphin et al., 2011), which appears as SDA-MLP (Rec. Sampling) in Table 2.

We apply RBMs trained with SRM on 20 newsgroups with all 61,188 dimensions. We see in Table 3 that this approach improves the previous state-of-the-art by over 1% (Larochelle et al., 2012), beating non-pretrained MLPs and SVMs by close to 10%. This result is closely followed by the DAE trained with reconstruction sampling, which in our experiments reaches 20.6% test error.

Table 3: Classification results on 20 Newsgroups with all 61,188 dimensions. Prior results from Larochelle et al. (2012). The RBM trained with SRM achieves state-of-the-art results.

 MODEL                      TEST SET ERROR
 SVM                        32.8 %
 MLP                        28.2 %
 RBM                        24.9 %
 HD-RBM                     21.9 %
 DAE-MLP (REC. SAMPLING)    20.6 %
 RBM-MLP (BIASED SRM)       20.5 %

The simpler RBM trained by SRM is able to beat the more powerful HD-RBM model because it uses all the 61,188 dimensions.

7 Conclusion

We have proposed a very simple algorithm called Stochastic Ratio Matching (SRM) to take advantage of sparsity in high-dimensional data when training discrete RBMs. It can be used to estimate gradients in O(np) computation, where p is the number of non-zeros, yielding a linear speedup over the O(nd) of Ratio Matching (RM), where d is the input size. It does so while providing an unbiased estimator of the ratio matching gradient. Using this efficient estimator, we train RBMs as feature extractors and achieve state-of-the-art results on 2 text classification benchmarks.

References

Bengio, Y. (2009). Learning deep architectures for AI. Now Publishers.
Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In NIPS'2006.
Bengio, Y., Courville, A., and Vincent, P. (2012). Representation learning: A review and new perspectives. Technical report, arXiv:1206.5538.
Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013). Better mixing via deep representations. In ICML'2013.
Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13, 281–305.
Dahl, G., Adams, R., and Larochelle, H. (2012). Training restricted Boltzmann machines on word observations. In ICML'2012, pages 679–686.
Dauphin, Y., Glorot, X., and Bengio, Y. (2011). Large-scale learning of embeddings with reconstruction sampling. In ICML'2011.
Erhan, D., Bengio, Y., Courville, A., Manzagol, P.-A., Vincent, P., and Bengio, S. (2010). Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11, 625–660.
Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. Technical report, arXiv:1207.0580.
Hyvärinen, A. (2005). Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6, 695–709.
Hyvärinen, A. (2007). Some extensions of score matching. Computational Statistics and Data Analysis, 51, 2499–2512.
Larochelle, H., Mandel, M. I., Pascanu, R., and Bengio, Y. (2012). Learning algorithms for the classification restricted Boltzmann machine. Journal of Machine Learning Research, 13, 643–669.
Lewis, D. D., Yang, Y., Rose, T. G., and Li, F. (2004). RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5, 361–397.
Marlin, B., Swersky, K., Chen, B., and de Freitas, N. (2010). Inductive principles for restricted Boltzmann machine learning. In AISTATS'2010, volume 9, pages 509–516.
Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In ICML'2008, volume 25, pages 872–879.
Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML'2008, pages 1064–1071.
Younes, L. (1999). On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates. Stochastics and Stochastic Reports, 65(3), 177–228.
Distributed k-Means and k-Median Clustering on General Topologies

Maria Florina Balcan, Steven Ehrlich, Yingyu Liang
School of Computer Science, Georgia Institute of Technology, Atlanta, GA 30332
{ninamf,sehrlich}@cc.gatech.edu, yliang39@gatech.edu

Abstract

This paper provides new algorithms for distributed clustering for two popular center-based objectives, k-median and k-means. These algorithms have provable guarantees and improve communication complexity over existing approaches. Following a classic approach in clustering by [13], we reduce the problem of finding a clustering with low cost to the problem of finding a coreset of small size. We provide a distributed method for constructing a global coreset which improves over the previous methods by reducing the communication complexity, and which works over general communication topologies. Experimental results on large scale data sets show that this approach outperforms other coreset-based distributed clustering algorithms.

1 Introduction

Most classic clustering algorithms are designed for the centralized setting, but in recent years data has become distributed over different locations, such as distributed databases [21, 5], images and videos over networks [20], surveillance [11] and sensor networks [4, 12]. In many of these applications the data is inherently distributed because, as in sensor networks, it is collected at different sites. As a consequence it has become crucial to develop clustering algorithms which are effective in the distributed setting. Several algorithms for distributed clustering have been proposed and empirically tested. Some of these algorithms [10, 22, 7] are direct adaptations of centralized algorithms which rely on statistics that are easy to compute in a distributed manner. Other algorithms [14, 17] generate summaries of local data and transmit them to a central coordinator which then performs the clustering algorithm.
No theoretical guarantees are provided for the clustering quality in these algorithms, and they do not try to minimize the communication cost. Additionally, most of these algorithms assume that the distributed nodes can communicate with all other sites or that there is a central coordinator that communicates with all other sites.

In this paper, we study the problem of distributed clustering where the data is distributed across nodes whose communication is restricted to the edges of an arbitrary graph. We provide algorithms with small communication cost and provable guarantees on the clustering quality. Our technique for reducing communication in general graphs is based on the construction of a small set of points which act as a proxy for the entire data set. An ϵ-coreset is a weighted set of points whose cost on any set of centers is approximately the cost of the original data on those same centers, up to accuracy ϵ. Thus an approximate solution for the coreset is also an approximate solution for the original data. Coresets have previously been studied in the centralized setting ([13, 8]) but have also recently been used for distributed clustering as in [23] and as implied by [9]. In this work, we propose a distributed algorithm for k-means and k-median, by which each node constructs a local portion of a global coreset. Communicating the approximate cost of a global solution to each node is enough for the local construction, leading to low communication cost overall.

Figure 1: (a) Zhang et al. [23]: each node computes a coreset on the weighted pointset for its own data and its subtrees' coresets. (b) Our Construction: local constant approximation solutions are computed, and the costs of these solutions are used to coordinate the construction of a local portion on each node.
The nodes then share the local portions of the coreset, which can be done efficiently in general graphs using a message passing approach. More precisely, in Section 3, we propose a distributed coreset construction algorithm based on local approximate solutions. Each node computes an approximate solution for its local data, and then constructs the local portion of a coreset using only its local data and the total cost of each node's approximation. For ϵ constant, this builds a coreset of size ˜O(kd + nk) for k-median and k-means when the data lies in d dimensions and is distributed over n sites. If there is a central coordinator among the n sites, then clustering can be performed on the coordinator by collecting the local portions of the coreset with a communication cost equal to the coreset size ˜O(kd + nk). For distributed clustering over general connected topologies, we propose an algorithm based on the distributed coreset construction and a message-passing approach, whose communication cost improves over previous coreset-based algorithms. We provide a detailed comparison below. Experimental results on large scale data sets show that our algorithm performs well in practice. For a fixed amount of communication, our algorithm outperforms other coreset construction algorithms.

Comparison to Other Coreset Algorithms: Since coresets summarize local information they are a natural tool to use when trying to reduce communication complexity. If each node constructs an ϵ-coreset on its local data, then the union of these coresets is clearly an ϵ-coreset for the entire data set. Unfortunately the size of the coreset in this approach increases greatly with the number of nodes. Another approach is the one presented in [23]. Its main idea is to approximate the union of local coresets with another coreset. They assume nodes communicate over a rooted tree, with each node passing its coreset to its parent.
Because the approximation factor of the constructed coreset depends on the quality of its component coresets, the accuracy a coreset needs (and thus the overall communication complexity) scales with the height of this tree. Although it is possible to find a spanning tree in any communication network, when the graph has large diameter every tree has large height. In particular many natural networks such as grid networks have a large diameter (Ω(√n) for grids), which greatly increases the size of the local coresets. We show that it is possible to construct a global coreset with low communication overhead. This is done by distributing the coreset construction procedure rather than combining local coresets. The communication needed to construct this coreset is negligible – just a single value from each data set representing the approximate cost of their local optimal clustering. Since the sampled global ϵ-coreset is the same size as any local ϵ-coreset, this leads to an improvement of the communication cost over the other approaches. See Figure 1 for an illustration. The constructed coreset is smaller by a factor of n in general graphs, and is independent of the communication topology. This method excels in sparse networks with large diameters, where the previous approach in [23] requires coresets that are quadratic in the size of the diameter for k-median and quartic for k-means; see Section 4 for details. [9] also merge coresets using coreset construction, but they do so in a model of parallel computation and ignore communication costs. Balcan et al. [3] and Daume et al. [6] consider communication complexity questions arising when doing classification in distributed settings. In concurrent and independent work, Kannan and Vempala [15] study several optimization problems in distributed settings, including k-means clustering under an interesting separability assumption.

2 Preliminaries

Let d(p, q) denote the Euclidean distance between any two points p, q ∈ R^d.
The goal of k-means clustering is to find a set of k centers x = {x_1, x_2, ..., x_k} which minimizes the k-means cost of a data set P ⊆ R^d. Here the k-means cost is defined as cost(P, x) = Σ_{p∈P} d(p, x)², where d(p, x) = min_{x'∈x} d(p, x'). If P is a weighted data set with a weighting function w, then the k-means cost is defined as Σ_{p∈P} w(p) d(p, x)². Similarly, the k-median cost is defined as Σ_{p∈P} d(p, x). Both the k-means and k-median cost functions are known to be NP-hard to minimize (see for example [2]). For both objectives, there exist several readily available polynomial-time algorithms that achieve constant approximation solutions (see for example [16, 18]).

In distributed clustering, we consider a set of n nodes V = {v_i, 1 ≤ i ≤ n} which communicate on an undirected connected graph G = (V, E) with m = |E| edges. More precisely, an edge (v_i, v_j) ∈ E indicates that v_i and v_j can communicate with each other. Here we measure the communication cost in the number of points transmitted, and assume for simplicity that there is no latency in the communication. On each node v_i there is a local data set P_i, and the global data set is P = ∪_{i=1}^n P_i. The goal is to find a set of k centers x which optimize cost(P, x) while keeping the computation efficient and the communication cost as low as possible. Our focus is to reduce the communication cost while preserving theoretical guarantees for approximating clustering cost.

Coresets: For the distributed clustering task, a natural approach to avoid broadcasting raw data is to generate a local summary of the relevant information. If each site computes a summary for their own data set and then communicates this to a central coordinator, a solution can be computed from a much smaller amount of data, drastically reducing the communication. In the centralized setting, the idea of summarization with respect to the clustering task is captured by the concept of coresets [13, 8].
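The cost functions defined above are straightforward to compute directly; a minimal sketch for weighted point sets (function names are ours):

```python
import math

def dist(p, q):
    """Euclidean distance d(p, q)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans_cost(P, centers, w=None):
    """Weighted k-means cost: sum over p of w(p) * d(p, x)^2,
    where d(p, x) is the distance to the nearest center."""
    if w is None:
        w = [1.0] * len(P)
    return sum(wi * min(dist(p, x) for x in centers) ** 2
               for p, wi in zip(P, w))

def kmedian_cost(P, centers, w=None):
    """Weighted k-median cost: sum over p of w(p) * d(p, x)."""
    if w is None:
        w = [1.0] * len(P)
    return sum(wi * min(dist(p, x) for x in centers)
               for p, wi in zip(P, w))

P = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
centers = [(0.0, 0.0), (10.0, 0.0)]
assert kmeans_cost(P, centers) == 1.0   # only (1, 0) pays, d^2 = 1
assert kmedian_cost(P, centers) == 1.0
```

The weighted versions are what the coreset guarantee in Definition 1 below is stated against.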
A coreset is a set of weighted points whose cost approximates the cost of the original data for any set of k centers. The formal definition is:

Definition 1 (coreset). An ϵ-coreset for a set of points P with respect to a center-based cost function is a set of points S and a set of weights w : S → R such that for any set of centers x, we have (1 − ϵ) cost(P, x) ≤ Σ_{p∈S} w(p) cost(p, x) ≤ (1 + ϵ) cost(P, x).

In the centralized setting, many coreset construction algorithms have been proposed for k-median, k-means and some other cost functions. For example, for points in R^d, algorithms in [8] construct coresets of size ˜O(kd/ϵ⁴) for k-means and coresets of size ˜O(kd/ϵ²) for k-median. In the distributed setting, it is natural to ask whether there exists an algorithm that constructs a small coreset for the entire point set but still has low communication cost. Note that the union of coresets for multiple data sets is a coreset for the union of the data sets. The immediate construction of combining the local coresets from each node would produce a global coreset whose size is larger by a factor of n, greatly increasing the communication complexity. We present a distributed algorithm which constructs a global coreset the same size as the centralized construction and only needs a single value¹ communicated to each node. This serves as the basis for our distributed clustering algorithm.

3 Distributed Coreset Construction

Here we design a distributed coreset construction algorithm for k-means and k-median. The underlying technique can be extended to other additive clustering objectives such as k-line median. To gain some intuition on the distributed coreset construction algorithm, we briefly review the construction algorithm of [8] in the centralized setting. The coreset is constructed by computing a constant approximation solution for the entire data set, and then sampling points proportional to their contributions to the cost of this solution.
Intuitively, the points close to the nearest centers can be approximately represented by the centers, while points far away cannot be well represented. Thus, points should be sampled with probability proportional to their contributions to the cost. Directly adapting the algorithm to the distributed setting would require computing a constant approximation solution for the entire data set. We show that a global coreset can be constructed in a distributed fashion by estimating the weight of the entire data set with the sum of local approximations. With this approach, it suffices for nodes to communicate the total costs of their local solutions.

Algorithm 1 Communication aware distributed coreset construction
Input: Local datasets {P_i, 1 ≤ i ≤ n}, parameter t (number of points to be sampled).
Round 1: on each node v_i ∈ V
• Compute a constant approximation B_i for P_i. Communicate cost(P_i, B_i) to all other nodes.
Round 2: on each node v_i ∈ V
• Set t_i = t · cost(P_i, B_i) / Σ_{j=1}^n cost(P_j, B_j) and m_p = cost(p, B_i) for all p ∈ P_i.
• Pick a non-uniform random sample S_i of t_i points from P_i, where for every q ∈ S_i and p ∈ P_i, we have q = p with probability m_p / Σ_{z∈P_i} m_z. Let w_q = (Σ_i Σ_{z∈P_i} m_z) / (t · m_q) for each q ∈ S_i.
• For all b ∈ B_i, let P_b = {p ∈ P_i : d(p, b) = d(p, B_i)} and w_b = |P_b| − Σ_{q∈P_b∩S} w_q.
Output: Distributed coreset: points S_i ∪ B_i with weights {w_q : q ∈ S_i ∪ B_i}, 1 ≤ i ≤ n.

Theorem 1. For distributed k-means and k-median clustering on a graph, there exists an algorithm such that with probability at least 1 − δ, the union of its output on all nodes is an ϵ-coreset for P = ∪_{i=1}^n P_i. The size of the coreset is O((1/ϵ⁴)(kd + log(1/δ)) + nk log(nk/δ)) for k-means, and O((1/ϵ²)(kd + log(1/δ)) + nk) for k-median. The total communication cost is O(mn).

¹The value that is communicated is the sum of the costs of approximations to the local optimal clustering. This is guaranteed to be no more than a constant factor times larger than the optimal cost.
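A compact single-process sketch of Algorithm 1, simulating the n nodes in one loop. The constant-approximation subroutine is replaced here by a stand-in that simply uses the first k local points as centers, so this illustrates only the sampling and weighting logic, not a real approximation algorithm; all names are ours:

```python
import math
import random

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def local_approx(P, k):
    # Stand-in for a constant-approximation solver: first k points as centers.
    return P[:k]

def distributed_coreset(local_sets, k, t, rng=random):
    """Sketch of Algorithm 1: returns a list of (point, weight) pairs."""
    B = [local_approx(P, k) for P in local_sets]              # Round 1
    costs = [sum(min(dist(p, b) for b in Bi) for p in P)
             for P, Bi in zip(local_sets, B)]
    total = sum(costs)                                        # = sum_i sum_z m_z
    coreset = []
    for P, Bi, ci in zip(local_sets, B, costs):               # Round 2
        m = {p: min(dist(p, b) for b in Bi) for p in P}       # m_p = cost(p, B_i)
        t_i = max(1, round(t * ci / total)) if ci > 0 else 0
        S = (rng.choices(P, weights=[m[p] for p in P], k=t_i)
             if ci > 0 else [])
        sampled = [(q, total / (t * m[q])) for q in S]        # w_q
        coreset.extend(sampled)
        for b in Bi:                                          # weighted centers
            Pb = [p for p in P if min(dist(p, x) for x in Bi) == dist(p, b)]
            w_b = len(Pb) - sum(w for q, w in sampled if q in Pb)
            coreset.append((b, w_b))
    return coreset

# Three simulated nodes with 20 random 2-D points each.
rng = random.Random(0)
sets = [[(rng.random(), rng.random()) for _ in range(20)] for _ in range(3)]
coreset = distributed_coreset(sets, k=2, t=10, rng=rng)
# The center weights telescope: total coreset weight equals |P| = 60.
assert abs(sum(w for _, w in coreset) - 60.0) < 1e-6
```

Note how little coordination Round 2 needs: each node only uses its own data plus the single scalar `total`, exactly the point of Theorem 1.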
As described below, the distributed coreset construction can be achieved by using Algorithm 1 with appropriate t, namely O((1/ϵ⁴)(kd + log(1/δ)) + nk log(nk/δ)) for k-means and O((1/ϵ²)(kd + log(1/δ))) for k-median. Due to space limitation, we describe a proof sketch highlighting the intuition and provide the details in the supplementary material.

Proof Sketch of Theorem 1: The analysis relies on the definition of the pseudo-dimension of a function space and a sampling lemma.

Definition 2 ([19, 8]). Let F be a finite set of functions from a set P to R_{≥0}. For f ∈ F, let B(f, r) = {p : f(p) ≤ r}. The dimension of the function space dim(F, P) is the smallest integer d such that for any G ⊆ P, |{G ∩ B(f, r) : f ∈ F, r ≥ 0}| ≤ |G|^d.

Suppose we draw a sample S according to {m_p : p ∈ P}, namely for each q ∈ S and p ∈ P, q = p with probability m_p / Σ_{z∈P} m_z. Set the weights of the points as w_p = (Σ_{z∈P} m_z) / (m_p |S|) for p ∈ P. Then for any f ∈ F, the expectation of the weighted cost of S equals the cost of the original data P, since

E[Σ_{q∈S} w_q f(q)] = Σ_{q∈S} E[w_q f(q)] = Σ_{q∈S} Σ_{p∈P} Pr[q = p] w_p f(p) = Σ_{p∈P} f(p).

If the sample size is large enough, then we also have concentration for any f ∈ F. The lemma is implicit in [8] and we include the proof in the supplementary material.

Lemma 1. Fix a set F of functions f : P → R_{≥0}. Let S be a sample drawn i.i.d. from P according to {m_p ∈ R_{≥0} : p ∈ P}: for each q ∈ S and p ∈ P, q = p with probability m_p / Σ_{z∈P} m_z. Let w_p = (Σ_{z∈P} m_z) / (m_p |S|) for p ∈ P. For a sufficiently large c, if |S| ≥ (c/ϵ²)(dim(F, P) + log(1/δ)), then with probability at least 1 − δ,

∀f ∈ F : |Σ_{p∈P} f(p) − Σ_{q∈S} w_q f(q)| ≤ ϵ (Σ_{p∈P} m_p) max_{p∈P} (f(p) / m_p).

To get a small bound on the difference between Σ_{p∈P} f(p) and Σ_{q∈S} w_q f(q), we need to choose m_p such that Σ_{p∈P} m_p is small and max_{p∈P} f(p)/m_p is bounded. More precisely, if we choose m_p = max_{f∈F} f(p), then the difference is bounded by ϵ Σ_{p∈P} m_p.
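The unbiasedness computation above is easy to check numerically: sample indices with probability proportional to m_p, weight each draw by (Σ_z m_z)/(m_p |S|), and the weighted sum of f matches Σ_p f(p) in expectation. A small simulation (names are ours):

```python
import random

def weighted_sample_sum(f, m, size, rng=random):
    """Estimate sum_p f(p) by sampling p with probability m_p / sum(m)
    and weighting each draw by w_p = sum(m) / (m_p * size), as in Lemma 1."""
    idx = list(range(len(m)))
    total_m = sum(m)
    S = rng.choices(idx, weights=m, k=size)
    return sum(total_m / (m[q] * size) * f(q) for q in S)

f = lambda p: (p + 1) ** 2            # an arbitrary nonnegative function
m = [1.0, 2.0, 3.0, 4.0]              # sampling weights m_p
exact = sum(f(p) for p in range(4))   # 1 + 4 + 9 + 16 = 30
est = sum(weighted_sample_sum(f, m, 100) for _ in range(2000)) / 2000
assert abs(est - exact) < 1.0         # unbiased: concentrates around 30
```

The variance of the estimate depends on how well m_p upper-bounds f(p), which is exactly why the proof chooses m_p = max_f f(p).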
We first consider the centralized setting and review how [8] applied the lemma to construct a coreset for k-median as in Definition 1. A natural approach is to apply this lemma directly to the cost f_x(p) := cost(p, x). The problem is that a suitable upper bound m_p is not available for cost(p, x). However, we can still apply the lemma to a different set of functions defined as follows. Let b_p denote the closest center to p in the approximation solution. Aiming to approximate the error Σ_p [cost(p, x) − cost(b_p, x)] rather than to approximate Σ_p cost(p, x) directly, we define f_x(p) := cost(p, x) − cost(b_p, x) + cost(p, b_p), where cost(p, b_p) is added so that f_x(p) ≥ 0. Since 0 ≤ f_x(p) ≤ 2 cost(p, b_p), we can apply the lemma with m_p = 2 cost(p, b_p). It bounds the difference |Σ_{p∈P} f_x(p) − Σ_{q∈S} w_q f_x(q)| by 2ϵ Σ_{p∈P} cost(p, b_p), so we have an O(ϵ)-approximation. Note that Σ_{p∈P} f_x(p) − Σ_{q∈S} w_q f_x(q) does not equal Σ_{p∈P} cost(p, x) − Σ_{q∈S} w_q cost(q, x). However, it equals the difference between Σ_{p∈P} cost(p, x) and a weighted cost of the sampled points and the centers in the approximation solution. To get a coreset as in Definition 1, we need to add the centers of the approximation solution with specific weights to the coreset. Then when the sample is sufficiently large, the union of the sampled points and the centers is an ϵ-coreset.

Our key contribution in this paper is to show that in the distributed setting, it suffices to choose b_p from the local approximation solution for the local dataset containing p, rather than from an approximation solution for the global dataset. Furthermore, the sampling and the weighting of the coreset points can be done in a local manner. In the following, we provide a formal verification of our discussion above. We have the following lemma for k-median with F = {f_x : f_x(p) = d(p, x) − d(b_p, x) + d(p, b_p), x ∈ (R^d)^k}.

Lemma 2.
For k-median, the output of Algorithm 1 is an ϵ-coreset with probability at least 1 − δ, if t ≥ (c/ϵ²)(dim(F, P) + log(1/δ)) for a sufficiently large constant c.

Proof Sketch of Lemma 2: We want to show that for any set of centers x, the true cost for using these centers is well approximated by the cost on the weighted coreset. Note that our coreset has two types of points: sampled points q ∈ S = ∪_{i=1}^n S_i with weight w_q := (Σ_{z∈P} m_z) / (m_q |S|), and local solution centers b ∈ B = ∪_{i=1}^n B_i with weight w_b := |P_b| − Σ_{q∈S∩P_b} w_q. We use b_p to represent the nearest center to p in the local approximation solution. We use P_b to represent the set of points which have b as their closest center in the local approximation solution. As mentioned above, we construct f_x(p) to be the difference between the cost of p and the cost of b_p so that Lemma 1 can be applied. Note that the centers are weighted such that

Σ_{b∈B} w_b d(b, x) = Σ_{b∈B} |P_b| d(b, x) − Σ_{b∈B} Σ_{q∈S∩P_b} w_q d(b, x) = Σ_{p∈P} d(b_p, x) − Σ_{q∈S} w_q d(b_q, x).

Taken together with the fact that Σ_{p∈P} m_p = Σ_{q∈S} w_q m_q, we can show that

|Σ_{p∈P} d(p, x) − Σ_{q∈S∪B} w_q d(q, x)| = |Σ_{p∈P} f_x(p) − Σ_{q∈S} w_q f_x(q)|.

Note that 0 ≤ f_x(p) ≤ 2 d(p, b_p) by the triangle inequality, and S is sufficiently large and chosen according to weights m_p = d(p, b_p), so the conditions of Lemma 1 are met. Thus we can conclude that |Σ_{p∈P} d(p, x) − Σ_{q∈S∪B} w_q d(q, x)| ≤ O(ϵ) Σ_{p∈P} d(p, x), as desired.

In [8] it is shown that dim(F, P) = O(kd). Therefore, by Lemma 2, when |S| ≥ O((1/ϵ²)(kd + log(1/δ))), the weighted cost of S ∪ B approximates the k-median cost of P for any set of centers, and (S ∪ B, w) becomes an ϵ-coreset for P. The total communication cost is bounded by O(mn), since even in the most general case where every node only knows its neighbors, we can broadcast the local costs with O(mn) communication (see Algorithm 3).
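The O(mn) broadcast of local costs uses the flooding routine given later as Algorithm 3 (Message-Passing): every node forwards each previously unseen message to all of its neighbors. A minimal simulation of that scheme (names are ours):

```python
from collections import defaultdict, deque

def flood_broadcast(edges, messages):
    """Simulate flooding on an undirected connected graph: each node
    starts with its own message and forwards every newly received
    message to all neighbors once. Returns the set of messages each
    node ends up with, plus the number of point-to-point transmissions."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    seen = {u: {messages[u]} for u in nbrs}
    queue = deque((u, messages[u]) for u in nbrs)  # (forwarder, message)
    transmissions = 0
    while queue:
        u, msg = queue.popleft()
        for v in nbrs[u]:
            transmissions += 1                     # one copy crosses edge (u, v)
            if msg not in seen[v]:
                seen[v].add(msg)
                queue.append((v, msg))
    return seen, transmissions

# A path graph on 4 nodes: every node learns all 4 local costs.
edges = [(0, 1), (1, 2), (2, 3)]
costs = {0: "c0", 1: "c1", 2: "c2", 3: "c3"}
seen, tx = flood_broadcast(edges, costs)
assert all(seen[u] == {"c0", "c1", "c2", "c3"} for u in range(4))
```

Each message crosses each edge at most twice (once per direction), matching the paper's observation and giving O(mn) total communication for n local costs.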
Proof Sketch for k-means: Similar methods prove that for k-means, when t = O((1/ϵ⁴)(kd + log(1/δ)) + nk log(nk/δ)), the algorithm constructs an ϵ-coreset with probability at least 1 − δ. The key difference is that the triangle inequality does not apply directly to the k-means cost, and so the error |cost(p, x) − cost(b_p, x)|, and thus f_x(p), is not bounded. The main change to the analysis is that we divide the points into two categories: good points whose costs approximately satisfy the triangle inequality (up to a factor of 1/ϵ) and bad points. The good points for a fixed set of centers x are defined as G(x) = {p ∈ P : |cost(p, x) − cost(b_p, x)| ≤ ∆_p}, where the upper bound is ∆_p = cost(p, b_p)/ϵ, and the analysis follows as in Lemma 2. For bad points we can show that the difference in cost must still be small, namely O(ϵ min{cost(p, x), cost(b_p, x)}). More formally, let f_x(p) = cost(p, x) − cost(b_p, x) + ∆_p, and let g_x(p) be f_x(p) if p ∈ G(x) and 0 otherwise. Then Σ_{p∈P} cost(p, x) − Σ_{q∈S∪B} w_q cost(q, x) is decomposed into three terms:

[Σ_{p∈P} g_x(p) − Σ_{q∈S} w_q g_x(q)]  (A)   +   Σ_{p∈P\G(x)} f_x(p)  (B)   −   Σ_{q∈S\G(x)} w_q f_x(q)  (C)

Algorithm 2 Distributed clustering on a graph
Input: {P_i, 1 ≤ i ≤ n}: local datasets; {N_i, 1 ≤ i ≤ n}: the neighbors of v_i; A_α: an α-approximation algorithm for weighted clustering instances.
Round 1: on each node v_i
• Construct its local portion D_i of an ϵ/2-coreset by Algorithm 1, using Message-Passing for communicating the local costs.
Round 2: on each node v_i
• Call Message-Passing(D_i, N_i). Compute x = A_α(∪_j D_j).
Output: x

Algorithm 3 Message-Passing(I_i, N_i)
Input: I_i is the message, N_i are the neighbors.
• Let R_i denote the information received. Initialize R_i = {I_i}, and send I_i to N_i.
• While R_i ≠ {I_j, 1 ≤ j ≤ n}: if a message I_j ∉ R_i is received, then let R_i = R_i ∪ {I_j} and send I_j to N_i.

Lemma 1 bounds (A) by O(ϵ) cost(P, x), but we need an accuracy of ϵ² to compensate for the 1/ϵ factor in the upper bound of f_x(p).
This leads to an O(1/ϵ⁴) factor in the sample complexity. For (B) and (C), |cost(p, x) − cost(b_p, x)| > ∆_p since p ∉ G(x). This can be used to show that p and b_p are close to each other and far away from x, and thus |cost(p, x) − cost(b_p, x)| is an O(ϵ) fraction of cost(p, x) and cost(b_p, x). This fact bounds (B) by O(ϵ) cost(P, x). It also bounds (C), noting that E[Σ_{q∈P_b∩S} w_q] = |P_b|, and thus Σ_{q∈P_b∩S} w_q ≤ 2|P_b| when t ≥ O(nk log(nk/δ)). The proof is completed by bounding the function space dimension by O(kd) as in [8].

4 Effect of Network Topology on Communication Cost

If there is a central coordinator in the communication graph, then we can run the distributed coreset construction algorithm and send the local portions of the coreset to the coordinator, which can perform the clustering task. The total communication cost is just the size of the coreset. In this section, we consider distributed clustering over arbitrary connected topologies. We propose to use a message passing approach for collecting information for coreset construction and for sharing the local portions of the coreset. The details are presented in Algorithms 2 and 3. Since each piece of the coreset is shared at most twice across any particular edge in message passing, we have

Theorem 2. Given an α-approximation algorithm for weighted k-means (k-median respectively) as a subroutine, there exists an algorithm that with probability at least 1 − δ outputs a (1 + ϵ)α-approximation solution for distributed k-means (k-median respectively). The communication cost is O(m((1/ϵ⁴)(kd + log(1/δ)) + nk log(nk/δ))) for k-means, and O(m((1/ϵ²)(kd + log(1/δ)) + nk)) for k-median.

In contrast, an approach where each node constructs an ϵ-coreset for k-means and sends it to the other nodes incurs a communication cost of ˜O(mnkd/ϵ⁴). Our algorithm significantly reduces this. Our algorithm can also be applied on a rooted tree: we can send the coreset portions to the root, which then applies an approximation algorithm. Since each portion is transmitted at most h times, we have
Since each portion are transmitted at most h times, Theorem 3. Given an α-approximation algorithm for weighted k-means (k-median respectively) as a subroutine, there exists an algorithm that with probability at least 1 −δ outputs a (1 + ϵ)αapproximation solution for distributed k-means (k-median respectively) clustering on a rooted tree of height h. The total communication cost is O(h( 1 ϵ4 (kd + log 1 δ ) + nk log nk δ )) for k-means, and O(h( 1 ϵ2 (kd + log 1 δ ) + nk)) for k-median. Our approach improves the cost of ˜O( nh4kd ϵ4 ) for k-means and the cost of ˜O( nh2kd ϵ2 ) for k-median in [23] 2. The algorithm in [23] builds on each node a coreset for the union of coresets from its 2 Their algorithm used coreset construction as a subroutine. The construction algorithm they used builds coreset of size ˜O( nkh ϵd log |P|). Throughout this paper, when we compare to [23] we assume they use the coreset construction technique of [8] to reduce their coreset size and communication cost. 6 children, and thus needs O(ϵ/h) accuracy to prevent the accumulation of errors. Since the coreset construction subroutine has quadratic dependence on 1/ϵ for k-median (quartic for k-means), the algorithm then has quadratic dependence on h (quartic for k-means). Our algorithm does not build coreset on top of coresets, resulting in a better dependence on the height of the tree h. In a general graph, any rooted tree will have its height h at least as large as half the diameter. For sensors in a grid network, this implies h = Ω(√n). In this case, our algorithm gains a significant improvement over existing algorithms. 5 Experiments Here we evaluate the effectiveness of our algorithm and compare it to other distributed coreset algorithms. We present the k-means cost of the solution by our algorithm with varying communication cost, and compare to those of other algorithms when they use the same amount of communication. 
Data sets: We present results on YearPredictionMSD (515,345 points in R^90, k = 50). Similar results are observed on five other datasets, which are presented in the supplementary material.

Experimental Methodology: We first generate a communication graph connecting local sites, and then partition the data into local data sets. The algorithms were evaluated on Erdős–Rényi random graphs with p = 0.3, grid graphs, and graphs generated by the preferential attachment mechanism [1]. We used 100 sites for YearPredictionMSD. The data is then distributed over the local sites. There are four partition methods: uniform, similarity-based, weighted, and degree-based. In all methods, each example is distributed to the local sites with probability proportional to the site's weight. In uniform partition, the sites have equal weights; in similarity-based partition, each site has an associated data point randomly selected from the global data and the weight is the similarity to the associated point; in weighted partition, the weights are chosen from |N(0, 1)|; in degree-based, the weights are the sites' degrees. To measure the quality of the coreset generated, we run Lloyd's algorithm on the coreset and the global data respectively to get two solutions, and compute the ratio between the costs of the two solutions over the global data. The average ratio over 30 runs is then reported. We compare our algorithm with COMBINE, the method of combining coresets from local data sets, and with the algorithm of [23] (Zhang et al.). When running the algorithm of Zhang et al., we restrict the network to a spanning tree by picking a root uniformly at random and performing a breadth-first search.

Results: Figure 2 shows the results over different network topologies and partition methods. We observe that the algorithms perform well with much smaller coreset sizes than predicted by the theoretical bounds.
For example, to get a 1.1 cost ratio, the coreset size, and thus the communication needed, is only 0.1% to 1% of the theoretical bound. In the uniform partition, our algorithm performs nearly the same as COMBINE. This is not surprising, since our algorithm reduces to the COMBINE algorithm when each local site has the same cost and the two algorithms use the same amount of communication: because the sizes of the local samples in our algorithm are proportional to the costs of the local solutions, it samples the same number of points from each local data set, which is equivalent to the COMBINE algorithm with the same amount of communication. In the similarity-based partition, similar results are observed, as it also leads to balanced local costs. However, when the local sites have significantly different costs (as in the weighted and degree-based partitions), our algorithm outperforms COMBINE. As observed in Figure 2, the costs of our solutions consistently improve over those of COMBINE by 2% to 5%; our algorithm thus saves 10% to 20% of the communication cost needed to achieve the same approximation ratio. Figure 3 shows the results over the spanning trees of the graphs. Our algorithm performs much better than the algorithm of Zhang et al., achieving about a 20% improvement in cost. This is because their algorithm needs larger coresets to prevent the accumulation of errors when constructing coresets from component coresets, and thus needs a higher communication cost to achieve the same approximation ratio.

Acknowledgements This work was supported by ONR grant N00014-09-1-0751, AFOSR grant FA9550-09-1-0538, and by a Google Research Award. We thank Le Song for generously allowing us to use his computer cluster.
Figure 2: k-means cost (normalized by baseline) vs. communication cost over graphs. The titles indicate the network topology and partition method. Panels: (a) random graph, uniform; (b) random graph, similarity-based; (c) random graph, weighted; (d) grid graph, similarity-based; (e) grid graph, weighted; (f) preferential graph, degree-based; each panel plots the k-means cost ratio of COMBINE and our algorithm against communication cost.

Figure 3: k-means cost (normalized by baseline) vs. communication cost over the spanning trees of the graphs. The titles indicate the network topology and partition method. Panels as in Figure 2, comparing Zhang et al. and our algorithm.

References
[1] R. Albert and A.-L. Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 2002.
[2] P. Awasthi and M. Balcan. Center based clustering: A foundational perspective. Survey chapter in Handbook of Cluster Analysis (manuscript), 2013.
[3] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In Proceedings of the Conference on Learning Theory, 2012.
[4] J. Considine, F. Li, G. Kollios, and J.
Byers. Approximate aggregation techniques for sensor databases. In Proceedings of the International Conference on Data Engineering, 2004.
[5] J. C. Corbett, J. Dean, M. Epstein, A. Fikes, C. Frost, J. Furman, S. Ghemawat, A. Gubarev, C. Heiser, P. Hochschild, et al. Spanner: Google's globally-distributed database. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation, 2012.
[6] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Efficient protocols for distributed classification and optimization. In Algorithmic Learning Theory, pages 154-168. Springer, 2012.
[7] S. Dutta, C. Gianella, and H. Kargupta. K-means clustering over peer-to-peer networks. In Proceedings of the International Workshop on High Performance and Distributed Mining, 2005.
[8] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2011.
[9] D. Feldman, A. Sugaya, and D. Rus. An effective coreset compression algorithm for large scale sensor networks. In Proceedings of the International Conference on Information Processing in Sensor Networks, 2012.
[10] G. Forman and B. Zhang. Distributed data clustering can be efficient and exact. ACM SIGKDD Explorations Newsletter, 2000.
[11] S. Greenhill and S. Venkatesh. Distributed query processing for mobile surveillance. In Proceedings of the International Conference on Multimedia, 2007.
[12] M. Greenwald and S. Khanna. Power-conserving computation of order-statistics over sensor networks. In Proceedings of the ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, 2004.
[13] S. Har-Peled and S. Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2004.
[14] E. Januzaj, H. Kriegel, and M. Pfeifle. Towards effective and efficient distributed clustering.
In Workshop on Clustering Large Data Sets in the IEEE International Conference on Data Mining, 2003.
[15] R. Kannan and S. Vempala. Nimble algorithms for cloud computing. arXiv preprint arXiv:1304.3162, 2013.
[16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. A local search approximation algorithm for k-means clustering. In Proceedings of the Annual Symposium on Computational Geometry, 2002.
[17] H. Kargupta, W. Huang, K. Sivakumar, and E. Johnson. Distributed clustering using collective principal component analysis. Knowledge and Information Systems, 2001.
[18] S. Li and O. Svensson. Approximating k-median via pseudo-approximation. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2013.
[19] Y. Li, P. M. Long, and A. Srinivasan. Improved bounds on the sample complexity of learning. In Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms, 2000.
[20] S. Mitra, M. Agrawal, A. Yadav, N. Carlsson, D. Eager, and A. Mahanti. Characterizing web-based video sharing workloads. ACM Transactions on the Web, 2011.
[21] C. Olston, J. Jiang, and J. Widom. Adaptive filters for continuous queries over distributed data streams. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 2003.
[22] D. Tasoulis and M. Vrahatis. Unsupervised distributed clustering. In Proceedings of the International Conference on Parallel and Distributed Computing and Networks, 2004.
[23] Q. Zhang, J. Liu, and W. Wang. Approximate clustering on distributed data streams. In Proceedings of the IEEE International Conference on Data Engineering, 2008.
A Novel Two-Step Method for Cross Language Representation Learning Min Xiao and Yuhong Guo Department of Computer and Information Sciences Temple University, Philadelphia, PA 19122, USA {minxiao, yuhong}@temple.edu Abstract Cross language text classification is an important learning task in natural language processing. A critical challenge of cross language learning arises from the fact that words of different languages are in disjoint feature spaces. In this paper, we propose a two-step representation learning method to bridge the feature spaces of different languages by exploiting a set of parallel bilingual documents. Specifically, we first formulate a matrix completion problem to produce a complete parallel document-term matrix for all documents in two languages, and then induce a low dimensional cross-lingual document representation by applying latent semantic indexing on the obtained matrix. We use a projected gradient descent algorithm to solve the formulated matrix completion problem with convergence guarantees. The proposed method is evaluated by conducting a set of experiments with cross language sentiment classification tasks on Amazon product reviews. The experimental results demonstrate that the proposed learning method outperforms a number of other cross language representation learning methods, especially when the number of parallel bilingual documents is small. 1 Introduction Cross language text classification is an important natural language processing task that exploits a large amount of labeled documents in an auxiliary source language to train a classification model for classifying documents in a target language where labeled data is scarce. An effective cross language learning system can greatly reduce the manual annotation effort in the target language for learning good classification models. 
Previous work in the literature has demonstrated successful performance of cross language learning systems on various cross language text classification problems, including multilingual document categorization [2], cross language fine-grained genre classification [14], and cross-lingual sentiment classification [18, 16]. The challenge of cross language text classification lies in the language barrier. That is, documents in different languages are expressed with different word vocabularies and thus have disjoint feature spaces. A variety of methods have been proposed in the literature to address cross language text classification by bridging the cross language gap, including transforming the training or test data from one language domain into the other language domain by using machine translation tools or bilingual lexicons [18, 6, 23], and constructing cross-lingual representations by using readily available auxiliary resources such as bilingual word pairs [16], comparable corpora [10, 20, 15], and other multilingual resources [3, 14]. In this paper, we propose a two-step learning method to induce cross-lingual feature representations for cross language text classification by exploiting a set of unlabeled parallel bilingual documents. First, we construct a concatenated bilingual document-term matrix where each document is represented in the concatenated vocabulary of the two languages. In such a matrix, a pair of parallel documents is represented as a row vector filled with observed word features from both the source language domain and the target language domain, while a non-parallel document in a single language is represented as a row vector filled with observed word features only from its own language, with missing values for the word features of the other language. We then learn the unobserved feature entries of this sparse matrix by formulating a matrix completion problem and solving it using a projected gradient descent optimization algorithm.
By doing so, we expect to automatically capture important and robust low-rank information based on the word co-occurrence patterns expressed both within each language and across languages. Next we perform latent semantic indexing over the recovered document-term matrix and induce a low-dimensional dense cross-lingual representation of the documents, on which standard monolingual classifiers can be applied. To evaluate the effectiveness of the proposed learning method, we conduct a set of experiments with cross language sentiment classification tasks on multilingual Amazon product reviews. The empirical results show that the proposed method significantly outperforms a number of cross language learning methods. Moreover, the proposed method produces good performance even with a very small number of unlabeled parallel bilingual documents. 2 Related Work Many works in the literature address cross language text classification by first translating documents from one language domain into the other one via machine translation tools or bilingual lexicons and then applying standard monolingual classification algorithms [18, 23], domain adaptation techniques [17, 9, 21], or multi-view learning methods [22, 2, 1, 13, 12]. For example, [17] proposed an expectation-maximization based self-training method, which first initializes a monolingual classifier in the target language with the translated labeled documents from the source language and then retrains the model by adding unlabeled documents from the target language with automatically predicted labels. [21] proposed an instance and feature bi-weighting method by first translating documents from one language domain to the other one and then simultaneously re-weighting instances and features to address the distribution difference across domains. [22] proposed to use the co-training method for cross language sentiment classification on parallel corpora. 
[2] proposed a multi-view majority voting method to categorize documents in multiple views produced from machine translation tools. [1] proposed a multi-view co-classification method for multilingual document categorization, which minimizes both the training loss for each view and the prediction disagreement between different language views. Our proposed approach in this paper shares similarity with these approaches in exploiting parallel data produced by machine translation tools. But our approach only requires a small set of unlabeled parallel documents, while these approaches require at least translating all the training documents in one language domain. Another important group of cross language text classification methods in the literature construct cross-lingual representations by exploiting bilingual word pairs [16, 7], parallel corpora [10, 20, 15, 19, 8], and other resources [3, 14]. [16] proposed a cross-language structural correspondence learning method to induce language-independent features by using pivot word pairs produced by word translation oracles. [10] proposed a cross-language latent semantic indexing (CL-LSI) method to induce cross-lingual representations by performing LSI over a dual-language document-term matrix, where each dual-language document contains its original words and the corresponding translation text. [20] proposed a cross-lingual kernel canonical correlation analysis (CL-KCCA) method. It first learns two projections (one for each language) by conducting kernel canonical correlation analysis over a paired bilingual corpus and then uses them to project documents from language-specific feature spaces to the shared multilingual semantic feature space. 
[15] employed cross-lingual oriented principal component analysis (CL-OPCA) over concatenated parallel documents to learn a multilingual projection by simultaneously minimizing the projected distance between parallel documents and maximizing the projected covariance of documents across languages. Some other work uses multilingual topic models, such as coupled probabilistic latent semantic analysis and bilingual latent Dirichlet allocation, to extract latent cross-lingual topics as interlingual representations [19]. [14] proposed to use language-specific part-of-speech (POS) taggers to tag each word and then map those language-specific POS tags to twelve universal POS tags as interlingual features for cross language fine-grained genre classification. Similar to multilingual semantic representation learning approaches such as CL-LSI, CL-KCCA and CL-OPCA, our two-step learning method exploits parallel documents. But unlike these methods, which apply operations such as LSI, KCCA, and OPCA directly on the original concatenated document-term matrix, our method first fills the missing entries of the document-term matrix using matrix completion, and then performs LSI over the recovered low-rank matrix.

3 Approach

In this section, we present the proposed two-step learning method for learning cross-lingual document representations. We assume a subset of unlabeled parallel documents from the two languages is given, which can be used to capture the co-occurrence of terms across languages and build connections between the vocabulary sets of the two languages. We first construct a unified document-term matrix for all documents from the auxiliary source language domain and the target language domain, whose columns correspond to the word features from the unified vocabulary set of the two languages. In this matrix, each pair of parallel documents is represented as a fully observed row vector, and each non-parallel document is represented as a partially observed row vector in which only the entries corresponding to words in its own language's vocabulary are observed. Instead of learning a low-dimensional cross-lingual document representation from this matrix directly, we perform a two-step learning procedure: first, we learn a low-rank document-term matrix by automatically filling in the missing entries via matrix completion; next, we produce cross-lingual representations by applying the latent semantic indexing method over the learned matrix.

Let $M^0 \in \mathbb{R}^{t \times d}$ be the unified document-term matrix, which is partially filled with observed nonnegative feature values, where $t$ is the number of documents and $d$ is the size of the unified vocabulary. We use $\Omega$ to denote the index set of the observed features in $M^0$, such that $(i,j) \in \Omega$ if and only if $M^0_{ij}$ is observed, and use $\hat\Omega$ to denote the index set of the missing features in $M^0$, such that $(i,j) \in \hat\Omega$ if and only if $M^0_{ij}$ is unobserved. For the $i$-th document in the data set from one language, if the document does not have a parallel translation in the other language, then all the features in row $M^0_{i:}$ corresponding to the words in the vocabulary of the other language are viewed as missing features.

3.1 Matrix Completion

Note that the document-term matrix $M^0$ has a large fraction of missing features, and the only bridge between the vocabulary sets of the two languages is the small set of parallel bilingual documents. Learning from this partially observed matrix directly by treating missing features as zeros would lose a lot of information. On the other hand, a fully observed document-term matrix is naturally low-rank and sparse, as the vocabulary set is typically very large and each document contains only a small fraction of the words in the vocabulary.
Thus we propose to automatically fill the missing entries of $M^0$ based on the feature co-occurrence information expressed in the observed data, by conducting matrix completion to recover a low-rank and sparse matrix. Specifically, we formulate matrix completion as the following optimization problem:

$\min_M \ \mathrm{rank}(M) + \mu\|M\|_1 \quad \text{subject to} \quad M_{ij} = M^0_{ij}, \ \forall (i,j) \in \Omega; \quad M_{ij} \ge 0, \ \forall (i,j) \in \hat\Omega$   (1)

where $\|\cdot\|_1$ denotes the $\ell_1$ norm and is used to enforce sparsity. The rank function, however, is non-convex and difficult to optimize. We relax it to its convex envelope, the trace norm $\|M\|_*$. Moreover, instead of using the equality constraints in (1), we propose to minimize a regularization loss function, $c(M_{ij}, M^0_{ij})$, to cope with observation noise on all the observed feature entries. Meanwhile, we also add regularization terms over the missing features, $c(M_{ij}, 0), \ \forall (i,j) \in \hat\Omega$, to avoid overfitting. In particular, we use the least squared loss function $c(x, y) = \frac{1}{2}\|x - y\|^2$. Hence we obtain the following relaxed convex optimization problem for matrix completion:

$\min_M \ \gamma\|M\|_* + \mu\|M\|_1 + \sum_{(i,j)\in\Omega} c(M_{ij}, M^0_{ij}) + \rho \sum_{(i,j)\in\hat\Omega} c(M_{ij}, 0) \quad \text{subject to} \quad M \ge 0$   (2)

With the nonnegativity constraint $M \ge 0$, the non-smooth $\ell_1$ regularizer in the objective of (2) is equivalent to the smooth linear function $\|M\|_1 = \sum_{ij} M_{ij}$. Nevertheless, with the non-smooth trace norm $\|M\|_*$, problem (2) remains convex but non-smooth. Moreover, the matrix $M$ in cross-language learning tasks is typically very large, so a scalable optimization algorithm is needed. In the next section, we present a scalable projected gradient descent algorithm to solve this minimization problem.

Algorithm 1 Projected Gradient Descent Algorithm
Input: $M^0$, $\gamma$, $\rho \le 1$, $0 < \tau < \min(2, 2/\rho)$, $\mu$.
Initialize $M$ as the nonnegative projection of the rank-1 approximation of $M^0$.
while not converged do
  1. gradient descent: $M = M - \tau\nabla g(M)$.
  2. shrink: $M = S_{\tau\gamma}(M)$.
  3.
project onto feasible set: $M = \max(M, 0)$.
end while

3.2 Latent Semantic Indexing

After solving (2) for an optimal low-rank solution $M^*$, we can use each row of the sparse matrix $M^*$ as a vector representation of each document in the concatenated vocabulary space of the two languages. However, exploiting such a matrix representation directly for cross language text classification lacks sufficient capacity for handling feature noise and sparseness, as each document is represented using only a very small subset of the words in the vocabulary. We thus propose to apply a latent semantic indexing (LSI) method on $M^*$ to produce a low-dimensional semantic representation of the data. LSI uses singular value decomposition to discover the important associative relationships of word features [10] and to create a reduced-dimension feature space. Specifically, we first perform singular value decomposition over $M^*$, $M^* = USV^\top$, and then obtain a low-dimensional representation matrix $Z$ via the projection $Z = M^* V_k$, where $V_k$ contains the top $k$ right singular vectors of $M^*$. Cross-language text classification can then be conducted over $Z$ using monolingual classifiers.

4 Optimization Algorithm

4.1 Projected Gradient Descent Algorithm

A number of algorithms have been developed to solve matrix completion problems in the literature [4, 11]. We use a projected gradient descent algorithm to solve the non-smooth convex optimization problem (2). This algorithm treats the objective function $f(M)$ in (2) as the composition of a non-smooth term and a convex smooth term, $f(M) = \gamma\|M\|_* + g(M)$, where

$g(M) = \mu\|M\|_1 + \sum_{(i,j)\in\Omega} c(M_{ij}, M^0_{ij}) + \rho \sum_{(i,j)\in\hat\Omega} c(M_{ij}, 0).$   (3)

It first initializes $M$ as the nonnegative projection of the rank-1 approximation of $M^0$, and then iteratively updates $M$ using a projected gradient descent procedure. In each iteration, we perform three steps to update $M$.
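The three steps of Algorithm 1, together with the LSI projection of Section 3.2, can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation; the default parameter values and the toy data are illustrative only.

```python
import numpy as np

def shrink(M, nu):
    """Shrinkage operator S_nu: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - nu, 0.0)) @ Vt

def complete(M0, observed, gamma=0.1, mu=1e-6, rho=1e-4, tau=1.0, iters=100):
    """Projected gradient descent for the matrix completion step.  M0 holds
    the observed entries (zeros elsewhere); `observed` is the boolean mask."""
    Y = observed.astype(float)
    # initialize with the nonnegative projection of the rank-1 approximation
    U, s, Vt = np.linalg.svd(M0 * Y, full_matrices=False)
    M = np.maximum(s[0] * np.outer(U[:, 0], Vt[0]), 0.0)
    for _ in range(iters):
        grad = mu + (M - M0) * Y + rho * M * (1.0 - Y)  # gradient of g
        M = shrink(M - tau * grad, tau * gamma)         # rank-reducing shrink
        M = np.maximum(M, 0.0)                          # project onto M >= 0
    return M

def lsi(M, k):
    """LSI step: project onto the top-k right singular vectors, Z = M V_k."""
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return M @ Vt[:k].T

# toy usage: a nonnegative rank-1 "document-term" matrix, ~30% entries missing
rng = np.random.default_rng(0)
truth = np.outer(rng.random(8), rng.random(6))
mask = rng.random(truth.shape) < 0.7
M_star = complete(truth * mask, mask)
Z = lsi(M_star, k=2)
```

The full SVD in every iteration costs $O(\min(t,d)\,td)$ per step; for document-term matrices of realistic size a sparse or randomized SVD would be needed in practice.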
First, we take a gradient descent step $M = M - \tau\nabla g(M)$ with stepsize $\tau$ and gradient

$\nabla g(M) = \mu E + (M - M^0) \circ Y + \rho M \circ \hat Y$   (4)

where $E$ is a $t \times d$ matrix of all 1s; $Y$ and $\hat Y$ are $t \times d$ indicator matrices such that $Y_{ij} = 1$ if and only if $(i,j) \in \Omega$, and $\hat Y = E - Y$; and "$\circ$" denotes the Hadamard product. Next, we perform a shrinkage operation $M = S_\nu(M)$ on the resulting matrix to reduce its rank. The shrinkage operator is based on the singular value decomposition:

$S_\nu(M) = U\Sigma^{(\nu)}V^\top, \quad M = U\Sigma V^\top, \quad \Sigma^{(\nu)} = \max(\Sigma - \nu, 0),$   (5)

where $\nu = \tau\gamma$. Finally, we project the resulting matrix onto the nonnegative feasible set by $M = \max(M, 0)$. This update procedure provably converges to an optimal solution. The overall algorithm is given in Algorithm 1.

4.2 Convergence Analysis

Let $h(\cdot) = I(\cdot) - \tau\nabla g(\cdot)$ be the gradient descent operator used in the first step, let $P_C(\cdot) = \max(\cdot, 0)$ be the projection operator, and let $S_\nu(\cdot)$ be the shrinkage operator. Below we prove the convergence of the projected gradient descent algorithm.

Lemma 1. Let $E$ be a $t \times d$ matrix of all 1s, and $Q = E - \tau(Y + \rho\hat Y)$. For $\tau \in (0, \min(2, 2/\rho))$, the operator $h(\cdot)$ is non-expansive, i.e., for any $M, M' \in \mathbb{R}^{t\times d}$, $\|h(M) - h(M')\|_F \le \|M - M'\|_F$. Moreover, $\|h(M) - h(M')\|_F = \|M - M'\|_F$ if and only if $h(M) - h(M') = M - M'$.

Proof. Note that for $\tau \in (0, \min(2, 2/\rho))$, we have $-1 < Q_{ij} < 1$ for all $(i,j)$. Then, following the gradient definition in (4), we have $\|h(M) - h(M')\|_F =$
$\|(M - M') \circ Q\|_F = \Big(\sum_{ij} (M_{ij} - M'_{ij})^2 Q_{ij}^2\Big)^{1/2} \le \|M - M'\|_F.$

The inequalities become equalities if and only if $h(M) - h(M') = M - M'$.

Lemma 2. [11, Lemma 1] The shrinkage operator $S_\nu(\cdot)$ is non-expansive, i.e., for any $M, M' \in \mathbb{R}^{t\times d}$, $\|S_\nu(M) - S_\nu(M')\|_F \le \|M - M'\|_F$. Moreover, $\|S_\nu(M) - S_\nu(M')\|_F = \|M - M'\|_F$ if and only if $S_\nu(M) - S_\nu(M') = M - M'$.

Lemma 3. The projection operator $P_C(\cdot)$ is non-expansive, i.e., $\|P_C(M) - P_C(M')\|_F \le \|M - M'\|_F$. Moreover, $\|P_C(M) - P_C(M')\|_F = \|M - M'\|_F$ if and only if $P_C(M) - P_C(M') = M - M'$.

Proof. For any given entry index $(i,j)$, there are four cases:
• Case 1: $M_{ij} \ge 0$, $M'_{ij} \ge 0$. We have $(P_C(M_{ij}) - P_C(M'_{ij}))^2 = (M_{ij} - M'_{ij})^2$.
• Case 2: $M_{ij} \ge 0$, $M'_{ij} < 0$. We have $(P_C(M_{ij}) - P_C(M'_{ij}))^2 = M_{ij}^2 < (M_{ij} - M'_{ij})^2$.
• Case 3: $M_{ij} < 0$, $M'_{ij} \ge 0$. We have $(P_C(M_{ij}) - P_C(M'_{ij}))^2 = M'^2_{ij} < (M_{ij} - M'_{ij})^2$.
• Case 4: $M_{ij} < 0$, $M'_{ij} < 0$. We have $(P_C(M_{ij}) - P_C(M'_{ij}))^2 = 0 \le (M_{ij} - M'_{ij})^2$.
Therefore,

$\|P_C(M) - P_C(M')\|_F = \Big(\sum_{ij} (P_C(M_{ij}) - P_C(M'_{ij}))^2\Big)^{1/2} \le \Big(\sum_{ij} (M_{ij} - M'_{ij})^2\Big)^{1/2} = \|M - M'\|_F,$

and $\|P_C(M) - P_C(M')\|_F = \|M - M'\|_F$ if and only if $P_C(M) - P_C(M') = M - M'$.

Theorem 1. The sequence $\{M^k\}$ generated by the projected gradient descent iterations in Algorithm 1 with $0 < \tau < \min(2, 2/\rho)$ converges to $M^*$, an optimal solution of (2).

Proof. Since $h(\cdot)$, $S_\nu(\cdot)$ and $P_C(\cdot)$ are all non-expansive, the composite operator $P_C(S_\nu(h(\cdot)))$ is non-expansive as well. The theorem can then be proved following [11, Theorem 4].

5 Experiments

In this section, we evaluate the proposed two-step learning method by conducting extensive cross language sentiment classification experiments on multilingual Amazon product reviews.

5.1 Experimental Setting

Dataset We used the multilingual Amazon product reviews dataset [16], which contains three categories (Books (B), DVD (D), Music (M)) of product reviews in four different languages (English (E), French (F), German (G), Japanese (J)).
For each category of the product reviews, there are 2000 positive and 2000 negative English reviews, and 1000 positive and 1000 negative reviews for each of the other three languages. In addition, there are another 2000 unlabeled parallel reviews between English and each of the other three languages. Each review is preprocessed into a unigram bag-of-word feature vector with TF-IDF values. We focused on cross-lingual learning between English and the other three languages and constructed 18 cross language sentiment classification tasks (EFB, FEB, EFD, FED, EFM, FEM, EGB, GEB, EGD, GED, EGM, GEM, EJB, JEB, EJD, JED, EJM, JEM), each for one combination of source language, target language and category. For example, the task EFB uses English Books reviews as the source language data and French Books reviews as the target language data.

Table 1: Average classification accuracies (%) and standard deviations (%) over 10 runs for the 18 cross language sentiment classification tasks.
TASK  TBOW        CL-LSI      CL-KCCA     CL-OPCA     TSL
EFB   67.31±0.96  79.56±0.21  77.56±0.14  76.55±0.31  81.92±0.20
FEB   66.82±0.43  76.66±0.34  73.45±0.13  74.43±0.53  79.51±0.21
EFD   67.80±0.94  77.82±0.66  78.19±0.09  70.54±0.41  81.97±0.33
FED   66.15±0.65  76.61±0.25  74.93±0.07  72.49±0.47  78.09±0.32
EFM   67.84±0.43  75.39±0.40  78.24±0.12  73.69±0.49  79.30±0.30
FEM   66.08±0.52  76.33±0.27  73.38±0.12  73.46±0.50  78.53±0.46
EGB   67.23±0.68  77.59±0.21  79.14±0.12  74.72±0.54  79.22±0.31
GEB   67.16±0.55  77.64±0.19  74.15±0.09  74.78±0.39  78.65±0.23
EGD   66.79±0.80  79.22±0.22  76.73±0.10  74.59±0.66  81.34±0.24
GED   66.27±0.69  77.78±0.26  74.26±0.08  74.83±0.45  79.34±0.23
EGM   67.65±0.45  73.81±0.49  79.18±0.05  74.45±0.59  79.39±0.39
GEM   66.74±0.55  77.28±0.51  72.31±0.08  74.15±0.42  79.02±0.34
EJB   63.15±0.69  72.68±0.35  69.46±0.11  71.41±0.48  72.57±0.52
JEB   66.85±0.68  74.63±0.42  67.99±0.18  73.41±0.41  77.17±0.36
EJD   65.47±0.50  72.55±0.28  74.79±0.11  71.84±0.41  76.60±0.49
JED   66.42±0.55  75.18±0.27  72.44±0.16  75.42±0.52  79.01±0.50
EJM   67.62±0.75  73.44±0.50  73.54±0.11  74.96±0.86  76.21±0.40
JEM   66.51±0.51  72.38±0.50  70.00±0.18  72.64±0.66  77.15±0.58

Approaches We compared the proposed two-step learning (TSL) method with the following four methods: TBOW, CL-LSI, CL-OPCA and CL-KCCA. The Target Bag-Of-Word (TBOW) baseline method trains a supervised monolingual classifier in the original bag-of-word feature space with the labeled training data from the target language domain. The Cross-Lingual Latent Semantic Indexing (CL-LSI) method [10] and the Cross-Lingual Oriented Principal Component Analysis (CL-OPCA) method [15] first learn cross-lingual representations with all data from both language domains by performing LSI or OPCA, and then train a monolingual classifier with labeled data from both language domains in the induced low-dimensional feature space.
The Cross-Lingual Kernel Canonical Correlation Analysis (CL-KCCA) method [20] first induces two language projections by using unlabeled parallel data, and then trains a monolingual classifier on labeled data from both language domains in the projected low-dimensional space. For all experiments, we used a linear support vector machine (SVM) as the monolingual classification model. For the implementation, we used the libsvm package [5] with its default parameter setting.

5.2 Classification Accuracy

For each of the 18 cross language sentiment classification tasks, we used all documents from the two languages and the additional 2000 unlabeled parallel documents for representation learning. We then used all documents in the auxiliary source language together with 100 randomly chosen documents from the target language as labeled data for classification model training, and used the remaining data in the target language as test data. For the proposed method, TSL, we set $\mu = 10^{-6}$ and $\tau = 1$, chose the $\gamma$ value from $\{0.01, 0.1, 1, 10\}$, chose the $\rho$ value from $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\}$, and chose the dimension $k$ from $\{20, 50, 100, 200, 500\}$. We used the first task, EFB, to perform model parameter selection by running the algorithm 3 times based on random selections of 100 labeled target training documents. This gave the following parameter setting: $\gamma = 0.1$, $\rho = 10^{-4}$, $k = 50$. We used the same procedure to select the dimensionality of the learned semantic representations for the other three approaches, CL-LSI, CL-OPCA and CL-KCCA, which produced $k = 50$ for CL-LSI and CL-OPCA, and $k = 100$ for CL-KCCA. We then used the selected model parameters for all 18 tasks and ran each experiment 10 times based on random selections of 100 labeled target documents. The average classification accuracies and standard deviations are reported in Table 1. We can see that the proposed two-step learning method, TSL, outperforms the other four comparison methods in general.
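The per-task numbers in Table 1 bear this out; counting, per task, how often TSL beats each cross-lingual baseline (accuracy values transcribed from Table 1):

```python
# Per-task accuracies from Table 1: (CL-LSI, CL-KCCA, CL-OPCA, TSL).
table = {
    "EFB": (79.56, 77.56, 76.55, 81.92), "FEB": (76.66, 73.45, 74.43, 79.51),
    "EFD": (77.82, 78.19, 70.54, 81.97), "FED": (76.61, 74.93, 72.49, 78.09),
    "EFM": (75.39, 78.24, 73.69, 79.30), "FEM": (76.33, 73.38, 73.46, 78.53),
    "EGB": (77.59, 79.14, 74.72, 79.22), "GEB": (77.64, 74.15, 74.78, 78.65),
    "EGD": (79.22, 76.73, 74.59, 81.34), "GED": (77.78, 74.26, 74.83, 79.34),
    "EGM": (73.81, 79.18, 74.45, 79.39), "GEM": (77.28, 72.31, 74.15, 79.02),
    "EJB": (72.68, 69.46, 71.41, 72.57), "JEB": (74.63, 67.99, 73.41, 77.17),
    "EJD": (72.55, 74.79, 71.84, 76.60), "JED": (75.18, 72.44, 75.42, 79.01),
    "EJM": (73.44, 73.54, 74.96, 76.21), "JEM": (72.38, 70.00, 72.64, 77.15),
}
tsl_beats_cllsi = sum(tsl > cllsi for cllsi, _, _, tsl in table.values())  # 17
tsl_beats_kcca = sum(tsl > kcca for _, kcca, _, tsl in table.values())     # 18
tsl_beats_opca = sum(tsl > opca for _, _, opca, tsl in table.values())     # 18
```

TSL wins on all 18 tasks against CL-KCCA and CL-OPCA, and on 17 of 18 against CL-LSI (the exception is EJB).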
The target baseline TBOW performs poorly on all 18 tasks, which implies that 100 labeled target training documents are far from enough to obtain a robust sentiment classifier in the target language domain.

Figure 1: Average test classification accuracies (%) and standard deviations (%) over 10 runs with different numbers of unlabeled parallel documents for adapting a classification system from English to French, German and Japanese. (Nine panels: EFB, EFD, EFM, EGB, EGD, EGM, EJB, EJD, EJM; each plots accuracy against the number of unlabeled parallel documents for CL-LSI, CL-KCCA, CL-OPCA and TSL.)

All three cross-lingual representation learning methods, CL-LSI, CL-KCCA and CL-OPCA, consistently outperform this baseline across all 18 tasks, which demonstrates that the labeled training data from the source language domain is useful for classifying the target language data under a unified data representation. Nevertheless, the improvements achieved by these three methods over the baseline are much smaller than those of the proposed TSL method. Across all 18 tasks, TSL increases the average test accuracy over the baseline TBOW method by at least 8.59% (on the EJM task) and by up to 14.61% (on the EFB task).
Moreover, TSL also outperforms both CL-KCCA and CL-OPCA across all the 18 tasks, outperforms CL-LSI on 17 out of the 18 tasks, and achieves comparable performance with CL-LSI on the remaining task (EJB). All these results demonstrate the efficacy and robustness of the proposed two-step representation learning method for cross language text classification. 5.3 Impact of the Size of Unlabeled Parallel Data All four cross-lingual adaptation learning methods, CL-LSI, CL-KCCA, CL-OPCA and TSL, exploit unlabeled parallel reviews for learning cross-lingual representations. Next we investigated the performance of these methods with respect to different numbers of unlabeled parallel reviews. We tested a set of different numbers, np ∈ {200, 500, 1000, 2000}. For each number np in the set, we randomly chose np parallel documents from all the 2000 unlabeled parallel reviews to conduct experiments using the same setting as in the previous experiments. Each experiment was repeated 10 times based on random selections of labeled target training data. The average test classification accuracies and standard deviations are plotted in Figure 1 and Figure 2. Figure 1 presents the results for the 9 cross-lingual classification tasks that adapt classification systems from English to French, German and Japanese, while Figure 2 presents the results for the other 9 cross-lingual classification tasks that adapt classification systems from French, German and Japanese to English.
Figure 2: Average test classification accuracies and standard deviations over 10 runs with different numbers of unlabeled parallel documents for adapting a classification system from French, German and Japanese to English (panels FEB, FED, FEM, GEB, GED, GEM, JEB, JED, JEM; each panel plots accuracy against the number of unlabeled parallel documents for CL-LSI, CL-KCCA, CL-OPCA and TSL). From these results, we can see that the performance of all four methods in general improves with the increase of the unlabeled parallel data. The proposed method, TSL, nevertheless outperforms the other three cross-lingual adaptation learning methods across the range of different np values for 16 out of the 18 cross language sentiment classification tasks. For the remaining two tasks, EFM and EGM, it has similar performance to the CL-KCCA method while significantly outperforming the other two methods. Moreover, for the 9 tasks that make the adaptation from English to the other three languages, the TSL method achieves strong performance with only 200 unlabeled parallel documents, while the performance of the other three methods decreases significantly as the number of unlabeled parallel documents decreases.
These results demonstrate the robustness and efficacy of the proposed method compared to the other methods. 6 Conclusion In this paper, we developed a novel two-step method to learn cross-lingual semantic data representations for cross language text classification by exploiting unlabeled parallel bilingual documents. We first formulated a matrix completion problem to infer unobserved feature values of the concatenated document-term matrix in the space of the unified vocabulary set from the source and target languages. Then we performed latent semantic indexing over the completed low-rank document-term matrix to produce a low-dimensional cross-lingual representation of the documents. Monolingual classifiers were then used to conduct cross language text classification based on the learned document representation. To investigate the effectiveness of the proposed learning method, we conducted extensive experiments with cross language sentiment classification tasks on Amazon product reviews. Our experimental results demonstrated that the proposed two-step learning method significantly outperforms the other four comparison methods. Moreover, the proposed approach needs far fewer parallel documents to produce a good cross language text classification system.
References
[1] M. Amini and C. Goutte. A co-classification approach to learning from multilingual corpora. Machine Learning, 79:105–121, 2010.
[2] M. Amini, N. Usunier, and C. Goutte. Learning from multiple partially observed views - an application to multilingual text categorization. In NIPS, 2009.
[3] B. A.R., A. Joshi, and P. Bhattacharyya. Cross-lingual sentiment analysis for Indian languages using linked wordnets. In Proc. of COLING, 2012.
[4] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[5] C. Chang and C. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.
[6] W. Dai, Y. Chen, G. Xue, Q. Yang, and Y. Yu. Translated learning: Transfer learning across different feature spaces. In NIPS, 2008.
[7] A. Gliozzo. Exploiting comparable corpora and bilingual dictionaries for cross-language text categorization. In Proc. of ICCL-ACL, 2006.
[8] J. Jagarlamudi, R. Udupa, H. Daumé III, and A. Bhole. Improving bilingual projections via sparse covariance matrices. In Proc. of EMNLP, 2011.
[9] X. Ling, G. Xue, W. Dai, Y. Jiang, Q. Yang, and Y. Yu. Can Chinese web pages be classified with English data source? In Proc. of WWW, 2008.
[10] M. Littman, S. Dumais, and T. Landauer. Automatic cross-language information retrieval using latent semantic indexing. In Cross-Language Information Retrieval, chapter 5, pages 51–62. Kluwer Academic Publishers, 1998.
[11] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming: Series A and B, 128(1-2), 2011.
[12] X. Meng, F. Wei, X. Liu, M. Zhou, G. Xu, and H. Wang. Cross-lingual mixture model for sentiment classification. In Proc. of ACL, 2012.
[13] J. Pan, G. Xue, Y. Yu, and Y. Wang. Cross-lingual sentiment classification via bi-view non-negative matrix tri-factorization. In Proc. of PAKDD, 2011.
[14] P. Petrenz and B. Webber. Label propagation for fine-grained cross-lingual genre classification. In Proc. of the NIPS xLiTe workshop, 2012.
[15] J. Platt, K. Toutanova, and W. Yih. Translingual document representations from discriminative projections. In Proc. of EMNLP, 2010.
[16] P. Prettenhofer and B. Stein. Cross-language text classification using structural correspondence learning. In Proc. of ACL, 2010.
[17] L. Rigutini and M. Maggini. An EM based training algorithm for cross-language text categorization. In Proc. of the Web Intelligence Conference, 2005.
[18] J. Shanahan, G. Grefenstette, Y. Qu, and D. Evans. Mining multilingual opinions through classification and translation. In AAAI Spring Symp. on Explor. Attit. and Affect in Text, 2004.
[19] W. Smet, J. Tang, and M. Moens. Knowledge transfer across multilingual corpora via latent topics. In Proc. of PAKDD, 2011.
[20] A. Vinokourov, J. Shawe-Taylor, and N. Cristianini. Inferring a semantic representation of text via cross-language correlation analysis. In NIPS, 2002.
[21] C. Wan, R. Pan, and J. Li. Bi-weighting domain adaptation for cross-language text classification. In Proc. of IJCAI, 2011.
[22] X. Wan. Co-training for cross-lingual sentiment classification. In Proc. of ACL-IJCNLP, 2009.
[23] K. Wu, X. Wang, and B. Lu. Cross language text categorization using a bilingual lexicon. In Proc. of IJCNLP, 2008.
Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n) Francis Bach INRIA - Sierra Project-team Ecole Normale Supérieure, Paris, France francis.bach@ens.fr Eric Moulines LTCI Telecom ParisTech, Paris, France eric.moulines@enst.fr Abstract We consider the stochastic approximation problem where a convex function has to be minimized, given only the knowledge of unbiased estimates of its gradients at certain points, a framework which includes machine learning methods based on the minimization of the empirical risk. We focus on problems without strong convexity, for which all previously known algorithms achieve a convergence rate for function values of O(1/√n) after n iterations. We consider and analyze two algorithms that achieve a rate of O(1/n) for classical supervised learning problems. For least-squares regression, we show that averaged stochastic gradient descent with constant step-size achieves the desired rate. For logistic regression, this is achieved by a simple novel stochastic gradient algorithm that (a) constructs successive local quadratic approximations of the loss functions, while (b) preserving the same running-time complexity as stochastic gradient descent. For these algorithms, we provide a non-asymptotic analysis of the generalization error (in expectation, and also in high probability for least-squares), and run extensive experiments showing that they often outperform existing approaches. 1 Introduction Large-scale machine learning problems are becoming ubiquitous in many areas of science and engineering. Faced with large amounts of data, practitioners typically prefer algorithms that process each observation only once, or a few times. Stochastic approximation algorithms such as stochastic gradient descent (SGD) and its variants, although introduced more than sixty years ago [1], still remain the most widely used and studied method in this context (see, e.g., [2, 3, 4, 5, 6, 7]).
We consider minimizing convex functions f, defined on a Euclidean space F, given by f(θ) = E[ℓ(y, ⟨θ, x⟩)], where (x, y) ∈ F × R denotes the data and ℓ denotes a loss function that is convex with respect to the second variable. This includes logistic and least-squares regression. In the stochastic approximation framework, independent and identically distributed pairs (xn, yn) are observed sequentially and the predictor defined by θ is updated after each pair is seen. We partially understand the properties of f that affect the problem difficulty. Strong convexity (i.e., when f is twice differentiable, a uniform strictly positive lower-bound µ on the Hessians of f) is a key property. Indeed, after n observations and with the proper step-sizes, averaged SGD achieves the rate of O(1/(µn)) in the strongly-convex case [5, 4], while it achieves only O(1/√n) in the non-strongly-convex case [5], with matching lower-bounds [8]. The main issue with strong convexity is that typical machine learning problems are high-dimensional and have correlated variables, so that the strong convexity constant µ is zero or very close to zero, and in any case smaller than O(1/√n). This then makes the non-strongly-convex methods better. In this paper, we aim at obtaining algorithms that may deal with arbitrarily small strong-convexity constants, but still achieve a rate of O(1/n). Smoothness plays a central role in the context of deterministic optimization. The known convergence rates for smooth optimization are better than for non-smooth optimization (e.g., see [9]). However, for stochastic optimization the use of smoothness only leads to improvements on constants (e.g., see [10]) but not on the rate itself, which remains O(1/√n) for non-strongly-convex problems. We show that for the square loss and for the logistic loss, we may use the smoothness of the loss and obtain algorithms that have a convergence rate of O(1/n) without any strong convexity assumptions.
More precisely, for least-squares regression, we show in Section 2 that averaged stochastic gradient descent with constant step-size achieves the desired rate. For logistic regression this is achieved by a novel stochastic gradient algorithm that (a) constructs successive local quadratic approximations of the loss functions, while (b) preserving the same running-time complexity as stochastic gradient descent (see Section 3). For these algorithms, we provide a non-asymptotic analysis of their generalization error (in expectation, and also in high probability for least-squares), and run extensive experiments on standard machine learning benchmarks showing in Section 4 that they often outperform existing approaches. 2 Constant-step-size least-mean-square algorithm In this section, we consider stochastic approximation for least-squares regression, where SGD is often referred to as the least-mean-square (LMS) algorithm. The novelty of our convergence result is the use of the constant step-size with averaging, which was already considered by [11], but now with an explicit non-asymptotic rate O(1/n) without any dependence on the lowest eigenvalue of the covariance matrix. 2.1 Convergence in expectation We make the following assumptions: (A1) F is a d-dimensional Euclidean space, with d ⩾ 1. (A2) The observations (xn, zn) ∈ F × F are independent and identically distributed. (A3) E∥xn∥2 and E∥zn∥2 are finite. Denote by H = E(xn ⊗ xn) the covariance operator from F to F. Without loss of generality, H is assumed invertible (by projecting onto the minimal subspace where xn lies almost surely). However, its eigenvalues may be arbitrarily small. (A4) The global minimum of $f(\theta) = \tfrac{1}{2}\mathbb{E}\big[\langle \theta, x_n\rangle^2 - 2\langle \theta, z_n\rangle\big]$ is attained at a certain θ∗ ∈ F. We denote by ξn = zn − ⟨θ∗, xn⟩xn the residual. We have E ξn = 0, but in general, it is not true that E[ξn | xn] = 0 (unless the model is well-specified). (A5) We study the stochastic gradient (a.k.a.
least mean square) recursion defined as $$\theta_n = \theta_{n-1} - \gamma\big(\langle \theta_{n-1}, x_n\rangle x_n - z_n\big) = (I - \gamma\, x_n \otimes x_n)\,\theta_{n-1} + \gamma z_n, \qquad (1)$$ started from θ0 ∈ F. We also consider the averaged iterates $\bar\theta_n = (n+1)^{-1}\sum_{k=0}^{n} \theta_k$. (A6) There exist R > 0 and σ > 0 such that $\mathbb{E}[\xi_n \otimes \xi_n] \preceq \sigma^2 H$ and $\mathbb{E}[\|x_n\|^2\, x_n \otimes x_n] \preceq R^2 H$, where ≼ denotes the order between self-adjoint operators, i.e., A ≼ B if and only if B − A is positive semi-definite. Discussion of assumptions. Assumptions (A1-5) are standard in stochastic approximation (see, e.g., [12, 6]). Note that for least-squares problems, zn is of the form ynxn, where yn ∈ R is the response to be predicted as a linear function of xn. We consider a slightly more general case than least-squares because we will need it for the quadratic approximation of the logistic loss in Section 3.1. Note that in assumption (A4), we do not assume that the model is well-specified. Assumption (A6) is true for least-squares regression with almost surely bounded data, since, if ∥xn∥2 ⩽ R2 almost surely, then $\mathbb{E}[\|x_n\|^2\, x_n \otimes x_n] \preceq \mathbb{E}[R^2\, x_n \otimes x_n] = R^2 H$; a similar inequality holds for the output variables yn. Moreover, it also holds for data with infinite supports, such as Gaussians or mixtures of Gaussians (where all covariance matrices of the mixture components are lower- and upper-bounded by a constant times the same matrix). Note that the finite-dimensionality assumption could be relaxed, but this would require notions similar to degrees of freedom [13], which is outside the scope of this paper. The goal of this section is to provide a non-asymptotic bound on the expectation $\mathbb{E}[f(\bar\theta_n)] - f(\theta_*)$, that (a) does not depend on the smallest non-zero eigenvalue of H (which could be arbitrarily small) and (b) still scales as O(1/n). Theorem 1 Assume (A1-6). For any constant step-size γ < 1/R2, we have $$\mathbb{E}[f(\bar\theta_{n-1})] - f(\theta_*) \le \frac{1}{2n}\left[\frac{\sigma\sqrt{d}}{1 - \sqrt{\gamma R^2}} + \frac{R\|\theta_0 - \theta_*\|}{\sqrt{\gamma R^2}}\right]^2. \qquad (2)$$ When γ = 1/(4R2), we obtain $\mathbb{E}[f(\bar\theta_{n-1})] - f(\theta_*) \le \frac{2}{n}\big[\sigma\sqrt{d} + R\|\theta_0 - \theta_*\|\big]^2$. Proof technique.
We adapt and extend a proof technique from [14] which is based on non-asymptotic expansions in powers of γ. We also use a result from [2] which studied the recursion in Eq. (1), with xn ⊗ xn replaced by its expectation H. See [15] for details. Optimality of bounds. Our bound in Eq. (2) leads to a rate of O(1/n), which is known to be optimal for least-squares regression (i.e., under reasonable assumptions, no algorithm, even more complex than averaged SGD, can have a better dependence in n) [16]. The term σ2d/n is also unimprovable. Initial conditions. If γ is small, then the initial condition is forgotten more slowly. Note that with additional strong convexity assumptions, the initial condition would be forgotten faster (exponentially fast without averaging), which is one of the traditional uses of constant-step-size LMS [17]. Specificity of constant step-sizes. The non-averaged iterate sequence (θn) is a homogeneous Markov chain; under appropriate technical conditions, this Markov chain has a unique stationary (invariant) distribution and the sequence of iterates (θn) converges in distribution to this invariant distribution; see [18, Chapter 17]. Denote by πγ the invariant distribution. Assuming that the Markov chain is Harris-recurrent, the ergodic theorem for Harris Markov chains shows that $\bar\theta_{n-1} = n^{-1}\sum_{k=0}^{n-1} \theta_k$ converges almost surely to $\bar\theta_\gamma \stackrel{\text{def}}{=} \int \theta\, \pi_\gamma(d\theta)$, which is the mean of the stationary distribution. Taking the expectation on both sides of Eq. (1), we get E[θn] − θ∗ = (I − γH)(E[θn−1] − θ∗), which shows, using that limn→∞ E[θn] = ¯θγ, that H ¯θγ = Hθ∗ and therefore ¯θγ = θ∗ since H is invertible. Under slightly stronger assumptions, it can be shown that $\lim_{n\to\infty} n\,\mathbb{E}[(\bar\theta_n - \theta_*)^2] = \mathrm{Var}_{\pi_\gamma}(\theta_0) + 2\sum_{k=1}^{\infty} \mathrm{Cov}_{\pi_\gamma}(\theta_0, \theta_k)$, where Covπγ(θ0, θk) denotes the covariance of θ0 and θk when the Markov chain is started from stationarity. This implies that $n\,\mathbb{E}[f(\bar\theta_n) - f(\theta_*)]$ has a finite limit as n → ∞.
Therefore, this interpretation explains why the averaging produces a sequence of estimators which converges to the solution θ∗ pointwise, and that the rate of convergence of E[f(¯θn) − f(θ∗)] is of order O(1/n). Note that (a) our result is stronger since it is independent of the lowest eigenvalue of H, and (b) for losses other than quadratic, the same properties hold except that the mean under the stationary distribution does not coincide with θ∗ and its distance to θ∗ is typically of order γ2 (see Section 3). 2.2 Convergence in higher orders We are now going to consider an extra assumption in order to bound the p-th moment of the excess risk and then get a high-probability bound. Let p be a real number greater than 1. (A7) There exist R > 0, κ > 0 and τ ⩾ σ > 0 such that, for all n ⩾ 1, ∥xn∥2 ⩽ R2 a.s., and $$\mathbb{E}\|\xi_n\|^p \le \tau^p R^p \quad \text{and} \quad \mathbb{E}[\xi_n \otimes \xi_n] \preceq \sigma^2 H, \qquad (3)$$ $$\forall z \in \mathcal{F}, \quad \mathbb{E}\langle z, x_n\rangle^4 \le \kappa\, \big(\mathbb{E}\langle z, x_n\rangle^2\big)^2 = \kappa\, \langle z, Hz\rangle^2. \qquad (4)$$ The last condition in Eq. (4) says that the kurtosis of the projection of the covariates xn on any direction z ∈ F is bounded. Note that computing the constant κ happens to be equivalent to the optimization problem solved by the FastICA algorithm [19], which thus provides an estimate of κ. In Table 1, we provide such an estimate for the non-sparse datasets which we have used in experiments, while we consider only directions z along the axes for high-dimensional sparse datasets. For these datasets, where a given variable is equal to zero except for a few observations, κ is typically quite large. Adapting and analyzing normalized LMS techniques [20] to this set-up is likely to improve the theoretical robustness of the algorithm (but note that the results in expectation from Theorem 1 do not use κ). The next theorem provides a bound for the p-th moment of the excess risk. Theorem 2 Assume (A1-7). For any real p ⩾ 1, and for a step-size γ ⩽ 1/(12pκR2), we have: $$\Big(\mathbb{E}\big[\big(f(\bar\theta_{n-1}) - f(\theta_*)\big)^p\big]\Big)^{1/p} \le \frac{p}{2n}\left(7\tau\sqrt{d} + R\|\theta_0 - \theta_*\|\sqrt{3 + \frac{2}{\gamma p R^2}}\right)^2. \qquad (5)$$ For γ = 1/(12pκR2), we get: $\Big(\mathbb{E}\big[\big(f(\bar\theta_{n-1}) - f(\theta_*)\big)^p\big]\Big)^{1/p} \le \frac{p}{2n}\big(7\tau\sqrt{d} + 6\sqrt{\kappa}\, R\|\theta_0 - \theta_*\|\big)^2$. Note that to control the p-th order moment, a smaller step-size is needed, which scales as 1/p. We can now provide a high-probability bound; the tails decay polynomially as $1/(n\delta^{12\gamma\kappa R^2})$ and the smaller the step-size γ, the lighter the tails. Corollary 1 For any step-size such that γ ⩽ 1/(12κR2) and any δ ∈ (0, 1), $$\mathbb{P}\left(f(\bar\theta_{n-1}) - f(\theta_*) \ge \frac{1}{n\,\delta^{12\gamma\kappa R^2}} \cdot \frac{\big(7\tau\sqrt{d} + R\|\theta_0 - \theta_*\|(\sqrt{3} + \sqrt{24\kappa})\big)^2}{24\gamma\kappa R^2}\right) \le \delta. \qquad (6)$$ 3 Beyond least-squares: M-estimation In Section 2, we have shown that for least-squares regression, averaged SGD achieves a convergence rate of O(1/n) with no assumption regarding strong convexity. For all losses, with a constant step-size γ, the stationary distribution πγ corresponding to the homogeneous Markov chain (θn) always satisfies $\int f'(\theta)\, \pi_\gamma(d\theta) = 0$, where f is the generalization error. When the gradient f′ is linear (i.e., f is quadratic), this implies that $f'\big(\int \theta\, \pi_\gamma(d\theta)\big) = 0$, i.e., the averaged recursion converges pathwise to $\bar\theta_\gamma = \int \theta\, \pi_\gamma(d\theta)$, which coincides with the optimal value θ∗ (defined through f′(θ∗) = 0). When the gradient f′ is no longer linear, then $\int f'(\theta)\, \pi_\gamma(d\theta) \ne f'\big(\int \theta\, \pi_\gamma(d\theta)\big)$. Therefore, for general M-estimation problems we should expect that the averaged sequence still converges at rate O(1/n) to the mean of the stationary distribution ¯θγ, but not to the optimal predictor θ∗. Typically, the average distance between θn and θ∗ is of order γ (see Section 4 and [21]), while for the averaged iterates that converge pointwise to ¯θγ, it is of order γ2 for strongly convex problems under some additional smoothness conditions on the loss functions (these are satisfied, for example, by the logistic loss [22]).
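Before turning to the quadratic approximations, the constant-step-size averaged LMS recursion of Eq. (1) can be made concrete with a minimal NumPy sketch on synthetic least-squares data of the kind used in Section 4.1 (Gaussian inputs with covariance eigenvalues 1/k). The sample size, noise level, and variable names are our own choices; the step-size γ = 1/(4R²) follows Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic least-squares problem: Gaussian inputs whose covariance H has
# random eigenvectors and eigenvalues 1/k; outputs follow a noisy linear model.
d, n = 20, 50000
eigs = 1.0 / np.arange(1, d + 1)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal eigenvectors
H_sqrt = Q * np.sqrt(eigs)                     # x = H_sqrt @ g  =>  cov(x) = H
H = H_sqrt @ H_sqrt.T
theta_star = rng.normal(size=d)
R2 = eigs.sum()                                # R^2 = tr H = E ||x||^2
gamma = 1.0 / (4.0 * R2)                       # constant step-size of Theorem 1

theta = np.zeros(d)                            # theta_0
theta_bar = np.zeros(d)                        # running average of the iterates
for i in range(n):
    x = H_sqrt @ rng.normal(size=d)
    y = x @ theta_star + 0.5 * rng.normal()
    # Eq. (1) with z_n = y_n x_n:
    # theta_n = theta_{n-1} - gamma (<theta_{n-1}, x_n> x_n - y_n x_n)
    theta -= gamma * (theta @ x - y) * x
    theta_bar += (theta - theta_bar) / (i + 2)  # average of theta_0 .. theta_n

def excess_risk(t):
    """f(t) - f(theta_*) = (1/2) (t - theta_*)' H (t - theta_*)."""
    e = t - theta_star
    return 0.5 * e @ H @ e

print(excess_risk(theta_bar), excess_risk(theta))
```

In line with Theorem 1, the averaged iterate reaches a small excess risk, while the non-averaged iterate keeps oscillating at a level proportional to γ.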
Since quadratic functions may be optimized with rate O(1/n) under weak conditions, we are going to use a quadratic approximation around a well-chosen support point, which shares some similarity with the Newton procedure (however, with a non-trivial adaptation to the stochastic approximation framework). The Newton step for f around a certain point ˜θ is equivalent to minimizing a quadratic surrogate g of f around ˜θ, i.e., $$g(\theta) = f(\tilde\theta) + \langle f'(\tilde\theta), \theta - \tilde\theta\rangle + \tfrac{1}{2}\langle \theta - \tilde\theta, f''(\tilde\theta)(\theta - \tilde\theta)\rangle.$$ If $f_n(\theta) \stackrel{\text{def}}{=} \ell(y_n, \langle \theta, x_n\rangle)$, then g(θ) = E gn(θ), with $g_n(\theta) = f(\tilde\theta) + \langle f_n'(\tilde\theta), \theta - \tilde\theta\rangle + \tfrac{1}{2}\langle \theta - \tilde\theta, f_n''(\tilde\theta)(\theta - \tilde\theta)\rangle$; the Newton step may thus be solved approximately with stochastic approximation (here constant-step-size LMS), with the following recursion: $$\theta_n = \theta_{n-1} - \gamma g_n'(\theta_{n-1}) = \theta_{n-1} - \gamma\big[f_n'(\tilde\theta) + f_n''(\tilde\theta)(\theta_{n-1} - \tilde\theta)\big]. \qquad (7)$$ This is equivalent to replacing the gradient f′n(θn−1) by its first-order approximation around ˜θ. A crucial point is that for machine learning scenarios where fn is a loss associated to a single data point, its complexity is only twice the complexity of a regular stochastic approximation step, since, with fn(θ) = ℓ(yn, ⟨xn, θ⟩), f′′n(θ) is a rank-one matrix. Choice of support points for quadratic approximation. An important aspect is the choice of the support point ˜θ. In this paper, we consider two strategies: – Two-step procedure: for convex losses, averaged SGD with a step-size decaying as O(1/√n) achieves a rate (up to logarithmic terms) of O(1/√n) [5, 6]. We may thus use it to obtain a first decent estimate. The two-stage procedure is as follows (and uses 2n observations): n steps of averaged SGD with constant step-size γ ∝ 1/√n to obtain ˜θ, and then averaged LMS for the Newton step around ˜θ. As shown below, this algorithm achieves the rate O(1/n) for logistic regression. However, it is not the most efficient in practice.
– Support point = current average iterate: we simply consider the current averaged iterate ¯θn−1 as the support point ˜θ, leading to the recursion: $$\theta_n = \theta_{n-1} - \gamma\big[f_n'(\bar\theta_{n-1}) + f_n''(\bar\theta_{n-1})(\theta_{n-1} - \bar\theta_{n-1})\big]. \qquad (8)$$ Although this algorithm has been shown to be the most efficient in practice (see Section 4), we currently have no proof of convergence. Given that the behavior of the algorithms does not change much when the support point is updated less frequently than at each iteration, there may be some connections to two-time-scale algorithms (see, e.g., [23]). In Section 4, we also consider several other strategies based on doubling tricks. Interestingly, for non-quadratic functions, our algorithm imposes a new bias (by replacing the true gradient by an approximation which is only valid close to ¯θn−1) in order to reach faster convergence (due to the linearity of the underlying gradients). Relationship with one-step estimators. One-step estimators (see, e.g., [24]) typically take any estimator with O(1/n)-convergence rate, and make a full Newton step to obtain an efficient estimator (i.e., one that achieves the Cramér-Rao lower bound). Although our novel algorithm is largely inspired by one-step estimators, our situation is slightly different since our first estimator has only convergence rate O(1/√n) and is estimated on different observations. 3.1 Self-concordance and logistic regression We make the following assumptions: (B1) F is a d-dimensional Euclidean space, with d ⩾ 1. (B2) The observations (xn, yn) ∈ F × {−1, 1} are independent and identically distributed. (B3) We consider $f(\theta) = \mathbb{E}[\ell(y_n, \langle x_n, \theta\rangle)]$, with the following assumption on the loss function ℓ (whenever we take derivatives of ℓ, this will be with respect to the second variable): $$\forall (y, \hat y) \in \{-1, 1\} \times \mathbb{R}, \quad |\ell'(y, \hat y)| \le 1, \quad \ell''(y, \hat y) \le 1/4, \quad |\ell'''(y, \hat y)| \le \ell''(y, \hat y).$$ We denote by θ∗ a global minimizer of f, which we thus assume to exist, and we denote by H = f′′(θ∗) the Hessian operator at a global optimum θ∗.
(B4) We assume that there exist R > 0, κ > 0 and ρ > 0 such that ∥xn∥2 ⩽ R2 almost surely, and $$\mathbb{E}[x_n \otimes x_n] \preceq \rho\, \mathbb{E}\big[\ell''(y_n, \langle \theta_*, x_n\rangle)\, x_n \otimes x_n\big] = \rho H, \qquad (9)$$ $$\forall z \in \mathcal{F},\ \forall \theta \in \mathcal{F}, \quad \mathbb{E}\big[\ell''(y_n, \langle \theta, x_n\rangle)^2 \langle z, x_n\rangle^4\big] \le \kappa\, \Big(\mathbb{E}\big[\ell''(y_n, \langle \theta, x_n\rangle)\langle z, x_n\rangle^2\big]\Big)^2. \qquad (10)$$ Assumption (B3) is satisfied for the logistic loss and extends to all generalized linear models (see more details in [22]), and the relationship between the third derivative and the second derivative of the loss ℓ is often referred to as self-concordance (see [9, 25] and references therein). Note moreover that we must have ρ ⩾ 4 and κ ⩾ 1. A loose upper bound for ρ is 1/ infn ℓ′′(yn, ⟨θ∗, xn⟩), but in practice it is typically much smaller (see Table 1). The condition in Eq. (10) is hard to check because it is uniform in θ. With a slightly more complex proof, we could restrict θ to be close to θ∗; with such constraints, the value of κ we have found is close to the one from Section 2.2 (i.e., without the terms in ℓ′′(yn, ⟨θ, xn⟩)). Theorem 3 Assume (B1-4), and consider the vector ζn obtained as follows: (a) perform n steps of averaged stochastic gradient descent with constant step-size 1/(2R2√n) to get ˜θn, and (b) perform n steps of averaged LMS with constant step-size 1/R2 for the quadratic approximation of f around ˜θn. If n ⩾ (19 + 9R∥θ0 − θ∗∥)4, then $$\mathbb{E}[f(\zeta_n)] - f(\theta_*) \le \frac{\kappa^{3/2}\rho^3 d}{n}\big(16 R\|\theta_0 - \theta_*\| + 19\big)^4. \qquad (11)$$ We get an O(1/n) convergence rate without assuming strong convexity, even locally, thus improving on results from [22] where the rate is proportional to 1/(nλmin(H)). The proof relies on self-concordance properties and the sharp analysis of the Newton step (see [15] for details). 4 Experiments 4.1 Synthetic data Least-mean-square algorithm. We consider normally distributed inputs, with covariance matrix H that has random eigenvectors and eigenvalues 1/k, k = 1, . . . , d. The outputs are generated from a linear function with homoscedastic noise with unit signal-to-noise ratio.
We consider d = 20 and the least-mean-square algorithm with several settings of the step-size γn, constant or proportional to 1/√n. Here R2 denotes the average radius of the data, i.e., R2 = tr H. In the left plot of Figure 1, we show the results, averaged over 10 replications. Without averaging, the algorithm with constant step-size does not converge pointwise (it oscillates), and its average excess risk decays as a linear function of γ (indeed, the gap between the values of the constant step-size is close to log10(4), which corresponds to a linear function in γ). Figure 1: Synthetic data. Left: least-squares regression. Middle: logistic regression with averaged SGD with various step-sizes, averaged (plain) and non-averaged (dashed). Right: various Newton-based schemes for the same logistic regression problem. Best seen in color; see text for details. With averaging, the algorithm with constant step-size does converge at rate O(1/n), and for all values of the constant γ, the rate is actually the same. Moreover (although it is not shown in the plots), the standard deviation is much lower. With decaying step-size γn = 1/(2R2√n) and without averaging, the convergence rate is O(1/√n), and it improves to O(1/n) with averaging. Logistic regression. We consider the same input data as for least-squares, but now generate outputs from the logistic probabilistic model. We compare several algorithms and display the results in Figure 1 (middle and right plots).
On the middle plot, we consider SGD; without averaging, the algorithm with constant step-size does not converge and its average excess risk reaches a constant value which is a linear function of γ (indeed, the gap between the values of the constant step-size is close to log10(4)). With averaging, the algorithm does converge, but, as opposed to least-squares, to a point which is not the optimal solution, with an error proportional to γ2 (the gap between curves is twice as large). On the right plot, we consider various variations of our online Newton-approximation scheme. The “2-step” algorithm is the one for which our convergence rate holds (n being the total number of examples, we perform n/2 steps of averaged SGD, then n/2 steps of LMS). Not surprisingly, it is not the best in practice (in particular at n/2, when starting the constant-step-size LMS, the performance worsens temporarily). It is classical to use doubling tricks to remedy this problem while preserving convergence rates [26]; this is done in “2-step-dbl.”, which avoids the previous erratic behavior. We have also considered getting rid of the first stage, where plain averaged stochastic gradient is used to obtain a support point for the quadratic approximation. We now consider only Newton steps but change only these support points. We consider updating the support point at every iteration, i.e., the recursion from Eq. (8), while we also consider updating it at every dyadic point (“dbl.-approx”). The last two algorithms perform very similarly and achieve the O(1/n) rate early. In all experiments on real data, we have considered the simplest variant (which corresponds to Eq. (8)). 4.2 Standard benchmarks We have considered 6 benchmark datasets which are often used in comparing large-scale optimization methods. The datasets are described in Table 1 and vary in values of d, n and sparsity levels. These are all finite binary classification datasets with outputs in {−1, 1}.
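As a concrete reference for the variant used on these benchmarks (Eq. (8), support point = current averaged iterate), the following NumPy sketch runs the online Newton recursion on synthetic logistic data. The data generator, the step-size choice γ = 1/R², and the sample sizes are our own illustrative assumptions rather than the paper's exact experimental settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic logistic data with labels in {-1, 1}.
d, n = 10, 20000
theta_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = np.where(rng.random(n) < 1.0 / (1.0 + np.exp(-X @ theta_star)), 1.0, -1.0)

R2 = np.mean(np.sum(X**2, axis=1))             # average squared radius
gamma = 1.0 / R2                               # constant step-size (our choice)

theta = np.zeros(d)
theta_bar = np.zeros(d)
for i in range(n):
    x, yi = X[i], y[i]
    # Logistic loss l(y, s) = log(1 + exp(-y s)), derivatives in s at the
    # support point s = <theta_bar, x>:
    s = x @ theta_bar
    lp = -yi / (1.0 + np.exp(yi * s))              # l'(y, s)
    lpp = np.exp(yi * s) / (1.0 + np.exp(yi * s))**2  # l''(y, s) <= 1/4
    # Eq. (8): gradient linearized around the current averaged iterate;
    # f_n''(theta_bar) = lpp * x x^T is rank one, so the step stays O(d).
    grad = lp * x + lpp * (x @ (theta - theta_bar)) * x
    theta -= gamma * grad
    theta_bar += (theta - theta_bar) / (i + 2)     # running average

def logloss(t):
    return np.mean(np.log1p(np.exp(-y * (X @ t))))

print(logloss(theta_bar), logloss(theta_star))
```

Each step touches a single observation and costs about twice a plain SGD step, as noted in Section 3, and the averaged iterate should approach the loss of the generating parameter.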
For least-squares and logistic regression, we have followed the following experimental protocol: (1) remove all outliers (i.e., sample points xn whose norm is greater than 5 times the average norm), (2) divide the dataset into two equal parts, one for training, one for testing, (3) sample within the training dataset with replacement, for 100 times the number of observations in the training set (this corresponds to 100 effective passes; in all plots, a black dashed line marks the first effective pass), (4) compute averaged costs on training and testing data (based on 10 replications). All the costs are shown in log-scale, normalized so that the first iteration leads to f(θ0) − f(θ∗) = 1. All algorithms that we consider (ours and others) have a step-size, and typically a theoretical value that ensures convergence. We consider two settings: (1) one where this theoretical value is used, (2) one with the best testing error after one effective pass through the data (testing powers of 4 times the theoretical step-size). Here, we only consider covertype, alpha, sido and news, as well as test errors. For all training errors and the two other datasets (quantum, rcv1), see [15]. Least-squares regression. We compare three algorithms: averaged SGD with constant step-size, averaged SGD with step-size decaying as C/(R2√n), and the stochastic averaged gradient (SAG) method, which is dedicated to finite training data sets [27] and has shown state-of-the-art performance in this set-up. We show the results in the two left plots of Figure 2 and Figure 3. Averaged SGD with decaying step-size equal to C/(R2√n) is slowest (except for sido). In particular, when the best constant C is used (right columns), the performance typically starts to increase significantly. With that step-size, even after 100 passes, there is no sign of overfitting, even for the high-dimensional sparse datasets.
SAG and constant-step-size averaged SGD exhibit the best behavior, for both the theoretical step-sizes and the best constants, with a significant advantage for constant-step-size SGD. The non-sparse datasets do not lead to overfitting, even close to the global optimum of the (unregularized) training objectives, while the sparse datasets do exhibit some overfitting after more than 10 passes. Logistic regression. We also compare two additional algorithms: our Newton-based technique and “Adagrad” [7], a stochastic gradient method with a form of diagonal scaling1 that can improve practical convergence (the theoretical rate remains proportional to O(1/√n)). We show results in the two right plots of Figure 2 and Figure 3. Averaged SGD with step-size decaying as C/(R²√n) has the same behavior as for least-squares (step-size harder to tune, always inferior performance except for sido). SAG, constant-step-size SGD and the novel Newton technique tend to behave similarly (good with theoretical step-size, always among the best methods). They differ notably in some aspects: (1) SAG converges more quickly for the training errors (shown in [15]) while it is a bit slower for the testing error, (2) in some instances, constant-step-size averaged SGD does underfit (covertype, alpha, news), which is consistent with the lack of convergence to the global optimum mentioned earlier, (3) the novel online Newton algorithm is consistently better. On the non-sparse datasets, Adagrad performs similarly to the Newton-type method (often better in early iterations and worse later), except for the alpha dataset, where the step-size is harder to tune (the best step-size tends to have early iterations that make the cost go up significantly). On sparse datasets like rcv1, the performance is essentially the same as Newton. On the sido dataset, Adagrad (with fixed step-size, left column) achieves a good testing loss quickly, then levels off, for reasons we cannot explain.
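Adagrad's diagonal scaling, referenced above, follows the standard update of [7]: per-coordinate step-sizes shrink with the accumulated squared gradients. A minimal sketch on synthetic logistic data (sizes, step-size η and the data itself are illustrative assumptions, not the benchmark settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logistic-regression data with labels in {-1, 1} (illustrative sizes).
n, d = 4000, 5
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))

eta, eps = 0.5, 1e-8
w = np.zeros(d)
G = np.zeros(d)                                  # accumulated squared gradients
for k in range(n):
    x_k, y_k = X[k], y[k]
    s = 1.0 / (1.0 + np.exp(y_k * (x_k @ w)))    # sigmoid(-y_k <w, x_k>)
    grad = -y_k * s * x_k                        # gradient of log(1 + exp(-y_k <w, x_k>))
    G += grad**2
    w -= eta * grad / (np.sqrt(G) + eps)         # per-coordinate step-sizes shrink over time
```

Coordinates that see large gradients get small effective step-sizes and vice versa, which is the "diagonal rescaling" contrasted with the Newton-type method in the text.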
On the news dataset, it is inferior without parameter tuning and a bit better with it. Adagrad uses a diagonal rescaling; it could be combined with our technique, and early experiments show that this improves results but is more sensitive to the choice of step-size. Overall, even when d and κ are very large (where our bounds are vacuous), the performance of our algorithm still achieves the state of the art, while being more robust to the selection of the step-size: finer quantities like the degrees of freedom [13] should be able to quantify more accurately the quality of the new algorithms. 5 Conclusion In this paper, we have presented two stochastic approximation algorithms that can achieve rates of O(1/n) for logistic and least-squares regression, without strong-convexity assumptions. Our analysis reinforces the key role of averaging in obtaining fast rates, in particular with large step-sizes. Our work can naturally be extended in several ways: (a) an analysis of the algorithm that updates the support point of the quadratic approximation at every iteration; (b) proximal extensions (easy to implement, but potentially harder to analyze); (c) adaptive ways to find the constant step-size; (d) step-sizes that depend on the iterates to increase robustness, as in normalized LMS [20]; and (e) non-parametric analysis to improve our theoretical results for large values of d. Acknowledgements. Francis Bach was partially supported by the European Research Council (SIERRA Project). We thank Aymeric Dieuleveut and Nicolas Flammarion for helpful discussions. 1Since a bound on ∥θ∗∥ is not available, we have used step-sizes proportional to 1/supn ∥xn∥∞. Table 1: Datasets used in our experiments.
We report the proportion of non-zero entries, as well as estimates for the constants κ and ρ used in our theoretical results, together with the non-sharp constant which is typically used in the analysis of logistic regression and which our analysis avoids (these are computed for non-sparse datasets only).

  Name       d          n        sparsity  κ        ρ    1/infn ℓ′′(yn, ⟨θ∗, xn⟩)
  quantum    79         50 000   100 %     5.8×10²  16   8.5×10²
  covertype  55         581 012  100 %     9.6×10²  160  3×10¹²
  alpha      501        500 000  100 %     6        18   8×10⁴
  sido       4 933      12 678   10 %      1.3×10⁴  ×    ×
  rcv1       47 237     20 242   0.2 %     2×10⁴    ×    ×
  news       1 355 192  19 996   0.03 %    2×10⁴    ×    ×

[Plots omitted.] Figure 2: Test performance for least-squares regression (two left plots) and logistic regression (two right plots). From top to bottom: covertype, alpha. Left: theoretical step-sizes; right: step-sizes optimized for performance after one effective pass through the data. Best seen in color.
[Plots omitted.] Figure 3: Test performance for least-squares regression (two left plots) and logistic regression (two right plots). From top to bottom: sido, news. Left: theoretical step-sizes; right: step-sizes optimized for performance after one effective pass through the data. Best seen in color.
References
[1] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[2] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Adv. NIPS, 2008.
[4] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proc. ICML, 2007.
[5] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[6] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Adv. NIPS, 2011.
[7] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[8] A. S. Nemirovsky and D. B. Yudin. Problem complexity and method efficiency in optimization. Wiley & Sons, 1983.
[9] Y. Nesterov. Introductory lectures on convex optimization. Kluwer, 2004.
[10] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365–397, 2012.
[11] L. Györfi and H. Walk. On the averaged stochastic approximation for linear regression. SIAM Journal on Control and Optimization, 34(1):31–61, 1996.
[12] H. J. Kushner and G. G. Yin. Stochastic approximation and recursive algorithms and applications. Springer-Verlag, second edition, 2003.
[13] C. Gu. Smoothing spline ANOVA models. Springer, 2002.
[14] R. Aguech, E. Moulines, and P. Priouret. On a perturbation approach for the analysis of stochastic tracking algorithms. SIAM J. Control and Optimization, 39(3):872–899, 2000.
[15] F. Bach and E. Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). Technical Report 00831977, HAL, 2013.
[16] A. B. Tsybakov. Optimal rates of aggregation. In Proc. COLT, 2003.
[17] O. Macchi. Adaptive processing: The least mean squares approach with applications in transmission. Wiley West Sussex, 1995.
[18] S. Meyn and R. Tweedie. Markov Chains and Stochastic Stability. Cambridge U. P., 2009.
[19] A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483–1492, 1997.
[20] N. J. Bershad. Analysis of the normalized LMS algorithm with Gaussian inputs. IEEE Transactions on Acoustics, Speech and Signal Processing, 34(4):793–806, 1986.
[21] A. Nedic and D. Bertsekas. Convergence rate of incremental subgradient algorithms. Stochastic Optimization: Algorithms and Applications, pages 263–304, 2000.
[22] F. Bach. Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression. Technical Report 00804431-v2, HAL, 2013.
[23] V. S. Borkar. Stochastic approximation with two time scales.
Systems & Control Letters, 29(5):291–294, 1997.
[24] A. W. Van der Vaart. Asymptotic Statistics, volume 3. Cambridge Univ. Press, 2000.
[25] F. Bach. Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4:384–414, 2010.
[26] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proc. COLT, 2011.
[27] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Technical Report 00860051, HAL, 2013.
Predicting Parameters in Deep Learning Misha Denil1 Babak Shakibi2 Laurent Dinh3 Marc’Aurelio Ranzato4 Nando de Freitas1,2 1University of Oxford, United Kingdom 2University of British Columbia, Canada 3Université de Montréal, Canada 4Facebook Inc., USA {misha.denil,nando.de.freitas}@cs.ox.ac.uk laurent.dinh@umontreal.ca ranzato@fb.com Abstract We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy. 1 Introduction Recent work on scaling deep networks has led to the construction of the largest artificial neural networks to date. It is now possible to train networks with tens of millions [13] or even over a billion parameters [7, 16]. The largest networks (i.e. those of Dean et al. [7]) are trained using asynchronous SGD. In this framework, many copies of the model parameters are distributed over many machines and updated independently. An additional synchronization mechanism coordinates between the machines to ensure that different copies of the same set of parameters do not drift far from each other. A major drawback of this technique is that training is very inefficient in how it makes use of parallel resources [1]. In the largest networks of Dean et al. [7], where the gains from distribution are largest, distributing the model over 81 machines reduces the training time per mini-batch by a factor of 12, and increasing to 128 machines achieves a speedup factor of roughly 14.
While these speedups are very significant, there is a clear trend of diminishing returns as the overhead of coordinating between the machines grows. Other approaches to distributed learning of neural networks involve training in batch mode [8], but these methods have not been scaled nearly as far as their online counterparts. It seems clear that distributed architectures will always be required for extremely large networks; however, as efficiency decreases with greater distribution, it also makes sense to study techniques for learning larger networks on a single machine. If we can reduce the number of parameters which must be learned and communicated over the network of fixed size, then we can reduce the number of machines required to train it, and hence also reduce the overhead of coordination in a distributed framework. In this work we study techniques for reducing the number of free parameters in neural networks by exploiting the fact that the weights in learned networks tend to be structured. The technique we present is extremely general, and can be applied to a broad range of models. Our technique is also completely orthogonal to the choice of activation function as well as other learning optimizations; it can work alongside other recent advances in neural network training such as dropout [12], rectified units [20] and maxout [9] without modification. Figure 1: The first column in each block shows four learned features (parameters of a deep model). The second column shows a few parameters chosen at random from the original set in the first column. The third column shows that this random set can be used to predict the remaining parameters. From left to right the blocks are: (1) a convnet trained on STL-10, (2) an MLP trained on MNIST, (3) a convnet trained on CIFAR-10, (4) Reconstruction ICA trained on Hyvärinen's natural image dataset, (5) Reconstruction ICA trained on STL-10.
The intuition motivating the techniques in this paper is the well known observation that the first layer features of a neural network trained on natural image patches tend to be globally smooth with local edge features, similar to local Gabor features [6, 13]. Given this structure, representing the value of each pixel in the feature separately is redundant, since it is highly likely that the value of a pixel will be equal to a weighted average of its neighbours. Taking advantage of this type of structure means we do not need to store weights for every input in each feature. This intuition is illustrated in Figures 1 and 2. The remainder of this paper is dedicated to elaborating on this observation. We describe a general purpose technique for reducing the number of free parameters in neural networks. The core of the technique is based on representing the weight matrix as a low rank product of two smaller matrices. By factoring the weight matrix we are able to directly control the size of the parameterization by controlling the rank of the weight matrix. Figure 2: RICA with different amounts of parameter prediction. In the leftmost column 100% of the parameters are learned with L-BFGS. In the rightmost column, only 10% of the parameters are learned, while the remaining values are predicted at each iteration. The intermediate columns interpolate between these extremes in increments of 10%. Naïve application of this technique is straightforward but tends to reduce the performance of the networks. We show that by carefully constructing one of the factors, while learning only the other factor, we can train networks with vastly fewer parameters which achieve the same performance as full networks with the same structure. The key to constructing a good first factor is exploiting smoothness in the structure of the inputs. When we have prior knowledge of the smoothness structure we expect to see (e.g.
in natural images), we can impose this structure directly through the choice of factor. When no such prior knowledge is available we show that it is still possible to make a good data driven choice. We demonstrate experimentally that our parameter prediction technique is extremely effective. In the best cases we are able to predict more than 95% of the parameters of a network without any drop in predictive accuracy. Throughout this paper we make a distinction between dynamic and static parameters. Dynamic parameters are updated frequently during learning, potentially after each observation or mini-batch. This is in contrast to static parameters, whose values are computed once and not altered. Although the values of these parameters may depend on the data and may be expensive to compute, the computation need only be done once during the entire learning process. The reason for this distinction is that static parameters are much easier to handle in a distributed system, even if their values must be shared between machines. Since the values of static parameters do not change, access to them does not need to be synchronized. Copies of these parameters can be safely distributed across machines without any of the synchronization overhead incurred by distributing dynamic parameters. 2 Low rank weight matrices Deep networks are composed of several layers of transformations of the form h = g(vW), where v is an nv-dimensional input, h is an nh-dimensional output, and W is an nv × nh matrix of parameters. A column of W contains the weights connecting each unit in the visible layer to a single unit in the hidden layer. We can reduce the number of free parameters by representing W as the product of two matrices W = UV, where U has size nv × nα and V has size nα × nh. By making nα much smaller than nv and nh we achieve a substantial reduction in the number of parameters. In principle, learning the factored weight matrices is straightforward.
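The saving from this factorization can be sketched numerically; the layer sizes below are illustrative choices (an MNIST-scale layer), not tied to a specific experiment in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: nv inputs, nh hidden units, rank n_alpha.
nv, nh, na = 784, 500, 50

U = rng.standard_normal((nv, na)) / np.sqrt(nv)  # first factor (later: a fixed dictionary)
V = rng.standard_normal((na, nh)) / np.sqrt(na)  # second factor (the learned part)

v = rng.standard_normal(nv)          # one input vector
h_full = np.tanh(v @ (U @ V))        # h = g(vW) with W = UV materialized
h_fact = np.tanh((v @ U) @ V)        # same output, without ever forming the nv x nh matrix W

full_params = nv * nh                # 392,000 entries in a dense W
fact_params = na * (nv + nh)         # 64,200 entries in U and V together
```

With nα much smaller than nv and nh, the factored layer stores nα(nv + nh) numbers instead of nv·nh, and the forward pass can be computed through the factors directly.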
We simply replace W with UV in the objective function and compute derivatives with respect to U and V instead of W. In practice this naïve approach does not perform as well as learning a full rank weight matrix directly. Moreover, the factored representation has redundancy: if Q is any invertible matrix of size nα × nα we have W = UV = (UQ)(Q⁻¹V) = Ũ Ṽ. One way to remove this redundancy is to fix the value of U and learn only V. The question remains: what is a reasonable choice for U? The following section provides an answer to this question. 3 Feature prediction We can exploit the structure in the features of a deep network to represent the features in a much lower dimensional space. To do this we consider the weights connected to a single hidden unit as a function w : W → R mapping weight space to real numbers, and estimate values of this function using regression. In the case of p × p image patches, W is the set of coordinates of each pixel, but other structures for W are possible. A simple regression model which is appropriate here is a linear combination of basis functions. In this view the columns of U form a dictionary of basis functions, and the features of the network are linear combinations of these basis functions, parameterized by V. The problem thus becomes one of choosing a good base dictionary for representing network features. 3.1 Choice of dictionary The base dictionary for feature prediction can be constructed in several ways. An obvious choice is to train a single layer unsupervised model and use the features from that model as a dictionary. This approach has the advantage of being extremely flexible—no assumptions about the structure of feature space are required—but has the drawback of requiring an additional training phase. When we have prior knowledge about the structure of feature space we can exploit it to construct an appropriate dictionary.
For example, when learning features for images we could choose U to be a selection of Fourier or wavelet bases to encode our expectation of smoothness. We can also build U using kernels that encode prior knowledge. One way to achieve this is via kernel ridge regression [25]. Let wα denote the observed values of the weight vector w on a restricted subset of its domain α ⊂ W. We introduce a kernel matrix Kα, with entries (Kα)ij = k(i, j), to model the covariance between locations i, j ∈ α. The parameters at these locations are (wα)i and (wα)j. The kernel enables us to make smooth predictions of the parameter vector over the entire domain W using the standard kernel ridge predictor: w = kα⊤(Kα + λI)⁻¹wα, where kα is a matrix whose elements are given by (kα)ij = k(i, j) for i ∈ α and j ∈ W, and λ is a ridge regularization coefficient. In this case we have U = kα⊤(Kα + λI)⁻¹ and V = wα. 3.2 A concrete example In this section we describe the feature prediction process as it applies to features derived from image patches using kernel ridge regression, since the intuition is strongest in this case. We defer a discussion of how to select a kernel for deep layers as well as for non-image data in the visible layer to a later section. In those settings the prediction process is formally identical, but the intuition is less clear. If v is a vectorized image patch corresponding to the visible layer of a standard neural network then the hidden activity induced by this patch is given by h = g(vW), where g is the network nonlinearity and W = [w1, . . . , wnh] is a weight matrix whose columns each correspond to features which are to be matched to the visible layer. We consider a single column of the weight matrix, w, whose elements are indexed by i ∈ W. In the case of an image patch these indices are multidimensional, i = (ix, iy, ic), indicating the spatial location and colour channel of the index i.
We select locations α ⊂ W at which to represent the filter explicitly and use wα to denote the vector of weights at these locations. There are a wide variety of options for how α can be selected. We have found that choosing α uniformly at random from W (but tied across channels) works well; however, it is possible that performance could be improved by carefully designing a process for selecting α. We can use the values wα to predict the full feature as w = kα⊤(Kα + λI)⁻¹wα. Notice that we can predict the entire feature matrix in parallel using W = kα⊤(Kα + λI)⁻¹Wα, where Wα = [(w1)α, . . . , (wnh)α]. For image patches, where we expect smoothness in pixel space, an appropriate kernel is the squared exponential kernel k(i, j) = exp(−((ix − jx)² + (iy − jy)²) / (2σ²)), where σ is a length scale parameter which controls the degree of smoothness. Here α has a convenient interpretation as the set of pixel locations in the image, each corresponding to a basis function in the dictionary defined by the kernel. More generically we will use α to index a collection of dictionary elements in the remainder of the paper, even when a dictionary element may not correspond directly to a pixel location as in this example. 3.3 Interpretation as pooling So far we have motivated our technique as a method for predicting features in a neural network; however, the same approach can also be interpreted as a linear pooling process. Recall that the hidden activations in a standard neural network before applying the nonlinearity are given by g−1(h) = vW. Our motivation has proceeded along the lines of replacing W with UαWα and discussing the relationship between W and its predicted counterpart. Alternatively we can write g−1(h) = vαWα where vα = vUα is a linear transformation of the data. Under this interpretation we can think of a predicted layer as being composed of two layers internally.
The first is a linear layer which applies a fixed pooling operator given by Uα, and the second is an ordinary fully connected layer with |α| visible units. 3.4 Columnar architecture The prediction process we have described so far assumes that Uα is the same for all features; however, this can be too restrictive. Continuing with the intuition that filters should be smooth local edge detectors, we might want to choose α to give high resolution in a local area of pixel space while using a sparser representation in the remainder of the space. Naturally, in this case we would want to choose several different α's, each of which concentrates high resolution information in different regions. It is straightforward to extend feature prediction to this setting. Suppose we have several different index sets α1, . . . , αJ corresponding to elements from a dictionary U. For each αj we can form the sub-dictionary Uαj and predict the feature matrix Wj = UαjWαj. The full predicted feature matrix is formed by concatenating each of these matrices blockwise, W = [W1, . . . , WJ]. Each block of the full predicted feature matrix can be treated completely independently. Blocks Wi and Wj share no parameters—even their corresponding dictionaries are different. Each αj can be thought of as defining a column of representation inside the layer. The input to each column is shared, but the representations computed in each column are independent. The output of the layer is obtained by concatenating the output of each column. This is represented graphically in Figure 3. [Diagram omitted.] Figure 3: Left: Columnar architecture in a fully connected network, with the path through one column highlighted. Each column corresponds to a different αj. Right: Columnar architecture in a convolutional network.
In this setting the wα's take linear combinations of the feature maps obtained by convolving the input with the dictionary. We make the same abuse of notation here as in the main text—the vectorized filter banks must be reshaped before the convolution takes place. Introducing additional columns into the network increases the number of static parameters but the number of dynamic parameters remains fixed. The increase in static parameters comes from the fact that each column has its own dictionary. The reason that there is not a corresponding increase in the number of dynamic parameters is that for a fixed size hidden layer the hidden units are divided between the columns. The number of dynamic parameters depends only on the number of hidden units and the size of each dictionary. In a convolutional network the interpretation is similar. In this setting we have g−1(h) = v ∗ W∗, where W∗ is an appropriately sized filter bank. Using W to denote the result of vectorizing the filters of W∗ (as is done in non-convolutional models) we can again write W = Uαwα, and using a slight abuse of notation1 we can write g−1(h) = v ∗ Uαwα. As above, we re-order the operations to obtain g−1(h) = vαwα, resulting in a structure similar to a layer in an ordinary MLP. This structure is illustrated in Figure 3. Note that v is first convolved with Uα to produce vα. That is, preprocessing in each column comes from a convolution with a fixed set of filters, defined by the dictionary. Next, we form linear combinations of these fixed convolutions, with coefficients given by wα. This particular order of operations may result in computational improvements if the number of hidden channels is larger than nα, or if the elements of Uα are separable [22]. 3.5 Constructing dictionaries We now turn our attention to selecting an appropriate dictionary for different layers of the network. The appropriate choice of dictionary inevitably depends on the structure of the weight space.
When the weight space has a topological structure where we expect smoothness, for example when the weights correspond to pixels in an image patch, we can choose a kernel-based dictionary to enforce the type of smoothness we expect. When there is no topological structure to exploit, we propose to use data-driven dictionaries. An obvious choice here is to use a shallow unsupervised feature learner, such as an autoencoder, to build a dictionary for the layer. Another option is to construct data-driven kernels for ridge regression. Easy choices here are using the empirical covariance or the empirical squared covariance of the hidden units, averaged over the data. Since the correlations in hidden activities depend on the weights in lower layers, we cannot initialize kernels in deep layers in this way without training the previous layers. We handle this by pre-training each layer as an autoencoder, and we construct the kernel using the empirical covariance of the hidden units over the data under the pre-trained weights. Once each layer has been pre-trained in this way, we fine-tune the entire network with backpropagation, but in this phase the kernel parameters are fixed. 1The vectorized filter bank W = Uαwα must be reshaped before the convolution takes place. [Plots omitted.] Figure 4: Left: Comparing the performance of different dictionaries when predicting the weights in the first two layers of an MLP network on MNIST. The legend shows the dictionary type in layer1–layer2 (see main text for details). Right: Performance on the TIMIT core test set using an MLP with two hidden layers.
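A minimal sketch of the kernel ridge construction from Section 3, recovering a smooth filter from a random subset of its pixels. The ground-truth filter, length scale, ridge value and subset size below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Recover a smooth 8x8 filter from a random subset of its pixels via kernel
# ridge regression with a squared exponential kernel.
p, sigma, lam = 8, 1.5, 0.01
coords = np.array([(ix, iy) for ix in range(p) for iy in range(p)], dtype=float)

def se_kernel(A, B):
    # k(i, j) = exp(-||i - j||^2 / (2 sigma^2)) on pixel coordinates
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# A smooth, Gabor-like ground-truth filter to recover (a hypothetical example).
w_true = np.cos(coords @ np.array([0.7, 0.3])) * np.exp(-((coords - p / 2) ** 2).sum(1) / 20)

alpha = rng.choice(p * p, size=32, replace=False)            # observed locations
K_aa = se_kernel(coords[alpha], coords[alpha])               # |alpha| x |alpha|
k_a = se_kernel(coords[alpha], coords)                       # |alpha| x p^2

U = k_a.T @ np.linalg.inv(K_aa + lam * np.eye(len(alpha)))   # U = k_a^T (K_aa + lam I)^-1
w_pred = U @ w_true[alpha]                                   # predicted full filter w = U w_alpha
```

Here U plays exactly the role of the fixed dictionary: it is computed once from the kernel (a static parameter), while the observed values wα are the dynamic parameters that would be learned.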
We also experiment with other choices for the dictionary, such as random projections (an iid Gaussian dictionary) and random connections (a dictionary composed of random columns of the identity). 4 Experiments 4.1 Multilayer perceptron We perform some initial experiments using MLPs [24] in order to demonstrate the effectiveness of our technique. We train several MLP models on MNIST using different strategies for constructing the dictionary, different numbers of columns and different degrees of reduction in the number of dynamic parameters used in each feature. We chose to explore these permutations on MNIST since it is small enough to allow us broad coverage. The networks in this experiment all have two hidden layers with a 784–500–500–10 architecture and use a sigmoid activation function. The final layer is a softmax classifier. In all cases we perform parameter prediction in the first and second layers only; the final softmax layer is never predicted. This layer contains approximately 1% of the total network parameters, so substantial savings are possible even if features in this layer are not predicted. Figure 4 (left) shows performance using several different strategies for constructing the dictionary, each using 10 columns in the first and second layers. We divide the hidden units in each layer equally between columns (so each column connects to 50 units in the layer above). [Plot omitted.] Figure 5: Performance of a convnet on CIFAR-10. Learning only 25% of the parameters has a negligible effect on predictive accuracy. The different dictionaries are as follows: nokernel is an ordinary model with no feature prediction (shown as a horizontal line). LowRank is when both U and V are optimized. RandCon is random connections (the dictionary is random columns of the identity). RandFixU is random projections using a matrix of iid Gaussian entries.
SE is ridge regression with the squared exponential kernel with length scale 1.0. Emp is ridge regression with the covariance kernel. Emp2 is ridge regression with the squared covariance kernel. AE is a dictionary pre-trained as an autoencoder. The SE-Emp and SE-Emp2 architectures perform substantially better than the alternatives, especially with few dynamic parameters. For consistency we pre-trained all of the models, except for the LowRank, as autoencoders. We did not pretrain the LowRank model because we found the autoencoder pretraining to be extremely unstable for this model. Figure 4 (right) shows the results of a similar experiment on TIMIT. The raw speech data was analyzed using a 25-ms Hamming window with a 10-ms fixed frame rate. In all the experiments, we represented the speech using 12th-order Mel frequency cepstral coefficients (MFCCs) and energy, along with their first and second temporal derivatives. The networks used in this experiment have two hidden layers with 1024 units. Phone error rate was measured by performing Viterbi decoding of
Convolutional layers have a natural topological structure to exploit, so we use a dictionary constructed with the squared exponential kernel in each convolutional layer. The input to the fully connected layer at the top of the network comes from a convolutional layer, so we use ridge regression with the squared exponential kernel to predict parameters in this layer as well.

4.3 Reconstruction ICA

Reconstruction ICA [15] is a method for learning overcomplete ICA models which is similar to a linear autoencoder network. We demonstrate that we can effectively predict parameters in RICA on both CIFAR-10 and STL-10. In order to use RICA as a classifier we follow the procedure of Coates et al. [6]. Figure 6 (left) shows the results of parameter prediction with RICA on CIFAR-10 and STL-10. RICA is a single-layer architecture, and we predict parameters using a squared exponential kernel dictionary with a length scale of 1.0. The nokernel line shows the performance of RICA with no feature prediction on the same task. In both cases we are able to predict more than half of the dynamic parameters without a substantial drop in accuracy. Figure 6 (right) compares the performance of two RICA models with the same number of dynamic parameters. One of the models is ordinary RICA with no parameter prediction, and the other has 50% of the parameters in each feature predicted using a squared exponential kernel dictionary with a length scale of 1.0; since 50% of the parameters in each feature are predicted, the second model has twice as many features with the same number of dynamic parameters.

5 Related work and future directions

Several other methods for limiting the number of parameters in a neural network have been explored in the literature. An early approach is the technique of "Optimal Brain Damage" [18], which uses approximate second-derivative information to remove parameters from an already trained network.
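To make the squared exponential prediction scheme used in the experiments above concrete, here is a minimal sketch on a 1-D weight topology: half of a feature's entries are observed (the dynamic parameters) and the rest are filled in by kernel ridge regression. The sizes, the test signal, and the regularizer `lam` are illustrative choices, not the paper's settings.

```python
import numpy as np

def se_kernel(x, y, length_scale=1.0):
    """Squared exponential kernel between two 1-D coordinate arrays."""
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

n = 20
coords = np.arange(n, dtype=float)  # 1-D topology over the weight indices
w_true = np.sin(coords / 3.0)       # a smooth "feature" to reconstruct

alpha = coords[::2]                 # observed indices (the dynamic parameters)
w_obs = w_true[::2]

# Kernel ridge regression: predict all n entries from the observed half.
lam = 1e-6
K_aa = se_kernel(alpha, alpha)
K_xa = se_kernel(coords, alpha)
w_pred = K_xa @ np.linalg.solve(K_aa + lam * np.eye(len(alpha)), w_obs)
```

Because the kernel encodes smoothness, the interpolated entries track the true feature closely when the feature itself varies smoothly across the topology, which is exactly the prior this technique exploits.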
This technique does not apply in our setting, since we aim to limit the number of parameters before training, rather than after. The most common approach to limiting the number of parameters is to use locally connected features [6]. The size of the parameterization of locally connected networks can be further reduced by using tiled convolutional networks [10], in which groups of feature weights which tile the input space are tied.

[Figure 6: Left: Comparison of the performance of RICA with and without parameter prediction on CIFAR-10 and STL-10. Right: Comparison of RICA, and RICA with 50% parameter prediction using the same number of dynamic parameters (i.e. RICA-50% has twice as many features). There is a substantial gain in accuracy with the same number of dynamic parameters using our technique. Error bars for STL-10 show 90% confidence intervals from the recommended testing protocol.]

Convolutional neural networks [13] are even more restrictive and force a feature to have tied weights for all receptive fields. Techniques similar to the one in this paper have appeared for shallow models in the computer vision literature. The double sparsity method of Rubinstein et al. [23] involves approximating linear dictionaries with other dictionaries, in a similar manner to how we approximate network features. Rigamonti et al. [22] study approximating convolutional filter banks with linear combinations of separable filters. Both of these works focus on shallow single-layer models, in contrast to our focus on deep networks.
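The parameter savings from these weight-tying schemes can be made concrete with a toy count for a single feature map. The 32 × 32 input size matches CIFAR-10, but the stride and the tiling pattern here are illustrative assumptions, not values from any cited experiment:

```python
# One feature map over a 32x32 input with 8x8 receptive fields at every
# valid location (stride 1, biases ignored).
locations = 25 * 25     # (32 - 8 + 1)^2 valid placements of an 8x8 window
rf = 8 * 8              # weights per receptive field

local = locations * rf  # locally connected: separate weights at every location
tiled = 4 * rf          # tiled (e.g. a 2x2 tile pattern): 4 shared filters
conv = 1 * rf           # convolutional: one filter shared at all locations
```

Going from locally connected to convolutional tying cuts this toy feature map from 40,000 weights to 64, which is the reduction that the per-feature prediction in this paper composes with rather than replaces.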
The techniques described in this paper are orthogonal to the parameter reduction achieved by tying weights in a tiled or convolutional pattern. Tying weights effectively reduces the number of feature maps by constraining features at different locations to share parameters. Our approach reduces the number of parameters required to represent each feature, and it is straightforward to incorporate into a tiled or convolutional network. Cireşan et al. [3] control the number of parameters by removing connections between layers in a convolutional network at random. They achieve state-of-the-art results using these randomly connected layers as part of their network. Our technique subsumes the idea of random connections, as described in Section 3.5. The idea of regularizing networks through prior knowledge of smoothness is not new, but it is a delicate process. Lang and Hinton [14] tried imposing explicit smoothness constraints through regularization but found that it universally reduced performance. Naïvely factoring the weight matrix and learning both factors tends to reduce performance as well. Although the idea is simple conceptually, execution is difficult. Gülçehre et al. [11] have demonstrated that prior knowledge is extremely important during learning, which highlights the importance of introducing it effectively. Recent work has shown that state-of-the-art results on several benchmark tasks in computer vision can be achieved by training neural networks with several columns of representation [2, 13]. The use of different preprocessing for different columns of representation is of particular relevance [2]. Our approach admits a similar interpretation, as described in Section 3.4. Unlike the work of [2], we do not consider deep columns in this paper; however, columns are an attractive way of increasing parallelism within a network, since they operate completely independently.
There is no reason we could not incorporate deeper columns into our networks, and this would make for a potentially interesting avenue of future work. Our approach is superficially similar to the factored RBM [21, 26], whose parameters form a 3-tensor. Since the total number of parameters in this model is prohibitively large, the tensor is represented as an outer product of three matrices. Major differences between our technique and the factored RBM include the fact that the factored RBM is a specific model, whereas our technique can be applied more broadly, even to factored RBMs. In addition, in a factored RBM all factors are learned, whereas in our approach the dictionary is fixed judiciously. In this paper we always choose the set α of indices uniformly at random. There are a wide variety of other options which could be considered here. Other works have focused on learning receptive fields directly [5], and it would be interesting to combine them with our technique. In a similar vein, more careful attention to the selection of kernel functions is appropriate. We have considered some simple examples and shown that they perform well, but our study is hardly exhaustive. Using different types of kernels to encode different types of prior knowledge on the weight space, or even learning the kernel functions directly as part of the optimization procedure as in [27], are possibilities that deserve exploration. When no natural topology on the weight space is available, we infer a topology for the dictionary from empirical statistics; however, it may be possible to instead construct the dictionary to induce a desired topology on the weight space directly. This has parallels to other work on inducing topology in representations [10] as well as work on learning pooling structures in deep networks [4].

6 Conclusion

We have shown how to achieve significant reductions in the number of dynamic parameters in deep models.
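A minimal sketch of the factored 3-tensor representation mentioned above, with toy sizes and hypothetical factor names `C`, `P`, `Q` (real factored RBMs are far larger):

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h, n_f = 20, 20, 5  # toy visible, hidden, and factor dimensions

C = rng.standard_normal((n_v, n_f))  # factor for the first visible copy
P = rng.standard_normal((n_v, n_f))  # factor for the second visible copy
Q = rng.standard_normal((n_h, n_f))  # factor for the hidden units

# The 3-way interaction tensor reconstructed from the factors:
# T[i, j, k] = sum_f C[i, f] * P[j, f] * Q[k, f]
T = np.einsum('if,jf,kf->ijk', C, P, Q)

full_params = T.size                        # 20 * 20 * 20 = 8,000
factored_params = C.size + P.size + Q.size  # 3 * 20 * 5   = 300
```

The contrast with our technique is visible in the code: here all three factors are learned, whereas in our approach one factor (the dictionary) is fixed ahead of training.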
The idea is orthogonal but complementary to recent advances in deep learning, such as dropout, rectified units, and maxout. It creates many avenues for future work, such as improving large-scale industrial implementations of deep networks, but also brings into question whether we have the right parameterizations in deep learning.

References

[1] Y. Bengio. Deep learning of representations: Looking forward. Technical Report arXiv:1305.0445, Université de Montréal, 2013.
[2] D. Cireşan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In IEEE Computer Vision and Pattern Recognition, pages 3642–3649, 2012.
[3] D. Cireşan, U. Meier, and J. Masci. High-performance neural networks for visual object classification. arXiv:1102.0183, 2011.
[4] A. Coates, A. Karpathy, and A. Ng. Emergence of object-selective features in unsupervised feature learning. In Advances in Neural Information Processing Systems, pages 2690–2698, 2012.
[5] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In Advances in Neural Information Processing Systems, pages 2528–2536, 2011.
[6] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Artificial Intelligence and Statistics, 2011.
[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1232–1240, 2012.
[8] L. Deng, D. Yu, and J. Platt. Scalable stacking and learning for building deep architectures. In International Conference on Acoustics, Speech, and Signal Processing, pages 2133–2136, 2012.
[9] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In International Conference on Machine Learning, 2013.
[10] K. Gregor and Y. LeCun. Emergence of complex-like cells in a temporal product network with local receptive fields.
arXiv preprint arXiv:1006.0448, 2010.
[11] C. Gülçehre and Y. Bengio. Knowledge matters: Importance of prior information for optimization. In International Conference on Learning Representations, 2013.
[12] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[13] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1106–1114, 2012.
[14] K. Lang and G. Hinton. Dimensionality reduction and prior knowledge in E-set recognition. In Advances in Neural Information Processing Systems, 1990.
[15] Q. V. Le, A. Karpenko, J. Ngiam, and A. Y. Ng. ICA with reconstruction cost for efficient overcomplete feature learning. Advances in Neural Information Processing Systems, 24:1017–1025, 2011.
[16] Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[18] Y. LeCun, J. S. Denker, S. Solla, R. E. Howard, and L. D. Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pages 598–605, 1990.
[19] K.-F. Lee and H.-W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(11):1641–1648, 1989.
[20] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th International Conference on Machine Learning, pages 807–814. Omnipress, Madison, WI, 2010.
[21] M. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images.
In Artificial Intelligence and Statistics, 2010.
[22] R. Rigamonti, A. Sironi, V. Lepetit, and P. Fua. Learning separable filters. In IEEE Computer Vision and Pattern Recognition, 2013.
[23] R. Rubinstein, M. Zibulevsky, and M. Elad. Double sparsity: learning sparse dictionaries for sparse signal approximation. IEEE Transactions on Signal Processing, 58:1553–1564, 2010.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[25] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
[26] K. Swersky, M. Ranzato, D. Buchman, B. Marlin, and N. Freitas. On autoencoders and score matching for energy based models. In International Conference on Machine Learning, pages 1201–1208, 2011.
[27] P. Vincent and Y. Bengio. A neural support vector network architecture with adaptive kernels. In International Joint Conference on Neural Networks, pages 187–192, 2000.