aid
stringlengths
9
15
mid
stringlengths
7
10
abstract
stringlengths
78
2.56k
related_work
stringlengths
92
1.77k
ref_abstract
dict
1507.00436
1460713219
This paper proposes an online transfer framework to capture the interaction among agents and shows that current transfer learning in reinforcement learning is a special case of online transfer. Furthermore, this paper re-characterizes existing agents-teaching-agents methods as online transfer and analyzes one such teaching method in three ways. First, the convergence of Q-learning and Sarsa with tabular representation under a finite budget is proven. Second, the convergence of Q-learning and Sarsa with linear function approximation is established. Third, we show that asymptotic performance cannot be hurt by teaching. Additionally, all theoretical results are empirically validated.
Finally, a branch of computational learning theory called algorithmic teaching tries to understand teaching in theoretical terms @cite_9 . In algorithmic teaching, the teacher typically determines an example sequence and teaches that sequence to the learner. There are many algorithmic teaching models, such as the teaching dimension @cite_7 and teaching learners with restricted mind changes @cite_4 . However, those models still concentrate on supervised learning. @cite_24 developed a teaching method based on algorithmic teaching, but their work focuses on computing a one-time optimal teaching sequence and lacks an online setting.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_4", "@cite_7" ], "mid": [ "950880443", "1583136812", "1590983128", "2020764470" ], "abstract": [ "A helpful teacher can significantly improve the learning rate of a learning agent. Teaching algorithms have been formally studied within the field of Algorithmic Teaching. These give important insights into how a teacher can select the most informative examples while teaching a new concept. However the field has so far focused purely on classification tasks. In this paper we introduce a novel method for optimally teaching sequential decision tasks. We present an algorithm that automatically selects the set of most informative demonstrations and evaluate it on several navigation tasks. Next, we explore the idea of using this algorithm to produce instructions for humans on how to choose examples when teaching sequential decision tasks. We present a user study that demonstrates the utility of such instructions.", "The present paper surveys recent developments in algorithmic teaching. First, the traditional teaching dimension model is recalled. Starting from the observation that the teaching dimension model sometimes leads to counterintuitive results, recently developed approaches are presented. Here, main emphasis is put on the following aspects derived from human teaching learning behavior: the order in which examples are presented should matter; teaching should become harder when the memory size of the learners decreases; teaching should become easier if the learners provide feedback; and it should be possible to teach infinite concepts and or finite and infinite concept classes. Recent developments in the algorithmic teaching achieving (some) of these aspects are presented and compared.", "Within learning theory teaching has been studied in various ways. In a common variant the teacher has to teach all learners that are restricted to output only consistent hypotheses. 
The complexity of teaching is then measured by the maximum number of mistakes a consistent learner can make until successful learning. This is equivalent to the so-called teaching dimension. However, many interesting concept classes have an exponential teaching dimension and it is only meaningful to consider the teachability of finite concept classes. A refined approach of teaching is proposed by introducing a neighborhood relation over all possible hypotheses. The learners are then restricted to choose a new hypothesis from the neighborhood of their current one. Teachers are either required to teach finitely or in the limit. Moreover, the variant that the teacher receives the current hypothesis of the learner as feedback is considered. The new models are compared to existing ones and to one another in dependence of the neighborhood relations given. In particular, it is shown that feedback can be very helpful. Moreover, within the new model one can also study the teachability of infinite concept classes with potentially infinite concepts such as languages. Finally, it is shown that in our model teachability and learnability can be rather different.", "While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension, Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class." ] }
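The teaching dimension surveyed in the abstracts above has a direct brute-force reading: for each concept, find the smallest set of labeled instances that rules out every other concept in the class. The sketch below is illustrative only; the toy concept classes are hypothetical and not taken from any cited paper.

```python
from itertools import combinations

def teaching_dimension(concepts, instances):
    """Brute-force teaching dimension of a finite concept class.

    concepts: dict mapping concept name -> set of positive instances.
    Returns the max over concepts of the smallest teaching-set size:
    the fewest labeled instances that uniquely identify that concept.
    """
    def td(target):
        for k in range(1, len(instances) + 1):
            for subset in combinations(instances, k):
                labels = [x in concepts[target] for x in subset]
                # Concepts consistent with the target's labels on this subset.
                consistent = [c for c in concepts
                              if all((x in concepts[c]) == lab
                                     for x, lab in zip(subset, labels))]
                if consistent == [target]:
                    return k
        return len(instances)
    return max(td(c) for c in concepts)

# Toy class of singletons over {0, 1, 2}: one positive example suffices.
singletons = {f"c{i}": {i} for i in range(3)}
print(teaching_dimension(singletons, range(3)))  # 1
```

The enumeration is exponential in the number of instances, which matches the observation in the abstract that many interesting concept classes have an exponential teaching dimension and only finite classes are practical to analyze this way.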
1507.00066
771161470
Cross-validation (CV) is one of the main tools for performance estimation and parameter tuning in machine learning. The general recipe for computing the CV estimate is to run a learning algorithm separately for each CV fold, a computationally expensive process. In this paper, we propose a new approach to reduce the computational burden of CV-based performance estimation. As opposed to all previous attempts, which are specific to a particular learning model or problem domain, we propose a general method applicable to a large class of incremental learning algorithms, which are uniquely fitted to big data problems. In particular, our method applies to a wide range of supervised and unsupervised learning tasks with different performance criteria, as long as the base learning algorithm is incremental. We show that the running time of the algorithm scales logarithmically, rather than linearly, in the number of CV folds. Furthermore, the algorithm has favorable properties for parallel and distributed implementation. Experiments with state-of-the-art incremental learning algorithms confirm the practicality of the proposed method.
Various methods, often specialized to specific learning settings, have been proposed to speed up the computation of the @math -CV estimate. Most frequently, efficient @math -CV computation methods are specialized to regularized least-squares (RLS) learning settings (with squared-RKHS-norm regularization). In particular, the generalized cross-validation method @cite_2 @cite_8 computes the LOOCV estimate in @math time for a dataset of size @math from the solution of the RLS problem over the whole dataset; this is generalized to @math -CV calculation in @math time by Pahikkala et al. In the special case of least-squares support vector machines (LSSVMs), Cawley shows that LOOCV can be computed in @math time using a Cholesky factorization (again, after obtaining the solution of the RLS problem). It should be noted that all of the aforementioned methods use the inverse (or some factorization) of a special matrix in their calculation; the quoted running times therefore rest on the assumption that this inverse is available (usually as a by-product of solving the RLS problem, computed in @math time). In the absence of this assumption, stochastic trace estimators @cite_0 or numerical approximation techniques @cite_3 @cite_11 are used to avoid the costly inversion of the matrix.
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_0", "@cite_2", "@cite_11" ], "mid": [ "", "2030281874", "2126759246", "1990381576", "2109047032" ], "abstract": [ "", "Abstract Although generalized cross-validation is a popular tool for calculating a regularization parameter, it has been rarely applied to large-scale problems until recently. A major difficulty lies in the evaluation of the cross-validation function that requires the calculation of the trace of an inverse matrix. In the last few years stochastic trace estimators have been proposed to alleviate this problem. This article demonstrates numerical approximation techniques that further reduce the computational complexity. The new approach employs Gauss quadrature to compute lower and upper bounds on the cross-validation function. It only requires the operator form of the system matrix—that is, a subroutine to evaluate matrix-vector products. Thus, the factorization of large matrices can be avoided. The new approach has been implemented in MATLAB. Numerical experiments confirm the remarkable accuracy of the stochastic trace estimator. Regularization parameters are computed for ill-posed problems with 100, 1,000...", "We propose a fast Monte-Carlo algorithm for calculating reliable estimates of the trace of the influence matrix A(λ) involved in regularization of linear equations or data smoothing problems, where λ is the regularization or smoothing parameter. This general algorithm is simply as follows: i) generate n pseudo-random values w_1, ..., w_n from the standard normal distribution (where n is the number of data points) and let w = (w_1, ..., w_n)^T; ii) compute the residual vector w − A(λ)w; iii) take the normalized inner product (w^T(w − A(λ)w)) / (w^T w) as an approximation to (1/n) tr(I − A(λ)). 
We show, both by theoretical bounds and by numerical simulations on some typical problems, that the expected relative precision of these estimates is very good when n is large enough, and that they can be used in practice for the minimization with respect to λ of the well-known Generalized Cross-Validation (GCV) function. This permits the use of the GCV method for choosing λ in any particular large-scale application, with only a similar amount of work as the standard residual method. Numerical applications of this procedure to optimal spline smoothing in one or two dimensions show its efficiency.", "Consider the ridge estimate β̂(λ) for β in the model y = Xβ + ε, σ² unknown, β̂(λ) = (XᵀX + nλI)⁻¹Xᵀy. We study the method of generalized cross-validation (GCV) for choosing a good value for λ from the data. The estimate is the minimizer of V(λ) = (1/n)‖(I − A(λ))y‖² / [(1/n) tr(I − A(λ))]², where A(λ) = X(XᵀX + nλI)⁻¹Xᵀ. This estimate is a rotation-invariant version of Allen's PRESS, or ordinary cross-validation. This estimate behaves like a risk improvement estimator, but does not require an estimate of σ², so can be used when n − p is small, or even if p ≥ 2n in certain cases. The GCV method can also be used in subset selection and singular value truncation methods for regression, and even to choose from among mixtures of these methods.", "In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. 
Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method." ] }
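The Monte-Carlo trace estimator described in the abstracts above admits a short NumPy sketch. The influence matrix is represented abstractly by a matrix-vector product `apply_A`; the concrete operator in the demo (a scaled identity) is an assumption chosen so the exact answer is known, not an operator from any cited paper.

```python
import numpy as np

def stochastic_trace(apply_A, n, n_samples=200, rng=None):
    """Estimate (1/n) * tr(I - A) with Gaussian probes, following the
    recipe in the abstract: draw w ~ N(0, I), compute the residual
    w - A w, and average the normalized inner products."""
    rng = np.random.default_rng(rng)
    estimates = []
    for _ in range(n_samples):
        w = rng.standard_normal(n)
        r = w - apply_A(w)                 # residual vector w - A w
        estimates.append(w @ r / (w @ w))  # normalized inner product
    return float(np.mean(estimates))

# Demo: A = 0.5 * I, so (1/n) tr(I - A) = 0.5 exactly,
# and every probe returns the exact value.
est = stochastic_trace(lambda w: 0.5 * w, n=50, n_samples=500, rng=0)
print(round(est, 2))  # 0.5
```

Only matrix-vector products are needed, which is exactly why these estimators avoid forming or factorizing the influence matrix in large-scale GCV computations.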
1507.00065
759468135
We consider the problem of estimating the perimeter of a smooth domain in the plane based on a sample from the uniform distribution over the domain. We study the performance of the estimator defined as the perimeter of the alpha-shape of the sample. Some numerical experiments corroborate our theoretical findings.
There is a series of papers that consider the problem of estimating the surface area of the boundary of a more general class of supports @math , but under a different sampling scheme where two samples are given: one from the uniform distribution on @math and another from the uniform distribution on @math , where @math is a bounded set containing @math . In that line, @cite_3 aim at estimating the Minkowski content of @math and introduce an estimator that is proved to be consistent under weak assumptions on the set @math . They obtain a convergence rate of @math in dimension 2 when @math has bounded curvature, in which case the Minkowski content coincides with the perimeter. @cite_2 @cite_8 follow their work and propose a different estimator, very closely related to the one we study here, obtaining an improved convergence rate of @math in dimension 2. Continuing this line of work, @cite_11 propose an estimator of the perimeter of @math based on a Delaunay triangulation, which is shown to be consistent under mild assumptions on @math .
{ "cite_N": [ "@cite_8", "@cite_11", "@cite_3", "@cite_2" ], "mid": [ "2066074765", "2165156565", "1984700939", "2095901857" ], "abstract": [ "The problem of estimating the Minkowski content L 0 (G) of a body G C R d is considered. For d = 2, the Minkowski content represents the boundary length of G. It is assumed that a ball of radius r can roll inside and outside the boundary of G. We use this shape restriction to propose a new estimator for L 0 (G). This estimator is based on the information provided by a random sample, taken on a square containing G, in which we know whether a sample point is in G or not. We obtain the almost sure convergence rate for the proposed estimator.", "The estimation of surface integrals on the boundary of an unknown body is a challenge for nonparametric methods in statistics, with powerful applications to physics and image analysis, among other fields. Provided that one can determine whether random shots hit the body, [Ann. Statist. 35 (2007) 1031―1051] estimate the boundary measure (the boundary length for planar sets and the surface area for 3-dimensional objects) via the consideration of shots at a box containing the body. The statistics considered by these authors, as well as those in subsequent papers, are based on the estimation of Minkowski content and depend on a smoothing parameter which must be carefully chosen. For the same sampling scheme, we introduce a new approach which bypasses this issue, providing strongly consistent estimators of both the boundary measure and the surface integrals of scalar functions, provided one can collect the function values at the sample points. Examples arise in experiments in which the density of the body can be measured by physical properties of the impacts, or in situations where such quantities as temperature and humidity are observed by randomly distributed sensors. 
Our method is based on random Delaunay triangulations and involves a simple procedure for surface reconstruction from a dense cloud of points inside and outside the body. We obtain basic asymptotics of the estimator, perform simulations and discuss, via Google Earth's data, an application to the image analysis of the Aral Sea coast and its cliffs.", "The Minkowski content L 0 (G) of a body G ⊂R d represents the boundary length (for d = 2) or the surface area (for d = 3) of G. A method for estimating L 0 (G) is proposed. It relies on a nonparametric estimator based on the information provided by a random sample (taken on a rectangle containing G) in which we are able to identify whether every point is inside or outside G. Some theoretical properties concerning strong consistency, L 1 -error and convergence rates are obtained. A practical application to a problem of image analysis in cardiology is discussed in some detail. A brief simulation study is provided.", "The problem of estimating the surface area, L 0, of a set G⊂ℝ d has been extensively considered in several fields of research. For example, stereology focuses on the estimation of L 0 without needing to reconstruct the set G. From a more geometrical point of view, set estimation theory is interested in estimating the shape of the set. Thus, surface area estimation can be seen as a further step where the emphasis is placed on an important geometric characteristic of G. The Minkowski content is an attractive way to define L 0 that has been previously used in the literature on surface area estimation. Pateiro-Lopez and Rodriguez-Casal [B. Pateiro-Lopez and A. Rodriguez-Casal, Length and surface area estimation under smoothness restrictions, Adv. Appl. Prob. 40(2) (2008), pp. 348–358] proposed an estimator, L n , for L 0 under convexity type assumptions. In this paper, we obtain the L 1-convergence rate of L n ." ] }
1507.00065
759468135
We consider the problem of estimating the perimeter of a smooth domain in the plane based on a sample from the uniform distribution over the domain. We study the performance of the estimator defined as the perimeter of the alpha-shape of the sample. Some numerical experiments corroborate our theoretical findings.
Also closely related is the work of @cite_4 in the context of binary images, which includes the estimation of the length of the boundary of a horizon of the form @math , where @math is a function with Hölder regularity. See the discussion for further comments.
{ "cite_N": [ "@cite_4" ], "mid": [ "2157169955" ], "abstract": [ "We propose a new method for estimating intrinsic dimension of a dataset derived by applying the principle of maximum likelihood to the distances between close neighbors. We derive the estimator by a Poisson process approximation, assess its bias and variance theoretically and by simulations, and apply it to a number of simulated and real datasets. We also show it has the best overall performance compared with two other intrinsic dimension estimators." ] }
1507.00270
1845833837
A medium access control protocol based on quantum entanglement has been introduced by Berces and Imre (2006) and Van Meter (2012). This protocol entirely avoids collisions. It assumes a network consisting of one access point and two client stations (CSs). We extend this scheme to a network with an arbitrary number of client stations. We propose three approaches, namely the qubit distribution, transmit-first election and temporal ordering protocols. The qubit distribution protocol leverages the concept of a Bell-EPR pair or a W-state triad. It works for networks of up to four CSs; with up to three CSs there is no probability of collision, and in a four-CS network there is a low probability of collision. The transmit-first election and temporal ordering protocols work for a network with any number of CSs. The transmit-first election protocol builds upon a W state whose size corresponds to the number of client stations. It is fair and collision free. The temporal ordering protocol employs the concepts of the Lehmer code and a quantum oracle. It is collision free, has a normalized throughput of 100% and achieves quasi-fairness.
Our protocols leverage quantum computing and quantum communications. Free-space transmission of an entangled qubit has been achieved over a record distance of 144 km @cite_17 . Berces and Imre @cite_11 and Arizmend @cite_3 have explored medium access control protocols building upon the concept of quantum entanglement. In a multi-hop wireless network, forwarding a quantum state is an important issue. The use of teleportation has been suggested @cite_5 . Assuming that the two parties pre-share one part each of a Bell pair, the state of a qubit can be transferred from one location to another using two classical bits. Hence, teleportation can transfer a quantum state over a classical communication channel, e.g., using electromagnetic waves. Because pre-shared entanglement is required between the parties, the participants need long-term storage of qubits. @cite_10 , @cite_0 and @cite_12 have developed wireless network protocols for hop-by-hop @cite_8 teleportation of qubits. Li and Yang @cite_6 use entanglement swapping @cite_15 in wireless sensor networks to achieve confidentiality.
{ "cite_N": [ "@cite_8", "@cite_17", "@cite_3", "@cite_6", "@cite_0", "@cite_5", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2092105345", "2063519529", "2088595097", "2006853333", "2093504089", "1978553093", "2067204178", "2156461795", "", "2161152425" ], "abstract": [ "In quantum communication via noisy channels, the error probability scales exponentially with the length of the channel. We present a scheme of a quantum repeater that overcomes this limitation. The central idea is to connect a string of (imperfect) entangled pairs of particles by using a novel nested purification protocol, thereby creating a single distant pair of high fidelity. Our scheme tolerates general errors on the percent level, it works with a polynomial overhead in time and a logarithmic overhead in the number of particles that need to be controlled locally.", "Quantum entanglement is the main resource to endow the field of quantum information processing with powers that exceed those of classical communication and computation. In view of applications such as quantum cryptography or quantum teleportation, extension of quantum-entanglement-based protocols to global distances is of considerable practical interest. Here we experimentally demonstrate entanglement-based quantum key distribution over 144 km. One photon is measured locally at the Canary Island of La Palma, whereas the other is sent over an optical free-space link to Tenerife, where the Optical Ground Station of the European Space Agency acts as the receiver. This exceeds previous free-space experiments by more than an order of magnitude in distance, and is an essential step towards future satellite-based quantum communication and experimental tests on quantum physics in space.", "In this work, a novel medium access control method for classical and quantum communications purposes is proposed. 
Quantum communications promise secure ways to send valuable information; therefore, quantum-device networks and quantum medium access control methods which avoid information loss will be necessary. On the other hand, excessive colliding transmissions in congested situations are a problem for classical wireless communications. Quantum parallelism and quantum multipartite entanglement are exploited to design a MAC sublayer which provides the devices fair and efficient access to the channel.", "In wireless sensor networks (WSNs), sensor nodes may be deployed in hostile areas. An eavesdropper can intercept the messages in the public channel, and the communication between the nodes is easily monitored. Furthermore, any malicious intermediate node can act as a legal receiver to alter the passing messages. Hence, message protection and sensor node identification become important issues in WSNs. In this paper, we propose a novel scheme providing unconditionally secure communication based on quantum characteristics, including no-cloning and teleportation. We present a random EPR-pair allocation scheme that is designed to overcome the vulnerability caused by possibly compromised nodes. EPR pairs are pre-assigned to sensor nodes randomly and the entangled qubits are used by the nodes with the quantum teleportation scheme to form a secure link. We also show a scheme on how to resist the man-in-the-middle attack. In the framework, the qubits are allocated to each node before deployment and the adversary is unable to create duplicated nodes. Even if malicious nodes are added to the network to falsify the messages transmitted in the public channel, the legal nodes can easily detect the fake nodes that have no entangled qubits and verify the counterfeit messages. 
In addition, we prove that one node sharing EPR pairs with a certain number of neighbor nodes can teleport information to any node in the sensor network if there are sufficient EPR pairs in the qubit pool. The proposal shows that the distributed quantum wireless sensor network gains better security than a classical wireless sensor network or a centralized quantum wireless network.", "In distributed wireless quantum communication networks, because of the storage of the clients, the EPR pairs are important resources. The lack of EPR pairs limits the size of networks. Besides, to accomplish quantum teleportation, both classical and quantum information need to be transmitted. We may have difficulty in building classical and quantum channels at the same time. To solve these problems, we introduce a mesh structure into wireless quantum communication. The distributed wireless quantum networks can be improved by the mesh structure we propose. Moreover, a quantum routing protocol based on a quantum relay mechanism is proposed in this mesh structure.", "An unknown quantum state |φ⟩ can be disassembled into, then later reconstructed from, purely classical information and purely nonclassical Einstein-Podolsky-Rosen (EPR) correlations. To do so the sender, Alice, and the receiver, Bob, must prearrange the sharing of an EPR-correlated pair of particles. Alice makes a joint measurement on her EPR particle and the unknown quantum system, and sends Bob the classical result of this measurement. Knowing this, Bob can convert the state of his EPR particle into an exact replica of the unknown state |φ⟩ which Alice destroyed.", "Using independent sources one can realize an «event-ready» Bell-Einstein-Podolsky-Rosen experiment in which one can measure directly the probabilities of the various outcomes, including nondetection of both particles. Our proposal involves two parametric down-converters. 
Subcoherence-time monitoring of the idlers provides a noninteractive quantum measurement entangling and preselecting the independent signals without touching them. We give the conditions for high fringe visibility and particle collection efficiency as required for a Bell test", "In this paper, a quantum routing mechanism is proposed to teleport a quantum state from one quantum device to another wirelessly even though these two devices do not share EPR pairs mutually. This results in the proposed quantum routing mechanism that can be used to construct the quantum wireless networks. In terms of time complexity, the proposed mechanism transports a quantum bit in time almost the same as the quantum teleportation does regardless of the number of hops between the source and destination. From this point of view, the quantum routing mechanism is close to optimal in data transmission time. In addition, in order to realize the wireless communication in the quantum domain, a hierarchical network architecture and its corresponding communication protocol are developed. Based on these network components, a scalable quantum wireless communication can be achieved.", "", "Medium Access Control (MAC) is an important part of wireless telecommunication systems. The main goal of a MAC protocol is to provide the best usage of the common resources for the users. One of these resources is typically the communication channel. By quantum informatics and computation - that gain more and more attention - some calculations and algorithms may become more efficient. The possible implementation of a quantum based system would lead us to great benefits, by applying it to an already existing problem. Here we give a model for medium access control via quantum methods." ] }
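The temporal ordering protocol above relies on the Lehmer code, which maps a permutation (here, a transmission order of client stations) to digits in factorial base and hence to a single rank. A minimal sketch of just this encoding follows; the surrounding protocol machinery, quantum oracle included, is beyond this illustration.

```python
from math import factorial

def lehmer_code(perm):
    """Lehmer code: for each position, count the later elements that
    are smaller. Encodes a permutation of n items as factorial-base
    digits."""
    return [sum(1 for right in perm[i + 1:] if right < perm[i])
            for i in range(len(perm))]

def lehmer_rank(perm):
    """Rank of the permutation in lexicographic order, obtained by
    evaluating the Lehmer code in factorial base."""
    n = len(perm)
    return sum(d * factorial(n - 1 - i)
               for i, d in enumerate(lehmer_code(perm)))

print(lehmer_code([1, 0, 2]))  # [1, 0, 0]
print(lehmer_rank([1, 0, 2]))  # 2  (third permutation of 0,1,2)
```

Because the rank is a single integer in [0, n!), it gives a compact way to communicate or agree on one ordering out of all possible transmission orders.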
1506.08959
2951947688
Updated on 24/09/2015: This update provides preliminary experiment results for fine-grained classification on the surveillance data of CompCars. The train/test splits are provided in the updated dataset. See details in Section 6.
To our knowledge, there has been no previous attempt at the car model verification task. Closely related to car model verification, face verification has been a popular topic @cite_1 @cite_5 @cite_17 @cite_6 . Recent deep-learning-based algorithms @cite_17 first train a deep neural network on human identity classification, then train a verification model with features extracted from the deep neural network. Joint Bayesian @cite_9 is a widely used verification model that models two faces jointly with an appropriate prior on the face representation. We adopt Joint Bayesian as a baseline model in car model verification.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_5", "@cite_17" ], "mid": [ "170472577", "1782590233", "2131024102", "", "1998808035" ], "abstract": [ "In this paper, we revisit the classical Bayesian face recognition method by Baback and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this \"difference\" formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-form computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Faces in the Wild (LFW) dataset. Compared with the current best commercial system, we reduced the error rate by 10%.", "Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. 
This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version.", "Various factors, such as identity, view, and illumination, are coupled in face images. Disentangling the identity and view representations is a major challenge in face recognition. Existing face recognition systems either use handcrafted features or learn features discriminatively to improve recognition accuracy. This is different from the behavior of primate brain. Recent studies [5, 19] discovered that primate brain has a face-processing network, where view and identity are processed by different neurons. Taking into account this instinct, this paper proposes a novel deep neural net, named multi-view perceptron (MVP), which can untangle the identity and view features, and in the meanwhile infer a full spectrum of multi-view images, given a single 2D face image. The identity features of MVP achieve superior performance on the MultiPIE dataset. MVP is also capable to interpolate and predict images under viewpoints that are unobserved in the training data.", "", "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). 
When learned as classifiers to recognize about 10,000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97.45% verification accuracy on LFW is achieved with only weakly aligned faces." ] }
1506.08959
2951947688
Updated on 24/09/2015: This update provides preliminary experiment results for fine-grained classification on the surveillance data of CompCars. The train/test splits are provided in the updated dataset. See details in Section 6.
Other car-related research includes detection @cite_28 , tracking @cite_14 @cite_26 , joint detection and pose estimation @cite_16 @cite_10 , and 3D parsing @cite_31 . Fine-grained car models are not explored in these studies. Previous research related to car parts includes car logo recognition @cite_29 and car style analysis based on mid-level features @cite_0 .
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_28", "@cite_29", "@cite_0", "@cite_31", "@cite_16", "@cite_10" ], "mid": [ "2073220190", "46376622", "2136767008", "2139323304", "2171322814", "2071042563", "344254576", "" ], "abstract": [ "We describe a vehicle tracking algorithm using input from a network of nonoverlapping cameras. Our algorithm is based on a novel statistical formulation that uses joint kinematic and image appearance information to link local tracks of the same vehicles into global tracks with longer persistence. The algorithm can handle significant spatial separation between the cameras and is robust to challenging tracking conditions such as high traffic density, or complex road infrastructure. In these cases, traditional tracking formulations based on MHT, or JPDA algorithms, may fail to produce track associations across cameras due to the weak predictive models employed. We make several new contributions in this paper. Firstly, we model kinematic constraints between any two local tracks using road networks and transit time distributions. The transit time distributions are calculated dynamically as convolutions of normalized transit time distributions that are learned and adapted separately for individual roads. Secondly, we present a complete statistical tracker formulation, which combines kinematic and appearance likelihoods within a multi-hypothesis framework. We have extensively evaluated the algorithm proposed using a network of ground-based cameras with narrow field of view. The tracking results obtained on a large ground-truthed dataset demonstrate the effectiveness of the algorithm proposed.", "In this work, we focus on the problem of tracking objects under significant viewpoint variations, which poses a big challenge to traditional object tracking methods. We propose a novel method to track an object and estimate its continuous pose and part locations under severe viewpoint change. 
In order to handle the change in topological appearance introduced by viewpoint transformations, we represent objects with 3D aspect parts and model the relationship between viewpoint and 3D aspect parts in a part-based particle filtering framework. Moreover, we show that instance-level online-learned part appearance can be incorporated into our model, which makes it more robust in difficult scenarios with occlusions. Experiments are conducted on a new dataset of challenging YouTube videos and a subset of the KITTI dataset [14] that include significant viewpoint variations, as well as a standard sequence for car tracking. We demonstrate that our method is able to track the 3D aspect parts and the viewpoint of objects accurately despite significant changes in viewpoint.", "Developing on-board automotive driver assistance systems aiming to alert drivers about driving environments, and possible collision with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed such as in traffic driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image as well as to verify the hypothesized locations are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. 
Finally, we present a critical overview of the methods discussed, we assess their potential for future deployment, and we present directions for future research.", "In this paper, a new algorithm for vehicle logo recognition on the basis of an enhanced scale-invariant feature transform (SIFT)-based feature-matching scheme is proposed. This algorithm is assessed on a set of 1200 logo images that belong to ten distinctive vehicle manufacturers. A series of experiments are conducted, splitting the 1200 images to a training set and a testing set, respectively. It is shown that the enhanced matching approach proposed in this paper boosts the recognition accuracy compared with the standard SIFT-based feature-matching method. The reported results indicate a high recognition rate in vehicle logos and a fast processing time, making it suitable for real-time applications.", "We present a weakly-supervised visual data mining approach that discovers connections between recurring mid-level visual elements in historic (temporal) and geographic (spatial) image collections, and attempts to capture the underlying visual style. In contrast to existing discovery methods that mine for patterns that remain visually consistent throughout the dataset, our goal is to discover visual elements whose appearance changes due to change in time or location; i.e., exhibit consistent stylistic variations across the label space (date or geo-location). To discover these elements, we first identify groups of patches that are style-sensitive. We then incrementally build correspondences to find the same element across the entire dataset. Finally, we train style-aware regressors that model each element's range of stylistic differences. We apply our approach to date and geo-location prediction and show substantial improvement over several baselines that do not model visual style. 
We also demonstrate the method's effectiveness on the related task of fine-grained classification.", "Current systems for scene understanding typically represent objects as 2D or 3D bounding boxes. While these representations have proven robust in a variety of applications, they provide only coarse approximations to the true 2D and 3D extent of objects. As a result, object-object interactions, such as occlusions or ground-plane contact, can be represented only superficially. In this paper, we approach the problem of scene understanding from the perspective of 3D shape modeling, and design a 3D scene representation that reasons jointly about the 3D shape of multiple objects. This representation allows to express 3D geometry and occlusion on the fine detail level of individual vertices of 3D wireframe models, and makes it possible to treat dependencies between objects, such as occlusion reasoning, in a deterministic way. In our experiments, we demonstrate the benefit of jointly estimating the 3D shape of multiple objects in a scene over working with coarse boxes, on the recently proposed KITTI dataset of realistic street scenes.", "Object detection and pose estimation are interdependent problems in computer vision. Many past works decouple these problems, either by discretizing the continuous pose and training pose-specific object detectors, or by building pose estimators on top of detector outputs. In this paper, we propose a structured kernel machine approach to treat object detection and pose estimation jointly in a mutually beneficial way. In our formulation, a unified, continuously parameterized, discriminative appearance model is learned over the entire pose space. We propose a cascaded discrete-continuous algorithm for efficient inference, and give effective online constraint generation strategies for learning our model using structural SVMs. 
On three standard benchmarks, our method performs better than, or on par with, state-of-the-art methods in the combined task of object detection and pose estimation.", "" ] }
1506.08909
836999996
This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.
Recently, a few datasets containing unstructured dialogues extracted from Twitter (https://twitter.com) have been used. @cite_11 collected 1.3 million conversations; this was extended in @cite_10 to take advantage of longer contexts by using A-B-A triples. @cite_4 used data from a similar Chinese website called Weibo (http://www.weibo.com). However, to our knowledge, these datasets have not been made public, and furthermore, the post-reply format of such microblogging services is perhaps not as representative of natural dialogue between humans as the continuous stream of messages in a chat room. In fact, @cite_11 estimate that only 37% of posts on Twitter are conversational in nature. Part of the Ubuntu chat logs has previously been aggregated into a dataset, called the Ubuntu Chat Corpus @cite_1 . However, that resource preserves the multi-participant structure and thus is less amenable to the investigation of more traditional two-party conversations.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_4", "@cite_11" ], "mid": [ "2121738076", "2951580200", "2159640018", "1654173042" ], "abstract": [ "We present the Ubuntu Chat Corpus as a data source for multiparticipant chat analysis. This addresses the problem of the lack of a large, publicly suitable corpora for research in this medium. The advantages of using this corpus for research is its large number of chat messages, its multiple languages, its technical nature, and all of the original chat messages are in the public domain.", "We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.", "We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation. NRM takes the general encoder-decoder framework: it formalizes the generation of response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNN). The NRM is trained with a large amount of one-round conversation data collected from a microblogging service. Empirical study shows that NRM can generate grammatically correct and content-wise appropriate responses to over 75 of the input text, outperforming state-of-the-arts in the same setting, including retrieval-based and SMT-based models.", "We propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain. Trained on a corpus of noisy Twitter conversations, our method discovers dialogue acts by clustering raw utterances. 
Because it accounts for the sequential behaviour of these acts, the learned model can provide insight into the shape of communication in a new medium. We address the challenge of evaluating the emergent model with a qualitative visualization and an intrinsic conversation ordering task. This work is inspired by a corpus of 1.3 million Twitter conversations, which will be made publicly available. This huge amount of data, available only because Twitter blurs the line between chatting and publishing, highlights the need to be able to adapt quickly to a new medium." ] }
1506.08347
799206314
The presence of occluders significantly impacts object recognition accuracy. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and landmark localization that explicitly models part occlusion. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for landmark localization and detection including challenging new data sets featuring significant occlusion. We find that the addition of an explicit occlusion model yields a detection system that outperforms existing approaches for occluded instances while maintaining competitive accuracy in detection and landmark localization for unoccluded instances.
There is a long history of face detection in the computer vision literature. A classic approach treats detection as a problem of aligning a model to a test image using techniques such as Deformable Templates @cite_12 , Active Appearance Models (AAMs) @cite_26 @cite_5 @cite_22 and elastic graph matching @cite_7 . Alignment with full 3D models provides even richer information at the cost of additional computation @cite_15 @cite_42 . A key difficulty in many of these approaches is the dependence on iterative and local search techniques for optimizing model alignment with a query image. This typically results in high computational cost and the concern that local minima may undermine system performance.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_42", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2152826865", "2102512156", "2180187800", "", "2082308025", "2160096928", "2125848778" ], "abstract": [ "We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.", "We make some simple extensions to the Active Shape Model of [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using two- instead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.", "", "", "Active Appearance Models (AAMs) and the closely related concepts of Morphable Models and Active Blobs are generative models of a certain visual phenomenon. Although linear in both shape and appearance, overall, AAMs are nonlinear parametric models in terms of the pixel intensities. Fitting an AAM to an image consists of minimising the error between the input image and the closest model instances i.e. solving a nonlinear optimisation problem. We propose an efficient fitting algorithm for AAMs based on the inverse compositional image alignment algorithm. We show that the effects of appearance variation during fitting can be precomputed (“projected out”) using this algorithm and how it can be extended to include a global shape normalising warp, typically a 2D similarity transformation. 
We evaluate our algorithm to determine which of its novel aspects improve AAM fitting performance.", "We present an approach for aligning a 3D deformable model to a single face image. The model consists of a set of sparse 3D points and the view-based patches associated with every point. Assuming a weak perspective projection model, our algorithm iteratively deforms the model and ad- justs the 3D pose to fit the image. As opposed to previous approaches, our algorithm starts the fitting without resort- ing to manual labeling of key facial points. And it makes no assumptions about global illumination or surface prop- erties, so it can be applied to a wide range of imaging con- ditions. Experiments demonstrate that our approach can effectively handle unseen faces with a variety of pose and illumination variations.", "A method for detecting and describing the features of faces using deformable templates is described. The feature of interest, an eye for example, is described by a parameterized template. An energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template. The template then interacts dynamically with the image, by altering its parameter values to minimize the energy function, thereby deforming itself to find the best fit. The final parameter values can be used as descriptors for the features. This method is demonstrated by showing deformable templates detecting eyes and mouths in real images. >" ] }
1506.08347
799206314
The presence of occluders significantly impacts object recognition accuracy. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and landmark localization that explicitly models part occlusion. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for landmark localization and detection including challenging new data sets featuring significant occlusion. We find that the addition of an explicit occlusion model yields a detection system that outperforms existing approaches for occluded instances while maintaining competitive accuracy in detection and landmark localization for unoccluded instances.
Recently, approaches based on pose regression, which train regressors that predict landmark locations from both appearance and spatial context provided by other detector responses, have also shown impressive performance @cite_37 @cite_32 @cite_39 @cite_21 @cite_2 @cite_33 @cite_16 @cite_13 @cite_30 . While these approaches lack an explicit model of face shape, stage-wise pose-regression models can be trained efficiently in a discriminative fashion and thus sidestep the optimization problems of global model alignment while providing fast, feed-forward performance at test time.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_33", "@cite_21", "@cite_32", "@cite_39", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "", "2116277445", "2135132101", "", "2046268678", "2032558548", "", "", "1998294030" ], "abstract": [ "", "Finding fiducial facial points in any frame of a video showing rich naturalistic facial behaviour is an unsolved problem. Yet this is a crucial step for geometric-feature-based facial expression analysis, and methods that use appearance-based features extracted at fiducial facial point locations. In this paper we present a method based on a combination of Support Vector Regression and Markov Random Fields to drastically reduce the time needed to search for a point's location and increase the accuracy and robustness of the algorithm. Using Markov Random Fields allows us to constrain the search space by exploiting the constellations that facial points can form. The regressors on the other hand learn a mapping between the appearance of the area surrounding a point and the positions of these points, which makes detection of the points very fast and can make the algorithm robust to variations of appearance due to facial expression and moderate changes in head pose. The proposed point detection algorithm was tested on 1855 images, the results of which showed we outperform current state of the art point detectors.", "Although facial feature detection from 2D images is a well-studied field, there is a lack of real-time methods that estimate feature points even on low quality images. Here we propose conditional regression forest for this task. While regression forest learn the relations between facial image patches and the location of feature points from the entire set of faces, conditional regression forest learn the relations conditional to global face properties. In our experiments, we use the head pose as a global property and demonstrate that conditional regression forests outperform regression forests for facial feature detection. 
We have evaluated the method on the challenging Labeled Faces in the Wild [20] database where close-to-human accuracy is achieved while processing images in real-time.", "", "Facial landmark detection is a fundamental step for many tasks in computer vision such as expression recognition and face alignment. In this paper, we focus on the detection of landmarks under realistic scenarios that include pose, illumination and expression challenges as well as blur and low-resolution input. In our approach, an n-point shape of point-landmarks is represented as a union of simpler polygonal sub-shapes. The core idea of our method is to find the sequence of deformation parameters simultaneously for all sub-shapes that transform each point-landmark into its target landmark location. To accomplish this task, we introduce an agglomerate of fern regressors. To optimize the convergence speed and accuracy we take advantage of search localization using component-landmark detectors, multi-scale analysis and learning of point cloud dynamics. Results from extensive experiments on facial images from several challenging publicly available databases demonstrate that our method (ACFeR) can reliably detect landmarks with accuracy comparable to commercial software and other state-of-the-art methods.", "We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a non-parametric set of global models for the part locations based on over one thousand hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting and occlusion than prior ones. 
We show excellent performance on a new dataset gathered from the internet and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.", "", "", "This paper presents a highly efficient, very accurate regression approach for face alignment. Our approach has two novel components: a set of local binary features, and a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. Our approach achieves the state-of-the-art results when tested on the current most challenging benchmarks. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3, 000 fps on a desktop or 300 fps on a mobile phone for locating a few dozens of landmarks." ] }
1506.08347
799206314
The presence of occluders significantly impacts object recognition accuracy. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and landmark localization that explicitly models part occlusion. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for landmark localization and detection including challenging new data sets featuring significant occlusion. We find that the addition of an explicit occlusion model yields a detection system that outperforms existing approaches for occluded instances while maintaining competitive accuracy in detection and landmark localization for unoccluded instances.
Our model is most closely related to the work of @cite_40 , which applies discriminatively trained deformable part models (DPM) @cite_11 to face analysis. This offers an intermediate between the extremes of model alignment and landmark regression by utilizing mixtures of simplified shape models that make efficient global optimization of part placements feasible while exploiting discriminative training criteria. Similar to @cite_9 , we use local part and landmark mixtures to encode richer multi-modal shape distributions. We extend this line of work by adding hierarchical structure and explicit occlusion to the model. We introduce intermediate part nodes that do not have an associated "root template" but instead serve to encode an intermediate representation of occlusion and shape state. The notion of hierarchical part models has been explored extensively as a tool for compositional representation and parameter sharing (see e.g., @cite_29 @cite_10 ). While the intermediate state represented in such models can often be formally encoded by non-hierarchical models with expanded state spaces and tied parameters, our experiments show that the particular choice of model structure proves essential for efficient representation and inference.
{ "cite_N": [ "@cite_9", "@cite_29", "@cite_40", "@cite_10", "@cite_11" ], "mid": [ "2013640163", "2143299724", "2047508432", "2153185908", "2168356304" ], "abstract": [ "We describe a method for articulated human detection and human pose estimation in static images based on a new representation of deformable part models. Rather than modeling articulation using a family of warped (rotated and foreshortened) templates, we use a mixture of small, nonoriented parts. We describe a general, flexible mixture model that jointly captures spatial relations between part locations and co-occurrence relations between part mixtures, augmenting standard pictorial structure models that encode just spatial relations. Our models have several notable properties: 1) They efficiently model articulation by sharing computation across similar warps, 2) they efficiently model an exponentially large set of global mixtures through composition of local mixtures, and 3) they capture the dependency of global geometry on local appearance (parts look different at different locations). When relations are tree structured, our models can be efficiently optimized with dynamic programming. We learn all parameters, including local appearances, spatial relations, and co-occurrence relations (which encode local rigidity) with a structured SVM solver. Because our model is efficient enough to be used as a detector that searches over scales and image locations, we introduce novel criteria for evaluating pose estimation and human detection, both separately and jointly. We show that currently used evaluation criteria may conflate these two issues. Most previous approaches model limbs with rigid and articulated templates that are trained independently of each other, while we present an extensive diagnostic evaluation that suggests that flexible structure and joint training are crucial for strong performance. 
We present experimental results on standard benchmarks that suggest our approach is the state-of-the-art system for pose estimation, improving past work on the challenging Parse and Buffy datasets while being orders of magnitude faster.", "In this paper, we address the tasks of detecting, segmenting, parsing, and matching deformable objects. We use a novel probabilistic object model that we call a hierarchical deformable template (HDT). The HDT represents the object by state variables defined over a hierarchy (with typically five levels). The hierarchy is built recursively by composing elementary structures to form more complex structures. A probability distribution (a parameterized exponential model) is defined over the hierarchy to quantify the variability in shape and appearance of the object at multiple scales. To perform inference (to estimate the most probable states of the hierarchy for an input image) we use a bottom-up algorithm called compositional inference. This algorithm is an approximate version of dynamic programming where approximations are made (e.g., pruning) to ensure that the algorithm is fast while maintaining high performance. We adapt the structure-perceptron algorithm to estimate the parameters of the HDT in a discriminative manner (simultaneously estimating the appearance and shape parameters). More precisely, we specify an exponential distribution for the HDT using a dictionary of potentials, which capture the appearance and shape cues. This dictionary can be large and so does not require handcrafting the potentials. Instead, structure-perceptron assigns weights to the potentials so that less important potentials receive small weights (this is like a "soft" form of feature selection). Finally, we provide experimental evaluation of HDTs on different visual tasks, including detection, segmentation, matching (alignment), and parsing. 
We show that HDTs achieve state-of-the-art performance for these different tasks when evaluated on data sets with groundtruth (and when compared to alternative algorithms, which are typically specialized to each task).", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data.", "We describe an object detection system based on mixtures of multiscale deformable part models. 
Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." ] }
1506.08347
799206314
The presence of occluders significantly impacts object recognition accuracy. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and landmark localization that explicitly models part occlusion. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for landmark localization and detection including challenging new data sets featuring significant occlusion. We find that the addition of an explicit occlusion model yields a detection system that outperforms existing approaches for occluded instances while maintaining competitive accuracy in detection and landmark localization for unoccluded instances.
Modeling occlusion is a natural fit for recognition systems with an explicit representation of parts. Work on generative constellation models @cite_8 @cite_27 learned parameters of a full joint distribution over the probability of part occlusion and relied on brute force enumeration for inference, a strategy that doesn't scale to large numbers of landmarks. More commonly, part occlusions are treated independently which makes computation and representation more efficient. For example, the supervised detection model of @cite_17 associates with each part a binary variable indicating occlusion and learns a corresponding appearance template for the occluded state.
{ "cite_N": [ "@cite_27", "@cite_17", "@cite_8" ], "mid": [ "2154422044", "166750225", "2167828171" ], "abstract": [ "We present a method to learn and recognize object class models from unlabeled and unsegmented cluttered scenes in a scale invariant manner. Objects are modeled as flexible constellations of parts. A probabilistic representation is used for all aspects of the object: shape, appearance, occlusion and relative scale. An entropy-based feature detector is used to select regions and their scale within the image. In learning the parameters of the scale-invariant object model are estimated. This is done using expectation-maximization in a maximum-likelihood setting. In recognition, this model is used in a Bayesian manner to classify images. The flexible nature of the model is demonstrated by excellent results over a range of datasets including geometrically constrained classes (e.g. faces, cars) and flexible objects (such as animals).", "Deformable part-based models [1, 2] achieve state-of-the-art performance for object detection, but rely on heuristic initialization during training due to the optimization of non-convex cost function. This paper investigates limitations of such an initialization and extends earlier methods using additional supervision. We explore strong supervision in terms of annotated object parts and use it to (i) improve model initialization, (ii) optimize model structure, and (iii) handle partial occlusions. Our method is able to deal with sub-optimal and incomplete annotations of object parts and is shown to benefit from semi-supervised learning setups where part-level annotation is provided for a fraction of positive examples only. Experimental results are reported for the detection of six animal classes in PASCAL VOC 2007 and 2010 datasets. 
We demonstrate significant improvements in detection performance compared to the LSVM [1] and the Poselet [3] object detectors.", "We propose a method to learn heterogeneous models of object classes for visual recognition. The training images contain a preponderance of clutter and learning is unsupervised. Our models represent objects as probabilistic constellations of rigid parts (features). The variability within a class is represented by a join probability density function on the shape of the constellation and the appearance of the parts. Our method automatically identifies distinctive features in the training set. The set of model parameters is then learned using expectation maximization. When trained on different, unlabeled and unsegmented views of a class of objects, each component of the mixture model can adapt to represent a subset of the views. Similarly, different component models can also \"specialize\" on sub-classes of an object class. Experiments on images of human heads, leaves from different species of trees, and motor-cars demonstrate that the method works well over a wide variety of objects." ] }
1506.08800
830553840
NoSQL databases like Redis, Cassandra, and MongoDB are increasingly popular because they are flexible, lightweight, and easy to work with. Applications that use these databases will evolve over time, sometimes necessitating (or preferring) a change to the format or organization of the data. The problem we address in this paper is: How can we support the evolution of high-availability applications and their NoSQL data online, without excessive delays or interruptions, even in the presence of backward-incompatible data format changes? We present KVolve, an extension to the popular Redis NoSQL database, as a solution to this problem. KVolve permits a developer to submit an upgrade specification that defines how to transform existing data to the newest version. This transformation is applied lazily as applications interact with the database, thus avoiding long pause times. We demonstrate that KVolve is expressive enough to support substantial practical updates, including format changes to RedisFS, a Redis-backed file system, while imposing essentially no overhead in general use and minimal pause times during updates.
In contrast, the F1 database from Google implemented an asynchronous protocol @cite_37 for adding and removing tables, columns and indexes, which allows the servers in a distributed database system to access and update all the data during a schema change and to transition to the new schema at different times. This is achieved by having stateless database servers with temporal schema leases, by identifying which schema-change operations may cause inconsistencies, and by breaking these into a sequence of schema changes that preserve database consistency as long as servers are no more than one schema version behind. Google's Spanner distributed key-value store @cite_19 (which provides F1's backend) supports changes to key formats and values by registering schema-change transactions at a specific time in the future and by utilizing globally synchronized clocks to coordinate reads and writes with these transactions. These systems do not address changes to the format of Protobufs stored in the F1 columns or Spanner values @cite_16 or inconsistencies that may be caused by interactions with (stateful) clients using different schemas @cite_57 .
{ "cite_N": [ "@cite_57", "@cite_19", "@cite_37", "@cite_16" ], "mid": [ "1970500070", "2013409485", "2140708907", "2124582661" ], "abstract": [ "Online software upgrades are often plagued by runtime behaviors that are poorly understood and difficult to ascertain. For example, the interactions among multiple versions of the software expose the system to race conditions that can introduce latent errors or data corruption. Moreover, industry trends suggest that online upgrades are currently needed in large-scale enterprise systems, which often span multiple administrative domains (e.g., Web 2.0 applications that rely on AJAX client-side code or systems that lease cloud-computing resources). In such systems, the enterprise does not control all the tiers of the system and cannot coordinate the upgrade process, making existing techniques inadequate to prevent mixed-version races. In this paper, we present an analytical framework for impact assessment, which allows system administrators to directly compare the risk of following an online-upgrade plan with the risk of delaying or canceling the upgrade. We also describe an executable model that implements our formal impact assessment and enables a systematic approach for deciding whether an online upgrade is appropriate. Our model provides a method of last resort for avoiding undesirable program behaviors, in situations where mixed-version races cannot be avoided through other technical means.", "Spanner is Google’s scalable, multiversion, globally distributed, and synchronously replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This article describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. 
This API and its implementation are critical to supporting external consistency and a variety of powerful features: nonblocking reads in the past, lock-free snapshot transactions, and atomic schema changes, across all of Spanner.", "We introduce a protocol for schema evolution in a globally distributed database management system with shared data, stateless servers, and no global membership. Our protocol is asynchronous--it allows different servers in the database system to transition to a new schema at different times--and online--all servers can access and update all data during a schema change. We provide a formal model for determining the correctness of schema changes under these conditions, and we demonstrate that many common schema changes can cause anomalies and database corruption. We avoid these problems by replacing corruption-causing schema changes with a sequence of schema changes that is guaranteed to avoid corrupting the database so long as all servers are no more than one schema version behind at any time. Finally, we discuss a practical implementation of our protocol in F1, the database management system that stores data for Google AdWords.", "The need to handle increasingly larger data volumes is one factor driving the adoption of a new class of nonrelational NoSQL databases. Advocates of NoSQL databases claim they can be used to build systems that are more performant, scale better, and are easier to program. NoSQL Distilled is a concise but thorough introduction to this rapidly emerging technology. Pramod J. Sadalage and Martin Fowler explain how NoSQL databases work and the ways that they may be a superior alternative to a traditional RDBMS. The authors provide a fast-paced guide to the concepts you need to know in order to evaluate whether NoSQL databases are right for your needs and, if so, which technologies you should explore further. 
The first part of the book concentrates on core concepts, including schemaless data models, aggregates, new distribution models, the CAP theorem, and map-reduce. In the second part, the authors explore architectural and design issues associated with implementing NoSQL. They also present realistic use cases that demonstrate NoSQL databases at work and feature representative examples using Riak, MongoDB, Cassandra, and Neo4j. In addition, by drawing on Pramod Sadalage's pioneering work, NoSQL Distilled shows how to implement evolutionary design with schema migration: an essential technique for applying NoSQL databases. The book concludes by describing how NoSQL is ushering in a new age of Polyglot Persistence, where multiple data-storage worlds coexist, and architects can choose the technology best optimized for each type of data access." ] }
1506.08800
830553840
NoSQL databases like Redis, Cassandra, and MongoDB are increasingly popular because they are flexible, lightweight, and easy to work with. Applications that use these databases will evolve over time, sometimes necessitating (or preferring) a change to the format or organization of the data. The problem we address in this paper is: How can we support the evolution of high-availability applications and their NoSQL data online, without excessive delays or interruptions, even in the presence of backward-incompatible data format changes? We present KVolve, an extension to the popular Redis NoSQL database, as a solution to this problem. KVolve permits a developer to submit an upgrade specification that defines how to transform existing data to the newest version. This transformation is applied lazily as applications interact with the database, thus avoiding long pause times. We demonstrate that KVolve is expressive enough to support substantial practical updates, including format changes to RedisFS, a Redis-backed file system, while imposing essentially no overhead in general use and minimal pause times during updates.
One approach to defining schema changes defines a declarative schema evolution language for NoSQL databases @cite_26 . This language allows specifying more comprehensive schema changes and enables the automatic generation of database queries for migrating eagerly to the new schema. (While the paper also mentions the possibility of performing the migration in a lazy manner, which is needed for avoiding downtime, design and implementation details are not provided.) Other approaches use a domain-specific language (DSL) for describing data schema migrations for Python @cite_20 and for Haskell datatypes @cite_5 . Many other approaches @cite_30 @cite_47 @cite_45 @cite_55 have focused on the problem of synthesizing the transformation code to migrate from one schema version to the next, and the transformation is then typically applied offline, rather than incrementally online. In this paper, we focus on how to apply a transformation without halting service rather than synthesizing the transformation code.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_55", "@cite_45", "@cite_5", "@cite_47", "@cite_20" ], "mid": [ "1588213250", "2962765823", "2492205055", "1556253886", "", "2040811166", "2464326659" ], "abstract": [ "The purely manual specification of semantic correspondences between schemas is almost infeasible for very large schemas or when many different schemas have to be matched. Hence, solving such large-scale match tasks asks for automatic or semiautomatic schema matching approaches. Large-scale matching needs especially to be supported for XML schemas and different kinds of ontologies due to their increasing use and size, e.g., in e-business and web and life science applications. Unfortunately, correctly and efficiently matching large schemas and ontologies are very challenging, and most previous match systems have only addressed small match tasks. We provide an overview about recently proposed approaches to achieve high match quality or and high efficiency for large-scale matching. In addition to describing some recent matchers utilizing instance and usage data, we cover approaches on early pruning of the search space, divide and conquer strategies, parallel matching, tuning matcher combinations, the reuse of previous match results, and holistic schema matching. We also provide a brief comparison of selected match tools.", "NoSQL data stores are commonly schema-less, providing no means for globally defining or managing the schema. While this offers great flexibility in early stages of application development, developers soon can experience the heavy burden of dealing with increasingly heterogeneous data. This paper targets schema evolution for NoSQL data stores, the complex task of adapting and changing the implicit structure of the data stored. We discuss the recommendations of the developer community on handling schema changes, and introduce a simple, declarative schema evolution language. 
With our language, software developers and architects can systematically manage the evolution of their production data and perform typical schema maintenance tasks. We further provide a holistic NoSQL database programming language to define the semantics of our schema evolution language. Our solution does not require any modifications to the NoSQL data store, treating the data store as a black box. Thus, we want to address application developers that use NoSQL systems as database-as-a-service.", "", "To achieve interoperability, modern information systems and e-commerce applications use mappings to translate data from one representation to another. In dynamic environments like the Web, data sources may change not only their data but also their schemas, their semantics, and their query capabilities. Such changes must be reflected in the mappings. Mappings left inconsistent by a schema change have to be detected and updated. As large, complicated schemas become more prevalent, and as data is reused in more applications, manually maintaining mappings (even simple mappings like view definitions) is becoming impractical. We present a novel framework and a tool (ToMAS) for automatically adapting mappings as schemas evolve. Our approach considers not only local changes to a schema, but also changes that may affect and transform many components of a schema. We consider a comprehensive class of mappings for relational and XML schemas with choice types and (nested) constraints. Our algorithm detects mappings affected by a structural or constraint change and generates all the rewritings that are consistent with the semantics of the mapped schemas. Our approach explicitly models mapping choices made by a user and maintains these choices, whenever possible, as the schemas and mappings evolve. 
We describe an implementation of a mapping management and adaptation tool based on these ideas and compare it with a mapping generation tool.", "", "Supporting database schema evolution represents a long-standing challenge of practical and theoretical importance for modern information systems. In this paper, we describe techniques and systems for automating the critical tasks of migrating the database and rewriting the legacy applications. In addition to labor saving, the benefits delivered by these advances are many and include reliable prediction of outcome, minimization of downtime, system-produced documentation, and support for archiving, historical queries, and provenance. The PRISM PRISM++ system delivers these benefits, by solving the difficult problem of automating the migration of databases and the rewriting of queries and updates. In this paper, we present the PRISM PRISM++ system and the novel technology that made it possible. In particular, we focus on the difficult and previously unsolved problem of supporting legacy queries and updates under schema and integrity constraints evolution. The PRISM PRISM++ approach consists in providing the users with a set of SQL-based Schema Modification Operators (SMOs), which describe how the tables in the old schema are modified into those in the new schema. In order to support updates, SMOs are extended with integrity constraints modification operators. By using recent results on schema mapping, the paper (i) characterizes the impact on integrity constraints of structural schema changes, (ii) devises representations that enable the rewriting of updates, and (iii) develop a unified approach for query and update rewriting under constraints. We complement the system with two novel tools: the first automatically collects and provides statistics on schema evolution histories, whereas the second derives equivalent sequences of SMOs from the migration scripts that were used for schema upgrades. 
These tools were used to produce an extensive testbed containing 15 evolution histories of scientific databases and web information systems, providing over 100 years of aggregate evolution histories and almost 2,000 schema evolution steps.", "SDN controllers must be periodically upgraded to add features, improve performance, and fix bugs, but current techniques for implementing dynamic updates---i.e., without disrupting ongoing network functions---are inadequate. Simply halting the old controller and bringing up the new one can cause state to be lost, leading to incorrect behavior. For example, if the state represents flows blacklisted by a firewall, then traffic that should be blocked may be allowed to pass through. Techniques based on record and replay can reconstruct controller state automatically, but they are expensive to deploy and do not work in all scenarios. This paper presents a new approach to implementing dynamic updates for SDN controllers. We present the design and implementation of a new controller platform called Morpheus that uses explicit state transfer to implement dynamic updates. Morpheus enables programmers to directly initialize the upgraded controller's state as a function of its existing state, using a domain-specific language that is designed to be easy to use. Morpheus also offers a distributed protocol for safely deploying updates across multiple nodes. Experiments confirm that Morpheus provides correct behavior and good performance." ] }
1506.08800
830553840
NoSQL databases like Redis, Cassandra, and MongoDB are increasingly popular because they are flexible, lightweight, and easy to work with. Applications that use these databases will evolve over time, sometimes necessitating (or preferring) a change to the format or organization of the data. The problem we address in this paper is: How can we support the evolution of high-availability applications and their NoSQL data online, without excessive delays or interruptions, even in the presence of backward-incompatible data format changes? We present KVolve, an extension to the popular Redis NoSQL database, as a solution to this problem. KVolve permits a developer to submit an upgrade specification that defines how to transform existing data to the newest version. This transformation is applied lazily as applications interact with the database, thus avoiding long pause times. We demonstrate that KVolve is expressive enough to support substantial practical updates, including format changes to RedisFS, a Redis-backed file system, while imposing essentially no overhead in general use and minimal pause times during updates.
Our work is also related to the body of research on dynamic software updates @cite_42 @cite_23 @cite_51 @cite_50 , which aim to modify a running program on-the-fly, without causing downtime. However, with the exception of a position paper @cite_32 , these approaches focus on changes to code and data structures loaded in memory, rather than changes to the formats of persistent data stored in a database.
{ "cite_N": [ "@cite_42", "@cite_32", "@cite_50", "@cite_23", "@cite_51" ], "mid": [ "2049659774", "", "2108247069", "2166974198", "2125977605" ], "abstract": [ "Dynamic software updating (DSU) systems facilitate software updates to running programs, thereby permitting developers to add features and fix bugs without downtime. This article introduces Kitsune, a DSU system for C. Kitsune’s design has three notable features. First, Kitsune updates the whole program, rather than individual functions, using a mechanism that places no restrictions on data representations or allowed compiler optimizations. Second, Kitsune makes the important aspects of updating explicit in the program text, making the program’s semantics easy to understand while minimizing programmer effort. Finally, the programmer can write simple specifications to direct Kitsune to generate code that traverses and transforms old-version state for use by new code; such state transformation is often necessary and is significantly more difficult in prior DSU systems. We have used Kitsune to update six popular, open-source, single- and multithreaded programs and find that few program changes are required to use Kitsune, that it incurs essentially no performance overhead, and that update times are fast.", "", "This paper presents POLUS, a software maintenance tool capable of iteratively evolving running software into newer versions. POLUS's primary goal is to increase the dependability of contemporary server software, which is frequently disrupted either by external attacks or by scheduled upgrades. To render POLUS both practical and powerful, we design and implement POLUS aiming to retain backward binary compatibility, support for multithreaded software and recover already tainted state of running software, yet with good usability and very low runtime overhead. 
To demonstrate the applicability of POLUS, we report our experience in using POLUS to dynamically update three prevalent server applications: vsftpd, sshd and apache HTTP server. Performance measurements show that POLUS incurs negligible runtime overhead: a less than 1 performance degradation (but 5 for one case). The time to apply an update is also minimal.", "This paper presents Rubah, the first dynamic software updating system for Java that: is portable, implemented via libraries and bytecode rewriting on top of a standard JVM; is efficient, imposing essentially no overhead on normal, steady-state execution; is flexible, allowing nearly arbitrary changes to classes between updates; and isnon-disruptive, employing either a novel eager algorithm that transforms the program state with multiple threads, or a novel lazy algorithm that transforms objects as they are demanded, post-update. Requiring little programmer effort, Rubah has been used to dynamically update five long-running applications: the H2 database, the Voldemort key-value store, the Jake2 implementation of the Quake 2 shooter game, the CrossFTP server, and the JavaEmailServer.", "The pressing demand to deploy software updates without stopping running programs has fostered much research on live update systems in the past decades. Prior solutions, however, either make strong assumptions on the nature of the update or require extensive and error-prone manual effort, factors which discourage live update adoption. This paper presents Mutable Checkpoint-Restart (MCR), a new live update solution for generic (multiprocess and multithreaded) server programs written in C. Compared to prior solutions, MCR can support arbitrary software updates and automate most of the common live update operations. 
The key idea is to allow the new version to restart as similarly to a fresh program initialization as possible, relying on existing code paths to automatically restore the old program threads and reinitialize a relevant portion of the program data structures. To transfer the remaining data structures, MCR relies on a combination of precise and conservative garbage collection techniques to trace all the global pointers and apply the required state transformations on the fly. Experimental results on popular server programs (Apache httpd, nginx, OpenSSH and vsftpd) confirm that our techniques can effectively automate problems previously deemed difficult at the cost of negligible run-time performance overhead (2 on average) and moderate memory overhead (3.9x on average)." ] }
1506.08349
2253824031
A deep learning approach has been proposed recently to derive speaker identities (d-vector) by a deep neural network (DNN). This approach has been applied to text-dependent speaker recognition tasks and shows reasonable performance gains when combined with the conventional i-vector approach. Although promising, the existing d-vector implementation still cannot compete with the i-vector baseline. This paper presents two improvements for the deep learning approach: a phone-dependent DNN structure to normalize phone variation, and a new scoring approach based on dynamic time warping (DTW). Experiments on a text-dependent speaker recognition task demonstrated that the proposed methods can provide considerable performance improvement over the existing d-vector implementation.
This paper follows the work in @cite_10 and provides several extensions. Particularly, the speaker identity in @cite_10 is represented by a d-vector derived by average pooling, which is quite neat and efficient, but loses much information of the test signal, such as the distributional property and the temporal constraint. One of the main contributions of this paper is to investigate how to utilize the temporal constraint in the DNN-based approach.
{ "cite_N": [ "@cite_10" ], "mid": [ "2046056978" ], "abstract": [ "In this paper we investigate the use of deep neural networks (DNNs) for a small footprint text-dependent speaker verification task. At development stage, a DNN is trained to classify speakers at the frame-level. During speaker enrollment, the trained DNN is used to extract speaker specific features from the last hidden layer. The average of these speaker features, or d-vector, is taken as the speaker model. At evaluation stage, a d-vector is extracted for each utterance and compared to the enrolled speaker model to make a verification decision. Experimental results show the DNN based speaker verification system achieves good performance compared to a popular i-vector system on a small footprint text-dependent speaker verification task. In addition, the DNN based system is more robust to additive noise and outperforms the i-vector system at low False Rejection operating points. Finally the combined system outperforms the i-vector system by 14 and 25 relative in equal error rate (EER) for clean and noisy conditions respectively." ] }
1506.08163
1550035701
We study the estimation error of constrained M-estimators, and derive explicit upper bounds on the expected estimation error determined by the Gaussian width of the constraint set. Both of the cases where the true parameter is on the boundary of the constraint set (matched constraint), and where the true parameter is strictly in the constraint set (mismatched constraint) are considered. For both cases, we derive novel universal estimation error bounds for regression in a generalized linear model with the canonical link function. Our error bound for the mismatched constraint case is minimax optimal in terms of its dependence on the sample size, for Gaussian linear regression by the Lasso.
In @cite_20 @cite_7 , the authors derived sharp estimation error bounds for regression in the linear model by constrained least squares (LS) estimators. The analysis in @cite_0 provides a minimax estimation error bound for the same setting. There are some related works on learning a function in a function class @cite_2 @cite_6 . When the function class is linearly parametrized by vectors in @math , and the function corresponding to @math is in the function class, the @math -estimation error in the function class may be translated into the @math -estimation error with respect to @math . A common limitation of @cite_2 @cite_6 @cite_7 @cite_20 @cite_0 is that the results are not extendable to general non-linear statistical models.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_0", "@cite_2", "@cite_20" ], "mid": [ "2077306951", "2949521215", "2953072791", "1603156887", "1928866391" ], "abstract": [ "We consider the problem of estimating an unknown signal @math from noisy linear observations @math . In many practical instances, @math has a certain structure that can be captured by a structure inducing convex function @math . For example, @math norm can be used to encourage a sparse solution. To estimate @math with the aid of @math , we consider the well-known LASSO method and provide sharp characterization of its performance. We assume the entries of the measurement matrix @math and the noise vector @math have zero-mean normal distributions with variances @math and @math respectively. For the LASSO estimator @math , we attempt to calculate the Normalized Square Error (NSE) defined as @math as a function of the noise level @math , the number of observations @math and the structure of the signal. We show that, the structure of the signal @math and choice of the function @math enter the error formulae through the summary parameters @math and @math , which are defined as the Gaussian squared-distances to the subdifferential cone and to the @math -scaled subdifferential, respectively. The first LASSO estimator assumes a-priori knowledge of @math and is given by @math . We prove that its worst case NSE is achieved when @math and concentrates around @math . Secondly, we consider @math , for some @math . This time the NSE formula depends on the choice of @math and is given by @math . We then establish a mapping between this and the third estimator @math . 
Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.", "We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails. Rather than resorting to a concentration-based argument, the method used here relies on a small-ball' assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets. The resulting estimates scale correctly with the noise level' of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.", "This tutorial provides an exposition of a flexible geometric framework for high dimensional estimation problems with constraints. The tutorial develops geometric intuition about high dimensional sets, justifies it with some results of asymptotic convex geometry, and demonstrates connections between geometric results and estimation problems. The theory is illustrated with applications to sparse recovery, matrix completion, quantization, linear and logistic regression and generalized linear models.", "", "This paper considers the linear inverse problem where we wish to estimate a structured signal x_0 from its corrupted observations. When the problem is ill-posed, it is natural to associate a convex function f(·) with the structure of the signal. For example, l_1 norm can be used for sparse signals. To carry out the estimation, we consider two well-known convex programs: 1) Second order cone program (SOCP), and, 2) Lasso. Assuming Gaussian measurements, we show that, if precise information about the value f(x_0) or the l_2-norm of the noise is available, one can do a particularly good job at estimation. 
In particular, the reconstruction error becomes proportional to the “sparsity” of the signal rather than to the ambient dimension of the noise vector. We connect our results to the existing literature and provide a discussion on their relation to the standard least-squares problem. Our error bounds are non-asymptotic and sharp, they apply to arbitrary convex functions and do not assume any distribution on the noise." ] }
1506.08163
1550035701
We study the estimation error of constrained M-estimators, and derive explicit upper bounds on the expected estimation error determined by the Gaussian width of the constraint set. Both of the cases where the true parameter is on the boundary of the constraint set (matched constraint), and where the true parameter is strictly in the constraint set (mismatched constraint) are considered. For both cases, we derive novel universal estimation error bounds for regression in a generalized linear model with the canonical link function. Our error bound for the mismatched constraint case is minimax optimal in terms of its dependence on the sample size, for Gaussian linear regression by the Lasso.
Another research direction considers constrained estimation in possibly non-linear statistical models @cite_24 @cite_23 @cite_10. A constrained @math -estimator for logistic regression was proposed and analyzed in @cite_24. In @cite_10, the authors proposed and analyzed a universal projection-based estimator for regression in generalized linear models (GLMs). In @cite_23, the authors analyzed the performance of the constrained LS estimator in GLMs. A common limitation of @cite_24 @cite_23 @cite_10 is that the results are valid only for the specific proposed estimators; they do not even apply to the constrained maximum-likelihood (ML) estimator, which is the most popular approach in practice. Moreover, the proposed estimators in @cite_24 @cite_23 @cite_10 can only recover the true parameter up to a scale ambiguity.
{ "cite_N": [ "@cite_24", "@cite_10", "@cite_23" ], "mid": [ "2964322027", "2963403872", "2952339920" ], "abstract": [ "This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an -sparse signal in can be accurately estimated from m = O(s log(n s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1 2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O (s log (2n s)) Bernoulli trials are sufficient to estimate a coefficient vector in which is approximately -sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.", "Author(s): Plan, Y; Vershynin, R; Yudovina, E | Abstract: Consider measuring an n-dimensional vector x through the inner product with several measurement vectors, a_1, a_2, ..., a_m. It is common in both signal processing and statistics to assume the linear response model y_i = + e_i, where e_i is a noise term. However, in practice the precise relationship between the signal x and the observations y_i may not follow the linear model, and in some cases it may not even be known. 
To address this challenge, in this paper we propose a general model where it is only assumed that each observation y_i may depend on a_i only through . We do not assume that the dependence is known. This is a form of the semiparametric single index model, and it includes the linear model as well as many forms of the generalized linear model as special cases. We further assume that the signal x has some structure, and we formulate this as a general assumption that x belongs to some known (but arbitrary) feasible set K. We carefully detail the benefit of using the signal structure to improve estimation. The theory is based on the mean width of K, a geometric parameter which can be used to understand its effective dimension in estimation problems. We determine a simple, efficient two-step procedure for estimating the signal based on this model -- a linear estimation followed by metric projection onto K. We give general conditions under which the estimator is minimax optimal up to a constant. This leads to the intriguing conclusion that in the high noise regime, an unknown non-linearity in the observations does not significantly reduce one's ability to determine the signal, even when the non-linearity may be non-invertible. Our results may be specialized to understand the effect of non-linearities in compressed sensing.", "We study the problem of signal estimation from non-linear observations when the signal belongs to a low-dimensional set buried in a high-dimensional space. A rough heuristic often used in practice postulates that non-linear observations may be treated as noisy linear observations, and thus the signal may be estimated using the generalized Lasso. This is appealing because of the abundance of efficient, specialized solvers for this program. 
Just as noise may be diminished by projecting onto the lower dimensional space, the error from modeling non-linear observations with linear observations will be greatly reduced when using the signal structure in the reconstruction. We allow general signal structure, only assuming that the signal belongs to some set K in R^n. We consider the single-index model of non-linearity. Our theory allows the non-linearity to be discontinuous, not one-to-one and even unknown. We assume a random Gaussian model for the measurement matrix, but allow the rows to have an unknown covariance matrix. As special cases of our results, we recover near-optimal theory for noisy linear observations, and also give the first theoretical accuracy guarantee for 1-bit compressed sensing with unknown covariance matrix of the measurement vectors." ] }
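The two-step projection estimator described in @cite_10 (a linear estimate followed by metric projection onto the feasible set K) can be sketched as follows. The sign non-linearity (1-bit observations), the sparsity level, and the choice of K as the set of s-sparse unit vectors are illustrative assumptions; since the model identifies the parameter only up to scale, the sketch compares directions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 50, 2000, 3              # ambient dimension, samples, sparsity

x_true = np.zeros(n)
x_true[:s] = [3.0, -2.0, 1.0]
x_true /= np.linalg.norm(x_true)   # only the direction is identifiable

A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)            # 1-bit observations y_i = sign(<a_i, x>)

# Step 1: linear estimate -- an average of y_i * a_i
x_lin = (A.T @ y) / m

# Step 2: metric projection onto K, here the set of s-sparse unit vectors
def project_sparse_unit(x, s):
    """Keep the s largest-magnitude entries, then renormalize."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out / np.linalg.norm(out)

x_hat = project_sparse_unit(x_lin, s)
print(abs(x_hat @ x_true))         # close to 1: the direction is recovered
```

The projection step is what exploits the signal structure: without it, the 47 off-support coordinates of the linear estimate would contribute pure noise.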
1506.08163
1550035701
We study the estimation error of constrained M-estimators, and derive explicit upper bounds on the expected estimation error determined by the Gaussian width of the constraint set. Both of the cases where the true parameter is on the boundary of the constraint set (matched constraint), and where the true parameter is strictly in the constraint set (mismatched constraint) are considered. For both cases, we derive novel universal estimation error bounds for regression in a generalized linear model with the canonical link function. Our error bound for the mismatched constraint case is minimax optimal in terms of its dependence on the sample size, for Gaussian linear regression by the Lasso.
We say that the constraint is matched if @math lies on the boundary of @math (or @math ), and mismatched if @math lies strictly in @math (or @math ). The analyses in @cite_20 @cite_7 require the constraint to be matched, while in practice the exact value of @math is seldom known. The constraint in @cite_2 is always matched due to the special structure of quantum density operators. The error bounds in @cite_24 @cite_0 can be overly pessimistic, because they hold for all @math. The results in @cite_6 @cite_23 @cite_10 do not require a matched constraint and depend on @math ; our result is of this kind. Recall, however, that @cite_6 is limited to specific statistical models, and @cite_23 @cite_10 are limited to specific @math -estimators.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_24", "@cite_0", "@cite_23", "@cite_2", "@cite_10", "@cite_20" ], "mid": [ "2077306951", "2949521215", "2964322027", "2953072791", "2952339920", "1603156887", "2963403872", "1928866391" ], "abstract": [ "We consider the problem of estimating an unknown signal @math from noisy linear observations @math . In many practical instances, @math has a certain structure that can be captured by a structure inducing convex function @math . For example, @math norm can be used to encourage a sparse solution. To estimate @math with the aid of @math , we consider the well-known LASSO method and provide sharp characterization of its performance. We assume the entries of the measurement matrix @math and the noise vector @math have zero-mean normal distributions with variances @math and @math respectively. For the LASSO estimator @math , we attempt to calculate the Normalized Square Error (NSE) defined as @math as a function of the noise level @math , the number of observations @math and the structure of the signal. We show that, the structure of the signal @math and choice of the function @math enter the error formulae through the summary parameters @math and @math , which are defined as the Gaussian squared-distances to the subdifferential cone and to the @math -scaled subdifferential, respectively. The first LASSO estimator assumes a-priori knowledge of @math and is given by @math . We prove that its worst case NSE is achieved when @math and concentrates around @math . Secondly, we consider @math , for some @math . This time the NSE formula depends on the choice of @math and is given by @math . We then establish a mapping between this and the third estimator @math . 
Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.", "We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails. Rather than resorting to a concentration-based argument, the method used here relies on a small-ball' assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets. The resulting estimates scale correctly with the noise level' of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.", "This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an -sparse signal in can be accurately estimated from m = O(s log(n s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1 2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O (s log (2n s)) Bernoulli trials are sufficient to estimate a coefficient vector in which is approximately -sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set where signals reside. 
The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.", "This tutorial provides an exposition of a flexible geometric framework for high dimensional estimation problems with constraints. The tutorial develops geometric intuition about high dimensional sets, justifies it with some results of asymptotic convex geometry, and demonstrates connections between geometric results and estimation problems. The theory is illustrated with applications to sparse recovery, matrix completion, quantization, linear and logistic regression and generalized linear models.", "We study the problem of signal estimation from non-linear observations when the signal belongs to a low-dimensional set buried in a high-dimensional space. A rough heuristic often used in practice postulates that non-linear observations may be treated as noisy linear observations, and thus the signal may be estimated using the generalized Lasso. This is appealing because of the abundance of efficient, specialized solvers for this program. Just as noise may be diminished by projecting onto the lower dimensional space, the error from modeling non-linear observations with linear observations will be greatly reduced when using the signal structure in the reconstruction. We allow general signal structure, only assuming that the signal belongs to some set K in R^n. We consider the single-index model of non-linearity. Our theory allows the non-linearity to be discontinuous, not one-to-one and even unknown. We assume a random Gaussian model for the measurement matrix, but allow the rows to have an unknown covariance matrix. 
As special cases of our results, we recover near-optimal theory for noisy linear observations, and also give the first theoretical accuracy guarantee for 1-bit compressed sensing with unknown covariance matrix of the measurement vectors.", "", "Author(s): Plan, Y; Vershynin, R; Yudovina, E | Abstract: Consider measuring an n-dimensional vector x through the inner product with several measurement vectors, a_1, a_2, ..., a_m. It is common in both signal processing and statistics to assume the linear response model y_i = + e_i, where e_i is a noise term. However, in practice the precise relationship between the signal x and the observations y_i may not follow the linear model, and in some cases it may not even be known. To address this challenge, in this paper we propose a general model where it is only assumed that each observation y_i may depend on a_i only through . We do not assume that the dependence is known. This is a form of the semiparametric single index model, and it includes the linear model as well as many forms of the generalized linear model as special cases. We further assume that the signal x has some structure, and we formulate this as a general assumption that x belongs to some known (but arbitrary) feasible set K. We carefully detail the benefit of using the signal structure to improve estimation. The theory is based on the mean width of K, a geometric parameter which can be used to understand its effective dimension in estimation problems. We determine a simple, efficient two-step procedure for estimating the signal based on this model -- a linear estimation followed by metric projection onto K. We give general conditions under which the estimator is minimax optimal up to a constant. This leads to the intriguing conclusion that in the high noise regime, an unknown non-linearity in the observations does not significantly reduce one's ability to determine the signal, even when the non-linearity may be non-invertible. 
Our results may be specialized to understand the effect of non-linearities in compressed sensing.", "This paper considers the linear inverse problem where we wish to estimate a structured signal x_0 from its corrupted observations. When the problem is ill-posed, it is natural to associate a convex function f(·) with the structure of the signal. For example, l_1 norm can be used for sparse signals. To carry out the estimation, we consider two well-known convex programs: 1) Second order cone program (SOCP), and, 2) Lasso. Assuming Gaussian measurements, we show that, if precise information about the value f(x_0) or the l_2-norm of the noise is available, one can do a particularly good job at estimation. In particular, the reconstruction error becomes proportional to the “sparsity” of the signal rather than to the ambient dimension of the noise vector. We connect our results to the existing literature and provide a discussion on their relation to the standard least-squares problem. Our error bounds are non-asymptotic and sharp, they apply to arbitrary convex functions and do not assume any distribution on the noise." ] }
1506.08307
2149858098
In this paper we are concerned with the problem of data forwarding from a wireless body area network (WBAN) to a gateway when body shadowing affects the ability of WBAN nodes to communicate with the gateway. To solve this problem we present a new WBAN architecture that uses two communication technologies. One network is formed between on-body nodes, and is realized with capacitive body-coupled communication (BCC), while an IEEE 802.15.4 radio frequency (RF) network is used for forwarding data to the gateway. WBAN nodes that have blocked RF links due to body shadowing forward their data through the BCC link to a node that acts as a relay and has an active RF connection. For this architecture we design a network layer protocol that manages the two communication technologies and is responsible for relay selection and data forwarding. Next, we develop analytical performance models of the medium access control (MAC) protocols of the two independent communication links in order to be used for driving the decisions of the previous algorithms. Finally, the analytical models are used for further optimizing energy and delay efficiency. We test our system under different configurations first by performing simulations and next by using real RF traces.
In this paper we deal with the problem of optimizing data delivery from a WBAN to a gateway. The problem is challenging because communication between on-body and off-body nodes is very unreliable @cite_6 @cite_27. The most promising way to attack the problem is through cooperation between WBAN nodes @cite_30. With cooperation, the nodes that aid in forwarding the data of other nodes are the relays. There is a plethora of works that focus on selecting the optimal relay in the general case of wireless sensor networks (WSNs) by considering different optimization objectives, while fewer works have focused on WBANs.
{ "cite_N": [ "@cite_30", "@cite_27", "@cite_6" ], "mid": [ "2159456336", "1990122148", "2135820177" ], "abstract": [ "The increasing use of wireless networks and the constant miniaturization of electrical devices has empowered the development of Wireless Body Area Networks (WBANs). In these networks various sensors are attached on clothing or on the body or even implanted under the skin. The wireless nature of the network and the wide variety of sensors offer numerous new, practical and innovative applications to improve health care and the Quality of Life. The sensors of a WBAN measure for example the heartbeat, the body temperature or record a prolonged electrocardiogram. Using a WBAN, the patient experiences a greater physical mobility and is no longer compelled to stay in the hospital. This paper offers a survey of the concept of Wireless Body Area Networks. First, we focus on some applications with special interest in patient monitoring. Then the communication in a WBAN and its positioning between the different technologies is discussed. An overview of the current research on the physical layer, existing MAC and network protocols is given. Further, cross layer and quality of service is discussed. As WBANs are placed on the human body and often transport private data, security is also considered. An overview of current and past projects is given. Finally, the open research issues and challenges are pointed out.", "Wireless sensor networks represent a key technology enabler for enhanced health care and assisted living systems. Recent standardization efforts to ensure compatibility among sensor network systems sold by different vendors have produced the IEEE 802.15.4 standard, which specifies the MAC and physical layer behavior. This standard has certain drawbacks: it supports only single-hop communication; it does not mitigate the hidden terminal problem; and it does not coordinate node sleeping patterns. 
The IEEE 802.15.4 standard design philosophy assumes that higher layer mechanisms will take care of any added functionality. Building on IEEE 802.15.4, this paper proposes TImezone COordinated Sleep Scheduling (TICOSS), a mechanism inspired by MERLIN [2] that provides multi-hop support over 802.15.4 through the division of the network into timezones. TICOSS is cross-layer in nature, as it closely coordinates MAC and routing layer behavior. The main contributions of TICOSS are threefold: (1) it allows nodes to alternate periods of activity and periods of inactivity to save energy; (2) it mitigates packet collisions due to hidden terminals belonging to nearby star networks; (3) it provides shortest path routing for packets from a node to the closest gateway. Simulation experiments confirm that augmenting IEEE 802.15.4 networks with TICOSS doubles the operational lifetime for high traffic scenarios. TICOSS has also been implemented on the Phillips AquisGrain modules for testing and eventual deployment in assisted living systems." ] }
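The single-relay selection problem that runs through the works above can be sketched as a simple policy. The eligibility rule (an active RF link above a quality threshold) and the tie-breaking by residual energy are hypothetical choices for illustration, not the algorithm of any cited work.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    rf_link_up: bool      # can the node currently reach the gateway over RF?
    link_quality: float   # e.g. packet delivery ratio of the RF link, in [0, 1]
    energy: float         # residual energy, in joules

def select_relay(nodes, min_quality=0.5):
    """Pick a relay among nodes with an active RF link to the gateway.

    Among eligible nodes, prefer the best link quality and break ties
    by residual energy (a plausible policy, assumed for this sketch).
    """
    eligible = [n for n in nodes if n.rf_link_up and n.link_quality >= min_quality]
    if not eligible:
        return None  # no relay available: buffer data or fall back to direct RF
    return max(eligible, key=lambda n: (n.link_quality, n.energy))

wban = [
    Node("chest", rf_link_up=False, link_quality=0.0, energy=9.0),  # shadowed
    Node("wrist", rf_link_up=True,  link_quality=0.8, energy=5.0),
    Node("ankle", rf_link_up=True,  link_quality=0.8, energy=7.0),
]
print(select_relay(wban).name)  # -> "ankle" (same quality as wrist, more energy)
```

Different optimization objectives from the literature (delay, lifetime, fairness) would change only the key function.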
1506.08307
2149858098
In this paper we are concerned with the problem of data forwarding from a wireless body area network (WBAN) to a gateway when body shadowing affects the ability of WBAN nodes to communicate with the gateway. To solve this problem we present a new WBAN architecture that uses two communication technologies. One network is formed between on-body nodes, and is realized with capacitive body-coupled communication (BCC), while an IEEE 802.15.4 radio frequency (RF) network is used for forwarding data to the gateway. WBAN nodes that have blocked RF links due to body shadowing forward their data through the BCC link to a node that acts as a relay and has an active RF connection. For this architecture we design a network layer protocol that manages the two communication technologies and is responsible for relay selection and data forwarding. Next, we develop analytical performance models of the medium access control (MAC) protocols of the two independent communication links in order to be used for driving the decisions of the previous algorithms. Finally, the analytical models are used for further optimizing energy and delay efficiency. We test our system under different configurations first by performing simulations and next by using real RF traces.
Another significant body of research has focused on the problem of reliable communication, motivated by experimental results. In @cite_28, the authors first focus on obtaining an experimental characterization of the channel. From the obtained results, the use of statically assigned relay nodes in specific body locations was proposed. Nevertheless, specific on-body nodes may still fail or be occluded due to body shadowing. Another aspect that affects WBAN performance is the mobility of the user. Mobility models for WBANs derived from experiments were studied in @cite_22. In this work the authors study the problem of user mobility and posture changes and how they affect WBAN performance. The solution is a multi-hop protocol that is aided by a single relay.
{ "cite_N": [ "@cite_28", "@cite_22" ], "mid": [ "2129331658", "2093955703" ], "abstract": [ "This paper focuses on the energy efficiency of communication in small-scale sensor networks experiencing high path loss. In particular, a sensor network on the human body or BASN is considered. The energy consumption or network lifetime of a single-hop network and a multi-hop network are compared. We derive a propagation model and a radio model for communication along the human body. Using these models, energy efficiency was studied analytically for a line and a tree topology. Calculations show that single-hop communication is inefficient, especially for nodes far away from the sink. There, however, multi-hop proves to be more efficient, but closer to the sink hotspots arise. Based on these findings, we propose to exploit the performance difference by either introducing extra nodes in the network, i.e. dedicated relay devices, or by using a cooperative approach or by a combination of both. We show that these solutions increase the network lifetime significantly.", "A good mobility model is an essential prerequisite for performance evaluation of protocols for wireless networks with node mobility. Sensor nodes in a Wireless Body Area Network (WBAN) exhibit high mobility. The WBAN topology may completely change because of posture changes and movement even within a certain type of posture. The WBAN also moves as a whole in an ambient network. Therefore, an appropriate mobility model is of great importance for performance evaluation. This paper presents a comprehensive configurable mobility model MoBAN for evaluating intra- and extra-WBAN communication. It implements different postures as well as individual node mobility within a particular posture. The model can be adapted to a broad range of applications for WBANs. The model is made available through http://www.es.ele.tue.nl/nes, as an add-on to the mobility framework of the OMNeT++ simulator. 
Two case studies illustrate the use of the mobility model for performance evaluation of network protocols." ] }
1506.08307
2149858098
In this paper we are concerned with the problem of data forwarding from a wireless body area network (WBAN) to a gateway when body shadowing affects the ability of WBAN nodes to communicate with the gateway. To solve this problem we present a new WBAN architecture that uses two communication technologies. One network is formed between on-body nodes, and is realized with capacitive body-coupled communication (BCC), while an IEEE 802.15.4 radio frequency (RF) network is used for forwarding data to the gateway. WBAN nodes that have blocked RF links due to body shadowing forward their data through the BCC link to a node that acts as a relay and has an active RF connection. For this architecture we design a network layer protocol that manages the two communication technologies and is responsible for relay selection and data forwarding. Next, we develop analytical performance models of the medium access control (MAC) protocols of the two independent communication links in order to be used for driving the decisions of the previous algorithms. Finally, the analytical models are used for further optimizing energy and delay efficiency. We test our system under different configurations first by performing simulations and next by using real RF traces.
In contrast to all the previous works, which consider each specific node as a potential relay, there is the option to use multiple WBAN nodes for improving reliability and/or energy efficiency. This class of protocols, which also targets generalized wireless cooperative networks, proposes the creation of virtual antenna arrays across groups of nodes in order to improve the diversity gain of the RF link by simultaneously transmitting the same information over different wireless paths @cite_29. Nevertheless, node synchronization is a major problem in this case. Among cooperative schemes that involve many nodes, we can also count recent works that apply advanced techniques like network coding @cite_5 @cite_3.
{ "cite_N": [ "@cite_5", "@cite_29", "@cite_3" ], "mid": [ "1971367413", "2152162955", "2009106662" ], "abstract": [ "In this paper we present and contrast two approaches, Cooperative Network Coding (CNC) and Cooperative Diversity Coding (CDC) to achieving reliable wireless body area networks. CNC combines cooperative communications and network coding, while CDC combines cooperative communication and diversity coding. These approaches also provide enhanced throughput and transparent self-healing which are desirable features that Wireless Body Area Networks should offer. Additionally, these feed-forward techniques are especially suitable for real-time applications, where retransmissions are not an appropriate alternative. Although, these techniques provide similar benefits, simulation results show that CDC provides higher throughput than CNC because of the fact that the network topology is known and few hops between the source and destination. Moreover, CDC has lower complexity, since the source and destination nodes know the coding coefficients.", "In this paper, a cluster-based cooperative communication scheme is introduced in which each cluster acts as a virtual node with multiple antennas. The presented model considers the effects of random deployment of nodes and hence their random distribution across the network. Taking these effects into account, a pairwise-error probability analysis for a generic space-time code structure is provided. The analysis reveals that the diversity and coding gain of the system depends on both the code structure as well as the nodes' distribution, and hence new code design criteria are developed. Simulation results are presented to confirm the theoretical analysis.", "Clustering and relaying techniques are important approaches towards mitigating the problem of finite network lifetime in wireless sensor networks. 
To this end, given a clustered wireless sensor network (WSN) (with defined cluster heads and their associated clusters) and a given relay node placement, we present a distributed service allocation algorithm for the relay node for maximizing network lifetime. We evaluate the performance of our method through theoretical analysis as well as simulations, and demonstrate the superior performance of our proposed method compared to a greedy periodic approach." ] }
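As a minimal illustration of the network-coding idea mentioned in this record (not the CNC or CDC schemes of the cited works), a relay can XOR the packets of two source nodes; a destination that already holds one packet can then recover the other from the coded transmission. The packet contents below are made up for the example.

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two on-body sources send p1 and p2; the relay forwards p1 XOR p2.
p1 = b"heart-rate:72"
p2 = b"temperature:36.60"        # lengths differ: pad to a common length
length = max(len(p1), len(p2))
p1p = p1.ljust(length, b"\x00")
p2p = p2.ljust(length, b"\x00")
coded = xor_bytes(p1p, p2p)

# Suppose the gateway received p2 directly but lost p1: it can still
# recover p1 from the relay's single coded packet.
recovered = xor_bytes(coded, p2p).rstrip(b"\x00")
print(recovered)  # -> b'heart-rate:72'
```

One coded transmission thus protects both flows, which is the source of the diversity/throughput gain these schemes exploit.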
1506.08307
2149858098
In this paper we are concerned with the problem of data forwarding from a wireless body area network (WBAN) to a gateway when body shadowing affects the ability of WBAN nodes to communicate with the gateway. To solve this problem we present a new WBAN architecture that uses two communication technologies. One network is formed between on-body nodes, and is realized with capacitive body-coupled communication (BCC), while an IEEE 802.15.4 radio frequency (RF) network is used for forwarding data to the gateway. WBAN nodes that have blocked RF links due to body shadowing forward their data through the BCC link to a node that acts as a relay and has an active RF connection. For this architecture we design a network layer protocol that manages the two communication technologies and is responsible for relay selection and data forwarding. Next, we develop analytical performance models of the medium access control (MAC) protocols of the two independent communication links in order to be used for driving the decisions of the previous algorithms. Finally, the analytical models are used for further optimizing energy and delay efficiency. We test our system under different configurations first by performing simulations and next by using real RF traces.
Despite the architectural and protocol differences, all the previously described schemes share one common characteristic: the use of a single RF communication technology. To alleviate the problems of a single technology, two different ones can be used, although the number of works in this area is quite limited. A communication protocol that uses two technologies was presented in @cite_21 . In that work the authors proposed the use of two different RF bands, namely 433 MHz and 2.4 GHz. The 433 MHz band is used for data aggregation, whereas the 2.4 GHz band is used for data forwarding to the gateway. Since the range of the 433 MHz band is limited to approximately 2 meters around the node, it is possible to improve reliability and reduce energy consumption by limiting the number of nodes competing for the same channel. Still, both links use RF for on-body communication, and two independent RF bands are required.
{ "cite_N": [ "@cite_21" ], "mid": [ "2156094078" ], "abstract": [ "Wireless sensor network (WSN) technologies have been extended to the bio-medical area, and it is called body sensor networks (BSN). BSN systems sense and transmit the vital signs of human, such as electrocardiogram (ECG) and electromyogram (EMG), in unobtrusive and efficient way. Those vital signs are critical to human's life and behavior, so the data should be reliable and transmitted in real-time. In this paper, we propose a BSN platform that is using two-level communications (TLC) to increase the reliability of the system. Also, we develop a hardware and software platform to support the TLC and increase the reliability and energy efficiency." ] }
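The relay-forwarding idea described above (a node whose RF link to the gateway is shadowed hands its data over the on-body BCC link to a neighbour whose RF link is still active) can be sketched in a few lines. The node fields and the residual-energy tie-break below are illustrative assumptions, not the actual protocol of the paper or of @cite_21 :

```python
# Sketch of the dual-technology forwarding decision: a node with a blocked
# RF link forwards over BCC to a relay that still has an active RF link.
# All node fields and the energy-based relay policy are illustrative.

def select_relay(nodes, src):
    """Pick a BCC-reachable neighbour of `src` with an active RF link."""
    candidates = [n for n in nodes
                  if n["id"] != src["id"] and n["rf_active"]]
    if not candidates:
        return None
    # One plausible policy: prefer the relay with the most residual energy.
    return max(candidates, key=lambda n: n["energy"])

def forward(nodes, src):
    if src["rf_active"]:
        return ("rf", src["id"])          # direct RF path to the gateway
    relay = select_relay(nodes, src)
    if relay is None:
        return ("buffer", src["id"])      # no path available: hold data locally
    return ("bcc->rf", relay["id"])       # two-hop path via the relay

nodes = [
    {"id": "chest", "rf_active": False, "energy": 0.7},
    {"id": "wrist", "rf_active": True,  "energy": 0.4},
    {"id": "ankle", "rf_active": True,  "energy": 0.9},
]
print(forward(nodes, nodes[0]))  # ('bcc->rf', 'ankle')
```

In a real WBAN protocol the relay choice would also weigh link quality and delay, but the control flow is the same: try the direct RF link first, fall back to BCC relaying.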
1506.08307
2149858098
In this paper we are concerned with the problem of data forwarding from a wireless body area network (WBAN) to a gateway when body shadowing affects the ability of WBAN nodes to communicate with the gateway. To solve this problem we present a new WBAN architecture that uses two communication technologies. One network is formed between on-body nodes, and is realized with capacitive body-coupled communication (BCC), while an IEEE 802.15.4 radio frequency (RF) network is used for forwarding data to the gateway. WBAN nodes that have blocked RF links due to body shadowing forward their data through the BCC link to a node that acts as a relay and has an active RF connection. For this architecture we design a network layer protocol that manages the two communication technologies and is responsible for relay selection and data forwarding. Next, we develop analytical performance models of the medium access control (MAC) protocols of the two independent communication links in order to be used for driving the decisions of the previous algorithms. Finally, the analytical models are used for further optimizing energy and delay efficiency. We test our system under different configurations first by performing simulations and next by using real RF traces.
Finally, we should mention that none of the previous works provided an in-depth analysis of the impact of the MAC protocol parameters on the energy, delay, and reliability of the complete cooperative network. We could trace only a limited number of works, all for non-cooperative systems, that use analytical protocol performance models for further optimization. These approaches use the MAC model of IEEE 802.15.4 for power minimization at an individual node @cite_7 @cite_31 .
{ "cite_N": [ "@cite_31", "@cite_7" ], "mid": [ "2150050284", "2139644003" ], "abstract": [ "As a specific area of sensor networks, wireless in-home sensor networks differ from general sensor networks in that the network has nodes with heterogeneous resources and dissimilar mobility attributes. For example, sensor with different radio coverage, energy capacity, and processing capabilities are deployed, and some of the sensors are mobile and others are fixed in position. The architecture and routing protocol for this type of heterogeneous sensor networks must be based on the resources and characteristics of their member nodes. In addition, the sole stress on energy efficiency for performance measurement is not sufficient. System lifetime is more important in this case. We propose a hub-spoke network topology that is adaptively formed according to the resources of its members. A protocol named resource oriented protocol (ROP) was developed to build the network topology. This protocol principally divides the network operation into two phases. In the topology formation phase, nodes report their available resource characteristics, based on which network architecture is optimally built. We stress that due to the existence of nodes with limitless resources, a top-down appointment process can build the architecture with minimum resource consumption of ordinary nodes. In the topology update phase, mobile sensors and isolated sensors are accepted into the network with an optimal balance of resources. To avoid overhead of periodic route updates, we use a reactive strategy to maintain route cache. 
Simulation results show that the hub-spoke topology built by ROP can achieve much longer system lifetime.", "Accurate analytical expressions of delay and packet reception probabilities, and energy consumption of duty-cycled wireless sensor networks with random medium access control (MAC) are instrumental for the efficient design and optimization of these resource-constrained networks. Given a clustered network topology with unslotted IEEE 802.15.4 and preamble sampling MAC, a novel approach to the modeling of the delay, reliability, and energy consumption is proposed. The challenging part in such a modeling is the random MAC and sleep policy of the receivers, which prevents to establish the exact time of data packet transmission. The analysis gives expressions as function of sleep time, listening time, traffic rate and MAC parameters. The analytical results are then used to optimize the duty cycle of the nodes and MAC protocol parameters. The approach provides a significant reduction of the energy consumption compared to existing solutions in the literature. Monte Carlo simulations by ns2 assess the validity of the analysis." ] }
1506.07405
766589415
It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. We tackle a particular instance of this scenario, where we seek the @math -dimensional subspace spanned by a streaming data matrix. We apply the natural first order incremental gradient descent method, constraining the gradient method to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy for the noiseless case, that maximizes the improvement of our metric of convergence at each data index @math , and yields an expected improvement for the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration.
This problem is non-convex firstly because of the product of the two optimization variables @math and @math , and secondly because the optimization is over the Grassmannian @math , the non-convex set of all @math -dimensional subspaces in @math . However, several methods can find the global minimizer of this problem in polynomial time under a variety of assumptions on @math . For example, the power method can solve this problem if the top @math singular values of @math are distinct @cite_24 . Specifically, considering @math , if the desired accuracy of the @math output by the power method to the global minimizer is @math , and the first two singular values of @math , @math and @math , are distinct with @math for @math , then the power method converges in @math iterations.
{ "cite_N": [ "@cite_24" ], "mid": [ "1504714361" ], "abstract": [ "We describe two algorithms for the eigenvalue, eigenvector problem which, on input a Gaussian matrix with complex entries, finish with probability 1 and in average polynomial time." ] }
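As a concrete illustration of the power method mentioned above, here is a minimal NumPy sketch for the top left singular vector; the matrix sizes and iteration count are arbitrary, and convergence assumes the top two singular values are distinct:

```python
import numpy as np

def power_method(X, iters=200, seed=0):
    """Top left singular vector of X via power iteration on X X^T.
    Converges when the first two singular values of X are distinct."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(X.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = X @ (X.T @ u)          # one power-iteration step
        u /= np.linalg.norm(u)
    return u

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))
u = power_method(X)
u_true = np.linalg.svd(X)[0][:, 0]       # reference from a full SVD
print(abs(u @ u_true))  # close to 1: aligned with the top singular vector
```

The convergence rate degrades as the gap between the first two singular values shrinks, which is why the iteration-count bound quoted above depends on that gap.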
1506.07405
766589415
It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. We tackle a particular instance of this scenario, where we seek the @math -dimensional subspace spanned by a streaming data matrix. We apply the natural first order incremental gradient descent method, constraining the gradient method to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy for the noiseless case, that maximizes the improvement of our metric of convergence at each data index @math , and yields an expected improvement for the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration.
In this paper, we are interested in approximating a streaming data matrix. At each step, we sample a column of @math , denoted @math . We consider the planted problem @math , where @math is noise and @math is drawn from a continuous distribution with support on the true subspace, spanned by @math with orthonormal columns; @math , @math . When @math , we wish to find the @math that minimizes the span of the data vectors or the range of @math , denoted @math . When @math we still discuss results in terms of the distance from @math . If we consider only @math , Problem ) is identical to Problem . The GROUSE algorithm (Grassmannian Rank-One Update Subspace Estimation) we analyze is shown as Algorithm , where we generate a sequence @math of @math matrices with orthonormal columns with the goal that @math as @math . Each observed vector is used to update @math to @math , and we constrain the gradient descent method to the Grassmannian using a geodesic update @cite_21 .
{ "cite_N": [ "@cite_21" ], "mid": [ "2045512849" ], "abstract": [ "In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provide a top level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper." ] }
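A minimal NumPy sketch of the fully sampled GROUSE update with the greedy noiseless step size (the geodesic rotation that makes the updated subspace contain the observed vector). The toy convergence experiment and its problem sizes are our own illustration, not the paper's algorithm listing:

```python
import numpy as np

def grouse_step(U, v):
    """One fully sampled GROUSE step with the greedy (noiseless) step size:
    rotate span(U) just enough that the updated subspace contains v."""
    w = U.T @ v                      # weights of the least-squares fit U w ~ v
    p = U @ w                        # projection of v onto span(U)
    r = v - p                        # residual, orthogonal to span(U)
    if np.linalg.norm(r) < 1e-12 or np.linalg.norm(w) < 1e-12:
        return U                     # v already in span(U), or orthogonal to it
    theta = np.arctan(np.linalg.norm(r) / np.linalg.norm(p))
    p_hat = p / np.linalg.norm(p)
    r_hat = r / np.linalg.norm(r)
    w_hat = w / np.linalg.norm(w)
    # Rank-one geodesic update on the Grassmannian: rotate p_hat toward r_hat.
    return U + np.outer((np.cos(theta) - 1.0) * p_hat
                        + np.sin(theta) * r_hat, w_hat)

rng = np.random.default_rng(0)
n, d = 50, 3
U_true = np.linalg.qr(rng.standard_normal((n, d)))[0]   # planted subspace
U = np.linalg.qr(rng.standard_normal((n, d)))[0]        # random initialization
for _ in range(2000):
    v = U_true @ rng.standard_normal(d)                 # noiseless sample
    U = grouse_step(U, v)
# Sum of squared sines of the principal angles between span(U) and span(U_true):
err = d - np.linalg.norm(U_true.T @ U, "fro") ** 2
print(err)  # near 0 after convergence
```

Two properties are worth noting: the update is an exact rotation, so each @math keeps orthonormal columns, and with the greedy step the new subspace contains the observed vector exactly.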
1506.07405
766589415
It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. We tackle a particular instance of this scenario, where we seek the @math -dimensional subspace spanned by a streaming data matrix. We apply the natural first order incremental gradient descent method, constraining the gradient method to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy for the noiseless case, that maximizes the improvement of our metric of convergence at each data index @math , and yields an expected improvement for the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration.
First we discuss incremental methods. @cite_10 established the global convergence of a stochastic gradient descent method for the recovery of a positive definite matrix @math in the undersampled case, where the matrix @math is not measured directly but instead via linear measurements. They propose a step size scheme under which they prove global convergence results from a randomly generated initialization. Similarly, @cite_7 invokes a martingale-based argument to show the global convergence rate of the proposed incremental PCA method to the single top eigenvector in the fully sampled case. In contrast, @cite_2 estimates the best @math -dimensional subspace in the fully sampled case and provides a global convergence result by relaxing the non-convex problem to a convex one. We seek to identify the @math dimensional subspace by solving the non-convex problem directly. Finally, our work is most related to @cite_12 , which provides local convergence guarantees for GROUSE in both the fully sampled and undersampled case. Our work focuses on global convergence but only in the fully sampled case; we will extend the global convergence results to the undersampled case in future work.
{ "cite_N": [ "@cite_10", "@cite_12", "@cite_7", "@cite_2" ], "mid": [ "1574269637", "1987142714", "2116195245", "2951135829" ], "abstract": [ "Stochastic gradient descent (SGD) on a low-rank factorization is commonly employed to speed up matrix problems including matrix completion, subspace tracking, and SDP relaxation. In this paper, we exhibit a step size scheme for SGD on a low-rank least-squares problem, and we prove that, under broad sampling conditions, our method converges globally from a random starting point within @math steps with constant probability for constant-rank problems. Our modification of SGD relates it to stochastic power iteration. We also show experiments to illustrate the runtime and convergence of the algorithm.", "Grassmannian rank-one update subspace estimation (GROUSE) is an iterative algorithm for identifying a linear subspace of @math Rn from data consisting of partial observations of random vectors from that subspace. This paper examines local convergence properties of GROUSE, under assumptions on the randomness of the observed vectors, the randomness of the subset of elements observed at each iteration, and incoherence of the subspace with the coordinate directions. Convergence at an expected linear rate is demonstrated under certain assumptions. The case in which the full random vector is revealed at each iteration allows for much simpler analysis and is also described. GROUSE is related to incremental SVD methods and to gradient projection algorithms in optimization.", "We consider a situation in which we see samples in @math drawn i.i.d. from some distribution with mean zero and unknown covariance A. We wish to compute the top eigenvector of A in an incremental fashion - with an algorithm that maintains an estimate of the top eigenvector in O(d) space, and incrementally adjusts the estimate with each new data point that arrives. Two classical such schemes are due to Krasulina (1969) and Oja (1983). 
We give finite-sample convergence rates for both.", "We study PCA as a stochastic optimization problem and propose a novel stochastic approximation algorithm which we refer to as \"Matrix Stochastic Gradient\" (MSG), as well as a practical variant, Capped MSG. We study the method both theoretically and empirically." ] }
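The incremental top-eigenvector scheme analyzed in @cite_7 above, Oja's rule, fits in a few lines. The decaying step-size schedule and the toy covariance below are illustrative choices, not those of the cited analysis:

```python
import numpy as np

def oja_step(u, x, eta):
    """One step of Oja's rule: move u toward sample x, then renormalize."""
    u = u + eta * (x @ u) * x
    return u / np.linalg.norm(u)

rng = np.random.default_rng(0)
d = 20
scales = np.ones(d)
scales[0] = 5.0                   # covariance diag(25, 1, ..., 1): large eigengap
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
for t in range(5000):
    x = scales * rng.standard_normal(d)        # zero-mean sample
    u = oja_step(u, x, eta=1.0 / (100.0 + t))  # decaying step size
print(abs(u[0]))  # close to 1: u aligns with the top eigenvector e_0
```

Only O(d) memory is needed per step, which is the appeal of such incremental methods over batch PCA in the streaming setting.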
1506.07405
766589415
It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. We tackle a particular instance of this scenario, where we seek the @math -dimensional subspace spanned by a streaming data matrix. We apply the natural first order incremental gradient descent method, constraining the gradient method to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy for the noiseless case, that maximizes the improvement of our metric of convergence at each data index @math , and yields an expected improvement for the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration.
Turning to batch methods, @cite_23 @cite_30 provided the first theoretical guarantee for an alternating minimization algorithm for low-rank matrix recovery in the undersampled case. Under typical assumptions required for matrix recovery problems @cite_20 , they established geometric convergence to the global optimal solution. Earlier work @cite_22 @cite_1 considered the same undersampled problem formulation and established convergence guarantees for a steepest descent method (and a preconditioned version) on the full gradient, performed on the Grassmannian. @cite_15 @cite_9 @cite_28 considered low-rank semidefinite matrix estimation problems, where they reparameterized the underlying matrix as @math and updated @math via a first-order gradient descent method. However, all these results require batch processing and a decent initialization that is close enough to the optimal point, resulting in a heavy computational burden and precluding problems with streaming data. We study random initialization, and our algorithm has fast, computationally efficient updates that can be performed in an online context.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_28", "@cite_9", "@cite_1", "@cite_23", "@cite_15", "@cite_20" ], "mid": [ "", "2000157792", "1189174436", "1836708065", "", "", "2116680824", "2118550318" ], "abstract": [ "", "Let M be an nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(r n) observed entries with relative root mean square error RMSE ≤ C(α) (nr |E|)1 2. Further, if r = O(1) and M is sufficiently unstructured, then it can be reconstructed exactly from |E| = O(n log n) entries. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log n), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.", "We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With @math random measurements of a positive semidefinite @math matrix of rank @math and condition number @math , our method is guaranteed to converge linearly to the global optimum.", "We study the minimization of a convex function @math over the set of @math positive semi-definite matrices, but when the problem is recast as @math , with @math and @math . We study the performance of gradient descent on @math ---which we refer to as Factored Gradient Descent (FGD)---under standard assumptions on the original function @math . 
We provide a rule for selecting the step size and, with this choice, show that the local convergence rate of FGD mirrors that of standard gradient descent on the original @math : i.e., after @math steps, the error is @math for smooth @math , and exponentially small in @math when @math is (restricted) strongly convex. In addition, we provide a procedure to initialize FGD for (restricted) strongly convex objectives and when one only has access to @math via a first-order oracle; for several problem instances, such proper initialization leads to global convergence guarantees. FGD and similar procedures are widely used in practice for problems that can be posed as matrix factorization. To the best of our knowledge, this is the first paper to provide precise convergence rate guarantees for general convex functions under standard convex assumptions.", "", "", "Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. 
Simulation results show excellent agreement with the theoretical predictions.", "The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples." ] }
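The factored-gradient-descent idea surveyed above (reparameterize a PSD matrix as @math and run first-order updates on the factor) can be sketched as follows; the least-squares objective, step size, and problem sizes are illustrative, not those of any particular cited paper:

```python
import numpy as np

def fgd(M, r, eta=1e-3, iters=2000, seed=0):
    """Gradient descent on the factored objective f(U) = ||M - U U^T||_F^2
    for a symmetric PSD target M; the gradient is 4 (U U^T - M) U."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((M.shape[0], r))   # small random init
    for _ in range(iters):
        U -= eta * 4.0 * (U @ U.T - M) @ U
    return U

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2))
M = A @ A.T                          # rank-2 PSD matrix to recover
U = fgd(M, r=2)
print(np.linalg.norm(M - U @ U.T))   # small residual: U U^T recovers M
```

The factor @math is only determined up to an orthogonal transform, which is why the guarantees in this literature are stated for @math rather than for @math itself.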
1506.07405
766589415
It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. We tackle a particular instance of this scenario, where we seek the @math -dimensional subspace spanned by a streaming data matrix. We apply the natural first order incremental gradient descent method, constraining the gradient method to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy for the noiseless case, that maximizes the improvement of our metric of convergence at each data index @math , and yields an expected improvement for the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration.
Lastly, several convergence results for optimization on general Riemannian manifolds, including several special cases for the Grassmannian, can be found in @cite_26 . Most of the results are very general; they include global convergence rates to local optima for steepest descent, conjugate gradient, and trust region methods, to name a few. We instead focus on solving the problem in and provide global convergence rates to the global minimum.
{ "cite_N": [ "@cite_26" ], "mid": [ "1804110266" ], "abstract": [ "Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists." ] }
1506.07552
796926850
Stochastic algorithms are efficient approaches to solving machine learning and optimization problems. In this paper, we propose a general framework called Splash for parallelizing stochastic algorithms on multi-node distributed systems. Splash consists of a programming interface and an execution engine. Using the programming interface, the user develops sequential stochastic algorithms without concerning any detail about distributed computing. The algorithm is then automatically parallelized by a communication-efficient execution engine. We provide theoretical justifications on the optimal rate of convergence for parallelizing stochastic gradient descent. Splash is built on top of Apache Spark. The real-data experiments on logistic regression, collaborative filtering and topic modeling verify that Splash yields order-of-magnitude speedup over single-thread stochastic algorithms and over state-of-the-art implementations on Spark.
Distributed machine learning systems have been implemented for a variety of applications and are based on different programming paradigms. Related systems include parameter servers @cite_29 @cite_7 @cite_41 @cite_39 , Petuum @cite_22 , Naiad @cite_38 and GraphLab @cite_15 . There are also machine learning systems built on existing platforms, including Mahout @cite_31 based on Hadoop @cite_3 and MLI @cite_35 based on Spark @cite_18 .
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_35", "@cite_18", "@cite_22", "@cite_7", "@cite_41", "@cite_29", "@cite_3", "@cite_39", "@cite_15" ], "mid": [ "2082171780", "", "2949813383", "2131975293", "2952508678", "2168231600", "2138996412", "200298483", "", "2060393849", "2096544401" ], "abstract": [ "Naiad is a distributed system for executing data parallel, cyclic dataflow programs. It offers the high throughput of batch processors, the low latency of stream processors, and the ability to perform iterative and incremental computations. Although existing systems offer some of these features, applications that require all three have relied on multiple platforms, at the expense of efficiency, maintainability, and simplicity. Naiad resolves the complexities of combining these features in one framework. A new computational model, timely dataflow, underlies Naiad and captures opportunities for parallelism across a wide class of algorithms. This model enriches dataflow computation with timestamps that represent logical points in the computation and provide the basis for an efficient, lightweight coordination mechanism. We show that many powerful high-level programming models can be built on Naiad's low-level primitives, enabling such diverse tasks as streaming data analysis, iterative machine learning, and interactive graph mining. Naiad outperforms specialized systems in their target application domains, and its unique features enable the development of new high-performance applications.", "", "MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing. Its primary goal is to simplify the development of high-performance, scalable, distributed algorithms. 
Our initial results show that, relative to existing systems, this interface can be used to build distributed implementations of a wide variety of common Machine Learning algorithms with minimal complexity and highly competitive performance and scalability.", "We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are motivated by two types of applications that current computing frameworks handle inefficiently: iterative algorithms and interactive data mining tools. In both cases, keeping data in memory can improve performance by an order of magnitude. To achieve fault tolerance efficiently, RDDs provide a restricted form of shared memory, based on coarse-grained transformations rather than fine-grained updates to shared state. However, we show that RDDs are expressive enough to capture a wide class of computations, including recent specialized programming models for iterative jobs, such as Pregel, and new applications that these models do not capture. We have implemented RDDs in a system called Spark, which we evaluate through a variety of user applications and benchmarks.", "What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. 
We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, allowing ML programs to run in much less time and at considerably larger model sizes, even on modestly-sized compute clusters.", "Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly-sized deep network for a commercial speech recognition service.
Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.", "Latent variable techniques are pivotal in tasks ranging from predicting user click patterns and targeting ads to organizing the news and managing user generated content. Latent variable techniques like topic modeling, clustering, and subspace estimation provide substantial insight into the latent structure of complex data with little or no external guidance making them ideal for reasoning about large-scale, rapidly evolving datasets. Unfortunately, due to the data dependencies and global state introduced by latent variables and the iterative nature of latent variable inference, latent-variable techniques are often prohibitively expensive to apply to large-scale, streaming datasets. In this paper we present a scalable parallel framework for efficient inference in latent variable models over streaming web-scale data. Our framework addresses three key challenges: 1) synchronizing the global state which includes global latent variables (e.g., cluster centers and dictionaries); 2) efficiently storing and retrieving the large local state which includes the data-points and their corresponding latent variables (e.g., cluster membership); and 3) sequentially incorporating streaming data (e.g., the news). We address these challenges by introducing: 1) a novel delta-based aggregation system with a bandwidth-efficient communication protocol; 2) schedule-aware out-of-core storage; and 3) approximate forward sampling to rapidly incorporate new data. We demonstrate state-of-the-art performance of our framework by easily tackling datasets two orders of magnitude larger than those addressed by the current state-of-the-art. 
Furthermore, we provide an optimized and easily customizable open-source implementation of the framework.", "Piccolo is a new data-centric programming model for writing parallel in-memory applications in data centers. Unlike existing data-flow models, Piccolo allows computation running on different machines to share distributed, mutable state via a key-value table interface. Piccolo enables efficient application implementations. In particular, applications can specify locality policies to exploit the locality of shared state access and Piccolo's run-time automatically resolves write-write conflicts using user-defined accumulation functions. Using Piccolo, we have implemented applications for several problem domains, including the PageRank algorithm, k-means clustering and a distributed crawler. Experiments using 100 Amazon EC2 instances and a 12 machine cluster show Piccolo to be faster than existing data flow models for many problems, while providing similar fault-tolerance guarantees and a convenient programming interface.", "", "Big data may contain big values, but also brings lots of challenges to the computing theory, architecture, framework, knowledge discovery algorithms, and domain specific tools and applications. Beyond the 4-V or 5-V characters of big datasets, the data processing shows the features like inexact, incremental, and inductive manner. This brings new research opportunities to research community across theory, systems, algorithms, and applications. Is there some new \"theory\" for the big data? How to handle the data computing algorithms in an operatable manner?
This report shares some view on new challenges identified, and covers some of the application scenarios such as micro-blog data analysis and data processing in building next generation search engines.", "While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations." ] }
1506.07552
796926850
Stochastic algorithms are efficient approaches to solving machine learning and optimization problems. In this paper, we propose a general framework called Splash for parallelizing stochastic algorithms on multi-node distributed systems. Splash consists of a programming interface and an execution engine. Using the programming interface, the user develops sequential stochastic algorithms without concerning any detail about distributed computing. The algorithm is then automatically parallelized by a communication-efficient execution engine. We provide theoretical justifications on the optimal rate of convergence for parallelizing stochastic gradient descent. Splash is built on top of Apache Spark. The real-data experiments on logistic regression, collaborative filtering and topic modeling verify that Splash yields order-of-magnitude speedup over single-thread stochastic algorithms and over state-of-the-art implementations on Spark.
To the best of our knowledge, none of these systems are explicitly designed for parallelizing stochastic algorithms. Mahout and MLI, both adopting the iterative MapReduce @cite_20 framework, are designed for batch algorithms. The parameter servers, Petuum and Naiad provide user-definable update primitives such as @math on variables or @math on messages, under which a distributed stochastic algorithm can be implemented. However, a typical stochastic algorithm updates its parameters in every iteration, which involves expensive inter-node communication. In practice, we found that the per-iteration computation usually takes a few microseconds, but pushing an update from one Amazon EC2 node to another takes milliseconds. Thus, the communication cost dominates the computation cost. If the communication is asynchronous, then the algorithm will easily diverge because of the significant latency.
{ "cite_N": [ "@cite_20" ], "mid": [ "2173213060" ], "abstract": [ "MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day." ] }
1506.07552
796926850
Stochastic algorithms are efficient approaches to solving machine learning and optimization problems. In this paper, we propose a general framework called Splash for parallelizing stochastic algorithms on multi-node distributed systems. Splash consists of a programming interface and an execution engine. Using the programming interface, the user develops sequential stochastic algorithms without concerning any detail about distributed computing. The algorithm is then automatically parallelized by a communication-efficient execution engine. We provide theoretical justifications on the optimal rate of convergence for parallelizing stochastic gradient descent. Splash is built on top of Apache Spark. The real-data experiments on logistic regression, collaborative filtering and topic modeling verify that Splash yields order-of-magnitude speedup over single-thread stochastic algorithms and over state-of-the-art implementations on Spark.
Apart from the distributed systems literature, there is a flurry of research studying communication-efficient methods for convex optimization. Some of it applies to stochastic algorithms. @cite_5 study the one-shot averaging scheme for parallelizing SGD. @cite_17 present a framework for parallelizing stochastic dual coordinate methods. Both methods can be implemented on top of Splash. Our theoretical analysis of SGD generalizes that of @cite_5 . In particular, our results assume that the parallelized SGD is synchronized for multiple rounds, while @cite_5 let the algorithm synchronize only at the end. The multi-round synchronization scheme is more robust when the objective function is not strongly convex, but its theoretical analysis is challenging, because the updates on separate threads are no longer independent.
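The one-shot vs. multi-round averaging contrast can be made concrete with a toy sketch. All names and constants below are illustrative, worker parallelism is simulated sequentially, and the objective is a simple least-squares loss rather than anything from the cited papers:

```python
import random

def parallel_sgd(data, workers=4, rounds=1, total_steps=60, lr=0.05, seed=0):
    """Run SGD on f(w) = E[(w - x)^2] across several simulated workers.

    rounds=1 is the one-shot averaging scheme (synchronize only at the
    end); rounds > 1 averages the workers' iterates after every round.
    """
    rng = random.Random(seed)
    w = 0.0  # iterate shared at each synchronization point
    for _ in range(rounds):
        local_iterates = []
        for _ in range(workers):  # each worker runs independently
            wi = w
            for _ in range(total_steps // rounds):
                x = rng.choice(data)
                wi -= lr * 2.0 * (wi - x)  # stochastic gradient of (wi - x)^2
            local_iterates.append(wi)
        w = sum(local_iterates) / workers  # synchronization: average iterates
    return w

data = [1.0, 2.0, 3.0, 4.0]  # minimizer of E[(w - x)^2] is the mean, 2.5
one_shot = parallel_sgd(data, rounds=1)
multi_round = parallel_sgd(data, rounds=6)
```

Both variants should land near the minimizer on this toy problem; the sketch only illustrates where the averaging happens, not the convergence rates analyzed in the cited works.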
{ "cite_N": [ "@cite_5", "@cite_17" ], "mid": [ "2166706236", "2963861706" ], "abstract": [ "With the increase in available data parallel machine learning has become an increasingly pressing problem. In this paper we present the first parallel stochastic gradient descent algorithm including a detailed analysis and experimental evidence. Unlike prior work on parallel optimization algorithms [5, 7] our variant comes with parallel acceleration guarantees and it poses no overly tight latency constraints, which might only be available in the multicore setting. Our analysis introduces a novel proof technique — contractive mappings to quantify the speed of convergence of parameter distributions to their asymptotic limits. As a side effect this answers the question of how quickly stochastic gradient descent algorithms reach the asymptotically normal regime [1, 8].", "Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, COCOA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that as compared to state-of-the-art mini-batch versions of SGD and SDCA algorithms, COCOA converges to the same .001-accurate solution quality on average 25 × as quickly." ] }
1506.07773
2220764785
Given a graph @math , a non-negative integer @math , and a weight function that maps each vertex in @math to a positive real number, the MWBIS problem is about finding a maximum weighted independent set in @math of cardinality at most @math . A special case of MWBIS, when the weight assigned to each vertex is equal to its degree in @math , is called the MIVC problem. In other words, the MIVC problem is about finding an independent set of cardinality at most @math with maximum coverage. Since it is a generalization of the well-known Maximum Weighted Independent Set (MWIS) problem, MWBIS too does not have any constant factor polynomial time approximation algorithm assuming @math . In this paper, we study MWBIS in the context of bipartite graphs. We show that, unlike MWIS, the MIVC (and thereby the MWBIS) problem in bipartite graphs is NP-hard. Then, we show that the MWBIS problem admits a @math -factor approximation algorithm in the class of bipartite graphs, which matches the integrality gap of a natural LP relaxation.
A more general problem (also known by the same name, MWBIS) was introduced and studied in the context of special graph classes like trees, forests, cycle graphs, interval graphs, and planar graphs in @cite_9 , where each vertex in the given graph @math has an associated cost and the problem is to find an independent set in @math of total cost at most @math (where @math is part of the input) that has the highest weight amongst all such independent sets. Apart from this work, to the best of our knowledge, not much is known about MWBIS.
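As a concrete illustration of the definition only (this exhaustive search is unrelated to the algorithms of @cite_9 , and all names are made up), the cardinality-budgeted variant can be written as:

```python
from itertools import combinations

def mwbis_bruteforce(n, edges, weights, k):
    """Max-weight independent set of cardinality at most k, by exhaustion.

    n: number of vertices (labeled 0..n-1); edges: iterable of pairs;
    weights: weights[v] >= 0. Exponential time -- illustration only.
    """
    edge_set = {frozenset(e) for e in edges}
    best = 0.0
    for size in range(k + 1):
        for subset in combinations(range(n), size):
            if any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                continue  # subset is not independent
            best = max(best, sum(weights[v] for v in subset))
    return best

# 4-cycle 0-1-2-3-0; the independent sets of size 2 are {0,2} and {1,3}
w = [3.0, 1.0, 4.0, 1.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = mwbis_bruteforce(4, edges, w, k=2)  # {0, 2} has weight 7
```

Setting weights[v] to the degree of v turns the same routine into the MIVC special case.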
{ "cite_N": [ "@cite_9" ], "mid": [ "1863224559" ], "abstract": [ "We study a natural extension of the Maximum Weight Independent Set Problem (MWIS), one of the most studied optimization problems in Graph algorithms. We are given a graph @math , a weight function @math , a budget function @math , and a positive integer @math . The weight (resp. budget) of a subset of vertices is the sum of weights (resp. budgets) of the vertices in the subset. A @math -budgeted independent set in @math is a subset of vertices, such that no pair of vertices in that subset are adjacent, and the budget of the subset is at most @math . The goal is to find a @math -budgeted independent set in @math such that its weight is maximum among all the @math -budgeted independent sets in @math . We refer to this problem as MWBIS. Being a generalization of MWIS, MWBIS also has several applications in Scheduling, Wireless networks and so on. Due to the hardness results implied from MWIS, we study the MWBIS problem in several special classes of graphs. We design exact algorithms for trees, forests, cycle graphs, and interval graphs. In unweighted case we design an approximation algorithm for @math -claw free graphs whose approximation ratio ( @math ) is competitive with the approximation ratio ( @math ) of MWIS (unweighted). Furthermore, we extend Baker's technique Baker83 to get a PTAS for MWBIS in planar graphs." ] }
1506.07773
2220764785
Given a graph @math , a non-negative integer @math , and a weight function that maps each vertex in @math to a positive real number, the MWBIS problem is about finding a maximum weighted independent set in @math of cardinality at most @math . A special case of MWBIS, when the weight assigned to each vertex is equal to its degree in @math , is called the MIVC problem. In other words, the MIVC problem is about finding an independent set of cardinality at most @math with maximum coverage. Since it is a generalization of the well-known Maximum Weighted Independent Set (MWIS) problem, MWBIS too does not have any constant factor polynomial time approximation algorithm assuming @math . In this paper, we study MWBIS in the context of bipartite graphs. We show that, unlike MWIS, the MIVC (and thereby the MWBIS) problem in bipartite graphs is NP-hard. Then, we show that the MWBIS problem admits a @math -factor approximation algorithm in the class of bipartite graphs, which matches the integrality gap of a natural LP relaxation.
Given a graph @math , we know that the VC problem is about finding the minimum number of vertices that cover all the edges of @math . Several variants of the VC problem have been studied in the literature. We discuss a couple of them here. For a positive integer @math , the partial vertex cover (PVC) problem is about finding the minimum number of vertices that cover at least @math distinct edges of @math . In the year @math , Burroughs and Bshouty introduced and studied the partial vertex cover problem @cite_13 . In this paper, the authors gave a @math -factor approximation algorithm by rounding fractional optimal solutions given by an LP relaxation of the problem. Bar-Yehuda in @cite_3 came up with another @math -approximation algorithm that relied on the beautiful 'local ratio' method. A primal-dual algorithm achieving the same approximation factor was given in @cite_5 . In @cite_14 , it was shown that the PVC problem on bipartite graphs is NP-hard.
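For concreteness, the PVC objective itself can be stated as a few lines of exponential-time Python. The function name and the tiny path graph are illustrative, and none of the cited approximation techniques appear here:

```python
from itertools import combinations

def partial_vertex_cover(n, edges, t):
    """Minimum number of vertices covering at least t distinct edges.

    Exhaustive search over vertex subsets -- illustrates the definition
    only, not an efficient algorithm. Returns None if even all n
    vertices cover fewer than t edges.
    """
    edges = [tuple(e) for e in edges]
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            chosen = set(subset)
            covered = sum(1 for (u, v) in edges if u in chosen or v in chosen)
            if covered >= t:
                return size  # smallest size is found first
    return None

# Path 0-1-2-3: vertex 1 alone already covers the edges (0,1) and (1,2)
k = partial_vertex_cover(4, [(0, 1), (1, 2), (2, 3)], t=2)
```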
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_13", "@cite_3" ], "mid": [ "", "9203110", "2126448893", "2070814595" ], "abstract": [ "", "Graphs are often used to model risk management in various systems. Particularly, the authors of [6] have considered a system which essentially represents a tripartite graph. The goal in this model is to reduce the risk in the system below a predefined risk threshold level. It can be shown that the main goal in this risk management system can be formulated as a Partial Vertex Cover problem on bipartite graphs. It is well-known that the vertex cover problem is in P on bipartite graphs; however, the computational complexity of the partial vertex cover problem on bipartite graphs is open. In this paper, we show that the partial vertex cover problem is NP-hard on bipartite graphs. Then, we show that the budgeted maximum coverage problem (a problem related to partial vertex cover problem) admits an (8/9)-approximation algorithm in the class of bipartite graphs, which matches the integrality gap of a natural LP relaxation.", "Linear programming relaxations have been used extensively in designing approximation algorithms for optimization problems. For vertex cover, linear programming and a thresholding technique gives a 2-approximate solution, rivaling the best known performance ratio. For a generalization of vertex cover we call vc_t, in which we seek to cover t edges, this technique may not yield a feasible solution at all. We introduce a new method for massaging a linear programming solution to get a good, feasible solution for vc_t. Our technique manipulates the values of the linear programming solution to prepare them for thresholding. We prove that this method achieves a performance ratio of 2 for vc_t with unit weights. A second algorithm extends this result, giving a 2-approximation for vc_t with arbitrary weights.
We show that this is tight in the sense that any α-approximation algorithm for vc_t with α < 2 implies a breakthrough α-approximation algorithm for vertex cover.", "In this paper we consider the natural generalizations of two fundamental problems, the Set-Cover problem and the Min-Knapsack problem. We are given a hypergraph, each vertex of which has a nonnegative weight, and each edge of which has a nonnegative length. For a given threshold @math , our objective is to find a subset of the vertices with minimum total cost, such that at least a length of @math of the edges is covered. This problem is called the partial set cover problem. We present an O(|V|^2+|H|)-time, @math -approximation algorithm for this problem, where @math is an upper bound on the edge cardinality of the hypergraph and |H| is the size of the hypergraph (i.e., the sum of all its edges cardinalities). The special case where the edge cardinality is 2 is called the partial vertex cover problem. For this problem a 2-approximation was previously known; however, the time complexity of our solution, i.e., O(|V|^2), is a dramatic improvement. We show that if the weights are homogeneous (i.e., proportional to the potential coverage of the sets) then any minimal cover is a good approximation. Now, using the local-ratio technique, it is sufficient to repeatedly subtract a homogeneous weight function from the given weight function." ] }
1506.07773
2220764785
Given a graph @math , a non-negative integer @math , and a weight function that maps each vertex in @math to a positive real number, the MWBIS problem is about finding a maximum weighted independent set in @math of cardinality at most @math . A special case of MWBIS, when the weight assigned to each vertex is equal to its degree in @math , is called the MIVC problem. In other words, the MIVC problem is about finding an independent set of cardinality at most @math with maximum coverage. Since it is a generalization of the well-known Maximum Weighted Independent Set (MWIS) problem, MWBIS too does not have any constant factor polynomial time approximation algorithm assuming @math . In this paper, we study MWBIS in the context of bipartite graphs. We show that, unlike MWIS, the MIVC (and thereby the MWBIS) problem in bipartite graphs is NP-hard. Then, we show that the MWBIS problem admits a @math -factor approximation algorithm in the class of bipartite graphs, which matches the integrality gap of a natural LP relaxation.
Another popular variant of the VC problem is the maximum vertex coverage (MVC) problem. Given a graph @math and a positive integer @math , the MVC problem is about finding @math vertices that maximize the number of distinct edges covered by them in @math . Ageev and Sviridenko in @cite_1 gave a @math -approximation algorithm for the MVC problem. An approximation algorithm based on a semidefinite programming technique, whose approximation factor is better than @math when a certain parameter is sufficiently large, was shown in @cite_2 . Apollonio and Simeone in @cite_6 proved that the MVC problem on bipartite graphs is NP-hard. The same authors in @cite_10 gave a @math -factor approximation algorithm for MVC on bipartite graphs that exploited the structure of the fractional optimal solutions of a linear programming formulation for the problem. The authors of @cite_14 improved this result to obtain an @math -factor approximation algorithm for MVC on bipartite graphs.
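A simple greedy rule for MVC, of the kind analyzed in some of the works above (repeatedly pick the vertex incident to the most yet-uncovered edges), can be sketched as follows; the example graph is illustrative, and the sketch makes no claim about matching any cited guarantee:

```python
def greedy_mvc(n, edges, k):
    """Greedily pick k vertices, each maximizing newly covered edges.

    Returns (chosen_vertices, number_of_edges_covered).
    """
    uncovered = set(map(tuple, edges))
    chosen = []
    for _ in range(k):
        # pick the vertex incident to the most still-uncovered edges
        best_v = max(range(n),
                     key=lambda v: sum(1 for e in uncovered if v in e))
        chosen.append(best_v)
        uncovered = {e for e in uncovered if best_v not in e}
    return chosen, len(edges) - len(uncovered)

# Star centered at 0 with leaves 1..4, plus the edge (1, 2):
# a single greedy pick takes vertex 0, covering 4 of the 5 edges
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]
chosen, covered = greedy_mvc(5, edges, k=1)
```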
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_6", "@cite_2", "@cite_10" ], "mid": [ "9203110", "1512148653", "2049200290", "2004973717", "2040480264" ], "abstract": [ "Graphs are often used to model risk management in various systems. Particularly, the authors of [6] have considered a system which essentially represents a tripartite graph. The goal in this model is to reduce the risk in the system below a predefined risk threshold level. It can be shown that the main goal in this risk management system can be formulated as a Partial Vertex Cover problem on bipartite graphs. It is well-known that the vertex cover problem is in P on bipartite graphs; however, the computational complexity of the partial vertex cover problem on bipartite graphs is open. In this paper, we show that the partial vertex cover problem is NP-hard on bipartite graphs. Then, we show that the budgeted maximum coverage problem (a problem related to partial vertex cover problem) admits an (8/9)-approximation algorithm in the class of bipartite graphs, which matches the integrality gap of a natural LP relaxation.", "In this paper we demonstrate a general method of designing constant-factor approximation algorithms for some discrete optimization problems with cardinality constraints. The core of the method is a simple deterministic (\"pipage\") procedure of rounding of linear relaxations. By using the method we design a (1-(1-1/k)^k)-approximation algorithm for the maximum coverage problem where k is the maximum size of the subsets that are covered, and a 1/2-approximation algorithm for the maximum cut problem with given sizes of parts in the vertex set bipartition. The performance guarantee of the former improves on that of the well-known (1 - e^{-1})-greedy algorithm due to Cornuejols, Fisher and Nemhauser in each case of bounded k.
The latter is, to the best of our knowledge, the first constant-factor algorithm for that version of the maximum cut problem.", "Given a simple undirected graph G and a positive integer s the Maximum Vertex Coverage Problem is the problem of finding a set U of s vertices of G such that the number of edges having at least one endpoint in U is as large as possible. We prove that the Maximum Vertex Coverage problem on bipartite graphs is NP-hard and discuss several consequences related to known combinatorial optimization problems.", "We consider the max-vertex-cover (MVC) problem, i.e., find k vertices from an undirected and edge-weighted graph G =( V , E ), where | V |= n ⩾ k , such that the total edge weight covered by the k vertices is maximized. There is a 3/4-approximation algorithm for MVC, based on a linear programming relaxation. We show that the guaranteed ratio can be improved by a simple greedy algorithm for k > (3/4)n, and a simple randomized algorithm for k > (1/2)n. Furthermore, we study a semidefinite programming (SDP) relaxation based approximation algorithms for MVC. We show that, for a range of k , our SDP-based algorithm achieves the best performance guarantee among the four types of algorithms mentioned in this paper.", "Given a simple undirected graph @math and a positive integer @math , the maximum vertex coverage problem (MVC) is the problem of finding a set @math of @math vertices of @math such that the number of edges having at least one endpoint in @math is as large as possible. The problem is NP-hard even in bipartite graphs, as shown in two recent papers [N. Apollonio and B. Simeone, Discrete Appl. Math., 165 (2014), pp. 37--48; G. Joret and A. Vetta, Reducing the Rank of a Matroid, preprint, arXiv:1211.4853v1 [cs.DS], 2012]. By exploiting the structure of the fractional optimal solutions of a linear programming formulation for the maximum coverage problem, we provide a @math -approximation algorithm for the problem.
The algorithm immediately extends to the weighted version of MVC." ] }
1506.07490
2282554529
The discrete Gaussian @math is the distribution that assigns to each vector @math in a shifted lattice @math probability proportional to @math . It has long been an important tool in the study of lattices. More recently, algorithms for discrete Gaussian sampling (DGS) have found many applications in computer science. In particular, polynomial-time algorithms for DGS with very high parameters @math have found many uses in cryptography and in reductions between lattice problems. And, in the past year, Aggarwal, Dadush, Regev, and Stephens-Davidowitz showed @math -time algorithms for DGS with a much wider range of parameters and used them to obtain the current fastest known algorithms for the two most important lattice problems, the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Motivated by its increasing importance, we investigate the complexity of DGS itself and its relationship to CVP and SVP. Our first result is a polynomial-time dimension-preserving reduction from DGS to CVP. There is a simple reduction from CVP to DGS, so this shows that DGS is equivalent to CVP. Our second result, which we find to be more surprising, is a polynomial-time dimension-preserving reduction from centered DGS (the important special case when @math ) to SVP. In the other direction, there is a simple reduction from @math -approximate SVP for any @math , and we present some (relatively weak) evidence to suggest that this might be the best achievable approximation factor. We also show that our CVP result extends to a much wider class of distributions and even to other norms.
However, all of the above-mentioned algorithms only work above the smoothing parameter of the lattice because they incur error that depends on how smooth the distribution is. Recently, @cite_3 showed that the averages of pairs of vectors sampled from the centered discrete Gaussian will be distributed as discrete Gaussians with a lower parameter, as long as we condition on the averages lying in the lattice. They then showed how to choose such pairs efficiently and proved that this is sufficient to sample from any centered discrete Gaussian in @math time---even for parameters @math below smoothing. @cite_30 then extended this idea to arbitrary Gaussians (as opposed to just centered Gaussians) with very low parameters @math . In both cases, the sampler actually outputs exponentially many vectors from the desired distribution.
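For intuition about the distribution being sampled, here is a standard, self-contained sketch of exact sampling from the one-dimensional centered discrete Gaussian over the integers. The truncation point is an arbitrary illustrative choice, and this folklore method is unrelated to the pair-averaging technique of @cite_3 (and does not scale to general lattices):

```python
import math
import random

def sample_discrete_gaussian_z(s, rng):
    """Sample from D_{Z,s}: Pr[x] proportional to exp(-pi * x^2 / s^2).

    Truncates the support at |x| <= 12*s + 1, which discards only a
    negligible fraction of the total mass.
    """
    bound = int(12 * s) + 1
    support = list(range(-bound, bound + 1))
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in support]
    return rng.choices(support, weights=weights)[0]

rng = random.Random(42)
samples = [sample_discrete_gaussian_z(3.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)  # close to 0 by symmetry
```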
{ "cite_N": [ "@cite_30", "@cite_3" ], "mid": [ "1624710162", "2951200405" ], "abstract": [ "We give a 2^(n+o(n))-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on n-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic ~O(4^n)-time and ~O(2^n)-space algorithm of Micciancio and Voulgaris. We achieve our main result in three steps. First, we show how to modify the sampling algorithm due to Aggarwal, Dadush, Regev, and Stephens-Davidowitz (ADRS) to solve the problem of discrete Gaussian sampling over lattice shifts, L - t, with very low parameters. While the actual algorithm is a natural generalization of ADRS, the analysis uses substantial new ideas. This yields a 2^(n+o(n))-time algorithm for approximate CVP with the very good approximation factor γ = 1+2^(-o(n/log n)). Second, we show that the approximate closest vectors to a target vector can be grouped into \"lower-dimensional clusters,\" and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that \"behaves well with these clusters.\" Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP. The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.", "We give a randomized @math -time and space algorithm for solving the Shortest Vector Problem (SVP) on n-dimensional Euclidean lattices. This improves on the previous fastest algorithm: the deterministic @math -time and @math -space algorithm of Micciancio and Voulgaris (STOC 2010, SIAM J. Comp. 2013). In fact, we give a conceptually simple algorithm that solves the (in our opinion, even more interesting) problem of discrete Gaussian sampling (DGS).
More specifically, we show how to sample @math vectors from the discrete Gaussian distribution at any parameter in @math time and space. (Prior work only solved DGS for very large parameters.) Our SVP result then follows from a natural reduction from SVP to DGS. We also show that our DGS algorithm implies a @math -time algorithm that approximates the Closest Vector Problem to within a factor of @math . In addition, we give a more refined algorithm for DGS above the so-called smoothing parameter of the lattice, which can generate @math discrete Gaussian samples in just @math time and space. Among other things, this implies a @math -time and space algorithm for @math -approximate decision SVP." ] }
1506.07490
2282554529
The discrete Gaussian @math is the distribution that assigns to each vector @math in a shifted lattice @math probability proportional to @math . It has long been an important tool in the study of lattices. More recently, algorithms for discrete Gaussian sampling (DGS) have found many applications in computer science. In particular, polynomial-time algorithms for DGS with very high parameters @math have found many uses in cryptography and in reductions between lattice problems. And, in the past year, Aggarwal, Dadush, Regev, and Stephens-Davidowitz showed @math -time algorithms for DGS with a much wider range of parameters and used them to obtain the current fastest known algorithms for the two most important lattice problems, the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Motivated by its increasing importance, we investigate the complexity of DGS itself and its relationship to CVP and SVP. Our first result is a polynomial-time dimension-preserving reduction from DGS to CVP. There is a simple reduction from CVP to DGS, so this shows that DGS is equivalent to CVP. Our second result, which we find to be more surprising, is a polynomial-time dimension-preserving reduction from centered DGS (the important special case when @math ) to SVP. In the other direction, there is a simple reduction from @math -approximate SVP for any @math , and we present some (relatively weak) evidence to suggest that this might be the best achievable approximation factor. We also show that our CVP result extends to a much wider class of distributions and even to other norms.
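The "@math" placeholders in the abstract above elide the actual distribution; for reference, the discrete Gaussian over a lattice shift is standardly written as follows (standard notation, not quoted verbatim from the paper):

```latex
% Gaussian mass with parameter s > 0:
\rho_s(\mathbf{x}) = \exp\!\bigl(-\pi \|\mathbf{x}\|^2 / s^2\bigr)

% Discrete Gaussian over the lattice shift L - t: each y in L - t
% receives probability proportional to its Gaussian mass.
D_{L - t,\, s}(\mathbf{y})
  = \frac{\rho_s(\mathbf{y})}{\sum_{\mathbf{z} \in L - t} \rho_s(\mathbf{z})}
```

Centered DGS is the special case t = 0, which is the case the paper reduces to SVP.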
The samplers in this work approach discrete Gaussian sampling in a completely different way. (Indeed, the author repeatedly tried and failed to modify the above techniques to work in our context.) Instead, as we described above, we use a new method of sampling based on lattice sparsification. This tool was originally introduced by Khot for the purposes of proving the hardness of approximating SVP @cite_8 . Khot analyzed the behavior of sparsification only on the specific lattices that arose in his reduction, which were cleverly designed to behave nicely when sparsified. Later, Dadush and Kun analyzed the behavior of sparsification over general lattices @cite_32 and introduced the idea of adding a random shift to the target in order to obtain deterministic approximation algorithms for CVP in any norm. Dadush, Regev, and Stephens-Davidowitz used a similar algorithm to obtain a reduction from approximate CVP to the same problem with an upper bound on the distance to the lattice (and a slightly smaller approximation factor) @cite_49 . Our sparsification analysis in the CVP case is most similar to that of @cite_49 , though our reduction requires tighter analysis.
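The sparsification idea described above, intersecting the lattice with a random index-p sublattice, can be illustrated with a toy filter over integer vectors. This is only the combinatorial core (keep points x with <z, x> ≡ 0 mod p); the actual reductions also apply a random shift to the target, which this sketch omits:

```python
def sparsify(points, p, z):
    """Keep only lattice points x with <z, x> = 0 (mod p).

    Over a full lattice this selects an index-p sublattice; here we
    just filter an explicit list of integer vectors for illustration.
    """
    return [x for x in points
            if sum(zi * xi for zi, xi in zip(z, x)) % p == 0]

# Toy example on a finite window of the integer lattice Z^2.
window = [(a, b) for a in range(-2, 3) for b in range(-2, 3)]
p = 3
z = (1, 2)  # in a real reduction, z is sampled uniformly mod p
kept = sparsify(window, p, z)
print(len(window), len(kept))  # → 25 9
```

Roughly a 1/p fraction of points survives, which is what lets a sparsified lattice isolate one of a small cluster of close vectors.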
{ "cite_N": [ "@cite_32", "@cite_49", "@cite_8" ], "mid": [ "2146007300", "2069278600", "2056492141" ], "abstract": [ "We give a deterministic algorithm for solving the (1 + ε) approximate Closest Vector Problem (CVP) on any n dimensional lattice and any norm in 2^{O(n)} (1 + 1/ε)^n time and 2^n poly(n) space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010) and Dadush, Peikert and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve" based algorithms for (1 + ε)-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a poly(n)-space and 2^{O(n)}-time algorithm for exact CVP in the ℓ_2 norm, the space complexity of our algorithm can be reduced to polynomial. Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we employ the idea of random sublattice restrictions, which was first employed by Khot (FOCS 2003) for the purpose of proving hardness for the Shortest Vector Problem (SVP) under ℓ_p norms.", "We present a new efficient algorithm for the search version of the approximate Closest Vector Problem with Preprocessing (CVPP). Our algorithm achieves an approximation factor of O(n sqrt(log n)), improving on the previous best of O(n^1.5) due to Lagarias, Lenstra, and Schnorr. We also show, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve the problem (with the slightly worse approximation factor of O(n)). We remark that this still leaves a large gap with respect to the decisional version of CVPP, where the best known approximation factor is O(sqrt(n / log n)) due to Aharonov and Regev.
To achieve these results, we show a reduction to the same problem restricted to target points that are close to the lattice and a more efficient reduction to a harder problem, Bounded Distance Decoding with preprocessing (BDDP). Combining either reduction with the previous best-known algorithm for BDDP by Liu, Lyubashevsky, and Micciancio gives our main result. In the setting of CVP without preprocessing, we also give a reduction from (1 + ε)γ-approximate CVP to γ-approximate CVP where the target is at distance at most 1 + 1/ε times the minimum distance (the length of the shortest non-zero vector), which relies on the lattice sparsification techniques of Dadush and Kun. As our final and most technical contribution, we present a substantially more efficient variant of the LLM algorithm (both in terms of run-time and amount of preprocessing advice), and via an improved analysis, show that it can decode up to a distance proportional to the reciprocal of the smoothing parameter of the dual lattice. We show that this is never smaller than the LLM decoding radius, and that it can be up to an Ω̃(sqrt(n)) factor larger.", "Let p > 1 be any fixed real. We show that assuming NP ⊄ RP, there is no polynomial time algorithm that approximates the Shortest Vector Problem (SVP) in the ℓ_p norm within a constant factor. Under the stronger assumption NP ⊄ RTIME(2^poly(log n)), we show that there is no polynomial-time algorithm with approximation ratio 2^{(log n)^{1/2 - ε}} where n is the dimension of the lattice and ε > 0 is an arbitrarily small constant. We first give a new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves some constant factor hardness. The reduction is based on BCH Codes. Its advantage is that the SVP instances produced by the reduction behave well under the augmented tensor product, a new variant of tensor product that we introduce. This enables us to boost the hardness factor to 2^{(log n)^{1/2 - ε}}." ] }
1506.07490
2282554529
The discrete Gaussian @math is the distribution that assigns to each vector @math in a shifted lattice @math probability proportional to @math . It has long been an important tool in the study of lattices. More recently, algorithms for discrete Gaussian sampling (DGS) have found many applications in computer science. In particular, polynomial-time algorithms for DGS with very high parameters @math have found many uses in cryptography and in reductions between lattice problems. And, in the past year, Aggarwal, Dadush, Regev, and Stephens-Davidowitz showed @math -time algorithms for DGS with a much wider range of parameters and used them to obtain the current fastest known algorithms for the two most important lattice problems, the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Motivated by its increasing importance, we investigate the complexity of DGS itself and its relationship to CVP and SVP. Our first result is a polynomial-time dimension-preserving reduction from DGS to CVP. There is a simple reduction from CVP to DGS, so this shows that DGS is equivalent to CVP. Our second result, which we find to be more surprising, is a polynomial-time dimension-preserving reduction from centered DGS (the important special case when @math ) to SVP. In the other direction, there is a simple reduction from @math -approximate SVP for any @math , and we present some (relatively weak) evidence to suggest that this might be the best achievable approximation factor. We also show that our CVP result extends to a much wider class of distributions and even to other norms.
More generally, this paper can be considered as part of a long line of work that studies the relationships between various lattice problems under dimension-preserving reductions. Notable examples include @cite_0 , which showed that SVP reduces to CVP; @cite_24 , which gave a reduction from SIVP to CVP; and @cite_47 , which showed the equivalence of uSVP, GapSVP, and BDD up to polynomial approximation factors. In particular, this work together with @cite_24 shows that exact SIVP, exact CVP, and DGS are all equivalent under dimension-preserving reductions. (See @cite_48 for a summary of such reductions.)
{ "cite_N": [ "@cite_0", "@cite_48", "@cite_47", "@cite_24" ], "mid": [ "2013794527", "", "1490468194", "1971445617" ], "abstract": [ "Abstract We show that given oracle access to a subroutine which returns approximate closest vectors in a lattice, one may find in polynomial time approximate shortest vectors in a lattice. The level of approximation is maintained; that is, for any function f , the following holds: Suppose that the subroutine, on input of a lattice L and a target vector w (not necessarily in the lattice), outputs v ∈ L such that ‖ v − w ‖≤f(n)·‖ u − w ‖ for any u ∈ L . Then, our algorithm, on input of a lattice L , outputs a non-zero vector v ∈ L such that ‖ v ‖≤f(n)·‖ u ‖ for any non-zero vector u ∈ L . The result holds for any norm, and preserves the dimension of the lattice, i.e., the closest vector oracle is called on lattices of exactly the same dimension as the original shortest vector problem. This result establishes the widely believed conjecture by which the shortest vector problem is not harder than the closest vector problem. The proof can be easily adapted to establish an analogous result for the corresponding computational problems for linear codes.", "", "We prove the equivalence, up to a small polynomial approximation factor @math , of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as well the BDD problem commonly used in coding theory. The main cryptographic application of our work is the proof that the Ajtai-Dwork ([2]) and the Regev ([33]) cryptosystems, which were previously only known to be based on the hardness of uSVP, can be equivalently based on the hardness of worst-case GapSVP @math and GapSVP @math , respectively. 
Also, in the case of uSVP and BDD, our connection is very tight, establishing the equivalence (within a small constant approximation factor) between the two most central problems used in lattice based public key cryptography and coding theory.", "We give various deterministic polynomial time reductions among approximation problems on point lattices. Our reductions are both efficient and robust, in the sense that they preserve the rank of the lattice and approximation factor achieved. Our main result shows that for any γ ≥ 1, approximating all the successive minima of a lattice (and, in particular, approximately solving the Shortest Independent Vectors Problem, SIVPγ) within a factor γ reduces under deterministic polynomial time rank-preserving reductions to approximating the Closest Vector Problem (CVP) within the same factor γ. This solves an open problem posed by Blomer in (ICALP 2000). As an application, we obtain faster algorithms for the exact solution of SIVP that run in time n! · sO(1) (where n is the rank of the lattice, and s the size of the input,) improving on the best previously known solution of Blomer (ICALP 2000) by a factor 3n. We also show that SIVP, CVP and many other lattice problems are equivalent in their exact version under deterministic polynomial time rank-preserving reductions." ] }
1506.07549
2283205818
In this paper, we consider a planar annulus, i.e., a bounded, two-connected, Jordan domain, endowed with a sequence of triangulations exhausting it. We then construct a corresponding sequence of maps which converge uniformly on compact subsets of the domain, to a conformal homeomorphism onto the interior of a Euclidean annulus bounded by two concentric circles. As an application, we will affirm a conjecture raised by Ken Stephenson in the 90's which predicts that the Riemann mapping can be approximated by a sequence of electrical networks.
Our definition of the harmonic conjugate function is motivated by the fact that, in the smooth category, a conformal map is determined by its real and imaginary parts, which are known to be harmonic conjugates. The search for discrete approximations of conformal maps has a long and rich history; we refer to @cite_16 and [ChSm, Section 2] for excellent recent accounts.
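The harmonic-conjugate fact invoked above is exactly the Cauchy-Riemann equations, and it can be checked numerically. A toy sketch for f(z) = z^2, whose real and imaginary parts are u = x^2 - y^2 and v = 2xy (the sample point and step size are arbitrary choices):

```python
def u(x, y):  # Re(z^2) = x^2 - y^2
    return x * x - y * y

def v(x, y):  # Im(z^2) = 2xy
    return 2 * x * y

def partials(f, x, y, h=1e-5):
    """Central-difference partial derivatives (df/dx, df/dy)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

# Cauchy-Riemann equations: u_x = v_y and u_y = -v_x.
ux, uy = partials(u, 0.7, -0.3)
vx, vy = partials(v, 0.7, -0.3)
print(abs(ux - vy) < 1e-8, abs(uy + vx) < 1e-8)  # → True True
```

Discrete conformal structures impose a combinatorial analogue of these relations on a triangulation rather than on a smooth grid.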
{ "cite_N": [ "@cite_16" ], "mid": [ "1521632191" ], "abstract": [ "We detail the theory of Discrete Riemann Surfaces. It takes place on a cellular decomposition of a surface, together with its Poincaré dual, equipped with a discrete conformal structure. Many theorems of the continuous theory carry over to the discrete case: we define the discrete analogs of period matrices, Riemann's bilinear relations, the exponential of constant argument, and series. We present the notion of criticality and its relationship with integrability." ] }
1506.06784
2243129471
We explore the probabilistic foundations of shared control in complex dynamic environments. In order to do this, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, autonomy, and crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot, that we call "agreeability"; in combination with the methods developed in trautman-ijrr-2015 , we extend a cooperative collision avoidance autonomy to shared control. We therefore quantify the notion of simultaneously optimizing over agreeability (between the operator and autonomy), and safety and efficiency in crowded environments. We show that for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Additionally, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy. In turn, these restrictions raise questions about the flexibility and applicability of the linear blending framework. Additionally, we present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending such as dragan-ijrr-2013 ) and show that not only is this also a restrictive special case of our probabilistic approach, but more importantly, is statistically unsound, and thus, mathematically, unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and show how this alternative approach converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
This linear arbitration model has enjoyed wide adoption in the assistive wheelchair community ( @cite_30 @cite_2 @cite_15 @cite_25 @cite_7 @cite_1 @cite_28 @cite_21 ). Outside of the wheelchair community, shared control path planning researchers have widely adopted this equation as a standard protocol, as extensively argued in @cite_10 @cite_26 (in @cite_10 , it is argued that linear policy blending can act as ``a common lens across a wide range of literature''). Additionally, the work of @cite_29 @cite_6 advocates the broad adoption of a linear arbitration step for shared control.
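The linear arbitration model referred to above blends the operator's and the autonomy's commands with a single weight, u = alpha * u_h + (1 - alpha) * u_r. A minimal sketch (the weight and the command vectors below are illustrative, not values from any cited system):

```python
def linear_blend(u_operator, u_autonomy, alpha):
    """Linear arbitration: u = alpha * u_h + (1 - alpha) * u_r.

    alpha in [0, 1] is the arbitration weight: alpha = 1 yields pure
    operator control, alpha = 0 pure autonomy.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("arbitration weight must lie in [0, 1]")
    return tuple(alpha * h + (1 - alpha) * r
                 for h, r in zip(u_operator, u_autonomy))

# Blend a (linear, angular) velocity command from operator and autonomy.
u = linear_blend((1.0, 0.0), (0.6, 0.4), alpha=0.5)
print(u)  # ≈ (0.8, 0.2)
```

In practice, alpha is often set from a confidence estimate over the operator's intent, which is precisely the arbitration question the surveyed papers study.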
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_7", "@cite_28", "@cite_29", "@cite_21", "@cite_1", "@cite_6", "@cite_2", "@cite_15", "@cite_10", "@cite_25" ], "mid": [ "2012976444", "", "1763775273", "", "2140464197", "2229572887", "2037986392", "", "1972335829", "", "2105925198", "" ], "abstract": [ "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.", "", "The control system for a personal aid for mobility and health monitoring (PAMM) for the elderly is presented. PAMM is intended to assist the elderly living independently or in senior assisted living facilities. It provides physical support and guidance, as well as monitoring basic vital signs for users that may have both limited physical and cognitive capabilities. 
This paper presents the design of a bi-level control system for PAMM. The first level is an admittance-based mobility controller that provides a natural and intuitive human-machine interface. The second level is an adaptive shared controller that allocates control between the user and the computer based on metrics of the user's performance. Field trials at an eldercare facility show the effectiveness of the design.", "", "In today's aging society, many people require mobility assistance, which can be provided by robotized assistive wheelchairs with a certain degree of autonomy when manual control is unfeasible due to disability. Robot wheelchairs, though, are not supposed to be completely in control because lack of human intervention may lead to loss of residual capabilities and frustration. Most of these systems rely on shared control, which typically consists of swapping control from human to robot when needed. However, this means that persons never deal with situations they find difficult. We propose a new shared control approach to allow constant cooperation between humans and robots, so that assistance may be adapted to the user's skills. Our proposal is based on the reactive navigation paradigm, where robot and human commands become different goals in a Potential Field. Our main novelty is that human and robot attractors are weighted by their respective local efficiencies at each time instant. This produces an emergent behavior that combines both inputs in an efficient, safe and smooth way and is dynamically adapted to the user's needs. The proposed control scheme has been successfully tested at hospital Fondazione Santa Lucia (FSL) in Rome with several volunteers presenting different disabilities.
Both paradigms, however, are difficult to use in applications operating in the unknown and dynamic real world, and they do not provide an adequate feeling of interaction or a human-friendly control interface to the human operator. This paper proposes a novel interactive control (i.e., active supervisory control) paradigm: telecommanding, which is used for Internet-based wheeled robot teleoperation. Telecommanding involves two parts: basic telecommanding using joystick commands, and advanced telecommanding using linguistic commands. Each joystick or linguistic command is designed to perform an independent task and is defined with multiple events (non-time action references), and the corresponding response functions. This event-driven mechanism enables the robot to deliberately respond to expected events while reactively responding to unexpected events. Assisted by up-to-date media streaming technologies, telecommanding can help a novice operator to easily control an Internet robot navigating in an unknown and dynamic real world. Experiments, including an Internet-based teleoperation test over 1500 km from Beijing to Hong Kong, demonstrate the promising performance.
Experiments show that the user can guide the movement of the robot safely and smoothly in the complex environment with the developed controllers.", "", "In shared control teleoperation, the robot assists the user in accomplishing the desired task, making teleoperation easier and more seamless. Rather than simply executing the user's input, which is hindered by the inadequacies of the interface, the robot attempts to predict the user's intent, and assists in accomplishing it. In this work, we are interested in the scientific underpinnings of assistance: we propose an intuitive formalism that captures assistance as policy blending, illustrate how some of the existing techniques for shared control instantiate it, and provide a principled analysis of its main components: prediction of user intent and its arbitration with the user input. We define the prediction problem, with foundations in inverse reinforcement learning, discuss simplifying assumptions that make it tractable, and test these on data from users teleoperating a robotic manipulator. We define the arbitration problem from a control-theoretic perspective, and turn our attention to what users consider good arbitration. We conduct a user study that analyzes the effect of different factors on the performance of assistance, indicating that arbitration should be contextual: it should depend on the robot's confidence in itself and in the user, and even the particulars of the user. Based on the study, we discuss challenges and opportunities that a robot sharing the control with the user might face: adaptation to the context and the user, legibility of behavior, and the closed loop between prediction and user behavior.", "" ] }
1506.06784
2243129471
We explore the probabilistic foundations of shared control in complex dynamic environments. In order to do this, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, autonomy, and crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot, that we call "agreeability"; in combination with the methods developed in trautman-ijrr-2015 , we extend a cooperative collision avoidance autonomy to shared control. We therefore quantify the notion of simultaneously optimizing over agreeability (between the operator and autonomy), and safety and efficiency in crowded environments. We show that for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Additionally, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy. In turn, these restrictions raise questions about the flexibility and applicability of the linear blending framework. Additionally, we present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending such as dragan-ijrr-2013 ) and show that not only is this also a restrictive special case of our probabilistic approach, but more importantly, is statistically unsound, and thus, mathematically, unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and show how this alternative approach converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
2) Compute the input @math : This quantity may be computed using nearly any off-the-shelf planning algorithm, and depends on the application. The ``Dynamic Window Approach'' @cite_17 and ``Vector Field Histograms @math '' @cite_8 are popular approaches for obstacle avoidance on wheelchairs. Sometimes, the autonomy is biased according to data about the operator---for instance, one might imagine an offline training phase where the robot is taught ``how'' to move through the space, and then this data could be aggregated using, e.g., inverse optimal control. Alternatively, one might bias the autonomous decision making by conditioning the planner on the predicted or known human goal.
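As a rough illustration of the cited Dynamic Window Approach, the sketch below samples velocity pairs from a one-step window and scores them by heading alignment, obstacle clearance, and forward speed. The weights, kinematic limits, and obstacle model are all illustrative placeholders, not the published tuning:

```python
import math

def dwa_choose(v0, w0, goal_heading, obstacle_dist, dt=0.1,
               a_max=0.5, alpha_max=1.0, weights=(0.8, 0.1, 0.1)):
    """Pick (v, w) from the one-step dynamic window maximizing a score.

    The score combines heading alignment, obstacle clearance, and
    forward speed, the three terms of the classic DWA objective.
    obstacle_dist(v, w) returns the distance to collision on that arc.
    """
    wh, wc, wv = weights
    best, best_score = (v0, w0), -math.inf
    # Dynamic window: velocities reachable within one control cycle.
    for v in [v0 + a_max * dt * k / 5 for k in range(-5, 6)]:
        if v < 0:
            continue  # forward motion only in this sketch
        for w in [w0 + alpha_max * dt * k / 5 for k in range(-5, 6)]:
            heading = 1 - abs(goal_heading - w * dt) / math.pi
            clearance = min(obstacle_dist(v, w), 2.0) / 2.0
            score = wh * heading + wc * clearance + wv * v
            if score > best_score:
                best, best_score = (v, w), score
    return best

# Free space ahead: the planner picks the fastest admissible velocity
# whose one-step heading best matches the goal direction.
v, w = dwa_choose(0.3, 0.0, goal_heading=0.02,
                  obstacle_dist=lambda v, w: 2.0)
print(round(v, 2), round(w, 2))  # → 0.35 0.1
```

A real implementation rolls the (v, w) pair forward along a circular arc against an occupancy map; the lambda above stands in for that collision check.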
{ "cite_N": [ "@cite_8", "@cite_17" ], "mid": [ "2114476723", "2117211893" ], "abstract": [ "This paper presents further improvements on the earlier vector field histogram (VFH) method developed by Borenstein-Koren (1991) for real-time mobile robot obstacle avoidance. The enhanced method, called VFH+, offers several improvements that result in smoother robot trajectories and greater reliability. VFH+ reduces some of the parameter tuning of the original VFH method by explicitly compensating for the robot width. Also added in VFH+ is a better approximation of the mobile robot trajectory, which results in higher reliability.", "This approach, designed for mobile robots equipped with synchro-drives, is derived directly from the motion dynamics of the robot. In experiments, the dynamic window approach safely controlled the mobile robot RHINO at speeds of up to 95 cm sec, in populated and dynamic environments." ] }
1506.07236
2952812999
Vehicle relocation is the problem in which a mobile robot has to estimate its self-position with respect to an a priori map of landmarks using perception and motion measurements, without any knowledge of the initial self-position. Recently, RANdom SAmple Consensus (RANSAC), a robust multi-hypothesis estimator, has been successfully applied to offline relocation in static environments. On the other hand, online relocation in dynamic environments is still a difficult problem, since available computation time is always limited and measurements include many outliers. To realize a real-time algorithm for such an online process, we have developed an incremental version of the RANSAC algorithm by extending an efficient preemption RANSAC scheme. This novel scheme, named incremental RANSAC, is able to find inlier hypotheses of self-positions out of a large number of outlier hypotheses contaminated by outlier measurements.
Previous techniques for online localization can be classified into two categories, according to whether the initial self-position is known or not. If the initial self-position is known, the localization problem is equivalent to position tracking, and traditional techniques such as Kalman Filtering @cite_10 @cite_3 are applicable. If the initial self-position is unknown, the full relocation problem needs to be solved.
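The Kalman-filtering approach to position tracking mentioned above can be sketched in one dimension: predict with the odometry, then correct with the measurement. All noise parameters below are illustrative:

```python
def kalman_step(x, P, u, z, Q=0.01, R=0.25):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P : prior state estimate and its variance
    u    : odometry (motion) increment; Q is the motion noise variance
    z    : position measurement; R is the measurement noise variance
    """
    # Predict: apply the motion model and grow the uncertainty.
    x_pred, P_pred = x + u, P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Track a robot moving 1 m per step from a roughly known start.
x, P = 0.0, 1.0
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    x, P = kalman_step(x, P, u, z)
print(x, P)  # estimate near 3.0 with sharply reduced variance
```

The unimodal Gaussian posterior is exactly why this only works for tracking: with an unknown initial position the posterior is multimodal, which is what the relocation methods below address.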
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "2112770013", "1606157100" ], "abstract": [ "We propose an on-line algorithm for simultaneous localization and mapping of dynamic environments. Our algorithm is capable of differentiating static and dynamic parts of the environment and representing them appropriately on the map. Our approach is based on maintaining two occupancy grids. One grid models the static parts of the environment, and the other models the dynamic parts of the environment. The union of the two provides a complete description of the environment over time. We also maintain a third map containing information about static landmarks detected in the environment. These landmarks provide the robot with localization. Results in simulation and with physical robots show the efficiency of our approach and show how the differentiation of dynamic and static entities in the environment and SLAM can be mutually beneficial.", "In this paper we will describe a representation for spatial relationships which makes explicit their inherent uncertainty. We will show ways to manipulate them to obtain estimates of relationships and associated uncertainties not explicitly given, and show how decisions to sense or act can be made a priori based on those estimates. We will show how new constraint information, usually obtained by measurement, can be used to update the world model of relationships consistently, and in some situations, optimally. The framework we describe relies only on well-known state estimation methods." ] }
1506.07236
2952812999
Vehicle relocation is the problem in which a mobile robot has to estimate its self-position with respect to an a priori map of landmarks using perception and motion measurements, without any knowledge of the initial self-position. Recently, RANdom SAmple Consensus (RANSAC), a robust multi-hypothesis estimator, has been successfully applied to offline relocation in static environments. On the other hand, online relocation in dynamic environments is still a difficult problem, since available computation time is always limited and measurements include many outliers. To realize a real-time algorithm for such an online process, we have developed an incremental version of the RANSAC algorithm by extending an efficient preemption RANSAC scheme. This novel scheme, named incremental RANSAC, is able to find inlier hypotheses of self-positions out of a large number of outlier hypotheses contaminated by outlier measurements.
Markov Localization and Monte Carlo Localization @cite_9 are two popular algorithms for online relocation. They generate a number of self-position hypotheses covering all possible positions and score the likelihood of each hypothesis based on the consistency between features and landmarks. Although they are reliable in relatively small environments @cite_1 , they do not scale to large environments, since the number of required hypotheses grows linearly with the environment size.
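The generate-and-score scheme described above is what a particle filter implements. A toy 1-D Monte Carlo Localization sketch (the signed-range sensor model, the noise values, and the corridor geometry are illustrative assumptions, not a cited setup):

```python
import math
import random

def mcl_step(particles, motion, measurement, landmark, noise=0.2):
    """One Monte Carlo Localization cycle in 1-D.

    particles   : list of position hypotheses
    motion      : odometry increment (noise added per particle)
    measurement : observed signed offset from the robot to the landmark
    """
    # 1. Motion update: move every hypothesis, jittered by odometry noise.
    moved = [p + motion + random.gauss(0, noise) for p in particles]
    # 2. Weighting: score each hypothesis by its consistency with the
    #    observation of the (known) landmark.
    weights = [math.exp(-(((landmark - p) - measurement) ** 2)
                        / (2 * noise ** 2)) for p in moved]
    # 3. Resampling: draw hypotheses in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
# Hypotheses spread over the whole corridor: global localization.
particles = [random.uniform(0, 10) for _ in range(500)]
truth, landmark = 2.0, 9.0
for _ in range(5):
    truth += 1.0
    particles = mcl_step(particles, 1.0, landmark - truth, landmark)
est = sum(particles) / len(particles)
print(abs(est - truth) < 1.0)  # the particle cloud converges near the true pose
```

The scalability problem noted in the text is visible here: covering a larger corridor at the same hypothesis density requires proportionally more particles.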
{ "cite_N": [ "@cite_9", "@cite_1" ], "mid": [ "2160584648", "1986909116" ], "abstract": [ "Mobile robot localization is the problem of determining a robot’s pose from sensor data. This article presents a family of probabilistic localization algorithms known as Monte Carlo Localization (MCL). MCL algorithms represent a robot’s belief by a set of weighted hypotheses (samples), which approximate the posterior under a common Bayesian formulation of the localization problem. Building on the basic MCL algorithm, this article develops a more robust algorithm called MixtureMCL, which integrates two complimentary ways of generating samples in the estimation. To apply this algorithm to mobile robots equipped with range finders, a kernel density tree is learned that permits fast sampling. Systematic empirical results illustrate the robustness and computational efficiency of the approach. 2001 Published by Elsevier Science B.V.", "In this paper we study the global localization problem in SLAM: the determination of the vehicle location in a previously mapped environment with no other prior information. We show that, using a grid sampling representation of the configuration space, it is possible to evaluate all vehicle location hypotheses in the environment (up to a certain resolution) with a computational cost that is bilinear: linear both in the number of map features and in the number of sensor measurements. We propose a pairing-driven algorithm that considers only individual measurement-feature pairings and thus, in contrast with current correspondence space algorithms, it avoids searching in the exponential correspondence space. It uses a voting strategy that accumulates evidence for each vehicle location hypothesis, assuring robustness to noise in the sensor measurements and environment models. The general nature of the proposed strategy allows the consideration of different types of features and sensor measurements. Using the popular Victoria Park dataset, we compare its performance with location-driven algorithms where the solution space is usually randomly sampled. We show that the proposed pairing-driven technique is computationally more efficient in proportion to the density of features in the environment." ] }
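The hypothesis-weighting idea behind Monte Carlo Localization described above can be illustrated with a minimal sketch. This is not the algorithm from the cited papers, only a toy particle filter with an assumed single range-only landmark measurement; all names and parameters are illustrative.

```python
import math
import random

def mcl_update(particles, landmark, measured_dist, noise=0.5):
    """Score each pose hypothesis by how well the predicted landmark
    distance matches the measurement, then resample proportionally."""
    weights = []
    for (x, y) in particles:
        predicted = math.hypot(landmark[0] - x, landmark[1] - y)
        err = predicted - measured_dist
        weights.append(math.exp(-err * err / (2 * noise ** 2)))
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: hypotheses consistent with the measurement survive.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
# Hypotheses cover the whole (small) environment, as in Markov/Monte Carlo
# Localization; the count needed grows with the environment size.
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
landmark = (5.0, 5.0)
for _ in range(5):
    particles = mcl_update(particles, landmark, measured_dist=3.0)
```

After a few updates the surviving hypotheses concentrate near the set of poses at distance 3 from the landmark (a single range measurement only localizes up to that circle).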
1506.07236
2952812999
Vehicle relocation is the problem in which a mobile robot has to estimate its own position with respect to an a priori map of landmarks, using perception and motion measurements, without any knowledge of the initial position. Recently, RANdom SAmple Consensus (RANSAC), a robust multi-hypothesis estimator, has been successfully applied to offline relocation in static environments. On the other hand, online relocation in dynamic environments remains a difficult problem, since the available computation time is always limited and the measurements include many outliers. To realize a real-time algorithm for such an online process, we have developed an incremental version of the RANSAC algorithm by extending the efficient preemption RANSAC scheme. This novel scheme, named incremental RANSAC, is able to find inlier hypotheses of self-positions among a large number of outlier hypotheses contaminated by outlier measurements.
There are also offline algorithms that scale to large environments. They estimate the self-position by matching a local feature map against the global landmark map. The essence of these algorithms is to generate a small set of good initial hypotheses by matching minimal sets of features and landmarks @cite_6 @cite_7 . As briefly described in section , RANSAC is one such algorithm. Their computational cost depends not on the environment size but on the number of features, so they are efficient especially in sparse environments @cite_1 . However, even these algorithms are not directly applicable to online relocation, where the available computation time is always limited and typically constant. Moreover, a larger number of features would be required in dynamic environments, since many of the observed features are outliers. This makes it difficult even to apply pre-computed lookup tables that could otherwise accelerate the map matching @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_7", "@cite_6" ], "mid": [ "1601378499", "1986909116", "2135493014", "2118121744" ], "abstract": [ "Localization, that is the estimation of a robot's location from sensor data, is a fundamental problem in mobile robotics. This papers presents a version of Markov localization which provides accurate position estimates and which is tailored towards dynamic environments. The key idea of Markov localization is to maintain a probability density over the space of all locations of a robot in its environment. Our approach represents this space metrically, using a fine-grained grid to approximate densities. It is able to globally localize the robot from scratch and to recover from localization failures. It is robust to approximate models of the environment (such as occupancy grid maps) and noisy sensors (such as ultrasound sensors). Our approach also includes a filtering technique which allows a mobile robot to reliably estimate its position even in densely populated environments in which crowds of people block the robot's sensors for extended periods of time. The method described here has been implemented and tested in several real-world applications of mobile robots, including the deployments of two mobile robots as interactive museum tour-guides.", "In this paper we study the global localization problem in SLAM: the determination of the vehicle location in a previously mapped environment with no other prior information. We show that, using a grid sampling representation of the configuration space, it is possible to evaluate all vehicle location hypotheses in the environment (up to a certain resolution) with a computational cost that is bilinear: linear both in the number of map features and in the number of sensor measurements. We propose a pairing-driven algorithm that considers only individual measurement-feature pairings and thus, in contrast with current correspondence space algorithms, it avoids searching in the exponential correspondence space. It uses a voting strategy that accumulates evidence for each vehicle location hypothesis, assuring robustness to noise in the sensor measurements and environment models. The general nature of the proposed strategy allows the consideration of different types of features and sensor measurements. Using the popular Victoria Park dataset, we compare its performance with location-driven algorithms where the solution space is usually randomly sampled. We show that the proposed pairing-driven technique is computationally more efficient in proportion to the density of features in the environment.", "Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Also, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented using an interpreted language, onto a robot equipped with a panoramic vision system. Empirical data shows remarkable improvement in accuracy when compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.", "In this paper we propose an algorithm to determine the location of a vehicle in an environment represented by a stochastic map, given a set of environment measurements obtained by a sensor mounted on the vehicle. We show that the combined use of (1) geometric constraints considering feature correlation, (2) joint compatibility, (3) random sampling and (4) locality, make this algorithm linear with both the size of the stochastic map and the number of measurements. We demonstrate the practicality and robustness of our approach with experiments in an outdoor environment." ] }
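The hypothesize-from-a-minimal-set idea shared by RANSAC-style map matching above can be sketched in a few lines. This is a toy, not any cited system: it assumes a pure 2D translation between the feature frame and the landmark map, so the minimal set is a single feature-landmark pairing; all data and thresholds are illustrative.

```python
import math
import random

def ransac_translation(features, landmarks, iters=100, tol=0.3):
    """Hypothesize a 2D translation from one random feature-landmark
    pairing (the minimal set); keep the hypothesis with most inliers."""
    best, best_inliers = None, -1
    for _ in range(iters):
        f = random.choice(features)
        l = random.choice(landmarks)
        dx, dy = l[0] - f[0], l[1] - f[1]  # candidate pose offset
        inliers = 0
        for (fx, fy) in features:
            # A feature is an inlier if some landmark lies near its
            # transformed position.
            if any(math.hypot(fx + dx - lx, fy + dy - ly) < tol
                   for (lx, ly) in landmarks):
                inliers += 1
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers

random.seed(1)
landmarks = [(1, 1), (4, 2), (2, 5), (6, 6)]
true_offset = (2.0, 3.0)
# Features = landmarks as seen from the unknown pose, plus one outlier
# (e.g., a dynamic object), which the consensus step rejects.
features = [(lx - true_offset[0], ly - true_offset[1]) for (lx, ly) in landmarks]
features.append((9.0, 9.0))
offset, inliers = ransac_translation(features, landmarks)
```

The cost of each hypothesis check depends on the numbers of features and landmarks, not on the environment size, which is the property the paragraph above highlights.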
1506.06714
2951580200
We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.
Continuous representations of words and phrases estimated by neural network models have been applied to a variety of tasks, ranging from Information Retrieval (IR) @cite_27 @cite_12 and Online Recommendation @cite_23 to Machine Translation (MT) @cite_10 @cite_3 @cite_20 @cite_13 and Language Modeling (LM) @cite_5 @cite_24 . successfully use an embedding model to refine the estimation of rare phrase-translation probabilities, which is traditionally affected by sparsity problems. Robustness to sparsity is a crucial property of our method, as it allows us to capture context information while avoiding unmanageable growth of the number of model parameters.
{ "cite_N": [ "@cite_13", "@cite_3", "@cite_24", "@cite_27", "@cite_23", "@cite_5", "@cite_20", "@cite_10", "@cite_12" ], "mid": [ "2949888546", "", "2117130368", "2136189984", "2251008987", "2132339004", "1753482797", "2250489405", "2131876387" ], "abstract": [ "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.", "Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.", "An “Interestingness Modeler” uses deep neural networks to learn deep semantic models (DSM) of “interestingness.” The DSM, consisting of two branches of deep neural networks or their convolutional versions, identifies and predicts target documents that would interest users reading source documents. The learned model observes, identifies, and detects naturally occurring signals of interestingness in click transitions between source and target documents derived from web browser logs. Interestingness is modeled with deep neural networks that map source-target document pairs to feature vectors in a latent space, trained on document transitions in view of a “context” and optional “focus” of source and target documents. Network parameters are learned to minimize distances between source documents and their corresponding “interesting” targets in that space. The resulting interestingness model has applicable uses, including, but not limited to, contextual entity searches, automatic text highlighting, prefetching documents of likely interest, automated content recommendation, automated advertisement placement, etc.", "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43 lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.", "We present a joint language and translation model based on a recurrent neural network which predicts target words based on an unbounded history of both source and target words. The weaker independence assumptions of this model result in a vastly larger search space compared to related feedforward-based language or translation models. We tackle this issue with a new lattice rescoring algorithm and demonstrate its effectiveness empirically. Our joint model builds on a well known recurrent neural network language model (Mikolov, 2012) augmented by a layer of additional inputs from the source language. We show competitive accuracy compared to the traditional channel model features. Our best results improve the output of a system trained on WMT 2012 French-English data by up to 1.5 BLEU, and by 1.1 BLEU on average across several test sets.", "In this paper, we propose a new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents. In order to capture the rich contextual structures in a query or a document, we start with each word within a temporal context window in a word sequence to directly capture contextual features at the word n-gram level. Next, the salient word n-gram features in the word sequence are discovered by the model and are then aggregated to form a sentence-level feature vector. Finally, a non-linear transformation is applied to extract high-level semantic information to generate a continuous vector representation for the full text string. The proposed convolutional latent semantic model (CLSM) is trained on clickthrough data and is evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that the proposed model effectively captures salient semantic information in queries and documents for the task while significantly outperforming previous state-of-the-art semantic models." ] }
1506.06714
2951580200
We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.
Our work extends the Recurrent Neural Network Language Model (RLM) of @cite_14 , which uses continuous representations to estimate a probability function over natural language sentences. We propose a set of conditional RLMs where contextual information (i.e., past utterances) is encoded in a continuous context vector that helps generate the response. Our models differ from most previous work in the way the context vector is constructed. For example, and use a pre-trained topic model. In our models, the context vector is learned along with the conditional RLM that generates the response. Additionally, the learned context encodings do not exclusively capture contentful words. Indeed, even "stop words" can carry discriminative power in this task; for example, all words in the utterance "how are you?" are commonly characterized as stop words, yet this is a contentful dialog utterance.
{ "cite_N": [ "@cite_14" ], "mid": [ "179875071" ], "abstract": [ "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition" ] }
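The conditional-RLM idea above (a context vector entering the recurrence that produces the next-word distribution) can be sketched with a single forward step. This is not the paper's model: the parameters here are random rather than learned, the mean-of-embeddings context encoder is one assumed simple choice, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 8  # vocabulary size, embedding/hidden size

# Parameters (random here; in a real conditional RLM they are learned
# jointly, including the context encoder).
E = rng.normal(size=(V, d))          # word embeddings
Wh = rng.normal(size=(d, d)) * 0.1   # recurrent weights
Wc = rng.normal(size=(d, d)) * 0.1   # context-conditioning weights
Wo = rng.normal(size=(d, V)) * 0.1   # output projection

def context_vector(utterance_ids):
    """Encode past utterances as the mean of their word embeddings."""
    return E[utterance_ids].mean(axis=0)

def step(h, word_id, c):
    """One conditional-RLM step: the context vector c enters the
    recurrence and biases the next-word distribution."""
    h = np.tanh(E[word_id] + h @ Wh + c @ Wc)
    logits = h @ Wo
    p = np.exp(logits - logits.max())  # softmax over the vocabulary
    return h, p / p.sum()

c = context_vector([1, 4, 7])  # encode a previous utterance
h = np.zeros(d)
h, probs = step(h, word_id=3, c=c)
```

Because the context enters through a dense vector rather than discrete context features, the parameter count stays fixed as more context is conditioned on, which is the sparsity-robustness point made in the surrounding text.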
1506.06279
753799398
A variety of fan-based wikis about episodic fiction (e.g., television shows, novels, movies) exist on the World Wide Web. These wikis provide a wealth of information about complex stories, but if readers are behind in their viewing they run the risk of encountering "spoilers" -- information that gives away key plot points before the intended time of the show's writers. Enterprising readers might browse the wiki in a web archive so as to view the page prior to a specific episode date and thereby avoid spoilers. Unfortunately, due to how web archives choose the "best" page, it is still possible to see spoilers (especially in sparse archives). In this paper we discuss how to use Memento to avoid spoilers. Memento uses TimeGates to determine which best archived page to give back to the user, currently using a minimum distance heuristic. We quantify how this heuristic is inadequate for avoiding spoilers, analyzing data collected from fan wikis and the Internet Archive. We create an algorithm for calculating the probability of encountering a spoiler in a given wiki article. We conduct an experiment with 16 wiki sites for popular television shows. We find that 38% of those pages are unavailable in the Internet Archive. We find that when accessing fan wiki pages in the Internet Archive there is as much as a 66% chance of encountering a spoiler. Using sample access logs from the Internet Archive, we find that 19% of actual requests to the Wayback Machine for wikia.com pages ended in spoilers. We suggest the use of a different minimum distance heuristic, minpast, for wikis, using the desired datetime as an upper bound.
Almeida, Mozafari, and Cho produced one of the first studies of the behavior of contributors to Wikipedia @cite_23 . The authors discover that there are distinct groups of Wikipedia contributors. They suggest that as the number of articles increases, the contributors' attention is split among more and more content, resulting in a larger number of revising contributors rather than article creators. This informs our use of the number of edits as a surrogate for the popularity of a page.
{ "cite_N": [ "@cite_23" ], "mid": [ "2115055535" ], "abstract": [ "A recent phenomenon on the Web is the emergence and proliferation of new social media systems allowing social interaction between people. One of the most popular of these systems is Wikipedia that allows users to create content in a collaborative way. Despite its current popularity, not much is known about how users interact with Wikipedia and how it has evolved over time. In this paper we aim to provide a first, extensive study of the user behavior on Wikipedia and its evolution. Compared to prior studies, our work differs in several ways. First, previous studies on the analysis of the user workloads (for systems such as peer-to-peer systems [10] and Web servers [2]) have mainly focused on understanding the users who are accessing information. In contrast, Wikipedia’s provides us with the opportunity to understand how users create and maintain information since it provides the complete evolution history of its content. Second, the main focus of prior studies is evaluating the implication of the user workloads on the system performance, while our study is trying to understand the evolution of the data corpus and the user behavior themselves. Our main findings include that (1) the evolution and updates of Wikipedia is governed by a self-similar process, not by the Poisson process that has been observed for the general Web [4, 6] and (2) the exponential growth of Wikipedia is mainly driven by its rapidly increasing user base, indicating the importance of its open editorial policy for its current success. We also find that (3) the number of updates made to the Wikipedia articles exhibit a power-law distribution, but the distribution is less skewed than those obtained from other studies." ] }
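The contrast between the minimum-distance TimeGate heuristic and the proposed minpast heuristic described above can be shown in a few lines. This is a simplified sketch, not the Memento protocol implementation; the example dates are invented.

```python
from datetime import datetime

def mindist(mementos, desired):
    """Minimum-distance heuristic: closest snapshot in either direction,
    which can return a page from *after* the desired datetime (a spoiler)."""
    return min(mementos, key=lambda m: abs((m - desired).total_seconds()))

def minpast(mementos, desired):
    """Proposed heuristic for wikis: closest snapshot at or before the
    desired datetime, so later revisions can never leak plot points."""
    past = [m for m in mementos if m <= desired]
    return max(past) if past else None

# Sparse archive of a fan wiki page (illustrative capture dates).
mementos = [datetime(2014, 5, 1), datetime(2014, 9, 1), datetime(2015, 2, 1)]
desired = datetime(2014, 12, 1)  # the viewer is caught up to this date
```

Here mindist picks the 2015-02-01 capture (62 days away, but in the future of the viewer, so a potential spoiler), while minpast picks the 2014-09-01 capture (91 days away, but safe).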
1506.06302
2953142925
Given an undirected graph @math and a fixed "pattern" graph @math with @math vertices, we consider the @math -Transversal and @math -Packing problems. The former asks to find the smallest @math such that the subgraph induced by @math does not have @math as a subgraph, and the latter asks to find the maximum number of pairwise disjoint @math -subsets @math such that the subgraph induced by each @math has @math as a subgraph. We prove that if @math is 2-connected, @math -Transversal and @math -Packing are almost as hard to approximate as general @math -Hypergraph Vertex Cover and @math -Set Packing, so it is NP-hard to approximate them within a factor of @math and @math respectively. We also show that there is a 1-connected @math where @math -Transversal admits an @math -approximation algorithm, so that the connectivity requirement cannot be relaxed from 2 to 1. For a special case of @math -Transversal where @math is a (family of) cycles, we mention the implication of our result to the related Feedback Vertex Set problem, and give a different hardness proof for directed graphs.
After the aforementioned work characterizing the pattern graphs @math for which @math -Packing admits a polynomial-time exact algorithm @cite_41 @cite_31 , Lund and Yannakakis @cite_63 studied the maximization version of @math -Transversal (i.e., find the largest @math such that the subgraph induced by @math does not have @math as a subgraph) and showed that it is hard to approximate within a factor of @math for any @math . They also mentioned the minimization version of two extensions of @math -Transversal: the most general node-deletion problem is APX-hard for every nontrivial hereditary (i.e., closed under node deletion) property, while the special case where the property is characterized by a finite number of forbidden subgraphs (i.e., @math -Transversal in our terminology) can be approximated within a constant ratio. They did not provide explicit constants (one trivial approximation ratio for @math -Transversal is @math ), so our result can be viewed as a quantitative extension of their inapproximability results for the special case of @math -Transversal.
{ "cite_N": [ "@cite_41", "@cite_31", "@cite_63" ], "mid": [ "1999341643", "1964448775", "1563577400" ], "abstract": [ "For arbitrary graphs G and H, a G-factor of H is a spanning subgraph of H composed of disjoint copies of G. G-factors are natural generalizations of l-factors (or perfect matchings), in which G replaces the complete graph on two vertices. Our results show that the perfect matching problem is essentially the only instance of the G-factor problem that is likely to admit a polynomial time bounded solution. Specifically, if G has any component with three or more vertices then the existence question for G-factors is NP-complete. (In all other cases the question can be resolved in polynomial time.) The notion of a G-factor is further generalized by replacing G by an arbitrary family of graphs. This generalization forms the foundation for an extension of the traditional theory of matching. This theory, whose details will be developed elsewhere, includes, in addition to further NP-completeness results, new polynomial algorithms and simple duality results. Some indication of the nature and scope of this theory are presented here.", "An H-decomposition of a graph G=(V,E) is a partition of E into subgraphs isomorphic to H. Given a fixed graph H, the H-decomposition problem is to determine whether an input graph G admits an H-decomposition. In 1980, Holyer conjectured that H-decomposition is NP-complete whenever H is connected and has three edges or more. Some partial results have been obtained since then. A complete proof of Holyer's conjecture is the content of this paper. The characterization problem of all graphs H for which H-decomposition is NP-complete is hence reduced to graphs where every connected component contains at most two edges.", "We consider the following class of problems: given a graph, find the maximum number of nodes inducing a subgraph that satisfies a desired property π, such as planar, acyclic, bipartite, etc. We show that this problem is hard to approximate for any property π on directed or undirected graphs that is nontrivial and hereditary on induced subgraphs." ] }
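The trivial constant-factor approximation for @math -Transversal mentioned above (greedily delete all vertices of any remaining copy of @math, giving ratio equal to the number of vertices of @math) can be illustrated for the case where @math is a triangle. This brute-force sketch is for intuition only, not an efficient implementation.

```python
from itertools import combinations

def triangle_transversal(n, edges):
    """Greedy |V(H)|-approximation for H-Transversal with H a triangle:
    while some triangle remains, delete all three of its vertices.
    Any optimal transversal must contain at least one vertex of each
    deleted (vertex-disjoint) triangle, giving a factor-3 guarantee."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    removed = set()
    found = True
    while found:
        found = False
        for u, v, w in combinations(sorted(set(range(n)) - removed), 3):
            if v in adj[u] and w in adj[u] and w in adj[v]:
                removed |= {u, v, w}
                found = True
                break
    return removed

# Two triangles sharing vertex 0: the optimum deletes just {0}, while
# the greedy algorithm deletes 3 vertices -- within the factor-3 bound.
edges = [(0, 1), (1, 2), (0, 2), (0, 3), (3, 4), (0, 4)]
hit = triangle_transversal(5, edges)
```

After removing the first triangle found, (0, 1, 2), no triangle survives on the remaining vertices {3, 4}, so the returned transversal has size 3 against an optimum of 1.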
1506.06737
1839185180
We consider a bipartite stochastic block model on vertex sets @math and @math , with planted partitions in each, and ask at what densities efficient algorithms can recover the partition of the smaller vertex set. When @math , multiple thresholds emerge. We first locate a sharp threshold for detection of the partition, in the sense of the results of mossel2012stochastic,mossel2013proof and massoulie2014community for the stochastic block model. We then show that at a higher edge density, the singular vectors of the rectangular biadjacency matrix exhibit a localization delocalization phase transition, giving recovery above the threshold and no recovery below. Nevertheless, we propose a simple spectral algorithm, Diagonal Deletion SVD, which recovers the partition at a nearly optimal edge density. The bipartite stochastic block model studied here was used by feldman2014algorithm to give a unified algorithm for recovering planted partitions and assignments in random hypergraphs and random @math -SAT formulae respectively. Our results give the best known bounds for the clause density at which solutions can be found efficiently in these models as well as showing a barrier to further improvement via this reduction to the bipartite block model.
The stochastic block model has been a source of considerable recent interest. There are many algorithmic approaches to the problem, including algorithms based on maximum-likelihood methods @cite_7 , belief propagation @cite_15 , spectral methods @cite_32 , modularity maximization @cite_12 , and combinatorial methods @cite_23 , @cite_29 , @cite_10 , @cite_38 . @cite_39 gave the first algorithm to detect partitions in the sparse, constant average degree regime. @cite_15 conjectured the precise achievable constant and subsequent algorithms @cite_40 @cite_8 @cite_9 @cite_26 achieved this bound. Sharp thresholds for full recovery (as opposed to detection) have been found by @cite_25 @cite_24 @cite_33 . @cite_22 used ideas for reconstructing assignments to random @math -SAT formulas in the planted @math -SAT model to show that Goldreich's construction of a one-way function in @cite_18 is not secure when the predicate correlates with either one or two of its inputs. For more on Goldreich's PRG from a cryptographic perspective see the survey of @cite_3 . @cite_36 gave an algorithm to recover the partition of @math in the bipartite stochastic block model to solve instances of planted random @math -SAT and planted hypergraph partitioning using subsampled power iteration.
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_29", "@cite_3", "@cite_15", "@cite_10", "@cite_38", "@cite_18", "@cite_8", "@cite_39", "@cite_23", "@cite_26", "@cite_7", "@cite_32", "@cite_40", "@cite_25", "@cite_12", "@cite_33", "@cite_9", "@cite_24" ], "mid": [ "2002676260", "2950319503", "2016776056", "181881105", "2963264680", "2024281442", "2024128809", "2951745592", "", "2133361319", "", "", "", "1605711022", "2023348178", "2093663313", "", "", "", "" ], "abstract": [ "Goldreich (ECCC 2000) suggested a simple construction of a candidate one-way function f : 0, 1 n → 0, 1 m where each bit of output is a fixed predicate P of a constant number d of (random) input bits. We investigate the security of this construction in the regime m = Dn, where D(d) is a sufficiently large constant. We prove that for any predicate P that correlates with either one or two of its inputs, f can be inverted with high probability. We also prove an amplification claim regarding Goldreich’s construction. Suppose we are given an assignment @math that has correlation @math with the hidden assignment @math . Then, given access to x′, it is possible to invert f on x with high probability, provided @math is sufficiently large.", "We present an algorithm for recovering planted solutions in two well-known models, the stochastic block model and planted constraint satisfaction problems, via a common generalization in terms of random bipartite graphs. Our algorithm matches up to a constant factor the best-known bounds for the number of edges (or constraints) needed for perfect recovery and its running time is linear in the number of edges used. The time complexity is significantly better than both spectral and SDP-based approaches. The main contribution of the algorithm is in the case of unequal sizes in the bipartition (corresponding to odd uniformity in the CSP). 
Here our algorithm succeeds at a significantly lower density than the spectral approaches, surpassing a barrier based on the spectral norm of a random matrix. Other significant features of the algorithm and analysis include (i) the critical use of power iteration with subsampling, which might be of independent interest; its analysis requires keeping track of multiple norms of an evolving solution (ii) it can be implemented statistically, i.e., with very limited access to the input distribution (iii) the algorithm is extremely simple to implement and runs in linear time, and thus is practical even for very large instances.", "Abstract The average-case complexity of recognising some NP-complete properties is examined, when the instances are randomly selected from those which have the property. We carry out this analysis for 1. (1) Graph k -colourability. We describe an O ( n 2 ) expected time algorithm for n -vertex graphs, with k constant. 2. (2) Small equitable cut. We describe an O ( n 3 ) expected time algorithm for finding and verifying , the minimum equitable cut in a 2 n -vertex graph G , condition on G having one with at most (1 − ϵ)n 2 2 edges. 3. (3) Partitioning a 2 n vertex graph into two sparse vertex induced subgraphs of a given class. We describe an O ( n 3 ) expected time algorithm for computing such a partition. 4. (4) The number problem 3-PARTITION. We describe an O ( n 2 ) expected time algorithm for problems with 3 n integers.", "Constant parallel-time cryptography allows performing complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions that each of their output bits depend on a constant number of input bits. The feasibility of such highly efficient cryptographic constructions was widely studied in the last decade via two main research threads.", "The stochastic block model with two communities, or equivalently the planted bisection model, is a popular model of random graph exhibiting a cluster behavior. 
In the symmetric case, the graph has two equally sized clusters and vertices connect with probability @math within clusters and @math across clusters. In the past two decades, a large body of literature in statistics and computer science has focused on providing lower bounds on the scaling of @math to ensure exact recovery. In this paper, we identify a sharp threshold phenomenon for exact recovery: if @math and @math are constant (with @math ), recovering the communities with high probability is possible if @math and is impossible if @math . In particular, this improves the existing bounds. This also sets a new line of sight for efficient clustering algorithms. While maximum likelihood (ML) achieves the optimal threshold (by definition), it is in the worst case NP-hard. This paper proposes an efficient algorithm based on a semidefinite programming relaxation of ML, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest that it may achieve the threshold. An efficient algorithm that succeeds all the way down to the threshold is also obtained using a partial recovery algorithm combined with a local improvement procedure.", "We resolve in the affirmative a question of Boppana and Bui: whether simulated annealing can, with high probability and in polynomial time, find the optimal bisection of a random graph in npr when p − R = Θ(nΔ − 2) for Δ 2. (The random graph model npr specifies a “planted” bisection of density r, separating two n 2-vertex subsets of slightly higher density p.) We show that simulated “annealing” at an appropriate fixed temperature (i.e., the Metropolis algorithm) finds the unique smallest bisection in O(n2 + ) steps with very high probability, provided . (By using a slightly modified neighborhood structure, the number of steps can be reduced to O(n1+).) 
We leave open the question of whether annealing is effective for Δ in the range , whose lower limit represents the threshold at which the planted bisection becomes lost amongst other random small bisections. It also remains open whether hillclimbing (i.e., annealing at temperature 0) solves the same problem; towards the latter result, Juels has recently extended our analysis and shown that random hillclimbing finds the minimum bisection with constant probability, when p − R = Ω(1) (corresponding to Δ=2).", "The NP-hard graph bisection problem is to partition the nodes of an undirected graph into two equal-sized groups so as to minimize the number of edges that cross the partition. The more general graph l-partition problem is to partition the nodes of an undirected graph into l equal-sized groups so as to minimize the total number of edges that cross between groups.", "", "", "In this paper we study the use of spectral techniques for graph partitioning. Let G = (V, E) be a graph whose vertex set has a ‘latent’ partition V1,. . ., Vk. Moreover, consider a ‘density matrix’ Ɛ = (Ɛvw)v, sw∈V such that, for v ∈ Vi and w ∈ Vj, the entry Ɛvw is the fraction of all possible Vi−Vj-edges that are actually present in G. We show that on input (G, k) the partition V1,. . ., Vk can (very nearly) be recovered in polynomial time via spectral methods, provided that the following holds: Ɛ approximates the adjacency matrix of G in the operator norm, for vertices v ∈ Vi, w ∈ Vj ≠ Vi the corresponding column vectors Ɛv, Ɛw are separated, and G is sufficiently ‘regular’ with respect to the matrix Ɛ. This result in particular applies to sparse graphs with bounded average degree as n = #V → ∞, and it has various consequences on partitioning random graphs.", "", "", "", "Problems such as bisection, graph coloring, and clique are generally believed hard in the worst case. 
However, they can be solved if the input data is drawn randomly from a distribution over graphs containing acceptable solutions. In this paper we show that a simple spectral algorithm can solve all three problems above in the average case, as well as a more general problem of partitioning graphs based on edge density. In nearly all cases our approach meets or exceeds previous parameters, while introducing substantial generality. We apply spectral techniques, using foremost the observation that in all of these problems, the expected adjacency matrix is a low rank matrix wherein the structure of the solution is evident.", "[1] conjectured the existence of a sharp threshold on model parameters for community detection in sparse random graphs drawn from the stochastic block model. Mossel, Neeman and Sly [2] established the negative part of the conjecture, proving impossibility of non-trivial reconstruction below the threshold. In this work we solve the positive part of the conjecture. To that end we introduce a modified adjacency matrix B which counts self-avoiding paths of a given length e between pairs of nodes. We then prove that for logarithmic length e, the leading eigenvectors of this modified matrix provide a non-trivial reconstruction of the underlying structure, thereby settling the conjecture. A key step in the proof consists in establishing a weak Ramanujan property of the constructed matrix B. Namely, the spectrum of B consists in two leading eigenvalues ρ(B), λ2 and n -- 2 eigenvalues of a lower order O(ne √ρ(B) for all e 0, ρ(B) denoting B's spectral radius.", "The planted bisection model is a random graph model in which the nodes are divided into two equal-sized communities and then edges are added randomly in a way that depends on the community membership. We establish necessary and sufficient conditions for the asymptotic recoverability of the planted bisection in this model. 
When the bisection is asymptotically recoverable, we give an efficient algorithm that successfully recovers it. We also show that the planted bisection is recoverable asymptotically if and only if with high probability every node belongs to the same community as the majority of its neighbors. Our algorithm for finding the planted bisection runs in time almost linear in the number of edges. It has three stages: spectral clustering to compute an initial guess, a \"replica\" stage to get almost every vertex correct, and then some simple local moves to finish the job. An independent work by Abbe, Bandeira, and Hall establishes similar (slightly weaker) results but only in the sparse case where pn, qn = Θ(log n n).", "", "", "", "" ] }
1506.05865
609399965
Automatic text summarization is widely regarded as a highly difficult problem, partially because of the lack of large text summarization datasets. Due to the great challenge of constructing large-scale summaries for full texts, in this paper we introduce a large corpus for Chinese short text summarization, constructed from the Chinese microblogging website Sina Weibo and released to the public at this http URL . This corpus consists of over 2 million real Chinese short texts with short summaries given by the author of each text. We also manually tagged the relevance of 10,666 short summaries with their corresponding short texts. Based on the corpus, we introduce recurrent neural networks for summary generation and achieve promising results, which not only shows the usefulness of the proposed corpus for short text summarization research, but also provides a baseline for further research on this topic.
Automatic text summarization in some form has been studied since the 1950s. Since then, most research has been related to extractive summarization, which analyzes the organization of the words in the document @cite_20 @cite_11 . Because supervised machine learning methods require labeled data sets and labeling data is very labor-intensive, some research has focused on unsupervised methods @cite_16 . The scale of existing data sets is usually very small (most of them contain fewer than 1000 documents). For example, the DUC2002 dataset contains 567 documents, and each document is provided with two 100-word human summaries. Our work is also related to headline generation, the task of generating a single sentence that entitles a text. Colmenares et al. constructed a 1.3 million financial news headline dataset written in English for headline generation @cite_15 . However, that data set is not publicly available.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2293941196", "2144270295", "", "1974339500" ], "abstract": [ "Automatic headline generation is a sub-task of document summarization with many reported applications. In this study we present a sequence-prediction technique for learning how editors title their news stories. The introduced technique models the problem as a discrete optimization task in a feature-rich space. In this space the global optimum can be found in polynomial time by means of dynamic programming. We train and test our model on an extensive corpus of financial news, and compare it against a number of baselines by using standard metrics from the document summarization domain, as well as some new ones proposed in this work. We also assess the readability and informativeness of the generated titles through human evaluation. The obtained results are very appealing and substantiate the soundness of the approach.", "This paper presents an innovative unsupervised method for automatic sentence extraction using graph-based ranking algorithms. We evaluate the method in the context of a text summarization task, and show that the results obtained compare favorably with previously published results on established benchmarks.", "", "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \"auto-abstract.\"" ] }
1506.06053
604542539
The spatial preferential attachment (SPA) is a model for complex networks. In the SPA model, nodes are embedded in a metric space, and each node has a sphere of influence whose size increases if the node gains an in-link, and otherwise decreases with time. In this paper, we study the behaviour of the SPA model when the distribution of the nodes is non-uniform. Specifically, the space is divided into dense and sparse regions, where it is assumed that the dense regions correspond to coherent communities. We prove precise theoretical results regarding the degree of a node, the number of common neighbours, and the average out-degree in a region. Moreover, we show how these theoretically derived results about the graph properties of the model can be used to formulate a reliable estimator for the distance between certain pairs of nodes, and to estimate the density of the region containing a given node.
Efforts to extract node information through link analysis began with a heuristic quantification of entity similarity: numerical values, obtained from the graph structure, indicating the relatedness of two nodes. Early simple measures of entity similarity, such as the Jaccard coefficient @cite_6 , gave way to iterative graph theoretic measures, in which two objects are similar if they are related to similar objects, such as SimRank @cite_4 . Many such measures also incorporate co-citation, the number of common neighbours of two nodes, as proposed in the context of bibliographic research in an early paper by Small @cite_16 . In @cite_17 , the authors make inferences about the social space for nodes in a social network, using Bayesian methods and maximum likelihood.
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_6", "@cite_17" ], "mid": [ "2005207065", "2117831564", "1555083332", "2066459332" ], "abstract": [ "A new form of document coupling called co-citation is defined as the frequency with which two documents are cited together. The co-citation frequency of two scientific papers can be determined by comparing lists of citing documents in the Science Citation Index and counting identical entries. Networks of co-cited papers can be generated for specific scientific specialties, and an example is drawn from the literature of particle physics. Co-citation patterns are found to differ significantly from bibliographic coupling patterns, but to agree generally with patterns of direct citation. Clusters of co-cited papers provide a new way to study the specialty structure of science. They may provide a new approach to indexing and to the creation of SDI profiles.", "The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. 
We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.", "This paper proposes an algorithm and data structure for fast computation of similarity based on Jaccard coefficient to retrieve images with regions similar to those of a query image. The similarity measures the degree of overlap between the regions of an image and those of another image. The key idea for fast computation of the similarity is to use the runlength description of an image for computing the number of overlapped pixels between the regions. We present an algorithm and data structure, and do experiments on 30,000 images to evaluate the performance of our algorithm. Experiments showed that the proposed algorithm is 5.49 (2.36) times faster than a naive algorithm on the average (the worst). And we theoretically gave fairly good estimates of the computation time.", "Network models are widely used to represent relational information among interacting units. In studies of social networks, recent emphasis has been placed on random graph models where the nodes usually represent individual social actors and the edges represent the presence of a specified relation between actors. We develop a class of models where the probability of a relation between actors depends on the positions of individuals in an unobserved “social space.” We make inference for the social space within maximum likelihood and Bayesian frameworks, and propose Markov chain Monte Carlo procedures for making inference on latent positions and the effects of observed covariates. We present analyses of three standard datasets from the social networks literature, and compare the method to an alternative stochastic blockmodeling approach. 
In addition to improving on model fit for these datasets, our method provides a visual and interpretable model-based spatial representation of social relationships and improv..." ] }
1506.06053
604542539
The spatial preferential attachment (SPA) is a model for complex networks. In the SPA model, nodes are embedded in a metric space, and each node has a sphere of influence whose size increases if the node gains an in-link, and otherwise decreases with time. In this paper, we study the behaviour of the SPA model when the distribution of the nodes is non-uniform. Specifically, the space is divided into dense and sparse regions, where it is assumed that the dense regions correspond to coherent communities. We prove precise theoretical results regarding the degree of a node, the number of common neighbours, and the average out-degree in a region. Moreover, we show how these theoretically derived results about the graph properties of the model can be used to formulate a reliable estimator for the distance between certain pairs of nodes, and to estimate the density of the region containing a given node.
Generative spatial models were proposed in a more general setting, where the main objective was to generate graphs with properties that correspond to those observed in real-life networks. Different approaches were explored, for example in @cite_14 using thresholds, or in @cite_3 @cite_0 using a geometric variant of the preferential attachment. Graph properties of this model were analyzed by Jordan in @cite_7 ; follow-up work on this model can be found in @cite_13 . In @cite_12 , a non-uniform distribution of the points in space is considered. In @cite_8 , Jacob and Mörters propose a probabilistic spatial model where the link probability is a function decreasing with distance. The setting is general, and includes the SPA model as a special case. Follow-up work on this model can be found in @cite_5 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_3", "@cite_0", "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "2099570416", "2015170934", "1970878284", "2181086294", "", "1960772788", "1804927206", "1993806092" ], "abstract": [ "We analyze the structure of random graphs generated by the geographical threshold model. The model is a generalization of random geometric graphs. Nodes are distributed in space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. We show how the degree distribution, percolation and connectivity transitions, clustering coefficient, and diameter relate to the threshold value and weight distribution. We give bounds on the threshold value guaranteeing the presence or absence of a giant component, connectivity and disconnectivity of the graph, and small diameter. Finally, we consider the clustering coefficient for nodes with a given degree l, finding that its scaling is very close to 1 l when the node weights are exponentially distributed.", "We investigate the degree sequence of the geometric preferential attachment model of Flaxman, Frieze and Vera (2006), (2007) in the case where the self-loop parameter a is set to 0. We show that, given certain conditions on the attractiveness function F, the degree sequence converges to the same sequence as found for standard preferential attachment in (2001). We also apply our method to the extended model introduced in van der Esker (2008) which allows for an initial attractiveness term, proving similar results.", "We define a class of growing networks in which new nodes are given a spatial position and are connected to existing nodes with a probability mechanism favoring short distances and high degrees. The competition of preferential attachment and spatial clustering gives this model a range of interesting properties. 
Empirical degree distributions converge to a limit law, which can be a power law with any exponent τ>2 . The average clustering coefficient of the networks converges to a positive limit. Finally, a phase transition occurs in the global clustering coefficients and empirical distribution of edge lengths when the power-law exponent crosses the critical value τ=3 . Our main tool in the proof of these results is a general weak law of large numbers in the spirit of Penrose and Yukich.", "A detailed understanding of expansion in complex networks can greatly aid in the design and analysis of algorithms for a variety of important network tasks, including routing messages, ranking nodes, and compressing graphs. This has motivated several recent investigations of expansion properties in real-world graphs and also in random models of real-world graphs, like the preferential attachment graph. The results point to a gap between real-world observations and theoretical models. Some real-world graphs are expanders and others are not, but a graph generated by the preferential attachment model is an expander whp. We study a random graph Gn that combines certain aspects of geometric random graphs and preferential attachment graphs. This model yields a graph with power-law degree distribution where the expansion property depends on a tunable parameter of the model. The vertices of Gn are n sequentially generated points x1, x2, ..., xn chosen uniformly at random from the unit sphere in R3. After generating xt, we randomly connect it to m points from those points in x1, x2, ..., xt-1....", "", "A growing family of random graphs is called robust if it retains a giant component after percolation with arbitrary positive retention probability. We study robustness for graphs, in which new vertices are given a spatial position on the @math -dimensional torus and are connected to existing vertices with a probability favouring short spatial distances and high degrees. 
In this model of a scale-free network with clustering we can independently tune the power law exponent @math of the degree distribution and the rate @math at which the connection probability decreases with the distance of two vertices. We show that the network is robust if @math . In the case of one-dimensional space we also show that the network is not robust if @math . This implies that robustness of a scale-free network depends not only on its power-law exponent but also on its clustering features. Other than the classical models of scale-free networks our model is not locally tree-like, and hence we need to develop novel methods for its study, including, for example, a surprising application of the BK-inequality.", "We study an evolving spatial network in which sequentially arriving vertices are joined to existing vertices at random according to a rule that combines preference according to degree with preference according to spatial proximity. We investigate phase transitions in graph structure as the relative weighting of these two components of the attachment rule is varied. Previous work of one of the authors showed that when the geometric component is weak, the limiting degree sequence of the resulting graph coincides with that of the standard Barabási--Albert preferential attachment model. We show that at the other extreme, in the case of a sufficiently strong geometric component, the limiting degree sequence coincides with that of a purely geometric model, the on-line nearest-neighbour graph, which is of interest in its own right and for which we prove some extensions of known results. We also show the presence of an intermediate regime, in which the behaviour differs significantly from both the on-line nearest-neighbour graph and the Barabási--Albert model; in this regime, we obtain a stretched exponential upper bound on the degree sequence. 
Our results lend some mathematical support to simulation studies of Manna and Sen, while proving that the power law to stretched exponential phase transition occurs at a different point from the one conjectured by those authors.", "We investigate the degree sequences of geometric preferential attachment graphs in general compact metric spaces. We show that, under certain conditions on the attractiveness function, the behaviour of the degree sequence is similar to that of the preferential attachment with multiplicative fitness models investigated by When the metric space is finite, the degree distribution at each point of the space converges to a degree distribution which is an asymptotic power law whose index depends on the chosen point. For infinite metric spaces, we can show that for vertices in a Borel subset of S of positive measure the degree distribution converges to a distribution whose tail is close to that of a power law whose index again depends on the set." ] }
1506.06053
604542539
The spatial preferential attachment (SPA) is a model for complex networks. In the SPA model, nodes are embedded in a metric space, and each node has a sphere of influence whose size increases if the node gains an in-link, and otherwise decreases with time. In this paper, we study the behaviour of the SPA model when the distribution of the nodes is non-uniform. Specifically, the space is divided into dense and sparse regions, where it is assumed that the dense regions correspond to coherent communities. We prove precise theoretical results regarding the degree of a node, the number of common neighbours, and the average out-degree in a region. Moreover, we show how these theoretically derived results about the graph properties of the model can be used to formulate a reliable estimator for the distance between certain pairs of nodes, and to estimate the density of the region containing a given node.
The SPA model was first proposed in @cite_1 as a model for the World Wide Web. In @cite_1 and @cite_15 , it was proved that the SPA model produces graphs with certain graph properties that correspond to those observed in real-life networks. The authors' previous paper, @cite_10 , used common neighbours to explore the underlying geometry of the SPA model and quantify node similarity based on distance in the space. However, the distribution of nodes in space was assumed to be uniform. The approach used in this paper is similar to that in @cite_10 , but we investigate the complications that arise when the distribution is non-uniform, which is clearly a more realistic setting.
{ "cite_N": [ "@cite_10", "@cite_15", "@cite_1" ], "mid": [ "2963690324", "2174413628", "" ], "abstract": [ "The spatial preferred attachment (SPA) model is a model for networked information spaces such as domains of the World Wide Web, citation graphs, and on-line social networks. It uses a metric space to model the hidden attributes of the vertices. Thus, vertices are elements of a metric space, and link formation depends on the metric distance between vertices. We show, through theoretical analysis and simulation, that for graphs formed according to the SPA model it is possible to infer the metric distance between vertices from the link structure of the graph. Precisely, the estimate is based on the number of common neighbours of a pair of vertices, a measure known as co-citation. To be able to calculate this estimate, we derive a precise relation between the number of common neighbours and metric distance. We also analyse the distribution of edge lengths, where the length of an edge is the metric distance between its end points. We show that this distribution has three different regimes, and that the tail of this distribution follows a power law.", "We investigate a stochastic model for complex networks, based on a spatial embedding of the nodes, called the Spatial Preferred Attachment (SPA) model. In the SPA model, nodes have spheres of influence of varying size, and new nodes may only link to a node if they fall within its influence region. The spatial embedding of the nodes models the background knowledge or identity of the node, which influences its link environment. In this paper, we focus on the (directed) diameter, small separators, and the (weak) giant component of the model.", "" ] }
1506.06053
604542539
The spatial preferential attachment (SPA) is a model for complex networks. In the SPA model, nodes are embedded in a metric space, and each node has a sphere of influence whose size increases if the node gains an in-link, and otherwise decreases with time. In this paper, we study the behaviour of the SPA model when the distribution of the nodes is non-uniform. Specifically, the space is divided into dense and sparse regions, where it is assumed that the dense regions correspond to coherent communities. We prove precise theoretical results regarding the degree of a node, the number of common neighbours, and the average out-degree in a region. Moreover, we show how these theoretically derived results about the graph properties of the model can be used to formulate a reliable estimator for the distance between certain pairs of nodes, and to estimate the density of the region containing a given node.
An earlier version of this work, containing no proofs, was presented at the workshop WAW 2013. An extended abstract can be found in @cite_11 .
{ "cite_N": [ "@cite_11" ], "mid": [ "2236074598" ], "abstract": [ "In this paper, a spatial preferential attachment model for complex networks in which there is non-uniform distribution of the nodes in the metric space is studied. In this model, the metric layout represents hidden information about the similarity and community structure of the nodes. It is found that, for density functions that are locally constant, the graph properties can be well approximated by considering the graph as a union of graphs from uniform density spatial models corresponding to the regions of different densities. Moreover, methods from the uniform case can be used to extract information about the metric layout. Specifically, through link and co-citation analysis the density of a node’s region can be estimated and the pairwise distances for certain nodes can be recovered with good accuracy." ] }
1506.05908
2950809786
Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.
With or without such extensions, Knowledge Tracing suffers from several difficulties. First, the binary representation of student understanding may be unrealistic. Second, the meaning of the hidden variables and their mappings onto exercises can be ambiguous, rarely meeting the model's expectation of a single concept per exercise. Several techniques have been developed to create and refine concept categories and concept-exercise mappings. The current gold standard, Cognitive Task Analysis @cite_16 is an arduous and iterative process where domain experts ask learners to talk through their thought processes while solving problems. Finally, the binary response data used to model transitions imposes a limit on the kinds of exercises that can be modeled.
{ "cite_N": [ "@cite_16" ], "mid": [ "1792756357" ], "abstract": [ "Cognitive task analysis is defined as the extension of traditional task analysis techniques to yield information about the knowledge, thought processes and goal structures that underlie observable task performance. Cognitive task analyses are conducted for a wide variety of purposes, including the design of computer systems to support human work, the development of training, and the development of tests to certify competence. As part of its Programme of Work, NATO Research Study Group 27 on Cognitive Task Analysis has undertaken the task of reviewing existing cognitive task analysis techniques. The Group concludes that few integrated methods exist, that little attention is being paid to the conditions under which methods are appropriate, and that often it is unclear how the products of cognitive task analysis should be used. RSG.27 has also organized a workshop with experts in the field of cognitive task analysis. The most important issues that were discussed during the workshop were: (1) the use of CTA in the design of new systems, (2) the question when to use what technique, and (3) the role of CTA in system design. RSG.27 emphasizes: (1) that is important for the CTA community to be able to empirically demonstrate the added value of a CTA; (2) it is critical for the success of CTA to be involved in the design process from the start to finish, and to establish clear links with methods that are used by other disciplines, and (3) recommends that more research effort be directed to the issue of the reliability of CTA techniques. (P)" ] }
1506.05908
2950809786
Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.
Partially Observable Markov Decision Processes (POMDPs) have been used to model learner behavior over time, in cases where the learner follows an open-ended path to arrive at a solution @cite_2 . Although POMDPs present an extremely flexible framework, they require exploration of an exponentially large state space. Current implementations are also restricted to a discrete state space, with hard-coded meanings for latent variables. This makes them intractable or inflexible in practice, though they have the potential to overcome both of those limitations.
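The exponential blow-up mentioned above already shows in the exact belief update, which sums over every state for every successor state. A toy sketch over a hypothetical two-state learner (the state names and the transition and observation tables are invented for illustration):

```python
def belief_update(belief, action, obs, T, O):
    """Discrete POMDP belief update:
    b'(s') ∝ O(obs | s', action) * sum_s T(s' | s, action) * b(s).
    T and O are nested dicts; cost grows with the square of the
    state count, which is why large discrete state spaces hurt."""
    states = list(belief)
    new_b = {s2: O[action][s2][obs] *
                 sum(T[action][s][s2] * belief[s] for s in states)
             for s2 in states}
    z = sum(new_b.values())
    return {s: v / z for s, v in new_b.items()}

# hypothetical two-state learner model
T = {"teach": {"unmastered": {"unmastered": 0.6, "mastered": 0.4},
               "mastered":   {"unmastered": 0.0, "mastered": 1.0}}}
O = {"teach": {"unmastered": {"correct": 0.2, "wrong": 0.8},
               "mastered":   {"correct": 0.9, "wrong": 0.1}}}
b = belief_update({"unmastered": 0.7, "mastered": 0.3}, "teach", "correct", T, O)
```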
{ "cite_N": [ "@cite_2" ], "mid": [ "2160178500" ], "abstract": [ "Human and automated tutors attempt to choose pedagogical activities that will maximize student learning, informed by their estimates of the student's current knowledge. There has been substantial research on tracking and modeling student learning, but significantly less attention on how to plan teaching actions and how the assumed student model impacts the resulting plans. We frame the problem of optimally selecting teaching actions using a decision-theoretic approach and show how to formulate teaching as a partially observable Markov decision process planning problem. This framework makes it possible to explore how different assumptions about student learning and behavior should affect the selection of teaching actions. We consider how to apply this framework to concept learning problems, and we present approximate methods for finding optimal teaching actions, given the large state and action spaces that arise in teaching. Through simulations and behavioral experiments, we explore the consequences of choosing teacher actions under different assumed student models. In two concept-learning tasks, we show that this technique can accelerate learning relative to baseline performance." ] }
1506.05908
2950809786
Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.
Simpler models from the Performance Factors Analysis (PFA) framework @cite_27 and Learning Factors Analysis (LFA) framework @cite_19 have shown predictive power comparable to BKT @cite_23 . To obtain better predictive results than with any one model alone, various ensemble methods have been used to combine BKT and PFA @cite_10 . Model combinations supported by AdaBoost, Random Forest, linear regression, logistic regression and a feed-forward neural network were all shown to deliver superior results to BKT and PFA on their own. But because of the learner models they rely on, these ensemble techniques grapple with the same limitations, including a requirement for accurate concept labeling.
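For reference, the PFA prediction mentioned above is a logistic function of per-skill easiness plus weighted counts of the learner's prior successes and failures on each skill. A sketch with made-up parameter values:

```python
import math

def pfa_predict(skills, successes, failures, beta, gamma, rho):
    """Performance Factors Analysis: P(correct) = sigmoid(sum over the
    item's skills of easiness + gamma*successes + rho*failures).
    All parameter values here are illustrative, not fitted."""
    logit = sum(beta[k] + gamma[k] * successes[k] + rho[k] * failures[k]
                for k in skills)
    return 1.0 / (1.0 + math.exp(-logit))

p = pfa_predict(
    skills=["fractions"],
    successes={"fractions": 3}, failures={"fractions": 1},
    beta={"fractions": -0.5}, gamma={"fractions": 0.4}, rho={"fractions": -0.2},
)
```

The accurate concept labeling the paragraph mentions enters through the `skills` set: every item must be mapped to the right skills for the counts to be meaningful.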
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_10", "@cite_23" ], "mid": [ "1562092080", "1596401170", "2031074083", "" ], "abstract": [ "A cognitive model is a set of production rules or skills encoded in intelligent tutors to model how students solve problems. It is usually generated by brainstorming and iterative refinement between subject experts, cognitive scientists and programmers. In this paper we propose a semi-automated method for improving a cognitive model called Learning Factors Analysis that combines a statistical model, human expertise and a combinatorial search. We use this method to evaluate an existing cognitive model and to generate and evaluate alternative models. We present improved cognitive models and make suggestions for improving the intelligent tutor based on those models.", "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.", "Over the last decades, there have been a rich variety of approaches towards modeling student knowledge and skill within interactive learning environments. There have recently been several empirical comparisons as to which types of student models are better at predicting future performance, both within and outside of the interactive learning environment. However, these comparisons have produced contradictory results. Within this paper, we examine whether ensemble methods, which integrate multiple models, can produce prediction results comparable to or better than the best of nine student modeling frameworks, taken individually. We ensemble model predictions within a Cognitive Tutor for Genetics, at the level of predicting knowledge action-by-action within the tutor. We evaluate the predictions in terms of future performance within the tutor and on a paper post-test. Within this data set, we do not find evidence that ensembles of models are significantly better. Ensembles of models perform comparably to or slightly better than the best individual models, at predicting future performance within the tutor software. However, the ensembles of models perform marginally significantly worse than the best individual models, at predicting post-test performance.", "" ] }
1506.05908
2950809786
Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.
Recent work has explored combining Item Response Theory (IRT) models with switched nonlinear Kalman filters @cite_11 , as well as with Knowledge Tracing @cite_5 @cite_12 . Though these approaches are promising, at present they are both more restricted in functional form and more expensive (due to inference of latent variables) than the method we present here.
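For context, the static building block these hybrids start from is the 1PL (Rasch) IRT model; the Kalman-filter variants essentially let the ability parameter drift over time. A minimal sketch:

```python
import math

def irt_1pl(ability, difficulty):
    """Rasch / 1PL Item Response Theory: probability of a correct
    response as a logistic function of ability minus item difficulty.
    Static in time; the hybrid models above make ability time-varying."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

A learner whose ability equals the item difficulty answers correctly with probability 0.5, and the probability rises monotonically with the ability gap.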
{ "cite_N": [ "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "1265790443", "", "2950489292" ], "abstract": [ "Traditionally, the assessment and learning science communities rely on different paradigms to model student performance. The assessment community uses Item Response Theory which allows modeling different student abilities and problem difficulties, while the learning science community uses Knowledge Tracing, which captures skill acquisition. These two paradigms are complementary - IRT cannot be used to model student learning, while Knowledge Tracing assumes all students and problems are the same. Recently, two highly related models based on a principled synthesis of IRT and Knowledge Tracing were introduced. However, these two models were evaluated on different data sets, using different evaluation metrics and with different ways of splitting the data into training and testing sets. In this paper we reconcile the models' results by presenting a unified view of the two models, and by evaluating the models under a common evaluation metric. We find that both models are equivalent and only differ in their training procedure. Our results show that the combined IRT and Knowledge Tracing models offer the best of assessment and learning sciences - high prediction accuracy like the IRT model, and the ability to model student learning like Knowledge Tracing.", "", "We propose SPARFA-Trace, a new machine learning-based framework for time-varying learning and content analytics for education applications. We develop a novel message passing-based, blind, approximate Kalman filter for sparse factor analysis (SPARFA), that jointly (i) traces learner concept knowledge over time, (ii) analyzes learner concept knowledge state transitions (induced by interacting with learning resources, such as textbook sections, lecture videos, etc, or the forgetting effect), and (iii) estimates the content organization and intrinsic difficulty of the assessment questions. These quantities are estimated solely from binary-valued (correct/incorrect) graded learner response data and a summary of the specific actions each learner performs (e.g., answering a question or studying a learning resource) at each time instance. Experimental results on two online course datasets demonstrate that SPARFA-Trace is capable of tracing each learner's concept knowledge evolution over time, as well as analyzing the quality and content organization of learning resources, the question-concept associations, and the question intrinsic difficulties. Moreover, we show that SPARFA-Trace achieves comparable or better performance in predicting unobserved learner responses than existing collaborative filtering and knowledge tracing approaches for personalized education." ] }
1506.05908
2950809786
Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.
Recurrent neural networks are competitive or state-of-the-art for several time series tasks--for instance, speech to text @cite_28 , translation @cite_4 , and image captioning @cite_14 --where large amounts of training data are available. These results suggest that we could be much more successful at tracing student knowledge if we formulated the task as a new application of temporal neural networks.
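To make the formulation concrete, here is a toy vanilla-RNN step in the spirit of such models: the input one-hot encodes which exercise was attempted and whether it was answered correctly, and the output is one predicted success probability per exercise. Sizes and the random initialization are purely illustrative; a real model would be trained with backpropagation through time:

```python
import math, random

random.seed(0)
N_EX, H = 3, 4          # number of exercises and hidden units (toy sizes)

def mat(r, c):          # small random weight matrix (illustrative init)
    return [[random.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]

Wx, Wh, Wy = mat(H, 2 * N_EX), mat(H, H), mat(N_EX, H)

def one_hot(exercise, correct):
    """Input encoding: a 2*N_EX vector with one slot per
    (exercise, correct/incorrect) combination."""
    x = [0.0] * (2 * N_EX)
    x[exercise + (N_EX if correct else 0)] = 1.0
    return x

def rnn_step(h, x):
    """h_t = tanh(Wx x_t + Wh h_{t-1}); y_t = sigmoid(Wy h_t),
    one predicted success probability per exercise."""
    h_new = [math.tanh(sum(Wx[i][j] * x[j] for j in range(len(x))) +
                       sum(Wh[i][j] * h[j] for j in range(H)))
             for i in range(H)]
    y = [1 / (1 + math.exp(-sum(Wy[k][i] * h_new[i] for i in range(H))))
         for k in range(N_EX)]
    return h_new, y

h = [0.0] * H
for ex, ok in [(0, True), (1, False), (0, True)]:
    h, probs = rnn_step(h, one_hot(ex, ok))
```

Unlike the models above, nothing here requires a concept label per exercise: the hidden state is free to learn whatever representation of student knowledge fits the data.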
{ "cite_N": [ "@cite_28", "@cite_14", "@cite_4" ], "mid": [ "2950689855", "2951805548", "179875071" ], "abstract": [ "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition" ] }
1506.06006
1176712834
Crowd flow segmentation is an important step in many video surveillance tasks. In this work, we propose an algorithm for segmenting flows in H.264 compressed videos in a completely unsupervised manner. Our algorithm works on motion vectors which can be obtained by partially decoding the compressed video without extracting any additional features. Our approach is based on modelling the motion vector field as a Conditional Random Field (CRF) and obtaining oriented motion segments by finding the optimal labelling which minimises the global energy of CRF. These oriented motion segments are recursively merged based on gradient across their boundaries to obtain the final flow segments. This work in compressed domain can be easily extended to pixel domain by substituting motion vectors with motion based features like optical flow. The proposed algorithm is experimentally evaluated on a standard crowd flow dataset and its superior performance in both accuracy and computational time are demonstrated through quantitative results.
In the recent past, quite a few novel approaches have been proposed for crowd analysis both in the pixel and compressed domain. In this section we discuss some of these approaches. @cite_14 proposed a Lagrangian dynamics based approach for segmentation and analysis of crowd flow. Their approach involves generating a flow field and propagating particles along them using numerical integration methods. The space-time evolution of these particles is used to setup a Finite Time Lyapunov Exponent field, which can capture the underlying Lagrangian Coherent Structure (LCS) in the flow. Dynamics and stability of the LCS reveal various flow segments present in the video.
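The particle-advection stage of this pipeline can be sketched in a few lines; the example below uses forward Euler on an analytic saddle flow and approximates the FTLE from a single perturbed neighbor, whereas a real implementation would use a higher-order integrator, a video-derived flow field, and the full Cauchy-Green deformation tensor:

```python
import math

def advect(p, velocity, dt=0.05, steps=40):
    """Forward-Euler advection of one particle through a steady 2D flow."""
    x, y = p
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + dt * vx, y + dt * vy
    return x, y

def ftle(p, velocity, eps=1e-3, T=2.0):
    """Finite-Time Lyapunov Exponent at p, approximated as the log
    growth rate of the separation from one perturbed neighbor over
    integration time T (= steps * dt above)."""
    x0, y0 = advect(p, velocity)
    x1, y1 = advect((p[0] + eps, p[1]), velocity)
    sep = math.hypot(x1 - x0, y1 - y0)
    return math.log(sep / eps) / T

# toy saddle flow: nearby particles separate exponentially along x
v = lambda x, y: (x, -y)
lam = ftle((0.1, 0.2), v)
```

Ridges of high FTLE values mark the Lagrangian Coherent Structures that separate the flow segments.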
{ "cite_N": [ "@cite_14" ], "mid": [ "2113137767" ], "abstract": [ "Minimum cut maximum flow algorithms on graphs have emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut max flow algorithms for applications in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-Tarjan style \"push-relabel\" methods and algorithms based on Ford-Fulkerson style \"augmenting paths.\" We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and segmentation. In many cases, our new algorithm works several times faster than any of the other methods, making near real-time performance possible. An implementation of our max-flow min-cut algorithm is available upon request for research purposes." ] }
1506.06006
1176712834
Crowd flow segmentation is an important step in many video surveillance tasks. In this work, we propose an algorithm for segmenting flows in H.264 compressed videos in a completely unsupervised manner. Our algorithm works on motion vectors which can be obtained by partially decoding the compressed video without extracting any additional features. Our approach is based on modelling the motion vector field as a Conditional Random Field (CRF) and obtaining oriented motion segments by finding the optimal labelling which minimises the global energy of CRF. These oriented motion segments are recursively merged based on gradient across their boundaries to obtain the final flow segments. This work in compressed domain can be easily extended to pixel domain by substituting motion vectors with motion based features like optical flow. The proposed algorithm is experimentally evaluated on a standard crowd flow dataset and its superior performance in both accuracy and computational time are demonstrated through quantitative results.
Again in H.264 compressed format, @cite_4 proposed a segmentation algorithm for crowd flow based on super-pixels. The mean motion vectors are colour coded and superpixel segmentation is performed at different scales. These segments, obtained at different scales, are merged based on boundary potential between superpixels to obtain flow segments.
{ "cite_N": [ "@cite_4" ], "mid": [ "2020173024" ], "abstract": [ "In this paper, we have proposed a simple yet robust novel approach for segmentation of high density crowd flows based on super-pixels in H.264 compressed videos. The collective representation of the motion vectors of the compressed video sequence is transformed to color map and super-pixel segmentation is performed at various scales for clustering the coherent motion vectors. The number of dynamically meaningful flow segments is determined by measuring the confidence score of the accumulated multi-scale super-pixel boundaries. The final crowd flow segmentation is obtained from the edges that are consistent across all the super-pixel resolutions. Hence, our major contribution involves obtaining the flow segmentation by clustering the motion vectors and determination of number of flow segments using only motion super-pixels without any prior assumption of the number of flow segments. The proposed approach was bench-marked on standard crowd flow dataset. Experiments demonstrated better accuracy and speedup for the proposed approach compared to the state-of-the-art methods." ] }
1506.05672
641360110
In recent years, the importance of research data and the need to archive and to share it in the scientific community have increased enormously. This introduces a whole new set of challenges for digital libraries. In the social sciences typical research data sets consist of surveys and questionnaires. In this paper we focus on the use case of social science survey question reuse and on mechanisms to support users in the query formulation for data sets. We describe and evaluate thesaurus- and co-occurrence-based approaches for query expansion to improve retrieval quality in digital libraries and research data archives. The challenge here is to translate the information need and the underlying sociological phenomena into proper queries. As we can show retrieval quality can be improved by adding related terms to the queries. In a direct comparison automatically expanded queries using extracted co-occurring terms can provide better results than queries manually reformulated by a domain expert and better results than a keyword-based BM25 baseline.
A typical problem that arises during every search-based retrieval task (in contrast to browsing or filter-based tasks) is the so-called language or vocabulary problem @cite_4 : when formulating an information need, a searcher can (in theory) draw on the unlimited possibilities of human language to express him- or herself @cite_5 . This is especially true for information needs in the scientific domain, where domain-specific expressions are highly unique and context-sensitive: every scientific community and discipline has developed its own special vocabulary that is not commonly used by researchers from other domains. With regard to survey question retrieval, the problem is aggravated further because the underlying topic of a survey question is often not directly represented in the question text. In this special setting of short, highly domain-specific documents, this long-known problem becomes even more pronounced.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2021188310", "1984565341" ], "abstract": [ "The author highlights the implications of the philosophy of language for information retrieval, in particular regarding the difficulties of describing intellectual content. The linguistic failures of the information retrieval process occur at the levels of description and discrimination, and in the balance between recall and precision. The author recalls the impact of L. Wittgenstein's work on the theory of language, and more particularly on linguistic meaning: words name objects, each word has a meaning, the meaning of a word is independent of context, and the meaning of sentences is composed of the meaning of words. The externalist theory of scaffolding should have a significant impact on description and classification in information retrieval. The share of the philosophy of language in the information retrieval literature nevertheless remains modest.", "In almost all computer applications, users must enter correct words for the desired objects or actions. For success without extensive training, or in first-tries for new targets, the system must recognize terms that will be chosen spontaneously. We studied spontaneous word choice for objects in five application-related domains, and found the variability to be surprisingly large. In every case two people favored the same term with probability" ] }
1506.05672
641360110
In recent years, the importance of research data and the need to archive and to share it in the scientific community have increased enormously. This introduces a whole new set of challenges for digital libraries. In the social sciences typical research data sets consist of surveys and questionnaires. In this paper we focus on the use case of social science survey question reuse and on mechanisms to support users in the query formulation for data sets. We describe and evaluate thesaurus- and co-occurrence-based approaches for query expansion to improve retrieval quality in digital libraries and research data archives. The challenge here is to translate the information need and the underlying sociological phenomena into proper queries. As we can show retrieval quality can be improved by adding related terms to the queries. In a direct comparison automatically expanded queries using extracted co-occurring terms can provide better results than queries manually reformulated by a domain expert and better results than a keyword-based BM25 baseline.
The authors of @cite_10 apply four methods for microblog retrieval: query reformulation, automatic query expansion, affinity propagation, and a combination of these techniques. To reformulate the query, hashtags are extracted from tweets and used as additional information for the query; furthermore, every two consecutive words of the query are grouped and added to the query. A relevance feedback model is used for automatic query expansion, selecting the respective top ten terms of the top ten documents. The affinity propagation approach uses a clustering algorithm to group tweets, the idea being that tweets similar to relevant tweets are themselves more likely to be relevant. It is shown that automatic query expansion is a very effective method, while affinity propagation is less successful.
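The expansion step described here (top terms of the top-ranked documents appended to the query) can be sketched as follows, with raw term frequency standing in for the relevance-model weighting:

```python
from collections import Counter

def prf_expand(query, ranked_docs, n_docs=10, n_terms=10):
    """Pseudo-relevance feedback: treat the top-ranked documents for
    the original query as relevant, count their terms, and append the
    most frequent new terms to the query. Term frequency is a stand-in
    for a proper relevance-model weighting."""
    counts = Counter()
    for doc in ranked_docs[:n_docs]:
        counts.update(doc.lower().split())
    expansion = [t for t, _ in counts.most_common()
                 if t not in query][:n_terms]
    return query + expansion

# toy ranked result list (invented documents)
docs = ["survey question reuse in archives",
        "question wording in social surveys",
        "archives of survey data"]
q = prf_expand(["survey", "question"], docs, n_docs=2, n_terms=3)
```

A real system would also filter stopwords such as "in", which the toy frequency criterion happily promotes.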
{ "cite_N": [ "@cite_10" ], "mid": [ "2296541110" ], "abstract": [ "This report describes the methods that our Information Retrieval Group at Purdue University used for the TREC Microblog 2011 track. The first method is the pseudo-relevance feedback, a traditional algorithm to reformulate the query by adding expanded terms to the query. The second method is the affinity propagation, a nonparametric clustering algorithm that can group the top tweets according to their similarities. The final score of a tweet is based on its relevance score and the relevance score of its representative in the group. We found that query expansion is a very useful technique for microblog retrieval, while affinity propagation could achieve a comparable performance when combining with other techniques." ] }
1506.05672
641360110
In recent years, the importance of research data and the need to archive and to share it in the scientific community have increased enormously. This introduces a whole new set of challenges for digital libraries. In the social sciences typical research data sets consist of surveys and questionnaires. In this paper we focus on the use case of social science survey question reuse and on mechanisms to support users in the query formulation for data sets. We describe and evaluate thesaurus- and co-occurrence-based approaches for query expansion to improve retrieval quality in digital libraries and research data archives. The challenge here is to translate the information need and the underlying sociological phenomena into proper queries. As we can show retrieval quality can be improved by adding related terms to the queries. In a direct comparison automatically expanded queries using extracted co-occurring terms can provide better results than queries manually reformulated by a domain expert and better results than a keyword-based BM25 baseline.
Microblogging services like Twitter also face the vocabulary problem for short texts. A tweet consists of up to 140 characters, while the question texts used in this work have an average length of 83.57 characters. The latest research in the field of microblog retrieval is therefore relevant for the problem at hand. For instance, pseudo-relevance feedback @cite_2 and document expansion @cite_1 are common approaches to address the vocabulary problem @cite_16 . The authors of @cite_0 analyze two approaches for microblog retrieval. The first approach uses a retrieval model based on Bayesian networks, in which the influence of a microblogger as well as the temporal distribution of search terms are included in the calculation of a tweet's relevance; here, only the usage of topic-specific features improved the results. In the second approach, query expansion (pseudo-relevance feedback) and document expansion methods are implemented. Tweets obtained by these approaches are merged and additionally extended by contained URLs. Final scores are calculated by applying Rocchio expansion as well as the vector space model. Document expansion combined with the vector space model improves retrieval results; automatic query expansion does not increase recall, but significantly increases precision.
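The Rocchio expansion mentioned above moves the query vector toward the centroid of the (pseudo-)relevant documents and away from the non-relevant ones. A sketch over sparse dict vectors, using the classic textbook coefficients rather than values tuned for microblog data:

```python
def rocchio(query_vec, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio query expansion over sparse term-weight vectors (dicts):
    q' = alpha*q + beta*mean(relevant) - gamma*mean(non-relevant).
    Coefficients are the standard textbook defaults."""
    new_q = {t: alpha * w for t, w in query_vec.items()}
    for docs, sign, coeff in ((rel_docs, 1, beta), (nonrel_docs, -1, gamma)):
        if not docs:
            continue
        for doc in docs:
            for t, w in doc.items():
                new_q[t] = new_q.get(t, 0.0) + sign * coeff * w / len(docs)
    # negative weights are conventionally clipped to zero
    return {t: w for t, w in new_q.items() if w > 0}

# invented toy vectors for illustration
q = rocchio({"survey": 1.0},
            rel_docs=[{"survey": 0.5, "questionnaire": 0.8}],
            nonrel_docs=[{"spam": 1.0}])
```

Terms from the relevant centroid ("questionnaire") enter the query, while terms pushed negative by the non-relevant centroid ("spam") are dropped.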
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_1", "@cite_2" ], "mid": [ "", "1993692165", "2048978851", "1991693254" ], "abstract": [ "", "The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?", "Collections containing a large number of short documents are becoming increasingly common. As these collections grow in number and size, providing effective retrieval of brief texts presents a significant research problem. We propose a novel approach to improving information retrieval (IR) for short texts based on aggressive document expansion. Starting from the hypothesis that short documents tend to be about a single topic, we submit documents as pseudo-queries and analyze the results to learn about the documents themselves. Document expansion helps in this context because short documents yield little in the way of term frequency information. However, as we show, the proposed technique helps us model not only lexical properties, but also temporal properties of documents. We present experimental results using a corpus of microblog (Twitter) data and a corpus of metadata records from a federated digital library. With respect to established baselines, results of these experiments show that applying our proposed document expansion method yields significant improvements in effectiveness. Specifically, our method improves the lexical representation of documents and the ability to let time influence retrieval.", "Query expansion methods using pseudo-relevance feedback have been shown effective for microblog search because they can solve vocabulary mismatch problems often seen in searching short documents such as Twitter messages (tweets), which are limited to 140 characters. Pseudo-relevance feedback assumes that the top ranked documents in the initial search results are relevant and that they contain topic-related words appropriate for relevance feedback. However, those assumptions do not always hold in reality because the initial search results often contain many irrelevant documents. In such a case, only a few of the suggested expansion words may be useful with many others being useless or even harmful. To overcome the limitation of pseudo-relevance feedback for microblog search, we propose a novel query expansion method based on two-stage relevance feedback that models search interests by manual tweet selection and integration of lexical and temporal evidence into its relevance model. Our experiments using a corpus of microblog data (the Tweets2011 corpus) demonstrate that the proposed two-stage relevance feedback approaches considerably improve search result relevance over almost all topics." ] }
1506.05692
2056336673
We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusions that local structural learning with H2PC in the form of local neighborhood induction is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available.
This MLC problem may be tackled in various ways. Each of these approaches is supposed to capture, to some extent, the relationships between labels. The two most straightforward meta-learning methods are Binary Relevance (BR) and Label Powerset (LP). Both methods can be regarded as opposites in the sense that BR considers each label independently, while LP considers the whole label set at once (one multi-class problem). An important question remains: what exactly should we capture from the statistical relationships between labels to solve the multi-label classification problem? The problem has attracted a great deal of interest. It is well beyond the scope and purpose of this paper to delve deeper into these approaches; we point the reader to the literature for a review. The second fundamental problem that we wish to address involves finding an optimal feature subset selection of a label set, w.r.t. an Information Theory criterion @cite_1. As in the single-label case, multi-label feature selection has been studied recently and has met with some success.
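The BR/LP contrast can be made concrete with a small sketch: BR produces one binary target vector per label, while LP maps each distinct label combination to a single multi-class target. The label sets below are invented for illustration:

```python
def binary_relevance_targets(label_sets, all_labels):
    """One binary target vector per label: does each instance carry that label?"""
    return {lbl: [int(lbl in s) for s in label_sets] for lbl in all_labels}

def label_powerset_targets(label_sets):
    """One multi-class target per instance: each distinct label set is a class."""
    classes = {}
    targets = []
    for s in label_sets:
        key = frozenset(s)
        if key not in classes:
            classes[key] = len(classes)
        targets.append(classes[key])
    return targets, classes

# four toy instances with their label sets
y = [{"sports"}, {"sports", "politics"}, {"politics"}, {"sports"}]
br = binary_relevance_targets(y, ["sports", "politics"])
lp_targets, lp_classes = label_powerset_targets(y)
```

BR then trains one binary classifier per key of `br`, ignoring label correlations; LP trains a single multi-class classifier on `lp_targets`, at the cost of one class per observed label combination.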
{ "cite_N": [ "@cite_1" ], "mid": [ "1849729440" ], "abstract": [ "In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with a very large number of features." ] }
1506.05703
1639027819
Recently, there has been a lot of effort to represent words in continuous vector spaces. Those representations have been shown to capture both semantic and syntactic information about words. However, distributed representations of phrases remain a challenge. We introduce a novel model that jointly learns word vector representations and their summation. Word representations are learnt using the word co-occurrence statistical information. To embed sequences of words (i.e. phrases) with different sizes into a common semantic space, we propose to average word vector representations. In contrast with previous methods which reported a posteriori some compositionality aspects by simple summation, we simultaneously train words to sum, while keeping the maximum information from the original vectors. We evaluate the quality of the word representations on several classical word evaluation tasks, and we introduce a novel task to evaluate the quality of the phrase representations. While our distributed representations compete with other methods of learning word representations on word evaluations, we show that they give better performance on the phrase evaluation. Such representations of phrases could be interesting for many tasks in natural language processing.
The count-based methods use the statistical information contained in large corpora of unlabeled text to build large matrices by simply counting words (word co-occurrence statistics). The rows correspond to words or terms, and the columns correspond to a local context. The context can be documents, as in latent semantic analysis (LSA) @cite_0, or other words @cite_16. To generate low-dimensional word representations, a low-rank approximation of these large matrices is performed, mainly with a singular value decomposition (SVD). Many authors have proposed to improve this model with different transformations of the count matrix, such as positive pointwise mutual information (PPMI) @cite_10 @cite_17, or a square root of the co-occurrence probabilities in the form of a Hellinger PCA @cite_7. Instead of using the co-occurrence probabilities directly, @cite_26 suggest that word vector representations should be learnt from ratios of co-occurrence probabilities. For this purpose, they introduce a log-bilinear regression model that combines both global matrix factorization and local context window methods.
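The count-based pipeline described in this paragraph (co-occurrence counts, a PPMI transform, then a truncated SVD) can be written down in a few lines; the tiny co-occurrence matrix below is an invented example:

```python
import numpy as np

def ppmi(counts):
    """Positive pointwise mutual information transform of a co-occurrence matrix."""
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0      # zero counts give log(0); clamp them
    return np.maximum(pmi, 0.0)       # keep only positive PMI values

def svd_embed(matrix, dim):
    """Rank-`dim` word vectors from a truncated SVD of the (P)PMI matrix."""
    u, s, _ = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :dim] * s[:dim]

# toy 3-word vocabulary: symmetric word-word co-occurrence counts
counts = np.array([[0, 4, 1],
                   [4, 0, 2],
                   [1, 2, 0]], dtype=float)
vectors = svd_embed(ppmi(counts), dim=2)
```

Real systems build `counts` over millions of tokens and a context window, and often weight the singular values (e.g. `s**0.5`) before forming the vectors.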
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_0", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2250539671", "1499253590", "2147152072", "1981617416", "1978400666", "2125031621" ], "abstract": [ "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. However, such architecture might be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word co- occurence matrix. We compare those new word embeddings with some well-known embeddings on named entity recognition and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks.", "A new method for automatic indexing and retrieval is described. 
The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.", "A procedure that processes a corpus of text and produces numeric vectors containing information about its meanings for each word is presented. This procedure is applied to a large corpus of natural language text taken from Usenet, and the resulting vectors are examined to determine what information is contained within them. These vectors provide the coordinates in a high-dimensional space in which word relationships can be analyzed. Analyses of both vector similarity and multidimensional scaling demonstrate that there is significant semantic information carried in the vectors. A comparison of vector similarity with human reaction times in a single-word priming experiment is presented. These vectors provide the basis for a representational model of semantic memory, hyperspace analogue to language (HAL).", "The idea that at least some aspects of word meaning can be induced from patterns of word co-occurrence is becoming increasingly popular. However, there is less agreement about the precise computations involved, and the appropriate tests to distinguish between the various possibilities. 
It is important that the effect of the relevant design choices and parameter values are understood if psychological models using these methods are to be reliably evaluated and compared. In this article, we present a systematic exploration of the principal computational possibilities for formulating and validating representations of word meanings from word co-occurrence statistics. We find that, once we have identified the best procedures, a very simple approach is surprisingly successful and robust over a range of psychologically relevant evaluation measures.", "We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by , and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization." ] }
1506.05703
1639027819
Recently, there has been a lot of effort to represent words in continuous vector spaces. Those representations have been shown to capture both semantic and syntactic information about words. However, distributed representations of phrases remain a challenge. We introduce a novel model that jointly learns word vector representations and their summation. Word representations are learnt using the word co-occurrence statistical information. To embed sequences of words (i.e. phrases) with different sizes into a common semantic space, we propose to average word vector representations. In contrast with previous methods which reported a posteriori some compositionality aspects by simple summation, we simultaneously train words to sum, while keeping the maximum information from the original vectors. We evaluate the quality of the word representations on several classical word evaluation tasks, and we introduce a novel task to evaluate the quality of the phrase representations. While our distributed representations compete with other methods of learning word representations on word evaluations, we show that they give better performance on the phrase evaluation. Such representations of phrases could be interesting for many tasks in natural language processing.
The predictive-based model was first introduced as a neural probabilistic language model @cite_3. A neural network architecture is trained to predict the next word given a window of preceding words, where words are represented by low-dimensional vectors. Since then, several variations of this architecture have been proposed. @cite_14 train a language model to solve a two-class classification task: whether the word in the middle of the input window is related to its context or not. More recently, the need for full neural architectures has been questioned @cite_22 @cite_8. MikolovICLR2013 propose two predictive-based log-linear models for learning distributed representations of words: (i) the continuous bag-of-words model (CBOW), where the objective is to correctly classify the current (middle) word given a symmetric window of context words around it; (ii) the skip-gram model, where instead of predicting the current word based on the context, it tries to maximize classification of a word based on another word in the same sentence. In Mikolov2013, the authors also introduce a data-driven approach for learning phrases, where phrases are treated as individual tokens during training.
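To make the CBOW/skip-gram contrast concrete, the sketch below generates only the (input, target) training pairs each objective would consume over a toy sentence; the sentence and window size are arbitrary choices, not taken from the cited models:

```python
def skipgram_pairs(tokens, window=2):
    """Skip-gram: predict each context word from the center word."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    """CBOW: predict the center word from its bag of context words."""
    pairs = []
    for i, center in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window),
                                  min(len(tokens), i + window + 1))
                   if j != i]
        pairs.append((context, center))
    return pairs

sent = ["the", "cat", "sat", "on", "mat"]
sg = skipgram_pairs(sent, window=1)   # one (center, context-word) pair at a time
cb = cbow_pairs(sent, window=1)       # one (context-bag, center) pair per position
```

The actual models then fit a log-linear classifier over these pairs; this sketch only shows how the two objectives carve up the same text differently.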
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_3", "@cite_8" ], "mid": [ "2158899491", "2097732278", "2132339004", "1614298861" ], "abstract": [ "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "Continuous-valued word embeddings learned by neural language models have recently been shown to capture semantic and syntactic information about words very well, setting performance records on several word similarity tasks. The best results are obtained by learning high-dimensional embeddings from very large quantities of data, which makes scalability of the training method a critical factor. We propose a simple and scalable new approach to learning word embeddings based on training log-bilinear models with noise-contrastive estimation. Our approach is simpler, faster, and produces better results than the current state-of-the-art method. We achieve results comparable to the best ones reported, which were obtained on a cluster, using four times less data and more than an order of magnitude less computing time. We also investigate several model types and find that the embeddings learned by the simpler models perform at least as well as those learned by the more complex ones.", "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. 
This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "" ] }
1506.06048
1005050706
This paper applies a hidden Markov model to the problem of Attention Deficit Hyperactivity Disorder (ADHD) diagnosis from resting-state functional Magnetic Resonance Image (fMRI) scans of subjects. The proposed model considers the temporal evolution of fMRI voxel activations in the cortex, cingulate gyrus, and thalamus regions of the brain in order to make a diagnosis. Four feature dimensionality reduction methods are applied to the fMRI scan: voxel means, voxel weighted means, principal components analysis, and kernel principal components analysis. Using principal components analysis and kernel principal components analysis for dimensionality reduction, the proposed algorithm yielded an accuracy of 63.01% and 62.06%, respectively, on the ADHD-200 competition dataset when differentiating between healthy control, ADHD inattentive, and ADHD combined types.
Using the ADHD-200 competition dataset, which consists of several hundred resting-state fMRI scans (http://fcon_1000.projects.nitrc.org/indi/adhd200/), Eloyan @cite_4 explored several different classifiers for ADHD diagnosis, including a support vector machine, gradient boosting, and voxel-based morphometry. In addition, several feature extraction methods were investigated, including singular value decomposition and CUR matrix decomposition. The best classification accuracy, 61.0%, was achieved by taking a weighted combination of these classifiers. Recent studies @cite_1 @cite_5 emphasize that different parts of the brain are functionally correlated. Taking this into consideration, the Human Connectome Project (http://humanconnectome.org) explores graphical models that seek to capture these functional connectivities, both in task-based and resting-state fMRI scans. Similarly, Zhang @cite_10 proposed a Bayesian network for modeling functional neural activity. In this work, each region of the brain is represented as a node in the graphical model, and the functional connectivity of these nodes over time is used to classify drug addicts from healthy controls.
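The weighted combination of classifiers reported above can be illustrated with a toy weighted vote; the class labels and weights below are invented, not those of the cited study:

```python
def weighted_vote(predictions, weights):
    """Combine per-classifier predictions by summing each classifier's weight
    behind the class it predicts; the class with the largest total wins."""
    scores = {}
    for pred, w in zip(predictions, weights):
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)

# three hypothetical classifiers voting on one subject
label = weighted_vote(["ADHD", "control", "ADHD"], [0.5, 0.8, 0.4])
```

In practice the weights would be tuned on held-out data (e.g. proportional to each classifier's validation accuracy) rather than fixed by hand.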
{ "cite_N": [ "@cite_1", "@cite_5", "@cite_4", "@cite_10" ], "mid": [ "1999653836", "2133903921", "2010549559", "2098899472" ], "abstract": [ "In recent years, the principles of network science have increasingly been applied to the study of the brain's structural and functional organization. Bullmore and Sporns review this growing field of research and discuss its contributions to our understanding of brain function.", "Functional imaging studies have shown that certain brain regions, including posterior cingulate cortex (PCC) and ventral anterior cingulate cortex (vACC), consistently show greater activity during resting states than during cognitive tasks. This finding led to the hypothesis that these regions constitute a network supporting a default mode of brain function. In this study, we investigate three questions pertaining to this hypothesis: Does such a resting-state network exist in the human brain? Is it modulated during simple sensory processing? How is it modulated during cognitive processing? To address these questions, we defined PCC and vACC regions that showed decreased activity during a cognitive (working memory) task, then examined their functional connectivity during rest. PCC was strongly coupled with vACC and several other brain regions implicated in the default mode network. Next, we examined the functional connectivity of PCC and vACC during a visual processing task and show that the resultant connectivity maps are virtually identical to those obtained during rest. Last, we defined three lateral prefrontal regions showing increased activity during the cognitive task and examined their resting-state connectivity. We report significant inverse correlations among all three lateral prefrontal regions and PCC, suggesting a mechanism for attenuation of default mode network activity during cognitive processing. 
This study constitutes, to our knowledge, the first resting-state connectivity analysis of the default mode and provides the most compelling evidence to date for the existence of a cohesive default mode network. Our findings also provide insight into how this network is modulated by task demands and what functions it might subserve.", "Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions, CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. 
The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.", "Functional Magnetic Resonance Imaging (fMRI) has enabled scientists to look into the active brain. However, interactivity between functional brain regions, is still little studied. In this paper, we contribute a novel framework for modeling the interactions between multiple active brain regions, using Dynamic Bayesian Networks (DBNs) as generative models for brain activation patterns. This framework is applied to modeling of neuronal circuits associated with reward. The novelty of our framework from a Machine Learning perspective lies in the use of DBNs to reveal the brain connectivity and interactivity. Such interactivity models which are derived from fMRI data are then validated through a group classification task. We employ and compare four different types of DBNs: Parallel Hidden Markov Models, Coupled Hidden Markov Models, Fully-linked Hidden Markov Models and Dynamically Multi-Linked HMMs (DML-HMM). Moreover, we propose and compare two schemes of learning DML-HMMs. Experimental results show that by using DBNs, group classification can be performed even if the DBNs are constructed from as few as 5 brain regions. We also demonstrate that, by using the proposed learning algorithms, different DBN structures characterize drug addicted subjects vs. control subjects. This finding provides an independent test for the effect of psychopathology on brain function. In general, we demonstrate that incorporation of computer science principles into functional neuroimaging clinical studies provides a novel approach for probing human brain function." ] }
1506.06048
1005050706
This paper applies a hidden Markov model to the problem of Attention Deficit Hyperactivity Disorder (ADHD) diagnosis from resting-state functional Magnetic Resonance Image (fMRI) scans of subjects. The proposed model considers the temporal evolution of fMRI voxel activations in the cortex, cingulate gyrus, and thalamus regions of the brain in order to make a diagnosis. Four feature dimensionality reduction methods are applied to the fMRI scan: voxel means, voxel weighted means, principal components analysis, and kernel principal components analysis. Using principal components analysis and kernel principal components analysis for dimensionality reduction, the proposed algorithm yielded an accuracy of 63.01% and 62.06%, respectively, on the ADHD-200 competition dataset when differentiating between healthy control, ADHD inattentive, and ADHD combined types.
Apart from functional relations, the temporal correlation between brain voxels and their connectivity has been explored by Fiecas @cite_2. Furthermore, the temporal relation between mental states and neuronal activities has been investigated by building a conditional random field @cite_9. Duan @cite_11 proposed two methods, based on likelihood and distance measures, for analyzing fMRI scans with an HMM. However, that work focuses on analyzing the Blood Oxygen Level Dependent (BOLD) signal of brain activity in order to predict activations in task-based fMRI time series. Eavani @cite_14 analyzed functional connectivity dynamics in resting-state fMRI and decoded the temporal variation of functional connectivity into a sequence of hidden states using an HMM.
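The HMM-based analyses cited in this paragraph all rest on evaluating observation sequences under a hidden-state model. A minimal sketch of the scaled forward algorithm for a discrete-observation HMM follows; the two "brain states", the transition and emission probabilities, and the discretized observation sequence are all invented for illustration:

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm.

    pi: initial state distribution, shape (S,)
    A:  state transition matrix, A[i, j] = P(state j | state i)
    B:  emission matrix, B[i, k] = P(observation k | state i)
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()              # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# two hidden "brain states", two discretized activation levels (toy numbers)
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])
ll = hmm_forward(pi, A, B, [0, 0, 1, 1])
```

The cited fMRI work uses continuous (e.g. Gaussian) emissions over reduced voxel features and adds Viterbi decoding and parameter estimation on top of this recursion; the forward pass above is only the likelihood-evaluation core.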
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_11", "@cite_2" ], "mid": [ "61705880", "22163543", "2619476781", "1999457494" ], "abstract": [ "Research in recent years has provided some evidence of temporal non-stationarity of functional connectivity in resting state fMRI. In this paper, we present a novel methodology that can decode connectivity dynamics into a temporal sequence of hidden network \"states\" for each subject, using a Hidden Markov Modeling (HMM) framework. Each state is characterized by a unique covariance matrix or whole-brain network. Our model generates these covariance matrices from a common but unknown set of sparse basis networks, which capture the range of functional activity co-variations of regions of interest (ROIs). Distinct hidden states arise due to a variation in the strengths of these basis networks. Thus, our generative model combines a HMM framework with sparse basis learning of positive definite matrices. Results on simulated fMRI data show that our method can effectively recover underlying basis networks as well as hidden states. We apply this method on a normative dataset of resting state fMRI scans. Results indicate that the functional activity of a subject at any point during the scan is composed of combinations of overlapping task-positive negative pairs of networks as revealed by our basis. Distinct hidden temporal states are produced due to a different set of basis networks dominating the covariance pattern in each state.", "Functional magnetic resonance imaging (fMRI) has provided an invaluable method of investing real time neuron activities. Statistical tools have been developed to recognise the mental state from a batch of fMRI observations over a period. However, an interesting question is whether it is possible to estimate the real time mental states at each moment during the fMRI observation. In this paper, we address this problem by building a probabilistic model of the brain activity. 
We model the tempo-spatial relations among the hidden high-level mental states and observable low-level neuron activities. We verify our model by experiments on practical fMRI data. The model also implies interesting clues on the task-responsible regions in the brain.", "This paper introduces two unsupervised learning methods for analyzing functional magnetic resonance imaging (fMRI) data based on hidden Markov model (HMM). HMM approach is focused on capturing the first-order statistical evolution among the samples of a voxel time series, and it can provide a complementary perspective of the BOLD signals. Two-state HMM is created for each voxel, and the model parameters are estimated from the voxel time series and the stimulus paradigm. Two different activation detection methods are presented in this paper. The first method is based on the likelihood and likelihood-ratio test, in which an additional Gaussian model is used to enhance the contrast of the HMM likelihood map. The second method is based on certain distance measures between the two state distributions, in which the most likely HMM state sequence is estimated through the Viterbi algorithm. The distance between the on-state and off-state distributions is measured either through a t-test, or using the Kullback-Leibler distance (KLD). Experimental results on both normal subject and brain tumor subject are presented. HMM approach appears to be more robust in detecting the supplemental active voxels comparing with SPM, especially for brain tumor subject.", "There have been many interpretations of functional connectivity and proposed measures of temporal correlations between BOLD signals across different brain areas. These interpretations yield from many studies on functional connectivity using resting-state fMRI data that have emerged in recent years. However, not all of these studies used the same metrics for quantifying the temporal correlations between brain regions. 
In this paper, we use a public-domain test–retest resting-state fMRI data set to perform a systematic investigation of the stability of the metrics that are often used in resting-state functional connectivity (FC) studies. The fMRI data set was collected across three different sessions. The second session took place approximately eleven months after the first session, and the third session was an hour after the second session. The FC metrics comprised cross-correlation, partial cross-correlation, cross-coherence, and parameters based on an autoregressive model. We discussed the strengths and weaknesses of each metric. We performed ROI-level and full-brain seed-based voxelwise test–retest analyses using each FC metric to assess its stability. For both ROI-level and voxel-level analyses, we found that cross-correlation yielded more stable measurements than the other metrics. We discussed the consequences of this result on the utility of the FC metrics. We observed that for negatively correlated ROIs, their partial cross-correlation is shrunk towards zero, thus affecting the stability of their FC. For the present data set, we found greater stability in FC between the second and third sessions (one hour between sessions) compared to the first and second sessions (approximately 11 months between sessions). Finally, we report that some of the metrics showed a positive association between strength and stability. In summary, the results presented in this paper suggest important implications when choosing metrics for quantifying and assessing various types of functional connectivity for resting-state fMRI studies." ] }
1506.06048
1005050706
This paper applies a hidden Markov model to the problem of Attention Deficit Hyperactivity Disorder (ADHD) diagnosis from resting-state functional Magnetic Resonance Image (fMRI) scans of subjects. The proposed model considers the temporal evolution of fMRI voxel activations in the cortex, cingulate gyrus, and thalamus regions of the brain in order to make a diagnosis. Four feature dimensionality reduction methods are applied to the fMRI scan: voxel means, voxel weighted means, principal components analysis, and kernel principal components analysis. Using principal components analysis and kernel principal components analysis for dimensionality reduction, the proposed algorithm yielded an accuracy of 63.01% and 62.06%, respectively, on the ADHD-200 competition dataset when differentiating between healthy control, ADHD inattentive, and ADHD combined types.
Similar to the previously mentioned temporal approaches to ADHD classification, we will investigate the temporal evolution of voxels for both healthy and ADHD-positive subjects using an HMM. However, we explore reduced-dimensional representations of fMRI voxels in ADHD regions of interest. The analysis of regions of interest in an fMRI scan instead of the entire brain is common practice. For instance, Solmaz @cite_8 used a bag-of-words approach for identifying ADHD patients from a region of the brain.
{ "cite_N": [ "@cite_8" ], "mid": [ "2048342478" ], "abstract": [ "Attention Deficit Hyperactivity Disorder (ADHD) is receiving lots of attention nowadays mainly because it is one of the common brain disorders among children and not much information is known about the cause of this disorder. In this study, we propose to use a novel approach for automatic classification of ADHD conditioned subjects and control subjects using functional Magnetic Resonance Imaging (fMRI) data of resting state brains. For this purpose, we compute the correlation between every possible voxel pairs within a subject and over the time frame of the experimental protocol. A network of voxels is constructed by representing a high correlation value between any two voxels as an edge. A Bag-of-Words (BoW) approach is used to represent each subject as a histogram of network features; such as the number of degrees per voxel. The classification is done using a Support Vector Machine (SVM). We also investigate the use of raw intensity values in the time series for each voxel. Here, every subject is represented as a combined histogram of network and raw intensity features. Experimental results verified that the classification accuracy improves when the combined histogram is used. We tested our approach on a highly challenging dataset released by NITRC for ADHD-200 competition and obtained promising results. The dataset not only has a large size but also includes subjects from different demographic and age groups. To the best of our knowledge, this is the first paper to propose a BoW approach in any functional brain disorder classification and we believe that this approach will be useful in analysis of many brain related conditions." ] }
1506.05197
2951823730
We derive a general and closed-form result for the success probability in downlink multiple-antenna (MIMO) heterogeneous cellular networks (HetNets), utilizing a novel Toeplitz matrix representation. This main result, which is equivalently the signal-to-interference ratio (SIR) distribution, includes multiuser MIMO, single-user MIMO and per-tier biasing for @math different tiers of randomly placed base stations (BSs), assuming zero-forcing precoding and perfect channel state information. The large SIR limit of this result admits a simple closed form that is accurate at moderate SIRs, e.g., above 5 dB. These results reveal that the SIR-invariance property of SISO HetNets does not hold for MIMO HetNets; instead the success probability may decrease as the network density increases. We prove that the maximum success probability is achieved by activating only one tier of BSs, while the maximum area spectral efficiency (ASE) is achieved by activating all the BSs. This reveals a unique tradeoff between the ASE and link reliability in multiuser MIMO HetNets. To achieve the maximum ASE while guaranteeing a certain link reliability, we develop efficient algorithms to find the optimal BS densities. It is shown that as the link reliability requirement increases, more BSs and more tiers should be deactivated.
A key result from early HetNet analysis was the derivation of the signal-to-interference-plus-noise ratio (SINR) distribution, also known as the coverage or outage probability, where the HetNet is characterized by randomly placed base stations forming @math tiers, each tier distinguished by a unique transmit power and density, i.e., the average number of BSs per unit area @cite_1 @cite_12 . An important observation was that the signal-to-interference ratio (SIR) distribution is invariant to the BS densities, as long as the mobile connects to the BS providing the strongest received signal power. This property means that cell densification does not degrade the link reliability, and so the area spectral efficiency (ASE) of the network can be increased indefinitely by deploying more BSs. These early papers have resulted in a flurry of follow-on work, e.g. @cite_9 @cite_27 @cite_23 @cite_21 @cite_3 @cite_6 , and see @cite_25 for a survey.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_27", "@cite_23", "@cite_25", "@cite_12" ], "mid": [ "2005108639", "2057540419", "2034420299", "1990810841", "2032712938", "2109830484", "1999509241", "2039688938", "2149170915" ], "abstract": [ "Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate.", "Random spatial models are attractive for modeling heterogeneous cellular networks (HCNs) due to their realism, tractability, and scalability. A major limitation of such models to date in the context of HCNs is the neglect of network traffic and load: all base stations (BSs) have typically been assumed to always be transmitting. Small cells in particular will have a lighter load than macrocells, and so their contribution to the network interference may be significantly overstated in a fully loaded model. 
This paper incorporates a flexible notion of BS load by introducing a new idea of conditionally thinning the interference field. For a K-tier HCN where BSs across tiers differ in terms of transmit power, supported data rate, deployment density, and now load, we derive the coverage probability for a typical mobile, which connects to the strongest BS signal. Conditioned on this connection, the interfering BSs of the i-th tier are assumed to transmit independently with probability p_i, which models the load. Assuming — reasonably — that smaller cells are more lightly loaded than macrocells, the analysis shows that adding such access points to the network always increases the coverage probability. We also observe that fully loaded models are quite pessimistic in terms of coverage.", "In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a particular transmit power, path loss exponent, spatial density, and bias towards admitting mobile users. For example, as compared to macrocells, picocells would usually have lower transmit power, higher path loss exponent (lower antennas), higher spatial density (many picocells per macrocell), and a positive bias so that macrocell users are actively encouraged to use the more lightly loaded picocells. In the present paper we implicitly assume all base stations have full queues; future work should relax this. For this model, we derive the outage probability of a typical user in the whole network or a certain tier, which is equivalently the downlink SINR cumulative distribution function. The results are accurate for all SINRs, and their expressions admit quite simple closed-forms in some plausible special cases. 
We also derive the average ergodic rate of the typical user, and the minimum average user throughput - the smallest value among the average user throughputs supported by one cell in each tier. We observe that neither the number of BSs or tiers changes the outage probability or average ergodic rate in an interference-limited full-loaded HCN with unbiased cell association (no biasing), and observe how biasing alters the various metrics.", "In this letter, we introduce new mathematical frameworks to the computation of coverage probability and average rate of cellular networks, by relying on a stochastic geometry abstraction modeling approach. With the aid of the Gil-Pelaez inversion formula, we prove that coverage and rate can be compactly formulated as a twofold integral for arbitrary per-link power gains. In the interference-limited regime, single-integral expressions are obtained. As a case study, Gamma-distributed per-link power gains are investigated further, and approximated closed-form expressions for coverage and rate in the interference-limited regime are obtained, which shed light on the impact of channel parameters and physical-layer transmission schemes.", "The equivalent-in-distribution (EiD)-based approach to the analysis of single-input-single-output (SISO) cellular networks for transmission over Rayleigh fading channels has recently been introduced [1]. Its rationale relies upon formulating the aggregate other-cell interference in terms of an infinite summation of independent and conditionally distributed Gaussian random variables (RVs). This approach leads to exact integral expressions of the error probability for arbitrary bi-dimensional modulations. In this paper, the EiD-based approach is generalized to the performance analysis of multiple-input-multiple-output (MIMO) cellular networks for transmission over Rayleigh fading channels. 
The proposed mathematical formulation allows us to study a large number of MIMO arrangements, including receive-diversity, spatial-multiplexing, orthogonal space-time block coding, zero-forcing reception and zero-forcing precoding. Depending on the MIMO setup, either exact or approximate integral expressions of the error probability are provided. In the presence of other-cell interference and noise, the error probability is formulated in terms of a two-fold integral. In interference-limited cellular networks, the mathematical framework simplifies to a single integral expression. As a byproduct, the proposed approach enables us to study SISO cellular networks for transmission over Nakagami- @math fading channels. The mathematical analysis is substantiated with the aid of extensive Monte Carlo simulations.", "The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. 
Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities.", "In this paper, we introduce an analytical framework to compute the average rate of downlink heterogeneous cellular networks. The framework leverages recent application of stochastic geometry to other-cell interference modeling and analysis. The heterogeneous cellular network is modeled as the superposition of many tiers of Base Stations (BSs) having different transmit power, density, path-loss exponent, fading parameters and distribution, and unequal biasing for flexible tier association. A long-term averaged maximum biased-received-power tier association is considered. The positions of the BSs in each tier are modeled as points of an independent Poisson Point Process (PPP). Under these assumptions, we introduce a new analytical methodology to evaluate the average rate, which avoids the computation of the Coverage Probability (Pcov) and needs only the Moment Generating Function (MGF) of the aggregate interference at the probe mobile terminal. The distinguishable characteristic of our analytical methodology consists in providing a tractable and numerically efficient framework that is applicable to general fading distributions, including composite fading channels with small- and mid-scale fluctuations. In addition, our method can efficiently handle correlated Log-Normal shadowing with little increase of the computational complexity. 
The proposed MGF-based approach needs the computation of either a single or a two-fold numerical integral, thus reducing the complexity of Pcov-based frameworks, which require, for general fading distributions, the computation of a four-fold integral.", "For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models provide high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions.", "Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. 
Assuming (1) a mobile user connects to the strongest candidate BS, (2) the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, and (3) Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given SINR, adding more tiers and/or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR." ] }
1506.05197
2951823730
We derive a general and closed-form result for the success probability in downlink multiple-antenna (MIMO) heterogeneous cellular networks (HetNets), utilizing a novel Toeplitz matrix representation. This main result, which is equivalently the signal-to-interference ratio (SIR) distribution, includes multiuser MIMO, single-user MIMO and per-tier biasing for @math different tiers of randomly placed base stations (BSs), assuming zero-forcing precoding and perfect channel state information. The large SIR limit of this result admits a simple closed form that is accurate at moderate SIRs, e.g., above 5 dB. These results reveal that the SIR-invariance property of SISO HetNets does not hold for MIMO HetNets; instead the success probability may decrease as the network density increases. We prove that the maximum success probability is achieved by activating only one tier of BSs, while the maximum area spectral efficiency (ASE) is achieved by activating all the BSs. This reveals a unique tradeoff between the ASE and link reliability in multiuser MIMO HetNets. To achieve the maximum ASE while guaranteeing a certain link reliability, we develop efficient algorithms to find the optimal BS densities. It is shown that as the link reliability requirement increases, more BSs and more tiers should be deactivated.
The link reliability vs. ASE tradeoff discussed in this paper is related to the notion of "transmission capacity" in wireless ad hoc networks @cite_39 @cite_41 , with related multi-antenna results such as @cite_10 @cite_31 @cite_20 @cite_14 . In wireless ad hoc networks, to maximize the spatial throughput, i.e., the ASE, while guaranteeing link reliability at a certain SINR, the density of transmitters cannot exceed a certain value, which is called the transmission capacity @cite_41 . Naturally, allowing more simultaneous transmitters increases the spatial reuse efficiency, but the interference at the receivers becomes higher, so the SINR and thus the link reliability decrease, similar to the ASE vs. link reliability tradeoff studied in this paper.
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_39", "@cite_31", "@cite_10", "@cite_20" ], "mid": [ "2031116206", "2963847582", "2095796369", "2150450717", "2137079066", "2102473388" ], "abstract": [ "The tremendous capacity gains promised by space division multiple access (SDMA) depend critically on the accuracy of the transmit channel state information. In the broadcast channel, even without any network interference, it is known that such gains collapse due to interstream interference if the feedback is delayed or low rate. In this paper, we investigate SDMA in the presence of interference from many other simultaneously active transmitters distributed randomly over the network. In particular we consider zero-forcing beamforming in a decentralized (ad hoc) network where each receiver provides feedback to its respective transmitter. We derive closed-form expressions for the outage probability, network throughput, transmission capacity, and average achievable rate and go on to quantify the degradation in network performance due to residual self-interference as a function of key system parameters. One particular finding is that as in the classical broadcast channel, the per-user feedback rate must increase linearly with the number of transmit antennas and SINR (in dB) for the full multiplexing gains to be preserved with limited feedback. We derive the throughput-maximizing number of streams, establishing that single-stream transmission is optimal in most practically relevant settings. In short, SDMA does not appear to be a prudent design choice for interference-limited wireless networks.", "Transmission capacity (TC) is a performance metric for wireless networks that measures the spatial intensity of successful transmissions per unit area, subject to a constraint on the permissible outage probability (where outage occurs when the signal to interference plus noise ratio (SINR) at a receiver is below a threshold). 
This volume gives a unified treatment of the TC framework that has been developed by the authors and their collaborators over the past decade. The mathematical framework underlying the analysis (reviewed in Section 2) is stochastic geometry: Poisson point processes model the locations of interferers, and (stable) shot noise processes represent the aggregate interference seen at a receiver. Section 3 presents TC results (exact, asymptotic, and bounds) on a simple model in order to illustrate a key strength of the framework: analytical tractability yields explicit performance dependence upon key model parameters. Section 4 presents enhancements to this basic model — channel fading, variable link distances (VLD), and multihop. Section 5 presents four network design case studies well-suited to TC: (i) spectrum management, (ii) interference cancellation, (iii) signal threshold transmission scheduling, and (iv) power control. Section 6 studies the TC when nodes have multiple antennas, which provides a contrast vs. classical results that ignore interference.", "In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. 
Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M^(1-2/α), where M is the spreading factor and α > 2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes.", "Receivers with N antennas in single-hop, ad-hoc wireless networks with nodes randomly distributed on an infinite plane with uniform area density are studied. Transmitting nodes have single antennas and transmit simultaneously in the same frequency band with power P that decays with distance via the commonly-used inverse-polynomial model with path-loss exponent (PLE) greater than 2. This model applies to shared spectrum systems where multiple links share the same frequency band. In the interference-limited regime, the average spectral efficiency of a representative link E[C] (b/s/Hz/link) is found to grow as log(N) and linearly with PLE, and its variance decays as 1/N. The average signal-to-interference-plus-noise-ratio (SINR) on a representative link is found to grow faster than linearly with N. With multiple-input-multiple-output (MIMO) links where transmit nodes have multiple antennas without Channel-State-Information, it is found that E[C] in the network can be improved if nodes transmit using the optimum number of antennas compared to the optimum selfish strategy of transmitting equal-power streams from every antenna. The results are extended to random code-division-multiple-access systems where the optimum spreading factor for a given link length is found. 
These results are developed as asymptotic expressions using infinite random matrix theory and are validated by Monte-Carlo simulations.", "This paper derives the outage probability and transmission capacity of ad hoc wireless networks with nodes employing multiple antenna diversity techniques, for a general class of signal distributions. This analysis allows system performance to be quantified for fading or non-fading environments. The transmission capacity is given for interference-limited uniformly random networks on the entire plane with path loss exponent α > 2 in which nodes use: (1) static beamforming through M sectorized antennas, for which the increase in transmission capacity is shown to be Θ(M^2) if the antennas are without sidelobes, but less in the event of a nonzero sidelobe level; (2) dynamic eigenbeamforming (maximal ratio transmission combining), in which the increase is shown to be Θ(M^(2/α)); (3) various transmit antenna selection and receive antenna selection combining schemes, which give appreciable but rapidly diminishing gains; and (4) orthogonal space-time block coding, for which there is only a small gain due to channel hardening, equivalent to Nakagami-m fading for increasing m. It is concluded that in ad hoc networks, static and dynamic beamforming perform best, selection combining performs well but with rapidly diminishing returns with added antennas, and that space-time block coding offers only marginal gains.", "The benefit of multi-antenna receivers is investigated in wireless ad hoc networks, and the main finding is that network throughput can be made to scale linearly with the number of receive antennas N_r even if each transmitting node uses only a single antenna. 
This is in contrast to a large body of prior work in single-user, multiuser, and ad hoc wireless networks that have shown linear scaling is achievable when multiple receive and transmit antennas (i.e., MIMO transmission) are employed, but that throughput increases logarithmically or sublinearly with N_r when only a single transmit antenna (i.e., SIMO transmission) is used. The linear gain is achieved by using the receive degrees of freedom to simultaneously suppress interference and increase the power of the desired signal, and exploiting the subsequent performance benefit to increase the density of simultaneous transmissions instead of the transmission rate. This result is proven in the transmission capacity framework, which presumes single-hop transmissions in the presence of randomly located interferers, but it is also illustrated that the result holds under several relaxations of the model, including imperfect channel knowledge, multihop transmission, and regular networks (i.e., interferers are deterministically located on a grid)." ] }
1506.05529
1948036214
Community detection in online social networks has been a hot research topic in recent years. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously, some of which can share common information and structures. Networks that involve some common users are referred to as multiple "partially aligned networks". In this paper, we want to detect communities of multiple partially aligned networks simultaneously, which is formally defined as the "Mutual Clustering" problem. The "Mutual Clustering" problem is very challenging as it has two important issues to address: (1) how to preserve the network characteristics in mutual community detection? and (2) how to utilize the information in other aligned networks to refine and disambiguate the community structures of the shared users? To solve these two challenges, a novel community detection method, MCD (Mutual Community Detector), is proposed in this paper. MCD can detect social community structures of users in multiple partially aligned networks at the same time with full consideration of (1) the characteristics of each network, and (2) the information of the shared users across aligned networks. Extensive experiments conducted on two real-world partially aligned heterogeneous social networks demonstrate that MCD can solve the "Mutual Clustering" problem very well.
Clustering is a very broad research area that includes various types of clustering problems, e.g., consensus clustering @cite_14 @cite_6 , multi-view clustering @cite_25 @cite_28 , multi-relational clustering @cite_38 , and co-training based clustering @cite_35 , and dozens of papers have been published on these topics. @cite_14 propose a probabilistic consensus clustering method using evidence accumulation, and a Bayesian consensus clustering method is proposed in @cite_2 . Meanwhile, @cite_25 propose to study the multi-view clustering problem, where the attributes of objects are split into two independent subsets. @cite_33 propose to apply multi-view K-Means clustering methods to big data. @cite_38 propose CrossClus, a user-guided method that performs multi-relational clustering under the user's guidance. The multi-view clustering problem is also addressed in a co-training setting in @cite_35 .
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_14", "@cite_33", "@cite_28", "@cite_6", "@cite_2", "@cite_25" ], "mid": [ "2056756258", "2101324110", "2143384329", "2176433868", "", "", "2095763169", "" ], "abstract": [ "Most structured data in real-life applications are stored in relational databases containing multiple semantically linked relations. Unlike clustering in a single table, when clustering objects in relational databases there are usually a large number of features conveying very different semantic information, and using all features indiscriminately is unlikely to generate meaningful results. Because the user knows her goal of clustering, we propose a new approach called CrossClus, which performs multi-relational clustering under user's guidance. Unlike semi-supervised clustering which requires the user to provide a training set, we minimize the user's effort by using a very simple form of user guidance. The user is only required to select one or a small set of features that are pertinent to the clustering goal, and CrossClus searches for other pertinent features in multiple relations. Each feature is evaluated by whether it clusters objects in a similar way with the user specified features. We design efficient and accurate approaches for both feature selection and object clustering. Our comprehensive experiments demonstrate the effectiveness and scalability of CrossClus.", "We propose a spectral clustering algorithm for the multi-view setting where we have access to multiple views of the data, each of which can be independently used for clustering. Our spectral clustering algorithm has a flavor of co-training, which is already a widely used idea in semi-supervised learning. We work on the assumption that the true underlying clustering would assign a point to the same cluster irrespective of the view. Hence, we constrain our approach to only search for the clusterings that agree across the views. 
Our algorithm does not have any hyperparameters to set, which is a major advantage in unsupervised learning. We empirically compare with a number of baseline methods on synthetic and real-world datasets to show the efficacy of the proposed algorithm.", "Clustering ensemble methods produce a consensus partition of a set of data points by combining the results of a collection of base clustering algorithms. In the evidence accumulation clustering (EAC) paradigm, the clustering ensemble is transformed into a pairwise co-association matrix, thus avoiding the label correspondence problem, which is intrinsic to other clustering ensemble schemes. In this paper, we propose a consensus clustering approach based on the EAC paradigm, which is not limited to crisp partitions and fully exploits the nature of the co-association matrix. Our solution determines probabilistic assignments of data points to clusters by minimizing a Bregman divergence between the observed co-association frequencies and the corresponding co-occurrence probabilities expressed as functions of the unknown assignments. We additionally propose an optimization algorithm to find a solution under any double-convex Bregman divergence. Experiments on both synthetic and real benchmark data show the effectiveness of the proposed approach.", "In past decade, more and more data are collected from multiple sources or represented by multiple views, where different views describe distinct perspectives of the data. Although each view could be individually used for finding patterns by clustering, the clustering performance could be more accurate by exploring the rich information among multiple views. Several multi-view clustering methods have been proposed to unsupervised integrate different views of data. However, they are graph based approaches, e.g. based on spectral clustering, such that they cannot handle the large-scale data. 
How to combine these heterogeneous features for unsupervised large-scale data clustering has become a challenging problem. In this paper, we propose a new robust large-scale multi-view clustering method to integrate heterogeneous representations of largescale data. We evaluate the proposed new methods by six benchmark data sets and compared the performance with several commonly used clustering approaches as well as the baseline multi-view clustering methods. In all experimental results, our proposed methods consistently achieve superiors clustering performances.", "", "", "Motivation: In biomedical research a growing number of platforms and technologies are used to measure diverse but related information, and the task of clustering a set of objects based on multiple sources of data arises in several applications. Most current approaches to multisource clustering either independently determine a separate clustering for each data source or determine a single ‘joint’ clustering for all data sources. There is a need for more flexible approaches that simultaneously model the dependence and the heterogeneity of the data sources. Results: We propose an integrative statistical model that permits a separate clustering of the objects for each data source. These separate clusterings adhere loosely to an overall consensus clustering, and hence they are not independent. We describe a computationally scalable Bayesian framework for simultaneous estimation of both the consensus clustering and the source-specific clusterings. We demonstrate that this flexible approach is more robust than joint clustering of all data sources, and is more powerful than clustering each data source independently. We present an application to subtype identification of breast cancer tumor samples using publicly available data from The Cancer Genome Atlas. Availability: R code with instructions and examples is available at", "" ] }
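The evidence-accumulation idea behind @cite_14 can be sketched in a few lines: the base partitions are pooled into a co-association matrix, which sidesteps the label correspondence problem, and consensus clusters are then read off that matrix. The toy partitions, the majority threshold of 0.5, and the connected-components consensus step below are illustrative assumptions, not the exact procedure of the cited papers.

```python
import numpy as np

# Hypothetical base partitions of 6 points from three runs of a base clusterer.
# The second run uses different label names for the same grouping; the
# co-association matrix is unaffected (no label correspondence problem).
partitions = [
    [0, 0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1],
]

n = len(partitions[0])
# Co-association matrix: fraction of partitions placing points i and j together.
C = np.zeros((n, n))
for labels in partitions:
    labels = np.asarray(labels)
    C += (labels[:, None] == labels[None, :]).astype(float)
C /= len(partitions)

# Consensus step (an assumption for this sketch): link points that co-occur
# in a majority of partitions, then take connected components as clusters.
adj = C > 0.5
consensus = -np.ones(n, dtype=int)
cid = 0
for i in range(n):
    if consensus[i] == -1:
        stack = [i]
        while stack:
            j = stack.pop()
            if consensus[j] == -1:
                consensus[j] = cid
                stack.extend(np.nonzero(adj[j])[0].tolist())
        cid += 1
```

Despite the relabeled second partition, the co-association entries agree across runs, and the resulting consensus splits the six points into the two underlying groups.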
1506.05529
1948036214
Community detection in online social networks has been a hot research topic in recent years. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously, some of which can share common information and structures. Networks that involve some common users are named as multiple "partially aligned networks". In this paper, we want to detect communities of multiple partially aligned networks simultaneously, which is formally defined as the "Mutual Clustering" problem. The "Mutual Clustering" problem is very challenging as it has two important issues to address: (1) how to preserve the network characteristics in mutual community detection? and (2) how to utilize the information in other aligned networks to refine and disambiguate the community structures of the shared users? To solve these two challenges, a novel community detection method, MCD (Mutual Community Detector), is proposed in this paper. MCD can detect social community structures of users in multiple partially aligned networks at the same time with full considerations of (1) characteristics of each network, and (2) information of the shared users across aligned networks. Extensive experiments conducted on two real-world partially aligned heterogeneous social networks demonstrate that MCD can solve the "Mutual Clustering" problem very well.
Clustering based community detection in online social networks is a hot research topic and many different techniques have been proposed to optimize certain measures of the results, e.g., the modularity function @cite_1 and the normalized cut @cite_7 . A comprehensive survey of techniques used to detect communities in networks is given in @cite_30 , and a detailed tutorial on spectral clustering is given by Luxburg in @cite_41 . These works are mostly based on homogeneous social networks. However, real-world online social networks contain abundant heterogeneous information generated by users' online social activities. @cite_43 studies ranking-based clustering on heterogeneous networks, while @cite_29 studies ranking-based classification problems on heterogeneous networks. @cite_17 proposes a classification based method for community detection in complex networks, and the community structures in multiplex networks are studied in @cite_36 .
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_41", "@cite_36", "@cite_29", "@cite_1", "@cite_43", "@cite_17" ], "mid": [ "2152430833", "2121947440", "2949364901", "2074617510", "", "2095293504", "2149288670", "2142674578" ], "abstract": [ "Abstract Networks (or graphs) appear as dominant structures in diverse domains, including sociology, biology, neuroscience and computer science. In most of the aforementioned cases graphs are directed — in the sense that there is directionality on the edges, making the semantics of the edges nonsymmetric as the source node transmits some property to the target one but not vice versa. An interesting feature that real networks present is the clustering or community structure property, under which the graph topology is organized into modules commonly called communities or clusters. The essence here is that nodes of the same community are highly similar while on the contrary, nodes across communities present low similarity. Revealing the underlying community structure of directed complex networks has become a crucial and interdisciplinary topic with a plethora of relevant application domains. Therefore, naturally there is a recent wealth of research production in the area of mining directed graphs — with clustering being the primary method sought and the primary tool for community detection and evaluation. The goal of this paper is to offer an in-depth comparative review of the methods presented so far for clustering directed networks along with the relevant necessary methodological background and also related applications. The survey commences by offering a concise review of the fundamental concepts and methodological base on which graph clustering algorithms capitalize on. Then we present the relevant work along two orthogonal classifications. 
The first one is mostly concerned with the methodological principles of the clustering algorithms, while the second one approaches the methods from the viewpoint regarding the properties of a good cluster in a directed network. Further, we present methods and metrics for evaluating graph clustering results, demonstrate interesting application domains and provide promising future research directions.", "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. On the first glance spectral clustering appears slightly mysterious, and it is not obvious to see why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. 
Advantages and disadvantages of the different spectral clustering algorithms are discussed.", "Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.", "", "We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.", "A heterogeneous information network is an information network composed of multiple types of objects. 
Clustering on such a network may lead to better understanding of both hidden structures of the network and the individual role played by every object in each cluster. However, although clustering on homogeneous networks has been studied over decades, clustering on heterogeneous networks has not been addressed until recently. A recent study proposed a new algorithm, RankClus, for clustering on bi-typed heterogeneous networks. However, a real-world network may consist of more than two types, and the interactions among multi-typed objects play a key role at disclosing the rich semantics that a network carries. In this paper, we study clustering of multi-typed heterogeneous networks with a star network schema and propose a novel algorithm, NetClus, that utilizes links across multityped objects to generate high-quality net-clusters. An iterative enhancement method is developed that leads to effective ranking-based clustering in such heterogeneous networks. Our experiments on DBLP data show that NetClus generates more accurate clustering results than the baseline topic model algorithm PLSA and the recently proposed algorithm, RankClus. Further, NetClus generates informative clusters, presenting good ranking and cluster membership information for each attribute object in each net-cluster.", "Clustering data in high dimensions is believed to be a hard problem in general. A number of efficient clustering algorithms developed in recent years address this problem by projecting the data into a lower-dimensional subspace, e.g. via Principal Components Analysis (PCA) or random projections, before clustering. Here, we consider constructing such projections using multiple views of the data, via Canonical Correlation Analysis (CCA). Under the assumption that the views are un-correlated given the cluster label, we show that the separation conditions required for the algorithm to be successful are significantly weaker than prior results in the literature. 
We provide results for mixtures of Gaussians and mixtures of log concave distributions. We also provide empirical support from audio-visual speaker clustering (where we desire the clusters to correspond to speaker ID) and from hierarchical Wikipedia document clustering (where one view is the words in the document and the other is the link structure)." ] }
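The normalized-cut relaxation discussed in @cite_7 and @cite_41 reduces to an eigenvector computation on a graph Laplacian. The toy graph, its edge weights, and the simple sign-based split below are hypothetical choices for illustration; a minimal sketch, not the algorithm of any cited paper.

```python
import numpy as np

# Toy adjacency: two 3-node cliques joined by one weak edge (made-up weights).
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1  # weak bridge between the two groups

# Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(6) - D_inv_sqrt @ W @ D_inv_sqrt

# The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# approximately minimizes the normalized cut; its sign pattern splits the graph.
eigvals, eigvecs = np.linalg.eigh(L)  # eigh returns ascending eigenvalues
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)
```

For this graph the smallest eigenvalue is (numerically) zero, reflecting connectivity, and the Fiedler vector's signs recover the two cliques as the two communities.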
1506.05529
1948036214
Community detection in online social networks has been a hot research topic in recent years. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously, some of which can share common information and structures. Networks that involve some common users are named as multiple "partially aligned networks". In this paper, we want to detect communities of multiple partially aligned networks simultaneously, which is formally defined as the "Mutual Clustering" problem. The "Mutual Clustering" problem is very challenging as it has two important issues to address: (1) how to preserve the network characteristics in mutual community detection? and (2) how to utilize the information in other aligned networks to refine and disambiguate the community structures of the shared users? To solve these two challenges, a novel community detection method, MCD (Mutual Community Detector), is proposed in this paper. MCD can detect social community structures of users in multiple partially aligned networks at the same time with full considerations of (1) characteristics of each network, and (2) information of the shared users across aligned networks. Extensive experiments conducted on two real-world partially aligned heterogeneous social networks demonstrate that MCD can solve the "Mutual Clustering" problem very well.
In recent years, researchers' attention has started to shift to studying multiple heterogeneous social networks simultaneously. @cite_5 is the first work to propose the concepts of anchor links and aligned networks. Across aligned social networks, different social network application problems have been studied, including cross-network link prediction and transfer @cite_42 @cite_23 @cite_44 @cite_12 , emerging network clustering @cite_27 , large-scale network community detection @cite_15 , and inter-network information diffusion and influence maximization @cite_16 .
{ "cite_N": [ "@cite_42", "@cite_44", "@cite_27", "@cite_23", "@cite_5", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "2127989284", "", "2405147195", "", "2047532797", "2075283574", "633744573", "" ], "abstract": [ "Online social networks have gained great success in recent years and many of them involve multiple kinds of nodes and complex relationships. Among these relationships, social links among users are of great importance. Many existing link prediction methods focus on predicting social links that will appear in the future among all users based upon a snapshot of the social network. In real-world social networks, many new users are joining in the service every day. Predicting links for new users are more important. Different from conventional link prediction problems, link prediction for new users are more challenging due to the following reasons: (1) differences in information distributions between new users and the existing active users (i.e., old users); (2) lack of information from the new users in the network. We propose a link prediction method called SCAN-PS (Supervised Cross Aligned Networks link prediction with Personalized Sampling), to solve the link prediction problem for new users with information transferred from both the existing active users in the target network and other source networks through aligned accounts. We proposed a within-target-network personalized sampling method to process the existing active users' information in order to accommodate the differences in information distributions before the intra-network knowledge transfer. SCAN-PS can also exploit information in other source networks, where the user accounts are aligned with the target network. In this way, SCAN-PS could solve the cold start problem when information of these new users is total absent in the target network.", "", "Nowadays, many new social networks offering specific services spring up overnight. 
In this paper, we want to detect communities for emerging networks. Community detection for emerging networks is very challenging as information in emerging networks is usually too sparse for traditional methods to calculate effective closeness scores among users and achieve good community detection results. Meanwhile, users nowadays usually join multiple social networks simultaneously, some of which are developed and can share common information with the emerging networks. Based on both link and attribution information across multiple networks, a new general closeness measure, intimacy, is introduced in this paper. With both micro and macro controls, an effective and efficient method, CAD (Cold stArt community Detector), is proposed to propagate information from developed network to calculate effective intimacy scores among users in emerging networks. Extensive experiments conducted on real-world social networks demonstrate that CAD can perform very well in addressing the emerging network community detection problem.", "", "Online social networks can often be represented as heterogeneous information networks containing abundant information about: who, where, when and what. Nowadays, people are usually involved in multiple social networks simultaneously. The multiple accounts of the same user in different networks are mostly isolated from each other without any connection between them. Discovering the correspondence of these accounts across multiple social networks is a crucial prerequisite for many interesting inter-network applications, such as link recommendation and community analysis using information from multiple networks. In this paper, we study the problem of anchor link prediction across multiple heterogeneous social networks, i.e., discovering the correspondence among different accounts of the same user. 
Unlike most prior work on link prediction and network alignment, we assume that the anchor links are one-to-one relationships (i.e., no two edges share a common endpoint) between the accounts in two social networks, and a small number of anchor links are known beforehand. We propose to extract heterogeneous features from multiple heterogeneous networks for anchor link prediction, including user's social, spatial, temporal and text information. Then we formulate the inference problem for anchor links as a stable matching problem between the two sets of user accounts in two different networks. An effective solution, MNA (Multi-Network Anchoring), is derived to infer anchor links w.r.t. the one-to-one constraint. Extensive experiments on two real-world heterogeneous social networks show that our MNA model consistently outperform other commonly-used baselines on anchor link prediction.", "Social networks have been part of people's daily life and plenty of users have registered accounts in multiple social networks. Interconnections among multiple social networks add a multiplier effect to social applications when fully used. With the sharp expansion of network size, traditional standalone algorithms can no longer support computing on large scale networks while alternatively, distributed and parallel computing become a solution to utilize the data-intensive information hidden in multiple social networks. As such, synergistic partitioning, which takes the relationships among different networks into consideration and focuses on partitioning the same nodes of different networks into same partitions. With that, the partitions containing the same nodes can be assigned to the same server to improve the data locality and reduce communication overhead among servers, which are very important for distributed applications. 
To date, there have been limited studies on multiple large scale network partitioning due to three major challenges: 1) the need to consider relationships across multiple networks given the existence of intricate interactions, 2) the difficulty for standalone programs to utilize traditional partitioning methods, 3) the fact that to generate balanced partitions is NP-complete. In this paper, we propose a novel framework to partition multiple social networks synergistically. In particular, we apply a distributed multilevel k-way partitioning method to divide the first network into k partitions. Based on the given anchor nodes which exist in all the social networks and the partition results of the first network, using MapReduce, we then develop a modified distributed multilevel partitioning method to divide other networks. Extensive experiments on two real data sets demonstrate that our method can significantly outperform baseline independent-partitioning method in accuracy and scalability.", "The influence maximization problem aims at finding a subset of seed users who can maximize the spread of influence in online social networks (OSNs). Existing works mostly focus on one single homogenous network. However, in the real world, OSNs (1) are usually heterogeneous, via which users can influence each others in multiple channels; and (2) share common users, via whom information could propagate across networks.", "" ] }
1506.04767
2253717125
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Other approaches to approximating graphical models include using @math -regularized regression to identify sparse Ising models for Markov networks with binary variables. Another approach proposes a linear programming relaxation coupled with branch and bound to find an optimal approximation. Annealed importance sampling is used in @cite_9 ; see the references therein for Markov chain Monte Carlo based techniques. The performance of a forward-backward greedy search for Markov networks in a high-dimensional setting is studied in @cite_10 . In @cite_7 , an algorithm is proposed that first identifies a variable ordering and then greedily selects parents.
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_7" ], "mid": [ "2251243611", "2950618930", "2131304595" ], "abstract": [ "We present a new sampling approach to Bayesian learning of the Bayesian network structure. Like some earlier sampling methods, we sample linear orders on nodes rather than directed acyclic graphs (DAGs). The key difference is that we replace the usual Markov chain Monte Carlo (MCMC) method by the method of annealed importance sampling (AIS). We show that AIS is not only competitive to MCMC in exploring the posterior, but also superior to MCMC in two ways: it enables easy and efficient parallelization, due to the independence of the samples, and lower-bounding of the marginal likelihood of the model with good probabilistic guarantees. We also provide a principled way to correct the bias due to order-based sampling, by implementing a fast algorithm for counting the linear extensions of a given partial order.", "In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a forward-backward greedy algorithm as applied to general statistical models. As a special case, we then apply this algorithm to learn the structure of a discrete graphical model via neighborhood estimation. As a corollary of our general result, we derive sufficient conditions on the number of samples n, the maximum node-degree d and the problem size p, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for samples scaling as n = Omega(d^2 log(p)), in contrast to existing convex-optimization based algorithms that require a sample complexity of (d^3 log(p)). 
Further, the greedy algorithm only requires a restricted strong convexity condition which is typically milder than irrepresentability assumptions. We corroborate these results using numerical simulations at the end.", "Given a finite set E of random variables, the entropy function h on E is a mapping from the set of all subsets of E into the set of all nonnegative real numbers such that for each A ⊆ E h(A) is the entropy of A . The present paper points out that the entropy function h is a β -function, i.e., a monotone non-decreasing and submodular function with h(O) = 0 and that the pair ( E, h ) is a polymatroid. The polymatroidal structure of a set of random variables induced by the entropy function is fundamental when we deal with the interdependence analysis of random variables such as the information-theoretic correlative analysis, the analysis of multiple-user communication networks, etc. Also, we introduce the notion of the principal partition of a set of random variables by transferring some results in the theory of matroids." ] }
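A minimal sketch of the forward-backward greedy idea studied in @cite_10 , transplanted to plain sparse linear regression for readability (the cited work analyzes general statistical models and discrete graphical models). The data-generating model, the stopping threshold `eps`, and the backward threshold `eps / 2` are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse linear model: the response depends only on features 2 and 4.
n, p = 200, 6
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 2] - 2.0 * X[:, 4] + 0.1 * rng.standard_normal(n)

def rss(S):
    # Residual sum of squares of least squares on feature set S.
    if not S:
        return float(y @ y)
    A = X[:, sorted(S)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

# Forward step: add the feature with the largest RSS drop; backward step:
# remove any feature whose removal barely increases the RSS.
eps = 1.0
S = set()
while True:
    gains = {j: rss(S) - rss(S | {j}) for j in range(p) if j not in S}
    j_best = max(gains, key=gains.get)
    if gains[j_best] < eps:
        break
    S.add(j_best)
    for j in list(S):  # backward pruning
        if rss(S - {j}) - rss(S) < eps / 2:
            S.discard(j)
```

With the strong signal coefficients assumed here, the greedy search recovers exactly the true support {2, 4}.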
1506.04767
2253717125
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
There has been much less work on developing approximations for directed information graphs. In @cite_8 , an algorithm is proposed to identify the best directed spanning tree approximation for directed information graphs. In @cite_1 , several algorithms are introduced for inferring the exact topology; one of them can also be used to compute the best approximation where the only topological constraints are user-specified in-degrees, and it is discussed here as Algorithm 1. Several works have investigated sparse approximations using the lasso and related penalties when the processes are jointly autoregressive with Gaussian noise.
{ "cite_N": [ "@cite_1", "@cite_8" ], "mid": [ "1980934101", "2022332792" ], "abstract": [ "Directed information theory deals with communication channels with feedback. When applied to networks, a natural extension based on causal conditioning is needed. We show here that measures built from directed information theory in networks can be used to assess Granger causality graphs of stochastic processes. We show that directed information theory includes measures such as the transfer entropy, and that it is the adequate information theoretic framework needed for neuroscience applications, such as connectivity inference problems.", "Recently, directed information graphs have been proposed as concise graphical representations of the statistical dynamics among multiple random processes. A directed edge from one node to another indicates that the past of one random process statistically affects the future of another, given the past of all other processes. When the number of processes is large, computing those conditional dependence tests becomes difficult. Also, when the number of interactions becomes too large, the graph no longer facilitates visual extraction of relevant information for decision-making. This work considers approximating the true joint distribution on multiple random processes by another, whose directed information graph has at most one parent for any node. Under a Kullback-Leibler (KL) divergence minimization criterion, we show that the optimal approximate joint distribution can be obtained by maximizing a sum of directed informations. In particular, each directed information calculation only involves statistics among a pair of processes and can be efficiently estimated and given all pairwise directed informations, an efficient minimum weight spanning directed tree algorithm can be solved to find the best tree. We demonstrate the efficacy of this approach using simulated and experimental data. 
In both, the approximations preserve the relevant information for decision-making." ] }
1506.04767
2253717125
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
In our preliminary work @cite_5 , we developed an algorithm to identify the optimal bounded in-degree approximation containing a directed spanning tree subgraph; it appears here as Algorithm 2. A sufficient condition for a greedy search to return near-optimal approximations was also identified in @cite_5 and is presented here as Definition .
{ "cite_N": [ "@cite_5" ], "mid": [ "2030587315" ], "abstract": [ "Modern neuroscientific recording technologies are increasingly generating rich, multimodal data that provide unique opportunities to investigate the intricacies of brain function. However, our ability to exploit the dynamic, interactive interplay among neural processes is limited by the lack of appropriate analysis methods. In this paper, some challenging issues in neuroscience data analysis are described, and some general-purpose approaches to address such challenges are proposed. Specifically, we discuss statistical methodologies with a theme of loss functions, and hierarchical Bayesian inference methodologies from the perspective of constructing optimal mappings. These approaches are demonstrated on both simulated and experimentally acquired neural data sets to assess causal influences and track time-varying interactions among neural processes on a fine time scale." ] }
1506.04723
634848499
We propose a layered street view model to encode both depth and semantic information on street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation over the recently proposed stix-mantics model. Our layers encode semantic classes like ground, pedestrians, vehicles, buildings, and sky in addition to the depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract the appearance features for semantic classes. We use a simple and efficient inference algorithm to jointly estimate both semantic classes and layered depth values. Our method outperforms other competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps.
Our work is related to the general area of semantic segmentation and scene understanding, such as @cite_10 @cite_2 @cite_0 @cite_27 @cite_21 @cite_3 @cite_6 @cite_11 . While earlier approaches were based on hand-designed features, it has been shown recently that using deep neural networks for feature learning leads to better performance on this task @cite_26 @cite_19 @cite_1 @cite_7 .
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_19", "@cite_27", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2022508996", "2105340328", "", "", "", "", "2161236525", "2102605133", "2054279472", "2545985378", "78159342", "100686880" ], "abstract": [ "Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.", "We propose a deep feed-forward neural network architecture for pixel-wise semantic scene labeling. It uses a novel recursive neural network architecture for context propagation, referred to as rCPN. It first maps the local visual features into a semantic space followed by a bottom-up aggregation of local information into a global representation of the entire image. Then a top-down propagation of the aggregated information takes place that enhances the contextual information of each local feature. 
Therefore, the information from every location in the image is propagated to every other location. Experimental results on Stanford background and SIFT Flow datasets show that the proposed method outperforms previous approaches. It is also orders of magnitude faster than previous methods and takes only 0.07 seconds on a GPU for pixel-wise labeling of a 256 x 256 image starting from raw RGB pixel values, given the super-pixel mask that takes an additional 0.3 seconds using an off-the-shelf implementation.", "", "", "", "", "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . 
Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "This paper details a new approach for learning a discriminative model of object classes, incorporating texture, layout, and context information efficiently. The learned model is used for automatic visual understanding and semantic segmentation of photographs. Our discriminative model exploits texture-layout filters, novel features based on textons, which jointly model patterns of texture and their spatial layout. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating the unary classifier in a conditional random field, which (i) captures the spatial interactions between class labels of neighboring pixels, and (ii) improves the segmentation of specific object instances. Efficient training of the model on large datasets is achieved by exploiting both random feature selection and piecewise training methods. 
High classification and segmentation accuracy is demonstrated on four varied databases: (i) the MSRC 21-class database containing photographs of real objects viewed under general lighting conditions, poses and viewpoints, (ii) the 7-class Corel subset and (iii) the 7-class Sowerby database used in (Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 695---702, June 2004), and (iv) a set of video sequences of television shows. The proposed algorithm gives competitive and visually pleasing results for objects that are highly textured (grass, trees, etc.), highly structured (cars, faces, bicycles, airplanes, etc.), and even articulated (body, cow, etc.).", "We propose a method to identify and localize object classes in images. Instead of operating at the pixel level, we advocate the use of superpixels as the basic unit of a class segmentation or pixel localization scheme. To this end, we construct a classifier on the histogram of local features found in each superpixel. We regularize this classifier by aggregating histograms in the neighborhood of each superpixel and then refine our results further by using the classifier in a conditional random field operating on the superpixel graph. Our proposed method exceeds the previously published state-of-the-art on two challenging datasets: Graz-02 and the PASCAL VOC 2007 Segmentation Challenge.", "Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. 
Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.", "In this paper we present Stixmantics, a novel medium-level scene representation for real-time visual semantic scene understanding. Relevant scene structure, motion and object class information is encoded using so-called Stixels as primitive elements. Sparse feature-point trajectories are used to estimate the 3D motion field and to enforce temporal consistency of semantic labels. Spatial label coherency is obtained by using a CRF framework." ] }
1506.04723
634848499
We propose a layered street view model to encode both depth and semantic information on street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation over the recently proposed stix-mantics model. Our layers encode semantic classes like ground, pedestrians, vehicles, buildings, and sky in addition to the depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract the appearance features for semantic classes. We use a simple and efficient inference algorithm to jointly estimate both semantic classes and layered depth values. Our method outperforms other competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps.
The problem of jointly solving semantic segmentation and depth estimation from a stereo camera was addressed in @cite_14 as a unified energy minimization framework. Our work focuses on semantic labeling using an ordering constraint and a smaller set of classes applicable to road scenes. In @cite_17 , a typical road scene is classified into ground, vertical objects, and sky to estimate the geometric layout from a single image. Objects like pedestrians and cars are segmented as vertical objects, which is an under-representation for road scene understanding. In @cite_12 , the scene is modeled using two horizontal curves that divide the image into three regions: top, middle, and bottom.
{ "cite_N": [ "@cite_14", "@cite_12", "@cite_17" ], "mid": [ "2020045638", "2007536425", "" ], "abstract": [ "The problems of dense stereo reconstruction and object class segmentation can both be formulated as Random Field labeling problems, in which every pixel in the image is assigned a label corresponding to either its disparity, or an object class such as road or building. While these two problems are mutually informative, no attempt has been made to jointly optimize their labelings. In this work we provide a flexible framework configured via cross-validation that unifies the two problems and demonstrate that, by resolving ambiguities, which would be present in real world data if the two problems were considered separately, joint optimization of the two problems substantially improves performance. To evaluate our method, we augment the Leuven data set ( http: cms.brookes.ac.uk research visiongroup files Leuven.zip ), which is a stereo video shot from a car driving around the streets of Leuven, with 70 hand labeled object class and disparity maps. We hope that the release of these annotations will stimulate further work in the challenging domain of street-view analysis. Complete source code is publicly available ( http: cms.brookes.ac.uk staff Philip-Torr ale.htm ).", "Dynamic programming (DP) has been a useful tool for a variety of computer vision problems. However its application is usually limited to problems with a one dimensional or low treewidth structure, whereas most domains in vision are at least 2D. In this paper we show how to apply DP for pixel labeling of 2D scenes with simple “tiered” structure. While there are many variations possible, for the applications we consider the following tiered structure is appropriate. An image is first divided by horizontal curves into the top, middle, and bottom regions, and the middle region is further subdivided vertically into subregions. 
Under these constraints a globally optimal labeling can be found using an efficient dynamic programming algorithm. We apply this algorithm to two very different tasks. The first is the problem of geometric class labeling where the goal is to assign each pixel a label such as “sky”, “ground”, and “surface above ground”. The second task involves incorporating simple shape priors for segmentation of an image into the “foreground” and “background” regions.", "" ] }
1506.04723
634848499
We propose a layered street view model to encode both depth and semantic information on street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation over the recently proposed stix-mantics model. Our layers encode semantic classes like ground, pedestrians, vehicles, buildings, and sky in addition to the depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract the appearance features for semantic classes. We use a simple and efficient inference algorithm to jointly estimate both semantic classes and layered depth values. Our method outperforms other competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps.
One popular model for road scenes is the stixel world, which simplifies the world using a ground plane and a set of vertical sticks on the ground representing obstacles @cite_13 . Stixels are a compact and efficient representation for upright objects on the ground. The stixel representation can simply be seen as the computation of two curves: the first runs on the ground plane, enclosing the free space that can be immediately reached without collision, and the second encodes the boundary of the vertical objects. To compute the stixel world, either a depth map from the semi-global stereo matching (SGM) algorithm @cite_25 or a cost volume @cite_24 can be used. As with SGM, dynamic programming (DP) enables a fast implementation for computing the stixels. Recently, @cite_22 demonstrated monocular free-space estimation using appearance cues.
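The stixel curves and SGM share the same 1D dynamic-programming pattern: choose one value per image column while penalizing jumps between neighboring columns. A toy sketch of that pattern follows; the cost array stands in for a stereo-derived cost volume and is illustrative, not the cited implementation.

```python
def dp_curve(cost, smooth=1.0):
    """cost[c][r]: cost of placing the boundary at row r in column c.
    Returns one row index per column, minimizing the sum of per-column
    costs plus a smoothness penalty |r_c - r_{c-1}| between columns."""
    n_cols, n_rows = len(cost), len(cost[0])
    dp = list(cost[0])   # best cost ending at each row of the first column
    back = []            # backpointers for columns 1..n_cols-1
    for c in range(1, n_cols):
        new_dp, ptr = [], []
        for r in range(n_rows):
            # best predecessor row in the previous column
            p = min(range(n_rows), key=lambda q: dp[q] + smooth * abs(q - r))
            new_dp.append(cost[c][r] + dp[p] + smooth * abs(p - r))
            ptr.append(p)
        dp = new_dp
        back.append(ptr)
    # backtrack from the cheapest row in the last column
    r = min(range(n_rows), key=lambda q: dp[q])
    curve = [r]
    for ptr in reversed(back):
        r = ptr[r]
        curve.append(r)
    return curve[::-1]
```

The real stixel computation runs two such programs (free-space curve and object-boundary curve) over much richer costs, but the per-column DP recurrence is the reason it parallelizes and runs in real time.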
{ "cite_N": [ "@cite_24", "@cite_22", "@cite_13", "@cite_25" ], "mid": [ "2047166686", "1966931935", "2161337244", "2117248802" ], "abstract": [ "Mobile robots require object detection and classification for safe and smooth navigation. Stereo vision improves such detection by doubling the views of the scene and by giving indirect access to depth information. This depth information can also be used to reduce the set of candidate detection windows. Up to now, most algorithms compute a depth map to discard unpromising detection windows. We propose a novel approach where a stixel world model is computed directly from the stereo images, without computing an intermediate depth map. We experimentally demonstrate that such approach can considerably reduce the set of candidate detection windows at a fraction of the computation cost of previous approaches.", "In this paper we propose a novel algorithm for estimating the drivable collision-free space for autonomous navigation of on-road and on-water vehicles. In contrast to previous approaches that use stereo cameras or LIDAR, we show a method to solve this problem using a single camera. Inspired by the success of many vision algorithms that employ dynamic programming for efficient inference, we reduce the free space estimation task to an inference problem on a 1D graph, where each node represents a column in the image and its label denotes a position that separates the free space from the obstacles. Our algorithm exploits several image and geometric features based on edges, color, and homography to define potential functions on the 1D graph, whose parameters are learned through structured SVM. We show promising results on the challenging KITTI dataset as well as video collected from boats.", "Ambitious driver assistance for complex urban scenarios demands a complete awareness of the situation, including all moving and stationary objects that limit the free space. 
Recent progress in real-time dense stereo vision provides precise depth information for nearly every pixel of an image. This rises new questions: How can one efficiently analyze half a million disparity values of next generation imagers? And how can one find all relevant obstacles in this huge amount of data in real-time? In this paper we build a medium-level representation named \"stixel-world\". It takes into account that the free space in front of vehicles is limited by objects with almost vertical surfaces. These surfaces are approximated by adjacent rectangular sticks of a certain width and height. The stixel-world turns out to be a compact but flexible representation of the three-dimensional traffic situation that can be used as the common basis for the scene understanding tasks of driver assistance and autonomous systems.", "This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (Ml)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 seconds on typical test images. 
An in depth evaluation of the Ml-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems." ] }
1506.04723
634848499
We propose a layered street view model to encode both depth and semantic information on street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation over the recently proposed stix-mantics model. Our layers encode semantic classes like ground, pedestrians, vehicles, buildings, and sky in addition to the depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract the appearance features for semantic classes. We use a simple and efficient inference algorithm to jointly estimate both semantic classes and layered depth values. Our method outperforms other competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps.
Stix-mantics @cite_11 , a recently introduced model, offers more flexibility than stixels. Instead of having only one stixel per column, it allows multiple segments along every column of the image and combines nearby segments to form superpixel-style entities with better geometric meaning. Semantic class labeling is then addressed using these stixel-inspired superpixels.
{ "cite_N": [ "@cite_11" ], "mid": [ "100686880" ], "abstract": [ "In this paper we present Stixmantics, a novel medium-level scene representation for real-time visual semantic scene understanding. Relevant scene structure, motion and object class information is encoded using so-called Stixels as primitive elements. Sparse feature-point trajectories are used to estimate the 3D motion field and to enforce temporal consistency of semantic labels. Spatial label coherency is obtained by using a CRF framework." ] }
1506.04723
634848499
We propose a layered street view model to encode both depth and semantic information on street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation over the recently proposed stix-mantics model. Our layers encode semantic classes like ground, pedestrians, vehicles, buildings, and sky in addition to the depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract the appearance features for semantic classes. We use a simple and efficient inference algorithm to jointly estimate both semantic classes and layered depth values. Our method outperforms other competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps.
We focus on jointly obtaining layer-aware semantic labels and depths from street-view images. Our work is closely related to many existing algorithms in vision, most notably tiered scene labeling @cite_12 , joint semantic segmentation and depth estimation @cite_14 , stixels, and more recently stix-mantics @cite_11 . Our approach achieves real-time processing speed and outperforms the competing algorithm @cite_14 in accuracy. We achieve this performance without using explicit depth estimation or temporal constraints, which can be obtained using visual odometry. Similar to the layered street view constraint, Manhattan constraints have proven useful in indoor scene understanding @cite_8 @cite_16 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2020045638", "2116851763", "2083347703", "2007536425", "100686880" ], "abstract": [ "The problems of dense stereo reconstruction and object class segmentation can both be formulated as Random Field labeling problems, in which every pixel in the image is assigned a label corresponding to either its disparity, or an object class such as road or building. While these two problems are mutually informative, no attempt has been made to jointly optimize their labelings. In this work we provide a flexible framework configured via cross-validation that unifies the two problems and demonstrate that, by resolving ambiguities, which would be present in real world data if the two problems were considered separately, joint optimization of the two problems substantially improves performance. To evaluate our method, we augment the Leuven data set ( http: cms.brookes.ac.uk research visiongroup files Leuven.zip ), which is a stereo video shot from a car driving around the streets of Leuven, with 70 hand labeled object class and disparity maps. We hope that the release of these annotations will stimulate further work in the challenging domain of street-view analysis. Complete source code is publicly available ( http: cms.brookes.ac.uk staff Philip-Torr ale.htm ).", "We study the problem of generating plausible interpretations of a scene from a collection of line segments automatically extracted from a single indoor image. We show that we can recognize the three dimensional structure of the interior of a building, even in the presence of occluding objects. Several physically valid structure hypotheses are proposed by geometric reasoning and verified to find the best fitting model to line segments, which is then converted to a full 3D model. Our experiments demonstrate that our structure recovery from line segments is comparable with methods using full image appearance. 
Our approach shows how a set of rules describing geometric constraints between groups of segments can be used to prune scene interpretation hypotheses and to generate the most plausible interpretation.", "This paper addresses scene understanding in the context of a moving camera, integrating semantic reasoning ideas from monocular vision with 3D information available through structure-from-motion. We combine geometric and photometric cues in a Bayesian framework, building on recent successes leveraging the indoor Manhattan assumption in monocular vision. We focus on indoor environments and show how to extract key boundaries while ignoring clutter and decorations. To achieve this we present a graphical model that relates photometric cues learned from labeled data, stereo photo-consistency across multiple views, and depth cues derived from structure-from-motion point clouds. We show how to solve MAP inference using dynamic programming, allowing exact, global inference in ∼100 ms (in addition to feature computation of under one second) without using specialized hardware. Experiments show our system out-performing the state-of-the-art.", "Dynamic programming (DP) has been a useful tool for a variety of computer vision problems. However its application is usually limited to problems with a one dimensional or low treewidth structure, whereas most domains in vision are at least 2D. In this paper we show how to apply DP for pixel labeling of 2D scenes with simple “tiered” structure. While there are many variations possible, for the applications we consider the following tiered structure is appropriate. An image is first divided by horizontal curves into the top, middle, and bottom regions, and the middle region is further subdivided vertically into subregions. Under these constraints a globally optimal labeling can be found using an efficient dynamic programming algorithm. We apply this algorithm to two very different tasks. 
The first is the problem of geometric class labeling where the goal is to assign each pixel a label such as “sky”, “ground”, and “surface above ground”. The second task involves incorporating simple shape priors for segmentation of an image into the “foreground” and “background” regions.", "In this paper we present Stixmantics, a novel medium-level scene representation for real-time visual semantic scene understanding. Relevant scene structure, motion and object class information is encoded using so-called Stixels as primitive elements. Sparse feature-point trajectories are used to estimate the 3D motion field and to enforce temporal consistency of semantic labels. Spatial label coherency is obtained by using a CRF framework." ] }
1506.04854
2276776862
The operating status of power systems is influenced by growing varieties of factors, resulting from the developing sizes and complexity of power systems. In this situation, the model-based methods need to be revisited. A data-driven method, as the novel alternative on the other hand, is proposed in this paper. It reveals the correlations between the factors and the system status through statistical properties of data. An augmented matrix as the data source is the key trick for this method and is formulated by two parts: 1) status data as the basic part; and 2) factor data as the augmented part. The random matrix theory is applied as the mathematical framework. The linear eigenvalue statistics, such as the mean spectral radius, are defined to study data correlations through large random matrices. Compared with model-based methods, the proposed method is inspired by a pure statistical approach without a prior knowledge of operation and interaction mechanism models for power systems and factors. In general, this method is direct in analysis, robust against bad data, universal to various factors, and applicable for real-time analysis. A case study based on the standard IEEE 118-bus system validates the proposed method.
Current research on correlation analysis mainly relies on model-based methods, for which mechanism models are essential preconditions. These mechanism models are established based on assumptions and simplifications, and are tailored to specific power systems and factors. Lian studied the effect of dynamic load characteristics on voltage stability and sensitivity in power systems using the P--V and Q--V curves @cite_4 . In Lian's method, the power system is reduced to an equivalent decentralized system and dynamic loads are approximated by differential equations; these simplifications increase the complexity and inaccuracy of the analysis. Parinya proposed a stochastic stability index to investigate the small-signal stability of power systems incorporating wind power @cite_3 . There, the state-space equations and energy functions must be rewritten whenever the grid changes, and the test system is too small in scale to be convincing.
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2026738767", "1966447884" ], "abstract": [ "The voltage stability and the sensitivity of power grid are presented by introducing the P-V and the Q-V curves based on the dynamic load. Firstly, a power grid is exactly rewritten into a simple equivalent decentralized system by the power flow calculation. Next, the dynamic load is modeled in a higher order polynomial form identified from the data at a load-bus observed in Sweden. Then, the system and the load P-V and Q-V curves are constructed, and finally the reactive power is shown to be more sensitive to the voltage stability than the active one by the sensitivity analysis comparing with the case of data observed in Japan.", "The stochastic stability index (SSI) is proposed in this paper to investigate the small signal stability (SSS) of the power system incorporating wind power. The SSI is developed using the first integral energy function method based on Lyapunov's stability and the theory of stochastic stability. The model of induction generator wind turbine is modified and applied for the SSS study. This proposed method can investigate impact of stochastic wind power quantitatively while the general deterministic methods cannot." ] }
1506.04854
2276776862
The operating status of power systems is influenced by a growing variety of factors, a result of the increasing size and complexity of power systems. In this situation, model-based methods need to be revisited. This paper proposes a data-driven method as a novel alternative. It reveals the correlations between the factors and the system status through statistical properties of data. An augmented matrix, the key construct of this method, serves as the data source and is formulated from two parts: 1) status data as the basic part; and 2) factor data as the augmented part. Random matrix theory is applied as the mathematical framework. Linear eigenvalue statistics, such as the mean spectral radius, are defined to study data correlations through large random matrices. In contrast to model-based methods, the proposed method follows a purely statistical approach without prior knowledge of the operation and interaction mechanism models of power systems and factors. In general, this method is direct in analysis, robust against bad data, universal to various factors, and applicable to real-time analysis. A case study based on the standard IEEE 118-bus system validates the proposed method.
In addition, some data-driven methods for correlation analysis have been proposed recently, such as principal component analysis, artificial neural networks, and support vector machines @cite_12 . Eltigani utilizes artificial neural networks (ANNs) to assess transient stability @cite_21 . In this approach, the power system is described by an equivalent single machine infinite bus system, which cannot accurately reflect the actual state of the system. Moreover, as the system scales up and the number of training samples increases, the training of ANNs progressively slows down.
{ "cite_N": [ "@cite_21", "@cite_12" ], "mid": [ "2083651719", "2027069555" ], "abstract": [ "This paper aims at verifying the accuracy of Artificial Neural Networks (ANN) in assessing the transient stability of a single machine infinite bus system. The fault critical clearing time obtained through ANN is compared to the results of the conventional equal area criterion method. The multilayer feedforward artificial neural network concept is applied to the system. The training of the ANN is achieved through the supervised learning; and the back propagation technique is used as a learning method in order to minimize the training error. The training data set is generated using two steps process. First, the equal area criterion is used to determine the critical angle. After that the swing equation is solved using the point-to-point method up to the critical angle to determine the critical clearing time. Then the stability of the system is verified. As a result we find that the critical clearing time is predicted with slightly less accuracy using ANN compared to the conventional methods for the same input data sets unless the ANN is well trained.", "This paper studies the fundamental dimensionality of synchrophasor data, and proposes an online application for early event detection using the reduced dimensionality. First, the dimensionality of the phasor measurement unit (PMU) data under both normal and abnormal conditions is analyzed. This suggests an extremely low underlying dimensionality despite the large number of the raw measurements. An early event detection algorithm based on the change of core subspaces of the PMU data at the occurrence of an event is proposed. Theoretical justification for the algorithm is provided using linear dynamical system theory. Numerical simulations using both synthetic and realistic PMU data are conducted to validate the proposed algorithm." ] }
1506.04579
1817277359
We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at this https URL .
Deep convolutional neural networks (CNNs) @cite_1 @cite_5 @cite_0 have become powerful tools not only for whole-image classification, but also for object detection and semantic segmentation @cite_20 @cite_14 @cite_37 . This success has been attributed to both the large capacity and the effective training of CNNs. Following the scheme of @cite_16 , CNNs achieve state-of-the-art results on object detection and segmentation tasks. As a caveat, even though a single pass through the networks used in these systems approaches or already exceeds video frame rate for an individual patch, these approaches require classifying hundreds or thousands of patches per image, and thus are still slow. @cite_24 @cite_23 reduce the computation by applying convolution to the whole image once, and then pooling features from the final feature map of the network for each region proposal or pixel, achieving comparable or even better results. Yet these methods still fall short of including whole-image context and only classify patches or pixels locally. Our ParseNet is built upon the fully convolutional network architecture @cite_19 with a strong emphasis on including contextual information in a simple manner.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_1", "@cite_0", "@cite_24", "@cite_19", "@cite_23", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "", "", "2618530766", "", "2179352600", "2952632681", "", "", "2088049833", "2102605133" ], "abstract": [ "", "", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "", "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. 
Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "", "", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. 
Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn." ] }
1506.04395
1924985727
We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances of deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. Then a deep recurrent model, building on long short-term memory (LSTM), is developed to robustly recognize the generated CNN sequences, departing from most existing approaches recognising each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) It can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN feature is robust to various image distortions; (iii) it retains the explicit order information in word image, which is essential to discriminate word strings; (iv) the model does not depend on pre-defined dictionary, and it can process unknown words and arbitrary strings. Codes for the DTRN will be available.
Previous work mainly focuses on developing a powerful character classifier with manually designed image features. A HOG feature with random ferns was developed for character classification in @cite_7 . Neumann and Matas proposed new oriented strokes for character detection and classification @cite_21 . The performance of these methods is limited by their low-level features. In @cite_22 , a mid-level representation of characters was developed by proposing a discriminative feature pooling method. Similarly, Yao proposed the mid-level Strokelets to describe the parts of characters @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_22", "@cite_7" ], "mid": [ "1978729128", "2056435187", "2043075591", "1998042868" ], "abstract": [ "Driven by the wide range of applications, scene text detection and recognition have become active research topics in computer vision. Though extensively studied, localizing and reading text in uncontrolled environments remain extremely challenging, due to various interference factors. In this paper, we propose a novel multi-scale representation for scene text recognition. This representation consists of a set of detectable primitives, termed as strokelets, which capture the essential substructures of characters at different granularities. Strokelets possess four distinctive advantages: (1) Usability: automatically learned from bounding box labels, (2) Robustness: insensitive to interference factors, (3) Generality: applicable to variant languages, and (4) Expressivity: effective at describing characters. Extensive experiments on standard benchmarks verify the advantages of strokelets and demonstrate the effectiveness of the proposed algorithm for text recognition.", "An unconstrained end-to-end text localization and recognition method is presented. The method introduces a novel approach for character detection and recognition which combines the advantages of sliding-window and connected component methods. Characters are detected and recognized as image regions which contain strokes of specific orientations in a specific relative position, where the strokes are efficiently detected by convolving the image gradient field with a set of oriented bar filters. Additionally, a novel character representation efficiently calculated from the values obtained in the stroke detection phase is introduced. The representation is robust to shift at the stroke level, which makes it less sensitive to intra-class variations and the noise induced by normalizing character size and positioning. 
The effectiveness of the representation is demonstrated by the results achieved in the classification of real-world characters using an euclidian nearest-neighbor classifier trained on synthetic data in a plain form. The method was evaluated on a standard dataset, where it achieves state-of-the-art results in both text localization and recognition.", "We present a new feature representation method for scene text recognition problem, particularly focusing on improving scene character recognition. Many existing methods rely on Histogram of Oriented Gradient (HOG) or part-based models, which do not span the feature space well for characters in natural scene images, especially given large variation in fonts with cluttered backgrounds. In this work, we propose a discriminative feature pooling method that automatically learns the most informative sub-regions of each scene character within a multi-class classification framework, whereas each sub-region seamlessly integrates a set of low-level image features through integral images. The proposed feature representation is compact, computationally efficient, and able to effectively model distinctive spatial structures of each individual character class. Extensive experiments conducted on challenging datasets (Chars74K, ICDAR'03, ICDAR'11, SVT) show that our method significantly outperforms existing methods on scene character classification and scene text recognition tasks.", "This paper focuses on the problem of word detection and recognition in natural images. The problem is significantly more challenging than reading text in scanned documents, and has only recently gained attention from the computer vision community. Sub-components of the problem, such as text detection and cropped image word recognition, have been studied in isolation [7, 4, 20]. However, what is unclear is how these recent approaches contribute to solving the end-to-end problem of word recognition. 
We fill this gap by constructing and evaluating two systems. The first, representing the de facto state-of-the-art, is a two stage pipeline consisting of text detection followed by a leading OCR engine. The second is a system rooted in generic object recognition, an extension of our previous work in [20]. We show that the latter approach achieves superior performance. While scene text recognition has generally been treated with highly domain-specific methods, our results demonstrate the suitability of applying generic computer vision methods. Adopting this approach opens the door for real world scene text recognition to benefit from the rapid advances that have been taking place in object recognition." ] }
1506.04395
1924985727
We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances of deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. Then a deep recurrent model, building on long short-term memory (LSTM), is developed to robustly recognize the generated CNN sequences, departing from most existing approaches recognising each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) It can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN feature is robust to various image distortions; (iii) it retains the explicit order information in word image, which is essential to discriminate word strings; (iv) the model does not depend on pre-defined dictionary, and it can process unknown words and arbitrary strings. Codes for the DTRN will be available.
Recent advances in DNNs for image representation have encouraged the development of more powerful character classifiers, leading to state-of-the-art performance on this task. The pioneering work was done by LeCun, who designed a CNN for isolated handwritten digit recognition @cite_20 . A two-layer CNN system was proposed for both character detection and classification in @cite_2 . The PhotoOCR system employs a five-layer DNN for character recognition @cite_3 . Similarly, Jaderberg @cite_23 proposed novel deep features by employing a Maxout CNN model for learning common features, which were subsequently used for a number of different tasks, such as character classification, location optimization, and language model learning.
{ "cite_N": [ "@cite_3", "@cite_23", "@cite_20", "@cite_2" ], "mid": [ "2122221966", "70975097", "2310919327", "1607307044" ], "abstract": [ "We describe Photo OCR, a system for text extraction from images. Our particular focus is reliable text extraction from smartphone imagery, with the goal of text recognition as a user input modality similar to speech recognition. Commercially available OCR performs poorly on this task. Recent progress in machine learning has substantially improved isolated character classification, we build on this progress by demonstrating a complete OCR system using these techniques. We also incorporate modern data center-scale distributed language modelling. Our approach is capable of recognizing text in a variety of challenging imaging conditions where traditional OCR systems fail, notably in the presence of substantial blur, low resolution, low contrast, high image noise and other distortions. It also operates with low latency, mean processing time is 600 ms per image. We evaluate our system on public benchmark datasets for text extraction and outperform all previously reported results, more than halving the error rate on multiple benchmarks. The system is currently in use in many applications at Google, and is available as a user input modality in Google Translate for Android.", "The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. 
Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures.", "", "Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003." ] }
1506.04395
1924985727
We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances of deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. Then a deep recurrent model, building on long short-term memory (LSTM), is developed to robustly recognize the generated CNN sequences, departing from most existing approaches recognising each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) It can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN feature is robust to various image distortions; (iii) it retains the explicit order information in word image, which is essential to discriminate word strings; (iv) the model does not depend on pre-defined dictionary, and it can process unknown words and arbitrary strings. Codes for the DTRN will be available.
These approaches treat isolated character classification and subsequent word recognition separately, and thus do not unleash the full potential of the word context information in recognition. They often design complicated optimization algorithms to infer the word string by incorporating multiple additional visual cues, or require a number of post-processing steps to refine the results @cite_23 @cite_3 . Our model differs significantly from them by exploring the recurrence of deep features, allowing it to leverage the underlying context information to directly recognise the whole word image as a deep sequence, without a language model or any kind of post-processing.
{ "cite_N": [ "@cite_3", "@cite_23" ], "mid": [ "2122221966", "70975097" ], "abstract": [ "We describe Photo OCR, a system for text extraction from images. Our particular focus is reliable text extraction from smartphone imagery, with the goal of text recognition as a user input modality similar to speech recognition. Commercially available OCR performs poorly on this task. Recent progress in machine learning has substantially improved isolated character classification, we build on this progress by demonstrating a complete OCR system using these techniques. We also incorporate modern data center-scale distributed language modelling. Our approach is capable of recognizing text in a variety of challenging imaging conditions where traditional OCR systems fail, notably in the presence of substantial blur, low resolution, low contrast, high image noise and other distortions. It also operates with low latency, mean processing time is 600 ms per image. We evaluate our system on public benchmark datasets for text extraction and outperform all previously reported results, more than halving the error rate on multiple benchmarks. The system is currently in use in many applications at Google, and is available as a user input modality in Google Translate for Android.", "The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. 
Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures." ] }
1506.04395
1924985727
We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances of deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. Then a deep recurrent model, building on long short-term memory (LSTM), is developed to robustly recognize the generated CNN sequences, departing from most existing approaches recognising each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) It can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN feature is robust to various image distortions; (iii) it retains the explicit order information in word image, which is essential to discriminate word strings; (iv) the model does not depend on pre-defined dictionary, and it can process unknown words and arbitrary strings. Codes for the DTRN will be available.
Our approach is partially motivated by the recent success of deep models for image captioning, where the combination of a CNN and an RNN has been applied @cite_24 @cite_31 @cite_33 . These works explored a CNN for computing a deep feature from the whole image, followed by an RNN that decodes it into a sequence of words. ReNet @cite_1 was proposed to compute the deep image feature directly by using four RNNs that sweep across the image. Generally, these models do not explicitly store strict spatial information, since they rely on a global image representation. By contrast, our word images include the explicit order information of their strings, which is a crucial cue for discriminating a word. Our goal here is to derive a set of robust sequential features from the word image, and to design a new model that bridges image representation learning and the sequence labelling task.
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_1", "@cite_33" ], "mid": [ "2951805548", "2951183276", "1664573881", "1557952530" ], "abstract": [ "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. 
Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "In this paper, we propose a deep neural network architecture for object recognition based on recurrent neural networks. The proposed network, called ReNet, replaces the ubiquitous convolution+pooling layer of the deep convolutional neural network with four recurrent neural networks that sweep horizontally and vertically in both directions across the image. We evaluate the proposed ReNet on three widely-used benchmark datasets; MNIST, CIFAR-10 and SVHN. The result suggests that ReNet is a viable alternative to the deep convolutional neural network, and that further investigation is needed.", "The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. 
Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets." ] }
1506.04188
2951820359
In the Russian cards problem, Alice, Bob and Cath draw @math , @math and @math cards, respectively, from a publicly known deck. Alice and Bob must then communicate their cards to each other without Cath learning who holds a single card. Solutions in the literature provide weak security, where Cath does not know with certainty who holds each card that is not hers, or perfect security, where Cath learns no probabilistic information about who holds any given card from Alice and Bob's exchange. We propose an intermediate notion, which we call @math -strong security, where the probabilities perceived by Cath may only change by a factor of @math . We then show that a mild variant of the so-called geometric strategy gives @math -strong safety for arbitrarily small @math and appropriately chosen values of @math .
The Russian cards problem may be traced back to Kirkman @cite_10 , but recently it has received renewed attention after its inclusion in the 2000 Mathematics Olympiad @cite_5 . One of the solutions for deals of distribution type @math uses the Fano plane, a special case of a combinatorial design, which can also be used for many other distribution types @cite_7 . Another solution uses modular arithmetic, which can also be generalized for many distribution types where the eavesdropper holds one card @cite_4 . These solutions use only two announcements, but some cases are known to require more. A solution using three announcements for @math is reported in @cite_0 , and a four-step protocol for @math and @math is presented in @cite_2 . The solution we will work with in this paper is similar to the one reported in @cite_6 , which also takes two steps. The Russian cards problem has also been generalized to a larger number of agents in @cite_11 @cite_3 .
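The modular-arithmetic solution mentioned above can be checked by brute force on the classic seven-card deal (Alice 3, Bob 3, Cath 1): Alice announces the sum of her cards modulo 7. The sketch below is an illustrative reconstruction of that rule, not code from any of the cited papers.

```python
from itertools import combinations

DECK = range(7)  # cards 0..6; Alice holds 3, Bob 3, Cath 1

def announcement(alice):
    """Alice publicly announces the sum of her cards mod 7."""
    return sum(alice) % 7

def candidates(known, s):
    """Possible 3-card hands for Alice among the cards 'known' lacks."""
    unknown = [c for c in DECK if c not in known]
    return [set(h) for h in combinations(unknown, 3) if sum(h) % 7 == s]

# Check every deal: Bob always learns Alice's hand (informative), while
# Cath never learns the owner of any card she does not hold (weak safety).
informative, safe = True, True
for alice in combinations(DECK, 3):
    for bob in combinations([c for c in DECK if c not in alice], 3):
        cath = [c for c in DECK if c not in alice and c not in bob]
        s = announcement(alice)
        # Bob's view: exactly one candidate hand must remain.
        informative &= candidates(bob, s) == [set(alice)]
        # Cath's view: each card she does not hold must be Alice's in
        # some candidate hand and Bob's in another.
        cands = candidates(cath, s)
        for z in DECK:
            if z not in cath:
                safe &= any(z in h for h in cands) and any(z not in h for h in cands)

print(informative, safe)  # True True
```

Informativity follows because, from Bob's view, exactly one of the four cards he cannot see can be Cath's once the announced sum is fixed.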
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_6", "@cite_3", "@cite_0", "@cite_2", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "1608583290", "165070137", "2010404638", "1551579511", "", "2073060670", "", "1964318830", "2027785398" ], "abstract": [ "Consider three players Alice, Bob and Cath who hold a, b and c cards, respectively, from a deck of d = a+b+c cards. The cards are all different and players only know their own cards. Suppose Alice and Bob wish to communicate their cards to each other without Cath learning whether Alice or Bob holds a specific card. Considering the cards as consecutive natural numbers 0, 1, . . . , we investigate general conditions for when Alice or Bob can safely announce the sum of the cards they hold modulo an appropriately chosen integer. We demonstrate that this holds whenever a, b > 2 and c = 1. Because Cath holds a single card, this also implies that Alice and Bob will learn the card deal from the other player’s announcement.", "Two parties A and B select a cards and b cards from a known deck and a third party C receives the remaining c cards. We consider methods whereby A can, in a single message, publicly inform B of her hand without C learning any card held by A or by B. Conditions on a, b, c are given for the existence of an appropriate message.", "In the generalized Russian cards problem, the three players Alice, Bob and Cath draw @math a , b and @math c cards, respectively, from a deck of @math a + b + c cards. Players only know their own cards and what the deck of cards is. Alice and Bob are then required to communicate their hand of cards to each other by way of public messages. For a natural number @math k , the communication is said to be @math k -safe if Cath does not learn whether or not Alice holds any given set of at most @math k cards that are not Cath's, a notion originally introduced as weak @math k -security by Swanson and Stinson. 
An elegant solution by Atkinson views the cards as points in a finite projective plane. We propose a general solution in the spirit of Atkinson's, although based on finite vector spaces rather than projective planes, and call it the geometric protocol'. Given arbitrary @math c , k > 0 , this protocol gives an informative and @math k -safe solution to the generalized Russian cards problem for infinitely many values of @math ( a , b , c ) with @math b = O ( a c ) . This improves on the collection of parameters for which solutions are known. In particular, it is the first solution which guarantees @math k -safety when Cath has more than one card.", "We consider the generic problem of Secure Aggregation of Distributed Information (SADI), where several agents acting as a team have information distributed amongst them, modelled by means of a publicly known deck of cards distributed amongst the agents, so that each of them knows only her cards. The agents have to exchange and aggregate the information about how the cards are distributed amongst them by means of public announcements over insecure communication channels, intercepted by an adversary \"eavesdropper\", in such a way that the adversary does not learn who holds any of the cards. We present a combinatorial construction of protocols that provides a direct solution of a class of SADI problems and develop a technique of iterated reduction of SADI problems to smaller ones which are eventually solvable directly. We show that our methods provide a solution to a large class of SADI problems, including all SADI problems with sufficiently large size and sufficiently balanced card distributions.", "", "In the generalized Russian cards problem, Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of size a+b+c. Alice and Bob must then communicate their entire hand to each other, without Cath learning the owner of a single card she does not hold. 
Unlike many traditional problems in cryptography, however, they are not allowed to encode or hide the messages they exchange from Cath. The problem is then to find methods through which they can achieve this. We propose a general four-step solution based on finite vector spaces, and call it the ''colouring protocol'', as it involves colourings of lines. Our main results show that the colouring protocol may be used to solve the generalized Russian cards problem in cases where a is a power of a prime, c=O(a^2) and b=O(c^2). This improves substantially on the set of parameters for which solutions are known to exist; in particular, it had not been shown previously that the problem could be solved in cases where the eavesdropper has more cards than one of the communicating players.", "", "", "This paper investigates Russian Cards problem for the purpose of unconditional secure communication. First, a picking rule and deleting rule as well as safe communication condition are given to deal with the problem with 3 players and 7 cards. Further, the problem is generalized to tackle n players and n(n−1)+1 cards. A new picking rule for constructing the announcement is presented, and a new deleting rule for players to determine each other’s cards is formalized. Moreover, the safe communication condition is proved. In addition, to illustrate the approach, an example for 5 players and 21 cards is presented in detail." ] }
1506.04188
2951820359
In the Russian cards problem, Alice, Bob and Cath draw @math , @math and @math cards, respectively, from a publicly known deck. Alice and Bob must then communicate their cards to each other without Cath learning who holds a single card. Solutions in the literature provide weak security, where Cath does not know with certainty who holds each card that is not hers, or perfect security, where Cath learns no probabilistic information about who holds any given card from Alice and Bob's exchange. We propose an intermediate notion, which we call @math -strong security, where the probabilities perceived by Cath may only change by a factor of @math . We then show that a mild variant of the so-called geometric strategy gives @math -strong safety for arbitrarily small @math and appropriately chosen values of @math .
However, while the protocols mentioned above provide unconditionally secure solutions to the Russian cards problem, in the sense that the eavesdropper cannot know with certainty who holds a given card, this does not mean that she cannot guess this information correctly with high probability. To this end, stronger notions of security are studied in @cite_8 . There, a distinction is made between weak and perfect security; in perfectly secure solutions, Cath does not acquire any probabilistic information about the ownership of any specific card. All of the above solutions provide weak security in this sense, but Swanson and Stinson show how designs may be used to achieve perfect security, an idea further developed in @cite_13 .
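The gap between weak and perfect security can be made concrete on the seven-card sum-announcement protocol. Assuming a uniform random deal, Alice's hand is uniform over the candidate hands consistent with Cath's card and the announcement, so Cath's posterior that Alice holds a given card can drift away from the prior 1/2. This is an illustrative computation under those assumptions, not taken from the cited papers.

```python
from itertools import combinations

DECK = range(7)

def posterior(cath_card, s, z):
    """P(Alice holds card z | Cath holds cath_card, announced sum s mod 7),
    assuming a uniform deal, so Alice's hand is uniform over the 3-subsets
    of the non-Cath cards whose sum is s mod 7."""
    unknown = [c for c in DECK if c != cath_card]
    cands = [h for h in combinations(unknown, 3) if sum(h) % 7 == s]
    return sum(z in h for h in cands) / len(cands)

# Prior: any non-Cath card is Alice's with probability 3/6 = 1/2.
# With Cath holding 0 and announcement 0, the candidates are {1,2,4} and
# {3,5,6}, so the posterior for card 1 stays at 1/2 ...
print(posterior(0, 0, 1))  # 0.5
# ... but with announcement 1 the candidates are {1,2,5}, {1,3,4}, {4,5,6},
# and card 1 is now Alice's with probability 2/3: weak, but not perfect,
# security.
print(posterior(0, 1, 1))
```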
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "1596450573", "2141368734" ], "abstract": [ "In the generalized Russian cards problem, we have a card deck @math of @math cards and three participants, Alice, Bob, and Cathy, dealt @math , @math , and @math cards, respectively. Once the cards are dealt, Alice and Bob wish to privately communicate their hands to each other via public announcements, without the advantage of a shared secret or public key infrastructure. Cathy should remain ignorant of all but her own cards after Alice and Bob have made their announcements. Notions for Cathy's ignorance in the literature range from Cathy not learning the fate of any individual card with certainty (weak @math -security) to not gaining any probabilistic advantage in guessing the fate of some set of @math cards (perfect @math -security). As we demonstrate, the generalized Russian cards problem has close ties to the field of combinatorial designs, on which we rely heavily, particularly for perfect security notions. Our main result establishes an equivalence between perfectly @math -secure strategies and @math -designs on @math points with block size @math , when announcements are chosen uniformly at random from the set of possible announcements. We also provide construction methods and example solutions, including a construction that yields perfect @math -security against Cathy when @math . We leverage a known combinatorial design to construct a strategy with @math , @math , and @math that is perfectly @math -secure. Finally, we consider a variant of the problem that yields solutions that are easy to construct and optimal with respect to both the number of announcements and level of security achieved. Moreover, this is the first method obtaining weak @math -security that allows Alice to hold an arbitrary number of cards and Cathy to hold a set of @math cards. 
Alternatively, the construction yields solutions for arbitrary @math , @math and any @math .", "We present the first formal mathematical presentation of the generalized Russian cards problem, and provide rigorous security definitions that capture both basic and extended versions of weak and perfect security notions. In the generalized Russian cards problem, three players, Alice, Bob, and Cathy, are dealt a deck of @math cards, each given @math , @math , and @math cards, respectively. The goal is for Alice and Bob to learn each other's hands via public communication, without Cathy learning the fate of any particular card. The basic idea is that Alice announces a set of possible hands she might hold, and Bob, using knowledge of his own hand, should be able to learn Alice's cards from this announcement, but Cathy should not. Using a combinatorial approach, we are able to give a nice characterization of informative strategies (i.e., strategies allowing Bob to learn Alice's hand), having optimal communication complexity, namely the set of possible hands Alice announces must be equivalent to a large set of @math -designs, where @math . We also provide some interesting necessary conditions for certain types of deals to be simultaneously informative and secure. That is, for deals satisfying @math for some @math , where @math and the strategy is assumed to satisfy a strong version of security (namely perfect @math -security), we show that @math and hence @math . We also give a precise characterization of informative and perfectly @math -secure deals of the form @math satisfying @math involving @math -designs." ] }
1506.04693
2126355467
Many methods have been proposed to detect communities, not only in plain, but also in attributed, directed, or even dynamic complex networks. From the modeling point of view, to be of some utility, the community structure must be characterized relatively to the properties of the studied system. However, most of the existing works focus on the detection of communities, and only very few try to tackle this interpretation problem. Moreover, the existing approaches are limited either by the type of data they handle or by the nature of the results they output. In this work, we see the interpretation of communities as a problem independent from the detection process, consisting in identifying the most characteristic features of communities. We give a formal definition of this problem and propose a method to solve it. To this aim, we first define a sequence-based representation of networks, combining temporal information, community structure, topological measures, and nodal attributes. We then describe how to identify the most emerging sequential patterns of this dataset and use them to characterize the communities. We study the performance of our method on artificially generated dynamic attributed networks. We also empirically validate our framework on real-world systems: a DBLP network of scientific collaborations, and a LastFM network of social and musical interactions.
Authors historically interpreted the communities they found in a manual way @cite_29 @cite_18 @cite_14 , but this somewhat subjective approach does not scale well to large networks.
{ "cite_N": [ "@cite_29", "@cite_14", "@cite_18" ], "mid": [ "1971421925", "2131681506", "2164998314" ], "abstract": [ "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. 
The accuracy of our algorithm is also verified on ad hoc modular networks.", "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences." ] }
1506.04693
2126355467
Many methods have been proposed to detect communities, not only in plain, but also in attributed, directed, or even dynamic complex networks. From the modeling point of view, to be of some utility, the community structure must be characterized relatively to the properties of the studied system. However, most of the existing works focus on the detection of communities, and only very few try to tackle this interpretation problem. Moreover, the existing approaches are limited either by the type of data they handle or by the nature of the results they output. In this work, we see the interpretation of communities as a problem independent from the detection process, consisting in identifying the most characteristic features of communities. We give a formal definition of this problem and propose a method to solve it. To this aim, we first define a sequence-based representation of networks, combining temporal information, community structure, topological measures, and nodal attributes. We then describe how to identify the most emerging sequential patterns of this dataset and use them to characterize the communities. We study the performance of our method on artificially generated dynamic attributed networks. We also empirically validate our framework on real-world systems: a DBLP network of scientific collaborations, and a LastFM network of social and musical interactions.
More recently, several authors used topological measures to characterize community structures in plain networks. In @cite_15 , Lancichinetti et al. visually examined the distribution of some community-based topological measures, at both the local and intermediary levels. Their goal was to understand the general shape of communities belonging to networks modeling various types of real-world systems. In @cite_32 , Leskovec et al. proposed to study the community structure as a whole, by considering it at various scales, thanks to a global measure called conductance. These two studies are valuable; however, from the interpretation perspective, they are limited by the fact that they consider the network as a whole. Communities are studied and characterized collectively, in order to identify trends in the whole network, or even in a collection of networks.
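Conductance, the global measure used by Leskovec et al., compares the number of edges leaving a node set with the total degree on the smaller side of the cut. A minimal sketch on a toy graph (the graph and community below are made up for illustration):

```python
def conductance(edges, S):
    """cut(S, V\\S) / min(vol(S), vol(V\\S)) for an undirected edge list,
    where vol counts edge endpoints (degrees) on each side."""
    S = set(S)
    cut = sum((u in S) != (v in S) for u, v in edges)
    vol_S = sum((u in S) + (v in S) for u, v in edges)
    vol_rest = 2 * len(edges) - vol_S
    return cut / min(vol_S, vol_rest)

# Two triangles joined by a single bridge edge: a clear community structure.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(conductance(edges, {0, 1, 2}))  # 1 cut edge / volume 7, i.e. low conductance
```

Low conductance at a given scale signals a set that is "community-like"; the network community profile of Leskovec et al. tracks the best achievable value as a function of the set size.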
{ "cite_N": [ "@cite_15", "@cite_32" ], "mid": [ "2029130073", "2131717044" ], "abstract": [ "Background Community structure is one of the key properties of complex networks and plays a crucial role in their topology and function. While an impressive amount of work has been done on the issue of community detection, very little attention has been so far devoted to the investigation of communities in real networks.", "A large body of work has been devoted to identifying community structure in networks. A community is often though of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structural properties of such sets of nodes. We define the network community profile plot, which characterizes the \"best\" possible community - according to the conductance measure - over a wide range of size scales, and we study over 70 large sparse real-world networks taken from a wide range of application domains. Our results suggest a significantly more refined picture of community structure in large real-world networks than has been appreciated previously. Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small scales, and at larger size scales, the best possible communities gradually \"blend in\" with the rest of the network and thus become less \"community-like.\" This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. 
We have found, however, that a generative model, in which new edges are added via an iterative \"forest fire\" burning process, is able to produce graphs exhibiting a network community structure similar to our observations." ] }
1506.04693
2126355467
Many methods have been proposed to detect communities, not only in plain, but also in attributed, directed, or even dynamic complex networks. From the modeling point of view, to be of some utility, the community structure must be characterized relatively to the properties of the studied system. However, most of the existing works focus on the detection of communities, and only very few try to tackle this interpretation problem. Moreover, the existing approaches are limited either by the type of data they handle or by the nature of the results they output. In this work, we see the interpretation of communities as a problem independent from the detection process, consisting in identifying the most characteristic features of communities. We give a formal definition of this problem and propose a method to solve it. To this aim, we first define a sequence-based representation of networks, combining temporal information, community structure, topological measures, and nodal attributes. We then describe how to identify the most emerging sequential patterns of this dataset and use them to characterize the communities. We study the performance of our method on artificially generated dynamic attributed networks. We also empirically validate our framework on real-world systems: a DBLP network of scientific collaborations, and a LastFM network of social and musical interactions.
In order to characterize each community individually, some authors took advantage of the information conveyed by nodal attributes, when available. In @cite_8 , Tumminello et al. proposed a statistical method to characterize communities in terms of the attributes over-expressed among their elements. In @cite_36 , Labatut & Balasque interpreted the communities of an attributed social network. They used statistical regression and discriminant correspondence analysis to identify the most characteristic attributes of each community. Both studies are valuable; however, they do not take advantage of the available topological measures to enhance the interpretation process.
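The over-expression test of Tumminello et al. boils down to a hypergeometric tail probability: in a network of N nodes of which K carry a given attribute, the probability that a community of size n contains at least k of them by chance. A minimal sketch using only the standard library (the numbers are illustrative):

```python
from math import comb

def over_expression_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the probability that a
    random community of n nodes contains at least k of the K attribute
    carriers. Small values flag the attribute as over-expressed."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 10 nodes, 4 carrying the attribute; a 5-node community holding all 4 of
# them is unlikely under the null model, so the attribute characterizes it.
p = over_expression_pvalue(N=10, K=4, n=5, k=4)
print(p)  # 6/252, about 0.024
```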
{ "cite_N": [ "@cite_36", "@cite_8" ], "mid": [ "196811037", "2036731102" ], "abstract": [ "Community detection, an important part of network analysis, has become a very popular field of research. This activity resulted in a profusion of community detection algorithms, all different in some not always clearly defined sense. This makes it very difficult to select an appropriate tool when facing the concrete task of having to identify and interpret groups of nodes, relatively to a system of interest. In this chapter, we tackle this problem in a very practical way, from the user’s point of view. We first review community detection algorithms and characterize them in terms of the nature of the communities they detect. We then focus on the methodological tools one can use to analyze the obtained community structure, both in terms of topological features and nodal attributes. To be as concrete as possible, we use a real-world social network to illustrate the application of the presented tools and give examples of interpretation of their results from a Business Science perspective.", "We introduce an analytical statistical method for characterizing the communities detected in heterogeneous complex systems. By proposing a suitable null hypothesis, our method makes use of the hypergeometric distribution to assess the probability that a given property is over-expressed in the elements of a community with respect to all the elements of the investigated set. We apply our method to two specific complex networks, namely a network of world movies and a network of physics preprints. The characterization of the elements and of the communities is done in terms of languages and countries for the movie network and of journals and subject categories for papers. We find that our method is able to characterize clearly the communities identified. Moreover our method works well both for large and for small communities." ] }
1506.04693
2126355467
Many methods have been proposed to detect communities, not only in plain, but also in attributed, directed, or even dynamic complex networks. From the modeling point of view, to be of some utility, the community structure must be characterized relatively to the properties of the studied system. However, most of the existing works focus on the detection of communities, and only very few try to tackle this interpretation problem. Moreover, the existing approaches are limited either by the type of data they handle or by the nature of the results they output. In this work, we see the interpretation of communities as a problem independent from the detection process, consisting in identifying the most characteristic features of communities. We give a formal definition of this problem and propose a method to solve it. To this aim, we first define a sequence-based representation of networks, combining temporal information, community structure, topological measures, and nodal attributes. We then describe how to identify the most emerging sequential patterns of this dataset and use them to characterize the communities. We study the performance of our method on artificially generated dynamic attributed networks. We also empirically validate our framework on real-world systems: a DBLP network of scientific collaborations, and a LastFM network of social and musical interactions.
Certain community detection methods take advantage of both relational (structure) and individual (attributes) information to detect communities. It seems natural to suppose that the results they produce can be used for interpretation purposes. For example, in @cite_28 , Zhou et al. interpreted the communities in terms of the attributes used during the detection process; and in @cite_13 , Yang et al. identified the top attributes for each identified community. However, the problem with these community detection-based methods is that the notion of community is often defined procedurally, i.e. simply as the output of the detection method, without any further formalization. It is consequently not clear how structure and attributes affect the detection, and hence the interpretation process. All these methods additionally rely on the implicit assumption of community homophily. In other words, communities are supposed to be groups of nodes that are both densely interconnected and similar in terms of attributes. To our knowledge, no study has ever shown that this feature is present in all systems, or even in all the communities of a given network, or that all attributes are concerned. It is therefore doubtful that those methods are general enough to be applied to any type of network.
{ "cite_N": [ "@cite_28", "@cite_13" ], "mid": [ "2165515835", "2012921801" ], "abstract": [ "The goal of graph clustering is to partition vertices in a large graph into different clusters based on various criteria such as vertex connectivity or neighborhood similarity. Graph clustering techniques are very useful for detecting densely connected groups in a large graph. Many existing graph clustering methods mainly focus on the topological structure for clustering, but largely ignore the vertex properties which are often heterogenous. In this paper, we propose a novel graph clustering algorithm, SA-Cluster, based on both structural and attribute similarities through a unified distance measure. Our method partitions a large graph associated with attributes into k clusters so that each cluster contains a densely connected subgraph with homogeneous attribute values. An effective method is proposed to automatically learn the degree of contributions of structural similarity and attribute similarity. Theoretical analysis is provided to show that SA-Cluster is converging. Extensive experimental results demonstrate the effectiveness of SA-Cluster through comparison with the state-of-the-art graph clustering and summarization methods.", "Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. 
In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes. CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community." ] }
1506.04693
2126355467
Many methods have been proposed to detect communities, not only in plain, but also in attributed, directed, or even dynamic complex networks. From the modeling point of view, to be of some utility, the community structure must be characterized relative to the properties of the studied system. However, most of the existing works focus on the detection of communities, and only very few try to tackle this interpretation problem. Moreover, the existing approaches are limited either by the type of data they handle or by the nature of the results they output. In this work, we see the interpretation of communities as a problem independent from the detection process, consisting in identifying the most characteristic features of communities. We give a formal definition of this problem and propose a method to solve it. To this aim, we first define a sequence-based representation of networks, combining temporal information, community structure, topological measures, and nodal attributes. We then describe how to identify the most emerging sequential patterns of this dataset and use them to characterize the communities. We study the performance of our method on artificially generated dynamic attributed networks. We also empirically validate our framework on real-world systems: a DBLP network of scientific collaborations, and a LastFM network of social and musical interactions.
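The characterization step described above scores sequential patterns by how specific they are to a community. A minimal sketch of that idea, using the standard support/growth-rate criterion for emerging patterns; the function names and toy data are illustrative, not taken from the paper:

```python
def is_subsequence(pattern, sequence):
    """True if `pattern` appears in `sequence` in order (gaps allowed)."""
    it = iter(sequence)
    return all(any(item == x for x in it) for item in pattern)

def growth_rate(pattern, community_seqs, other_seqs):
    """Ratio of the pattern's support inside a community to its support
    outside it; a high ratio marks the pattern as emerging for that community."""
    supp_in = sum(is_subsequence(pattern, s) for s in community_seqs) / len(community_seqs)
    supp_out = sum(is_subsequence(pattern, s) for s in other_seqs) / len(other_seqs)
    return float("inf") if supp_out == 0 else supp_in / supp_out

# Toy sequences of attribute/topology symbols, one per node and time window:
community = [("a", "b", "c"), ("a", "c")]
others = [("b", "c"), ("a", "d", "c")]
print(growth_rate(("a", "c"), community, others))  # 1.0 / 0.5 = 2.0
```

In a dynamic attributed network, each sequence would encode a node's successive attribute values and topological measures over time; here they are plain tuples of symbols.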
Another method, based on frequent pattern mining, was recently defined and can be used for community interpretation. In @cite_38 , Stattner & Collard introduced the notion of conceptual link. A conceptual link corresponds to a set of links from the original network connecting nodes that share similar attributes. Such a link is said to be frequent when the number of links it represents is above a given threshold. This method can be seen as a generalization of the notion of homophily, and was initially used to simplify the network and help in understanding it. Finding frequent conceptual links amounts to detecting groups of nodes sharing common attributes, from a pattern mining point of view. This method considers both the network structure and the nodal attributes; however, it ignores their evolution, i.e. it does not take the temporal aspect into account.
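The core of this idea can be sketched compactly: group the links by the attribute values of their endpoints and keep the groups whose link count reaches a support threshold. This is a minimal single-attribute sketch, not the concept-lattice search of the original algorithm; names and data are illustrative:

```python
from collections import Counter

def frequent_conceptual_links(attrs, edges, min_support):
    """Group edges by the attribute values of their endpoints and keep
    the (attribute, attribute) pairs whose link count reaches min_support.

    attrs: dict mapping node -> attribute value (one attribute for simplicity)
    edges: iterable of undirected (u, v) pairs
    """
    counts = Counter()
    for u, v in edges:
        # Sort so (A, B) and (B, A) denote the same conceptual link.
        counts[tuple(sorted((attrs[u], attrs[v])))] += 1
    return {link: c for link, c in counts.items() if c >= min_support}

# Toy attributed network: two attribute groups, A and B.
attrs = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
edges = [(1, 3), (1, 4), (2, 3), (2, 5), (3, 4)]
print(frequent_conceptual_links(attrs, edges, min_support=2))  # {('A', 'B'): 4}
```

The single surviving conceptual link (A, B) summarizes four concrete links at once, which is how the method simplifies the network.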
{ "cite_N": [ "@cite_38" ], "mid": [ "2054358026" ], "abstract": [ "In this work, we propose a novel approach for the discovery of frequent patterns in a social network on the basis of both vertex attributes and link frequency. With an analogy to the traditional task of mining frequent item sets, we show that the issue addressed can be formulated in terms of a conceptual analysis that elicits conceptual links. A social-based conceptual link is a synthetic representation of a set of links between groups of vertexes that share similar internal properties. We propose a first algorithm that optimizes the search into the concept lattice of conceptual links and extracts maximal frequent conceptual links. We study the performances of our solution and give experimental results obtained on a sample example. Finally we show that the set of conceptual links extracted provides a conceptual view of the social network." ] }
1506.04573
2951292810
We study the issue of PAC-Bayesian domain adaptation: We want to learn, from a source domain, a majority vote model dedicated to a target one. Our theoretical contribution brings a new perspective by deriving an upper-bound on the target risk where the distributions' divergence---expressed as a ratio---controls the trade-off between a source error measure and the target voters' disagreement. Our bound suggests that one has to focus on regions where the source data is informative. From this result, we derive a PAC-Bayesian generalization bound, and specialize it to linear classifiers. Then, we infer a learning algorithm and perform experiments on real data.
A domain adaptation task fulfills the covariate shift assumption @cite_13 if the source and target domains only differ in their marginals according to the input space, i.e., @math . In this scenario, one may estimate @math , and even @math , by using unsupervised density estimation methods. Interestingly, by also assuming that the domains share the same support, we have @math . Then from Line we obtain @math
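The reweighting idea behind this setting can be sketched numerically. Assuming known 1-D Gaussian marginals purely for illustration (in practice both densities, or their ratio, must be estimated), the target risk is estimated on source samples weighted by w(x) = p_T(x) / p_S(x):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Source inputs x ~ N(0, 1); labels come from a fixed rule f(x) shared by
# source and target, which is exactly what the assumption requires.
xs = rng.normal(0.0, 1.0, size=5000)
ys = (xs > 0.5).astype(float)

# Hypothetical target marginal: x ~ N(1, 1), same support as the source,
# so the importance weight w(x) = p_T(x) / p_S(x) is well defined.
w = gaussian_pdf(xs, 1.0, 1.0) / gaussian_pdf(xs, 0.0, 1.0)

def weighted_risk(predict, x, y, weights):
    """Self-normalized importance-weighted estimate of the target risk."""
    return np.average((predict(x) - y) ** 2, weights=weights)

def const_zero(x):
    return np.zeros_like(x)

# Unweighted source risk of predicting 0 is about P_S(x > 0.5) ~ 0.31;
# reweighting shifts the estimate toward P_T(x > 0.5) ~ 0.69.
print(weighted_risk(const_zero, xs, ys, w))
```

The point of the sketch is only that reweighting changes which regions of the input space dominate the empirical risk, which is the mechanism the bound above captures through the density ratio.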
{ "cite_N": [ "@cite_13" ], "mid": [ "2034368206" ], "abstract": [ "Abstract A class of predictive densities is derived by weighting the observed samples in maximizing the log-likelihood function. This approach is effective in cases such as sample surveys or design of experiments, where the observed covariate follows a different distribution than that in the whole population. Under misspecification of the parametric model, the optimal choice of the weight function is asymptotically shown to be the ratio of the density function of the covariate in the population to that in the observations. This is the pseudo-maximum likelihood estimation of sample surveys. The optimality is defined by the expected Kullback–Leibler loss, and the optimal weight is obtained by considering the importance sampling identity. Under correct specification of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be optimal asymptotically. For moderate sample size, the situation is in between the two extreme cases, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples as well as Monte-Carlo simulations are shown for polynomial regression. A connection with the robust parametric estimation is discussed." ] }
1506.04338
2949230572
In perspective cameras, images of a frontal-parallel 3D object preserve its aspect ratio invariant to its depth. Such an invariance is useful in photography but is unique to perspective projection. In this paper, we show that alternative non-perspective cameras such as the crossed-slit or XSlit cameras exhibit a different depth-dependent aspect ratio (DDAR) property that can be used for 3D recovery. We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error. We show that repeated shape patterns in real Manhattan World scenes can be used for 3D reconstruction using a single XSlit image. We also extend our analysis to model slopes of lines. Specifically, parallel 3D lines exhibit depth-dependent slopes (DDS) on their images which can also be used to infer their depths. We validate our analyses using real XSlit cameras, XSlit panoramas, and catadioptric mirrors. Experiments show that DDAR and DDS provide important depth cues and enable effective single-image scene reconstruction.
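The aspect-ratio invariance that this paper contrasts against can be checked in a few lines: under a pinhole model with focal length f, a frontal-parallel object of size W × H at depth Z projects to (fW/Z) × (fH/Z), so the image aspect ratio stays W/H at every depth. A minimal numeric check, with all values hypothetical:

```python
def projected_size(size, depth, focal=1.0):
    """Pinhole projection: an extent of length `size` at depth `depth`
    maps to an image extent of focal * size / depth."""
    return focal * size / depth

# A hypothetical 4 x 3 frontal-parallel rectangle viewed at several depths:
W, H = 4.0, 3.0
for Z in (2.0, 5.0, 10.0):
    w, h = projected_size(W, Z), projected_size(H, Z)
    # The image shrinks with depth, but w / h stays 4/3 throughout.
    print(f"depth {Z}: image size {w:.3f} x {h:.3f}, aspect ratio {w / h:.4f}")
```

It is precisely because the ratio carries no depth information here that the paper turns to non-perspective (XSlit) cameras, where the ratio does vary with depth.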
A major task of computer vision is to infer the 3D geometry of scenes using as few images as possible. Tremendous efforts have focused on recovering a special class of scenes called the Manhattan World (MW) @cite_13 . An MW scene is composed of repeated planar surfaces and parallel lines aligned with three mutually orthogonal principal axes, and fits many man-made (interior and exterior) environments well. Under the MW assumption, one can simultaneously conduct 3D scene reconstruction @cite_10 @cite_21 and camera calibration @cite_25 .
{ "cite_N": [ "@cite_10", "@cite_21", "@cite_13", "@cite_25" ], "mid": [ "2109443835", "2536043048", "2102271310", "2136527382" ], "abstract": [ "When we look at a picture, our prior knowledge about the world allows us to resolve some of the ambiguities that are inherent to monocular vision, and thereby infer 3d information about the scene. We also recognize different objects, decide on their orientations, and identify how they are connected to their environment. Focusing on the problem of autonomous 3d reconstruction of indoor scenes, in this paper we present a dynamic Bayesian network model capable of resolving some of these ambiguities and recovering 3d information for many images. Our model assumes a \"floorwall\" geometry on the scene and is trained to recognize the floor-wall boundary in each column of the image. When the image is produced under perspective geometry, we show that this model can be used for 3d reconstruction from a single image. To our knowledge, this was the first monocular approach to automatically recover 3d reconstructions from single indoor images.", "This paper proposes a fully automated 3D reconstruction and visualization system for architectural scenes (interiors and exteriors). The reconstruction of indoor environments from photographs is particularly challenging due to texture-poor planar surfaces such as uniformly-painted walls. Our system first uses structure-from-motion, multi-view stereo, and a stereo algorithm specifically designed for Manhattan-world scenes (scenes consisting predominantly of piece-wise planar surfaces with dominant directions) to calibrate the cameras and to recover initial 3D geometry in the form of oriented points and depth maps. Next, the initial geometry is fused into a 3D model with a novel depth-map integration algorithm that, again, makes use of Manhattan-world assumptions and produces simplified 3D models. 
Finally, the system enables the exploration of reconstructed environments with an interactive, image-based 3D viewer. We demonstrate results on several challenging datasets, including a 3D reconstruction and image-based walk-through of an entire floor of a house, the first result of this kind from an automated computer vision system.", "When designing computer vision systems for the blind and visually impaired it is important to determine the orientation of the user relative to the scene. We observe that most indoor and outdoor (city) scenes are designed on a Manhattan three-dimensional grid. This Manhattan grid structure puts strong constraints on the intensity gradients in the image. We demonstrate an algorithm for detecting the orientation of the user in such scenes based on Bayesian inference using statistics which we have learnt in this domain. Our algorithm requires a single input image and does not involve pre-processing stages such as edge detection and Hough grouping. We demonstrate strong experimental results on a range of indoor and outdoor images. We also show that estimating the grid structure makes it significantly easier to detect target objects which are not aligned with the grid.", "Edges in man-made environments, grouped according to vanishing point directions, provide single-view constraints that have been exploited before as a precursor to both scene understanding and camera calibration. A Bayesian approach to edge grouping was proposed in the \"Manhattan World\" paper by Coughlan and Yuille, where they assume the existence of three mutually orthogonal vanishing directions in the scene. We extend the thread of work spawned by Coughlan and Yuille in several significant ways. We propose to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene. 
Because EM behaves well in high-dimensional spaces, our method can optimize over many more parameters than the exhaustive and stochastic algorithms used previously for this task. Among other things, this lets us optimize over multiple groups of orthogonal vanishing directions, each of which induces one additional degree of freedom. EM is also well suited to recursive estimation of the kind needed for image sequences and or in mobile robotics. We present experimental results on images of \"Atlanta worlds\", complex urban scenes with multiple orthogonal edge-groups, that validate our approach. We also show results for continuous relative orientation estimation on a mobile robot." ] }
1506.04338
2949230572
In perspective cameras, images of a frontal-parallel 3D object preserve its aspect ratio invariant to its depth. Such an invariance is useful in photography but is unique to perspective projection. In this paper, we show that alternative non-perspective cameras such as the crossed-slit or XSlit cameras exhibit a different depth-dependent aspect ratio (DDAR) property that can be used for 3D recovery. We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error. We show that repeated shape patterns in real Manhattan World scenes can be used for 3D reconstruction using a single XSlit image. We also extend our analysis to model slopes of lines. Specifically, parallel 3D lines exhibit depth-dependent slopes (DDS) on their images which can also be used to infer their depths. We validate our analyses using real XSlit cameras, XSlit panoramas, and catadioptric mirrors. Experiments show that DDAR and DDS provide important depth cues and enable effective single-image scene reconstruction.
MW scenes generally exhibit repeated line patterns but lack texture, so traditional stereo matching is less suitable for their reconstruction. Instead, prior-based modeling is more widely adopted. For example, Furukawa et al. @cite_21 assign a plane to each pixel and then apply graph-cut on discretized plane parameters. Other monocular cues such as vanishing points @cite_6 and reference planes (e.g., the ground) have also been used to better approximate scene geometry. Hoiem et al. @cite_22 @cite_20 use image attributes (color, edge orientation, etc.) to label image regions with different geometric classes (sky, ground, and vertical) and then "pop up" the vertical regions to generate visually pleasing 3D reconstructions. Similar approaches have been used to handle indoor scenes @cite_10 . Machine learning techniques have also been used to infer depths from image features and the location and orientation of planar regions @cite_11 @cite_9 . Lee et al. @cite_5 and Flint et al. @cite_2 search for the most feasible combination of line segments for indoor MW understanding.
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_21", "@cite_6", "@cite_2", "@cite_5", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2155871590", "2110917409", "2536043048", "1592011777", "1780730131", "2116851763", "2109443835", "", "2158211626" ], "abstract": [ "Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.", "We consider the problem of estimating detailed 3D structure from a single still image of an unstructured environment. Our goal is to create 3D models which are both quantitatively accurate as well as visually pleasing. For each small homogeneous patch in the image, we use a Markov random field (MRF) to infer a set of \"plane parameters\" that capture both the 3D location and 3D orientation of the patch. The MRF, trained via supervised learning, models both image depth cues as well as the relationships between different parts of the image. Inference in our model is tractable, and requires only solving a convex optimization problem. 
Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene; this enables the algorithm to capture much more detailed 3D structure than does prior art (such as Saxena et al., 2005, Delage et al., 2005, and Hoiem et al., 2005), and also give a much richer experience in the 3D flythroughs created using image-based rendering, even for scenes with significant non-vertical structure. Using this approach, we have created qualitatively correct 3D models for 64.9% of 588 images downloaded from the Internet, as compared to 's performance of 33.1% . Further, our models are quantitatively more accurate than either or", "This paper proposes a fully automated 3D reconstruction and visualization system for architectural scenes (interiors and exteriors). The reconstruction of indoor environments from photographs is particularly challenging due to texture-poor planar surfaces such as uniformly-painted walls. Our system first uses structure-from-motion, multi-view stereo, and a stereo algorithm specifically designed for Manhattan-world scenes (scenes consisting predominantly of piece-wise planar surfaces with dominant directions) to calibrate the cameras and to recover initial 3D geometry in the form of oriented points and depth maps. Next, the initial geometry is fused into a 3D model with a novel depth-map integration algorithm that, again, makes use of Manhattan-world assumptions and produces simplified 3D models. Finally, the system enables the exploration of reconstructed environments with an interactive, image-based 3D viewer. 
We demonstrate results on several challenging datasets, including a 3D reconstruction and image-based walk-through of an entire floor of a house, the first result of this kind from an automated computer vision system.", "We describe how 3D affine measurements may be computed from a single perspective view of a scene given only minimal geometric information determined from the image. This minimal information is typically the vanishing line of a reference plane, and a vanishing point for a direction not parallel to the plane. It is shown that affine scene structure may then be determined from the image, without knowledge of the camera's internal calibration (e.g. focal length), nor of the explicit relation between camera and world (pose). In particular, we show how to (i) compute the distance between planes parallel to the reference plane (up to a common scale factor)s (ii) compute area and length ratios on any plane parallel to the reference planes (iii) determine the camera's location. Simple geometric derivations are given for these results. We also develop an algebraic representation which unifies the three types of measurement and, amongst other advantages, permits a first order error propagation analysis to be performed, associating an uncertainty with each measurement. We demonstrate the technique for a variety of applications, including height measurements in forensic images and 3D graphical modelling from single images.", "A number of recent papers have investigated reconstruction under Manhattan world assumption, in which surfaces in the world are assumed to be aligned with one of three dominant directions [1,2,3,4]. In this paper we present a dynamic programming solution to the reconstruction problem for \"indoor\" Manhattan worlds (a sub-class of Manhattan worlds). Our algorithm deterministically finds the global optimum and exhibits computational complexity linear in both model complexity and image size. 
This is an important improvement over previous methods that were either approximate [3] or exponential in model complexity [4]. We present results for a new dataset containing several hundred manually annotated images, which are released in conjunction with this paper.", "We study the problem of generating plausible interpretations of a scene from a collection of line segments automatically extracted from a single indoor image. We show that we can recognize the three dimensional structure of the interior of a building, even in the presence of occluding objects. Several physically valid structure hypotheses are proposed by geometric reasoning and verified to find the best fitting model to line segments, which is then converted to a full 3D model. Our experiments demonstrate that our structure recovery from line segments is comparable with methods using full image appearance. Our approach shows how a set of rules describing geometric constraints between groups of segments can be used to prune scene interpretation hypotheses and to generate the most plausible interpretation.", "When we look at a picture, our prior knowledge about the world allows us to resolve some of the ambiguities that are inherent to monocular vision, and thereby infer 3d information about the scene. We also recognize different objects, decide on their orientations, and identify how they are connected to their environment. Focusing on the problem of autonomous 3d reconstruction of indoor scenes, in this paper we present a dynamic Bayesian network model capable of resolving some of these ambiguities and recovering 3d information for many images. Our model assumes a \"floorwall\" geometry on the scene and is trained to recognize the floor-wall boundary in each column of the image. When the image is produced under perspective geometry, we show that this model can be used for 3d reconstruction from a single image. 
To our knowledge, this was the first monocular approach to automatically recover 3d reconstructions from single indoor images.", "", "We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps." ] }