aid: string (9–15 chars)
mid: string (7–10 chars)
abstract: string (78–2.56k chars)
related_work: string (92–1.77k chars)
ref_abstract: dict
1705.04138
2613172232
We consider the stochastic composition optimization problem proposed in wang2017stochastic, which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives, and has a convergence rate of @math, which improves upon the @math rate in wang2016accelerating when the objective is convex and Lipschitz smooth. Moreover, com-SVR-ADMM possesses a rate of @math when the objective is convex but without Lipschitz smoothness. We also conduct experiments and show that it outperforms existing algorithms.
The ADMM algorithm, on the other hand, was first proposed in @cite_5 @cite_20 and later reviewed in @cite_14 . Since then, several ADMM-based stochastic algorithms have been proposed, e.g., @cite_2 @cite_7 @cite_11 , but they all possess sublinear convergence rates. Several recent works have therefore combined variance-reduction schemes with ADMM to accelerate convergence. For instance, SVRG-ADMM was proposed in @cite_3 and shown to converge linearly when the objective is strongly convex, with a sublinear convergence rate in the general convex case. Another recent work @cite_16 further proved that SVRG-ADMM converges to a stationary point at a rate of @math when the objective is nonconvex. In @cite_17 , the authors used the acceleration techniques of @cite_6 @cite_10 to further improve the convergence rate of SVRG-ADMM. However, none of the aforementioned variance-reduced ADMM algorithms can be directly applied to the stochastic composition optimization problem.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_17", "@cite_3", "@cite_6", "@cite_2", "@cite_5", "@cite_16", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2164278908", "", "2950948235", "", "", "", "", "", "2408675156", "2045079045", "2210220586" ], "abstract": [ "Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.", "", "The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. 
Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size @math . Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets.", "", "", "", "", "", "We consider the problem of minimizing the sum of the average function consisting of a large number of smooth convex component functions and a general convex function that can be non-differentiable. Although many methods have been proposed to solve the problem with the assumption that the sum is strongly convex, few methods support the non-strongly convex cases. Adding a small quadratic regularization is the common trick used to tackle non-strongly convex problems; however, it may worsen certain qualities of solutions or weaken the performance of the algorithms. Avoiding this trick, we extend the deterministic accelerated proximal gradient methods of Paul Tseng to randomized versions for solving the problem without the strongly convex assumption. Our algorithms achieve the optimal convergence rate @math . Tuning involved parameters helps our algorithms get better complexity compared with the deterministic accelerated proximal gradient methods. 
We also propose a scheme for non-smooth problems.", "For variational problems of the form inf_{v∈V} f(Av)+g(v), we propose a dual method which decouples the difficulties relative to the functionals f and g from the possible ill-conditioning effects of the linear operator A. The approach is based on the use of an Augmented Lagrangian functional and leads to an efficient and simply implementable algorithm. We also study the finite element approximation of such problems, compatible with the use of our algorithm. The method is finally applied to solve several problems of continuum mechanics.", "Online optimization has emerged as a powerful tool in large scale optimization. In this paper, we introduce efficient online optimization algorithms based on the alternating direction method (ADM), which can solve online convex optimization under linear constraints where the objective could be non-smooth. We introduce new proof techniques for ADM in the batch setting, which yield an O(1/T) convergence rate for ADM and form the basis for regret analysis in the online setting. We consider two scenarios in the online setting, based on whether an additional Bregman divergence is needed or not. In both settings, we establish regret bounds for both the objective function and constraint violation, for general and strongly convex functions. We also consider inexact ADM updates where certain terms are linearized to yield efficient updates, and show the stochastic convergence rates. In addition, we briefly discuss that online ADM can be used as a projection-free online learning algorithm in some scenarios. Preliminary results are presented to illustrate the performance of the proposed algorithms." ] }
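The SVRG-style variance reduction that these ADMM variants build on can be sketched in a few lines: per epoch, take a snapshot, precompute the full gradient there, then run inner steps with a corrected stochastic gradient. This is an illustrative Python sketch on a toy least-squares problem, not any of the cited implementations; all helper names and the problem instance are hypothetical:

```python
import numpy as np

def svrg(grad_i, full_grad, w0, n, step=0.01, epochs=50, m=100, seed=0):
    """Minimal SVRG loop: at each epoch take a snapshot w_s, precompute the
    full gradient mu = grad F(w_s), then run m inner steps with the
    variance-reduced estimate g = grad f_i(w) - grad f_i(w_s) + mu."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(epochs):
        w_s = w.copy()
        mu = full_grad(w_s)                      # full gradient at snapshot
        for _ in range(m):
            i = int(rng.integers(n))             # sample one component
            g = grad_i(w, i) - grad_i(w_s, i) + mu
            w = w - step * g
    return w

# Toy strongly convex objective: F(w) = (1/2n) sum_i (a_i . w - b_i)^2
rng = np.random.default_rng(1)
n, d = 100, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true

grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]   # per-sample gradient
full_grad = lambda w: A.T @ (A @ w - b) / n      # full gradient

w_hat = svrg(grad_i, full_grad, np.zeros(d), n)
```

Because the correction term vanishes as the iterate approaches the snapshot and the snapshot approaches the optimum, the gradient noise shrinks to zero, which is what allows the linear rates cited above for strongly convex objectives.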
1705.04281
2950430675
Multiple scattering of an electromagnetic wave as it passes through an object is a fundamental problem that limits the performance of current imaging systems. In this paper, we describe a new technique-called Series Expansion with Accelerated Gradient Descent on Lippmann-Schwinger Equation (SEAGLE)-for robust imaging under multiple scattering based on a combination of a new nonlinear forward model and a total variation (TV) regularizer. The proposed forward model can account for multiple scattering, which makes it advantageous in applications where linear models are inaccurate. Specifically, it corresponds to a series expansion of the scattered wave with an accelerated-gradient method. This expansion guarantees the convergence even for strongly scattering objects. One of our key insights is that it is possible to obtain an explicit formula for computing the gradient of our nonlinear forward model with respect to the unknown object, thus enabling fast image reconstruction with the state-of-the-art fast iterative shrinkage thresholding algorithm (FISTA). The proposed method is validated on both simulated and experimentally measured data.
The key innovation in this paper is the new forward model, which enables tractable optimization of a cost function that incorporates nonlinear scattering and a non-differentiable regularizer such as TV. As corroborated by experiments, this formulation enables fast, stable, and reliable convergence, even when working with a limited amount of data. This paper extends our preliminary work @cite_18 by including key mathematical derivations, as well as more extensive validation on experimentally collected data.
{ "cite_N": [ "@cite_18" ], "mid": [ "2528295078" ], "abstract": [ "We propose a new compressive imaging method for reconstructing 2D or 3D objects from their scattered wave-field measurements. Our method relies on a novel, nonlinear measurement model that can account for the multiple scattering phenomenon, which makes the method preferable in applications where linear measurement models are inaccurate. We construct the measurement model by expanding the scattered wave-field with an accelerated-gradient method, which is guaranteed to converge and is suitable for large-scale problems. We provide explicit formulas for computing the gradient of our measurement model with respect to the unknown image, which enables image formation with a sparsity- driven numerical optimization algorithm. We validate the method both analytically and with numerical simulations." ] }
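The FISTA solver that SEAGLE relies on is a generic accelerated proximal-gradient loop. A minimal sketch follows, run on a toy ℓ1-regularized least-squares instance rather than the paper's scattering model; the problem data and names are purely illustrative:

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, iters=500):
    """FISTA for min_x f(x) + g(x): gradient step on smooth f with step 1/L
    (L a Lipschitz constant of grad f), proximal step on non-smooth g, and
    Nesterov-style momentum on the extrapolation point y."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Toy instance: f(x) = 0.5 ||Ax - b||^2, g(x) = lam * ||x||_1,
# whose prox is soft-thresholding (analogous to the role TV plays in SEAGLE).
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 80))
x_true = np.zeros(80)
x_true[:4] = [1.0, -2.0, 1.5, 3.0]
b = A @ x_true
lam = 0.1

grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
L = np.linalg.norm(A, 2) ** 2                    # spectral norm squared

x_hat = fista(grad_f, prox_g, np.zeros(80), L)
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
```

The only problem-specific pieces are `grad_f` and `prox_g`; SEAGLE's contribution is precisely an explicit gradient formula for its nonlinear scattering model so that this generic loop applies.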
1705.04146
2613312549
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
Extensive efforts have been made in the domain of math problem solving @cite_14 @cite_16 @cite_0 , all aiming to obtain the correct answer to a given math problem. Other work has focused on learning to map math expressions into formal languages @cite_11 . We aim instead to generate natural language rationales, where the bindings between variables and the problem-solving approach are mixed into a single generative model that attempts to solve the problem while explaining the approach taken.
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_16", "@cite_11" ], "mid": [ "2951624407", "2105717194", "2251349042", "2524879642" ], "abstract": [ "This paper presents a novel approach to automatically solving arithmetic word problems. This is the first algorithmic approach that can handle arithmetic problems with multiple steps and operations, without depending on additional annotations or predefined templates. We develop a theory for expression trees that can be used to represent and evaluate the target arithmetic expressions; we use it to uniquely decompose the target arithmetic problem to multiple classification problems; we then compose an expression tree, combining these with world knowledge through a constrained inference framework. Our classifiers gain from the use of quantity schemas that support better extraction of features. Experimental results show that our method outperforms existing systems, achieving state of the art performance on benchmark datasets of arithmetic word problems.", "This paper presents a novel approach to learning to solve simple arithmetic word problems. Our system, ARIS, analyzes each of the sentences in the problem statement to identify the relevant variables and their values. ARIS then maps this information into an equation that represents the problem, and enables its (trivial) solution as shown in Figure 1. The paper analyzes the arithmetic-word problems “genre”, identifying seven categories of verbs used in such problems. ARIS learns to categorize verbs with 81.2% accuracy, and is able to solve 77.7% of the problems in a corpus of standard primary school test questions. We report the first learning results on this task without reliance on predefined templates and make our data publicly available.", "We present an approach for automatically learning to solve algebra word problems. 
Our algorithm reasons across sentence boundaries to construct and solve a system of linear equations, while simultaneously recovering an alignment of the variables and numbers in these equations to the problem text. The learning algorithm uses varied supervision, including either full equations or just the final answers. We evaluate performance on a newly gathered corpus of algebra word problems, demonstrating that the system can correctly answer almost 70% of the questions in the dataset. This is, to our knowledge, the first learning result for this task.", "Identifying mathematical relations expressed in text is essential to understanding a broad range of natural language text from election reports, to financial news, to sport commentaries to mathematical word problems. This paper focuses on identifying and understanding mathematical relations described within a single sentence. We introduce the problem of Equation Parsing -- given a sentence, identify noun phrases which represent variables, and generate the mathematical equation expressing the relation described in the sentence. We introduce the notion of projective equation parsing and provide an efficient algorithm to parse text to projective equations. Our system makes use of a high precision lexicon of mathematical expressions and a pipeline of structured predictors, and generates correct equations in @math of the cases. In @math of the time, it also identifies the correct noun phrase @math variables mapping, significantly outperforming baselines. We also release a new annotated dataset for task evaluation." ] }
1705.04146
2613312549
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
Our approach is strongly tied to the work on sequence-to-sequence transduction using the encoder-decoder paradigm @cite_1 @cite_2 @cite_13 , and inherits ideas from the extensive literature on semantic parsing @cite_3 @cite_10 @cite_7 @cite_5 @cite_15 @cite_19 and program generation @cite_8 @cite_12 , namely, the usage of an external memory, the application of different operators over values in the memory, and the copying of stored values into the output sequence. Providing textual explanations for classification decisions has begun to receive attention, as part of increased interest in creating models whose decisions can be interpreted. One line of work jointly modeled both a classification decision and the selection of the most relevant subsection of a document for making that decision. Other work generates textual explanations for visual classification problems, but in contrast to our model, it first generates an answer and then, conditioned on the answer, generates an explanation. This effectively creates a post-hoc justification for a classification decision rather than a program for deducing an answer. These papers, like ours, jointly model rationales and answer predictions; however, we are the first to use rationales to guide program induction.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_10", "@cite_1", "@cite_3", "@cite_19", "@cite_2", "@cite_5", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2124204950", "2440159969", "2252136820", "2949888546", "2147389891", "2214429195", "2133564696", "2158396456", "", "1753482797", "" ], "abstract": [ "Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance. Here we approach it as a straightforward machine translation task, and demonstrate that standard machine translation components can be adapted into a semantic parser. In experiments on the multilingual GeoQuery corpus we find that our parser is competitive with the state of the art, and in some cases achieves higher accuracy than recently proposed purpose-built systems. These results support the use of machine translation methods as an informative baseline in semantic parsing evaluations, and suggest that research in semantic parsing could benefit from advances in machine translation.", "Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications -- rationales -- that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task.", "In this paper, we train a semantic parser that scales up to Freebase. 
Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. 
The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Many semantic parsing models use tree transformations to map between natural language and meaning representation. However, while tree transformations are central to several state-of-the-art approaches, little use has been made of the rich literature on tree automata. This paper makes the connection concrete with a tree transducer based semantic parsing model and suggests that other models can be interpreted in a similar framework, increasing the generality of their contributions. In particular, this paper further introduces a variational Bayesian inference algorithm that is applicable to a wide class of tree transducers, producing state-of-the-art semantic parsing results while remaining applicable to any domain employing probabilistic tree transducers.", "Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. 
Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. 
Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple “if-then” rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called “recipes”) and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best.", "", "We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43 lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.", "" ] }
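The soft-(search)/attention mechanism described in the encoder-decoder abstracts above can be illustrated compactly. The sketch below uses dot-product scoring as a simplified stand-in for the additive scoring of the cited model; all names and sizes are hypothetical:

```python
import numpy as np

def soft_attention(dec_state, enc_states):
    """Score each encoder state against the current decoder state,
    softmax the scores into alignment weights, and return the weights
    plus the weighted context vector (a convex combination of states)."""
    scores = enc_states @ dec_state      # (T,) alignment scores
    scores = scores - scores.max()       # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()
    context = weights @ enc_states       # soft "search" over the source
    return weights, context

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(6, 8))     # T=6 encoder states of dim 8
dec_state = rng.normal(size=8)
w, ctx = soft_attention(dec_state, enc_states)
```

The weights form a distribution over source positions, which is why the cited work can inspect them as soft alignments; this is also the mechanism the rationale-generation model above inherits for copying stored values into the output sequence.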
1705.04358
2614913722
Convolutional Neural Networks (CNNs) have been used extensively for computer vision tasks and produce rich feature representation for objects or parts of an image. But reasoning about scenes requires integration between the low-level feature representations and the high-level semantic information. We propose a deep network architecture which models the semantic context of scenes by capturing object-level information. We use Long Short Term Memory(LSTM) units in conjunction with object proposals to incorporate object-object relationship and object-scene relationship in an end-to-end trainable manner. We evaluate our model on the LSUN dataset and achieve results comparable to the state-of-art. We further show visualization of the learned features and analyze the model with experiments to verify our model's ability to model context.
Global scene descriptors such as GIST @cite_9 and spatial pyramids @cite_21 , as well as local, low-level features such as SIFT @cite_15 , have been used in the past for scene classification. Other part-based models, such as @cite_50 @cite_25 , try to obtain mid-level information from deformable parts. Although image-level features capture the holistic information of a scene, and low- and mid-level features capture the object information in a scene, the above methods concentrate only on image statistics and do not attach any clear semantic meaning to a scene or its constituents.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_50", "@cite_15", "@cite_25" ], "mid": [ "1566135517", "2162915993", "2099528205", "2151103935", "" ], "abstract": [ "In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.", "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. 
The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors.", "Weakly supervised discovery of common visual structure in highly variable, cluttered images is a key problem in recognition. We address this problem using deformable part-based models (DPM's) with latent SVM training [6]. These models have been introduced for fully supervised training of object detectors, but we demonstrate that they are also capable of more open-ended learning of latent structure for such tasks as scene recognition and weakly supervised object localization. For scene recognition, DPM's can capture recurring visual elements and salient objects; in combination with standard global image features, they obtain state-of-the-art results on the MIT 67-category indoor scene dataset. For weakly supervised object localization, optimization over latent DPM parameters can discover the spatial extent of objects in cluttered training images without ground-truth bounding boxes. The resulting method outperforms a recent state-of-the-art weakly supervised object localization approach on the PASCAL-07 dataset.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. 
The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "" ] }
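The spatial-pyramid representation @cite_21 described above reduces to per-cell histograms of quantized local features at successively finer grids. A toy sketch with hypothetical inputs (random 2D feature locations and visual-word labels, not a real codebook):

```python
import numpy as np

def spatial_pyramid(points, words, n_words, levels=2):
    """Toy spatial-pyramid histogram: points are (x, y) locations in
    [0, 1)^2 with quantized visual-word labels in [0, n_words); level l
    partitions the image into 2^l x 2^l cells and histograms each cell,
    then all per-cell histograms are concatenated."""
    feats = []
    for level in range(levels + 1):
        g = 2 ** level
        cells = np.minimum((points * g).astype(int), g - 1)  # cell indices
        for cx in range(g):
            for cy in range(g):
                in_cell = (cells[:, 0] == cx) & (cells[:, 1] == cy)
                feats.append(np.bincount(words[in_cell], minlength=n_words))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
points = rng.random((200, 2))                 # 200 local feature locations
words = rng.integers(0, 10, size=200)         # visual-word index per feature
desc = spatial_pyramid(points, words, n_words=10)
```

Level 0 is the orderless bag-of-features histogram; the finer levels add the approximate geometric correspondence that the cited paper credits for its gains. (The original also weights levels before concatenation, which is omitted here for brevity.)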
1705.04358
2614913722
Convolutional Neural Networks (CNNs) have been used extensively for computer vision tasks and produce rich feature representation for objects or parts of an image. But reasoning about scenes requires integration between the low-level feature representations and the high-level semantic information. We propose a deep network architecture which models the semantic context of scenes by capturing object-level information. We use Long Short Term Memory(LSTM) units in conjunction with object proposals to incorporate object-object relationship and object-scene relationship in an end-to-end trainable manner. We evaluate our model on the LSUN dataset and achieve results comparable to the state-of-art. We further show visualization of the learned features and analyze the model with experiments to verify our model's ability to model context.
Alternatively, some other methods build on an object-centric view of a scene, where a set of objects constitutes the discriminative characteristics of the scene. @cite_32 uses a scene representation built from pre-trained object detectors. @cite_22 introduces a measure of object-class distance to generalise the idea of an object bank and uses it for classification. In more recent literature, @cite_28 applies discriminative clustering to the deep CNN features of scene patches to form meta objects which, pooled together at different scales, make up the final scene representation. Various other CNN architectures for scene classification have been proposed in the last few years. The MOP-CNN model @cite_26 pools deep CNN features of smaller image patches at different scales and obtains a VLAD descriptor for the entire image. @cite_38 uses supervision from auxiliary branch classifiers to decide whether or not to increase the depth of the network. A similar technique, using an auxiliary supervision layer along with Fisher convolution vectors, is employed to learn the final feature representation in @cite_51 .
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_22", "@cite_28", "@cite_32", "@cite_51" ], "mid": [ "1594587862", "1524680991", "", "2191616647", "2169177311", "2289772031" ], "abstract": [ "One of the most promising ways of improving the performance of deep convolutional neural networks is by increasing the number of convolutional layers. However, adding layers makes training more difficult and computationally expensive. In order to train deeper networks, we propose to add auxiliary supervision branches after certain intermediate layers during training. We formulate a simple rule of thumb to determine where these branches should be added. The resulting deeply supervised structure makes the training much easier and also produces better classification results on ImageNet and the recently released, larger MIT Places dataset", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. 
In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets.", "", "Recent work on scene classification still makes use of generic CNN features in a rudimentary manner. In this paper, we present a novel pipeline built upon deep CNN features to harvest discriminative visual objects and parts for scene classification. We first use a region proposal technique to generate a set of high-quality patches potentially containing objects, and apply a pre-trained CNN to extract generic deep features from these patches. Then we perform both unsupervised and weakly supervised learning to screen these patches and discover discriminative ones representing category-specific objects and parts. We further apply discriminative clustering enhanced with local CNN fine-tuning to aggregate similar objects and parts into groups, called meta objects. A scene image representation is constructed by pooling the feature response maps of all the learned meta objects at multiple spatial scales. We have confirmed that the scene image representation obtained using this new pipeline is capable of delivering state-of-the-art performance on two popular scene benchmark datasets, MIT Indoor 67 [22] and Sun397 [31].", "Robust low-level image features have been proven to be effective representations for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meanings. For high level visual tasks, such low-level image representations are potentially not enough. In this paper, we propose a high-level image representation, called the Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors, blind to the testing dataset or visual task. 
Leveraging on the Object Bank representation, superior performances on high level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM. Sparsity algorithms make our representation more efficient and scalable for large scene datasets, and reveal semantically meaningful feature patterns.", "Convolutional neural networks (CNNs) have recently achieved remarkable successes in various image classification and understanding tasks. The deep features obtained at the top fully connected layer of the CNN (FC-features) exhibit rich global semantic information and are extremely effective in image classification. On the other hand, the convolutional features in the middle layers of the CNN also contain meaningful local information, but are not fully explored for image representation. In this paper, we propose a novel locally supervised deep hybrid model (LS-DHM) that effectively enhances and explores the convolutional features for scene recognition. First, we notice that the convolutional features capture local objects and fine structures of scene images, which yield important cues for discriminating ambiguous scenes, whereas these features are significantly eliminated in the highly compressed FC representation. Second, we propose a new local convolutional supervision layer to enhance the local structure of the image by directly propagating the label information to the convolutional layers. Third, we propose an efficient Fisher convolutional vector (FCV) that successfully rescues the orderless mid-level semantic information (e.g., objects and textures) of scene image. The FCV encodes the large-sized convolutional maps into a fixed-length mid-level representation, and is demonstrated to be strongly complementary to the high-level FC-features. Finally, both the FCV and FC-features are collaboratively employed in the LS-DHM representation, which achieves outstanding performance in our experiments. 
It obtains 83.75 and 67.56 accuracies, respectively, on the heavily benchmarked MIT Indoor67 and SUN397 data sets, advancing the state-of-the-art substantially." ] }
1705.04358
2614913722
Convolutional Neural Networks (CNNs) have been used extensively for computer vision tasks and produce rich feature representations for objects or parts of an image. But reasoning about scenes requires integrating the low-level feature representations with high-level semantic information. We propose a deep network architecture which models the semantic context of scenes by capturing object-level information. We use Long Short-Term Memory (LSTM) units in conjunction with object proposals to incorporate object-object and object-scene relationships in an end-to-end trainable manner. We evaluate our model on the LSUN dataset and achieve results comparable to the state-of-the-art. We further show visualizations of the learned features and analyze the model with experiments to verify its ability to model context.
CNNs have been very successful in learning discriminative features for vision problems, and recurrent neural networks have been shown to effectively model the dependencies among their inputs. Many recent architectures use a combination of CNN and LSTM to jointly learn feature representations and their dependencies. Multi-modal tasks like image captioning @cite_44 @cite_0 @cite_23 and visual question answering @cite_3 @cite_30 @cite_47 use a CNN for the image features while the LSTM generates the language for the caption or the answer. Some recent approaches to scene labeling and semantic segmentation use CNN-LSTM architectures @cite_4 @cite_56 @cite_42 @cite_16 , since CNN-only architectures have large receptive fields that do not allow for finer pixel-level label assignment. LSTMs also incorporate dependencies between pixels and improve agreement among their labels. Tasks involving videos also employ an LSTM after extracting deep CNN features from individual frames @cite_36 @cite_39 @cite_46 , since the temporal structure of videos makes them suitable inputs for LSTM units. But as some other very recent works show, even in the absence of temporal information, CNN-LSTM models can be used effectively to model relationships between image regions or object labels @cite_7 @cite_48 @cite_27 . We borrow from these works and use a CNN-LSTM combination to model context.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_36", "@cite_46", "@cite_48", "@cite_42", "@cite_3", "@cite_56", "@cite_44", "@cite_0", "@cite_39", "@cite_27", "@cite_23", "@cite_47", "@cite_16" ], "mid": [ "2949218037", "2951277909", "2951829713", "2201691068", "", "1934184906", "2179259799", "2950761309", "1909234690", "2951912364", "1811254738", "", "2340382781", "2951805548", "2412400526", "2951729963" ], "abstract": [ "This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. 
As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.", "It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important. ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9 to 76.4 mAP. On the new and more challenging MS COCO dataset, we improve state-of-art-the from 19.7 to 33.1 mAP. In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.", "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. 
With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a \"free\" fashion. Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2 on UCF-101 (without using audio) and 84.9 on Columbia Consumer Videos.", "", "In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. 
With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.", "Semantic object parsing is a fundamental task for understanding objects in detail in computer vision community, where incorporating multi-level contextual information is critical for achieving such fine-grained pixel-level recognition. Prior methods often leverage the contextual information through post-processing predicted confidence maps. In this work, we propose a novel deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly incorporate short-distance and long-distance spatial dependencies into the feature learning over all pixel positions. In each LG-LSTM layer, local guidance from neighboring positions and global guidance from the whole image are imposed on each position to better exploit complex local and global contextual information. Individual LSTMs for distinct spatial dimensions are also utilized to intrinsically capture various spatial layouts of semantic parts in the images, yielding distinct hidden and memory cells of each position for each dimension. In our parsing approach, several LG-LSTM layers are stacked and appended to the intermediate convolutional layers to directly enhance visual features, allowing network parameters to be learned in an end-to-end way. The long chains of sequential computation by stacked LG-LSTM layers also enable each pixel to sense a much larger region for inference benefiting from the memorization of previous dependencies in all positions along all dimensions. Comprehensive evaluations on three public datasets well demonstrate the significant superiority of our LG-LSTM over other state-of-the-art methods.", "We propose the task of free-form and open-ended Visual Question Answering (VQA). 
Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).", "This paper addresses the problem of pixel-level segmentation and classification of scene images with an entirely learning-based approach using Long Short Term Memory (LSTM) recurrent neural networks, which are commonly used for sequence classification. We investigate two-dimensional (2D) LSTM networks for natural scene images taking into account the complex spatial dependencies of labels. Prior methods generally have required separate classification and image segmentation stages and or pre- and post-processing. In our approach, classification, segmentation, and context integration are all carried out by 2D LSTM networks, allowing texture and spatial model parameters to be learned within a single model. The networks efficiently capture local and global contextual information over raw RGB values and adapt well for complex scene images. 
Our approach, which has a much lower computational complexity than prior methods, achieved state-of-the-art performance over the Stanford Background and the SIFT Flow datasets. In fact, if no pre- or post-processing is applied, LSTM networks outperform other state-of-the-art approaches. Hence, only with a single-core Central Processing Unit (CPU), the running time of our approach is equivalent or better than the compared state-of-the-art approaches which use a Graphics Processing Unit (GPU). Finally, our networks' ability to visualize feature maps from each layer supports the hypothesis that LSTM networks are overall suited for image processing tasks.", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. 
It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .", "", "While deep convolutional neural networks (CNNs) have shown a great success in single-label image classification, it is important to note that real world images generally contain multiple labels, which could correspond to different objects, scenes, actions and attributes in an image. Traditional approaches to multi-label image classification learn independent classifiers for each category and employ ranking or thresholding on the classification results. These techniques, although working well, fail to explicitly exploit the label dependencies in an image. In this paper, we utilize recurrent neural networks (RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both information in a unified framework. 
Experimental results on public benchmark datasets demonstrate that the proposed architecture achieves better performance than the state-of-the-art multi-label classification model", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. 
We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.", "By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. Particularly, instead of evenly and fixedly dividing an image to pixels or patches in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forgets gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions." ] }
1705.04288
2949800193
While Deep Neural Networks (DNNs) push the state-of-the-art in many machine learning applications, they often require millions of expensive floating-point operations for each input classification. This computation overhead limits the applicability of DNNs to low-power, embedded platforms and incurs high cost in data centers. This motivates recent interest in designing low-power, low-latency DNNs based on fixed-point, ternary, or even binary data precision. While recent works in this area offer promising results, they often lead to large accuracy drops when compared to the floating-point networks. We propose a novel approach to map floating-point based DNNs to 8-bit dynamic fixed-point networks with integer power-of-two weights with no change in network architecture. Our dynamic fixed-point DNNs allow different radix points between layers. During inference, power-of-two weights allow multiplications to be replaced with arithmetic shifts, while the 8-bit fixed-point representation simplifies both the buffer and adder design. In addition, we propose a hardware accelerator design to achieve low-power, low-latency inference with insignificant degradation in accuracy. Using our custom accelerator design with the CIFAR-10 and ImageNet datasets, we show that our method achieves significant power and energy savings while increasing the classification accuracy.
Alternatively, DNNs with low-precision data formats have enormous potential for reducing hardware complexity, power, and latency. Not surprisingly, there exists a rich body of literature studying such limited precision. Previous works in this area have considered a wide range of reduced precisions, including fixed point @cite_23 @cite_18 @cite_15 , ternary (-1,0,1) @cite_7 , and binary (-1,1) @cite_1 @cite_19 . Furthermore, comprehensive studies of the effects of different precisions on deep neural networks are also available. Gysel @cite_3 propose Ristretto, a hardware-oriented tool capable of simulating a wide range of signal precisions. While they consider dynamic fixed point, their focus is on network accuracy, so hardware metrics are not evaluated. On the other hand, Hashemi @cite_22 provide a broad evaluation of different precisions and quantizations in terms of both hardware metrics and network accuracy; however, they do not evaluate dynamic fixed point.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_1", "@cite_3", "@cite_19", "@cite_23", "@cite_15" ], "mid": [ "1841592590", "2563860341", "2013305145", "2319920447", "", "2161758346", "2950769435", "2140660536" ], "abstract": [ "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.", "Deep neural networks are gaining in popularity as they are used to generate state-of-the-art results for a variety of computer vision and machine learning applications. At the same time, these networks have grown in depth and complexity in order to solve harder problems. Given the limitations in power budgets dedicated to these networks, the importance of low-power, low-memory solutions has been stressed in recent years. While a large number of dedicated hardware using different precisions has recently been proposed, there exists no comprehensive study of different bit precisions and arithmetic in both inputs and network parameters. In this work, we address this issue and perform a study of different bit-precisions in neural networks (from floating-point to fixed-point, powers of two, and binary). In our evaluation, we consider and analyze the effect of precision scaling on both network accuracy and hardware metrics including memory footprint, power and energy consumption, and design area. 
We also investigate training-time methodologies to compensate for the reduction in accuracy due to limited bit precision and demonstrate that in most cases, precision scaling can deliver significant benefits in design metrics at the cost of very modest decreases in network accuracy. In addition, we propose that a small portion of the benefits achieved when using lower precisions can be forfeited to increase the network size and therefore the accuracy. We evaluate our experiments, using three well-recognized networks and datasets to show its generality. We investigate the trade-offs and highlight the benefits of using lower precisions in terms of energy and memory footprint.", "Feedforward deep neural networks that employ multiple hidden layers show high performance in many applications, but they demand complex hardware for implementation. The hardware complexity can be much lowered by minimizing the word-length of weights and signals, but direct quantization for fixed-point network design does not yield good results. We optimize the fixed-point design by employing backpropagation based retraining. The designed fixed-point networks with ternary weights (+1, 0, and −1) and 3-bit signal show only negligible performance loss when compared to the floating-point coun-terparts. The backpropagation for retraining uses quantized weights and fixed-point signal to compute the output, but utilizes high precision values for adapting the networks. A character recognition and a phoneme recognition examples are presented.", "We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. 
To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.", "", "Multilayer Neural Networks (MNNs) are commonly trained using gradient descent-based methods, such as BackPropagation (BP). Inference in probabilistic graphical models is often done using variational Bayes methods, such as Expectation Propagation (EP). We show how an EP based approach can also be used to train deterministic MNNs. Specifically, we approximate the posterior of the weights given the data using a \"mean-field\" factorized distribution, in an online setting. Using online EP and the central limit theorem we find an analytical approximation to the Bayes update of this posterior, as well as the resulting Bayes estimates of the weights and outputs. Despite a different origin, the resulting algorithm, Expectation BackPropagation (EBP), is very similar to BP in form and efficiency. However, it has several additional advantages: (1) Training is parameter-free, given initial conditions (prior) and the MNN architecture. This is useful for large-scale problems, where parameter tuning is a major challenge. (2) The weights can be restricted to have discrete values. This is especially useful for implementing trained MNNs in precision limited hardware chips, thus improving their speed and energy efficiency by several orders of magnitude. We test the EBP algorithm numerically in eight binary text classification tasks. In all tasks, EBP outperforms: (1) standard BP with the optimal constant learning rate (2) previously reported state of the art. 
Interestingly, EBP-trained MNNs with binary weights usually perform better than MNNs with continuous (real) weights - if we average the MNN output using the inferred posterior.", "We simulate the training of a set of state of the art neural networks, the Maxout networks (, 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct arithmetics: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those arithmetics, we assess the impact of the precision of the computations on the final error of the training. We find that very low precision computation is sufficient not just for running trained networks but also for training them. For example, almost state-of-the-art results were obtained on most datasets with 10 bits for computing activations and gradients, and 12 bits for storing updated parameters.", "A new algorithm for designing multilayer feedforward neural networks with single powers-of-two weights is presented. By applying this algorithm, the digital hardware implementation of such networks becomes easier as a result of the elimination of multipliers. This proposed algorithm consists of two stages. First, the network is trained by using the standard backpropagation algorithm. Weights are then quantized to single powers-of-two values, and weights and slopes of activation functions are adjusted adaptively to reduce the sum of squared output errors to a specified level. Simulation results indicate that the multilayer feedforward neural networks with single powers-of-two weights obtained using the proposed algorithm have generalization performance similar to that of the original networks with continuous weights." ] }
1705.04288
2949800193
While Deep Neural Networks (DNNs) push the state-of-the-art in many machine learning applications, they often require millions of expensive floating-point operations for each input classification. This computation overhead limits the applicability of DNNs to low-power, embedded platforms and incurs high cost in data centers. This motivates recent interest in designing low-power, low-latency DNNs based on fixed-point, ternary, or even binary data precision. While recent works in this area offer promising results, they often lead to large accuracy drops when compared to the floating-point networks. We propose a novel approach to map floating-point based DNNs to 8-bit dynamic fixed-point networks with integer power-of-two weights with no change in network architecture. Our dynamic fixed-point DNNs allow different radix points between layers. During inference, power-of-two weights allow multiplications to be replaced with arithmetic shifts, while the 8-bit fixed-point representation simplifies both the buffer and adder design. In addition, we propose a hardware accelerator design to achieve low-power, low-latency inference with insignificant degradation in accuracy. Using our custom accelerator design with the CIFAR-10 and ImageNet datasets, we show that our method achieves significant power and energy savings while increasing the classification accuracy.
In the hardware design domain, while a few works have considered different bit-width fixed-point representations in their accelerator designs @cite_22 @cite_0 @cite_12 , in contrast to the accuracy analysis, no evaluation of hardware designs using dynamic fixed-point is available. We fill this gap by providing an accelerator design optimized to use a dynamic fixed-point representation for intermediate computations while using power-of-two weights.
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_12" ], "mid": [ "2096645269", "2563860341", "2094756095" ], "abstract": [ "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.", "Deep neural networks are gaining in popularity as they are used to generate state-of-the-art results for a variety of computer vision and machine learning applications. At the same time, these networks have grown in depth and complexity in order to solve harder problems. 
Given the limitations in power budgets dedicated to these networks, the importance of low-power, low-memory solutions has been stressed in recent years. While a large number of dedicated hardware using different precisions has recently been proposed, there exists no comprehensive study of different bit precisions and arithmetic in both inputs and network parameters. In this work, we address this issue and perform a study of different bit-precisions in neural networks (from floating-point to fixed-point, powers of two, and binary). In our evaluation, we consider and analyze the effect of precision scaling on both network accuracy and hardware metrics including memory footprint, power and energy consumption, and design area. We also investigate training-time methodologies to compensate for the reduction in accuracy due to limited bit precision and demonstrate that in most cases, precision scaling can deliver significant benefits in design metrics at the cost of very modest decreases in network accuracy. In addition, we propose that a small portion of the benefits achieved when using lower precisions can be forfeited to increase the network size and therefore the accuracy. We evaluate our experiments, using three well-recognized networks and datasets to show its generality. We investigate the trade-offs and highlight the benefits of using lower precisions in terms of energy and memory footprint.", "Convolutional neural network (CNN) has been widely employed for image recognition because it can achieve high accuracy by emulating behavior of optic nerves in living creatures. Recently, rapid growth of modern applications based on deep learning algorithms has further improved research and implementations. Especially, various accelerators for deep CNN have been proposed based on FPGA platform because it has advantages of high performance, reconfigurability, and fast development round, etc. 
Although current FPGA accelerators have demonstrated better performance over generic processors, the accelerator design space has not been well exploited. One critical problem is that the computation throughput may not well match the memory bandwidth provided by an FPGA platform. Consequently, existing approaches cannot achieve the best performance due to under-utilization of either logic resource or memory bandwidth. At the same time, the increasing complexity and scalability of deep learning applications aggravate this problem. In order to overcome this problem, we propose an analytical design scheme using the roofline model. For any solution of a CNN design, we quantitatively analyze its computing throughput and required memory bandwidth using various optimization techniques, such as loop tiling and transformation. Then, with the help of the roofline model, we can identify the solution with the best performance and lowest FPGA resource requirement. As a case study, we implement a CNN accelerator on a VC707 FPGA board and compare it to previous approaches. Our implementation achieves a peak performance of 61.62 GFLOPS under 100MHz working frequency, which outperforms previous approaches significantly." ] }
1705.04400
2614038158
Replacing hand-engineered pipelines with end-to-end deep learning systems has enabled strong results in applications like speech and object recognition. However, the causality and latency constraints of production systems put end-to-end speech models back into the underfitting regime and expose biases in the model that we show cannot be overcome by "scaling up", i.e., training bigger models on more data. In this work we systematically identify and address sources of bias, reducing error rates by up to 20% while remaining practical for deployment. We achieve this by utilizing improved neural architectures for streaming inference, solving optimization issues, and employing strategies that increase audio and label modelling versatility.
The most direct way to remove all bias in the input modeling is probably learning a sufficiently expressive model directly from raw waveforms, as in @cite_20 @cite_10 , by parameterizing and learning these transformations. These works suggest that a nontrivial improvement in accuracy purely from modeling the raw waveform is hard to obtain without a significant increase in the compute and memory requirements. @cite_25 introduced a trainable per-channel energy normalization layer (PCEN) that parametrizes the power normalization as well as the compression step, which is typically handled by a static log transform.
{ "cite_N": [ "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "", "2478006605", "2331927446" ], "abstract": [ "", "Robust and far-field speech recognition is critical to enable true hands-free communication. In far-field conditions, signals are attenuated due to distance. To improve robustness to loudness variation, we introduce a novel frontend called per-channel energy normalization (PCEN). The key ingredient of PCEN is the use of an automatic gain control based dynamic compression to replace the widely used static (such as log or root) compression. We evaluate PCEN on the keyword spotting task. On our large rerecorded noisy and far-field eval sets, we show that PCEN significantly improves recognition performance. Furthermore, we model PCEN as neural network layers and optimize high-dimensional PCEN parameters jointly with the keyword spotting acoustic model. The trained PCEN frontend demonstrates significant further improvements without increasing model complexity or inference-time cost.", "Deep learning has dramatically improved the performance of speech recognition systems through learning hierarchies of features optimized for the task at hand. However, true end-to-end learning, where features are learned directly from waveforms, has only recently reached the performance of hand-tailored representations based on the Fourier transform. In this paper, we detail an approach to use convolutional filters to push past the inherent tradeoff of temporal and frequency resolution that exists for spectral representations. At increased computational cost, we show that increasing temporal resolution via reduced stride and increasing frequency resolution via additional filters delivers significant performance improvements. 
Further, we find more efficient representations by simultaneously learning at multiple scales, leading to an overall decrease in word error rate on a difficult internal speech test set by 20.7% relative to networks with the same number of parameters trained on spectrograms." ] }
1705.04400
2614038158
Replacing hand-engineered pipelines with end-to-end deep learning systems has enabled strong results in applications like speech and object recognition. However, the causality and latency constraints of production systems put end-to-end speech models back into the underfitting regime and expose biases in the model that we show cannot be overcome by "scaling up", i.e., training bigger models on more data. In this work we systematically identify and address sources of bias, reducing error rates by up to 20% while remaining practical for deployment. We achieve this by utilizing improved neural architectures for streaming inference, solving optimization issues, and employing strategies that increase audio and label modelling versatility.
Lookahead convolutions have been proposed for streaming inference @cite_1 . Latency-constrained bidirectional recurrent layers (LC-BRNN) and context-sensitive chunks (CSC) have been proposed in @cite_27 for tractable sequence model training, but have not been explored for streaming inference. Time-delay neural networks @cite_14 and convolutional networks are also options for controlling the amount of future context.
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_1" ], "mid": [ "2329068866", "2402146185", "" ], "abstract": [ "This paper presents a study of using deep bidirectional long short-term memory (DBLSTM) recurrent neural network as acoustic model for DBLSTM-HMM based large vocabulary continuous speech recognition (LVCSR), where a context-sensitive-chunk (CSC) back-propagation through time (BPTT) approach is used to train DBLSTM by splitting each training sequence into chunks with appended contextual observations, and a CSC-based decoding method with possibly overlapped CSCs is used for recognition. Our approach makes mini-batch based training on GPU more efficient and reduces the latency of DBLSTM-based LVCSR from a whole utterance to a short chunk. Evaluations have been made on Switchboard-I benchmark task. In comparison with epoch-wise BPTT training, our method can achieve more than three times speedup on a single GPU card without degrading recognition accuracy. In comparison with a highly optimized DNN-HMM system trained by a frame-level cross entropy (CE) criterion, our CE-trained DBLSTM-HMM system achieves relative word error rate reductions of 9 and 5 on Eval2000 and RT03S testing sets, respectively. Furthermore, by running model averaging based parallel training of DBLSTM on a cluster of GPUs, CSC-BPTT incurs less accuracy degradation than epoch-wise BPTT while achieves a linear speedup.", "Recurrent neural network architectures have been shown to efficiently model long term temporal dependencies between acoustic events. However the training time of recurrent networks is higher than feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long term temporal dependencies with training times comparable to standard feed-forward DNNs. The network uses sub-sampling to reduce computation during training. 
On the Switchboard task we show a relative improvement of 6% over the baseline DNN model. We present results on several LVCSR tasks with training data ranging from 3 to 1800 hours to show the effectiveness of the TDNN architecture in learning wider temporal dependencies in both small and large data scenarios.", "" ] }
1705.04179
2612326314
This paper describes a dual certificate condition on a linear measurement operator @math (defined on a Hilbert space @math and having finite-dimensional range) which guarantees that an atomic norm minimization, in a certain sense, will be able to approximately recover a structured signal @math from measurements @math . Put succinctly, the condition implies that peaks in a sparse decomposition of @math are close to the support of the atomic decomposition of the solution @math . The condition applies in a relatively general context - in particular, the space @math can be infinite-dimensional. The abstract framework is applied to several concrete examples, one example being super-resolution. In this process, several novel results which are interesting in their own right are obtained.
The concept of atomic norms was introduced in @cite_1 . Much of the work will rely on techniques developed in, e.g., @cite_14 @cite_4 for analyzing the use of @math -norm minimization for recovering spike trains. The concrete examples which we consider in Section have all been thoroughly investigated before. We will refer to the related literature in the respective sections.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_4" ], "mid": [ "2964325628", "2167077875", "2017718716" ], "abstract": [ "This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object—the high end of its spectrum—from coarse scale information only—from samples at the low end of the spectrum. Suppose we have many point sources at unknown locations in [0,1] and with unknown complex-valued amplitudes. We only observe Fourier samples of this object up to a frequency cutoff fc. We show that one can super-resolve these point sources with infinite precision—i.e., recover the exact locations and amplitudes—by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least 2/fc. This result extends to higher dimensions and other models. In one dimension, for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop some theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary.", "In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension.
This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.", "This paper studies sparse spikes deconvolution over the space of measures. 
We focus on the recovery properties of the support of the measure (i.e., the location of the Dirac masses) using total variation of measures (TV) regularization. This regularization is the natural extension of the @math l1 norm of vectors to the setting of measures. We show that support identification is governed by a specific solution of the dual problem (a so-called dual certificate) having minimum @math L2 norm. Our main result shows that if this certificate is non-degenerate (see the definition below), when the signal-to-noise ratio is large enough TV regularization recovers the exact same number of Diracs. We show that both the locations and the amplitudes of these Diracs converge toward those of the input measure when the noise drops to zero. Moreover, the non-degeneracy of this certificate can be checked by computing a so-called vanishing derivative pre-certificate. This proxy can be computed in closed form by solving a linear system. Lastly, we draw connections between the support of the recovered measure on a continuous domain and on a discretized grid. We show that when the signal-to-noise level is large enough, and provided the aforementioned dual certificate is non-degenerate, the solution of the discretized problem is supported on pairs of Diracs which are neighbors of the Diracs of the input measure. This gives a precise description of the convergence of the solution of the discretized problem toward the solution of the continuous grid-free problem, as the grid size tends to zero." ] }
1705.04045
2950288447
Every day, millions of users reveal their interests on Facebook, which are then monetized via targeted advertisement marketing campaigns. In this paper, we explore the use of demographically rich Facebook Ads audience estimates for tracking non-communicable diseases around the world. Across 47 countries, we compute the audiences of marker interests, and evaluate their potential in tracking health conditions associated with tobacco use, obesity, and diabetes, compared to the performance of placebo interests. Despite its huge potential, we find that, for modeling prevalence of health conditions across countries, differences in these interest audiences are only weakly indicative of the corresponding prevalence rates. Within the countries, however, our approach provides interesting insights on trends of health awareness across demographic groups. Finally, we provide a temporal error analysis to expose the potential pitfalls of using Facebook's Marketing API as a black box.
In ``Social media in public health,'' Kass-Hout & Alhinnawi assert, ``Social media can provide timely, relevant, and transparent information of public health importance'' @cite_14 , spurring an exciting line of ongoing research that studies health trends via online data. Following this call for action, some researchers focus on information seeking on topics such as abortion @cite_7 and vaccines @cite_13 , others use the data to now-cast diseases @cite_8 , while others use search logs, most notably Google Trends, to predict seasonal flu @cite_17 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_13", "@cite_17" ], "mid": [ "2115535574", "2598305407", "2028085927", "336195196", "" ], "abstract": [ "Crowdsourcing, folksonomy, user-generated content, social networks, the sharing economy, peer production, Multi-User Virtual Environments, participatory media, collaborative creativity and Big Data (see Table 1) are all terms in the expanding lexicon of user-contributed data and user-led innovation (Sharp, Darren and Salomon, Mandy, User-led innovation: a new framework for co-creating business and social value, Smart Internet", "Background: The authors present a case study of a public health campaign, including social media, and aiming at maximizing the use of web app on sexual health. Objective: To analyze the impact of a Facebook fan page, Facebook advertisements, and posters to maximize the number of visits to the educational web app. Methods: The campaign is assessed for 1 year, using data tracked through Facebook statistics and Google Analytics. Results: The site had 3670 visits 10.1 visitors day, 95 CI 8.7-11.4. During the one-month Facebook Ads campaign, the site received 1263 visits 42.1 visitors day, 95 CI 37.3-46.9, multiplying by over four the average number of visitors day. 34.4 of all the participants were recruited during the one-month Facebook ads campaign. Conclusions: Facebook advertisements seem to be a good tool to promote an educational web app on sexual health targeting youth, and to reach a huge number of users rapidly and at a low cost.", "Twitter is a unique social media channel, in the sense that users discuss and talk about the most diverse topics, including their health conditions. In this paper we analyze how Dengue epidemic is reflected on Twitter and to what extent that information can be used for the sake of surveillance. Dengue is a mosquito-borne infectious disease that is a leading cause of illness and death in tropical and subtropical regions, including Brazil. 
We propose an active surveillance methodology that is based on four dimensions: volume, location, time and public perception. First we explore the public perception dimension by performing sentiment analysis. This analysis enables us to filter out content that is not relevant for the sake of Dengue surveillance. Then, we verify the high correlation between the number of cases reported by official statistics and the number of tweets posted during the same time period (i.e., R2 = 0.9578). A clustering approach was used in order to exploit the spatio-temporal dimension, and the quality of the clusters obtained becomes evident when they are compared to official data (i.e., RandIndex = 0.8914). As an application, we propose a Dengue surveillance system that shows the evolution of the dengue situation reported in tweets, which is implemented in www.observatorio.inweb.org.br dengue .", "Vaccination campaigns are one of the most important and successful public health programs ever undertaken. People who want to learn about vaccines in order to make an informed decision on whether to vaccinate are faced with a wealth of information on the Internet, both for and against vaccinations. In this paper we develop an automated way to score Internet search queries and web pages as to the likelihood that a person making these queries or reading those pages would decide to vaccinate. We apply this method to data from a major Internet search engine, while people seek information about the Measles, Mumps and Rubella (MMR) vaccine. We show that our method is accurate, and use it to learn about the information acquisition process of people. Our results show that people who are pro-vaccination as well as people who are anti-vaccination seek similar information, but browsing this information has differing effect on their future browsing. These findings demonstrate the need for health authorities to tailor their information according to the current stance of users.", "" ] }
1705.04045
2950288447
Every day, millions of users reveal their interests on Facebook, which are then monetized via targeted advertisement marketing campaigns. In this paper, we explore the use of demographically rich Facebook Ads audience estimates for tracking non-communicable diseases around the world. Across 47 countries, we compute the audiences of marker interests, and evaluate their potential in tracking health conditions associated with tobacco use, obesity, and diabetes, compared to the performance of placebo interests. Despite its huge potential, we find that, for modeling prevalence of health conditions across countries, differences in these interest audiences are only weakly indicative of the corresponding prevalence rates. Within the countries, however, our approach provides interesting insights on trends of health awareness across demographic groups. Finally, we provide a temporal error analysis to expose the potential pitfalls of using Facebook's Marketing API as a black box.
The Facebook advertising API has been studied in terms of determining the relative value of different user demographics and assessing the overall stability of the advertising market @cite_16 @cite_12 . A few studies have attempted to link Facebook audience data to behavioral aspects and interests related to health conditions. Gittelman et al. convert 37 Facebook interest categories into nine factors to use in the modeling of life expectancy @cite_6 . Although they show an improvement in the statistical models, their approach avoided determining relationships between each individual category and the real-world data. On the other hand, Chunara et al. explore the relationship of two factors -- interest in television and outdoor activities -- to the obesity rates in metros across the USA and neighborhoods within New York City @cite_1 . Both of these studies are confined to the United States, and are limited to one health topic. In contrast, our study expands the set of real-world health indicators we track, as well as the geographic coverage, to global proportions. Crucially, we assess the quality of Facebook data by introducing placebo baselines and normalization alternatives, and by performing temporal analysis.
{ "cite_N": [ "@cite_16", "@cite_1", "@cite_6", "@cite_12" ], "mid": [ "2049704571", "2003064313", "2010079596", "2077691217" ], "abstract": [ "Advertising is ubiquitous on the Web; numerous ad networks serve billions of ads daily via keyword or search term auctions. Recently, online social networks (OSNs) such as Facebook have created site-specific ad services that differ from traditional ad networks by letting advertisers bid on users rather than keywords. With Facebook's annual ad revenue exceeding $4 billion, OSN-based ad services are emerging to be a significant fraction of the online ad market. In contrast to other online ad markets (e.g., Google's ad market), there has been little academic study of OSN ad services, and OSNs have released very little data about their advertising markets; as a result, researchers currently lack the tools to measure and understand these markets. In this paper, our goal is to bring visibility to OSN ad markets, focusing on Facebook. We demonstrate that the (undocumented) feature that suggests bids to advertisers is most likely calculated via sampling recent winning bids. We then show how this feature can be used to explore the relative value of different user demographics and the overall stability of the advertising market. Through the exploration of suggested bid data for different demographics, we find dramatic differences in prices paid across different user interests and locations. Finally, we show that the ad market shows long-term variability, suggesting that OSN ad services have yet to mature.", "Background Understanding the social environmental around obesity has been limited by available data. One promising approach used to bridge similar gaps elsewhere is to use passively generated digital data. Purpose This article explores the relationship between online social environment via web-based social networks and population obesity prevalence. 
Methods We performed a cross-sectional study using linear regression and cross validation to measure the relationship and predictive performance of user interests on the online social network Facebook to obesity prevalence in metros across the United States of America (USA) and neighborhoods within New York City (NYC). The outcomes, proportion of obese and or overweight population in USA metros and NYC neighborhoods, were obtained via the Centers for Disease Control and Prevention Behavioral Risk Factor Surveillance and NYC EpiQuery systems. Predictors were geographically specific proportion of users with activity-related and sedentary-related interests on Facebook. Results Higher proportion of the population with activity-related interests on Facebook was associated with a significant 12.0 (95 Confidence Interval (CI) 11.9 to 12.1) lower predicted prevalence of obese and or overweight people across USA metros and 7.2 (95 CI: 6.8 to 7.7) across NYC neighborhoods. Conversely, greater proportion of the population with interest in television was associated with higher prevalence of obese and or overweight people of 3.9 (95 CI: 3.7 to 4.0) (USA) and 27.5 (95 CI: 27.1 to 27.9, significant) (NYC). For activity-interests and national obesity outcomes, the average root mean square prediction error from 10-fold cross validation was comparable to the average root mean square error of a model developed using the entire data set. Conclusions Activity-related interests across the USA and sedentary-related interests across NYC were significantly associated with obesity prevalence. Further research is needed to understand how the online social environment relates to health outcomes and how it can be used to identify or target interventions.", "Background: Investigation into personal health has become focused on conditions at an increasingly local level, while response rates have declined and complicated the process of collecting data at an individual level. 
Simultaneously, social media data have exploded in availability and have been shown to correlate with the prevalence of certain health conditions. Objective: Facebook likes may be a source of digital data that can complement traditional public health surveillance systems and provide data at a local level. We explored the use of Facebook likes as potential predictors of health outcomes and their behavioral determinants. Methods: We performed principal components and regression analyses to examine the predictive qualities of Facebook likes with regard to mortality, diseases, and lifestyle behaviors in 214 counties across the United States and 61 of 67 counties in Florida. These results were compared with those obtainable from a demographic model. Health data were obtained from both the 2010 and 2011 Behavioral Risk Factor Surveillance System (BRFSS) and mortality data were obtained from the National Vital Statistics System. Results: Facebook likes added significant value in predicting most examined health outcomes and behaviors even when controlling for age, race, and socioeconomic status, with model fit improvements (adjusted R 2 ) of an average of 58 across models for 13 different health-related metrics over basic sociodemographic models. Small area data were not available in sufficient abundance to test the accuracy of the model in estimating health conditions in less populated markets, but initial analysis using data from Florida showed a strong model fit for obesity data (adjusted R 2 =.77). Conclusions: Facebook likes provide estimates for examined health outcomes and health behaviors that are comparable to those obtained from the BRFSS. Online sources may provide more reliable, timely, and cost-effective county-level data than that obtainable from traditional public health surveillance systems as well as serve as an adjunct to those systems. 
[J Med Internet Res 2015;17(4):e98]", "Not all of the over one billion users of online social networks (OSNs) are equally valuable to the OSNs. The current business model of monetizing advertisements targeted to users does not appear to be based on any visible grouping of the users. The primary metrics remain CPM (cost per mille---i.e., thousand impressions) and CPC (cost per click) of ads that are shown to users. However, there is significant diversity in the actions of users---some users upload interesting content triggering additional views and comments leading to further cascades of action. Beyond direct impressions, a user's action can generate indirect impressions by actions induced on friends and other users. Identifying the valuable user segments requires examination of profile data, friendships, and most importantly, their activity. Here we explore an alternate approach for measuring the value of users in OSNs by proposing a framework from the viewpoint of a popular OSN. Using a real dataset on the social network and activities of users, we show that a small subset of actions are likely to be key indicators of a user's value. Additionally, by examining the current targeting demographics available in Facebook, we are able to explore the relative (monetary) value that different users represent to the OSN." ] }
1705.03860
2612197977
This report presents our SmartSpace event handling framework for managing smart-grids and renewable energy installations. SmartSpace provides decision support for human stakeholders. Based on different datasources that feed into our framework, a variety of analysis and decision steps are supported. These decision steps are ultimately used to provide adequate information to human stakeholders. The paper discusses potential data sources for decisions around smart energy systems and introduces a spatio-temporal modeling technique for the involved data. Operations to reason about the formalized data are provided. Our spatio-temporal models help to provide a semantic context for the data. Customized rules allow the specification of conditions under which information is provided to stakeholders. We exemplify our ideas and present our demonstrators including visualization capabilities.
Different approaches have been introduced for spatio-temporal modeling and reasoning. For describing models using logic, work on process algebra-like formalisms has been conducted @cite_3 @cite_42 . Qualitative spatial descriptions and reasoning @cite_27 @cite_10 @cite_1 @cite_14 , where spatial relationships are described not through exact geometry but through predicates (e.g., stating that an entity is located close to another object), are important for the BeSpaceD-based abstractions. Other logic approaches to spatial reasoning can be found in @cite_24 @cite_34 . A comparison of semantic formalisms for Industry 4.0, as well as guidelines to assist engineers on this topic, is featured in @cite_17 . Related to our BeSpaceD operators introduced in this paper, functional programming language features for large-scale data operations are supported by frameworks such as Spark http://spark.apache.org . In comparison, our work specifically targets operations for spatio-temporal models.
{ "cite_N": [ "@cite_14", "@cite_42", "@cite_1", "@cite_3", "@cite_24", "@cite_27", "@cite_34", "@cite_10", "@cite_17" ], "mid": [ "2079249289", "2059796241", "1871668702", "2076632469", "140589515", "1539708367", "", "852874", "2055363618" ], "abstract": [ "Representation and reasoning with qualitative spatial relations is an important problem in artificial intelligence and has wide applications in the fields of geographic information system, computer vision, autonomous robot navigation, natural language understanding, spatial databases and so on. The reasons for this interest in using qualitative spatial relations include cognitive comprehensibility, efficiency and computational facility. This paper summarizes progress in qualitative spatial representation by describing key calculi representing different types of spatial relationships. The paper concludes with a discussion of current research and glimpse of future work.", "We enrich spatial constraint systems with operators to specify information and processes moving from a space to another. We shall refer to these new structures as spatial constraint systems with extrusion. We shall investigate the properties of this new family of constraint systems and illustrate their applications. From a computational point of view the new operators provide for process information extrusion, a central concept in formalisms for mobile communication. From an epistemic point of view extrusion corresponds to a notion we shall call utterance; a piece of information that an agent communicates to others but that may be inconsistent with the agent's beliefs. Utterances can then be used to express instances of epistemic notions, which are common place in social media, such as hoaxes or intentional lies. 
Spatial constraint systems with extrusion can be seen as complete Heyting algebras equipped with maps to account for spatial and epistemic specifications.", "The paper is a overview of the major qualitative spatial representation and reasoning techniques. We survey the main aspects of the representation of qualitative knowledge including ontological aspects, topology, distance, orientation and shape. We also consider qualitative spatial reasoning including reasoning about spatial change. Finally there is a discussion of theoretical results and a glimpse of future work. The paper is a revised and condensed version of [33,34].", "We present a logic that can express properties of freshness, secrecy, structure, and behavior of concurrent systems. In addition to standard logical and temporal operators, our logic includes spatial operations corresponding to composition, local name restriction, and a primitive fresh name quantifier. Properties can also be defined by recursion; a central aim of this paper is then the combination of a logical notion of freshness with inductive and coinductive definitions of properties.", "A spatial logic consists of four groups of operators: standard propositional connectives; spatial operators; a temporal modality; calculus-specific operators. The calculus-specific operators talk about the capabilities of the processes of the calculus, that is, the process constructors through which a process can interact with its environment. We prove some minimality results for spatial logics. The main results show that in the logics for π-calculus and asynchronous π-calculus the calculus-specific operators can be eliminated. The results are presented under both the strong and the weak interpretations of the temporal modality. Our proof techniques are applicable to other spatial logics, so to eliminate some of – if not all – the calculus-specific operators. 
As an example of this, we consider the logic for the Ambient calculus, with the strong semantics.", "In this paper we advocate the use of multi-dimensional modal logics as a framework for knowledge representation and, in particular, for representing spatio-temporal information. We construct a two-dimensional logic capable of describing topological relationships that change over time. This logic, called PSTL (Propositional Spatio-Temporal Logic) is the Cartesian product of the well-known temporal logic PTL and the modal logic S4u, which is the Lewis system S4 augmented with the universal modality. Although it is an open problem whether the full PSTL is decidable, we show that it contains decidable fragments into which various temporal extensions (both point-based and interval based) of the spatial logic RCC-8 can be embedded. We consider known decidability and complexity results that are relevant to computation with multi-dimensional formalisms and discuss possible directions for further research.", "", "", "Under the context of Industrie 4.0 (I4.0), future production systems provide balanced operations between manufacturing flexibility and efficiency, realized in an autonomous, horizontal, and decentralized item-level production control framework. Structured interoperability via precise formulations on an appropriate degree is crucial to achieve software engineering efficiency in the system life cycle. However, selecting the degree of formalization can be challenging, as it crucially depends on the desired common understanding (semantic degree) between multiple parties. In this paper, we categorize different semantic degrees and map a set of technologies in industrial automation to their associated degrees. Furthermore, we created guidelines to assist engineers selecting appropriate semantic degrees in their design. 
We applied these guidelines on publicly available scenarios to examine the validity of the approach, and identified semantic elements over internally developed use cases concerning plug-and-produce." ] }
1705.03934
2612853140
A Bloom filter is a simple data structure supporting membership queries on a set. The standard Bloom filter does not support the delete operation, therefore, many applications use a counting Bloom filter to enable deletion. This paper proposes a generalization of the counting Bloom filter approach, called "autoscaling Bloom filters", which allows adjustment of its capacity with probabilistic bounds on false positives and true positives. In essence, the autoscaling Bloom filter is a binarized counting Bloom filter with an adjustable binarization threshold. We present the mathematical analysis of the performance as well as give a procedure for minimization of the false positive rate.
A recent probabilistic analysis of the SBF is presented in @cite_10 . Detailed surveys on BFs and their applications are provided in @cite_13 and @cite_2 . Recent applications of BFs and their modifications include certificate revocation for smart grids @cite_14 . An important aspect for the applicability of BFs in modern networking applications is the processing speed of a filter. In order to improve the speed of the membership check, the authors in @cite_25 proposed a novel filter type called Ultra-Fast BFs. In @cite_6 it was shown that BFs can be accelerated (in terms of processing speed) by using particular types of hashing functions.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_6", "@cite_2", "@cite_13", "@cite_25" ], "mid": [ "2524874630", "", "2624747703", "", "2074633331", "2731273958" ], "abstract": [ "Given the scalability of the advanced metering infrastructure (AMI) networks, maintenance and access of certificate revocation lists (CRLs) pose new challenges. It is inefficient to create one large CRL for all the smart meters (SMs) or create a customized CRL for each SM since too many CRLs will be required. In order to tackle the scalability of the AMI network, we divide the network into clusters of SMs, but there is a tradeoff between the overhead at the certificate authority (CA) and the overhead at the clusters. We use Bloom filters to reduce the size of the CRLs in order to alleviate this tradeoff by increasing the clusters’ size with acceptable overhead. However, since Bloom filters suffer from false positives, there is a need to handle this problem so that SMs will not discard important messages due to falsely identifying the certificate of a sender as invalid. To this end, we propose two certificate revocation schemes that can identify and nullify the false positives. While the first scheme requires contacting the gateway to resolve them, the second scheme requires the CA additionally distribute the list of certificates that trigger false positives. Using mathematical models, we have demonstrated that the probability of contacting the gateway in the first scheme and the overhead of the second scheme can be very low by properly designing the Bloom filters. In order to assess the scalability and validate the mathematical formulas, we have implemented the proposed schemes using Visual C. The results indicate that our schemes are much more scalable than the conventional CRL and the mathematical and simulation results are almost identical. 
Moreover, we simulated the distribution of the CRLs in a wireless mesh-based AMI network using ns-3 network simulator and assessed its distribution overhead.", "", "The Internet continues to flourish, while an increasing number of network applications are found deploying Bloom filters. However, the heterogeneity of the Bloom filter realisations complicates the utilisation of relevant applications. Moreover, when applying Bloom filter to traffic that usually has a gigabit capacity, even insignificant delays will accumulate and restrict the effectiveness of the real-time protocols. In this study, the authors present a Bloom filter construction that can be easily and consistently adopted at network nodes, with also considerable processing speed. Specifically, the authors show that AES-based hashes are adequate to create Bloom filters correctly. Then they illustrate how AES new instructions (AES-NI) can be leveraged to accelerate the Bloom filter realisation. According to the authors' experimental results, the proposed Bloom filter enables the best speed performance compared to the competing approaches.", "", "Many network solutions and overlay networks utilize probabilistic techniques to reduce information processing and networking costs. This survey article presents a number of frequently used and useful probabilistic techniques. Bloom filters and their variants are of prime importance, and they are heavily used in various distributed systems. This has been reflected in recent research and many new algorithms have been proposed for distributed systems that are either directly or indirectly based on Bloom filters. 
In this survey, we give an overview of the basic and advanced techniques, reviewing over 20 variants and discussing their application in distributed systems, in particular for caching, peer-to-peer systems, routing and forwarding, and measurement data summarization.", "The network link speed is increasing at an alarming rate, which requires all network functions on routers switches to keep pace. Bloom filter is a widely-used membership check data structure in network applications. It also faces the urgent demand of improving the performance in membership check speed. To this end, this paper proposes a new Bloom filter variant called Ultra-Fast Bloom Filters, by leveraging the SIMD techniques. We make three improvements for the UFBF to accelerate the membership check speed. First, we develop a novel hash computation algorithm which can compute multiple hash functions in parallel with the use of SIMD instructions. Second, we change a Bloom filter's bit-test process from sequential to parallel. Third, we increase the cache efficiency of membership check by encoding an element's information to a small block which can easily fit into a cache-line. Both theoretical analysis and extensive simulations show that the UFBF greatly exceeds the state-of-the-art Bloom filter variants on membership check speed." ] }
1705.03916
2952258553
This paper explores the use of Answer Set Programming (ASP) in solving Distributed Constraint Optimization Problems (DCOPs). The paper provides the following novel contributions: (1) It shows how one can formulate DCOPs as logic programs; (2) It introduces ASP-DPOP, the first DCOP algorithm that is based on logic programming; (3) It experimentally shows that ASP-DPOP can be up to two orders of magnitude faster than DPOP (its imperative programming counterpart) as well as solve some problems that DPOP fails to solve, due to memory limitations; and (4) It demonstrates the applicability of ASP in a wide array of multi-agent problems currently modeled as DCOPs. Under consideration in Theory and Practice of Logic Programming (TPLP).
The use of declarative programs, specifically logic programs, for reasoning in multi-agent domains is not new. Starting with some seminal papers @cite_8 , various authors have explored the use of several different flavors of logic programming, such as normal logic programs and abductive logic programs, to address cooperation between agents @cite_30 @cite_39 @cite_11 @cite_46 . Some proposals have also explored the combination of constraint programming, logic programming, and formalization of multi-agent domains @cite_45 @cite_24 @cite_53 @cite_20 . Logic programming has been used in modeling multi-agent scenarios involving agents' knowledge about others' knowledge @cite_18 , computing models in the logics of knowledge @cite_63 , multi-agent planning @cite_55 , and formalizing negotiation @cite_52 . ASP-DPOP is similar to the last two applications in that (i) it can be viewed as a collection of agent programs; (ii) it computes solutions using an ASP solver; and (iii) it uses message passing for agent communication. A key difference is that ASP-DPOP solves multi-agent problems formulated as constraint-based models, while the other applications solve problems formulated as decision-theoretic and game-theoretic models.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_8", "@cite_53", "@cite_55", "@cite_52", "@cite_39", "@cite_24", "@cite_45", "@cite_63", "@cite_46", "@cite_20", "@cite_11" ], "mid": [ "120926601", "", "1540024383", "1607804604", "1490114956", "205152477", "", "2018064910", "2962845465", "", "2171432826", "1993778656", "" ], "abstract": [ "This paper presents a framework that integrates three aspects of agency: planning, for proactive behaviour, negotiation, for social behaviour and resource achievement, and control of operation, for reconciling rationality with reactivity. Agents are designed and programmed in a computational logic-based language where these aspects are accommodated in a declarative and modular way. We show how this framework can be applied to agent problems requiring negotiation and resource achievement and present some of its formal properties. The framework can be implemented based on a communication platform for agent interaction and on well-established logic programming technologies for agent reasoning.", "", "In this paper we present an extension of logic programming (LP) that is suitable not only for the “rational” component of a single agent but also for the “reactive” component and that can encompass multi-agent systems. We modify an earlier abductive proof procedure and embed it within an agent cycle. The proof procedure incorporates abduction, definitions and integrity constraints within a dynamic environment, where changes can be observed as inputs. The definitions allow rational planning behaviour and the integrity constraints allow reactive, condition-action type behaviour. The agent cycle provides a resource-bounded mechanism that allows the agent’s thinking to be interrupted for the agent to record and assimilate observations as input and execute actions as output, before resuming further thinking. 
We argue that these extensions of LP, accommodating multi-theories embedded in a shared environment, provide the necessary multi-agent functionality. We argue also that our work extends Shoham’s Agent0 and the BDI architecture.", "This paper explores the use of Constraint Logic Programming (CLP) as a platform for experimenting with planning problems in the presence of multiple interacting agents. The paper develops a novel constraint-based action language, B MAP, that enables the declarative description of large classes of multi-agent and multi-valued domains. B MAP supports several complex features, including combined effects of concurrent and interacting actions, concurrency control, and delayed effects. The paper presents a mapping of B MAP theories to CLP and it demonstrates the effectiveness of an implementation in SICStus Prolog on several benchmark problems. The effort is an evolution of previous research on using CLP for single-agent domains, demonstrating the flexibility of CLP technology to handle the more complex issues of multi-agency and concurrency.", "Multiagent planning deals with the problem of generating plans for multiple agents. It requires formalizing ways for the agents to interact and cooperate, in order to achieve their goals. One way for the agents to interact is through negotiations. Integration of negotiation in multiagent planning has not been extensively investigated and a systematic way for this task has yet to be found. We develop a generic model for negotiation in dynamic environments and apply it to generate joint-plans with negotiation for multiple agents. We identify the minimal requirements for such a model and propose a general scheme for one-to-one negotiations. This model of negotiation is instantiated to deal with dynamic knowledge of planning agents. 
We demonstrate how logic programming can be employed as a uniform platform to support both planning and negotiation, providing an ideal testbed for experimenting with multiagent planning with negotiations.", "The paper introduces a logical framework for negotiation among dishonest agents. The framework relies on the use of abductive logic programming as a knowledge representation language for agents to deal with incomplete information and preferences. The paper shows how intentionally false or inaccurate information of agents could be encoded in the agents' knowledge bases. Such disinformation can be effectively used in the process of negotiation to have desired outcomes by agents. The negotiation processes are formulated under the answer set semantics of abductive logic programming and enable the exploration of various strategies that agents can employ in their negotiation.", "", "Abstract Multi Agent Systems (MAS) have become the key technology for decomposing complex problems in order to solve them more efficiently, or for problems distributed in nature. However, many industrial applications besides their distributed nature, also involve a large number of parameters and constraints among them, i.e. they are combinatorial. Solving such particularly hard problems efficiently requires programming tools that combine MAS technology with a programming schema that facilitates the modeling and solution of constraints. This paper presents MACLP (Multi Agent Constraint Logic Programming), a logic-programming platform for building, in a declarative way, multi agent systems with constraint-solving capabilities. MACLP extends CSPCONS, a logic programming system that permits distributed program execution through communicating sequential Prolog processes with constraints, by providing all the necessary facilities for communication between agents. 
These facilities abstract from the programmer all the low-level details of the communication and allow him to focus on the development of the agent itself.", "The paper presents a knowledge representation formalism, in the form of a high-level Action Description Language (ADL) for multi-agent systems, where autonomous agents reason and act in a shared environment. Agents are autonomously pursuing individual goals, but are capable of interacting through a shared knowledge repository. In their interactions through shared portions of the world, the agents deal with problems of synchronization and concurrency; the action language allows the description of strategies to ensure a consistent global execution of the agents’ autonomously derived plans. A distributed planning problem is formalized by providing the declarative specications of the portion of the problem pertaining a single agent. Each of these specications is executable by a stand-alone CLP-based planner. The coordination among agents exploits a Linda infrastructure. The proposal is validated in a prototype implementation developed in SICStus Prolog. To appear in Theory and Practice of Logic Programming (TPLP). Research partially funded by GNCS-INdAM projects, MUR-PRIN: Innovative and multidisciplinary approaches for constraint and preference reasoning project; NSF grants IIS-0812267 and HRD-0420407; and grants 2009.010.0336 and 2010.011.0403.", "", "Multi-agent systems (MAS) can take many forms depending on the characteristics of the agents populating them. Amongst the more demanding properties with respect to the design and implementation of multi-agent system is how these agents may individually reason and communicate about their knowledge and beliefs, with a view to cooperation and collaboration. In this paper, we present a deductive reasoning multi-agent platform using an extension of answer set programming (ASP). 
We show that it is capable of dealing with the specification and implementation of the system's architecture, communication and the individual agent's reasoning capacities. Agents are represented as Ordered Choice Logic Programs (OCLP) as a way of modelling their knowledge and reasoning capacities, with communication between the agents regulated by uni-directional channels transporting information based on their answer sets. In the implementation of our system we combine the extensibility of the JADE framework with the flexibility of the OCT front-end to the Smodels answer set solver. The power of this approach is demonstrated by a multi-agent system reasoning about equilibria of extensive games with perfect information.", "A multiple contact filter connector capable of accommodating high RF currents and a method of manufacturing the same are disclosed. The connector includes an outer met allic shell, a dielectric body within the shell and at least one network filter contact assembly. The inner body has at least one through channel and a transverse cavity which communicates with the channel and an annular met allic ring disposed inwardly of the shell. The network filter contact assembly has a ground electrode and a pin electrode and is disposed within the portion of the channel bridging the cavity. Conductive curable filler material is charged into the cavity around and in contact with the ground electrode and annular ring to form a ground plate for the connector. A pair of spaced apart conductive plates may be disposed transversely to the ground electrode and ring to be in electrical contact therewith and the filler material.", "" ] }
1705.03916
2952258553
This paper explores the use of Answer Set Programming (ASP) in solving Distributed Constraint Optimization Problems (DCOPs). The paper provides the following novel contributions: (1) It shows how one can formulate DCOPs as logic programs; (2) It introduces ASP-DPOP, the first DCOP algorithm that is based on logic programming; (3) It experimentally shows that ASP-DPOP can be up to two orders of magnitude faster than DPOP (its imperative programming counterpart) as well as solve some problems that DPOP fails to solve, due to memory limitations; and (4) It demonstrates the applicability of ASP in a wide array of multi-agent problems currently modeled as DCOPs. Under consideration in Theory and Practice of Logic Programming (TPLP).
Researchers have also developed a framework that integrates declarative techniques with off-the-shelf constraint solvers to partition large constraint optimization problems into smaller subproblems and solve them in parallel @cite_16 . In contrast, DCOPs are problems that are naturally distributed and cannot be arbitrarily partitioned.
{ "cite_N": [ "@cite_16" ], "mid": [ "2053166398" ], "abstract": [ "This paper presents Cologne, a declarative optimization platform that enables constraint optimization problems (COPs) to be declaratively specified and incrementally executed in distributed systems. Cologne integrates a declarative networking engine with an off-the-shelf constraint solver. We have developed the Colog language that combines distributed Datalog used in declarative networking with language constructs for specifying goals and constraints used in COPs. Cologne uses novel query processing strategies for processing Colog programs, by combining the use of bottom-up distributed Datalog evaluation with top-down goal-oriented constraint solving. Using case studies based on cloud and wireless network optimizations, we demonstrate that Cologne (1) can flexibly support a wide range of policy-based optimizations in distributed systems, (2) results in orders of magnitude less code compared to imperative implementations, and (3) is highly efficient with low overhead and fast convergence times." ] }
1705.03916
2952258553
This paper explores the use of Answer Set Programming (ASP) in solving Distributed Constraint Optimization Problems (DCOPs). The paper provides the following novel contributions: (1) It shows how one can formulate DCOPs as logic programs; (2) It introduces ASP-DPOP, the first DCOP algorithm that is based on logic programming; (3) It experimentally shows that ASP-DPOP can be up to two orders of magnitude faster than DPOP (its imperative programming counterpart) as well as solve some problems that DPOP fails to solve, due to memory limitations; and (4) It demonstrates the applicability of ASP in a wide array of multi-agent problems currently modeled as DCOPs. Under consideration in Theory and Practice of Logic Programming (TPLP).
ASP-DPOP is able to exploit problem structure by propagating hard constraints and using them to prune the search space efficiently. This reduces the memory requirement of the algorithm and improves the scalability of the system. Existing DCOP algorithms that also propagate hard and soft constraints to prune the search space include H-DPOP, which propagates exclusively hard constraints @cite_68 ; BrC-DPOP, which propagates branch consistency @cite_21 ; and variants of BnB-ADOPT @cite_59 @cite_48 @cite_15 that maintain soft arc consistency @cite_17 @cite_57 @cite_56 . A key difference is that these algorithms require algorithm developers to explicitly implement the ability to reason about the hard and soft constraints and propagate them efficiently. In contrast, ASP-DPOP capitalizes on general-purpose ASP solvers to do so.
{ "cite_N": [ "@cite_48", "@cite_21", "@cite_56", "@cite_57", "@cite_59", "@cite_15", "@cite_68", "@cite_17" ], "mid": [ "175434978", "268318943", "122825473", "1808362601", "2950150554", "2402888789", "2114722577", "33515650" ], "abstract": [ "This note considers how to modify BnB-ADOPT, a well-known algorithm for optimally solving distributed constraint optimization problems, with a double aim: (i) to avoid sending most of the redundant messages and (ii) to handle cost functions of any arity. Some of the messages exchanged by BnB-ADOPT turned out to be redundant. Removing most of the redundant messages increases substantially communication efficiency: the number of exchanged messages is -in most cases-at least three times fewer (keeping the other measures almost unchanged), and termination and optimality are maintained. On the other hand, handling n-ary cost functions was addressed in the original work, but the presence of thresholds makes their practical usage more complex. Both issues -removing most of the redundant messages and efficiently handling n-ary cost functions- can be combined, producing the new version BnB-ADOPT+. Experimentally, we show the benefits of this version over the original one.", "The DCOP model has gained momentum in recent years thanks to its ability to capture problems that are naturally distributed and cannot be realistically addressed in a centralized manner. Dynamic programming based techniques have been recognized to be among the most effective techniques for building complete DCOP solvers (e.g., DPOP). Unfortunately, they also suffer from a widely recognized drawback: their messages are exponential in size. Another limitation is that most current DCOP algorithms do not actively exploit hard constraints, which are common in many real problems. 
This paper addresses these two limitations by introducing an algorithm, called BrC-DPOP, that exploits arc consistency and a form of consistency that applies to paths in pseudo-trees to reduce the size of the messages. Experimental results shows that BrC-DPOP uses messages that are up to one order of magnitude smaller than DPOP, and that it can scale up well, being able to solve problems that its counterpart can not.", "Gutierrez and Meseguer show how to enforce consistency in BnB-ADOPT+ for distributed constraint optimization, but they consider unconditional deletions only. However, during search, more values can be pruned conditionally according to variable instantiations that define subproblems. Enforcing consistency in these subproblems can cause further search space reduction. We introduce efficient methods to maintain soft arc consistencies in every subproblem during search, a non trivial task due to asynchronicity and induced overheads. Experimental results show substantial benefits on three different benchmarks. We are grateful to the anonymous referees for their constructive comments. The work of Lei and Lee was generously supported by grants CUHK413808, CUHK413710 and CUHK413713 from the Research Grants Council of Hong Kong SAR. The work of Gutierrez and Meseguer was partially supported by the Spanish project TIN2009-13591-C02-02 and Generalitat de Catalunya 2009-SGR-1434. The work of Gutierrez, Lee and Meseguer was also jointly supported by the CSIC RGC Joint Research Scheme grants S-HK003 12 and 2011HK0017. The work of Mak was performed while he was at CUHK.", "Several multiagent tasks can be formulated and solved as DCOPs. BnB-ADOPT+-AC is one of the most efficient algorithms for optimal DCOP solving. It is based on BnB-ADOPT, removing redundant messages and maintaining soft arc consistency during search. 
In this paper, we present several improvements for this algorithm, namely (i) a better implementation, (ii) processing exactly simultaneous deletions, and (iii) searching on arc consistent cost functions. We present empirical results showing the benefits of these improvements on several benchmarks.", "Distributed constraint optimization (DCOP) problems are a popular way of formulating and solving agent-coordination problems. A DCOP problem is a problem where several agents coordinate their values such that the sum of the resulting constraint costs is minimal. It is often desirable to solve DCOP problems with memory-bounded and asynchronous algorithms. We introduce Branch-and-Bound ADOPT (BnB-ADOPT), a memory-bounded asynchronous DCOP search algorithm that uses the message-passing and communication framework of ADOPT (Modi, Shen, Tambe, & Yokoo, 2005), a well known memory-bounded asynchronous DCOP search algorithm, but changes the search strategy of ADOPT from best-first search to depth-first branch-and-bound search. Our experimental results show that BnB-ADOPT finds cost-minimal solutions up to one order of magnitude faster than ADOPT for a variety of large DCOP problems and is as fast as NCBB, a memory-bounded synchronous DCOP search algorithm, for most of these DCOP problems. Additionally, it is often desirable to find bounded-error solutions for DCOP problems within a reasonable amount of time since finding cost-minimal solutions is NP-hard. The existing bounded-error approximation mechanism allows users only to specify an absolute error bound on the solution cost but a relative error bound is often more intuitive. Thus, we present two new bounded-error approximation mechanisms that allow for relative error bounds and implement them on top of BnB-ADOPT.", "ADOPT and BnB-ADOPT are two optimal DCOP search algorithms that are similar except for their search strategies: the former uses best-first search and the latter uses depth-first branch-and-bound search. 
In this paper, we present a new algorithm, called ADOPT(k), that generalizes them. Its behavior depends on the k parameter. It behaves like ADOPT when k = 1, like BnB-ADOPT when k = ∞ and like a hybrid of ADOPT and BnB-ADOPT when 1 < k < ∞. We prove that ADOPT(k) is a correct and complete algorithm and experimentally show that ADOPT(k) outperforms ADOPT and BnB-ADOPT on several benchmarks across several metrics.", "In distributed constraint optimization problems, dynamic programming methods have been recently proposed (e.g. DPOP). In dynamic programming many valuations are grouped together in fewer messages, which produce much less networking overhead than search. Nevertheless, these messages are exponential in size. The basic DPOP always communicates all possible assignments, even when some of them may be inconsistent due to hard constraints. Many real problems contain hard constraints that significantly reduce the space of feasible assignments. This paper introduces H-DPOP, a hybrid algorithm that is based on DPOP, which uses Constraint Decision Diagrams (CDD) to rule out infeasible assignments, and thus compactly represent UTIL messages. Experimental results show that H-DPOP requires several orders of magnitude less memory than DPOP, especially for dense and tightly-constrained problems.", "In the centralized context, global constraints have been essential for the advancement of constraint reasoning. In this paper we propose to include soft global constraints in distributed constraint optimization problems (DCOPs). Looking for efficiency, we study possible decompositions of global constraints, including the use of extra variables. We extend the distributed search algorithm BnB-ADOPT+ to support these representations of global constraints. In addition, we explore the relation of global constraints with soft local consistency in DCOPs, in particular for the generalized soft arc consistency (GAC) level. 
We include specific propagators for some well-known soft global constraints. Finally, we provide empirical results on several benchmarks." ] }
1705.03607
2613400644
Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong performance on RGB salient object detection. Although, depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast that are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods. We also show that the low-level and mid-level depth features both contribute to improvements in the results. Especially, F-Score of our method is 0.848 on RGBD1000 dataset, which is 10.7 better than the second place.
Saliency detection to model eye movements began with low-level hand-crafted features, with classic work by Itti et al. @cite_13 being influential. A variety of salient object detection methods have been proposed in recent years; we focus on these as they are more relevant to our work.
{ "cite_N": [ "@cite_13" ], "mid": [ "2128272608" ], "abstract": [ "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail." ] }
1705.03607
2613400644
Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong performance on RGB salient object detection. Although, depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast that are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods. We also show that the low-level and mid-level depth features both contribute to improvements in the results. Especially, F-Score of our method is 0.848 on RGBD1000 dataset, which is 10.7 better than the second place.
In RGB salient object detection, methods often measure contrast between the features of a region and those of its surrounds, either locally and/or globally @cite_1 @cite_13 . Contrast is mostly computed with respect to appearance-based features such as colour, texture, and intensity edges @cite_4 @cite_2 .
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_4", "@cite_2" ], "mid": [ "2128272608", "2037954058", "2156777442", "2161185676" ], "abstract": [ "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.", "Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. 
Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.", "Detecting visually salient regions in images is one of the fundamental problems in computer vision. We propose a novel method to decompose an image into large scale perceptually homogeneous elements for efficient salient region detection, using a soft image abstraction representation. By considering both appearance similarity and spatial distribution of image pixels, the proposed representation abstracts out unnecessary image details, allowing the assignment of comparable saliency values across similar regions, and producing perceptually accurate salient region detection. We evaluate our salient region detection approach on the largest publicly available dataset with pixel accurate annotations. The experimental results show that the proposed method outperforms 18 alternate methods, reducing the mean absolute error by 25.2 compared to the previous best result, while being computationally more efficient.", "Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions lie in two-fold. 
One is that we show our approach, which integrates the regional contrast, regional property and regional background ness descriptors together to form the master saliency map, is able to produce superior saliency maps to existing algorithms most of which combine saliency maps heuristically computed from different types of features. The other is that we introduce a new regional feature vector, background ness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. The performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-arts." ] }
1705.03607
2613400644
Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong performance on RGB salient object detection. Although, depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast that are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods. We also show that the low-level and mid-level depth features both contribute to improvements in the results. Especially, F-Score of our method is 0.848 on RGBD1000 dataset, which is 10.7 better than the second place.
Recently, methods using deep CNNs have obtained strong results for RGB salient object detection. Wang et al. @cite_29 combine local information and a global search. Often these methods build on deep CNNs trained for large-scale object classification, specifically VGG16 @cite_9 or GoogLeNet @cite_20 . Some utilize these networks for extracting low-level features @cite_22 @cite_17 @cite_5 . Lee et al. incorporate high-level features based on these networks, along with low-level features @cite_22 . This approach of incorporating top-down semantic information about objects into salient object detection has been effective.
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_29", "@cite_5", "@cite_20", "@cite_17" ], "mid": [ "2953227099", "1686810756", "", "2461475918", "2950179405", "2949370174" ], "abstract": [ "Recent advances in saliency detection have utilized deep learning to obtain high level features to detect salient regions in a scene. These advances have demonstrated superior results over previous works that utilize hand-crafted low level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance performance of saliency detection that utilizes only high level features. Our method utilizes both high level and low level features for saliency detection under a unified deep learning framework. The high level features are extracted using the VGG-net, and the low level features are compared with other parts of an image to form a low level distance map. The low level distance map is then encoded using a convolutional neural network(CNN) with multiple 1X1 convolutional and ReLU layers. We concatenate the encoded low level distance map and the high level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of state-of-the-art deep learning-based saliency detection methods.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. 
We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "Traditional1 salient object detection models often use hand-crafted features to formulate contrast and various prior knowledge, and then combine them artificially. In this work, we propose a novel end-to-end deep hierarchical saliency network (DHSNet) based on convolutional neural networks for detecting salient objects. DHSNet first makes a coarse global prediction by automatically learning various global structured saliency cues, including global contrast, objectness, compactness, and their optimal combination. Then a novel hierarchical recurrent convolutional neural network (HRCNN) is adopted to further hierarchically and progressively refine the details of saliency maps step by step via integrating local context information. The whole architecture works in a global to local and coarse to fine manner. DHSNet is directly trained using whole images and corresponding ground truth saliency masks. When testing, saliency maps can be generated by directly and efficiently feedforwarding testing images through the network, without relying on any other techniques. Evaluations on four benchmark datasets and comparisons with other 11 state-of-the-art algorithms demonstrate that DHSNet not only shows its significant superiority in terms of performance, but also achieves a real-time speed of 23 FPS on modern GPUs.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. 
This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. Resulting saliency maps are typically blurry, especially near the boundary of salient objects. Furthermore, image patches are treated as independent samples even when they are overlapping, giving rise to significant redundancy in computation and storage. In this CVPR 2016 paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components, a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art." ] }
1705.03607
2613400644
Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong performance on RGB salient object detection. Although, depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast that are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods. We also show that the low-level and mid-level depth features both contribute to improvements in the results. Especially, F-Score of our method is 0.848 on RGBD1000 dataset, which is 10.7 better than the second place.
Compared to RGB salient object detection, fewer methods use RGB-D values for computing saliency. Peng et al. calculate a saliency map by combining low-, middle-, and high-level saliency information @cite_19 . Ren et al. calculate region contrast and use background, depth, and orientation priors; they then produce a saliency map by applying PageRank and an MRF to the outputs @cite_27 . Ju et al. calculate the saliency score using anisotropic center-surround difference and produce a saliency map by refining the score with GrabCut segmentation and a 2D Gaussian filter @cite_11 . Feng et al. improve RGB-D salient object detection results based on the idea that salient objects are more likely to be in front of their surroundings for a large number of directions @cite_15 . All the existing RGB-D methods use hand-crafted parameters, such as for scale and weights between metrics. However, real-world scenes contain unpredictable object arrangements for which fixed hand-coded parameters may limit generalization. No published papers have yet presented a CNN architecture for RGB-D salient object detection. A preliminary paper (arXiv only) uses only low-level color and depth features @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_27", "@cite_15", "@cite_11" ], "mid": [ "2520640394", "20683899", "1938386764", "2461758788", "1976409045" ], "abstract": [ "Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.", "Although depth information plays an important role in the human vision system, it is not yet well-explored in existing visual saliency computational models. In this work, we first introduce a large scale RGBD image dataset to address the problem of data deficiency in current research of RGBD salient object detection. 
To make sure that most existing RGB saliency models can still be adequate in RGBD scenarios, we continue to provide a simple fusion framework that combines existing RGB-produced saliency with new depth-induced saliency, the former one is estimated from existing RGB models while the latter one is based on the proposed multi-contextual contrast model. Moreover, a specialized multi-stage RGBD model is also proposed which takes account of both depth and appearance cues derived from low-level feature contrast, mid-level region grouping and high-level priors enhancement. Extensive experiments show the effectiveness and superiority of our model which can accurately locate the salient objects from RGBD images, and also assign consistent saliency values for the target objects.", "Inspired by the effectiveness of global priors for 2D saliency analysis, this paper aims to explore those particular to RGB-D data. To this end, we propose two priors, which are the normalized depth prior and the global-context surface orientation prior, and formulate them in the forms simple for computation. A two-stage RGB-D salient object detection framework is presented. It first integrates the region contrast, together with the background, depth, and orientation priors to achieve a saliency map. Then, a saliency restoration scheme is proposed, which integrates the PageRank algorithm for sampling high confident regions and recovers saliency for those ambiguous. The saliency map is thus reconstructed and refined globally. We conduct comparative experiments on two publicly available RGB-D datasets. Experimental results show that our approach consistently outperforms other state-of-the-art algorithms on both datasets.", "Recent work in salient object detection has considered the incorporation of depth cues from RGB-D images. In most cases, depth contrast is used as the main feature. 
However, areas of high contrast in background regions cause false positives for such methods, as the background frequently contains regions that are highly variable in depth. Here, we propose a novel RGB-D saliency feature. Local Background Enclosure (LBE) captures the spread of angular directions which are background with respect to the candidate region and the object that it is part of. We show that our feature improves over state-of-the-art RGB-D saliency approaches as well as RGB methods on the RGBD1000 and NJUDS2000 datasets.", "Most previous works on saliency detection are dedicated to 2D images. Recently it has been shown that 3D visual information supplies a powerful cue for saliency analysis. In this paper, we propose a novel saliency method that works on depth images based on anisotropic center-surround difference. Instead of depending on absolute depth, we measure the saliency of a point by how much it outstands from surroundings, which takes the global depth structure into consideration. Besides, two common priors based on depth and location are used for refinement. The proposed method works within a complexity of O(N) and the evaluation on a dataset of over 1000 stereo images shows that our method outperforms state-of-the-art." ] }
1705.03607
2613400644
Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong performance on RGB salient object detection. Although, depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast that are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods. We also show that the low-level and mid-level depth features both contribute to improvements in the results. Especially, F-Score of our method is 0.848 on RGBD1000 dataset, which is 10.7 better than the second place.
Two datasets are widely used for RGB-D salient object detection: RGBD1000 @cite_19 and NJUDS2000 @cite_11 . The RGBD1000 dataset contains 1000 RGB-D images captured by a standard Microsoft Kinect. The NJUDS2000 dataset contains around 2000 RGB-D images captured by a Fuji W3 stereo camera.
{ "cite_N": [ "@cite_19", "@cite_11" ], "mid": [ "20683899", "1976409045" ], "abstract": [ "Although depth information plays an important role in the human vision system, it is not yet well-explored in existing visual saliency computational models. In this work, we first introduce a large scale RGBD image dataset to address the problem of data deficiency in current research of RGBD salient object detection. To make sure that most existing RGB saliency models can still be adequate in RGBD scenarios, we continue to provide a simple fusion framework that combines existing RGB-produced saliency with new depth-induced saliency, the former one is estimated from existing RGB models while the latter one is based on the proposed multi-contextual contrast model. Moreover, a specialized multi-stage RGBD model is also proposed which takes account of both depth and appearance cues derived from low-level feature contrast, mid-level region grouping and high-level priors enhancement. Extensive experiments show the effectiveness and superiority of our model which can accurately locate the salient objects from RGBD images, and also assign consistent saliency values for the target objects.", "Most previous works on saliency detection are dedicated to 2D images. Recently it has been shown that 3D visual information supplies a powerful cue for saliency analysis. In this paper, we propose a novel saliency method that works on depth images based on anisotropic center-surround difference. Instead of depending on absolute depth, we measure the saliency of a point by how much it outstands from surroundings, which takes the global depth structure into consideration. Besides, two common priors based on depth and location are used for refinement. The proposed method works within a complexity of O(N) and the evaluation on a dataset of over 1000 stereo images shows that our method outperforms state-of-the-art." ] }
1705.03658
2612804666
Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities leading an attacker to breach into the web application. In this paper, we first propose a classification of file-system vulnerabilities, and then, based on this classification, we present a formal approach that allows one to exploit file-system vulnerabilities. We give a formal representation of web applications, databases and file-systems, and show how to reason about file-system vulnerabilities. We also show how to combine file-system vulnerabilities and SQL-Injection vulnerabilities for the identification of complex, multi-stage attacks. We have developed an automatic tool that implements our approach and we show its efficiency by discussing several real-world case studies, which are witness to the fact that our tool can generate, and exploit, complex attacks that, to the best of our knowledge, no other state-of-the-art tool for the security of web applications can find.
The idea underlying the methodology for modeling web applications given in @cite_8 is similar to our approach, but they defined three different attacker models to find web attacks, whereas we show how the standard DY attacker can be used. They also represent a number of HTTP details that we do not require, which eases the modeling phase. Most importantly, they do not take combinations of attacks into consideration.
{ "cite_N": [ "@cite_8" ], "mid": [ "1976371754" ], "abstract": [ "We propose a formal model of web security based on an abstraction of the web platform and use this model to analyze the security of several sample web mechanisms and applications. We identify three distinct threat models that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network and or leverage sites designed to display user-supplied content. We propose two broadly applicable security goals and study five security mechanisms. In our case studies, which include HTML5 forms, Referer validation, and a single sign-on solution, we use a SAT-based model-checking tool to find two previously known vulnerabilities and three new vulnerabilities. Our case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead." ] }
1705.03658
2612804666
Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities leading an attacker to breach into the web application. In this paper, we first propose a classification of file-system vulnerabilities, and then, based on this classification, we present a formal approach that allows one to exploit file-system vulnerabilities. We give a formal representation of web applications, databases and file-systems, and show how to reason about file-system vulnerabilities. We also show how to combine file-system vulnerabilities and SQL-Injection vulnerabilities for the identification of complex, multi-stage attacks. We have developed an automatic tool that implements our approach and we show its efficiency by discussing several real-world case studies, which are witness to the fact that our tool can generate, and exploit, complex attacks that, to the best of our knowledge, no other state-of-the-art tool for the security of web applications can find.
The model-based security testing tool SPaCiTE @cite_10 starts from a secure (ASLan++) specification of a web application and, by mutating the specification, automatically introduces security flaws. SPaCiTE implements a mature concretization phase, but it mainly finds vulnerability entry points and tries to exploit them, whereas our main goal is to consider how the exploitation of one or more vulnerabilities can compromise the security of the web application.
{ "cite_N": [ "@cite_10" ], "mid": [ "1996788431" ], "abstract": [ "Web applications are a major target of attackers. The increasing complexity of such applications and the subtlety of today's attacks make it very hard for developers to manually secure their web applications. Penetration testing is considered an art, the success of a penetration tester in detecting vulnerabilities mainly depends on his skills. Recently, model-checkers dedicated to security analysis have proved their ability to identify complex attacks on web-based security protocols. However, bridging the gap between an abstract attack trace output by a model-checker and a penetration test on the real web application is still an open issue. We present here a methodology for testing web applications starting from a secure model. First, we mutate the model to introduce specific vulnerabilities present in web applications. Then, a model-checker outputs attack traces that exploit those vulnerabilities. Next, the attack traces are translated into concrete test cases by using a 2-step mapping. Finally, the tests are executed on the real system using an automatic procedure that may request the help of a test expert from time to time. A prototype has been implemented and evaluated on Web Goat, an insecure web application maintained by OWASP. It successfully reproduced Role-Based Access Control (RBAC) and Cross-Site Scripting (XSS) attacks." ] }
1705.03658
2612804666
Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities leading an attacker to breach into the web application. In this paper, we first propose a classification of file-system vulnerabilities, and then, based on this classification, we present a formal approach that allows one to exploit file-system vulnerabilities. We give a formal representation of web applications, databases and file-systems, and show how to reason about file-system vulnerabilities. We also show how to combine file-system vulnerabilities and SQL-Injection vulnerabilities for the identification of complex, multi-stage attacks. We have developed an automatic tool that implements our approach and we show its efficiency by discussing several real-world case studies, which are witness to the fact that our tool can generate, and exploit, complex attacks that, to the best of our knowledge, no other state-of-the-art tool for the security of web applications can find.
The "Chained Attacks" approach of @cite_7 considers multiple attacks to compromise a web application, but it does not consider file-system vulnerabilities nor interactions between vulnerabilities, which means that it cannot reason about using a SQL-Injection to access the file-system. Moreover, it requires extra effort from the security analyst, who should provide an instantiation library for the concretization phase, while we use well-known external state-of-the-art tools.
{ "cite_N": [ "@cite_7" ], "mid": [ "2483259815" ], "abstract": [ "We present the Chained Attacks approach, an automated model-based approach to test the security of web applications that does not require a background in formal methods. Starting from a set of HTTP conversations and a configuration file providing the testing surface and purpose, a model of the System Under Test (SUT) is generated and input, along with the web attacker model we defined, to a model checker acting as test oracle. The HTTP conversations, payload libraries, and a mapping created while generating the model aid the concretization of the test cases, allowing for their execution on the SUT's implementation. We applied our approach to a real-life case study and we were able to find a combination of different attacks representing the concrete chained attack performed by a bug bounty hunter." ] }
1705.03658
2612804666
Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities leading an attacker to breach into the web application. In this paper, we first propose a classification of file-system vulnerabilities, and then, based on this classification, we present a formal approach that allows one to exploit file-system vulnerabilities. We give a formal representation of web applications, databases and file-systems, and show how to reason about file-system vulnerabilities. We also show how to combine file-system vulnerabilities and SQL-Injection vulnerabilities for the identification of complex, multi-stage attacks. We have developed an automatic tool that implements our approach and we show its efficiency by discussing several real-world case studies, which are witness to the fact that our tool can generate, and exploit, complex attacks that, to the best of our knowledge, no other state-of-the-art tool for the security of web applications can find.
@cite_8 presented a methodology for modeling web applications and considered five case studies modeled in the Alloy @cite_3 language. The idea is similar to our approach, but they defined three different attacker models to find web attacks, whereas we have shown how the standard DY attacker can be used. They also represent a number of HTTP details that we do not require, which eases the modeling phase. Finally, and most importantly, they do not take combinations of attacks into consideration.
{ "cite_N": [ "@cite_3", "@cite_8" ], "mid": [ "1895387792", "1976371754" ], "abstract": [ "In Software Abstractions Daniel Jackson introduces an approach to software design that draws on traditional formal methods but exploits automated tools to find flaws as early as possible. This approach--which Jackson calls \"lightweight formal methods\" or \"agile modeling\"--takes from formal specification the idea of a precise and expressive notation based on a tiny core of simple and robust concepts but replaces conventional analysis based on theorem proving with a fully automated analysis that gives designers immediate feedback. Jackson has developed Alloy, a language that captures the essence of software abstractions simply and succinctly, using a minimal toolkit of mathematical notions. This revised edition updates the text, examples, and appendixes to be fully compatible with the latest version of Alloy (Alloy 4). The designer can use automated analysis not only to correct errors but also to make models that are more precise and elegant. This approach, Jackson says, can rescue designers from \"the tarpit of implementation technologies\" and return them to thinking deeply about underlying concepts. Software Abstractions introduces the key elements: a logic, which provides the building blocks of the language; a language, which adds a small amount of syntax to the logic for structuring descriptions; and an analysis, a form of constraint solving that offers both simulation (generating sample states and executions) and checking (finding counterexamples to claimed properties).", "We propose a formal model of web security based on an abstraction of the web platform and use this model to analyze the security of several sample web mechanisms and applications. 
We identify three distinct threat models that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network and or leverage sites designed to display user-supplied content. We propose two broadly applicable security goals and study five security mechanisms. In our case studies, which include HTML5 forms, Referer validation, and a single sign-on solution, we use a SAT-based model-checking tool to find two previously known vulnerabilities and three new vulnerabilities. Our case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead." ] }
1705.03658
2612804666
Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities leading an attacker to breach into the web application. In this paper, we first propose a classification of file-system vulnerabilities, and then, based on this classification, we present a formal approach that allows one to exploit file-system vulnerabilities. We give a formal representation of web applications, databases and file-systems, and show how to reason about file-system vulnerabilities. We also show how to combine file-system vulnerabilities and SQL-Injection vulnerabilities for the identification of complex, multi-stage attacks. We have developed an automatic tool that implements our approach and we show its efficiency by discussing several real-world case studies, which are witness to the fact that our tool can generate, and exploit, complex attacks that, to the best of our knowledge, no other state-of-the-art tool for the security of web applications can find.
In @cite_10 , the authors presented SPaCiTE, a model-based security testing tool that starts from a secure (ASLan++) specification of a web application and, by mutating the specification, automatically introduces security flaws. SPaCiTE implements a mature concretization phase, but it mainly finds vulnerability entry points and tries to exploit them, whereas our main goal is to consider how the exploitation of one or more vulnerabilities can compromise the security of the web application.
{ "cite_N": [ "@cite_10" ], "mid": [ "1996788431" ], "abstract": [ "Web applications are a major target of attackers. The increasing complexity of such applications and the subtlety of today's attacks make it very hard for developers to manually secure their web applications. Penetration testing is considered an art, the success of a penetration tester in detecting vulnerabilities mainly depends on his skills. Recently, model-checkers dedicated to security analysis have proved their ability to identify complex attacks on web-based security protocols. However, bridging the gap between an abstract attack trace output by a model-checker and a penetration test on the real web application is still an open issue. We present here a methodology for testing web applications starting from a secure model. First, we mutate the model to introduce specific vulnerabilities present in web applications. Then, a model-checker outputs attack traces that exploit those vulnerabilities. Next, the attack traces are translated into concrete test cases by using a 2-step mapping. Finally, the tests are executed on the real system using an automatic procedure that may request the help of a test expert from time to time. A prototype has been implemented and evaluated on Web Goat, an insecure web application maintained by OWASP. It successfully reproduced Role-Based Access Control (RBAC) and Cross-Site Scripting (XSS) attacks." ] }
1705.03658
2612804666
Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities leading an attacker to breach into the web application. In this paper, we first propose a classification of file-system vulnerabilities, and then, based on this classification, we present a formal approach that allows one to exploit file-system vulnerabilities. We give a formal representation of web applications, databases and file-systems, and show how to reason about file-system vulnerabilities. We also show how to combine file-system vulnerabilities and SQL-Injection vulnerabilities for the identification of complex, multi-stage attacks. We have developed an automatic tool that implements our approach and we show its efficiency by discussing several real-world case studies, which are witness to the fact that our tool can generate, and exploit, complex attacks that, to the best of our knowledge, no other state-of-the-art tool for the security of web applications can find.
The "Chained Attacks" approach of @cite_7 considered multiple attacks to compromise a web application. The idea is close to the one we present in this paper. However, the "Chained Attacks" approach does not consider file-system vulnerabilities nor interactions between vulnerabilities, which means that with that formalization it would be impossible to represent a SQL-Injection used to access the file-system. Finally, the "Chained Attacks" approach requires extra effort from the security analyst, who should provide an instantiation library for the concretization phase, while we use well-known external state-of-the-art tools.
{ "cite_N": [ "@cite_7" ], "mid": [ "2483259815" ], "abstract": [ "We present the Chained Attacks approach, an automated model-based approach to test the security of web applications that does not require a background in formal methods. Starting from a set of HTTP conversations and a configuration file providing the testing surface and purpose, a model of the System Under Test (SUT) is generated and input, along with the web attacker model we defined, to a model checker acting as test oracle. The HTTP conversations, payload libraries, and a mapping created while generating the model aid the concretization of the test cases, allowing for their execution on the SUT's implementation. We applied our approach to a real-life case study and we were able to find a combination of different attacks representing the concrete chained attack performed by a bug bounty hunter." ] }
1705.03822
2613712382
In mobile crowdsourcing (MCS), mobile users accomplish outsourced human intelligence tasks. MCS requires an appropriate task assignment strategy, since different workers may have different performance in terms of acceptance rate and quality. Task assignment is challenging, since a worker’s performance 1) may fluctuate, depending on both the worker’s current personal context and the task context and 2) is not known a priori, but has to be learned over time. Moreover, learning context-specific worker performance requires access to context information, which may not be available at a central entity due to communication overhead or privacy concerns. In addition, evaluating worker performance might require costly quality assessments. In this paper, we propose a context-aware hierarchical online learning algorithm addressing the problem of performance maximization in MCS. In our algorithm, a local controller (LC) in the mobile device of a worker regularly observes the worker’s context, her his decisions to accept or decline tasks and the quality in completing tasks. Based on these observations, the LC regularly estimates the worker’s context-specific performance. The mobile crowdsourcing platform (MCSP) then selects workers based on performance estimates received from the LCs. This hierarchical approach enables the LCs to learn context-specific worker performance and it enables the MCSP to select suitable workers. In addition, our algorithm preserves worker context locally, and it keeps the number of required quality assessments low. We prove that our algorithm converges to the optimal task assignment strategy. Moreover, the algorithm outperforms simpler task assignment strategies in experiments based on synthetic and real data.
Research has put some effort into theoretically defining and classifying CS systems, such as web-based @cite_50 , mobile @cite_21 and spatial @cite_43 CS. Below, we give an overview of related work on task assignment in general, mobile and spatial CS systems, as relevant for our scenario. Note that strategic behavior of workers and task owners in CS systems, e.g., concerning pricing and effort spent in task completion @cite_36 , is out of the scope of this paper. Also note that we assume that it is possible to assess the quality of a completed task. A different line of work on CS deals with quality estimation in case of missing ground truth, recently also using online learning @cite_22 .
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_21", "@cite_43", "@cite_50" ], "mid": [ "2605194399", "2522887313", "1973875719", "2467923433", "2020740057" ], "abstract": [ "We consider a crowd-sourcing problem where in the process of labeling massive data sets, multiple labelers with unknown annotation quality must be selected to perform the labeling task for each incoming data sample or task, with the results aggregated using for example simple or weighted majority voting rule. In this paper, we approach this labeler selection problem in an online learning framework, whereby the quality of the labeling outcome by a specific set of labelers is estimated so that the learning algorithm over time learns to use the most effective combinations of labelers. This type of online learning in some sense falls under the family of multi-armed bandit (MAB) problems, but with a distinct feature not commonly seen: since the data is unlabeled to begin with and the labelers’ quality is unknown, their labeling outcome (or reward in the MAB context) cannot be readily verified; it can only be estimated against the crowd and be known probabilistically. We design an efficient online algorithm LS_OL using a simple majority voting rule that can differentiate high and low quality labelers over time, and is shown to have a regret (with respect to always using the optimal set of labelers) of @math uniformly in time under mild assumptions on the collective quality of the crowd, thus regret free in the average sense. We discuss further performance improvement by using a more sophisticated majority voting rule, and show how to detect and filter out “bad” (dishonest, malicious or very incompetent) labelers to further enhance the quality of crowd-sourcing. Extension to the case when a labeler’s quality is task-type dependent is also discussed using techniques from the literature on continuous arms. 
We establish a lower bound on the order of @math , where @math is an arbitrary function such that @math . We further provide a matching upper bound through a minor modification of the algorithm we proposed and studied earlier on. We present numerical results using both simulation and set of images labeled by amazon mechanic turks.", "In many two-sided markets, each side has incomplete information about the other but has an opportunity to learn (some) relevant information before final matches are made. For instance, clients seeking workers to perform tasks often conduct interviews that require the workers to perform some tasks and thereby provide information to both sides. The performance of a worker in such an interview assessment - and hence the information revealed - depends both on the inherent characteristics of the worker and the task and also on the actions taken by the worker (e.g. the effort expended); thus there is both adverse selection (on both sides) and moral hazard (on one side). When interactions are ongoing, incentives for workers to expend effort in the current assessment can be provided by the payment rule used and also by the matching rule that assesses and determines the tasks to which the worker is assigned in the future; thus workers have career concerns. We derive mechanisms - payment, assessment and matching rules - that lead to final matchings that are stable in the long run and achieve close to the optimal performance (profit or social welfare maximizing) in equilibrium (unique) thus mitigating both adverse selection and moral hazard (in many settings).", "With the proliferation of increasingly powerful mobile devices, mobile users can collaboratively form a mobile cloud to provide pervasive services, such as data collecting, processing, and computing. With this mobile cloud, mobile crowdsourcing, as an emerging service paradigm, can enable mobile users to take over the outsourced tasks. 
By leveraging the sensing capabilities of mobile devices and integrating humanintelligence and machine-computation, mobile crowdsourcing has the potential to revolutionize the approach of data collecting and processing. In this article we investigate the mobile crowdsourcing architecture and applications, then discuss some research challenges and countermeasures for developing mobile crowdsourcing. Some research orientations are finally envisioned for further studies.", "Crowdsourcing relies on the contributions of a large number of workers to accomplish spatial tasks, and it has drawn more attention in recent years. Many crowdsourcing tasks are completed online due to its convenience and efficiency. However, sometimes this traditional method may not work due to special requirements involving actual physical locations. Thus, a new paradigm of data collection, called spatial crowdsourcing, has emerged in the past few years. Spatial crowdsourcing consists of location-specific tasks that require people to physically be at specific locations to complete them. In this article we discuss unique challenges of spatial crowdsourcing, provide a comprehensive view of this new paradigm by introducing the taxonomy, and give future directions.", "" ] }
1705.03822
2613712382
In mobile crowdsourcing (MCS), mobile users accomplish outsourced human intelligence tasks. MCS requires an appropriate task assignment strategy, since different workers may have different performance in terms of acceptance rate and quality. Task assignment is challenging, since a worker’s performance 1) may fluctuate, depending on both the worker’s current personal context and the task context and 2) is not known a priori, but has to be learned over time. Moreover, learning context-specific worker performance requires access to context information, which may not be available at a central entity due to communication overhead or privacy concerns. In addition, evaluating worker performance might require costly quality assessments. In this paper, we propose a context-aware hierarchical online learning algorithm addressing the problem of performance maximization in MCS. In our algorithm, a local controller (LC) in the mobile device of a worker regularly observes the worker’s context, her his decisions to accept or decline tasks and the quality in completing tasks. Based on these observations, the LC regularly estimates the worker’s context-specific performance. The mobile crowdsourcing platform (MCSP) then selects workers based on performance estimates received from the LCs. This hierarchical approach enables the LCs to learn context-specific worker performance and it enables the MCSP to select suitable workers. In addition, our algorithm preserves worker context locally, and it keeps the number of required quality assessments low. We prove that our algorithm converges to the optimal task assignment strategy. Moreover, the algorithm outperforms simpler task assignment strategies in experiments based on synthetic and real data.
For MCS systems, @cite_17 proposes algorithms for optimal TR in WST mode that take into account the trade-off between the privacy of worker context, the utility to recommend the best tasks and the efficiency in terms of communication and computation overhead. TR is performed by a server based on a generalized context shared by the worker. The statistics used for TR are gathered offline via a proxy that ensures differential privacy guarantees. While @cite_17 allows one to flexibly adjust the shared generalized context and makes TRs based on offline statistics and generalized worker context, our approach keeps worker context locally and learns each worker's individual statistics online. In @cite_16 , an online learning algorithm for mobile crowdsensing is presented to maximize the revenue of a budget-constrained task owner by learning the sensing values of workers with known prices. While @cite_16 considers a total budget and each crowdsensing task requires a minimum number of workers, we consider a separate budget per task, which translates to a maximum number of required workers, and we additionally take task and worker context into account.
{ "cite_N": [ "@cite_16", "@cite_17" ], "mid": [ "2048922650", "2344949217" ], "abstract": [ "Mobile crowdsensing has been intensively explored recently due to its flexible and pervasive sensing ability. Although many crowdsensing platforms have been built for various applications, the general issue of how to manage such systems intelligently remains largely open. While recent investigations mostly focus on incentivizing crowdsensing, the robustness of crowdsensing toward uncontrollable sensing quality, another important issue, has been widely neglected. Due to the non-professional personnel and devices, the quality of crowdsensing data cannot be fully guaranteed, hence the revenue gained from mobile crowdsensing is generally uncertain. Moreover, the need for compensating the sensing costs under a limited budget has exacerbated the situation: one does not enjoy an infinite horizon to learn the sensing ability of the crowd and hence to make decisions based on sufficient statistics. In this paper, we present a novel framework, Budget LImited robuSt crowdSensing (BLISS), to handle this problem through an online learning approach. Our approach aims to minimize the difference on average sense (a.k.a. regret) between the achieved total sensing revenue and the (unknown) optimal one, and we show that our BLISS sensing policies achieve logarithmic regret bounds and Hannan-consistency. Finally, we use extensive simulations to demonstrate the effectiveness of BLISS.", "Mobile crowdsourcing (MC) is a transformative paradigm that engages a crowd of mobile users (i.e., workers) in the act of collecting, analyzing, and disseminating information or sharing their resources. To ensure quality of service, MC platforms tend to recommend MC tasks to workers based on their context information extracted from their interactions and smartphone sensors. This raises privacy concerns hard to address due to the constrained resources on mobile devices. 
In this paper, we identify fundamental tradeoffs among three metrics—utility, privacy, and efficiency—in an MC system and propose a flexible optimization framework that can be adjusted to any desired tradeoff point with joint efforts of MC platform and workers. Since the underlying optimization problems are NP-hard, we present efficient approximation algorithms to solve them. Since worker statistics are needed when tuning the optimization models, we use an efficient aggregation approach to collecting worker feedbacks while providing differential privacy guarantees. Both numerical evaluations and performance analysis are conducted to demonstrate the effectiveness and efficiency of the proposed framework." ] }
1705.03822
2613712382
In mobile crowdsourcing (MCS), mobile users accomplish outsourced human intelligence tasks. MCS requires an appropriate task assignment strategy, since different workers may have different performance in terms of acceptance rate and quality. Task assignment is challenging, since a worker’s performance 1) may fluctuate, depending on both the worker’s current personal context and the task context, and 2) is not known a priori, but has to be learned over time. Moreover, learning context-specific worker performance requires access to context information, which may not be available at a central entity due to communication overhead or privacy concerns. In addition, evaluating worker performance might require costly quality assessments. In this paper, we propose a context-aware hierarchical online learning algorithm addressing the problem of performance maximization in MCS. In our algorithm, a local controller (LC) in the mobile device of a worker regularly observes the worker’s context, her/his decisions to accept or decline tasks and the quality in completing tasks. Based on these observations, the LC regularly estimates the worker’s context-specific performance. The mobile crowdsourcing platform (MCSP) then selects workers based on performance estimates received from the LCs. This hierarchical approach enables the LCs to learn context-specific worker performance and it enables the MCSP to select suitable workers. In addition, our algorithm preserves worker context locally, and it keeps the number of required quality assessments low. We prove that our algorithm converges to the optimal task assignment strategy. Moreover, the algorithm outperforms simpler task assignment strategies in experiments based on synthetic and real data.
Algorithms for contextual MAB also differ with respect to their approach to balance the exploration vs. exploitation trade-off. While the epoch-greedy algorithm @cite_49 and the algorithms in @cite_38 @cite_2 @cite_33 @cite_7 explicitly distinguish between exploration and exploitation steps, the LinUCB @cite_29 , @cite_14 algorithm, the clustering algorithm in @cite_44 and the contextual zooming algorithm @cite_46 follow an index-based approach, in which in any round, the action with the highest index is selected. Other algorithms, like the one for contextual MAB with resource constraints in Ref. @cite_30 , draw samples from a distribution to find a policy which is then used to select the action. Finally, algorithms like the Thompson-sampling based algorithm in @cite_40 draw samples from a distribution to build a belief, and select the action which maximizes the expected reward based on this belief.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_14", "@cite_33", "@cite_7", "@cite_29", "@cite_44", "@cite_40", "@cite_49", "@cite_2", "@cite_46" ], "mid": [ "2963981813", "1851342690", "1487320471", "", "", "2112420033", "1839697241", "2166253248", "2519411794", "", "2148434045" ], "abstract": [ "", "In this paper, we propose a novel framework for decentralized, online learning by many learners. At each moment of time, an instance characterized by a certain context may arrive to each learner; based on the context, the learner can select one of its own actions (which gives a reward and provides information) or request assistance from another learner. In the latter case, the requester pays a cost and receives the reward but the provider learns the information. In our framework, learners are modeled as cooperative contextual bandits. Each learner seeks to maximize the expected reward from its arrivals, which involves trading off the reward received from its own actions, the information learned from its own actions, the reward received from the actions requested of others and the cost paid for these actions-taking into account what it has learned about the value of assistance from each other learner. We develop distributed online learning algorithms and provide analytic bounds to compare the efficiency of these with algorithms with the complete knowledge (oracle) benchmark (in which the expected reward of every action in every context is known by every learner). Our estimates show that regret-the loss incurred by the algorithm-is sublinear in time. Our theoretical framework can be used in many practical applications including Big Data mining, event detection in surveillance sensor networks and distributed online recommendation systems.", "In this paper we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. 
For T rounds, K actions, and d-dimensional feature vectors, we prove an O(√(Td ln(KT ln(T)/δ))) regret bound that holds with probability 1−δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω(√(Td)) for this setting, matching the upper bound up to logarithmic factors.
Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce.", "We introduce a novel algorithmic approach to content recommendation based on adaptive clustering of exploration-exploitation (\"bandit\") strategies. We provide a sharp regret analysis of this algorithm in a standard stochastic noise setting, demonstrate its scalability properties, and prove its effectiveness on a number of artificial and real-world datasets. Our experiments show a significant increase in prediction performance over state-of-the-art methods for bandit problems.", "Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance compared to the state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, when the contexts are provided by an adaptive adversary. This is among the most important and widely studied version of the contextual bandits problem. We prove a high probability regret bound of O(d²/ε √(T^(1+ε))) in time T for any 0 < ε < 1, where d is the dimension of each context vector and ε is a parameter used by the algorithm. Our results provide the first theoretical guarantees for the contextual version of Thompson Sampling, and are close to the lower bound of Ω(d√T) for this problem. This essentially solves a COLT open problem of Chapelle and Li [COLT 2012].", "We present Epoch-Greedy, an algorithm for contextual multi-armed bandits (also known as bandits with side information). Epoch-Greedy has the following properties: 1. No knowledge of a time horizon T is necessary. 2.
The regret incurred by Epoch-Greedy is controlled by a sample complexity bound for a hypothesis class. 3. The regret scales as O(T^(2/3) S^(1/3)) or better (sometimes, much better). Here S is the complexity term in a sample complexity bound for standard supervised learning.", "", "In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence of choices. In each round it chooses from a time-invariant set of alternatives and receives the payoff associated with this alternative. While the case of small strategy sets is by now well-understood, a lot of recent work has focused on MAB problems with exponentially or infinitely large strategy sets, where one needs to assume extra structure in order to make the problem tractable. In particular, recent literature considered information on similarity between arms. We consider similarity information in the setting of contextual bandits, a natural extension of the basic MAB problem where before each round an algorithm is given the context--a hint about the payoffs in this round. Contextual bandits are directly motivated by placing advertisements on web pages, one of the crucial problems in sponsored search. A particularly simple way to represent similarity information in the contextual bandit setting is via a similarity distance between the context-arm pairs which bounds from above the difference between the respective expected payoffs. Prior work on contextual bandits with similarity uses \"uniform\" partitions of the similarity space, so that each context-arm pair is approximated by the closest pair in the partition. Algorithms based on \"uniform\" partitions disregard the structure of the payoffs and the context arrivals, which is potentially wasteful. We present algorithms that are based on adaptive partitions, and take advantage of \"benign\" payoffs and context arrivals without sacrificing the worst-case performance.
The central idea is to maintain a finer partition in high-payoff regions of the similarity space and in popular regions of the context space. Our results apply to several other settings, e.g., MAB with constrained temporal change (Slivkins and Upfal, 2008) and sleeping bandits (, 2008a)." ] }
1705.03773
2612678409
It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations for which pure statistical models are not appropriate in principle. This work proposes a memory-augmented neural model for Chinese poem generation, where the neural model and the augmented memory work together to balance the requirements of linguistic accordance and aesthetic innovation, leading to innovative generations that are still rule-compliant. In addition, it is found that the memory mechanism provides interesting flexibility that can be used to generate poems with different styles.
A multitude of methods have been proposed for automatic poem generation. The first approach is based on rules and/or templates. For example, phrase search @cite_10 @cite_1 , word association norm @cite_8 , template search @cite_18 , genetic search @cite_11 , text summarization @cite_3 . Another approach involves various SMT methods, e.g., @cite_15 @cite_2 . A disadvantage shared by the above methods is that they are based on the surface forms of words or characters, having no deep understanding of the meaning of a poem.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_1", "@cite_3", "@cite_2", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "", "2026351760", "2116787772", "197120736", "12732426", "2059488546", "1608017197", "2385617402" ], "abstract": [ "", "Word associations are an important element of linguistic creativity. Traditional lexical knowledge bases such as WordNet formalize a limited set of systematic relations among words, such as synonymy, polysemy and hypernymy. Such relations maintain their systematicity when composed into lexical chains. We claim that such relations cannot explain the type of lexical associations common in poetic text. We explore in this paper the usage of Word Association Norms (WANs) as an alternative lexical knowledge source to analyze linguistic computational creativity. We specifically investigate the Haiku poetic genre, which is characterized by heavy reliance on lexical associations. We first compare the density of WAN-based word associations in a corpus of English Haiku poems to that of WordNet-based associations as well as in other non-poetic genres. These experiments confirm our hypothesis that the non-systematic lexical associations captured in WANs play an important role in poetic text. We then present Gaiku, a system to automatically generate Haikus from a seed word and using WAN-associations. Human evaluation indicate that generated Haikus are of lesser quality than human Haikus, but a high proportion of generated Haikus can confuse human readers, and a few of them trigger intriguing reactions.", "As is well-known, cultures are rooted in their unique regions, histories and languages. Communication media have been developed to circulate these cultural characteristics. 
As a part of our research \"Cultural Computing\", which means the translation of cultures using scientific methods representing essential aspects of Japanese culture[1], an interactive Renku poem generation supporting system was developed to study the reproduction of a traditional Japanese Renku by computer. This system extended the functionality of our previous Hitch-Haiku system to the Renku based on same association method and attached more cultural characteristics on it: the Renku verse displayed on the Japanese-style color pattern which represents the same season in Renku Kigo (seasonal reference) and the generated Renku verse including the information of sightseeing place.", "Part of the long lasting cultural heritage of China is the classical ancient Chinese poems which follow strict formats and complicated linguistic rules. Automatic Chinese poetry composition by programs is considered as a challenging problem in computational linguistics and requires high Artificial Intelligence assistance, and has not been well addressed. In this paper, we formulate the poetry composition task as an optimization problem based on a generative summarization framework under several constraints. Given the user specified writing intents, the system retrieves candidate terms out of a large poem corpus, and then orders these terms to fit into poetry formats, satisfying tonal and rhythm requirements. The optimization process under constraints is conducted via iterative term substitutions till convergence, and outputs the subset with the highest utility as the generated poem. For experiments, we perform generation on large datasets of 61,960 classic poems from Tang and Song Dynasty of China. A comprehensive evaluation, using both human judgments and ROUGE scores, has demonstrated the effectiveness of our proposed approach.", "This paper describes a statistical approach to generation of Chinese classical poetry and proposes a novel method to automatically evaluate poems. 
The system accepts a set of keywords representing the writing intents from a writer and generates sentences one by one to form a completed poem. A statistical machine translation (SMT) system is applied to generate new sentences, given the sentences generated previously. For each line of sentence a specific model specially trained for that line is used, as opposed to using a single model for all sentences. To enhance the coherence of sentences on every line, a coherence model using mutual information is applied to select candidates with better consistency with previous sentences. In addition, we demonstrate the effectiveness of the BLEU metric for evaluation with a novel method of generating diverse references.", "Part of the unique cultural heritage of China is the game of Chinese couplets (duilian). One person challenges the other person with a sentence (first sentence). The other person then replies with a sentence (second sentence) equal in length and word segmentation, in a way that corresponding words in the two sentences match each other by obeying certain constraints on semantic, syntactic, and lexical relatedness. This task is viewed as a difficult problem in AI and has not been explored in the research community. In this paper, we regard this task as a kind of machine translation process. We present a phrase-based SMT approach to generate the second sentence. First, the system takes as input the first sentence, and generates as output an N-best list of proposed second sentences, using a phrase-based SMT decoder. Then, a set of filters is used to remove candidates violating linguistic constraints. Finally, a Ranking SVM is applied to rerank the candidates. A comprehensive evaluation, using both human judgments and BLEU scores, has been conducted, and the results demonstrate that this approach is very successful.", "Human communication is fostered in environments of regional communities and cultures and in different languages. 
Cultures are rooted in their unique histories. Communication media have been developed to circulate these cultural characteristics. The theme of our research is \"Cultural Computing\", which means the translation of cultures using scientific methods representing essential aspects of Japanese culture [1]. We study the reproduction of a traditional Japanese Haiku by computer. Our system can abstract an essence of human emotions and thoughts into a Haiku, a Japanese minimal poem form. A user chooses arbitrary phrases from a chapter of the essay \"1000 Books and 1000 Nights\" [2]. Using the phrases chosen by the user, our system generates the Haiku which includes the essence of these words.", "Automatic generation of poetry has always been considered a hard nut in natural language generation.This paper reports some pioneering research on a possible generic algorithm and its automatic generation of SONGCI.In light of the characteristics of Chinese ancient poetry,this paper designed the level and oblique tones-based coding method,the syntactic and semantic weighted function of fitness,the elitism and roulette-combined selection operator,and the partially mapped crossover operator and the heuristic mutation operator.As shown by tests,the system constructed on the basis of the computing model designed in this paper is basically capable of generating Chinese SONGCI with some aesthetic merit.This work represents progress in the field of Chinese poetry automatic generation." ] }
1705.03284
2963275645
The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity theory tools to build a clearer picture of the complexity landscape of the congested clique: (1) Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy). (2) Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings. (3) Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.
As noted in the introduction, upper bounds have been extensively studied in the congested clique model. Problems studied in prior work include routing and sorting @cite_8 , minimum spanning trees @cite_35 @cite_7 @cite_9 @cite_49 , subgraph detection @cite_18 @cite_20 , shortest path problems @cite_20 @cite_28 , local problems @cite_25 @cite_6 @cite_44 and problems related to matrix multiplication @cite_38 @cite_20 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_38", "@cite_7", "@cite_8", "@cite_28", "@cite_9", "@cite_6", "@cite_44", "@cite_49", "@cite_25", "@cite_20" ], "mid": [ "1987099434", "1845051857", "2511108295", "1992996565", "2139535340", "2949524113", "2488670996", "1816797629", "2478193891", "", "2951272922", "2950813619" ], "abstract": [ "We consider a simple model for overlay networks, where all n processes are connected to all other processes, and each message contains at most O(log n) bits. For this model, we present a distributed algorithm which constructs a minimum-weight spanning tree in O(log log n) communication rounds, where in each round any process can send a message to every other process. If message size is @math for some @math , then the number of communication rounds is @math .", "Let G=(V,E) be an n-vertex graph and Md a d-vertex graph, for some constant d. Is Md a subgraph of G? We consider this problem in a model where all n processes are connected to all other processes, and each message contains up to @math bits. A simple deterministic algorithm that requires @math communication rounds is presented. For the special case that Md is a triangle, we present a probabilistic algorithm that requires an expected @math rounds of communication, where t is the number of triangles in the graph, and @math with high probability. We also present deterministic algorithms that are specially suited for sparse graphs. In graphs of maximum degree Δ, we can test for arbitrary subgraphs of diameter D in @math rounds. For triangles, we devise an algorithm featuring a round complexity of @math , where A denotes the arboricity of G.", "Censor- [PODC’15] recently showed how to efficiently implement centralized algebraic algorithms for matrix multiplication in the congested clique model, a model of distributed computing that has received increasing attention in the past few years. This paper develops further algebraic techniques for designing algorithms in this model. 
We present deterministic and randomized algorithms, in the congested clique model, for efficiently computing multiple independent instances of matrix products, computing the determinant, the rank and the inverse of a matrix, and solving systems of linear equations. As applications of these techniques, we obtain more efficient algorithms for the computation, again in the congested clique model, of the all-pairs shortest paths and the diameter in directed and undirected graphs with small weights, improving over Censor-Hillel et al.’s work. We also obtain algorithms for several other graph-theoretic problems such as computing the number of edges in a maximum matching and the Gallai-Edmonds decomposition of a simple graph, and computing a minimum vertex cover of a bipartite graph.", "We study two fundamental graph problems, Graph Connectivity (GC) and Minimum Spanning Tree (MST), in the well-studied Congested Clique model, and present several new bounds on the time and message complexities of randomized algorithms for these problems. No non-trivial (i.e., super-constant) time lower bounds are known for either of the aforementioned problems; in particular, an important open question is whether or not constant-round algorithms exist for these problems. We make progress toward answering this question by presenting randomized Monte Carlo algorithms for both problems that run in O(log log log n) rounds (where n is the size of the clique). Our results improve by an exponential factor on the long-standing (deterministic) time bound of O(log log n) rounds for these problems due to (SICOMP 2005). Our algorithms make use of several algorithmic tools including graph sketching, random sampling, and fast sorting. The second contribution of this paper is to present several almost-tight bounds on the message complexity of these problems.
Specifically, we show that Ω(n²) messages are needed by any algorithm (including randomized Monte Carlo algorithms, and regardless of the number of rounds) that solves the GC (and hence also the MST) problem if each machine in the Congested Clique has initial knowledge only of itself (the so-called KT0 model). In contrast, if the machines have initial knowledge of their neighbors' IDs (the so-called KT1 model), we present a randomized Monte Carlo algorithm for MST that uses O(n polylog n) messages and runs in O(polylog n) rounds. To complement this, we also present a lower bound in the KT1 model that shows that Ω(n) messages are required by any algorithm that solves GC, regardless of the number of rounds used. Our results are a step toward understanding the power of randomization in the Congested Clique with respect to both time and message complexity.", "We present a method for solving the shortest transshipment problem - also known as uncapacitated minimum cost flow - up to a multiplicative error of @math in undirected graphs with polynomially bounded integer edge weights using a tailored gradient descent algorithm. An important special case of the transshipment problem is the single-source shortest paths (SSSP) problem.
Our gradient descent algorithm takes @math iterations, and in each iteration it needs to solve the transshipment problem up to a multiplicative error of @math , where @math is the number of nodes. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. As a consequence, we improve prior work by obtaining the following results: (1) Broadcast congest model: @math -approximate SSSP using @math rounds, where @math is the (hop) diameter of the network. (2) Broadcast congested clique model: @math -approximate transshipment and SSSP using @math rounds. (3) Multipass streaming model: @math -approximate transshipment and SSSP using @math space and @math passes. The previously fastest algorithms for these models leverage sparse hop sets. We bypass the hop set construction; computing a spanner is sufficient with our method. The above bounds assume non-negative integer edge weights that are polynomially bounded in @math ; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights. In case of asymmetric costs, running times scale with the maximum ratio between the costs of both directions over all edges.", "We present a randomized algorithm that computes a Minimum Spanning Tree (MST) in O(log* n) rounds, with high probability, in the Congested Clique model of distributed computing. In this model, the input is a graph on n nodes, initially each node knows only its incident edges, and per round each two nodes can exchange O(log n) bits. Our key technical novelty is an O(log* n) Graph Connectivity algorithm, the heart of which is a (recursive) forest growth method, based on a combination of two ideas: a sparsity-sensitive sketching aimed at sparse graphs and a random edge sampling aimed at dense graphs. 
Our result improves significantly over the O(log log log n) algorithm of [PODC 2015] and the O(log log n) algorithm of [SPAA 2003; SICOMP 2005].", "This paper presents constant-time and near-constant-time distributed algorithms for a variety of problems in the congested clique model. We show how to compute a 3-ruling set in expected O(logloglogn) rounds and using this, we obtain a constant-approximation to metric facility location, also in expected O(logloglogn) rounds. In addition, assuming an input metric space of constant doubling dimension, we obtain constant-round algorithms to compute constant-factor approximations to the minimum spanning tree and the metric facility location problems. These results significantly improve on the running time of the fastest known algorithms for these problems in the congested clique setting.", "This paper addresses the cornerstone family of in distributed computing, and investigates the curious gap between randomized and deterministic solutions under bandwidth restrictions. Our main contribution is in providing tools for derandomizing solutions to local problems, when the @math nodes can only send @math -bit messages in each round of communication. We combine bounded independence, which we show to be sufficient for some algorithms, with the method of conditional expectations and with additional machinery, to obtain the following results. Our techniques give a deterministic maximal independent set (MIS) algorithm in the CONGEST model, where the communication graph is identical to the input graph, in @math rounds, where @math is the diameter of the graph. The best known running time in terms of @math alone is @math , which is super-polylogarithmic, and requires large messages. For the CONGEST model, the only known previous solution is a coloring-based @math -round algorithm, where @math is the maximal degree in the graph. 
On the way to obtaining the above, we show that in the model, which allows all-to-all communication, there is a deterministic MIS algorithm that runs in @math rounds, where @math is the maximum degree. When @math , the bound improves to @math and holds also for @math -coloring. In addition, we deterministically construct a @math -spanner with @math edges in @math rounds. For comparison, in the more stringent CONGEST model, the best deterministic algorithm for constructing a @math -spanner with @math edges runs in @math rounds.", "", "The main results of this paper are (I) a simulation algorithm which, under quite general constraints, transforms algorithms running on the Congested Clique into algorithms running in the MapReduce model, and (II) a distributed @math -coloring algorithm running on the Congested Clique which has an expected running time of (i) @math rounds, if @math ; and (ii) @math rounds otherwise. Applying the simulation theorem to the Congested-Clique @math -coloring algorithm yields an @math -round @math -coloring algorithm in the MapReduce model. Our simulation algorithm illustrates a natural correspondence between per-node bandwidth in the Congested Clique model and memory per machine in the MapReduce model. In the Congested Clique (and more generally, any network in the @math model), the major impediment to constructing fast algorithms is the @math restriction on message sizes. Similarly, in the MapReduce model, the combined restrictions on memory per machine and total system memory have a dominant effect on algorithm design. In showing a fairly general simulation algorithm, we highlight the similarities and differences between these models.", "In this work, we use algebraic methods for studying distance computation and subgraph detection tasks in the congested clique model.
Specifically, we adapt parallel matrix multiplication implementations to the congested clique, obtaining an @math round matrix multiplication algorithm, where @math is the exponent of matrix multiplication. In conjunction with known techniques from centralised algorithmics, this gives significant improvements over previous best upper bounds in the congested clique model. The highlight results include: -- triangle and 4-cycle counting in @math rounds, improving upon the @math triangle detection algorithm of [DISC 2012], -- a @math -approximation of all-pairs shortest paths in @math rounds, improving upon the @math -round @math -approximation algorithm of Nanongkai [STOC 2014], and -- computing the girth in @math rounds, which is the first non-trivial solution in this model. In addition, we present a novel constant-round combinatorial algorithm for detecting 4-cycles." ] }
1705.03284
2963275645
The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity theory tools to build a clearer picture of the complexity landscape of the congested clique: (1) Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy). (2) Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings. (3) Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.
Prior work on computational complexity in the congested clique is fairly limited; the notable exceptions are the connections to circuit complexity @cite_48 and counting arguments for the non-uniform version of the model @cite_22 @cite_48 . However, lower bounds can be proven if we consider problems with large outputs; for example, lower bounds are known for triangle enumeration @cite_47 or, trivially, a problem where all nodes are required to output the whole input graph. Moreover, for the broadcast congested clique, a version of the model where each node sends the same message to each other node every round, lower bounds have been proven using communication complexity arguments @cite_48 .
{ "cite_N": [ "@cite_48", "@cite_47", "@cite_22" ], "mid": [ "2107805020", "2281539709", "2331029490" ], "abstract": [ "We study the computation power of the congested clique, a model of distributed computation where n players communicate with each other over a complete network in order to compute some function of their inputs. The number of bits that can be sent on any edge in a round is bounded by a parameter b We consider two versions of the model: in the first, the players communicate by unicast, allowing them to send a different message on each of their links in one round; in the second, the players communicate by broadcast, sending one message to all their neighbors. It is known that the unicast version of the model is quite powerful; to date, no lower bounds for this model are known. In this paper we provide a partial explanation by showing that the unicast congested clique can simulate powerful classes of bounded-depth circuits, implying that even slightly super-constant lower bounds for the congested clique would give new lower bounds in circuit complexity. Moreover, under a widely-believed conjecture on matrix multiplication, the triangle detection problem, studied in [8], can be solved in O(ne) time for any e > 0. The broadcast version of the congested clique is the well-known multi-party shared-blackboard model of communication complexity (with number-in-hand input). This version is more amenable to lower bounds, and in this paper we show that the subgraph detection problem studied in [8] requires polynomially many rounds for several classes of subgraphs. 
We also give upper bounds for the subgraph detection problem, and relate the hardness of triangle detection in the broadcast congested clique to the communication complexity of set disjointness in the 3-party number-on-forehead model.", "Motivated by the need to understand the algorithmic foundations of distributed large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where @math machines jointly perform computations on graphs with @math nodes (typically, @math ). The input graph is assumed to be initially randomly partitioned among the @math machines. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. We present (almost) tight bounds for the round complexity of two fundamental graph problems, namely PageRank computation and triangle enumeration. Our tight lower bounds, a main contribution of the paper, are established through an information-theoretic approach that relates the round complexity to the minimal amount of information required by machines for correctly solving a problem. Our approach is generic and might be useful in showing lower bounds in the context of similar problems and similar models. We show a lower bound of @math rounds for computing the PageRank. (Notation @math hides a @math factor.) We also present a simple distributed algorithm that computes the PageRank of all the nodes of a graph in @math rounds (notation @math hides a @math factor and an additive @math term). For triangle enumeration, we show a lower bound of @math rounds, where @math is the number of edges of the graph. Our result implies a lower bound of @math for the congested clique, which is tight up to logarithmic factors. 
We also present a distributed algorithm that enumerates all the triangles of a graph in @math rounds.", "We consider a message passing model with n nodes, each connected to all other nodes by a link that can deliver a message of B bits in a time unit (typically, B = O(log n)). We assume that each node has an input of size L bits (typically, L = O(n log n)) and the nodes cooperate in order to compute some function (i.e., perform a distributed task). We are interested in the number of rounds required to compute the function. We give two results regarding this model. First, we show that most boolean functions require ‸ L B ‹ − 1 rounds to compute deterministically, and that even if we consider randomized protocols that are allowed to err, the expected running time remains Ω(L B) for most boolean function. Second, trying to find explicit functions that require superconstant time, we consider the pointer chasing problem. In this problem, each node i is given an array Ai of length n whose entries are in [n], and the task is to find, for any j ∈ [n], the value of An-1[An-2[. . .A0[j] . . .]]. We give a deterministi..." ] }
1705.03284
2963275645
The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity theory tools to build a clearer picture of the complexity landscape of the congested clique: (1) Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy). (2) Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings. (3) Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.
For the CONGEST model, explicit lower bounds are known for many problems, even on graphs with very small diameter @cite_52 @cite_36 @cite_32 @cite_0 @cite_27 @cite_4 . These are generally based on reductions from known lower bounds in communication complexity; however, these reductions tend to boil down to constructing graphs with bottlenecks, that is, graphs where large amounts of information have to be transmitted over a small cut. A key motivation for the study of the congested clique model is to understand computation in networks that do not have such bottlenecks.
{ "cite_N": [ "@cite_4", "@cite_36", "@cite_32", "@cite_52", "@cite_0", "@cite_27" ], "mid": [ "2107282727", "2030825457", "1582638066", "1945440063", "1534484868", "2040011014" ], "abstract": [ "Given a simple graph G=(V,E) and a set of sources S ⊆ V, denote for each node ν e V by Lν(∞) the lexicographically ordered list of distance source pairs (d(s,v),s), where s ∈ S. For integers d,k ∈ N∪ ∞ , we consider the source detection, or (S,d,k)-detection task, requiring each node v to learn the first k entries of Lν(∞) (if for all of them d(s,v) ≤ d) or all entries (d(s,v),s) ∈ Lν(∞) satisfying that d(s,v) ≤ d (otherwise). Solutions to this problem provide natural generalizations of concurrent breadth-first search (BFS) tree constructions. For example, the special case of k=∞ requires each source s ∈ S to build a complete BFS tree rooted at s, whereas the special case of d=∞ and S=V requires constructing a partial BFS tree comprising at least k nodes from every node in V. In this work, we give a simple, near-optimal solution for the source detection task in the CONGEST model, where messages contain at most O(log n) bits, running in d+k rounds. We demonstrate its utility for various routing problems, exact and approximate diameter computation, and spanner construction. For those problems, we obtain algorithms in the CONGEST model that are faster and in some cases much simpler than previous solutions.", "We study the verification problem in distributed networks, stated as follows. Let H be a subgraph of a network G where each vertex of G knows which edges incident on it are in H. We would like to verify whether H has some properties, e.g., if it is a tree or if it is connected (every node knows in the end of the process whether H has the specified property or not). We would like to perform this verification in a decentralized fashion via a distributed algorithm. The time complexity of verification is measured as the number of rounds of distributed communication. 
In this paper we initiate a systematic study of distributed verification, and give almost tight lower bounds on the running time of distributed verification algorithms for many fundamental problems such as connectivity, spanning connected subgraph, and s-t cut verification. We then show applications of these results in deriving strong unconditional time lower bounds on the hardness of distributed approximation for many classical optimization problems including minimum spanning tree, shortest paths, and minimum cut. Many of these results are the first non-trivial lower bounds for both exact and approximate distributed computation and they resolve previous open questions. Moreover, our unconditional lower bound of approximating minimum spanning tree (MST) subsumes and improves upon the previous hardness of approximation bound of Elkin [STOC 2004] as well as the lower bound for (exact) MST computation of Peleg and Rubinovich [FOCS 1999]. Our result implies that there can be no distributed approximation algorithm for MST that is significantly faster than the current exact algorithm, for any approximation factor. Our lower bound proofs show an interesting connection between communication complexity and distributed computing which turns out to be useful in establishing the time complexity of exact and approximate distributed computation of many problems.", "We study the problem of computing the diameter of a network in a distributed way. The model of distributed computation we consider is: in each synchronous round, each node can transmit a different (but short) message to each of its neighbors. We provide an Ω(n) lower bound for the number of communication rounds needed, where n denotes the number of nodes in the network. This lower bound is valid even if the diameter of the network is a small constant. We also show that a (3 2 − e)-approximation of the diameter requires Ω (√n + D) rounds. 
Furthermore we use our new technique to prove an Ω (√n + D) lower bound on approximating the girth of a graph by a factor 2 − e.", "This paper presents a lower bound of spl Omega (D+ spl radic n) on the time required for the distributed construction of a minimum-weight spanning tree (MST) in n-vertex networks of diameter D= spl Omega (log n), in the bounded message model. This establishes the asymptotic near-optimality of existing time-efficient distributed algorithms for the problem, whose complexity is O(D+ spl radic nlog* n).", "This article presents a fast distributed algorithm to compute a smallk-dominating setD(for any fixedk) and to compute its induced graph partition (breaking the graph into radiuskclusters centered around the vertices ofD). The time complexity of the algorithm isO(klog*n). Smallk-dominating sets have applications in a number of areas, including routing with sparse routing tables, the design of distributed data structures, and center selection in a distributed network. The main application described in this article concerns a fast distributed algorithm for constructing a minimum-weight spanning tree (MST). On ann-vertex network of diameterd, the new algorithm constructs an MST in time, improving on previous results.", "A distributed network is modeled by a graph having n nodes (processors) and diameter D. We study the time complexity of approximating weighted (undirected) shortest paths on distributed networks with a O (log n) bandwidth restriction on edges (the standard synchronous CONGEST model). The question whether approximation algorithms help speed up the shortest paths and distance computation (more precisely distance computation) was raised since at least 2004 by Elkin (SIGACT News 2004). The unweighted case of this problem is well-understood while its weighted counterpart is fundamental problem in the area of distributed approximation algorithms and remains widely open. 
We present new algorithms for computing both single-source shortest paths (SSSP) and all-pairs shortest paths (APSP) in the weighted case. Our main result is an algorithm for SSSP. Previous results are the classic O(n)-time Bellman-Ford algorithm and an O(n1 2+1 2k + D)-time (8k⌈log(k + 1)⌉ --1)-approximation algorithm, for any integer k ≥ 1, which follows from the result of Lenzen and Patt-Shamir (STOC 2013). (Note that Lenzen and Patt-Shamir in fact solve a harder problem, and we use O(·) to hide the O(poly log n) term.) We present an O (n1 2D1 4 + D)-time (1 + o(1))-approximation algorithm for SSSP. This algorithm is sublinear-time as long as D is sublinear, thus yielding a sublinear-time algorithm with almost optimal solution. When D is small, our running time matches the lower bound of Ω(n1 2 + D) by Das (SICOMP 2012), which holds even when D=Θ(log n), up to a poly log n factor. As a by-product of our technique, we obtain a simple O (n)-time (1+ o(1))-approximation algorithm for APSP, improving the previous O(n)-time O(1)-approximation algorithm following from the results of Lenzen and Patt-Shamir. We also prove a matching lower bound. Our techniques also yield an O(n1 2) time algorithm on fully-connected networks, which guarantees an exact solution for SSSP and a (2+ o(1))-approximate solution for APSP. All our algorithms rely on two new simple tools: light-weight algorithm for bounded-hop SSSP and shortest-path diameter reduction via shortcuts. These tools might be of an independent interest and useful in designing other distributed algorithms." ] }
1705.03284
2963275645
The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity theory tools to build a clearer picture of the complexity landscape of the congested clique: (1) Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy). (2) Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings. (3) Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.
Perhaps the most active development related to the computational complexity theory of distributed computing is currently taking place in the context of the LOCAL model. There is a lot of very recent work that aims at developing a complete classification of the complexities of problems in the LOCAL model @cite_11 @cite_33 @cite_29 @cite_31 @cite_1 @cite_37 . In this line of research, the focus is on low-degree large-diameter graphs, while in the congested clique model we will study the opposite corner of the distributed computing landscape: high-degree low-diameter graphs.
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_29", "@cite_1", "@cite_31", "@cite_11" ], "mid": [ "2962850638", "2243868910", "2534944111", "2609730020", "2593376981", "2279830512" ], "abstract": [ "A number of recent papers – e.g. (STOC 2016), (FOCS 2016), Ghaffari & Su (SODA 2017), (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log* n), Θ(logn), Θ(n1 k), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(loglog* n), and another between ω(log* n) and o(logn). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(logα n) for any α ≥ 1, 2Θ(logα n) for any α ≤ 1, and Θ(nα) for any α", "We show that any randomised Monte Carlo distributed algorithm for the Lovasz local lemma requires Omega(log log n) communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of d = O(1), where d is the maximum degree of the dependency graph. 
By prior work, there are distributed algorithms for the Lovasz local lemma with a running time of O(log n) rounds in bounded-degree graphs, and the best lower bound before our work was Omega(log* n) rounds [ 2014].", "We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: • We present a poly log n round deterministic algorithm for (2Δ−1)·(1+o(1))-edge-coloring, where Δ denotes the maximum degree. Modulo the 1 + o(1) factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of (2Δ − 1) · poly log Δ-edge-coloring in poly log n rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. • We show that sinkless orientation---i.e., orienting edges such that each node has at least one out-going edge---on Δ-regular graphs can be solved in O(logΔ log n) rounds randomized and in O(logΔ n) rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for Δ-coloring Δ-regular trees. • We present a randomized O(log4 n) round algorithm for orienting a-arboricity graphs with maximum out-degree a(1 + e). This can be also turned into a decomposition into a(1 + e) forests when a = Ω(log n) and into a(1 + e) pseduo-forests when a = o(log n). 
Obtaining an efficient distributed decomposition into less than 2a forests was stated as the 10th open problem in the book by Barenboim and Elkin.", "The celebrated Time Hierarchy Theorem for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the distributed LOCAL model has been open for many years. It is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as @math , etc. In this paper we establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy. 1. We define an infinite set of simple coloring problems called Hierarchical @math -Coloring . A correctly colored graph can be confirmed by simply checking the neighborhood of each vertex, so this problem fits into the class of locally checkable labeling (LCL) problems. However, the complexity of the @math -level Hierarchical @math -Coloring problem is @math , for @math . The upper and lower bounds hold for both general graphs and trees, and for both randomized and deterministic algorithms. 2. Consider any LCL problem on bounded degree trees. We prove an automatic-speedup theorem that states that any randomized @math -time algorithm solving the LCL can be transformed into a deterministic @math -time algorithm. Together with a previous result, this establishes that on trees, there are no natural deterministic complexities in the ranges @math --- @math or @math --- @math . 3. We expose a gap in the randomized time hierarchy on general graphs. Any randomized algorithm that solves an LCL problem in sublogarithmic time can be sped up to run in @math time, which is the complexity of the distributed Lovasz local lemma problem, currently known to be @math and @math .", "LCLs or locally checkable labelling problems (e.g. 
maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of O(1), Θ(log* n), or Θ(n), and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: O(1), Θ(log* n), and Θ(n). However, given an LCL problem it is undecidable whether its complexity is Θ(log* n) or Θ(n) in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is Θ(log* n), we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form A' o Sk, where A' is a finite function, Sk is an algorithm for finding a maximal independent set in kth power of the grid, and k is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.", "Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast @math -coloring of trees requires random bits: Building on the recent lower bounds of , we prove that the randomized complexity of @math -coloring a tree with maximum degree @math is @math , whereas its deterministic complexity is @math for any @math . 
This also establishes a large separation between the deterministic complexity of @math -coloring and @math -coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in @math rounds can be transformed to run in @math rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires @math time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size @math is at least its deterministic complexity on instances of size @math . This shows that a deterministic @math lower bound for any problem implies a randomized @math lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model." ] }
1705.03284
2963275645
The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity theory tools to build a clearer picture of the complexity landscape of the congested clique: (1) Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy). (2) Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings. (3) Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.
Nondeterministic models of distributed computing have been studied under various names -- for example, @cite_19 @cite_50 @cite_2 @cite_14 , @cite_5 , and @cite_16 can be interpreted as nondeterministic versions of variants of the LOCAL and CONGEST models; we refer to the survey by Feuilloley and Fraigniaud @cite_45 for further discussion. However, there seem to be very few papers that take the next step from nondeterministic machines to alternating machines in the context of distributed computing -- we are only aware of Reiter @cite_26 , who studies alternating quantifiers in finite state machines, and @cite_30 and @cite_12 , who study alternating quantifiers in the LOCAL model.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_19", "@cite_45", "@cite_50", "@cite_2", "@cite_5", "@cite_16", "@cite_12" ], "mid": [ "", "1981752370", "1537465387", "", "2962803529", "2023425290", "2056295140", "2067202579", "2184633425", "2885668823" ], "abstract": [ "", "Let f be a function on pairs of vertices. An f -labeling scheme for a family of graphs ℱ labels the vertices of all graphs in ℱ such that for every graph G∈ℱ and every two vertices u,v∈G, f(u,v) can be inferred by merely inspecting the labels of u and v. The size of a labeling scheme is the maximum number of bits used in a label of any vertex in any graph in ℱ. This paper illustrates that the notion of universal matrices can be used to efficiently construct f-labeling schemes. Let ℱ(n) be a family of connected graphs of size at most n and let @math denote the collection of graphs of size at most n, such that each graph in @math is composed of a disjoint union of some graphs in ℱ(n). We first investigate methods for translating f-labeling schemes for ℱ(n) to f-labeling schemes for @math . In particular, we show that in many cases, given an f-labeling scheme of size g(n) for a graph family ℱ(n), one can construct an f-labeling scheme of size g(n)+log log n+O(1) for @math . We also show that in several cases, the above mentioned extra additive term of log log n+O(1) is necessary. In addition, we show that the family of n-node graphs which are unions of disjoint circles enjoys an adjacency labeling scheme of size log n+O(1). This illustrates a non-trivial example showing that the above mentioned extra additive term is sometimes not necessary. 
We then turn to investigate distance labeling schemes on the class of circles of at most n vertices and show an upper bound of 1.5log n+O(1) and a lower bound of 4 3log n−O(1) for the size of any such labeling scheme.", "Combining ideas from distributed algorithms and alternating automata, we introduce a new class of finite graph automata that recognize precisely the languages of finite graphs definable in monadic second-order logic. By restricting transitions to be nondeterministic or deterministic, we also obtain two strictly weaker variants of our automata for which the emptiness problem is decidable.", "", "We survey the recent distributed computing literature on checking whether a given distributed system configuration satisfies a given boolean predicate, i.e., whether the configuration is legal or illegal w.r.t. that predicate. We consider classical distributed computing environments, including mostly synchronous fault-free network computing (LOCAL and CONGEST models), but also asynchronous crash-prone shared-memory computing (WAIT-FREE model), and mobile computing (FSYNC model).", "The problem of verifying a Minimum Spanning Tree (MST) was introduced by Tarjan in a sequential setting. Given a graph and a tree that spans it, the algorithm is required to check whether this tree is an MST. This paper investigates the problem in the distributed setting, where the input is given in a distributed manner, i.e., every node “knows” which of its own emanating edges belong to the tree. Informally, the distributed MST verification problem is the following. Label the vertices of the graph in such a way that for every node, given (its own state and label and) the labels of its neighbors only, the node can detect whether these edges are indeed its MST edges. In this paper, we present such a verification scheme with a maximum label size of O(log n log W), where n is the number of nodes and W is the largest weight of an edge. 
We also give a matching lower bound of Ω(log n log W) (as long as W > (log n)^(1+e) for some fixed e > 0). Both our bounds improve previously known bounds for the problem.", "This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms: one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme.
Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.", "A central theme in distributed network algorithms concerns understanding and coping with the issue of locality . Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for . In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard @math model of computation and define @math (for local decision ) as the class of decision problems that can be solved in @math communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class @math , containing all languages for which there exists a randomized algorithm that runs in @math rounds, accepts correct instances with probability at least @math and rejects incorrect ones with probability at least @math . We show that @math is a threshold for the containment of @math in @math . More precisely, we show that there exists a language that does not belong to @math for any @math but does belong to @math for any @math such that @math . On the other hand, we show that, restricted to hereditary languages, @math , for any function @math and any @math such that @math . In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. 
Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide languages . Finally, we introduce the notion of local reduction, and establish some completeness results.", "This work studies decision problems related to graph properties from the perspective of nondeterministic distributed algorithms. For a yes-instance there must exist a proof that can be verified with a distributed algorithm: all nodes must accept a valid proof, and at least one node must reject an invalid proof. We focus on locally checkable proofs that can be verified with a constant-time distributed algorithm. For example, it is easy to prove that a graph is bipartite: the locally checkable proof gives a 2-coloring of the graph, which only takes 1 bit per node. However, it is more difficult to prove that a graph is not bipartite—it turns out that any locally checkable proof requires W(log n) bits per node. In this work we classify graph properties according to their local proof complexity, i. e., how many bits per node are needed in a locally checkable proof. We establish tight or near- tight results for classical graph properties such as the chromatic number. We show that the local proof complexities form a natural hierarchy of complexity classes: for many classical", "Abstract In the framework of distributed network computing , it is known that not all Turing-decidable predicates on labeled networks can be decided locally whenever the computing entities are Turing machines (TM). This holds even if nodes are running non-deterministic Turing machines (NTM). In contrast, we show that every Turing-decidable predicate on labeled networks can be decided locally if nodes are running alternating Turing machines (ATM). 
More specifically, we show that, for every such predicate, there is a local algorithm for ATMs, with at most two alternations, that decides whether the actual labeled network satisfies that predicate. To this aim, we define a hierarchy of classes of decision tasks, where the lowest level contains tasks solvable with TMs, the first level those solvable with NTMs, and the level k > 1 contains those tasks solvable with ATMs with k − 1 alternations. We characterize the entire hierarchy, and show that it collapses in the second level." ] }
1705.03414
2952734140
We study a distributed learning process observed in human groups and other social animals. This learning process appears in settings in which each individual in a group is trying to decide over time, in a distributed manner, which option to select among a shared set of options. Specifically, we consider a stochastic dynamics in a group in which every individual selects an option in the following two-step process: (1) select a random individual and observe the option that individual chose in the previous time step, and (2) adopt that option if its stochastic quality was good at that time step. Various instantiations of such distributed learning appear in nature, and have also been studied in the social science literature. From the perspective of an individual, an attractive feature of this learning process is that it is a simple heuristic that requires extremely limited computational capacities. But what does it mean for the group -- could such a simple, distributed and essentially memoryless process lead the group as a whole to perform optimally? We show that the answer to this question is yes -- this distributed learning is highly effective at identifying the best option and is close to optimal for the group overall. Our analysis also gives quantitative bounds that show fast convergence of these stochastic dynamics. Prior to our work the only theoretical work related to such learning dynamics has been either in deterministic special cases or in the asymptotic setting. Finally, we observe that our infinite population dynamics is a stochastic variant of the classic multiplicative weights update (MWU) method. Consequently, we arrive at the following interesting converse: the learning dynamics on a finite population considered here can be viewed as a novel distributed and low-memory implementation of the classic MWU method.
Our results suggest that the distributed learning dynamics in finite populations can be viewed as a novel distributed and approximate implementation of the MWU method. While parallelized implementations for solving the multi-armed bandit problem exist (see, e.g., @cite_29 @cite_15 @cite_47 @cite_32 @cite_54 ), in such works each node explicitly maintains a weight vector over all options. The most distinctive aspect of the distributed MWU interpretation of the learning dynamics we consider is that no such memory is required -- the weights are represented implicitly by the popularity of the various options, and the sampling and adoption steps require almost no memory. This difference distinguishes our distributed learning dynamics from prior work on distributed MWU or bandit methods.
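The two-step sample-and-adopt dynamics described in the abstract can be illustrated with a minimal simulation sketch. This is not the paper's analysis; all function names, parameter values, and option qualities below are illustrative assumptions. Each agent samples a random peer, observes the option that peer chose in the previous step, and adopts it only if a Bernoulli draw of that option's quality succeeds; no agent stores any weight vector.

```python
import random

def simulate_dynamics(num_agents=1000, qualities=(0.6, 0.5, 0.4),
                      steps=200, seed=0):
    """Toy simulation of the sample-and-adopt dynamics (illustrative
    parameters). Returns the final fraction of agents holding the
    best option."""
    rng = random.Random(seed)
    k = len(qualities)
    # Start with agents spread uniformly at random over the options.
    opts = [rng.randrange(k) for _ in range(num_agents)]
    for _ in range(steps):
        prev = opts[:]  # options chosen in the previous time step
        new = []
        for i in range(num_agents):
            # Step (1): sample a random individual, observe its option.
            observed = prev[rng.randrange(num_agents)]
            # Step (2): adopt it only if its stochastic quality was good
            # this time step; otherwise keep the current option.
            if rng.random() < qualities[observed]:
                new.append(observed)
            else:
                new.append(prev[i])
        opts = new
    best = max(range(k), key=lambda j: qualities[j])
    return opts.count(best) / num_agents

print(simulate_dynamics())
```

Note that no per-agent weight vector appears anywhere: the MWU-style weights exist only implicitly, as the popularity counts of the options across the population.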
{ "cite_N": [ "@cite_15", "@cite_54", "@cite_29", "@cite_32", "@cite_47" ], "mid": [ "762176534", "", "2950807979", "2951248203", "1996069568" ], "abstract": [ "We consider the problem of learning in single-player and multiplayer multiarmed bandit models. Bandit problems are classes of online learning problems that capture exploration versus exploitation tradeoffs. In a multiarmed bandit model, players can pick among many arms, and each play of an arm generates an i.i.d. reward from an unknown distribution. The objective is to design a policy that maximizes the expected reward over a time horizon for a single player setting and the sum of expected rewards for the multiplayer setting. In the multiplayer setting, arms may give different rewards to different players. There is no separate channel for coordination among the players. Any attempt at communication is costly and adds to regret. We propose two decentralizable policies, @math ( @math - @math ) and @math - @math , that can be used in both single player and multiplayer settings. These policies are shown to yield expected regret that grows at most as O( @math ). It is well known that @math is the lower bound on the rate of growth of regret even in a centralized case. The proposed algorithms improve on prior work where regret grew at O( @math ). More fundamentally, these policies address the question of additional cost incurred in decentralized online learning, suggesting that there is at most an @math -factor cost in terms of order of regret. This solves a problem of relevance in many domains and had been open for a while.", "", "A key problem in sensor networks is to decide which sensors to query when, in order to obtain the most useful information (e.g., for performing accurate prediction), subject to constraints (e.g., on power and bandwidth). In many applications the utility function is not known a priori, must be learned from data, and can even change over time. 
Furthermore for large sensor networks solving a centralized optimization problem to select sensors is not feasible, and thus we seek a fully distributed solution. In this paper, we present Distributed Online Greedy (DOG), an efficient, distributed algorithm for repeatedly selecting sensors online, only receiving feedback about the utility of the selected sensors. We prove very strong theoretical no-regret guarantees that apply whenever the (unknown) utility function satisfies a natural diminishing returns property called submodularity. Our algorithm has extremely low communication requirements, and scales well to large sensor deployments. We extend DOG to allow observation-dependent sensor selection. We empirically demonstrate the effectiveness of our algorithm on several real-world sensing tasks.", "The fundamental problem of multiple secondary users contending for opportunistic spectrum access over multiple channels in cognitive radio networks has been formulated recently as a decentralized multi-armed bandit (D-MAB) problem. In a D-MAB problem there are @math users and @math arms (channels) that each offer i.i.d. stochastic rewards with unknown means so long as they are accessed without collision. The goal is to design a decentralized online learning policy that incurs minimal regret, defined as the difference between the total expected rewards accumulated by a model-aware genie, and that obtained by all users applying the policy. We make two contributions in this paper. First, we consider the setting where the users have a prioritized ranking, such that it is desired for the @math -th-ranked user to learn to access the arm offering the @math -th highest mean reward. For this problem, we present the first distributed policy that yields regret that is uniformly logarithmic over time without requiring any prior assumption about the mean rewards. 
Second, we consider the case when a fair access policy is required, i.e., it is desired for all users to experience the same mean reward. For this problem, we present a distributed policy that yields order-optimal regret scaling with respect to the number of users and arms, better than previously proposed policies in the literature. Both of our distributed policies make use of an innovative modification of the well known UCB1 policy for the classic multi-armed bandit problem that allows a single user to learn how to play the arm that yields the @math -th largest mean reward.", "We formulate and study a decentralized multi-armed bandit (MAB) problem, where M distributed players compete for N independent arms with unknown reward statistics. At each time, each player chooses one arm to play without exchanging information with other players. Players choosing the same arm collide, and, depending on the collision model, either no one receives reward or the colliding players share the reward in an arbitrary way. We show that the minimum system regret of the decentralized MAB grows with time at the same logarithmic order as in the centralized counterpart where players act collectively as a single entity by exchanging observations and making decisions jointly. A general framework of constructing fair and order-optimal decentralized policies is established based on a Time Division Fair Sharing (TDFS) of the M best arms. A lower bound on the system regret growth rate is established for a general class of decentralized polices, to which all TDFS policies belong. We further develop several fair and order-optimal decentralized polices within the TDFS framework and study their performance in different applications including cognitive radio networks, multi-channel communications in unknown fading environment, target collecting in multi-agent systems, and web search and advertising." ] }
1705.03162
2768347146
Abstract The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time—even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time–space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2 – 9 × for a range of problem sizes, respectively, compared with simple GPU versions and 7 – 300 × compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2 – 1.9 × worse than a standard implementation for all problem sizes.
Alhubail and Wang introduced the swept rule for explicit time-stepping numerical schemes applied to PDEs @cite_24 @cite_5 @cite_17 , and our work takes their results and ideas as its starting point. The swept rule is closely related to cache optimization techniques, in particular those that use geometry, such as parallelograms @cite_14 and diamonds @cite_4 , to organize stencil update computation. The diamond tiling method presented by @cite_4 is similar to the swept rule but uses the data dependency of the grid to improve cache usage rather than to avoid communication. Concepts fundamental to this study, such as stencil optimization using domain decomposition on various architectures, are explored by @cite_25 , whose work compares parallel GPU and CPU architectures and tunes the stencil algorithm with nested domain decomposition. The swept rule also has elements in common with parallel-in-time and communication-avoiding algorithms.
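The core idea of the swept rule -- exhausting the domain of influence before communicating -- can be sketched for a 1D three-point stencil. The sketch below (not the authors' implementation; the heat-equation stencil, coefficient, and data layout are illustrative assumptions) builds the "rising triangle" on one sub-domain: each sub-step loses one computable point on each side, so roughly n/2 time steps can be advanced with no communication, after which only the triangle's edge values need to be exchanged with neighbours.

```python
def swept_triangle(u, alpha=0.25):
    """Build the rising triangle of the swept rule on one sub-domain
    (toy sketch, illustrative stencil).  Advances interior points of u
    through successive explicit heat-equation steps until the influence
    of the unknown neighbouring boundary values is exhausted.  Returns
    all computed levels plus the edge values that would be sent to the
    left and right neighbours."""
    n = len(u)
    levels = [list(u)]
    lo, hi = 1, n - 1            # computable interior shrinks each step
    while hi - lo >= 1:
        prev = levels[-1]
        nxt = list(prev)
        for i in range(lo, hi):  # 3-point explicit stencil update
            nxt[i] = prev[i] + alpha * (prev[i-1] - 2*prev[i] + prev[i+1])
        levels.append(nxt)
        lo += 1
        hi -= 1
    # The outermost valid value of each level forms the communicated edge.
    left_edge = [lev[k] for k, lev in enumerate(levels)]
    right_edge = [lev[n - 1 - k] for k, lev in enumerate(levels)]
    return levels, left_edge, right_edge
```

For a sub-domain of n points this yields about n/2 sub-steps per communication, which is the latency saving the swept rule targets; a GPU version additionally maps the triangle into shared memory, as the present work describes.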
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_24", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "1973271197", "1506424797", "", "", "2154786353", "2278971702" ], "abstract": [ "We present a new cache oblivious scheme for iterative stencil computations that performs beyond system bandwidth limitations as though gigabytes of data could reside in an enormous on-chip cache. We compare execution times for 2D and 3D spatial domains with up to 128 million double precision elements for constant and variable stencils against hand-optimized naive code and the automatic polyhedral parallelizer and locality optimizer PluTo and demonstrate the clear superiority of our results. The performance benefits stem from a tiling structure that caters for data locality, parallelism and vectorization simultaneously. Rather than tiling the iteration space from inside, we take an exterior approach with a predefined hierarchy, simple regular parallelogram tiles and a locality preserving parallelization. These advantages come at the cost of an irregular work-load distribution but a tightly integrated load-balancer ensures a high utilization of all resources.", "The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. 
Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemp...", "", "", "Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations --- a class of algorithms at the heart of many structured grid codes, including PDF solvers. We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural tradeoffs of emerging multicore designs and their implications on scientific algorithm development.", "This article describes a method to accelerate parallel, explicit time integration of two-dimensional unsteady PDEs. The method is motivated by our observation that latency, not bandwidth, often limits how fast PDEs can be solved in parallel. The method is called the swept rule of space-time domain decomposition. Compared to conventional, space-only domain decomposition, it communicates similar amount of data, but in fewer messages. 
The swept rule achieves this by decomposing space and time among computing nodes in ways that exploit the domains of influence and the domain of dependency, making it possible to communicate once per many time steps with no redundant computation. By communicating less often, the swept rule effectively breaks the latency barrier, advancing on average more than one time step per ping-pong latency of the network. The article presents a simple theoretical analysis of the performance of the swept rule in two spatial dimensions, and supports the analysis with numerical experiments." ] }
1705.03162
2768347146
Abstract The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time—even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time–space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2 – 9 × for a range of problem sizes, respectively, compared with simple GPU versions and 7 – 300 × compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2 – 1.9 × worse than a standard implementation for all problem sizes.
Parallel-in-time methods @cite_9 , such as multigrid-reduction-in-time (MGRIT) algorithms @cite_11 , accelerate PDE solutions with time integrators that overcome the interdependence of solutions in the time domain, allowing parallelization of the entire space-time grid. These methods calculate the solution over the space-time domain on a coarse grid and iterate over successively finer grids until the desired accuracy is reached. The reliance on coarse grids reduces efficiency and accuracy when these methods are applied to nonlinear systems @cite_5 . This shortcoming is intuitive: chaotic, nonlinear systems may change abruptly in time, and coarse grids are prone to aliasing, so the fine grid granularity these systems require erodes the performance gains. The swept rule arises from the same motivation, but does not seek to parallelize the computation in time or vary dimensions during the process.
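The coarse-grid/fine-grid correction structure shared by parallel-in-time methods can be illustrated with a minimal parareal iteration. This is a sketch under illustrative assumptions (a scalar linear ODE, backward-Euler propagators, arbitrary step counts), not the MGRIT algorithm itself: a cheap coarse propagator sweeps sequentially, while the expensive fine propagations of each time slice are independent and could run in parallel.

```python
def parareal(u0, lam=-1.0, T=1.0, n_slices=8, iters=3, fine_sub=20):
    """Minimal parareal sketch for u' = lam * u (illustrative values).
    Coarse propagator: one backward-Euler step per slice.
    Fine propagator: fine_sub backward-Euler sub-steps per slice."""
    dt = T / n_slices

    def coarse(u):
        return u / (1 - lam * dt)          # one implicit Euler step

    def fine(u):
        h = dt / fine_sub
        for _ in range(fine_sub):          # many small implicit steps
            u = u / (1 - lam * h)
        return u

    # Initial guess from the sequential coarse sweep alone.
    U = [u0]
    for _ in range(n_slices):
        U.append(coarse(U[-1]))
    for _ in range(iters):
        # Fine propagation of every slice: independent, parallelizable.
        F = [fine(U[n]) for n in range(n_slices)]
        # Sequential coarse sweep with the parareal correction:
        # U_{n+1} <- G(U_n) + F(U_n_old) - G(U_n_old).
        Unew = [u0]
        for n in range(n_slices):
            Unew.append(coarse(Unew[-1]) + F[n] - coarse(U[n]))
        U = Unew
    return U[-1]
```

After a few iterations the result matches the sequential fine solution; for the defaults above the exact value is exp(-1) ≈ 0.368. The contrast with the swept rule is visible in the structure: parareal iterates over coarse and fine representations of the whole time domain, whereas the swept rule keeps a single grid and only reorders computation and communication.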
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_11" ], "mid": [ "", "2461259506", "1578712267" ], "abstract": [ "", "Time parallel time integration methods have received renewed interest over the last decade because of the advent of massively parallel computers, which is mainly due to the clock speed limit reached on today’s processors. When solving time dependent partial differential equations, the time direction is usually not used for parallelization. But when parallelization in space saturates, the time direction offers itself as a further direction for parallelization. The time direction is however special, and for evolution problems there is a causality principle: the solution later in time is affected (it is even determined) by the solution earlier in time, but not the other way round. Algorithms trying to use the time direction for parallelization must therefore be special, and take this very different property of the time dimension into account. We show in this chapter how time domain decomposition methods were invented, and give an overview of the existing techniques. Time parallel methods can be classified into four different groups: methods based on multiple shooting, methods based on domain decomposition and waveform relaxation, space-time multigrid methods and direct time parallel methods. We show for each of these techniques the main inventions over time by choosing specific publications and explaining the core ideas of the authors. This chapter is for people who want to quickly gain an overview of the exciting and rapidly developing area of research of time parallel methods.", "We consider optimal-scaling multigrid solvers for the linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other.
Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading toward systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is not straightforward. In this paper, we present a nonintrusive, optimal-scaling time-parallel method based on multigrid reduction (MGR). We demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving diffusion equations in two and three space dimensions in numerical experiments. Furthermore, through both parallel performance models and actual parallel numerical results, we show that we can achieve significant speedup in comparison to sequential time marching on modern architectures." ] }
1705.03372
2950123658
We address the problem of localisation of objects as bounding boxes in images with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. We propose a novel framework based on Bayesian joint topic modelling. Our framework has three distinctive advantages over previous works: (1) All object classes and image backgrounds are modelled jointly together in a single generative model so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) The Bayesian formulation of the model enables easy integration of prior knowledge about object appearance to compensate for limited supervision. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Extensive experiments on the challenging VOC dataset demonstrate that our approach outperforms the state-of-the-art competitors.
An approach similar in spirit to ours, in the sense of jointly learning a model for all classes, is that of Cabral et al. @cite_24 . That study formulates multi-label image classification as a matrix completion problem, which is also similar in spirit to our factoring of images into a mixture of topics. However, we add two key factors: (i) a stronger notion of the spatial location and extent of each object, and (ii) the ability to encode human knowledge or transferred knowledge through Bayesian priors. As a result we are able to address more challenging data than @cite_24 , such as VOC 2007. Multi-instance multi-label (MIML) @cite_36 approaches provide a mechanism to jointly learn a model for all classes @cite_27 @cite_10 . However, because these methods must search a discrete space (of positive instance subsets), their optimisation problem is harder. They also lack the benefit of Bayesian integration of prior knowledge. Finally, while more elaborate joint generative learning methods exist @cite_30 @cite_19 , they are more complicated than necessary for the WSOL task and thus do not scale to the size of data required here.
{ "cite_N": [ "@cite_30", "@cite_36", "@cite_24", "@cite_19", "@cite_27", "@cite_10" ], "mid": [ "2033012377", "2154840533", "", "2106624428", "2135533176", "" ], "abstract": [ "We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.", "In this paper, we address the problem of multi-instance multi-label learning (MIML) where each example is associated with not only multiple instances but also multiple class labels. In our novel approach, given an MIML example, each instance in the example is only associated with a single label and the label set of the example is the aggregation of all instance labels. Many real-world tasks such as scene classification, text categorization and gene sequence encoding can be properly formalized under our proposed approach. We formulate our MIML problem as a combination of two optimizations: (1) a quadratic programming (QP) that minimizes the empirical risk with L2-norm regularization, and (2) an integer programing (IP) assigning each instance to a single label. 
We also present an efficient method combining the stochastic gradient descent and alternating optimization approaches to solve our QP and IP optimizations. In our experiments with both an artificially generated data set and real-world applications, i.e. scene classification and text categorization, our proposed method achieves superior performance over existing state-of-the-art MIML methods such as MIMLBOOST, MIMLSVM, M @math MIML and MIMLRBF.", "", "Given an image, we propose a hierarchical generative model that classifies the overall scene, recognizes and segments each object component, as well as annotates the image with a list of tags. To our knowledge, this is the first model that performs all three tasks in one coherent framework. For instance, a scene of a 'polo game' consists of several visual objects such as 'human', 'horse', 'grass', etc. In addition, it can be further annotated with a list of more abstract (e.g. 'dusk') or visually less salient (e.g. 'saddle') tags. Our generative model jointly explains images through a visual model and a textual model. Visually relevant objects are represented by regions and patches, while visually irrelevant textual annotations are influenced directly by the overall scene class. We propose a fully automatic learning framework that is able to learn robust scene models from noisy Web data such as images and user tags from Flickr.com. We demonstrate the effectiveness of our framework by automatically classifying, annotating and segmenting images from eight classes depicting sport scenes. In all three tasks, our model significantly outperforms state-of-the-art algorithms.", "In this paper, we formalize multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels. Such a problem can occur in many real-world tasks, e.g.
an image usually contains multiple patches each of which can be described by a feature vector, and the image can belong to multiple categories since its semantics can be recognized in different ways. We analyze the relationship between multi-instance multi-label learning and the learning frameworks of traditional supervised learning, multi-instance learning and multi-label learning. Then, we propose the MIMLBOOST and MIMLSVM algorithms which achieve good performance in an application to scene classification.", "" ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
In @cite_23 , the authors draw on previous research on tight frame construction by R. Balan and I. Daubechies, who proposed constructing tight frames by selecting the first @math components of the @math -dimensional vectors of an @math discrete Fourier transform (DFT). While this may be used directly to construct @math @math -dimensional complex codes for relatively low values of @math , it is clear that when @math the coherence between contiguous codewords tends to one. The novelty in @cite_23 is that the selection of the @math DFT rows is not deterministic, but rather the result of an optimization process: a random search that looks for the set of frequencies yielding minimal coherence between the resulting codewords. In their simulations they consider @math and up to @math codewords.
{ "cite_N": [ "@cite_23" ], "mid": [ "2170678594" ], "abstract": [ "We propose a systematic method for creating constellations of unitary space-time signals for multiple-antenna communication links. Unitary space-time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transmitter nor the receiver knows the fading coefficients. The signals can achieve low probability of error by exploiting multiple-antenna diversity. Because the fading coefficients are not known, the criterion for creating and evaluating the constellation is nonstandard and differs markedly from the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation-an oblong complex-valued matrix whose columns are orthonormal-and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space. This construction easily produces large constellations of high-dimensional signals. We demonstrate its efficacy through examples involving one, two, and three transmitter antennas." ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
The construction method first introduced in @cite_81 and further developed and analyzed in @cite_68 exploits the fact that the unitary transform representing any point on the Grassmann manifold can be written as the exponential of an element of the tangent space at the identity point. In the rank-one case considered in this paper, this corresponds to the exponential of an @math -dimensional vector with only @math nonzero components: since we pack subspaces of dimension one (lines) on the Grassmannian, the tangent space has dimensionality @math . The core idea of the method is to design a coherent codebook in the tangent space that yields an optimal non-coherent one on the Grassmannian. Further work exploiting the exponential parametrization of the Grassmannian can be found in @cite_36 , where different rotated lattices are used as initial codes in the tangent space.
{ "cite_N": [ "@cite_36", "@cite_68", "@cite_81" ], "mid": [ "2543460896", "2133405866", "2123849153" ], "abstract": [ "Geometric methods for construction of codes in the Grassmann manifolds are presented. The methods follow the geometric approach to space-time coding for the non-coherent MIMO channel where the code design is interpreted as a packing problem on Grassmann manifolds. The differential structure of the Grassmann manifold provides parametrization with the tangent space at the identity element. Grassmann codes for the non-coherent channel are constructed by mapping suitable subsets of lattices from the tangent space to the Grassmann manifold via the exponential map. As examples, constructions from the rotated Gosset, Barnes-Wall and Leech lattice are presented. Due to the specifics of the mapping, some of the structure is preserved after the mapping to the manifold. The method is further improved by modifying the mapping from the tangent space to the manifold. Ideas for other constructions of Grassmann codes are also presented and discussed.", "We construct a new family of space-time codes suited for multiple-antenna non-coherent communications over Rayleigh flat fading channels. These codes use all the complex degrees of freedom of the system, i.e. M×(1-M/T) symbols per channel use, where T is the code length and M is the number of transmit antennas. Their codewords belong to the Grassmann manifold G_{T,M}(C), the set of the M-dimensional vector subspaces of C^T. Our codes are built from space-time codes for coherent systems via a nonlinear map (parameterization) called the exponential map. The relationships between some properties of the non-coherent space-time codes and their corresponding coherent codes are investigated. We also propose a simplified decoding for these classes of unitary space-time codes.
We analyse the performance of these codes in terms of complexity and error probability and we compare them with some current best propositions, in particular training-based codes. The performance of our codes is substantially equivalent or slightly better than the one of training-based codes, under GLRT (Generalized Likelihood Ratio Test) decoding. When a simplified decoder is applied at the receiver, the error probability curves of the two propositions seem to be basically equivalent in the SIMO case with small block size, high spectral efficiency and high number of receive antennas, and in the 2×2 MIMO case with low spectral efficiency. In the other cases the training codes with simplified decoding seem to outperform our proposition.", "A family of space-time codes suited for noncoherent multi-input multi-output (MIMO) systems is presented. These codes use all the complex degrees of freedom of the system, i.e. M×(1-M/T) symbols per channel use. They are constructed as codes on the Grassmann manifold G_{T,M}(C) where T is the temporal codelength and M is the number of transmit antennas." ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
An expansion-compression algorithm (ECA) is proposed in @cite_80 for finding packings in Grassmann manifolds. The ECA scheme is motivated by the fact that, under the chordal distance, a simple max-min scheme (maximization of the minimum distance between codewords) yields degenerated constellations. To overcome this issue, an alternating scheme is proposed in which a max-min step, called expansion, is followed by a min-max step, called compression. The authors observe that, when the Fubini-Study distance is used as the metric, degenerated constellations do not occur and a conventional max-min scheme suffices, thus avoiding the compression step.
{ "cite_N": [ "@cite_80" ], "mid": [ "2143839989" ], "abstract": [ "We propose a numerical method for finding packings of multiple-input and multiple-output (MIMO) semi-unitary precoding matrices in Grassmannian manifolds with different metrics. The proposed expansion-compression algorithm (ECA) is practical, simple and produces efficient packings without the need for a sophisticated initialization. With chordal distance metric, the algorithm tends to converge into a degenerated point constellation, where two points contain identical as well as orthogonal columns and distance between them cannot increase further along geodesic. Therefore, we alternate between max-min and min-max clustering parts of ECA algorithm, where the latter prevents degenerated constellations. With Fubini-Study distance metric, the algorithm converges to best known packings without extra min-max processing." ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
Some of the methods presented so far were either exclusively or, at least, initially designed to construct codes, packings, frames, sensing matrices, etc. in the real space. In some cases, complex extensions are available and, in fact, some of the previous methods were designed to work natively in complex space. Nevertheless, in such cases the computational cost is higher and the methods are only evaluated for relatively low numbers of codewords of low dimensionality. There is, indeed, a lack of work presenting constructions of large close-to-optimal complex codebooks. To the best of our knowledge, the tables in @cite_15 provide the most complete comparative benchmark so far, yet they are limited to codes with @math codewords of dimensionality @math (or only @math for @math ).
{ "cite_N": [ "@cite_15" ], "mid": [ "2169273233" ], "abstract": [ "Vector sets with optimal coherence according to the Welch bound cannot exist for all pairs of dimension and cardinality. If such an optimal vector set exists, it is an equiangular tight frame and represents the solution to a Grassmannian line packing problem. Best Complex Antipodal Spherical Codes (BCASCs) are the best vector sets with respect to the coherence. By extending methods used to find best spherical codes in the real-valued Euclidean space, the proposed approach aims to find BCASCs, and thereby, a complex-valued vector set with minimal coherence. There are many applications demanding vector sets with low coherence. Examples are not limited to several techniques in wireless communication or to the field of compressed sensing. Within this contribution, existing analytical and numerical approaches for coherence optimization of complex-valued vector spaces are summarized and compared to the proposed approach. The numerically obtained coherence values improve previously reported results. The drawback of increased computational effort is addressed and a faster approximation is proposed which may be an alternative for time critical cases." ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
In this section we provide a general comparison between the ANN-BCASC approximation and other approaches for generating close-to-optimal complex codes. The most complete way to perform such a cumulative comparison is to extend the tables in @cite_15 , which are in turn extensions of Table II of @cite_39 and Table II of @cite_77 , respectively. These tables provide the coherence values obtained from complex SCs generated using different methods and are named Table and Table , homonymously with the corresponding tables in @cite_15 . For both the BCASC and the ANN-BCASC methods we adopt the best result of ten independent runs. We also measure the time each algorithm needs to generate the codes. For the other algorithms, we preserve the values in the original tables of @cite_15 . We use the reference BCASC algorithm with approximate summation with @math summands, in order to have a fair reference for the ANN-BCASC algorithm, for which we adopt the same @math and a constant @math for all cases considered. As in @cite_15 , for both methods @math and @math were used.
{ "cite_N": [ "@cite_77", "@cite_15", "@cite_39" ], "mid": [ "1966208803", "2169273233", "2121743251" ], "abstract": [ "Grassmannian quantization codebooks play a central role in a number of limited feedback schemes for single and multi-user multiple-input multiple-output (MIMO) communication systems. In practice, it is often desirable that these codebooks possess additional properties that facilitate their implementation, beyond the provision of good quantization performance. Although some good codebooks exist, their design tends to be a rather intricate task. The goal of this paper is to suggest a flexible approach to the design of Grassmannian codebooks based on sequential smooth optimization on the Grassmannian manifold and the use of smooth penalty functions to obtain additional desirable properties. As one example, the proposed approach is used to design rank-2 codebooks that have a nested structure and elements from a phase-shift keying (PSK) alphabet. In some numerical comparisons, codebooks designed using the proposed methods have larger minimum distances than some existing codebooks, and provide tangible performance gains when applied to a simple MIMO downlink scenario with zero-forcing beamforming, per-user unitary beamforming and rate control (PU 2RC), and block diagonalization signalling. Furthermore, the proposed approach yields codebooks that attain desirable additional properties without incurring a substantial degradation in performance.", "Vector sets with optimal coherence according to the Welch bound cannot exist for all pairs of dimension and cardinality. If such an optimal vector set exists, it is an equiangular tight frame and represents the solution to a Grassmannian line packing problem. Best Complex Antipodal Spherical Codes (BCASCs) are the best vector sets with respect to the coherence. 
By extending methods used to find best spherical codes in the real-valued Euclidean space, the proposed approach aims to find BCASCs, and thereby, a complex-valued vector set with minimal coherence. There are many applications demanding vector sets with low coherence. Examples are not limited to several techniques in wireless communication or to the field of compressed sensing. Within this contribution, existing analytical and numerical approaches for coherence optimization of complex-valued vector spaces are summarized and compared to the proposed approach. The numerically obtained coherence values improve previously reported results. The drawback of increased computational effort is addressed and a faster approximation is proposed which may be an alternative for time critical cases.", "Consider a codebook containing N unit-norm complex vectors in a K-dimensional space. In a number of applications, the codebook that minimizes the maximal cross-correlation amplitude (I sub max ) is often desirable. Relying on tools from combinatorial number theory, we construct analytically optimal codebooks meeting, in certain cases, the Welch lower bound. When analytical constructions are not available, we develop an efficient numerical search method based on a generalized Lloyd algorithm, which leads to considerable improvement on the achieved I sub max over existing alternatives. We also derive a composite lower bound on the minimum achievable I sub max that is effective for any codebook size N." ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
[Coherence comparison of complex codes I] Comparison of the coherence of close-to-optimal complex codes obtained via different numerical approaches. Based on Table I of @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "2169273233" ], "abstract": [ "Vector sets with optimal coherence according to the Welch bound cannot exist for all pairs of dimension and cardinality. If such an optimal vector set exists, it is an equiangular tight frame and represents the solution to a Grassmannian line packing problem. Best Complex Antipodal Spherical Codes (BCASCs) are the best vector sets with respect to the coherence. By extending methods used to find best spherical codes in the real-valued Euclidean space, the proposed approach aims to find BCASCs, and thereby, a complex-valued vector set with minimal coherence. There are many applications demanding vector sets with low coherence. Examples are not limited to several techniques in wireless communication or to the field of compressed sensing. Within this contribution, existing analytical and numerical approaches for coherence optimization of complex-valued vector spaces are summarized and compared to the proposed approach. The numerically obtained coherence values improve previously reported results. The drawback of increased computational effort is addressed and a faster approximation is proposed which may be an alternative for time critical cases." ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
[Coherence comparison of complex codes II] Comparison of the coherence of close-to-optimal complex codes obtained via different numerical approaches. Based on Table II of @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "2169273233" ], "abstract": [ "Vector sets with optimal coherence according to the Welch bound cannot exist for all pairs of dimension and cardinality. If such an optimal vector set exists, it is an equiangular tight frame and represents the solution to a Grassmannian line packing problem. Best Complex Antipodal Spherical Codes (BCASCs) are the best vector sets with respect to the coherence. By extending methods used to find best spherical codes in the real-valued Euclidean space, the proposed approach aims to find BCASCs, and thereby, a complex-valued vector set with minimal coherence. There are many applications demanding vector sets with low coherence. Examples are not limited to several techniques in wireless communication or to the field of compressed sensing. Within this contribution, existing analytical and numerical approaches for coherence optimization of complex-valued vector spaces are summarized and compared to the proposed approach. The numerically obtained coherence values improve previously reported results. The drawback of increased computational effort is addressed and a faster approximation is proposed which may be an alternative for time critical cases." ] }
1705.03280
2614100893
Compressive Sensing (CS) theory states that real-world signals can often be recovered from much fewer measurements than those suggested by the Shannon sampling theorem. Nevertheless, recoverability does not only depend on the signal, but also on the measurement scheme. The measurement matrix should behave as close as possible to an isometry for the signals of interest. Therefore the search for optimal CS measurement matrices of size @math translates into the search for a set of @math @math -dimensional vectors with minimal coherence. Best Complex Antipodal Spherical Codes (BCASCs) are known to be optimal in terms of coherence. An iterative algorithm for BCASC generation has been recently proposed that tightly approaches the theoretical lower bound on coherence. Unfortunately, the complexity of each iteration depends quadratically on @math and @math . In this work we propose a modification of the algorithm that allows reducing the quadratic complexity to linear on both @math and @math . Numerical evaluation showed that the proposed approach does not worsen the coherence of the resulting BCASCs. On the contrary, an improvement was observed for large @math . The reduction of the computational complexity paves the way for using the BCASCs as CS measurement matrices in problems with large @math . We evaluate the CS performance of the BCASCs for recovering sparse signals. The BCASCs are shown to outperform both complex random matrices and Fourier ensembles as CS measurement matrices, both in terms of coherence and sparse recovery performance, especially for low @math , which is the case of interest in CS.
In short, the proposed ANN approximate approach and the reference algorithm generate BCASCs of equivalent quality in terms of coherence. In other words, Tables and confirm that the reduction in algorithmic complexity has no significant effect on the quality of the obtained BCASCs. In fact, both alternatives often produce codes with equal coherence, closely approaching and in some cases meeting the theoretical lower bound. The cases for which our approach meets the theoretical lower bound are @math , @math and @math , @math . Furthermore, in some cases the proposed approach outperforms the reference method, which is remarkable even though the differences are generally negligible. The reader might observe that the coherences obtained for the reference method often do not coincide with those given in @cite_15 . This is because we use the variant with approximate discrete summation for calculating the integral over complex rotations, in order to enable a fair comparison with the proposed approach, whereas the results presented in Tables I and II of @cite_15 were obtained using numerical integration.
{ "cite_N": [ "@cite_15" ], "mid": [ "2169273233" ], "abstract": [ "Vector sets with optimal coherence according to the Welch bound cannot exist for all pairs of dimension and cardinality. If such an optimal vector set exists, it is an equiangular tight frame and represents the solution to a Grassmannian line packing problem. Best Complex Antipodal Spherical Codes (BCASCs) are the best vector sets with respect to the coherence. By extending methods used to find best spherical codes in the real-valued Euclidean space, the proposed approach aims to find BCASCs, and thereby, a complex-valued vector set with minimal coherence. There are many applications demanding vector sets with low coherence. Examples are not limited to several techniques in wireless communication or to the field of compressed sensing. Within this contribution, existing analytical and numerical approaches for coherence optimization of complex-valued vector spaces are summarized and compared to the proposed approach. The numerically obtained coherence values improve previously reported results. The drawback of increased computational effort is addressed and a faster approximation is proposed which may be an alternative for time critical cases." ] }
1705.03332
2735924868
Person re-identification task has been greatly boosted by deep convolutional neural networks (CNNs) in recent years. The core of which is to enlarge the inter-class distinction as well as reduce the intra-class variance. However, to achieve this, existing deep models prefer to adopt image pairs or triplets to form verification loss, which is inefficient and unstable since the number of training pairs or triplets grows rapidly as the number of training data grows. Moreover, their performance is limited since they ignore the fact that different dimension of embedding may play different importance. In this paper, we propose to employ identification loss with center loss to train a deep model for person re-identification. The training process is efficient since it does not require image pairs or triplets for training while the inter-class distinction and intra-class variance are well handled. To boost the performance, a new feature reweighting (FRW) layer is designed to explicitly emphasize the importance of each embedding dimension, thus leading to an improved embedding. Experiments on several benchmark datasets have shown the superiority of our method over the state-of-the-art alternatives on both accuracy and speed.
Binary identification loss, contrastive loss and triplet loss are three main types of verification loss. CNNs with binary identification loss have been used by @cite_55 @cite_37 . They output a binary prediction indicating whether two images belong to the same identity or not. Many other deep models learn an embedding for each image and compute similarities between embeddings based on the Euclidean distance. The works @cite_3 @cite_21 use contrastive loss to train a CNN, which requires pairs of image samples for training. The methods @cite_28 @cite_49 @cite_18 @cite_44 @cite_57 employ triplet loss or its variations with CNNs, which require image triplets during training. For simplicity, the approaches @cite_33 @cite_36 @cite_34 apply identification loss to the person re-identification task, since it learns discriminative features efficiently. The combination of identification loss and verification loss has been found effective on face recognition @cite_42 , and it also gives excellent performance on person re-identification @cite_10 @cite_52 . Recently, the work @cite_17 proposed center loss to reduce the intra-class variance on the face recognition task, without constructing image pairs or triplets during training. However, for person re-identification, the mainstream loss for handling the intra-class variance is still verification loss.
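The center loss mentioned above penalizes the distance between each embedding and its class center, with the centers updated toward the batch means. A minimal NumPy sketch, following the common formulation (the toy batch, the update rate `alpha`, and the single-step update are illustrative assumptions, not the authors' exact training procedure):

```python
import numpy as np

def center_loss(embeddings, labels, centers):
    """L_c = 1/2 * sum_i ||x_i - c_{y_i}||^2 over a mini-batch."""
    diffs = embeddings - centers[labels]
    return 0.5 * np.sum(diffs ** 2)

def update_centers(embeddings, labels, centers, alpha=0.5):
    """Move each class center toward the mean of its batch embeddings."""
    new_centers = centers.copy()
    for c in np.unique(labels):
        batch_mean = embeddings[labels == c].mean(axis=0)
        new_centers[c] += alpha * (batch_mean - new_centers[c])
    return new_centers

# Toy batch: 4 embeddings of dimension 3, 2 identities
emb = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.9, 0.1]])
lab = np.array([0, 0, 1, 1])
ctr = np.zeros((2, 3))

before = center_loss(emb, lab, ctr)
ctr = update_centers(emb, lab, ctr)
after = center_loss(emb, lab, ctr)
print(after < before)  # moving centers toward class means reduces the loss
```

In joint training this term would be added to the identification (softmax) loss, so no image pairs or triplets are needed while intra-class variance is still penalized.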
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_33", "@cite_28", "@cite_36", "@cite_55", "@cite_21", "@cite_42", "@cite_52", "@cite_3", "@cite_44", "@cite_57", "@cite_49", "@cite_34", "@cite_10", "@cite_17" ], "mid": [ "", "", "1971955426", "1975517671", "", "1982925187", "", "", "", "", "", "", "", "", "2549957142", "2520774990" ], "abstract": [ "", "", "Identifying the same individual across different scenes is an important yet difficult task in intelligent video surveillance. Its main difficulty lies in how to preserve similarity of the same person against large appearance and structure variation while discriminating different individuals. In this paper, we present a scalable distance driven feature learning framework based on the deep neural network for person re-identification, and demonstrate its effectiveness to handle the existing challenges. Specifically, given the training images with the class labels (person IDs), we first produce a large number of triplet units, each of which contains three images, i.e. one person with a matched reference and a mismatched reference. Treating the units as the input, we build the convolutional neural network to generate the layered representations, and follow with the L 2 distance metric. By means of parameter optimization, our framework tends to maximize the relative distance between the matched pair and the mismatched pair for each triplet unit. Moreover, a nontrivial issue arising with the framework is that the triplet organization cubically enlarges the number of training triplets, as one image can be involved into several triplet units. To overcome this problem, we develop an effective triplet generation scheme and an optimized gradient descent algorithm, making the computational load mainly depend on the number of original images instead of the number of triplets. On several challenging databases, our approach achieves very promising results and outperforms other state-of-the-art approaches. 
HighlightsWe present a novel feature learning framework for person re-identification.Our framework is based on the maximum relative distance comparison.The learning algorithm is scalable to process large amount of data.We demonstrate superior performances over other state-of-the-arts.", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "", "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. 
The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.", "", "", "", "", "", "", "", "", "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https: github.com layumi 2016_person_re-ID.", "Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. 
In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks." ] }
1705.02966
2613087992
We present a method for generating a video of a talking face. The method takes as inputs: (i) still images of the target face, and (ii) an audio speech segment; and outputs a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we propose an encoder-decoder CNN model that uses a joint embedding of the face and audio to generate synthesised talking face video frames. The model is trained on tens of hours of unlabelled videos. We also show results of re-dubbing videos using speech from a different person.
Various works have proposed methods to generate or synthesise videos of talking heads from either audio or text sources. Fan et al. @cite_22 introduced a method to restitch the lower half of the face via a bi-directional LSTM to re-dub a target video from a different audio source. The LSTM selects a target mouth region from a dictionary of saved target frames rather than generating the image, so it requires a sizeable number of video frames of the specific target identity to choose from. Similarly, Charles et al. @cite_28 use phonetic labels to select frames from a dictionary of mouth images. Wan et al. @cite_4 proposed a method to synthesise a talking head via an active appearance model with the ability to control the emotion of the talking avatar, but they are constrained to the specific model trained by the system. Garrido et al. @cite_26 synthesise talking faces on target speakers by transferring the mouth shapes from the video of the dubber to the target video, but this method requires video footage of the dubber's mouth saying the speech segment, whereas our method learns the relationship between the sound and the mouth shapes.
{ "cite_N": [ "@cite_28", "@cite_26", "@cite_4", "@cite_22" ], "mid": [ "2572640303", "", "1419436964", "1569907127" ], "abstract": [ "The objective of this work is to build virtual talking avatars of characters fully automatically from TV shows. From this unconstrained data, we show how to capture a character’s style of speech, visual appearance and language in an effort to construct an interactive avatar of the person and effectively immortalize them in a computational model. We make three contributions (i) a complete framework for producing a generative model of the audiovisual and language of characters from TV shows; (ii) a novel method for aligning transcripts to video using the audio; and (iii) a fast audio segmentation system for silencing non-spoken audio from TV shows. Our framework is demonstrated using all 236 episodes from the TV series Friends ( ( ) 97 h of video) and shown to generate novel sentences as well as character specific speech and video.", "", "A controllable computer animated avatar that could be used as a natural user interface for computers is demonstrated. Driven by text and emotion input, it generates expressive speech with corresponding facial movements. To create the avatar, HMM-based text-to-speech synthesis is combined with active appearance model (AAM)-based facial animation. The novelty is the degree of control achieved over the expressiveness of both the speech and the face while keeping the controls simple. Controllability is achieved by training both the speech and facial parameters within a cluster adaptive training (CAT) framework. CAT creates a continuous, low dimensional eigenspace of expressions, which allows the creation of expressions of different intensity (including ones more intense than those in the original recordings) and combining different expressions to create new ones. 
Results on an emotion-recognition task show that recognition rates given the synthetic output are comparable to those given the original videos of the speaker. Copyright © 2013 ISCA.", "Long short-term memory (LSTM) is a specific recurrent neural network (RNN) architecture that is designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose to use deep bidirectional LSTM (BLSTM) for audio visual modeling in our photo-real talking head system. An audio visual database of a subject's talking is firstly recorded as our training data. The audio visual stereo data are converted into two parallel temporal sequences, i.e., contextual label sequences obtained by forced aligning audio against text, and visual feature sequences by applying active-appearance-model (AAM) on the lower face region among all the training image samples. The deep BLSTM is then trained to learn the regression model by minimizing the sum of square error (SSE) of predicting visual sequence from label sequence. After testing different network topologies, we interestingly found the best network is two BLSTM layers sitting on top of one feed-forward layer on our datasets. Compared with our previous HMM-based system, the newly proposed deep BLSTM-based one is better on both objective measurement and subjective A B test." ] }
1705.02966
2613087992
We present a method for generating a video of a talking face. The method takes as inputs: (i) still images of the target face, and (ii) an audio speech segment; and outputs a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we propose an encoder-decoder CNN model that uses a joint embedding of the face and audio to generate synthesised talking face video frames. The model is trained on tens of hours of unlabelled videos. We also show results of re-dubbing videos using speech from a different person.
Our training approach is based on unsupervised learning, in our case from tens of hours of people talking. One of the earliest examples of unsupervised learning of data representations using neural networks is the autoencoder by Hinton and Salakhutdinov @cite_2 . Further improvements in representation learning, for example the variational auto-encoders by Kingma et al. @cite_0 , opened up the possibility of generating images using neural networks. Moving forward, current research shows that adversarial training, proposed by @cite_10 , works well for generating natural-looking images; conditional generative models @cite_25 are able to generate images based on auxiliary information such as a class label. Our Speech2Vid model is closest in spirit to the image-to-image model by Isola et al. @cite_11 in that we generate an output that closely resembles the input, but in our case we have both audio and image data as inputs.
{ "cite_N": [ "@cite_0", "@cite_2", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "", "2100495367", "2099471712", "2423557781", "2552465644" ], "abstract": [ "", "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. 
When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either." ] }
1705.02861
2310936340
Multimedia scenarios have multimedia content and interactive events associated with computer programs. Interactive Scores (IS) is a formalism to represent such scenarios by temporal objects, temporal relations (TRs) and interactive events. IS describe TRs, but IS cannot represent TRs together with conditional branching. We propose a model for conditional branching timed IS in the Non-deterministic Timed Concurrent Constraint (ntcc) calculus. We ran a prototype of our model in Ntccrt (a real-time capable interpreter for ntcc) and the response time was acceptable for real-time interaction. An advantage of ntcc over Max MSP or Petri Nets is that conditions and global constraints are represented declaratively.
Another system dealing with a hierarchical structure is OpenMusic @cite_5 . However, OpenMusic is software for composition, not for real-time interaction.
{ "cite_N": [ "@cite_5" ], "mid": [ "74362393" ], "abstract": [ "This paper presents the computer-assisted composition environment OpenMusic and introduces OM 5.0, a new cross-platform release. The characteristics of this system will be exposed, with examples of applications in music composition and analysis." ] }
1705.02861
2310936340
Multimedia scenarios have multimedia content and interactive events associated with computer programs. Interactive Scores (IS) is a formalism to represent such scenarios by temporal objects, temporal relations (TRs) and interactive events. IS describe TRs, but IS cannot represent TRs together with conditional branching. We propose a model for conditional branching timed IS in the Non-deterministic Timed Concurrent Constraint (ntcc) calculus. We ran a prototype of our model in Ntccrt (a real-time capable interpreter for ntcc) and the response time was acceptable for real-time interaction. An advantage of ntcc over Max MSP or Petri Nets is that conditions and global constraints are represented declaratively.
Another kind of system capable of real-time interaction is score-following systems (see @cite_6 ). Such systems track the performance of a real instrument and may play multimedia associated with certain notes of the piece. However, to use these systems it is necessary to play a real instrument, whereas to use IS the user only has to control some parameters of the piece, such as the start and end dates of the TOs. A model for multimedia interaction that does not require a real instrument uses Hidden Markov Models to model probabilistic installations @cite_41 . The system tracks human motion and responds to human performance with chords and pitches based on previous training. However, the system requires intensive training and is not a tool for composition.
{ "cite_N": [ "@cite_41", "@cite_6" ], "mid": [ "21776116", "2287649557" ], "abstract": [ "We present a description of two small audio visual immersive installations. The main framework is an interactive structure that enables multiple participants to generate jazz improvisations, loosely speaking. The first uses a Bayesian Network to respond to sung or played pitches with machine pitches, in a kind of constrained harmonic way. The second uses Bayesian Networks and Hidden Markov Models to track human motion, play reactive chords, and to respond to pitches both aurally and visually.", "Antescofois a modular anticipatory score following system that holds both instrumental and electronic scores together and is capable of executing electronic scores in synchronization with a live performance and using various controls over time. In its very basic use, it is a classical score following system, but in advanced use it enables concurrent representation and recognition of different audio descriptors (rather than pitch), control over various time scales used in music writing, and enables temporal interaction between the performance and the electronic score. Antescofo comes with a simple score language for flexible writing of time and interaction in computer music." ] }
1705.02936
2950459735
In this paper, we apply an efficient top- @math shortest distance routing algorithm to the link prediction problem and test its efficacy. We compare the results with other base line and state-of-the-art methods as well as with the shortest path. Our results show that using top- @math distances as a similarity measure outperforms classical similarity measures such as Jaccard and Adamic Adar.
Link prediction in social networks is a well-known and extensively studied problem. Links are predicted using either semantic or topological information of a given network. The main idea in the link prediction problem is to measure the similarity between two vertices that are not yet linked to each other; if the measured similarity is high enough, a future link is predicted. The work @cite_11 attempts to infer which new interactions among members of a social network are likely to occur in the near future. The authors develop approaches to link prediction based on measures for analyzing the "proximity" of nodes in a network.
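Two of the classical proximity measures compared in this line of work, Jaccard similarity and Adamic/Adar, can be sketched as follows (the toy graph here is an illustrative assumption):

```python
import math

# Toy undirected graph as an adjacency dict (symmetric by construction)
graph = {
    'a': {'b', 'c', 'd'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd', 'e'},
    'd': {'a', 'c'},
    'e': {'c'},
}

def jaccard(u, v):
    """|N(u) ∩ N(v)| / |N(u) ∪ N(v)| for the neighbour sets of u and v."""
    nu, nv = graph[u], graph[v]
    return len(nu & nv) / len(nu | nv)

def adamic_adar(u, v):
    """Sum of 1 / log(deg(z)) over common neighbours z of u and v;
    rarer common neighbours contribute more."""
    return sum(1.0 / math.log(len(graph[z])) for z in graph[u] & graph[v])

# b and d are not linked but share both of their neighbours (a and c),
# so both scores suggest a likely future link.
print(jaccard('b', 'd'))      # 1.0
print(adamic_adar('b', 'd'))
```

A link predictor built on such a measure simply ranks all non-adjacent vertex pairs by score and predicts the top-ranked pairs as future links.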
{ "cite_N": [ "@cite_11" ], "mid": [ "2148847267" ], "abstract": [ "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc." ] }
1705.02752
2612272943
We consider a classical k-center problem in trees. Let T be a tree of n vertices and every vertex has a nonnegative weight. The problem is to find k centers on the edges of T such that the maximum weighted distance from all vertices to their closest centers is minimized. Megiddo and Tamir (SIAM J. Comput., 1983) gave an algorithm that can solve the problem in @math time by using Cole's parametric search. Since then it has been open for over three decades whether the problem can be solved in @math time. In this paper, we present an @math time algorithm for the problem and thus settle the open problem affirmatively.
For the unweighted case, where the vertices of @math all have the same weight, an @math -time algorithm was given in @cite_29 for the @math -center problem. Later, @cite_26 solved the problem in @math time, and the algorithm was improved to @math time @cite_16 . Finally, Frederickson @cite_10 solved the problem in @math time. The above four papers also solve, within the same running times, the discrete case as well as the following variant: all points of @math are considered demand points and the centers are required to be at vertices of @math . Further, if all points of @math are demand points and the centers can be any points of @math , Megiddo and Tamir solved the problem in @math time @cite_11 , and the running time can be reduced to @math by applying Cole's parametric search @cite_21 .
{ "cite_N": [ "@cite_26", "@cite_29", "@cite_21", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2141178123", "", "1671942308", "1996896896", "1529668588", "2032624334" ], "abstract": [ "Many known algorithms are based on selection in a set whose cardinality is superlinear in terms of the input length. It is desirable in these cases to have selection algorithms that run in sublinear time in terms of the cardinality of the set. This paper presents a successful development in this direction. The methods developed here are applied to improve the previously known upper bounds for the time complexity of various location problems.", "", "Megiddo introduced a technique for using a parallel algorithm for one problem to construct an efficient serial algorithm for a second problem. We give a general method that trims a factor o f 0(logn) time (or more) for many applications of this technique.", "A succinct and easily searchable representation of the set of intervertex distances of a tree is given. Algorithms are presented for generating this representation, for searching it to select a kth longest path, and for searching it to locate a p-center. The complete algorithm for path selection is asymptotically optimal in the worst case, and the algorithms for p-center location improve on previous methods. The p-center results are extended to networks with independent cycles.", "Linear-time and -space algorithms are presented for solving three versions of the p-center problem in a tree. The techniques are an application of parametric search.", "An @math algorithm for the continuous p-center problem on a tree is presented. Following a sequence of previous algorithms, ours is the first one whose time bound in uniform in p and less than quadratic in n. We also present an @math algorithm for a weighted discrete p-center problem." ] }
1705.02752
2612272943
We consider a classical k-center problem in trees. Let T be a tree of n vertices and every vertex has a nonnegative weight. The problem is to find k centers on the edges of T such that the maximum weighted distance from all vertices to their closest centers is minimized. Megiddo and Tamir (SIAM J. Comput., 1983) gave an algorithm that can solve the problem in @math time by using Cole's parametric search. Since then it has been open for over three decades whether the problem can be solved in @math time. In this paper, we present an @math time algorithm for the problem and thus settle the open problem affirmatively.
Finding @math centers in a general graph is NP-hard @cite_5 . The geometric version of the problem in the plane, i.e., finding @math centers for @math demand points in the plane, is also NP-hard @cite_25 . Some special cases, however, are solvable in polynomial time. For example, if @math , then the problem can be solved in @math time @cite_14 , and if @math , it can be solved in @math time @cite_15 (also refer to @cite_22 for a faster randomized algorithm). If we require all centers to be on a given line, then the problem of finding @math centers can be solved in polynomial time @cite_7 @cite_8 @cite_0 . Recently, problems on uncertain data have been studied extensively, and several @math -center problem variations on uncertain data have also been considered, e.g., @cite_27 @cite_6 @cite_17 @cite_1 @cite_19 @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_1", "@cite_6", "@cite_0", "@cite_19", "@cite_27", "@cite_2", "@cite_5", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2258388518", "2162317280", "2158196818", "2755780781", "2964179137", "", "2605815790", "1628490320", "2110553125", "2303014626", "2010961994", "2046812523", "2133962824", "2345659331" ], "abstract": [ "", "For a set Pof npoints in i¾?2, the Euclidean 2-center problem computes a pair of congruent disks of the minimal radius that cover P. We extend this to the (2,k)-center problem where we compute the minimal radius pair of congruent disks to cover ni¾? kpoints of P. We present a randomized algorithm with O(nk7log3n) expected running time for the (2,k)-center problem. We also study the (p,k)-center problem in i¾?2under the i¾? i¾? -metric. We give solutions for p= 4 in O(kO(1)nlogn) time and for p= 5 in O(kO(1)nlog5n) time.", "In this paper we study several instances of the alignedk-center problem where the goal is, given a set of points S in the plane and a parameter k ⩾ 1, to find k disks with centers on a line l such that their union covers S and the maximum radius of the disks is minimized. This problem is a constrained version of the well-known k-center problem in which the centers are constrained to lie in a particular region such as a segment, a line, or a polygon. We first consider the simplest version of the problem where the line l is given in advance; we can solve this problem in time O(n log2 n). In the case where only the direction of l is fixed, we give an O(n2log2 n)-time algorithm. When l is an arbitrary line, we give a randomized algorithm with expected running time O(n4log2 n). Then we present (1+e)-approximation algorithms for these three problems. When we denote T(k, e) = (k e2+(k e) log k) log(1 e), these algorithms run in O(n log k + T(k, e)) time, O(n log k + T(k, e) e) time, and O(n log k + T(k, e) e2) time, respectively. 
For k = O(n1 3 log n), we also give randomized algorithms with expected running times O(n + (k e2) log(1 e)), O(n+(k e3) log(1 e)), and O(n + (k e4) log(1 e)), respectively.", "Given a set P of n points and a straight line L, we study three important variations of minimum enclosing circle problem as follows:", "We consider a coverage problem for uncertain points in a tree. Let T be a tree containing a set ( P ) of n (weighted) demand points, and the location of each demand point (P_i P ) is uncertain but is known to be in one of (m_i ) points on T each associated with a probability. Given a covering range ( ), the problem is to find a minimum number of points (called centers) on T to build facilities for serving (or covering) these demand points in the sense that for each uncertain point (P_i P ), the expected distance from (P_i ) to at least one center is no more than ( ). The problem has not been studied before. We present an (O(|T|+M ^2 M) ) time algorithm, where |T| is the number of vertices of T and M is the total number of locations of all uncertain points of ( P ), i.e., (M= _ P_i P m_i ).", "", "The (weighted) k-median, k-means, and k-center problems in the plane are known to be NP-hard. In this paper, we study these problems with an additional constraint that requires the sought k facilities to be on a given line. We present efficient algorithms for various distance measures such as L1,L2,L∞. We assume that all n weighted points are given sorted by their projections on the given line. For k-median, our algorithms for L1 and L∞ metrics run in O(min nk,nklog nlog n,n2O(log klog log n)log n ) time and O(min nklog n,nklog nlog2n,n2O(log klog log n)log2n ) time, respectively. For k-means, which is defined only on the squared L2 distance, we give an O(min nk,nklog n,n2O(log klog log n) ) time algorithm. For k-center, our algorithms run in O(nlog n) time for all three metrics, and in O(n) time for the unweighted version under L1 and L∞ metrics. 
While our results for the k-center problem are optimal, the results for the k-median problem almost match the best algorithms for the corresponding one-dimensio...", "Problems on uncertain data have attracted significant attention due to the imprecise nature of many measurement data. In this paper, we consider the k-center problem on one-dimensional uncertain data. The input is a set P of (weighted) uncertain points on a real line, and each uncertain point is specified by its probability density function (pdf) which is a piecewise-uniform function (i.e., a histogram). The goal is to find a set Q of k points on the line to minimize the maximum expected distance from the uncertain points of P to their expected closest points in Q. We present efficient algorithms for this uncertain k-center problem and their running times almost match those for the \"deterministic\" k-center problem.", "There is an increasing quantity of data with uncertainty arising from applications such as sensor network measurements, record linkage, and as output of mining algorithms. This uncertainty is typically formalized as probability density functions over tuple values. Beyond storing and processing such data in a DBMS, it is necessary to perform other data analysis tasks such as data mining. We study the core mining problem of clustering on uncertain data, and define appropriate natural generalizations of standard clustering optimization criteria. Two variations arise, depending on whether a point is automatically associated with its optimal center, or whether it must be assigned to a fixed cluster no matter where it is actually located. For uncertain versions of k-means and k-median, we show reductions to their corresponding weighted versions on data with no uncertainties. These are simple in the unassigned case, but require some care for the assigned version. Our most interesting results are for uncertain k-center, which generalizes both traditional k-center and k-median objectives. 
We show a variety of bicriteria approximation algorithms. One picks O(ke--1log2n) centers and achieves a (1 + e) approximation to the best uncertain k-centers. Another picks 2k centers and achieves a constant factor approximation. Collectively, these results are the first known guaranteed approximation algorithms for the problems of clustering uncertain data.", "We consider the one-dimensional one-center problem on uncertain data. We are given a set P of n (weighted) uncertain points on a real line L and each uncertain point is specified by a probability density function that is a piecewise-uniform function (i.e., a histogram). The goal is to find a point c (the center) on L such that the maximum expected distance from c to all uncertain points of P is minimized. We present a linear-time algorithm for this problem.", "Problems of finding p-centers and dominating sets of radius r in networks are discussed in this paper. Let n be the number of vertices and @math be the number of edges of a network. With the assumption that the distance-matrix of the network is available, we design an @math algorithm for finding an absolute 1-center of a vertex-weighted network and an @math algorithm for finding an absolute 1-center of a vertex-unweighted network (the problem of finding a vertex 1-center of a network is trivial). We show that the problem of finding a (vertex or absolute) p-center (for @math ) of a (vertex-weighted or vertex-unweighted) network, and the problem of finding a dominating set of radius r are @math -hard even in the case where the network has a simple structure (e.g., a planar graph of maximum vertex degree 3). However, we describe an algorithm of complexity @math (respectively, $ O[| E |^p n^ 2p - 1...", "Abstract This paper considers the planar Euclidean two-center problem: given a planar n-point set S, find two congruent circular disks of the smallest radius covering S. 
The main result is a deterministic algorithm with running time O(nlog2nlog2logn), improving the previous O(nlog9n) bound of Sharir and almost matching the randomized O(nlog2n) bound of Eppstein. If a point in the intersection of the two disks is given, then we can solve the problem in O(nlogn) time with high probability.", "Given n demand points in the plane, the p-center problem is to find p supply points (anywhere in the plane) so as to minimize the maximum distance from a demand point to its respective nearest supply point. The p-median problem is to minimize the sum of distances from demand points to their respective nearest supply points. We prove that the p-center and the p-median problems relative to both the Euclidean and the rectilinear metrics are NP-hard. In fact, we prove that it is NP-hard even to approximate the p-center problems sufficiently closely. The reductions are from 3-satisfiability.", "Uncertain data has been very common in many applications. In this paper, we consider the one-center problem for uncertain data on tree networks. In this problem, we are given a tree T and n (weighted) uncertain points each of which has m possible locations on T associated with probabilities. The goal is to find a point @math xź on T such that the maximum (weighted) expected distance from @math xź to all uncertain points is minimized. To the best of our knowledge, this problem has not been studied before. We propose a refined prune-and-search technique that solves the problem in linear time." ] }
1705.02772
2949269345
The character information in natural scene images often contains personal information, such as telephone numbers and home addresses. There is a high risk of leaking this information if such images are published. In this paper, we propose a scene text erasing method that hides this information via an inpainting convolutional neural network (CNN) model. The input is a scene text image, and the output is expected to be a text-erased image with all character regions filled with the colors of the surrounding background pixels. This is accomplished by a CNN model with a convolution-to-deconvolution structure and interconnections between the two parts. The training samples and the corresponding inpainted images serve as teaching signals for training. To evaluate the text erasing performance, the output images are processed by a novel scene text detection method. The same text detection measurement is then applied to the images in the benchmark dataset ICDAR2013. Compared with direct text detection, the scene text erasing process yields a drastic decrease in precision, recall, and f-score, which demonstrates the effectiveness of the proposed method for erasing text in natural scene images.
Generally, text detection methods detect text through either a connected component analysis (CCA)-based procedure or a sliding window-based procedure. The CCA-based methods @cite_21 @cite_15 involve character candidate extraction, character/non-character classification, and text grouping. The sliding window-based methods @cite_5 @cite_12 extract regional textual features, such as HOG, LBP @cite_6 , or CNN features, from regions scanned discretely over the image at multiple scales and aspect ratios, and then score the regions by feeding the features to a pretrained text/non-text classifier. Regions with high text scores are finally grouped into text regions. Sometimes, image pre-processing or post-processing techniques are added to the two pipelines. For text erasing, further processing is required, for instance, to fill the text regions with background colors.
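The sliding window-based pipeline described above can be sketched as a multi-scale scan. Here `score_fn` stands in for the pretrained text/non-text classifier (e.g., HOG+SVM or a CNN), and the window size, stride, scales, and threshold are illustrative assumptions:

```python
import numpy as np

def sliding_window_detect(image, score_fn, win=32, stride=16, scales=(1.0, 0.5)):
    """Generic multi-scale sliding-window scan: score_fn maps a
    2-D window to a text-likeness score; windows scoring above the
    threshold are kept as candidate text boxes (x, y, size, score)
    in original image coordinates."""
    boxes = []
    for s in scales:
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        if h < win or w < win:
            continue
        # nearest-neighbour resize keeps the sketch dependency-free
        ys = (np.arange(h) / s).astype(int)
        xs = (np.arange(w) / s).astype(int)
        scaled = image[np.ix_(ys, xs)]
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                score = score_fn(scaled[y:y + win, x:x + win])
                if score > 0.5:
                    # map the box back to original coordinates
                    boxes.append((int(x / s), int(y / s), int(win / s), score))
    return boxes
```

A real system would replace `score_fn` with a trained classifier and add non-maximum suppression before grouping the surviving windows into text regions.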
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "117491841", "2002496181", "1607307044", "2142159465", "2056435187" ], "abstract": [ "Maximally Stable Extremal Regions (MSERs) have achieved great success in scene text detection. However, this low-level pixel operation inherently limits its capability for handling complex text information efficiently (e. g. connections between text or background components), leading to the difficulty in distinguishing texts from background components. In this paper, we propose a novel framework to tackle this problem by leveraging the high capability of convolutional neural network (CNN). In contrast to recent methods using a set of low-level heuristic features, the CNN network is capable of learning high-level features to robustly identify text components from text-like outliers (e.g. bikes, windows, or leaves). Our approach takes advantages of both MSERs and sliding-window based methods. The MSERs operator dramatically reduces the number of windows scanned and enhances detection of the low-quality texts. While the sliding-window with CNN is applied to correctly separate the connections of multiple characters in components. The proposed system achieved strong robustness against a number of extreme text variations and serious real-world problems. It was evaluated on the ICDAR 2011 benchmark dataset, and achieved over 78 in F-measure, which is significantly higher than previous methods.", "in this paper, we propose a new framework in pedestrian detection by combining the HOG and uniform LBP feature on blocks. Contrast experiment result shows that detector using combined features is more powerful than one single feature. To further improve the detection performance, we make a contrast experiment that the HOG-LBP features are calculated at variable-size blocks to find the most efficient feature vector. The linear SVM is used to train the pedestrian classifier. 
Results presented on the INRIA dataset show that our detector is more discriminative and robust than the state-of-the-art algorithms.", "Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003.", "We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.", "An unconstrained end-to-end text localization and recognition method is presented. The method introduces a novel approach for character detection and recognition which combines the advantages of sliding-window and connected component methods. Characters are detected and recognized as image regions which contain strokes of specific orientations in a specific relative position, where the strokes are efficiently detected by convolving the image gradient field with a set of oriented bar filters. 
Additionally, a novel character representation efficiently calculated from the values obtained in the stroke detection phase is introduced. The representation is robust to shift at the stroke level, which makes it less sensitive to intra-class variations and the noise induced by normalizing character size and positioning. The effectiveness of the representation is demonstrated by the results achieved in the classification of real-world characters using an euclidian nearest-neighbor classifier trained on synthetic data in a plain form. The method was evaluated on a standard dataset, where it achieves state-of-the-art results in both text localization and recognition." ] }
1705.02969
2613688792
We propose dynamic sampled stochastic approximation (SA) methods for stochastic optimization with a heavy-tailed distribution (with finite 2nd moment). The objective is the sum of a smooth convex function and a convex regularizer. Typically, an oracle with an upper bound @math on its variance (OUBV) is assumed. Differently, we assume an oracle with . This rarely addressed setup is more aggressive but realistic, where the variance may not be bounded. Our methods achieve optimal iteration complexity and (near) optimal oracle complexity. For the smooth convex class, we use an accelerated SA method a la FISTA which achieves, given tolerance @math , the optimal iteration complexity of @math with a near-optimal oracle complexity of @math . This improves upon Ghadimi and Lan [, 156:59-99, 2016], where an OUBV is assumed. For the strongly convex class, our method achieves optimal iteration complexity of @math and optimal oracle complexity of @math . This improves upon [, 134:127-155, 2012], where an OUBV is assumed. In terms of variance, our bounds are local: they depend on the variances @math at solutions @math and on the per unit distance multiplicative variance @math . For the smooth convex class, there exist policies such that our bounds resemble those obtained if an OUBV with @math were assumed. For the strongly convex class, such a property is obtained exactly if the condition number is estimated, or in the limit for better conditioned problems or for larger initial batch sizes. In any case, if an OUBV is assumed, our bounds are much sharper since typically @math .
The performance of an SA method can be measured by its iteration complexity and its oracle complexity, given a tolerance @math on the mean optimality gap. The first is the total number of iterations, a measure of the optimization error, while the second is the total number of samples and oracle calls, a measure of the estimation error. Statistical lower bounds @cite_22 show that the optimal oracle complexities are @math for the smooth convex class and @math for the smooth strongly convex class. Still, an important question remains ( Q ): ? Related to this question is the ability of a method to treat the with .
{ "cite_N": [ "@cite_22" ], "mid": [ "2096840748" ], "abstract": [ "Relative to the large literature on upper bounds on complexity of convex optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We introduce a new notion of discrepancy between functions, and use it to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods. Using this approach, we improve upon known results and obtain tight minimax complexity estimates for various function classes." ] }
1705.02969
2613688792
We propose dynamic sampled stochastic approximation (SA) methods for stochastic optimization with a heavy-tailed distribution (with finite 2nd moment). The objective is the sum of a smooth convex function and a convex regularizer. Typically, an oracle with an upper bound @math on its variance (OUBV) is assumed. Differently, we assume an oracle with . This rarely addressed setup is more aggressive but realistic, where the variance may not be bounded. Our methods achieve optimal iteration complexity and (near) optimal oracle complexity. For the smooth convex class, we use an accelerated SA method a la FISTA which achieves, given tolerance @math , the optimal iteration complexity of @math with a near-optimal oracle complexity of @math . This improves upon Ghadimi and Lan [, 156:59-99, 2016], where an OUBV is assumed. For the strongly convex class, our method achieves optimal iteration complexity of @math and optimal oracle complexity of @math . This improves upon [, 134:127-155, 2012], where an OUBV is assumed. In terms of variance, our bounds are local: they depend on the variances @math at solutions @math and on the per unit distance multiplicative variance @math . For the smooth convex class, there exist policies such that our bounds resemble those obtained if an OUBV with @math were assumed. For the strongly convex class, such a property is obtained exactly if the condition number is estimated, or in the limit for better conditioned problems or for larger initial batch sizes. In any case, if an OUBV is assumed, our bounds are much sharper since typically @math .
The initial version of the SG method uses with stepsizes satisfying @math and @math , typically @math . A fundamental improvement with respect to the estimation error was Polyak-Ruppert's scheme @cite_37 @cite_47 @cite_45 @cite_34 , where @math are used with a subsequent of the iterates with weights @math (this is sometimes called ). , such a scheme obtains optimal iteration and oracle complexities of @math for the smooth convex class and of @math for the smooth strongly convex class. These are also the size of the final ergodic average, a measure of the additional implicitly required in iterate averaging schemes. Hence, such methods are efficient in terms of oracle complexity. Iterate averaging was then extensively explored (see e.g. @cite_48 @cite_5 @cite_50 @cite_24 @cite_43 @cite_55 @cite_23 ). The important work @cite_43 exploits the robustness of iterate averaging in SA methods and shows that such schemes can outperform the SAA approach on relevant convex problems. For the strongly convex class, @cite_21 gives a detailed non-asymptotic robust analysis of the Polyak-Ruppert averaging scheme. See also @cite_14 @cite_25 @cite_52 for improvements.
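The Polyak-Ruppert scheme described above can be sketched as plain SGD with long stepsizes a_k = c*k^(-alpha), 1/2 < alpha < 1, combined with a running average of the iterates. Here `grad_fn` stands in for the stochastic-gradient oracle, and all constants are illustrative:

```python
import numpy as np

def sgd_polyak_ruppert(grad_fn, x0, steps, c=1.0, alpha=0.6):
    """SGD with 'long' stepsizes a_k = c * k**(-alpha), 1/2 < alpha < 1,
    followed by averaging of the iterates (Polyak-Ruppert scheme).
    grad_fn(x) returns a stochastic gradient at x."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for k in range(1, steps + 1):
        x = x - c * k ** (-alpha) * grad_fn(x)
        avg += (x - avg) / k  # running average of the iterates
    return avg
```

Averaging damps the extra variance injected by the long steps, which is what yields the improved estimation-error behavior discussed above.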
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_48", "@cite_55", "@cite_21", "@cite_52", "@cite_24", "@cite_43", "@cite_45", "@cite_50", "@cite_23", "@cite_5", "@cite_47", "@cite_34", "@cite_25" ], "mid": [ "186028470", "2154682027", "", "2205628031", "", "", "", "1992208280", "59018853", "", "2135499062", "2063746322", "2086161653", "", "" ], "abstract": [ "", "We give novel algorithms for stochastic strongly-convex optimization in the gradient oracle model which return a O(1 T)-approximate solution after T iterations. The first algorithm is deterministic, and achieves this rate via gradient updates and historical averaging. The second algorithm is randomized, and is based on pure gradient steps with a random step size. his rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.", "", "We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop extensions of Nesterov's dual averaging method, that can exploit the regularization structure in an online setting. 
At each iteration of these methods, the learning variables are adjusted by solving a simple minimization problem that involves the running average of all past subgradients of the loss function and the whole regularization term, not just its subgradient. In the case of l1-regularization, our method is particularly effective in obtaining sparse solutions. We show that these methods achieve the optimal convergence rates or regret bounds that are standard in the literature on stochastic and online convex optimization. For stochastic learning problems in which the loss functions have Lipschitz continuous gradients, we also present an accelerated version of the dual averaging method.", "", "", "", "In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. 
We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.", "", "", "Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a low-dimensional manifold of parameter space along which the regularizer is smooth. (When an l1 regularizer is used to induce sparsity in the solution, for example, this manifold is defined by the set of nonzero components of the parameter vector.) This paper shows that a regularized dual averaging algorithm can identify this manifold, with high probability, before reaching the solution. This observation motivates an algorithmic strategy in which, once an iterate is suspected of lying on an optimal or near-optimal manifold, we switch to a \"local phase\" that searches in this manifold, thus converging rapidly to a near-optimal point. Computational results are presented to verify the identification property and to illustrate the effectiveness of this approach.", "Given a collection of @math different estimators or classifiers, we study the problem of model selection type aggregation, i.e., we construct a new estimator or classifier, called aggregate, which is nearly as good as the best among them with respect to a given risk criterion. We define our aggregate by a simple recursive procedure which solves an auxiliary stochastic linear programming problem related to the original non-linear one and constitutes a special case of the mirror averaging algorithm. We show that the aggregate satisfies sharp oracle inequalities under some general assumptions. 
The results allow one to construct in an easy way sharp adaptive nonparametric estimators for several problems including regression, classification and density estimation.", "A new recursive algorithm of stochastic approximation type with the averaging of trajectories is investigated. Convergence with probability one is proved for a variety of classical optimization and identification problems. It is also demonstrated for these problems that the proposed algorithm achieves the highest possible rate of convergence.", "", "" ] }
1705.02969
2613688792
We propose dynamic sampled stochastic approximation (SA) methods for stochastic optimization with a heavy-tailed distribution (with finite 2nd moment). The objective is the sum of a smooth convex function and a convex regularizer. Typically, an oracle with an upper bound @math on its variance (OUBV) is assumed. Differently, we assume an oracle with . This rarely addressed setup is more aggressive but realistic, where the variance may not be bounded. Our methods achieve optimal iteration complexity and (near) optimal oracle complexity. For the smooth convex class, we use an accelerated SA method a la FISTA which achieves, given tolerance @math , the optimal iteration complexity of @math with a near-optimal oracle complexity of @math . This improves upon Ghadimi and Lan [, 156:59-99, 2016], where an OUBV is assumed. For the strongly convex class, our method achieves optimal iteration complexity of @math and optimal oracle complexity of @math . This improves upon [, 134:127-155, 2012], where an OUBV is assumed. In terms of variance, our bounds are local: they depend on the variances @math at solutions @math and on the per unit distance multiplicative variance @math . For the smooth convex class, there exist policies such that our bounds resemble those obtained if an OUBV with @math were assumed. For the strongly convex class, such a property is obtained exactly if the condition number is estimated, or in the limit for better conditioned problems or for larger initial batch sizes. In any case, if an OUBV is assumed, our bounds are much sharper since typically @math .
In the seminal work @cite_35 of Nesterov, a novel accelerated scheme for unconstrained smooth convex optimization with an exact oracle is presented, obtaining the optimal rate @math . This improves upon the @math rate of standard methods. Motivated by the importance of regularized problems, this result was further generalized to convex optimization in @cite_16 @cite_36 @cite_28 and in @cite_53 , where the stochastic oracle was considered. Assuming and one oracle call per iteration, @cite_53 obtains iteration and oracle complexities of @math , allowing larger values of @math . See also @cite_51 @cite_55 . In @cite_19 @cite_9 , the strongly convex class was considered and iteration and oracle complexities of @math were obtained, allowing for larger values of the condition number @math . See @cite_55 @cite_23 @cite_40 for considerations on sparse solutions.
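The accelerated proximal-gradient scheme discussed above (FISTA-style, for a smooth term plus a simple regularizer) can be sketched as follows, using the standard Beck-Teboulle momentum sequence; the soft-thresholding prox makes it concrete for an l1 regularizer, and all names are illustrative:

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, iters):
    """FISTA (accelerated proximal gradient) for min f(x) + g(x):
    f smooth with L-Lipschitz gradient, g with a cheap prox.
    Achieves the optimal O(1/k^2) rate on this class."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)          # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # momentum sequence
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)       # extrapolation
        x, t = x_new, t_new
    return x

def soft_threshold(v, tau):
    # prox of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Each iteration costs one gradient and one prox evaluation; in the stochastic setting, `grad_f` is replaced by a (mini-batch) stochastic gradient, which is the setting of the accelerated SA methods cited above.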
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_28", "@cite_53", "@cite_55", "@cite_9", "@cite_19", "@cite_40", "@cite_23", "@cite_16", "@cite_51" ], "mid": [ "2969945254", "2100556411", "", "", "2205628031", "2168909589", "2045744861", "1965193428", "2135499062", "2146482778", "2167302917" ], "abstract": [ "", "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "", "", "We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop extensions of Nesterov's dual averaging method, that can exploit the regularization structure in an online setting. At each iteration of these methods, the learning variables are adjusted by solving a simple minimization problem that involves the running average of all past subgradients of the loss function and the whole regularization term, not just its subgradient. In the case of l1-regularization, our method is particularly effective in obtaining sparse solutions. 
We show that these methods achieve the optimal convergence rates or regret bounds that are standard in the literature on stochastic and online convex optimization. For stochastic learning problems in which the loss functions have Lipschitz continuous gradients, we also present an accelerated version of the dual averaging method.", "In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by Ghadimi and Lan in [SIAM J. Optim., 22 (2012), pp 1469--1492]. Moreover, we introduce a multistage AC-SA algorithm, which possesses an optimal rate of convergence for solving strongly convex SCO problems in terms of the dependence on not only the target accuracy, but also a number of problem parameters and the selection of initial points. To the best of our knowledge, this is the first time that such an optimal method has been presented in the literature. From our computational results, these AC-SA algorithms can substantially outperform the classical SA and some other SA type algorithms for solving certain classes of strongly...", "In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies, can achieve optimal or nearly optimal rates of convergence for solving different classes of SCO problems during a given number of iterations. 
Moreover, we investigate these AC-SA algorithms in more detail, such as by establishing the large-deviation results associated with the convergence rates and introducing an efficient validation procedure to check the accuracy of the generated solutions.", "We propose a new stochastic first-order algorithm for solving sparse regression problems. In each iteration, our algorithm utilizes a stochastic oracle of the subgradient of the objective function. Our algorithm is based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory lectures on convex optimization: a basic course, Kluwer, Amsterdam, 2003). The convergence rate of our algorithm depends continuously on the noise level of the gradient. In particular, in the limiting case of noiseless gradient, the convergence rate of our algorithm is the same as that of optimal deterministic gradient algorithms. We also establish some large deviation properties of our algorithm. Unlike existing stochastic gradient methods with optimal convergence rates, our algorithm has the advantage of readily enforcing sparsity at all iterations, which is a critical property for applications of sparse regressions.", "Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a low-dimensional manifold of parameter space along which the regularizer is smooth. (When an l1 regularizer is used to induce sparsity in the solution, for example, this manifold is defined by the set of nonzero components of the parameter vector.) This paper shows that a regularized dual averaging algorithm can identify this manifold, with high probability, before reaching the solution. 
This observation motivates an algorithmic strategy in which, once an iterate is suspected of lying on an optimal or near-optimal manifold, we switch to a \"local phase\" that searches in this manifold, thus converging rapidly to a near-optimal point. Computational results are presented to verify the identification property and to illustrate the effectiveness of this approach.", "In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two convex terms: one is smooth and given by a black-box oracle, and another is general but simple and its structure is known. Despite to the bad properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the good part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (converge as O (1 k)), and an accelerated multistep version with convergence rate O (1 k2), where k isthe iteration counter. For all methods, we suggest some efficient \"line search\" procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We present also the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.", "Regularized risk minimization often involves non-smooth optimization, either because of the loss function (e.g., hinge loss) or the regularizer (e.g., l1-regularizer). Gradient methods, though highly scalable and easy to implement, are known to converge slowly. In this paper, we develop a novel accelerated gradient method for stochastic optimization while still preserving their computational simplicity and scalability. 
The proposed algorithm, called SAGE (Stochastic Accelerated GradiEnt), exhibits fast convergence rates on stochastic composite optimization with convex or strongly convex objectives. Experimental results show that SAGE is faster than recent (sub)gradient methods including FOLOS, SMIDAS and SCD. Moreover, SAGE can also be extended for online learning, resulting in a simple algorithm but with the best regret bounds currently known for these problems." ] }
1705.02544
2612135493
In this paper, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although convolutional neural networks (CNNs) have made substantial improvements in human attention prediction, CNN-based attention models still need to be improved by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark data sets demonstrates that our method yields state-of-the-art performance with competitive inference time. Our source code is available at https://github.com/wenguanwang/deepattention.
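The multi-level fusion idea described above can be sketched as follows. This is a minimal numpy illustration under simplifying assumptions: coarse per-layer predictions are upsampled and simply averaged, whereas the actual model learns the combination end-to-end with deep supervision; the function names are hypothetical.

```python
import numpy as np

def upsample(sal_map, target_shape):
    """Nearest-neighbor upsampling of a coarse saliency map to the target size."""
    fy = target_shape[0] // sal_map.shape[0]
    fx = target_shape[1] // sal_map.shape[1]
    return np.repeat(np.repeat(sal_map, fy, axis=0), fx, axis=1)

def fuse_saliency(layer_maps, target_shape):
    """Fuse per-layer saliency predictions into one map.

    layer_maps: list of 2D arrays, from deep/coarse (global saliency)
    to shallow/fine (local saliency) layers. Here they are averaged
    after upsampling; the paper instead learns this fusion.
    """
    ups = [upsample(m, target_shape) for m in layer_maps]
    fused = np.mean(ups, axis=0)
    # normalize the fused prediction to [0, 1]
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
    return fused
```

The averaging step stands in for the learned combination; replacing `np.mean` with learned per-layer weights recovers something closer in spirit to the skip-layer fusion.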
Traditional saliency algorithms, with a long history, targeted eye fixation prediction, which refers to the task of identifying the fixation points that human viewers would focus on at first glance. The work of Itti @cite_33 , which was inspired by the Koch and Ullman model @cite_57 , was one of the earliest computational models. Since then, many follow-up works @cite_9 have been proposed in this direction. In recent decades, there has been a new wave in saliency detection @cite_49 @cite_15 @cite_23 @cite_59 @cite_58 @cite_35 @cite_64 @cite_32 @cite_50 @cite_7 that concentrates on uniformly highlighting the most salient object regions in an image, starting with the works of @cite_36 and @cite_14 . These later methods, also known as salient object detection, are directly driven by object-level tasks. In this study, we mainly review the typical works of the first type of saliency model, since our method tries to predict human eye fixations over an image. We refer the reader to recent literature ( @cite_52 and @cite_25 ) for more detailed overviews.
{ "cite_N": [ "@cite_35", "@cite_64", "@cite_14", "@cite_33", "@cite_7", "@cite_36", "@cite_9", "@cite_32", "@cite_52", "@cite_57", "@cite_49", "@cite_23", "@cite_59", "@cite_50", "@cite_15", "@cite_58", "@cite_25" ], "mid": [ "2055611119", "2501148868", "2100470808", "2128272608", "2757028014", "2157554677", "2141041441", "2606302360", "2164084182", "1497599070", "1894057436", "1903001680", "1932188298", "2585592883", "2520274358", "2036973297", "1772076007" ], "abstract": [ "This paper proposes a superpixel-based spatiotemporal saliency model for saliency detection in videos. Based on the superpixel representation of video frames, motion histograms and color histograms are extracted at the superpixel level as local features and frame level as global features. Then, superpixel-level temporal saliency is measured by integrating motion distinctiveness of superpixels with a scheme of temporal saliency prediction and adjustment, and superpixel-level spatial saliency is measured by evaluating global contrast and spatial sparsity of superpixels. Finally, a pixel-level saliency derivation method is used to generate pixel-level temporal and spatial saliency maps, and an adaptive fusion method is exploited to integrate them into the spatiotemporal saliency map. Experimental results on two public datasets demonstrate that the proposed model outperforms six state-of-the-art spatiotemporal saliency models in terms of both saliency detection and human fixation prediction.", "This paper proposes an effective spatiotemporal saliency model for unconstrained videos with complicated motion and complex scenes. First, superpixel-level motion and color histograms as well as global motion histogram are extracted as the features for saliency measurement. 
Then a superpixel-level graph with the addition of a virtual background node representing the global motion is constructed, and an iterative motion saliency (MS) measurement method that utilizes the shortest path algorithm on the graph is exploited to reasonably generate MS maps. Temporal propagation of saliency in both forward and backward directions is performed using efficient operations on inter-frame similarity matrices to obtain the integrated temporal saliency maps with the better coherence. Finally, spatial propagation of saliency both locally and globally is performed via the use of intra-frame similarity matrices to obtain the spatiotemporal saliency maps with the even better quality. The experimental results on two video data sets with various unconstrained videos demonstrate that the proposed model consistently outperforms the state-of-the-art spatiotemporal saliency models on saliency detection performance.", "Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. In this paper, we introduce a method for salient region detection that outputs full resolution saliency maps with well-defined boundaries of salient objects. These boundaries are preserved by retaining substantially more frequency content from the original image than other existing techniques. Our method exploits features of color and luminance, is simple to implement, and is computationally efficient. We compare our algorithm to five state-of-the-art salient region detection methods with a frequency domain analysis, ground truth, and a salient object segmentation application. Our method outperforms the five algorithms both on the ground-truth evaluation and on the segmentation task by achieving both higher precision and better recall.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. 
Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.", "This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).", "We study visual attention by detecting a salient object in an input image. We formulate salient object detection as an image segmentation problem, where we separate the salient object from the image background. 
We propose a set of novel features including multi-scale contrast, center-surround histogram, and color spatial distribution to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. We also constructed a large image database containing tens of thousands of carefully labeled images by multiple users. To our knowledge, it is the first large image database for quantitative evaluation of visual attention algorithms. We validate our approach on this image database, which is public available with this paper.", "Saliency mechanisms play an important role when visual recognition must be performed in cluttered scenes. We propose a computational definition of saliency that deviates from existing models by equating saliency to discrimination. In particular, the salient attributes of a given visual class are defined as the features that enable best discrimination between that class and all other classes of recognition interest. It is shown that this definition leads to saliency algorithms of low complexity, that are scalable to large recognition problems, and is compatible with existing models of early biological vision. Experimental results demonstrating success in the context of challenging recognition problems are also presented.", "This paper proposes an effective salient object segmentation method via the graph-based integration of saliency and objectness. Based on the superpixel segmentation result of the input image, a graph is built to represent superpixels using regular vertex, background seed vertex with the addition of a terminal vertex. The edge weights on the graph are defined by integrating the difference of appearance, saliency, and objectness between superpixels. 
Then, the object probability of each superpixel is measured by finding the shortest path from the corresponding vertex to the terminal vertex on the graph, and the resultant object probability map can generally better highlight salient objects and suppress background regions compared to both saliency map and objectness map. Finally, the object probability map is used to initialize salient object and background, and effectively incorporated into the framework of graph cut to obtain the final salient object segmentation result. Extensive experimental results on three public benchmark datasets show that the proposed method consistently improves the salient object segmentation performance and outperforms the state-of-the-art salient object segmentation methods. Furthermore, experimental results also demonstrate that the proposed graph-based integration method is more effective than other fusion schemes and robust to saliency maps generated using various saliency models.", "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. 
Finally, we highlight current research trends in attention modeling and provide insights for future.", "A number of psychophysical studies concerning the detection, localization and recognition of objects in the visual field have suggested a two-stage theory of human visual perception. The first stage is the “preattentive” mode, in which simple features are processed rapidly and in parallel over the entire visual field. In the second, “attentive” mode, a specialized processing focus, usually called the focus of attention, is directed to particular locations in the visual field. The analysis of complex forms and the recognition of objects are associated with this second stage.1 The computational justification for such a hypothesis comes from the realization that while it is possible to imagine specific algorithms performing tasks such as shape analysis and recognition at specific locations, it is difficult to imagine these algorithms operating in parallel over the whole visual scene, since such an approach will quickly lead to a combinatorial explosion in terms of required computational resources.2 This is essentially the major critique of Minsky and Papert to a universal application of perceptrons in visual perception.3 Taken together, these empirical and theoretical studies suggest that beyond a certain preprocessing stage, the analysis of visual information proceeds in a sequence of operations, each one applied to a selected location (or locations).", "Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. 
For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0 and 13.2 respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7 and 35.1 respectively on these two datasets.", "Saliency propagation has been widely adopted for identifying the most attractive object in an image. The propagation sequence generated by existing saliency detection methods is governed by the spatial relationships of image regions, i.e., the saliency value is transmitted between two adjacent regions. However, for the inhomogeneous difficult adjacent regions, such a sequence may incur wrong propagations. In this paper, we attempt to manipulate the propagation sequence for optimizing the propagation quality. Intuitively, we postpone the propagations to difficult regions and meanwhile advance the propagations to less ambiguous simple regions. Inspired by the theoretical results in educational psychology, a novel propagation algorithm employing the teaching-to-learn and learning-to-teach strategies is proposed to explicitly improve the propagation quality. 
In the teaching-to-learn step, a teacher is designed to arrange the regions from simple to difficult and then assign the simplest regions to the learner. In the learning-to-teach step, the learner delivers its learning confidence to the teacher to assist the teacher to choose the subsequent simple regions. Due to the interactions between the teacher and learner, the uncertainty of original difficult regions is gradually reduced, yielding manifest salient objects with optimized background suppression. Extensive experimental results on benchmark saliency datasets demonstrate the superiority of the proposed algorithm over twelve representative saliency detectors.", "Existing salient object detection models favor over-segmented regions upon which saliency is computed. Such local regions are less effective on representing object holistically and degrade emphasis of entire salient objects. As a result, the existing methods often fail to highlight an entire object in complex background. Toward better grouping of objects and background, in this paper, we consider graph cut, more specifically, the normalized graph cut (Ncut) for saliency detection. Since the Ncut partitions a graph in a normalized energy minimization fashion, resulting eigenvectors of the Ncut contain good cluster information that may group visual contents. Motivated by this, we directly induce saliency maps via eigenvectors of the Ncut, contributing to accurate saliency estimation of visual clusters. We implement the Ncut on a graph derived from a moderate number of superpixels. This graph captures both intrinsic color and edge information of image data. Starting from the superpixels, an adaptive multi-level region merging scheme is employed to seek such cluster information from Ncut eigenvectors. With developed saliency measures for each merged region, encouraging performance is obtained after across-level integration. 
Experiments by comparing with 13 existing methods on four benchmark datasets, including MSRA-1000, SOD, SED, and CSSD show the proposed method, Ncut saliency, results in uniform object enhancement and achieves comparable better performance to the state-of-the-art methods.", "Video saliency, aiming for estimation of a single dominant object in a sequence, offers strong object-level cues for unsupervised video object segmentation. In this paper, we present a geodesic distance based technique that provides reliable and temporally consistent saliency measurement of superpixels as a prior for pixel-wise labeling. Using undirected intra-frame and inter-frame graphs constructed from spatiotemporal edges or appearance and motion, and a skeleton abstraction step to further enhance saliency estimates, our method formulates the pixel-wise segmentation task as an energy minimization problem on a function that consists of unary terms of global foreground and background models, dynamic location models, and pairwise terms of label smoothness potentials. We perform extensive quantitative and qualitative experiments on benchmark datasets. Our method achieves superior performance in comparison to the current state-of-the-art in terms of accuracy and speed.", "In this paper, we show that large annotated data sets have great potential to provide strong priors for saliency estimation rather than merely serving for benchmark evaluations. To this end, we present a novel image saliency detection method called saliency transfer. Given an input image, we first retrieve a support set of best matches from the large database of saliency annotated images. Then, we assign the transitional saliency scores by warping the support set annotations onto the input image according to computed dense correspondences. 
To incorporate context, we employ two complementary correspondence strategies: a global matching scheme based on scene-level analysis and a local matching scheme based on patch-level inference. We then introduce two refinement measures to further refine the saliency maps and apply the random-walk-with-restart by exploring the global saliency structure to estimate the affinity between foreground and background assignments. Extensive experimental results on four publicly available benchmark data sets demonstrate that the proposed saliency algorithm consistently outperforms the current state-of-the-art methods.", "This paper proposes a novel saliency detection framework termed as saliency tree. For effective saliency measurement, the original image is first simplified using adaptive color quantization and region segmentation to partition the image into a set of primitive regions. Then, three measures, i.e., global contrast, spatial sparsity, and object prior are integrated with regional similarities to generate the initial regional saliency for each primitive region. Next, a saliency-directed region merging approach with dynamic scale control scheme is proposed to generate the saliency tree, in which each leaf node represents a primitive region and each non-leaf node represents a non-primitive region generated during the region merging process. Finally, by exploiting a regional center-surround scheme based node selection criterion, a systematic saliency tree analysis including salient node selection, regional saliency adjustment and selection is performed to obtain final regional saliency measures and to derive the high-quality pixel-wise saliency map. 
Extensive experimental results on five datasets with pixel-wise ground truths demonstrate that the proposed saliency tree model consistently outperforms the state-of-the-art saliency models.", "We extensively compare, qualitatively and quantitatively, 41 state-of-the-art models (29 salient object detection, 10 fixation prediction, 1 objectness, and 1 baseline) over seven challenging data sets for the purpose of benchmarking salient object detection and segmentation methods. From the results obtained so far, our evaluation shows a consistent rapid progress over the last few years in terms of both accuracy and running time. The top contenders in this benchmark significantly outperform the models identified as the best in the previous benchmark conducted three years ago. We find that the models designed specifically for salient object detection generally work better than models in closely related areas, which in turn provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems. In particular, we analyze the influences of center bias and scene complexity in model performance, which, along with the hard cases for the state-of-the-art models, provide useful hints toward constructing more challenging large-scale data sets and better saliency models. Finally, we propose probable solutions for tackling several open problems, such as evaluation scores and data set bias, which also suggest future research directions in the rapidly growing field of salient object detection." ] }
1705.02408
2734906726
In this paper we describe a framework for computing well-localized, robust motion plans through the perception-aware motion planning problem, whereby we seek a low-cost motion plan subject to a separate constraint on perception localization quality. To solve this problem we introduce the Multiobjective Perception-Aware Planning (MPAP) algorithm, which explores the state space via a multiobjective search, considering both cost and a perception heuristic. This framework can accommodate a large range of heuristics, allowing those that capture the history dependence of localization drift and represent complex modern perception methods. We present two such heuristics, one derived from a simplified model of robot perception and a second learned from ground-truth sensor error, which we show to be capable of predicting the performance of a state-of-the-art perception system. The solution trajectory from this heuristic-based search is then certified via Monte Carlo methods to be well-localized and robust. The additional computational burden of perception-aware planning is offset by massive GPU parallelization. Through numerical experiments the algorithm is shown to find well-localized, robust solutions in about a second. Finally, we demonstrate MPAP on a quadrotor flying perception-aware and perception-agnostic plans using Google Tango for localization, finding that the quadrotor safely executes the perception-aware plan every time, while crashing in over 20% of the perception-agnostic runs due to loss of localization.
Finally, we note that the concept of a multiobjective search to balance uncertainty with plan cost has been considered previously in motion planning, notably in @cite_1 . We utilize the same multiobjective search, accelerated through a massively parallel GPU implementation, put forth by the Parallel Uncertainty-aware Multiobjective Planning (PUMP) algorithm @cite_1 . However, while PUMP considers only uncertainties internal to the system, this work considers external uncertainties through perception. Additionally, in order to handle the added complexity of modern perception solutions, such as the Google Tango we demonstrate on experimentally, we instead employ a perception-heuristic constraint and consider it in the multiobjective search.
{ "cite_N": [ "@cite_1" ], "mid": [ "2494937121" ], "abstract": [ "In this paper we present the PUMP (Parallel Uncertainty-aware Multiobjective Planning) algorithm for addressing the stochastic kinodynamic motion planning problem, whereby one seeks a low-cost, dynamically-feasible motion plan subject to a constraint on collision probability (CP). To ensure exhaustive evaluation of candidate motion plans (as needed to tradeoff the competing objectives of performance and safety), PUMP incrementally builds the Pareto front of the problem, accounting for the optimization objective and an approximation of CP. This is performed by a massively parallel multiobjective search, here implemented with a focus on GPUs. Upon termination of the exploration phase, PUMP searches the Pareto set of motion plans to identify the lowest cost solution that is certified to satisfy the CP constraint (according to an asymptotically exact estimator). We introduce a novel particle-based CP approximation scheme, designed for efficient GPU implementation, which accounts for dependencies over the history of a trajectory execution. We present numerical experiments for quadrotor planning wherein PUMP identifies solutions in 100 ms, evaluating over one hundred thousand partial plans through the course of its exploration phase. The results show that this multiobjective search achieves a lower motion plan cost, for the same CP constraint, compared to a safety buffer-based search heuristic and repeated RRT trials." ] }
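The Pareto-front filtering at the core of this kind of multiobjective search can be sketched in a few lines. This is an illustrative sketch only: plans are reduced to `(cost, perception_penalty)` tuples, whereas PUMP/MPAP build the front incrementally over partial plans on the GPU; the function names are assumptions.

```python
def pareto_front(plans):
    """Return the non-dominated plans among (cost, perception_penalty) pairs.

    A plan q dominates p if q is no worse in both objectives and differs
    from p (i.e., is strictly better in at least one).
    """
    front = []
    for p in plans:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in plans)
        if not dominated:
            front.append(p)
    return front

def best_feasible(front, perception_budget):
    """Lowest-cost plan on the front whose perception penalty meets the
    constraint -- mirroring the final certification/selection step."""
    feasible = [p for p in front if p[1] <= perception_budget]
    return min(feasible, key=lambda p: p[0]) if feasible else None
```

Maintaining the full front (rather than a single weighted score) is what lets the planner trade off cost against localization quality exhaustively before applying the perception constraint.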
1705.02314
2613629742
In this paper, we build morphological chains for agglutinative languages by using a log-linear model for the morphological segmentation task. The model is based on the unsupervised morphological segmentation system called MorphoChains. We extend the MorphoChains log-linear model by expanding the candidate space recursively to cover more split points for agglutinative languages such as Turkish, whereas in the original model candidates are generated by considering only binary segmentations of each word. The results show that we improve the state-of-the-art Turkish scores by 12%, achieving an F-measure of 72%, and we improve the English scores by 3%, achieving an F-measure of 74%. Eventually, the system outperforms both MorphoChains and other well-known unsupervised morphological segmentation systems. The results indicate that candidate generation plays an important role in such an unsupervised log-linear model that is learned using contrastive estimation with negative samples.
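The difference between binary and recursive candidate generation described above can be made concrete with a short sketch. This is an assumption-laden illustration (the actual system scores candidates with the log-linear model's features; `min_stem` is a hypothetical parameter):

```python
def binary_candidates(word):
    """Original-style candidates: the unsplit word plus every single
    (stem, suffix) split point."""
    cands = [(word,)]
    for i in range(1, len(word)):
        cands.append((word[:i], word[i:]))
    return cands

def recursive_candidates(word, min_stem=2):
    """Extended candidates: recursively split the suffix part as well,
    covering multi-morpheme segmentations such as Turkish ev+ler+im."""
    results = {(word,)}
    for i in range(min_stem, len(word)):
        stem, rest = word[:i], word[i:]
        for tail in recursive_candidates(rest, min_stem=1):
            results.add((stem,) + tail)
    return results
```

For a word of length n, binary generation yields n candidates, while recursive generation enumerates all 2^(n-1) compositions, which is why the recursive space covers the multiple split points that agglutinative morphology requires.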
The oldest works were usually based on deterministic methods. One of the earliest is that proposed by Goldsmith @cite_8 . The model is based on the Minimum Description Length (MDL) principle, which is deterministic, and employs morphological structures called signatures in order to represent words. Signatures reflect the internal structure of words: words with a similar morphological structure reside in the same signature. For example, one set of suffixes forms a signature covering one group of words, while a different suffix set forms another signature covering a different group.
{ "cite_N": [ "@cite_8" ], "mid": [ "2101711363" ], "abstract": [ "This study reports the results of using minimum description length (MDL) analysis to model unsupervised learning of the morphological segmentation of European languages, using corpora ranging in size from 5,000 words to 500,000 words. We develop a set of heuristics that rapidly develop a probabilistic morphological grammar, and use MDL as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or not. The resulting grammar matches well the analysis that would be developed by a human morphologist.In the final section, we discuss the relationship of this style of MDL grammatical analysis to the notion of evaluation metric in early generative grammar." ] }
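The MDL principle behind this line of work can be sketched as a two-part code: the cost of spelling out a morph lexicon plus the cost of encoding the corpus given that lexicon. The toy corpus, the fixed split points, and the 5-bits-per-character lexicon cost below are illustrative assumptions, not Goldsmith's exact formulation:

```python
import math
from collections import Counter

def mdl_cost(corpus, segment):
    """Two-part MDL cost: bits to spell out the morph lexicon plus
    bits to encode the corpus as a sequence of morphs."""
    morphs = [m for word in corpus for m in segment(word)]
    # Lexicon cost: spell out each distinct morph (~5 bits per character).
    lexicon_cost = sum(5 * len(m) for m in set(morphs))
    # Corpus cost: negative log-likelihood of the morph sequence.
    counts = Counter(morphs)
    total = len(morphs)
    corpus_cost = -sum(c * math.log2(c / total) for c in counts.values())
    return lexicon_cost + corpus_cost

corpus = ["walking", "walked", "talking", "talked"]
flat = mdl_cost(corpus, lambda w: [w])              # no segmentation
split = mdl_cost(corpus, lambda w: [w[:4], w[4:]])  # stem + suffix
```

Because the stems and suffixes are shared, the segmented description of this corpus is cheaper than storing the four whole words, which is exactly the signal an MDL learner uses to accept a proposed segmentation.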
1705.02314
2613629742
In this paper, we build morphological chains for agglutinative languages by using a log-linear model for the morphological segmentation task. The model is based on the unsupervised morphological segmentation system called MorphoChains. We extend the MorphoChains log-linear model by expanding the candidate space recursively to cover more split points for agglutinative languages such as Turkish, whereas in the original model candidates are generated by considering only binary segmentations of each word. The results show that we improve the state-of-the-art Turkish scores by 12%, reaching an F-measure of 72%, and we improve the English scores by 3%, reaching an F-measure of 74%. Overall, the system outperforms both MorphoChains and other well-known unsupervised morphological segmentation systems. The results indicate that candidate generation plays an important role in such an unsupervised log-linear model, which is learned using contrastive estimation with negative samples.
Probabilistic methods have also been used in unsupervised morphological segmentation. Creutz and Lagus @cite_14 introduce Morfessor Baseline, another well-known unsupervised morphological segmentation system and the first member of the Morfessor family. One version of the model is based on the MDL principle and the other on a Maximum Likelihood (ML) estimate. In a later member of the same family, Creutz and Lagus @cite_10 suggest using priors by converting the model into a Maximum a Posteriori model, introducing Morfessor Categories-MAP (CatMAP). Morfessor has been one of the main reference systems against which most unsupervised segmentation systems are compared. In this paper, we also compare our extended model with Morfessor Baseline and Morfessor CatMAP.
{ "cite_N": [ "@cite_14", "@cite_10" ], "mid": [ "2117621558", "201532657" ], "abstract": [ "We present two methods for unsupervised segmentation of words into morpheme-like units. The model utilized is especially suited for languages with a rich morphology, such as Finnish. The first method is based on the Minimum Description Length (MDL) principle and works online. In the second method, Maximum Likelihood (ML) optimization is used. The quality of the segmentations is measured using an evaluation method that compares the segmentations produced to an existing morphological analysis. Experiments on both Finnish and English corpora show that the presented methods perform well compared to a current state-of-the-art system.", "This work presents an algorithm for the unsupervised learning, or induction, of a simple morphology of a natural language. A probabilistic maximum a posteriori model is utilized, which builds hierarchical representations for a set of morphs, which are morpheme-like units discovered from unannotated text corpora. The induced morph lexicon stores parameters related to both the “meaning” and “form” of the morphs it contains. These parameters affect the role of the morphs in words. The model is implemented in a task of unsupervised morpheme segmentation of Finnish and English words. Very good results are obtained for Finnish and almost as good results are obtained in the English task." ] }
1705.02314
2613629742
In this paper, we build morphological chains for agglutinative languages by using a log-linear model for the morphological segmentation task. The model is based on the unsupervised morphological segmentation system called MorphoChains. We extend the MorphoChains log-linear model by expanding the candidate space recursively to cover more split points for agglutinative languages such as Turkish, whereas in the original model candidates are generated by considering only binary segmentations of each word. The results show that we improve the state-of-the-art Turkish scores by 12%, reaching an F-measure of 72%, and we improve the English scores by 3%, reaching an F-measure of 74%. Overall, the system outperforms both MorphoChains and other well-known unsupervised morphological segmentation systems. The results indicate that candidate generation plays an important role in such an unsupervised log-linear model, which is learned using contrastive estimation with negative samples.
Non-parametric Bayesian methods have also been used for the segmentation task. @cite_0 present a framework that generates the power-law distributions observed in word frequencies. The Pitman-Yor Process @cite_5 (a two-parameter extension of the Dirichlet Process) is used as the stochastic process in their framework. Snyder and Barzilay @cite_12 use the Dirichlet Process, a simplified version of the Pitman-Yor Process, to induce morpheme boundaries simultaneously on a bilingually aligned corpus by finding cross-lingual morpheme relations. @cite_6 address the connection between syntax and morphology in a statistical model, incorporating syntactic knowledge into their morphological segmentation system. Their results show that using syntactic information helps in morphological segmentation.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_6", "@cite_12" ], "mid": [ "2159399018", "2150507172", "2179974023", "2116211107" ], "abstract": [ "Standard statistical models of language fail to capture one of the most striking properties of natural languages: the power-law distribution in the frequencies of word tokens. We present a framework for developing statistical models that generically produce power-laws, augmenting standard generative models with an adaptor that produces the appropriate pattern of token frequencies. We show that taking a particular stochastic process - the Pitman-Yor process - as an adaptor justifies the appearance of type frequencies in formal analyses of natural language, and improves the performance of a model for unsupervised learning of morphology.", "The class of species sampling mixture models is introduced as an exten- sion of semiparametric models based on the Dirichlet process to models based on the general class of species sampling priors, or equivalently the class of all exchangeable urn distributions. Using Fubini calculus in conjunction with Pitman (1995, 1996), we derive characterizations of the posterior distribution in terms of a posterior par- tition distribution that extend the results of Lo (1984) for the Dirichlet process. These results provide a better understanding of models and have both theoretical and practical applications. To facilitate the use of our models we generalize the work in Brunner, Chan, James and Lo (2001) by extending their weighted Chinese restaurant (WCR) Monte Carlo procedure, an i.i.d. sequential importance sampling (SIS) procedure for approximating posterior mean functionals based on the Dirich- let process, to the case of approximation of mean functionals and additionally their posterior laws in species sampling mixture models. We also discuss collapsed Gibbs sampling, Polya urn Gibbs sampling and a Polya urn SIS scheme. 
Our framework allows for numerous applications, including multiplicative counting process models subject to weighted gamma processes, as well as nonparametric and semiparamet- ric hierarchical models based on the Dirichlet process, its two-parameter extension, the Pitman-Yor process and finite dimensional Dirichlet priors.", "The connection between part-of-speech (POS) categories and morphological properties is well-documented in linguistics but underutilized in text processing systems. This paper proposes a novel model for morphological segmentation that is driven by this connection. Our model learns that words with common affixes are likely to be in the same syntactic category and uses learned syntactic categories to refine the segmentation boundaries of words. Our results demonstrate that incorporating POS categorization yields substantial performance gains on morphological segmentation of Arabic.", "For centuries, the deep connection between languages has brought about major discoveries about human communication. In this paper we investigate how this powerful source of information can be exploited for unsupervised language learning. In particular, we study the task of morphological segmentation of multiple languages. We present a nonparametric Bayesian model that jointly induces morpheme segmentations of each language under consideration and at the same time identifies cross-lingual morpheme patterns, or abstract morphemes. We apply our model to three Semitic languages: Arabic, Hebrew, Aramaic, as well as to English. Our results demonstrate that learning morphological models in tandem reduces error by up to 24 relative to monolingual models. Furthermore, we provide evidence that our joint model achieves better performance when applied to languages from the same family." ] }
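The Pitman-Yor "adaptor" of @cite_0 can be illustrated with its Chinese-restaurant construction, which generically produces the heavy-tailed, power-law-like type frequencies the framework exploits. The parameter values and this pure-Python sampler are illustrative assumptions, not the authors' implementation:

```python
import random
from collections import Counter

def pitman_yor_draws(n, discount=0.5, strength=1.0, seed=0):
    """Chinese-restaurant construction of a Pitman-Yor process:
    token i starts a new type with probability
    (strength + discount * K) / (strength + i), where K is the number
    of types seen so far; otherwise it repeats an existing type k with
    probability proportional to its discounted count, counts[k] - discount."""
    rng = random.Random(seed)
    counts = []  # counts[k] = number of tokens of type k
    draws = []
    for i in range(n):
        r = rng.uniform(0, strength + i)
        new_mass = strength + discount * len(counts)
        if r < new_mass or not counts:
            counts.append(1)                 # open a new table (new type)
            draws.append(len(counts) - 1)
        else:
            r -= new_mass
            for k, c in enumerate(counts):
                r -= c - discount            # discounted count of type k
                if r <= 0 or k == len(counts) - 1:
                    counts[k] += 1           # reuse existing type k
                    draws.append(k)
                    break
    return Counter(draws)
```

A few frequent types accumulate most of the tokens while a long tail of rare types keeps appearing, which is the rich-get-richer behaviour that matches natural-language word frequencies.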
1705.02314
2613629742
In this paper, we build morphological chains for agglutinative languages by using a log-linear model for the morphological segmentation task. The model is based on the unsupervised morphological segmentation system called MorphoChains. We extend the MorphoChains log-linear model by expanding the candidate space recursively to cover more split points for agglutinative languages such as Turkish, whereas in the original model candidates are generated by considering only binary segmentations of each word. The results show that we improve the state-of-the-art Turkish scores by 12%, reaching an F-measure of 72%, and we improve the English scores by 3%, reaching an F-measure of 74%. Overall, the system outperforms both MorphoChains and other well-known unsupervised morphological segmentation systems. The results indicate that candidate generation plays an important role in such an unsupervised log-linear model, which is learned using contrastive estimation with negative samples.
Some systems not only attempt to perform morphological segmentation, but also aim to learn hidden structures behind words. Chan @cite_4 applies Latent Dirichlet Allocation (LDA) to learn morphological paradigms as latent classes. The model assumes that the correct segmentations of words are known, while the morphological paradigms are to be learned. Chan discovers that the resulting morphological paradigms can be matched with syntactic tags (such as noun, verb, etc.). Can and Manandhar @cite_11 obtain syntactic categories from a context distributional clustering algorithm @cite_1 and learn paradigms by using the pairs of syntactic categories that have common stems.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_11" ], "mid": [ "2139090905", "2063131977", "" ], "abstract": [ "This paper addresses the issue of the automatic induction of syntactic categories from unannotated corpora. Previous techniques give good results, but fail to cope well with ambiguity or rare words. An algorithm, context distribution clustering (CDC), is presented which can be naturally extended to handle these problems.", "This paper introduces the probabilistic paradigm, a probabilistic, declarative model of morphological structure. We describe an algorithm that recursively applies Latent Dirichlet Allocation with an orthogonality constraint to discover morphological paradigms as the latent classes within a suffix-stem matrix. We apply the algorithm to data preprocessed in several different ways, and show that when suffixes are distinguished for part of speech and allomorphs or gender conjugational variants are merged, the model is able to correctly learn morphological paradigms for English and Spanish. We compare our system with Linguistica (Goldsmith 2001), and discuss the advantages of the probabilistic paradigm over Linguistica's signature representation.", "" ] }
1705.02414
2613924455
In this work we compare different batch construction methods for mini-batch training of recurrent neural networks. While popular implementations like TensorFlow and MXNet suggest a bucketing approach to improve the parallelization capabilities of the recurrent training process, we propose a simple ordering strategy that arranges the training sequences in a stochastic alternatingly sorted way. We compare our method to sequence bucketing as well as various other batch construction strategies on the CHiME-4 noisy speech recognition corpus. The experiments show that our alternated sorting approach is able to compete both in training time and recognition performance while being conceptually simpler to implement.
While mini-batch training has been studied extensively for feed-forward networks @cite_8 , authors rarely reveal the batch construction strategy used when RNN experiments are reported. This is either because the systems are trained in a frame-wise fashion @cite_11 or because the analysis uses sequences of very similar length, as in @cite_7 . In earlier work @cite_14 we studied how training on sub-sequences in those cases can lead to significantly faster and often also more robust training. In @cite_2 the problem of batches containing sequences of widely varying lengths was identified, and the authors suggested adapting their proposed batch-normalization method to frame-level normalization, although sequence-level normalization sounds theoretically more reasonable. In @cite_6 a curriculum learning strategy is proposed in which sequences follow a specific schedule in order to reduce overfitting. Modern machine learning frameworks like TensorFlow @cite_3 and MXNet @cite_12 implement a bucketing approach based on the length distribution of the sequences. In @cite_1 the authors extend this idea by selecting optimal sequences within each bucket using a dynamic programming technique.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_1", "@cite_6", "@cite_3", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "", "", "", "2528993343", "2950304420", "", "2243290847", "2186615578", "2533523411" ], "abstract": [ "", "", "", "An efficient algorithm for recurrent neural network training is presented. The approach increases the training speed for tasks where a length of the input sequence may vary significantly. The proposed approach is based on the optimal batch bucketing by input sequence length and data parallelization on multiple graphical processing units. The baseline training performance without sequence bucketing is compared with the proposed solution for a different number of buckets. An example is given for the online handwriting recognition task using an LSTM recurrent neural network. The evaluation is performed in terms of the wall clock time, number of epochs, and validation loss value.", "Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. 
Moreover, it was used successfully in our winning entry to the MSCOCO image captioning challenge, 2015.", "", "Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feedforward neural networks . In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we show that applying batch normalization to the hidden-to-hidden transitions of our RNNs doesn't help the training procedure. We also show that when applied to the input-to-hidden transitions, batch normalization can lead to a faster convergence of the training criterion but doesn't seem to improve the generalization performance on both our language modelling and speech recognition tasks. All in all, applying batch normalization to RNNs turns out to be more challenging than applying it to feedforward networks, but certain variants of it can still be beneficial.", "MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. 
Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.", "Conversational speech recognition has served as a flagship speech recognition task since the release of the Switchboard corpus in the 1990s. In this paper, we measure the human error rate on the widely used NIST 2000 test set, and find that our latest automated system has reached human parity. The error rate of professional transcribers is 5.9 for the Switchboard portion of the data, in which newly acquainted pairs of people discuss an assigned topic, and 11.3 for the CallHome portion where friends and family members have open-ended conversations. In both cases, our automated system establishes a new state of the art, and edges past the human benchmark, achieving error rates of 5.8 and 11.0 , respectively. The key to our system's performance is the use of various convolutional and LSTM acoustic model architectures, combined with a novel spatial smoothing method and lattice-free MMI acoustic training, multiple recurrent neural network language modeling approaches, and a systematic use of system combination." ] }
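The padding argument behind bucketing can be made concrete: grouping sequences of similar length wastes far fewer frames on padding than batching the data in arbitrary order. The batch size, length range, and this pure-Python pipeline are illustrative assumptions, not the TensorFlow/MXNet implementations:

```python
import random

def bucketed_batches(lengths, batch_size, seed=0):
    """Length-bucketing as in TensorFlow/MXNet-style input pipelines:
    sort indices by sequence length, cut into batches, then shuffle
    whole batches so each batch holds sequences of similar length."""
    rng = random.Random(seed)
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    rng.shuffle(batches)
    return batches

def padding_waste(lengths, batches):
    """Fraction of frames that are zero-padding: every sequence in a
    batch is padded up to the longest sequence in that batch."""
    padded = sum(max(lengths[i] for i in b) * len(b) for b in batches)
    return 1 - sum(lengths) / padded

rng = random.Random(1)
lengths = [rng.randint(10, 400) for _ in range(512)]          # toy length distribution
unsorted = [list(range(i, i + 32)) for i in range(0, 512, 32)]  # arbitrary order
```

Shuffling whole batches (rather than individual sequences) keeps the padding benefit while still randomizing the order in which the optimizer sees the data, which is the same trade-off the alternated-sorting strategy navigates.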
1705.02402
2951054159
We present a framework for robust face detection and landmark localisation of faces in the wild, which has been evaluated as part of the '2nd Facial Landmark Localisation Competition'. The framework has four stages: face detection, bounding box aggregation, pose estimation and landmark localisation. To achieve a high detection rate, we use two publicly available CNN-based face detectors and two proprietary detectors. We aggregate the detected face bounding boxes of each input image to reduce false positives and improve face detection accuracy. A cascaded shape regressor, trained using faces with a variety of pose variations, is then employed for pose estimation and image pre-processing. Last, we train the final cascaded shape regressor for fine-grained landmark localisation, using a large number of training samples with limited pose variations. The experimental results obtained on the 300W and Menpo benchmarks demonstrate the superiority of our framework over state-of-the-art methods.
In recent years, discriminative models, in particular CSR-based methods, have become the state of the art in robust facial landmark localisation for unconstrained faces. The key to the success of CSR is its architecture, which cascades multiple weak regressors and thereby greatly improves the generalisation capacity and accuracy of a single regression model. To form a CSR, linear regression @cite_22 @cite_27 @cite_44 , random forests or ferns @cite_49 @cite_10 and deep neural networks @cite_36 @cite_21 @cite_6 have been used as weak regressors. To further improve the landmark localisation accuracy of CSR-based approaches, new architectures have been proposed. For example, Feng et al. proposed fusing multiple CSRs to improve the accuracy of a single CSR model @cite_22 . Xiong et al. proposed the global supervised descent method, which uses multiple view-based CSRs to deal with the difficulties posed by extreme appearance variations @cite_55 . Moreover, data augmentation @cite_38 @cite_26 @cite_12 and 3D-based approaches @cite_2 have also been used to enhance existing CSR-based facial landmark localisation approaches.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_22", "@cite_36", "@cite_55", "@cite_21", "@cite_6", "@cite_44", "@cite_27", "@cite_49", "@cite_2", "@cite_10", "@cite_12" ], "mid": [ "1832881114", "", "2015229219", "1976948919", "1946919140", "", "", "", "", "1990937109", "2964014798", "", "" ], "abstract": [ "A large amount of training data is usually crucial for successful supervised learning. However, the task of providing training samples is often time-consuming, involving a considerable amount of tedious manual work. In addition, the amount of training data available is often limited. As an alternative, in this paper, we discuss how best to augment the available data for the application of automatic facial landmark detection. We propose the use of a 3D morphable face model to generate synthesized faces for a regression-based detector training. Benefiting from the large synthetic training data, the learned detector is shown to exhibit a better capability to detect the landmarks of a face with pose variations. Furthermore, the synthesized training data set provides accurate and consistent landmarks automatically as compared to the landmarks annotated manually, especially for occluded facial parts. The synthetic data and real data are from different domains; hence the detector trained using only synthesized faces does not generalize well to real faces. To deal with this problem, we propose a cascaded collaborative regression algorithm, which generates a cascaded shape updater that has the ability to overcome the difficulties caused by pose variations, as well as achieving better accuracy when applied to real faces. The training is based on a mix of synthetic and real image data with the mixing controlled by a dynamic mixture weighting schedule. Initially, the training uses heavily the synthetic data, as this can model the gross variations between the various poses. 
As the training proceeds, progressively more of the natural images are incorporated, as these can model finer detail. To improve the performance of the proposed algorithm further, we designed a dynamic multi-scale local feature extraction method, which captures more informative local features for detector training. An extensive evaluation on both controlled and uncontrolled face data sets demonstrates the merit of the proposed algorithm.", "", "In this letter, we present a random cascaded-regression copse (R-CR-C) for robust facial landmark detection. Its key innovations include a new parallel cascade structure design, and an adaptive scheme for scale-invariant shape update and local feature extraction. Evaluation on two challenging benchmarks shows the superiority of the proposed algorithm to state-of-the-art methods. © 1994-2012 IEEE.", "We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. 
Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability.", "Mathematical optimization plays a fundamental role in solving many problems in computer vision (e.g., camera calibration, image alignment, structure from motion). It is generally accepted that second order descent methods are the most robust, fast, and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, second order descent methods have two main drawbacks: 1) the function might not be analytically differentiable and numerical approximations are impractical, and 2) the Hessian may be large and not positive definite. Recently, Supervised Descent Method (SDM), a method that learns the “weighted averaged gradients” in a supervised manner has been proposed to solve these issues. However, SDM is a local algorithm and it is likely to average conflicting gradient directions. This paper proposes Global SDM (GSDM), an extension of SDM that divides the search space into regions of similar gradient directions. GSDM provides a better and more efficient strategy to minimize non-linear least squares functions in computer vision problems. We illustrate the effectiveness of GSDM in two problems: non-rigid image alignment and extrinsic camera calibration.", "", "", "", "", "We present a very efficient, highly accurate, \"Explicit Shape Regression\" approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. 
The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 min for 2,000 training images), and run regression extremely fast in test (15 ms for a 87 landmarks shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.", "Face alignment, which fits a face model to an image and extracts the semantic meanings of facial pixels, has been an important topic in CV community. However, most algorithms are designed for faces in small to medium poses (below 45), lacking the ability to align faces in large poses up to 90. The challenges are three-fold: Firstly, the commonly used landmark-based face model assumes that all the landmarks are visible and is therefore not suitable for profile views. Secondly, the face appearance varies more dramatically across large poses, ranging from frontal view to profile view. Thirdly, labelling landmarks in large poses is extremely challenging since the invisible landmarks have to be guessed. In this paper, we propose a solution to the three problems in an new alignment framework, called 3D Dense Face Alignment (3DDFA), in which a dense 3D face model is fitted to the image via convolutional neutral network (CNN). We also propose a method to synthesize large-scale training samples in profile views to solve the third problem of data labelling. Experiments on the challenging AFLW database show that our approach achieves significant improvements over state-of-the-art methods.", "", "" ] }
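The cascade principle, where each weak regressor fits what the previous stages left unexplained, can be sketched with ridge regressors. Note that a real CSR re-extracts shape-indexed features at the current shape estimate in every stage; this sketch keeps the features fixed and only illustrates the residual-fitting structure, with toy data and a regularisation constant chosen for illustration:

```python
import numpy as np

def fit_cascade(X, y, stages=5, lam=1e-3):
    """Cascade of weak linear regressors: each stage fits a ridge
    regression to the residual left by all previous stages, so the
    composed prediction is refined from coarse to fine."""
    models, pred = [], np.zeros_like(y, dtype=float)
    for _ in range(stages):
        residual = y - pred
        # Ridge solution W = (X^T X + lam I)^-1 X^T r for this stage.
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ residual)
        models.append(W)
        pred = pred + X @ W
    return models
```

Because each stage only has to correct the residual of its predecessors, a chain of weak regressors achieves a lower final error than any single stage alone, which is the property the CSR architecture exploits.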
1705.02395
2952777343
Abundant data is the key to successful machine learning. However, supervised learning requires annotated data that are often hard to obtain. In a classification task with limited resources, Active Learning (AL) promises to guide annotators to examples that bring the most value for a classifier. AL can be successfully combined with self-training, i.e., extending a training set with the unlabelled examples for which a classifier is the most certain. We report our experiences on using AL in a systematic manner to train an SVM classifier for Stack Overflow posts discussing performance of software components. We show that the training examples deemed as the most valuable to the classifier are also the most difficult for humans to annotate. Despite carefully evolved annotation criteria, we report low inter-rater agreement, but we also propose mitigation strategies. Finally, based on one annotator's work, we show that self-training can improve the classification accuracy. We conclude the paper by discussing implications for future text miners aspiring to use AL and self-training.
AL is a semi-supervised machine learning approach in which a learning algorithm interactively queries a human to obtain labels for specific examples, typically the most difficult ones. The method for selecting which examples to query should be optimized to maximize the learning gained. Uncertainty sampling is a simple technique that selects the examples about whose labels the classifier is least certain @cite_23 . This has the effect of separating the examples into two distinct groups and thus removing borderline cases, see the horizontal histograms in Fig. . AL enables a shift of focus from momentary data analysis to a process with a feedback loop @cite_26 @cite_17 .
{ "cite_N": [ "@cite_26", "@cite_23", "@cite_17" ], "mid": [ "2143528108", "2903158431", "2152332539" ], "abstract": [ "Mining software engineering data has emerged as a successful research direction over the past decade. In this position paper, we advocate Software Intelligence (SI) as the future of mining software engineering data, within modern software engineering research, practice, and education. We coin the name SI as an inspiration from the Business Intelligence (BI) field, which offers concepts and techniques to improve business decision making by using fact-based support systems. Similarly, SI offers software practitioners (not just developers) up-to-date and pertinent information to support their daily decision-making processes. SI should support decision-making processes throughout the lifetime of a software system not just during its development phase. The vision of SI has yet to become a reality that would enable software engineering research to have a strong impact on modern software practice. Nevertheless, recent advances in the Mining Software Repositories (MSR) field show great promise and provide strong support for realizing SI in the near future. This position paper summarizes the state of practice and research of SI, and lays out future research directions for mining software engineering data to enable SI.", "", "The practices of industrial and academic data mining are very different. These differences have significant implications for (a) how we manage industrial data mining projects; (b) the direction of academic studies in data mining; and (c) training programs for engineers who seek to use data miners in an industrial setting." ] }
1705.01877
2611228415
In this paper, we focus on finding clusters in partially categorized data sets. We propose a semi-supervised version of the Gaussian mixture model, called C3L, which retrieves natural subgroups of given categories. In contrast to other semi-supervised models, C3L is parametrized by a user-defined leakage level, which controls the maximal inconsistency between the initial categorization and the resulting clustering. Our method can be implemented as a module in practical expert systems to detect clusters that combine expert knowledge with the true distribution of data. Moreover, it can be used to improve the results of less flexible clustering techniques, such as projection pursuit clustering. The paper presents an extensive theoretical analysis of the model and a fast algorithm for its efficient optimization. Experimental results show that C3L finds a high-quality clustering model, which can be applied to discovering meaningful groups in partially classified data.
Semi-supervised clustering incorporates knowledge about class labels into the partitioning process @cite_8 . This information can be presented as partial labeling, which assigns a small portion of the data to categories, or as pairwise constraints, which indicate whether two data points originate from the same class (must-links) or from distinct classes (cannot-links). Although pairwise constraints carry less information than partial labeling, it is easier to assess whether two instances come from the same group than to assign them to particular classes.
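As a generic sketch of how pairwise constraints are typically pre-processed (an illustration, not code from any cited work): must-links are transitively closed into groups with a union-find structure, after which cannot-links can be checked for consistency against those groups.

```python
def build_chunklets(n, must_links):
    """Group points 0..n-1 connected by must-links into equivalence classes."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in must_links:
        parent[find(a)] = find(b)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def consistent(chunklets, cannot_links):
    """A cannot-link inside a single must-link group is a contradiction."""
    group = {i: g for g, ch in enumerate(chunklets) for i in ch}
    return all(group[a] != group[b] for a, b in cannot_links)

ch = build_chunklets(5, [(0, 1), (1, 2)])
print(sorted(map(sorted, ch)))   # -> [[0, 1, 2], [3], [4]]
print(consistent(ch, [(0, 3)]))  # -> True
print(consistent(ch, [(0, 2)]))  # -> False (0 and 2 are must-linked)
```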
{ "cite_N": [ "@cite_8" ], "mid": [ "1516407653" ], "abstract": [ "Since the initial work on constrained clustering, there have been numerous advances in methods, applications, and our understanding of the theoretical properties of constraints and constrained clustering algorithms. Bringing these developments together, Constrained Clustering: Advances in Algorithms, Theory, and Applications presents an extensive collection of the latest innovations in clustering data analysis methods that use background knowledge encoded as constraints. Algorithms The first five chapters of this volume investigate advances in the use of instance-level, pairwise constraints for partitional and hierarchical clustering. The book then explores other types of constraints for clustering, including cluster size balancing, minimum cluster size,and cluster-level relational constraints. Theory It also describes variations of the traditional clustering under constraints problem as well as approximation algorithms with helpful performance guarantees. Applications The book ends by applying clustering with constraints to relational data, privacy-preserving data publishing, and video surveillance data. It discusses an interactive visual clustering approach, a distance metric learning approach, existential constraints, and automatically generated constraints. With contributions from industrial researchers and leading academic experts who pioneered the field, this volume delivers thorough coverage of the capabilities and limitations of constrained clustering methods as well as introduces new types of constraints and clustering algorithms." ] }
1705.01877
2611228415
In this paper, we focus on finding clusters in partially categorized data sets. We propose a semi-supervised version of the Gaussian mixture model, called C3L, which retrieves natural subgroups of given categories. In contrast to other semi-supervised models, C3L is parametrized by a user-defined leakage level, which controls the maximal inconsistency between the initial categorization and the resulting clustering. Our method can be implemented as a module in practical expert systems to detect clusters that combine expert knowledge with the true distribution of data. Moreover, it can be used to improve the results of less flexible clustering techniques, such as projection pursuit clustering. The paper presents an extensive theoretical analysis of the model and a fast algorithm for its efficient optimization. Experimental results show that C3L finds a high-quality clustering model, which can be applied to discovering meaningful groups in partially classified data.
Clustering with pairwise constraints was introduced by @cite_37 , who created a variant of k-means that focuses on preserving all constraints. @cite_13 constructed a version of the Gaussian mixture model, which gathers data points into equivalence classes (called chunklets) using the must-link relation and then applies the EM algorithm to the resulting data set of chunklets. This approach was later extended to multi-modal clustering models @cite_27 . The aforementioned methods work well with noiseless side information, but their results deteriorate when some constraints are mislabeled. To overcome this problem, the authors of @cite_17 @cite_7 applied hidden Markov random fields (HMRF) to construct more sophisticated dependencies between linked points. However, the use of HMRF leads to complex solutions, which are difficult to optimize. In recent years, Asafi and Cohen-Or @cite_22 suggested reducing the distances between data points joined by a must-link constraint and adding a dimension for each cannot-link constraint. After updating all other distances to, e.g., satisfy the triangle inequality, the resulting pairwise distance matrix can be used for unsupervised learning. Wang and Davidson @cite_32 proposed a version of spectral clustering that relies on solving a generalized eigenvalue problem.
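The constraint check at the heart of the k-means variant of @cite_37 can be sketched as follows (a simplified illustration with invented names, not the authors' code): a point may join a cluster only if that assignment violates no must-link or cannot-link with points placed so far.

```python
def violates(point, cluster, assigned, must_links, cannot_links):
    """Would assigning `point` to `cluster` break any constraint?

    assigned: {point: cluster} for points already placed.
    """
    for a, b in must_links:
        other = b if a == point else a if b == point else None
        if other in assigned and assigned[other] != cluster:
            return True  # must-linked partner sits in another cluster
    for a, b in cannot_links:
        other = b if a == point else a if b == point else None
        if other in assigned and assigned[other] == cluster:
            return True  # cannot-linked partner sits in this cluster
    return False

assigned = {0: "c1", 1: "c2"}
print(violates(2, "c2", assigned, [(0, 2)], []))  # -> True
print(violates(2, "c1", assigned, [(0, 2)], []))  # -> False
print(violates(2, "c2", assigned, [], [(1, 2)]))  # -> True
```

In the constrained k-means loop, each point is assigned to the nearest cluster for which this check returns False.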
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_7", "@cite_32", "@cite_27", "@cite_13", "@cite_17" ], "mid": [ "2134089414", "2127320480", "", "2088857627", "", "2148687775", "2139956879" ], "abstract": [ "Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be protably modied to make use of this information. In experiments with articial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance.", "In this paper, we introduce a new approach to constrained clustering which treats the constraints as features. Our method augments the original feature space with additional dimensions, each of which derived from a given Cannot-link constraints. The specified Cannot-link pair gets extreme coordinates values, and the rest of the points get coordinate values that express their spatial influence from the specified constrained pair. After augmenting all the new features, a standard unconstrained clustering algorithm can be performed, like k-means or spectral clustering. We demonstrate the efficacy of our method for active semi-supervised learning applied to image segmentation and compare it to alternative methods. We also evaluate the performance of our method on the four most commonly evaluated datasets from the UCI machine learning repository.", "", "Constrained clustering has been well-studied for algorithms like K-means and hierarchical agglomerative clustering. However, how to encode constraints into spectral clustering remains a developing area. In this paper, we propose a flexible and generalized framework for constrained spectral clustering. 
In contrast to some previous efforts that implicitly encode Must-Link and Cannot-Link constraints by modifying the graph Laplacian or the resultant eigenspace, we present a more natural and principled formulation, which preserves the original graph Laplacian and explicitly encodes the constraints. Our method offers several practical advantages: it can encode the degree of belief (weight) in Must-Link and Cannot-Link constraints; it guarantees to lower-bound how well the given constraints are satisfied using a user-specified threshold; and it can be solved deterministically in polynomial time through generalized eigendecomposition. Furthermore, by inheriting the objective function from spectral clustering and explicitly encoding the constraints, much of the existing analysis of spectral clustering techniques is still valid. Consequently our work can be posed as a natural extension to unconstrained spectral clustering and be interpreted as finding the normalized min-cut of a labeled graph. We validate the effectiveness of our approach by empirical results on real-world data sets, with applications to constrained image segmentation and clustering benchmark data sets with both binary and degree-of-belief constraints.", "", "Density estimation with Gaussian Mixture Models is a popular generative technique used also for clustering. We develop a framework to incorporate side information in the form of equivalence constraints into the model estimation procedure. Equivalence constraints are defined on pairs of data points, indicating whether the points arise from the same source (positive constraints) or from different sources (negative constraints). Such constraints can be gathered automatically in some learning problems, and are a natural form of supervision in others. For the estimation of model parameters we present a closed form EM procedure which handles positive constraints, and a Generalized EM procedure using a Markov net which handles negative constraints. 
Using publicly available data sets we demonstrate that such side information can lead to considerable improvement in clustering tasks, and that our algorithm is preferable to two other suggested methods using the same type of side information.", "Unsupervised clustering can be significantly improved using supervision in the form of pairwise constraints, i.e., pairs of instances labeled as belonging to same or different clusters. In recent years, a number of algorithms have been proposed for enhancing clustering quality by employing such supervision. Such methods use the constraints to either modify the objective function, or to learn the distance measure. We propose a probabilistic model for semi-supervised clustering based on Hidden Markov Random Fields (HMRFs) that provides a principled framework for incorporating supervision into prototype-based clustering. The model generalizes a previous approach that combines constraints and Euclidean distance learning, and allows the use of a broad range of clustering distortion measures, including Bregman divergences (e.g., Euclidean distance and I-divergence) and directional similarity measures (e.g., cosine similarity). We present an algorithm that performs partitional semi-supervised clustering of data by minimizing an objective function derived from the posterior energy of the HMRF model. Experimental results on several text data sets demonstrate the advantages of the proposed framework." ] }
1705.02073
2613614176
Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, which adapts and extends a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (induced) supervisory labels in a parallel corpus of documents, we train classifiers successfully for new languages in which labeled training data are not available. An adversarial feature adaptation technique is also applied during model training to reduce distribution mismatch. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japanese and Chinese as the unlabeled target languages. The proposed approach performed better than or comparably to other state-of-the-art methods.
One branch of CLTC methods uses lexical-level mappings to transfer knowledge from the source language to the target language. The work by @cite_12 was the first effort to solve the CLTC problem. They translated the target-language documents into the source language using a bilingual dictionary; the classifier trained in the source language was then applied to those translated documents. Similarly, @cite_36 built a cross-lingual classifier by translating subjectivity words and phrases in the source language into the target language. @cite_50 also utilized a bilingual dictionary; instead of translating the documents, they translated the classification model from the source language to the target language.
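A minimal sketch of the model-translation idea (illustrative only; real systems such as @cite_50 additionally resolve translation ambiguity, e.g. with EM): each source-language feature weight is distributed over its dictionary translations.

```python
def translate_model(weights, dictionary):
    """Map a linear model's feature weights through a bilingual dictionary.

    weights: {source_word: weight}; dictionary: {source_word: [target words]}.
    A word with several translations splits its weight equally among them;
    words missing from the dictionary are dropped.
    """
    translated = {}
    for word, w in weights.items():
        targets = dictionary.get(word, [])
        for t in targets:
            translated[t] = translated.get(t, 0.0) + w / len(targets)
    return translated

# Invented toy example: an English sentiment model translated into German.
en_weights = {"good": 1.0, "bad": -1.0}
en_to_de = {"good": ["gut"], "bad": ["schlecht", "schlimm"]}
print(translate_model(en_weights, en_to_de))
# -> {'gut': 1.0, 'schlecht': -0.5, 'schlimm': -0.5}
```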
{ "cite_N": [ "@cite_36", "@cite_50", "@cite_12" ], "mid": [ "2142262074", "2148861942", "1489959797" ], "abstract": [ "This paper discusses learning multilingual subjective language via cross-lingual projections.", "In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semi-supervised learning, and adapt the translated model to better fit the data distribution of the target language.", "This article deals with the problem of Cross-Lingual Text Categorization (CLTC), which arises when documents in different languages must be classified according to the same classification tree. We describe practical and cost-effective solutions for automatic Cross-Lingual Text Categorization, both in case a sufficient number of training examples is available for each new language and in the case that for some language no training examples are available." ] }
1705.02073
2613614176
Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, which adapts and extends a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (induced) supervisory labels in a parallel corpus of documents, we train classifiers successfully for new languages in which labeled training data are not available. An adversarial feature adaptation technique is also applied during model training to reduce distribution mismatch. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japanese and Chinese as the unlabeled target languages. The proposed approach performed better than or comparably to other state-of-the-art methods.
Some recent efforts in CLTC focus on the use of automatic machine translation (MT) technology. For example, Wan @cite_47 used machine translation systems to give each document a source-language and a target-language version, where one version is machine-translated from the other. A co-training @cite_46 algorithm was then applied to the two versions of both source and target documents to iteratively train classifiers in both languages. MT-based CLTC also includes work on multi-view learning with different algorithms, such as majority voting @cite_20 , matrix completion @cite_38 and multi-view co-regularization @cite_30 .
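The simplest of the multi-view combination schemes above, majority voting, can be sketched as follows (a toy illustration with invented labels; each view, e.g. the original document and its machine translation, contributes one prediction):

```python
from collections import Counter

def majority_vote(view_predictions):
    """Return the most common label across views; ties go to the earliest view."""
    counts = Counter(view_predictions)
    best = max(counts.values())
    winners = {label for label, c in counts.items() if c == best}
    for pred in view_predictions:  # first view listed wins a tie
        if pred in winners:
            return pred

print(majority_vote(["pos", "pos", "neg"]))  # -> pos
print(majority_vote(["neg", "pos"]))         # -> neg (tie broken by first view)
```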
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_47", "@cite_46", "@cite_20" ], "mid": [ "2952302403", "2125666396", "", "2048679005", "2142742813" ], "abstract": [ "In many multilingual text classification problems, the documents in different languages often share the same set of categories. To reduce the labeling cost of training a classification model for each individual language, it is important to transfer the label knowledge gained from one language to another language by conducting cross language classification. In this paper we develop a novel subspace co-regularized multi-view learning method for cross language text classification. This method is built on parallel corpora produced by machine translation. It jointly minimizes the training error of each classifier in each language while penalizing the distance between the subspace representations of parallel documents. Our empirical study on a large set of cross language text classification tasks shows the proposed method consistently outperforms a number of inductive methods, domain adaptation methods, and multi-view learning methods.", "Cross language text classification is an important learning task in natural language processing. A critical challenge of cross language learning arises from the fact that words of different languages are in disjoint feature spaces. In this paper, we propose a two-step representation learning method to bridge the feature spaces of different languages by exploiting a set of parallel bilingual documents. Specifically, we first formulate a matrix completion problem to produce a complete parallel document-term matrix for all documents in two languages, and then induce a low dimensional cross-lingual document representation by applying latent semantic indexing on the obtained matrix. We use a projected gradient descent algorithm to solve the formulated matrix completion problem with convergence guarantees. 
The proposed method is evaluated by conducting a set of experiments with cross language sentiment classification tasks on Amazon product reviews. The experimental results demonstrate that the proposed learning method outperforms a number of other cross language representation learning methods, especially when the number of parallel bilingual documents is small.", "", "We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice.
", "We address the problem of learning classifiers when observations have multiple views, some of which may not be observed for all examples. We assume the existence of view generating functions which may complete the missing views in an approximate way. This situation corresponds for example to learning text classifiers from multilingual collections where documents are not available in all languages. In that case, Machine Translation (MT) systems may be used to translate each document in the missing languages. We derive a generalization error bound for classifiers learned on examples with multiple artificially created views. Our result uncovers a trade-off between the size of the training set, the number of views, and the quality of the view generating functions. As a consequence, we identify situations where it is more interesting to use multiple views for learning instead of classical single view learning. An extension of this framework is a natural way to leverage unlabeled multi-view data in semi-supervised learning. Experimental results on a subset of the Reuters RCV1 RCV2 collections support our findings by showing that additional views obtained from MT may significantly improve the classification performance in the cases identified by our trade-off." ] }
1705.02073
2613614176
Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, which adapts and extends a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (induced) supervisory labels in a parallel corpus of documents, we train classifiers successfully for new languages in which labeled training data are not available. An adversarial feature adaptation technique is also applied during model training to reduce distribution mismatch. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japanese and Chinese as the unlabeled target languages. The proposed approach performed better than or comparably to other state-of-the-art methods.
Another branch of CLTC methods focuses on representation learning or the mapping of the induced representations in cross-language settings @cite_34 @cite_18 @cite_25 @cite_6 @cite_1 @cite_28 @cite_26 @cite_27 @cite_16 @cite_43 . For example, @cite_9 and @cite_2 used a parallel corpus to learn word alignment probabilities in a pre-processing step. Some other work attempts to find a language-invariant (or interlingua) representation for words or documents in different languages using various techniques, such as latent semantic indexing @cite_3 , kernel canonical correlation analysis @cite_32 , matrix completion @cite_38 , principal component analysis @cite_11 and Bayesian graphical models @cite_42 .
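As a rough, hypothetical illustration of the interlingua idea behind cross-language latent semantic indexing @cite_3 : parallel documents are represented over the concatenated vocabularies of both languages, and an SVD of that matrix yields a shared latent space in which monolingual documents from either language can be compared. The toy corpus and vocabulary below are invented, and numpy is assumed to be available.

```python
import numpy as np

# Rows: parallel documents; columns: concatenated English + French term counts.
# en: cat, dog | fr: chat, chien
X = np.array([
    [2, 0, 2, 0],
    [0, 3, 0, 3],
    [1, 1, 1, 1],
], dtype=float)

# Latent space spanned by the top-k right singular vectors of the
# dual-language term matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
project = lambda doc: Vt[:k] @ doc

# A monolingual English doc about "cat" and a French doc about "chat"
# land in (nearly) the same place in the shared latent space.
en_doc = np.array([1.0, 0, 0, 0])
fr_doc = np.array([0, 0, 1.0, 0])
cos = (project(en_doc) @ project(fr_doc)) / (
    np.linalg.norm(project(en_doc)) * np.linalg.norm(project(fr_doc)))
print(round(cos, 3))  # -> 1.0 (up to floating-point error)
```

Because the "cat" and "chat" columns are identical in the parallel training matrix, their latent projections coincide, which is exactly the cross-language alignment these methods exploit.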
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_26", "@cite_28", "@cite_9", "@cite_42", "@cite_1", "@cite_32", "@cite_6", "@cite_16", "@cite_3", "@cite_43", "@cite_27", "@cite_2", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "2125666396", "", "", "", "", "74584387", "", "2125972593", "2167660864", "", "2301772290", "", "", "", "1980862579", "", "2115924763" ], "abstract": [ "Cross language text classification is an important learning task in natural language processing. A critical challenge of cross language learning arises from the fact that words of different languages are in disjoint feature spaces. In this paper, we propose a two-step representation learning method to bridge the feature spaces of different languages by exploiting a set of parallel bilingual documents. Specifically, we first formulate a matrix completion problem to produce a complete parallel document-term matrix for all documents in two languages, and then induce a low dimensional cross-lingual document representation by applying latent semantic indexing on the obtained matrix. We use a projected gradient descent algorithm to solve the formulated matrix completion problem with convergence guarantees. The proposed method is evaluated by conducting a set of experiments with cross language sentiment classification tasks on Amazon product reviews. The experimental results demonstrate that the proposed learning method outperforms a number of other cross language representation learning methods, especially when the number of parallel bilingual documents is small.", "", "", "", "", "This paper explores bridging the content of two different languages via latent topics. Specifically, we propose a unified probabilistic model to simultaneously model latent topics from bilingual corpora that discuss comparable content and use the topics as features in a cross-lingual, dictionary-less text categorization task. 
Experimental results on multilingual Wikipedia data show that the proposed topic model effectively discovers the topic information from the bilingual corpora, and the learned topics successfully transfer classification knowledge to other languages, for which no labeled training data are available.", "", "The problem of learning a semantic representation of a text document from data is addressed, in the situation where a corpus of unlabeled paired documents is available, each pair being formed by a short English document and its French translation. This representation can then be used for any retrieval, categorization or clustering task, both in a standard and in a cross-lingual setting. By using kernel functions, in this case simple bag-of-words inner products, each part of the corpus is mapped to a high-dimensional space. The correlations between the two spaces are then learnt by using kernel Canonical Correlation Analysis. A set of directions is found in the first and in the second space that are maximally correlated. Since we assume the two representations are completely independent apart from the semantic content, any correlation between them should reflect some semantic similarity. Certain patterns of English words that relate to a specific meaning should correlate with certain patterns of French words corresponding to the same meaning, across the corpus. Using the semantic representation obtained in this way we first demonstrate that the correlations detected between the two versions of the corpus are significantly higher than random, and hence that a representation based on such features does capture statistical patterns that should reflect semantic information. Then we use such representation both in cross-language and in single-language retrieval tasks, observing performance that is consistently and significantly superior to LSI on the same data.", "The lack of Chinese sentiment corpora limits the research progress on Chinese sentiment classification. 
However, there are many freely available English sentiment corpora on the Web. This paper focuses on the problem of cross-lingual sentiment classification, which leverages an available English corpus for Chinese sentiment classification by using the English corpus as training data. Machine translation services are used for eliminating the language gap between the training set and test set, and English features and Chinese features are considered as two independent views of the classification problem. We propose a cotraining approach to making use of unlabeled Chinese data. Experimental results show the effectiveness of the proposed approach, which can outperform the standard inductive classifiers and the transductive classifiers.", "", "We describe a method for fully automated cross-language document retrieval in which no query translation is required. Queries in one language can retrieve documents in other languages (as well as the original language). This is accomplished by a method that automatically constructs a multi-lingual semantic space using Latent Semantic Indexing (LSI). We present strong preliminary test results for our cross-language LSI (CL-LSI) method for a French-English collection. We also provide some evidence that this automatic method performs comparably to a retrieval method based on machine translation (MT-LSI).", "", "", "", "In cross-lingual text classification problems, it is costly and time-consuming to annotate documents for each individual language. To avoid the expensive re-labeling process, domain adaptation techniques can be applied to adapt a learning system trained in one language domain to another language domain. In this paper we develop a transductive subspace representation learning method to address domain adaptation for cross-lingual text classifications. The proposed approach is formulated as a nonnegative matrix factorization problem and solved using an iterative optimization procedure. 
Our empirical study on cross-lingual text classification tasks shows the proposed approach consistently outperforms a number of comparison methods.", "", "Representing documents by vectors that are independent of language enhances machine translation and multilingual text categorization. We use discriminative training to create a projection of documents from multiple languages into a single translingual vector space. We explore two variants to create these projections: Oriented Principal Component Analysis (OPCA) and Coupled Probabilistic Latent Semantic Analysis (CPLSA). Both of these variants start with a basic model of documents (PCA and PLSA). Each model is then made discriminative by encouraging comparable document pairs to have similar vector representations. We evaluate these algorithms on two tasks: parallel document retrieval for Wikipedia and Europarl documents, and cross-lingual text classification on Reuters. The two discriminative variants, OPCA and CPLSA, significantly outperform their corresponding baselines. The largest differences in performance are observed on the task of retrieval when the documents are only comparable and not parallel. The OPCA method is shown to perform best." ] }
1705.02194
2613192121
We consider fractional online covering problems with @math -norm objectives. The problem of interest is of the form @math where @math is the weighted sum of @math -norms and @math is a non-negative matrix. The rows of @math (i.e. covering constraints) arrive online over time. We provide an online @math -competitive algorithm where @math and @math is the maximum of the row sparsity of @math and @math . This is based on the online primal-dual framework where we use the dual of the above convex program. Our result expands the class of convex objectives that admit good online algorithms: prior results required a monotonicity condition on the objective @math which is not satisfied here. This result is nearly tight even for the linear special case. As direct applications we obtain (i) improved online algorithms for non-uniform buy-at-bulk network design and (ii) the first online algorithm for throughput maximization under @math -norm edge capacities.
The online primal-dual framework for linear programs @cite_8 is fairly well understood. Tight results are known for the class of packing and covering LPs @cite_24 @cite_1 , with competitive ratio @math for covering LPs and @math for packing LPs; here @math is the row sparsity and @math is the ratio of the maximum to minimum entries in the constraint matrix. Such LPs are very useful because they correspond to the LP relaxations of many combinatorial optimization problems. Combining the online LP solver with suitable online rounding schemes, good online algorithms have been obtained for many problems, e.g., set cover @cite_23 , group Steiner tree @cite_12 , caching @cite_2 and ad-auctions @cite_7 . Online algorithms for LPs with mixed packing and covering constraints were obtained in @cite_0 ; the competitive ratio was improved in @cite_21 . Such mixed packing-covering LPs were also used to obtain an online algorithm for capacitated facility location @cite_0 . A more complex mixed packing-covering LP was used recently in @cite_4 to obtain online algorithms for non-uniform buy-at-bulk network design; as an application of our result, we obtain a simpler and better (by two log factors) online algorithm for this problem.
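The flavor of these online covering algorithms can be conveyed by a simplified multiplicative-update sketch (the initialization and step size below are illustrative choices, not the exact scheme of the cited papers): constraints arrive one at a time, the solution may only increase, and when a constraint arrives violated, each variable it covers grows at a rate proportional to its coefficient and inversely proportional to its cost until the constraint is satisfied.

```python
def cover_online(costs, rows, step=0.01):
    """Maintain a monotonically growing x with a.x >= 1 for every arrived row a.

    costs: objective coefficients c_j; rows: covering constraints, online.
    """
    n = len(costs)
    x = [0.0] * n
    for a in rows:
        while sum(ai * xi for ai, xi in zip(a, x)) < 1.0:
            for j in range(n):
                if a[j] > 0:
                    # multiplicative-style growth: cheap, high-coefficient
                    # variables increase fastest (the 1/n seed lets x leave 0)
                    x[j] += step * a[j] / costs[j] * (x[j] + 1.0 / n)
    return x

x = cover_online([1.0, 2.0], [[1, 0], [1, 1]])
assert x[0] >= 1.0 - 1e-9            # first constraint forces x0 up to 1
assert x[0] + x[1] >= 1.0 - 1e-9     # second constraint already satisfied
print([round(v, 2) for v in x])
```

The key property mirrored here is monotonicity: past decisions are never revoked, which is what makes the primal-dual charging arguments for the competitive ratio go through.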
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_21", "@cite_1", "@cite_24", "@cite_0", "@cite_23", "@cite_2", "@cite_12" ], "mid": [ "", "1496647032", "2142339200", "", "2274097845", "2122886291", "", "2074121653", "2905904719", "1973444128" ], "abstract": [ "", "We study the online ad-auctions problem introduced by [15]. We design a (1 - 1 e)-competitive (optimal) algorithm for the problem, which is based on a clean primal-dual approach, matching the competitive factor obtained in [15]. Our basic algorithm along with its analysis are very simple. Our results are based on a unified approach developed earlier for the design of online algorithms [7,8]. In particular, the analysis uses weak duality rather than a tailor made (i.e., problem specific) potential function. We show that this approach is useful for analyzing other classical online algorithms such as ski rental and the TCP-acknowledgement problem. We are confident that the primal-dual method will prove useful in other online scenarios as well. The primal-dual approach enables us to extend our basic ad-auctions algorithm in a straight forward manner to scenarios in which additional information is available, yielding improved worst case competitive factors. In particular, a scenario in which additional stochastic information is available to the algorithm, a scenario in which the number of interested buyers in each product is bounded by some small number d, and a general risk management framework.", "The primal—dual method is a powerful algorithmic technique that has proved to be extremely useful for a wide variety of problems in the area of approximation algorithms for NP-hard problems. The method has its origins in the realm of exact algorithms, e.g., for matching and network flow. 
In the area of approximation algorithms, the primal—dual method has emerged as an important unifying design methodology, starting from the seminal work of Goemans and Williamson [60] We show in this survey how to extend the primal—dual method to the setting of online algorithms, and show its applicability to a wide variety of fundamental problems. Among the online problems that we consider here are the weighted caching problem, generalized caching, the set-cover problem, several graph optimization problems, routing, load balancing, and the problem of allocating ad-auctions. We also show that classic online problems such as the ski rental problem and the dynamic TCP-acknowledgement problem can be solved optimally using a simple primal—dual approach. The primal—dual method has several advantages over existing methods. First, it provides a general recipe for the design and analysis of online algorithms. The linear programming formulation helps detecting the difficulties of the online problem, and the analysis of the competitive ratio is direct, without a potential function appearing \"out of nowhere.\" Finally, since the analysis is done via duality, the competitiveness of the online algorithm is with respect to an optimal fractional solution, which can be advantageous in certain scenarios.", "", "A covering integer program (CIP) is a mathematical program of the form min c⊤ x ∣ Ax ≥ 1, 0 ≤ x ≤ u, x ∈ ℤn , where all entries in A, c, u are nonnegative. In the online setting, the constraints (i.e., the rows of the constraint matrix A) arrive over time, and the algorithm can only increase the coordinates of x to maintain feasibility. As an intermediate step, we consider solving the covering linear program (CLP) online, where the integrality constraints are dropped. Our main results are (a) an O(log k)-competitive online algorithm for solving the CLP, and (b) an O(log k · log l)-competitive randomized online algorithm for solving the CIP. 
Here k ≤ n and l ≤ m respectively denote the maximum number of nonzero entries in any row and column of the constraint matrix A. Our algorithm is based on the online primal-dual paradigm, where a novel ingredient is to allow dual variables to increase and decrease throughout the course of the algorithm. It is known that this result is the best possible for polynomial-t...", "We study a wide range of online covering and packing optimization problems. In an online covering problem, a linear cost function is known in advance, but the linear constraints that define the feasible solution space are given one by one, in rounds. In an online packing problem, the profit function as well as the packing constraints are not known in advance. In each round additional information (i.e., a new variable) about the profit function and the constraints is revealed. An online algorithm needs to maintain a feasible solution in each round; in addition, the solutions generated over the different rounds need to satisfy a monotonicity property. We provide general deterministic primal-dual algorithms for online fractional covering and packing problems. We also provide deterministic algorithms for several integral online covering and packing problems. Our algorithms are designed via a novel online primal-dual technique and are evaluated via competitive analysis.", "", "Let @math be a ground set of @math elements, and let @math be a family of subsets of @math , @math , with a positive cost @math associated with each @math . Consider the following online version of the set cover problem, described as a game between an algorithm and an adversary. An adversary gives elements to the algorithm from @math one by one. Once a new element is given, the algorithm has to cover it by some set of @math containing it. 
We assume that the elements of @math and the members of @math are known in advance to the algorithm; however, the set @math of elements given by the adversary is not known in advance to the algorithm. (In general, @math may be a strict subset of @math .) The objective is to minimize the total cost of the sets chosen by the algorithm. Let @math denote the family of sets in @math that the algorithm chooses. At the end of the game the adversary also produces (offline) a family of sets @math that covers @math . The performance of the algorithm is the ratio between the cost of @math and the cost of @math . The maximum ratio, taken over all input sequences, is the competitive ratio of the algorithm. We present an @math competitive deterministic algorithm for the problem and establish a nearly matching @math lower bound for all interesting values of @math and @math . The techniques used are motivated by similar techniques developed in computational learning theory for online prediction (e.g., the WINNOW algorithm) together with a novel way of converting a fractional solution into a deterministic online algorithm.", "We consider online algorithms for the generalized caching problem. Here we are given a cache of size @math and pages with arbitrary sizes and fetching costs. Given a request sequence of pages, the goal is to minimize the total cost of fetching the pages into the cache. Our main result is an online algorithm with competitive ratio @math , which gives the first @math competitive algorithm for the problem. We also give improved @math -competitive algorithms for the special cases of the bit model and fault model, improving upon the previous @math guarantees due to Irani [Proceedings of the 29th Annual ACM Symposium on Theory of Computing, 1997, pp. 701-710]. Our algorithms are based on an extension of the online primal-dual framework introduced by Buchbinder and Naor [Math. Oper. Res., 34 (2009), pp. 270-286] and involve two steps. 
First, we obtain an @math -competitive fractional algorithm based on solving online an LP formulation strengthened with exponentially many knapsack cover constraints. Second, we design a suitable online rounding procedure to convert this online fractional algorithm into a randomized algorithm. Our techniques provide a unified framework for caching algorithms and are substantially simpler than those previously used.", "We study a wide range of online graph and network optimization problems, focusing on problems that arise in the study of connectivity and cuts in graphs. In a general online network design problem, we have a communication network known to the algorithm in advance. What is not known in advance are the connectivity (bandwidth) or cut demands between vertices in the network which arrive online.We develop a unified framework for designing online algorithms for problems involving connectivity and cuts. We first present a general O(log m)-competitive deterministic algorithm for generating a fractional solution that satisfies the online connectivity or cut demands, where m is the number of edges in the graph. This may be of independent interest for solving fractional online bandwidth allocation problems, and is applicable to both directed and undirected graphs. We then show how to obtain integral solutions via an online rounding of the fractional solution. This part of the framework is problem dependent, and applies various tools including results on approximate max-flow min-cut for multicommodity flow, the Hierarchically Separated Trees (HST) method and its extensions, certain rounding techniques for dependent variables, and Racke's new hierarchical decomposition of graphs.Specifically, our results for the integral case include an O(log mlog n)-competitive randomized algorithm for the online nonmetric facility location problem and for a generalization of the problem called the multicast problem. 
In the nonmetric facility location problem, m is the number of facilities and n is the number of clients. The competitive ratio is nearly tight. We also present an O(log2nlog k)-competitive randomized algorithm for the online group Steiner problem in trees and an O(log3nlog k)-competitive randomized algorithm for the problem in general graphs, where n is the number of vertices in the graph and k is the number of groups. Finally, we design a deterministic O(log3nlog log n)-competitive algorithm for the online multi-cut problem." ] }
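The multiplicative-update scheme underlying the online primal-dual covering results surveyed above can be sketched in a few lines. This is an illustrative discretization in the style of Buchbinder and Naor, not the algorithm of any single cited paper; the step size `delta` and the exact update rule are assumptions.

```python
def online_fractional_covering(rows, c, delta=0.01):
    """Maintain a monotonically growing fractional solution x >= 0.

    Each arriving covering constraint is a coefficient list `a` requiring
    a . x >= 1; the variables in its support are raised multiplicatively
    (weighted by a_j / c_j) until the constraint is satisfied.
    """
    n = len(c)
    x = [0.0] * n
    for a in rows:
        support = [j for j in range(n) if a[j] > 0]
        d = len(support)  # row sparsity of this constraint
        while sum(a[j] * x[j] for j in support) < 1.0:
            for j in support:
                # multiplicative growth plus an additive 1/d seed term so
                # that x_j can leave zero; both scaled by the cost c_j
                x[j] += delta * a[j] * (x[j] + 1.0 / d) / c[j]
    return x
```

As `delta` tends to zero this mimics the continuous-time update whose analysis gives the logarithmic (in row sparsity) competitive ratios discussed above; crucially, the coordinates of `x` only ever increase, as the online model requires.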
1705.01906
2610860456
We introduce the concept of derivate-based component-trees for images with an arbitrary number of channels. The approach is a natural extension of the classical component-tree devoted to gray-scale images. The similar structure enables the translation of many gray-level image processing techniques based on the component-tree to hyperspectral and color images. As an example application, we present an image segmentation approach that extracts Maximally Stable Homogeneous Regions (MSHR). The approach is very similar to MSER but can be applied to images with an arbitrary number of channels. As opposed to MSER, our approach implicitly segments regions which are both lighter and darker than their background for gray-scale images and can be used in OCR applications where MSER will fail. We introduce a local flooding-based immersion for the derivate-based component-tree construction which is linear in the number of pixels. In the experiments, we show that the runtime scales favorably with an increasing number of channels and may improve algorithms which build on MSER.
MSER have a wide range of applications, ranging from stereo feature point extraction @cite_5 over optical character recognition (OCR) @cite_15 to image tracking @cite_20 . Motivated by their success on gray-value image processing applications, there have been attempts to extend MSER to multi-channel images; Chavez and Gustafson @cite_17 transform the RGB image to the HSV color space and extract gray-value MSER on the single channels separately. Forssén @cite_10 overcomes the problem that multi-channel images cannot be totally ordered by using pixel differences in RGB images as opposed to the RGB values directly. This allows the extraction of so-called Maximally Stable Color Regions (MSCR), which are conceptually similar to MSERs. Although no component-tree is constructed in the process, the idea of using pixel differences is appealing, since it does not require a user-defined partial ordering and can further be trivially extended to images with an arbitrary number of channels. Unfortunately, the approach is computationally demanding and has completely different parameters from MSER.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2124404372", "2061802763", "2005107673", "", "2143892485" ], "abstract": [ "Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained.", "An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). 
The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.", "This paper introduces a novel colour-based affine co-variant region detector. Our algorithm is an extension of the maximally stable extremal region (MSER) to colour. The extension to colour is done by looking at successive time-steps of an agglomerative clustering of image pixels. The selection of time-steps is stabilised against intensity scalings and image blur by modelling the distribution of edge magnitudes. The algorithm contains a novel edge significance measure based on a Poisson image noise model, which we show performs better than the commonly used Euclidean distance. We compare our algorithm to the original MSER detector and a competing colour-based blob feature detector, and show through a repeatability test that our detector performs better. 
We also extend the state of the art in feature repeatability tests, by using scenes consisting of two planes where one is piecewise transparent. This new test is able to evaluate how stable a feature is against changing backgrounds.", "", "In this paper we present extensions to Maximally Stable Extremal Regions that incorporate color information. Our extended interest region detector produces regions that are robust with respect to illumination, background, JPEG compression, and other common sources of image noise. The algorithm can be implemented on a distributed system to run at the same speed as the MSER algorithm. Our methods are compared against a standard MSER base-line. Our approach gives comparable or improved results when tested in various scenarios from the CAVIAR standard data set for object tracking." ] }
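To make the "immersion" intuition behind component-trees and MSER concrete, here is a minimal sketch in plain Python (4-connectivity; the function name and the toy setup are ours, not the cited algorithms): level sets of a gray-scale image are flooded threshold by threshold, and an MSER-style detector would then keep the components whose area changes slowly across thresholds.

```python
def components_at_threshold(img, t):
    """Connected components (4-connectivity) of the level set {p : img[p] <= t}.

    `img` is a list of lists of gray values; returns a list of components,
    each a list of (row, col) pixel coordinates.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for i in range(h):
        for j in range(w):
            if img[i][j] <= t and not seen[i][j]:
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:  # flood fill one component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                                img[ny][nx] <= t and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps
```

Sweeping `t` over all gray levels and recording how each component grows yields the component-tree; MSER selects components whose size is locally stable under small changes of `t`.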
1705.01921
2610731166
We propose the Recurrent Soft Attention Model, which integrates the visual attention from the original image into an LSTM memory cell through a down-sample network. The model recurrently transmits visual attention to the memory cells for glimpse mask generation, which is a more natural way to integrate and exploit attention in general object detection and recognition problems. We test our model under the metric of the top-1 accuracy on the CIFAR-10 dataset. The experiment shows that our down-sample network and feedback mechanism play an effective role in the whole network structure.
Our work is built on some basic research @cite_2 @cite_1 on recurrent attention models and reinforcement learning. Mnih @cite_2 uses a reinforcement learning algorithm for training the locator, while Ba @cite_1 uses a Monte Carlo algorithm. However, these works all focus on the recognition of digit numbers, which have clear features for attention generalization and classification. Thus, their ability to recognize common daily-life objects is doubtful. On the other hand, inspired by the soft attention concept proposed by Xu @cite_4 , we built our soft attention model in a more natural way for attention generalization, which proves to be effective.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_2" ], "mid": [ "1484210532", "2950178297", "2951527505" ], "abstract": [ "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. 
While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so." ] }
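The "soft attention" idea credited to Xu et al. above can be sketched as a differentiable weighted average over feature-map locations. The bilinear scoring and all shapes below are illustrative assumptions, not the exact parameterization of any cited model.

```python
import numpy as np

def soft_attention_glimpse(features, hidden, W):
    """features: (L, D) flattened conv feature map over L locations,
    hidden: (H,) recurrent state, W: (D, H) assumed scoring parameters.

    Returns the attention-weighted glimpse vector and the location weights.
    """
    scores = features @ (W @ hidden)   # (L,) relevance of each location
    scores = scores - scores.max()     # stabilize the softmax numerically
    alpha = np.exp(scores)
    alpha = alpha / alpha.sum()        # attention distribution over locations
    glimpse = alpha @ features         # (D,) expected feature under alpha
    return glimpse, alpha
```

Because every step is differentiable, the attention weights can be trained by ordinary backpropagation, in contrast to the non-differentiable "hard" glimpse policies trained with reinforcement learning.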
1705.02085
2952493207
One of the most efficient methods in collaborative filtering is matrix factorization, which finds the latent vector representations of users and items based on the ratings of users to items. However, a matrix factorization based algorithm suffers from the cold-start problem: it cannot find latent vectors for items for which no previous ratings are available. This paper utilizes click data, which can be collected in abundance, to address the cold-start problem. We propose a probabilistic item embedding model that learns item representations from click data, and a model named EMB-MF, that connects it with a probabilistic matrix factorization for rating prediction. The experiments on three real-world datasets demonstrate that the proposed model is not only effective in recommending items with no previous ratings, but also outperforms competing methods, especially when the data is very sparse.
Wang et al. @cite_8 proposed Expectation-Maximization Collaborative Filtering (EMCF), which exploits both implicit and explicit feedback for recommendation. To predict ratings for an item that does not have any previous ratings, EMCF infers them from the ratings of its neighbors according to click data.
{ "cite_N": [ "@cite_8" ], "mid": [ "91699463" ], "abstract": [ "Collaborative Filtering (CF) is a popular strategy for recommender systems, which infers users' preferences typically using either explicit feedback (e.g., ratings) or implicit feedback (e.g., clicks). Explicit feedback is more accurate, but the quantity is not sufficient; whereas implicit feedback has an abundant quantity, but can be fairly inaccurate. In this paper, we propose a novel method, Expectation-Maximization Collaborative Filtering (EMCF), based on matrix factorization. The contributions of this paper include: first, we combine explicit and implicit feedback together in EMCF to infer users' preferences by learning latent factor vectors from matrix factorization; second, we observe four different cases of implicit feedback in terms of the distribution of latent factor vectors, and then propose different methods to estimate implicit feedback for different cases in EMCF; third, we develop an algorithm for EMCF to iteratively propagate the estimations of implicit feedback and update the latent factor vectors in order to fully utilize implicit feedback. We designed experiments to compare EMCF with other CF methods. The experimental results show that EMCF outperforms other methods by combining explicit and implicit feedback." ] }
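The neighbor-based inference described for EMCF can be illustrated with a small sketch. This is our own simplification of the idea, not the EM procedure of the cited paper: a cold item's rating for a user is estimated from the user's ratings of items that co-occur with it in click sessions.

```python
def infer_cold_rating(user_ratings, click_neighbors, cold_item):
    """user_ratings: dict item -> rating given by one user.
    click_neighbors: dict item -> set of items co-clicked with it.

    Averages the user's ratings over the cold item's click neighbors;
    returns None when no rated neighbor exists.
    """
    rated = [user_ratings[i]
             for i in click_neighbors.get(cold_item, set())
             if i in user_ratings]
    return sum(rated) / len(rated) if rated else None
```

The full EMCF model goes further by propagating such estimates and the latent factors iteratively, but the cold-start intuition, rating mass flowing from rated neighbors through click co-occurrence, is already visible here.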
1705.02085
2952493207
One of the most efficient methods in collaborative filtering is matrix factorization, which finds the latent vector representations of users and items based on the ratings of users to items. However, a matrix factorization based algorithm suffers from the cold-start problem: it cannot find latent vectors for items for which no previous ratings are available. This paper utilizes click data, which can be collected in abundance, to address the cold-start problem. We propose a probabilistic item embedding model that learns item representations from click data, and a model named EMB-MF, that connects it with a probabilistic matrix factorization for rating prediction. The experiments on three real-world datasets demonstrate that the proposed model is not only effective in recommending items with no previous ratings, but also outperforms competing methods, especially when the data is very sparse.
Item2Vec @cite_2 is a neural network based model for learning item embedding vectors from item co-occurrence information. In @cite_7 , the authors applied a word embedding technique, factorizing the shifted PPMI matrix @cite_6 , to learn item embedding vectors from click data. However, using these vectors directly for rating prediction is not appropriate because click data does not exactly reflect the preferences of users. Instead, we combine item embedding with MF in a way that allows rating data to contribute to item representations.
{ "cite_N": [ "@cite_6", "@cite_7", "@cite_2" ], "mid": [ "2125031621", "2508504774", "" ], "abstract": [ "We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by , and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization.", "Matrix factorization (MF) models and their extensions are standard in modern recommender systems. MF models decompose the observed user-item interaction matrix into user and item latent factors. In this paper, we propose a co-factorization model, CoFactor, which jointly decomposes the user-item interaction matrix and the item-item co-occurrence matrix with shared item latent factors. For each pair of items, the co-occurrence matrix encodes the number of users that have consumed both items. CoFactor is inspired by the recent success of word embedding models (e.g., word2vec) which can be interpreted as factorizing the word co-occurrence matrix. We show that this model significantly improves the performance over MF models on several datasets with little additional computational overhead. 
We provide qualitative results that explain how CoFactor improves the quality of the inferred factors and characterize the circumstances where it provides the most significant improvements.", "" ] }
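The shifted-PPMI factorization referenced above can be sketched with NumPy. The symmetric embedding construction `U * sqrt(S)` follows Levy and Goldberg's analysis of skip-gram with negative sampling; the helper name, default shift, and matrix shapes are our own assumptions.

```python
import numpy as np

def sppmi_embeddings(C, shift_k=5, dim=2):
    """C: (n, n) item co-occurrence counts, e.g. from click sessions.

    Builds the shifted positive PMI matrix max(PMI - log k, 0) and
    factorizes it with a truncated SVD to obtain item embeddings.
    """
    C = np.asarray(C, dtype=float)
    total = C.sum()
    row = C.sum(axis=1, keepdims=True)   # (n, 1) marginal counts
    col = C.sum(axis=0, keepdims=True)   # (1, n) marginal counts
    with np.errstate(divide="ignore", invalid="ignore"):
        # PMI(i, j) = log( p(i, j) / (p(i) p(j)) ); -inf where C is zero
        pmi = np.where(C > 0, np.log(C * total / (row * col)), -np.inf)
    sppmi = np.maximum(pmi - np.log(shift_k), 0.0)  # shift and clip at 0
    U, S, _ = np.linalg.svd(sppmi)
    return U[:, :dim] * np.sqrt(S[:dim]), sppmi
```

Items that are frequently co-clicked relative to their marginals receive large SPPMI entries, and the SVD compresses that structure into low-dimensional embedding vectors.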
1705.02009
2612535783
Social media such as tweets are emerging as platforms contributing to situational awareness during disasters. Information shared on Twitter by both the affected population (e.g., requesting assistance, warning) and those outside the impact zone (e.g., providing assistance) would help first responders, decision makers, and the public to understand the situation first-hand. Effective use of such information requires timely selection and analysis of tweets that are relevant to a particular disaster. Even though abundant tweets are promising as a data source, it is challenging to automatically identify relevant messages since tweets are short and unstructured, resulting in unsatisfactory classification performance of conventional learning-based approaches. Thus, we propose a simple yet effective algorithm to identify relevant messages based on matching keywords and hashtags, and provide a comparison between matching-based and learning-based approaches. To evaluate the two approaches, we put them into a framework specifically proposed for analyzing disaster-related tweets. Analysis results on eleven datasets with various disaster types show that our technique provides relevant tweets of higher quality and more interpretable results of sentiment analysis tasks when compared to the learning approach.
Social media such as Twitter and Facebook have been widely regarded as active communication channels during emergency events such as disasters caused by natural hazards. The Federal Emergency Management Agency (FEMA) identifies social media as an essential component of future disaster management @cite_29 . Tweets sent during catastrophic events have been known to contain information that contributes to situational awareness @cite_2 , and a recent survey of studies for analyzing social media in disaster response can be found in @cite_30 . Therefore, social media data analytics for disaster response has gained extensive interest from the research community. Existing studies focus on extracting disaster-related information from socially-generated content during natural disasters, from which actionable information can be disseminated to disaster relief workers @cite_9 . More recent studies build classifiers for identifying earthquake-relevant tweets @cite_19 , or for classifying tweets as informative or uninformative @cite_26 @cite_27 @cite_11 @cite_8 . Furthermore, tweets can be categorized by type @cite_17 @cite_21 @cite_20 @cite_1 (i.e., affected individuals, infrastructure and utilities, donations and volunteer, caution and advice, sympathy and condolence) or by information source @cite_25 @cite_13 @cite_33 @cite_14 @cite_17 (i.e., eyewitness, government, NGOs, business, media).
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_14", "@cite_33", "@cite_8", "@cite_9", "@cite_29", "@cite_21", "@cite_17", "@cite_1", "@cite_19", "@cite_27", "@cite_2", "@cite_13", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "1934362406", "", "2129211744", "2115893922", "2917065785", "10004740", "", "2187303655", "2081212507", "1981048805", "", "2529142661", "71912906", "2135298385", "2098730244", "2595304170", "2535764243" ], "abstract": [ "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. 
We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "", "Social media platforms such as Twitter garner significant attention from very large audiences in response to real-world events. Automatically establishing who is participating in information production or conversation around events can improve event content consumption, help expose the stakeholders in the event and their varied interests, and even help steer subsequent coverage of an event by journalists. In this paper, we take initial steps towards building an automatic classifier for user types on Twitter, focusing on three core user categories that are reflective of the information production and consumption processes around events: organizations, journalists media bloggers, and ordinary individuals. Exploration of the user categories on a range of events shows distinctive characteristics in terms of the proportion of each user type, as well as differences in the nature of content each shared around the events.", "Social media is already a fixture for reporting for many journalists, especially around breaking news events where non-professionals may already be on the scene to share an eyewitness report, photo, or video of the event. At the same time, the huge amount of content posted in conjunction with such events serves as a challenge to finding interesting and trustworthy sources in the din of the stream. In this paper we develop and investigate new methods for filtering and assessing the verity of sources found through social media by journalists. We take a human centered design approach to developing a system, SRSR (\"Seriously Rapid Source Review\"), informed by journalistic practices and knowledge of information production in events. We then used the system, together with a realistic reporting scenario, to evaluate the filtering and visual cue features that we developed. 
Our evaluation offers insights into social media information sourcing practices and challenges, and highlights the role technology can play in the solution.", "", "During times of disasters online users generate a significant amount of data, some of which are extremely valuable for relief efforts. In this paper, we study the nature of social-media content generated during two different natural disasters. We also train a model based on conditional random fields to extract valuable information from such content. We evaluate our techniques over our two datasets through a set of carefully designed experiments. We also test our methods over a non-disaster dataset to show that our extraction model is useful for extracting information from socially-generated content in general.", "", "Microblogging sites such as Twitter can play a vital role in spreading information during “natural” or man-made disasters. But the volume and velocity of tweets posted during crises today tend to be extremely high, making it hard for disaster-affected communities and professional emergency responders to process the information in a timely manner. Furthermore, posts tend to vary highly in terms of their subjects and usefulness; from messages that are entirely off-topic or personal in nature, to messages containing critical information that augments situational awareness. Finding actionable information can accelerate disaster response and alleviate both property and human losses. In this paper, we describe automatic methods for extracting information from microblog posts. Specifically, we focus on extracting valuable “information nuggets”, brief, self-contained information items relevant to disaster response. Our methods leverage machine learning methods for classifying posts and information extraction. 
Our results, validated over one large disaster-related dataset, reveal that a careful design can yield an effective system, paving the way for more sophisticated data analysis and visualization systems.", "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.", "Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we have sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter was the biggest problem the users have faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.", "", "During natural or man-made disasters, humanitarian response organizations look for useful information to support their decision-making processes. Social media platforms such as Twitter have been considered as a vital source of useful information for disaster response and management. Despite advances in natural language processing techniques, processing short and informal Twitter messages is a challenging task. 
In this paper, we propose to use Deep Neural Network (DNN) to address two types of information needs of response organizations: 1) identifying informative tweets and 2) classifying them into topical classes. DNNs use distributed representation of words and learn the representation as well as higher level features automatically for the classification task. We propose a new online algorithm based on stochastic gradient descent to train DNNs in an online fashion during disaster situations. We test our models using a crisis-related real-world Twitter dataset.", "", "In this paper we examine the information sharing practices of people living in cities amid armed conflict. We describe the volume and frequency of microblogging activity on Twitter from four cities afflicted by the Mexican Drug War, showing how citizens use social media to alert one another and to comment on the violence that plagues their communities. We then investigate the emergence of civic media \"curators,\" individuals who act as \"war correspondents\" by aggregating and disseminating information to large numbers of people on social media. We conclude by outlining the implications of our observations for the design of civic media systems in wartime.", "This empirical study of \"digital volunteers\" in the aftermath of the January 12, 2010 Haiti earthquake describes their behaviors and mechanisms of self-organizing in the information space of a microblogging environment, where collaborators were newly found and distributed across continents. The paper explores the motivations, resources, activities and products of digital volunteers. It describes how seemingly small features of the technical environment offered structure for self-organizing, while considering how the social-technical milieu enabled individual capacities and collective action. 
Using social theory about self-organizing, the research offers insight about features of coordination within a setting of massive interaction.", "In case of emergencies (e.g., earthquakes, flooding), rapid responses are needed in order to address victims’ requests for help. Social media used around crises involves self-organizing behavior that can produce accurate results, often in advance of official communications. This allows affected population to send tweets or text messages, and hence, make them heard. The ability to classify tweets and text messages automatically, together with the ability to deliver the relevant information to the appropriate personnel are essential for enabling the personnel to timely and efficiently work to address the most urgent needs, and to understand the emergency situation better. In this study, we developed a reusable information technology infrastructure, called Enhanced Messaging for the Emergency Response Sector (EMERSE), which classifies and aggregates tweets and text messages about the Haiti disaster relief so that non-governmental organizations, relief workers, people in Haiti, and their friends and families can easily access them.", "The first objective towards the effective use of microblogging services such as Twitter for situational awareness during the emerging disasters is discovery of the disaster-related postings. Given the wide range of possible disasters, using a pre-selected set of disaster-related keywords for the discovery is suboptimal. An alternative that we focus on in this work is to train a classifier using a small set of labeled postings that are becoming available as a disaster is emerging. Our hypothesis is that utilizing large quantities of historical microblogs could improve the quality of classification, as compared to training a classifier only on the labeled data. 
We propose to use unlabeled microblogs to cluster words into a limited number of clusters and use the word clusters as features for classification. To evaluate the proposed semi-supervised approach, we used Twitter data from 6 different disasters. Our results indicate that when the number of labeled tweets is 100 or less, the proposed approach is superior to the standard classification based on the bag-of-words feature representation. Our results also reveal that the choice of the unlabeled corpus, the choice of word clustering algorithm, and the choice of hyperparameters can have a significant impact on the classification accuracy." ] }
1705.02009
2612535783
Social media such as tweets are emerging as platforms contributing to situational awareness during disasters. Information shared on Twitter by both the affected population (e.g., requesting assistance, warning) and those outside the impact zone (e.g., providing assistance) would help first responders, decision makers, and the public to understand the situation first-hand. Effective use of such information requires timely selection and analysis of tweets that are relevant to a particular disaster. Even though abundant tweets are promising as a data source, it is challenging to automatically identify relevant messages since tweets are short and unstructured, resulting in unsatisfactory classification performance of conventional learning-based approaches. Thus, we propose a simple yet effective algorithm to identify relevant messages based on matching keywords and hashtags, and provide a comparison between matching-based and learning-based approaches. To evaluate the two approaches, we put them into a framework specifically proposed for analyzing disaster-related tweets. Analysis results on eleven datasets with various disaster types show that our technique provides relevant tweets of higher quality and more interpretable results of sentiment analysis tasks when compared to the learning-based approach.
These studies typically used machine learning tools to filter disaster-related tweets. An issue with such approaches is the lack of appropriate interpretation of the results, e.g., why a technique works well on some data (high precision and recall) but does not perform as well on others @cite_23 @cite_12 . In addition, the studies tend to examine one particular disaster and imply that the findings are generalizable to others @cite_34 . However, it is known that information shared on Twitter varies considerably from one crisis to another @cite_5 @cite_31 @cite_16 . We aim to narrow the gap by customizing each component of our framework for a particular disaster.
{ "cite_N": [ "@cite_16", "@cite_23", "@cite_5", "@cite_31", "@cite_34", "@cite_12" ], "mid": [ "2151098288", "", "2097797505", "641710284", "2605875565", "2579353374" ], "abstract": [ "The use of social media to communicate timely information during crisis situations has become a common practice in recent years. In particular, the one-to-many nature of Twitter has created an opportunity for stakeholders to disseminate crisis-relevant messages, and to access vast amounts of information they may not otherwise have. Our goal is to understand what affected populations, response agencies and other stakeholders can expect-and not expect-from these data in various types of disaster situations. Anecdotal evidence suggests that different types of crises elicit different reactions from Twitter users, but we have yet to see whether this is in fact the case. In this paper, we investigate several crises-including natural hazards and human-induced disasters-in a systematic manner and with a consistent methodology. This leads to insights about the prevalence of different information types and sources across a variety of crisis situations.", "", "A microblogging service like Twitter continues to surge in importance as a means of sharing information in social networks. In the medical domain, several works have shown the potential of detecting public health events (i.e., infectious disease outbreaks) using Twitter messages or tweets. Given its real-time nature, Twitter can enhance early outbreak warning for public health authorities in order that a rapid response can take place. Most of previous works on detecting outbreaks in Twitter simply analyze tweets matched disease names and or locations of interests. However, the effectiveness of such method is limited for two main reasons. First, disease names are highly ambiguous, i.e., referring slangs or non health-related contexts. 
Second, the characteristics of infectious diseases are highly dynamic in time and place, namely, strongly time-dependent and vary greatly among different regions. In this paper, we propose to analyze the temporal diversity of tweets during the known periods of real-world outbreaks in order to gain insight into a temporary focus on specific events. More precisely, our objective is to understand whether the temporal diversity of tweets can be used as indicators of outbreak events, and to which extent. We employ an efficient algorithm based on sampling to compute the diversity statistics of tweets at particular time. To this end, we conduct experiments by correlating temporal diversity with the estimated event magnitude of 14 real-world outbreak events manually created as ground truth. Our analysis shows that correlation results are diverse among different outbreaks, which can reflect the characteristics (severity and duration) of outbreaks.", "Keywords: Crisis Informatics ; Social Media Collection ; Social Media Analysis Reference EPFL-CONF-203561 Record created on 2014-11-26, modified on 2017-05-12", "", "In this paper, we introduce Tweedr, a Twitter-mining tool that extracts actionable information for disaster relief workers during natural disasters. The Tweedr pipeline consists of three main parts: classification, clustering and extraction. In the classification phase, we use a variety of classification methods (sLDA, SVM, and logistic regression) to identify tweets reporting damage or casualties. In the clustering phase, we use filters to merge tweets that are similar to one another; and finally, in the extraction phase, we extract tokens and phrases that report specific information about different classes of infrastructure damage, damage types, and casualties. We empirically validate our approach with tweets collected from 12 different crises in the United States since 2006." ] }
1705.01941
2614104402
In this paper we propose to extend the definition of fuzzy transform in order to consider an interpolation of models that are richer than the standard fuzzy transform. We focus on polynomial models, linear in particular, although the approach can be easily applied to other classes of models. As an example of application, we consider the smoothing of time series in finance. A comparison with moving averages is performed using the NIFTY 50 stock market index. Experimental results show that a regression driven fuzzy transform (RDFT) provides a smoothing approximation of time series, similar to moving average, but with a smaller delay. This is an important feature for finance and other applications, where time plays a key role.
A moving average is a calculation that analyzes data points by creating a series of averages of different subsets of the full data set. There are different types of moving averages, among them Simple, Cumulative, Weighted, and Exponential @cite_20 .
{ "cite_N": [ "@cite_20" ], "mid": [ "2159848758" ], "abstract": [ "Abstract Contemporary state of the art of financial time series modelling is connected to the Efficient Market Hypothesis according to which “prices fully reflect all available information” and hence future evolutions are unforecastable. In simple terms, EMH states that by predicting the future development we are not able to achieve the profits superior to the profits of the market index when these are adjusted for the risk and transactions costs are deducted. On the other hand, there exist works providing evidences that markets are not efficient. In these works, however, the strategies (or technical trading rules) are demonstrated to provide the extra performance in short term only and then the extra performance vanishes. In the paper we apply moving averages in order to define automated trading system and then analyze its profitability in Czech stock market. The results are statistically tested and statistical inference about the applicability of such an automated trading system in Czech stock market is made." ] }
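As a concrete illustration of the related-work paragraph above, here is a minimal sketch of the four moving averages it names (simple, cumulative, weighted, exponential); the function names and the toy price series are illustrative choices, not taken from the cited paper.

```python
# Sketch of the four moving averages named in the text, applied to a
# toy price series (all numbers are made up for illustration).

def sma(xs, n):
    """Simple moving average: unweighted mean over a sliding window of n points."""
    return [sum(xs[i - n + 1:i + 1]) / n for i in range(n - 1, len(xs))]

def cma(xs):
    """Cumulative moving average: mean of all points seen so far."""
    out, total = [], 0.0
    for i, x in enumerate(xs, start=1):
        total += x
        out.append(total / i)
    return out

def wma(xs, n):
    """Weighted moving average: linearly decreasing weights n, n-1, ..., 1,
    so the most recent point counts the most."""
    denom = n * (n + 1) / 2
    return [sum((n - k) * xs[i - k] for k in range(n)) / denom
            for i in range(n - 1, len(xs))]

def ema(xs, alpha):
    """Exponential moving average with smoothing factor alpha in (0, 1]."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

prices = [10.0, 11.0, 12.0, 11.0, 13.0, 14.0]
print(sma(prices, 3))   # three-point simple moving average
print(ema(prices, 0.5)) # exponential smoothing, alpha = 0.5
```

The EMA reacts to new data with less lag than the SMA for the same effective window, which is the delay trade-off the paper's comparison against RDFT revolves around.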
1705.02156
2614084750
Popularity in social media is an important objective for professional users (e.g. companies, celebrities, and public figures, etc). A simple yet prominent metric utilized to measure the popularity of a user is the number of fans or followers she succeeds in attracting to her page. Popularity is influenced by several factors, and identifying them is an interesting research topic. This paper aims to understand this phenomenon in social media by exploring the popularity evolution for professional users in Facebook. To this end, we implemented a crawler and monitored the popularity evolution trend of the 8k most popular professional users on Facebook over a period of 14 months. The collected dataset includes around 20 million popularity values and 43 million posts. We characterized different popularity evolution patterns by clustering the users' temporal number of fans and studied them from various perspectives, including their categories and levels of activity. Our observations show that being active and famous correlates positively with the popularity trend.
Simultaneously, many studies have tried to model and forecast popularity, especially for content @cite_21 . Bandari utilized article features like source, category, and subjectivity to predict the popularity of an article on Twitter with 84% accuracy. It is worth mentioning that several companies monitor Facebook FanPages activities and provide paid reports with general analyses for their clients. One of them that provides aggregated popularity results for single users is SocialBakers . They claim that their services allow brands to measure, compare, and contrast the success of their social media campaigns with competitive intelligence. In summary, although a few studies have looked at different aspects of Facebook FanPages , their focus was mostly on a small group of users. To the best of the authors' knowledge, none of the previous studies has specifically investigated the evolution of popularity at a large scale and over a long period. This paper is the first study that looks at this aspect for a list of 8K popular FanPages and also investigates the factors influencing the popularity evolution trends.
{ "cite_N": [ "@cite_21" ], "mid": [ "2070366435" ], "abstract": [ "We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors." ] }
1705.01782
2610617295
Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improves the state-of-the-art results.
We summarise previous ZSL schemes in Fig. , in contrast to conventional supervised classification (Fig. (A)). Since collecting well-labelled visual data for novel classes is expensive, as shown in Fig. (B), zero-shot learning techniques @cite_39 @cite_20 @cite_29 @cite_15 @cite_38 @cite_10 are proposed to recognise novel classes without acquiring the visual data. Most of the early works are based on the Direct-Attribute Prediction (DAP) model @cite_20 . Such a model utilises semantic attributes as intermediate clues. A test sample is classified by each attribute classifier in turn, and the class label is predicted by probabilistic estimation. Despite the merits of DAP, there are some concerns about its deficiencies. @cite_36 points out that the attributes may correlate with each other, resulting in significant information redundancy and poor performance. The human labelling involved in attribute annotation may also be unreliable @cite_51 @cite_40 .
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_29", "@cite_39", "@cite_40", "@cite_51", "@cite_15", "@cite_10", "@cite_20" ], "mid": [ "2740825418", "2040171755", "2523479226", "27961112", "", "2950497525", "2150295085", "2581186283", "2134270519" ], "abstract": [ "Recently, zero-shot action recognition (ZSAR) has emerged with the explosive growth of action categories. In this paper, we explore ZSAR from a novel perspective by adopting the Error-Correcting Output Codes (dubbed ZSECOC). Our ZSECOC equips the conventional ECOC with the additional capability of ZSAR, by addressing the domain shift problem. In particular, we learn discriminative ZSECOC for seen categories from both category-level semantics and intrinsic data structures. This procedure deals with domain shift implicitly by transferring the well-established correlations among seen categories to unseen ones. Moreover, a simple semantic transfer strategy is developed for explicitly transforming the learned embeddings of seen categories to better fit the underlying structure of unseen categories. As a consequence, our ZSECOC inherits the promising characteristics from ECOC as well as overcomes domain shift, making it more discriminative for ZSAR. We systematically evaluate ZSECOC on three realistic action benchmarks, i.e. Olympic Sports, HMDB51 and UCF101. The experimental results clearly show the superiority of ZSECOC over the state-of-the-art methods.", "Existing methods to learn visual attributes are prone to learning the wrong thing -- namely, properties that are correlated with the attribute of interest among training samples. Yet, many proposed applications of attributes rely on being able to learn the correct semantic concept corresponding to each attribute. We propose to resolve such confusions by jointly learning decorrelated, discriminative attribute models. 
Leveraging side information about semantic relatedness, we develop a multi-task learning approach that uses structured sparsity to encourage feature competition among unrelated attributes and feature sharing among related attributes. On three challenging datasets, we show that accounting for structure in the visual attribute space is key to learning attribute models that preserve semantics, yielding improved generalizability that helps in the recognition and discovery of unseen object categories.", "In this letter, we propose a novel approach for learning semantics-driven attributes , which are discriminative for zero-shot visual recognition. Latent attributes are derived in a principled manner, aiming at maintaining class-level semantic relatedness and attribute-wise balancedness. Unlike existing methods that binarize learned real-valued attributes via a quantization stage, we directly learn the optimal binary attributes by effectively addressing a discrete optimization problem. Particularly, we propose a class-wise discrete descent algorithm, based on which latent attributes of each class are learned iteratively. Moreover, we propose to simultaneously predict multiple attributes from low-level features via multioutput neural networks (MONN), which can model intrinsic correlation among attributes and make prediction more tractable. Extensive experiments on two standard datasets clearly demonstrate the superiority of our method over the state-of-the-arts.", "We introduce the problem of zero-data learning, where a model must generalize to classes or tasks for which no training data are available and only a description of the classes or tasks are provided. Zero-data learning is useful for problems where the set of classes to distinguish or tasks to solve is very large and is not entirely covered by the training data. 
The main contributions of this work lie in the presentation of a general formalization of zero-data learning, in an experimental analysis of its properties and in empirical evidence showing that generalization is possible and significant in this context. The experimental work of this paper addresses two classification problems of character recognition and a multitask ranking problem in the context of drug discovery. Finally, we conclude by discussing how this new framework could lead to a novel perspective on how to extend machine learning towards AI, where an agent can be given a specification for a learning problem before attempting to solve it (with very few or even zero examples).", "", "In principle, zero-shot learning makes it possible to train a recognition model simply by specifying the category's attributes. For example, with classifiers for generic attributes like and , one can construct a classifier for the zebra category by enumerating which properties it possesses---even without providing zebra training images. In practice, however, the standard zero-shot paradigm suffers because attribute predictions in novel images are hard to get right. We propose a novel random forest approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. By leveraging statistics about each attribute's error tendencies, our method obtains more robust discriminative models for the unseen classes. We further devise extensions to handle the few-shot scenario and unreliable attribute descriptions. On three datasets, we demonstrate the benefit for visual category learning with zero or few training examples, a critical domain for rare categories or categories defined on the fly.", "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. 
To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.", "Learning visual attributes is an effective approach for zero-shot recognition. However, existing methods are restricted to learning explicitly nameable attributes and cannot tell which attributes are more important to the recognition task. In this paper, we propose a unified framework named Grouped Simile Ensemble (GSE). We claim our contributions as follows. 1) We propose to substitute explicit attribute annotation by similes, which are more natural expressions that can describe complex unseen classes. Similes do not involve extra concepts of attributes, i.e. only exemplars of seen classes are needed. We provide an efficient scenario to annotate similes for two benchmark datasets, AwA and aPY. 2) We propose a graph-cut-based class clustering algorithm to effectively discover implicit attributes from the similes. 3) Our GSE can automatically find the most effective simile groups to make the prediction. On both datasets, extensive experimental results manifest that our approach can significantly improve the performance over the state-of-the-art methods.", "We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. 
This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes." ] }
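The DAP inference step described in the related-work paragraph above (per-attribute classifiers whose outputs are combined by probabilistic estimation) can be sketched as follows; the per-attribute posteriors, attribute names, and class signatures here are made-up illustrations, not trained models or data from the cited work.

```python
# Minimal sketch of Direct Attribute Prediction (DAP) inference, assuming
# the per-attribute posteriors p(a_m = 1 | x) have already been produced
# by attribute classifiers (the numbers below are invented for illustration).

def dap_predict(attr_posteriors, class_signatures):
    """Return the unseen class whose binary attribute signature is most
    probable under the per-attribute posteriors, assuming independent
    attributes and uniform priors (the factorised DAP score)."""
    best_class, best_score = None, -1.0
    for cls, signature in class_signatures.items():
        score = 1.0
        for p_m, a_m in zip(attr_posteriors, signature):
            # multiply p(a_m = 1 | x) if the class has the attribute,
            # otherwise p(a_m = 0 | x) = 1 - p(a_m = 1 | x)
            score *= p_m if a_m == 1 else (1.0 - p_m)
        if score > best_score:
            best_class, best_score = cls, score
    return best_class

# p(a_m = 1 | x) for hypothetical attributes: [striped, four-legged, aquatic]
posteriors = [0.9, 0.8, 0.1]
signatures = {"zebra": (1, 1, 0), "dolphin": (0, 0, 1)}
print(dap_predict(posteriors, signatures))  # → zebra
```

The attribute-correlation criticism quoted from @cite_36 targets exactly the independence assumption in the product above: when attributes co-occur, multiplying their posteriors double-counts evidence.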
1705.01782
2610617295
Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improves the state-of-the-art results.
Although various kinds of side information have been studied, attribute-based ZSL methods remain the most popular. One reason is that ZSL by learning attributes often gives prominent classification performance @cite_25 @cite_44 @cite_41 @cite_12 @cite_45 . Another reason is that attribute representation is a compact way to describe an image further with concrete, human-understandable words @cite_26 @cite_53 @cite_24 @cite_13 . Various types of attributes have been proposed to enrich applicable tasks and improve performance, such as relative attributes @cite_35 , class-similarity attributes @cite_44 , and augmented attributes @cite_32 . This paper aims not only to improve ZSL performance, but also to provide a reliable solution for synthesising high-quality visual features.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_41", "@cite_53", "@cite_32", "@cite_44", "@cite_24", "@cite_45", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "", "2098411764", "2951272299", "2201155408", "1500937733", "2157032868", "2519055561", "1858576077", "", "2116447720", "2952567519" ], "abstract": [ "", "We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.", "We present a novel attribute learning framework named Hypergraph-based Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the attribute relations in the data. Then the attribute prediction problem is casted as a regularized hypergraph cut problem in which HAP jointly learns a collection of attribute projections from the feature space to a hypergraph embedding space aligned with the attribute space. The learned projections directly act as attribute classifiers (linear and kernelized). This formulation leads to a very efficient approach. 
By considering our model as a multi-graph cut task, our framework can flexibly incorporate other available information, in particular class label. We apply our approach to attribute prediction, Zero-shot and @math -shot learning tasks. The results on AWA, USAA and CUB databases demonstrate the value of our methods in comparison with the state-of-the-art approaches.", "Attributes are mid-level semantic properties of objects. Recent research has shown that visual attributes can benefit many traditional learning problems in computer vision community. However, attribute learning is still a challenging problem as the attributes may not always be predictable directly from input images and the variation of visual attributes is sometimes large across categories. In this paper, we propose a unified multiplicative framework for attribute learning, which tackles the key problems. Specifically, images and category information are jointly projected into a shared feature space, where the latent factors are disentangled and multiplied for attribute prediction. The resulting attribute classifier is category-specific instead of being shared by all categories. Moreover, our method can leverage auxiliary data to enhance the predictive ability of attribute classifiers, reducing the effort of instance-level attribute annotation to some extent. Experimental results show that our method achieves superior performance on both instance-level and category-level attribute prediction. For zero-shot learning based on attributes, our method significantly improves the state-of-the-art performance on AwA dataset and achieves comparable performance on CUB dataset.", "We propose a new learning method to infer a mid-level feature representation that combines the advantage of semantic attribute representations with the higher expressive power of non-semantic features. 
The idea lies in augmenting an existing attribute-based representation with additional dimensions for which an autoencoder model is coupled with a large-margin principle. This construction allows a smooth transition between the zero-shot regime with no training example, the unsupervised regime with training examples but without class labels, and the supervised regime with training examples and with class labels. The resulting optimization problem can be solved efficiently, because several of the necessary steps have closed-form solutions. Through extensive experiments we show that the augmented representation achieves better results in terms of object categorization accuracy than the semantic representation alone.", "Attribute-based representation has shown great promises for visual recognition due to its intuitive interpretation and cross-category generalization property. However, human efforts are usually involved in the attribute designing process, making the representation costly to obtain. In this paper, we propose a novel formulation to automatically design discriminative "category-level attributes", which can be efficiently encoded by a compact category-attribute matrix. The formulation allows us to achieve intuitive and critical design criteria (category separability, learnability) in a principled way. The designed attributes can be used for tasks of cross-category knowledge transfer, achieving superior performance over well-known attribute dataset Animals with Attributes (AwA) and a large-scale ILSVRC2010 dataset (1.2M images). This approach also leads to state-of-the-art performance on the zero-shot learning task on AwA.", "Vast quantities of videos are now being captured at astonishing rates, but the majority of these are not labelled. To cope with such data, we consider the task of content-based activity recognition in videos without any manually labelled examples, also known as zero-shot video recognition.
To achieve this, videos are represented in terms of detected visual concepts, which are then scored as relevant or irrelevant according to their similarity with a given textual query. In this paper, we propose a more robust approach for scoring concepts in order to alleviate many of the brittleness and low precision problems of previous work. We jointly consider semantic relatedness, visual reliability, and discriminative power. To handle noise and non-linearities in the ranking scores of the selected concepts, we propose a novel pairwise order matrix approach for score aggregation. Extensive experiments on the large-scale TRECVID Multimedia Event Detection data show the superiority of our approach.", "In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided. The goal during test-time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information (e.g. attributes) for unseen classes. Our method is based on viewing each source or target data as a mixture of seen class proportions and we postulate that the mixture patterns have to be similar if the two instances belong to the same unseen class. This perspective leads us to learning source/target embedding functions that map arbitrary source/target domain data into a same semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, leading to significant improvement in terms of accuracy on most benchmark datasets for zero-shot recognition.
We approach this problem by transferring knowledge from known categories (a.k.a source categories) to new categories (a.k.a target categories) via object attributes. Object attributes are high level descriptions of object categories, such as color, texture, shape, etc. Since they represent common properties across different categories, they can be used to transfer knowledge from source categories to target categories effectively. Based on this insight, we propose an attribute-based transfer learning framework in this paper. We first build a generative attribute model to learn the probabilistic distributions of image features for each attribute, which we consider as attribute priors. These attribute priors can be used to (1) classify unseen images of target categories (zero-shot learning), or (2) facilitate learning classifiers for target categories when there is only one training examples per target category (one-shot learning). We demonstrate the effectiveness of the proposed approaches using the Animal with Attributes data set and show state-of-the-art performance in both zero-shot and one-shot learning tests.", "Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. 
Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows a 4.90% improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt the ZSR method for zero-shot retrieval and show a 22.45% improvement accordingly in mean average precision (mAP)." ] }
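Several of the attribute-based ZSL methods surveyed in these records share a common final step: comparing a predicted attribute vector for an image against the attribute signatures of unseen classes. A minimal sketch of that step (class names and attribute scores are hypothetical; this is not any specific cited method):

```python
import math

def zero_shot_predict(attr_scores, class_attrs):
    """Pick the unseen class whose attribute signature is most
    similar (by cosine similarity) to the predicted attributes."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den if den else 0.0
    return max(class_attrs, key=lambda c: cos(attr_scores, class_attrs[c]))

# attributes: [striped, four-legged, aquatic] -- illustrative only
classes = {"zebra": [1, 1, 0], "whale": [0, 0, 1]}
pred = zero_shot_predict([0.9, 0.8, 0.1], classes)  # -> "zebra"
```

No images of "zebra" or "whale" are needed at training time; only the per-class attribute table, which is the core idea behind attribute-based zero-shot recognition.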
1705.01229
2588377248
A k-dominating set is a set D of nodes of a graph such that, for each node v, there exists a node w ∈ D at distance at most k from v. Our aim is the deterministic distributed construction of small T-dominating sets in time T in networks modelled as undirected n-node graphs and under the LOCAL communication model. For any positive integer T, if b is the size of a pairwise disjoint collection of balls of radii at least T in a graph, then b is an obvious lower bound on the size of a T-dominating set. Our first result shows that, even on rings, it is impossible to construct a T-dominating set of size s asymptotically equal to b (i.e., such that s/b → 1) in time T. In the range of time T ∈ Θ(log n), the size of a T-dominating set turns out to be very sensitive to multiplicative constants in running time. Indeed, it follows from Kutten and Peleg (1998) that, for time T = γ log n with a large constant γ, it is possible to construct a T-dominating set whose size is a small fraction of n. By contrast, we show that, for time T = γ log n with a small constant γ, the size of a T-dominating set must be a large fraction of n. Finally, when T ∈ o(log n), the above lower bound implies that, for any constant x < 1, it is impossible to construct a T-dominating set of size smaller than xn, even on rings. On the positive side, we provide an algorithm that constructs a T-dominating set of size n/Θ(T) on all graphs.
In @cite_11 , the authors studied how fast a capacitated minimum dominating set can be distributedly constructed. They showed that, for general graphs, every distributed algorithm achieving a non-trivial approximation ratio (even for uniform capacities) must have a time complexity that essentially grows linearly with the network diameter. In @cite_10 @cite_13 , randomized distributed solutions for dominating set approximation were presented. In @cite_6 the authors prove that, for any @math -approximation of the minimum dominating set or maximum independent set on Unit Disk Graphs, the time @math of finding this approximation must satisfy @math . The paper most closely related to the present work is @cite_3 . The authors present a distributed algorithm to find a @math -dominating set of size at most @math in arbitrary @math -node graphs. Their algorithm runs in time @math in the @math model.
{ "cite_N": [ "@cite_10", "@cite_3", "@cite_6", "@cite_13", "@cite_11" ], "mid": [ "2090222354", "1534484868", "1502920553", "2127622665", "2118005842" ], "abstract": [ "The dominating set problem asks for a small subset D of nodes in a graph such that every node is either in D or adjacent to a node in D. This problem arises in a number of distributed network applications, where it is important to locate a small number of centers in the network such that every node is nearby at least one center. Finding a dominating set of minimum size is NP-complete, and the best known approximation is logarithmic in the maximum degree of the graph and is provided by the same simple greedy approach that gives the well-known logarithmic approximation result for the closely related set cover problem. We describe and analyze new randomized distributed algorithms for the dominating set problem that run in polylogarithmic time, independent of the diameter of the network, and that return a dominating set of size within a logarithmic factor from optimal, with high probability. In particular, our best algorithm runs in O(log n log Δ) rounds with high probability, where n is the number of nodes, Δ is one plus the maximum degree of any node, and each round involves a constant number of message exchanges among any two neighbors; the size of the dominating set obtained is within O(log Δ) of the optimal in expectation and within O(log n) of the optimal with high probability. We also describe generalizations to the weighted case and the case of multiple covering requirements.", "This article presents a fast distributed algorithm to compute a small k-dominating set D (for any fixed k) and to compute its induced graph partition (breaking the graph into radius-k clusters centered around the vertices of D). The time complexity of the algorithm is O(k log* n). 
Small k-dominating sets have applications in a number of areas, including routing with sparse routing tables, the design of distributed data structures, and center selection in a distributed network. The main application described in this article concerns a fast distributed algorithm for constructing a minimum-weight spanning tree (MST). On an n-vertex network of diameter d, the new algorithm constructs an MST in time O(√n · log* n + d), improving on previous results.", "In this paper we extend the lower bound technique by Linial for local coloring and maximal independent sets. We show that constant approximations to maximum independent sets on a ring require at least log-star time. More generally, the product of approximation quality and running time cannot be less than log-star. Using a generalized ring topology, we gain identical lower bounds for approximations to minimum dominating sets. Since our generalized ring topology is contained in a number of geometric graphs such as the unit disk graph, our bounds directly apply as lower bounds for quite a few algorithmic problems in wireless networking. Having in mind these and other results about local approximations of maximum independent sets and minimum dominating sets, one might think that the former are always at least as difficult to obtain as the latter. Conversely, we show that graphs exist, where a maximum independent set can be determined without any communication, while finding even an approximation to a minimum dominating set is as hard as in general graphs.", "Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(k Δ^(2/k) log(Δ) · |DS_OPT|) in O(k^2) rounds where each node has to send O(k^2 Δ) messages of size O(log Δ). 
This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.", "We study local, distributed algorithms for the capacitated minimum dominating set (CapMDS) problem, which arises in various distributed network applications. Given a network graph G = (V,E), and a capacity cap(v) ∈ N for each node v ∈ V, the CapMDS problem asks for a subset S ⊆ V of minimal cardinality, such that every network node not in S is covered by at least one neighbor in S, and every node v ∈ S covers at most cap(v) of its neighbors. We prove that in general graphs and even with uniform capacities, the problem is inherently non-local, i.e., every distributed algorithm achieving a non-trivial approximation ratio must have a time complexity that essentially grows linearly with the network diameter. On the other hand, if for some parameter ε > 0, capacities can be violated by a factor of 1 + ε, CapMDS becomes much more local. Particularly, based on a novel distributed randomized rounding technique, we present a distributed bi-criteria algorithm that achieves an O(log Δ)-approximation in time O(log^3 n + log(n)/ε), where n and Δ denote the number of nodes and the maximal degree in G, respectively. Finally, we prove that in geometric network graphs typically arising in wireless settings, the uniform problem can be approximated within a constant factor in logarithmic time, whereas the non-uniform problem remains entirely non-local." ] }
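The ring upper bound discussed in the abstract above (a T-dominating set of size n/Θ(T)) has a concrete instantiation: on an n-node ring, taking every (2T+1)-th node yields a T-dominating set of size ⌈n/(2T+1)⌉. A small self-contained check of that construction (illustrative only; not the paper's general-graph algorithm):

```python
def t_dominating_ring(n, t):
    """Every (2t+1)-th node of an n-node ring: a t-dominating set
    of size ceil(n / (2t+1)), matching the n/Theta(t) bound."""
    return list(range(0, n, 2 * t + 1))

def is_t_dominating(n, t, dom):
    """Verify every ring node is within distance t of some node in dom."""
    dist = lambda a, b: min(abs(a - b), n - abs(a - b))
    return all(any(dist(v, d) <= t for d in dom) for v in range(n))

dom = t_dominating_ring(100, 3)       # 15 centers: 0, 7, 14, ..., 98
ok = is_t_dominating(100, 3, dom)     # every node within distance 3
```

The gap between consecutive chosen nodes is at most 2t+1, so any node is within t of the center on one side or the other, which is exactly why b (the disjoint-ball packing number) is tight up to constants on rings.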
1705.00997
2611274857
We consider space efficient hash tables that can grow and shrink dynamically and are always highly space efficient, i.e., their space consumption is always close to the lower bound even while growing and when taking into account storage that is only needed temporarily. None of the traditionally used hash tables have this property. We show how known approaches like linear probing and bucket cuckoo hashing can be adapted to this scenario by subdividing them into many subtables or using virtual memory overcommitting. However, these rather straightforward solutions suffer from slow amortized insertion times due to frequent reallocation in small increments. Our main result is DySECT ( Dy namic S pace E fficient C uckoo T able) which avoids these problems. DySECT consists of many subtables which grow by doubling their size. The resulting inhomogeneity in subtable sizes is equalized by the flexibility available in bucket cuckoo hashing where each element can go to several buckets each of which containing several cells. Experiments indicate that DySECT works well with load factors up to 98 . With up to 2.7 times better performance than the next best solution.
Over the last one and a half decades, the field has regained attention, both from the theoretical and the practical point of view. The initial innovation that sparked this attention was the idea that storing an element in the less filled of two "random" chains leads to incredibly well balanced loads. This concept is called the power of two choices @cite_16 .
{ "cite_N": [ "@cite_16" ], "mid": [ "2117702591" ], "abstract": [ "We consider the following natural model: customers arrive as a Poisson stream of rate λn, λ < 1, at a collection of n servers. Each customer chooses some constant d servers independently and uniformly at random from the n servers and waits for service at the one with the fewest customers. Customers are served according to the first-in first-out (FIFO) protocol and the service time for a customer is exponentially distributed with mean 1. We call this problem the supermarket model. We wish to know how the system behaves and in particular we are interested in the effect that the parameter d has on the expected time a customer spends in the system in equilibrium. Our approach uses a limiting, deterministic model representing the behavior as n → ∞ to approximate the behavior of finite systems. The analysis of the deterministic model is interesting in its own right. Along with a theoretical justification of this approach, we provide simulations that demonstrate that the method accurately predicts system behavior, even for relatively small systems. Our analysis provides surprising implications. Having d=2 choices leads to exponential improvements in the expected time a customer spends in the system over d=1, whereas having d=3 choices is only a constant factor better than d=2. We discuss the possible implications for system design." ] }
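The two-choice effect described in the record above is easy to reproduce: with d = 2 probes per insertion the maximum load drops from roughly log n / log log n to roughly log log n. A quick balls-into-bins simulation (an illustrative sketch, not the queueing model analyzed in the cited paper):

```python
import random

def max_load(n_balls, n_bins, d, rng):
    """Place each ball into the least loaded of d randomly probed
    bins; return the maximum bin load at the end."""
    loads = [0] * n_bins
    for _ in range(n_balls):
        target = min(rng.sample(range(n_bins), d), key=loads.__getitem__)
        loads[target] += 1
    return max(loads)

rng = random.Random(1)
n = 10_000
one_choice = max_load(n, n, 1, rng)   # typically around 6-8 for n = 10^4
two_choices = max_load(n, n, 2, rng)  # typically 3-4
```

The exponential gap between d = 1 and d = 2, and the merely constant further gain for d ≥ 3, matches the paper's conclusion.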
1705.00997
2611274857
We consider space efficient hash tables that can grow and shrink dynamically and are always highly space efficient, i.e., their space consumption is always close to the lower bound even while growing and when taking into account storage that is only needed temporarily. None of the traditionally used hash tables have this property. We show how known approaches like linear probing and bucket cuckoo hashing can be adapted to this scenario by subdividing them into many subtables or using virtual memory overcommitting. However, these rather straightforward solutions suffer from slow amortized insertion times due to frequent reallocation in small increments. Our main result is DySECT ( Dy namic S pace E fficient C uckoo T able) which avoids these problems. DySECT consists of many subtables which grow by doubling their size. The resulting inhomogeneity in subtable sizes is equalized by the flexibility available in bucket cuckoo hashing where each element can go to several buckets each of which containing several cells. Experiments indicate that DySECT works well with load factors up to 98 . With up to 2.7 times better performance than the next best solution.
Cuckoo hashing can be naturally generalized in two directions in order to make it more space efficient: allowing @math choices @cite_20 or extending cells in the table to buckets that can store @math elements. We will summarize this under the term .
{ "cite_N": [ "@cite_20" ], "mid": [ "2136399778" ], "abstract": [ "We generalize Cuckoo Hashing to d-ary Cuckoo Hashing and show how this yields a simple hash table data structure that stores n elements in (1 + e)n memory cells, for any constant e > 0. Assuming uniform hashing, accessing or deleting table entries takes at most d=O (ln (1 e)) probes and the expected amortized insertion time is constant. This is the first dictionary that has worst case constant access time and expected constant update time, works with (1 + e)n space, and supports satellite information. Experiments indicate that d = 4 probes suffice for e ≈ 0.03. We also describe variants of the data structure that allow the use of hash functions that can be evaluated in constant time." ] }
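The d-choice generalization in the record above can be sketched with a tiny random-walk cuckoo table (one cell per slot; the salted hash functions and parameters are hypothetical, a didactic sketch rather than the paper's implementation):

```python
import random

class CuckooTable:
    """d-ary cuckoo hashing: each key has d candidate slots; when all
    are occupied, a random occupant is evicted and reinserted."""
    def __init__(self, capacity, d=4, max_kicks=1000):
        self.cap, self.d, self.max_kicks = capacity, d, max_kicks
        self.slots = [None] * capacity
        self.salts = list(range(d))  # one salt per hash function

    def _probes(self, key):
        return [hash((salt, key)) % self.cap for salt in self.salts]

    def insert(self, key):
        rng = random.Random(hash(key))
        cur = key
        for _ in range(self.max_kicks):
            probes = self._probes(cur)
            for p in probes:
                if self.slots[p] is None:
                    self.slots[p] = cur
                    return True
            # all d candidate slots occupied: evict a random one, retry
            p = rng.choice(probes)
            cur, self.slots[p] = self.slots[p], cur
        return False  # eviction chain too long: table effectively full

    def __contains__(self, key):
        return any(self.slots[p] == key for p in self._probes(key))

table = CuckooTable(1000, d=4)
ok = all(table.insert(k) for k in range(800))  # 80% load factor
```

Lookups and deletions stay worst-case O(d) because a key can only ever sit in one of its d candidate slots; the high attainable load factor (the paper reports ≈ 0.97 for d = 4) is what makes the scheme space efficient.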
1705.00995
2611618649
The majority of medical documents and electronic health records are in text format that poses a challenge for data processing and finding relevant documents. Looking for ways to automatically retrieve the enormous amount of health and medical knowledge has always been an intriguing topic. Powerful methods have been developed in recent years to make the text processing automatic. One of the popular approaches to retrieve information based on discovering the themes in health and medical corpora is topic modeling; however, this approach still needs new perspectives. In this research, we describe fuzzy latent semantic analysis (FLSA), a novel approach in topic modeling using fuzzy perspective. FLSA can handle health and medical corpora redundancy issue and provides a new method to estimate the number of topics. The quantitative evaluations show that FLSA produces superior performance and features to latent Dirichlet allocation, the most popular topic model.
Text mining can be defined as the application of machine learning and statistical methods with the goal of recognizing patterns and disclosing hidden information in text data @cite_31 . In this section, we review key concepts and the health and medical applications of topic modeling and fuzzy clustering (FC).
{ "cite_N": [ "@cite_31" ], "mid": [ "80463681" ], "abstract": [ "The enormous amount of information stored in unstructured texts cannot simply be used for further processing by computers, which typically handle text as simple sequences of character strings. Therefore, specific (pre-)processing methods and algorithms are required in order to extract useful patterns. Text mining refers generally to the process of extracting interesting information and knowledge from unstructured text. In this article, we discuss text mining as a young and interdisciplinary field in the intersection of the related areas information retrieval, machine learning, statistics, computational linguistics and especially data mining. We describe the main analysis tasks preprocessing, classification, clustering, information extraction and visualization. In addition, we briefly discuss a number of successful applications of text mining." ] }
1705.00995
2611618649
The majority of medical documents and electronic health records are in text format that poses a challenge for data processing and finding relevant documents. Looking for ways to automatically retrieve the enormous amount of health and medical knowledge has always been an intriguing topic. Powerful methods have been developed in recent years to make the text processing automatic. One of the popular approaches to retrieve information based on discovering the themes in health and medical corpora is topic modeling; however, this approach still needs new perspectives. In this research, we describe fuzzy latent semantic analysis (FLSA), a novel approach in topic modeling using fuzzy perspective. FLSA can handle health and medical corpora redundancy issue and provides a new method to estimate the number of topics. The quantitative evaluations show that FLSA produces superior performance and features to latent Dirichlet allocation, the most popular topic model.
There are two main approaches in text mining: supervised and unsupervised. The goal of the supervised approach is to disclose hidden structure in labeled datasets, and the goal of the unsupervised approach is to discover patterns in unlabeled datasets. The most popular techniques in the supervised and unsupervised approaches are classification and clustering, respectively. The purpose of classification is to train a model on a corpus with predefined labels and assign a label to each new document @cite_38 . Clustering assigns a cluster to each document in a corpus based on similarity within clusters and dissimilarity between clusters. Among text mining techniques, topic modeling is one of the most popular unsupervised methods, with a wide range of applications from SMS spam detection @cite_15 to image tagging @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_38", "@cite_15" ], "mid": [ "1979972308", "", "1873936197" ], "abstract": [ "Tagging is nowadays the most prevalent and practical way to make images searchable. However, in reality many tags are irrelevant to image content. To refine the tags, previous solutions usually mine tag relevance relying on the tag similarity estimated right from the corpus to be refined. The calculation of tag similarity is affected by the noisy tags in the corpus, which is not conducive to estimate accurate tag relevance. In this paper, we propose to do tag refinement from the angle of topic modeling. In the proposed scheme, tag similarity and tag relevance are jointly estimated in an iterative manner, so that they can benefit from each other. Specifically, a novel graphical model, regularized Latent Dirichlet Allocation (rLDA), is presented. It facilitates the topic modeling by exploiting both the statistics of tags and visual affinities of images in the corpus. The experiments on tag ranking and image retrieval demonstrate the advantages of the proposed method.", "", "As the use of mobile phones grows, spams are becoming increasingly common in mobile communication such as SMS, calling for research on SMS spam detection. Existing detection techniques for SMS spams have been mostly adapted from those developed for other contexts such as emails and the web without taking into account some unique characteristics of SMS. Additionally, spamming tactics is constantly evolving, making existing methods for spam detection less effective. In this research, we propose to exploit latent content based features for the detection of static SMS spams. The efficacy of the proposed features is empirically validated using multiple classification methods. The results demonstrate that the proposed features significantly improve the performance of SMS spam detection." ] }
1705.00995
2611618649
The majority of medical documents and electronic health records are in text format that poses a challenge for data processing and finding relevant documents. Looking for ways to automatically retrieve the enormous amount of health and medical knowledge has always been an intriguing topic. Powerful methods have been developed in recent years to make the text processing automatic. One of the popular approaches to retrieve information based on discovering the themes in health and medical corpora is topic modeling; however, this approach still needs new perspectives. In this research, we describe fuzzy latent semantic analysis (FLSA), a novel approach in topic modeling using fuzzy perspective. FLSA can handle health and medical corpora redundancy issue and provides a new method to estimate the number of topics. The quantitative evaluations show that FLSA produces superior performance and features to latent Dirichlet allocation, the most popular topic model.
Topic modeling defines each topic as a probability distribution over words and each document as a probability distribution over topics. In health and medical text mining, latent Dirichlet allocation (LDA) shows better performance than other topic models @cite_37 .
{ "cite_N": [ "@cite_37" ], "mid": [ "2082369396" ], "abstract": [ "Large amount of electronic clinical data encompass important information in free text format. To be able to help guide medical decision-making, text needs to be efficiently processed and coded. In this research, we investigate techniques to improve classification of Emergency Department computed topography (CT) reports. The proposed system uses Natural Language Processing (NLP) to generate structured output from patient reports and then applies machine learning techniques to code for the presence of clinically important injuries for traumatic orbital fracture victims. Topic modeling of the corpora is also utilized as an alternative representation of the patient reports. Our results show that both NLP and topic modeling improve raw text classification results. Within NLP features, filtering the codes using modifiers produces the best performance. Topic modeling, on the other hand, shows mixed results. Topic vectors provide good dimensionality reduction and get comparable classification results as with NLP features. However, binary topic classification fails to improve upon raw text classification." ] }
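The generative view stated above (topics as distributions over words, documents as distributions over topics) can be made concrete with a tiny collapsed Gibbs sampler. This is a didactic pure-Python sketch of plain LDA, not the FLSA method proposed in the paper, and the toy medical corpus is invented for illustration:

```python
import random

def lda_gibbs(docs, k, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.
    Returns the doc-topic counts ndk and topic-word counts nkw."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    wid = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    ndk = [[0] * k for _ in docs]       # document -> topic counts
    nkw = [[0] * V for _ in range(k)]   # topic -> word counts
    nk = [0] * k                        # topic totals
    z = []                              # topic assignment per token
    for di, doc in enumerate(docs):     # random initialization
        zd = []
        for w in doc:
            t = rng.randrange(k)
            zd.append(t)
            ndk[di][t] += 1; nkw[t][wid[w]] += 1; nk[t] += 1
        z.append(zd)
    for _ in range(iters):              # resample each token's topic
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                t = z[di][wi]           # remove current assignment
                ndk[di][t] -= 1; nkw[t][wid[w]] -= 1; nk[t] -= 1
                weights = [(ndk[di][j] + alpha) * (nkw[j][wid[w]] + beta)
                           / (nk[j] + V * beta) for j in range(k)]
                t = rng.choices(range(k), weights)[0]
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][wid[w]] += 1; nk[t] += 1
    return ndk, nkw

docs = [["heart", "blood", "heart"], ["tumor", "brain", "tumor"]] * 3
ndk, nkw = lda_gibbs(docs, k=2)
```

After sampling, each row of ndk estimates a document's topic mixture and each row of nkw estimates a topic's word distribution, which is the structure FLSA revisits from a fuzzy-clustering perspective.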
1705.00995
2611618649
The majority of medical documents and electronic health records are in text format that poses a challenge for data processing and finding relevant documents. Looking for ways to automatically retrieve the enormous amount of health and medical knowledge has always been an intriguing topic. Powerful methods have been developed in recent years to make the text processing automatic. One of the popular approaches to retrieve information based on discovering the themes in health and medical corpora is topic modeling; however, this approach still needs new perspectives. In this research, we describe fuzzy latent semantic analysis (FLSA), a novel approach in topic modeling using fuzzy perspective. FLSA can handle health and medical corpora redundancy issue and provides a new method to estimate the number of topics. The quantitative evaluations show that FLSA produces superior performance and features to latent Dirichlet allocation, the most popular topic model.
LDA has a wide range of health and medical applications such as predicting protein-protein relationships based on the literature knowledge @cite_45 , discovering relevant clinical concepts and structures in patients' health records @cite_0 , identifying patterns of clinical events in a cohort of brain cancer patients @cite_46 , and analyzing time-to-event outcomes @cite_10 . The discovery of clinical pathway (CP) patterns is a method for revealing the structure, semantics, and dynamics of CPs to provide clinicians with explicit knowledge used to guide treatment activities of individual patients. LDA has been used for CPs to find treatment behaviors of patients @cite_4 , to predict clinical order patterns, and to model various treatment activities @cite_41 and their occurring time stamps in CPs @cite_19 . LDA has also been customized to determine patient mortality @cite_21 , and to discover knowledge by modeling disease and patient characteristics @cite_5 . Redundancy-aware LDA (Red-LDA) is a version of LDA that handles the redundancy issue in medical documents, and it has shown better performance than LDA @cite_51 .
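The clinical systems cited above all build on the same core LDA machinery. As a rough, hedged illustration (a toy collapsed Gibbs sampler over token-id documents — not any of the cited implementations, and the hyperparameter values are assumptions), the model can be sketched as:

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA.

    docs: list of token-id lists. Returns the doc-topic and topic-word
    count matrices from the final sampling state.
    """
    rng = np.random.default_rng(seed)
    dt = np.zeros((len(docs), n_topics))   # doc-topic counts
    tw = np.zeros((n_topics, vocab_size))  # topic-word counts
    tc = np.zeros(n_topics)                # total tokens per topic
    z = []                                 # topic assignment per token
    for d, doc in enumerate(docs):         # random initial assignments
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            dt[d, k] += 1; tw[k, w] += 1; tc[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                # remove the current assignment
                dt[d, k] -= 1; tw[k, w] -= 1; tc[k] -= 1
                # full conditional p(z = k | everything else)
                p = (dt[d] + alpha) * (tw[:, w] + beta) / (tc + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k                # resample and restore counts
                dt[d, k] += 1; tw[k, w] += 1; tc[k] += 1
    return dt, tw
```

Real clinical deployments add the domain-specific pieces the papers describe (redundancy handling in Red-LDA, supervision by outcomes in survLDA); this sketch only shows the shared sampling core.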
{ "cite_N": [ "@cite_4", "@cite_41", "@cite_21", "@cite_0", "@cite_19", "@cite_45", "@cite_51", "@cite_5", "@cite_46", "@cite_10" ], "mid": [ "2086706273", "2521579474", "2289625907", "200246157", "2152550237", "2150769253", "2001322210", "1902526473", "2016127234", "1954165954" ], "abstract": [ "Clinical pathways leave traces, described as event sequences with regard to a mixture of various latent treatment behaviors. Measuring similarities between patient traces can profitably be exploited further as a basis for providing insights into the pathways, and complementing existing techniques of clinical pathway analysis (CPA), which mainly focus on looking at aggregated data seen from an external perspective. Most existing methods measure similarities between patient traces via computing the relative distance between their event sequences. However, clinical pathways, as typical human-centered processes, always take place in an unstructured fashion, i.e., clinical events occur arbitrarily without a particular order. Bringing order in the chaos of clinical pathways may decline the accuracy of similarity measure between patient traces, and may distort the efficiency of further analysis tasks. In this paper, we present a behavioral topic analysis approach to measure similarities between patient traces. More specifically, a probabilistic graphical model, i.e., latent Dirichlet allocation (LDA), is employed to discover latent treatment behaviors of patient traces for clinical pathways such that similarities of pairwise patient traces can be measured based on their underlying behavioral topical features. The presented method provides a basis for further applications in CPA. In particular, three possible applications are introduced in this paper, i.e., patient trace retrieval, clustering, and anomaly detection. 
The proposed approach and the presented applications are evaluated via a real-world dataset of several specific clinical pathways collected from a Chinese hospital.", "Objective Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. @PARASPLIT Materials and Methods The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) to words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. @PARASPLIT Results Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16 , and recall 35 . This can be improved to 0.90, 24 , and 47 ( P < 10−20) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., “critical care,” “pneumonia,” “neurologic evaluation”). @PARASPLIT Discussion Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. @PARASPLIT Conclusion Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. 
A potential use case finds related clinical orders for decision support.", "", "Clinical reporting is often performed with minimal consideration for secondary computational analysis of concepts. This fact makes the comparison of patients challenging as records lack a representation in a space where their similarity may be judged quantitatively. We present a method by which the entirety of a patient’s clinical records may be compared using latent topics. To capture topics at a clinically relevant level, patient reports are partitioned based on their type, allowing for a more granular characterization of topics. The resulting probabilistic patient topic representations are directly comparable to one another using distance measures. To navigate a collection of patient records we have developed a workstation that allows users to weight different report types and displays succinct summarizations of why two patients are deemed similar, tailoring and expediting searches. Results show the system is able to capture clinically significant topics that can be used for case-based retrieval.", "Background Ensuring that all cancer patients have access to the appropriate treatment within an appropriate time is a strategic priority in many countries. There is in particular a need to describe and analyse cancer care trajectories and to produce waiting time indicators. We developed an algorithm for extracting temporally represented care trajectories from coded information collected routinely by the general cancer Registry in Poitou-Charentes region, France. The present work aimed to assess the performance of this algorithm on real-life patient data in the setting of non-metastatic breast cancer, using measures of similarity.", "This paper investigates applying statistical topic models to extract and predict relationships between biological entities, especially protein mentions. 
A statistical topic model, Latent Dirichlet Allocation (LDA) is promising; however, it has not been investigated for such a task. In this paper, we apply the state-of-the-art Collapsed Variational Bayesian Inference and Gibbs Sampling inference to estimating the LDA model, and compared them from the viewpoints of log-likelihoods, classification accuracy and retrieval effectiveness. We demonstrate through experiments that the Collapsed Variational LDA gives better results than the other, especially in terms of classification accuracy and retrieval effectiveness in the task of the protein-protein relationship prediction.", "The clinical notes in a given patient record contain much redundancy, in large part due to clinicians’ documentation habit of copying from previous notes in the record and pasting into a new note. Previous work has shown that this redundancy has a negative impact on the quality of text mining and topic modeling in particular. In this paper we describe a novel variant of Latent Dirichlet Allocation (LDA) topic modeling, Red-LDA, which takes into account the inherent redundancy of patient records when modeling content of clinical notes. To assess the value of Red-LDA, we experiment with three baselines and our novel redundancy-aware topic modeling method: given a large collection of patient records, (i) apply vanilla LDA to all documents in all input records; (ii) identify and remove all redundancy by chosing a single representative document for each record as input to LDA; (iii) identify and remove all redundant paragraphs in each record, leaving partial, non-redundant documents as input to LDA; and (iv) apply Red-LDA to all documents in all input records. Both quantitative evaluation carried out through log-likelihood on held-out data and topic coherence of produced topics and qualitative assessement of topics carried out by physicians show that Red-LDA produces superior models to all three baseline strategies. 
This research contributes to the emerging field of understanding the characteristics of the electronic health record and how to account for them in the framework of data mining. The code for the two redundancy-elimination baselines and Red-LDA is made publicly available to the community.", "Display Omitted We present the UPhenome model, which derives phenotypes in an unsupervised manner.UPhenome scales easily to large sets of diseases and clinical observations.The learned phenotypes combine clinical text, ICD9 codes, lab tests, and medications.UPhenome learns phenotypes that can be applied to unseen patient records. We present the Unsupervised Phenome Model (UPhenome), a probabilistic graphical model for large-scale discovery of computational models of disease, or phenotypes. We tackle this challenge through the joint modeling of a large set of diseases and a large set of clinical observations. The observations are drawn directly from heterogeneous patient record data (notes, laboratory tests, medications, and diagnosis codes), and the diseases are modeled in an unsupervised fashion. We apply UPhenome to two qualitatively different mixtures of patients and diseases: records of extremely sick patients in the intensive care unit with constant monitoring, and records of outpatients regularly followed by care providers over multiple years. We demonstrate that the UPhenome model can learn from these different care settings, without any additional adaptation. Our experiments show that (i) the learned phenotypes combine the heterogeneous data types more coherently than baseline LDA-based phenotypes; (ii) they each represent single diseases rather than a mix of diseases more often than the baseline ones; and (iii) when applied to unseen patient records, they are correlated with the patients ground-truth disorders. 
Code for training, inference, and quantitative evaluation is made available to the research community.", "Clinical narrative in the medical record provides perhaps the most detailed account of a patient's history. However, this information is documented in free-text, which makes it challenging to analyze. Efforts to index unstructured clinical narrative often focus on identifying predefined concepts from clinical terminologies. Less studied is the problem of analyzing the text as a whole to create temporal indices that capture relationships between learned clinical events. Topic models provide a method for analyzing large corpora of text to discover semantically related clusters of words. This work presents a topic model tailored to the clinical reporting environment that allows for individual patient timelines. Results show the model is able to identify patterns of clinical events in a cohort of brain cancer patients.", "Two challenging problems in the clinical study of cancer are the characterization of cancer subtypes and the classification of individual patients according to those subtypes. Statistical approaches addressing these problems are hampered by population heterogeneity and challenges inherent in data integration across high-dimensional, diverse covariates. We have developed a survival-supervised latent Dirichlet allocation (survLDA) modeling framework to address these concerns. LDA models have proven extremely effective at identifying themes common across large collections of text, but applications to genomics have been limited. Our framework extends LDA to the genome by considering each patient as a document' with text' constructed from clinical and high-dimensional genomic measurements. We then further extend the framework to allow for supervision by a time-to-event response. 
The model enables the efficient identification of collections of clinical and genomic features that co-occur within patient subgroups, and then characterizes each patient by those features. An application of survLDA to The Cancer Genome Atlas (TCGA) ovarian project identifies informative patient subgroups that are characterized by different propensities for exhibiting abnormal mRNA expression and methylations, corresponding to differential rates of survival from primary therapy." ] }
1705.00995
2611618649
The majority of medical documents and electronic health records are in a text format, which poses a challenge for data processing and finding relevant documents. Looking for ways to automatically retrieve the enormous amount of health and medical knowledge has always been an intriguing topic. Powerful methods have been developed in recent years to make text processing automatic. One of the popular approaches to retrieving information based on discovering the themes in health and medical corpora is topic modeling; however, this approach still needs new perspectives. In this research, we describe fuzzy latent semantic analysis (FLSA), a novel approach to topic modeling from a fuzzy perspective. FLSA can handle the redundancy issue in health and medical corpora and provides a new method to estimate the number of topics. The quantitative evaluations show that FLSA produces superior performance and features compared to latent Dirichlet allocation, the most popular topic model.
Fuzzy clustering has been used in predicting the response to treatment with citalopram in alcohol dependence @cite_22 , analyzing diabetic neuropathy @cite_29 , detecting early diabetic retinopathy @cite_49 , characterizing stroke subtypes and coexisting causes of ischemic stroke @cite_6 @cite_13 @cite_34 , improving decision-making in radiation therapy @cite_44 , and detecting cancers such as breast cancer @cite_26 . In addition, fuzzy clustering has been used to improve ultrasound imaging techniques @cite_1 and to analyze microarray data @cite_25 .
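Most of the applications above rely on some variant of fuzzy c-means, where each data point gets a soft membership in every cluster rather than a hard label. A minimal numpy sketch of the standard algorithm (illustrative only; the fuzzifier `m=2` and the iteration count are conventional defaults, not values taken from the cited works) might look like:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns (centers, membership matrix U).

    X: (n, d) data; c: number of clusters; m: fuzzifier (> 1).
    Each row of U holds one point's soft memberships, summing to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)      # normalize initial memberships
    for _ in range(n_iter):
        W = U ** m                          # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared distance of every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)          # guard against division by zero
        inv = d2 ** (-1.0 / (m - 1.0))      # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

The soft memberships in `U` are what make the method attractive in the medical settings above: a patient or pixel can partially belong to several diagnostic categories at once.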
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_29", "@cite_1", "@cite_6", "@cite_44", "@cite_49", "@cite_34", "@cite_13", "@cite_25" ], "mid": [ "2029912695", "2024203939", "1976993894", "2019652504", "2012779036", "2143459909", "2054275537", "2079969507", "2060814138", "2109619915" ], "abstract": [ "Extensive amounts of knowledge and data stored in medical databases require the development of specialized tools for storing, accessing, analysis, and effectiveness usage of stored knowledge and data. Intelligent methods such as neural networks, fuzzy sets, decision trees, and expert systems are, slowly but steadily, applied in the medical fields. Recently, rough set theory is a new intelligent technique was used for the discovery of data dependencies, data reduction, approximate set classification, and rule induction from databases. In this paper, we present a rough set method for generating classification rules from a set of observed 360 samples of the breast cancer data. The attributes are selected, normalized and then the rough set dependency rules are generated directly from the real value attribute vector. Then the rough set reduction technique is applied to find all reducts of the data which contains the minimal subset of attributes that are associated with a class label for classification. Experimental results from applying the rough set analysis to the set of data samples are given and evaluated. In addition, the generated rules are also compared to the well-known ID3 classifier algorithm. The study showed that the theory of rough sets seems to be a useful tool for inductive learning and a valuable aid for building expert systems.", "Introduction The prediction of patient response to new pharmacotherapies for alcohol dependence has usually not been successful with standard statistical techniques. 
We hypothesized that fuzzy logic, a qualitative computational approach, could predict response to 40 mg day citalopram and 40 mg day citalopram with a brief psychosocial intervention in alcohol-dependent patients. Methods Two data sets were formed with patients from our studies who received 40 mg day citalopram alone (n = 34) or 40 mg day citalopram and a brief psychosocial intervention (n = 28). The output variable, “response,” was the percentage decrease in alcohol intake from baseline. Input variables included age, gender, baseline alcohol intake, and levels of anxiety, depression, alcohol dependence, and alcohol-related problems. Results A fuzzy rulebase was created from the data of 26 randomly chosen patients who received 40 mg day citalopram and was used to predict the responses of the remaining eight patients. Eight rules related response with depression, anxiety, alcohol dependence, alcohol-related problems, age, and baseline alcohol intake. The average magnitude of the error in the predictions (RMSE) was 2.6 with a bias (ME) of 0.6. Predicted and actual response correlated (r = 0.99; p < 0.001). A fuzzy rulebase was created from the data of 28 randomly chosen patients who received 40 mg day citalopram and a brief psychosocial intervention and was used to predict the responses of the remaining five patients. Six rules related response with age, anxiety, depression, alcohol dependence, and baseline alcohol intake with good predictive performance (RMSE = 6.4; ME = −1.5; r = 0.96; p < 0.01). Conclusions This study indicates that fuzzy logic modeling can predict response to pharmacotherapies for alcohol dependence. Clinical Pharmacology & Therapeutics (1997) 62, 209–224; doi:", "A new model for the fuzzy-based analysis of diabetic neuropathy is illustrated, whose pathogenesis so far is not well known. 
The underlying algebraic structure is a commutative l-monoid, whose support is a set of classifications based on the concept of linguistic variable introduced by Zadeh. The analysis is carried out by means of patient's anagraphical and clinical data, e.g. age, sex, duration of the disease, insulinic needs, severity of diabetes, possible presence of complications. The results obtained by us are identical with medical diagnoses. Moreover, analyzing suitable relevance factors one gets reasonable information about the etiology of the disease, our results agree with most credited clinical hypotheses.", "Abstract Elastography is a new ultrasound imaging technique to provide the information about relative tissue stiffness. The elasticity information provided by this dynamic imaging method has proven to be helpful in distinguishing benign and malignant breast tumors. In previous studies for computer-aided diagnosis (CAD), the tumor contour was manually segmented and each pixel in the elastogram was classified into hard or soft tissue using the simple thresholding technique. In this paper, the tumor contour was automatically segmented by the level set method to provide more objective and reliable tumor contour for CAD. Moreover, the elasticity of each pixel inside each tumor was classified by the fuzzy c-means clustering technique to obtain a more precise diagnostic result. The test elastography database included 66 benign and 31 malignant biopsy-proven tumors. In the experiments, the accuracy, sensitivity, specificity and the area index Az under the receiver operating characteristic curve for the classification of solid breast masses were 83.5 (81 97), 83.9 (26 31), 83.3 (55 66) and 0.902 for the fuzzy c-means clustering method, respectively, and 59.8 (58 97), 96.8 (30 31), 42.4 (28 66) and 0.818 for the conventional thresholding method, respectively. 
The differences of accuracy, specificity and Az value were statistically significant ( p", "Abstract Twentieth century medical science has embraced nineteenth century Boolean probability theory based upon two-valued Aristotelian logic. With the later addition of bit-based, von Neumann structured computational architectures, an epistemology based on randomness has led to a bivalent epidemiological methodology that dominates medical decision making. In contrast, fuzzy logic, based on twentieth century multi-valued logic, and computational structures that are content addressed and adaptively modified, has advanced a new scientific paradigm for the twenty-first century. Diseases such as stroke involve multiple concomitant causal factors that are difficult to represent using conventional statistical methods. We tested which paradigm best represented this complex multi-causal clinical phenomenon—stroke. We show that the fuzzy logic paradigm better represented clinical complexity in cerebrovascular disease than current probability theory based methodology. We believe this finding is generalizable to all of clinical science since multiple concomitant causal factors are involved in nearly all known pathological processes.", "Radiation therapy decision-making is a complex process that has to take into consideration a variety of interrelated functions. Many fuzzy factors that must be considered in the calculation of the appropriate dose increase the complexity of the decision-making problem. A novel approach introduces fuzzy cognitive maps (FCMs) as the computational modeling method, which tackles the complexity and allows the analysis and simulation of the clinical radiation procedure. Specifically this approach is used to determine the success of radiation therapy process estimating the final dose delivered to the target volume, based on the soft computing technique of FCMs. 
Furthermore a two-level integrated hierarchical structure is proposed to supervise and evaluate the radiotherapy process prior to treatment execution. The supervisor determines the treatment variables of cancer therapy and the acceptance level of final radiation dose to the target volume. Two clinical case studies are used to test the proposed methodology and evaluate the simulation results. The usefulness of this two-level hierarchical structure discussed and future research directions are suggested for the clinical use of this methodology.", "Discusses a hybrid fuzzy image-processing system for situation assessment of diabetic retinopathy. The hybrid approach is motivated by the characteristics of the medical data and of the diagnostic decision-making process. The aim of the system is to support the early detection of diabetic retinopathy in a primary-care environment. For this purpose, both internal medicine. (diabetes) and ophthalmology have to be considered. The main input data are ophthalmological parameters, such as visual acuity, status of the anterior segment, status of the fundus, and previous therapies, and diabetological status; i.e., metabolic data. To reduce the huge number of parameters that have to be extracted by the ophthalmologist, image-processing methods for the automatic analysis of fundus photographs have been developed. The extraction is done by a multistage model-based approach. The segmentation results are used as an input to an overall fuzzy system that produces the final decision outcome (situation classes).", "In evidence-based medicine, stroke subtype is diagnosed after a sequential search for etiology; the first positive test result of significant severity rounds off to one overwhelming cause. Degree of s", "Evidence-based medicine, founded in probability-based statistics, applies what is the case for the collective to the individual patient. 
An intuitive approach, however, would define structure in the (", "Background: Organisms simplify the orchestration of gene expression by coregulating genes whose products function together in the cell. Many proteins serve different roles depending on the demands of the organism, and therefore the corresponding genes are often coexpressed with different groups of genes under different situations. This poses a challenge in analyzing wholegenome expression data, because many genes will be similarly expressed to multiple, distinct groups of genes. Because most commonly used analytical methods cannot appropriately represent these relationships, the connections between conditionally coregulated genes are often missed. Results: We used a heuristically modified version of fuzzy k-means clustering to identify overlapping clusters of yeast genes based on published gene-expression data following the response of yeast cells to environmental changes. We have validated the method by identifying groups of functionally related and coregulated genes, and in the process we have uncovered new correlations between yeast genes and between the experimental conditions based on similarities in gene-expression patterns. To investigate the regulation of gene expression, we correlated the clusters with known transcription factor binding sites present in the genes’ promoters. These results give insights into the mechanism of the regulation of gene expression in yeast cells responding to environmental changes. Conclusions: Fuzzy k-means clustering is a useful analytical tool for extracting biological insights from gene-expression data. Our analysis presented here suggests that a prevalent theme in the regulation of yeast gene expression is the condition-specific coregulation of overlapping sets of genes." ] }
1705.01088
2951924128
We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" with features extracted from a Deep Convolutional Neural Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style texture transfer, color style swap, sketch painting to photo, and time lapse.
Early color transfer techniques tend to be global, i.e., a global transformation is applied to a source image to match the color statistics of a reference image @cite_9 @cite_16 . These methods work well when the source and reference images depict similar scenes, even when the spatial layouts are dissimilar. Other methods incorporate user input @cite_48 @cite_42 or a large database @cite_29 @cite_37 to guide the color transfer. Local color transfer methods infer local color statistics in different color regions by establishing region correspondences @cite_27 . More recently, some methods transfer local color between regions with the same annotated class; in a similar vein, others transfer the color style across corresponding semantic regions.
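As a concrete illustration of the global approach, matching the per-channel mean and standard deviation of the source to the reference takes only a few lines of numpy. This is a sketch in the spirit of the cited statistics-transfer work, though done here directly in RGB rather than the decorrelated color space the original method uses:

```python
import numpy as np

def match_color_stats(source, reference):
    """Global color transfer: shift and scale each channel of `source`
    so its mean and standard deviation match those of `reference`.

    Both images are arrays of shape (H, W, 3); output is float.
    """
    out = source.astype(float).copy()
    for ch in range(3):
        s, r = source[..., ch].astype(float), reference[..., ch].astype(float)
        s_std = s.std() if s.std() > 1e-8 else 1.0  # avoid dividing by ~0
        out[..., ch] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```

Because the transform is a single affine map per channel, it captures overall color mood but cannot adapt to region-level differences — which is exactly the limitation the local and semantics-aware methods above address.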
{ "cite_N": [ "@cite_37", "@cite_48", "@cite_29", "@cite_9", "@cite_42", "@cite_27", "@cite_16" ], "mid": [ "2083366168", "", "2116645175", "2129112648", "2240798854", "2160530465", "2141015396" ], "abstract": [ "We live in a dynamic visual world where the appearance of scenes changes dramatically from hour to hour or season to season. In this work we study \"transient scene attributes\" -- high level properties which affect scene appearance, such as \"snow\", \"autumn\", \"dusk\", \"fog\". We define 40 transient attributes and use crowdsourcing to annotate thousands of images from 101 webcams. We use this \"transient attribute database\" to train regressors that can predict the presence of attributes in novel images. We demonstrate a photo organization method based on predicted attributes. Finally we propose a high-level image editing method which allows a user to adjust the attributes of a scene, e.g. change a scene to be \"snowy\" or \"sunset\". To support attribute manipulation we introduce a novel appearance transfer technique which is simple and fast yet competitive with the state-of-the-art. We show that we can convincingly modify many transient attributes in outdoor scenes.", "", "We present an image restoration method that leverages a large database of images gathered from the web. Given an input image, we execute an efficient visual search to find the closest images in the database; these images define the input's visual context. We use the visual context as an image-specific prior and show its value in a variety of image restoration operations, including white balance correction, exposure correction, and contrast enhancement. We evaluate our approach using a database of 1 million images downloaded from Flickr and demonstrate the effect of database size on performance. 
Our results show that priors based on the visual context consistently out-perform generic or even domain-specific priors for these operations.", "We use a simple statistical analysis to impose one image's color characteristics on another. We can achieve color correction by choosing an appropriate source image and apply its characteristic to another image.", "We introduce a general technique for \"colorizing\" greyscale images by transferring color between a source, color image and a destination, greyscale image. Although the general problem of adding chromatic values to a greyscale image has no exact, objective solution, the current approach attempts to provide a method to help minimize the amount of human labor required for this task. Rather than choosing RGB colors from a palette to color individual components, we transfer the entire color \"mood\" of the source to the target image by matching luminance and texture information between the images. We choose to transfer only chromatic information and retain the original luminance values of the target image. Further, the procedure is enhanced by allowing the user to match areas of the two images with rectangular swatches. We show that this simple technique can be successfully applied to a variety of images and video, provided that texture and luminance are sufficiently distinct. The images generated demonstrate the potential and utility of our technique in a diverse set of application domains.", "We address the problem of regional color transfer between two natural images by probabilistic segmentation. We use a new expectation-maximization (EM) scheme to impose both spatial and color smoothness to infer natural connectivity among pixels. Unlike previous work, our method takes local color information into consideration, and segment image with soft region boundaries for seamless color transfer and compositing. 
Our modified EM method has two advantages in color manipulation: first, subject to different levels of color smoothness in image space, our algorithm produces an optimal number of regions upon convergence, where the color statistics in each region can be adequately characterized by a component of a Gaussian mixture model (GMM). Second, we allow a pixel to fall in several regions according to our estimated probability distribution in the EM step, resulting in a transparency-like ratio for compositing different regions seamlessly. Hence, natural color transition across regions can be achieved, where the necessary intra-region and inter-region smoothness are enforced without losing original details. We demonstrate results on a variety of applications including image deblurring, enhanced color transfer, and colorizing gray scale images. Comparisons with previous methods are also presented.", "This article proposes an original method to estimate a continuous transformation that maps one N-dimensional distribution to another. The method is iterative, non-linear, and is shown to converge. Only 1D marginal distribution is used in the estimation process, hence involving low computation costs. As an illustration this mapping is applied to color transfer between two images of different contents. The paper also serves as a central focal point for collecting together the research activity in this area and relating it to the important problem of automated color grading" ] }
1705.01088
2951924128
We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" with features extracted from a Deep Convolutional Neural Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.
Finding dense correspondences between two images is a fundamental problem in computer vision and graphics. Initial correspondence methods were designed for stereo matching, optical flow, and image alignment @cite_11 . These methods compute a dense correspondence field, but they assume brightness constancy and small local motion, and tend to handle occlusions poorly.
{ "cite_N": [ "@cite_11" ], "mid": [ "2118877769" ], "abstract": [ "Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system." ] }
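The Lucas-Kanade registration described in the abstract above (@cite_11) estimates alignment from the spatial intensity gradient via Newton-Raphson iteration. Below is a minimal NumPy sketch for the pure-translation case; the function names, the bilinear warp, and the synthetic Gaussian test image are illustrative assumptions, not details from the original paper.

```python
import numpy as np

def shift_bilinear(img, d):
    """Shift a grayscale image by a continuous offset d = (dy, dx),
    sampling at (y - dy, x - dx) with bilinear interpolation."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    ys, xs = yy - d[0], xx - d[1]
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)
    wx = np.clip(xs - x0, 0.0, 1.0)
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0]
            + wy * wx * img[y0 + 1, x0 + 1])

def estimate_translation(template, image, iters=30):
    """Lucas-Kanade-style translation estimate: linearize the residual with
    the template's spatial intensity gradients, then iterate Newton steps."""
    gy, gx = np.gradient(template)                  # spatial intensity gradient
    A = np.stack([gy.ravel(), gx.ravel()], axis=1)  # Jacobian w.r.t. (dy, dx)
    AtA = A.T @ A                                   # fixed normal-equation matrix
    p = np.zeros(2)                                 # current (dy, dx) estimate
    for _ in range(iters):
        warped = shift_bilinear(image, -p)          # warp image toward template
        err = (template - warped).ravel()
        p += np.linalg.solve(AtA, A.T @ err)        # Newton/Gauss-Newton update
    return p
```

Because each update solves a tiny 2x2 system instead of scanning candidate displacements, the method examines far fewer potential matches than exhaustive search, which is exactly the speed argument made in the abstract.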
1705.01088
2951924128
We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" with features extracted from a Deep Convolutional Neural Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.
PatchMatch @cite_14 relaxes the rigidity assumption, and is a fast randomized algorithm for finding a dense nearest-neighbor field (NNF) over patches. There are two extensions to handle patch variations in geometry and appearance. The Generalized PatchMatch @cite_26 algorithm allows these patches to undergo translations, rotations, and scale changes. NRDC @cite_50 handles consistent tonal and color changes through iterative matching and refinement of appearance transformations. Recently, a multi-scale patch generalization called "Needle" @cite_4 has been shown to facilitate reliable matching of degraded images. These approaches are still fundamentally based on low-level features, and as a result, fail to match images that are visually very different but semantically similar. Our proposed technique seeks to address this problem.
{ "cite_N": [ "@cite_50", "@cite_14", "@cite_4", "@cite_26" ], "mid": [ "2106505277", "1993120651", "2470884243", "1763426478" ], "abstract": [ "This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lenses, under non-rigid transformations, under different lighting, and over different backgrounds. We utilize a new coarse-to-fine scheme in which nearest-neighbor field computations using Generalized PatchMatch [ 2010] are interleaved with fitting a global non-linear parametric color model and aggregating consistent matching regions using locally adaptive constraints. Compared to previous correspondence approaches, our method combines the best of two worlds: It is dense, like optical flow and stereo reconstruction methods, and it is also robust to geometric and photometric variations, like sparse feature matching. We demonstrate the usefulness of our method using three applications for automatic example-based photograph enhancement: adjusting the tonal characteristics of a source image to match a reference, transferring a known mask to a new image, and kernel estimation for image deblurring.", "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. 
The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.", "Reliable patch-matching forms the basis for many algorithms (super-resolution, denoising, inpainting, etc.) However, when the image quality deteriorates (by noise, blur or geometric distortions), the reliability of patch-matching deteriorates as well. Matched patches in the degraded image, do not necessarily imply similarity of the underlying patches in the (unknown) high-quality image. This restricts the applicability of patch-based methods. In this paper we present a patch representation called \"Needle\", which consists of small multi-scale versions of the patch and its immediate surrounding region. While the patch at the finest image scale is severely degraded, the degradation decreases dramatically in coarser needle scales, revealing reliable information for matching. We show that the Needle is robust to many types of image degradations, leads to matches faithful to the underlying high-quality patches, and to improvement in existing patch-based methods.", "PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. 
This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection." ] }
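The two insights named in the PatchMatch abstract above — good matches can be found by random sampling, and image coherence lets them propagate to neighbors — can be sketched for single-channel images as follows. Patch size, iteration count, and the SSD patch distance are illustrative choices here, not prescriptions from @cite_14 .

```python
import numpy as np

def patchmatch(A, B, p=3, iters=4, seed=0):
    """Approximate nearest-neighbor field (NNF) from A's pxp patches to B's:
    random initialization, then alternating scans of neighbor propagation
    and exponentially shrinking random search."""
    rng = np.random.default_rng(seed)
    Ha, Wa = A.shape[0] - p + 1, A.shape[1] - p + 1
    Hb, Wb = B.shape[0] - p + 1, B.shape[1] - p + 1

    def ssd(ay, ax, by, bx):                      # sum-of-squared-differences
        d = A[ay:ay+p, ax:ax+p] - B[by:by+p, bx:bx+p]
        return float(np.sum(d * d))

    # Random initialization of the NNF and its per-patch cost
    nnf = np.stack([rng.integers(0, Hb, (Ha, Wa)),
                    rng.integers(0, Wb, (Ha, Wa))], axis=-1)
    cost = np.array([[ssd(y, x, *nnf[y, x]) for x in range(Wa)]
                     for y in range(Ha)])

    def try_improve(y, x, by, bx):                # accept a candidate if cheaper
        if 0 <= by < Hb and 0 <= bx < Wb:
            c = ssd(y, x, by, bx)
            if c < cost[y, x]:
                cost[y, x] = c
                nnf[y, x] = (by, bx)

    for it in range(iters):
        s = 1 if it % 2 == 0 else -1              # alternate scan direction
        ys = range(Ha) if s == 1 else range(Ha - 1, -1, -1)
        xs = range(Wa) if s == 1 else range(Wa - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: adopt the shifted match of the preceding neighbor
                if 0 <= y - s < Ha:
                    try_improve(y, x, nnf[y - s, x][0] + s, nnf[y - s, x][1])
                if 0 <= x - s < Wa:
                    try_improve(y, x, nnf[y, x - s][0], nnf[y, x - s][1] + s)
                # Random search: shrink the sampling window exponentially
                r = max(Hb, Wb)
                while r >= 1:
                    by = nnf[y, x][0] + rng.integers(-r, r + 1)
                    bx = nnf[y, x][1] + rng.integers(-r, r + 1)
                    try_improve(y, x, by, bx)
                    r //= 2
    return nnf, cost
```

Since candidates are only ever accepted when they lower the patch cost, the field improves monotonically from its random start; the propagation step is what spreads a lucky random hit across coherent regions.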
1705.01088
2951924128
We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" with features extracted from a Deep Convolutional Neural Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.
Our matching approach uses deep features generated by Deep Convolutional Neural Networks (CNNs) @cite_2 . It has been shown in high-level image recognition tasks that such deep features are better representations for images @cite_17 . DeepDream @cite_12 is a recent attempt to generate artistic work using a CNN. This inspired work on neural style transfer @cite_46 , which successfully applied CNNs (pre-trained VGG-16 networks @cite_15 ) to the problem of style transfer, or texture transfer @cite_0 .
{ "cite_N": [ "@cite_15", "@cite_0", "@cite_2", "@cite_46", "@cite_12", "@cite_17" ], "mid": [ "1686810756", "2161208721", "", "1924619199", "2898422183", "2952186574" ], "abstract": [ "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. 
The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.", "", "In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.", "", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
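The texture model of @cite_0 quoted above represents a texture "by the correlations between feature maps in several layers of the network", which reduces to a Gram matrix over flattened maps. A minimal NumPy sketch of that representation follows; here `features` is an arbitrary (C, H, W) array, whereas in @cite_0 and @cite_46 it would be the activations of a pre-trained VGG layer, and the normalization choice is an assumption.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature maps: G[i, j] = <F_i, F_j> / (H * W),
    where F_i is the i-th of C feature maps, flattened to a vector."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return (F @ F.T) / (H * W)

def style_distance(features_a, features_b):
    """Squared Frobenius distance between two Gram matrices — the per-layer
    style term of the neural style transfer objective."""
    Ga, Gb = gram_matrix(features_a), gram_matrix(features_b)
    return float(np.sum((Ga - Gb) ** 2))
```

Because the Gram matrix sums over all spatial positions, it records which features co-occur but not where, which is why it captures texture and style independently of the image's spatial layout.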