Dataset columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1611.02361
2949437134
The goal of sentence and document modeling is to accurately represent the meaning of sentences and documents for various Natural Language Processing tasks. In this work, we present Dependency Sensitive Convolutional Neural Networks (DSCNN) as a general-purpose classification system for both sentences and documents. DSCNN hierarchically builds textual representations by processing pretrained word embeddings via Long Short-Term Memory networks and subsequently extracting features with convolution operators. Compared with existing recursive neural models with tree structures, DSCNN does not rely on parsers and expensive phrase labeling, and thus is not restricted to sentence-level tasks. Moreover, unlike other CNN-based models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document. Experimental results demonstrate that our approach achieves state-of-the-art performance on several tasks, including sentiment analysis, question type classification, and subjectivity classification.
The success of deep learning architectures for NLP rests first on progress in learning distributed word representations in a semantic vector space @cite_15 @cite_10 @cite_32 , where each word is modeled by a real-valued vector called a word embedding. In this formulation, instead of using one-hot vectors that index words into a vocabulary, word embeddings are learned by projecting words onto a low-dimensional, dense vector space that encodes both semantic and syntactic features of words.
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_32" ], "mid": [ "2132339004", "2950133940", "2250539671" ], "abstract": [ "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. 
By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition." ] }
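The one-hot-versus-embedding formulation discussed in the related work above can be made concrete: looking up a word's embedding is equivalent to multiplying its one-hot vector by the embedding matrix. A minimal NumPy sketch (the toy vocabulary, random vectors, and function names are our own illustration; real systems would use pretrained embeddings such as word2vec or GloVe):

```python
import numpy as np

# Toy vocabulary and randomly initialised embeddings; in practice these
# vectors would be pretrained (e.g. word2vec or GloVe).
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3}
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 5))  # 4 words, 5-dim vectors

def one_hot(word):
    v = np.zeros(len(vocab))
    v[vocab[word]] = 1.0
    return v

def embed(word):
    # Embedding lookup: row selection from the embedding matrix,
    # equivalent to one_hot(word) @ embeddings.
    return embeddings[vocab[word]]

assert np.allclose(one_hot("great") @ embeddings, embed("great"))
```

The lookup replaces a sparse |V|-dimensional one-hot vector with a dense low-dimensional one, which is what lets the vector space encode semantic and syntactic regularities.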
1611.02361
2949437134
The goal of sentence and document modeling is to accurately represent the meaning of sentences and documents for various Natural Language Processing tasks. In this work, we present Dependency Sensitive Convolutional Neural Networks (DSCNN) as a general-purpose classification system for both sentences and documents. DSCNN hierarchically builds textual representations by processing pretrained word embeddings via Long Short-Term Memory networks and subsequently extracting features with convolution operators. Compared with existing recursive neural models with tree structures, DSCNN does not rely on parsers and expensive phrase labeling, and thus is not restricted to sentence-level tasks. Moreover, unlike other CNN-based models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document. Experimental results demonstrate that our approach achieves state-of-the-art performance on several tasks, including sentiment analysis, question type classification, and subjectivity classification.
In unordered models, textual representations are independent of word order. Specifically, ignoring token order within phrases and sentences, the bag-of-words model produces a representation by averaging the constituent word embeddings @cite_18 . A neural bag-of-words model described in @cite_7 adds a hidden layer on top of the averaged word embeddings before the softmax layer for classification. In contrast, sequence models, such as standard Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks, construct phrase and sentence representations in an order-sensitive way. For example, thanks to its ability to capture long-distance dependencies, the LSTM has re-emerged as a popular choice for many sequence-modeling tasks, including machine translation @cite_0 , image caption generation @cite_2 , and natural language generation @cite_8 . Both RNNs and LSTMs can also be converted into tree-structured networks by using parsing information. For example, @cite_25 applied Recursive Neural Networks, a variant of the standard RNN structured by syntactic trees, to the sentiment analysis task, and @cite_6 generalized the LSTM to the Tree-LSTM, in which each LSTM unit combines information from its children units.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_8", "@cite_6", "@cite_0", "@cite_2", "@cite_25" ], "mid": [ "1983578042", "2120615054", "2952013107", "2104246439", "2133564696", "2951912364", "2251939518" ], "abstract": [ "How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.", "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. 
The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.", "Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.", "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. 
Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. 
Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases." ] }
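The contrast drawn above between unordered and sequence models can be demonstrated in a few lines: averaging word embeddings (the bag-of-words representation) is blind to word order, which is exactly why order-sensitive models such as LSTMs are needed to tell reordered sentences apart. A toy sketch (vocabulary, dimensions, and names are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {"dog": 0, "bites": 1, "man": 2}
E = rng.normal(size=(len(vocab), 4))  # toy 4-dim embeddings

def bow(sentence):
    # Bag-of-words representation: the mean of the word embeddings.
    return np.mean([E[vocab[w]] for w in sentence], axis=0)

# Different meanings, identical bag-of-words representations.
assert np.allclose(bow(["dog", "bites", "man"]),
                   bow(["man", "bites", "dog"]))
```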
1611.02506
2554873467
Several modern applications involve huge graphs and require fast answers to reachability queries. In the more than two decades since the first proposals, several approaches have been presented, adopting on-line searches, hop labelling, or transitive-closure compression. Transitive-closure compression techniques usually construct a graph reachability index, for example by decomposing the graph into disjoint chains. As memory consumption is proportional to the number of chains, the target of those algorithms is to decompose the graph into an optimal number of chains. However, commonly used techniques fail to meet general expectations, are exceedingly complex, and their application to large graphs is impractical. The main contribution of this paper is a novel approach to constructing such reachability indexes. The proposed method decomposes the graph into a sub-optimal number @math of chains by following a greedy strategy. We show that, given a vertex topological order, such a decomposition is obtained in @math time, and requires @math space, with @math bounded by @math . We provide experimental evidence suggesting that, on different categories of automatically generated benchmarks as well as on graphs arising from the field of logic synthesis and formal verification, the proposed method produces a number of chains very close to the optimum, while significantly reducing computation time.
The remainder of this paper is organized as follows. We first illustrate the reachability index data structure proposed in @cite_6 and then describe the proposed algorithm, for which we prove time and space bounds. We next present an experimental evaluation of the algorithm over several categories of automatically generated benchmarks as well as on graphs arising from the field of logic synthesis and formal verification. Finally, we provide some summarizing remarks about the work.
{ "cite_N": [ "@cite_6" ], "mid": [ "1965109038" ], "abstract": [ "An important feature of database support for expert systems is the ability of the database to answer queries regarding the existence of a path from one node to another in the directed graph underlying some database relation. Given just the database relation, answering such a query is time-consuming, but given the transitive closure of the database relation a table look-up suffices. We present an indexing scheme that permits the storage of the pre-computed transitive closure of a database relation in a compressed form. The existence of a specified tuple in the closure can be determined from this compressed store by a single look-up followed by an index comparision. We show how to add nodes and arcs to the compressed closure incrementally. We also suggest how this compression technique can be used to reduce the effort required to compute the transitive closure." ] }
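The chain-decomposition idea in the record above can be sketched with a simple greedy pass over a topological order: append each vertex to the first chain whose tail has an edge to it, otherwise open a new chain. This is an illustrative simplification of the general strategy, not the paper's algorithm (function names and the tie-breaking rule are our own):

```python
def chain_decompose(topo_order, edges):
    """Greedily decompose a DAG into chains, given a topological order."""
    succ = {v: set() for v in topo_order}
    for u, v in edges:
        succ[u].add(v)
    chains = []  # each chain is a list of vertices in topological order
    for v in topo_order:
        for chain in chains:
            if v in succ[chain[-1]]:  # extend a chain ending in a predecessor
                chain.append(v)
                break
        else:
            chains.append([v])  # no chain can be extended: start a new one
    return chains

# Simple DAG: 0 -> 1 -> 3 and 0 -> 2 decomposes into two chains.
chains = chain_decompose([0, 1, 2, 3], [(0, 1), (1, 3), (0, 2)])
```

Storing one index entry per (vertex, chain) pair is what makes memory proportional to the number of chains, hence the interest in keeping that number close to the optimum.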
1611.02378
2949778035
With the emergence of various online video platforms such as YouTube, Youku, and LeTV, reviews of online TV series are becoming more and more important both for viewers and for producers. Customers rely heavily on these reviews before selecting a TV series, while producers use them to improve quality. As a result, automatically classifying reviews according to different requirements has evolved into a popular research topic of considerable practical importance. In this paper, we focus on reviews of hot TV series in China and successfully train generic classifiers based on eight predefined categories. The experimental results show promising performance and effective generalization to different TV series.
The dataset is another important factor influencing the performance of our classifiers. Most publicly available movie review data is in English, like the IMDB dataset collected by Pang and Lee (2004) @cite_3 . Although it covers all kinds of movies on the IMDB website, it only has labels related to sentiment, as its initial goal was sentiment analysis. Another intact movie review dataset is SNAP @cite_9 , which consists of reviews from Amazon but bears only rating scores. However, what we need are the content or aspect tags being discussed in each review. In addition, our review text is in Chinese. Therefore, it was necessary for us to build the review dataset ourselves and label the reviews into generic categories, which is one of the contributions of this paper.
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2951727499", "2114524997" ], "abstract": [ "Recommending products to consumers means not only understanding their tastes, but also understanding their level of experience. For example, it would be a mistake to recommend the iconic film Seven Samurai simply because a user enjoys other action movies; rather, we might conclude that they will eventually enjoy it -- once they are ready. The same is true for beers, wines, gourmet foods -- or any products where users have acquired tastes: the best' products may not be the most accessible'. Thus our goal in this paper is to recommend products that a user will enjoy now, while acknowledging that their tastes may have changed over time, and may change again in the future. We model how tastes change due to the very act of consuming more products -- in other words, as users become more experienced. We develop a latent factor recommendation system that explicitly accounts for each user's level of experience. We find that such a model not only leads to better recommendations, but also allows us to study the role of user experience and expertise on a novel dataset of fifteen million beer, wine, food, and movie reviews.", "Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as \"thumbs up\" or \"thumbs down\". To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints." ] }
1611.02614
2557192734
Cooperation in cellular networks is a promising scheme to improve system performance. Existing works consider that a user dynamically chooses the stations that cooperate for his/her service, but such an assumption often has practical limitations. Instead, cooperation groups can be predefined and static, with nodes linked by fixed infrastructure. To analyze such a potential network, we propose a grouping method based on node proximity. With the Mutually Nearest Neighbour Relation, we allow the formation of singles and pairs of nodes. Given an initial topology for the stations, two new point processes are defined, one for the singles and one for the pairs. We derive structural characteristics for these processes and analyse the resulting interference fields. When the node positions follow a Poisson Point Process (PPP), the processes of singles and pairs are not Poisson. However, the performance of the original model can be approximated by the superposition of two PPPs. This allows the derivation of exact expressions for the coverage probability. Numerical evaluation shows coverage gains from different signal cooperation that can reach up to 15% compared to the standard noncooperative coverage. The analysis is general and can be applied to any type of cooperation in pairs of transmitting nodes.
Other works propose to group BSs so that the clusters are defined a priori and do not change over time. Appropriate static clustering should yield considerable performance benefits for users with a cost-effective infrastructure. In favour of the static grouping approach are Akoum and Heath @cite_18 , who randomly group BSs around virtual centres; @cite_13 , who form clusters by using edge-coloring on a graph drawn by Delaunay triangulation; and @cite_19 , who cluster BSs using a hexagonal lattice and who analyse in @cite_7 the coverage benefits of cooperating pairs modeled by a Gauss-Poisson point process @cite_22 . The existing static clustering models either group BSs in a random way @cite_18 or randomly generate additional cluster nodes around a cluster center @cite_7 @cite_23 , which in the physical world translates into installing new nodes at random locations in the existing infrastructure. A more appropriate analysis should take a map of existing BS locations as its starting point and define cooperation groups from it in a systematic way. The criterion for grouping should be based on node proximity, in order to limit the negative influence of first-order interference.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_19", "@cite_23", "@cite_13" ], "mid": [ "1963737897", "", "", "2150166076", "1821780430", "1660531227" ], "abstract": [ "Interference coordination improves data rates and reduces outages in cellular networks. Accurately evaluating the gains of coordination, however, is contingent upon using a network topology that models realistic cellular deployments. In this paper, we model the base stations locations as a Poisson point process to provide a better analytical assessment of the performance of coordination. Since interference coordination is only feasible within clusters of limited size, we consider a random clustering process where cluster stations are located according to a random point process and groups of base stations associated with the same cluster coordinate. We assume channel knowledge is exchanged among coordinating base stations, and we analyze the performance of interference coordination when channel knowledge at the transmitters is either perfect or acquired through limited feedback. We apply intercell interference nulling (ICIN) to coordinate interference inside the clusters. The feasibility of ICIN depends on the number of antennas at the base stations. Using tools from stochastic geometry, we derive the probability of coverage and the average rate for a typical mobile user. We show that the average cluster size can be optimized as a function of the number of antennas to maximize the gains of ICIN. To minimize the mean loss in rate due to limited feedback, we propose an adaptive feedback allocation strategy at the mobile users. We show that adapting the bit allocation as a function of the signals' strength increases the achievable rate with limited feedback, compared to equal bit partitioning. 
Finally, we illustrate how this analysis can help solve network design problems such as identifying regions where coordination provides gains based on average cluster size, number of antennas, and number of feedback bits.", "", "", "Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.", "This paper develops a comprehensive analytical framework with foundations in stochastic geometry to characterize the performance of cluster-centric content placement in a cache-enabled device-to-device (D2D) network. 
Different from device-centric content placement, cluster-centric placement focuses on placing content in each cluster, such that the collective performance of all the devices in each cluster is optimized. Modeling the locations of the devices by a Poisson cluster process, we define and analyze the performance for three general cases: 1) @math -Tx case: the receiver of interest is chosen uniformly at random in a cluster and its content of interest is available at the @math closest device to the cluster center; 2) @math -Rx case: the receiver of interest is the @math closest device to the cluster center and its content of interest is available at a device chosen uniformly at random from the same cluster; and 3) baseline case: the receiver of interest is chosen uniformly at random in a cluster and its content of interest is available at a device chosen independently and uniformly at random from the same cluster. Easy-to-use expressions for the key performance metrics, such as coverage probability and area spectral efficiency of the whole network, are derived for all three cases. Our analysis concretely demonstrates significant improvement in the network performance when the device on which content is cached or device requesting content from cache is biased to lie closer to the cluster center compared with the baseline case. Based on this insight, we develop and analyze a new generative model for cluster-centric D2D networks that allows to study the effect of intra-cluster interfering devices that are more likely to lie closer to the cluster center.", "This paper proposes a method for designing base station (BS) clusters and cluster patterns for pair-wise BS coordination. The key idea is that each BS cluster is formed by using the second-order Voronoi region, and the BS clusters are assigned to a specific cluster pattern by using edge-coloring for a graph drawn by Delaunay triangulation. 
The main advantage of the proposed method is that the BS selection conflict problem is prevented, while users are guaranteed to communicate with their two closest BSs in any irregular BS topology. With the proposed coordination method, analytical expressions for the rate distribution and the ergodic spectral efficiency are derived as a function of relevant system parameters in a fixed irregular network model. In a random network model with a homogeneous Poisson point process, a lower bound on the ergodic spectral efficiency is characterized. Through system level simulations, the performance of the proposed method is compared with that of conventional coordination methods: dynamic clustering and static clustering. Our major finding is that, when users are dense enough in a network, the proposed method provides the same level of coordination benefit with dynamic clustering to edge users." ] }
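The Mutually Nearest Neighbour grouping criterion discussed in the record above is easy to state in code: two stations form a pair iff each is the other's nearest neighbour, and every other station stays single. A minimal sketch on planar points (function names are our own):

```python
import math

def nearest(i, points):
    # Index of the nearest other point to points[i].
    return min((j for j in range(len(points)) if j != i),
               key=lambda j: math.dist(points[i], points[j]))

def mnn_groups(points):
    pairs, singles = [], []
    for i in range(len(points)):
        j = nearest(i, points)
        if nearest(j, points) == i:      # mutual nearest neighbours
            if i < j:                    # record each pair once
                pairs.append((i, j))
        else:
            singles.append(i)
    return pairs, singles

# Stations at (0,0) and (1,0) pair up; the one at (5,5) stays single.
pairs, singles = mnn_groups([(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)])
```

Because the criterion depends only on the existing station map, it defines static cooperation groups without installing any new nodes.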
1611.02588
2555891875
The utilization of social media material in journalistic workflows is increasing, demanding automated methods for the identification of mis- and disinformation. Since textual contradiction across social media posts can be a signal of rumorousness, we seek to model how claims in Twitter posts are being textually contradicted. We identify two different contexts in which contradiction emerges: its broader form can be observed across independently posted tweets and its more specific form in threaded conversations. We define how the two scenarios differ in terms of central elements of argumentation: claims and conversation structure. We design and evaluate models for the two scenarios uniformly as 3-way Recognizing Textual Entailment tasks in order to represent claims and conversation structure implicitly in a generic inference model, while previous studies used explicit or no representation of these properties. To address noisy text, our classifiers use simple similarity features derived from the string and part-of-speech level. Corpus statistics reveal distribution differences for these features in contradictory as opposed to non-contradictory tweet relations, and the classifiers yield state of the art performance.
The RTE-3 benchmark dataset is the first resource that labels paired text snippets with 3-way RTE judgments @cite_9 , but it comprises general newswire texts. Similarly, in the large annotated corpus recently used for deep entailment models @cite_0 , text pairs labeled as Contradiction are defined too broadly, i.e., they express generic semantic incoherence rather than the semantically motivated polarization and mismatch that we are after, which calls the corpus's utility in the rumor verification context into question.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2953084091", "2130359236" ], "abstract": [ "Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.", "Detecting conflicting statements is a foundational text understanding task with applications in information analysis. We propose an appropriate definition of contradiction for NLP tasks and develop available corpora, from which we construct a typology of contradictions. We demonstrate that a system for contradiction needs to make more fine-grained distinctions than the common systems for entailment. In particular, we argue for the centrality of event coreference and therefore incorporate such a component based on topicality. We present the first detailed breakdown of performance on this task. Detecting some types of contradiction requires deeper inferential paths than our system is capable of, but we achieve good performance on types arising from negation and antonymy." ] }
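The simple string-level similarity features mentioned in the record above can be illustrated with two toy features, Jaccard word overlap and a negation-mismatch flag; these are our own illustration of the idea, not the paper's actual feature set (which also includes part-of-speech-level features):

```python
def word_overlap(t1, t2):
    # Jaccard overlap between the word sets of two texts.
    a, b = set(t1.lower().split()), set(t2.lower().split())
    return len(a & b) / len(a | b)

def negation_mismatch(t1, t2):
    # True when exactly one of the two texts contains a negation cue,
    # a cheap signal of potential contradiction.
    neg = {"not", "no", "never", "n't", "cannot"}
    has = lambda t: bool(neg & set(t.lower().split()))
    return has(t1) != has(t2)

claim = "the bridge has collapsed"
reply = "the bridge has not collapsed"
# High lexical overlap combined with a negation mismatch is the kind of
# distributional signal that separates contradictory from
# non-contradictory tweet relations.
```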
1611.02648
2556467266
We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the known problem of over-regularisation that has been shown to arise in regular VAEs also manifests itself in our model and leads to cluster degeneracy. We show that a heuristic called minimum information constraint that has been shown to mitigate this effect in VAEs can also be applied to improve unsupervised clustering performance with our model. Furthermore we analyse the effect of this heuristic and provide an intuition of the various processes with the help of visualizations. Finally, we demonstrate the performance of our model on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct, interpretable and achieve competitive performance on unsupervised clustering relative to state-of-the-art results.
The work that is most closely related to ours is the stacked generative semi-supervised model (M1+M2) by @cite_2 . One of the main differences is the fact that their prior distribution is a neural network transformation of both continuous and discrete variables, with Gaussian and categorical priors respectively. The prior for our model, on the other hand, is a neural network transformation of Gaussian variables, which parametrise the means and variances of a mixture of Gaussians, with categorical variables for the mixture components. Crucially, @cite_2 apply their model to semi-supervised classification tasks, whereas we focus on unsupervised clustering. Therefore, our inference algorithm is more specific to the latter.
{ "cite_N": [ "@cite_2" ], "mid": [ "2434741482" ], "abstract": [ "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods." ] }
1611.02639
2553389628
Gradients have been used to quantify feature importance in machine learning models. Unfortunately, in nonlinear deep networks, not only individual neurons but also the whole network can saturate, and as a result an important input feature can have a tiny gradient. We study various networks, and observe that this phenomenon is indeed widespread, across many inputs. @PARASPLIT We propose to examine interior gradients, which are gradients of counterfactual inputs constructed by scaling down the original input. We apply our method to the GoogleNet architecture for object recognition in images, as well as a ligand-based virtual screening network with categorical features and an LSTM based language model for the Penn Treebank dataset. We visualize how interior gradients better capture feature importance. Furthermore, interior gradients are applicable to a wide variety of deep networks, and have the attribution property that the feature importance scores sum to the prediction score. @PARASPLIT Best of all, interior gradients can be computed just as easily as gradients. In contrast, previous methods are complex to implement, which hinders practical adoption.
Score back-propagation based methods The second set of approaches involve back-propagating the final prediction score through each layer of the network down to the individual features. These include DeepLift ( @cite_21 ), Layer-wise relevance propagation (LRP) ( @cite_23 ), Deconvolution networks (DeConvNets) ( @cite_14 ), and Guided back-propagation ( @cite_20 ). These methods largely differ in the backpropagation logic for various non-linear activation functions. While DeConvNets, Guided back-propagation and LRP rely on the local gradients at each non-linear activation function, DeepLift relies on the deviation in the neuron's activation from a certain baseline input.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_20", "@cite_23" ], "mid": [ "2952186574", "", "2123045220", "2273348943" ], "abstract": [ "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "", "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). 
To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities." ] }
1611.02442
2951270887
We study approximation algorithms for revenue maximization based on static item pricing, where a seller chooses prices for various goods in the market, and then the buyers purchase utility-maximizing bundles at these given prices. We formulate two somewhat general techniques for designing good pricing algorithms for this setting: Price Doubling and Item Halving. Using these techniques, we unify many of the existing results in the item pricing literature under a common framework, as well as provide several new item pricing algorithms for approximating both revenue and social welfare. More specifically, for a variety of settings with item pricing, we show that it is possible to deterministically obtain a log-approximation for revenue and a constant-approximation for social welfare simultaneously: thus one need not sacrifice revenue if the goal is to still have decent welfare guarantees. In addition, we provide a new black-box reduction from revenue to welfare based on item pricing, which immediately gives us new revenue-approximation algorithms (e.g., for gross substitutes valuations). The main technical contribution of this paper is a @math -approximation algorithm for revenue maximization based on the Item Halving technique, for settings where buyers have XoS valuations, where @math is the number of goods and @math is the average supply. Surprisingly, ours is the first known item pricing algorithm with polylogarithmic approximations for such general classes of valuations, and partially resolves an important open question from the algorithmic pricing literature about the existence of item pricing algorithms with logarithmic factors for general valuations. We also use the Item Halving framework to form envy-free item pricing mechanisms for the popular setting of multi-unit markets, providing a log-approximation to revenue in this case.
Clearly, the current paper is closely related to the growing body of work on item pricing and more general envy-free schemes @cite_2 @cite_13 @cite_1 for social welfare and revenue maximization. Item pricing for maximizing only welfare has traditionally been a sought-after area of research owing to its ties to Walrasian equilibrium; since our focus is primarily on revenue, we refer the reader to @cite_25 @cite_8 @cite_7 for more recent algorithmic perspectives on the subject. On the other hand, the problem of item pricing for revenue maximization has recently gained traction in computer science with respect to both sequential and simultaneous mechanisms. A steady stream of research has yielded near-optimal approximation algorithms for a variety of settings including but not limited to unit demand @cite_29 @cite_5 @cite_3 , single minded @cite_12 @cite_33 @cite_6 , graph minded @cite_23 , multi-unit markets @cite_2 @cite_13 , and unlimited supply settings @cite_26 @cite_29 . One of the main contributions of this paper is a general framework that captures many of the above results, and provides a recipe for converting them into bicriteria approximations.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_7", "@cite_8", "@cite_29", "@cite_1", "@cite_3", "@cite_6", "@cite_23", "@cite_2", "@cite_5", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "1656659859", "2062453462", "", "", "1964207488", "", "", "", "2292671995", "", "", "2040618534", "", "1526218779" ], "abstract": [ "We study prior-free revenue maximization for a seller with unlimited supply of n item types facing m myopic buyers present for k < log n days. We show that a certain randomized schedule of posted prices has an approximation factor of O(log m+log n k). This algorithm relies on buyer valuations having hereditary maximizers, a novel natural property satisfied for example by gross substitutes valuations. We obtain a matching lower bound with multi-unit valuations. In light of existing results [2], k days can thus improve the approximation by a Θ(k) factor. We also provide a posted price schedule with the same factor for positive affine allocative externalities, despite an increase in the optimal revenue.", "We deal with the problem of finding profit-maximizing prices for a finite number of distinct goods, assuming that of each good an unlimited number of copies is available, or that goods can be reproduced at no cost (e.g., digital goods). Consumers specify subsets of the goods and the maximum prices they are willing to pay. In the considered single-minded case every consumer is interested in precisely one such subset. If the goods are the edges of a graph and consumers are requesting to purchase paths in this graph, then we can think of the problem as pricing computer network connections or transportation links.We start by showing weak NP-hardness of the very restricted case in which the requested subsets are nested, i.e., contained inside each other or non-intersecting, thereby resolving the previously open question whether the problem remains NP-hard when the underlying graph is simply a line. 
Using a reduction inspired by this result we present an approximation preserving reduction that proves APX-hardness even for very sparse instances defined on general graphs, where the number of requests per edge is bounded by a constant B and no path is longer than some constant l. On the algorithmic side we first present an O(log l + log B)-approximation algorithm that (almost) matches the previously best known approximation guarantee in the general case, but is especially well suited for sparse problem instances. Using a new upper bounding technique we then give an O(l2)-approximation, which is the first algorithm for the general problem with an approximation ratio that does not depend on B.", "", "", "We investigate nonparametric multiproduct pricing problems, in which we want to find revenue maximizing prices for products @math based on a set of customer samples @math . We mostly focus on the unit-demand case, in which products constitute strict substitutes and each customer aims to purchase a single product. In this setting a customer sample consists of a number of nonzero values for different products and possibly an additional product ranking. Once prices are fixed, each customer chooses to buy one of the products she can afford based on some predefined selection rule. We distinguish between the min-buying, max-buying, and rank-buying models. Some of our results also extend to single-minded pricing, in which case products are strict complements and every customer seeks to buy a single set of products, which she purchases if the sum of prices is below her valuation for that set. For the min-buying model we show that the revenue maximization problem is not approximable within factor @math for some constant @math , unless @math , thereby almost closing the gap between the known algorithmic results and previous lower bounds. 
We also prove inapproximability within @math , @math being an upper bound on the number of nonzero values per customer, and @math under slightly stronger assumptions and provide matching upper bounds. Surprisingly, these hardness results hold even if a price ladder constraint, i.e., a predefined order on the prices of all products, is given. Without the price ladder constraint we obtain similar hardness results for the special case of uniform valuations, i.e., the case that every customer has identical values for all the products she is interested in, assuming specific hardness of the balanced bipartite independent set problem in constant degree graphs or hardness of refuting random 3CNF formulas. Introducing a slightly more general problem definition in which customers are given as an explicit probability distribution, we obtain inapproximability within @math assuming @math . These results apply to single-minded pricing as well. For the max-buying model a polynomial-time approximation scheme exists if a price ladder is given. We give a matching lower bound by proving strong NP-hardness. Assuming limited product supply, we analyze a generic local search algorithm and prove that it is 2-approximate. Finally, we discuss implications for the rank-buying model.", "", "", "", "In the highway problem, we are given an @math -edge path graph (the highway), and a set of paths (the drivers), each one with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path, when it does not exceed the budget of the driver, and zero otherwise. The goal is to choose weights so as to maximize the profit. A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly @math -hard only recently [K. M. , in Proceedings of the International Symposium on Algorithmic Game Theory (SAGT), 2009, pp. 275--286]. The best-known approximation is @math [I. 
Gamzu and D. Segev, in Proceedings of the International Colloquium on Automata, Languages and Programming (ICALP), 2010, pp. 582--593], which improves on the previous best @math approximation [M.-F. Balcan and A. Blum, in Proceedings of the ACM Conference on Electronic Commerce,...", "", "", "We study envy-free (EF) mechanisms for multi-unit auctions with budgeted agents that approximately maximize revenue. In an EF auction, prices are set so that every bidder receives a bundle that maximizes her utility amongst all bundles; we show that the problem of revenue-maximizing EF auctions is NP-hard, even for the case of identical items and additive valuations (up to the budget). The main result of our paper is a novel EF auction that runs in polynomial time and provides an approximation of 1/2 with respect to the revenue-maximizing EF auction. A slight variant of our mechanism will produce an allocation and pricing that is more restrictive (so-called item pricing) and gives a 1/2 approximation to the optimal revenue within this more restrictive class.", "", "Novel compositions for use as cell growth-promoting materials are made by the following novel process involving the steps of: (a) slowly contacting serum or plasma with sufficient chilled perchloric acid to reach a 0.1 to 0.25 final molar concentration of said perchloric acid in said serum or plasma, (b) at a temperature of -1 DEG C.
to 15 DEG C., (c) under intensive mixing which is continued until a homogeneous suspension is obtained, (d) separating the resultant precipitate, which contains the growth-promoting substances, from the supernatant, (e) eluting said growth-promoting substances from said precipitate by first resuspending said precipitate in an aqueous alkaline or salt solution, and thereafter, (f) adjusting the pH to solubilize the growth-promoting substances from the insoluble proteins, (g) separating the supernatant, which contains the growth-promoting substances, from the insoluble, undesired precipitate, (h) exchanging the solvent in the supernatant for a physiological solution, and (i) sterilizing the resultant growth-promoting material." ] }
1611.02442
2951270887
We study approximation algorithms for revenue maximization based on static item pricing, where a seller chooses prices for various goods in the market, and then the buyers purchase utility-maximizing bundles at these given prices. We formulate two somewhat general techniques for designing good pricing algorithms for this setting: Price Doubling and Item Halving. Using these techniques, we unify many of the existing results in the item pricing literature under a common framework, as well as provide several new item pricing algorithms for approximating both revenue and social welfare. More specifically, for a variety of settings with item pricing, we show that it is possible to deterministically obtain a log-approximation for revenue and a constant-approximation for social welfare simultaneously: thus one need not sacrifice revenue if the goal is to still have decent welfare guarantees. In addition, we provide a new black-box reduction from revenue to welfare based on item pricing, which immediately gives us new revenue-approximation algorithms (e.g., for gross substitutes valuations). The main technical contribution of this paper is a @math -approximation algorithm for revenue maximization based on the Item Halving technique, for settings where buyers have XoS valuations, where @math is the number of goods and @math is the average supply. Surprisingly, ours is the first known item pricing algorithm with polylogarithmic approximations for such general classes of valuations, and partially resolves an important open question from the algorithmic pricing literature about the existence of item pricing algorithms with logarithmic factors for general valuations. We also use the Item Halving framework to form envy-free item pricing mechanisms for the popular setting of multi-unit markets, providing a log-approximation to revenue in this case.
Despite the tremendous body of work on revenue maximization, ours is the first known poly-logarithmic approximation algorithm based on static item pricing for complex valuations such as submodular and XoS functions. Partial exceptions include the @math -approximation for 'simple submodular functions' in @cite_4 and the @math -approximation in @cite_16 for settings consisting of a large number of buyers having the exact same valuation function. Although our central result improves upon the upper bound of @math from @cite_4 for XoS functions, it is pertinent to mention that their result holds for a more general information model where the seller is not aware of the buyer valuations. For such a general model, it is reasonable to expect that the only possible strategy would be to price items uniformly (common price), as is done in @cite_4 @cite_16 . The presence of a matching lower bound of @math @cite_16 for uniform pricing motivates the need for pricing different goods differently. For such a scheme, we argue that knowledge of buyer valuations is necessary since it allows us to quantify each good's relative value in the market.
{ "cite_N": [ "@cite_16", "@cite_4" ], "mid": [ "2082623911", "2031502124" ], "abstract": [ "We consider the item pricing problem for revenue maximization, where a single seller with multiple distinct items caters to multiple buyers with unknown subadditive valuation functions who arrive in a sequence. The seller sets the prices on individual items, and we design randomized pricing strategies to maximize expected revenue. We consider dynamic uniform strategies, which can change the price upon the arrival of each buyer but the price on all unsold items is the same at all times, and static nonuniform strategies, which can assign different prices to different items but can never change it after setting it initially. We design pricing strategies that guarantee poly-logarithmic (in number of items) approximation to maximum possible social welfare, which is an upper bound on revenue. We also show that any static uniform pricing strategy cannot yield such approximation, thus highlighting a large gap between the powers of dynamic and static pricing. Finally, our pricing strategies imply poly-logarithmic ...", "We consider the problem of pricing n items to maximize revenue when faced with a series of unknown buyers with complex preferences, and show that a simple pricing scheme achieves surprisingly strong guarantees. We show that in the unlimited supply setting, a random single price achieves expected revenue within a logarithmic factor of the total social welfare for customers with general valuation functions, which may not even necessarily be monotone. This generalizes work of Guruswami et. al [18], who show a logarithmic factor for only the special cases of single-minded and unit-demand customers. 
In the limited supply setting, we show that for subadditive valuations, a random single price achieves revenue within a factor of 2^{O(√(log n log log n))} of the total social welfare, i.e., the optimal revenue the seller could hope to extract even if the seller could price each bundle differently for every buyer. This is the best approximation known for any item pricing scheme for subadditive (or even submodular) valuations, even using multiple prices. We complement this result with a lower bound showing a sequence of subadditive (in fact, XOS) buyers for which any single price has approximation ratio 2^{Ω(log^{1/4} n)}, thus showing that single price schemes cannot achieve a polylogarithmic ratio. This lower bound demonstrates a clear distinction between revenue maximization and social welfare maximization in this setting, for which [12,10] show that a fixed price achieves a logarithmic approximation in the case of XOS [12], and more generally subadditive [10], customers. We also consider the multi-unit case examined by [1111] in the context of social welfare, and show that so long as no buyer requires more than a 1 − ε fraction of the items, a random single price now does in fact achieve revenue within an O(log n) factor of the maximum social welfare." ] }
1611.01868
2950425451
Truth discovery aims to resolve conflicts and find the truth from multiple-source statements. Conventional methods mostly build on the mutual effect between the reliability of sources and the credibility of statements, but pay no attention to the mutual effect among the credibilities of statements about the same object. We propose memory network based models that incorporate both ideas for truth discovery. We use a feedforward memory network and a feedback memory network to learn representations of the credibility of statements that are about the same object. In particular, we adopt a memory mechanism to learn source reliability and use it in truth prediction. When training the models, we use multiple types of data (categorical data and continuous data) by automatically assigning different weights in the loss function based on each type's effect on truth discovery prediction. The experiment results show that the memory network based models substantially outperform the state-of-the-art method and other baseline methods.
The Bayesian based methods apply Bayesian analysis to predict the probability of a statement being a truth based on observed information. The corresponding methods include TruthFinder @cite_22 , AccuPr @cite_9 , AccuSim @cite_9 , AccuFormat @cite_9 , LCA @cite_23 and CRH @cite_4 . Specifically, the TruthFinder method considers similarity between statements, and the AccuPr method considers that different statements on the same entry should be disjoint. The LCA method is a probabilistic model that analyzes latent credibility factors by using them as parameters to find the maximum a posteriori (MAP) estimate. The CRH method @cite_4 uses heterogeneous types of data, consisting of categorical data and continuous data, to estimate the reliability of sources and predict truths. Copy-aware methods, such as AccuCopy @cite_9 , discount votes from copied observations by computing credibility.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_22", "@cite_23" ], "mid": [ "2073545563", "", "2118388899", "1713409046" ], "abstract": [ "Many data management applications, such as setting up Web portals, managing enterprise data, managing community data, and sharing scientific data, require integrating data from multiple sources. Each of these sources provides a set of values and different sources can often provide conflicting values. To present quality data to users, it is critical that data integration systems can resolve conflicts and discover true values. Typically, we expect a true value to be provided by more sources than any particular false one, so we can take the value provided by the majority of the sources as the truth. Unfortunately, a false value can be spread through copying and that makes truth discovery extremely tricky. In this paper, we consider how to find true values from conflicting information when there are a large number of sources, among which some may copy from others. We present a novel approach that considers dependence between data sources in truth discovery. Intuitively, if two data sources provide a large number of common values and many of these values are rarely provided by other sources (e.g., particular false values), it is very likely that one copies from the other. We apply Bayesian analysis to decide dependence between sources and design an algorithm that iteratively detects dependence and discovers truth from conflicting information. We also extend our model by considering accuracy of data sources and similarity between values. Our experiments on synthetic data as well as real-world data show that our algorithm can significantly improve accuracy of truth discovery and is scalable when there are a large number of data sources.", "", "The World Wide Web has become the most important information source for most of us. Unfortunately, there is no guarantee for the correctness of information on the Web. 
Moreover, different websites often provide conflicting information on a subject, such as different specifications for the same product. In this paper, we propose a new problem, called Veracity, i.e., conformity to truth, which studies how to find true facts from a large amount of conflicting information on many subjects that is provided by various websites. We design a general framework for the Veracity problem and invent an algorithm, called TRUTHFINDER, which utilizes the relationships between websites and their information, i.e., a website is trustworthy if it provides many pieces of true information, and a piece of information is likely to be true if it is provided by many trustworthy websites. An iterative method is used to infer the trustworthiness of websites and the correctness of information from each other. Our experiments show that TRUTHFINDER successfully finds true facts among conflicting information and identifies trustworthy websites better than the popular search engines.", "A frequent problem when dealing with data gathered from multiple sources on the web (ranging from booksellers to Wikipedia pages to stock analyst predictions) is that these sources disagree, and we must decide which of their (often mutually exclusive) claims we should accept. Current state-of-the-art information credibility algorithms known as \"fact-finders\" are transitive voting systems with rules specifying how votes iteratively flow from sources to claims and then back to sources. While this is quite tractable and often effective, fact-finders also suffer from substantial limitations; in particular, a lack of transparency obfuscates their credibility decisions and makes them difficult to adapt and analyze: knowing the mechanics of how votes are calculated does not readily tell us what those votes mean, and finding, for example, that a source has a score of 6 is not informative. 
We introduce a new approach to information credibility, Latent Credibility Analysis (LCA), constructing strongly principled, probabilistic models where the truth of each claim is a latent variable and the credibility of a source is captured by a set of model parameters. This gives LCA models clear semantics and modularity that make extending them to capture additional observed and latent credibility factors straightforward. Experiments over four real-world datasets demonstrate that LCA models can outperform the best fact-finders in both unsupervised and semi-supervised settings." ] }
1611.02119
2554330292
Evidence-based health care (EBHC) is an important practice of medicine which attempts to provide systematic scientific evidence to answer clinical questions. In this context, Epistemonikos (www.epistemonikos.org) is one of the first and most important online systems in the field, providing an interface that supports users in searching and filtering scientific articles for practicing EBHC. The system nowadays requires a large amount of expert human effort, with close to 500 physicians manually curating articles to be utilized in the platform. In order to keep the system updated at the scale of this large and continuous flow of data, we introduce EpistAid, an interactive intelligent interface which supports clinicians in the process of curating documents for Epistemonikos within lists of papers called evidence matrices. We introduce the characteristics, design and algorithms of our solution, as well as a prototype implementation and a case study to show how our solution addresses the information overload problem in this area.
Several approaches have been proposed to reduce the workload associated with the task of document filtering for citation screening in EBHC databases. Among them we can find: Active Learning ( @cite_32 , @cite_11 , @cite_13 , @cite_20 , @cite_23 , @cite_28 ), Automatic Classification ( @cite_7 , @cite_27 , @cite_16 , @cite_3 ), Document Ranking @cite_1 , Relevance Feedback @cite_14 , Document Prioritization @cite_0 , and Visualization @cite_26 , @cite_8 . The problem with the majority of these approaches is that they do not ensure 100% recall. In contrast to these works, we provide the first controllable and transparent information filtering system for EBHC, inspired by controllable recommender system interfaces @cite_6 , @cite_24 .
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_7", "@cite_8", "@cite_28", "@cite_1", "@cite_32", "@cite_3", "@cite_6", "@cite_0", "@cite_24", "@cite_27", "@cite_23", "@cite_16", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "", "1973251307", "2063198586", "1978257339", "2404374285", "", "", "2180673283", "2058990114", "2183168769", "", "", "2043566294", "2079370125", "1760234209", "2037771830", "2099883114" ], "abstract": [ "", "Context: Systematic Literature Reviews (SLRs) are an important component to identify and aggregate research evidence from different empirical studies. Despite its relevance, most of the process is conducted manually, implying additional effort when the Selection Review task is performed and leading to reading all studies under analysis more than once. Objective: We propose an approach based on Visual Text Mining (VTM) techniques to assist the Selection Review task in SLR. It is implemented into a VTM tool (Revis), which is freely available for use. Method: We have selected and implemented appropriate visualization techniques into our approach and validated and demonstrated its usefulness in performing real SLRs. Results: The results have shown that employment of VTM techniques can successfully assist in the Selection Review task, speeding up the entire SLR process in comparison to the conventional approach. Conclusion: VTM techniques are valuable tools to be used in the context of selecting studies in the SLR process, prone to speed up some stages of SLRs.", "Objectives: To investigate whether (1) machine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers; (2) classifier performance varies with optimization; and (3) the number of citations to screen can be reduced. Methods: We used an open-source, data-mining suite to process and classify biomedical citations that point to mostly nonrandomized studies from 2 systematic reviews. 
We built training and test sets for citation portions and compared classifier performance by considering the value of indexing, various feature sets, and optimization. We conducted our experiments in 2 phases. The design of phase I with no optimization was: 4 classifiers × 3 feature sets × 3 citation portions. Classifiers included k-nearest neighbor, naive Bayes, complement naive Bayes, and evolutionary support vector machine. Feature sets included bag of words, and 2- and 3-term n-grams. Citation portions included titles, titles and abstracts, and full citations with metadata. Phase II with optimization involved a subset of the classifiers, as well as features extracted from full citations, and full citations with overweighted titles. We optimized features and classifier parameters by manually setting information gain thresholds outside of a process for iterative grid optimization with 10-fold cross-validations. We independently tested models on data reserved for that purpose and statistically compared classifier performance on 2 types of feature sets. We estimated the number of citations needed to screen by reviewers during a second pass through a reduced set of citations. Results: In phase I, the evolutionary support vector machine returned the best recall for bag of words extracted from full citations; the best classifier with respect to overall performance was k-nearest neighbor. No classifier attained good enough recall for this task without optimization. In phase II, we boosted performance with optimization for evolutionary support vector machine and complement naive Bayes classifiers. Generalization performance was better for the latter in the independent tests. For evolutionary support vector machine and complement naive Bayes classifiers, the initial retrieval set was reduced by 46% and 35%, respectively. Conclusions: Machine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers.
Optimization can markedly improve performance of classifiers. However, generalizability varies with the classifier. The number of citations to screen during a second independent pass through the citations can be substantially reduced.", "Background: Systematic literature reviews (SLRs)are an important component to identify and aggregate research evidence from different empirical studies. One of the activities associated with the SLR process is the selection of primary studies. The process used to select primary studies can be arduous, particularly when the researcher faces large volumes of primary studies. Aim: An experiment was conducted as a pilot test to compare the performance and effectiveness of graduate students in selecting primary studies manually and using visual text mining (VTM) techniques. This paper describes a replication study. Method: The same experimental design and materials of the previous experiment were used in the current experiment. Result: The previous experiment revealed that VTM techniques can speed up the selection of primary studies and increase the number of studies correctly included excluded (effectiveness). The results of the replication confirmed that studies are more rapidly selected using VTM. We observed that the level of experience in researching has a direct relationship with the effectiveness. Conclusion: VTM techniques have proven valuable in the selection of primary studies.", "The active learning (AL) framework is an increasingly popular strategy for reducing the amount of human labeling effort required to induce a predictive model. Most work in AL has assumed that a single, infallible oracle provides labels requested by the learner at a fixed cost. However, real-world applications suitable for AL often include multiple domain experts who provide labels of varying cost and quality. 
We explore this multiple expert active learning (MEAL) scenario and develop a novel algorithm for instance allocation that exploits the meta-cognitive abilities of novice (cheap) experts in order to make the best use of the experienced (expensive) annotators. We demonstrate that this strategy outperforms strong baseline approaches to MEAL on both a sentiment analysis dataset and two datasets from our motivating application of biomedical citation screening. Furthermore, we provide evidence that novice labelers are often aware of which instances they are likely to mislabel.", "", "", "Background Identifying relevant studies for inclusion in a systematic review (i.e. screening) is a complex, laborious and expensive task. Recently, a number of studies has shown that the use of machine learning and text mining methods to automatically identify relevant studies has the potential to drastically decrease the workload involved in the screening phase. The vast majority of these machine learning methods exploit the same underlying principle, i.e. a study is modelled as a bag-of-words (BOW).", "This paper presents an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources, such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit preferences from the end user. We present an evaluation that compares different interactive and non-interactive hybrid strategies for computing recommendations across diverse social and semantic web APIs. 
Results of the study indicate that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content.", "Objective: Machine learning systems can be an aid to experts performing systematic reviews (SRs) by automatically ranking journal articles for work-prioritization. This work investigates whether a topic-specific automated document ranking system for SRs can be improved using a hybrid approach, combining topic-specific training data with data from other SR topics. Design: A test collection was built using annotated reference files from 24 systematic drug class reviews. A support vector machine learning algorithm was evaluated with cross-validation, using seven different fractions of topic-specific training data in combination with samples from the other 23 topics. This approach was compared to both a baseline system, which used only topic-specific training data, and to a system using only the nontopic data sampled from the remaining topics. Measurements: Mean area under the receiver-operating curve (AUC) was used as the measure of comparison. Results: On average, the hybrid system improved mean AUC over the baseline system by 20%, when topic-specific training data were scarce. The system performed significantly better than the baseline system at all levels of topic-specific training data. In addition, the system performed better than the nontopic system at all but the two smallest fractions of topic-specific training data, and no worse than the nontopic system with these smallest amounts of topic-specific training data. Conclusions: Automated literature prioritization could be helpful in assisting experts to organize their time when performing systematic reviews.
Future work will focus on extending the algorithm to use additional sources of topic-specific data, and on embedding the algorithm in an interactive system available to systematic reviewers during the literature review process.", "", "", "Background Systematic reviews address a specific clinical question by unbiasedly assessing and analyzing the pertinent literature. Citation screening is a time-consuming and critical step in systematic reviews. Typically, reviewers must evaluate thousands of citations to identify articles eligible for a given review. We explore the application of machine learning techniques to semi-automate citation screening, thereby reducing the reviewers' workload.", "Objectives: Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screeing burden, as well as offer quality assurance. Methods: We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric +, indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests. Results: All tests of summary performance (mean F3) surpassed the corresponding baseline, P<0.0001. 
The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric+ features were associated with best performance; mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall. Conclusions: A computer-assisted, decision support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration.", "Background Citation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.", "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features) and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration.
To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.", "Highlights: Active learning is promising in the areas with complex topics in systematic reviews. Certainty criteria is promising to accelerate screening regardless of the topic. Certainty criteria performs as well as uncertainty criteria in classification. Weighting positive instances is promising to overcome the data imbalance. Unsupervised methods enhance the classification performance. In systematic reviews, the growing number of published studies imposes a significant screening workload on reviewers. Active learning is a promising approach to reduce the workload by automating some of the screening decisions, but it has been evaluated for a limited number of disciplines. The suitability of applying active learning to complex topics in disciplines such as social science has not been studied, and the selection of useful criteria and enhancements to address the data imbalance problem in systematic reviews remains an open problem. We applied active learning with two criteria (certainty and uncertainty) and several enhancements in both clinical medicine and social science (specifically, public health) areas, and compared the results in both.
The results show that the certainty criterion is useful for finding relevant documents, and weighting positive instances is promising to overcome the data imbalance problem in both data sets. Latent dirichlet allocation (LDA) is also shown to be promising when little manually-assigned information is available. Active learning is effective in complex topics, although its efficiency is limited due to the difficulties in text classification. The most promising criterion and weighting method are the same regardless of the review topic, and unsupervised techniques like LDA have a possibility to boost the performance of active learning without manual annotation." ] }
1611.01872
2951229220
Compared to simple actions, activities are much more complex, but semantically consistent with a human's real life. Techniques for action recognition from sensor-generated data are mature. However, there has been relatively little work on bridging the gap between actions and activities. To this end, this paper presents a novel approach for complex activity recognition comprising two components. The first component is temporal pattern mining, which provides a mid-level feature representation for activities, encodes temporal relatedness among actions, and captures the intrinsic properties of activities. The second component is adaptive Multi-Task Learning, which captures relatedness among activities and selects discriminative features. Extensive experiments on a real-world dataset demonstrate the effectiveness of our work.
Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when training samples are insufficient. The relations among tasks can be pairwise correlations @cite_9 , pairwise correlations within a group @cite_0 , or higher-order relationships @cite_7 . However, for activity recognition, encoding only task relatedness is not enough. Since not all features are discriminative for the prediction tasks, it is reasonable to assume that only a small set of features is predictive for specific tasks. In light of this, group Lasso @cite_12 is a technique for selecting grouped variables that are key to the prediction tasks. As an important extension of Lasso @cite_3 , group Lasso combines the feature strength over all tasks and tends to select features based on their overall strength. It ensures that all tasks share a common set of features, while each one keeps its own specific features @cite_20 .
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_0", "@cite_20", "@cite_12" ], "mid": [ "104574262", "1648933886", "2135046866", "2118099552", "2031250362", "2138019504" ], "abstract": [ "Multi-task learning is a way of bringing inductive transfer studied in human learning to the machine learning community. A central issue in multitask learning is to model the relationships between tasks appropriately and exploit them to aid the simultaneous learning of multiple tasks effectively. While some recent methods model and learn the task relationships from data automatically, only pairwise relationships can be represented by them. In this paper, we propose a new model, called Multi-Task High-Order relationship Learning (MTHOL), which extends in a novel way the use of pairwise task relationships to high-order task relationships. We first propose an alternative formulation of an existing multi-task learning method. Based on the new formulation, we propose a high-order generalization leading to a new prior for the model parameters of different tasks. We then propose a new probabilistic model for multi-task learning and validate it empirically on some benchmark datasets.", "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. 
For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.", "SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.", "Multi-task learning (MTL) learns multiple related tasks simultaneously to improve generalization performance. Alternating structure optimization (ASO) is a popular MTL method that learns a shared low-dimensional predictive structure on hypothesis spaces from multiple related tasks. It has been applied successfully in many real world applications. As an alternative MTL approach, clustered multi-task learning (CMTL) assumes that multiple tasks follow a clustered structure, i.e., tasks are partitioned into a set of groups where tasks in the same group are similar to each other, and that such a clustered structure is unknown a priori. 
The objectives in ASO and CMTL differ in how multiple tasks are related. Interestingly, we show in this paper the equivalence relationship between ASO and CMTL, providing significant new insights into ASO and CMTL as well as their inherent relationship. The CMTL formulation is non-convex, and we adopt a convex relaxation to the CMTL formulation. We further establish the equivalence relationship between the proposed convex relaxation of CMTL and an existing convex relaxation of ASO, and show that the proposed convex CMTL formulation is significantly more efficient especially for high-dimensional data. In addition, we present three algorithms for solving the convex CMTL formulation. We report experimental results on benchmark datasets to demonstrate the efficiency of the proposed algorithms.", "Alzheimer's Disease (AD), the most common type of dementia, is a severe neurodegenerative disorder. Identifying markers that can track the progress of the disease has recently received increasing attentions in AD research. A definitive diagnosis of AD requires autopsy confirmation, thus many clinical cognitive measures including Mini Mental State Examination (MMSE) and Alzheimer's Disease Assessment Scale cognitive subscale (ADAS-Cog) have been designed to evaluate the cognitive status of the patients and used as important criteria for clinical diagnosis of probable AD. In this paper, we propose a multi-task learning formulation for predicting the disease progression measured by the cognitive scores and selecting markers predictive of the progression. Specifically, we formulate the prediction problem as a multi-task regression problem by considering the prediction at each time point as a task. We capture the intrinsic relatedness among different tasks by a temporal group Lasso regularizer. 
The regularizer consists of two components including an L2,1-norm penalty on the regression weight vectors, which ensures that a small subset of features will be selected for the regression models at all time points, and a temporal smoothness term which ensures a small deviation between two regression models at successive time points. We have performed extensive evaluations using various types of data at the baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database for predicting the future MMSE and ADAS-Cog scores. Our experimental studies demonstrate the effectiveness of the proposed algorithm for capturing the progression trend and the cross-sectional group differences of AD severity. Results also show that most markers selected by the proposed algorithm are consistent with findings from existing cross-sectional studies.", "Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods." ] }
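The effect of the L2,1-norm penalty behind the multi-task group Lasso discussed above can be illustrated by its proximal operator: each feature's coefficient row (one weight per task) is soft-thresholded jointly, so a feature is either kept or discarded for all tasks at once. The following is a minimal numpy sketch of just this step under assumed names; a full multi-task solver would alternate it with gradient steps on the data-fitting loss.

```python
import numpy as np

def prox_l21(W, tau):
    """Row-wise soft-thresholding of W (features x tasks) at level tau:
    rows with small joint norm are zeroed, others are shrunk."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[0.2, -0.1],    # weak feature: zeroed for both tasks
              [1.5,  2.0]])   # strong feature: kept (shrunk) for both tasks
W_new = prox_l21(W, tau=0.5)
```

The first row has joint norm ~0.22 < tau and is eliminated for every task simultaneously, which is exactly the shared feature selection the text describes.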
1611.02147
2952480427
The typical goal of surface remeshing consists in finding a mesh that is (1) geometrically faithful to the original geometry, (2) as coarse as possible to obtain a low-complexity representation, and (3) free of bad elements that would hamper the desired application. In this paper, we design an algorithm to address all three optimization goals simultaneously. The user specifies desired bounds on the approximation error, the minimal interior angle, and the maximum mesh complexity N (number of vertices). Since such a desired mesh might not even exist, our optimization framework treats only the approximation error bound as a hard constraint and the other two criteria as optimization goals. More specifically, we iteratively perform carefully prioritized local operators whenever they do not violate the approximation error bound and improve the mesh. In this way our optimization framework greedily searches for the coarsest mesh whose minimal interior angle stays above the specified lower bound and whose approximation error remains within the specified bound. Fast runtime is enabled by a local approximation error estimation, while implicit feature preservation is obtained by specifically designed vertex relocation operators. Experiments show that our approach delivers high-quality meshes with implicitly preserved features and better balances between geometric fidelity, mesh complexity and element quality than the state-of-the-art.
The variety of applications leads to a large number of different remeshing techniques. We restrict the discussion to the most relevant aspects of our algorithm, i.e. high-quality remeshing, error-driven remeshing and feature-preserving remeshing. For a more complete discussion we refer the reader to the survey @cite_24 .
{ "cite_N": [ "@cite_24" ], "mid": [ "2134770338" ], "abstract": [ "We present a method for isotropic remeshing of arbitrary genus surfaces. The method is based on a mesh adaptation process, namely, a sequence of local modifications performed on a copy of the original mesh, while referring to the original mesh geometry. The algorithm has three stages. In the first stage the required number or vertices are generated by iterative simplification or refinement. The second stage performs an initial vertex partition using an area-based relaxation method. The third stage achieves precise isotropic vertex sampling prescribed by a given density function on the mesh. We use a modification of Lloyd's relaxation method to construct a weighted centroidal Voronoi tessellation of the mesh. We apply these iterations locally on small patches of the mesh that are parameterized into the 2D plane. This allows us to handle arbitrary complex meshes with any genus and any number of boundaries. The efficiency and the accuracy of the remeshing process is achieved using a patch-wise parameterization technique." ] }
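The greedy optimization loop sketched in this record's abstract (prioritized local operators applied only while the approximation-error bound, the sole hard constraint, still holds) can be illustrated abstractly. Here the mesh is reduced to a triple of metrics, error is counted in integer units purely to keep the toy deterministic, and the single `collapse` operator stands in for split/collapse/flip/relocate; none of this mirrors the paper's actual operators or data structures.

```python
from collections import namedtuple

# A mesh reduced to the three quantities the framework reasons about.
Mesh = namedtuple("Mesh", "error min_angle n_vertices")

def greedy_remesh(mesh, delta, operators, max_rounds=1000):
    """Greedily apply priority-ordered local operators; the error bound
    delta is the only hard constraint, angle and coarseness are goals."""
    for _ in range(max_rounds):
        improved = False
        for op in operators:                      # assumed priority-sorted
            cand = op(mesh)
            if cand is None or cand.error > delta:
                continue                          # would violate hard bound
            if cand.min_angle > mesh.min_angle or cand.n_vertices < mesh.n_vertices:
                mesh, improved = cand, True
        if not improved:
            break
    return mesh

def collapse(m):
    """Toy operator: coarsen by one vertex at the cost of one error unit."""
    return Mesh(m.error + 1, m.min_angle, m.n_vertices - 1) if m.n_vertices > 4 else None

final = greedy_remesh(Mesh(0, 30.0, 20), delta=5, operators=[collapse])
```

The loop coarsens the mesh step by step and stops exactly when one more collapse would exceed the error budget, which captures the "coarsest mesh under a hard error bound" behavior the abstract describes.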
1611.02147
2952480427
The typical goal of surface remeshing consists in finding a mesh that is (1) geometrically faithful to the original geometry, (2) as coarse as possible to obtain a low-complexity representation, and (3) free of bad elements that would hamper the desired application. In this paper, we design an algorithm to address all three optimization goals simultaneously. The user specifies desired bounds on the approximation error, the minimal interior angle, and the maximum mesh complexity N (number of vertices). Since such a desired mesh might not even exist, our optimization framework treats only the approximation error bound as a hard constraint and the other two criteria as optimization goals. More specifically, we iteratively perform carefully prioritized local operators whenever they do not violate the approximation error bound and improve the mesh. In this way our optimization framework greedily searches for the coarsest mesh whose minimal interior angle stays above the specified lower bound and whose approximation error remains within the specified bound. Fast runtime is enabled by a local approximation error estimation, while implicit feature preservation is obtained by specifically designed vertex relocation operators. Experiments show that our approach delivers high-quality meshes with implicitly preserved features and better balances between geometric fidelity, mesh complexity and element quality than the state-of-the-art.
High-quality remeshing is typically based on sampling and Centroidal Voronoi Tessellation (CVT) optimization @cite_16 . Early approaches apply 2D CVT in a parametric domain @cite_23 @cite_22 @cite_50 @cite_25 @cite_29 @cite_20 . Instead of CVT optimization, @cite_10 utilize a particle system approach in the parametric domain. In general, parametrization-based methods suffer from the additional distortion of the map and the need to stitch parameterized charts for high-genus surfaces. @cite_14 perform a discrete version of CVT directly on the input surface. However, the resulting mesh quality can be poor due to the inexact computation. @cite_39 @cite_34 @cite_54 avoid the parameterization by computing the 3D CVT restricted to the surface. Additionally, blue-noise remeshing techniques have been proposed based on adaptive maximal Poisson-disk sampling @cite_48 @cite_52 , farthest point optimization @cite_45 , and push-pull operations @cite_43 , which improve the element quality while introducing blue-noise properties. However, these approaches still suffer from common limitations, e.g., neither the geometric fidelity nor the minimal angle can be explicitly bounded. Moreover, sharp features must be specified in advance.
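At the core of the CVT-based methods above is Lloyd relaxation: every site moves to the centroid of its Voronoi cell until the sampling is evenly spread. The sketch below approximates the centroids by averaging dense random samples assigned to their nearest site, on a unit square standing in for a parametric chart; restricted-CVT remeshers instead compute exact Voronoi cells on (or restricted to) the surface, so this is only an illustration of the relaxation step itself.

```python
import numpy as np

def lloyd_step(sites, samples):
    """One Lloyd iteration: move each site to the centroid of the
    samples closest to it (a discrete Voronoi cell)."""
    dist = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
    owner = dist.argmin(axis=1)          # nearest site per sample
    new_sites = sites.copy()
    for i in range(len(sites)):
        cell = samples[owner == i]
        if len(cell):
            new_sites[i] = cell.mean(axis=0)
    return new_sites

rng = np.random.default_rng(0)
samples = rng.random((2000, 2))   # dense stand-in for the 2D domain
sites0 = rng.random((16, 2))      # random initial sampling
sites = sites0
for _ in range(50):
    sites = lloyd_step(sites, samples)
```

With a fixed seed the loop is deterministic; after relaxation the sites are noticeably more evenly spaced than the random initialization `sites0`, which is precisely the isotropy these remeshing methods exploit.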
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_48", "@cite_29", "@cite_54", "@cite_52", "@cite_34", "@cite_39", "@cite_43", "@cite_45", "@cite_50", "@cite_23", "@cite_16", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2129460996", "", "2043101875", "", "", "2087027299", "", "2114288655", "2561500936", "2051027418", "", "2142586112", "2051752778", "1973233195", "", "" ], "abstract": [ "In this paper, we propose a generic framework for 3D surface remeshing. Based on a metric-driven Discrete Voronoi Diagram construction, our output is an optimized 3D triangular mesh with a user-defined vertex budget. Our approach can deal with a wide range of applications, from high-quality mesh generation to shape approximation. By using appropriate metric constraints, the method generates isotropic or anisotropic elements. Based on point sampling, our algorithm combines the robustness and theoretical strength of Delaunay criteria with the efficiency of an entirely discrete geometry processing. Besides the general described framework, we show the experimental results using isotropic, quadric-enhanced isotropic, and anisotropic metrics, which prove the efficiency of our method on large meshes at a low computational cost.", "", "In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing.", "", "", "Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. 
In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality, and memory consumption. Highlights: (1) A simple algorithm for maximal Poisson-disk sampling and remeshing on triangular mesh surfaces. (2) Efficient local conflict checking without any 3D grid, using a subdivided mesh as the main structure to track empty regions. (3) A local clustering-based approach for mesh extraction, and a mesh optimization strategy for high-quality remeshing.", "", "We propose a new isotropic remeshing method, based on Centroidal Voronoi Tessellation (CVT). Constructing CVT requires repeatedly computing the Restricted Voronoi Diagram (RVD), defined as the intersection between a 3D Voronoi diagram and an input mesh surface. Existing methods use some approximations of RVD. In this paper, we introduce an efficient algorithm that computes RVD exactly and robustly. As a consequence, we achieve better remeshing quality than approximation-based approaches, without sacrificing efficiency. Our method for RVD computation uses a simple procedure and a kd-tree to quickly identify and compute the intersection of each triangle face with its incident Voronoi cells. Its time complexity is O(m log n), where n is the number of seed points and m is the number of triangles of the input mesh. Fast convergence of CVT is achieved using a quasi-Newton method, which proved much faster than Lloyd's iteration. Examples are presented to demonstrate the better quality of remeshing results with our method than with the state-of-the-art approaches.", "We describe a simple push-pull optimization (PPO) algorithm for blue-noise sampling by enforcing spatial constraints on given point sets. 
Constraints can be a minimum distance between samples, a maximum distance between an arbitrary point and the nearest sample, and a maximum deviation of a sample's capacity (area of Voronoi cell) from the mean capacity. All of these constraints are based on the topology emerging from Delaunay triangulation, and they can be combined for improved sampling quality and efficiency. In addition, our algorithm offers flexibility for trading off between different targets, such as noise and aliasing. We present several applications of the proposed algorithm, including anti-aliasing, stippling, and non-obtuse remeshing. Our experimental results illustrate the efficiency and the robustness of the proposed approach. Moreover, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches.", "In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches.", "", "This paper proposes a new method for isotropic remeshing of triangulated surface meshes. Given a triangulated surface mesh to be resampled and a user-specified density function defined over it, we first distribute the desired number of samples by generalizing error diffusion, commonly used in image halftoning, to work directly on mesh triangles and feature edges. 
We then use the resulting sampling as an initial configuration for building a weighted centroidal Voronoi tessellation in a conformal parameter space, where the specified density function is used for weighting. We finally create the mesh by lifting the corresponding constrained Delaunay triangulation from parameter space. A precise control over the sampling is obtained through a flexible design of the density function, the latter being possibly low-pass filtered to obtain a smoother gradation. We demonstrate the versatility of our approach through various remeshing examples.", "A centroidal Voronoi tessellation is a Voronoi tessellation whose generating points are the centroids (centers of mass) of the corresponding Voronoi regions. We give some applications of such tessellations to problems in image compression, quadrature, finite difference methods, distribution of resources, cellular biology, statistics, and the territorial behavior of animals. We discuss methods for computing these tessellations, provide some analyses concerning both the tessellations and the methods for their determination, and, finally, present the results of some numerical experiments.", "", "", "" ] }
1611.02147
2952480427
The typical goal of surface remeshing consists in finding a mesh that is (1) geometrically faithful to the original geometry, (2) as coarse as possible to obtain a low-complexity representation and (3) free of bad elements that would hamper the desired application. In this paper, we design an algorithm to address all three optimization goals simultaneously. The user specifies desired bounds on approximation error, minimal interior angle, and maximum mesh complexity N (number of vertices). Since such a desired mesh might not even exist, our optimization framework treats only the approximation error bound as a hard constraint and the other two criteria as optimization goals. More specifically, we iteratively perform carefully prioritized local operators whenever they improve the mesh without violating the approximation error bound. In this way our optimization framework greedily searches for the coarsest mesh with minimal interior angle above the specified bound and approximation error within the specified bound. Fast runtime is enabled by a local approximation error estimation, while implicit feature preservation is obtained by specifically designed vertex relocation operators. Experiments show that our approach delivers high-quality meshes with implicitly preserved features and a better balance between geometric fidelity, mesh complexity and element quality than the state-of-the-art.
amounts to generating a mesh that optimizes the tradeoff between geometric fidelity and mesh complexity. Cohen-Steiner et al. @cite_33 propose an error-driven clustering method to coarsen the input mesh. They formulate the approximation problem as a variational geometric partitioning problem, and iteratively optimize a set of planes using Lloyd's iteration @cite_12 to minimize a predefined approximation error.
{ "cite_N": [ "@cite_33", "@cite_12" ], "mid": [ "2137419625", "2150593711" ], "abstract": [ "A method for concise, faithful approximation of complex 3D datasets is key to reducing the computational cost of graphics applications. Despite numerous applications ranging from geometry compression to reverse engineering, efficiently capturing the geometry of a surface remains a tedious task. In this paper, we present both theoretical and practical contributions that result in a novel and versatile framework for geometric approximation of surfaces. We depart from the usual strategy by casting shape approximation as a variational geometric partitioning problem. Using the concept of geometric proxies, we drive the distortion error down through repeated clustering of faces into best-fitting regions. Our approach is entirely discrete and error-driven, and does not require parameterization or local estimations of differential quantities. We also introduce a new metric based on normal deviation, and demonstrate its superior behavior at capturing anisotropy.", "It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta become large. 
The optimum quantization schemes for 2^b quanta, b = 1, 2, ..., 7, are given numerically for Gaussian and for Laplacian distribution of signal amplitudes." ] }
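The Lloyd iteration cited in the related-work passage above alternates two steps: assign each sample to its nearest quantization level, then move each level to the centroid of its assigned samples. A minimal 1-D sketch of this idea follows; it is illustrative only, and the function names and toy data are assumptions rather than the cited papers' implementations.

```python
# Illustrative sketch of Lloyd's two-step iteration for a 1-D
# quantizer (the Lloyd-Max scheme): alternate nearest-level
# assignment and centroid (mean) updates until convergence.

def lloyd_1d(samples, levels, iterations=50):
    """Refine quantization levels by Lloyd's iteration."""
    levels = list(levels)
    for _ in range(iterations):
        # Step 1: assign each sample to its nearest quantization level.
        clusters = [[] for _ in levels]
        for x in samples:
            idx = min(range(len(levels)), key=lambda i: abs(x - levels[i]))
            clusters[idx].append(x)
        # Step 2: move each level to the centroid of its cluster
        # (keep a level unchanged if its cluster is empty).
        levels = [sum(c) / len(c) if c else l
                  for c, l in zip(clusters, levels)]
    return sorted(levels)

def quantization_error(samples, levels):
    """Sum of squared distances from each sample to its nearest level."""
    return sum(min((x - l) ** 2 for l in levels) for x in samples)
```

The same assign-then-recenter structure underlies CVT optimization, where 1-D levels become Voronoi seeds on a surface and cluster means become Voronoi-cell centroids.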
1611.01982
2555727979
OCR character segmentation for multilingual printed documents is difficult due to the diversity of different linguistic characters. Previous approaches mainly focus on monolingual texts and are not suitable for multilingual cases. In this work, we particularly tackle the mixed Chinese-English case by reframing it as a semantic segmentation problem. We take advantage of the successful architecture called fully convolutional networks (FCN) in the field of semantic segmentation. Given a wide enough receptive field, FCN can utilize the necessary context around a horizontal position to determine whether it is a splitting point or not. As a deep neural architecture, FCN can automatically learn useful features from raw text line images. Although trained on synthesized samples with simulated random disturbance, our FCN model generalizes well to real-world samples. The experimental results show that our model significantly outperforms the previous methods.
Deconvolutional Layers Deconvolution is also called up-convolution @cite_18 or fractionally-strided convolution @cite_12 . It is typically used for expanding the size of feature maps in the FCN architecture.
{ "cite_N": [ "@cite_18", "@cite_12" ], "mid": [ "2473464331", "2950689937" ], "abstract": [ "In this paper, we present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches such as those obtained from scanning pencil sketches. We convert the rough sketch into a simplified version which is then amenable for vectorization. This is all done in a fully automatic way without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, is able to process images of any dimensions and aspect ratio as input, and outputs a simplified sketch which has the same dimensions as the input image. In order to teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., vector images as input and long computation time; and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. 
We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results." ] }
1611.01982
2555727979
OCR character segmentation for multilingual printed documents is difficult due to the diversity of different linguistic characters. Previous approaches mainly focus on monolingual texts and are not suitable for multilingual cases. In this work, we particularly tackle the mixed Chinese-English case by reframing it as a semantic segmentation problem. We take advantage of the successful architecture called fully convolutional networks (FCN) in the field of semantic segmentation. Given a wide enough receptive field, FCN can utilize the necessary context around a horizontal position to determine whether it is a splitting point or not. As a deep neural architecture, FCN can automatically learn useful features from raw text line images. Although trained on synthesized samples with simulated random disturbance, our FCN model generalizes well to real-world samples. The experimental results show that our model significantly outperforms the previous methods.
Fully Convolutional Networks FCN is prevalent in the research of semantic segmentation and object detection. The key feature that distinguishes FCN from CNN is that the output size of FCN is easy to control via deconvolution. Therefore, FCN is also widely used in tasks where both input and output are images. For example, Simo-Serra et al. @cite_18 use FCN to simplify sketch drawings.
{ "cite_N": [ "@cite_18" ], "mid": [ "2473464331" ], "abstract": [ "In this paper, we present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches such as those obtained from scanning pencil sketches. We convert the rough sketch into a simplified version which is then amenable for vectorization. This is all done in a fully automatic way without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, is able to process images of any dimensions and aspect ratio as input, and outputs a simplified sketch which has the same dimensions as the input image. In order to teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., vector images as input and long computation time; and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images." ] }
1611.01578
2553303224
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
Modern neuro-evolution algorithms, e.g., @cite_11 @cite_9 @cite_6 , on the other hand, are much more flexible for composing novel models, yet they are usually less practical at a large scale. Their limitations lie in the fact that they are search-based methods: they tend to be slow or to require many heuristics to work well.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_11" ], "mid": [ "2076242896", "2119814172", "2125303539" ], "abstract": [ "An automatic programming system, THESYS, for constructing recursive LISP programs from examples of what they do is described. The construction methodology is illustrated as a series of transformations from the set of examples to a program satisfying the examples. The transformations consist of (1) deriving the specific computation associated with a specific example, (2) deriving control flow predicates, and (3) deriving an equivalent program specification in the form of recurrence relations. Equivalence between certain recurrence relations and various program schemata is proved. A detailed description of the construction of four programs is presented to illustrate the application of the methodology.", "Research in neuroevolution---that is, evolving artificial neural networks (ANNs) through evolutionary algorithms---is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. 
HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.", "Existing Recurrent Neural Networks (RNNs) are limited in their ability to model dynamical systems with nonlinearities and hidden internal states. Here we use our general framework for sequence learning, EVOlution of recurrent systems with LINear Outputs (Evolino), to discover good RNN hidden node weights through evolution, while using linear regression to compute an optimal linear mapping from hidden state to output. Using the Long Short-Term Memory RNN Architecture, Evolino outperforms previous state-of-the-art methods on several tasks: 1) context-sensitive languages, 2) multiple superimposed sine waves." ] }
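The search-based character of neuro-evolution noted in the related-work passage above can be illustrated with a toy (1+1) evolutionary loop: mutate a candidate, keep the mutation only if fitness improves. The fitness function here is a stand-in for an expensive network evaluation; all names and values are illustrative assumptions.

```python
# Toy (1+1) evolution strategy over a single scalar "architecture"
# parameter. Each step proposes a Gaussian mutation and greedily
# keeps it if it improves fitness -- many such evaluations are
# needed, which is why search-based methods scale poorly when each
# evaluation means training a network.
import random

def evolve(fitness, init, steps=300, sigma=0.5, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducibility
    best, best_f = init, fitness(init)
    for _ in range(steps):
        cand = best + rng.gauss(0.0, sigma)
        f = fitness(cand)
        if f > best_f:  # greedy acceptance: keep only improvements
            best, best_f = cand, f
    return best, best_f
```

For a quadratic fitness peaked at 3.0, a few hundred steps bring the candidate close to the optimum; a real neuro-evolution run replaces the scalar with an architecture encoding and the fitness call with full training and validation.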
1611.01642
2952838882
We propose a real-time pedestrian detection system for the embedded Nvidia Tegra X1 GPU-CPU hybrid platform. The pipeline is composed of the following state-of-the-art algorithms: Histogram of Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) features extracted from the input image; Pyramidal Sliding Window technique for candidate generation; and Support Vector Machine (SVM) for classification. Results show an 8x speedup on the target Tegra X1 platform and a better performance-per-watt ratio than the desktop CUDA platforms under study.
Since the appearance of GPGPU computational platforms, several object detection algorithms have been ported to the GPU. There are also examples of Field Programmable Gate Array (FPGA) designs obtaining outstanding results @cite_6 . In comparison, the reduced development costs of the CUDA programming environment and the affordable desktop CUDA-enabled GPU cards make them more suitable for testing new algorithms.
{ "cite_N": [ "@cite_6" ], "mid": [ "2074091263" ], "abstract": [ "This paper focuses on real-time pedestrian detection on Field Programmable Gate Arrays (FPGAs) using the Histograms of Oriented Gradients (HOG) descriptor in combination with a Support Vector Machine (SVM) for classification as a basic method. We propose to process image data at twice the pixel frequency and to normalize blocks with the L1-Sqrt-norm resulting in an efficient resource utilization. This implementation allows for parallel computation of different scales. Combined with a time-multiplex approach we increase multiscale capabilities beyond resource limitations. We are able to process 64 high resolution images (1920 × 1080 pixels) per second at 18 scales with a latency of less than 150 µs. 1.79 million HOG descriptors and their SVM classifications can be calculated per second and per scale, which outperforms current FPGA implementations by a factor of 4." ] }
1611.01642
2952838882
We propose a real-time pedestrian detection system for the embedded Nvidia Tegra X1 GPU-CPU hybrid platform. The pipeline is composed of the following state-of-the-art algorithms: Histogram of Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) features extracted from the input image; Pyramidal Sliding Window technique for candidate generation; and Support Vector Machine (SVM) for classification. Results show an 8x speedup on the target Tegra X1 platform and a better performance-per-watt ratio than the desktop CUDA platforms under study.
In previous work, the evaluations used desktop GPUs as a first step toward establishing the GPU as a suitable target platform. In this work, we propose a real-time pedestrian detector running on low-power GPU devices such as the Tegra X1 platform. We also present, for the first time, a GPU implementation of the HOGLBP-SVM detection pipeline @cite_9 .
{ "cite_N": [ "@cite_9" ], "mid": [ "2548197316" ], "abstract": [ "By combining Histograms of Oriented Gradients (HOG) and Local Binary Pattern (LBP) as the feature set, we propose a novel human detection approach capable of handling partial occlusion. Two kinds of detectors, i.e., global detector for whole scanning windows and part detectors for local regions, are learned from the training data using linear SVM. For each ambiguous scanning window, we construct an occlusion likelihood map by using the response of each block of the HOG feature to the global detector. The occlusion likelihood map is then segmented by Mean-shift approach. The segmented portion of the window with a majority of negative response is inferred as an occluded region. If partial occlusion is indicated with high likelihood in a certain scanning window, part detectors are applied on the unoccluded regions to achieve the final classification on the current scanning window. With the help of the augmented HOG-LBP feature and the global-part occlusion handling method, we achieve a detection rate of 91.3% with FPPW = 10^-6, 94.7% with FPPW = 10^-5, and 97.9% with FPPW = 10^-4 on the INRIA dataset, which, to our best knowledge, is the best human detection performance on the INRIA dataset. The global-part occlusion handling method is further validated using synthesized occlusion data constructed from the INRIA and Pascal dataset." ] }
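The pyramidal sliding-window candidate generation described in the pipeline above rescales the image repeatedly and slides a fixed-size detection window over each scale. The sketch below is an illustrative enumeration of candidate windows only; the window size, stride, and scale factor are arbitrary example values, not the paper's settings.

```python
# Hedged sketch of pyramidal sliding-window candidate generation:
# shrink the image by a constant scale factor until the fixed
# detection window no longer fits, and at each scale enumerate
# window origins on a regular stride grid.

def sliding_windows(width, height, win=(64, 128), stride=8, scale=1.2):
    """Yield (scale_factor, x, y) for every candidate window."""
    s = 1.0
    while width / s >= win[0] and height / s >= win[1]:
        w, h = int(width / s), int(height / s)  # scaled image size
        for y in range(0, h - win[1] + 1, stride):
            for x in range(0, w - win[0] + 1, stride):
                yield s, x, y
        s *= scale  # next, coarser pyramid level
```

Each yielded window would then be described by HOG/LBP features and scored by the SVM; note how the candidate count grows with image size, which is what makes GPU parallelization of this stage attractive.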
1611.01799
2557088662
In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density @math is approximated by a variational distribution @math that is easy to sample from. The training of VGAN takes a two step procedure: given @math , @math is updated to maximize the lower bound; @math is then updated one step with samples drawn from @math to decrease the lower bound. VGAN is inspired by the generative adversarial networks (GANs), where @math corresponds to the discriminator and @math corresponds to the generator, but with several notable differences. We hence name our model variational GANs (VGANs). VGAN provides a practical solution to training deep EBMs in high dimensional space, by eliminating the need of MCMC sampling. From this view, we are also able to identify causes to the difficulty of training GANs and propose viable solutions.
There has been a recent surge of work on improving GANs @cite_17 @cite_5 @cite_4 @cite_0 . @cite_17 proposes a set of techniques to stabilize GANs, including using batch normalization, dropping pooling layers, reducing the learning rate, and using strided convolutions, but there is little justification of the proposed designs. Our framework, however, directly addresses the two most important issues, the energy parametrization and the entropy approximation, and allows the freedom of using the most conventional designs such as pooling and ReLU. @cite_5 proposes several tricks to enhance stability. For example, the proposed batch discrimination is similar in nature to our energy design, but with a much higher complexity. @cite_0 @cite_4 are the two most directly related efforts that connect GANs with EBMs. However, to the best of our knowledge, our work is the first to identify the nature of the variational training of EBMs and to provide practical solutions in this view at the same time.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4", "@cite_17" ], "mid": [ "2416272112", "2432004435", "2521028896", "2173520492" ], "abstract": [ "Training energy-based probabilistic models is confronted with apparently intractable sums, whose Monte Carlo estimation requires sampling from the estimated probability distribution in the inner loop of training. This can be approximately achieved by Markov chain Monte Carlo methods, but may still face a formidable obstacle that is the difficulty of mixing between modes with sharp concentrations of probability. Whereas an MCMC process is usually derived from a given energy function based on mathematical considerations and requires an arbitrarily long time to obtain good and varied samples, we propose to train a deep directed generative model (not a Markov chain) so that its sampling distribution approximately matches the energy function that is being trained. Inspired by generative adversarial networks, the proposed framework involves training of two models that represent dual views of the estimated probability distribution: the energy function (mapping an input configuration to a scalar energy value) and the generator (mapping a noise vector to a generated configuration), both represented by deep neural networks.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. 
The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "We introduce the \"Energy-based Generative Adversarial Network\" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. 
Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations." ] }
1611.01714
2554354235
In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules, and combine pre-trained modules with untrained modules to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.
Transferring knowledge from a source domain to a target domain is an important challenge in machine learning research. Many shallow methods have been published, including those that learn feature-invariant representations or that approximate values without using an instance's label ( @cite_4 @cite_17 @cite_28 @cite_2 @cite_13 @cite_30 ). More recent deep transfer learning methods enable identification of variational factors in the data and align them to disparate domain distributions ( @cite_14 @cite_6 @cite_18 @cite_25 ). @cite_20 presents the Unsupervised and Transfer Learning Challenge and discusses the important advances that are needed for representation learning, and the importance of deep learning in transfer learning. @cite_3 applied these techniques to mid-level image representations using CNNs. Specifically, they showed that image representations learned in visual recognition tasks (ImageNet) can be transferred to other visual recognition tasks (Pascal VOC) efficiently. Further study of the transferability of features by @cite_5 showed the surprising results that features from distant tasks perform better than random features, and that difficulties arise when optimizing split networks between co-adapted neurons. We build on these results by leveraging existing representations to transfer to target domains without overwriting the pre-trained models through standard fine-tuning approaches.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_4", "@cite_28", "@cite_6", "@cite_3", "@cite_2", "@cite_5", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "", "1882958252", "1565327149", "", "2115403315", "2951670162", "2161381512", "2153929442", "", "2147520416", "2953226914", "2186489521", "" ], "abstract": [ "", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. 
We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "", "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. 
We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). 
The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.", "Let X denote the feature and Y the target. We consider domain adaptation under three possible scenarios: (1) the marginal PY changes, while the conditional PX Y stays the same (target shift), (2) the marginal PY is fixed, while the conditional PX Y changes with certain constraints (conditional shift), and (3) the marginal PY changes, and the conditional PX Y changes with constraints (generalized target shift). Using background knowledge, causal interpretations allow us to determine the correct situation for a problem at hand. We exploit importance reweighting or sample transformation to find the learning machine that works well on test data, and propose to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain. 
Thanks to kernel embedding of conditional as well as marginal distributions, the proposed approaches avoid distribution estimation, and are applicable for high-dimensional problems. Numerical evaluations on synthetic and real-world data sets demonstrate the effectiveness of the proposed framework.", "", "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source training domain) but only very limited training data for a second task (the target test domain) that is similar but not identical to the first. Previous work on transfer learning has focused on relatively restricted settings, where specific parts of the model are considered to be carried over between tasks. Recent work on covariate shift focuses on matching the marginal distributions on observations X across domains. Similarly, work on target conditional shift focuses on matching marginal distributions on labels Y and adjusting conditional distributions P(X|Y ), such that P(X) can be matched across domains. However, covariate shift assumes that the support of test P(X) is contained in the support of training P(X), i.e., the training set is richer than the test set. Target conditional shift makes a similar assumption for P(Y). Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Also little work has been done when all marginal and conditional distributions are allowed to change while the changes are smooth. In this paper, we consider a general case where both the support and the model change across domains. We transform both X and Y by a location-scale shift to achieve transfer between tasks. Since we allow more flexible transformations, the proposed method yields better results on both synthetic data and real-world data.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. 
Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "", "" ] }
1611.01714
2554354235
In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.
@cite_6 developed the Deep Adaptation Network (DAN) architecture for convolutional neural networks, which embeds the hidden representations of all task-specific layers in a reproducing kernel Hilbert space. This allows the mean embeddings of different domain distributions to be matched. Another feature of their work is that it scales linearly and provides statistical guarantees on transferable features. The Net2Net approach ( @cite_29 ) accelerates training of larger neural networks by allowing them to grow gradually, using function-preserving transformations to transfer information between neural networks. However, it does not guarantee that existing representational power will be preserved on a different task. @cite_30 consider domain adaptation where the transfer from the source to the target domain is modeled as a causal system. Under these assumptions, conditional transferable components are extracted which are invariant after location-scale transformations. @cite_12 proposed a new method that avoids the need for conditional components by comparing joint distributions across domains. Unlike our work, all of these require explicit assumptions or modifications to the pre-trained networks to facilitate adaptation.
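DAN's core ingredient, matching mean embeddings in an RKHS, reduces in practice to minimizing an empirical maximum mean discrepancy (MMD) between source and target activations. A minimal single-kernel estimator is sketched below; note that DAN itself uses an optimal multi-kernel combination, and the Gaussian bandwidth and sample sizes here are arbitrary choices for illustration.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased empirical MMD^2 between samples x and y under an RBF kernel."""
    def k(a, b):
        # Pairwise squared distances, then Gaussian kernel matrix.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(50, 2))
target = rng.normal(1.5, 1.0, size=(50, 2))   # shifted "target domain"

same = mmd2(source, source)    # ~0: identical samples
shift = mmd2(source, target)   # > 0: nonzero domain discrepancy
```

In a DAN-style setup this quantity would be added to the task loss for each adapted layer, so that gradient descent pulls the two domains' layer activations toward a common distribution.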
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_12", "@cite_6" ], "mid": [ "", "2178031510", "2408201877", "2951670162" ], "abstract": [ "", "We introduce techniques for rapidly transferring the information stored in one neural net into another neural net. The main purpose is to accelerate the training of a significantly larger neural net. During real-world workflows, one often trains very many different neural networks during the experimentation and design process. This is a wasteful process in which each new model is trained from scratch. Our Net2Net technique accelerates the experimentation process by instantaneously transferring the knowledge from a previous network to each new deeper or wider network. Our techniques are based on the concept of function-preserving transformations between neural network specifications. This differs from previous approaches to pre-training that altered the function represented by a neural net when adding layers to it. Using our knowledge transfer mechanism to add depth to Inception modules, we demonstrate a new state of the art accuracy rating on the ImageNet dataset.", "Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. 
Experiments testify that our model yields state of the art results on standard datasets.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks." ] }
1611.01714
2554354235
In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.
We note that while writing this paper, the progressive network architecture of @cite_19 was released, sharing a number of qualities with our work. Both the results we present here and the progressive networks allow neural networks to extend their knowledge without forgetting previous information. In addition, @cite_21 discusses a semi-modular approach that also froze the weights of the original network, although it did not focus on the small-data regime, where only a few tens of examples may be available. However, our modular approach detailed here focuses on leveraging small data to adapt to different domains. Our architecture also complements existing network building strategies, such as downloading pre-trained neural networks to then be fine-tuned for domain adaptation.
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2426267443", "2551440448" ], "abstract": [ "Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index.", "In the current study we investigate the ability of a Deep Neural Network (DNN) to reuse, in a new task, features previously acquired in other tasks. The architecture we realized, when learning the new task, will not destroy its ability in solving the previous tasks. Such architecture was obtained by training a series of DNNs on different tasks and then merging them to form a larger DNN by adding new neurons. The resulting DNN was trained on a new task, with only the connections relative to the new neurons allowed to change. The architecture performed very well, requiring few new parameters and a smaller dataset in order to be trained efficiently and, on the new task, outperforming several DNNs trained from scratch." ] }
1611.01474
2552197458
We prove a tight upper bound on the independence polynomial (and total number of independent sets) of cubic graphs of girth at least 5. The bound is achieved by unions of the Heawood graph, the point-line incidence graph of the Fano plane. We also give a tight lower bound on the total number of independent sets of triangle-free cubic graphs. This bound is achieved by unions of the Petersen graph. We conjecture that in fact all Moore graphs are extremal for the scaled number of independent sets in regular graphs of a given minimum girth, maximizing this quantity if their girth is even and minimizing if odd. The Heawood and Petersen graphs are instances of this conjecture, along with complete graphs, complete bipartite graphs, and cycles.
The method we use is an extension of the method used in @cite_9 to prove Theorem and the analogous theorem for random matchings in regular graphs. At a high level, the method works as follows. To bound the occupancy fraction of the hard-core model on a graph @math , we consider the experiment of drawing an independent set @math from the hard-core model, then independently choosing a vertex @math uniformly from the graph. We then record a local view of both @math and @math from the perspective of @math . The depth- @math local view from @math includes both the graph structure of the depth- @math neighborhood of @math and the boundary conditions the independent set @math induces on this neighborhood (that is, which vertices at the boundary are blocked from being in the independent set by some external vertex). In @cite_9 , the local view considered was the first neighborhood of @math , with each neighbor labeled according to whether or not it had an occupied neighbor among the second neighbors of @math .
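For intuition about the quantity being bounded: in the hard-core model at fugacity lam, an independent set I is drawn with probability proportional to lam^|I|, and the occupancy fraction is E[|I|]/n. On a tiny graph this can be computed exactly by brute-force enumeration. The sketch below (an illustration only, not the paper's linear-programming method) does this for the 5-cycle.

```python
from itertools import combinations

def independent_sets(n, edges):
    """Yield all independent sets of a graph on vertices 0..n-1."""
    E = {frozenset(e) for e in edges}
    for r in range(n + 1):
        for S in combinations(range(n), r):
            if all(frozenset(p) not in E for p in combinations(S, 2)):
                yield S

def occupancy_fraction(n, edges, lam):
    """E[|I|]/n when I is drawn from the hard-core model with fugacity lam."""
    Z = weighted_size = 0.0
    for S in independent_sets(n, edges):
        w = lam ** len(S)          # unnormalized probability of S
        Z += w
        weighted_size += len(S) * w
    return weighted_size / (n * Z)

# 5-cycle: independence polynomial 1 + 5*lam + 5*lam^2 (11 independent sets).
c5 = [(i, (i + 1) % 5) for i in range(5)]
alpha = occupancy_fraction(5, c5, lam=1.0)   # (5 + 10) / (5 * 11) = 3/11
```

The extremal results discussed here compare this quantity across all graphs in a class; the enumeration above is only feasible for small examples, which is what motivates the local-view linear programs.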
{ "cite_N": [ "@cite_9" ], "mid": [ "2963702702" ], "abstract": [ "We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of Kahn's result that a disjoint union of copies of K_{d,d} maximizes the number of independent sets of a bipartite d-regular graph, Galvin and Tetali's result that the independence polynomial is maximized by the same, and Zhao's extension of both results to all d-regular graphs. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of K_{d,d}. Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markstrom. In probabilistic language, our main theorems state that for all d-regular graphs and all λ, the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity λ are maximized by K_{d,d}. Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. Using a variant of the method we prove a lower bound on the occupancy fraction of the hard-core model on any d-regular, vertex-transitive, bipartite graph: the occupancy fraction of such a graph is strictly greater than the occupancy fraction of the unique translation-invariant hard-core measure on the infinite d-regular tree" ] }
1611.01474
2552197458
We prove a tight upper bound on the independence polynomial (and total number of independent sets) of cubic graphs of girth at least 5. The bound is achieved by unions of the Heawood graph, the point-line incidence graph of the Fano plane. We also give a tight lower bound on the total number of independent sets of triangle-free cubic graphs. This bound is achieved by unions of the Petersen graph. We conjecture that in fact all Moore graphs are extremal for the scaled number of independent sets in regular graphs of a given minimum girth, maximizing this quantity if their girth is even and minimizing if odd. The Heawood and Petersen graphs are instances of this conjecture, along with complete graphs, complete bipartite graphs, and cycles.
The art in applying the method is choosing the right local view and which consistency conditions to impose. Enriching the local view as we have done here adds power to the optimization program, but comes at the cost of increasing the complexity of the resulting linear program. As an example, compare the upper bound on the independence polynomial in @math -regular triangle-free graphs @cite_9 with the lower bound for @math -regular triangle-free graphs given by Theorem : the proof of the first is short and elementary, while the proof of the second requires (at least in this iteration) a large mass of calculations given in the appendix and in the ancillary files.
{ "cite_N": [ "@cite_9" ], "mid": [ "2963702702" ], "abstract": [ "We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of Kahn's result that a disjoint union of copies of K_{d,d} maximizes the number of independent sets of a bipartite d-regular graph, Galvin and Tetali's result that the independence polynomial is maximized by the same, and Zhao's extension of both results to all d-regular graphs. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of K_{d,d}. Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markstrom. In probabilistic language, our main theorems state that for all d-regular graphs and all λ, the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity λ are maximized by K_{d,d}. Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. Using a variant of the method we prove a lower bound on the occupancy fraction of the hard-core model on any d-regular, vertex-transitive, bipartite graph: the occupancy fraction of such a graph is strictly greater than the occupancy fraction of the unique translation-invariant hard-core measure on the infinite d-regular tree" ] }
1611.01474
2552197458
We prove a tight upper bound on the independence polynomial (and total number of independent sets) of cubic graphs of girth at least 5. The bound is achieved by unions of the Heawood graph, the point-line incidence graph of the Fano plane. We also give a tight lower bound on the total number of independent sets of triangle-free cubic graphs. This bound is achieved by unions of the Petersen graph. We conjecture that in fact all Moore graphs are extremal for the scaled number of independent sets in regular graphs of a given minimum girth, maximizing this quantity if their girth is even and minimizing if odd. The Heawood and Petersen graphs are instances of this conjecture, along with complete graphs, complete bipartite graphs, and cycles.
The method has to this point been used for upper and lower bounds on independent sets and matchings in regular graphs @cite_9 @cite_14 @cite_13 , as well as the Widom-Rowlinson model @cite_0 , another statistical physics model with hard constraints. In @cite_8 , the method is applied to models with soft constraints, namely the Ising and Potts models on regular graphs, which in the zero-temperature limit yield extremal bounds on the number of @math -colorings of cubic graphs. See the survey of Zhao @cite_2 for more on extremal problems for regular graphs.
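The zero-temperature Potts result mentioned above says that among d-regular graphs the number of proper q-colorings, normalized per vertex, is maximized by unions of K_{d,d}. For d = 3 this is easy to sanity-check by brute force on small cubic graphs (this is only an illustration of the statement, not part of the cited proof):

```python
from itertools import product

def count_colorings(n, edges, q):
    """Number of proper q-colorings of a graph on vertices 0..n-1."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(q), repeat=n)
    )

# K_{3,3}: every left vertex adjacent to every right vertex.
k33 = [(i, j + 3) for i in range(3) for j in range(3)]

# Petersen graph: outer 5-cycle, spokes, inner pentagram.
petersen = ([(i, (i + 1) % 5) for i in range(5)]
            + [(i, i + 5) for i in range(5)]
            + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])

c_k33 = count_colorings(6, k33, 3)        # 42 proper 3-colorings
c_pet = count_colorings(10, petersen, 3)  # 120 proper 3-colorings

# Per-vertex comparison: 42**(1/6) ≈ 1.86 exceeds 120**(1/10) ≈ 1.61,
# consistent with K_{3,3} being extremal among cubic graphs.
```

Of course, enumeration over q^n colorings only works for toy examples; the occupancy-fraction method is what turns such comparisons into theorems over the whole class of d-regular graphs.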
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_0", "@cite_2", "@cite_13" ], "mid": [ "", "2541525350", "2963702702", "2963760054", "2545735748", "" ], "abstract": [ "", "We prove tight upper and lower bounds on the internal energy per particle (expected number of monochromatic edges per vertex) in the anti-ferromagnetic Potts model on cubic graphs at every temperature and for all @math . This immediately implies corresponding tight bounds on the anti-ferromagnetic Potts partition function. Taking the zero-temperature limit gives new results in extremal combinatorics: the number of @math -colorings of a @math -regular graph, for any @math , is maximized by a union of @math 's. This proves the @math case of a conjecture of Galvin and Tetali.", "We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of Kahn's result that a disjoint union of copies of K_{d,d} maximizes the number of independent sets of a bipartite d-regular graph, Galvin and Tetali's result that the independence polynomial is maximized by the same, and Zhao's extension of both results to all d-regular graphs. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of K_{d,d}. Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markstrom. In probabilistic language, our main theorems state that for all d-regular graphs and all λ, the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity λ are maximized by K_{d,d}. Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. 
Using a variant of the method we prove a lower bound on the occupancy fraction of the hard-core model on any d-regular, vertex-transitive, bipartite graph: the occupancy fraction of such a graph is strictly greater than the occupancy fraction of the unique translation-invariant hard-core measure on the infinite d-regular tree", "We consider the Widom–Rowlinson model of two types of interacting particles on d-regular graphs. We prove a tight upper bound on the occupancy fraction, the expected fraction of vertices occupied by a particle under a random configuration from the model. The upper bound is achieved uniquely by unions of complete graphs on d + 1 vertices, K_{d+1}. As a corollary we find that K_{d+1} also maximizes the normalized partition function of the Widom–Rowlinson model over the class of d-regular graphs. A special case of this shows that the normalized number of homomorphisms from any d-regular graph G to the graph H_WR, a path on three vertices with a loop on each vertex, is maximized by K_{d+1}. This proves a conjecture of Galvin.", "This survey concerns regular graphs that are extremal with respect to the number of independent sets and, more generally, graph homomorphisms. More precisely, in the family of d-regular ...", "" ] }
1611.01484
2554757535
Recent progress in face detection (including keypoint detection), and recognition is mainly being driven by (i) deeper convolutional neural network architectures, and (ii) larger datasets. However, most of the large datasets are maintained by private companies and are not publicly available. The academic computer vision community needs larger and more varied datasets to make further progress. In this paper we introduce a new face dataset, called UMDFaces, which has 367,888 annotated faces of 8,277 subjects. We also introduce a new face recognition evaluation protocol which will help advance the state-of-the-art in this area. We discuss how a large dataset can be collected and annotated using human annotators and deep networks. We provide human curated bounding boxes for faces. We also provide estimated pose (roll, pitch and yaw), locations of twenty-one key-points and gender information generated by a pre-trained neural network. In addition, the quality of keypoint annotations has been verified by humans for about 115,000 images. Finally, we compare the quality of the dataset with other publicly available face datasets at similar scales.
The most popular datasets used for face detection are WIDER FACE @cite_21 , FDDB @cite_37 , and IJB-A @cite_0 . The WIDER FACE dataset contains annotations for 393,703 faces spread over 32,203 images. The annotations include a bounding box for the face, pose (typical/atypical), and occlusion level (partial/heavy). FDDB has been driving a lot of progress in face detection in recent years. It has annotations for 5,171 faces in 2,845 images. For each face in the dataset, FDDB provides the bounding ellipse. However, FDDB does not contain any other annotations like pose. The IJB-A dataset was introduced targeting both face detection and recognition. It contains 49,759 face annotations over 24,327 images. The dataset contains both still images and video frames. IJB-A also does not contain any pose or occlusion annotations.
{ "cite_N": [ "@cite_0", "@cite_37", "@cite_21" ], "mid": [ "1949778830", "182571476", "2176613063" ], "abstract": [ "Rapid progress in unconstrained face recognition has resulted in a saturation in recognition accuracy for current benchmark datasets. While important for early progress, a chief limitation in most benchmark datasets is the use of a commodity face detector to select face imagery. The implication of this strategy is restricted variations in face pose and other confounding factors. This paper introduces the IARPA Janus Benchmark A (IJB-A), a publicly available media in the wild dataset containing 500 subjects with manually localized face images. Key features of the IJB-A dataset are: (i) full pose variation, (ii) joint use for face recognition and face detection benchmarking, (iii) a mix of images and videos, (iv) wider geographic variation of subjects, (v) protocols supporting both open-set identification (1∶N search) and verification (1∶1 comparison), (vi) an optional protocol that allows modeling of gallery subjects, and (vii) ground truth eye and nose locations. The dataset has been developed using 1,501,267 million crowd sourced annotations. Baseline accuracies for both face detection and face recognition from commercial and open source algorithms demonstrate the challenge offered by this new unconstrained benchmark.", "Despite the maturity of face detection research, it remains difficult to compare different algorithms for face detection. This is partly due to the lack of common evaluation schemes. Also, existing data sets for evaluating face detection algorithms do not capture some aspects of face appearances that are manifested in real-world scenarios. In this work, we address both of these issues. We present a new data set of face images with more faces and more accurate annotations for face regions than in previous data sets. We also propose two rigorous and precise methods for evaluating the performance of face detection algorithms. 
We report results of several standard algorithms on the new benchmark.", "Face detection is one of the most studied topics in the computer vision community. Much of the progresses have been made by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that worth to be further investigated. Dataset can be downloaded at: mmlab.ie.cuhk.edu.hk projects WIDERFace" ] }
1611.01484
2554757535
Recent progress in face detection (including keypoint detection), and recognition is mainly being driven by (i) deeper convolutional neural network architectures, and (ii) larger datasets. However, most of the large datasets are maintained by private companies and are not publicly available. The academic computer vision community needs larger and more varied datasets to make further progress. In this paper we introduce a new face dataset, called UMDFaces, which has 367,888 annotated faces of 8,277 subjects. We also introduce a new face recognition evaluation protocol which will help advance the state-of-the-art in this area. We discuss how a large dataset can be collected and annotated using human annotators and deep networks. We provide human curated bounding boxes for faces. We also provide estimated pose (roll, pitch and yaw), locations of twenty-one key-points and gender information generated by a pre-trained neural network. In addition, the quality of keypoint annotations has been verified by humans for about 115,000 images. Finally, we compare the quality of the dataset with other publicly available face datasets at similar scales.
Our dataset is about 15 times larger than AFLW. We provide face box annotations which have been verified by humans. We also provide fine-grained pose annotations and keypoint location annotations generated using the all-in-one CNN @cite_20 method. The pose and keypoint annotations were not generated by human annotators; however, in section we analyze the accuracy of these annotations. This dataset can be used for building keypoint localization and head pose estimation models. We compare a model trained on our dataset with some recent models trained on AFLW in terms of keypoint localization accuracy in section .
{ "cite_N": [ "@cite_20" ], "mid": [ "2952198537" ], "abstract": [ "We present a multi-purpose algorithm for simultaneous face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and face recognition using a single deep convolutional neural network (CNN). The proposed method employs a multi-task learning framework that regularizes the shared parameters of CNN and builds a synergy among different domains and tasks. Extensive experiments show that the network has a better understanding of face and achieves state-of-the-art result for most of these tasks." ] }
1611.01484
2554757535
Recent progress in face detection (including keypoint detection), and recognition is mainly being driven by (i) deeper convolutional neural network architectures, and (ii) larger datasets. However, most of the large datasets are maintained by private companies and are not publicly available. The academic computer vision community needs larger and more varied datasets to make further progress. In this paper we introduce a new face dataset, called UMDFaces, which has 367,888 annotated faces of 8,277 subjects. We also introduce a new face recognition evaluation protocol which will help advance the state-of-the-art in this area. We discuss how a large dataset can be collected and annotated using human annotators and deep networks. We provide human curated bounding boxes for faces. We also provide estimated pose (roll, pitch and yaw), locations of twenty-one key-points and gender information generated by a pre-trained neural network. In addition, the quality of keypoint annotations has been verified by humans for about 115,000 images. Finally, we compare the quality of the dataset with other publicly available face datasets at similar scales.
Face recognition has received a great deal of attention for a long time now. Face recognition itself comprises two problems: face identification and face verification. With the advent of high-capacity deep convolutional networks, there is a need for larger and more varied datasets. The largest datasets targeted at recognition are the ones used by Google @cite_15 and Facebook @cite_10 . But these are not publicly available to researchers.
{ "cite_N": [ "@cite_15", "@cite_10" ], "mid": [ "2096733369", "2145287260" ], "abstract": [ "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance." ] }
1611.01484
2554757535
Recent progress in face detection (including keypoint detection), and recognition is mainly being driven by (i) deeper convolutional neural network architectures, and (ii) larger datasets. However, most of the large datasets are maintained by private companies and are not publicly available. The academic computer vision community needs larger and more varied datasets to make further progress. In this paper we introduce a new face dataset, called UMDFaces, which has 367,888 annotated faces of 8,277 subjects. We also introduce a new face recognition evaluation protocol which will help advance the state-of-the-art in this area. We discuss how a large dataset can be collected and annotated using human annotators and deep networks. We provide human curated bounding boxes for faces. We also provide estimated pose (roll, pitch and yaw), locations of twenty-one key-points and gender information generated by a pre-trained neural network. In addition, the quality of keypoint annotations has been verified by humans for about 115,000 images. Finally, we compare the quality of the dataset with other publicly available face datasets at similar scales.
The two datasets closest to our work are the CASIA WebFace @cite_33 and CelebFaces+ @cite_32 datasets. The CASIA WebFace dataset contains 494,414 images of 10,575 people. This dataset does not provide any bounding boxes for faces or any other annotations. CelebFaces+ contains 10,177 subjects and 202,599 images. CelebA @cite_40 added five landmark locations and forty binary attributes to the CelebFaces+ dataset.
{ "cite_N": [ "@cite_40", "@cite_32", "@cite_33" ], "mid": [ "1834627138", "1998808035", "1509966554" ], "abstract": [ "Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.", "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. 
Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45 verification accuracy on LFW is achieved with only weakly aligned faces.", "Pushing by big data and deep convolutional neural network (CNN), the performance of face recognition is becoming comparable to human. Using private large scale training datasets, several groups achieve very high performance on LFW, i.e., 97 to 99 . While there are many open source implementations of CNN, none of large scale face dataset is publicly available. The current situation in the field of face recognition is that data is more important than algorithm. To solve this problem, this paper proposes a semi-automatical way to collect face images from Internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images, called CASIAWebFace. Based on the database, we use a 11-layer CNN to learn discriminative representation and obtain state-of-theart accuracy on LFW and YTF. The publication of CASIAWebFace will attract more research groups entering this field and accelerate the development of face recognition in the wild." ] }
1611.01455
2549108983
GANs are generative models whose random samples realistically reflect natural images. They can also generate samples with specific attributes by concatenating a condition vector into the input, yet this area has not been well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance the log-likelihood of test data under the conditional distributions compared to methods based on concatenation.
The conditional GAN @cite_0 concatenates a condition vector into the input of the generator and the discriminator. Variants of this method were successfully applied in @cite_13 @cite_7 @cite_9 . @cite_13 obtained a visually-discriminative vector representation of text descriptions and then concatenated that vector into every layer of the discriminator and the noise vector of the generator. @cite_7 used a similar method to generate face images from binary attribute vectors such as hair styles, face shapes, etc. In @cite_9 , Structure-GAN generates surface normal maps, which are then concatenated into the noise vector of Style-GAN to put styles in those maps.
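The concatenation-based conditioning described above can be sketched in a few lines of Python. This is a minimal illustration, not any of the cited papers' actual implementations; the dimensions and class count are arbitrary assumptions:

```python
import random

def one_hot(label, num_classes):
    """Encode a class label as a one-hot condition vector y."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

def condition_by_concatenation(z, y):
    """Conditional-GAN style conditioning: the generator (and, analogously,
    the discriminator) simply receives the condition appended to its input."""
    return z + y  # list concatenation == vector concatenation

z = [random.gauss(0.0, 1.0) for _ in range(8)]  # noise vector
y = one_hot(3, 10)                              # condition, e.g. class "3"
g_input = condition_by_concatenation(z, y)      # 8 + 10 = 18-dimensional input
```

The discriminator is conditioned the same way, with y concatenated to the (flattened or intermediate) representation of the sample it is judging.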
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_13", "@cite_7" ], "mid": [ "2125389028", "2298992465", "2949999304", "2202109488" ], "abstract": [ "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. 
However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.", "We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic." ] }
1611.01455
2549108983
GANs are generative models whose random samples realistically reflect natural images. They can also generate samples with specific attributes by concatenating a condition vector into the input, yet this area has not been well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance the log-likelihood of test data under the conditional distributions compared to methods based on concatenation.
The spatial bilinear pooling was mainly inspired by studies on multimodal learning @cite_2 . The key question in multimodal learning is how a model can uncover the correlated interactions of two vectors from different domains. To achieve this, various methods (vector concatenation @cite_19 , element-wise operations @cite_10 , factorized restricted Boltzmann machines (RBMs) @cite_2 , bilinear pooling @cite_6 @cite_15 , etc.) were proposed for numerous challenging tasks. The RBM-based models require expensive MCMC sampling, which makes it difficult to scale them to large datasets. Bilinear pooling is more expressive than vector concatenation or element-wise operations, but it is inefficient due to its squared complexity @math . To solve this problem, @cite_1 addressed the space and time complexity of bilinear features using Tensor Sketch @cite_11 .
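The trade-off between these fusion operators can be made concrete with a toy sketch (illustrative only; the vectors and dimensions are arbitrary):

```python
def concat_fusion(x, y):
    """Vector concatenation: len(x) + len(y) dims, no cross terms."""
    return x + y

def elementwise_fusion(x, y):
    """Element-wise product: requires len(x) == len(y), still no cross terms."""
    assert len(x) == len(y)
    return [a * b for a, b in zip(x, y)]

def bilinear_pool(x, y):
    """Full bilinear pooling: the flattened outer product of x and y.
    It captures every pairwise interaction, but the output has
    len(x) * len(y) dimensions -- the squared complexity that compact
    bilinear pooling (via Tensor Sketch) is designed to avoid."""
    return [a * b for a in x for b in y]

x, y = [1.0, 2.0, 3.0], [4.0, 5.0]
fused = bilinear_pool(x, y)  # 3 * 2 = 6 dims, vs. 3 + 2 = 5 for concatenation
```

With realistic feature sizes (say d = 512 per modality) the bilinear output already has 512 * 512 ≈ 262k dimensions, which is why sketching-based approximations matter.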
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2261271299", "2963383024", "2190656909", "2184188583", "", "2171810632", "2146897752" ], "abstract": [ "Bilinear models has been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine grained recognition and face recognition. However, bilinear features are high dimensional, typically on the order of hundreds of thousands to a few million, which makes them impractical for subsequent analysis. We propose two compact bilinear representations with the same discriminative power as the full bilinear representation but with only a few thousand dimensions. Our compact representations allow back-propagation of classification errors enabling an end-to-end optimization of the visual recognition system. The compact bilinear representations are derived through a novel kernelized analysis of bilinear pooling which provide insights into the discriminative power of bilinear pooling, and a platform for further research in compact pooling methods. Experimentation illustrate the utility of the proposed representations for image classification and few-shot learning across several datasets.", "", "We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo and open-source code. .", "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. 
We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "Approximation of non-linear kernels using random feature mapping has been successfully employed in large-scale data analysis applications, accelerating the training of kernel machines. 
While previous random feature mappings run in O(ndD) time for @math training samples in d-dimensional space and D random feature maps, we propose a novel randomized tensor product technique, called Tensor Sketching, for approximating any polynomial kernel in O(n(d+D D )) time. Also, we introduce both absolute and relative error bounds for our approximation to guarantee the reliability of our estimation algorithm. Empirically, Tensor Sketching achieves higher accuracy and often runs orders of magnitude faster than the state-of-the-art approach for large-scale real-world datasets." ] }
1611.01455
2549108983
GANs are generative models whose random samples realistically reflect natural images. They can also generate samples with specific attributes by concatenating a condition vector into the input, yet this area has not been well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance the log-likelihood of test data under the conditional distributions compared to methods based on concatenation.
The information retrieving model uses the core algorithm of infoGAN @cite_20 , which recovers disentangled representations by maximizing the mutual information between the induced latent codes and the generated samples. In infoGAN, the input noise vector is decomposed into a source of incompressible noise and the latent code, and an auxiliary output in the discriminator retrieves the latent codes. infoGAN utilizes Variational Information Maximization @cite_14 to deal with the intractability of the mutual information. Unlike infoGAN, which randomly generates latent codes, we explicitly put condition information in the latent codes.
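The variational lower bound that infoGAN maximizes, E[log Q(c|x)] + H(c), is easy to state numerically for a discrete code. A hedged sketch, where `q_true_code` stands in for the discriminator's auxiliary head Q evaluated at the true code of each sample:

```python
import math

def entropy(p):
    """Shannon entropy H(c) of the discrete code distribution (in nats)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def info_lower_bound(code_prior, q_true_code):
    """Variational lower bound on I(c; G(z, c)):
    E_{c ~ p(c), x ~ G(z, c)}[log Q(c|x)] + H(c),
    where q_true_code[i] is Q's probability of the true code for sample i."""
    expected_log_q = sum(math.log(q) for q in q_true_code) / len(q_true_code)
    return expected_log_q + entropy(code_prior)

uniform = [0.1] * 10                                  # uniform code, H = log 10
bound = info_lower_bound(uniform, [0.9, 0.8, 0.95])   # confident Q -> bound near H(c)
```

When Q assigns probability 1 to every true code the bound equals H(c), i.e. the mutual information is maximal; less confident Q values pull the bound down.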
{ "cite_N": [ "@cite_14", "@cite_20" ], "mid": [ "115285041", "2434741482" ], "abstract": [ "The maximisation of information transmission over noisy channels is a common, albeit generally computationally difficult problem. We approach the difficulty of computing the mutual information for noisy channels by using a variational approximation. The resulting IM algorithm is analagous to the EM algorithm, yet maximises mutual information, as opposed to likelihood. We apply the method to several practical examples, including linear compression, population encoding and CDMA.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods." ] }
1611.01802
2549695727
Question answering (QA) has seen a resurgence over the past years, which has led to a multitude of QA systems being developed both by companies and research facilities. While a few components of QA systems get reused across implementations, most systems do not leverage the full potential of component reuse. Hence, the development of QA systems is currently still a tedious and time-consuming process. We address the challenge of accelerating the creation of novel or tailored QA systems by presenting a concept for a self-wiring approach to composing QA systems. Our approach will allow the reuse of existing, web-based QA systems or modules while developing new QA platforms. To this end, it will rely on QA modules being described using the Web Ontology Language. Based on these descriptions, our approach will be able to automatically compose QA systems using a data-driven approach.
One of the earliest works is openQA @cite_10 , a modular open-source framework for implementing QA systems. openQA's main workflow consists of four stages ( interpretation , retrieval , synthesis and rendering ) as well as adjacent modules ( context and service ), written as rigid Java interfaces. The authors claim that openQA enables a conciliation of different architectures and methods.
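openQA's staged workflow can be illustrated with a minimal pipeline of interchangeable components. This is an illustrative Python analogue of the rigid Java interfaces, not openQA's actual code; the stage implementations are hypothetical placeholders:

```python
class Stage:
    """Interface every pipeline module must implement (cf. openQA's Java interfaces)."""
    def process(self, data):
        raise NotImplementedError

class Interpretation(Stage):
    def process(self, data):
        data["query"] = data["question"].lower().rstrip("?")  # toy normalization
        return data

class Retrieval(Stage):
    def process(self, data):
        data["candidates"] = [data["query"]]  # a real module would query a KB
        return data

class Synthesis(Stage):
    def process(self, data):
        data["answer"] = data["candidates"][0]  # pick/merge candidate answers
        return data

class Rendering(Stage):
    def process(self, data):
        return data["answer"]  # final stage emits the user-facing answer

def run_pipeline(question, stages):
    data = {"question": question}
    for stage in stages:
        data = stage.process(data)
    return data

answer = run_pipeline("What is RDF?",
                      [Interpretation(), Retrieval(), Synthesis(), Rendering()])
```

Because every module obeys the same interface, swapping in a different retrieval or synthesis implementation does not touch the rest of the pipeline, which is the point of the rigid interfaces.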
{ "cite_N": [ "@cite_10" ], "mid": [ "1979710543" ], "abstract": [ "Billions of facts pertaining to a multitude of domains are now available on the Web as RDF data. However, accessing this data is still a difficult endeavour for non-expert users. In order to meliorate the access to this data, approaches imposing minimal hurdles to their users are required. Although many question answering systems over Linked Data have being proposed, retrieving the desired data is still significantly challenging. In addition, developing and evaluating question answering systems remains a very complex task. To overcome these obstacles, we present a modular and extensible open-source question answering framework. We demonstrate how the framework can be used by integrating two state-of-the-art question answering systems. As a result our evaluation shows that overall better results can be achieved by the use of combination rather than individual stand-alone versions." ] }
1611.01802
2549695727
Question answering (QA) has seen a resurgence over the past years, which has led to a multitude of QA systems being developed both by companies and research facilities. While a few components of QA systems get reused across implementations, most systems do not leverage the full potential of component reuse. Hence, the development of QA systems is currently still a tedious and time-consuming process. We address the challenge of accelerating the creation of novel or tailored QA systems by presenting a concept for a self-wiring approach to composing QA systems. Our approach will allow the reuse of existing, web-based QA systems or modules while developing new QA platforms. To this end, it will rely on QA modules being described using the Web Ontology Language. Based on these descriptions, our approach will be able to automatically compose QA systems using a data-driven approach.
QALL-ME @cite_2 is another open-source approach using an architecture skeleton for multilingual QA together with a domain-specific as well as a domain-independent ontology. The underlying SOA architecture features several web services, which are composed into a QA system in a predetermined way.
{ "cite_N": [ "@cite_2" ], "mid": [ "2109956628" ], "abstract": [ "This paper presents the QALL-ME Framework, a reusable architecture for building multi- and cross-lingual Question Answering (QA) systems working on structured data modelled by an ontology. It is released as free open source software with a set of demo components and extensive documentation, which makes it easy to use and adapt. The main characteristics of the QALL-ME Framework are: (i) its domain portability, achieved by an ontology modelling the target domain; (ii) the context awareness regarding space and time of the question; (iii) the use of textual entailment engines as the core of the question interpretation; and (iv) an architecture based on Service Oriented Architecture (SOA), which is realized using interchangeable web services for the framework components. Furthermore, we present a running example to clarify how the framework processes questions as well as a case study that shows a QA application built as an instantiation of the QALL-ME Framework for cinema movie events in the tourism domain." ] }
1611.01802
2549695727
Question answering (QA) has seen a resurgence over the past years, which has led to a multitude of QA systems being developed both by companies and research facilities. While a few components of QA systems get reused across implementations, most systems do not leverage the full potential of component reuse. Hence, the development of QA systems is currently still a tedious and time-consuming process. We address the challenge of accelerating the creation of novel or tailored QA systems by presenting a concept for a self-wiring approach to composing QA systems. Our approach will allow the reuse of existing, web-based QA systems or modules while developing new QA platforms. To this end, it will rely on QA modules being described using the Web Ontology Language. Based on these descriptions, our approach will be able to automatically compose QA systems using a data-driven approach.
The open-source system OAQA @cite_7 aims at advancing the engineering of QA systems by following architectural commitments that make components interchangeable. Using these shared interchangeable components, OAQA is able to search for the most efficient combination of modules for the task at hand.
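The configuration space exploration behind OAQA amounts to a search over the cross product of component choices. A brute-force sketch (the component options and scoring function here are hypothetical stand-ins, not OAQA's API):

```python
from itertools import product

def explore_configurations(component_options, evaluate):
    """Enumerate every configuration in the cross product of component
    choices and return the one that scores best on the task at hand."""
    best_config, best_score = None, float("-inf")
    for combo in product(*component_options.values()):
        config = dict(zip(component_options.keys(), combo))
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# hypothetical component choices and a toy scoring function
options = {
    "parser": ["rule-based", "neural"],
    "retriever": ["bm25", "dense"],
}
best, score = explore_configurations(
    options,
    evaluate=lambda c: (c["parser"] == "neural") + (c["retriever"] == "dense"),
)
# best == {"parser": "neural", "retriever": "dense"}, score == 2
```

Real configuration spaces are far too large for exhaustive enumeration (OAQA's case studies report over a trillion combinations), so the framework evaluates configurations in a distributed fashion rather than naively as above.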
{ "cite_N": [ "@cite_7" ], "mid": [ "2013571149" ], "abstract": [ "Software frameworks which support integration and scaling of text analysis algorithms make it possible to build complex, high performance information systems for information extraction, information retrieval, and question answering; IBM's Watson is a prominent example. As the complexity and scaling of information systems become ever greater, it is much more challenging to effectively and efficiently determine which toolkits, algorithms, knowledge bases or other resources should be integrated into an information system in order to achieve a desired or optimal level of performance on a given task. This paper presents a formal representation of the space of possible system configurations, given a set of information processing components and their parameters (configuration space) and discusses algorithmic approaches to determine the optimal configuration within a given configuration space (configuration space exploration or CSE). We introduce the CSE framework, an extension to the UIMA framework which provides a general distributed solution for building and exploring configuration spaces for information systems. The CSE framework was used to implement biomedical information systems in case studies involving over a trillion different configuration combinations of components and parameter values operating on question answering tasks from the TREC Genomics. The framework automatically and efficiently evaluated different system configurations, and identified configurations that achieved better results than prior published results." ] }
1611.01802
2549695727
Question answering (QA) has been the subject of a resurgence over the past years. The said resurgence has led to a multitude of question answering (QA) systems being developed both by companies and research facilities. While a few components of QA systems get reused across implementations, most systems do not leverage the full potential of component reuse. Hence, the development of QA systems is currently still a tedious and time-consuming process. We address the challenge of accelerating the creation of novel or tailored QA systems by presenting a concept for a self-wiring approach to composing QA systems. Our approach will allow the reuse of existing, web-based QA systems or modules while developing new QA platforms. To this end, it will rely on QA modules being described using the Web Ontology Language. Based on these descriptions, our approach will be able to automatically compose QA systems using a data-driven approach automatically.
QANUS @cite_12 is an open-source QA framework for the rapid development of novel QA systems as well as a baseline system for benchmarking. It was designed with interchangeable components in a pre-seeded system and comes with a set of common modules, such as named entity recognition and part-of-speech tagging.
{ "cite_N": [ "@cite_12" ], "mid": [ "1549951060" ], "abstract": [ "In this paper, we motivate the need for a publicly available, generic software framework for question-answering (QA) systems. We present an open-source QA framework QANUS which researchers can leverage on to build new QA systems easily and rapidly. The framework implements much of the code that will otherwise have been repeated across different QA systems. To demonstrate the utility and practicality of the framework, we further present a fully functioning factoid QA system QA-SYS built on top of QANUS." ] }
1611.01802
2549695727
Question answering (QA) has been the subject of a resurgence over the past years. The said resurgence has led to a multitude of question answering (QA) systems being developed both by companies and research facilities. While a few components of QA systems get reused across implementations, most systems do not leverage the full potential of component reuse. Hence, the development of QA systems is currently still a tedious and time-consuming process. We address the challenge of accelerating the creation of novel or tailored QA systems by presenting a concept for a self-wiring approach to composing QA systems. Our approach will allow the reuse of existing, web-based QA systems or modules while developing new QA platforms. To this end, it will rely on QA modules being described using the Web Ontology Language. Based on these descriptions, our approach will be able to automatically compose QA systems using a data-driven approach automatically.
@cite_8 described a first semantic approach to coupling components via RDF, tailoring search pipelines from semantic, geospatial, and full-text search modules. Here, modules incrementally add semantic information to a query until the search intent can be resolved.
{ "cite_N": [ "@cite_8" ], "mid": [ "1967058221" ], "abstract": [ "Over the last decade, a growing importance of search engines could be observed. An increasing amount of knowledge is exposed and connected within the Linked Open Data Cloud, which raises users' expectations to be able to search for any information that is directly or indirectly contained. However, diverse data types require tailored search functionalities---such as semantic, geospatial and full text search. Hence, using only one data management system will not provide the required functionality at the expected level. In this paper, we will describe search services that provide specific search functionality via a generalized interface inspired by RDF. In addition, we introduce an application layer on top of these services that enables to query them in a unified way. This allows for the implementation of a distributed search that leverages the identification of the optimal search service for each query and subquery. This is achieved by connecting powerful tools like Openlink Virtuoso, ElasticSearch and PostGIS within a single framework." ] }
1611.01802
2549695727
Question answering (QA) has been the subject of a resurgence over the past years. The said resurgence has led to a multitude of question answering (QA) systems being developed both by companies and research facilities. While a few components of QA systems get reused across implementations, most systems do not leverage the full potential of component reuse. Hence, the development of QA systems is currently still a tedious and time-consuming process. We address the challenge of accelerating the creation of novel or tailored QA systems by presenting a concept for a self-wiring approach to composing QA systems. Our approach will allow the reuse of existing, web-based QA systems or modules while developing new QA platforms. To this end, it will rely on QA modules being described using the Web Ontology Language. Based on these descriptions, our approach will be able to automatically compose QA systems using a data-driven approach automatically.
QANARY @cite_11 is the first real implementation of a semantic approach to generating QA systems from components. Using the QA ontology provided by QANARY, modules (e.g., various versions of NER tools) can be exchanged to benchmark different pipelines and choose the best-performing one.
{ "cite_N": [ "@cite_11" ], "mid": [ "2584015706" ], "abstract": [ "Question answering (QA) systems focus on making sense out of data via an easy-to-use interface. However, these systems are very complex and integrate a lot of technology tightly. Previously presented QA systems are mostly singular and monolithic implementations. Hence, their reusability is limited. In contrast, we follow the research agenda of establishing an ecosystem for components of QA systems, which will enable the QA community to elevate the reusability of such components and to intensify their research activities." ] }
1611.01802
2549695727
Question answering (QA) has been the subject of a resurgence over the past years. The said resurgence has led to a multitude of question answering (QA) systems being developed both by companies and research facilities. While a few components of QA systems get reused across implementations, most systems do not leverage the full potential of component reuse. Hence, the development of QA systems is currently still a tedious and time-consuming process. We address the challenge of accelerating the creation of novel or tailored QA systems by presenting a concept for a self-wiring approach to composing QA systems. Our approach will allow the reuse of existing, web-based QA systems or modules while developing new QA platforms. To this end, it will rely on QA modules being described using the Web Ontology Language. Based on these descriptions, our approach will be able to automatically compose QA systems using a data-driven approach automatically.
The feasibility of our approach is supported by work from other domains. For example, @cite_3 developed RestDesc, which allows for the automatic composition of hypermedia APIs driven by RDF and reasoning over N3. However, we are not aware of any QA web modules that are either hypermedia APIs or described in a way directly usable in the proposed framework.
{ "cite_N": [ "@cite_3" ], "mid": [ "2274024995" ], "abstract": [ "Machine clients are increasingly making use of the Web to perform tasks. While Web services traditionally mimic remote procedure calling interfaces, a new generation of so-called hypermedia APIs works through hyperlinks and forms, in a way similar to how people browse the Web. This means that existing composition techniques, which determine a procedural plan upfront, are not sufficient to consume hypermedia APIs, which need to be navigated at runtime. Clients instead need a more dynamic plan that allows them to follow hyperlinks and use forms with a preset goal. Therefore, in this paper, we show how compositions of hypermedia APIs can be created by generic Semantic Web reasoners. This is achieved through the generation of a proof based on semantic descriptions of the APIs' functionality. To pragmatically verify the applicability of compositions, we introduce the notion of pre-execution and post-execution proofs. The runtime interaction between a client and a server is guided by proofs but driven by hypermedia, allowing the client to react to the application's actual state indicated by the server's response. We describe how to generate compositions from descriptions, discuss a computer-assisted process to generate descriptions, and verify reasoner performance on various composition tasks using a benchmark suite. The experimental results lead to the conclusion that proof-based consumption of hypermedia APIs is a feasible strategy at Web scale." ] }
1611.01576
2952436057
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by @cite_1 . While the motivation and constraints described in that work are different, @cite_1 's concepts of "learnware" and "firmware" parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the constraint of "strong typing", all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not "strongly typed". In particular, a T-RNN differs from a QRNN as described in this paper with filter size 1 and f-pooling only in the absence of an activation function on @math . Similarly, T-GRUs and T-LSTMs differ from QRNNs with filter size 2 and fo- or ifo-pooling respectively in that they lack @math on @math and use @math rather than sigmoid on @math .
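The recurrent pooling subcomponent this comparison turns on can be made concrete with a short sketch. Below is a minimal NumPy illustration of f-pooling, h_t = f_t * h_{t-1} + (1 - f_t) * z_t, where the candidate vectors Z and forget gates F are assumed to have been produced in parallel by the convolutional layers (the function and array names here are ours, not from the paper):

```python
import numpy as np

def f_pooling(Z, F):
    """Sequential f-pooling: h_t = f_t * h_{t-1} + (1 - f_t) * z_t.

    Z, F: arrays of shape (timesteps, channels). Z holds candidate
    vectors, F holds forget-gate activations in [0, 1]. The recurrence
    is sequential over timesteps but elementwise (parallel) across
    channels, which is the source of the QRNN's parallelism.
    """
    T, C = Z.shape
    h = np.zeros(C)          # initial state h_0 = 0
    out = np.empty_like(Z)
    for t in range(T):
        h = F[t] * h + (1.0 - F[t]) * Z[t]
        out[t] = h
    return out
```

With F = 0 the state simply copies the current input; with F = 1 the initial state is carried forward unchanged, mirroring the gating behavior of an LSTM forget gate without any trainable recurrent weights.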
{ "cite_N": [ "@cite_1" ], "mid": [ "2212703438" ], "abstract": [ "Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning." ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
Ensuring high-quality crowd work is crucial for the sustainability of a microtask platform. Mechanisms such as voting by peer workers @cite_28 , establishing agreement between workers @cite_72 , and machine learning models trained on low-level behaviors @cite_65 have been used to gauge and enhance the quality of crowd work. In addition to these techniques, task-specific feedback helps crowd workers augment their behavior and improve their performance @cite_76 .
{ "cite_N": [ "@cite_28", "@cite_76", "@cite_72", "@cite_65" ], "mid": [ "2143539737", "2168765606", "2125943921", "2124994029" ], "abstract": [ "Manual evaluation of translation quality is generally thought to be excessively time consuming and expensive. We explore a fast and inexpensive way of doing it using Amazon's Mechanical Turk to pay small sums to a large number of non-expert annotators. For $10 we redundantly recreate judgments from a WMT08 translation task. We find that when combined non-expert judgments have a high-level of agreement with the existing gold-standard judgments of machine translation quality, and correlate more strongly with expert judgments than Bleu does. We go on to show that Mechanical Turk can be used to calculate human-mediated translation edit rate (HTER), to conduct reading comprehension experiments with machine translation, and to create high quality reference translations.", "Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. 
We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.", "This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction. With the outsourcing of small tasks becoming easier, for example via Rent-A-Coder or Amazon's Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. (ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a robust technique that combines different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality cost regimes, the benefit is substantial.", "Detecting and correcting low quality submissions in crowdsourcing tasks is an important challenge. Prior work has primarily focused on worker outcomes or reputation, using approaches such as agreement across workers or with a gold standard to evaluate quality. 
We propose an alternative and complementary technique that focuses on the way workers work rather than the products they produce. Our technique captures behavioral traces from online crowd workers and uses them to predict outcome measures such quality, errors, and the likelihood of cheating. We evaluate the effectiveness of the approach across three contexts including classification, generation, and comprehension tasks. The results indicate that we can build predictive models of task performance based on behavioral traces alone, and that these models generalize to related tasks. Finally, we discuss limitations and extensions of the approach." ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
Crowdsourcing platforms have collected feedback on work from requesters, from peer workers, through self-assessment rubrics, and with the help of expert evaluators. The Shepherd system @cite_76 allows workers to exchange synchronous feedback with requesters. Crowd guilds scale up this notion of distributed feedback @cite_38 to make peer assessment and collective reputation management a core feature of a crowdsourcing platform.
{ "cite_N": [ "@cite_38", "@cite_76" ], "mid": [ "2950213544", "2168765606" ], "abstract": [ "Young people worldwide are participating in ever-increasing numbers in online fan communities. Far from mere shallow repositories of pop culture, these sites are accumulating significant evidence that sophisticated informal learning is taking place online in novel and unexpected ways. In order to understand and analyze in more detail how learning might be occurring, we conducted an in-depth nine-month ethnographic investigation of online fanfiction communities, including participant observation and fanfiction author interviews. Our observations led to the development of a theory we term distributed mentoring, which we present in detail in this paper. Distributed mentoring exemplifies one instance of how networked technology affords new extensions of behaviors that were previously bounded by time and space. Distributed mentoring holds potential for application beyond the spontaneous mentoring observed in this investigation and may help students receive diverse, thoughtful feedback in formal learning environments as well.", "Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. 
Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work." ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
Self-assessment is another route to help workers reflect, learn skills, and more clearly draw connections between learning goals and evaluation criteria @cite_73 . However, workers in self-assessment systems become dependent on rubrics or use specialized domain language, which tends to be difficult for novices to understand @cite_11 . Automated feedback @cite_7 also enhances workers' performance. However, such systems are generally used to enhance the capabilities of specialized platforms; for example, Duolingo and LevelUp integrate interactive tutorials to enhance the quality of work @cite_61 , which requires significant customization for a given domain and has not been demonstrated on general work platforms.
{ "cite_N": [ "@cite_61", "@cite_73", "@cite_7", "@cite_11" ], "mid": [ "2014100104", "1487711117", "2277895402", "2132028633" ], "abstract": [ "Crowdsourcing complex creative tasks remains difficult, in part because these tasks require skilled workers. Most crowdsourcing platforms do not help workers acquire the skills necessary to accomplish complex creative tasks. In this paper, we describe a platform that combines learning and crowdsourcing to benefit both the workers and the requesters. Workers gain new skills through interactive step-by-step tutorials and test their knowledge by improving real-world images submitted by requesters. In a series of three deployments spanning two years, we varied the design of our platform to enhance the learning experience and improve the quality of the crowd work. We tested our approach in the context of LevelUp for Photoshop, which teaches people how to do basic photograph improvement tasks using Adobe Photoshop. We found that by using our system workers gained new skills and produced high-quality edits for requested images, even if they had little prior experience editing images.", "Part 1 Self-assessment, learning and assessment: what is learner self-assessment? self-assessment and ideas about learning self-assessment and ideas about assessment what is the scope of self-assessment? Part 2 Examples of practice: self and peer marki", "Crowdsourced workflows are used in research and industry to solve a variety of tasks. The databases community has used crowd workers in query operators optimization and for tasks such as entity resolution. Such research utilizes microtasks where crowd workers are asked to answer simple yes no or multiple choice questions with little training. Typically, microtasks are used with voting algorithms to combine redundant responses from multiple crowd workers to achieve result quality. 
Microtasks are powerful, but fail in cases where larger context (e.g., domain knowledge) or significant time investment is needed to solve a problem, for example in large-document structured data extraction. In this paper, we consider context-heavy data processing tasks that may require many hours of work, and refer to such tasks as macrotasks. Leveraging the infrastructure and worker pools of existing crowdsourcing platforms, we automate macrotask scheduling, evaluation, and pay scales. A key challenge in macrotask-powered work, however, is evaluating the quality of a worker's output, since ground truth is seldom available and redundancy-based quality control schemes are impractical. We present Argonaut, a framework that improves macrotask powered work quality using a hierarchical review. Argonaut uses a predictive model of worker quality to select trusted workers to perform review, and a separate predictive model of task quality to decide which tasks to review. Finally, Argonaut can identify the ideal trade-off between a single phase of review and multiple phases of review given a constrained review budget in order to maximize overall output quality. We evaluate an industrial use of Argonaut to power a structured data extraction pipeline that has utilized over half a million hours of crowd worker input to complete millions of macrotasks. We show that Argonaut can capture up to 118 more errors than random spot-check reviews in review budget-constrained environments with up to two review layers.", "Assessment practices in higher education institutions tend not to equip students well for the processes of effective learning in a learning society. The purposes of assessment should be extended to include the preparation of students for sustainable assessment. Sustainable assessment encompasses the abilities required to undertake those activities that necessarily accompany learning throughout life in formal and informal settings. 
Characteristics of effective formative assessment identified by recent research are used to illustrate features of sustainable assessment. Assessment acts need both to meet the specific and immediate goals of a course as well as establishing a basis for students to undertake their own assessment activities in the future. To draw attention to the importance of this, the idea that assessment always has to do double duty is introduced." ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
If crowd workers can effectively assess each other, they could bootstrap their own work reputation signals @cite_58 @cite_71 . Worker peer review can scale more readily than external assessment, and leverages a community of practice built up amongst workers with the same expertise @cite_76 . It can also be comparably accurate: peer assessment matches expert assessment in massive open online classes @cite_32 .
{ "cite_N": [ "@cite_76", "@cite_58", "@cite_32", "@cite_71" ], "mid": [ "2168765606", "2131723483", "2088700588", "" ], "abstract": [ "Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.", "In modern crowdsourcing markets, requesters face the challenge of training and managing large transient workforces. Requesters can hire peer workers to review others' work, but the value may be marginal, especially if the reviewers lack requisite knowledge. Our research explores if and how workers learn and improve their performance in a task domain by serving as peer reviewers. Further, we investigate whether peer reviewing may be more effective in teams where the reviewers can reach consensus through discussion. 
An online between-subjects experiment compares the trade-offs of reviewing versus producing work using three different organization strategies: working individually, working as an interactive team, and aggregating individuals into nominal groups. The results show that workers who review others' work perform better on subsequent tasks than workers who just produce. We also find that interactive reviewer teams outperform individual reviewers on all quality measures. However, aggregating individual reviewers into nominal groups produces better quality assessments than interactive teams, except in task domains where discussion helps overcome individual misconceptions.", "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9p of students’ grades within 5p of the staff grade, and 65.5p within 10p. On average, students assessed their work 7p higher than staff did. Students also rated peers’ work from their own country 3.6p higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4p to 9.9p.", "" ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
To assess each other's contributions effectively, it may be prudent to recruit assessors based on prior task performance. Algorithms can facilitate this by adaptively selecting raters based on estimated quality, focusing high-quality review effort where it is most needed @cite_39 . The Argonaut system @cite_7 and MobileWorks @cite_16 demonstrate review through hierarchies of trusted workers. However, promotions in Argonaut's hierarchy require human managers, and MobileWorks relies on managers to determine the final answer for tasks that workers report as difficult; this dependence on human management restricts both systems' ability to scale. In contrast, crowd guilds derive reputation levels automatically from peer assessment results, algorithmically establishing who is qualified to evaluate which work.
{ "cite_N": [ "@cite_16", "@cite_7", "@cite_39" ], "mid": [ "2116915306", "2277895402", "2096848877" ], "abstract": [ "Online labor marketplaces offer the potential to automate tasks too difficult for computers, but don't always provide accurate results. MobileWorks is a crowd platform that departs from the marketplace model to provide robust, high-accuracy results using three new techniques. A dynamic work-routing system identifies expertise in the crowd and ensures that posted work completes within a bounded time with fair wages. A peer-management system helps prevent wrong answers. Last, social interaction techniques let best workers manage and teach other crowd members. This process allows the crowd to collaboratively learn how to solve new tasks.", "Crowdsourced workflows are used in research and industry to solve a variety of tasks. The databases community has used crowd workers in query operators optimization and for tasks such as entity resolution. Such research utilizes microtasks where crowd workers are asked to answer simple yes no or multiple choice questions with little training. Typically, microtasks are used with voting algorithms to combine redundant responses from multiple crowd workers to achieve result quality. Microtasks are powerful, but fail in cases where larger context (e.g., domain knowledge) or significant time investment is needed to solve a problem, for example in large-document structured data extraction. In this paper, we consider context-heavy data processing tasks that may require many hours of work, and refer to such tasks as macrotasks. Leveraging the infrastructure and worker pools of existing crowdsourcing platforms, we automate macrotask scheduling, evaluation, and pay scales. A key challenge in macrotask-powered work, however, is evaluating the quality of a worker's output, since ground truth is seldom available and redundancy-based quality control schemes are impractical. 
We present Argonaut, a framework that improves macrotask powered work quality using a hierarchical review. Argonaut uses a predictive model of worker quality to select trusted workers to perform review, and a separate predictive model of task quality to decide which tasks to review. Finally, Argonaut can identify the ideal trade-off between a single phase of review and multiple phases of review given a constrained review budget in order to maximize overall output quality. We evaluate an industrial use of Argonaut to power a structured data extraction pipeline that has utilized over half a million hours of crowd worker input to complete millions of macrotasks. We show that Argonaut can capture up to 118% more errors than random spot-check reviews in review budget-constrained environments with up to two review layers.
Based on our analysis we implement the workflow control algorithm and present experiments demonstrating that TURKONTROL obtains much higher utilities than popular fixed policies." ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
Today, the intellectual inheritance of guilds persists via professional organizations, which replicate some of their benefits @cite_20 . Professions such as architecture, engineering, geology, and land surveying require apprenticeships of varying lengths before one can gain professional certification, which holds great legal and reputational weight @cite_0 @cite_47 . However, these professions fall into traditional work models and do not cater to global, location-independent flexible employment @cite_26 . Non-traditional work arrangements such as freelancing do not provide common work benefits such as economic security, career development, training, mentoring, and social interaction with peers, benefits that in many cases are legally tied to classification as full-time employment @cite_40 . Although professional organizations do support freelance work, much of the underlying value of guilds does not carry over.
{ "cite_N": [ "@cite_26", "@cite_0", "@cite_40", "@cite_47", "@cite_20" ], "mid": [ "1589762555", "2001157009", "1505257659", "", "1974193816" ], "abstract": [ "What if, rather than relying on an employer or the government to meet their human needs, individual workers joined independent organizations whose primary purpose was to provide stable \"homes\" as they moved from job to job? We call these organizations \"guilds\" by analogy to the craft associations of the Middle Ages, and in this paper we examine what they might do and how they might emerge.", "Guilds are social scientists’ favoured historical example of institutions generating a ‘social capital’ of trust that benefited entire economies. This article considers this view in the light of empirical findings for early modern Europe. It draws the distinction between a ‘particularized’ trust in persons of known attributes and a ‘generalized’ trust that applies even to strangers. This is paralleled by the distinction between a ‘differential’ trust in institutions that enforce the rights of certain groups and a ‘uniform’ trust in impartial institutions that enforce the rights of all. Guilds had the potential to generate the particularized and differential trust to solve market failures relating to product quality, training, and innovation, although the empirical findings suggest that they often failed to fulfil this potential. Guilds also had the potential to abuse their trust, and the empirical findings show that they indeed manipulated their social capital of shared norms, common information, mutual sanctions, and collective political action to benefit their members at others’ expense, blocking the spread of generalized and uniform trust. 
Counter to the assumptions of social capital theory, the example of pre-industrial guilds suggests that the particularized and differential trust fostered by associative institutions do not favour but hinder the generalized and uniform trust fostered by impartial institutions.", "This title looks at the likely evolution of the U.S. workforce and workplace over the next 10 to 15 years, focusing on demographics, technology and globalization.", "", "I would like to thank Howard Aldrich, Graham Astley, Rudi Bresser, Mark Ebers, Wilfried Gotsch, John Langton, Johannes Pennings, and four anonymous ASO reviewers for their insightful comments on earlier drafts of this paper. A theoretical frame is developed that links the genesis of formal organizations to societ al evolution. This theoretical frame lends itself to an analysis of the peculiarities of the craft guilds and the developments that caused their decline and replacement by formal organizations. Medieval guilds were not yet formal organizations but formed important predecessor institutions in the evolutionary process that led to the emergence of organizations.'" ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
Guilds re-emerged digitally in Massively Multiplayer Online games (MMOs) and behave somewhat like their brick-and-mortar equivalents @cite_1 . Guilds help their players level their characters, coordinate strategies, and develop a collectively recognized reputation @cite_68 @cite_8 . This paper draws on the strengths of guilds in assessing participants' reputation, and adapts them to the non-traditional employment model of crowd work. Crowd guilds formalize the feedback and advancement system so that it can operate at scale with a distributed membership.
{ "cite_N": [ "@cite_68", "@cite_1", "@cite_8" ], "mid": [ "2127471749", "819488460", "" ], "abstract": [ "Massively multiplayer online games (MMOGs) can be fascinating laboratories to observe group dynamics online. In particular, players must form persistent associations or \"guilds\" to coordinate their actions and accomplish the games' toughest objectives. Managing a guild, however, is notoriously difficult and many do not survive very long. In this paper, we examine some of the factors that could explain the success or failure of a game guild based on more than a year of data collected from five World of Warcraft servers. Our focus is on structural properties of these groups, as represented by their social networks and other variables. We use this data to discuss what games can teach us about group dynamics online and, in particular, what tools and techniques could be used to better support gaming communities.", "Guilds, a primary form of community in many online games, are thought to both aid gameplay and act as social entities. This work uses a three-year scrape of one game, World of Warcraft, to study the relationship between guild membership and advancement in the game as measured by character leveling, a defining and often studied metric. 509 guilds and 90,581 characters are included in the analysis from a three-year period with over 36 million observations, with linear regression to measure the effect of guild membership. Overall findings indicate that guild membership does not aid character leveling to any significant extent. The benefits of guilds may be replicated by players in smaller guilds or not in guilds through game affordances and human sociability.", "" ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
Despite being spread across the globe, crowd workers collaborate, share information, and engage in collective action @cite_46 @cite_3 @cite_50 . The Turkopticon activist system, for instance, gives workers a collective voice to publicly rate requesters @cite_17 , though it remains a worker tool external to the Mechanical Turk platform. Along with Turkopticon, workers frequent various forums to share information identifying high-quality work and reliable requesters @cite_18 . However, these forums are hosted outside the marketplace. This decentralization from the main platform makes it harder to locate and obtain reputational information @cite_35 and, when needed, to bring about large-scale collective action. In response, Dynamo identified strategies for small groups of workers to incite change @cite_41 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_41", "@cite_3", "@cite_50", "@cite_46", "@cite_17" ], "mid": [ "2097726984", "2007018772", "2055699460", "", "", "2293765620", "2147603330" ], "abstract": [ "Trust and reputation systems represent a significant trend in decision support for Internet mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. A natural side effect is that it also provides an incentive for good behaviour, and therefore tends to have a positive effect on market quality. Reputation systems can be called collaborative sanctioning systems to reflect their collaborative nature, and are related to collaborative filtering systems. Reputation systems are already being used in successful commercial online applications. There is also a rapidly growing literature around trust and reputation systems, but unfortunately this activity is not very coherent. The purpose of this article is to give an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions, to analyse the current trends and developments in this area, and to propose a research agenda for trust and reputation systems.", "Crowdsourcing is a key current topic in CSCW. We build upon findings of a few qualitative studies of crowdworkers. We conducted an ethnomethodological analysis of publicly available content on Turker Nation, a general forum for Amazon Mechanical Turk (AMT) users. Using forum data we provide novel depth and detail on how the Turker Nation members operate as economic actors, working out which Requesters and jobs are worthwhile to them. 
We show some of the key ways Turker Nation functions as a community and also look further into Turker-Requester relationships from the Turker perspective -- considering practical, emotional and moral aspects. Finally, following Star and Strauss [25] we analyse Turking as a form of invisible work. We do this to illustrate practical and ethical issues relating to working with Turkers and AMT, and to promote design directions to support Turkers and their relationships with Requesters.", "By lowering the costs of communication, the web promises to enable distributed collectives to act around shared issues. However, many collective action efforts never succeed: while the web's affordances make it easy to gather, these same decentralizing characteristics impede any focus towards action. In this paper, we study challenges to collective action efforts through the lens of online labor by engaging with Amazon Mechanical Turk workers. Through a year of ethnographic fieldwork, we sought to understand online workers' unique barriers to collective action. We then created Dynamo, a platform to support the Mechanical Turk community in forming publics around issues and then mobilizing. We found that collective action publics tread a precariously narrow path between the twin perils of stalling and friction, balancing with each step between losing momentum and flaring into acrimony. However, specially structured labor to maintain efforts' forward motion can help such publics take action.", "", "", "The main goal of this paper is to show that crowdworkers collaborate to fulfill technical and social needs left by the platform they work on. That is, crowdworkers are not the independent, autonomous workers they are often assumed to be, but instead work within a social network of other crowdworkers. 
Crowdworkers collaborate with members of their networks to 1) manage the administrative overhead associated with crowdwork, 2) find lucrative tasks and reputable employers and 3) recreate the social connections and support often associated with brick and mortar-work environments. Our evidence combines ethnography, interviews, survey data and larger scale data analysis from four crowdsourcing platforms, emphasizing the qualitative data from the Amazon Mechanical Turk (MTurk) platform and Microsoftâ s proprietary crowdsourcing platform, the Universal Human Relevance System (UHRS). This paper draws from an ongoing, longitudinal study of Crowdwork that uses a mixed methods approach to understand the cultural meaning, political implications, and ethical demands of crowdsourcing.", "As HCI researchers have explored the possibilities of human computation, they have paid less attention to ethics and values of crowdwork. This paper offers an analysis of Amazon Mechanical Turk, a popular human computation system, as a site of technically mediated worker-employer relations. We argue that human computation currently relies on worker invisibility. We then present Turkopticon, an activist system that allows workers to publicize and evaluate their relationships with employers. As a common infrastructure, Turkopticon also enables workers to engage one another in mutual aid. We conclude by discussing the potentials and challenges of sustaining activist technologies that intervene in large, existing socio-technical systems." ] }
1611.01572
2552525247
Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
In existing marketplaces, workers are frustrated by capricious requester behavior around work acceptance and by having little say in shaping platform policies @cite_30 @cite_25 . Issues such as unfair rejections, unclear tasks, and platform policies have been publicly discussed @cite_18 , but workers have limited opportunities to influence platform operations, leaving little room to accommodate emerging needs. Crowd guilds therefore focus on providing peer-assessed reputation signals. To do this, they internalize the affordances of earlier tools for peer review and feedback gathering, and give the guild power in the marketplace to manage some of these issues.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_25" ], "mid": [ "173698209", "2007018772", "" ], "abstract": [ "Crowdsourcing offers new forms of work arrangements enabled and facilitated by the advancements in Internet technologies and growing popularity of social media. However, do these new forms of work empower workers to craft their own careers or do they create a sweatshop where workers complete fragmented tasks to earn minimal pay? We posit that career theories and job crafting approaches collectively provide valuable theoretical perspectives for examining this question. By assessing the degree to which these platforms afford or constrain the workers to exert their personal agencies (i.e., affords career and job crafting preferences), we argue, will partially determine whether these new forms of work are a harbinger of worker empowerment or exploitation. Preliminary findings of this exploratory research-in-progress, conducted using two types of workers on Amazon-Mechanical-Turk, reveal that these new forms of work arrangements have a potential for both empowerment and exploitation of workers.", "Crowdsourcing is a key current topic in CSCW. We build upon findings of a few qualitative studies of crowdworkers. We conducted an ethnomethodological analysis of publicly available content on Turker Nation, a general forum for Amazon Mechanical Turk (AMT) users. Using forum data we provide novel depth and detail on how the Turker Nation members operate as economic actors, working out which Requesters and jobs are worthwhile to them. We show some of the key ways Turker Nation functions as a community and also look further into Turker-Requester relationships from the Turker perspective -- considering practical, emotional and moral aspects. Finally, following Star and Strauss [25] we analyse Turking as a form of invisible work. 
We do this to illustrate practical and ethical issues relating to working with Turkers and AMT, and to promote design directions to support Turkers and their relationships with Requesters.", "" ] }
1611.01504
2552297262
We propose a method to classify the causal relationship between two discrete variables given only the joint distribution of the variables, acknowledging that the method is subject to an inherent baseline error. We assume that the causal system is acyclic, but we do allow for hidden common causes. Our algorithm presupposes that the probability distribution @math of a cause @math is independent of the probability distribution @math of the cause-effect mechanism. While our classifier is trained with a Bayesian assumption of flat hyperpriors, we do not make this assumption about our test data. This work connects to recent developments on the identifiability of causal models over continuous variables under the assumption of "independent mechanisms". Carefully-commented Python notebooks that reproduce all our experiments are available online at this http URL
Most of this work has focused on distinguishing the causal orientation in the absence of confounding (i.e. distinguishing hypotheses A, C and D in Fig. ), although @cite_0 have extended the linear non-Gaussian methods to the general hypothesis space of Fig. , and showed that the additive noise assumption can be used to detect pure confounding with some success, i.e. to distinguish hypothesis B from hypotheses C and D.
{ "cite_N": [ "@cite_0" ], "mid": [ "2093947494" ], "abstract": [ "The task of estimating causal effects from non-experimental data is notoriously difficult and unreliable. Nevertheless, precisely such estimates are commonly required in many fields including economics and social science, where controlled experiments are often impossible. Linear causal models (structural equation models), combined with an implicit normality (Gaussianity) assumption on the data, provide a widely used framework for this task. We have recently described how non-Gaussianity in the data can be exploited for estimating causal effects. In this paper we show that, with non-Gaussian data, causal inference is possible even in the presence of hidden variables (unobserved confounders), even when the existence of such variables is unknown a priori. Thus, we provide a comprehensive and complete framework for the estimation of causal effects between the observed variables in the linear, non-Gaussian domain. Numerical simulations demonstrate the practical implementation of the proposed method, with full Matlab code available for all simulations." ] }
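As a minimal illustration of the factorization view underlying this line of work (a toy sketch with a made-up joint distribution, not the paper's classifier): a discrete joint distribution admits two causal factorizations, and "independent mechanism" methods score how independent the marginal of the putative cause is from the conditional mechanism.

```python
import numpy as np

# Hypothetical 2x3 joint distribution P(X, Y) over two discrete variables.
P = np.array([[0.10, 0.25, 0.05],
              [0.30, 0.10, 0.20]])

# Factorization for the hypothesis X -> Y: P(X) and P(Y | X).
p_x = P.sum(axis=1)                 # marginal of the putative cause
p_y_given_x = P / p_x[:, None]      # conditional (the "mechanism")

# Factorization for the hypothesis Y -> X: P(Y) and P(X | Y).
p_y = P.sum(axis=0)
p_x_given_y = P / p_y[None, :]

# Both factorizations reproduce the same joint distribution; a causal
# classifier must therefore rely on an extra assumption (here, that the
# cause's marginal is independent of the mechanism) to pick a direction.
assert np.allclose(p_x[:, None] * p_y_given_x, P)
assert np.allclose(p_y[None, :] * p_x_given_y, P)
```

The equality of both reconstructions is exactly why the problem has an inherent baseline error: the joint distribution alone cannot separate the hypotheses without such an independence assumption.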
1611.01731
2553156677
Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for its success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multilabel classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which helps prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks.
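The core DLDL idea, replacing a hard label with a discrete label distribution and training against a KL-divergence loss, can be sketched in a few lines (a toy illustration with made-up parameters, not the paper's network; function names are mine):

```python
import numpy as np

def label_to_distribution(label, classes, sigma=2.0):
    """Convert a scalar label (e.g. age 25) into a discrete Gaussian
    label distribution over the given class values, so adjacent labels
    receive nonzero mass."""
    d = np.exp(-((classes - label) ** 2) / (2.0 * sigma ** 2))
    return d / d.sum()

def kl_divergence(target, predicted, eps=1e-12):
    """KL(target || predicted), the DLDL training loss for one example."""
    return float(np.sum(target * (np.log(target + eps) - np.log(predicted + eps))))

classes = np.arange(0, 100)                   # e.g. age classes 0..99
target = label_to_distribution(25, classes)   # ground-truth distribution for age 25
uniform = np.full_like(target, 1.0 / len(classes))

# A prediction matching the target incurs (near-)zero loss; an
# uninformative uniform prediction incurs a strictly larger loss.
assert kl_divergence(target, target) < 1e-6
assert kl_divergence(target, uniform) > kl_divergence(target, target)
```

In the full method this target distribution replaces the one-hot label, and the KL loss is minimized over the ConvNet's softmax output.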
Methods based on hand-crafted features usually include two stages. The first stage is feature extraction. The second stage learns models for recognition, detection, or estimation using these features. SVMs, random forests @cite_45 and neural networks have commonly been used in the learning stage. In addition, Geng proposed the label distribution learning approach to utilize the correlation among adjacent labels, which further improved performance on age estimation @cite_36 and head pose estimation @cite_21 .
{ "cite_N": [ "@cite_36", "@cite_45", "@cite_21" ], "mid": [ "2066454034", "1974744233", "2022566595" ], "abstract": [ "One of the main difficulties in facial age estimation is that the learning algorithms cannot expect sufficient and complete training data. Fortunately, the faces at close ages look quite similar since aging is a slow and smooth process. Inspired by this observation, instead of considering each face image as an instance with one label (age), this paper regards each face image as an instance associated with a label distribution. The label distribution covers a certain number of class labels, representing the degree that each label describes the instance. Through this way, one face image can contribute to not only the learning of its chronological age, but also the learning of its adjacent ages. Two algorithms, named IIS-LLD and CPNN, are proposed to learn from such label distributions. Experimental results on two aging face databases show remarkable advantages of the proposed label distribution learning algorithms over the compared single-label learning algorithms, either specially designed for age estimation or for general purpose.", "Fast and reliable algorithms for estimating the head pose are essential for many applications and higher-level face analysis tasks. We address the problem of head pose estimation from depth data, which can be captured using the ever more affordable 3D sensing technologies available today. To achieve robustness, we formulate pose estimation as a regression problem. While detecting specific face parts like the nose is sensitive to occlusions, learning the regression on rather generic surface patches requires enormous amount of training data in order to achieve accurate estimates. We propose to use random regression forests for the task at hand, given their capability to handle large training datasets. Moreover, we synthesize a great amount of annotated training data using a statistical model of the human face. 
In our experiments, we show that our approach can handle real data presenting large pose changes, partial occlusions, and facial expressions, even though it is trained only on synthetic neutral face data. We have thoroughly evaluated our system on a publicly available database on which we achieve state-of-the-art performance without having to resort to the graphics card.", "Accurate ground truth pose is essential to the training of most existing head pose estimation algorithms. However, in many cases, the \"ground truth\" pose is obtained in rather subjective ways, such as asking the human subjects to stare at different markers on the wall. In such case, it is better to use soft labels rather than explicit hard labels. Therefore, this paper proposes to associate a multivariate label distribution (MLD) to each image. An MLD covers a neighborhood around the original pose. Labeling the images with MLD can not only alleviate the problem of inaccurate pose labels, but also boost the training examples associated to each pose without actually increasing the total amount of training examples. Two algorithms are proposed to learn from the MLD by minimizing the weighted Jeffrey's divergence between the predicted MLD and the ground truth MLD. Experimental results show that the MLD-based methods perform significantly better than the compared state-of-the-art head pose estimation algorithms." ] }
1611.01731
2553156677
Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for its success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multilabel classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which help prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks.
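The DLDL training objective described above — minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions — can be written down in a few lines. This is a minimal NumPy-free sketch assuming the network emits raw logits and the divergence is taken as KL(target ‖ prediction); the exact direction and any smoothing constant are assumptions here.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_loss(target, logits, eps=1e-12):
    """KL(target || prediction), the assumed per-example DLDL loss.

    `eps` guards the logarithm against zero-probability entries.
    """
    pred = softmax(logits)
    return sum(t * math.log((t + eps) / (p + eps))
               for t, p in zip(target, pred))

# matching distributions give (near-)zero loss; mismatches are penalized
assert kl_loss([0.5, 0.5], [0.0, 0.0]) < 1e-6
assert kl_loss([1.0, 0.0], [0.0, 0.0]) > 0.5
```

Because the target is a full distribution rather than a one-hot vector, every class with non-zero target mass contributes gradient, which is how label ambiguity enters both feature and classifier learning.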
Although important progresses have been made with these features, the hand-crafted features render them suboptimal for particular tasks such as age or head pose estimation. More recently, learning feature representation has shown great advantages. For example, Lu @cite_39 tried to learn cost-sensitive local binary features for age estimation.
{ "cite_N": [ "@cite_39" ], "mid": [ "1669690275" ], "abstract": [ "In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets." ] }
1611.01731
2553156677
Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for its success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multilabel classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which help prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks.
Deep learning has substantially improved upon the state-of-the-art in image classification @cite_43 , object detection @cite_33 , semantic segmentation @cite_41 and many other vision tasks. In many cases, the softmax loss is used in deep models for classification @cite_43 . Besides classification, deep ConvNets have also been trained for regression tasks such as head pose estimation @cite_26 and facial landmark detection @cite_27 . In regression problems, the training procedure usually optimizes a squared @math loss function. Satisfactory performance has also been obtained by using Tukey's biweight function in human pose estimation @cite_49 . In terms of model architecture, deep ConvNet models that use deeper architectures and smaller convolution filters (e.g., VGG-Nets @cite_12 and VGG-Face @cite_10 ) are very powerful. Nevertheless, these deep learning methods do not make use of the label ambiguity present in the training set, and usually require a large amount of training data.
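The contrast drawn above between a squared loss and Tukey's biweight function is easy to see in code: the biweight grows roughly quadratically for small residuals but saturates at a constant for outliers, so extreme errors stop dominating the gradient. A scalar sketch follows; `c = 4.685` is the standard tuning constant for this M-estimator, though the cited work may tune it differently.

```python
def tukey_biweight(residual, c=4.685):
    """Tukey's biweight loss for a single residual.

    Approximately quadratic near zero, but constant (c^2 / 6) once
    |residual| exceeds c, so outliers contribute no further gradient.
    """
    if abs(residual) <= c:
        return (c ** 2 / 6.0) * (1.0 - (1.0 - (residual / c) ** 2) ** 3)
    return c ** 2 / 6.0  # saturated penalty for outliers

# small residuals are penalized progressively ...
assert tukey_biweight(0.5) < tukey_biweight(2.0)
# ... while all large residuals incur the same capped cost
assert tukey_biweight(10.0) == tukey_biweight(100.0)
```

Under a plain squared loss, the 100.0 residual would cost 100 times more than the 10.0 one; capping that ratio at 1 is what makes the estimator robust.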
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_41", "@cite_43", "@cite_27", "@cite_49", "@cite_10", "@cite_12" ], "mid": [ "1475738481", "2102605133", "1903029394", "2194775991", "1976948919", "2116626343", "2325939864", "2962835968" ], "abstract": [ "We propose an efficient and accurate head orientation estimation algorithm using a monocular camera. Our approach is leveraged by deep neural network and we exploit the architecture in a data regression manner to learn the mapping function between visual appearance and three dimensional head orientation angles. Therefore, in contrast to classification based approaches, our system outputs continuous head orientation. The algorithm uses convolutional filters trained with a large number of augmented head appearances, thus it is user independent and covers large pose variations. Our key observation is that an input image having (32 32 ) resolution is enough to achieve about 3 degrees of mean square error, which can be used for efficient head orientation applications. Therefore, our architecture takes only 1 ms on roughly localized head positions with the aid of GPU. We also propose particle filter based post-processing to enhance stability of the estimation further in video sequences. We compare the performance with the state-of-the-art algorithm which utilizes depth sensor and we validate our head orientation estimator on Internet photos and video.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . 
Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Deeper neural networks are more difficult to train. 
We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. 
The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability.", "Convolutional Neural Networks (ConvNets) have successfully contributed to improve the accuracy of regression-based methods for computer vision tasks such as human pose estimation, landmark localization, and object detection. The network optimization has been usually performed with L2 loss and without considering the impact of outliers on the training process, where an outlier in this context is defined by a sample estimation that lies at an abnormal distance from the other training sample estimations in the objective space. In this work, we propose a regression model with ConvNets that achieves robustness to such outliers by minimizing Tukey's biweight function, an M-estimator robust to outliers, as the loss function for the ConvNet. In addition to the robust loss, we introduce a coarse-to-fine model, which processes input images of progressively higher resolutions for improving the accuracy of the regressed values. In our experiments, we demonstrate faster convergence and better generalization of our robust loss function for the tasks of human pose estimation and age estimation from face images. 
We also show that the combination of the robust loss function with the coarse-to-fine model produces comparable or better results than current state-of-the-art approaches in four publicly available human pose estimation datasets.", "The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks.", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
1611.01731
2553156677
Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for its success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multilabel classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which help prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks.
A recent approach, used in Inception-v3 @cite_14 , is based on label smoothing (LS). Instead of using only the ground-truth label, it regularizes the classifier with a mixture of the ground-truth label and a uniform distribution over all labels. However, LS is limited to the uniform distribution and does not mine the ambiguity information among labels. We believe that label ambiguity is too important to ignore: if we make good use of it, the number of training images required for some tasks could be effectively reduced.
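The mixture described above is a one-line transform of the target vector. In this sketch `epsilon` is the smoothing weight (0.1 is a commonly used value, assumed here rather than taken from the paper):

```python
def smooth_labels(one_hot, epsilon=0.1):
    """Label smoothing: mix the one-hot ground truth with a uniform
    distribution over all k classes, as used for Inception-v3."""
    k = len(one_hot)
    return [(1.0 - epsilon) * y + epsilon / k for y in one_hot]

smoothed = smooth_labels([0.0, 0.0, 1.0, 0.0])
assert abs(sum(smoothed) - 1.0) < 1e-9     # still a distribution
assert max(smoothed) == smoothed[2]        # true class keeps most mass
```

Note that every wrong class receives the same mass `epsilon / k`; this is precisely the limitation the text points out, since the smoothing ignores which wrong labels are semantically close to the true one.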
{ "cite_N": [ "@cite_14" ], "mid": [ "2183341477" ], "abstract": [ "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set." ] }
1611.01644
2554706764
In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an @math -vertex weighted directed graph, a root @math , and a set of @math terminals @math , find a min-cost subgraph @math that has two edge vertex disjoint paths from @math to any @math . 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have an additional requirement that the network must tolerate one failure. No non-trivial approximation is known for 2-DST. This was left as an open problem by , [SODA'09; JCSS] and has then been studied by [SODA'12; TALG] and Laekhanukit SODA'14]. However, no positive result was known except for the special case of a @math -shallow instance [Laekhanukit, ICALP'16]. We present an @math approximation algorithm for 2-DST that runs in time @math , for any @math . This implies a polynomial-time @math approximation for any constant @math , and a poly-logarithmic approximation running in quasi-polynomial time. We remark that this is essentially the best-known even for the classical DST, and the latter problem is @math -hard to approximate [Halperin and Krauthgamer, STOC'03]. As a by product, we obtain an algorithm with the same approximation guarantee for the @math -Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals are @math -edge vertex connected.
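The 2-DST feasibility condition — two edge-disjoint paths from the root to every terminal — is equivalent, by Menger's theorem, to a max-flow of at least 2 under unit edge capacities. The sketch below is only a feasibility checker for a candidate subgraph, not the approximation algorithm from the paper; the adjacency-matrix Edmonds–Karp is chosen for brevity, not efficiency.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds–Karp max-flow with unit capacities on n vertices;
    the flow value equals the number of edge-disjoint s->t paths."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] += 1
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # push one unit of flow along the path found
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

def survives_one_edge_failure(n, edges, root, terminal):
    """A subgraph tolerates any single edge failure for (root, terminal)
    iff it carries two edge-disjoint root->terminal paths."""
    return max_flow(n, edges, root, terminal) >= 2

# two disjoint paths 0->1->3 and 0->2->3: a feasible 2-DST subgraph
assert survives_one_edge_failure(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3)
# a single path fails the requirement
assert not survives_one_edge_failure(3, [(0, 1), (1, 2)], 0, 2)
```

For the vertex-disjoint variant one would additionally split each internal vertex into an in-copy and an out-copy joined by a unit-capacity edge, the standard reduction.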
In the Directed Steiner Tree problem (DST), we are given an @math -vertex directed edge-weighted graph, a root @math and a collection of @math terminal vertices @math . The goal is to find a min-cost arborescence rooted at @math and spanning @math . DST is one of the most fundamental network design problems in directed graphs. DST admits, for any positive integer @math , an @math approximation running in time @math @cite_35 @cite_13 . In particular, this implies a polynomial-time @math approximation for any constant @math , and an @math approximation running in quasi-polynomial time.
{ "cite_N": [ "@cite_35", "@cite_13" ], "mid": [ "2009976792", "2005771129" ], "abstract": [ "We give the first nontrivial approximation algorithms for the Steiner tree problem and the generalized Steiner network problem on general directed graphs. These problems have several applications in network design and multicast routing. For both problems, the best ratios known before our work were the trivial O(k)-approximations. For the directed Steiner tree problem, we design a family of algorithms that achieves an approximation ratio of i(i?1)k1 i in time O(nik2i) for any fixed i1, where k is the number of terminals. Thus, an O(k?) approximation ratio can be achieved in polynomial time for any fixed ?0. Setting i=logk, we obtain an O(log2k) approximation ratio in quasi-polynomial time. For the directed generalized Steiner network problem we give an algorithm that achieves an approximation ratio of O(k2 3log1 3k), where k is the number of pairs of vertices that are to be connected. Related problems including the group Steiner tree problem, the set TSP problem, and several others in both directed and undirected graphs can be reduced in an approximation preserving fashion to the directed Steiner tree problem. Thus, we obtain the first nontrivial approximations to those as well. All these problems are known to be as hard as the Set cover problem to approximate.", "Given an acyclic directed network, a subsetS of nodes (terminals), and a rootr, theacyclic directed Steiner tree problem requires a minimum-cost subnetwork which contains paths fromr to each terminal. It is known that unlessNP⊆DTIME[npolylogn] no polynomial-time algorithm can guarantee better than (lnk) 4-approximation, wherek is the number of terminals. In this paper we give anO(kź)-approximation algorithm for any ź>0. This result improves the previously knownk-approximation." ] }
1611.01644
2554706764
In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an @math -vertex weighted directed graph, a root @math , and a set of @math terminals @math , find a min-cost subgraph @math that has two edge vertex disjoint paths from @math to any @math . 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have an additional requirement that the network must tolerate one failure. No non-trivial approximation is known for 2-DST. This was left as an open problem by , [SODA'09; JCSS] and has then been studied by [SODA'12; TALG] and Laekhanukit SODA'14]. However, no positive result was known except for the special case of a @math -shallow instance [Laekhanukit, ICALP'16]. We present an @math approximation algorithm for 2-DST that runs in time @math , for any @math . This implies a polynomial-time @math approximation for any constant @math , and a poly-logarithmic approximation running in quasi-polynomial time. We remark that this is essentially the best-known even for the classical DST, and the latter problem is @math -hard to approximate [Halperin and Krauthgamer, STOC'03]. As a by product, we obtain an algorithm with the same approximation guarantee for the @math -Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals are @math -edge vertex connected.
A well-studied special case of DST is the Group Steiner Tree problem (GST). Here we are given an undirected weighted graph, a root vertex @math , and a collection of @math groups @math . The goal is to compute the cheapest tree that spans @math and at least one vertex from each group @math . The best-known polynomial-time approximation factor for GST is @math due to Garg et al. @cite_2 . Their algorithm uses probabilistic distance-based tree embeddings @cite_24 @cite_3 as a subroutine. Chekuri and Pal @cite_31 presented an @math approximation that runs in quasi-polynomial time. On the negative side, Halperin and Krauthgamer @cite_21 showed that GST admits no @math -approximation algorithm, for any constant @math , unless @math . This implies the same hardness for DST, hence for 2-DST and @math -DST.
{ "cite_N": [ "@cite_21", "@cite_3", "@cite_24", "@cite_2", "@cite_31" ], "mid": [ "", "2768461089", "2114493937", "2610052675", "2151239055" ], "abstract": [ "", "In this paper, we show that any n point metric space can be embedded into a distribution over dominating tree metrics such that the expected stretch of any edge is O(log n). This improves upon the result of Bartal who gave a bound of O(log n log log n). Moreover, our result is existentially tight; there exist metric spaces where any tree embedding must have distortion Ω(log n)-distortion. This problem lies at the heart of numerous approximation and online algorithms including ones for group Steiner tree, metric labeling, buy-at-bulk network design and metrical task system. Our result improves the performance guarantees for all of these problems.", "This paper provides a novel technique for the analysis of randomized algorithms for optimization problems on metric spaces, by relating the randomized performance ratio for any, metric space to the randomized performance ratio for a set of \"simple\" metric spaces. We define a notion of a set of metric spaces that probabilistically-approximates another metric space. We prove that any metric space can be probabilistically-approximated by hierarchically well-separated trees (HST) with a polylogarithmic distortion. These metric spaces are \"simple\" as being: (1) tree metrics; (2) natural for applying a divide-and-conquer algorithmic approach. The technique presented is of particular interest in the context of on-line computation. A large number of on-line algorithmic problems, including metrical task systems, server problems, distributed paging, and dynamic storage rearrangement are defined in terms of some metric space. Typically for these problems, there are linear lower bounds on the competitive ratio of deterministic algorithms. 
Although randomization against an oblivious adversary has the potential of overcoming these high ratios, very little progress has been made in the analysis. We demonstrate the use of our technique by obtaining substantially improved results for two different on-line problems.", "Given a weighted graph with some subsets of vertices called groups, the group Steiner tree problem is to find a minimum-weight subgraph which contains at least one vertex from each group. We give a randomized algorithm with a polylogarithmic approximation guarantee for the group Steiner tree problem. The previous best approximation guarantee was O(i2k1 i) in time O(nik2i) (Charikar, Chekuri, Goel, and Guha). Our algorithm also improves existing approximation results for network design problems with location-based constraints and for the symmetric generalized traveling salesman problem.", "Given an arc-weighted directed graph G = (V, A, spl lscr ) and a pair of nodes s, t, we seek to find an s-t walk of length at most B that maximizes some given function f of the set of nodes visited by the walk. The simplest case is when we seek to maximize the number of nodes visited: this is called the orienteering problem. Our main result is a quasi-polynomial time algorithm that yields an O(log OPT) approximation for this problem when f is a given submodular set function. We then extend it to the case when a node v is counted as visited only if the walk reaches v in its time window [R(v), D(v)]. We apply the algorithm to obtain several new results. First, we obtain an O(log OPT) approximation for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time. This captures the time window problem considered earlier for which, even in undirected graphs, the best approximation ratio known [Bansal, (2004)] is O(log sup 2 OPT). 
The second application is an O(log sup 2 k) approximation for the k-TSP problem in directed graphs (satisfying asymmetric triangle inequality). This is the first non-trivial approximation algorithm for this problem. The third application is an O(log sup 2 k) approximation (in quasi-poly time) for the group Steiner problem in undirected graphs where k is the number of groups. This improves earlier ratios (Garg, ) by a logarithmic factor and almost matches the inapproximability threshold on trees (Halperin and Krauthgamer, 2003). This connection to group Steiner trees also enables us to prove that the problem we consider is hard to approximate to a ratio better than spl Omega (log sup 1- spl epsi OPT), even in undirected graphs. Even though our algorithm runs in quasi-poly time, we believe that the implications for the approximability of several basic optimization problems are interesting." ] }
1611.01644
2554706764
In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an @math -vertex weighted directed graph, a root @math , and a set of @math terminals @math , find a min-cost subgraph @math that has two edge vertex disjoint paths from @math to any @math . 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have an additional requirement that the network must tolerate one failure. No non-trivial approximation is known for 2-DST. This was left as an open problem by , [SODA'09; JCSS] and has then been studied by [SODA'12; TALG] and Laekhanukit SODA'14]. However, no positive result was known except for the special case of a @math -shallow instance [Laekhanukit, ICALP'16]. We present an @math approximation algorithm for 2-DST that runs in time @math , for any @math . This implies a polynomial-time @math approximation for any constant @math , and a poly-logarithmic approximation running in quasi-polynomial time. We remark that this is essentially the best-known even for the classical DST, and the latter problem is @math -hard to approximate [Halperin and Krauthgamer, STOC'03]. As a by product, we obtain an algorithm with the same approximation guarantee for the @math -Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals are @math -edge vertex connected.
As already mentioned, survivable network design is well studied in undirected (weighted) graphs. First, consider the edge-connectivity version. The earliest work was initiated in the early 80's by Frederickson and JáJá @cite_22 , who studied the 2-Edge Connected Subgraph problem in both directed and undirected graphs. In the most general form of the problem, known as the Survivable Network Design problem (SNDP), we are given non-negative integer requirements @math for all pairs of vertices @math , and the goal is to find a min-cost subgraph that has @math edge-disjoint paths between @math and @math . Jain @cite_0 devised a @math -approximation algorithm for this problem. We remark that @math is the best known approximation factor even for @math @cite_39 , which is known as the Steiner Forest problem. The classical Steiner Tree problem is a special case of Steiner Forest in which all pairs share one vertex. Here the best known approximation factor is @math due to the work of Byrka et al. @cite_27 .
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_22", "@cite_39" ], "mid": [ "2172955861", "1976584101", "2051699458", "2011823863" ], "abstract": [ "", "The Steiner tree problem is one of the most fundamental NP-hard problems: given a weighted undirected graph and a subset of terminal nodes, find a minimum-cost tree spanning the terminals. In a sequence of papers, the approximation ratio for this problem was improved from 2 to 1.55 [Robins and Zelikovsky 2005]. All these algorithms are purely combinatorial. A long-standing open problem is whether there is an LP relaxation of Steiner tree with integrality gap smaller than 2 [Rajagopalan and Vazirani 1999]. In this article we present an LP-based approximation algorithm for Steiner tree with an improved approximation factor. Our algorithm is based on a, seemingly novel, iterative randomized rounding technique. We consider an LP relaxation of the problem, which is based on the notion of directed components. We sample one component with probability proportional to the value of the associated variable in a fractional solution: the sampled component is contracted and the LP is updated consequently. We iterate this process until all terminals are connected. Our algorithm delivers a solution of cost at most ln(4) + e As a by-product of our analysis, we show that the integrality gap of our LP is at most 1.55, hence answering the mentioned open question.", "Graph augmentation problems on a weighted graph involve determining a minimum-cost set of edges to add to a graph to satisfy a specified property, such as biconnectivity, bridge-connectivity or str...", "We give the first approximation algorithm for the generalized network Steiner problem, a problem in network design. An instance consists of a network with link-costs and, for each pair @math of nodes, an edge-connectivity requirement @math . The goal is to find a minimum-cost network using the available links and satisfying the requirements. 
Our algorithm outputs a solution whose cost is within @math of optimal, where @math is the highest requirement value. In the course of proving the performance guarantee, we prove a combinatorial min-max approximate equality relating minimum-cost networks to maximum packings of certain kinds of cuts. As a consequence of the proof of this theorem, we obtain an approximation algorithm for optimally packing these cuts; we show that this algorithm has application to estimating the reliability of a probabilistic network." ] }
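The 2-DST feasibility requirement discussed in the abstract above — two edge-disjoint paths from the root to every terminal — can be checked with a unit-capacity max-flow computation, since by Menger's theorem the max-flow value from @math to @math with unit edge capacities equals the number of edge-disjoint paths. A minimal runnable sketch; the toy graph instance is an illustrative assumption, not from the paper:

```python
# Sketch: verify the 2-DST feasibility requirement (two edge-disjoint
# r->t paths for every terminal) via unit-capacity max-flow.  By
# Menger's theorem, the unit-capacity max-flow value from r to t equals
# the number of edge-disjoint r->t paths.  Toy instance only.
from collections import deque

def max_edge_disjoint_paths(edges, s, t):
    """Unit-capacity Edmonds-Karp: number of edge-disjoint s->t paths."""
    cap = {}
    for u, v in edges:
        cap.setdefault(u, {})
        cap[u][v] = cap[u].get(v, 0) + 1
        cap.setdefault(v, {}).setdefault(u, 0)  # residual arc
    flow = 0
    while True:
        parent = {s: None}          # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t                        # augment by one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

def satisfies_2dst(edges, root, terminals):
    return all(max_edge_disjoint_paths(edges, root, t) >= 2 for t in terminals)

# Two disjoint r->t routes (via a and via b), so the requirement holds.
edges = [("r", "a"), ("a", "t"), ("r", "b"), ("b", "t")]
print(satisfies_2dst(edges, "r", ["t"]))  # True
```

Note this only checks feasibility of a candidate subgraph; the hard algorithmic content of the paper is, of course, finding a cheap subgraph satisfying it.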
1611.01644
2554706764
In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an @math -vertex weighted directed graph, a root @math , and a set of @math terminals @math , find a min-cost subgraph @math that has two edge/vertex-disjoint paths from @math to any @math . 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have an additional requirement that the network must tolerate one failure. No non-trivial approximation is known for 2-DST. This was left as an open problem by , [SODA'09; JCSS] and has since been studied by [SODA'12; TALG] and [Laekhanukit, SODA'14]. However, no positive result was known except for the special case of a @math -shallow instance [Laekhanukit, ICALP'16]. We present an @math -approximation algorithm for 2-DST that runs in time @math , for any @math . This implies a polynomial-time @math -approximation for any constant @math , and a poly-logarithmic approximation running in quasi-polynomial time. We remark that this is essentially the best known even for the classical DST, and the latter problem is @math -hard to approximate [Halperin and Krauthgamer, STOC'03]. As a by-product, we obtain an algorithm with the same approximation guarantee for the @math -Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals is @math -edge/vertex-connected.
Concerning vertex connectivity, two of the most well-studied problems are the @math -Vertex Connected Steiner Tree ( @math -ST) and @math -Vertex Connected Steiner Subgraph ( @math -SS) problems, i.e., the undirected versions of and , respectively. There are @math -approximation algorithms for @math -ST and @math -SS by Fleischer et al. @cite_38 using the iterative rounding method. For @math , Nutov devised an @math -approximation algorithm for @math -ST in @cite_32 and an @math -approximation algorithm for @math -SS in @cite_15 (see also @cite_29 ). A special case of @math -SS with metric costs was studied by Cheriyan and Vetta in @cite_36 , who gave an @math -approximation algorithm for the problem. The most extensively studied special case of @math -SS is the one where all vertices are terminals, namely the @math -Vertex Connected Spanning Subgraph problem, which has been studied, e.g., in @cite_10 @cite_1 @cite_16 @cite_37 @cite_18 . The current best approximation guarantees are @math @cite_37 , and @math for the case @math @cite_18 @cite_7 . More references can be found in @cite_8 @cite_14 @cite_4 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_8", "@cite_36", "@cite_29", "@cite_1", "@cite_32", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "1968690018", "1983217642", "2178196238", "", "", "2174245573", "", "2079430060", "2163546498", "1967595345", "2052656799", "1975776599", "2058521212", "1986011879" ], "abstract": [ "The survivable network design problem (SNDP) is the following problem: given an undirected graph and values rij for each pair of vertices i and j, find a minimum-cost subgraph such that there are at least rij disjoint paths between vertices i and j. In the edge connected version of this problem (EC-SNDP), these paths must be edge-disjoint. In the vertex connected version of the problem (VC-SNDP), the paths must be vertex disjoint. The element connectivity problem (ELC-SNDP, or ELC) is a problem of intermediate difficulty. In this problem, the set of vertices is partitioned into terminals and nonterminals. The edges and nonterminals of the graph are called elements. The values rij are only specified for pairs of terminals i, j, and the paths from i to j must be element disjoint. Thus if rij-1 elements fail, terminals i and j are still connected by a path in the network.These variants of SNDP are all known to be NP-hard. The best known approximation algorithm for the EC-SNDP has performance guarantee of 2 and iteratively rounds solutions to a linear programming relaxation of the problem. ELC has a primal-dual O(log k)-approximation algorithm, where k = maxi,j rij. Since this work first appeared as an extended abstract, it has been shown that it is hard to approximate VC-SNDP to factor 2log1-en.In this paper we investigate applying iterative rounding to ELC and VC-SNDP We show that iterative rounding will not yield a constant factor approximation algorithm for general VC-SNDP. 
On the other hand, we show how to extend the analysis of iterative rounding applied to EC-SNDP to yield 2-approximation algorithms for both general ELC, and for the case of VC-SNDP when rij ∈ 0, 1, 2 . The latter result improves on an existing 3-approximation algorithm. The former is the first constant factor approximation algorithm for a general survivable network design problem that allows node failures.", "We consider Degree Constrained Survivable Network problems. For the directed Degree Constrained k -Edge-Outconnected Subgraph problem, we slightly improve the best known approximation ratio, by a simple proof. Our main contribution is giving a framework to handle node-connectivity degree constrained problems with the iterative rounding method. In particular, for the degree constrained versions of the Element-Connectivity Survivable Network problem on undirected graphs, and of the k -Outconnected Subgraph problem on both directed and undirected graphs, our algorithm computes a solution J of cost O(logk) times the optimal, with degrees O(2 k )?b(v). Similar result are obtained for the k -Connected Subgraph problem. The latter improves on the only degree approximation O(klogn)?b(v) in O(n k ) time on undirected graphs by Feder, Motwani, and Zhu.", "We present a 6-approximation algorithm for the minimum-cost @math -node connected spanning subgraph problem, assuming that the number of nodes is at least @math . We apply a combinatorial preprocessing, based on the Frank--Tardos algorithm for @math -outconnectivity, to transform any input into an instance such that the iterative rounding method gives a 2-approximation guarantee. 
This is the first constant factor approximation algorithm even in the asymptotic setting of the problem, that is, the restriction to instances where the number of nodes is lower bounded by a function of @math .", "", "", "We consider the problem of finding a minimum edge cost subgraph of a graph satisfying both given node-connectivity requirements and degree upper bounds on nodes. We present an iterative rounding algorithm of the biset linear programming relaxation for this problem. For directed graphs and @math -out-connectivity requirements from a root, our algorithm computes a solution that is a 2-approximation on the cost, and the degree of each node @math in the solution is at most @math , where @math is the degree upper bound on @math . For undirected graphs and element-connectivity requirements with maximum connectivity requirement @math , our algorithm computes a solution that is a @math -approximation on the cost, and the degree of each node @math in the solution is at most @math . These ratios improve the previous @math -approximation on the cost and @math -approximation on the degrees. Our algorithms can be used to improve approximation ratios for other node-connectivity problems such as undirected $k...", "", "We study undirected networks with edge costs that satisfy the triangle inequality. Let @math denote the number of nodes. We present an @math -approximation algorithm for a generalization of the metric-cost subset @math -node-connectivity problem. Our approximation guarantee is proved via lower bounds that apply to the simple edge-connectivity version of the problem, where the requirements are for edge-disjoint paths rather than for openly node-disjoint paths. A corollary is that, for metric costs and for each @math , there exists a @math -node connected graph whose cost is within a factor of @math of the cost of any simple @math -edge connected graph. 
Based on our @math -approximation algorithm, we present an @math -approximation algorithm for the metric-cost node-connectivity survivable network design problem, where @math denotes the maximum requirement over all pairs of nodes. Our results contrast with the case of edge costs of 0 or 1, where Kortsarz, Krauthgamer, and Lee. [SIAM J. Comput., 33 (2004), pp. 704-720] recently proved, assuming NP @math DTIME( @math ), a hardness-of-approximation lower bound of @math for the subset @math -node-connectivity problem, where @math denotes a small positive number.", "The minimum cost subset k-connected subgraph problem is a cornerstone problem in the area of network design with vertex connectivity requirements. In this problem, we are given a graph G=(V,E) with costs on edges and a set of terminals T⊆V. The goal is to find a minimum cost subgraph such that every pair of terminals are connected by k openly (vertex) disjoint paths. In this paper, we present an approximation algorithm for the subset k-connected subgraph problem which improves on the previous best approximation guarantee of O(k2logk) by Nutov (ACM Trans. Algorithms 9(1):1, 2012). Our approximation guarantee, ?(|T|), depends upon the number of terminals: @math (|T|) = O(k ^2 k) & if 2k |T| k. This suggests that the hardest instances of the problem are when |T|?k.", "We present two new approximation algorithms for the problem of finding a k-node connected spanning subgraph (directed or undirected) of minimum cost. The best known approximation guarantees for this problem were @math for both directed and undirected graphs, and @math for undirected graphs with @math , where @math is the number of nodes in the input graph. Our first algorithm has approximation ratio @math , which is @math except for very large values of @math , namely, @math . This algorithm is based on a new result on @math -connected @math -critical graphs, which is of independent interest in the context of graph theory. 
Our second algorithm uses the primal-dual method and has approximation ratio @math for all values of @math . Combining these two gives an algorithm with approximation ratio @math , which asymptotically improves the best known approximation guarantee for directed graphs for all values of @math , and for undirected graphs for @math . Moreover, this is the first algorithm that has an approximation guarantee better than @math for all values of @math . Our approximation ratio also provides an upper bound on the integrality gap of the standard LP-relaxation.", "We give approximation algorithms for the Survivable Network problem. The input consists of a graph G = (V,E) with edge node-costs, a node subset S ⊆ V, and connectivity requirements r(s,t):s,t ∈ T ⊆ V . The goal is to find a minimum cost subgraph H of G that for all s,t ∈ T contains r(s,t) pairwise edge-disjoint st-paths such that no two of them have a node in S s s,t in common. Three extensively studied particular cases are: Edge-Connectivity Survivable Network (S = ∅), Node-Connectivity Survivable Network (S = V), and Element-Connectivity Survivable Network (r(s,t) = 0 whenever s ∈ S or t ∈ S). Let k = maxs,t ∈ T r(s,t). In Rooted Survivable Network, there is s ∈ T such that r(u,t) = 0 for all u ≠ s, and in the Subset k-Connected Subgraph problem r(s,t) = k for all s,t ∈ T. For edge-costs, our ratios are O(k log k) for Rooted Survivable Network and O(k2 log k) for Subset k-Connected Subgraph. This improves the previous ratio O(k2 log n), and for constant values of k settles the approximability of these problems to a constant. For node-costs, our ratios are as follows. —O(k log |T|) for Element-Connectivity Survivable Network, matching the best known ratio for Edge-Connectivity Survivable Network. —O(k2 log |T|) for Rooted Survivable Network and O(k3 log |T|) for Subset k-Connected Subgraph, improving the ratio O(k8 log2 |T|). 
—O(k4 log2 |T|) for Survivable Network; this is the first nontrivial approximation algorithm for the node-costs version of the problem.", "A subset T@?V of terminals is k-connected to a root s in a directed undirected graph J if J has k internally-disjoint vs-paths for every v@?T; T is k-connected in J if for every s@?T the set T@? s is k-connected to s in J. We consider the Subsetk-ConnectivityAugmentation problem: given a graph G=(V,E) with edge node-costs, a node subset T@?V, and a subgraph J=(V,E\"J) of G such that T is (k-1)-connected in J, find a minimum-cost augmenting edge-set F@?E@?E\"J such that T is k-connected in J@?F. The problem admits trivial ratio O(|T|^2). We consider the case |T|>k and prove that for directed undirected graphs and edge node-costs, a @r-approximation algorithm for Rooted Subsetk-Connectivity Augmentation implies the following approximation ratios for Subsetk-Connectivity Augmentation:(i)b(@r+k)+(|T||T|-k)^2O(log|T||T|-k), where b=1 for undirected graphs and b=2 for directed graphs. (ii)@r@?O(|T||T|-klogk). The best known values of @r on undirected graphs are min |T|,O(k) for edge-costs and min |T|,O(klog|T|) for node-costs; for directed graphs @r=|T| for both versions. Our results imply that unless k=|T|-o(|T|), Subsetk-Connectivity Augmentation admits the same ratios as the best known ones for the rooted version. This improves the ratios in Nutov (2009) [19] and Laekhanukit (2011) [15].", "We present an @math -approximation algorithm for the problem of finding a @math -vertex connected spanning subgraph of minimum cost, where @math is the number of vertices in an input graph, and @math is a connectivity requirement. Our algorithm is the first that achieves a polylogarithmic approximation ratio for all values of @math and @math , and it works for both directed and undirected graphs. As in previous works, we use the Frank--Tardos algorithm for finding @math -outconnected subgraphs as a subroutine. 
However, with our structural lemmas, we are able to show that we need only partial solutions returned by the Frank--Tardos algorithm; thus, we can avoid paying the whole cost of an optimal solution every time the algorithm is applied.", "We present an approximation algorithm for the problem of finding a minimum-cost k-vertex connected spanning subgraph, assuming that the number of vertices is at least 6k 2 . The approximation guarantee is six times the kth harmonic number (which is O(log k)), and this is also an upper bound on the integrality ratio for a standard linear programming relaxation." ] }
1611.01043
2547425965
We suggest general methods to construct asymptotically uniformly valid confidence intervals post-model-selection. The constructions are based on principles recently proposed by (2013). In particular the candidate models used can be misspecified, the target of inference is model-specific, and coverage is guaranteed for any data-driven model selection procedure. After developing a general theory we apply our methods to practically important situations where the candidate set of models, from which a working model is selected, consists of fixed design homoskedastic or heteroskedastic linear models, or of binary regression models with general link functions. In an extensive simulation study, we find that the proposed confidence intervals perform remarkably well, even when compared to existing methods that are tailored only for specific model selection procedures.
The present article is devised in the spirit of @cite_1 , in the sense that we aim at inference post-model-selection that is valid irrespective of the employed model selection procedure. Very recently, @cite_4 have investigated a classical sample splitting procedure that is also independent of the underlying selection method. However, they consider only the i.i.d. case, thereby excluding, for instance, fixed design regression. Several other authors have proposed inference procedures post-model-selection that are tailored towards specific selection methods and for specific modeling situations. In the context of fitting linear regression models to Gaussian data, methods that provide valid confidence sets post-model-selection, and that are constructed for specific model selection procedures (e.g., forward stepwise, least-angle-regression or the lasso) and for targets of inference similar to those considered in the present article, have been recently obtained by @cite_12 , @cite_15 , @cite_3 and @cite_2 . @cite_11 extended the approach of @cite_12 to non-Gaussian data by obtaining uniform asymptotic results. Furthermore, valid inference post-model-selection on conventional regression parameters under sparsity conditions was considered, among others, by and .
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_3", "@cite_2", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2552339304", "2009462809", "2099932489", "1509689762", "2164275109", "2204774351", "2963312390" ], "abstract": [ "Several new methods have been proposed for performing valid inference after model selection. An older method is sampling splitting: use part of the data for model selection and part for inference. In this paper we revisit sample splitting combined with the bootstrap (or the Normal approximation). We show that this leads to a simple, assumption-free approach to inference and we establish results on the accuracy of the method. In fact, we find new bounds on the accuracy of the bootstrap and the Normal approximation for general nonlinear parameters with increasing dimension which we then use to assess the accuracy of regression inference. We show that an alternative, called the image bootstrap, has higher coverage accuracy at the cost of more computation. We define new parameters that measure variable importance and that can be inferred with greater accuracy than the usual regression coefficients. There is a inference-prediction tradeoff: splitting increases the accuracy and robustness of inference but can decrease the accuracy of the predictions.", "It is common practice in statistical data analysis to perform data-driven variable selection and derive statistical inference from the resulting model. Such inference enjoys none of the guarantees that classical statistical theory provides for tests and confidence intervals when the model has been chosen a priori. We propose to produce valid post-selection inference'' by reducing the problem to one of simultaneous inference and hence suitably widening conventional confidence and retention intervals. Simultaneity is required for all linear functions that arise as coefficient estimates in all submodels. 
By purchasing simultaneity insurance'' for all possible submodels, the resulting post-selection inference is rendered universally valid under all possible model selection procedures. This inference is therefore generally conservative for particular selection procedures, but it is always less conservative than full Scheffe protection. Importantly it does not depend on the truth of the selected submodel, and hence it produces valid inference even in wrong models. We describe the structure of the simultaneous inference problem and give some asymptotic results.", "To perform inference after model selection, we propose controlling the selective type I error; i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting and has a similar intuitive justification, but is more powerful. Exploiting the classical theory of Lehmann", "1.1. Introduction.- 1.2. Outer Integrals and Measurable Majorants.- 1.3. Weak Convergence.- 1.4. Product Spaces.- 1.5. Spaces of Bounded Functions.- 1.6. Spaces of Locally Bounded Functions.- 1.7. The Ball Sigma-Field and Measurability of Suprema.- 1.8. Hilbert Spaces.- 1.9. Convergence: Almost Surely and in Probability.- 1.10. Convergence: Weak, Almost Uniform, and in Probability.- 1.11. Refinements.- 1.12. Uniformity and Metrization.- 2.1. Introduction.- 2.2. Maximal Inequalities and Covering Numbers.- 2.3. Symmetrization and Measurability.- 2.4. Glivenko-Cantelli Theorems.- 2.5. Donsker Theorems.- 2.6. Uniform Entropy Numbers.- 2.7. Bracketing Numbers.- 2.8. Uniformity in the Underlying Distribution.- 2.9. Multiplier Central Limit Theorems.- 2.10. Permanence of the Donsker Property.- 2.11. The Central Limit Theorem for Processes.- 2.12. Partial-Sum Processes.- 2.13. Other Donsker Classes.- 2.14. Tail Bounds.- 3.1. Introduction.- 3.2. 
M-Estimators.- 3.3. Z-Estimators.- 3.4. Rates of Convergence.- 3.5. Random Sample Size, Poissonization and Kac Processes.- 3.6. The Bootstrap.- 3.7. The Two-Sample Problem.- 3.8. Independence Empirical Processes.- 3.9. The Delta-Method.- 3.10. Contiguity.- 3.11. Convolution and Minimax Theorems.- A. Appendix.- A.1. Inequalities.- A.2. Gaussian Processes.- A.2.1. Inequalities and Gaussian Comparison.- A.2.2. Exponential Bounds.- A.2.3. Majorizing Measures.- A.2.4. Further Results.- A.3. Rademacher Processes.- A.4. Isoperimetric Inequalities for Product Measures.- A.5. Some Limit Theorems.- A.6. More Inequalities.- A.6.1. Binomial Random Variables.- A.6.2. Multinomial Random Vectors.- A.6.3. Rademacher Sums.- Notes.- References.- Author Index.- List of Symbols.", "We develop a framework for post model selection inference, via marginal screening, in linear regression. At the core of this framework is a result that characterizes the exact distribution of linear functions of the response @math , conditional on the model being selected ( condition on selection\" framework). This allows us to construct valid confidence intervals and hypothesis tests for regression coefficients that account for the selection procedure. In contrast to recent work in high-dimensional statistics, our results are exact (non-asymptotic) and require no eigenvalue-like assumptions on the design matrix @math . Furthermore, the computational cost of marginal regression, constructing confidence intervals and hypothesis testing is negligible compared to the cost of linear regression, thus making our methods particularly suitable for extremely large datasets. Although we focus on marginal screening to illustrate the applicability of the condition on selection framework, this framework is much more broadly applicable. 
We show how to apply the proposed framework to several other selection procedures including orthogonal matching pursuit, non-negative least squares, and marginal screening+Lasso.", "ABSTRACTWe propose new inference tools for forward stepwise regression, least angle regression, and the lasso. Assuming a Gaussian model for the observation vector y, we first describe a general scheme to perform valid inference after any selection event that can be characterized as y falling into a polyhedral set. This framework allows us to derive conditional (post-selection) hypothesis tests at any step of forward stepwise or least angle regression, or any step along the lasso regularization path, because, as it turns out, selection events for these procedures can be expressed as polyhedral constraints on y. The p-values associated with these tests are exactly uniform under the null distribution, in finite samples, yielding exact Type I error control. The tests can also be inverted to produce confidence intervals for appropriate underlying regression parameters. The R package selectiveInference, freely available on the CRAN repository, implements the new inference tools described in this article. Suppl...", "Recently, (2014) developed a method for making inferences on parameters after model selection, in a regression setting with normally distributed errors. In this work, we study the large sample properties of this method, without assuming normality. We prove that the test statistic of (2014) is asymptotically pivotal, as the number of samples n grows and the dimension d of the regression problem stays fixed; our asymptotic result is uniformly valid over a wide class of nonnormal error distributions. We also propose an efficient bootstrap version of this test that is provably (asymptotically) conservative, and in practice, often delivers shorter confidence intervals that the original normality-based approach. 
Finally, we prove that the test statistic of (2014) does not converge uniformly in a high-dimensional setting, when the dimension d is allowed grow." ] }
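The sample-splitting idea revisited in the record above (select a model on one half of the data, do inference on the other half) can be illustrated with a short simulation. This is an illustrative sketch, not any cited author's implementation; all numbers (sample size, coefficients, the 1.96 normal quantile, the toy selection rule) are assumptions. The target of inference is the selected working model's projection coefficient, which here equals the corresponding true coefficient because the covariates are independent:

```python
# Illustrative simulation of sample splitting for post-selection
# inference: half the data selects the covariate, the other half builds
# a normal confidence interval for the selected model's projection
# coefficient.  Parameters are illustrative assumptions.
import math
import random

random.seed(0)

def one_trial(n=400):
    x1 = [random.gauss(0, 1) for _ in range(n)]
    x2 = [random.gauss(0, 1) for _ in range(n)]
    y = [1.0 * a + 0.5 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]
    half = n // 2

    def dot(u, v, lo, hi):
        return sum(u[i] * v[i] for i in range(lo, hi))

    # Selection half: keep the covariate with the larger empirical
    # covariance with y (a toy model-selection rule).
    use_x1 = abs(dot(x1, y, 0, half)) >= abs(dot(x2, y, 0, half))
    x = x1 if use_x1 else x2
    # Covariates are independent, so the selected working model's
    # projection coefficient equals the corresponding true coefficient.
    target = 1.0 if use_x1 else 0.5

    # Inference half: no-intercept OLS of y on the selected covariate.
    sxx = dot(x, x, half, n)
    bhat = dot(x, y, half, n) / sxx
    sigma2 = sum((y[i] - bhat * x[i]) ** 2 for i in range(half, n)) / (n - half - 1)
    se = math.sqrt(sigma2 / sxx)
    return abs(bhat - target) <= 1.96 * se  # does the 95% CI cover?

coverage = sum(one_trial() for _ in range(500)) / 500
print(round(coverage, 2))  # should be close to the nominal 0.95
```

Because the inference half is independent of the selection event, the interval is valid conditionally on whichever model was chosen, which is exactly the robustness property the surveyed methods aim for without sacrificing half the data.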
1611.01079
2548058031
We study convergence to equilibrium for a large class of Markov chains in random environment. The chains are sparse in the sense that in every row of the transition matrix @math the mass is essentially concentrated on few entries. Moreover, the random environment is such that rows of @math are independent and such that the entries are exchangeable within each row. This includes various models of random walks on sparse random directed graphs. The models are generally non-reversible, and the equilibrium distribution is itself unknown. In this general setting we establish the cutoff phenomenon for the total variation distance to equilibrium, with mixing time given by the logarithm of the number of states times the inverse of the average row entropy of @math . As an application, we consider the case where the rows of @math are i.i.d. random vectors in the domain of attraction of a Poisson-Dirichlet law with index @math . Our main results are based on a detailed analysis of the weight of the trajectory followed by the walker. This approach offers an interpretation of cutoff as an instance of the concentration of measure phenomenon.
The present paper considerably extends the results in @cite_21 by establishing cutoff for a large class of non-reversible sparse stochastic matrices, not necessarily arising as the transition matrix of the random walk on a (directed) graph. The time-irreversibility actually plays a crucial role in our proofs: despite the lack of an explicit underlying structure, the Markov chains that we consider turn out to exhibit a spontaneous "non-backtracking" tendency which allows us to establish a certain i.i.d. approximation for the environment seen by a typical walker. While the overall strategy of proof of our main result is closely related to the one we introduced in @cite_21 , the level of generality allowed for in the transition probabilities requires an entirely new analysis of path weights. For instance, one of the features making the control of trajectory weights much more challenging here is the lack of nontrivial upper bounds on the probability of any particular transition (as opposed to the model studied in @cite_21 where the minimal out-degree was assumed to be at least @math ).
{ "cite_N": [ "@cite_21" ], "mid": [ "2267078894" ], "abstract": [ "A finite ergodic Markov chain exhibits cutoff if its distance to equilibrium remains close to its initial value over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Originally discovered in the context of card shuffling (Aldous-Diaconis, 1986), this remarkable phenomenon is now rigorously established for many reversible chains. Here we consider the non-reversible case of random walks on sparse directed graphs, for which even the equilibrium measure is far from being understood. We work under the configuration model, allowing both the in-degrees and the out-degrees to be freely specified. We establish the cutoff phenomenon, determine its precise window and prove that the cutoff profile approaches a universal shape. We also provide a detailed description of the equilibrium measure." ] }
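The setting of the record above — a sparse random stochastic matrix whose rows put mass on a few entries with exchangeable weights — can be explored numerically. The sketch below builds such a matrix, approximates the (unknown) equilibrium by running the chain for a long time, and tracks the total-variation distance of the walk started at a fixed state; all parameters (n, d, the time points) are illustrative choices, not the paper's:

```python
# Toy illustration of mixing for a sparse random stochastic matrix:
# each row of P puts mass on d random entries, with exchangeable random
# weights within the row.  We track the total-variation distance between
# the law of the walk started at state 0 and an approximation of the
# equilibrium distribution.  Parameters are illustrative only.
import random

random.seed(1)
n, d = 300, 3
rows = []
for _ in range(n):
    targets = random.sample(range(n), d)
    w = [random.random() for _ in range(d)]
    s = sum(w)
    rows.append([(j, wj / s) for j, wj in zip(targets, w)])

def step(mu):
    """One application of the transition matrix: returns mu P."""
    nu = [0.0] * n
    for i, m in enumerate(mu):
        if m:
            for j, p in rows[i]:
                nu[j] += m * p
    return nu

# Approximate equilibrium: run the chain for a long time from uniform.
pi = [1.0 / n] * n
for _ in range(200):
    pi = step(pi)

def tv_after(t):
    """TV distance between the law of the walk from state 0 at time t and pi."""
    mu = [0.0] * n
    mu[0] = 1.0
    for _ in range(t):
        mu = step(mu)
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, pi))

# Heuristically, mixing happens around log(n) / (average row entropy);
# the distance stays near 1 for a while and then drops.
print([round(tv_after(t), 2) for t in (1, 3, 5, 8, 12)])
```

Of course, a single small instance only hints at the cutoff shape; the paper's contribution is proving the abrupt transition, with its precise location and window, in this generality.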
1611.00463
2950218621
Sorting is one of the most extensively studied problems across many fields of science. Although many techniques and algorithms have been proposed for efficient parallel sorting, achieving the desired performance on different types of architectures with large numbers of processors is still a challenging issue. The level of parallelism in applications can be maximized by minimizing the overheads due to load imbalance and the waiting time due to memory latencies. In this paper, we present a distributed sorting algorithm implemented in PGX.D, a fast distributed graph processing system, which outperforms Spark's distributed sorting implementation by around 2x-3x by hiding communication latencies and minimizing unnecessary overheads. Furthermore, we show that the proposed PGX.D sorting method efficiently handles datasets containing many duplicated entries and always maintains balanced workloads across different input data distributions.
In this section, we briefly discuss the implementation of some existing sorting techniques and their challenges. Parallel algorithms have been studied for many years in pursuit of efficient sorting on distributed machines. Maintaining balanced load is usually difficult in most of these algorithms when facing datasets containing many duplicated entries, which typically results in some machines holding larger amounts of data than others. Most of these techniques consist of two main steps: partitioning and merging. The big challenge in the partitioning step is to maintain load balance across the distributed machines while also distributing the data among them in sorted order @cite_12 @cite_10 . The big challenge in the merging step is to reduce latencies and preserve scalability through all merging stages until the end. The performance of the merging step usually scales well with a small number of machines; with larger numbers of machines, however, synchronization reduces its parallel scalability. Moreover, most of these existing techniques require significant communication bandwidth, which increases overheads and degrades parallel performance @cite_2 .
{ "cite_N": [ "@cite_10", "@cite_12", "@cite_2" ], "mid": [ "2016240806", "1694308253", "2039764713" ], "abstract": [ "We study the problems of sorting and ranking n processors that have initial values (not necessarily distinct) in a distributed system. Sorting means that the initial values have to move around in the network and be assigned to the processors according to their distinct identities, while ranking means that the numbers 1, 2,..., n have to be assigned to the processors according to their initial values; ties between initial values can be broken in any chosen way. Assuming a tree network, and assuming that a message can contain an initial value, an identity, or a rank, we present an algorithm for the ranking problem that uses, in the worst case, at most n2 2+O(n) such messages. The algorithm is then extended to perform sorting, using in the worst case at most 3n2 4+O(n) messages. Both algorithms are using a total of O(n) space. The algorithms are extended to general networks. The expected behavior of these algorithms for three classes of trees is discussed. Assuming that the initial values, identities, and ranks can be compared only within themselves, lower bounds of n2 2 and 3n2 4 messages are proved for a worst case execution of any algorithm to solve the ranking and sorting problems, respectively.", "Merge sort is useful in sorting a great number of data progressively, especially when they can be partitioned and easily collected to a few processors. Merge sort can be parallelized, however, conventional algorithms using distributed memory computers have poor performance due to the successive reduction of the number of participating processors by a half, up to one in the last merging stage. This paper presents load-balanced parallel merge sort where all processors do the merging throughout the computation. Data are evenly distributed to all processors, and every processor is forced to work in all merging phases. 
An analysis shows the upper bound of the speedup of the merge time as (P- 1) log P where P is the number of processors. We have reached a speedup of 8.2 (upper bound is 10.5) on 32-processor Cray T3E in sorting of 4M 32-bit integers.", "A straight-line-topology local area network (LAN) to which a number of nodes are connected either in series or in parallel is considered. A file F is arbitrarily partitioned among these sites. The problem studied is that of rearranging the records of the file such that the keys of records at lower-ranking sites are all smaller than those at higher-ranking sites. Lower bounds on the worst-case communication complexity are given for both the series and parallel arrangements, and algorithms optimal for all networks and files are presented. >" ] }
1611.00463
2950218621
Sorting has long been one of the most extensively studied problems in scientific computing. Although many techniques and algorithms have been proposed for efficient parallel sorting, achieving the desired performance on different types of architectures with large numbers of processors remains challenging. The level of parallelism in applications can be maximized by minimizing the overheads due to load imbalance and the waiting time due to memory latencies. In this paper, we present a distributed sorting algorithm implemented in PGX.D, a fast distributed graph processing system, which outperforms Spark's distributed sorting implementation by around 2x-3x by hiding communication latencies and minimizing unnecessary overheads. Furthermore, we show that the proposed PGX.D sorting method handles datasets containing many duplicated entries efficiently and always keeps the workload balanced across different input data distributions.
Batcher's bitonic sort @cite_7 is essentially a parallel merge sort and was considered one of the most practical parallel sorting algorithms for many years. In this algorithm, the data on each processor is sorted locally; then, for each pair of processors, the data sequence is merged into ascending order, while on the alternate pair it is merged into descending order. The algorithm is popular because of its simple communication pattern @cite_1 @cite_5 . However, it usually suffers from high communication overhead: its merging step depends strongly on the data characteristics, and it often needs to exchange the entire data assigned to each processor @cite_15 .
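A minimal sequential sketch of Batcher's scheme (assuming a power-of-two input length): the two halves are sorted in opposite directions, and the resulting bitonic sequence is merged by compare-exchange steps.

```python
def bitonic_merge(a, ascending):
    if len(a) <= 1:
        return a
    half = len(a) // 2
    for i in range(half):  # compare-exchange across the two halves
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return bitonic_merge(a[:half], ascending) + bitonic_merge(a[half:], ascending)

def bitonic_sort(a, ascending=True):
    """Sort a list whose length is a power of two."""
    if len(a) <= 1:
        return a
    half = len(a) // 2
    first = bitonic_sort(a[:half], True)     # ascending half
    second = bitonic_sort(a[half:], False)   # descending half
    # the concatenation of the two halves is a bitonic sequence
    return bitonic_merge(first + second, ascending)

print(bitonic_sort([3, 7, 4, 8, 6, 2, 1, 5]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

In the distributed setting, each compare-exchange over `a[i]` and `a[i + half]` corresponds to a data exchange between a pair of processors, which is where the communication overhead noted above arises.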
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_1", "@cite_7" ], "mid": [ "2064687921", "2147346217", "2156609241", "" ], "abstract": [ "An O(n) algorithm to sort n2elements on an Illiac IV-like n × n mesh-connected processor array is presented. This algorithm sorts the n2elements into row-major order and is an adaptation of Batcher's bitonic sort. A slight modification of our algorithm yields an O(n) algorithm to sort n2elements into snake-like row-major order. Extensions to the case of a j-dimensional processor array are discussed.", "Parallel sorting methods for distributed memory systems often use partitioning algorithms to prepare the redistribution of data items. This article proposes a partitioning algorithm that calculates a redistribution specified by the number of data items to be finally located on each process. This partitioning algorithm can also be used for data items with weights, which might express a computational load to be expected, and to produce a redistribution with an individual accumulated weight of data items specified for each process. Another important feature is that data sets with duplicated data keys can be handled. Parallel sorting with those properties is often needed for parallel scientific application codes, such as particle simulations, in which the dynamics of the simulated system may destroy locality and load balance required for an efficient computation. It is applied to random sample data and to a particle simulation code requiring a sorting. Performance results have been obtained on an IBM Blue Gene P platform with up to 32768 cores. The results show that the proposed parallel sorting method performs well in comparison to other existing algorithms.", "Sorting is an important component of many applications, and parallel sorting algorithms have been studied extensively in the last three decades. 
One of the earliest parallel sorting algorithms is bitonic sort, which is represented by a sorting network consisting of multiple butterfly stages. The paper studies bitonic sort on modern parallel machines which are relatively coarse grained and consist of only a modest number of nodes, thus requiring the mapping of many data elements to each processor. Under such a setting optimizing the bitonic sort algorithm becomes a question of mapping the data elements to processing nodes (data layout) such that communication is minimized. The authors developed a bitonic sort algorithm which minimizes the number of communication steps and optimizes the local computation. The resulting algorithm is faster than previous implementations, as experimental results collected on a 64 node Meiko CS-2 show.", "" ] }
1611.00463
2950218621
Sorting has long been one of the most extensively studied problems in scientific computing. Although many techniques and algorithms have been proposed for efficient parallel sorting, achieving the desired performance on different types of architectures with large numbers of processors remains challenging. The level of parallelism in applications can be maximized by minimizing the overheads due to load imbalance and the waiting time due to memory latencies. In this paper, we present a distributed sorting algorithm implemented in PGX.D, a fast distributed graph processing system, which outperforms Spark's distributed sorting implementation by around 2x-3x by hiding communication latencies and minimizing unnecessary overheads. Furthermore, we show that the proposed PGX.D sorting method handles datasets containing many duplicated entries efficiently and always keeps the workload balanced across different input data distributions.
Radix sort @cite_8 is also used to implement parallel and distributed sorting in many applications due to the simplicity of its implementation @cite_16 . The algorithm does not rely on comparisons between data entries; instead, it considers the bit representation of the data. It starts by examining the least significant bits of each data entry; then, in each step, entries are ordered by their next r-bit values. One of the big challenges in implementing this technique is handling an unequal number of input keys across processors @cite_20 @cite_19 , and it usually suffers from irregularity in communication and computation. Since the algorithm also depends strongly on the data characteristics, the computation on each processor and the communication between processors are not known in advance, which can negatively affect parallel performance.
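The least-significant-digit scheme described above can be illustrated with a small sequential sketch that sorts non-negative integers r bits at a time; the distributed versions cited here add key redistribution across processors on top of this idea.

```python
def radix_sort(keys, r=4):
    """LSD radix sort of non-negative integers, examining r bits per pass."""
    if not keys:
        return keys
    mask = (1 << r) - 1
    shift = 0
    while max(keys) >> shift:                # more significant bits remain
        buckets = [[] for _ in range(1 << r)]
        for k in keys:                       # stable distribution pass
            buckets[(k >> shift) & mask].append(k)
        keys = [k for b in buckets for k in b]
        shift += r
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

The bucket sizes in each pass depend entirely on the key values, which illustrates why the per-processor load in a distributed variant cannot be predicted in advance.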
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_20", "@cite_8" ], "mid": [ "2349789703", "2062743552", "2003515726", "1992754027" ], "abstract": [ "Scheduling algorithm is always a hot topic in cloud computing environment. In order to eliminate system bottleneck and balance load dynamically. A load balancing task scheduling algorithm based on weighted random and feedback mechanisms was proposed in this paper. At first the chosen cloud scheduling host chose resources by needs and made static quantification, and then sorted them; secondly the algorithm chose resources from which sorted by weight randomly; then it acquired corresponding dynamic information to make load filter and sort the left. At last it achieved the self-adaptively to system load through feedback mechanisms. The experiment shows that the algorithm has avoided the system bottleneck effectively and has achieved balanced load as well as self-adaptability to it.", "Radix sort suffers from the unequal number of input keys due to the unknown characteristics of input keys. We present in this report a new radix sorting algorithm, called balanced radix sort which guarantees that each processor has exactly the same number of keys regardless of the data characteristics. The main idea of balanced radix sort is to store any processor which has over n P keys to its neighbor processor, where n is the total number of keys and P is the number of processors. We have implemented balanced radix sort on two distributed-memory machines IBM SPZWN and Cray T3E. Multiple versions of 32-bit and 64-bit integers and 64-bit doubles are implemented in Message Passing Interface for portability. The sequential and parallel versions consist of approximately 50 and 150 lines of C code respectively including parallel constructs. Experimental results indicate that balanced radix sort can sort OSG integers in 20 seconds and 128M doubles in 15 seconds on a 64-processor SPZWN while yielding over 40-fold speedup. 
When compared with other radix sorting algorithms, balanced radix sort outperformed, showing two to six times faster. When compared with sample sorting algorithms, which are known to outperform all similar methods, balanced radix sort is 30 to 100 faster based on the same machine and key initialization.", "Two emerging hardware trends will dominate the database system technology in the near future: increasing main memory capacities of several TB per server and massively parallel multi-core processing. Many algorithmic and control techniques in current database technology were devised for disk-based systems where I O dominated the performance. In this work we take a new look at the well-known sort-merge join which, so far, has not been in the focus of research in scalable massively parallel multi-core data processing as it was deemed inferior to hash joins. We devise a suite of new massively parallel sort-merge (MPSM) join algorithms that are based on partial partition-based sorting. Contrary to classical sort-merge joins, our MPSM algorithms do not rely on a hard to parallelize final merge step to create one complete sort order. Rather they work on the independently created runs in parallel. This way our MPSM algorithms are NUMA-affine as all the sorting is carried out on local memory partitions. An extensive experimental evaluation on a modern 32-core machine with one TB of main memory proves the competitive performance of MPSM on large main memory databases with billions of objects. It scales (almost) linearly in the number of employed cores and clearly outperforms competing hash join proposals -- in particular it outperforms the \"cutting-edge\" Vectorwise parallel query engine by a factor of four.", "Load balanced parallel radix sort solved the load imbalance problem present in parallel radix sort. By redistributing the keys in each round of radix, each processor has exactly the same number of keys, thereby reducing the overall sorting time. 
Load balanced radix sort is currently known as the fastest internal sorting method for distributed-memory multiprocessors. However, as the computation time is balanced, the communication time emerges as the bottleneck of the overall sorting performance due to key redistribution. We present in this report a new parallel radix sorter that solves the communication problem of balanced radix sort, called partitioned parallel radix sort. The new method reduces the communication time by eliminating the redistribution steps. The keys are first sorted in a top-down fashion (left-to-right as opposed to right-to-left) by using some most significant bits. Once the keys are localized to each processor, the rest of sorting is confined within each processor, hence eliminating the need for global redistribution of keys. It enables well balanced communication and computation across processors. The proposed method has been implemented in three different distributed-memory platforms, including IBM SP2, Cray T3E, and PC Cluster. Experimental results with various key distributions indicate that partitioned parallel radix sort indeed shows significant improvements over balanced radix sort. IBM SP2 shows 13 to 30 improvement while Cray SGI T3E does 20 to 100 in execution time. PC cluster shows over 2.4-fold improvement in execution time." ] }
1611.00463
2950218621
Sorting has long been one of the most extensively studied problems in scientific computing. Although many techniques and algorithms have been proposed for efficient parallel sorting, achieving the desired performance on different types of architectures with large numbers of processors remains challenging. The level of parallelism in applications can be maximized by minimizing the overheads due to load imbalance and the waiting time due to memory latencies. In this paper, we present a distributed sorting algorithm implemented in PGX.D, a fast distributed graph processing system, which outperforms Spark's distributed sorting implementation by around 2x-3x by hiding communication latencies and minimizing unnecessary overheads. Furthermore, we show that the proposed PGX.D sorting method handles datasets containing many duplicated entries efficiently and always keeps the workload balanced across different input data distributions.
To increase the parallel performance of a sorting algorithm, its structure and design should be independent of the data characteristics so as to decrease communication overheads. Therefore, sample sort is often chosen for implementing distributed sorting. It is a fast sorting algorithm that keeps the load balanced better than quicksort and does not suffer from the communication problems of Batcher's bitonic sort and radix sort mentioned above @cite_9 @cite_18 . Its runtime is almost independent of the input data distribution. It works by choosing samples at regular positions from the locally sorted data of each processor and selecting final splitters from these samples to partition the data across the processors. However, in addition to the scalability challenge in its merging step, its performance depends heavily on choosing a sufficient number of samples from each processor and picking efficient final splitters from them. If this step is not designed carefully, the algorithm may suffer from load imbalance on datasets containing many duplicated entries.
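The regular-sampling splitter selection step can be sketched as follows; the block contents and processor count are illustrative and not taken from the cited implementations.

```python
def choose_splitters(local_sorted_blocks, p):
    """Regular sampling: take p evenly spaced samples from each locally
    sorted block, then pick p-1 final splitters from the gathered samples."""
    samples = []
    for block in local_sorted_blocks:
        n = len(block)
        samples += [block[i * n // p] for i in range(p)]
    samples.sort()
    m = len(samples)
    return [samples[(i + 1) * m // p] for i in range(p - 1)]

blocks = [[1, 5, 9, 12], [2, 3, 7, 14], [4, 6, 8, 11]]  # locally sorted data
print(choose_splitters(blocks, 3))  # [4, 7]
```

Each processor would then keep the keys below the first splitter, forward the next range to processor 1, and so on; with many duplicated keys the samples can collapse onto a few distinct values, which is the load-imbalance risk noted above.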
{ "cite_N": [ "@cite_9", "@cite_18" ], "mid": [ "2092574257", "1539580421" ], "abstract": [ "Abstract A new parallel sorting algorithm suitable for MIMD multiprocessor is presented. The algorithm reduces memory and bus contention, which many parallel sorting algorithms suffer from, by using a regular sampling of the data to ensure good pivot selection. For n data elements to be sorted and p processors, when n ≥ p3 the algorithm is shown to be asymptotically optimal. In theory, the algorithm is within a factor of 2 of achieving ideal load balancing. In practice, there is almost a perfect partitioning of work. On a variety of shared and distributed memory machines, the algorithm achieves better than half-linear speedups.", "Sample sort, a generalization of quicksort that partitions the input into many pieces, is known as the best practical comparison based sorting algorithm for distributed memory parallel computers. We show that sample sort is also useful on a single processor. The main algorithmic insight is that element comparisons can be decoupled from expensive conditional branching using predicated instructions. This transformation facilitates optimizations like loop unrolling and software pipelining. The final implementation, albeit cache efficient, is limited by a linear number of memory accesses rather than the ( O ! (n n ) ) comparisons. On an Itanium 2 machine, we obtain a speedup of up to 2 over std::sort from the GCC STL library, which is known as one of the fastest available quicksort implementations." ] }
1611.00679
2951972707
When sensors of different coexisting wireless body area networks (WBANs) transmit at the same time on the same channel, co-channel interference is experienced and the performance of the involved WBANs may be degraded. In this paper, we exploit the 16 channels available in the 2.4 GHz international band of ZIGBEE and propose a distributed scheme that avoids interference through predictable channel hopping based on Latin rectangles, namely CHIM. In the proposed CHIM scheme, each WBAN's coordinator picks a Latin rectangle whose rows are ZIGBEE channels and whose columns are sensor IDs. Based on the Latin rectangle of the individual WBAN, each sensor is allocated a backup time-slot and a channel to use if it experiences interference, such that collisions among the transmissions of coexisting WBANs are minimized. We further present a mathematical analysis that derives the collision probability of each sensor's transmission in the network. In addition, the efficiency of CHIM in terms of transmission delay and energy consumption minimization is validated by simulations.
A number of approaches have pursued cooperative communication and power control to mitigate co-channel interference. The authors of @cite_9 have adopted cooperative communication integrated with transmit power control for multiple coexisting WBANs. Similarly, the authors of @cite_12 have proposed a distributed carrier sense multiple access with collision avoidance scheme that enables interference-free sensors to transmit through an a priori agreed-upon channel, while interfering sensors may double their contention windows or use a switched channel to avoid intra-WBAN interference. The authors of @cite_13 have proposed a game-based power control approach to mitigate the impact of inter-WBAN interference.
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_12" ], "mid": [ "2159297871", "2093634766", "2292788438" ], "abstract": [ "A scheme for two-hop relay-assisted cooperative communications integrated with transmit power control, based on simple channel prediction, is presented. A large set of empirical on- and inter-body channel data is employed to model various scenarios of wireless body area network (WBAN) communica- tions, from one isolated WBAN up to 10 closely located WBANs coexisting. Our study shows that relay assisted power control can reduce approximately 60 circuit power consumption from that of constant transmission at 0 dBm, without much loss in reliability. Further, interference mitigation is significantly enhanced over constant transmission at −5 dBm, with similar power consumption. Such performance is maintained from 2 to 10 closely-located WBANs coexisting. And the joint algorithm works best for one isolated WBAN. A trade-off between power saving and interference mitigation is motivated, taking remaining sensor-battery level, amount of interference and on-body channel quality into account. Index Terms—Cooperative communications, interference mit- igation, transmit power control, wireless body area networks.", "Wireless body area network (WBAN) is an emerging technology that provides socialized health monitoring service. However, the quality of service can be severely degraded by concomitant inter-WBAN interference in some specific environments where multiple WBANs are densely deployed, e.g., hospitals and senior citizen communities. In this work, we propose a Bayesian game based power control scheme to mitigate the impact of inter-WBAN interference. By modeling WBANs as players and active links as types of players in the Bayesian game model, the proposed power control scheme tries to maximize each player's expected payoff involving both throughput and energy efficiency. 
We prove the existence of Bayesian equilibrium (BE) for the proposed power control game and also derive a practical sufficient condition for the uniqueness of BE. A harmonic mean based algorithm is then proposed to obtain an approximation of BE point without the need to pass message among WBANs, which satisfies the non-cooperative manner for inter-WBAN interference mitigation. Simulation results show that the proposed algorithm can converge to the BE point effectively.", "In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and coordinator. The proposed scheme enables low interfering nodes to transmit their messages using base channel. Depending on suitable situations, high interfering nodes double their contention windows (CW) and probably use switched orthogonal channel. Simulation results show that proposed scheme has far better minimum SINR (12dB improvement) and longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove the outage probability can be effectively reduced to the minimal." ] }
1611.00679
2951972707
When sensors of different coexisting wireless body area networks (WBANs) transmit at the same time on the same channel, co-channel interference is experienced and the performance of the involved WBANs may be degraded. In this paper, we exploit the 16 channels available in the 2.4 GHz international band of ZIGBEE and propose a distributed scheme that avoids interference through predictable channel hopping based on Latin rectangles, namely CHIM. In the proposed CHIM scheme, each WBAN's coordinator picks a Latin rectangle whose rows are ZIGBEE channels and whose columns are sensor IDs. Based on the Latin rectangle of the individual WBAN, each sensor is allocated a backup time-slot and a channel to use if it experiences interference, such that collisions among the transmissions of coexisting WBANs are minimized. We further present a mathematical analysis that derives the collision probability of each sensor's transmission in the network. In addition, the efficiency of CHIM in terms of transmission delay and energy consumption minimization is validated by simulations.
Other approaches have pursued multiple medium access schemes for interference mitigation. The authors of @cite_3 have proposed a distributed TDMA-based beacon interval shifting scheme that avoids overlap between the superframes' active periods by employing carrier sensing before a beacon transmission. Similarly, the authors of @cite_1 have adopted TDMA for scheduling intra-WBAN transmissions and carrier sensing to deal with inter-WBAN interference. Meanwhile, the approach of @cite_4 has mapped the channel allocation problem to a graph coloring problem, where the coordinators exchange messages to achieve a conflict-free coloring in a distributed manner. The authors of @cite_10 have proposed a multi-channel topology-transparent algorithm based on Latin squares for transmission scheduling in multihop packet radio networks, in which each node of a multi-channel TDMA-based network is equipped with a single transmitter and multiple receivers. Like @cite_10 , CHIM employs Latin rectangles to form a predictable non-interfering transmission schedule. However, CHIM assumes a single receiver rather than multiple receivers per node, and single-hop rather than multihop communication. In addition, CHIM avoids frequent channel switching by limiting it to the case when a sensor experiences interference.
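The Latin-square idea behind such schedules can be illustrated with a simple cyclic-shift construction; the channel numbers below are illustrative, and CHIM's actual rectangle selection may differ.

```python
def latin_rectangle(channels, n_sensors):
    """Cyclic-shift Latin rectangle: entry [t][s] is the channel of sensor s
    in time-slot t; no channel repeats within a row or within a column."""
    n = len(channels)
    return [[channels[(t + s) % n] for s in range(n_sensors)]
            for t in range(n)]

# illustrative values: 4 of the 16 ZIGBEE channels, 4 sensors
sched = latin_rectangle([11, 12, 13, 14], 4)
for row in sched:
    print(row)
```

Within each slot all sensors of one WBAN use distinct channels, and each sensor hops over every channel across slots, which is what makes the hopping pattern predictable yet collision-resistant.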
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_1", "@cite_3" ], "mid": [ "2157837962", "1922004315", "", "2065809582" ], "abstract": [ "Many transmission scheduling algorithms have been proposed to maximize spatial reuse and minimize the time division multiple access (TDMA) frame length in multihop packet radio networks. Almost all existing algorithms assume exact network topology information and require recomputations when the network topology changes. In addition, existing work focuses on single channel TDMA systems. In this paper, we propose a multichannel topology-transparent algorithm based on latin squares. The proposed algorithm has the flexibility to allow the growth of the network, i.e., the network can add more mobile nodes without recomputation of transmission schedules for existing nodes. At the same time, a minimum throughput is guaranteed. We analyze the efficiency of this algorithm and examine the topology-transparent characteristics and the sensitivity on design parameters by analytical and simulation techniques.", "In this paper, we study the interference mitigation scheme in a network with multiple co-located wireless body area networks (WBANs). Each WBAN consists of a coordinator and multiple sensor nodes. Interference happens when multiple nodes transmit to their coordinators at the same time. Our objective is twofold: firstly we want to construct an interference-free time slot schedule for all the nodes in the network; secondly we want to minimize the transmission cycle of all the nodes. Towards such goal, we map different time slots to distinct colors and propose a WBAN distributed coloring (DC) algorithm to find a color assignment for each node in the network. To implement the algorithm, the coordinators need to exchange messages for multiple rounds to achieve a non-conflict coloring scheme distributively. 
The simulation results show that on average the proposed algorithm has a significant performance gain over existing schemes.", "", "This paper investigates the issue of interference avoidance in body area networks (BANs). IEEE 802.15 Task Group 6 presented several schemes to reduce such interference, but these schemes are still not proper solutions for BANs. We present a novel distributed TDMA-based beacon interval shifting scheme that reduces interference in the BANs. A design goal of the scheme is to avoid the wakeup period of each BAN coinciding with other networks by employing carrier sensing before a beacon transmission. We analyze the beacon interval shifting scheme and investigate the proper back-off length when the channel is busy. We compare the performance of the proposed scheme with the schemes presented in IEEE 802.15 Task Group 6 using an OMNeT++ simulation. The simulation results show that the proposed scheme has a lower packet loss, energy consumption, and delivery-latency than the schemes of IEEE 802.15 Task Group 6." ] }
1611.00862
2547480197
In reinforcement learning, the standard criterion for evaluating policies in a state is the expectation of the (discounted) sum of rewards. However, this criterion may not always be suitable, so we consider an alternative criterion based on the notion of quantiles. For episodic reinforcement learning problems, we propose an algorithm based on stochastic approximation with two timescales. We evaluate our proposition on a simple model of the TV show Who Wants to Be a Millionaire.
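A minimal sketch of the quantile idea from the abstract: a Robbins-Monro recursion that tracks the tau-quantile of a return distribution from samples. This shows one timescale only; the function names, step sizes, and toy distribution are illustrative, not the paper's algorithm.

```python
import random

def track_quantile(sample, tau=0.5, steps=20000, lr=0.5):
    """Stochastic approximation of the tau-quantile: q follows a
    subgradient step on the pinball (quantile) loss for each sample."""
    q = 0.0
    for t in range(1, steps + 1):
        x = sample()
        q += (lr / t ** 0.6) * (tau - (1.0 if x < q else 0.0))
    return q

random.seed(0)
# toy return distribution: uniform on [0, 10], whose median is 5
print(track_quantile(lambda: random.uniform(0, 10), tau=0.5))  # close to 5
```

In the two-timescale setting of the paper, a recursion of this kind would run on the fast timescale while the policy parameters evolve on the slow one.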
Additionally, in the artificial intelligence community, @cite_11 @cite_3 also investigated the use of expected utility (EU) as a decision criterion in MDPs. To optimize it, they proposed a functional variant of value iteration. Continuing this line of work, @cite_9 investigated the use of Skew-Symmetric Bilinear (SSB) utility functions --- a generalization of EU that enables intransitive behaviors and violations of the independence axiom --- as decision criteria in finite-horizon MDPs. Interestingly, SSB also encompasses probabilistic dominance, a decision criterion employed in preference-based sequential decision-making.
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_11" ], "mid": [ "2235416906", "10913952", "40049998" ], "abstract": [ "In this paper we adopt Skew Symmetric Bilinear (SSB) utility functions to compare policies in Markov Decision Processes (MDPs). By considering pairs of alternatives, SSB utility theory generalizes von Neumann and Morgenstern's expected utility (EU) theory to encompass rational decision behaviors that EU cannot accommodate. We provide a game-theoretic analysis of the problem of identifying an SSB-optimal policy in finite horizon MDPs and propose an algorithm based on a double oracle approach for computing an optimal (possibly randomized) policy. Finally, we present and discuss experimental results where SSB-optimal policies are computed for a popular TV contest according to several instantiations of SSB utility functions.", "We study how to find plans that maximize the expected total utility for a given MDP, a planning objective that is important for decision making in high-stakes domains. The optimal actions can now depend on the total reward that has been accumulated so far in addition to the current state. We extend our previous work on functional value iteration from one-switch utility functions to all utility functions that can be approximated with piecewise linear utility functions (with and without exponential tails) by using functional value iteration to find a plan that maximizes the expected total utility for the approximate utility function. Functional value iteration does not maintain a value for every state but a value function that maps the total reward that has been accumulated so far into a value. We describe how functional value iteration represents these value functions in finite form, how it performs dynamic programming by manipulating these representations and what kinds of approximation guarantees it is able to make. 
We also apply it to a probabilistic blocksworld problem, a standard test domain for decision-theoretic planners.", "Decision-theoretic planning with nonlinear utility functions is important since decision makers are often risk-sensitive in high-stake planning situations. One-switch utility functions are an important class of nonlinear utility functions that can model decision makers whose decisions change with their wealth level. We study how to maximize the expected utility of a Markov decision problem for a given one-switch utility function, which is difficult since the resulting planning problem is not decomposable. We first study an approach that augments the states of the Markov decision problem with the wealth level. The properties of the resulting infinite Markov decision problem then allow us to generalize the standard risk-neutral version of value iteration from manipulating values to manipulating functions that map wealth levels to values. We use a probabilistic blocks-world example to demonstrate that the resulting risk-sensitive version of value iteration is practical." ] }
1611.00862
2547480197
In reinforcement learning, the standard criterion to evaluate policies in a state is the expectation of the (discounted) sum of rewards. However, this criterion may not always be suitable; we consider an alternative criterion based on the notion of quantiles. In the case of episodic reinforcement learning problems, we propose an algorithm based on stochastic approximation with two timescales. We evaluate our proposition on a simple model of the TV show Who Wants to Be a Millionaire.
In theoretical computer science, sophisticated decision criteria have also been studied in MDPs. For instance, @cite_14 proved that many decision criteria based on expectation (of limsup, parity... of rewards) admit a stationary deterministic optimal policy. @cite_10 considered sophisticated preferences over policies, which amounts to searching for policies that maximize the standard criterion while ensuring an expected sum of rewards higher than a threshold with probability higher than a fixed value. This work has also been extended to the multiobjective setting.
{ "cite_N": [ "@cite_14", "@cite_10" ], "mid": [ "1562671620", "2288508040" ], "abstract": [ "Markov decision processes (MDPs) are controllable discrete event systems with stochastic transitions. Performances of an MDP are evaluated by a payoff function. The controller of the MDP seeks to optimize those performances, using optimal strategies. There exists various ways of measuring performances, i.e. various classes of payoff functions. For example, average performances can be evaluated by a mean-payoff function, peak performances by a limsup payoff function, and the parity payoff function can be used to encode logical specifications. Surprisingly, all the MDPs equipped with mean, limsup or parity payoff functions share a common non-trivial property: they admit pure stationary optimal strategies. In this paper, we introduce the class of prefix-independent and submixing payoff functions, and we prove that any MDP equipped with such a payoff function admits pure stationary optimal strategies. This result unifies and simplifies several existing proofs. Moreover, it is a key tool for generating new examples of MDPs with pure stationary optimal strategies.", "This paper is devoted to sequential decision making under uncertainty, in the multi-prior framework of Gilboa and Schmeidler [1989]. In this setting, a set of probability measures (priors) is defined instead of a single one, and the decision maker selects a strategy that maximizes the minimum possible value of expected utility over this set of priors. We are interested here in the resolute choice approach, where one initially commits to a complete strategy and never deviates from it later. Given a decision tree representation with multiple priors, we study the problem of determining an optimal strategy from the root according to min expected utility. We prove the intractability of evaluating a strategy in the general case. 
We then identify different properties of a decision tree that enable to design dedicated resolution procedures. Finally, experimental results are presented that evaluate these procedures." ] }
1611.00862
2547480197
In reinforcement learning, the standard criterion to evaluate policies in a state is the expectation of the (discounted) sum of rewards. However, this criterion may not always be suitable; we consider an alternative criterion based on the notion of quantiles. In the case of episodic reinforcement learning problems, we propose an algorithm based on stochastic approximation with two timescales. We evaluate our proposition on a simple model of the TV show Who Wants to Be a Millionaire.
Recent work on Markov decision processes and reinforcement learning has considered conditional Value-at-Risk (CVaR), a criterion related to quantiles, as a risk measure. @cite_7 proved the existence of deterministic wealth-Markovian policies that are optimal with respect to CVaR. @cite_4 proposed gradient-based algorithms for CVaR optimization. In contrast, @cite_18 used CVaR in constraints rather than in the objective function.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7" ], "mid": [ "", "2949207039", "2128347943" ], "abstract": [ "", "In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in costs in addition to minimizing a standard criterion. Conditional value-at-risk (CVaR) is a relatively new risk measure that addresses some of the shortcomings of the well-known variance-related risk measures, and because of its computational efficiencies has gained popularity in finance and operations research. In this paper, we consider the mean-CVaR optimization problem in MDPs. We first derive a formula for computing the gradient of this risk-sensitive objective function. We then devise policy gradient and actor-critic algorithms that each uses a specific method to estimate this gradient and updates the policy parameters in the descent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in an optimal stopping problem.", "We propose a new constrained Markov decision process framework with risk-type constraints. The risk metric we use is Conditional Value-at-Risk (CVaR), which is gaining popularity in finance. It is a conditional expectation but the conditioning is defined in terms of the level of the tail probability. We propose an iterative offline algorithm to find the risk-contrained optimal control policy. A stochastic approximation-inspired ‘learning’ variant is also sketched." ] }
1611.00862
2547480197
In reinforcement learning, the standard criterion to evaluate policies in a state is the expectation of the (discounted) sum of rewards. However, this criterion may not always be suitable; we consider an alternative criterion based on the notion of quantiles. In the case of episodic reinforcement learning problems, we propose an algorithm based on stochastic approximation with two timescales. We evaluate our proposition on a simple model of the TV show Who Wants to Be a Millionaire.
Closer to our work, several quantile-based decision models have been investigated in different contexts. In uncertain MDPs, where the parameters of the transition and reward functions are imprecisely known, @cite_15 presented and investigated a quantile-like criterion to capture the trade-off between optimistic and pessimistic viewpoints on an uncertain MDP. The quantile criterion they use differs from ours, as it takes into account the uncertainty present in the parameters of the MDP.
{ "cite_N": [ "@cite_15" ], "mid": [ "2058066080" ], "abstract": [ "Markov decision processes are an effective tool in modeling decision-making in uncertain dynamic environments. Since the parameters of these models are typically estimated from data, learned from experience, or designed by hand, it is not surprising that the actual performance of a chosen strategy often significantly differs from the designer's initial expectations due to unavoidable model uncertainty. In this paper, we present a percentile criterion that captures the trade-off between optimistic and pessimistic points of view on MDP with parameter uncertainty. We describe tractable methods that take parameter uncertainty into account in the process of decision making. Finally, we propose a cost-effective exploration strategy when it is possible to invest (money, time or computation efforts) in actions that will reduce the uncertainty in the parameters." ] }
1611.00558
2743887455
Online recommender systems often deal with continuous, potentially fast and unbounded flows of data. Ensemble methods for recommender systems have been used in the past in batch algorithms, however they have never been studied with incremental algorithms that learn from data streams. We evaluate online bagging with an incremental matrix factorization algorithm for top-N recommendation with positive-only user feedback, often known as binary ratings. Our results show that online bagging is able to improve accuracy up to 35% over the baseline, with small computational overhead.
Ensemble methods in machine learning are convenient techniques to improve the accuracy of algorithms. Typically, this is achieved by combining results from a number of weaker sub-models. Bagging @cite_1 , Boosting @cite_12 and Stacking @cite_11 are three well-known ensemble methods used with recommendation algorithms. Boosting is experimented with in @cite_8 @cite_0 @cite_4 @cite_9 , bagging is also studied in @cite_4 @cite_9 , and stacking in @cite_2 . In all of these contributions, ensemble methods work with batch learning algorithms only.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "2055119952", "1947880380", "2136606342", "", "1971387204", "", "2112076978", "" ], "abstract": [ "We analyze the application of ensemble learning to recommender systems on the Netflix Prize dataset. For our analysis we use a set of diverse state-of-the-art collaborative filtering (CF) algorithms, which include: SVD, Neighborhood Based Approaches, Restricted Boltzmann Machine, Asymmetric Factor Model and Global Effects. We show that linearly combining (blending) a set of CF algorithms increases the accuracy and outperforms any single CF algorithm. Furthermore, we show how to use ensemble methods for blending predictors in order to outperform a single blending algorithm. The dataset and the source code for the ensemble blending are available online.", "Personalised recommender systems are widely used information filtering for information retrieval, where matrix factorisation (MF) has become popular as a model-based approach to personalised recommendation. Classical MF methods, which directly approximate low rank factor matrices by minimising some rating prediction criteria, do not achieve a satisfiable performance for the task of top-N recommendation. In this paper, we propose a novel MF method, namely BoostMF, that formulates factorisation as a learning problem and integrates boosting into factorisation. Rather than using boosting as a wrapper, BoostMF directly learns latent factors that are optimised toward the top-N recommendation. The proposed method is evaluated against a set of state-of-the-art methods on three popular public benchmark datasets. The experimental results demonstrate that the proposed method achieves significant improvement over these baseline methods for the task of top-N recommendation.", "An essential goal of the present web engineering is the development of efficient and competitive applications. 
This objective can be achieved by building recommender systems endowed with suitable web mining algorithms. Multiclassifiers are reliable data mining models that have been hardly used in the web system area. The paper presents a comparative study among different simple classifiers and multiclassifiers using a dataset from MovieLens recommender system. The aim of the work is to identify when the use of multiclassifiers in this type of systems is efficient.", "", "Recommender systems provide consumers with ratings of items. These ratings are based on a set of ratings that were obtained from a wide scope of users. Predicting the ratings can be formulated as a regression problem. Ensemble regression methods are effective tools that improve the results of simple regression algorithms by iteratively applying the simple algorithm to a diverse set of inputs. The present paper describes a simple and effective ensemble regressor for the prediction of missing ratings in recommender systems. The ensemble method is an adaptation of the AdaBoost regression algorithm for recommendation tasks. In all iterations, interpolation weights for all nearest neighbors are simultaneously derived by minimizing the root mean squared error. From iteration to iteration instances that are hard to predict are reinforced by manipulating their weights in the goal function that needs to be minimized. The experimental evaluation demonstrates that the ensemble methodology significantly improves the predictive performance of single neighborhood-based collaborative filtering.", "", "In an earlier paper, we introduced a new \"boosting\" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that con- sistently generates classifiers whose performance is a little better than random guessing. 
We also introduced the related notion of a \"pseudo-loss\" which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's \"bagging\" method when used to aggregate various classifiers (including decision trees and single attribute- value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.", "" ] }
1611.00704
2952916478
The performance of wireless body area networks (WBANs) may be degraded due to co-channel interference, i.e., when sensors of different coexisting WBANs transmit at the same time-slots using the same channel. In this paper, we exploit the 16 channels available in the 2.4 GHz unlicensed international band of ZIGBEE, and propose a distributed scheme that opts to avoid interference through channel to time-slot hopping based on Latin rectangles, DAIL. In DAIL, each WBAN's coordinator picks a Latin rectangle whose rows are ZIGBEE channels and columns are time-slots of its superframe. Subsequently, it assigns a unique symbol to each sensor; the latter forms a transmission pattern according to distinct positions of its symbol in the rectangle, such that collisions among different transmissions of coexisting WBANs are minimized. We further present an analytical model that derives bounds on the collision probability of each sensor's transmission in the network. In addition, the efficiency of DAIL in interference mitigation has been validated by simulations.
A number of approaches have adopted cooperative communication, game theory and power control to mitigate co-channel interference. The authors of @cite_10 pursued a joint cooperative communication scheme integrated with transmit power control, based on simple channel prediction, for the WBAN coexistence problem. Similarly, in @cite_15 , co-channel interference is mitigated using cooperative orthogonal channels and a contention window extension mechanism. The approach of @cite_17 , in contrast, employs a Bayesian game based power control to mitigate inter-WBAN interference by modelling WBANs as players and active links as types of players.
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "2292788438", "2159297871", "2093634766" ], "abstract": [ "In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and coordinator. The proposed scheme enables low interfering nodes to transmit their messages using base channel. Depending on suitable situations, high interfering nodes double their contention windows (CW) and probably use switched orthogonal channel. Simulation results show that proposed scheme has far better minimum SINR (12dB improvement) and longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove the outage probability can be effectively reduced to the minimal.", "A scheme for two-hop relay-assisted cooperative communications integrated with transmit power control, based on simple channel prediction, is presented. A large set of empirical on- and inter-body channel data is employed to model various scenarios of wireless body area network (WBAN) communica- tions, from one isolated WBAN up to 10 closely located WBANs coexisting. Our study shows that relay assisted power control can reduce approximately 60 circuit power consumption from that of constant transmission at 0 dBm, without much loss in reliability. Further, interference mitigation is significantly enhanced over constant transmission at −5 dBm, with similar power consumption. Such performance is maintained from 2 to 10 closely-located WBANs coexisting. And the joint algorithm works best for one isolated WBAN. 
A trade-off between power saving and interference mitigation is motivated, taking remaining sensor-battery level, amount of interference and on-body channel quality into account. Index Terms—Cooperative communications, interference mit- igation, transmit power control, wireless body area networks.", "Wireless body area network (WBAN) is an emerging technology that provides socialized health monitoring service. However, the quality of service can be severely degraded by concomitant inter-WBAN interference in some specific environments where multiple WBANs are densely deployed, e.g., hospitals and senior citizen communities. In this work, we propose a Bayesian game based power control scheme to mitigate the impact of inter-WBAN interference. By modeling WBANs as players and active links as types of players in the Bayesian game model, the proposed power control scheme tries to maximize each player's expected payoff involving both throughput and energy efficiency. We prove the existence of Bayesian equilibrium (BE) for the proposed power control game and also derive a practical sufficient condition for the uniqueness of BE. A harmonic mean based algorithm is then proposed to obtain an approximation of BE point without the need to pass message among WBANs, which satisfies the non-cooperative manner for inter-WBAN interference mitigation. Simulation results show that the proposed algorithm can converge to the BE point effectively." ] }
1611.00704
2952916478
The performance of wireless body area networks (WBANs) may be degraded due to co-channel interference, i.e., when sensors of different coexisting WBANs transmit at the same time-slots using the same channel. In this paper, we exploit the 16 channels available in the 2.4 GHz unlicensed international band of ZIGBEE, and propose a distributed scheme that opts to avoid interference through channel to time-slot hopping based on Latin rectangles, DAIL. In DAIL, each WBAN's coordinator picks a Latin rectangle whose rows are ZIGBEE channels and columns are time-slots of its superframe. Subsequently, it assigns a unique symbol to each sensor; the latter forms a transmission pattern according to distinct positions of its symbol in the rectangle, such that collisions among different transmissions of coexisting WBANs are minimized. We further present an analytical model that derives bounds on the collision probability of each sensor's transmission in the network. In addition, the efficiency of DAIL in interference mitigation has been validated by simulations.
Other approaches pursued multiple access schemes for interference mitigation. The authors of @cite_5 proposed a distributed TDMA-based beacon interval shifting scheme where the wake-up period of a WBAN is made not to overlap with those of other WBANs by employing carrier sensing before a beacon transmission, whilst @cite_1 adopts TDMA for scheduling transmissions within a WBAN and a carrier sensing mechanism to deal with inter-WBAN interference. In @cite_9 , many topology-dependent transmission scheduling algorithms have been proposed to minimize the frame length in multihop packet radio networks using Galois field theory and Latin squares theory. For single-channel networks, the modified Galois field design (MGD) and the Latin square design (LSD) for topology-transparent broadcast scheduling are proposed. The MGD obtains a much smaller minimum frame length than the existing scheme, while the LSD can even achieve a possible performance gain when compared with the MGD. In one-hop rather than multi-hop communication schemes, like WBANs, using Latin squares better schedules the medium access and consequently significantly diminishes inter-WBAN interference.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_1" ], "mid": [ "2065809582", "2160254460", "" ], "abstract": [ "This paper investigates the issue of interference avoidance in body area networks (BANs). IEEE 802.15 Task Group 6 presented several schemes to reduce such interference, but these schemes are still not proper solutions for BANs. We present a novel distributed TDMA-based beacon interval shifting scheme that reduces interference in the BANs. A design goal of the scheme is to avoid the wakeup period of each BAN coinciding with other networks by employing carrier sensing before a beacon transmission. We analyze the beacon interval shifting scheme and investigate the proper back-off length when the channel is busy. We compare the performance of the proposed scheme with the schemes presented in IEEE 802.15 Task Group 6 using an OMNeT++ simulation. The simulation results show that the proposed scheme has a lower packet loss, energy consumption, and delivery-latency than the schemes of IEEE 802.15 Task Group 6.", "Many topology-dependent transmission scheduling algorithms have been proposed to minimize the time-division multiple-access frame length in multihop packet radio networks (MPRNs), in which changes of the topology inevitably require recomputation of the schedules. The need for constant adaptation of schedules-to-mobile topology entails significant problems, especially in highly dynamic mobile environments. Hence, topology-transparent scheduling algorithms have been proposed, which utilize Galois field theory and Latin squares theory. We discuss the topology-transparent broadcast scheduling design for MPRNs. For single-channel networks, we propose the modified Galois field design (MGD) and the Latin square design (LSD) for topology-transparent broadcast scheduling. The MGD obtains much smaller minimum frame length (MFL) than the existing scheme while the LSD can even achieve possible performance gain when compared with the MGD, under certain conditions. 
Moreover, the inner relationship between scheduling designs based on different theories is revealed and proved, which provides valuable insight. For topology-transparent broadcast scheduling in multichannel networks, in which little research has been done, the proposed multichannel Galois field design (MCGD) can reduce the MFL approximately M times, as compared with the MGD when M channels are available. Numerical results show that the proposed algorithms outperform existing algorithms in achieving a smaller MFL.", "" ] }
1611.00814
2964239702
Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications.
Previously, the validity of the physics formulas was known in any generality only under the assumption that the factor graph model satisfies the Gibbs uniqueness condition, a very strong spatial mixing assumption @cite_80 @cite_47 @cite_76 @cite_9 . Gibbs uniqueness typically holds only for very small values of @math . Additionally, under weaker spatial mixing conditions it was known that the free energy in random graph models is given by some Belief Propagation fixed point @cite_73 @cite_9 . However, there may be infinitely many fixed points, and it was not generally known that the correct one is the maximizer of the functional @math . In effect, it was not possible to derive the formula for the free energy or, equivalently, the mutual information, from such results. Specifically, in the case of the teacher-student scheme, Montanari @cite_74 proved (under certain assumptions) that the Gibbs marginals of @math correspond to a Belief Propagation fixed point, whereas the present work identifies the particular fixed point that maximizes the functional @math as the relevant one.
{ "cite_N": [ "@cite_73", "@cite_9", "@cite_74", "@cite_80", "@cite_47", "@cite_76" ], "mid": [ "2963553740", "2018836315", "1677159791", "904137795", "2266756552", "2963994072" ], "abstract": [ "According to physics predictions, the free energy of random factor graph models that satisfy a certain \"static replica symmetry\" condition can be calculated via the Belief Propagation message passing scheme [ PNAS, 2007]. Here we prove this conjecture for a wide class of random factor graph models. Specifically, we show that the messages constructed just as in the case of acyclic factor graphs asymptotically satisfy the Belief Propagation equations and that the free energy density is given by the Bethe free energy formula.", "We consider homogeneous factor models on uniformly sparse graph sequences converging locally to a (unimodular) random tree T, and study the existence of the free energy density ϕ, the limit of the log-partition function divided by the number of vertices n as n tends to infinity. We provide a new interpolation scheme and use it to prove existence of, and to explicitly compute, the quantity ϕ subject to uniqueness of a relevant Gibbs measure for the factor model on T. By way of example we compute ϕ for the independent set (or hard-core) model at low fugacity, for the ferromagnetic Ising model at all parameter values, and for the ferromagnetic Potts model with both weak enough and strong enough interactions. Even beyond uniqueness regimes our interpolation provides useful explicit bounds on ϕ. In the regimes in which we establish existence of the limit, we show that it coincides with the Bethe free energy functional evaluated at a suitable fixed point of the belief propagation (Bethe) recursions on T. In the special case that T has a Galton–Watson law, this formula coincides with the nonrigorous “Bethe prediction” obtained by statistical physicists using the “replica” or “cavity” methods. 
Thus our work is a rigorous generalization of these heuristic calculations to the broader class of sparse graph sequences converging locally to trees. We also provide a variational characterization for the Bethe prediction in this general setting, which is of independent interest.", "Let X 1 ,...., X n be a collection of iid discrete random variables, and Y 1 ,...., Y m a set of noisy observations of such variables. Assume each observation Y a to be a random function of a random subset of the X i s, and consider the conditional distribution of X i given the observations, namely μ i (x i ) = P X i = x i |Y (a posteriori probability). We establish a general decoupling principle among the X i s, as well as a relation between the distribution of μ i , and the fixed points of the associated density evolution operator. These results hold asymptotically in the large system limit, provided the average number of variables an observation depends on is bounded. We discuss the relevance of our result to a number of applications, ranging from sparse graph codes and multi-user detection, to group testing.", "A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k-SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [, PNAS (2007) 10318–10323]. 
Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ωn for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 2016", "Abstract Building upon the theory of graph limits and the Aldous–Hoover representation and inspired by Panchenko’s work on asymptotic Gibbs measures [Annals of Probability 2013], we construct continuous embeddings of discrete probability distributions. We show that the theory of graph limits induces a meaningful notion of convergence and derive a corresponding version of the Szemeredi regularity lemma. Moreover, complementing recent work (2015), we apply these results to Gibbs measures induced by sparse random factor graphs and verify the “replica symmetric solution” predicted in the physics literature under the assumption of non-reconstruction.", "Many problems of interest in computer science and information theory can be phrased in terms of a probability distribution over discrete variables associated to the vertices of a large (but finite) sparse graph. In recent years, considerable progress has been achieved by viewing these distributions as Gibbs measures and applying to their study heuristic tools from statistical physics. We review this approach and provide some results towards a rigorous treatment of these problems. AMS 2000 subject classifications: Primary 60B10, 60G60, 82B20." ] }
1611.00814
2964239702
Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications.
Yet the predictions of the replica symmetric cavity method have been verified in several specific examples. The first ones were the ferromagnetic Ising and Potts models @cite_24 @cite_9 , where the proofs exploit model-specific monotonicity and contraction properties. More recently, the ingenious spatial coupling technique has been used to prove replica symmetric predictions in several important cases, including low-density parity check codes @cite_40 . Indeed, spatial coupling provides an alternative probabilistic construction of, e.g., codes with excellent algorithmic properties @cite_51 . Yet the method falls short of providing a wholesale justification of the cavity method, as a potentially substantial number of individual ingredients is required for each application (such as problem-specific algorithms @cite_83 ).
{ "cite_N": [ "@cite_9", "@cite_24", "@cite_40", "@cite_83", "@cite_51" ], "mid": [ "2018836315", "2078168527", "2332339252", "179407972", "2179644698" ], "abstract": [ "We consider homogeneous factor models on uniformly sparse graph sequences converging locally to a (unimodular) random tree T, and study the existence of the free energy density ϕ, the limit of the log-partition function divided by the number of vertices n as n tends to infinity. We provide a new interpolation scheme and use it to prove existence of, and to explicitly compute, the quantity ϕ subject to uniqueness of a relevant Gibbs measure for the factor model on T. By way of example we compute ϕ for the independent set (or hard-core) model at low fugacity, for the ferromagnetic Ising model at all parameter values, and for the ferromagnetic Potts model with both weak enough and strong enough interactions. Even beyond uniqueness regimes our interpolation provides useful explicit bounds on ϕ. In the regimes in which we establish existence of the limit, we show that it coincides with the Bethe free energy functional evaluated at a suitable fixed point of the belief propagation (Bethe) recursions on T. In the special case that T has a Galton–Watson law, this formula coincides with the nonrigorous “Bethe prediction” obtained by statistical physicists using the “replica” or “cavity” methods. Thus our work is a rigorous generalization of these heuristic calculations to the broader class of sparse graph sequences converging locally to trees. We also provide a variational characterization for the Bethe prediction in this general setting, which is of independent interest.", "We establish an explicit formula for the limiting free energy density (log-partition function divided by the number of vertices) for ferromagnetic Potts models on uniformly sparse graph sequences converging locally to the d-regular tree for d even, covering all temperature regimes. 
This formula coincides with the Bethe free energy functional evaluated at a suitable fixed point of the belief propagation recursion on the d-regular tree, the so-called replica symmetric solution. For uniformly random d-regular graphs we further show that the replica symmetric Bethe formula is an upper bound for the asymptotic free energy for any model with permissive interactions.", "The aim of this paper is to show that spatial coupling can be viewed not only as a means to build better graphical models, but also as a tool to better understand uncoupled models. The starting point is the observation that some asymptotic properties of graphical models are easier to prove in the case of spatial coupling. In such cases, one can then use the so-called interpolation method to transfer known results for the spatially coupled case to the uncoupled one. Our main use of this framework is for Low-density parity check (LDPC) codes, where we use interpolation to show that the average entropy of the codeword conditioned on the observation is asymptotically the same for spatially coupled as for uncoupled ensembles. We give three applications of this result for a large class of LDPC ensembles. The first one is a proof of the so-called Maxwell construction stating that the MAP threshold is equal to the area threshold of the BP GEXIT curve. The second is a proof of the equality between the BP and MAP GEXIT curves above the MAP threshold. The third application is the intimately related fact that the replica symmetric formula for the conditional entropy in the infinite block length limit is exact.", "We report on a novel technique called spatial coupling and its application in the analysis of random constraint satisfaction problems (CSP). Spatial coupling was invented as an engineering construction in the area of error correcting codes where it has resulted in efficient capacity-achieving codes for a wide range of channels. 
However, this technique is not limited to problems in communications, and can be applied in the much broader context of graphical models. We describe here a general methodology for applying spatial coupling to random constraint satisfaction problems and obtain lower bounds for their (rough) satisfiability threshold. The main idea is to construct a distribution of geometrically structured random K-SAT instances - namely the spatially coupled ensemble - which has the same (rough) satisfiability threshold, and is at the same time algorithmically easier to solve. Then by running well-known algorithms on the spatially coupled ensemble we obtain a lower bound on the (rough) satisfiability threshold of the original ensemble. The method is versatile because one can choose the CSP, there is a certain amount of freedom in the construction of the spatially coupled ensemble, and also in the choice of the algorithm. In this work we focus on random K-SAT but we have also checked that the method is successful for Coloring, NAE-SAT and XOR-SAT. We choose Unit Clause propagation for the algorithm which is analyzed over the spatially coupled instances. For K = 3, for instance, our lower bound is equal to 3.67 which is better than the current bounds in the literature. Similarly, for graph 3-colorability we get a bound of 2.22 which is also better than the current bounds in the literature.", "We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. 
More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble that fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in this ensemble have this property. The quantifier universal refers to the single ensemble code that is good for all channels but we assume that the channel is known at the receiver. The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems." ] }
1611.00814
2964239702
Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications.
The random factor graph models that we consider in the present paper are of Erdős–Rényi type, i.e., the constraint nodes choose their adjacent variable nodes independently. In effect, the variable degrees are asymptotically Poisson with mean @math . While such models are very natural, models with given variable degree distributions are of interest in some applications, such as error-correcting codes (e.g. @cite_62 ). Although we expect that the present methods extend to models with (reasonable) given degree distributions, here we confine ourselves to the Poisson case for the sake of clarity. Similarly, the assumptions BAL , SYM and POS , and the strict positivity of the constraint functions, strike a balance between generality and convenience. While these conditions hold in many cases of interest, BAL fails for the ferromagnetic Potts model, which is why our result does not cover the block model. In any case, BAL , SYM and POS are (probably) not strictly necessary for our results to hold and our methods to go through, a point that we leave to future work.
{ "cite_N": [ "@cite_62" ], "mid": [ "2156595614" ], "abstract": [ "A new method for analyzing low-density parity-check (LDPC) codes and low-density generator-matrix (LDGM) codes under bit maximum a posteriori probability (MAP) decoding is introduced. The method is based on a rigorous approach to spin glasses developed by Francesco Guerra. It allows one to construct lower bounds on the entropy of the transmitted message conditional to the received one. Based on heuristic statistical mechanics calculations, we conjecture such bounds to be tight. The result holds for standard irregular ensembles when used over binary-input output-symmetric (BIOS) channels. The method is first developed for Tanner-graph ensembles with Poisson left-degree distribution. It is then generalized to \"multi-Poisson\" graphs, and, by a completion procedure, to arbitrary degree distribution" ] }
1611.00814
2964239702
Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications.
A further open problem is to provide a rigorous justification of the more intricate 'one-step replica symmetry breaking' (1RSB) version of the cavity method. The 1RSB version appears to be necessary to pinpoint, e.g., the @math -SAT or @math -colorability thresholds for @math , @math respectively. Currently there are only a few examples where predictions from the 1RSB cavity method have been established rigorously @cite_84 @cite_67 @cite_16 , the most prominent one being the proof of the @math -SAT conjecture for large @math @cite_56 . That said, the upshot of the present paper is that for teacher-student-type problems, as well as for the purpose of finding the condensation threshold, the replica symmetric cavity method is provably sufficient.
{ "cite_N": [ "@cite_67", "@cite_16", "@cite_84", "@cite_56" ], "mid": [ "1489398757", "2343311415", "2287686747", "2070702809" ], "abstract": [ "We determine the asymptotics of the independence number of the random @math -regular graph for all @math . It is highly concentrated, with constant-order fluctuations around @math for explicit constants @math and @math . Our proof rigorously confirms the one-step replica symmetry breaking heuristics for this problem, and we believe the techniques will be more broadly applicable to the study of other combinatorial properties of random graphs.", "Recent work has made substantial progress in understanding the transitions of random constraint satisfaction problems. In particular, for several of these models, the exact satisfiability threshold has been rigorously determined, confirming predictions of statistical physics. Here we revisit one of these models, random regular k-NAE-SAT: knowing the satisfiability threshold, it is natural to study, in the satisfiable regime, the number of solutions in a typical instance. We prove here that these solutions have a well-defined free energy (limiting exponential growth rate), with explicit value matching the one-step replica symmetry breaking prediction. The proof develops new techniques for analyzing a certain \"survey propagation model\" associated to this problem. We believe that these methods may be applicable in a wide class of related problems.", "We consider the random regular k-NAE-SAT problem with n variables, each appearing in exactly d clauses. For all k exceeding an absolute constant k_0, we establish explicitly the satisfiability threshold d_* = d_*(k). We prove that for d > d_* the problem is unsatisfiable with high probability. If the threshold d_* lands exactly on an integer, we show that the problem is satisfiable with probability bounded away from both zero and one.
This is the first result to locate the exact satisfiability threshold in a random constraint satisfaction problem exhibiting the condensation phenomenon identified by [Proc Natl Acad Sci 104(25):10318–10323, 2007]. Our proof verifies the one-step replica symmetry breaking formalism for this model. We expect our methods to be applicable to a broad range of random constraint satisfaction problems and combinatorial problems on random graphs.", "We establish the satisfiability threshold for random k-SAT for all k ≥ k0. That is, there exists a limiting density αs(k) such that a random k-SAT formula of clause density α is with high probability satisfiable for α < αs, and unsatisfiable for α > αs. The satisfiability threshold αs is given explicitly by the one-step replica symmetry breaking (1RSB) prediction from statistical physics. We believe that our methods may apply to a range of random constraint satisfaction problems in the 1RSB class." ] }
1611.00707
2952946010
We prove that any mixed-integer linear extended formulation for the matching polytope of the complete graph on @math vertices, with a polynomial number of constraints, requires @math many integer variables. By known reductions, this result extends to the traveling salesman polytope. This lower bound has various implications regarding the existence of small mixed-integer mathematical formulations of common problems in operations research. In particular, it shows that for many classic vehicle routing problems and problems involving matchings, any compact mixed-integer linear description of such a problem requires a large number of integer variables. This provides a first non-trivial lower bound on the number of integer variables needed in such settings.
Kaibel and Weltge @cite_19 took a somewhat orthogonal approach in which they focus on the original variable space and assume integrality of all variables, with the goal of understanding how many linear inequalities are needed to describe a certain set of integer points. More precisely, they consider the set of integer points @math in some polyhedron @math , and then study how many facets a polytope @math needs to have such that @math . They show that even for natural polytopes that admit a compact extended formulation, like the spanning tree polytope, an exponential number of linear constraints is often necessary.
{ "cite_N": [ "@cite_19" ], "mid": [ "2115718900" ], "abstract": [ "Let @math X be the set of integer points in some polyhedron. We investigate the smallest number of facets of any polyhedron whose set of integer points is @math X. This quantity, which we call the relaxation complexity of @math X, corresponds to the smallest number of linear inequalities of any integer program having @math X as the set of feasible solutions that does not use auxiliary variables. We show that the use of auxiliary variables is essential for constructing polynomial size integer programming formulations in many relevant cases. In particular, we provide asymptotically tight exponential lower bounds on the relaxation complexity of the integer points of several well-known combinatorial polytopes, including the traveling salesman polytope and the spanning tree polytope. In addition to the material in the extended abstract Kaibel and Weltge (2014) we include omitted proofs, supporting figures, discussions about properties of coefficients in such formulations, and facts about the complexity of formulations in more general settings." ] }
1611.00020
2950632879
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine, which contains (a) a neural "programmer", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality (b) a symbolic "computer", i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
Among deep learning models for program induction, Reinforcement Learning Neural Turing Machines (RL-NTMs) @cite_40 are the most similar to NSM, as a non-differentiable machine is controlled by a sequence model. Therefore, both models rely on REINFORCE for training. The main difference between the two is the abstraction level of the programming language. RL-NTM uses lower-level operations such as memory address manipulation and byte reading/writing, while NSM uses a high-level programming language over a large knowledge base that includes operations such as following properties from entities, or sorting based on a property, which is more suitable for representing semantics. Earlier work such as OOPS @cite_41 has desirable characteristics, for example the ability to define new functions; these remain future improvements for NSM.
{ "cite_N": [ "@cite_41", "@cite_40" ], "mid": [ "2011058337", "2204302769" ], "abstract": [ "We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal nonincremental universal search to build an incremental universal learner that is able to improve itself through experience. The initial bias is embodied by a task-dependent probability distribution on possible program prefixes. Prefixes are self-delimiting and executed in online fashion while being generated. They compute the probabilities of their own possible continuations. Let p^n denote a found prefix solving the first n tasks. It may exploit previously stored solutions p^i, i < n, by calling them as subprograms, or by copying them and editing the copies before applying them. We provide equal resources for two searches that run in parallel until p^(n+1) is discovered and stored. The first search is exhaustive; it systematically tests all possible prefixes on all tasks up to n+1. The second search is much more focused; it only searches for prefixes that start with p^n, and only tests them on task n+1, which is safe, because we already know that such prefixes solve all tasks up to n. Both searches are depth-first and bias-optimal: the branches of the search trees are program prefixes, and backtracking is triggered once the sum of the runtimes of the current prefix on all current tasks exceeds the prefix probability multiplied by the total search time so far. 
In illustrative experiments, our self-improver becomes the first general system that learns to solve all n disk Towers of Hanoi tasks (solution size 2^n-1) for n up to 30, profiting from previously solved, simpler tasks involving samples of a simple context free language.", "The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete." ] }
1611.00020
2950632879
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine, which contains (a) a neural "programmer", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality (b) a symbolic "computer", i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
We formulate NSM training as an instance of reinforcement learning @cite_20 in order to directly optimize the task reward of this structured prediction problem @cite_24 @cite_4 @cite_37 . Compared to imitation learning methods @cite_42 @cite_2 that interpolate a model distribution with an oracle, NSM needs to solve a challenging search problem of training from weak supervision in a large search space. Our solution employs two techniques: (a) a symbolic "computer" helps find good programs by pruning the search space; (b) an iterative ML training process, where beam search is used to find pseudo-gold programs. Wiseman and Rush @cite_32 proposed a max-margin approach to train a sequence-to-sequence scorer. However, their training procedure is more involved, and we did not implement it in this work. MIXER @cite_21 also proposed to combine ML training and REINFORCE, but they only considered tasks with full supervision. Berant and Liang @cite_35 applied imitation learning to semantic parsing, but their approach still requires hand-crafted grammars and features.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_4", "@cite_42", "@cite_21", "@cite_32", "@cite_24", "@cite_2", "@cite_20" ], "mid": [ "2295690548", "2963363070", "2410983263", "", "2176263492", "2414484917", "2508728158", "", "2121863487" ], "abstract": [ "Semantic parsers conventionally construct logical forms bottom-up in a fixed order, resulting in the generation of many extraneous partial logical forms. In this paper, we combine ideas from imitation learning and agenda-based parsing to train a semantic parser that searches partial logical forms in a more strategic order. Empirically, our parser reduces the number of constructed partial logical forms by an order of magnitude, and obtains a 6x-9x speedup over fixed-order parsing, while maintaining comparable accuracy.", "", "Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. 
This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.", "", "Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.", "Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work, we introduce a model and beam-search training scheme, based on the work of Daume III and Marcu (2005), that extends seq2seq to learn global sequence scores. This structured approach avoids classical biases associated with local training and unifies the training loss with the test-time usage, while preserving the proven model architecture of seq2seq and its efficient training approach. We show that our system outperforms a highly-optimized attention-based seq2seq system and other baselines on three different sequence to sequence tasks: word ordering, parsing, and machine translation.", "A key problem in structured output prediction is direct optimization of the task reward function that matters for test evaluation. 
This paper presents a simple and computationally efficient approach to incorporate task reward into a maximum likelihood framework. By establishing a link between the log-likelihood and expected reward objectives, we show that an optimal regularized expected reward is achieved when the conditional distribution of the outputs given the inputs is proportional to their exponentiated scaled rewards. Accordingly, we present a framework to smooth the predictive probability of the outputs using their corresponding rewards. We optimize the conditional log-probability of augmented outputs that are sampled proportionally to their exponentiated scaled rewards. Experiments on neural sequence to sequence models for speech recognition and machine translation show notable improvements over a maximum likelihood baseline by using reward augmented maximum likelihood (RAML), where the rewards are defined as the negative edit distance between the outputs and the ground truth labels.", "", "Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. 
Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning." ] }
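The training scheme described above, REINFORCE augmented with an iterative maximum-likelihood step that treats the best-reward program found by beam search as a pseudo-gold target, can be sketched in a toy setting. Everything here is an illustrative assumption rather than the NSM implementation: the "policy" is a table of per-position logits over a tiny token vocabulary, the hidden gold program is visible only through a 0/1 reward, and the beam is exhaustive so that search is trivial.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH = 4, 3
GOLD = (2, 0, 3)                        # hidden program, visible only through its reward
logits = np.zeros((LENGTH, VOCAB))      # toy policy: independent per-position logits

def reward(prog):                       # weak supervision: 0/1 task reward only
    return 1.0 if tuple(prog) == GOLD else 0.0

def probs():
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def beam_search(k):                     # keep the k highest-probability programs
    beams, logp = [((), 0.0)], np.log(probs())
    for t in range(LENGTH):
        beams = sorted(((b + (v,), s + logp[t, v])
                        for b, s in beams for v in range(VOCAB)),
                       key=lambda x: -x[1])[:k]
    return [b for b, _ in beams]

for _ in range(100):
    # (1) iterative ML step: the best-reward program found by beam search
    #     (exhaustive here, since k = VOCAB**LENGTH) becomes the pseudo-gold
    pseudo = max(beam_search(k=VOCAB ** LENGTH), key=reward)
    if reward(pseudo) > 0:
        p = probs()
        for t, v in enumerate(pseudo):  # gradient step on the log-likelihood
            logits[t] += 0.5 * (np.eye(VOCAB)[v] - p[t])
    # (2) REINFORCE step on a sampled program, with a constant baseline
    p = probs()
    sample = [rng.choice(VOCAB, p=p[t]) for t in range(LENGTH)]
    adv = reward(sample) - 0.1
    for t, v in enumerate(sample):
        logits[t] += 0.1 * adv * (np.eye(VOCAB)[v] - p[t])

best = beam_search(k=1)[0]              # greedy decode under the trained policy
print(best)
```

In this toy problem the ML step alone would suffice; the point of the combination, as the paragraph above notes, is that the ML step stabilizes training while REINFORCE directly optimizes the task reward.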
1611.00020
2950632879
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine, which contains (a) a neural "programmer", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality (b) a symbolic "computer", i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
NSM is similar to Neural Programmer @cite_10 and Dynamic Neural Module Network @cite_1 in that they all solve the problem of semantic parsing from structured data, and generate programs using similar semantics. The main difference between these approaches is how an intermediate result (the memory) is represented. Neural Programmer and Dynamic-NMN chose to represent results as vectors of weights (row selectors and attention vectors), which enables backpropagation and search through all possible programs in parallel. However, their strategy is not applicable to a large KB such as Freebase, which contains about 100M entities and more than 20k properties. Instead, NSM chooses a more scalable approach, where the "computer" saves intermediate results, and the neural network only refers to them with variable names (e.g., @math for all cities in the US).
{ "cite_N": [ "@cite_1", "@cite_10" ], "mid": [ "2230472587", "2214429195" ], "abstract": [ "We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.", "Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. 
We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy." ] }
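The key-variable memory described above can be illustrated with a toy sketch: a symbolic store maps fresh variable names to intermediate denotations, so the neural "programmer" only manipulates short names rather than huge entity sets. All names, the KB fragment, and the filter operation below are hypothetical, not NSM's actual Lisp primitives:

```python
# Toy "computer" with a key-variable memory: intermediate results are
# saved under fresh variable names, so the neural "programmer" emits
# short symbolic tokens instead of manipulating large entity sets directly.
class KeyVariableMemory:
    def __init__(self):
        self.store = {}    # variable name -> denotation (a set of entities)
        self.counter = 0

    def save(self, denotation):
        name = f"v{self.counter}"   # fresh variable name, e.g. "v0"
        self.counter += 1
        self.store[name] = denotation
        return name

    def lookup(self, name):
        return self.store[name]

# Hypothetical KB fragment (not Freebase's real schema).
KB = {("Seattle", "city_of"): "US", ("Lyon", "city_of"): "France"}

mem = KeyVariableMemory()
v0 = mem.save({"Seattle", "Lyon"})                  # all candidate cities
v1 = mem.save({e for e in mem.lookup(v0)            # filter: cities in the US,
               if KB.get((e, "city_of")) == "US"})   # referred to only as "v1"
print(v1, mem.lookup(v1))   # → v1 {'Seattle'}
```

Downstream program steps would receive only the token `"v1"`, which is what keeps the interface between the sequence model and the symbolic interpreter scalable.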
1611.00020
2950632879
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine, which contains (a) a neural "programmer", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality (b) a symbolic "computer", i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
NSM is similar to the Path Ranking Algorithm (PRA) @cite_3 in that semantics is encoded as a sequence of actions, and denotations are used to prune the search space during learning. NSM is more powerful than PRA in that it 1) allows more complex semantics to be composed through the use of a key-variable memory; 2) controls the search procedure with a trained neural network, whereas PRA samples actions uniformly; and 3) allows input questions to express complex relations and then dynamically generates action sequences. PRA can combine multiple semantic representations to produce the final prediction, which remains future work for NSM.
{ "cite_N": [ "@cite_3" ], "mid": [ "1756422141" ], "abstract": [ "We consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage. We show that a soft inference procedure based on a combination of constrained, weighted, random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base. More specifically, we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph, using a version of the Path Ranking Algorithm (Lao and Cohen, 2010b). We apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by NELL, a never-ending language learner (, 2010). This new system improves significantly over NELL's earlier Horn-clause learning and inference method: it obtains nearly double the precision at rank 100, and the new learning method is also applicable to many more inference tasks." ] }
1611.00347
2548231749
Recently, there has been growing interest in developing optimization methods for solving large-scale machine learning problems. Most of these problems boil down to the problem of minimizing an average of a finite set of smooth and strongly convex functions where the number of functions @math is large. Gradient descent method (GD) is successful in minimizing convex problems at a fast linear rate; however, it is not applicable to the considered large-scale optimization setting because of the high computational complexity. Incremental methods resolve this drawback of gradient methods by replacing the required gradient for the descent direction with an incremental gradient approximation. They operate by evaluating one gradient per iteration and executing the average of the @math available gradients as a gradient approximate. Although, incremental methods reduce the computational cost of GD, their convergence rates do not justify their advantage relative to GD in terms of the total number of gradient evaluations until convergence. In this paper, we introduce a Double Incremental Aggregated Gradient method (DIAG) that computes the gradient of only one function at each iteration, which is chosen based on a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient. The iterates of the proposed DIAG method uses averages of both iterates and gradients in oppose to classic incremental methods that utilize gradient averages but do not utilize iterate averages. We prove that not only the proposed DIAG method converges linearly to the optimal solution, but also its linear convergence factor justifies the advantage of incremental methods on GD. In particular, we prove that the worst case performance of DIAG is better than the worst case performance of GD.
One may use a cyclic order instead of the stochastic selection of functions in SGD, which leads to the update of the incremental gradient descent (IGD) method as in @cite_26 @cite_17 . As in the case of SGD, the sequence of iterates generated by the IGD method converges to the optimal argument at a sublinear rate of order @math when the stepsize is diminishing. SGD and IGD reduce the computational complexity of GD by requiring only one gradient evaluation per iteration; however, both suffer from slow (sublinear) convergence rates.
{ "cite_N": [ "@cite_26", "@cite_17" ], "mid": [ "1988795359", "2017938640" ], "abstract": [ "An incremental aggregated gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits infinitely often regions in which the gradient is small. Under certain unimodality assumptions, global convergence is established. In the quadratic case, a global linear rate of convergence is shown. The method is applied to distributed optimization problems arising in wireless sensor networks, and numerical experiments compare the new method with other incremental gradient methods.", "We consider incrementally updated gradient methods for minimizing the sum of smooth functions and a convex function. This method can use a (sufficiently small) constant stepsize or, more practically, an adaptive stepsize that is decreased whenever sufficient progress is not made. We show that if the gradients of the smooth functions are Lipschitz continuous on the space of n-dimensional real column vectors or the gradients of the smooth functions are bounded and Lipschitz continuous over a certain level set and the convex function is Lipschitz continuous on its domain, then every cluster point of the iterates generated by the method is a stationary point. If in addition a local Lipschitz error bound assumption holds, then the method is linearly convergent." ] }
1611.00347
2548231749
Recently, there has been growing interest in developing optimization methods for solving large-scale machine learning problems. Most of these problems boil down to the problem of minimizing an average of a finite set of smooth and strongly convex functions where the number of functions @math is large. Gradient descent method (GD) is successful in minimizing convex problems at a fast linear rate; however, it is not applicable to the considered large-scale optimization setting because of the high computational complexity. Incremental methods resolve this drawback of gradient methods by replacing the required gradient for the descent direction with an incremental gradient approximation. They operate by evaluating one gradient per iteration and executing the average of the @math available gradients as a gradient approximate. Although, incremental methods reduce the computational cost of GD, their convergence rates do not justify their advantage relative to GD in terms of the total number of gradient evaluations until convergence. In this paper, we introduce a Double Incremental Aggregated Gradient method (DIAG) that computes the gradient of only one function at each iteration, which is chosen based on a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient. The iterates of the proposed DIAG method uses averages of both iterates and gradients in oppose to classic incremental methods that utilize gradient averages but do not utilize iterate averages. We prove that not only the proposed DIAG method converges linearly to the optimal solution, but also its linear convergence factor justifies the advantage of incremental methods on GD. In particular, we prove that the worst case performance of DIAG is better than the worst case performance of GD.
The other alternative for solving the optimization problem in is the incremental aggregated gradient (IAG) method, a middle ground between GD and IGD. The IAG method requires one gradient evaluation per iteration, as in IGD, while it approximates the gradient of the global objective function @math by the average of the most recent gradients of all instantaneous functions @cite_26 , and it attains a linear convergence rate, as in GD. In the IAG method, the functions are chosen in a cyclic order, and it takes @math iterations to complete a pass over all the available functions. To introduce the update of IAG, recall the definition of @math as the copy of the decision variable @math at the last time that function @math 's gradient was evaluated before step @math , which is updated as in . The update of IAG is then given by , which is identical to the update of SAG in ; the only difference is the scheme by which the index @math is chosen.
{ "cite_N": [ "@cite_26" ], "mid": [ "1988795359" ], "abstract": [ "An incremental aggregated gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits infinitely often regions in which the gradient is small. Under certain unimodality assumptions, global convergence is established. In the quadratic case, a global linear rate of convergence is shown. The method is applied to distributed optimization problems arising in wireless sensor networks, and numerical experiments compare the new method with other incremental gradient methods." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there lack large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
SegTrack @cite_42 is a popular dataset for video object segmentation. It contains 6 videos of animals and humans with 244 frames in total; the videos were intentionally collected to benchmark models against predefined challenges. Only one foreground object is manually annotated per frame.
{ "cite_N": [ "@cite_42" ], "mid": [ "2062563118" ], "abstract": [ "We present a novel off-line algorithm for target segmentation and tracking in video. In our approach, video data is represented by a multi-label Markov Random Field model, and segmentation is accomplished by finding the minimum energy label assignment. We propose a novel energy formulation which incorporates both segmentation and motion estimation in a single framework. Our energy functions enforce motion coherence both within and across frames. We utilize state-of-the-art methods to efficiently optimize over a large number of discrete labels. In addition, we introduce a new ground-truth dataset, called Georgia Tech Segmentation and Tracking Dataset (GT-SegTrack), for the evaluation of segmentation accuracy in video tracking. We compare our method with several recent on-line tracking algorithms and provide quantitative and qualitative performance comparisons." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there lack large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
SegTrack V2 @cite_21 extends SegTrack in two respects. First, additional foreground-object annotations are provided for the six videos in SegTrack . Second, 8 new videos are carefully chosen to cover more challenges. In total, SegTrack V2 contains 14 videos of birds, animals, cars and humans, with @math densely annotated frames.
{ "cite_N": [ "@cite_21" ], "mid": [ "2125378844" ], "abstract": [ "We present a parameter free approach that utilizes multiple cues for image segmentation. Beginning with an image, we execute a sequence of bottom-up aggregation steps in which pixels are gradually merged to produce larger and larger regions. In each step we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using a mixture of experts formulation. This probabilistic approach is integrated into a graph coarsening scheme providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. We test our method on a variety of gray scale images and compare our results to several existing segmentation algorithms." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there lack large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
The Freiburg-Berkeley Motion Segmentation (FBMS) dataset is designed for motion segmentation (i.e., segmenting regions with similar motion). It was first proposed in @cite_52 with 26 videos, and Ochs et al. @cite_14 later extended the dataset with another 33 videos. In total, this dataset contains 59 videos with 720 sparsely annotated frames. Although the dataset is much larger than SegTrack and SegTrack V2 , the scenarios it covers are still far from sufficient @cite_27 . Moreover, moving objects are not equivalent to salient objects, especially in scenes with complex content.
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_52" ], "mid": [ "2470139095", "2076756823", "1496571393" ], "abstract": [ "Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impend the evolution of a field due to saturated algorithm performance and the lack of contemporary, high quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motionblur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future works.", "Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion will be exploited most effectively, if it is regarded over larger time windows. Opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. 
We suggest working with a paradigm that starts with semi-dense motion cues first and that fills up textureless areas afterwards based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects.", "Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long term point trajectories based on dense optical flow. Defining pair-wise distances between these trajectories allows to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there lack large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
DAVIS @cite_27 contains 50 high-quality videos of humans, animals, vehicles, objects and actions, with @math densely annotated frames. Each video has Full HD 1080p resolution and lasts about 2 to 4 seconds. Each video clip in this dataset contains one foreground object or two spatially connected objects; note that such objects may split into hundreds of small regions due to occlusion.
{ "cite_N": [ "@cite_27" ], "mid": [ "2470139095" ], "abstract": [ "Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impend the evolution of a field due to saturated algorithm performance and the lack of contemporary, high quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motionblur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future works." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there lack large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
Hundreds of bottom-up and learning-based models @cite_43 @cite_62 @cite_73 @cite_44 @cite_7 @cite_50 have been proposed for image-based SOD in the past decade. With the rise of deep learning and the availability of large-scale datasets @cite_61 @cite_55 @cite_36 , many deep models @cite_51 @cite_46 @cite_30 @cite_53 have been proposed for image-based SOD. For example, Han et al. @cite_12 proposed multi-stream stacked denoising autoencoders that detect salient regions by measuring reconstruction residuals, which reflect the distinctness between background and salient regions. He et al. @cite_59 adopted CNNs to characterize superpixels with hierarchical features so as to detect salient objects at multiple scales; superpixel-based saliency computation was also used in @cite_54 @cite_61 . Considering that the task of fixation prediction is tightly correlated with SOD, a unified deep network was proposed in @cite_8 for simultaneous fixation prediction and image-based SOD.
{ "cite_N": [ "@cite_61", "@cite_30", "@cite_62", "@cite_7", "@cite_8", "@cite_36", "@cite_55", "@cite_53", "@cite_54", "@cite_44", "@cite_43", "@cite_50", "@cite_59", "@cite_46", "@cite_73", "@cite_51", "@cite_12" ], "mid": [ "1894057436", "", "2002574940", "2047670868", "", "", "2039313011", "2963299740", "", "2166650627", "2037954058", "2211996548", "2011900468", "2949370174", "2161185676", "1947031653", "2080142539" ], "abstract": [ "Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0 and 13.2 respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7 and 35.1 respectively on these two datasets.", "", "Salient object detection is not a pure low-level, bottom-up process. Higher-level knowledge is important even for task-independent image saliency. 
We propose a unified model to incorporate traditional low-level features with higher-level guidance to detect salient objects. In our model, an image is represented as a low-rank matrix plus sparse noises in a certain feature space, where the non-salient regions (or background) can be explained by the low-rank matrix, and the salient regions are indicated by the sparse noises. To ensure the validity of this model, a linear transform for the feature space is introduced and needs to be learned. Given an image, its low-level saliency is then extracted by identifying those sparse noises when recovering the low-rank matrix. Furthermore, higher-level knowledge is fused to compose a prior map, and is treated as a prior term in the objective function to improve the performance. Extensive experiments show that our model can comfortably achieves comparable performance to the existing methods even without the help from high-level knowledge. The integration of top-down priors further improves the performance and achieves the state-of-the-art. Moreover, the proposed model can be considered as a prototype framework not only for general salient object detection, but also for potential task-dependent saliency applications.", "Recent progresses in salient object detection have exploited the boundary prior, or background information, to assist other saliency cues such as contrast, achieving state-of-the-art results. However, their usage of boundary prior is very simple, fragile, and the integration with other cues is mostly heuristic. In this work, we present new methods to address these issues. First, we propose a robust background measure, called boundary connectivity. It characterizes the spatial layout of image regions with respect to image boundaries and is much more robust. It has an intuitive geometrical interpretation and presents unique benefits that are absent in previous saliency measures. 
Second, we propose a principled optimization framework to integrate multiple low level cues, including our background measure, to obtain clean and uniform saliency maps. Our formulation is intuitive, efficient and achieves state-of-the-art results on several benchmark datasets.", "", "", "Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground cues or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevances to the given seeds or queries. We represent the image as a close-loop graph with super pixels as nodes. These nodes are ranked based on the similarity to background and foreground queries, based on affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate the proposed method performs well against the state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model and make this database publicly available with this paper for further studies in the saliency field.", "Recent progress on saliency detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs). Semantic segmentation and saliency detection algorithms developed lately have been mostly based on Fully Convolutional Neural Networks (FCNs). 
There is still a large room for improvement over the generic FCN models that do not explicitly deal with the scale-space problem. Holistically-Nested Edge Detector (HED) provides a skip-layer structure with deep supervision for edge and boundary detection, but the performance gain of HED on saliency detection is not obvious. In this paper, we propose a new saliency method by introducing short connections to the skip-layer structures within the HED architecture. Our framework provides rich multi-scale feature maps at each layer, a property that is critically needed to perform segment detection. Our method produces state-of-the-art results on 5 widely tested salient object detection benchmarks, with advantages in terms of efficiency (0.08 seconds per image), effectiveness, and simplicity over the existing algorithms.", "", "In this paper, we formulate saliency detection via absorbing Markov chain on an image graph model. We jointly consider the appearance divergence and spatial distribution of salient objects and the background. The virtual boundary nodes are chosen as the absorbing nodes in a Markov chain and the absorbed time from each transient node to boundary absorbing nodes is computed. The absorbed time of transient node measures its global similarity with all absorbing nodes, and thus salient objects can be consistently separated from the background when the absorbed time is used as a metric. Since the time from transient node to absorbing nodes relies on the weights on the path and their spatial distance, the background region on the center of image may be salient. We further exploit the equilibrium distribution in an ergodic Markov chain to reduce the absorbed time in the long-range smooth background regions. 
Extensive experiments on four benchmark datasets demonstrate robustness and efficiency of the proposed method against the state-of-the-art methods.", "Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.", "We propose a highly efficient, yet powerful, salient object detection method based on the Minimum Barrier Distance (MBD) Transform. The MBD transform is robust to pixel-value fluctuation, and thus can be effectively applied on raw pixels without region abstraction. We present an approximate MBD transform algorithm with 100X speedup over the exact algorithm. 
An error bound analysis is also provided. Powered by this fast MBD transform algorithm, the proposed salient object detection method runs at 80 FPS, and significantly outperforms previous methods with similar speed on four large benchmark datasets, and achieves comparable or better performance than state-of-the-art methods. Furthermore, a technique based on color whitening is proposed to extend our method to leverage the appearance-based backgroundness cue. This extended version further improves the performance, while still being one order of magnitude faster than all the other leading methods.", "Existing computational models for salient object detection primarily rely on hand-crafted features, which are only able to capture low-level contrast information. In this paper, we learn the hierarchical contrast features by formulating salient object detection as a binary labeling problem using deep learning techniques. A novel superpixelwise convolutional neural network approach, called SuperCNN, is proposed to learn the internal representations of saliency in an efficient manner. In contrast to the classical convolutional networks, SuperCNN has four main properties. First, the proposed method is able to learn the hierarchical contrast features, as it is fed by two meaningful superpixel sequences, which is much more effective for detecting salient regions than feeding raw image pixels. Second, as SuperCNN recovers the contextual information among superpixels, it enables large context to be involved in the analysis efficiently. Third, benefiting from the superpixelwise mechanism, the required number of predictions for a densely labeled map is hugely reduced. Fourth, saliency can be detected independent of region size by utilizing a multiscale network structure. 
Experiments show that SuperCNN can robustly detect salient objects and outperforms the state-of-the-art methods on three benchmark datasets.", "Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. Resulting saliency maps are typically blurry, especially near the boundary of salient objects. Furthermore, image patches are treated as independent samples even when they are overlapping, giving rise to significant redundancy in computation and storage. In this CVPR 2016 paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components, a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art.", "Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions lie in two-fold. 
One is that we show our approach, which integrates the regional contrast, regional property and regional backgroundness descriptors together to form the master saliency map, is able to produce superior saliency maps to existing algorithms most of which combine saliency maps heuristically computed from different types of features. The other is that we introduce a new regional feature vector, backgroundness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. The performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-arts.", "This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited principally rather than heuristically. 
Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods.", "Detection of salient objects from images is gaining increasing research interest in recent years as it can substantially facilitate a wide range of content-based multimedia applications. Based on the assumption that foreground salient regions are distinctive within a certain context, most conventional approaches rely on a number of hand-designed features and their distinctiveness is measured using local or global contrast. Although these approaches have been shown to be effective in dealing with simple images, their limited capability may cause difficulties when dealing with more complicated images. This paper proposes a novel framework for saliency detection by first modeling the background and then separating salient objects from the background. We develop stacked denoising autoencoders with deep learning architectures to model the background where latent patterns are explored and more powerful representations of data are learned in an unsupervised and bottom-up manner. Afterward, we formulate the separation of salient objects from the background as a problem of measuring reconstruction residuals of deep autoencoders. Comprehensive evaluations of three benchmark datasets and comparisons with nine state-of-the-art algorithms demonstrate the superiority of this paper." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there is a lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
The state-of-the-art deep SOD models often adopt recurrent frameworks that can achieve impressive performance. For example, Liu et al. @cite_56 adopted hierarchical recurrent CNNs to progressively refine the details of salient objects. In @cite_37 , a coarse saliency map was first generated by using convolution-deconvolution networks. After that, it was refined by iteratively enhancing the results in various sub-regions. Wang et al. @cite_67 iteratively delivered the intermediate predictions back to the recurrent CNNs to refine saliency maps. In this way, salient objects can gradually pop out, while distractors can be progressively suppressed.
{ "cite_N": [ "@cite_67", "@cite_37", "@cite_56" ], "mid": [ "2519528544", "2342171291", "2461475918" ], "abstract": [ "Deep networks have been proved to encode high level semantic features and delivered superior performance in saliency detection. In this paper, we go one step further by developing a new saliency model using recurrent fully convolutional networks (RFCNs). Compared with existing deep network based methods, the proposed network is able to incorporate saliency prior knowledge for more accurate inference. In addition, the recurrent architecture enables our method to automatically learn to refine the saliency map by correcting its previous errors. To train such a network with numerous parameters, we propose a pre-training strategy using semantic segmentation data, which simultaneously leverages the strong supervision of segmentation tasks for better training and enables the network to capture generic representations of objects for saliency detection. Through extensive experimental evaluations, we demonstrate that the proposed method compares favorably against state-of-the-art approaches, and that the proposed recurrent deep model as well as the pre-training method can significantly improve performance.", "Convolutional-deconvolution networks can be adopted to perform end-to-end saliency detection. But, they do not work well with objects of multiple scales. To overcome such a limitation, in this work, we propose a recurrent attentional convolutional-deconvolution network (RACDNN). Using spatial transformer and recurrent network units, RACDNN is able to iteratively attend to selected image sub-regions to perform saliency refinement progressively. Besides tackling the scale problem, RACDNN can also learn context-aware features from past iterations to enhance saliency refinement in future iterations. 
Experiments on several challenging saliency detection datasets validate the effectiveness of RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection methods.", "Traditional salient object detection models often use hand-crafted features to formulate contrast and various prior knowledge, and then combine them artificially. In this work, we propose a novel end-to-end deep hierarchical saliency network (DHSNet) based on convolutional neural networks for detecting salient objects. DHSNet first makes a coarse global prediction by automatically learning various global structured saliency cues, including global contrast, objectness, compactness, and their optimal combination. Then a novel hierarchical recurrent convolutional neural network (HRCNN) is adopted to further hierarchically and progressively refine the details of saliency maps step by step via integrating local context information. The whole architecture works in a global to local and coarse to fine manner. DHSNet is directly trained using whole images and corresponding ground truth saliency masks. When testing, saliency maps can be generated by directly and efficiently feedforwarding testing images through the network, without relying on any other techniques. Evaluations on four benchmark datasets and comparisons with other 11 state-of-the-art algorithms demonstrate that DHSNet not only shows its significant superiority in terms of performance, but also achieves a real-time speed of 23 FPS on modern GPUs." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there is a lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
Compared with image-based SOD, video-based SOD is less explored due to the lack of large video datasets. For example, Liu et al. @cite_33 extended their image-based SOD model @cite_0 to the spatiotemporal domain for salient object sequence detection. In @cite_18 , visual attention (i.e., the estimated fixation density) was used as prior knowledge to guide the segmentation of salient regions in video. Rahtu et al. @cite_17 proposed to integrate local contrast features in illumination, color and motion channels with a statistical framework. A conditional random field was then adopted to recover salient objects from images and video frames. Due to the lack of large-scale benchmarking datasets, most of these early approaches only provide qualitative comparisons, and only a few works like @cite_33 have provided quantitative comparisons on a small dataset within which salient objects are roughly annotated with rectangles.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_33", "@cite_17" ], "mid": [ "2157554677", "2118490033", "2122194171", "2105454024" ], "abstract": [ "We study visual attention by detecting a salient object in an input image. We formulate salient object detection as an image segmentation problem, where we separate the salient object from the image background. We propose a set of novel features including multi-scale contrast, center-surround histogram, and color spatial distribution to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. We also constructed a large image database containing tens of thousands of carefully labeled images by multiple users. To our knowledge, it is the first large image database for quantitative evaluation of visual attention algorithms. We validate our approach on this image database, which is public available with this paper.", "This paper proposes a new method for achieving precise video segmentation without any supervision or interaction. The main contributions of this report include 1) the introduction of fully automatic segmentation based on the maximum a posteriori (MAP) estimation of the Markov random field (MRF) with graph cuts and saliencydriven priors and 2) the updating of priors and feature likelihoods by integrating the previous segmentation results and the currently estimated saliency-based visual attention. Test results indicate that our new method precisely extracts probable regions from videos without any supervised interactions.", "We study video attention by detecting a salient object sequence from video segment. We formulate salient object sequence detection as energy minimization problem in a conditional random field framework, while static and dynamic salience, spatial and temporal coherence, global topic model are well defined and integrated to identify a salient object sequence. 
A dynamic programming algorithm is designed to resolve a global optimization problem, with a rectangle representing each salient object. We validate our approach on a large number of video segments with the labeled salient object sequence.", "In this paper we introduce a new salient object segmentation method, which is based on combining a saliency measure with a conditional random field (CRF) model. The proposed saliency measure is formulated using a statistical framework and local feature contrast in illumination, color, and motion information. The resulting saliency map is then used in a CRF model to define an energy minimization based segmentation approach, which aims to recover well-defined salient objects. The method is efficiently implemented by using the integral histogram approach and graph cut solvers. Compared to previous approaches, the introduced method is among the few which are applicable to both still images and videos including motion cues. The experiments show that our approach outperforms the current state-of-the-art methods in both qualitative and quantitative terms." ] }
1611.00135
2952074466
Image-based salient object detection (SOD) has been extensively studied in the past decades. However, video-based SOD is much less explored since there is a lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos (64 minutes). In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects that free-view all videos. From the user data, we find salient objects in video can be defined as objects that consistently pop-out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are unsupervisedly constructed which automatically infer a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. Experimental results show that the proposed unsupervised approach outperforms 30 state-of-the-art models on the proposed dataset, including 19 image-based & classic (unsupervised or non-deep learning), 6 image-based & deep learning, and 5 video-based & unsupervised. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
Beyond single video-based approaches, some methods extend the idea of image co-segmentation to the video domain. For example, Chiu and Fritz @cite_38 proposed a generative model for multi-class video co-segmentation. A global appearance model was learned to connect the segments from the same class so as to segment the foreground targets shared by different videos. Fu et al. @cite_20 proposed to detect multiple foreground objects shared by a set of videos. Category-independent object proposals were first extracted, and a multi-state selection graph was then adopted to handle multiple foreground objects. Although video co-segmentation brings us an interesting new direction for studying video-based SOD, detecting salient objects in a single video is still the most common requirement in many real-world applications.
{ "cite_N": [ "@cite_38", "@cite_20" ], "mid": [ "1990802205", "1534510265" ], "abstract": [ "Video data provides a rich source of information that is available to us today in large quantities e.g. from on-line resources. Tasks like segmentation benefit greatly from the analysis of spatio-temporal motion patterns in videos and recent advances in video segmentation has shown great progress in exploiting these addition cues. However, observing a single video is often not enough to predict meaningful segmentations and inference across videos becomes necessary in order to predict segmentations that are consistent with objects classes. Therefore the task of video co-segmentation is being proposed, that aims at inferring segmentation from multiple videos. But current approaches are limited to only considering binary foreground background segmentation and multiple videos of the same object. This is a clear mismatch to the challenges that we are facing with videos from online resources or consumer videos. We propose to study multi-class video co-segmentation where the number of object classes is unknown as well as the number of instances in each frame and video. We achieve this by formulating a non-parametric Bayesian model across videos sequences that is based on a new videos segmentation prior as well as a global appearance model that links segments of the same class. We present the first multi-class video co-segmentation evaluation. We show that our method is applicable to real video data from online resources and outperforms state-of-the-art video segmentation and image co-segmentation baselines.", "We present a technique for multiple foreground video co-segmentation in a set of videos. This technique is based on category-independent object proposals. 
To identify the foreground objects in each frame, we examine the properties of the various regions that reflect the characteristics of foregrounds, considering the intra-video coherence of the foreground as well as the foreground consistency among the different videos in the set. Multiple foregrounds are handled via a multi-state selection graph in which a node representing a video frame can take multiple labels that correspond to different objects. In addition, our method incorporates an indicator matrix that for the first time allows accurate handling of cases with common foreground objects missing in some videos, thus preventing irrelevant regions from being misclassified as foreground objects. An iterative procedure is proposed to optimize our new objective function. As demonstrated through comprehensive experiments, this object-based multiple foreground video co-segmentation method compares well with related techniques that co-segment multiple foregrounds." ] }
1611.00354
2547273565
A common and effective way to train translation systems between related languages is to consider sub-word level basic units. However, this increases the length of the sentences, resulting in increased decoding time. The increase in length is also impacted by the specific choice of data format for representing the sentences as subwords. In a phrase-based SMT framework, we investigate different choices of decoder parameters as well as data format and their impact on decoding time and translation accuracy. We suggest the best options for these settings, which significantly improve decoding time with little impact on translation accuracy.
There has been a lot of work looking at optimizing specific components of SMT decoders in a general setting, and prior work provides a good overview of the various approaches to optimizing decoders. Some of the prominent efforts include efficient language models @cite_0 , lazy loading @cite_12 , phrase-table design @cite_7 , multi-core environment issues @cite_10 , efficient memory allocation @cite_17 , alternative stack configurations @cite_17 and alternative decoding algorithms like cube pruning @cite_16 .
{ "cite_N": [ "@cite_7", "@cite_0", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "52422094", "", "", "2278210975", "2115918455", "2538166571" ], "abstract": [ "We describe the structure of a space-efficient phrase table for phrase-based statistical machine translation with the Moses decoder. The new phrase table can be used in-memory or be partially mapped on-disk. Compared to the standard Moses on-disk phrase table implementation a size reduction by a factor of 6 is achieved.", "", "", "Abstract In this work we introduce a new Statistical Machine Translation (SMT) system whose main objective is to reduce the translation times exploiting efficiently the computing power of the current processors and servers. Our system processes each individual job in parallel using different number of cores in such a way that the level of parallelism for each job changes dynamically according to the load of the translation server. In addition, the system is able to adapt to the particularities of any hardware platform used as server thanks to an autotuning module. An exhaustive performance evaluation considering different scenarios and hardware configurations demonstrates the benefits and flexibility of our proposal.", "In phrase-based statistical machine translation, the phrase-table requires a large amount of memory. We will present an efficient representation with two key properties: on-demand loading and a prefix tree structure for the source phrases. We will show that this representation scales well to large data tasks and that we are able to store hundreds of millions of phrase pairs in the phrase-table. For the large Chinese‐ English NIST task, the memory requirements of the phrase-table are reduced to less than 20MB using the new representation with no loss in translation quality and speed. Additionally, the new representation is not limited to a specific test set, which is important for online or real-time machine translation. 
One problem in speech translation is the matching of phrases in the input word graph and the phrase-table. We will describe a novel algorithm that effectively solves this combinatorial problem exploiting the prefix tree data structure of the phrase-table. This algorithm enables the use of significantly larger input word graphs in a more efficient way resulting in improved translation quality.", "The utilization of statistical machine translation (SMT) has grown enormously over the last decade, many using open-source software developed by the NLP community. As commercial use has increased, there is need for software that is optimized for commercial requirements, in particular, fast phrase-based decoding and more efficient utilization of modern multicore servers. In this paper we re-examine the major components of phrase-based decoding and decoder implementation with particular emphasis on speed and scalability on multicore machines. The result is a drop-in replacement for the Moses decoder which is up to fifteen times faster and scales monotonically with the number of cores." ] }
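The phrase-table work above relies on a prefix tree over source phrases, so that shared prefixes are stored once and every source phrase starting at a given sentence position can be matched in a single downward walk. The following is a minimal, hypothetical sketch of that idea; it is not the Moses data structure, and all class and method names are invented for illustration:

```python
class TrieNode:
    def __init__(self):
        self.children = {}      # next source word -> child node
        self.translations = []  # target phrases for the phrase ending here

class PhraseTable:
    """Toy prefix-tree phrase table: shared source-phrase prefixes are
    stored once, and matching is a single walk per start position."""
    def __init__(self):
        self.root = TrieNode()

    def add(self, src_phrase, tgt_phrase):
        node = self.root
        for word in src_phrase.split():
            node = node.children.setdefault(word, TrieNode())
        node.translations.append(tgt_phrase)

    def matches(self, words, start):
        """Yield (end, translations) for every stored source phrase that
        begins at position `start`; one walk covers all prefix lengths."""
        node = self.root
        for end in range(start, len(words)):
            node = node.children.get(words[end])
            if node is None:
                return
            if node.translations:
                yield end + 1, node.translations
```

For example, with entries for "the" and "the house", a single walk from position 0 of "the house is" finds both phrase matches, which is the property that makes matching input word graphs against large phrase tables tractable.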
1611.00271
2547699624
Compiler design is a course that discusses ideas used in the construction of programming language compilers. Students learn how a program written in a high-level programming language and designed for human understanding is systematically converted into the low-level assembly language understood by machines. We propose and implement a Case-based and Project-based Learning environment for teaching important Compiler design concepts (CPLC) to B.Tech third-year students of a Delhi University (India) college. A case is a text that describes a real-life situation, providing information but not a solution. Previous research shows that case-based teaching helps students to apply the principles discussed in class for solving complex practical problems. We divide one main project into sub-projects to give to students in order to enhance their practical experience of designing a compiler. To measure the effectiveness of case-based discussions, students complete a survey on their perceptions of the benefits of case-based learning. The survey is analyzed using frequency distribution and the chi-square test of association. The results of the survey show that case-based teaching of compiler concepts does enhance students' learning, critical thinking, engagement, communication skills, and teamwork.
: We make our dataset publicly available on GitHub https://github.com/Divya-Kundra/Case-Based-Teaching . GitHub is becoming popular as a platform for researchers and scientists to share, update, and maintain their datasets as well as code http://www.nature.com/news/democratic-databases-science-on-github-1.20719 . We believe that sharing our dataset will further facilitate research on case-based teaching in computer science, in particular on the compiler design course, and can be used to explore new research problems and hypotheses. Due to limited space in the paper, we briefly describe only two case studies; however, we make all the case studies publicly available through the GitHub repository. : The study presented in this paper is an extended version of our short paper accepted at T4E 2016 (The Eighth IEEE International Conference on Technology for Education) by the same authors @cite_1 . Because of the four-page limit of the T4E 2016 paper, several aspects of our study were not covered; these are now described in detail in this paper. The objective of this paper is to provide a complete and detailed analysis of our work through arXiv open access https://arxiv.org .
{ "cite_N": [ "@cite_1" ], "mid": [ "2574057444" ], "abstract": [ "Compiler design is a course that discusses ideas used in the construction of programming language compilers. Students learn how a program written in a high-level programming language and designed for human understanding is systematically converted into the low-level assembly language understood by machines. We propose and implement a Case-based and Project-based Learning environment for teaching important Compiler design concepts (CPLC) to B.Tech third-year students of a Delhi University (India) college. A case is a text that describes a real-life situation, providing information but not a solution. Previous research shows that case-based teaching helps students to apply the principles discussed in class for solving complex practical problems. We divide one main project into sub-projects to give to students in order to enhance their practical experience of designing a compiler. To measure the effectiveness of case-based discussions, students complete a survey on their perceptions of the benefits of case-based learning. The survey is analyzed using frequency distribution and the chi-square test of association. The results of the survey show that case-based teaching of compiler concepts does enhance students' learning, critical thinking, engagement, communication skills, and teamwork." ] }
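The survey analysis in the compiler-teaching study applies a chi-square test of association to tables of student response counts. As a hedged illustration (not the authors' actual analysis code, and the example table below is invented), the Pearson chi-square statistic for an observed-count contingency table can be computed as:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table of
    observed counts: sum over cells of (O - E)^2 / E, where E is the
    expected count under independence of rows and columns."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat
```

For a 2x2 table, the statistic is compared against the chi-square critical value for one degree of freedom (3.841 at the 0.05 level): a larger statistic indicates an association between, say, prior exposure to case-based teaching and perceived benefit.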
1611.00172
2953178061
Society is often polarized by controversial issues that split the population into groups of opposing views. When such issues emerge on social media, we often observe the creation of 'echo chambers', i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus, reducing controversy. Specifically, we represent the discussion on a controversial issue with an endorsement graph, and cast our problem as an edge-recommendation problem on this graph. The goal of the recommendation is to reduce the controversy score of the graph, which is measured by a recently-developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edge, which represents how likely the edge is to materialize in the endorsement graph. We propose a simple model based on a recently-developed user-level controversy score, which is competitive with state-of-the-art link-prediction algorithms. We thus aim at finding the edges that produce the largest reduction in the controversy score, in expectation. To solve this problem, we propose an efficient algorithm, which considers only a fraction of all the combinations of possible edges. Experimental results show that our algorithm is more efficient than a simple greedy heuristic, while producing comparable score reduction. Finally, a comparison with other state-of-the-art edge-addition algorithms shows that this problem is fundamentally different from what has been studied in the literature.
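The expected-reduction objective described in this abstract can be sketched as follows. This is a toy illustration, not the paper's algorithm: it swaps the random-walk (RWC) controversy score for a trivial within-side edge fraction, and `accept_prob` stands in for the learned acceptance-probability model.

```python
import itertools

def controversy(edges, side):
    """Toy controversy score: fraction of edges that stay within one side.
    A hypothetical stand-in for a random-walk-based (RWC) metric."""
    within = sum(1 for u, v in edges if side[u] == side[v])
    return within / len(edges)

def best_edge(nodes, edges, side, accept_prob):
    """Expected-reduction edge recommendation: among candidate non-edges,
    pick the one maximizing (score drop if added) * (acceptance probability)."""
    existing = {frozenset(e) for e in edges}
    base = controversy(edges, side)
    best, best_gain = None, 0.0
    for u, v in itertools.combinations(nodes, 2):
        if frozenset((u, v)) in existing:
            continue
        gain = (base - controversy(edges + [(u, v)], side)) * accept_prob(u, v)
        if gain > best_gain:
            best, best_gain = (u, v), gain
    return best, best_gain
```

The exhaustive loop over all candidate pairs is exactly what the paper's efficient algorithm avoids: it evaluates only a fraction of the possible edges while reaching comparable score reduction.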
Making recommendations to decrease polarization. The web offers the opportunity to easily access any kind of information. Nevertheless, several studies have observed that, when offered a choice, users prefer to be exposed to agreeable and like-minded content. For instance, @cite_31 report that ``even when opposing views were presented side-to-side, people would still preferentially select information that reinforced their existing attitudes.'' This selective-exposure phenomenon (also called the ``filter bubble'' or ``echo chamber'') has led to increased fragmentation and polarization online. A wide body of recent work has studied @cite_23 @cite_27 @cite_15 and quantified @cite_5 @cite_24 @cite_14 @cite_29 this divide.
{ "cite_N": [ "@cite_14", "@cite_15", "@cite_29", "@cite_24", "@cite_27", "@cite_23", "@cite_5", "@cite_31" ], "mid": [ "2178519838", "1410854835", "2154393330", "", "2042714524", "2152284345", "2160450239", "2077724862" ], "abstract": [ "Polarization in social media networks is a fact in several scenarios such as political debates and other contexts such as same-sex marriage, abortion and gun control. Understanding and quantifying polarization is a long-term challenge to researchers from several areas, also being a key information for tasks such as opinion analysis. In this paper, we perform a systematic comparison between social networks that arise from both polarized and non-polarized contexts. This comparison shows that the traditional polarization metric - modularity - is not a direct measure of antagonism between groups, since non-polarized networks may be also divided into fairly modular communities. To bridge this conceptual gap, we propose a novel polarization metric based on the analysis of the boundary of a pair of (potentially polarized) communities, which better captures the notions of antagonism and polarization. We then characterize polarized and non-polarized social networks according to the concentration of high-degree nodes in the boundary of communities, and found that polarized networks tend to exhibit low concentration of popular nodes along the boundary. To demonstrate the usefulness of our polarization measures, we analyze opinions expressed on Twitter on the gun control issue in the United States, and conclude that our novel metrics help making sense of opinions expressed on online media.", "How do news sources tackle controversial issues? In this work, we take a data-driven approach to understand how controversy interplays with emotional expression and biased language in the news. We begin by introducing a new dataset of controversial and noncontroversial terms collected using crowdsourcing. Then, focusing on 15 major U.S.
news outlets, we compare millions of articles discussing controversial and non-controversial issues over a span of 7 months. We find that in general, when it comes to controversial issues, the use of negative affect and biased language is prevalent, while the use of strong emotion is tempered. We also observe many differences across news sources. Using these findings, we show that we can indicate to what extent an issue is controversial, by comparing it with other issues in terms of how they are portrayed across different media.", "We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chavez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.", "", "", "In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 \"A-list\" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. 
We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.", "Political inclinations of individuals (liberal vs. conservative) largely shape their opinions on several issues such as abortion, gun control, nuclear power, etc. These opinions are openly exerted in online forums, news sites, the parliament, and so on. In this paper, we address the problem of quantifying political polarity of individuals and of political issues for classification and ranking. We use signed bipartite networks to represent the opinions of individuals on issues, and formulate the problem as a node classification task. We propose a linear algorithm that exploits network effects to learn both the polarity labels as well as the rankings of people and issues in a completely unsupervised manner. Through extensive experiments we demonstrate that our proposed method provides an effective, fast, and easy-to-implement solution, while outperforming three existing baseline algorithms adapted to signed networks, on real political forum and US Congress datasets. Experiments on a wide variety of synthetic graphs with varying polarity and degree distributions of the nodes further demonstrate the robustness of our approach.", "We investigated participants' preferential selection of information and their attitude moderation in an online environment. Results showed that even when opposing views were presented side-to-side, people would still preferentially select information that reinforced their existing attitudes. Preferential selection of information was, however, influenced by both situational (e.g., perceived threat) and personal (e.g., topic involvement) factors.
Specifically, perceived threat induced selective exposure to attitude consistent information for topics that participants had low involvement. Participants had a higher tendency to select peer user opinions in topics that they had low than high involvement, but only when there was no perception of threat. Overall, participants' attitudes were moderated after being exposed to diverse views, although high topic involvement led to higher resistance to such moderation. Perceived threat also weakened attitude moderation, especially for low involvement topics. Results have important implication to the potential effects of \"information bubble\" - selective exposure can be induced by situational and personal factors even when competing views are presented side-by-side." ] }
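One of the quantification papers cited above characterizes polarization through the boundary between two communities and the concentration of high-degree nodes there, observing that in polarized networks popular nodes tend to avoid the boundary. A loose, hypothetical sketch of that intuition (not the paper's exact metric; all names are invented):

```python
from collections import defaultdict

def boundary_popularity(edges, side):
    """Compare the average degree of boundary nodes (those with at least
    one cross-side edge) to the average degree of internal nodes.
    A low ratio suggests that popular nodes stay away from the boundary,
    a pattern the literature associates with polarized networks."""
    degree = defaultdict(int)
    boundary = set()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if side[u] != side[v]:
            boundary.update((u, v))
    internal = [n for n in degree if n not in boundary]
    avg = lambda nodes: sum(degree[n] for n in nodes) / len(nodes)
    return avg(boundary) / avg(internal)
```

The sketch assumes the two sides are already known (e.g., from community detection) and that at least one cross-side edge and one internal node exist; a real metric would also normalize for community sizes.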
1611.00172
2953178061
Society is often polarized by controversial issues that split the population into groups of opposing views. When such issues emerge on social media, we often observe the creation of 'echo chambers', i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus, reducing controversy. Specifically, we represent the discussion on a controversial issue with an endorsement graph, and cast our problem as an edge-recommendation problem on this graph. The goal of the recommendation is to reduce the controversy score of the graph, which is measured by a recently-developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edge, which represents how likely the edge is to materialize in the endorsement graph. We propose a simple model based on a recently-developed user-level controversy score, which is competitive with state-of-the-art link-prediction algorithms. We thus aim at finding the edges that produce the largest reduction in the controversy score, in expectation. To solve this problem, we propose an efficient algorithm, which considers only a fraction of all the combinations of possible edges. Experimental results show that our algorithm is more efficient than a simple greedy heuristic, while producing comparable score reduction. Finally, a comparison with other state-of-the-art edge-addition algorithms shows that this problem is fundamentally different from what has been studied in the literature.
@cite_22 @cite_13 attempt to limit the echo-chamber effect by making users aware of other users' stance on a given issue, the extremity of their position, and their expertise. Their results show that participants who seek to acquire more accurate information about an issue are exposed to a wider range of views, and agree more with users who express moderately-mixed positions on the issue.
{ "cite_N": [ "@cite_13", "@cite_22" ], "mid": [ "2137809006", "1959339536" ], "abstract": [ "A review of research suggests that the desire for opinion reinforcement may play a more important role in shaping individuals’ exposure to online political information than an aversion to opinion challenge. The article tests this idea using data collected via a webadministered behavior-tracking study with subjects recruited from the readership of 2 partisan online news sites (N = 727). The results demonstrate that opinion-reinforcing information promotes news story exposure while opinion-challenging information makes exposure only marginally less likely. The influence of both factors is modest, but opinionreinforcing information is a more important predictor. Having decided to view a news story, evidence of an aversion to opinion challenges disappears: There is no evidence that individuals abandon news stories that contain information with which they disagree. Implications and directions for future research are discussed.", "The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. Despite the enthusiastic rhetoric on the part of some that this process generates \"collective intelligence\", the WWW also allows the rapid dissemination of unsubstantiated conspiracy theories that often elicite rapid, large, but naive social responses such as the recent case of Jade Helm 15 -- where a simple military exercise turned out to be perceived as the beginning of the civil war in the US. We study how Facebook users consume information related to two different kinds of narrative: scientific and conspiracy news. We find that although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, the sizes of the spreading cascades differ. 
Homogeneity appears to be the primary driver for the diffusion of contents, but each echo chamber has its own cascade dynamics. To mimic these dynamics, we introduce a data-driven percolation model on signed networks." ] }
1611.00172
2953178061
Society is often polarized by controversial issues that split the population into groups of opposing views. When such issues emerge on social media, we often observe the creation of 'echo chambers', i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus, reducing controversy. Specifically, we represent the discussion on a controversial issue with an endorsement graph, and cast our problem as an edge-recommendation problem on this graph. The goal of the recommendation is to reduce the controversy score of the graph, which is measured by a recently-developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edge, which represents how likely the edge is to materialize in the endorsement graph. We propose a simple model based on a recently-developed user-level controversy score, which is competitive with state-of-the-art link-prediction algorithms. We thus aim at finding the edges that produce the largest reduction in the controversy score, in expectation. To solve this problem, we propose an efficient algorithm, which considers only a fraction of all the combinations of possible edges. Experimental results show that our algorithm is more efficient than a simple greedy heuristic, while producing comparable score reduction. Finally, a comparison with other state-of-the-art edge-addition algorithms shows that this problem is fundamentally different from what has been studied in the literature.
@cite_18 perform a user study aimed at understanding how best to present information about controversial issues to users so as to persuade them. Their most relevant findings reveal that factors such as showing the credibility of a source or the expertise of a user increase the chances of other users believing the content. In a similar spirit, @cite_9 create a browser widget that measures and displays the bias of users based on the news articles they read. Their study concludes that showing users their bias helps them read articles of opposing views.
{ "cite_N": [ "@cite_9", "@cite_18" ], "mid": [ "2182265315", "2085731449" ], "abstract": [ "The Internet gives individuals more choice in political news and information sources and more tools to filter out disagreeable information. Citing the preference described by selective exposure theory — people prefer information that supports their beliefs and avoid counter-attitudinal information — observers warn that people may use these tools to access only agreeable information and thus live in ideological echo chambers. We report on a field deployment of a browser extension that showed users feedback about the political lean of their weekly and all time reading behaviors. Compared to a control group, showing feedback led to a modest move toward balanced exposure, corresponding to 1-2 visits per week to ideologically opposing sites or 5-10 additional visits per week to centrist sites.", "Deciding whether a claim is true or false often requires a deeper understanding of the evidence supporting and contradicting the claim. However, when presented with many evidence documents, users do not necessarily read and trust them uniformly. Psychologists and other researchers have shown that users tend to follow and agree with articles and sources that hold viewpoints similar to their own, a phenomenon known as confirmation bias. This suggests that when learning about a controversial topic, human biases and viewpoints about the topic may affect what is considered \"trustworthy\" or credible. It is an interesting challenge to build systems that can help users overcome this bias and help them decide the truthfulness of claims. In this article, we study various factors that enable humans to acquire additional information about controversial claims in an unbiased fashion. Specifically, we designed a user study to understand how presenting evidence with contrasting viewpoints and source expertise ratings affect how users learn from the evidence documents. 
We find that users do not seek contrasting viewpoints by themselves, but explicitly presenting contrasting evidence helps them get a well-rounded understanding of the topic. Furthermore, explicit knowledge of the credibility of the sources and the context in which the source provides the evidence document not only affects what users read but also whether they perceive the document to be credible." ] }